With ChatGPT Health, OpenAI rethinks its rules for processing sensitive data

With the launch of ChatGPT Health, OpenAI intends to respond to the already massive use of its tool in the health field. Every week, more than 230 million people use ChatGPT to ask health- or wellness-related questions, making this area one of the leading use cases for conversational AI.

Fidji Simo, OpenAI's CEO of Applications, shares a personal story on the subject: hospitalized last year for a kidney stone complicated by an infection, she says she asked ChatGPT about an antibiotic that was about to be administered to her. The tool flagged a risk of reactivating a previous serious infection, something that had not immediately surfaced in her medical file.

“The intern in charge only had about five minutes per patient during his rounds, and medical records are not organized in a way that makes this type of risk obvious,” she explains.

Faced with this growing use, OpenAI has chosen to create a health space strictly separate from the rest of ChatGPT. Conversations, files and connected applications are stored in a distinct environment with its own dedicated memory and reinforced isolation. Exchanges within ChatGPT Health are therefore not used to train foundation models.
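OpenAI has not published the internals of this separation, but the principle it describes can be sketched as a partitioned store in which health conversations live in their own memory space and are excluded from any training export by construction. The sketch below is purely illustrative; every name in it (Partition, MemoryStore, training_export) is an assumption, not an OpenAI API.

```python
from dataclasses import dataclass, field
from enum import Enum


class Partition(Enum):
    GENERAL = "general"
    HEALTH = "health"  # isolated space with its own memory store


@dataclass
class Conversation:
    conv_id: str
    partition: Partition
    messages: list[str] = field(default_factory=list)


class MemoryStore:
    """Hypothetical store keeping one bucket per partition, so health
    memories never mix with general chat memories."""

    def __init__(self) -> None:
        self._buckets: dict[Partition, dict[str, Conversation]] = {
            p: {} for p in Partition
        }

    def save(self, conv: Conversation) -> None:
        self._buckets[conv.partition][conv.conv_id] = conv

    def training_export(self) -> list[Conversation]:
        # Health conversations are excluded from the training corpus
        # by construction: only the general bucket is ever exported.
        return list(self._buckets[Partition.GENERAL].values())
```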

OpenAI acknowledges that health demands a break from the way general-purpose AI processes, stores and uses data. The company has therefore built a dedicated architecture: encryption of data at rest and in transit, an additional encryption layer reserved for health information, explicit deletion controls, and the ability to disconnect external sources at any time. How it will promote the service remains to be seen.
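None of these implementation details are public, so the following is only a minimal sketch of the pattern the announcement describes: an extra application-level encryption layer on top of platform storage encryption, an explicit delete operation, and a switch to cut off external sources. It relies on the Fernet primitive from the Python cryptography package; the class and method names (HealthRecordVault, disconnect_sources) are illustrative assumptions.

```python
from cryptography.fernet import Fernet


class HealthRecordVault:
    """Hypothetical second encryption layer applied on top of the
    storage-level encryption covering data at rest and in transit."""

    def __init__(self) -> None:
        # In practice the key would come from a managed, per-user key
        # service rather than being generated in process.
        self._fernet = Fernet(Fernet.generate_key())
        self._records: dict[str, bytes] = {}
        self.sources_connected = True

    def store(self, record_id: str, plaintext: bytes) -> None:
        self._records[record_id] = self._fernet.encrypt(plaintext)

    def read(self, record_id: str) -> bytes:
        return self._fernet.decrypt(self._records[record_id])

    def delete(self, record_id: str) -> None:
        # Explicit deletion control: the ciphertext is dropped immediately,
        # leaving nothing recoverable for this record.
        self._records.pop(record_id, None)

    def disconnect_sources(self) -> None:
        # External sources (connected apps, records) can be cut off at any time.
        self.sources_connected = False
```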

The decision not to use ChatGPT Health data for training is no small detail: it sets a precedent for other high-sensitivity areas, including legal, financial and human-resources work, where the reuse of conversations raises comparable questions of liability, confidentiality and trust.

Until now, the implicit doctrine of general-purpose AI rested on pooling uses as widely as possible to keep improving model performance. Giving up part of that optimization points to the emergence of an AI with variable geometry, able to adapt its internal mechanisms to how critical each domain is.

This dynamic could reshuffle the deck for artificial intelligence solutions specialized by vertical, and it implicitly raises a strategic question: how far will OpenAI agree to compartmentalize its uses, and which areas will it choose, or decline, to address in the future?