Every week, more than 230 million people ask ChatGPT for health and wellness advice, according to OpenAI. The company says that many see the chatbot as an “ally” to help navigate the maze of insurance, file paperwork, and become better self-advocates. In exchange, it hopes you will trust its chatbot with details about your diagnoses, medications, test results, and other private medical information. But while talking to a chatbot may be starting to feel a bit like the doctor’s office, it isn’t one. Tech companies aren’t bound by the same obligations as medical providers. Experts tell The Verge it would be wise to carefully consider whether you want to hand over your records.
Health and wellness is swiftly emerging as a key battleground for AI labs and a major test for how willing users are to welcome these systems into their lives. This month two of the industry’s biggest players made overt pushes into medicine. OpenAI released ChatGPT Health, a dedicated tab within ChatGPT designed for users to ask health-related questions in what it says is a more secure and personalized environment. Anthropic introduced Claude for Healthcare, a “HIPAA-ready” product it says can be used by hospitals, health providers, and consumers. (Notably absent is Google, whose Gemini chatbot is one of the world’s most competent and widely used AI tools, though the company did announce an update to its MedGemma medical AI model for developers.)
OpenAI actively encourages users to share sensitive information like medical records, lab results, and health and wellness data from apps like Apple Health, Peloton, Weight Watchers, and MyFitnessPal with ChatGPT Health in exchange for deeper insights. It explicitly states that users’ health data will be kept confidential and won’t be used to train AI models, and that steps have been taken to keep data secure and private. OpenAI says ChatGPT Health conversations will also be held in a separate part of the app, with users able to view or delete Health “memories” at any time.
OpenAI’s assurances that it will keep users’ sensitive data safe have been helped in no small way by the company launching an identical-sounding product with tighter security protocols at almost the same time as ChatGPT Health. That tool, called ChatGPT for Healthcare, is part of a broader range of products sold to support businesses, hospitals, and clinicians working directly with patients. OpenAI’s suggested uses include streamlining administrative work like drafting clinical letters and discharge summaries and helping physicians collate the latest medical evidence to improve patient care. As with other enterprise-grade products the company sells, it comes with greater protections than those offered to general consumers, especially free users, and OpenAI says it is designed to comply with the privacy obligations required of the medical sector. Given the similar names and launch dates — ChatGPT for Healthcare was announced the day after ChatGPT Health — it is all too easy to confuse the two and presume the consumer-facing product has the same level of protection as the more clinically oriented one. Several people I spoke to while reporting this story made exactly that mistake.
Whatever security assurances are on offer, however, they are far from watertight. Users of tools like ChatGPT Health often have little safeguarding against breaches or unauthorized use beyond what’s in the terms of use and privacy policies, experts tell The Verge. As most states haven’t enacted comprehensive privacy laws — and there isn’t a comprehensive federal privacy law — data protection for AI tools like ChatGPT Health “largely depends on what companies promise in their privacy policies and terms of use,” says Sara Gerke, a law professor at the University of Illinois Urbana-Champaign.
Even if you trust a company’s vow to safeguard your data — OpenAI says it encrypts Health data by default — it might just change its mind. “While ChatGPT does state in their current terms of use that they will keep this data confidential and not use them to train their models, you are not protected by law, and it is allowed to change terms of use over time,” explains Hannah van Kolfschooten, a researcher in digital health law at the University of Basel in Switzerland. “You will have to trust that ChatGPT does not do so.” Carmel Shachar, an assistant clinical professor of law at Harvard Law School, concurs: “There’s very limited protection. Some of it is their word, but they could always go back and change their privacy practices.”
Assurances that a product is compliant with data protection laws governing the healthcare sector, like the Health Insurance Portability and Accountability Act, or HIPAA, shouldn’t offer much comfort either, Shachar says. HIPAA is useful as a guide, she explains, but there’s little at stake if a company that voluntarily complies fails to do so. Voluntarily complying isn’t the same as being legally bound. “The value of HIPAA is that if you mess up, there’s enforcement.”
It’s more than just privacy. There’s a reason why medicine is a heavily regulated field — errors can be dangerous, even lethal. There is no shortage of examples of chatbots confidently spouting false or misleading health information, such as when a man developed a rare condition after he asked ChatGPT about removing salt from his diet and the chatbot suggested he replace it with sodium bromide, a compound historically used as a sedative. Or when Google’s AI Overviews wrongly advised people with pancreatic cancer to avoid high-fat foods — the exact opposite of what they should be doing.