Are AI companies incentivized to put the public’s health and well-being first? According to a pair of physicians, the current answer is a resounding “no.”
In a new paper published in the New England Journal of Medicine, physicians from Harvard Medical School and Baylor College of Medicine’s Center for Medical Ethics and Health Policy argue that clashing incentives in the AI marketplace around “relational AI” — defined in the paper as chatbots designed to “simulate emotional support, companionship, or intimacy” — have created a dangerous environment in which the drive to dominate the AI market may reduce consumers’ mental health and safety to collateral damage.
“Although relational AI has potential therapeutic benefits, recent studies and emerging cases suggest potential risks of emotional dependency, reinforced delusions, addictive behaviors, and encouragement of self-harm,” reads the paper. And at the same time, the authors continue, “technology companies face mounting pressures to retain user engagement, which often involves resisting regulation, creating tension between public health and market incentives.”
“Amidst these dilemmas,” the paper asks, “can public health rely on technology companies to effectively regulate unhealthy AI use?”
Dr. Nicholas Peoples, a clinical fellow in emergency medicine at Harvard’s Massachusetts General Hospital and one of the paper’s authors, said he felt moved to address the issue back in August after witnessing OpenAI’s now-infamous rollout of GPT-5.
“The number of people that have some sort of emotional relationship with AI,” Peoples recalls realizing as he watched the rollout unfold, “is much bigger than I think I had previously estimated in the past.”
GPT-5, then the latest iteration of the large language model (LLM) that powers OpenAI’s ChatGPT, was markedly colder in tone and personality than its predecessor, GPT-4o — a strikingly flattering, sycophantic version of the widely used chatbot that came to be at the center of many cases of AI-powered delusion, mania, and psychosis. When OpenAI announced that it would sunset all previous models in favor of the new one, the backlash among much of its user base was swift and severe, with emotionally attached GPT-4o devotees responding not only with anger and frustration, but also with very real distress and grief.
This, Peoples told Futurism, felt like an important signal about the scale at which people appeared to be developing deep emotional relationships with emotive, always-on chatbots. Coupled with reports of users — often children and teens — experiencing delusions and other extreme adverse consequences following extensive interactions with lifelike AI companions, it also appeared to be a warning sign about the health and safety risks facing users who suddenly lose access to an AI companion.
“If a therapist is walking down the street and gets hit by a bus, 30 people lose their therapist. That’s tough for 30 people, but the world goes on,” said the emergency room doctor. “If therapist ChatGPT disappears overnight, or gets updated overnight and is functionally deleted for 100 million people, or whatever unconscionable number of people lose their therapist overnight — that’s a crisis.”
Peoples’ concern, though, wasn’t just the way that users had responded to OpenAI’s decision to nix the model; it was also the immediacy with which the company reacted to satisfy its customers’ demands. AI is an effectively self-regulated industry, and there are currently no specific federal laws that set safety standards for consumer-facing chatbots or for how they should be deployed, altered, or removed from the market. In an environment where chatbot makers are strongly incentivized to drive user engagement, it’s not exactly surprising that OpenAI reversed course so quickly. Attached users, after all, are engaged users.