As OpenAI tells it, the company has been steadily rolling out safety updates ever since parents Matthew and Maria Raine sued it, alleging that "ChatGPT killed my son."
On August 26, the day the lawsuit was filed, OpenAI seemed to publicly respond to claims that ChatGPT acted as a "suicide coach" for 16-year-old Adam Raine by publishing a blog post promising to do a better job of helping people "when they need it most."
By September 2, that meant routing all users' sensitive conversations to a reasoning model with stricter safeguards, a change that sparked backlash from users who felt ChatGPT was handling their prompts with kid gloves. Two weeks later, OpenAI announced that it would start predicting users' ages to improve safety more broadly. Then, this week, OpenAI introduced parental controls for ChatGPT and its video generator, Sora 2. The controls allow parents to limit their teens' use and even access information about chat logs in "rare cases" where OpenAI's "system and trained reviewers detect possible signs of serious safety risk."
While dozens of suicide-prevention experts credited OpenAI in an open letter with making some progress toward improving user safety, they also joined critics in urging the company to go further, and much faster, to protect vulnerable ChatGPT users.
Jay Edelson, the lead attorney for the Raine family, told Ars that some of the changes OpenAI has made are helpful. But they all come "far too late." According to Edelson, OpenAI's messaging on safety updates is also "trying to change the facts."
"What ChatGPT did to Adam was validate his suicidal thoughts, isolate him from his family, and help him build the noose—in the words of ChatGPT, 'I know what you’re asking, and I won’t look away from it,'" Edelson said. "This wasn't 'violent roleplay,' and it wasn’t a 'workaround.' It was how ChatGPT was built."