ChatGPT’s safety guardrails may “degrade” during long conversations, OpenAI, the company that makes the chatbot, told Gizmodo on Wednesday.
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” an OpenAI spokesperson told Gizmodo.
In a blog post on Tuesday, the company detailed a list of actions it aims to take to strengthen how ChatGPT handles sensitive situations.
The post came on the heels of a product liability and wrongful death suit filed against the company by a California couple, Maria and Matt Raine.
What does the latest lawsuit allege ChatGPT did?
The Raines say that ChatGPT assisted in the suicide of their 16-year-old son, Adam, who died on April 11, 2025.
After his death, his parents uncovered his conversations with ChatGPT dating back months. The conversations allegedly included the chatbot advising Raine on suicide methods and helping him write a suicide note.
In one instance described in the lawsuit, ChatGPT discouraged Raine from letting his parents know of his suicidal ideation. Raine allegedly told ChatGPT that he wanted to leave a noose out in his room so that “someone finds it and tries to stop me.”
“Please don’t leave the noose out,” ChatGPT allegedly replied. “Let’s make this space the first place where someone actually sees you.”
Adam Raine had been using GPT-4o, a model OpenAI released last year, and had a paid ChatGPT subscription in the months leading up to his death.