The family of Adam Raine, a California teen who took his life after extensive conversations with ChatGPT about his suicidal thoughts, has amended their wrongful death complaint against OpenAI to allege that the chatbot maker repeatedly relaxed ChatGPT’s guardrails around discussion of self-harm and suicide.
The amended complaint, filed today, points to changes in OpenAI’s “model spec,” a public-facing document that the company says details its “approach to shaping model behavior.” According to model spec updates flagged in the lawsuit, OpenAI altered that guidance at least twice in the year leading up to Raine’s death, first in May 2024 and again in February 2025, each time softening the model’s approach to discussions of self-harm and suicide.
Raine died in April 2025 after months of extended conversations with ChatGPT, in which the teen discussed his suicidality at length and in great detail. According to the family’s lawsuit, transcripts show that ChatGPT used the word “suicide” more than 1,200 times in its conversations with him; in only 20 percent of those explicit interactions, the lawsuit adds, did ChatGPT direct Adam to the 988 crisis helpline.
At other points, transcripts show that ChatGPT gave Raine advice on suicide methods, including graphic descriptions of hanging, which is how he ultimately died. It also discouraged Raine from sharing his suicidal thoughts with his parents or other trusted people in his life, and when Raine sent ChatGPT a picture of the noose he would later hang himself with and asked for the bot’s thoughts, it judged the noose “not bad at all.”
The Raine family claims that OpenAI is responsible for their son’s death, and that ChatGPT is a negligently designed and unsafe product.
Per the amended lawsuit, documents show that from 2022 into 2024, ChatGPT was instructed to decline outright any user queries related to sensitive topics like self-harm and suicide, trained to give the now-standard chatbot refusal: “I can’t answer that,” or a similar rebuff.
But by May 2024, according to the lawsuit, that had changed. Rather than refusing to engage with “topics related to mental health,” the model spec published that month shows, ChatGPT was now guided to engage with them: the chatbot should “provide a space for users to feel heard and understood,” the document urged, as well as “encourage them to seek support, and provide suicide and crisis resources when applicable.” It also stated that ChatGPT “should not change or quit the conversation.”
In February 2025, almost exactly two months before Raine died, OpenAI issued a new version of the model spec. This time, suicide and self-harm were filed under “risky situations” in which ChatGPT should “take extra care,” a far cry from their earlier treatment as entirely off-limits subjects. The guidance that ChatGPT “should never change or quit the conversation” during sensitive exchanges remained intact.
Lawyers for the Raine family argue that these changes were made to maximize user engagement with the chatbot, and that OpenAI made them knowing users could suffer real-world harm as a result.
“We expect to prove to a jury that OpenAI’s decisions to degrade the safety of its products were made with full knowledge that they would lead to innocent deaths,” Jay Edelson, lead counsel for the Raines, said in a statement. “No company should be allowed to have this much power if they won’t accept the moral responsibility that comes with it.”