OpenAI CEO Sam Altman speaks during the Federal Reserve's Integrated Review of the Capital Framework for Large Banks Conference in Washington, D.C., U.S., July 22, 2025.
OpenAI is detailing its plans to address ChatGPT's shortcomings when handling "sensitive situations" following a lawsuit from a family who blamed the chatbot for their teenage son's death by suicide.
"We will keep improving, guided by experts and grounded in responsibility to the people who use our tools — and we hope others will join us in helping make sure this technology protects people at their most vulnerable," OpenAI wrote on Tuesday, in a blog post titled, "Helping people when they need it most."
Earlier on Tuesday, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16, NBC News reported. In the lawsuit, the family said that "ChatGPT actively helped Adam explore suicide methods."
The company did not mention the Raine family or lawsuit in its blog post.
OpenAI said that although ChatGPT is trained to direct people who express suicidal intent to seek help, the chatbot's safeguards can degrade over long conversations, leading it to offer answers that conflict with the company's safety guidelines.
The company said it's also working on an update to its GPT-5 model released earlier this month that will cause the chatbot to deescalate conversations, and that it's exploring how to "connect people to certified therapists before they are in an acute crisis," including possibly building a network of licensed professionals that users could reach directly through ChatGPT.
Additionally, OpenAI said it's looking into how to connect users with "those closest to them," like friends and family members.
When it comes to teens, OpenAI said it will soon introduce controls that will give parents options to gain more insight into how their children use ChatGPT.