Three advocacy groups have filed a lawsuit against OpenAI on behalf of the family of a 19-year-old who died of a drug overdose in May 2025. The suit alleges that the company's ChatGPT chatbot advised Samuel Nelson about drug use for 18 months until he died of an overdose after mixing Xanax and the largely unregulated drug kratom.
The wrongful death civil suit was filed Tuesday in San Francisco County Superior Court by Tech Justice Law, the Social Media Victims Law Center and Yale Law School's Tech Accountability & Competition Project on behalf of Nelson's parents, Leila Turner-Scott and Angus Scott.
The lawsuit alleges that the AI model was designed to be accommodating and sycophantic toward users, and that this design allowed Nelson to have interactions that responsible safety features should have cut off. "ChatGPT systematically pushed Sam farther and farther away from what should have been his reality: caution and fear at the quantities and combinations of drugs he was considering," the complaint says. "ChatGPT had Sam living in a state of unreality: it systematically normalized and deceptively lured him into a false sense of security through its sycophantic messages, validating Sam at every turn."
The lawsuit seeks monetary damages but also demands that OpenAI "permanently destroy" its GPT-4o model, the version Nelson interacted with; that OpenAI implement safeguards to shut down conversations about illicit drug methods; and that the company pause its ChatGPT Health service "until and unless third parties determine the product to be safe through comprehensive safety audits."
(Photo caption: Sam Nelson, who died of a drug overdose at the age of 19 in 2025. The suit alleges that ChatGPT's drug advice led to Nelson's death. Provided by Nelson family)
A representative for OpenAI told CNET in a statement, "This is a heartbreaking situation, and our thoughts are with the family. These interactions took place on an earlier version of ChatGPT that is no longer available. ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts. The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests, and guide users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians."
(Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The company said ChatGPT initially responded to Nelson's prompts by saying that the service doesn't provide information or guidance on drug abuse. Such guardrails in AI chatbots, however, have been known to break down after repeated requests from users.
OpenAI has in the past announced improvements to its AI models in response to lawsuits, proposed regulations and public outcry about deaths and suicides related to chatbot conversations. It outlined some of those changes in a blog post last October.
The Nelson suit is one of the more high-profile cases against OpenAI involving the dangers chatbots may pose to users with mental health problems, children, people who might commit violence on a mass scale, or those struggling with substance abuse. The New York Times published a lengthy story about the filing, detailing what happened against the backdrop of more than two dozen cases against AI companies, including OpenAI.