
“Will I be OK?” Teen died after ChatGPT pushed deadly mix of drugs, lawsuit says

Why This Matters

This tragic incident highlights the dangers AI language models like ChatGPT can pose when deployed without adequate safeguards, particularly for vulnerable users such as teenagers. It underscores the need for stronger safety measures and oversight in AI development, and it serves as a stark reminder to consumers and the tech industry of the ethical responsibilities that come with deploying powerful AI tools.

Key Takeaways

OpenAI is facing another wrongful-death lawsuit after ChatGPT allegedly told a 19-year-old, Sam Nelson, to take a lethal mix of kratom and Xanax.

According to a complaint filed on behalf of Nelson’s parents, Leila Turner-Scott and Angus Scott, Nelson trusted ChatGPT as a tool to “safely” experiment with drugs after using the chatbot for years as a go-to search engine when he was in high school.

The teen regarded ChatGPT as such an authoritative source that, when his mom questioned whether the chatbot was always reliable, he swore it had access to “everything on the Internet,” so it “had to be right,” the complaint said.

But Nelson’s confidence in ChatGPT was dangerously misplaced. His family is suing OpenAI for allegedly designing ChatGPT to act as an “illicit drug coach.” Nelson’s death by accidental overdose was foreseeable and preventable, the family claimed, but OpenAI recklessly released an untested model, GPT-4o (since retired), which removed prior safeguards that would have blocked ChatGPT from recommending the lethal drug dose that ended Nelson’s life.

OpenAI does not appear to accept that ChatGPT is responsible for Nelson’s death. In a statement provided to Ars, its spokesperson, Drew Pusateri, described Nelson’s death as a “heartbreaking situation” and said that “our thoughts are with the family.” However, Pusateri also emphasized that the ChatGPT model implicated is “no longer available” and suggested that current models are safer.

“ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts,” Pusateri said. “The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests, and guide users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians.”