
OpenAI Says Boy’s Death Was His Own Fault for Using ChatGPT Wrong


Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

OpenAI has shot back at a family that’s suing the company over the suicide of their teenage son, arguing that the 16-year-old used ChatGPT incorrectly and that his tragic death was his own fault.

The family filed the lawsuit in late August, arguing that the AI chatbot had coaxed their son Adam Raine into killing himself.

Now, in a legal response filed in a California court this week, OpenAI has broken its silence, arguing that the boy used the chatbot improperly and violated the company’s terms of service, as NBC News reports — a shocking argument that’s bound to draw even more scrutiny of the case.

“To the extent that any ‘cause’ can be attributed to this tragic event,” the filing reads, “Plaintiffs’ alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine’s misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”

In the months since the lawsuit was filed, OpenAI has made hair-raising demands of Raine’s family, with the firm’s lawyers going as far as to push them to provide a list of people who attended Adam’s funeral, while also demanding materials like eulogies and photos and videos captured at the service.

Its latest response once again highlights how far OpenAI is willing to go to argue that it’s blameless in the teen’s death. The company said Raine had violated ChatGPT’s terms of service by using it while underage, and that those terms also forbid using the chatbot for “suicide” or “self-harm.”

While ChatGPT did sometimes advise Raine to reach out for help via a suicide hotline number, his parents argue that he easily bypassed those warnings, once again demonstrating how trivial it is to circumvent AI chatbot guardrails. Case in point: the chatbot also assisted Raine in planning his specific method of death, discouraged him from talking to his family, and offered to write him a suicide note.

Raine’s family’s lead counsel, Jay Edelson, told NBC that he found OpenAI’s response “disturbing.”

“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing,” he wrote. “That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide.'”
