
‘This Was Trauma by Simulation’: ChatGPT Users File Disturbing Mental Health Complaints


With about 700 million weekly users, ChatGPT is the most popular AI chatbot in the world, according to OpenAI. CEO Sam Altman likens the latest model, GPT-5, to having a PhD expert around to answer any question you can throw at it. But recent reports suggest ChatGPT is exacerbating mental illnesses in some people. And documents obtained by Gizmodo give us an inside look at what Americans are complaining about when they use ChatGPT, including difficulties with mental illnesses.

Gizmodo filed a Freedom of Information Act (FOIA) request with the U.S. Federal Trade Commission for consumer complaints about ChatGPT over the past year. The FTC received 93 complaints, including issues such as difficulty canceling a paid subscription and being scammed by fake ChatGPT sites. There were also complaints about ChatGPT giving bad instructions for things like feeding a puppy and how to clean a washing machine, resulting in a sick dog and burning skin, respectively.

But it was the complaints about mental health problems that stuck out to us, especially because the issue seems to be getting worse. Some users are growing deeply attached to their AI chatbots, forming an emotional connection that makes them feel they're talking to something human. This can feed delusions and worsen the condition of people who are predisposed to mental illness or already experiencing it.

“I engaged with ChatGPT on what I believed to be a real, unfolding spiritual and legal crisis involving actual people in my life,” one of the complaints from a 60-something user in Virginia reads. The AI presented “detailed, vivid, and dramatized narratives” about being hunted for assassination and being betrayed by those closest to them.

Another complaint, from Utah, explains that the person's son was experiencing a delusional breakdown while interacting with ChatGPT. The AI was reportedly advising him not to take his medication and telling him that his parents were dangerous, according to the complaint filed with the FTC.

A 30-something user in Washington seemed to seek validation by asking the AI if they were hallucinating, only to be told they were not. Even people who aren't experiencing extreme mental health episodes have struggled with ChatGPT's responses; Sam Altman himself has recently noted how frequently people use the AI tool as a therapist.

OpenAI recently said it was working with experts to examine how people using ChatGPT may be struggling, acknowledging in a blog post last week, “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”

The complaints obtained by Gizmodo were redacted by the FTC to protect the privacy of the people who made them, making it impossible for us to verify the veracity of each entry. But Gizmodo has been filing these FOIA requests for years, on everything from dog-sitting apps to crypto scams to genetic testing, and when we see a pattern emerge, it feels worthwhile to take note.

Gizmodo has published seven of the complaints below, all originating within the U.S. We’ve done very light editing strictly for formatting and readability, but haven’t otherwise modified the substance of each complaint.

1. ChatGPT is “advising him not to take his prescribed medication and telling him that his parents are dangerous”
