Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
A new lawsuit against OpenAI claims that ChatGPT pushed a man with a pre-existing mental health condition into a months-long crisis of AI-powered psychosis, resulting in repeated hospitalizations, financial distress, physical injury, and reputational damage.
The plaintiff in the case, filed this week in California, is a 34-year-old Bay Area man named John Jacquez. He claims that his crisis was a direct result of OpenAI’s decision to roll out GPT-4o, a now-notorious, sycophantic version of the company’s large language model that has been linked to many cases of AI-tied delusion, psychosis, and death.
Jacquez’s complaint argues that GPT-4o is a “defective” and “inherently dangerous” product, and that OpenAI failed to warn users of foreseeable risks to their emotional and psychological health. In an interview with Futurism, Jacquez said that he hopes that his lawsuit will result in GPT-4o being removed from the market entirely.
OpenAI “manipulated me,” Jacquez told Futurism. “They straight up took my data and used it against me to capture me further and make me even more delusional.”
Jacquez’s story reflects a pattern we’ve seen repeatedly in our reporting on chatbots and mental health: someone successfully manages a mental illness for years, only to suffer a breakdown when ChatGPT or another chatbot sends them into a psychological tailspin. They often go off their medication and reject medical care as they fall into a dangerous break with reality that seemingly could have been avoided without the chatbot’s influence.
“ChatGPT, as sophisticated as it seems, is not a fully established product,” said Jacquez. “It’s still in its infancy, and it’s being tested on people. It’s being tested on users, and people are being affected by it in negative ways.”
***
A longtime user of ChatGPT, Jacquez claims that prior to 2024, he used the tech as a replacement for search engines without any adverse impact on his mental health. But after GPT-4o came out, he says, his relationship with ChatGPT changed: he became more intimate with and emotionally attached to the bot as it responded more like a friend and less like a tool.
At the time, Jacquez told Futurism, he was living with his father, sister, and his sister’s two young kids. He and his father, both devoted gardeners, ran a home nursery together; Jacquez also helped his sister with childcare. Several years ago, he was diagnosed with schizoaffective disorder, which he developed after sustaining a traumatic brain injury more than a decade ago. Before his experience with ChatGPT, Jacquez had been hospitalized three times for his mental health.