On March 13, a woman from Salt Lake City, Utah, called the Federal Trade Commission to file a complaint against OpenAI’s ChatGPT. She claimed to be acting “on behalf of her son, who was experiencing a delusional breakdown.”
“The consumer’s son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous,” reads the FTC’s summary of the call. “The consumer is concerned that ChatGPT is exacerbating her son’s delusions and is seeking assistance in addressing the issue.”
The mother’s complaint is one of seven filed with the FTC alleging that ChatGPT caused people to experience severe delusions, paranoia, and spiritual crises.
WIRED sent a public records request to the FTC for all complaints mentioning ChatGPT since the tool launched in November 2022; ChatGPT accounts for more than 50 percent of the global market for AI chatbots. In response, WIRED received 200 complaints submitted between January 25, 2023, and August 12, 2025, the day WIRED filed the request.
Most people had ordinary complaints: They couldn’t figure out how to cancel their ChatGPT subscriptions, or were frustrated when the chatbot didn’t produce satisfactory essays or rap lyrics when prompted. But a handful of other people, who varied in age and lived in different parts of the US, made far more serious allegations of psychological harm. Those complaints were all filed between March and August of 2025.
In recent months, there has been a growing number of documented incidents of so-called “AI psychosis” in which interactions with generative AI chatbots, like ChatGPT or Google Gemini, appear to induce or worsen a user’s delusions or other mental health issues.
Ragy Girgis, a professor of clinical psychiatry at Columbia University who specializes in psychosis and has consulted on cases of AI psychosis, tells WIRED that some of the risk factors for psychosis can be related to genetics or early-life trauma. What specifically triggers someone to have a psychotic episode is less clear, but he says it’s often tied to a stressful event or time period.
The phenomenon known as “AI psychosis,” he says, is not a large language model actually triggering symptoms, but rather the model reinforcing a delusion or disorganized thoughts that a person was already experiencing in some form. The LLM helps bring someone “from one level of belief to another level of belief,” Girgis explains. It’s not unlike a psychotic episode that worsens after someone falls into an internet rabbit hole. But compared to search engines, he says, chatbots can be stronger agents of reinforcement.