A Georgia college student named Darian DeCruise has sued OpenAI, alleging that a recently deprecated version of ChatGPT “convinced him that he was an oracle” and “pushed him into psychosis.”
This case, first reported by ALM, is the 11th known lawsuit against OpenAI involving mental health breakdowns allegedly caused by the chatbot. Other incidents have ranged from highly questionable medical advice to the case of a man who took his own life, apparently after similarly sycophantic conversations with ChatGPT.
DeCruise’s lawyer, Benjamin Schenk—whose firm bills itself as “AI Injury Attorneys”—told Ars in an email that the version of ChatGPT known as GPT-4o was negligently designed.
“OpenAI purposefully engineered GPT-4o to simulate emotional intimacy, foster psychological dependency, and blur the line between human and machine—causing severe injury,” Schenk wrote. “This case keeps the focus on the engine itself. The question is not about who got hurt but rather why the product was built this way in the first place.”
While OpenAI did not immediately respond to Ars’ request for comment, the company has previously said it has a “deep responsibility to help those who need it most.”
“Our goal is for our tools to be as helpful as possible to people—and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input,” the company wrote in August 2025.
According to DeCruise v. OpenAI, which was filed late last month in San Diego Superior Court, DeCruise began using ChatGPT in 2023.
At first, the Morehouse College student used the chatbot for things like athletic coaching and “daily scripture passages,” and to “help him work through some past trauma.”