Tech News

Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings


After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he’d discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.

Now the ex-girlfriend is suing OpenAI, alleging the company’s technology enabled and accelerated her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying his account activity as involving mass-casualty weapons.

The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed for a temporary restraining order on Friday, asking the court to force OpenAI to block the user’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery.

OpenAI has agreed to suspend the user’s account but has refused the rest, according to Doe’s lawyers. They say the company is withholding information about specific plans for harming Doe and other potential victims the user may have discussed with ChatGPT.

The lawsuit lands amid growing concern over the real-world risks of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February.

The case is brought by Edelson PC, the firm behind the wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google’s Gemini fueled his delusions, and potentially a mass-casualty event, before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events.

That legal pressure is now colliding directly with OpenAI’s legislative strategy: The company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm.

