After a string of disturbing mental health incidents involving AI chatbots, a group of state attorneys general has sent a letter to the AI industry’s top companies, warning them to fix “delusional outputs” or risk being in breach of state law.
The letter, signed by dozens of AGs from U.S. states and territories through the National Association of Attorneys General, asks the companies, including Microsoft, OpenAI, Google, and 10 other major AI firms, to implement a variety of new internal safeguards to protect their users. The other firms named in the letter are Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI.
The letter comes as a fight over AI regulation has been brewing between state and federal governments.
Those safeguards include transparent third-party audits of large language models that look for signs of delusional or sycophantic ideations, as well as new incident reporting procedures designed to notify users when chatbots produce psychologically harmful outputs. Those third parties, which could include academic and civil society groups, should be allowed to “evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company,” the letter states.
“GenAI has the potential to change how the world works in a positive way. But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations,” the letter states, pointing to a number of well-publicized incidents over the past year, including suicides and murder, in which violence has been linked to excessive AI use. “In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional.”
AGs also suggest companies treat mental health incidents the same way tech companies handle cybersecurity incidents — with clear and transparent incident reporting policies and procedures.
Companies should develop and publish “detection and response timelines for sycophantic and delusional outputs,” the letter states. In a similar fashion to how data breaches are currently handled, companies should also “promptly, clearly, and directly notify users if they were exposed to potentially harmful sycophantic or delusional outputs,” the letter says.
Another ask is that the companies develop “reasonable and appropriate safety tests” on GenAI models to “ensure the models do not produce potentially harmful sycophantic and delusional outputs.” These tests should be conducted before the models are ever offered to the public, the letter adds.
TechCrunch was unable to reach Google, Microsoft, or OpenAI for comment prior to publication. The article will be updated if the companies respond.