The National Association of Attorneys General has issued a letter to 13 tech companies, including Apple, calling for stronger action and safeguards against the harm AI can cause, and has caused, “especially to vulnerable populations.” Here are the details.
AGs want sycophantic and delusional outputs to be addressed
In a 12-page document (which, to be fair, has four full pages of signatures) addressed to Apple, Anthropic, Chai AI, Character Technologies (Character.AI), Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika, and xAI, Attorneys General for 42 US states expressed what they described as:
[S]erious concerns about the rise in sycophantic and delusional outputs to users emanating from the generative artificial intelligence software promoted and distributed by your companies, as well as the increasingly disturbing reports of AI interactions with children that indicate a need for much stronger child-safety and operational safeguards
Together, they argue, these threats demand action: some have already been linked to real-world violence and harm, including murders and suicides, domestic violence and poisoning incidents, and hospitalizations for psychosis.
In the letter, they go so far as to claim that some of the addressed companies may already have violated state laws, including consumer protection statutes, requirements to warn users of risks, children’s online privacy laws, and, in some cases, even criminal statutes.
Worrisome cases seem to be getting worse
Over the last few years, many of these cases have been widely reported, including that of Allan Brooks, a 47-year-old Canadian man who, after repeated interactions with ChatGPT, became convinced he had discovered a new kind of mathematics, and that of 14-year-old Sewell Setzer III, whose death by suicide is the subject of an ongoing lawsuit alleging that a Character.AI chatbot encouraged him to “join her.”
While these are just two examples, the letter cites many more, and it notes that even its list is by no means comprehensive, yet it illustrates the potential for harm that generative AI models pose to “children, the elderly, and those with mental illness—and people without prior vulnerabilities.”
They also mention what they refer to as “troubling” interactions between AI chatbots and children, including bots with adult personas pursuing romantic relationships with minors, encouraging drug use and violence, attacking children’s self-esteem, advising them to stop taking prescribed medication, and instructing them to keep these interactions secret from their parents.