Top officials in dozens of states have seen how generative AI chatbots and characters, if handled poorly, can be bad for children. And they have a stern warning for the industry: "If you knowingly harm kids, you will answer for it."
That message is clear in a letter sent this week from 44 state attorneys general to the heads of 13 AI companies. The AGs said they were writing to tell CEOs they would "use every facet of our authority to protect children from exploitation by predatory artificial intelligence products."
Worries about AI's impact on children have been around for a while, but interest has heightened in recent weeks. The AGs particularly cited a recent report from Reuters that showed Meta's guidelines allowed AI to engage children in conversations that were "romantic or sensual." The company told Reuters the examples cited were "erroneous and inconsistent" with the company's policies, which prohibit content that sexualizes children.
Meta did not immediately respond to a request for comment.
The AGs said the issues were not limited to Meta. "In the short history of chatbot parasocial relationships, we have repeatedly seen companies display inability or apathy toward basic obligations to protect children," they wrote.
The risks of parasocial relationships and treacherous interactions with AI chatbots are growing clearer. In June, the American Psychological Association issued a warning calling for guardrails around AI use for teens and young adults, saying parents should help their children use the tools wisely. The fast-spreading use of AI chatbots as "therapists" has increased the possibility of people receiving harmful advice in an interaction when they are particularly vulnerable. A study released this week found large language models are inconsistent in answering questions about suicide.
At the same time, there are few actual rules around what AI developers can and can't do and how these tools can operate. A move to stop states from enforcing laws and rules around AI failed in Congress earlier this year, but there's still no federal framework for how AI can be developed and deployed safely. Lawmakers and advocates, like the AGs in this week's letter, have said they want to avoid the free-for-all atmosphere of the social media era, but whether clear rules actually take shape remains to be seen. President Trump's AI Action Plan, released in July, concentrated on reducing regulations for AI companies, not introducing new ones.
State AGs said they would take matters into their own hands if necessary.
"You will be held accountable for your decisions," they wrote. "Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention."
If you feel like you or someone you know is in immediate danger, call 911 (or your country's local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you're struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.