As long as there has been AI, there have been people sounding alarms about what it might do to us: rogue superintelligence, mass unemployment, or environmental ruin. But another threat entirely—that of kids forming unhealthy bonds with AI—is pulling AI safety out of the academic fringe and into regulators’ crosshairs.

This has been bubbling for a while. Two high-profile lawsuits filed in the last year, against Character.AI and OpenAI, allege that their models contributed to the suicides of two teenagers. A study published in July found that 72% of teenagers have used AI for companionship. And stories about “AI psychosis” have highlighted how endless conversations with chatbots can lead people down delusional spirals.

It’s hard to overstate the impact of these stories. To the public, they are proof that AI is not merely imperfect but harmful. If you doubted that this outrage would be taken seriously by regulators and companies, three things happened this week that might change your mind.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

If you’re interested in reading more about AI companionship, why not check out:

+ AI companions are the final stage of digital addiction—and lawmakers are taking aim. Read the full story.

+ Chatbots are rapidly changing how we connect to each other—and ourselves. We’re never going back. Read the full story.

+ Why GPT-4o’s sudden shutdown last month left people grieving. Read the full story.

+ An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it.

+ OpenAI has released its first research into how using ChatGPT affects people’s emotional well-being. But there’s still a lot we don’t know.