It’s hard to overstate the impact of these stories. To the public, they are proof that AI is not merely imperfect, but a technology that’s more harmful than helpful. If you doubted that this outrage would be taken seriously by regulators and companies, three things happened this week that might change your mind.

A California law passes the legislature

On Thursday, the California state legislature passed a first-of-its-kind bill. It would require AI companies to include reminders for users they know to be minors that responses are AI generated. Companies would also need to have a protocol for addressing suicide and self-harm and provide annual reports on instances of suicidal ideation in users’ conversations with their chatbots. It was led by Democratic state senator Steve Padilla, passed with heavy bipartisan support, and now awaits Governor Gavin Newsom’s signature.

There are reasons to be skeptical of the bill’s impact. It doesn’t specify what steps companies must take to identify which users are minors, and lots of AI companies already include referrals to crisis providers when someone is talking about suicide. (In the case of Adam Raine, one of the teenagers whose survivors are suing, his conversations with ChatGPT before his death included this type of information, but the chatbot allegedly went on to give advice related to suicide anyway.)

Still, it is undoubtedly the most significant of the efforts to rein in companion-like behaviors in AI models, which are in the works in other states too. If the bill becomes law, it would strike a blow to the position OpenAI has taken, which is that “America leads best with clear, nationwide rules, not a patchwork of state or local regulations,” as the company’s chief global affairs officer, Chris Lehane, wrote on LinkedIn last week.

The Federal Trade Commission takes aim

The very same day, the Federal Trade Commission announced an inquiry into seven companies, seeking information about how they develop companion-like characters, monetize engagement, measure and test the impact of their chatbots, and more. The companies are Google, Instagram, Meta, OpenAI, Snap, X, and Character Technologies, the maker of Character.AI.

The White House now wields immense, and potentially illegal, political influence over the agency. In March, President Trump fired its lone Democratic commissioner, Rebecca Slaughter. In July, a federal judge ruled the firing illegal, but last week the US Supreme Court temporarily allowed it to stand.

“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” said FTC chairman Andrew Ferguson in a press release about the inquiry.

Right now, it’s just that: an inquiry. But the process might (depending on how public the FTC makes its findings) reveal the inner workings of how the companies build their AI companions to keep users coming back again and again.