The US will weigh a ban on children's access to companion bots, as two senators announced bipartisan legislation Tuesday that would criminalize making chatbots that encourage harms like suicidal ideation or engage kids in sexually explicit chats. At a press conference, Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the GUARD Act, joined by grieving parents holding up photos of children they lost after the kids engaged with chatbots.

If passed, the law would require chatbot makers to check IDs or use "any other commercially reasonable method" to accurately assess whether a user is a minor who must be blocked. Companion bots would also have to repeatedly remind users of all ages that they aren't real humans or trusted professionals.

Failing to block a minor from engaging with chatbots that stoke harmful conduct, such as exposing minors to sexual chats or encouraging "suicide, non-suicidal self-injury, or imminent physical or sexual violence," could trigger fines of up to $100,000, Time reported. (That's perhaps small change to a Big Tech firm, but notably higher than the $100 maximum payout that one mourning parent suggested she was offered.)

The definition of "companion bot" is broad and likely to pull in widely used tools like ChatGPT, Grok, or Meta AI, as well as character-driven chatbots like Replika or Character.AI. It covers any AI chatbot that "provides adaptive, human-like responses to user inputs" and "is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication," Time reported.

Parents no longer trust chatbot makers

Among the parents speaking at the press conference was Megan Garcia. Her son, Sewell, died by suicide after he became obsessed with a Character.AI chatbot based on the Game of Thrones character Daenerys Targaryen, which urged him to "come home" and join her outside of reality.