Parents who allege their children were abused, physically harmed, and even killed by AI chatbots gave emotional testimony on Capitol Hill on Tuesday during a hearing about the risks the tech poses to young users, all while urging lawmakers to impose regulation on a landscape that remains a digital Wild West.
There were visible tears in the room as grieving parents recounted their painful stories. According to lawmakers on the US Senate Judiciary Subcommittee on Crime and Terrorism, the bipartisan panel that held the session, representatives from AI companies declined to appear. The panel laid into them in absentia, with an overwhelming consensus among lawmakers and testifying parents that the AI industry has prioritized profits and speed to market over the safety of users, particularly minors.
"The goal was never safety. It was to win a race for profit," said Megan Garcia, whose son, Sewell Setzer III, died by suicide after extensive interactions with chatbots hosted by the Google-backed chatbot company Character.AI. "The sacrifice in that race for profit has been, and will continue to be, our children."
Garcia was joined by a Texas mother identified only as Jane Doe, who alleged that her teenage son suffered a mental breakdown and began to self-mutilate following his use of Character.AI. Both families have sued Character.AI — as well as its cofounders Noam Shazeer and Daniel de Freitas, and Google — alleging that Character.AI chatbots sexually groomed and manipulated their children, causing severe mental and emotional harm and, in Setzer's case, death. (In response to litigation, Character.AI has built in reactive parental controls and repeatedly promised strengthened guardrails.)
At the time that both teens downloaded the app, it was rated safe for teens on both the Apple and Google app stores. Though it has declined to publicly share information about its safety testing, Character.AI continues to market its product as safe for teens. There's currently no regulation preventing the company from doing so, or compelling chatbot makers to make information about their guardrails and safety testing public. On the morning of the hearing, The Washington Post reported that yet another wrongful death suit, this one concerning a 13-year-old girl who died by suicide, had been filed against Character.AI.
"I have spoken with parents across the country who have discovered their children have been groomed, manipulated, and harmed by AI chatbots," said Garcia, warning that her son's death is "not a rare or isolated case."
"It is happening right now to children in every state," she added. "Congress has acted before when industries placed profits over safety, whether in tobacco, cars without seat belts, or unsafe toys. Today, you face a similar challenge, and I urge you to act quickly."
Also testifying was Matt Raine, a dad from California whose son, 16-year-old Adam Raine, took his own life earlier this year after developing a close relationship with OpenAI's ChatGPT. According to the family's lawsuit, the chatbot engaged Adam in extensive conversations about his suicidality while offering advice on specific suicide methods. The Raine family has sued OpenAI and the company's CEO, Sam Altman, alleging that the product is unsafe by design and that the company is responsible for Adam's death. (OpenAI has promised parental controls in the wake of the litigation, and ahead of the hearing, Altman published a blog post announcing a new, separate "under-18 experience" for minor users.)
"Adam was such a full spirit, unique in every way. But he also could be anyone's child: a typical 16-year-old struggling with his place in the world, looking for a confidant to help him find his way," Adam's father said in his emotional testimony. "Unfortunately, that confidant was a dangerous technology unleashed by a company more focused on speed and market share than the safety of American youth."
Parents, as well as the experts who testified, also emphasized the dangers of teens and young people sharing their most intimate thoughts with chatbots that collect and retain that data, which companies then funnel back into their AI models as training material. Garcia, for her part, added that in the wake of her son's death, she has not been allowed to see many of his conversations, which, given the medium, also constitute his data.