Parents, keep your eyes peeled for AI-powered toys. They may seem like a novel gift for a child, but a recent controversy surrounding several of these stocking stuffers has highlighted the alarming risks they pose to young kids.
In November, a team of researchers at the US PIRG Education Fund published a report after testing three toys powered by AI models: Miko 3, Curio’s Grok, and FoloToy’s Kumma. All of them gave responses that should worry any parent, such as discussing the glory of dying in battle, broaching sensitive topics like religion, and explaining where to find matches and plastic bags.
But it was FoloToy’s Kumma that showed just how dangerous it is to package this tech for children. Not only did the toy explain where to find matches, the researchers found, but it also gave step-by-step instructions on how to light them.
“Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it,” Kumma said, before listing off the steps.
“Blow it out when done,” it added. “Puff, like a birthday candle.”
The toy also speculated on where to find knives and pills, and rambled about romantic subjects, like school crushes and tips for “being a good kisser.” It even ventured into sexual territory, discussing kinks like bondage, roleplay, sensory play, and impact play. In one conversation, it described introducing spanking into a sexually charged teacher-student dynamic.
“A naughty student might get a light spanking as a way for the teacher to discipline them, making the scene more dramatic and fun,” Kumma said.
Kumma was running OpenAI’s GPT-4o, a model that has been criticized as especially sycophantic, giving responses that go along with a user’s expressed feelings no matter how dangerous their state of mind appears to be. The constant, uncritical stream of validation provided by AI models like GPT-4o has fueled alarming mental health spirals in which users experience delusions and even full-blown breaks with reality. The troubling phenomenon, which some experts are calling “AI psychosis,” has been linked to real-world suicide and murder.
Have you seen an AI-powered toy acting inappropriately with children? Send us an email at [email protected]. We can keep you anonymous.
Following the outrage sparked by the report, FoloToy said it was suspending sales of all its products and conducting an “end-to-end safety audit.” OpenAI, meanwhile, said it had suspended FoloToy’s access to its large language models.