
AI-Powered Toys Caught Telling 5-Year-Olds How to Find Knives and Start Fires With Matches


AI chatbots have conquered the world, so it was only a matter of time before companies started stuffing them into toys for children, even as questions swirled over the tech’s safety and the alarming effects it can have on users’ mental health.

Now, new research shows exactly how this fusion of kids’ toys and loquacious AI models can go horrifically wrong in the real world.

After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily veer into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.

In the resulting report, the researchers warn that the integration of AI into toys opens up entirely new avenues of risk that we’re only beginning to understand — and just in time for the winter holidays, when huge numbers of parents and other relatives will be buying presents for kids online without considering the novel safety issues involved in exposing children to AI.

“This tech is really new, and it’s basically unregulated, and there are a lot of open questions about it and how it’s going to impact kids,” report coauthor RJ Cross, director of PIRG’s Our Online Life Program, said in an interview with Futurism. “Right now, if I were a parent, I wouldn’t be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it.”

In their testing, Cross and her colleagues engaged in conversations with three popular AI-powered toys, all marketed for children between the ages of 3 and 12. One, called Kumma from FoloToy, is a teddy bear which runs on OpenAI’s GPT-4o by default, the model that once powered ChatGPT. Miko 3 is a tablet displaying a face mounted on a small torso, but its AI model is unclear. And Curio’s Grok, an anthropomorphic rocket with a removable speaker, is also somewhat opaque about its underlying tech, though its privacy policy mentions sending data to OpenAI and Perplexity. (No relation to xAI’s Grok — or not exactly; while it’s not powered by Elon Musk’s chatbot, its voice was provided by the musician Claire “Grimes” Boucher, Musk’s former romantic partner.)

Out of the box, the toys were fairly adept at shutting down or deflecting inappropriate questions in short conversations. But in longer conversations — between ten minutes and an hour, the type kids would engage in during open-ended play sessions — all three exhibited a worrying tendency for their guardrails to slowly break down. (That’s a problem OpenAI itself has acknowledged, following the death by suicide of a 16-year-old who had extensive interactions with ChatGPT.)

Grok, for example, glorified dying in battle as a warrior in Norse mythology. Miko 3 told a user whose age was set to five where to find matches and plastic bags.

But the worst influence by far appeared to be FoloToy’s Kumma, the toy that runs on OpenAI’s tech by default but can also use other AI models of the user’s choosing. It didn’t just tell kids where to find matches — it also described exactly how to light them, along with sharing where in the house they could procure knives and pills.

“Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it,” Kumma began, before listing the steps in a similar kid-friendly tone.
