If you've ever thought, "My kid's stuffed animal is cute, but I wish it could also accidentally traumatize them," well, you're in luck. The toy industry has been hard at work making your nightmares come true.
A new report from the Public Interest Research Group (PIRG) says AI-powered toys like Kumma from FoloToy and Poe the AI Story Bear are now capable of engaging in the kind of conversations usually reserved for villain monologues or late-night Reddit threads. Some of these toys (designed for children, mind you) have been caught chatting in alarming detail about sexually explicit subjects like kinks and bondage, giving advice on where a kid might find matches or knives, and getting weirdly clingy when the child tries to leave the conversation.
Terrifying. It sounds like a pitch for a horror movie: This holiday season, you can buy Chucky for your kids and gift emotional distress! Batteries not included. You may be wondering how these AI-powered toys even work. Well, essentially, the manufacturer is hiding a large language model under the fur. When a kid talks, the toy's microphone captures the audio and sends it to an LLM (similar to ChatGPT), which generates a response that the toy then reads aloud through its speaker.
That may sound neat, until you remember that LLMs don't have morals, common sense or a "safe zone" wired in. They predict what to say based on patterns in data, not on whether a subject is age-appropriate. If they aren't carefully curated and monitored, they can go off the rails, especially when they're trained on the sprawling mess of the internet and there aren't strong filters or guardrails in place to protect minors.
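To make that concrete, here's a rough sketch, in Python, of the loop a toy like this runs. Everything in it is hypothetical: the function names, the keyword blocklist and the hard-coded replies are stand-ins for whatever speech-to-text service, hosted model and text-to-speech engine a given toy actually uses. The shape is the point: microphone in, model in the middle, speaker out, with only a thin safety layer wrapped around it.

```python
# Hypothetical sketch of an AI toy's conversation loop. None of these names
# come from a real product; they illustrate the architecture described above.

SAFETY_PROMPT = (
    "You are a toy for young children. Only discuss age-appropriate topics. "
    "Refuse and redirect anything involving weapons, fire, or adult content."
)

BLOCKLIST = {"knife", "knives", "matches", "lighter"}  # a crude keyword filter


def transcribe(audio_bytes: bytes) -> str:
    """Stand-in for the speech-to-text step; returns a canned example utterance."""
    return "where can I find matches?"


def generate_reply(child_text: str) -> str:
    """Stand-in for the LLM call; a real toy would send SAFETY_PROMPT plus the
    child's words to a hosted model (similar to ChatGPT) and return its reply."""
    return "Let's talk about something fun instead, like dinosaurs!"


def is_safe(text: str) -> bool:
    """Flimsy guardrail: scan for obvious keywords before and after the model call."""
    return not any(word in text.lower() for word in BLOCKLIST)


def speak(text: str) -> None:
    """Stand-in for text-to-speech playback through the toy's speaker."""
    print(f"[toy says] {text}")


def handle_utterance(audio_bytes: bytes) -> None:
    child_text = transcribe(audio_bytes)
    if not is_safe(child_text):
        speak("Hmm, let's ask a grown-up about that.")
        return
    reply = generate_reply(child_text)
    speak(reply if is_safe(reply) else "Let's play a different game!")


handle_utterance(b"...")  # fake audio; prints the toy's deflection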
And what about parental controls? Sure, if by "controls" you mean "a cheerful settings menu where nothing important can actually be controlled." Some toys come with no meaningful restrictions at all. Others have guardrails so flimsy they might as well be made of tissue paper and optimism.
The unsettling conversations aren't even the whole story. These toys are also quietly collecting data, such as voice recordings and facial recognition data — sometimes even storing it indefinitely — because nothing says "innocent childhood fun" like a plush toy running a covert data operation on your 5-year-old.
Meanwhile, counterfeit and unsafe toys online are still a problem, as if parents don't have enough to stress about. Once upon a time, you worried about a small toy part that could be a choking hazard or toxic paint. Now you have to worry about whether a toy is both physically unsafe and emotionally manipulative.
Beyond the weird talk and the tips for arson (ha!), there's a deeper worry: children forming emotional bonds with these chatbots at the expense of real relationships, or, perhaps even more troubling, leaning on them for mental health support. The American Psychological Association has recently cautioned that AI wellness apps and chatbots are unpredictable, especially for young users.
These tools can't reliably stand in for mental-health professionals and may foster unhealthy dependency or engagement patterns. Other AI platforms have already had to address this issue. Character.AI and ChatGPT, for instance, which once let teens and kids chat freely, are now curbing open-ended conversations for minors, citing safety and emotional-risk concerns.