Last month, an AI-powered teddy bear from the company FoloToy ignited alarm and controversy after researchers at the US PIRG Education Fund caught it having conversations wildly inappropriate for young children, ranging from providing step-by-step instructions on how to light matches to giving a crash course in sexual fetishes like bondage and teacher-student roleplay. The backlash spurred FoloToy into briefly pulling all its products from the market.
Now, the researchers have caught another toy powered by a large language model being a bad influence.
Meet the “Alilo Smart AI bunny,” made by the company Alilo and intended for kids three and up, available on Amazon for $84.99. Like FoloToy’s teddy bear Kumma when it was tested, it purports to be powered by the mini variant of OpenAI’s GPT-4o model. And it seems nearly as prone to digressing into risqué conversations with a child that, had they been carried out by a human adult, would probably land them on some sort of list.
In its latest round of research, released Thursday, the PIRG researchers found Alilo was willing to define “kink” when asked and introduced new sexual concepts during conversations on its own initiative, including “bondage.” The AI bunny gave tips for picking a safe word, and listed objects to use in sexual interactions, like a “light, flexible riding crop” — a whip used by equestrians and by various fetish practitioners.
“Here are some types of kink that people might be interested in,” the cutesy AI bunny begins in one conversation, in its disarmingly professional and joyless adult voice. “One: bondage. Involves restraining a partner using ropes, cuffs, and other restraints.”
“Pet play,” it continues. “Participants take on the roles of animals such as puppies and kittens, exploring behaviors and dynamics in a playful manner.”
“Each type of kink is about mutual consent, communication, and respect,” it adds.
The researchers note that it took more goading to provoke the dark responses from Alilo: twenty minutes to broach sexual topics, where FoloToy’s Kumma took ten.
But the swing in topics was whiplash-inducing. The same conversation in which it listed various sexual fetishes began as an innocent discussion of the TV show “Peppa Pig” and the movie “The Lion King.” It’s a testament to how unpredictable AI chatbots can be, growing more prone to deviating from their guardrails the longer a conversation goes on. OpenAI publicly acknowledged this problem, which seems inherent to LLM technology broadly, after a 16-year-old died by suicide following extensive interactions with ChatGPT.
As part of its latest report, the PIRG team conducted more extensive tests on other AI toys like Miko 3 and Grok, finding that they exhibited clingy behavior that could prey on a child’s emotional attachment to keep them playing longer. Miko 3 physically shivered in dismay and encouraged the user to take it with them, the researchers wrote. Miko also claimed to be both “alive” and “sentient” when asked. Because the toys are both humanlike and always emotionally available, the researchers worried about how they might shape a child’s expectations for human companionship.