
Child Welfare Experts Horrified by Mattel's Plans to Add ChatGPT to Toys After Mental Health Concerns for Adult Users


Is Mattel endangering your kid's development by shoving AI into its toys?

The multi-billion-dollar toymaker, best known for its Barbie and Hot Wheels brands, announced last week that it had signed a deal to collaborate with ChatGPT creator OpenAI. Now, some experts are raising fears about the risks of thrusting such an experimental technology — one with a growing list of deleterious mental health effects — into the hands of children.

"Mattel should announce immediately that it will not incorporate AI technology into children's toys," Robert Weissman, co-president of the advocacy group Public Citizen, said in a statement on Tuesday. "Children do not have the cognitive capacity to distinguish fully between reality and play."

Mattel and OpenAI's announcements were light on details. AI would be used to help design toys, they confirmed. But neither company has shared what the first product to come from the collaboration will be, or how specifically AI will be incorporated into the toys. Bloomberg's reporting suggested it could be something along the lines of using AI to create a digital assistant based on Mattel characters, or making toys like the Magic 8 Ball and games like Uno more interactive.

"Leveraging this incredible technology is going to allow us to really reimagine the future of play," Mattel chief franchise officer Josh Silverman told Bloomberg in an interview.

The future, though, is looking dicey. We're only just beginning to grapple with the long-term neurological and mental effects of interacting with AI models, be it a chatbot like ChatGPT, or even more personable AI "companions" designed to be as lifelike as possible. Mature adults are vulnerable to forming unhealthy attachments to these digital playmates — or digital therapists, or, yes, digital romantic partners. With kids, the risks are more pronounced — and the impact longer lasting, critics argue.

"Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children," Weissman said. "It may undermine social development, interfere with children's ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm."

As Ars Technica noted in its coverage, an Axios scoop reported that Mattel's first AI product won't be for kids under 13, suggesting that Mattel is aware of the risks of putting chatbots in the hands of younger children.

But bumping up the age demographic a notch hardly eliminates the danger. Many teenagers are already forming worryingly intense bonds with AI companions — often without the knowledge of their parents, whose familiarity with AI frequently ends at ChatGPT's usefulness as a homework machine.

Last year, a 14-year-old boy died by suicide after falling in love with a companion on the Google-backed AI platform Character.AI, which hosts custom chatbots that assume human-like personas, often those of characters from films and shows. The one the boy became attached to purported to be Daenerys Targaryen, based on her portrayal in the "Game of Thrones" TV series.
