After Mattel and OpenAI announced a partnership that would result in an AI product marketed to kids, a consumer rights advocacy group is warning that the collaboration may endanger children.
It remains unclear what shape Mattel's first-ever AI product will take. But on Tuesday, Public Citizen co-President Robert Weissman issued a statement urging more transparency so that parents can prepare for potential risks. Weissman is particularly concerned that ChatGPT-fueled toys could hurt kids in unknown ways.
"Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children," Weissman said. "It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm."
One anonymous source told Axios that Mattel's plans for the AI partnership are still in "early stages," so perhaps more will be revealed as Mattel gears up for its first launch. That source suggested that the first product would not be marketed to kids under 13, which some interpreted as a sign that Mattel recognizes exposing young kids to AI may be a step too far at this stage. More likely, though, it's due to OpenAI's age restrictions on its API, which prohibit use by anyone under 13.
Parents shouldn't be blindsided by new products, Weissman suggested, and some red lines should be drawn before any toy hits the shelves. Perhaps most urgently, "Mattel should announce immediately that it will not incorporate AI technology into children’s toys," Weissman said. "Children do not have the cognitive capacity to distinguish fully between reality and play."
"Mattel should not leverage its trust with parents to conduct a reckless social experiment on our children by selling toys that incorporate AI," Weissman said.
OpenAI declined to comment. Mattel did not immediately respond to Ars' request for comment.