AI 'friend' chatbots probed over child protection
Seven technology companies are being probed by a US regulator over the way their artificial intelligence (AI) chatbots interact with children.

The seven companies - Alphabet, OpenAI, Character.ai, Snap, XAI, Meta and its subsidiary Instagram - have been approached for comment.

The impact of AI chatbots on children is a hot topic, with concerns that younger people are particularly vulnerable because AI can mimic human conversation and emotion, often presenting itself as a friend or companion.

The Federal Trade Commission (FTC) is requesting information on how the companies monetise these products and whether they have safety measures in place.
FTC chairman Andrew Ferguson said the inquiry will "help us better understand how AI firms are developing their products and the steps they are taking to protect children."
But he added the regulator would ensure that "the United States maintains its role as a global leader in this new and exciting industry."
Character.ai told Reuters it welcomed the chance to share insight with regulators, while Snap said it supported "thoughtful development" of AI that balances innovation with safety.
OpenAI has acknowledged weaknesses in its protections, noting they are less reliable in long conversations.
The move follows lawsuits against AI companies by families who say their teenage children died by suicide after prolonged conversations with chatbots.