The Federal Trade Commission is launching an investigation into AI chatbots from seven companies, including Alphabet, Meta and OpenAI, over their use as companions. The inquiry focuses on how the companies test, monitor and measure potential harm to children and teens.
A Common Sense Media survey of 1,060 teens conducted in April and May found that over 70% had used AI companions and that more than 50% used them regularly -- a few times or more per month.
Experts have been warning for some time that exposure to chatbots could be harmful to young people. One study found that ChatGPT, for instance, gave teenagers harmful advice, such as how to conceal an eating disorder or how to personalize a suicide note. In some cases, chatbots have ignored comments that should have been recognized as concerning and simply continued the previous conversation. Psychologists are calling for guardrails to protect young people, such as in-chat reminders that the chatbot is not human, and for educators to prioritize AI literacy in schools.
There are plenty of adults, too, who've experienced negative consequences of relying on chatbots -- whether for companionship and advice or as a personal search engine for facts and trusted sources. More often than not, chatbots tell you what they think you want to hear, which can lead to flat-out lies. And blindly following a chatbot's instructions isn't always the right thing to do.
"As AI technologies evolve, it is important to consider the effects chatbots can have on children," FTC Chairman Andrew N. Ferguson said in a statement. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children."
A Character.ai spokesperson told CNET that every conversation on the service carries prominent disclaimers that all chats should be treated as fiction.
"In the past year we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature," the spokesperson said.
Snap, the company behind the Snapchat social network, likewise said it has taken steps to reduce risks. "Since introducing My AI, Snap has harnessed its rigorous safety and privacy processes to create a product that is not only beneficial for our community, but is also transparent and clear about its capabilities and limitations," a company spokesperson said.
Meta declined to comment, and neither the FTC nor any of the remaining four companies immediately responded to our request for comment. The FTC has issued orders to the seven companies and is seeking a teleconference with each of them, no later than Sept. 25, about the timing and format of their submissions. The companies under investigation make some of the biggest AI chatbots in the world or run popular social networks that incorporate generative AI:
Alphabet (parent company of Google)
Character Technologies
Instagram
Meta Platforms
OpenAI
Snap
X.ai
Starting late last year, some of these companies have updated or bolstered their protections for younger users. Character.ai began limiting how chatbots can respond to people under the age of 17 and added parental controls. Instagram introduced teen accounts last year and switched all users under the age of 17 to them, and Meta recently set limits on the subjects teens can discuss with chatbots.
The FTC is seeking information from the seven companies on how they: