The Federal Trade Commission (FTC) is ordering seven AI chatbot companies to provide information about how they assess the effects of their virtual companions on kids and teens.
OpenAI, Meta, its subsidiary Instagram, Snap, xAI, Google parent company Alphabet, and the maker of Character.AI all received orders to share information about how their AI companions make money, how they plan to maintain their user bases, and how they try to mitigate potential harm to users. The inquiry is part of a study, rather than an enforcement action, to learn more about how tech firms evaluate the safety of their AI chatbots. Amid a broader conversation about kids' safety on the internet, the risks of AI chatbots have emerged as a particular cause for concern among many parents and policymakers because of the human-like way the bots can communicate with users.
“For all their uncanny ability to simulate human cognition, these chatbots are products like any other, and those who make them available have a responsibility to comply with the consumer protection laws,” FTC Commissioner Mark Meador said in a statement. Chair Andrew Ferguson emphasized in a statement the need to “consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry.” The commission’s three Republicans all voted to approve the study, which requires the companies to respond within 45 days.
The inquiry comes after high-profile reports about teens who died by suicide after engaging with these technologies. A 16-year-old in California discussed his plans for suicide with ChatGPT, The New York Times reported last month, and the chatbot provided advice that appeared to assist him in his death. Last year, The Times also reported on the death by suicide of a 14-year-old in Florida who had engaged with a virtual companion from Character.AI.
Outside of the FTC, lawmakers are also looking at new policies to safeguard kids and teens from potentially negative effects of AI companions. California’s state assembly recently passed a bill that would require safety standards for AI chatbots and impose liability on the companies that make them.
While the orders to the seven companies aren’t connected to an enforcement action, the FTC could open such a probe if it finds reason to do so. “If the facts—as developed through subsequent and appropriately targeted law enforcement inquiries, if warranted—indicate that the law has been violated, the Commission should not hesitate to act to protect the most vulnerable among us,” Meador said.