The Federal Trade Commission is making a formal inquiry into companies that provide AI chatbots that can act as companions. The investigation isn't tied to any regulatory action yet, but it aims to reveal how companies "measure, test, and monitor potentially negative impacts of this technology on children and teens." Seven companies are being asked to participate in the FTC's inquiry: Google's parent company Alphabet, Character Technologies (the creator of Character.AI), Meta, its subsidiary Instagram, OpenAI, Snap and X.AI.

The FTC is asking the companies to provide a range of information, including how they develop and approve AI characters and "monetize user engagement." Data practices and how companies protect underage users are also areas the FTC hopes to learn more about, in part to see if chatbot makers "comply with the Children's Online Privacy Protection Act Rule."

The FTC doesn't spell out the motivation for its inquiry, but in a separate statement, FTC Commissioner Mark Meador suggests the Commission is responding to recent reports from The New York Times and Wall Street Journal of "chatbots amplifying suicidal ideation" and engaging in "sexually-themed discussions with underage users."

"If the facts — as developed through subsequent and appropriately targeted law enforcement inquiries, if warranted — indicate that the law has been violated, the Commission should not hesitate to act to protect the most vulnerable among us," Meador writes.

As the long-term productivity benefits of AI grow less certain, its more immediate negative impacts on privacy and health have become red meat for regulators. Texas' Attorney General has already launched a separate investigation into Character.AI and Meta AI Studio over similar concerns about data privacy and chatbots claiming to be mental health professionals.