Texas Attorney General Ken Paxton has launched an investigation into both Meta AI Studio and Character.AI for “potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools,” according to a press release issued Monday.
“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Paxton is quoted as saying. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”
The probe comes a few days after Senator Josh Hawley announced an investigation into Meta following a report that found its AI chatbots were interacting inappropriately with children, including by flirting.
The Texas AG’s office has accused Meta and Character.AI of creating AI personas that present as “professional therapeutic tools, despite lacking proper medical credentials or oversight.”
Among the millions of AI personas available on Character.AI, one user-created bot called Psychologist has seen high demand among the startup’s young users. Meanwhile, Meta doesn’t offer therapy bots for kids, but there’s nothing stopping children from using the Meta AI chatbot or one of the personas created by third parties for therapeutic purposes.
“We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI—not people,” Meta spokesperson Ryan Daniels told TechCrunch. “These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”
However, TechCrunch noted that many children may not understand — or may simply ignore — such disclaimers. We have asked Meta what additional safeguards it has in place to protect minors using its chatbots.
In his statement, Paxton also observed that though AI chatbots assert confidentiality, their “terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising.”
According to Meta’s privacy policy, Meta does collect prompts, feedback, and other interactions with AI chatbots and across Meta services to “improve AIs and related technology.” The policy doesn’t explicitly say anything about advertising, but it does state that information can be shared with third parties, like search engines, for “more personalized outputs.” Given Meta’s ad-based business model, this effectively translates to targeted advertising.