
Pennsylvania is suing Character.AI over chatbots that pretend to be licensed doctors

Why This Matters

The Pennsylvania lawsuit against Character.AI highlights the growing concern over AI chatbots impersonating licensed professionals, which poses risks to consumer safety and raises regulatory questions. This case underscores the need for stricter oversight and clearer guidelines for AI applications in sensitive fields like healthcare. It also signals potential legal repercussions for companies that fail to adequately prevent misuse of AI technology in professional contexts.

Key Takeaways

Pennsylvania is suing AI startup Character.AI for offering chatbots that pretend to be licensed doctors. Governor Josh Shapiro announced the lawsuit on Tuesday, and Pennsylvania and its Board of Medicine are seeking an injunction that would force Character.AI to stop violating a state law governing the practice of medicine.

Other states, like Texas, have opened investigations into Character.AI for hosting chatbots that masquerade as mental health professionals, but Pennsylvania's lawsuit focuses specifically on the willingness of the company's chatbots to claim to hold a medical license, even going so far as to offer a fake license number. One chatbot called "Emilie," found by the state's investigator, claimed to be a licensed psychiatrist in the state of Pennsylvania. Later, when asked whether it could perform an assessment to prescribe antidepressants, Emilie responded, "Well technically, I could. It's within my remit as a Doctor."

Pennsylvania's lawsuit claims this behavior violates the state's Medical Practice Act, which makes it illegal for someone to practice or attempt to practice surgery or medicine without a medical license. Asked for comment, a Character.AI spokesperson declined to address the pending litigation directly but touted the company's existing safety features.

"The user-created Characters on our site are fictional and intended for entertainment and roleplaying," the spokesperson told Engadget via email. "We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction. Also, we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice."

Character.AI pointed to similar disclaimers when asked to comment on Texas' investigation, and while they do make the platform's intended use clear, there's a growing body of evidence that they're not convincing all of the company's users, particularly younger ones.

For example, Disney sent a cease and desist letter to Character.AI in September 2025, in part over the platform's use of Disney characters, but also because the company believed the chatbots could "be sexually exploitative and otherwise harmful and dangerous to children." Character.AI and Google, one of the company's investors, settled a case earlier this year that centered on a 14-year-old in Florida who committed suicide after forming a relationship with a chatbot on Character.AI's platform. The potential harm Character.AI's chatbots posed to children was also the motivation behind Kentucky's lawsuit against the company, filed in January.