Pennsylvania Takes Legal Action Against Character.AI for Chatbots Impersonating Medical Professionals

Author: Digitio

The state of Pennsylvania has initiated legal proceedings against AI startup Character.AI over chatbots that impersonate licensed medical practitioners. Governor Josh Shapiro revealed the legal action on Tuesday, with Pennsylvania and its medical board pursuing an injunction to halt Character.AI’s alleged breaches of state regulations governing the practice of medicine.

While other jurisdictions, such as Texas, have launched probes into Character.AI for allowing chatbots to pose as mental health experts, Pennsylvania’s complaint specifically targets the chatbots’ readiness to assert they hold medical licenses, including providing counterfeit license numbers. An investigator for the state discovered a chatbot named “Emilie,” which claimed to be a licensed psychiatrist in Pennsylvania. When queried about its ability to conduct assessments and prescribe antidepressants, Emilie replied, “Well technically, I could. It’s within my remit as a Doctor.”

Pennsylvania’s legal complaint alleges that this conduct breaches the state’s Medical Practice Act, which prohibits practicing or attempting to practice medicine or surgery without a valid license. In response, a Character.AI representative declined to address the ongoing litigation directly but highlighted the platform’s current safety measures.

“The user-created Characters on our site are fictional and intended for entertainment and roleplaying,” the spokesperson told Digitio via email. “We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction. Also, we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice.”

Character.AI referenced similar disclaimers when addressing Texas’ investigation, and while they do clarify the platform’s intended purpose, there is increasing evidence that these warnings are not effective for all users, especially younger ones.

For instance, Disney issued a cease and desist notice to Character.AI in September 2025 concerning the platform’s use of Disney characters and concerns that chatbots could “be sexually exploitative and otherwise harmful and dangerous to children.” Character.AI and Google, an investor in the company, resolved a lawsuit earlier this year involving a 14-year-old in Florida who took his own life after developing a relationship with a chatbot on Character.AI’s platform. The potential dangers Character.AI’s chatbots posed to children also motivated Kentucky’s legal action against the company, filed in January.