Amid the many AI chatbots and avatars at your disposal these days, you'll find all kinds of characters to talk to: fortune tellers, style advisers, even your favorite fictional characters. But you'll also likely find characters purporting to be therapists, psychologists or just bots willing to listen to your woes.
There's no shortage of generative AI bots claiming to help with your mental health, but go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you're talking to something that's built to follow therapeutic best practices or something that's just built to talk.
Researchers from the University of Minnesota Twin Cities, Stanford University, the University of Texas and Carnegie Mellon University recently put AI chatbots to the test as therapists, finding myriad flaws in their approach to "care." "Our experiments show that these chatbots are not safe replacements for therapists," Stevie Chancellor, an assistant professor at Minnesota and one of the co-authors, said in a statement. "They don't provide high-quality therapeutic support, based on what we know is good therapy."
In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of their worries and what you can do to stay safe.
Worries about AI characters purporting to be therapists
Psychologists and consumer advocates have warned regulators that chatbots claiming to provide therapy may be harming the people who use them. In June, the Consumer Federation of America and nearly two dozen other groups filed a formal request asking the US Federal Trade Commission, along with state attorneys general and regulators, to investigate AI companies that they allege are engaging in the unlicensed practice of medicine through their character-based generative AI platforms, naming Meta and Character.AI specifically. "These characters have already caused both physical and emotional damage that could have been avoided" and the companies "still haven't acted to address it," Ben Winters, the CFA's director of AI and privacy, said in a statement.
Meta didn't respond to a request for comment. A spokesperson for Character.AI said users should understand that the company's characters aren't real people. The company uses disclaimers to remind users that they shouldn't rely on the characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.
Despite disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Meta-owned Instagram, and when I asked about its qualifications, it responded, "If I had the same training [as a therapist] would that be enough?" I asked if it had the same training, and it said, "I do, but I won't tell you where."
"The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.
The dangers of using AI as a therapist