But so far this largely uncontrolled experiment has produced mixed results. Many people have found solace in chatbots based on large language models (LLMs), and some experts see promise in them as therapists, but other users have been sent into delusional spirals by AI’s hallucinatory whims and breathless sycophancy. Most tragically, multiple families have alleged that chatbots contributed to the suicides of their loved ones, sparking lawsuits against companies responsible for these tools. In October, OpenAI CEO Sam Altman revealed in a blog post that 0.15% of ChatGPT users “have conversations that include explicit indicators of potential suicidal planning or intent.” That’s roughly a million people sharing suicidal ideations with just one of these software systems every week.
The real-world consequences of AI therapy came to a head in unexpected ways in 2025 as we waded through a critical mass of stories about human-chatbot relationships, the flimsiness of guardrails on many LLMs, and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data.
Several authors anticipated this inflection point. Their timely books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust.
LLMs have often been described as “black boxes” because nobody knows exactly how they produce their results. The inner workings that guide their outputs are opaque because their algorithms are so complex and their training data is so vast. In mental-health circles, people often describe the human brain as a “black box,” for analogous reasons. Psychology, psychiatry, and related fields must grapple with the impossibility of seeing clearly inside someone else’s head, let alone pinpointing the exact causes of their distress.
These two types of black boxes are now interacting with each other, creating unpredictable feedback loops that may further impede clarity about the origins of people’s mental-health struggles and the solutions that may be possible. Anxiety about these developments has much to do with the explosive recent advances in AI, but it also revives decades-old warnings from pioneers such as the MIT computer scientist Joseph Weizenbaum, who argued against computerized therapy as early as the 1960s.
Dr. Bot: Why Doctors Can Fail Us—and How AI Could Save Lives
Charlotte Blease
Yale University Press, 2025
Charlotte Blease, a philosopher of medicine, makes the optimist’s case in Dr. Bot: Why Doctors Can Fail Us—and How AI Could Save Lives. Her book broadly explores the possible positive impacts of AI in a range of medical fields. While she remains clear-eyed about the risks, warning that readers who are expecting “a gushing love letter to technology” will be disappointed, she suggests that these models can help relieve patient suffering and medical burnout alike.
“Health systems are crumbling under patient pressure,” Blease writes. “Greater burdens on fewer doctors create the perfect petri dish for errors,” and “with palpable shortages of doctors and increasing waiting times for patients, many of us are profoundly frustrated.”