
Startup Approved to Let AI System Prescribe Psychiatric Medication

Why This Matters

The approval of Legion Health's AI system to prescribe psychiatric medications marks a significant step toward integrating automation into mental health care, potentially increasing access but raising concerns about safety and over-treatment. While promising for streamlining certain processes, experts warn that human oversight remains crucial to ensure accurate diagnosis and appropriate treatment. This development highlights the ongoing balance between technological innovation and patient safety in the healthcare industry.


You’ve probably heard of AI psychosis. Well, now get ready for AI psychiatrists — with prescription pads.

A San Francisco startup called Legion Health has been approved to let its AI app prescribe psychiatric medications to patients in Utah.

As The Verge reports, there are efforts to keep the idea from becoming the disaster that it sounds like. The chatbot can only renew prescriptions for a specific set of medications, including fluoxetine (Prozac), sertraline (Zoloft), and other substances used to treat anxiety and depression. It can only prescribe drugs that were previously prescribed by a human psychiatrist, and patients will also need to be stable and not have been hospitalized for a psychiatric condition in the last year.

Despite those considerable carve-outs, experts are warning the system may do little to improve access to those who need care the most — while cracking the door to an ominous era for medicine.

University of Utah School of Medicine psychiatrist Brent Kious told The Verge that automating the process could contribute to an “epidemic of over-treatment” in psychiatry. The medications “require more active management, changes, and careful consideration,” added John Torous, director of digital psychiatry at Harvard Medical School.

The experts also cautioned that the chatbots may gloss over important details or not realize that a patient was answering questions inaccurately on purpose to speed up care. Human clinicians still have the advantage of being able to read between the lines and realize when patients are being misleading or intentionally obtuse.

“It would be better if there were greater transparency, more science, and more rigorous testing before people are asked to use this,” Kious told The Verge.

The rollout of Legion Health’s tool is Utah’s second foray into automating healthcare using AI chatbots. An initial pilot of a model in December, dubbed Doctronic, turned out to be a major point of contention, with cybersecurity researchers finding that it could be easily coaxed into spreading conspiracy theories about vaccines, recommending meth as a treatment for social withdrawal, and tripling a patient’s suggested dosage of Oxycontin.

Meanwhile, Legion Health claims it’s playing it safe with its latest AI chatbot, agreeing to file monthly reports to Utah regulators and physicians for review. The company also says it will closely involve pharmacists in the renewal of prescriptions.
