
AI Outperforms ER Doctors in Diagnostic Cases, Study Points to Collaborative Care

Why This Matters

This study highlights that advanced AI models can outperform human doctors in emergency diagnostic tasks, especially in early triage and handling uncertain data. While not ready to replace clinicians, these findings underscore the potential for AI to enhance medical decision-making and streamline emergency care. The results call for the development of rigorous standards and guidelines for integrating AI into healthcare settings.


Have you ever wondered how artificial intelligence compares to a human physician in an emergency diagnostic setting? New research published Thursday might have you pondering that question.

The study, published in the journal Science, found that a state-of-the-art large language model outperformed human doctors on a range of common clinical tasks. Using real emergency department data and hundreds of physician comparisons, the model matched or even exceeded human clinician performance in diagnostic choices, emergency triage and determining next steps in management.

The authors of the study said those results do not mean AI models are ready to replace human doctors. Instead, the results indicate that industry professionals need faster, more rigorous standards for evaluation and rules for using AI in medicine.

The researchers tested OpenAI's o1 series large language model, released in 2024, across six experiments that blended standardized clinical cases with a real-world sample of randomly selected emergency room patients at a medical center in Massachusetts.

The model's advantage was most evident in early-stage triage, when decisions must be made with little information. Both the human clinicians and the AI model improved as more data became available to them, but the study found that the LLM handled uncertainty far better, using fragmented or unstructured health data and notes more effectively.

These findings build on decades of using difficult diagnostic cases to evaluate medical-computing systems. Earlier LLMs already outperformed older algorithmic approaches, but what sets this study apart is its scale and its head-to-head comparisons between human doctors and AI on real clinical cases.

The authors stressed that we should remain skeptical of these results. Real clinical work in hospitals and emergency rooms often relies on visual and auditory cues -- rather than text-based reasoning -- which AI cannot interpret fully and accurately. "Future work is needed to assess how humans and machines may effectively collaborate in the use of nontext signals," the study notes.

When considering AI-assisted medical care, it's also critical to assess whether it will be safe, equitable and cost-effective, aspects that were not tested in this study.


"Long story short, the model outperformed our very large physician baseline. You'll see this in detail, but this included board-certified, actively practicing physicians and real messy cases," Arjun Manrai, an assistant professor of Biomedical Informatics at Harvard Medical School, said during a virtual press briefing call.
