Health practitioners are becoming increasingly uneasy about the medical community making widespread use of error-prone generative AI tools.
The proliferation of the tech has repeatedly been hampered by rampant "hallucinations," a euphemistic term for the bots' made-up facts and convincingly told lies.
One glaring error proved so convincing that it went unnoticed for over a year. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from a radiology lab to identify various conditions.
It identified an "old left basilar ganglia infarct," referring to a purported part of the brain — "basilar ganglia" — that simply doesn't exist in the human body. Board-certified neurologist Bryan Moore flagged the issue to The Verge, highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself.
The AI likely conflated the basal ganglia, an area of the brain associated with motor movements and habit formation, with the basilar artery, a major blood vessel at the base of the brainstem. Google blamed the incident on a simple misspelling of "basal ganglia."
It's an embarrassing reveal that underscores the tech's persistent and consequential shortcomings. Even the latest "reasoning" AIs from the likes of Google and OpenAI continue to spread falsehoods dreamed up by large language models trained on vast swathes of the internet.
In Google's search results, those errors mostly mean headaches for users trying to research or fact-check a topic.
But in a hospital setting, those kinds of slip-ups could have devastating consequences. While Google's faux pas more than likely didn't result in any danger to human patients, it sets a worrying precedent, experts argue.
"What you’re talking about is super dangerous," healthcare system Providence's chief medical information officer Maulin Shah told The Verge. "Two letters, but it’s a big deal."