In May 2024, Google threw caution to the wind by rolling out its controversial AI Overviews feature in a purported effort to make information easier to find.
But the AI hallucinations that followed — like telling users to eat rocks and put glue on their pizzas — ended up perfectly illustrating the persistent issues that plague large language model-based tools to this day.
And while failing to reliably tell what year it is or making up explanations for nonexistent idioms might sound like innocent gaffes that at most lead to user frustration, some of the advice Google's AI Overviews feature is offering up could have far more serious consequences.
In a new investigation, The Guardian found that the tool’s AI-powered summaries are loaded with inaccurate health information that could put people at risk. Experts warn that it’s only a matter of time until the bad advice endangers users — or, in a worst-case scenario, results in someone’s death.
The problems are serious. For instance, The Guardian found that the feature advised people with pancreatic cancer to avoid high-fat foods, despite doctors recommending the exact opposite. It also completely bungled information about women's cancer tests, which could lead people to ignore real symptoms of the disease.
It's a precarious situation, as vulnerable people who are suffering often turn to the internet to self-diagnose.
“People turn to the internet in moments of worry and crisis,” Stephanie Parker, director of digital at end-of-life charity Marie Curie, told The Guardian. “If the information they receive is inaccurate or out of context, it can seriously harm their health.”
Others were alarmed by the feature turning up completely different responses to the same prompts, a well-documented shortcoming of large language model-based tools that can lead to confusion.
Mental health charity Mind’s head of information, Stephen Buckle, told the newspaper that AI Overviews offered “very dangerous advice” about eating disorders and psychosis, summaries that were “incorrect, harmful or could lead people to avoid seeking help.”
A Google spokesperson told The Guardian in a statement that the tech giant invests “significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.”