A Guardian investigation published Friday found that Google's AI Overviews, the generative AI summaries that appear at the top of search results, are serving inaccurate health information that experts say puts people at risk of harm. The investigation, prompted by concerns raised by health groups, charities and professionals, uncovered several cases of misleading medical advice despite Google's claims that the feature is "helpful" and "reliable."
In one case described by experts as "really dangerous," Google advised people with pancreatic cancer to avoid high-fat foods, the exact opposite of what should be recommended, which could jeopardize a patient's chances of tolerating chemotherapy or surgery. A search for normal ranges of liver blood tests produced masses of numbers without accounting for a patient's nationality, sex, ethnicity or age, potentially leaving people with serious liver disease believing they are healthy. The company also incorrectly listed a Pap test as a test for vaginal cancer.
The Eve Appeal cancer charity noted that the AI summaries changed when running the exact same search, pulling from different sources each time. Mental health charity Mind said some summaries for conditions such as psychosis and eating disorders offered "very dangerous advice."
Google said the vast majority of its AI Overviews were factual and that many examples shared were "incomplete screenshots," adding that the accuracy rate was on par with featured snippets.