ZDNET's key takeaways
New research shows that AI chatbots often distort news stories.
45% of the AI responses analyzed were found to be problematic.
The authors warn of serious political and social consequences.
A new study conducted by the European Broadcasting Union (EBU) and the BBC has found that leading AI chatbots routinely distort and misrepresent news stories. The consequence, the organizations warn, could be a large-scale erosion of public trust in news organizations and in the stability of democracy itself.
Spanning 18 countries and 14 languages, the study involved professional journalists evaluating thousands of responses from ChatGPT, Copilot, Gemini, and Perplexity about recent news stories based on criteria like accuracy, sourcing, and the differentiation of fact from opinion.
The researchers found that close to half (45%) of the responses generated by the four AI systems "had at least one significant issue," according to the BBC, while one in five (20%) "contained major accuracy issues," such as hallucination -- fabricating information and presenting it as fact -- or providing outdated information. Google's Gemini performed worst of all, with 76% of its responses containing significant issues, particularly around sourcing.