New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested.
The international study, unprecedented in scope and scale, was launched at the EBU News Assembly in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools.
Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.
Key findings:
45% of all AI answers had at least one significant issue.
31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
20% contained major accuracy issues, including hallucinated details and outdated information.
Gemini performed worst, with significant issues in 76% of responses – more than double the rate of the other assistants – largely due to its poor sourcing performance.
A comparison between the BBC's results from earlier this year and this study shows some improvement, but error levels remain high.
Why this distortion matters
AI assistants are already replacing search engines for many users. According to the Reuters Institute’s Digital News Report 2025, 7% of total online news consumers use AI assistants to get their news, rising to 15% of under-25s.
‘This research conclusively shows that these failings are not isolated incidents,’ says EBU Media Director and Deputy Director General Jean Philip De Tender. ‘They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.’
Peter Archer, BBC Programme Director, Generative AI, says: ‘We’re excited about AI and how it can help us bring even more value to audiences. But people must be able to trust what they read, watch and see. Despite some improvements, it’s clear that there are still significant issues with these assistants. We want these tools to succeed and are open to working with AI companies to deliver for audiences and wider society.’
Next steps
The research team has also released a News Integrity in AI Assistants Toolkit to help develop solutions to the issues uncovered in the report, covering both improving AI assistant responses and strengthening media literacy among users. Building on the extensive insights and examples identified in the current research, the Toolkit addresses two main questions: “What makes a good AI assistant response to a news question?” and “What are the problems that need to be fixed?”
In addition, the EBU and its Members are pressing EU and national regulators to enforce existing laws on information integrity, digital services, and media pluralism. And they stress that ongoing independent monitoring of AI assistants is essential, given the fast pace of AI development, and are seeking options for continuing the research on a rolling basis.
About the project
This study built on research by the BBC published in February 2025, which first highlighted AI’s problems in handling news. This second round expanded the scope internationally, confirming that the issue is systemic and is not tied to language, market or AI assistant.
Participating broadcasters:
Belgium (RTBF, VRT)
Canada (CBC/Radio-Canada)
Czechia (Czech Radio)
Finland (YLE)
France (Radio France)
Georgia (GPB)
Germany (ARD, ZDF, Deutsche Welle)
Italy (Rai)
Lithuania (LRT)
Netherlands (NOS/NPO)
Norway (NRK)
Portugal (RTP)
Spain (RTVE)
Sweden (SVT)
Switzerland (SRF)
Ukraine (Suspilne)
United Kingdom (BBC)
USA (NPR)
Separately, the BBC has today published research into audience use and perceptions of AI assistants for news. It shows that many people trust AI assistants to be accurate: just over a third of UK adults say they trust AI to produce accurate summaries, rising to almost half among people under 35.
The findings raise major concerns. Many people assume AI summaries of news content are accurate, when they are not; and when they see errors, they blame news providers as well as AI developers – even if those mistakes are a product of the AI assistant. Ultimately, these errors could negatively impact people’s trust in news and news brands.
The full findings can be found here: Research Findings: Audience Use and Perceptions of AI Assistants for News