Most leading chatbots routinely exaggerate science findings
Published on: 2025-06-21 03:54:53
It seems so convenient: when you are short of time, asking ChatGPT or another chatbot to summarise a scientific paper and quickly get the gist of it. But in up to 73 per cent of cases, these large language models (LLMs) produce inaccurate conclusions, finds a new study by Uwe Peters (Utrecht University) and Benjamin Chin-Yee (Western University and University of Cambridge).
Almost 5,000 LLM-generated summaries analysed
The researchers tested ten of the most prominent LLMs, including ChatGPT, DeepSeek, Claude, and LLaMA. “We entered abstracts and articles from top science journals, such as Nature, Science, and The Lancet,” says Peters, “and asked the models to summarise them. Our key question: how accurate are the summaries that the models generate?”