Tech News

How much of the scientific literature is generated by AI?

Why This Matters

The increasing use of AI to generate scientific literature raises concerns about the quality and authenticity of research, potentially overwhelming current review systems and introducing fake or low-quality papers. This evolving landscape highlights the urgent need for improved detection tools and regulatory measures to maintain scientific integrity. The situation underscores a broader challenge for the tech industry and academia to balance AI's benefits with ethical considerations and quality control.

Key Takeaways

Research papers are increasingly being written by artificial intelligence. Credit: Yagi Studio/Getty

How much of the scientific literature is generated by AI? The first studies of the size of the AI footprint in scientific journals, preprint repositories and peer-review reports give a spread of answers — and indicate a rapidly evolving situation that is difficult to get a handle on.

The fear of many in the research community is that poor-quality or entirely fabricated research produced by large language models (LLMs) could overwhelm the ability of current quality-control systems to detect it, thereby polluting the scientific canon.

“The ground is shifting underneath us in ways that we are totally unprepared for,” says Maria Antoniak, a computer scientist at the University of Colorado Boulder.

“We live in an escalating arms race” between people using AI unscrupulously and those who are trying to constrain or detect it, says Richard She, a stem-cell biologist at Nanyang Technological University in Singapore.


AI detectors

Concerns about the extent of AI-generated content in the scientific literature mirror broader online trends. At the end of March, AI-generated articles were estimated to outnumber those written by humans, according to an analysis of 55,000 newly published webpages shared with Nature by the private firm Graphite in San Francisco, California.


AI might have legitimate uses in the production of scientific literature, and can accelerate research progress. But AI-generated content is also potentially problematic because it can be used to create fake or low-quality papers.
