
ArXiv will ban researchers who upload papers full of AI slop

Why This Matters

arXiv is implementing strict policies to ensure the integrity of scientific submissions by holding authors accountable for AI-generated content. This move underscores the importance of responsible AI use in research and aims to maintain trust in scientific publishing. It signals a broader industry push for transparency and accuracy in AI-assisted research outputs.

Key Takeaways

Attention @arxiv authors: Our Code of Conduct states that by signing your name as an author of a paper, each author takes full responsibility for all its contents, irrespective of how the contents were generated.

If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s).

We have recently clarified our penalties for this. If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, then we cannot trust anything in the paper.

The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue.

Examples of incontrovertible evidence: hallucinated references, or meta-comments from the LLM left in the text (e.g., "here is a 200 word summary; would you like me to make any changes?" or "the data in this table is illustrative; fill it in with the real numbers from your experiments").
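The leftover meta-comments described above are simple enough to catch mechanically. As a minimal sketch (the phrase list and function name are illustrative assumptions, not an official arXiv check), a script could scan a manuscript for common LLM boilerplate before submission:

```python
import re

# Hypothetical patterns: LLM meta-comments that sometimes survive
# into submitted manuscripts. This list is an assumption for
# illustration, not an official arXiv checklist.
META_COMMENT_PATTERNS = [
    r"here is a \d+[- ]word summary",
    r"would you like me to make any changes",
    r"as an ai language model",
    r"the data in this table is illustrative",
]

def find_llm_meta_comments(text: str) -> list[str]:
    """Return every pattern that matches somewhere in the manuscript text."""
    return [p for p in META_COMMENT_PATTERNS
            if re.search(p, text, flags=re.IGNORECASE)]

draft = "Here is a 200 word summary; would you like me to make any changes?"
hits = find_llm_meta_comments(draft)
print(f"{len(hits)} suspicious pattern(s) found")
```

A pre-submission hook like this only flags the most blatant residue; it does nothing for hallucinated references or subtle factual errors, which still require the author to actually read what the model produced.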