You have documented how large language models (LLMs) risk hardwiring scientific inequalities into the texts that they help to produce (see, for example, Nature 645, 285; 2025). As the use of LLMs spreads, it becomes more likely that some voices will be treated as central and others as peripheral. But the risks go further — these biases might affect the way that scholars from low-income countries formulate and frame their own science.
Nature 649, 555 (2026)
doi: https://doi.org/10.1038/d41586-026-00121-x
Competing Interests: The authors declare no competing interests.