The ability to automate the discovery process in some areas of scientific inquiry raises unanswered questions about how research should be conducted.
Artificial-intelligence systems might be used to perform large parts of the research process with minimal human oversight.
This week, Nature is publishing a paper1 with, perhaps, an unusually underwhelming finding at its heart: that a particular technique failed to improve how artificial neural networks learn. It was not this finding that our editors found noteworthy. Rather, it was how the work was done, which is in fact the main focus of the paper.
The result was created using The AI Scientist engineered by researchers at Tokyo-based company Sakana AI, details of which were first made available2 as a preprint in 2024. This artificial-intelligence system represents an effort to automate the scientific process entirely, from performing literature review and conceiving an idea to executing experiments and writing them up. As detailed in the Nature paper, The AI Scientist was able to follow this process and generate a research paper about its (negative) result; the work passed the first round of peer review for submissions to a workshop at a major machine-learning conference.
AI research assistants have since proliferated, with technology firms Google, OpenAI and Anthropic all trialling ways to automate research. Although their outputs have been limited and rarely innovative so far, the effects of being able to generate research papers quickly and cheaply are rippling through the scientific ecosystem. Universities, funders, publishers and researchers must plan how they will adapt.
Many researchers hope that generative large language models (LLMs) will accelerate discovery by automating repetitive or challenging parts of the research process, such as coding, data analysis and literature review. The AI Scientist goes further. It aims to use AI systems’ speed, pattern-recognition skills and ability to access vast amounts of interdisciplinary knowledge to automate even processes such as generating hypotheses and interpreting results.
Nature has published details of The AI Scientist because it is essential to know how AI research assistants work, and their limitations, to assess their likely impact on science. As a result of peer review, the Nature paper expands on the preprint’s description of the system’s weaknesses, includes more ethical considerations, and tones down the authors’ original statements about automating the entire research process (humans helped to filter the most promising outputs). The AI Scientist produced three papers, one of which, following peer review, reached the bar for acceptance at a workshop of the prestigious International Conference on Learning Representations. It did not meet the bar for the main conference track.
It comes after researchers last month released a theoretical-physics preprint3 in which the state-of-the-art generative AI model GPT-5, from OpenAI in San Francisco, California, played an essential part. Nathaniel Craig, a physicist at the University of California, Santa Barbara, who was not involved in the work, described the paper as ‘journal-level research’.