
Responses to the AI grant flood must prioritize fairness as part of excellence

Why This Matters

The surge in AI-driven applications for research funding highlights the need for fairness and transparency in grant processes. As AI tools increasingly influence proposal development, funding agencies must adapt policies that balance innovation with equitable treatment of applicants. This shift is crucial for maintaining trust and encouraging bold, forward-thinking research in the tech industry and academia.


The agencies that disburse research funds must have clear rationales for rejecting grant proposals amid a surge in applications. Credit: Matt Cardy/Getty

Last month, the European Research Council (ERC) announced a policy change for some of its grants: it extended the period in which some unsuccessful applicants would not be able to reapply. The ERC, Europe’s premier research funder with more than €16 billion (US$19 billion) to disburse in 2021–27, was responding to a surge in applications that appears to be driven partly by the use of artificial-intelligence tools.


Last week, however, the funder adjusted that change following an outcry from researchers. Many said that the policy was unfair, too sudden and too blunt, and that it would discourage bold proposals and make researchers less able to respond to new advances. The council was right to rethink — and in the process it showed others how to listen to the concerns of the community. But the problem of how to handle AI in grant funding remains. Solutions must have fairness at their core.

As neuroscientist Geraint Rees and social scientist James Wilsdon wrote in Nature last week, funding bodies from Australia to the United Kingdom have seen a sharp rise in applications since 2022 (G. Rees and J. Wilsdon Nature 652, 1119–1121; 2026). This coincides with the advent of OpenAI’s ChatGPT, the first AI chatbot to be publicly available worldwide. And there is good evidence to suggest that many of these increases are AI-driven. Researchers are using AI tools not just to scan the literature or summarize studies, but also to propose project ideas, draft the text of grant applications and refine applications on the basis of predictions of how grant-review panels might react.

Current guidelines from some of the world’s key research funders allow limited use of generative AI in grant applications. In such cases, the guidelines state that this use must be acknowledged and declared, and must be done responsibly, in line with ethical and legal requirements. By contrast, those who peer review grant proposals for funders are prohibited from uploading them to generative AI tools for the purpose of producing reviews. This is partly to maintain confidentiality, and partly because funders want peer reviewers to exercise their own judgement and not rely on a machine.


In practice, these policies are not always followed. If anything, the research world has ended up with a situation in which the increased ease of writing and reviewing grant applications has not been matched by improvements in ways to verify the degree of AI use.

Researchers are starting to show how such verification could happen. Pangram Labs, a firm in New York City, has developed tools to detect AI-generated text, which are being tested. Separately, researchers at Northwestern University in Evanston, Illinois, used a different method to compare evidence of AI use in grant applications to US federal agencies from two universities. A team led by computational social scientists Dashun Wang and Yifan Qian accessed publicly available grant abstracts from a database of US federally funded grants spanning 2021–25 (Y. Qian et al. Preprint at arXiv https://doi.org/q435; 2026). To spot AI-tool use, they got an AI model to rewrite the human-written abstracts from 2021 (before ChatGPT’s release), and then compared the human and AI versions of the same text. This enabled them to learn the telltale signs that distinguish the two types.
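The paired-comparison idea described above can be sketched in a few lines of code. Everything below is illustrative and hypothetical — the texts, features and threshold rule are toy stand-ins, not the method or data used by the Northwestern team: take known-human texts, pair each with an AI rewrite, learn which side of a per-feature threshold the AI versions fall on, and then use those thresholds to score new text.

```python
# Minimal sketch (hypothetical data and features) of detecting AI-style
# text by comparing human-written passages with AI rewrites of the same
# passages, as in the paired-comparison approach described above.

def _words(text):
    return [w.strip(".,;:") for w in text.lower().split()]

def features(text):
    """Two simple stylometric features; real systems use many more."""
    words = _words(text)
    sentences = [s for s in text.split(".") if s.strip()]
    return {
        "type_token_ratio": len(set(words)) / len(words),
        "avg_sentence_len": len(words) / len(sentences),
    }

# Toy paired corpus: (human-written, AI rewrite). In the study the
# rewrites came from an AI model; here they are hard-coded stand-ins.
PAIRS = [
    ("We probe cortical dynamics using novel optical sensors across "
     "diverse behavioural states.",
     "This study studies brain activity. This study uses sensors. "
     "This study examines states."),
    ("Coral reefs shelter countless marine species, yet warming oceans "
     "threaten their fragile symbiosis.",
     "This work looks at coral reefs. This work looks at warming. "
     "This work looks at species."),
]

def learn_signature(pairs):
    """For each feature, record the midpoint between the human and AI
    values (averaged over pairs) and which side the AI rewrites fall on."""
    signature = {}
    for name in features(pairs[0][0]):
        midpoints, ai_lower = [], 0
        for human, ai in pairs:
            h, a = features(human)[name], features(ai)[name]
            midpoints.append((h + a) / 2)
            ai_lower += a < h
        signature[name] = (sum(midpoints) / len(midpoints),
                           ai_lower > len(pairs) / 2)
    return signature

def classify(text, signature):
    """Label text 'ai' when a majority of features fall on the AI side."""
    votes = 0
    for name, (threshold, ai_is_lower) in signature.items():
        votes += (features(text)[name] < threshold) == ai_is_lower
    return "ai" if votes > len(signature) / 2 else "human"
```

On this toy data, `classify(PAIRS[0][1], learn_signature(PAIRS))` flags the repetitive AI rewrite, while the human originals score as human. A production detector would use far richer features and a trained model, but the core logic — learn telltale differences from matched human/AI pairs, then apply them to unseen text — is the same.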

Radical rethinking
