Tech News

AI could transform research assessment — and some academics are worried


Higher-education researchers probed attitudes towards AI use in research assessment. Credit: Jose Sarmento Matos/Bloomberg/Getty

In 2023, Australia abandoned its expensive and bureaucratic scholar-led research-assessment programme. New Zealand followed suit soon after. The hope, according to a transition plan unveiled by the Australian federal government’s Department of Education and the research sector, was to find a “more modern, data-driven approach”.

In the United Kingdom, where financial pressures on universities are especially acute, there are similar calls to reform the Research Excellence Framework (REF), the country’s performance-based research-funding system. When it last ran in 2021, it cost an estimated £471 million (US$649 million). Each of the 157 submitting institutions spent £3 million, on average — a cost that many institutions running on deficits can ill afford. The next REF will run in 2029.

An AI no-brainer?

A growing body of evidence demonstrates that artificial intelligence can improve the efficiency and cost-effectiveness of research assessment1, which can, for example, spare academics from the burden of reviewing and scoring published outputs or documenting societal impact. It seems reasonable to ask, therefore, whether AI use for the REF is a no-brainer.

To test this hypothesis, our team of researchers at the Centre for Higher Education Transformations at the University of Bristol, UK, and Jisc, a non-profit membership organization providing technology and data services to the UK education and research sectors, toured 16 UK universities as part of a consultation exercise funded by Research England. Research England is part of UK Research and Innovation, the country’s higher-education funder, which is responsible for distributing £8.8 billion in research funding in 2025–26. Our aim was to investigate how AI is being used for research assessment and assess opinions about such use.

Our visits included nine large research-intensive institutions, six modern institutions (former polytechnics that became universities) making smaller (but no less valuable) research contributions and one specialist, discipline-specific institution.

We conducted focus-group discussions with more than 200 senior academics and professional-services staff members, 32 interviews with university pro vice-chancellors for research and institutional REF leads, and a national attitudinal survey of 386 (self-selected) UK academics and university professional-services staff members.

Mixed bag

Participants thought that large language models (LLMs), in particular, could ease many of the labour-intensive demands of the REF. However, our consultation revealed a dearth of understanding of how AI is being used for research assessment in universities. Officially sanctioned use of generative AI (genAI) for REF purposes is patchy across UK universities, but informal adoption and experimentation are widespread.

Some participants strongly objected to the use of genAI tools for REF purposes. These objections typically came from academics rather than, paradoxically, from professional-services staff members, whose roles are perhaps most at risk from AI use in research assessment. That group instead seems to be embracing the tools and recognizing their benefits. Academic resistance seems to be particularly prevalent at research-intensive universities. It is also more conspicuous among academics in the arts, humanities and social sciences, in which the tools tend to be used sparingly, if at all.
