Since the release of the chatbot ChatGPT in late 2022, there has been frantic debate at universities about artificial intelligence (AI). These conversations have centred on undergraduate teaching — how to prevent cheating and how to use AI to improve learning. But a quieter, deeper disruption is unfolding in research, the other core activity of universities.
A doctoral education has long been seen as the pinnacle of academic training, an apprenticeship in original thinking, critical analysis and independent enquiry. However, that model is now under pressure. AI is not just another research tool; it is redefining what research is, how it is done and what counts as an original contribution.
Universities are mostly unprepared for the scale of disruption, with few having comprehensive governance strategies. Many academics remain focused on the failings of early generative AI tools, such as hallucinations (confidently stated but false information), inconsistencies and superficial responses. But AI models that were clumsy in 2023 are becoming increasingly fluent and accurate.
AI tools can already draft literature reviews, write sophisticated code with human guidance and even generate hypotheses when provided with data sets. ‘Agentic’ AI systems that can set their own sub-goals, coordinate tasks and learn from feedback represent another leap forwards. If the current trajectory continues, we’re fast approaching a moment when much of the conventional PhD workflow can be completed, or at least heavily supported, by machines.
Unanswered questions
This shift poses challenges for educators. What constitutes an original contribution becomes unclear when AI tools produce literature reviews, acquire and analyse data, and draft thesis chapters. Students might need to pivot from executing research tasks to framing questions and interrogating AI outputs.
To explore what the near future of research training might look like, I conducted a role play simulating a PhD student working with a hypothetical AI assistant. I used Claude, a leading AI system built by the firm Anthropic in San Francisco, California.
I fed the chatbot a detailed prompt (see Supplementary Information) describing a fictional AI research assistant called HALe — inspired by the AI character HAL 9000 from the science-fiction film 2001: A Space Odyssey. I gave HALe capabilities that are already under development and are likely to improve in coming years. These include accessing external databases, integrating environmental and biological data, and performing advanced analyses autonomously. I then played the part of the student, asking questions and responding to the chatbot’s replies. The dialogue was generated in a single, unedited session — offering a fictional, yet plausible, glimpse of how future doctoral research could unfold.