Leona Diala uses artificial-intelligence tools for many aspects of her PhD: to search the literature, design presentations, generate code and summarize papers. The tools save her time and give her ideas, she says. “I would say AI is a blessing for researchers today.”
At the same time, Diala, who is studying infectious-disease modelling at the University of Abuja, is concerned that AI overuse is undermining essential academic skills. She checks every reference and fact that AI tools give her and rewrites their text in her own words. But she worries about the next generation of researchers. When she was an undergraduate, she says, “there was no AI like this. We sat for hours and read, practised, tried and retried until we got it. Now people want AI to write everything.” She adds: “AI is a blessing, but it has made students lazy. People don’t go the extra mile to build the skill.”
Leona Diala checks every reference and fact that AI tools give her. Credit: Courtesy of Leona Diala
Diala’s ambivalence is typical of PhD students. When Nature surveyed almost 3,800 PhD students last year, three-quarters thought AI tools could help students to work more efficiently, and 71% felt it was acceptable to use them to support their studies — yet the majority also voiced strong concerns. Some 81% said they don’t fully trust AI tools and 65% worried that AI weakens thinking, research and writing skills.
Since ChatGPT launched in November 2022, AI use has exploded across higher education. In a survey of 1,041 UK undergraduates published in February 2025, 88% admitted to using AI for assessments, up from 53% the year before (see go.nature.com/4d37rcc). The proportion of respondents who had used any AI tool also jumped, from 66% in 2024 to 92% in 2025. Such a rapid change in behaviour is “almost unheard of”, wrote study author Josh Freeman, policy manager at the Higher Education Policy Institute in Oxford, UK, in a statement accompanying the results.
Doctoral students are now charting paths through territory their supervisors never had to navigate. Some use AI daily and swear by it; others refuse to touch it, worried about the cost to their development as researchers. Most fall in between, working out their own rules for when AI helps and when it hinders.
Yinghui He uses a mix of AI tools for tasks such as checking grammar and generating code. Credit: Courtesy of Yinghui He
“AI can be your greatest ally or your worst enemy. It all depends on how you use it,” says Yinghui He, a PhD student at Tsinghua University in Beijing who uses computational methods to study microorganisms in the deep ocean. She turns to AI tools such as ChatGPT and Gemini every day, as well as Chinese AI tools, for tasks including checking the grammar of her written English and generating code. “The AI writes code faster than me, and saves me a lot of time,” she says. “But once it’s generated the code, I need to check if it’s right. That is very important.”
That’s a lesson that Richard Ang, a PhD student in soil microbiology at the University of Western Australia in Perth, learnt the hard way. He once asked ChatGPT to calculate fertilizer doses for an experiment. When the experiment failed, he asked the tool to show its thinking. “It totally misunderstood my question,” he says. “AI will never tell us that our design is uncommon, or wrong,” he adds. “If we ask AI to carry out a ridiculous or impossible task, it will do it.” Now, he always asks the tool to explain its reasoning step by step, and he cross-checks answers using multiple tools.
Diala learnt the same lesson when she plotted a graph and asked AI to explain it. “It said a value increased when the graph showed a decrease,” she says. “Before we use AI, we should have some knowledge of what we’re asking, so we can catch mistakes.”