Some researchers who refuse to use AI have been accused of being anti-progress — similar to the nineteenth-century Luddites who resisted the new machinery they feared would replace their jobs — but they say their views are more nuanced than that. Credit: Chronicle/Alamy
Danielle Crowley is getting tired of people telling her to use generative artificial intelligence (genAI). As a marine zoologist at Bangor University, UK, she says that she is pretty much the only PhD student in her cohort who does not use it. She has seen colleagues use genAI tools for coding and for getting the tone of e-mails right. On one occasion, she was even encouraged by a lecturer to use it to generate a conference poster.
She says her colleagues are often surprised to hear she hasn’t tried it and have suggested she use it for applications such as coding. “I’ve had a lot of people go like ‘oh but you have to use it’,” she recalls. But Crowley has her reasons. She has concerns about the ethics of copyright, what she calls a lack of transparency from companies about how they use data, the environmental effects of AI tools and the accuracy of what genAI models spit out.
She also thinks that using the tools would be counterproductive to her studies. “Coding is a skill I want to learn and develop, because it’s not the thing I’m the most confident in,” she says. She would rather try and do it herself, learning from her mistakes.
Marine zoologist Danielle Crowley has concerns about the ethics and environmental impacts of generative AI tools. Credit: Laura Oatley
GenAI has become a hot topic over the past few years, as technology companies compete to release the most impressive model for public use. Researchers are using these tools for tasks such as writing papers, peer review and coding. The tools can save them time, mental energy and sometimes money. But Crowley and others who purposefully abstain often find themselves judged by their peers.
“A lot of people say ‘it’s the future, everyone is using it’,” she says. Not using it, she continues, “kind of feels like showing up to a function and saying you don’t drink”.
Efficient, but at what cost?
According to a Nature survey of about 5,000 researchers published in May last year, scientists are split on the ethics of AI use in academia. More than 90% of respondents felt it was acceptable to use AI for editing or translating their own text, but fewer were open to the idea of using it to generate text directly. And only a minority said they had actually used AI tools in their work: about one-quarter of respondents had used them to edit their papers, whereas only 8% had used them to translate, summarize or write a first draft.
More recently, a survey of 3,234 researchers published last November by the academic publisher Elsevier found that 58% of researchers used AI in their work, up from 37% the previous year. Asked how they use or would like to use AI tools, 61% said to locate new research, 51% said to collect and summarize literature and 41% said to prepare grant applications. Those surveyed were generally positive about the technology's potential to boost efficiency.