Think, know, understand, remember.
These are everyday words people use to describe what goes on in the human mind. But when those same terms are applied to artificial intelligence, they can unintentionally make machines seem more human than they really are.
"We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines -- it helps us relate to them," said Jo Mackiewicz, professor of English at Iowa State. "But at the same time, when we apply mental verbs to machines, there's also a risk of blurring the line between what humans and AI can do."
Mackiewicz and Jeanine Aune, a teaching professor of English and director of the advanced communication program at Iowa State, are part of a research team that studied how writers describe AI using human-like language. This type of wording, known as anthropomorphism, assigns human traits to non-human systems. Their study, "Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT," was published in Technical Communication Quarterly.
The research team also included Matthew J. Baker, associate professor of linguistics at Brigham Young University, and Jordan Smith, assistant professor of English at the University of Northern Colorado. Both previously studied at Iowa State University.
Why Human-Like Language About AI Can Be Misleading
According to the researchers, using mental verbs to describe AI can create a false impression. Words such as "think," "know," "understand," and "want" suggest that a system has thoughts, intentions, or awareness. In reality, AI does not possess beliefs or feelings. It produces responses by analyzing patterns in data, not by forming ideas or making conscious decisions.
Mackiewicz and Aune also pointed out that this kind of language can overstate what AI is capable of. Phrases like "AI decided" or "ChatGPT knows" can make systems seem more independent or intelligent than they actually are. This can lead to unrealistic expectations about how reliable or capable AI is.
There is also a broader concern. When AI is described as if it has intentions, it can obscure the people behind it: the developers, engineers, and organizations who are responsible for how these systems are built and used.
"Certain anthropomorphic phrases may even stick in readers' minds and can potentially shape public perception of AI in unhelpful ways," Aune said.