Part of what makes us human is the unique way each of us thinks and solves problems. But using large language models like ChatGPT might be eroding that uniqueness, leading people to think and communicate in increasingly similar ways, according to a group of scientists and psychologists who have co-authored a new opinion paper.
"Individuals differ in how they write, reason, and view the world," Zhivar Sourati, a computer scientist of the University of Southern California and first author for the paper, said in a statement.
"When these differences are mediated by the same LLMs, their distinct linguistic style, perspective and reasoning strategies become homogenized, producing standardized expressions and thoughts across users," Sourati continued.
The paper, published Wednesday in the journal Trends in Cognitive Sciences, examines how hundreds of millions of people worldwide use the same handful of chatbots and what that means for our individuality.
Thinking inside the box
The Pew Research Center found that one-third of all Americans used ChatGPT last year, double the 2023 figure. And chatbot use is much more common among teens: Two-thirds say they use chatbots, and almost a third use them daily.
Businesses are also going all in on artificial intelligence. Stanford found that 78% of organizations reported using AI in 2024, up from 55% in 2023.
So we're using AI a lot. The danger, the paper's authors argue, is that we could lose diversity in the ways we think. The team points out that LLMs generate writing that varies less than what people come up with on their own.
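The paper's exact measure isn't given here, but a common way to make "varies less" concrete is to compute the average pairwise similarity across a set of texts: the more alike the texts, the higher the score. The sketch below illustrates that general idea with scikit-learn's TF-IDF vectors and made-up example sentences; it is not the authors' method or data.

```python
# Minimal sketch of one way to quantify how much a set of texts varies:
# average pairwise cosine similarity of TF-IDF vectors. Higher similarity
# means less diversity. Illustrative only; not the paper's method.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(texts: list[str]) -> float:
    """Average cosine similarity over all pairs (0 = nothing shared, 1 = identical)."""
    vectors = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(vectors)
    pairs = list(combinations(range(len(texts)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)

# Hypothetical answers to the same prompt from three different people...
human_texts = [
    "I think remote work frees up my time but blurs every boundary.",
    "Honestly, commuting was the worst part of my old job.",
    "Offices are fine; it is the endless meetings that drain me.",
]
# ...versus three chatbot drafts, which tend to converge on one phrasing.
chatbot_texts = [
    "Remote work offers flexibility but also presents challenges.",
    "Remote work provides flexibility while also posing challenges.",
    "Working remotely offers flexibility but also brings challenges.",
]

print("human similarity:  ", round(mean_pairwise_similarity(human_texts), 2))
print("chatbot similarity:", round(mean_pairwise_similarity(chatbot_texts), 2))
```

On this toy input, the chatbot set scores noticeably higher; the researchers' claim is that the same pattern shows up at scale.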
Part of the reason LLMs may be pushing homogenized thought, according to the paper's authors, is the data used to train them.
"Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience," Sourati says.