
LLMs may be standardizing human expression – and subtly influencing how we think

Why This Matters

This article highlights the potential risks of large language models homogenizing human expression, which could diminish cognitive diversity and creativity across society. As AI chatbots become more integrated into daily life, understanding their influence on our thinking and communication is crucial for maintaining human uniqueness and adaptability. Addressing these issues can help ensure AI development supports a diverse and innovative future for the tech industry and consumers alike.

Key Takeaways

Artificial intelligence chatbots are standardizing how people speak, write and think. If this homogenization continues unchecked, it risks reducing humanity’s collective wisdom and ability to adapt, argue USC computer scientists and psychologists in an opinion paper published March 11 in the Cell Press journal Trends in Cognitive Sciences.

The researchers — led by Morteza Dehghani, professor of psychology and computer science at the USC Dornsife College of Letters, Arts and Sciences — say that AI developers should incorporate more real-world diversity into large language model (LLM) training sets, not only to help preserve human cognitive diversity, but also to improve chatbots’ reasoning abilities.

“Individuals differ in how they write, reason and view the world,” says study first author Zhivar Sourati, a PhD student at the USC Viterbi School of Engineering. “When these differences are mediated by the same LLMs, their distinct linguistic style, perspective, and reasoning strategies become homogenized, producing standardized expressions and thoughts across users.”

Large language models dampen individuality

Within groups and societies, cognitive diversity bolsters creativity and problem-solving, say the researchers. However, cognitive diversity is shrinking worldwide as billions of people are using the same handful of AI chatbots for an increasing number of tasks, they add. When people use chatbots to help them polish their writing, for example, the writing ends up losing its stylistic individuality, and people feel less creative ownership over what they produce.

“The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning,” says Sourati, a member of Dehghani’s Morality and Language Lab.

The team points to multiple studies showing that LLM outputs are less varied than human-generated writing and that LLM outputs tend to reflect the language, values and reasoning styles of Western, educated, industrialized, rich and democratic societies.

“Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience,” says Sourati.

Though studies show that individuals often generate more ideas with more details when they use LLMs, groups of people produce fewer and less creative ideas when they use LLMs than when they simply combine their collective powers, note the researchers.

“Even if people are not the firsthand users of LLMs, LLMs are still going to affect them indirectly,” says Sourati. “If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them because it would seem like a more credible or socially acceptable way of expressing my ideas.”
