Once you familiarize yourself with the characteristics of ChatGPT’s writing, it becomes impossible to miss. The internet has been flooded with AI-generated text that often features distinctive language patterns, from liberal use of em dashes and repetitive sentence structures to specific turns of phrase and tone.
The trend has become so ubiquitous that experts are now warning that it could even influence the way we speak in real life.
As historian Ada Palmer and cryptographer and author Bruce Schneier argue in an opinion piece for The Guardian, it’s a very real risk that could also entrench a fundamental flaw plaguing large language models today. While these models were trained on vast quantities of written text, social media posts, movies, TV shows, and other recordings, this data often comes up short in “unscripted conversations we have face-to-face or voice-to-voice” — which represents the “vast majority of speech, and a vital component of human culture.”
It’s a massive blind spot that could result in humans eventually adopting linguistic patterns of these models, among other far-reaching consequences.
“This will affect not just how we communicate with one another,” Palmer and Schneier write, “but also how we think about ourselves and what goes on around us.”
“Our sense of the world may become distorted in ways we have barely begun to comprehend,” they conclude.
Research has already shown that AI-generated language relies on shorter-than-average sentences, while using a narrower vocabulary than human speech. It also sacrifices what makes human-written text human, including what Palmer and Schneier term “meanders, interruptions and leaps of logic that communicate emotion.”
Worse yet, AI models developed after the advent of ChatGPT run the risk of being trained on output that was itself generated by an AI, a dangerous feedback loop that could further entrench these machine-inspired patterns.
Beyond linguistic choices, AI models have long been shown to be highly agreeable or “sycophantic” toward users, often indulging their potentially flawed or downright dangerous lines of thinking or beliefs. It’s a tendency that can “reinforce bias and even worsen psychosis,” Palmer and Schneier argue.