
Large Language Models Will Never Be Intelligent, Expert Says


Are tech companies on the verge of creating thinking machines with their tremendous AI models, as top executives claim they are? Not according to one expert.

We humans tend to associate language with intelligence, and we find those with greater linguistic skill, whether as orators or writers, more compelling.

But the latest research suggests that language isn’t the same as intelligence, says Benjamin Riley, founder of the venture Cognitive Resonance, in an essay for The Verge. And that’s bad news for the AI industry, which is predicating its hopes and dreams of creating an all-knowing artificial general intelligence, or AGI, on the large language model architecture it’s already using.

“The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own,” Riley wrote. “We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.”

AGI, to elaborate, would be an AI system that equals or exceeds human cognition across a wide variety of tasks. In practice, it’s often envisioned as solving the biggest problems humankind can’t, from cancer to climate change. And by claiming they’re creating one, AI leaders can justify the industry’s exorbitant spending and catastrophic environmental impact.

Part of the reason AI capex has been so out of control is the industry’s obsession with scaling: by feeding AI models more data and powering them with ever-growing numbers of GPUs, AI companies have made their models better problem solvers and more humanlike in their ability to hold a conversation.

But “LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build,” Riley wrote.

If language were essential to thinking, then taking it away should take away our ability to think. But this doesn’t happen, Riley points out, citing decades of research summarized in a commentary published in Nature last year.

For one, functional magnetic resonance imaging (fMRI) of human brains has shown that distinct parts of the brain are activated during different cognitive activities, Riley notes. We’re not recruiting the same region of neurons when pondering a math problem versus a language one. Meanwhile, studies of people who lost their language abilities showed that their ability to think was largely unimpaired: they could still solve math problems, follow nonverbal instructions, and understand other people’s emotions.

Even some leading AI figures are skeptical of LLMs. Most famous of all is the Turing Award winner and “godfather” of modern AI Yann LeCun, who until recently was Meta’s top AI scientist. LeCun has long argued that LLMs will never reach general intelligence, and instead believes in pursuing so-called “world models,” which are designed to understand the three-dimensional world by training on a variety of physical data rather than language alone. It’s likely that this view led to his recent departure: despite LeCun’s position, Meta CEO Mark Zuckerberg has pivoted to pouring billions of dollars into a new AI division devoted to creating an artificial “superintelligence” using LLM technology.
