
The Four Fallacies of Modern AI


I've spent the last few years trying to make sense of the noise around Artificial Intelligence, and if there's one feeling that defines the experience, it's whiplash. One week, I'm reading a paper that promises AI will cure disease and unlock unimaginable abundance; the next, I'm seeing headlines about civilizational collapse. This dizzying cycle of AI springs (periods of massive investment and hype) followed by the chilling doubt of AI winters isn't new. It has been the engine of the field for decades.

After years of this, I've had to develop my own framework just to stay grounded. It’s not about being an optimist or a pessimist; it’s about rejecting both extremes. For me, it’s a commitment to a tireless reevaluation of the technology in front of us; to using reason and evidence to find a path forward, because I believe we have both the power and the responsibility to shape this technology’s future. That begins with a clear-eyed diagnosis of the present.

One of the most useful diagnostic tools I've found for this comes from computer scientist Melanie Mitchell. In a seminal paper back in 2021, she identified what she claims are four foundational fallacies: four deeply embedded assumptions that go a long way toward explaining our collective confusion about AI and what it can and cannot do.

My goal in this article isn't to convince you that Mitchell is 100% right. I don't think she is, either, and I will offer my own criticisms of and counterarguments to some of her points. What I want is to use her ideas as a lens to dissect the hype, explore the counterarguments, and show why this intellectual tug-of-war has real-world consequences for our society, our economy, and our safety.

Deconstructing the Four Fallacies

For me, the most important test of any idea is its empirical validation. No plan, no matter how brilliant, survives its first encounter with reality. I find that Mitchell’s four fallacies are the perfect tool for this. They allow us to take the grand, sweeping claims made about AI and rigorously test them against the messy, complicated reality of what these systems can actually do.

Fallacy 1: The Illusion of a Smooth Continuum

The most common and seductive fallacy is the assumption that every impressive feat of narrow AI is an incremental step on a smooth path toward human-level Artificial General Intelligence (AGI). That is, it assumes intelligence is a single, one-dimensional quantity on a continuum running from narrow to general.

We see this everywhere. When IBM's Deep Blue beat Garry Kasparov at chess, it was hailed as a first step towards AGI. The same narrative emerged when DeepMind's AlphaGo defeated Lee Sedol. This way of thinking creates, according to Mitchell, a flawed map of progress, tricking us into believing we are much closer to AGI than we are. It ignores the colossal, unsolved challenge known as the commonsense knowledge problem—the vast, implicit understanding of the world that humans use to navigate reality.

As philosopher Hubert Dreyfus famously said, this is like claiming that the first monkey that climbed a tree was making progress towards landing on the moon. Well, in a sense, maybe it is, but you get the point. We didn't get to the moon until we invented combustion rockets. Climbing ever taller trees gets us no closer; it's just a distraction. In the same sense, mastering a closed-system game may be a fundamentally different challenge than understanding the open, ambiguous world.
