AGI is not possible even in 10 years
MD ZAID ANWAR
Every few months, we hear another confident prediction: AGI by 2026, superintelligence by 2027. The CEOs of major AI companies paint a picture of machine intelligence surpassing humans just around the corner. But when you look past the headlines and listen to the researchers actually building these systems, a different story emerges.
The Researchers Who Actually Build AI Are More Cautious
Let’s start with the people who understand these systems from the ground up, not from the boardroom.
Yann LeCun, Meta’s Chief AI Scientist and one of the godfathers of deep learning, has been remarkably consistent: current large language models won’t get us to human-level intelligence. In a recent interview, he explained that most human knowledge doesn’t come from text; it comes from our experience of the physical world in the first years of life. LLMs miss this entirely. They can write eloquently about gravity but have never dropped a ball. They can describe a cat but have never felt fur. This isn’t a minor gap; it’s a fundamental limitation. LeCun thinks we need completely different approaches, including what he calls “world models,” before we can talk seriously about AGI. His timeline? At least a decade, probably much longer.
Ilya Sutskever, who co-founded OpenAI and recently founded his own company, Safe Superintelligence, takes a more nuanced view. In his most recent interview, he made a crucial distinction: we’re shifting from an “age of scaling” back to an “age of research.” What does that mean? Simply throwing more computing power and data at current models isn’t working anymore; the easy gains are done. He estimates AGI could take anywhere from 5 to 20 years, and importantly, he frames it as a set of open research problems, not an engineering challenge where we just need to turn the knobs higher.
Andrej Karpathy, a founding member of OpenAI and former director of AI at Tesla, goes even further. In an October 2025 podcast, he said we’re looking at the “decade of agents,” not the “year of agents.” His reasoning is grounded in daily reality: current AI agents are impressive for boilerplate code, but they struggle with anything remotely novel. They loop through the same mistakes repeatedly. They can’t learn on the job. They lack what he calls “cognitive” capabilities that would let them function like even a junior intern. His timeline for truly useful agents? About ten years.
So Why Are CEOs Saying 2–3 Years?
This is where things get interesting. Sam Altman of OpenAI talks about AGI arriving in “a few thousand days.” Dario Amodei of Anthropic suggests 2026 or 2027. These aren’t stupid people; they’re smart businesspeople operating in a very specific context.
Here’s what matters: these companies need to raise enormous amounts of money. We’re talking billions, soon trillions, for data centers and computing infrastructure. When you’re asking investors for that kind of capital, “maybe in 20 years if we solve several fundamental research problems” isn’t a compelling pitch. “AGI in 2–3 years” is.