This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.
For many, AGI is more than just a technology. In tech hubs like Silicon Valley, it’s talked about in mystical terms. Ilya Sutskever, cofounder and former chief scientist at OpenAI, is said to have led chants of “Feel the AGI!” at team meetings. And he feels it more than most: In 2024, he left OpenAI, whose stated mission is to ensure that AGI benefits all of humanity, to cofound Safe Superintelligence, a startup dedicated to figuring out how to avoid a so-called rogue AGI (or control it when it comes). Superintelligence is the hot new flavor—AGI but better!—a term introduced as talk of AGI becomes commonplace.
Sutskever also exemplifies the mixed-up motivations at play among many self-anointed AGI evangelists. He has spent his career building the foundations for a future technology that he now finds terrifying. “It’s going to be monumental, earth-shattering—there will be a before and an after,” he told me a few months before he quit OpenAI. When I asked him why he had redirected his efforts into reining that technology in, he said: “I’m doing it for my own self-interest. It’s obviously important that any superintelligence anyone builds does not go rogue. Obviously.”
He’s far from alone in his grandiose, even apocalyptic, thinking. Every age has its believers, people with an unshakeable faith that something huge is about to happen—a before and an after that they are privileged (or doomed) to live through.
For us, that’s the promised advent of AGI. People are used to hearing that this or that is the next big thing, says Shannon Vallor, who studies the ethics of technology at the University of Edinburgh. “It used to be the computer age and then it was the internet age and now it’s the AI age,” she says. “It’s normal to have something presented to you and be told that this thing is the future. What’s different, of course, is that in contrast to computers and the internet, AGI doesn’t exist.”
And that’s why feeling the AGI is not the same as boosting the next big thing. There’s something weirder going on. Here’s what I think: AGI is a lot like a conspiracy theory, and it may be the most consequential one of our time.
I have been reporting on artificial intelligence for more than a decade, and I’ve watched the idea of AGI bubble up from the backwaters to become the dominant narrative shaping an entire industry. A onetime pipe dream now props up the bottom lines of some of the world’s most valuable companies and thus, you could argue, the US stock market. It justifies dizzying down payments on the new power plants and data centers that we’re told are needed to make the dream come true. Fixated on this hypothetical technology, AI firms are selling us hard.
Just listen to what the heads of some of those companies are telling us. AGI will be as smart as an entire “country of geniuses” (Dario Amodei, CEO of Anthropic); it will kick-start “an era of maximum human flourishing, where we travel to the stars and colonize the galaxy” (Demis Hassabis, CEO of Google DeepMind); it will “massively increase abundance and prosperity,” even encourage people to enjoy life more and have more children (Sam Altman, CEO of OpenAI). That’s some product.
Or not. There’s a flip side, of course. When those people are not shilling for utopia, they’re saving us from hell. In 2023, Amodei, Hassabis, and Altman all put their names to a 22-word statement that read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Elon Musk says AI has a 20% chance of annihilating humans.