The cringe comes for us all, and for all our hot new turns of phrase. “Rizz” lost its luster when grandparents started asking about its meaning. Teachers who dressed up as “6-7” on Halloween drove a nail into the coffin of Gen Alpha’s rallying cry. And tech CEOs who once trumpeted the quest for “artificial general intelligence,” or AGI, are jumping ship for any other term they can find.
Until recently, AGI was the ultimate goal of the AI industry. The vaguely defined term was reportedly coined in 1997 by Mark Gubrud, a researcher who defined it as “AI systems that rival or surpass the human brain in complexity and speed.” The term still typically denotes AI that equals or surpasses human intelligence. But now, several of the biggest companies are going for a rebrand — creating their own phrases or acronyms that (spoiler alert) still mean, essentially, the same thing.
CEOs have spent the past year downplaying the importance of “AGI” as a milestone. Dario Amodei, CEO of Amazon-backed Anthropic, has said publicly that he “dislike[s] the term AGI” and that he’s “always thought of it as a marketing term.” OpenAI CEO Sam Altman said in August that it’s “not a super useful term.” Jeff Dean, Google’s chief scientist and Gemini lead, has said he “tend[s] to steer away from AGI conversations.” Microsoft CEO Satya Nadella has said we’re getting “a little bit ahead of ourselves with all this AGI hype,” and that at the end of the day, “self-claiming some AGI milestone” is “just nonsensical benchmark hacking.” He also said on a recent earnings call that he doesn’t believe that “AGI as defined, at least by us in our contract, is ever going to be achieved anytime soon.”
In its place, they’re pushing a cornucopia of competing terminology. Meta has “personal superintelligence,” Microsoft has “humanist superintelligence,” Amazon has “useful general intelligence,” and Anthropic has “powerful AI.” It’s a sharp about-face for all of these companies, which in recent years bought into the AGI benchmark — and the fear of missing out that came from not chasing it.
Part of the problem with “AGI” is that the more advanced AI gets, the more poorly defined the term seems — since the concept of AI that’s “equal to human intelligence” looks different to virtually everyone. “Lots of people have very different definitions of it, and the difficulty of the problem varies by factors of a trillion,” Dean said.
Yet some companies have billions of dollars riding on this nebulous phrase, a problem that’s clearest in the strange, ever-changing relationship between Microsoft and OpenAI.
In 2019, OpenAI and Microsoft famously signed a contract with an “AGI clause.” It gave Microsoft the right to use OpenAI’s tech until the latter achieved AGI. But the contract apparently didn’t fully define what that meant. When the deal was renewed in October, things got even more complicated. The terms shifted to say that “once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel” — meaning that now, it won’t just be OpenAI’s call to define what AGI means, it’ll be a group of industry experts — and Microsoft won’t lose all its rights to the tech once that happens, either. The simplest way to put this whole ordeal off? Just don’t say AGI.
Another problem is that AGI has developed some baggage. Tech companies have spent years detailing their own fears about how the technology could destroy everything. Books have been written (think: If Anyone Builds It, Everyone Dies). Hunger strikes have made headlines. For a while, it was still good publicity — saying your tech is so powerful that you’re worried about its influence on the Earth seems to draw big investor dollars. But the public, unsurprisingly, soured on that idea. So, with the complicated definitions, contract drama, and public fear around superpowerful AI, it’s a lot easier to market less-loaded terminology. That’s why every tech company seems to be making some new brand of “intelligence” its own.
One popular general-purpose replacement for AGI is “artificial superintelligence,” or ASI. ASI is AI that surpasses human intelligence in virtually every area — compared to AGI, which is now generally defined as AI that’s equal to human intelligence. But for some in the tech industry, even the idea of “superintelligence” has become amorphous and conflated with AGI. The multiple theoretical milestones don’t even have clearly distinguished timelines. Amodei says he expects “powerful AI” to come “as early as 2026.” Altman says he expects AGI to be developed in the “reasonably close-ish future.”