ZDNET's key takeaways
AI researchers and industry leaders argue that superintelligent AI could pose an existential threat to humanity.
More than 1,300 signatories, including prominent AI pioneers, have endorsed the warning.
Polling suggests a majority of the US public shares the concern about "superintelligence."
The surprise release of ChatGPT just under three years ago was the starting gun for an AI race that has been rapidly accelerating ever since. Now, a group of industry experts is warning -- and not for the first time -- that AI labs should slow down before humanity drives itself off a cliff.
Also: What Bill Gates really said about AI replacing coding jobs
A statement published Wednesday by the Future of Life Institute (FLI), a nonprofit organization focused on existential AI risk, argues that the development of "superintelligence" -- an AI industry buzzword that usually refers to a hypothetical machine intelligence that can outperform humans on any cognitive task -- presents an existential risk and should therefore be halted until a safe pathway forward can be established.
A stark warning
The unregulated competition among leading AI labs to build superintelligence could lead to harms ranging from "human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction," the authors of the statement wrote.
They go on to argue that the development of superintelligent machines should be prohibited until there is (1) "broad scientific consensus that it will be done safely and controllably," as well as (2) "strong public buy-in."
The petition had more than 1,300 signatures as of late Wednesday morning. Prominent signatories include Geoffrey Hinton and Yoshua Bengio, both of whom shared a Turing Award in 2018 (along with fellow researcher Yann LeCun) for their pioneering work on neural networks and are now known as two of the "Godfathers of AI."
Also: What AI pioneer Yoshua Bengio is doing next to make AI safer
Computer scientist Stuart Russell, Apple cofounder Steve Wozniak, Virgin Group founder Sir Richard Branson, former Trump administration Chief Strategist Steve Bannon, political commentator Glenn Beck, author Yuval Noah Harari, and many other notable figures in tech, government, and academia have also signed the statement.
And they aren't the only ones who appear worried about superintelligence. On Sunday, the FLI published the results of a poll it conducted with 2,000 American adults that found that 64% of respondents "feel that superhuman AI should not be developed until it is proven safe and controllable, or should never be developed."
What is "superintelligence"?
It isn't always easy to draw a neat line between marketing bluster and technical legitimacy, especially when it comes to a technology as buzzy as AI.
Also: What Zuckerberg's 'personal superintelligence' sales pitch leaves out
Like artificial general intelligence, or AGI, "superintelligence" is a hazily defined term that's recently been co-opted by some tech developers to describe the next rung in the evolutionary ladder of AI: an as-yet-unrealized machine that can do anything the human brain can do, only better.
In June, Meta launched an internal R&D arm devoted to building the technology, which it calls Superintelligence Labs. At around the same time, OpenAI CEO Sam Altman published a personal blog post arguing that the advent of superintelligence was imminent. (The FLI petition cited a 2015 blog post from Altman in which he described "superhuman machine intelligence" as "probably the greatest threat to the continued existence of humanity.")
The term "superintelligence" was popularized by a 2014 book by the same name by the Oxford philosopher Nick Bostrom, which was largely written as a warning about the dangers of building self-improving AI systems that could one day escape human control.
Experts remain concerned
Bengio, Russell, and Wozniak were also among the signatories of a 2023 open letter, also published by the FLI, that called for a six-month pause on the training of powerful AI models.
Also: Google's latest AI safety report explores AI beyond human control
Though that letter received widespread attention in the media and helped kindle public debate about AI safety, the momentum to quickly build and commercialize new AI models -- momentum that, by that point, had thoroughly overtaken the tech industry -- ultimately overpowered any will to implement a wide-scale moratorium. Significant AI regulation, at least in the US, is also still lacking.
That momentum has only grown as competition has spilled over the boundaries of Silicon Valley and across international borders. President Donald Trump and prominent tech leaders like Altman have framed the AI race as a geopolitical and economic competition between the US and China.
At the same time, safety researchers from prominent AI companies including OpenAI, Anthropic, Meta, and Google have issued occasional, smaller-scale statements about the importance of monitoring certain components of AI models for risky behavior as the field evolves.