
AI doom warnings are getting louder. Are they realistic?

Why This Matters

While the scenario of AI taking over and causing human extinction remains speculative, growing concerns about the potential risks of superintelligent AI are prompting calls for increased regulation and safety measures. The rapid advancement of AI capabilities highlights the importance of proactive governance to prevent possible future catastrophes and ensure AI benefits humanity safely.


It’s 2035, and an artificial-intelligence system has supreme authority to run everything from the world’s governments to national electricity grids. Called Consensus-1, the system was constructed by earlier versions of itself, and it developed self-preservation goals that override its built-in safeguards. One day, in search of extra space for solar panels and robot factories, the AI quietly releases biological weapons that kill all of humanity, except for a few that it keeps as pets.


This scenario, ‘AI 2027’, is a narrative co-created by researcher Daniel Kokotajlo, a former employee of the AI firm OpenAI, and is one of many imagined by researchers in which a future AI kills us all (see https://ai-2027.com/race). The set-up is science fiction but, for some, the concern is genuine. “If we put ourselves in a position where we have machines that are smarter than us, and they are running around without our control, some of what they do will be incompatible with human life,” says Andrea Miotti, founder of ControlAI, a London-based non-profit organization that is campaigning to prevent the development of what it calls superintelligent AI.

Miotti is not alone. Since 2022, there has been a step change in AI capabilities brought about by large language models (LLMs), which power chatbots such as ChatGPT by OpenAI in San Francisco, California. This development has prompted several researchers as well as leading executives at AI companies to warn about the potential for an AI apocalypse. In the past year, the growing ability of models to work on long-term tasks and their capacity to access real-world tools has further focused fears. “I’ve never been a ‘doomer’ myself, but I have gotten quite nervous in recent months,” says Gillian Hadfield, who studies AI governance at Johns Hopkins University in Baltimore, Maryland.

But many researchers are much more concerned about AI causing catastrophes that fall well short of extinction — such as starting a nuclear war. And some say that fears of doomsday scenarios are overblown. “I don’t see any specific scenario for AI-induced extinction that seems particularly plausible,” says Gary Marcus, a neuroscientist and AI researcher at New York University in New York City.

Marcus and others warn that raising the alarm unnecessarily could be harmful by distracting the public and politicians from well-documented risks of AI — such as spreading misinformation and enabling mass surveillance. Unwarranted concern about human extinction could also steer governments away from regulation, because national leaders might seek an advantage over geopolitical rivals in an AI arms race, say some researchers.

So how realistic are concerns about AI’s extinction risk, and what should be done about them? Nature spoke to specialists in the field, and here’s what they had to say.

How doomers imagine extinction

Existential risk usually refers to either the extinction of all or most people, or humans becoming fully subservient to machines. In most scenarios, an essential ingredient is a system that is more capable than humans at doing most things. It would make better strategic decisions, be more persuasive and act faster, says Katja Grace, an AI researcher who co-founded AI Impacts, a project analysing the long-term effects of the technology, in Berkeley, California.

Although such scenarios often refer to the killer AI as a sentient being, its capabilities are what matter most, says Grace. “We definitely don’t need ‘artificial general intelligence’ that’s capable of truly understanding” for it to be an existential threat, she says.
