"It’s going to find its way into everything."
Nobel laureates met with nuclear experts last month to discuss AI and the end of the world — and if that sounds like the opening to a sci-fi blockbuster set in the apocalypse, you're not alone.
As Wired reports, the convened experts seemed to broadly agree that it's only a matter of time until AI gets hold of the nuclear codes. Exactly why that has to be true is hard to pin down, but the feeling of inevitability — and anxiety — is palpable in the magazine's reporting.
"It’s like electricity," retired US Air Force major general and member of the Bulletin of the Atomic Scientists’ Science and Security Board, Bob Latiff, told Wired. "It’s going to find its way into everything."
It's a bizarre situation. AI models have already been shown to exhibit numerous dark streaks, resorting to blackmail at astonishing rates in simulated tests when threatened with being shut down.
If an AI, or a network of AIs, were safeguarding a nuclear weapons stockpile, those sorts of poorly understood risks would become immense. And that's before getting into a genuine concern among some experts, which also happens to be the plot of the movie "The Terminator": a hypothetical superhuman AI going rogue and turning humanity's nuclear weapons against it.
Earlier this year, former Google CEO Eric Schmidt warned that a human-level AI may not be incentivized to "listen to us anymore," arguing that "people do not understand what happens when you have intelligence at this level."
That kind of AI doomerism has been on the minds of tech leaders for years now, as reality plays a slow-motion game of catch-up. In AI's current form, the risks are probably more banal: the best models today still suffer from rampant hallucinations that greatly undercut the usefulness of their outputs.
Then there's the threat of flawed AI tech leaving gaps in our cybersecurity, allowing adversaries — or even adversary AIs — to access systems in control of nuclear weapons.
Getting all members of last month's unusual meeting to agree on a topic as fraught as AI proved challenging, with Jon Wolfsthal, director of global risk at the Federation of American Scientists, admitting to the publication that "nobody really knows what AI is."