The chance that humans will literally go extinct at the hands of AI, I told Liron Shapira on his podcast Doom Debates in May, was low. Humans are genetically diverse, geographically diverse, and remarkably resourceful. Some humans might die at the hands of AI, but all of them? Shapira argued that doom was likely; I pushed back. Catastrophe seemed likely; outright doom seemed to me, then, vanishingly unlikely.
Part of my reasoning then was that actual malice on the part of AI was unlikely, at least anytime soon. I have always thought a lot of the extinction scenarios were contrived, like Bostrom’s famous paper clip example (in which a superintelligent AI, instructed to make paper clips, turns everything in the universe, including humans, into paper clips). I was pretty critical of the AI 2027 scenario, too.
My AI fears, as I have written before, have mainly been about bad actors rather than malicious robots per se. But even so, I think most such scenarios (e.g., people homebrewing biological weapons) could eventually be stopped; they might cause a lot of damage, but they would come nowhere near literally extinguishing humanity.
But a number of connected events over the last several days have caused me to update my beliefs.
§
To really screw up the planet, you might need something like the following:
A really powerful person with tentacles across the entire planet
Substantial influence over the world’s information ecosphere
A large number of devoted followers willing to justify almost any choice
Leverage over world governments and their leaders