More than 20 years ago, futurist intellectual Nick Bostrom upended the psyches of tech bros the world over when he proposed, in a 2003 Philosophical Quarterly paper, that we may all be living in a computer simulation.
Beloved by such strange bedfellows as Elon Musk, Bill Gates, and Sam Altman, Bostrom has released two other influential books in the interim: 2014's "Superintelligence: Paths, Dangers, Strategies," which detailed the ways AI could become smarter than humans, and 2024's "Deep Utopia: Life and Meaning in a Solved World," which ponders what will happen if AI fixes everything.
He was also embroiled in a minor controversy after a very racist email he sent in the 1990s was uncovered in 2023, and the following year, his Future of Humanity Institute at Oxford was shut down in what the philosopher lamented as "death by bureaucracy."
Now, speaking from the other side of the AI boom of the past few years, Bostrom has begun to see some of his predictions about AI come to fruition in real time, as he told The London Standard.
"It’s all happening now," the philosopher said of AI advancement. "I’m quite impressed by the speed of developments that we’ve seen in the past several years."
Bostrom added that the world appears to be "on the track towards" artificial general intelligence, or the point at which AI systems become as intelligent as humans. When he wrote "Superintelligence" in the early 2010s, he was, as the philosopher told the Standard, mostly spitballing — and now, as we approach it, some of his ideas about it are changing too.
Back in 2019, when even today's nascent AI technology seemed like science fiction, Bostrom told Business Insider that "AI is a bigger threat to human existence than climate change," and that climate change won't be the "biggest change we see this century."
When asked recently whether that was still his belief, the philosopher demurred.
"There remains always the possibility that human civilization might destroy itself in some other way," Bostrom told the Standard, "such that we don’t even get the chance to try our luck with superintelligence."
Even more interestingly, it appears that the former Oxford professor has also changed his tune somewhat about advanced AI, telling the London newspaper that AGI is an inevitability that he's not necessarily against.