This week, Meta CEO Mark Zuckerberg shared his vision for the future of AI, a "personal superintelligence" that can help you "achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be."
The hazy announcement — which lacked virtually any degree of detail and smacked of the uninspired output of an AI chatbot — painted a rosy picture of a future where everybody uses our "newfound productivity to achieve more than was previously possible."
Zuckerberg couched it all in a humanist wrapper: instead of "automating all valuable work" like Meta's competitors in the AI space, which would result in humanity living "on a dole of its output," Zuckerberg argued that his "personal superintelligence" would put "power in people's hands to direct it towards what they value in their own lives."
But it's hard not to see the billionaire trying to have it both ways. Zuckerberg is dreaming up a utopia in which superintelligent AIs benevolently stop short of taking over everybody's jobs, instead just augmenting our lives in profound ways.
The problem? Well, basic reality, for starters: if you offer a truly superintelligent AI to the masses, the powerful are going to use it to automate other people's jobs. If you somehow force your AI not to do that, your competitors will.
As former OpenAI safety researcher Steven Adler pointed out on X-formerly-Twitter, "Mark seems to think it's important whether Meta *directs* superintelligence toward mass automation of work."
"This is not correct," he added."If you 'bring personal superintelligence to everyone' (including business-owners), they will personally choose to automate others' work, if they can."
Adler left OpenAI earlier this year, tweeting at the time that he was "pretty terrified by the pace of AI development these days."
"IMO, an AGI race is a very risky gamble, with huge downside," he added, referring to OpenAI CEO Sam Altman's quest for "artificial general intelligence," a poorly-defined point at which the capabilities of AIs would surpass those of humans. "No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time."
Adler saw plenty of parallels between his former employer's approach and Zuckerberg's.