Anthropic’s Chief Scientist Says We’re Rapidly Approaching the Moment That Could Doom Us All

Anthropic’s chief scientist Jared Kaplan is making some grave predictions about humanity’s future with AI.

The choice, in his framing, is ours. For now, our fates remain mostly in our own hands, unless we decide to pass the proverbial baton to the machines.

Such a point is fast approaching, he says in a new interview with The Guardian. By 2030, or perhaps as soon as 2027, Kaplan predicts, humanity will have to decide whether to take the “ultimate risk” of letting AI models train themselves. The ensuing “intelligence explosion” could elevate the tech to new heights, birthing a so-called artificial general intelligence (AGI) that equals or surpasses human intellect and showers humankind with scientific and medical advancements. Or it could allow AI’s power to snowball beyond our control, leaving us at the mercy of its whims.

“It sounds like a kind of scary process,” he told the newspaper. “You don’t know where you end up.”

Kaplan is one of many prominent figures in AI warning about the field’s potentially disastrous consequences. Geoffrey Hinton, one of the three so-called godfathers of AI, famously declared that he regretted his life’s work, and has frequently warned that AI could upend or even destroy society. OpenAI CEO Sam Altman predicts that AI will wipe out entire categories of labor. Kaplan’s boss, Anthropic CEO Dario Amodei, recently warned that AI could take over half of all entry-level white-collar jobs, and accused his competitors of “sugarcoating” just how badly AI will disrupt society.

It sounds like Kaplan agrees with his boss’s jobs assessment. AI will be able to do “most white-collar work” in two to three years, he said in the interview. And while he’s optimistic we’ll be able to keep AIs aligned with human interests, he’s also worried about the prospect of allowing powerful AI to train other AIs, “an extremely high-stakes decision” we’ll have to make in the near future.

“That’s the thing that we view as maybe the biggest decision or scariest thing to do… once no one’s involved in the process, you don’t really know,” he told The Guardian. “One is do you lose control over it? Do you even know what the AIs are doing?”

To an extent, larger AI models are already used to train smaller AI models in a process called distillation, which allows the smaller AI to essentially catch up with its larger teacher. Kaplan, however, is worried about what’s termed recursive self-improvement, in which the AIs learn without human intervention and make substantial leaps in their capabilities.
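For readers curious what distillation looks like in practice, here is a minimal, hypothetical sketch in PyTorch: a small “student” network is trained to match the softened output distribution of a frozen “teacher.” The model sizes, temperature, and random stand-in data are illustrative assumptions, not anything from Anthropic’s actual pipeline.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical teacher (large) and student (small) networks for illustration.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
teacher.eval()  # the teacher's weights stay fixed during distillation

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens both distributions so the student sees richer signal

for step in range(100):
    x = torch.randn(64, 32)  # stand-in for a real training batch
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between the softened teacher and student distributions;
    # scaling by T**2 keeps gradient magnitudes comparable across temperatures.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The key point for Kaplan’s distinction: here a human chooses the teacher, the data, and the objective at every step. Recursive self-improvement would remove that human from the loop.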

Whether we allow that to happen comes down to some heavy philosophical questions about the tech.

“The main question there is: are the AIs good for humanity?” Kaplan said. “Are they helpful? Are they going to be harmless? Do they understand people? Are they going to allow people to continue to have agency over their lives and over the world?”
