DeepMind’s 145-page paper on AGI safety may not convince skeptics

Published on: 2025-05-18 06:57:35

Google DeepMind on Wednesday published an exhaustive paper on its safety approach to AGI, roughly defined as AI that can accomplish any task a human can. AGI is a controversial subject in the AI field, with naysayers suggesting it’s little more than a pipe dream. Others, including major AI labs like Anthropic, warn that it’s around the corner and could result in catastrophic harms if steps aren’t taken to implement appropriate safeguards.

DeepMind’s 145-page document, co-authored by DeepMind co-founder Shane Legg, predicts that AGI could arrive by 2030 and that it may result in what the authors call “severe harm.” The paper doesn’t concretely define this, but gives the alarmist example of “existential risks” that “permanently destroy humanity.”

“[We anticipate] the development of an Exceptional AGI before the end of the current decade,” the authors wrote. “An Exceptional AGI is a system that has a capability matching at least 99th percentile of skilled adults ...”