
Project Mario: the inside story of DeepMind

Why This Matters

The story of DeepMind's efforts highlights the critical importance of establishing responsible AI governance and safety measures as the technology advances. It underscores the need for collaborative oversight to ensure AI benefits society while mitigating risks, especially as powerful AI systems become more integrated into daily life.


The following is an exclusive excerpt adapted from the author’s new book, THE INFINITY MACHINE: Demis Hassabis, DeepMind, and the Quest for Superintelligence, out today.

In the autumn of 2015, Mustafa Suleyman embarked on a grand experiment in making AI good for society. Together with Demis Hassabis, the senior co-founder of the London-based artificial intelligence lab DeepMind, he began an extended negotiation with Google, to which they had sold the company the previous year. Suleyman was determined to ensure that powerful AI, when it emerged, would not fall under the sole sway of the parent company’s shareholders. For anyone concerned with AI safety, this saga remains relevant today. It shows what happens when, under unusually favorable conditions, a handful of leaders set out to create a control structure for a new technology.

The trigger for this experiment was the failure of DeepMind’s first AGI safety board meeting. In August 2015, Hassabis and Suleyman had convened the board at SpaceX; Elon Musk hosted, with Google’s leadership, Reid Hoffman, and other luminaries in attendance. The meeting produced no agreements or conclusions. Personal tensions, especially between Musk and Google CEO Larry Page, and clashing visions for AI governance overwhelmed the discussion. Worse, Musk proceeded to use what he’d learned about DeepMind’s progress at the meeting to found OpenAI as a direct rival.

Not only did that gathering achieve nothing; once Musk founded OpenAI as an explicitly anti‑Google, anti‑Hassabis venture, there was no way he could continue to watch over DeepMind’s progress. With that attempt at oversight stillborn, Suleyman in particular resolved to create an alternative arrangement. He imagined a novel, post‑capitalist form of governance: one that might balance the drastic tensions in the era of AI, when the imperatives of profit, existential risk, and social justice demanded a new reconciling mechanism. As always with Suleyman, his passion was not in doubt. But the obstacles were formidable.

Suleyman was fortunate in the people he had around him. A preoccupation with safety had been baked into DeepMind even before its founding: Hassabis had first bonded with Shane Legg, DeepMind’s third co-founder, at a 2009 lecture in which Legg warned that superintelligent computers could develop agendas of their own and subjugate or annihilate humans. In the ensuing half dozen years, Hassabis had remained committed to the safety agenda, backing Suleyman’s efforts and adding his own vivid talk about disappearing into a bunker to birth superintelligence. Suleyman was fortunate in his parent company, too. By the standards of large enterprises, Google was remarkably open to governance experiments, having conducted several of its own. For example, the founders had awarded themselves super‑voting shares on the theory that this would allow them to stand up for the company motto, “Don’t Be Evil.” Moreover, at the time when Suleyman embarked on his safety mission, DeepMind was the world’s top AI lab, and its strongest rival, Google Brain, which included the researchers who would invent the transformer, was part of the same company. Suleyman and his interlocutors were therefore in a privileged position. If they could solve AI governance internally, they would go much of the way to solving it, period.

The first potential replacement for the SpaceX oversight group landed in Suleyman’s lap, without him having to do anything. In 2015, Google decided to restructure itself, spinning out specialist chunks of its operation as semi‑independent “bets,” and creating a holding company called Alphabet to preside over them. In a conversation shortly before the SpaceX gathering, Google’s M&A chief, Don Harrison, had suggested to Hassabis and Suleyman that they could regain their independence via this route. The new, liberated DeepMind would have a so‑called 3‑3‑3 board: three people from DeepMind; three people from Alphabet; and three independent members. DeepMind’s leaders, fond of secretive code names, dubbed the ensuing governance talks “Project Mario.”

Google’s proposal had an operational and a financial logic. On the operational side, Larry Page worried that Google was growing unwieldy. It was hard to manage a money‑gusher like the online ad business under the same roof as a pre‑revenue moonshot such as DeepMind. On the financial side, Google reasoned that hiving off cash‑burning ventures would boost the profits of the mothership, resulting in a much higher stock price. To Hassabis and Suleyman, the commercial logic of the Alphabet plan was all to the good. The 3‑3‑3 board structure would give them a strong say over the deployment of AGI and bring in credible independent directors. If the plan also served to boost Google’s share price, that was a good reason to assume that it might actually be implemented.

The governance talks got underway in the first half of 2016. Hassabis met Page to go over the details on four occasions, and together with Suleyman he set about planning the revenue streams that would sustain DeepMind in its independence. Suleyman launched DeepMind Health, believing that, after a few years of pro bono work, DeepMind would earn a lucrative share of the savings that AI generated for hospitals. Hassabis, for his part, assembled a secretive hedge‑fund operation within DeepMind. He recruited a team of some 20 researchers to train high‑frequency trading algorithms, and explored a collaboration with the Wall Street behemoth BlackRock. It was not a project of which Google approved. But Hassabis, a five-time World Games Champion at the international Mind Sports Olympiad, hoped he’d found another game that he could win.
