
Elon Musk’s only AI expert witness at the OpenAI trial fears an AGI arms race

Why This Matters

This article highlights ongoing concern within the tech industry about the dangers of rapidly advancing AI, especially the development of Artificial General Intelligence (AGI). It underscores calls for regulation and safety measures to prevent an arms race that could pose existential risks, making the issue critical for industry stakeholders and consumers alike. The trial also reveals the conflicting motivations of AI pioneers as they balance innovation against safety.


When do we take AI doomers seriously?

That’s a key subtext of Elon Musk’s attempt to shut down OpenAI’s for-profit AI business. His attorneys argue that the organization was set up as a charity focused on AI safety, and lost its way in pursuit of lucre. To prove that, they cite old emails and statements from the organization’s founders about the need for a public-spirited counterweight to Google DeepMind.

Today, they called the only expert witness to speak directly to AI technology: Stuart Russell, a University of California, Berkeley computer science professor who has studied AI for decades. His job was to offer background on AI, and establish that this technology is dangerous enough to worry about.

Russell co-signed an open letter in March 2023 calling for a six-month pause in AI research. In a sign of the contradictions here, Musk also signed the same letter, even as he was launching xAI, his own for-profit AI lab.

Russell told jurors and Judge Yvonne Gonzalez Rogers that a variety of risks accompany the development of AI, ranging from cybersecurity threats to misalignment problems and the winner-take-all dynamics of pursuing Artificial General Intelligence (AGI). Ultimately, he said, there is a tension between the pursuit of AGI and safety.

Russell’s larger concerns about the existential threats of unconstrained AI didn’t get aired in open court after objections from OpenAI’s attorneys led the judge to limit Russell’s testimony. But Russell has long been a critic of the arms-race dynamic created by frontier labs around the globe competing to reach AGI first, and called for governments to regulate the field more tightly.

On cross-examination, OpenAI's attorneys established that Russell had not directly evaluated the organization's corporate structure or its specific safety policies.

