Hey there, and welcome to Decoder! I’m Hayden Field, senior AI reporter at The Verge — and your Thursday episode guest host. I have another couple of shows for you while Nilay is out on parental leave, and we’re going to be spending more time diving into some of the unforeseen consequences of the generative AI boom.
Today, I’m talking with Heidy Khlaaf, who is chief AI scientist at the AI Now Institute and one of the industry’s leading experts on the safety of AI in autonomous weapons systems. Heidy has actually worked with OpenAI in the past; from late 2020 to mid-2021, she was a senior systems safety engineer there during a critical time, when the company was developing safety and risk assessment frameworks for its Codex coding tool.
Now, the same companies that once seemed to champion safety and ethics in their mission statements are actively developing and selling new technology for military applications.
In 2024, OpenAI removed a ban on “military and warfare” use cases from its terms of service. Since then, the company has signed a deal with autonomous weapons maker Anduril and, this past June, signed a $200 million Department of Defense contract.
OpenAI is not alone. Anthropic, which has a reputation as one of the most safety-oriented AI labs, has partnered with Palantir to allow its models to be used for US defense and intelligence purposes, and it also landed its own $200 million DoD contract. And Big Tech players like Amazon, Google, and Microsoft, which have long worked with the government, are now also pushing AI products for defense and intelligence, despite growing outcry from critics and employee activist groups.
So I wanted to have Heidy on the show to walk me through this major shift in the AI industry, what’s motivating it, and why she thinks some of the leading AI companies are being far too cavalier about deploying generative AI in high-risk scenarios. I also wanted to know what this push to deploy military-grade AI means for bad actors who might want to use AI systems to develop chemical, biological, radiological, and nuclear weapons — a risk the AI companies themselves say they’re increasingly worried about.
Okay, here’s Heidy Khlaaf on AI in the military. Here we go.
If you’d like to read more on what we talked about in this episode, check out the links below:
Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!