
AI is now part of the culture wars — and real wars


Hello and welcome to Regulator, the newsletter for Verge subscribers that goes inside Washington’s increasingly existential clashes between tech and politics. If this was forwarded to you, can I interest you in a full-fledged subscription to The Verge for only $40 a year? You’ll get so much more than doomer scenarios. We cover non-existential fun stuff like Legos, too.

Do you work somewhere involving government, technology, and existential threats? Send all tips to [email protected], or to my Signal account @tina.nguyen19.

This was, to put it mildly, not a chill weekend.

For a few hours on Saturday, I thought that the Anthropic-Pentagon contract dispute, which seemed to have concluded on Friday night when Defense Secretary Pete Hegseth declared that the company was a supply-chain risk, would take a backseat in the news cycle. You know, because right around 1AM Saturday morning, the US launched 100 military fighter jets and directed them toward Iran. I’d been texting sources late into the night about OpenAI’s new contract with the Pentagon, asking whether Sam Altman did get those red lines on mass surveillance and autonomous lethal weapons, but by the time I woke up, the United States had assassinated Ayatollah Ali Khamenei and several other Iranian leaders in an aerial strike on Tehran, openly and unapologetically in broad daylight.

Soon it became apparent, though, that Anthropic was part of the story, too. On Sunday, The Wall Street Journal reported that Claude-powered intelligence tools had been used by several military command centers during the strike, citing sources familiar with the matter. It’s unknown how the Pentagon used Claude in this specific operation in Iran; such information would be classified and known only to people directly involved. But the Journal wrote that the Pentagon had already deeply embedded Claude — which, until last week, was the only AI system with the security clearance to handle classified information — into technology that performed “intelligence assessments, target identification and simulating battle scenarios.” That technology was, apparently, used in the Iran strike.

A few observations can be pulled from this: First, the entire conflict was never about Anthropic posing an actual national security risk (though the public could already kind of see that). Second, while AI may not yet have reached the “fully autonomous lethal weapon” stage, it has developed to a level sophisticated enough to conduct an impressively precise (though uncomfortably extralegal) strike on a foreign leader. It is all the more impressive considering that Iran had been under a near-total, government-imposed internet blackout for several months, with virtually no digital connection to the outside world.

I hit up Hamza Chaudhry, the AI and National Security lead at the nonpartisan Future of Life Institute, for his long view on Operation Epic Fury. He noted that both sides of the conflict were already using artificial intelligence in their warfare — Iran has deployed AI-assisted missiles in recent months — and while the US had clearly prevailed in this scenario, it was the prelude to what he described as a “dyadic automated warfare problem: two AI systems effectively talking to each other through the medium of kinetic action, each optimizing and responding faster than human decision-makers can follow.”

Chaudhry’s nightmare scenario, however, suggested the end of nuclear deterrence as a tool for global stability:

“Recent analyses of the 2025 India-Pakistan and Iran-Israel conflicts found that AI renders second-strike forces more transparent and thus more vulnerable, and that while nuclear arsenals still impose a ceiling on all-out war, AI lowers the floor for sub-threshold aggression and compresses political reaction time. If an adversary believes its nuclear deterrent is becoming visible (e.g., submarines trackable, mobile launchers locatable, command infrastructure mappable), the rational response is to expand the arsenal or shift to a launch-on-warning posture.

“Experts have described this as threatening ‘arms race stability’: the risk that one side might seek a breakout advantage in advanced technology, triggering complementary efforts by the other. This is not a hypothetical future problem. The technologies that made Operation Epic Fury possible are the same technologies that are slowly making nuclear deterrence more fragile. We have no international governance framework that addresses this adequately.”

Natsec Lawyer-GPT
