
AI got the blame for the Iran school bombing. The truth is more worrying

Why This Matters

This article highlights the complex and often misunderstood role of AI in military operations, emphasizing that the recent Iran school bombing was carried out by the Pentagon’s Maven targeting system rather than a chatbot such as Claude. It underscores the importance of transparency and accurate information about AI’s capabilities and limitations in defense, which directly shapes public trust and policy decisions. The incident also raises concerns about the ethical use of AI in warfare and the consequences of misattributing blame to the wrong systems.


On the first morning of Operation Epic Fury, 28 February 2026, American forces struck the Shajareh Tayyebeh primary school in Minab, in southern Iran, hitting the building at least twice during the morning session. The strikes killed between 175 and 180 people, most of them girls between the ages of seven and 12.

Within days, the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target. Congress wrote to the US secretary of defense, Pete Hegseth, about the extent of AI use in the strikes. The New Yorker magazine asked whether Claude could be trusted to obey orders in combat, whether it might resort to blackmail as a self-preservation strategy, and whether the Pentagon’s chief concern should be that the chatbot had a personality. Almost none of this had any relationship to reality. The targeting for Operation Epic Fury ran on a system called Maven. Nobody was arguing about Maven.

Eight years ago, Maven was the most contested project in Silicon Valley. In 2018, more than 4,000 Google employees signed a letter opposing the company’s contract to build artificial intelligence for the Pentagon’s targeting systems. Workers organised a walkout. Engineers quit. And Google ultimately abandoned the contract. Palantir Technologies, a data analytics company and defence contractor co-founded by Peter Thiel, took it over and spent the next six years building Maven into a targeting infrastructure that pulls together satellite imagery, signals intelligence and sensor data to identify targets and carry them through every step from first detection to the order to strike.

The building in Minab had been classified as a military facility in a Defense Intelligence Agency database that, according to CNN, had not been updated to reflect that the building had been separated from the adjacent Islamic Revolutionary Guard Corps compound and converted into a school, a change that satellite imagery shows had occurred by 2016 at the latest. A chatbot did not kill those children. People failed to update a database, and other people built a system fast enough to make that failure lethal. By the start of the Iran war, Maven – the system that had enabled that speed – had sunk into the plumbing and become part of the military’s infrastructure, and the argument was all about Claude. This obsession with Claude is a kind of AI psychosis, though not of the kind we normally talk about, and it afflicts critics and opponents of the technology as fiercely as it does its boosters. You do not have to use a language model to let it organise your attention or distort your thinking.
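The failure mode itself is mundane enough to sketch in a few lines of code. The Python below is purely illustrative: the record shape, the staleness guard and the threshold are all invented for this example, and nothing in the public record suggests Maven or the DIA database works this way. It simply shows how cheap the missing check would have been.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record shape -- not the actual DIA or Maven schema.
@dataclass
class FacilityRecord:
    name: str
    classification: str   # e.g. "military" or "civilian"
    last_verified: date   # when a human last confirmed the classification

MAX_STALENESS_DAYS = 365  # invented threshold, for illustration only

def eligible_for_nomination(record: FacilityRecord, today: date) -> bool:
    """Refuse to let a record enter target nomination on stale intelligence."""
    age_days = (today - record.last_verified).days
    return record.classification == "military" and age_days <= MAX_STALENESS_DAYS

# The Minab building as the article describes it: still marked "military",
# never re-verified after its conversion to a school (complete by 2016).
minab = FacilityRecord("Shajareh Tayyebeh building", "military", date(2015, 6, 1))
print(eligible_for_nomination(minab, date(2026, 2, 28)))  # False: record too stale
```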

In 2019, the scholar Morgan Ames published The Charisma Machine, a study of how certain technologies draw attention, resources and attribution toward themselves and away from everything else. The usual framework for understanding this dynamic is “hype”, but hype only describes what boosters do, and it assigns critics a privileged debunking role that still leaves the technology at the centre of every argument. A charismatic technology shapes the whole field around it, the way a magnet organises iron filings. LLMs may be the most powerful instance of this type in history.

By the time the war began, “AI safety” and “alignment” and “hallucination” and “stochastic parrots” had become the terms of every argument about artificial intelligence, structuring and limiting what we could even say. Worse, “artificial intelligence” itself had come to be synonymous with LLMs. When the school was bombed, those were the terms people reached for, even though this critical apparatus was a poor fit for the older, more mature stack of technologies involved in targeting. The real question, the question almost nobody was asking, is not about Claude or any language model. It is a bureaucratic question about what happened to the kill chain, and the answer is Palantir.

As military jargon goes, “kill chain” is a remarkably honest term. In essence, it refers to the bureaucratic framework for organising the steps between detecting something and destroying it. The oldest reference to the term itself I can find is from the 1990s, but the idea is quite old – dating at least to the 1760s, when French artillery reformers began replacing the gunner’s experienced eye with ballistic tables, elevation screws and standardised firing procedures. The steps in the kill chain are subject to constant change, to keep pace with changes in targeting doctrine, but also to incorporate whatever management fads come to afflict the military’s strategic thinkers. The US military has named and renamed the steps for 80 years. In the second world war the sequence was find, fix, fight, finish. By the 1990s the air force had stretched it to find, fix, track, target, engage, assess, or F2T2EA. Every generation of military technology has been sold on the promise of making everything about kill chains shorter, except for the acronyms.
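To make the structural point concrete: F2T2EA is a strictly ordered pipeline, and “shortening” the kill chain means shrinking the time each stage takes, not skipping stages. The Python sketch below is my own illustrative model of that sequence, drawn from the acronym rather than from doctrine or from any real system.

```python
import enum

# The air force's F2T2EA sequence as an ordered pipeline -- an illustrative
# model of the bureaucratic structure only, not any real targeting system.
class Stage(enum.Enum):
    FIND = 1
    FIX = 2
    TRACK = 3
    TARGET = 4
    ENGAGE = 5
    ASSESS = 6

def run_kill_chain(handlers: dict) -> None:
    """Each stage gates the next; compressing the chain means making each
    handler faster, because no stage can be skipped."""
    for stage in Stage:  # Enum members iterate in definition order
        handlers[stage]()

# Trivial handlers that just announce each stage in order.
run_kill_chain({stage: (lambda s=stage: print(s.name)) for stage in Stage})
```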

Palantir’s Maven Smart System is the latest iteration of this compression, and it grew out of a shift in strategic thinking during Obama’s second term. In 2014, the secretary of defense, Chuck Hagel, and his deputy, Robert Work, announced what they called the “third offset strategy”. An “offset” in this line of thinking is a bet that a technological advantage can compensate for a strategic weakness the country cannot fix directly. The first two offsets addressed the same problem: the United States could not match the Soviet Union in conventional forces. The thinking was that the Red Army could just continue to throw personnel at a problem, as they did at Stalingrad, or, to be anachronistic, as the contemporary Russian army did at Bakhmut and Avdiivka. Nuclear weapons, the first offset, made the personnel advantage irrelevant in the 1950s. When the Soviets reached nuclear parity in the 1970s, precision-guided munitions and stealth offered the promise that a smaller force could defeat a larger one. By 2014, that advantage was eroding. China and Russia had spent two decades acquiring precision-guided munitions and building defence systems designed to keep American forces out of range. Work insisted that the third offset was not about any particular technology but about using technology to reorganise how the military operated: letting the US make decisions faster than China and Russia, and overwhelming and disorienting the enemy by maintaining an operational tempo they could not match.

At the funeral of children killed in the US strike on Shajareh Tayyebeh primary school. Photograph: Amirhossein Khorgooei/AP

In April 2017, early in the first Trump administration, Work helped establish the Algorithmic Warfare Cross-Functional Team, designated Project Maven. One of the generals overseeing Maven, Lt Gen Jack Shanahan, put the problem plainly: thousands of intelligence analysts were spending 80% of their time on mundane tasks, drowning in footage from surveillance drones that no one had time to watch. A single Predator drone mission could generate hundreds of hours of video, and the analysts tasked with making sense of it faced a problem of information overload. “We’re not going to solve it by throwing more people at the problem,” Shanahan said. “That’s the last thing that we actually want to do.” The core conceit of the project was that the machine could watch so that the analyst could think.
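Shanahan’s framing implies a simple division of labour: a detector scores the footage, and only flagged segments ever reach a human. A minimal sketch of that triage step, assuming per-frame confidence scores from some object detector; the threshold and the numbers are invented for illustration, and this is a cartoon of the idea rather than Maven’s actual pipeline.

```python
# A cartoon of "the machine watches so the analyst can think": keep only
# the frames an object detector flags. Not Maven's actual pipeline.
DETECTION_THRESHOLD = 0.8  # invented value, for illustration

def frames_for_review(scores):
    """Return indices of frames whose detector confidence crosses the threshold."""
    return [i for i, s in enumerate(scores) if s >= DETECTION_THRESHOLD]

# Ten hours of footage at one frame per second is 36,000 frames; if the
# detector flags 2% of them, an analyst reviews 720 frames, not ten hours.
example_scores = [0.10, 0.05, 0.92, 0.40, 0.87]
print(frames_for_review(example_scores))  # [2, 4]
```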

... continue reading