On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced that his own company had successfully negotiated new terms with the Pentagon. The US government had just moved to blacklist Anthropic for standing firm on two red lines for military use: no mass surveillance of Americans and no lethal autonomous weapons (or AI systems with the power to kill targets without human oversight). Altman, however, implied that he’d found a unique way to keep those same limits in OpenAI’s contract.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he added, using the Trump Administration’s preferred name for the Defense Department, the Department of War.
Across social media and the AI industry, people immediately began to challenge Altman’s claim. Why, they asked, would the Pentagon suddenly agree to these red lines when it had said — in no uncertain terms — that it would never do so?
The answer, sources told The Verge, is that the Pentagon didn’t budge. OpenAI agreed to follow existing laws — laws that have allowed for mass surveillance in the past — while insisting those laws protect its red lines.
One source familiar with the Pentagon’s negotiations with AI companies confirmed that OpenAI’s deal is much softer than the one Anthropic was pushing for, thanks largely to three words: “any lawful use.” In negotiations, the person said, the Pentagon wouldn’t back down on its desire to collect and analyze bulk data on Americans. If you look line by line at the OpenAI terms, the source said, every aspect boils down to this: if it’s technically legal, the US military can use OpenAI’s technology to carry it out. And over the past few decades, the US government has stretched the definition of “technically legal” to cover sweeping mass surveillance programs — and more.
OpenAI’s former head of policy research, Miles Brundage, said on X that “in light of what external lawyers and the Pentagon are saying, OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them.”
In a statement to The Verge, OpenAI spokesperson Kate Waters said the Pentagon had not asked for mass surveillance powers and denied that the agreement allowed for the crossing of certain lines. “The system cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way,” Waters said.
AI systems could help the military (or other departments) conduct widespread surveillance operations with unprecedented levels of detail. AI’s best talent is finding patterns, and human behavior is nothing if not a set of patterns — imagine an AI system layering, for any one individual, geolocation data, web browsing information, personal financial data, CCTV footage, voter registration records, and more — some publicly available, some purchased from data brokers. “Using these systems for mass domestic surveillance is incompatible with democratic values,” Anthropic CEO Dario Amodei wrote in a statement. “Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.”
While Anthropic says it pushed for a contract that specifically proscribes the practice, OpenAI appears to rely heavily on existing legal limits. It said its Pentagon agreement states that “for intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.”