
How AI Companies Got Caught Up in US Military Efforts

At the start of 2024, Anthropic, Google, Meta, and OpenAI were united against military use of their AI tools. But over the next 12 months, something changed.

In January, OpenAI quietly rescinded its ban on using AI for “military and warfare” purposes, and soon after it was reported to be working on “a number of projects” with the Pentagon. In November, in the same week that Donald Trump was reelected US president, Meta announced that the United States and select allies would be able to employ Llama for defense uses. A few days later, Anthropic announced that it too would allow its models to be used by the military and that it was partnering with the defense firm Palantir. As the year ended, OpenAI announced its own partnership with the defense startup Anduril. Finally, in February 2025, Google revised its AI principles to allow for the development and use of weapons and technologies that might harm people. Over the course of a single year, worries about the existential risks of AGI had virtually disappeared, and the military use of AI had been normalized.

Part of the change has to do with the immense costs involved in building these models. Research on general-purpose technologies (the other GPTs) has often highlighted the importance of the defense sector as a way to overcome issues of adoption. “GPTs develop faster when there’s a large, demanding, and income-generating application sector,” economist David J. Teece wrote in 2018, “such as the US Defense Department’s purchases of early transistors and microprocessors.” The soft budget constraints and long-term nature of defense contracting, combined with the often blurry metrics of success, make the military a highly desirable customer for new technologies. Given the need of AI startups, in particular, to secure large and patient investments, the turn to military funding was perhaps inevitable. But this doesn’t explain the rapidity of the shift, or the fact that all the leading American AI research labs moved in the same direction.

The past few years have dramatically shifted the landscape of capitalist competition—from one guided by neoliberal free market ideals to one saturated with geopolitical concerns. To understand the shift from neoliberalism to geopolitics, one must understand the relationships between states and their large technology companies. Such state-capitalist relationships have been central to earlier formations of imperialism—Lenin famously characterized the imperialism of his era as a merger between monopoly capital and great powers—and they remained influential throughout the 20th century. In recent decades, this took the form of a broad consensus between the tech and political elite about digital technology’s role in innovation, growth, and state power.

Over recent years, however, this harmony of interests amongst elite groups has unraveled. A series of overlapping processes, gathering particular momentum in the 2010s, has dismantled this order, leaving behind the fragments of potentially new arrangements in both the United States and China.

The Silicon Valley Consensus

Up until about the mid-2010s, the United States was dominated by what might be called the Silicon Valley Consensus: a broad agreement across both the political elite and the tech elite about the role of technology in the world, about what was required to allow that technology to flourish, about the purportedly American values it embodied, and about the requirements for capital accumulation in the technology sector. For both the tech elite and the political establishment, globalized communication, capital, data, and technology served their interests.
