
The Download: how AI is used for military targeting, and the Pentagon’s war on Claude

Why This Matters

Integrating generative AI systems like Claude into military decision-making highlights both the potential for sharper strategic analysis and serious ethical and security concerns. As AI becomes more embedded in defense, careful oversight is needed to prevent misuse and ensure reliability in high-stakes scenarios, with implications for both the tech industry and global security policy.

Key Takeaways

A list of possible targets could first be fed into a generative AI system that the Pentagon is fielding for classified settings. Humans might then ask the system to analyze the information and prioritize the targets. They would then be responsible for checking and evaluating the results and recommendations.

OpenAI’s ChatGPT and xAI’s Grok could soon be at the center of exactly these sorts of high-stakes military decisions. Read the full story.

—James O'Donnell

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Pentagon’s CTO claims Claude would “pollute” the defense supply chain

He blamed a “policy preference” that’s baked into the model. (CNBC)

+ Anthropic is reeling from OpenAI’s “compromise” with the DoD. (MIT Technology Review)

2 An ex-DOGE staffer has been accused of stealing Social Security data

Then taking the information to his new job in the IT division of a government contractor. (Wired)
