Anthropic might have misgivings about giving the US military unfettered access to its AI models, but some startups are building advanced AI specifically for military applications.
Smack Technologies, which announced a $32 million funding round this week, is developing models that it says will soon surpass Claude’s capabilities when it comes to planning and executing military operations. And, unlike Anthropic, the startup appears less concerned with banning specific types of military use.
“When you serve in the military, you take an oath you’re going to serve honorably, lawfully, in accordance with the rules of war,” says CEO Andy Markoff. “To me, the people who deploy the technology and make sure it is used ethically need to be in a uniform.”
Markoff is hardly a regular AI executive. A former commander in the US Marine Forces Special Operations Command, he helped execute high-stakes special forces operations in Iraq and Afghanistan. He cofounded Smack with Clint Alanis, another ex-Marine, and Dan Gould, a computer scientist who previously worked as the VP of technology at Tinder.
Smack’s models learn to identify optimal mission plans through a process of trial and error, similar to how Google DeepMind trained its Go-playing program AlphaGo. In Smack’s case, the strategy involves running the model through various war game scenarios and having expert analysts provide a signal that tells the model whether its chosen strategy will pay off. The startup may not have the budget of a conventional frontier AI lab, but it is spending millions to train its first AI models, Markoff says.
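The trial-and-error loop described above resembles a standard reinforcement learning setup: the model tries a strategy, a human evaluator scores it, and the policy shifts toward higher-scoring strategies over many episodes. The toy sketch below illustrates the general idea with a gradient-bandit update; the strategy names, the reward function standing in for analyst feedback, and all parameters are invented for illustration and have nothing to do with Smack’s actual system.

```python
import math
import random

# Hypothetical candidate "mission plans" for illustration only.
STRATEGIES = ["frontal assault", "flanking maneuver", "feint"]

def analyst_signal(strategy):
    """Stand-in for the expert-analyst feedback the article describes:
    1.0 if the chosen plan 'pays off', 0.0 otherwise. Which plan wins
    is an arbitrary assumption for this sketch."""
    return 1.0 if strategy == "flanking maneuver" else 0.0

def softmax(prefs):
    """Turn preference scores into a probability distribution."""
    total = sum(math.exp(p) for p in prefs.values())
    return {s: math.exp(p) / total for s, p in prefs.items()}

def train(episodes=3000, lr=0.1, seed=42):
    rng = random.Random(seed)
    prefs = {s: 0.0 for s in STRATEGIES}  # one preference score per plan
    baseline = 0.0                         # running average of rewards

    for t in range(1, episodes + 1):
        probs = softmax(prefs)
        # Trial: sample a plan according to the current policy.
        choice = rng.choices(list(probs), weights=list(probs.values()))[0]
        # Error signal: the evaluator scores the chosen plan.
        reward = analyst_signal(choice)
        baseline += (reward - baseline) / t
        # Gradient-bandit update: push the chosen plan up (or down)
        # relative to the baseline, and the others the opposite way.
        for s in STRATEGIES:
            grad = (1.0 if s == choice else 0.0) - probs[s]
            prefs[s] += lr * (reward - baseline) * grad

    return softmax(prefs)

probs = train()
best = max(probs, key=probs.get)
```

After enough episodes, probability mass concentrates on the plan the evaluator rewards, which is the core mechanism behind learning strategy from repeated simulated scenarios.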
Battle Lines
Military use of AI has become a hot topic in Silicon Valley after officials at the Department of Defense went head-to-head with Anthropic executives over the terms of a roughly $200 million contract.
One of the issues that led to the breakdown, which resulted in defense secretary Pete Hegseth declaring Anthropic a supply chain risk, was Anthropic’s desire to limit the use of its models in autonomous weapons.
Markoff says the furor obscures the fact that today’s large language models are not optimized for military use. General-purpose models like Claude are good at summarizing reports, he says. But they’re not trained on military data and lack a human-level understanding of the physical world, making them ill suited to controlling physical hardware. “I can tell you they are absolutely not capable of target identification,” Markoff claims.
“No one that I'm aware of in the Department of War is talking about fully automating the kill chain,” he says, referring to the steps involved in making decisions on the use of deadly force.