
Google and the Pentagon sign classified deal to give the Department of Defense unfettered access to its AI models

Why This Matters

Google's classified AI deal with the Pentagon marks a significant shift in the tech industry's collaboration with government agencies, raising concerns about ethical use and oversight of AI technology. While intended to support national security, it highlights ongoing debates over transparency, ethical boundaries, and the potential for misuse of AI in sensitive applications. This development underscores the complex balance between technological innovation and ethical responsibility in the industry.

Key Takeaways

Google has signed a deal that allows the US Department of Defense to use its AI models for "any lawful government purpose." This is according to a report by The Information, which also notes that the full details of the contract are classified.

An anonymous source within the company has suggested that the two entities have agreed that the search giant's AI tech shouldn't be used for domestic mass surveillance or autonomous weapons "without appropriate human oversight and control." However, the contract also reportedly doesn't give Google "any right to control or veto" anything the government decides to do. In other words, the famously trustworthy US government will just have to be taken at its word.

“We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security,” a Google spokesperson told Reuters. The spokesperson also echoed that the company holds the opinion that AI shouldn't be used for mass surveillance or autonomous weaponry without appropriate human oversight. Some might argue that the technology shouldn't be used for that stuff at all, oversight or not.

To that end, nearly 600 Google employees recently penned an open letter to CEO Sundar Pichai urging the company not to make this kind of deal with the Pentagon, out of concern that the tech would be used in "inhumane or extremely harmful ways."


"Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we are playing a key role in building," the letter states. "As people working on AI, we know that these systems can centralize power and that they do make mistakes."

Google joins OpenAI and Elon Musk's xAI in this endeavor, as both have already made classified AI deals with the US government. Anthropic had a deal in place, but refused the government's demands to remove weapon- and surveillance-related safeguards.

That refusal annoyed President Trump and the Pentagon so much that Anthropic was entirely blacklisted from federal use. This doesn't exactly sound like the actions of a government that is dedicated to "appropriate human oversight and control" of dangerous AI military tech. Engadget has reached out to Google to ask for more specifics and will update this post when we hear back.