
Google and Pentagon reportedly agree on deal for ‘any lawful’ use of AI

Why This Matters

Google's classified AI deal with the Pentagon marks a significant shift in the tech industry's involvement in government military and surveillance work, raising important ethical and strategic questions. The partnership underscores national security's growing reliance on advanced AI, which could shape future industry standards and regulations. It also highlights ongoing tensions within tech companies over ethical boundaries and government collaboration.


Google has signed a classified deal that allows the US Department of Defense to use its AI models for “any lawful government purpose,” The Information reports. The agreement was reported less than a day after Google employees demanded CEO Sundar Pichai block the Pentagon from using its AI amid concerns that it would be used in “inhumane or extremely harmful ways.”

If the agreement is confirmed, it would place Google alongside OpenAI and xAI, which have also made classified AI deals with the US government. Anthropic was also among that list until it was blacklisted by the Pentagon for refusing the Department of Defense’s demands to remove weapon and surveillance-related guardrails from its AI models.

Citing a single anonymous source “with knowledge of the situation,” The Information reports that under the deal, both parties have agreed that the search giant’s AI systems shouldn’t be used for domestic mass surveillance or autonomous weapons “without appropriate human oversight and control.” But the contract also states that it doesn’t give Google “any right to control or veto lawful government operational decision-making,” suggesting the agreed restrictions are more of a pinky promise than legally binding obligations.

In a statement to Reuters, a Google spokesperson said the company maintains that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight. “We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security,” Google told the outlet.