
Google Moves Forward With Pentagon AI Deal Despite Employee Pushback

Why This Matters

Google's decision to provide the Pentagon with access to its AI models for classified military purposes highlights the ongoing debate over ethical AI use in defense. While the company emphasizes responsible oversight, internal employee pushback underscores concerns about the potential misuse of AI technology in military applications. This development signals a broader industry challenge of balancing innovation, national security, and ethical considerations in AI deployment.

Key Takeaways

Google has reportedly signed an agreement allowing the US Department of Defense to use its AI models for classified work, despite an open letter from hundreds of employees urging the company to stay away from military uses that they say could become dangerous or impossible to oversee.

The deal, reported earlier Tuesday by The Information, allows the Pentagon to use Google's AI tools for "any lawful government purpose," including sensitive military applications. Google joins OpenAI and xAI, which have also struck similar classified AI agreements with the Pentagon.

The reported agreement includes language stating that Google's AI system is not intended for domestic mass surveillance or for autonomous weapons without appropriate human oversight. But it also says Google doesn't have the right to control or veto lawful government operational decisions, according to reports. Google will also help adjust safety settings and filters at the government's request.

A Google spokesperson told CNET in an emailed statement that the company remains committed to the position that AI shouldn't be used for domestic mass surveillance or autonomous weapons without human oversight, and said providing API access to commercial models under standard practices is a "responsible approach" to supporting national security.

The Pentagon declined to comment to CNET.

The deal lands in the middle of an internal backlash. In an open letter addressed to CEO Sundar Pichai, more than 600 Google employees asked the company to "refuse to make our AI systems available for classified workloads." The employees wrote that because they work close to the technology, they have a responsibility to highlight and prevent its "most unethical and dangerous uses."

"We want to see AI benefit humanity, not to see it being used in inhumane or extremely harmful ways," the letter says. The employees said their concerns include lethal autonomous weapons and mass surveillance, but extend beyond those examples because classified work could happen without employees' knowledge or ability to stop it.

The tension echoes one of Google's most prominent internal revolts. In 2018, thousands of workers protested Project Maven, a Pentagon program involving AI analysis of drone footage. Google later chose not to renew that contract.

The company's posture toward military and national-security AI has shifted since then.

Last year, Google removed language from its AI principles that had pledged not to pursue technologies likely to cause overall harm, weapons, certain surveillance technologies, or systems that violate widely accepted principles of human rights and international law.
