Tech News

Sam Altman faced 'serious questions' in meeting with lawmakers about OpenAI's defense work

Why This Matters

The article highlights the growing scrutiny and ethical concerns surrounding AI companies like OpenAI engaging with military and government agencies. It underscores the importance of establishing guardrails and safety principles to ensure AI technology is used responsibly, especially in sensitive areas like defense and surveillance. This development signals a critical conversation about balancing innovation with ethical and national security considerations in the tech industry.

Key Takeaways

OpenAI formed a deal with the DOD late last month just hours after rival Anthropic had been blacklisted by Defense Secretary Pete Hegseth, who declared the company a "Supply-Chain Risk to National Security."

"There's got to be guardrails in place, and we've got to make sure that we're always thinking about the Constitution and making sure that we comply with it," Kelly said.

In an interview with CNBC's Emily Wilkins, Kelly said the group talked "in detail" about surveillance and how artificial intelligence systems could be used within a kill chain. He called it a "good discussion."

OpenAI CEO Sam Altman met with a handful of lawmakers in Washington, D.C., where Sen. Mark Kelly, D-Ariz., said he raised some "serious questions" about the company's approach to warfare and its recent deal with the Department of Defense.

Anthropic had been trying to renegotiate its contract with the DOD, but the talks stalled over a disagreement about how the technology could be used. The DOD wanted Anthropic to grant the military unfettered access to its models for all lawful purposes, while Anthropic sought assurance that its models would not be used for fully autonomous weapons or domestic mass surveillance.

Altman said in a post on X, on the day Anthropic's deal fell apart, that prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems, are two of the company's "most important safety principles." He said the DOD agreed and put them into the arrangement.

OpenAI published an excerpt of its contract with the DOD, which says that the agency "may use the AI System for all lawful purposes." The company said it's confident the DOD will not be able to use its AI systems for mass surveillance or fully autonomous weapons because of OpenAI's safety stack, the contract language and existing laws.

"We think it's very important to support the United States government and the democratic process," Altman told CNBC on Thursday. He added that while OpenAI disagrees with the Pentagon's decision to designate Anthropic a supply chain risk, he thinks the government needs to be able to make decisions about "how the most important things in the country are going to work."

The clash between Anthropic and the DOD came as a shock to officials and technologists in Washington. Many had come to view Anthropic's models as superior — they were the first to be deployed in the agency's classified networks — and championed the company's ability to integrate with existing defense contractors like Palantir.

Kelly is working with other senators to draft legislation that would set guardrails around DOD contracts with AI organizations, and said that Congress "needs to have a role."
