Tech News

Top OpenAI Executive Quits in Protest

Why This Matters

The resignation of a top OpenAI executive over the company's military contract highlights ongoing ethical debates about AI's role in national security and autonomous weaponry. The incident underscores the importance of aligning AI development with ethical standards and public trust, and it signals potential shifts in how AI companies navigate government partnerships amid such concerns.

Key Takeaways


A top OpenAI executive has quit the company over its agreement with the Department of Defense that allows its tech to be deployed across the military.

The employee, Caitlin Kalinowski, who led OpenAI’s hardware and robotics efforts, announced her resignation on social media Saturday.

“This wasn’t an easy call. AI has an important role in national security,” wrote Kalinowski in a tweet. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

“This was about principle, not people,” she added, insisting she still has “deep respect” for CEO Sam Altman.

Last month, OpenAI announced a new deal with the Pentagon while its rival Anthropic faced threats from top Trump officials for declining a similar agreement. Talks between Anthropic and the Pentagon had fallen apart because, according to CEO Dario Amodei, the company insisted on prohibiting its AI systems from being used for mass surveillance of US citizens or in autonomous weaponry without humans in the loop. When it became clear Anthropic wouldn't budge, the Pentagon cut the company off and made good on its threat to declare it a "supply chain risk," a designation that bars it from signing any military contracts.

The deal spiralled into a public relations disaster for OpenAI, with many criticizing Altman for playing ball with a deeply unpopular and bellicose administration. Originally, OpenAI had agreed that its AI systems could be used for "all lawful purposes," and only after heated backlash did Altman say he would update the Pentagon deal to include specific protections on surveillance and autonomous weaponry.

In short, OpenAI looked like it was selling out while Anthropic came out looking like heroes. The news sparked a mass exodus of users from OpenAI's ChatGPT to Anthropic's Claude, with the Claude app displacing ChatGPT at the top of the App Store.

There was dissent within the industry, too. Over 1,000 former and current workers from OpenAI and Google have signed an open letter demanding that their employers refuse the Pentagon's demands to use AI tech for mass surveillance and autonomous weaponry.

Kalinowski, explaining her departure, said that her “issue is that the announcement was rushed without the guardrails defined.”
