Tech News

Google, Microsoft, and xAI will allow the US government to review their new AI models

Why This Matters

The collaboration of Google DeepMind, Microsoft, and xAI with the US government to review AI models before release marks a significant step toward increased regulation and oversight in the AI industry. The initiative aims to ensure that new AI technologies are evaluated for safety and security, aligning industry development with national interests. Such measures could shape future AI governance, affecting how companies develop and deploy AI solutions for consumers and businesses alike.


Google DeepMind, Microsoft, and Elon Musk’s xAI have agreed to allow the US government to review new AI models before they’re released to the public. In an announcement on Tuesday, the Commerce Department’s Center for AI Standards and Innovation (CAISI) says it will work with the AI companies to perform “pre-deployment evaluations and targeted research to better assess frontier AI capabilities.”

CAISI, which started evaluating models from OpenAI and Anthropic in 2024, says it has performed 40 reviews so far. Both companies “have renegotiated their existing partnerships with the center to better align with priorities in President Donald Trump’s AI Action Plan,” according to Bloomberg.

The White House may take these evaluations even further in the future, as a Monday report from The New York Times suggests that Trump is considering an executive order that would “bring together tech executives and government officials” to oversee new AI models.

Here’s what CAISI director Chris Fall had to say in the press release:

Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications. These expanded industry collaborations help us scale our work in the public interest at a critical moment.