Tech News

Trump administration considers mandatory pre-release vetting of AI models — Anthropic's Mythos cited as catalyst for policy reversal

Why This Matters

The Trump administration's reported consideration of mandatory pre-release vetting for AI models would mark a significant shift toward regulation of the AI industry, aimed at addressing safety and security concerns. Such a move could reshape how AI companies develop and deploy models, balancing innovation against oversight. The focus on models like Anthropic's Mythos underscores the growing weight of safety assessments in AI development.

Key Takeaways

The Trump administration is said to be discussing an executive order that would establish a government review process for new AI models before they’re released to the public, The New York Times has reported, citing unnamed U.S. officials.

The proposed order would create an "AI working group" of tech executives and government officials to develop oversight procedures, with White House staff briefing leaders from Anthropic, Google, and OpenAI on the plans last week. If the report is accurate, these discussions would represent a sharp departure from the administration's current stance as something of a deregulatory champion: immediately upon taking office, the Trump administration revoked a Biden-era executive order addressing AI risks.

The sudden reversal coincides with a leadership vacuum in White House AI policy. David Sacks, who led the administration's deregulation push as AI czar, left the role in March, with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent having since taken a more active role in shaping AI policy, according to The New York Times.


The new approach sounds a lot like the UK's AI Security Institute model, in which government bodies evaluate frontier models against safety benchmarks before and after deployment. Officials told The New York Times that the NSA, the Office of the National Cyber Director, and the Director of National Intelligence could oversee the review. Critically, the system would grant the government early access to models without blocking their release.

Perhaps unsurprisingly, the catalyst for all this appears to have been Anthropic's Mythos model, which the company's marketing described both as capable of finding thousands of critical software vulnerabilities and as too dangerous for public release.

That naturally attracted a lot of unwanted government attention at a time when the Trump administration is already locking horns with Anthropic over the collapsed $200 million Pentagon contract. The Pentagon designated Anthropic a supply chain risk after the company refused to remove guardrails on autonomous weapons and mass surveillance, though a federal judge later called that "Orwellian."

The NSA has already used Mythos to assess vulnerabilities in government Microsoft software deployments, even as other agencies remain cut off from Anthropic's tools. Some analysts have questioned whether Mythos's capabilities justify Anthropic's dramatic framing; some studies have found that cheaper models achieve comparable results in vulnerability discovery.


A White House official told The New York Times that talk of an executive order is "speculation," and that any announcement would come from Trump himself. Dean Ball, a former senior adviser on AI in the Trump administration, told the newspaper that officials are trying to avoid overregulation while keeping pace with the technology, calling it a “tricky balance.”
