Hello and welcome to Regulator, a newsletter exclusively for Verge subscribers about tech, politics, and Washington intrigue. (It’s basically House of Cards, but for nerds.) Not a subscriber yet? You really should become one, and to save you a Google search, here is the direct link to do so! And if you think I should know something, send it to [email protected].
On Monday, The New York Times reported that the White House was considering having the government review AI models before release. To the casual Verge reader, it appeared to be a total reversal in Donald Trump’s policies. For the past year, he had been a vocal champion of pro-industry deregulation, repealing former President Joe Biden’s massive executive order on AI safety, lifting export controls on advanced chips, and signing executive orders that would have legally punished states for passing and enforcing AI laws in the vacuum of federal legislation. Now, the Trump administration has seemingly pulled a 180, demanding federal oversight and vetting of pre-market models.
But to Washington insiders, the White House’s shift came down to three major changes. First, Anthropic’s Mythos has genuinely spooked the national security apparatus, forcing the administration to confront a new threat: the possibility of adversaries using American AI models to attack America’s public and private sectors. Second, other countries are now beginning to lay out their own AI regulations, potentially in a manner that would go against the interests of the United States. (And yes, “destroying a Big Tech data center in a targeted drone strike” is a manner of government AI regulation, but we’ll get to that shortly.)
And third, David Sacks was pushed out of his job as the AI and crypto czar, giving Silicon Valley one less mechanism to pitch an industry-friendly, “innovation-at-all-costs” agenda to Trump himself.
The definition of political influence can be squishy and amorphous, especially around Donald Trump, who will pick up anyone’s calls and then act on that advice if he feels like it. (Remember when Laura Loomer had control over the National Security Council?) But what’s legally certain is that Sacks, the billionaire venture capitalist and Trump fundraiser in 2024, no longer has the privileges available to him as a special government employee, such as the ability to review sensitive information, to speak on behalf of the White House, or to hold official influence over government employees and agencies.
Instead, the “special government employee,” who was supposed to spend only 130 days working in the administration and somehow stuck around for an entire year, actively undermined the administration and torched its relationship with its political allies. During Sacks’ tenure, the White House went beyond simply advocating for less regulation: it tried twice to get Congress to pass a moratorium on state AI laws and, failing that, turned to an executive order that would grant the Trump administration the power to sue states passing or enforcing such laws. But his Valley-esque tactics, to say nothing of his attempts to consolidate power over AI policy by boxing out existing agencies, ended up infuriating Republican and MAGA allies while alienating vast swaths of Trump’s base. (In fact, the campaign was so unsuccessful that when unnamed White House officials recently tried to pressure certain red states into dropping pending AI legislation, claiming the bills went against Trump’s agenda, four GOP state lawmakers spoke on the record to The Wall Street Journal instead. Then again, if the agenda was just to kill those bills in the cradle, it succeeded.)
Even if Sacks hadn’t crashed and burned (to say nothing of publicly criticizing Donald Trump, a man who does not like being criticized, for continuing to wage war against Iran), the job has also grown too big for one part-time employee with ongoing ties to the private sector to handle. In recent months, the aperture of America’s AI policy has widened far beyond Sacks’ pro-innovation 2025 remit, into areas where a lack of regulation would be wildly irresponsible: national security and geopolitical stability.
A major turning point was the leak of Anthropic’s Mythos, the AI model that was so powerful at finding cybersecurity vulnerabilities that the company, whose reputation hinges on acting more responsibly than its competitors, refused to release it to the public. The possibility of a Mythos-level model becoming commercially available spooked the national security apparatus and the financial industry, and seized the attention of three powerful White House figures: Treasury Secretary Scott Bessent, Commerce Secretary Howard Lutnick, and Chief of Staff Susie Wiles.
When Bessent and Wiles met with Anthropic CEO Dario Amodei in April, it signaled that not only were they taking the threat seriously, they were now overriding Anthropic’s enemies in the Pentagon, who had, months prior, convinced Trump that Anthropic was “woke” and should be banned for government use.
“The national security implications of something like Mythos are hard to deny, and legitimately urgent national security issues are not easy to politicize,” Charlie Bullock, a senior research fellow at the Institute for Law and AI, told The Verge. “Once serious national security people get involved, it’s hard to dismiss or politicize the issue.”