
Access to frontier AI will soon be limited by economic and security constraints

Why This Matters

The increasing restriction of access to frontier AI models, driven by economic and security concerns, marks a significant shift in the AI landscape. This trend could stifle innovation, undermine competitive fairness, and hamper global collaboration, affecting both industry progress and consumer safety. As access becomes more controlled, the tech industry must adapt to new constraints that could reshape AI development and deployment strategies.

Key Takeaways

There’s a common mantra on the outskirts of AI policy thought: driven by market pressures and overheated capital markets, AI tokens will soon be abundant, and the future belongs to those who can use them best. The further you get from San Francisco, the louder this mantra grows. It reaches a fever pitch in the peripheries, among the many middle powers of the world still planning to navigate the AI revolution on the basis of merely good-enough models. That view requires important AI capabilities to be widely accessible: defenders get access to models before attackers do, and firms in every domain compete on access to the same AI capabilities.

Recent events have thrown that view for a loop, and it now seems clear that access to frontier AI will soon be limited by economic and security constraints. In early April, Anthropic announced that it had developed Mythos, a leading cybersecurity model, and that it would make the model’s considerable ability to patch extant vulnerabilities available to only a select few companies. Cybersecurity start-ups in the Mission District, systems integrators on the Eastern Seaboard, and allied capitals on the Atlantic and Pacific all had a similar experience: scrolling down the page to the list of privileged partners, only to find a limited selection of U.S.-based corporations.

Perhaps you were hopeful that OpenAI would stick to its preferred method of rollout and release gpt-5.5-cyber, a model reportedly similar to Mythos in capabilities, more broadly. And yet it did not: with its Daybreak initiative, OpenAI too committed to a limited release, dispelling hopes that this was a fluke or ‘doomer’ marketing. Even worse: while it’s not quite clear to anyone, including the U.S. government itself, what exactly it will do about all this, by all reports it is at least planning to do something at some point. And while it’s easy to dismiss this as a confluence of current events, the Mythos moment actually reveals structural trends that have been ramping up for a while.

Mythos and Reality

Three trends—compute, security, and U.S. government involvement—will further constrain the availability of frontier AI in the future. They compound and reinforce each other, and have dramatically accelerated in recent weeks and months. Everyone outside the inner circle of U.S.-based developers needs to grapple with that fact.

Security & Distillation

The first and most obvious constraint on widespread availability is the one we’ve seen in the Mythos context: security considerations prevent developers from providing top-tier capabilities to every paying customer.

The canonical story starts with misuse risks: a highly capable new model looks realistically useful for some sort of dangerous activity, such as cyberattacks or biological weapons design. Instead of rolling it out to the general public right away, you might first distribute it to defenders who can use their early access to shore up vulnerabilities, as we’ve seen in the case of Mythos. You continue by rolling out some models only to customers you’re reasonably sure won’t outright abuse them for criminal purposes; and perhaps only once the model is no longer state-of-the-art do you roll it out to everyone.
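To make that staged logic concrete, here is a minimal, purely illustrative Python sketch. The tier names, fields, and gating rules are assumptions drawn from the description above, not anything Anthropic or OpenAI has published:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Tier(Enum):
    """Hypothetical access tiers in a staged frontier-model rollout."""
    DEFENDER = auto()         # vetted defensive users get early access
    VETTED_CUSTOMER = auto()  # customers screened against misuse
    PUBLIC = auto()           # everyone, once the model is no longer frontier

@dataclass
class ModelRelease:
    name: str
    is_frontier: bool           # still state-of-the-art?
    defender_window_open: bool  # early-access phase for defenders

    def accessible_to(self, tier: Tier) -> bool:
        """Decide whether a given tier may use this model right now."""
        if tier is Tier.DEFENDER:
            return True  # defenders have access at every stage
        if tier is Tier.VETTED_CUSTOMER:
            return not self.defender_window_open
        # the general public waits until the model falls behind the frontier
        return not self.is_frontier

release = ModelRelease(name="mythos", is_frontier=True, defender_window_open=True)
print(release.accessible_to(Tier.DEFENDER))         # True
print(release.accessible_to(Tier.VETTED_CUSTOMER))  # False
print(release.accessible_to(Tier.PUBLIC))           # False
```

The point of the sketch is simply that each stage of the rollout is a loosening of one gate at a time: first the defender window closes, then, eventually, the frontier flag flips.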

We’re already seeing the second stage: the U.S. government realises that this sort of restricted access serves both the national interest and national security, and starts flirting with the idea of turning the virtuous early example into a general rule. There are many reasons for the national security apparatus to do this: perhaps it doesn’t trust AI developers to keep dangerous capabilities away from just-as-dangerous criminals, non-state actors, and adversaries. Or perhaps it would rather know which exploits the new models are about to reveal so it can use them itself first, as it has done before. Put differently: if I were the NSA sitting on a bunch of zero-days, I’d also love to know which of them Mythos can find, so I could use them to my advantage before everyone gets their patch online.

Beyond misuse risks, there’s another dimension that might motivate even more straightforward crackdowns on availability: risks of model theft, espionage, and distillation. Theft and espionage would make developers wary of where to host models: weights in an unsecured datacenter pose a substantial vulnerability, and many countries outside the U.S. haven’t even started thinking about securing datacenters. But distillation is the more pressing concern. Multiple reports indicate that part of the success story of so-called fast followers, model developers 6-9 months behind the frontier like China’s DeepSeek, rests on distillation practices that require more or less unfettered access to API tokens.
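Mechanically, this kind of distillation is unglamorous: buy large volumes of completions from a frontier model and use them as supervised training data for a cheaper student. The sketch below illustrates the pattern; the endpoint URL, model name, and response shape follow the generic chat-completions convention and are placeholders, not any particular lab’s real API:

```python
# Illustrative sketch of API-based distillation; all names are hypothetical.
import json
import requests  # pip install requests

TEACHER_URL = "https://api.example.com/v1/chat/completions"  # placeholder
API_KEY = "sk-..."  # the 'unfettered API access' the article refers to

def query_teacher(prompt: str) -> str:
    """Buy one frontier-model completion; at scale, these pairs become
    supervised training data for a cheaper student model."""
    resp = requests.post(
        TEACHER_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "frontier-teacher",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def build_distillation_set(prompts: list[str], path: str) -> None:
    """Harvest (prompt, teacher answer) pairs into a fine-tuning file."""
    with open(path, "w") as f:
        for prompt in prompts:
            pair = {"prompt": prompt, "completion": query_teacher(prompt)}
            f.write(json.dumps(pair) + "\n")

# A fast follower then fine-tunes its own student model on this file,
# approximating the teacher's behavior without its training cost.
```

Rate limits, usage caps, and customer vetting are exactly the levers a developer would pull to make this harder, which is why distillation worries translate so directly into restricted availability.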
