Anthropic's Claude Mythos AI model made headlines last week, causing a wave of frenzy in the industry over its purported abilities, which include finding bugs in browsers and operating systems. The announcement spawned "Project Glasswing," an initiative in which Anthropic teams up with tech titans to ensure their products are patched before Mythos, which is still in preview, is released into the wild.
While the reports sound extreme, the reality of Claude Mythos's abilities isn't quite so dramatic; it's not a sentient model capable of bringing modern technology to its knees. Following the announcement, Aisle published a paper indicating that other AI models can deliver levels of performance similar to Mythos in finding exploits (and patching them). Although there is some suggestion that Mythos is the best AI model for aiding cybersecurity efforts, it does not lead by a wide margin.
Researchers put Mythos to the test
AI use in cybersecurity is nothing new. Researchers have been trying to use it in defensive and offensive operations since the 1980s, but it became far more viable as a method of detecting threats like malware in the 2000s and 2010s, when the quantity of labeled data grew large enough to make a real difference, and that trend has only accelerated since.
But Anthropic has pitched its newest AI model as something different, something dangerous. It positions Mythos as so powerful that it could find zero-day exploits in just about everything, claiming many of these are so critical that Anthropic needs to share this AI only with responsible companies. If it can find the bugs, it can help exploit them, is the publicly shared rationale.
The problem for Anthropic is that a bunch of other AI models can do most of the same job as Mythos already.
Aisle's research found that many of the flagship vulnerabilities discovered by Mythos can also be detected by more affordable, open-source models: GPT-OSS-120b found the OpenBSD Sack analysis vulnerability, Qwen3 32B found the FreeBSD NFS detection error, and the Kimi K2 (open-weight) model found all of the headline-grabbing flaws.
It's more complicated than that
Aisle's analysis also points out how Anthropic frames AI cybersecurity as a single overarching tool that can carry out the many stages of vulnerability discovery, verification, exploitation, and patching. In reality, these are separate steps with different requirements, and some of them can be achieved to a high standard by the lighter-weight models Aisle trialed.