Why the moat is the system, not the model
TL;DR: We took the showcase vulnerabilities from Anthropic's Mythos announcement and tested them on small, cheap, open-weights models. Those models recovered much of the same analysis. AI cybersecurity capability is very jagged: it does not scale smoothly with model size, and the moat is the system into which deep security expertise is built, not the model itself. Mythos validates the approach, but it does not yet settle it.
The announcement
On April 7, Anthropic announced Claude Mythos Preview and Project Glasswing, a consortium of technology companies formed to use Mythos, Anthropic's new, limited-access AI model, to find and patch security vulnerabilities in critical software. Anthropic committed up to $100M in usage credits and $4M in direct donations to open source security organizations.
The accompanying technical blog post from Anthropic's red team describes Mythos autonomously finding thousands of zero-day vulnerabilities across every major operating system and web browser, including a 27-year-old bug in OpenBSD and a 16-year-old bug in FFmpeg. Beyond discovery, the post details highly sophisticated exploit construction: multi-vulnerability privilege escalation chains in the Linux kernel, JIT heap sprays that escape browser sandboxes, and a remote code execution exploit against FreeBSD that Mythos wrote autonomously.
This is important work, and the mission is one we share. We've spent the past year building and operating an AI system that discovers, validates, and patches zero-day vulnerabilities in critical open source software. Results of the kind Anthropic describes are real.
But here is what we found when we tested those claims. We took the specific vulnerabilities Anthropic showcases in the announcement, isolated the relevant code, and ran it through small, cheap, open-weights models. Those models recovered much of the same analysis. Eight out of eight models flagged the vulnerability behind Mythos's flagship FreeBSD exploit, including one with only 3.6 billion active parameters that costs $0.11 per million tokens. A 5.1B-active open model recovered the core chain of the 27-year-old OpenBSD bug.
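For concreteness, here is a minimal sketch of that kind of harness. It assumes an OpenAI-compatible endpoint serving open-weights models (e.g. a local vLLM server); the model names, file name, and prompt are illustrative assumptions, not our production pipeline, and real runs also require blinded scoring of the models' findings against the known bugs.

```python
# Minimal sketch: send isolated vulnerable code to several small
# open-weights models and collect their security analyses.
# All names below (endpoint, models, file, prompt) are illustrative.
from openai import OpenAI

# Any OpenAI-compatible server works here, e.g. vLLM serving open weights.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

PROMPT = (
    "You are a security auditor. Review the following C code for "
    "memory-safety and logic vulnerabilities. For each finding, name "
    "the flaw, the affected function, and a plausible trigger."
)

def audit(model: str, isolated_code: str) -> str:
    """Ask one model to analyze one isolated code snippet."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": isolated_code},
        ],
        temperature=0.0,  # keep output comparable across models
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # The relevant code, isolated from the project it ships in.
    snippet = open("freebsd_isolated.c").read()
    for model in ["small-moe-3.6b-active", "open-moe-5.1b-active"]:  # hypothetical names
        print(f"=== {model} ===")
        print(audit(model, snippet))
```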
On a basic security reasoning task, moreover, small open models outperformed most frontier models from every major lab. Capability rankings reshuffled completely from task to task: there is no stable best model for cybersecurity work. The capability frontier is jagged.
This points to a more nuanced picture than "one model changed everything." The rest of this post presents the evidence in detail.
Context: where AI cybersecurity already stands