Last August, some of the best cybersecurity teams in the business gathered in Las Vegas to demonstrate the strength of their AI bug-finding systems at DARPA’s Artificial Intelligence Cyber Challenge (AIxCC). The tools had scanned 54 million lines of real software code into which DARPA had injected artificial flaws. The teams identified most of the planted bugs, but their automated tools went further: they found more than a dozen bugs that DARPA hadn’t inserted at all.
Even before the security earthquake that Anthropic delivered this month with Claude Mythos — the new AI model that seems to find vulnerabilities in every piece of software it’s pointed at — automated systems were growing increasingly capable of finding coding flaws. And fears are growing that AI can not only detect these flaws but also be used to exploit them, putting hacking capabilities into the hands of virtually anyone.
“Mythos or not, this is coming.”
This isn’t an empty threat. For decades, this type of no-skill hacker, known as a script kiddie, has wreaked havoc, running scripts ripped from the internet or copied from exploit tool kits. They didn’t fully understand the scripts and lacked the technical know-how to write them themselves. And yet they were still able to deface websites and spread viruses.
What’s happening now represents a major escalation, where people without technical backgrounds are able to use AI to enhance their capabilities in a way that wasn’t possible with simple scripts. It is likely to have far more wide-reaching repercussions.
“There’s a tidal wave coming. You can see it. We can all see it,” said Dan Guido, CEO and cofounder of cybersecurity firm Trail of Bits, which was a runner-up in the challenge. “Are you going to lay down and die, or are you going to do something about it?”
Image: Joseph Rogers / The Verge
Even beyond Project Glasswing, Anthropic is trying to prevent the misuse of its software by criminals. A week after announcing Mythos, the company released Claude Opus 4.7, the first of its models to include built-in safeguards meant to block malicious cybersecurity requests. (Security professionals who want to use the model defensively can apply to the company’s Cyber Verification Program.)
Anthropic’s announcement of Mythos sent shockwaves throughout the industry, but there were warning signs of AI’s cybersecurity prowess prior to it. In June 2025, the autonomous offensive security platform XBOW beat out human hackers to top the leaderboard of HackerOne, a bug bounty platform, indicating big leaps in the ability of AI models to find bugs.
By the time AIxCC rolled around, “there were already 10 to 20 different bug-finding systems that could find orders of magnitude more bugs than we could patch,” Guido said. “This is actually not a new problem.”