Dario Amodei, co-founder and chief executive officer of Anthropic, at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026. Prakash Singh | Bloomberg | Getty Images
But to those fighting in the trenches of cyber warfare, one of the key capabilities advertised by Anthropic — to find software vulnerabilities at scale — has been around since last year. "The models that we have right now are powerful enough to detect zero days in a large scale, and this is scary enough," Klaudia Kloc, CEO of cybersecurity firm Vidoc, told CNBC. That has been the case for "a couple of months, if not a year," she said. The term "zero-day" refers to a previously unknown software flaw that hasn't been patched, giving attackers a window to exploit it before defenders can respond.

Researchers at Vidoc leaned on a technique called "orchestration" to test whether they could find the same vulnerabilities that Mythos did. As the name suggests, the process involves creating workflows that split code into smaller pieces, coordinating between various tools or models to cross-check results. "We ran older models against the same code base to see if we'd be able to detect the same vulnerabilities," Kloc said. "We did, with both OpenAI and Anthropic's older models."

Another cybersecurity firm, Aisle, found that many of Mythos's headline results could be reproduced using cheaper models working in parallel — suggesting that scale and coordination were more important than having the latest model. "A thousand adequate detectives searching everywhere will find more bugs than one brilliant detective who has to guess where to look," Aisle founder Stanislav Fort wrote in a blog post.

In comments to CNBC, Anthropic didn't dispute that earlier models were capable of finding software vulnerabilities. In fact, a company spokesperson said, Anthropic has been warning for months that AI's cyber capabilities were advancing rapidly. They pointed to a February blog post showing that Claude Opus 4.6, a widely available model, found more than 500 "high severity" vulnerabilities in open-source software.
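To make the "orchestration" idea concrete, here is a minimal sketch of the pattern the researchers describe: split a code base into smaller chunks, run several independent scanners over each chunk in parallel, and keep only the findings that multiple scanners agree on. The scanner functions and file names below are hypothetical stand-ins, not Vidoc's actual tooling or any real model API.

```python
# Sketch of an "orchestration" workflow: chunk the code base, fan work out
# to multiple independent scanners, and cross-check their findings.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor


def split_into_chunks(files, chunk_size=2):
    """Split a list of source files into smaller work units."""
    return [files[i:i + chunk_size] for i in range(0, len(files), chunk_size)]


def scanner_a(chunk):
    # Stand-in for one model/tool; returns (file, finding) pairs.
    return {(f, "sql-injection") for f in chunk if "query" in f}


def scanner_b(chunk):
    # A second, independent stand-in scanner with overlapping coverage.
    return ({(f, "sql-injection") for f in chunk if "query" in f}
            | {(f, "xss") for f in chunk if "template" in f})


def orchestrate(files, scanners, quorum=2):
    """Run every scanner on every chunk in parallel; cross-check results."""
    votes = Counter()
    with ThreadPoolExecutor() as pool:
        for chunk in split_into_chunks(files):
            for findings in pool.map(lambda scan: scan(chunk), scanners):
                votes.update(findings)
    # Keep only findings confirmed by at least `quorum` scanners.
    return {finding for finding, n in votes.items() if n >= quorum}


files = ["auth/query.py", "ui/template.py", "docs/readme.md"]
confirmed = orchestrate(files, [scanner_a, scanner_b])
# Only the finding both scanners agree on survives the cross-check.
```

The quorum step is what Fort's "thousand adequate detectives" framing gets at: individually weak scanners become useful when their overlapping results are combined, rather than relying on one model's single pass.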
At the Anthropic event this week, Amodei affirmed this point, saying that while the scale of software vulnerabilities found by Mythos surged from earlier models, the trend wasn't new. "The risks are very real. This is why we took the actions we did," Amodei said. "But they're also, in some sense, not that surprising. ... We've been seeing warnings of this for a while."
Hysteria and panic
What makes Mythos different is its ability to take the next step, developing working exploits with little or no human input, effectively automating a process that previously required skilled researchers, the Anthropic spokesperson said.

But hackers working for criminal groups and adversarial nations already have this skill set, cyber researchers say. Hackers in North Korea, China and Russia "know how to do this, with or without Anthropic," Kloc said.

The threat of AI-enabled hacking has corporations and government regulators worried about protecting crucial systems from a new wave of ransomware and other types of attacks, according to Harris. He described conversations with banks, insurers and regulators in recent weeks as "hysteria."
Even before the advent of generative AI, corporations faced the problem of skilled hackers exploiting newfound vulnerabilities in hours, while patching the code often takes days or weeks. Some patches require key systems to be taken offline, complicating matters. "The industry is panicking about the number of vulnerabilities they face now," Harris said. "But even before Mythos is widely available, it couldn't fix vulnerabilities fast enough."

Before, only a tiny population of experts globally had the ability and time to find obscure vulnerabilities in software and exploit them, according to Harris. Now, using currently available AI models, the barriers to entry for wreaking cyber havoc have been lowered. That means that banks and other targets will see more attacks, and that software systems that previously didn't draw as much interest from cybercriminals will now face threats, Harris said.
Advantage: Offense
While Anthropic, OpenAI and others are working on developing cyber defense capabilities commensurate with the problems they have identified, the initial advantage goes to offense, not defense, researchers say. JPMorgan's Jamie Dimon suggested as much when he said last month that while AI tools could eventually help companies defend themselves from cyberattacks, they are first making them more vulnerable.

"You have a significant increase in the volume of vulnerabilities discovered, but they don't seem to have deployed a tool that helps you fix them," said Justin Herring, partner at the law firm Mayer Brown and former executive deputy superintendent for cybersecurity at New York's financial regulator. "Vulnerability management is the great Sisyphean task of cybersecurity," Herring said.

The limited group that was part of the initial Mythos release got a head start on patching vulnerabilities, but there is a downside. AI researchers haven't been given access to Mythos to independently verify Anthropic's claims or to begin building defenses against it. Some say the restricted access prevented the wider cyber community from being part of the solution.

It has created "tiers of haves and have-nots," which could stunt the pace of cybersecurity innovation, said Pavel Gurvich, CEO of cybersecurity startup Tenzai, which uses Anthropic's models. Many cybersecurity startups are working on solutions that can help businesses in this new era of AI, he said.

"They're trying to figure out the best way to fix the world before this becomes accessible to the world," said Ben Seri, co-founder of cybersecurity startup Zafran Security. "It's this kind of chicken-and-egg situation, and you're going to break some eggs. It's unavoidable."