Tech News

With $1 Cyberattacks on the Rise, Durable Defenses Pay Off

Why This Matters

The rise of AI-powered cyberattack tools underscores the urgent need for advanced defensive strategies in the tech industry. Leveraging generative AI for vulnerability discovery offers a promising way to proactively identify and patch security flaws, shifting the cybersecurity landscape from reactive to preventative. As AI becomes integral to both attack and defense, organizations must adapt their security practices to stay ahead of evolving threats.

Key Takeaways

Transforming a newly discovered software vulnerability into a cyberattack used to take months. Today—as the recent headlines over Anthropic’s Project Glasswing have shown—generative AI can do the job in minutes, often for less than a dollar of cloud-computing time.

But while large language models present a real cyberthreat, they also provide an opportunity to reinforce cyberdefenses. Anthropic reports its Claude Mythos preview model has already helped defenders preemptively discover over a thousand zero-day vulnerabilities, including flaws in every major operating system and web browser, with Anthropic coordinating disclosure and patching efforts for the revealed flaws.

It is not yet clear whether AI-driven bug finding will ultimately favor attackers or defenders. But to understand how defenders can increase their odds, and perhaps hold the advantage, it helps to look at an earlier wave of automated vulnerability discovery.

In the early 2010s, a new category of software appeared that could attack programs with millions of random, malformed inputs—a proverbial monkey at a typewriter, tapping on the keys until it finds a vulnerability. When fuzzers such as American Fuzzy Lop (AFL) hit the scene, they found critical flaws in every major browser and operating system.
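The core idea behind fuzzing is simple enough to sketch in a few lines. The snippet below is a toy illustration, not AFL itself: both `naive_fuzz` and the deliberately fragile `fragile_parser` target are hypothetical examples, and real fuzzers add coverage feedback, input mutation, and crash triage on top of this brute-force loop.

```python
import random

def naive_fuzz(target, trials=1000, max_len=64, seed=0):
    """Throw random, malformed byte strings at `target` and
    record every input that makes it raise an exception."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        length = rng.randrange(max_len)
        data = bytes(rng.randrange(256) for _ in range(length))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

# Hypothetical target: a parser whose first byte declares a
# payload length, but which never checks it against reality.
def fragile_parser(data: bytes):
    if data and data[0] > len(data) - 1:
        raise IndexError("declared length exceeds payload")

found = naive_fuzz(fragile_parser)
```

Even this crude version turns up crashing inputs for the toy parser within a few hundred tries; the insight that made AFL effective was guiding this random search with code-coverage feedback so it explores deeper program states.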

The security community’s response was instructive. Rather than panic, organizations industrialized the defense. For instance, Google built a system called OSS-Fuzz that runs fuzzers continuously, around the clock, on thousands of software projects. As a result, software providers could catch bugs before they shipped, not after attackers found them. The expectation is that AI-driven vulnerability discovery will follow the same arc: organizations will integrate the tools into standard development practice, run them continuously, and establish a new baseline for security.

But the analogy has a limit. Fuzzing requires significant technical expertise to set up and operate. It was a tool for specialists. An LLM, meanwhile, finds vulnerabilities with just a prompt—resulting in a troubling asymmetry. Attackers no longer need to be technically sophisticated to exploit code, while robust defenses still require engineers to read, evaluate, and act on what the AI models surface. The human cost of finding and exploiting bugs may approach zero, but fixing them won’t.

Is AI Better at Finding Bugs Than Fixing Them?

In the opening to his book Engineering Security (2014), Peter Gutmann observed that “a great many of today’s security technologies are ‘secure’ only because no one has ever bothered to look at them.” That observation was made before AI made looking for bugs dramatically cheaper. Most present-day code—including the open source infrastructure that commercial software depends on—is maintained by small teams, part-time contributors, or individual volunteers with no dedicated security resources. A bug in any open source project can have significant downstream impact, too.

In 2021, a critical vulnerability in Log4j—a logging library maintained by a handful of volunteers—exposed hundreds of millions of devices. Because the library was embedded in so much software, a flaw in a single volunteer-maintained project became one of the most widespread vulnerabilities ever recorded. It is just one example of the broader problem: critical software dependencies that have never been seriously audited. For better or worse, AI-driven vulnerability discovery will likely perform a lot of that auditing, at low cost and at scale.

An attacker targeting an under-resourced project requires little manual effort. AI tools can scan an unaudited codebase, identify critical vulnerabilities, and assist in building a working exploit with minimal human expertise.
