
Welcome to the Strip Mining Era of OSS Security

Open source software is in for a rough 2026 summer. If you’re an open source maintainer, there’s something afoot you should already know about. If you’re an OSS user, you should be aware of it too, as it’ll explain some behavior around you that might otherwise seem odd.

TL;DR: High volume, LLM-powered scanning for security vulnerabilities is going to uncover lots of security issues in anything with public source code.

This all started a few months ago

Historically, Metabase averaged 10 submissions per month to our [email protected], most of which were trivial or not actually vulnerabilities. Many were false positives from scanning tools, and we spent most of our time explaining to the reporter that what they found wasn’t actually a problem.

At the turn of the year, things changed. Starting in January, we’ve been averaging 10 submissions per week, and many of these are legit. Most are not serious, and we’ve quietly fixed them, thanked the researcher, and gone our merry way. Still, it was a step change in both the volume and the quality of reporting. These reports come from a wide variety of locations and people, and sometimes, but not always, are looking for bug bounties. More often than not, the reports are in markdown and read like they’re LLM generated. Others are seeing this as well.

It doesn’t take too insightful an eye to realize we’re seeing a remarkable improvement in automated code scanning. We’ve since tried out a few vendors in the space, and what do you know: more (thankfully minor) issues found. There’s no one vendor or model that’s the root cause. While we originally thought it could be Claude Security, that was only announced in February, after things had already picked up. And OpenAI is also getting into the game. Does this mean there’s another wave coming after everyone gets access to these? Likely. But regardless of specific foundation models, this is just a consequence of coding agents in general getting better at scanning codebases for flaws.

Historically, we tended to get two styles of vulnerability research:

Superficial scanners run in bulk: running an OWASP scanner or another out-of-the-box vulnerability scanner against the codebase. These reports tended to be mostly false positives.

Motivated deep digging: This was typically a serious user paying researchers, often ones who knew the application area or framework deeply and knew where to probe. These efforts tended to find a cluster of similar issues, often related to the style or specialty of the researcher.

Vulnerabilities are now being strip mined
