
AI Is Breaking Two Vulnerability Cultures

A week ago the Copy Fail vulnerability came out, and Hyunwoo Kim immediately realized that the fixes were insufficient, sharing a patch the same day. In doing this he followed standard procedure for Linux, especially within networking: share the security impact with a closed list of Linux security engineers, while fixing the bug quietly and efficiently in the open. His goal was that, with only the raw fix public, the knowledge that a serious vulnerability existed could be "embargoed": the people in a position to address it know, but they've agreed not to say anything for a few days.

Someone else noticed the change, however, realized the security implications, and shared it publicly. Since it was now out, the embargo was deemed over, and we can now see the full details.

It's interesting to see the tension here between two different approaches to vulnerabilities, and think about how this is likely to change with AI acceleration.

On one side you have "coordinated disclosure" culture. This is probably the most common approach in computer security. When you discover a security bug you tell the maintainers privately and give them some amount of time (often 90 days) to fix it. The goal is that a fix is out before anyone learns about the hole.

On the other side you have "bugs are bugs" culture. This is especially common in Linux, where the argument is that if the kernel is doing something it shouldn't then someone somewhere may be able to turn it into an attack. Just fix things as quickly as possible, without drawing attention to them. Often people won't notice, with so many changes going past, and there's still time to get machines patched.

This approach never worked perfectly, but with AI getting good at finding vulnerabilities it's a much bigger problem. So many security fixes are coming out now that examining commits is much more attractive: the signal-to-noise ratio is higher. Additionally, having AI evaluate each commit as it passes is increasingly cheap and effective. [1]
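To make the commit-scanning idea concrete, here is a minimal sketch of a triage pipeline. Everything in it is hypothetical (the function names, the keyword list, the commit data); a real attacker's pipeline would feed each diff to an AI model where the cheap keyword filter sits.

```python
# Hypothetical sketch of triaging a stream of commits for likely silent
# security fixes. A real pipeline would send each diff to an AI model;
# a crude keyword heuristic stands in for that classifier here.

SECURITY_HINTS = (
    "overflow", "use-after-free", "out-of-bounds", "oob",
    "double free", "bounds check", "race", "sanitize",
)

def looks_security_relevant(commit_message: str) -> bool:
    """Cheap first-pass filter before (hypothetically) asking a model."""
    msg = commit_message.lower()
    return any(hint in msg for hint in SECURITY_HINTS)

def triage(commits):
    """Return the commits worth a closer, AI-assisted look."""
    return [c for c in commits if looks_security_relevant(c["message"])]

# Toy data, not real kernel commits:
commits = [
    {"sha": "a1b2c3", "message": "net: fix use-after-free in teardown path"},
    {"sha": "d4e5f6", "message": "docs: update maintainer list"},
]
flagged = triage(commits)  # only the first commit is flagged
```

The point of the sketch is the economics, not the filter: as the fraction of security-relevant commits rises, even a crude first pass plus per-commit AI review becomes worth running continuously.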

Long embargoes, however, aren't doing well either. The historical pace of detection was slow: if you found something and reported it to the vendor with a 90-day disclosure window, there was a very good chance no one else would notice during that time. But now, with so many AI-assisted groups scanning software for vulnerabilities, that no longer holds. In this case, just nine hours after Kim reported the ESP vulnerability, Kuan-Ting Chen independently reported it as well. Embargoes can increase risk: they create a false sense of non-urgency and limit which actors can work to fix a flaw.
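A toy model (mine, not from the article) makes the embargo math vivid: if independent rediscovery arrives as a Poisson process, the chance someone else finds the same bug during an embargo of T days is 1 - e^(-λT), where λ is the rediscovery rate. The rates below are illustrative assumptions, not measured values.

```python
# Toy model: independent rediscovery as a Poisson process.
# lam is the assumed rediscovery rate in discoveries per day.
import math

def rediscovery_probability(lam: float, days: float) -> float:
    """P(someone else finds the bug within `days`) = 1 - exp(-lam * days)."""
    return 1 - math.exp(-lam * days)

# Pre-AI pace: rediscovery about once a year on average.
p_slow = rediscovery_probability(1 / 365, 90)  # roughly 0.22
# AI-accelerated pace: rediscovery about once a week on average.
p_fast = rediscovery_probability(1 / 7, 90)    # effectively 1.0
```

Under the slow assumption a 90-day embargo leaks to a rediscoverer only about a fifth of the time; under the fast one, leakage during the embargo is near certain, which is the essay's argument for much shorter windows.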

I don't know how to resolve this, but very short embargoes seem to me like a good approach, and they'd need to get even shorter over time. Luckily, AI can speed up defenders as well as attackers here, making workable embargoes that would previously have been uselessly short.
