Yes, “AI” will compromise your information security posture. No, not through some mythical self-aware galaxy-brain entity magically cracking your passwords in seconds or “autonomously” exploiting new vulnerabilities.
It’s way more mundane.
When immensely complex, poorly-understood systems get hurriedly integrated into your toolset and workflow, or deployed in your infrastructure, what inevitably follows is leaks, compromises, downtime, and a whole lot of grief.
Complexity means cost and risk¶
LLM-based systems are insanely complex, both on the conceptual level, and on the implementation level. Complexity has real cost and introduces very real risk. These costs and these risks are enormous, poorly understood – and usually just hand-waved away. As Suha Hussain puts it in a video I’ll discuss a bit later:
Machine learning is not a quick add-on, but something that will fundamentally change your system security posture.
The amount of risk companies and organizations take on by using, integrating, or implementing LLM-based – or more broadly, machine learning-based – systems is massive. And they have to eat all of that risk themselves: suppliers of these systems simply refuse to take any real responsibility for the tools they provide and problems they cause.
After all, taking responsibility is bad for the hype. And the hype is what makes the line go up.
The Hype¶
An important part of pushing that hype is inflating expectations and generating fear of missing out, one way or another. What better way to generate it than by using actual fear?
... continue reading