
OWASP GenAI Security Project Gets Update, New Tools Matrix

Why This Matters

The OWASP GenAI Security Project update highlights the rapidly evolving landscape of AI security, emphasizing new risks and solutions for generative and agentic AI systems. The ongoing effort helps industry stakeholders understand and mitigate emerging vulnerabilities in AI deployments, supporting safer adoption for consumers and businesses alike.

Key Takeaways

The Open Worldwide Application Security Project (OWASP) is updating its view of the risk and defensive landscape of artificial intelligence (AI), reflecting the technology's rapid adoption and the security issues that adoption poses.

The OWASP Foundation published expanded security recommendations for companies adopting AI systems, splitting its tracking of solutions into two groups: generative AI and agentic AI. The first guide focuses on securing GenAI and large language models (LLMs); the second focuses on agentic AI systems. In addition, OWASP published its first listing of GenAI Data Security risks, covering 21 potential data issues caused by AI systems, including sensitive data leakage, exposure of agent identities and credentials, and unsanctioned data flows due to shadow AI.

Because the field is changing so rapidly, the group's latest release comes only four months after the previous solutions guide, and the number of covered providers has grown from 50 to more than 170, says Scott Clinton, co-lead of the OWASP GenAI Security Project. OWASP does not expect the ecosystem to keep demanding such quick updates, however, and will move to a six-month release schedule, he says.


"When we first started, we were publishing it every quarter because things were moving so incredibly fast," he says. "The industry is kind of still moving quickly, solutions are still coming in, but it's not quite at the same pace."

From Models to Swarms

A smattering of incidents underscores the risks as companies continue to struggle to secure their use of LLMs, GenAI, and AI agents. Users have found that AI agents will often ignore security boundaries to complete tasks, and the shift to "swarms" (collections of AI agents working together) has introduced even greater security complexity. Many layers of the AI development and deployment ecosystem, such as Model Context Protocol (MCP) servers, are woefully insecure, experts say.

Still, the use of these systems is exploding, dwarfing even the rise of software-as-a-service applications. A 10,000-employee company might once have run 30 to 100 applications; now it may have tens of thousands of AI applications when counting the individual LLM calls that generate scripts to gather data, says Sai Modalavalasa, chief architect at AI security firm Straiker.

Tools to help manage the problem are still being developed, says Modalavalasa, a contributor to the OWASP GenAI Security Project. First, companies need to be able to see what AI agents are doing in their networks and systems.

