
How Agentic BAS AI Turns Threat Headlines Into Defense Strategies


By Sila Özeren Hacioglu, Security Research Engineer at Picus Security.

For security leaders, the most dreaded notification isn’t always an alert from their SOC; it’s a link to a news article forwarded by a board member. The headline usually details a new campaign by a threat group like FIN8, or a freshly exposed, massive supply chain vulnerability. The accompanying question is brief but paralyzing in its implications: “Are we exposed to this right now?”

In the pre-LLM world, answering that question set off a mad race against an unforgiving clock. Security teams had to wait for vendor SLAs, often eight hours or more for emerging threats, or manually reverse-engineer the attack themselves to build a simulation. Though this approach delivered an accurate response, the time it took to do so created a dangerous window of uncertainty.

AI-driven threat emulation has eliminated much of the investigative delay by accelerating analysis and expanding threat knowledge. However, AI emulation still carries risks due to limited transparency, susceptibility to manipulation, and hallucinations.

At the recent BAS Summit, Picus CTO and Co-founder Volkan Ertürk cautioned that “raw generative AI can create exploit risks nearly as serious as the threats themselves.” Picus addresses this by using an agentic, post-LLM approach that delivers AI-level speed without introducing new attack surfaces.

This post breaks down what that approach looks like and why it fundamentally improves the speed and safety of threat validation.

The "Prompt-and-Pray" Trap

The immediate reaction to the Generative AI boom was an attempt to automate red teaming by simply asking Large Language Models (LLMs) to generate attack scripts. In theory, an engineer could feed a threat intelligence report into a model and ask it to “draft an emulation campaign.”
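To make the pattern concrete, here is a minimal sketch of what “prompt-and-pray” automation looks like in practice. It assumes the OpenAI Python SDK; the model name, file names, and prompt wording are illustrative placeholders, not anything Picus builds or recommends. Notice what the workflow lacks: any vetting, sandboxing, or provenance check on what the model returns.

```python
# Minimal "prompt-and-pray" sketch (illustrative only; file names,
# model choice, and prompt wording are placeholders, not a real
# Picus workflow).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical threat intelligence report pulled from the news cycle.
with open("fin8_campaign_report.txt") as f:
    threat_report = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice
    messages=[
        {"role": "system",
         "content": "You are a red team automation engineer."},
        {"role": "user",
         "content": "Read this threat intelligence report and draft "
                    "an emulation campaign script for it:\n\n"
                    + threat_report},
    ],
)

# The generated "attack script" is written to disk unreviewed: no
# payload vetting, no sandboxing, no check on where its techniques
# or samples actually came from.
with open("emulation_campaign.py", "w") as f:
    f.write(response.choices[0].message.content)
```

That final, unreviewed write to disk is precisely the step the quote below warns about.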

While this approach is undeniably fast, it fails on both reliability and safety. As Picus’s Ertürk notes, the danger is concrete:

“ ... Can you trust a payload that is built by an AI engine? I don't think so. Right? Maybe it just came up with the real sample that an APT group has been using or a ransomware group has been using. ... then you click that binary, and boom, you may have big problems.”
