Google on Wednesday detailed five recent malware samples that were built using generative AI. Each fell well short of the standards of professional malware development, a finding that suggests vibe coding of malicious wares still lags behind more traditional development practices and has a long way to go before it poses a real-world threat.
One of the samples, for instance, tracked under the name PromptLock, was part of an academic study analyzing how effectively large language models can be used “to autonomously plan, adapt, and execute the ransomware attack lifecycle.” The researchers, however, reported the malware had “clear limitations: it omits persistence, lateral movement, and advanced evasion tactics” and served as little more than a demonstration of the feasibility of AI for such purposes. Prior to the paper’s release, security firm ESET said it had discovered the sample and hailed it as “the first AI-powered ransomware.”
Don’t believe the hype
Like the other four samples Google analyzed (FruitShell, PromptFlux, PromptSteal, and QuietVault), PromptLock was easy to detect, even by less-sophisticated endpoint protections that rely on static signatures. All five also employed techniques already seen in earlier malware, making them easy to counteract, and none had any operational impact that would require defenders to adopt new defenses.
“What this shows us is that more than three years into the generative AI craze, threat development is painfully slow,” independent researcher Kevin Beaumont told Ars. “If you were paying malware developers for this, you would be furiously asking for a refund as this does not show a credible threat or movement towards a credible threat.”
Another malware expert, who asked not to be named, agreed that Google’s report did not indicate that generative AI is giving developers of malicious wares a leg up over those relying on more traditional development practices.
“AI isn’t making any scarier-than-normal malware,” the researcher said. “It’s just helping malware authors do their job. Nothing novel. AI will surely get better. But when, and by how much, is anybody’s guess.”