
Google warns of new AI-powered malware families deployed in the wild


Google's Threat Intelligence Group (GTIG) has identified a major shift this year, with adversaries leveraging artificial intelligence to deploy new malware families that integrate large language models (LLMs) during execution.

This approach lets the malware alter its behavior dynamically mid-execution, achieving a level of operational versatility that is virtually impossible to reach with traditional malware.

Google calls the technique "just-in-time" self-modification and highlights the experimental PromptFlux malware dropper and the PromptSteal (a.k.a. LameHug) data miner deployed in Ukraine as examples of dynamic script generation, code obfuscation, and on-demand creation of functions.

PromptFlux is an experimental VBScript dropper that leverages the latest version of Google's Gemini LLM to generate obfuscated VBScript variants.

It attempts to establish persistence via Startup folder entries and spreads laterally to removable drives and mapped network shares.

"The most novel component of PROMPTFLUX is its "Thinking Robot" module, designed to periodically query Gemini to obtain new code for evading antivirus software," explains Google.

The prompt is very specific and machine-parsable, according to the researchers, who see indications that the malware's creators aim to create an ever-evolving "metamorphic script."

PromptFlux "StartThinkingRobot" function

Source: Google

Google could not attribute PromptFlux to a specific threat actor, but noted that its tactics, techniques, and procedures indicate it is being used by a financially motivated group.
