
LameHug malware uses AI LLM to craft Windows data-theft commands in real-time


A novel malware family named LameHug is using a large language model (LLM) to generate commands to be executed on compromised Windows systems.

LameHug was discovered by Ukraine’s national cyber incident response team (CERT-UA), which attributed the attacks to the Russian state-backed threat group APT28 (a.k.a. Sednit, Sofacy, Pawn Storm, Fancy Bear, STRONTIUM, Tsar Team, Forest Blizzard).

The malware is written in Python and relies on the Hugging Face API to interact with the Qwen2.5-Coder-32B-Instruct LLM, which generates commands from the prompts it is given.

Created by Alibaba Cloud, the LLM is open-source and designed specifically to generate code, reason about code, and follow coding-focused instructions. It can convert natural language descriptions into executable code (in multiple languages) or shell commands.
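To illustrate the mechanism the article describes, the sketch below shows how a Python client might build a request for Hugging Face's public serverless Inference API to turn a natural-language task into a single shell command. The endpoint URL follows Hugging Face's documented pattern; the prompt wording, parameter values, and function names are illustrative assumptions, not recovered from the malware itself.

```python
import json

# Hugging Face serverless Inference API endpoint for the model named in the
# CERT-UA report. The request/response flow (not the actual malware code).
API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"

def build_request(task: str, token: str) -> tuple[dict, dict]:
    """Build the headers and JSON payload for a text-generation request.

    The prompt text and generation parameters here are assumptions for
    illustration only.
    """
    headers = {"Authorization": f"Bearer {token}"}
    payload = {
        "inputs": f"Respond with a single Windows cmd.exe command only. Task: {task}",
        "parameters": {"max_new_tokens": 128, "temperature": 0.1},
    }
    return headers, payload

# Usage: the network call itself is omitted; it would be something like
# requests.post(API_URL, headers=headers, json=payload), with the generated
# command extracted from the JSON response.
headers, payload = build_request("list running processes", "hf_xxx")
print(json.dumps(payload, indent=2))
```

The key point, and what makes this technique notable, is that the attacker ships no hardcoded command list: each compromised host asks the hosted model for fresh commands at runtime, so the observable behavior can vary between infections.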

CERT-UA found LameHug after receiving reports on July 10 about malicious emails sent from compromised accounts and impersonating ministry officials, attempting to distribute the malware to executive government bodies.

Malicious email attempting LameHug infection

Source: CERT-UA

The emails carry a ZIP attachment that contains a LameHug loader. CERT-UA has seen at least three variants, named ‘Attachment.pif,’ ‘AI_generator_uncensored_Canvas_PRO_v0.9.exe,’ and ‘image.py.’

The Ukrainian agency attributes this activity with medium confidence to the Russian threat group APT28.

In the observed attacks, LameHug was tasked with executing system reconnaissance and data theft commands, generated dynamically via prompts to the LLM.
