
Malware devs abuse Anthropic’s Claude AI to build ransomware


Anthropic's Claude Code AI coding tool has been abused by threat actors, who used it in data extortion campaigns and to develop ransomware packages.

The company says its tool has also been used in fraudulent North Korean IT worker schemes, to distribute lures for "Contagious Interview" campaigns, in Chinese APT operations, and by a Russian-speaking developer to create malware with advanced evasion capabilities.

AI-created ransomware

In another instance, tracked as ‘GTG-5004,’ a UK-based threat actor used Claude Code to develop and commercialize a ransomware-as-a-service (RaaS) operation.

The AI tool helped create all the components required for the RaaS platform: modular ransomware implementing the ChaCha20 stream cipher with RSA key management, shadow copy deletion, options for targeting specific file types, and the ability to encrypt network shares.

On the evasion front, the ransomware loads via reflective DLL injection and employs direct syscall invocation, API-hooking bypasses, string obfuscation, and anti-debugging techniques.

Anthropic says the threat actor relied almost entirely on Claude to implement the most technically demanding parts of the RaaS platform, noting that, without AI assistance, they would most likely have failed to produce working ransomware.

“The most striking finding is the actor’s seemingly complete dependency on AI to develop functional malware,” reads the report.

“This operator does not appear capable of implementing encryption algorithms, anti-analysis techniques, or Windows internals manipulation without Claude’s assistance.”

After creating the RaaS operation, the threat actor offered ransomware executables, kits with PHP consoles and command-and-control (C2) infrastructure, and Windows crypters for $400 to $1,200 on dark web forums such as Dread, CryptBB, and Nulled.
