As cybercrime surges around the world, new research increasingly shows that ransomware is evolving as a result of widely available generative AI tools. In some cases, attackers are using AI to draft more intimidating and coercive ransom notes and to run more effective extortion campaigns. But cybercriminals' use of generative AI is rapidly becoming more sophisticated. Researchers from the AI company Anthropic today revealed that attackers are leaning on the technology more heavily, sometimes entirely, to develop actual malware and offer ransomware services to other cybercriminals.
Ransomware criminals have recently been identified using Anthropic's large language model Claude and its coding-focused tool, Claude Code, to develop ransomware, according to the company's newly released threat intelligence report. Anthropic's findings add to separate research published this week by the security firm ESET, which highlights an apparent proof of concept for a type of ransomware attack executed entirely by local LLMs running on a malicious server.
Taken together, the two sets of findings highlight how generative AI is pushing cybercrime forward and making it easier for attackers—even those who don’t have technical skills or ransomware experience—to execute such attacks. “Our investigation revealed not merely another ransomware variant, but a transformation enabled by artificial intelligence that removes traditional technical barriers to novel malware development,” researchers from Anthropic’s threat intelligence team wrote.
Over the last decade, ransomware has proven an intractable problem. Attackers have become increasingly ruthless and innovative in their efforts to keep victims paying out. By some estimates, the number of ransomware attacks hit record highs at the start of 2025, and criminals continue to make hundreds of millions of dollars per year. As former US National Security Agency and Cyber Command chief Paul Nakasone put it at the Defcon security conference in Las Vegas earlier this month: "We are not making progress against ransomware."
Adding AI to the already hazardous ransomware cocktail only expands what hackers may be able to do. According to Anthropic's research, a cybercriminal threat actor based in the United Kingdom, tracked as GTG-5004 and active since the start of this year, used Claude to "develop, market, and distribute ransomware with advanced evasion capabilities."
On cybercrime forums, GTG-5004 has been selling ransomware services priced from $400 to $1,200, with different tools provided at each package level, according to Anthropic's research. The company says that while GTG-5004's products include a range of encryption capabilities, software reliability tools, and methods designed to help hackers avoid detection, the developer does not appear to be technically skilled. "This operator does not appear capable of implementing encryption algorithms, anti-analysis techniques, or Windows internals manipulation without Claude's assistance," the researchers write.
Anthropic says it banned the account linked to the ransomware operation and introduced "new methods" for detecting and preventing malware generation on its platforms. These include using the pattern-matching detections known as YARA rules to look for malware in content uploaded to its platforms, and checking uploads against hashes of known malware.
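Anthropic hasn't published the specific rules it uses, but as a rough illustration of the general technique, the sketch below uses the open source yara-python bindings to compile a simple, hypothetical YARA rule and to check an uploaded file against a placeholder deny list of SHA-256 hashes. The rule strings, hash value, and scan_upload helper are all invented for this example, not drawn from Anthropic's systems.

```python
import hashlib

import yara  # open source bindings: pip install yara-python

# Hypothetical YARA rule for illustration only; real detection rules
# target far more specific byte patterns and code structures.
RULE_SOURCE = r"""
rule suspected_ransom_note
{
    strings:
        $demand = "your files have been encrypted" nocase
        $payment = "bitcoin" nocase
    condition:
        $demand and $payment
}
"""

# Placeholder deny list of SHA-256 hashes of known malware samples.
KNOWN_BAD_HASHES = {
    "00" * 32,  # stand-in value, not a real malware hash
}

rules = yara.compile(source=RULE_SOURCE)


def scan_upload(path: str) -> bool:
    """Return True if a file matches a YARA rule or a known-bad hash."""
    with open(path, "rb") as f:
        data = f.read()
    if hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES:
        return True
    return bool(rules.match(data=data))
```

In practice, checks like this would be one layer among many: simple string rules are easy for attackers to evade, and hash lists only catch samples that defenders have already seen.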