Chinese hackers used Anthropic’s Claude AI model to automate cybercrimes targeting banks and governments, the company admitted in a blog post this week.
Anthropic believes it’s the “first documented case of a large-scale cyberattack executed without substantial human intervention” and an “inflection point” in cybersecurity: the “point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill.”
AI agents, in particular, which are designed to autonomously complete a string of tasks without the need for intervention, could have considerable implications for future cybersecurity efforts, the company warned.
Anthropic said it had “detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign” back in September. The Chinese state-sponsored group exploited the AI’s agentic capabilities to infiltrate “roughly thirty global targets and succeeded in a small number of cases.” However, Anthropic stopped short of naming the targets or the hacker group itself, or saying what kind of sensitive data may have been stolen or accessed.
Hilariously, the hackers were “pretending to work for legitimate security-testing organizations” to sidestep Anthropic’s AI guardrails and carry out real cybercrimes, as Anthropic’s head of threat intelligence Jacob Klein told the Wall Street Journal.
The hackers “broke down their attacks into small, seemingly innocent tasks that Claude would execute without being provided the full context of their malicious purpose,” the company wrote. “They also told Claude that it was an employee of a legitimate cybersecurity firm, and was being used in defensive testing.”
The incident once again highlights glaring holes in AI companies’ guardrails, letting perpetrators access powerful tools to infiltrate targets — a cat-and-mouse game between AI developers and hackers that’s already having real-life consequences.
“Overall, the threat actor was able to use AI to perform 80 to 90 percent of the campaign, with human intervention required only sporadically (perhaps four to six critical decision points per hacking campaign),” Anthropic wrote in its blog post. “The sheer amount of work performed by the AI would have taken vast amounts of time for a human team.”
But while Anthropic is boasting that its AI models have become good enough to be used for real crimes, the hackers still had to deal with some all-too-familiar AI-related headaches, forcing them to intervene.
For one, the model suffered from hallucinations during its crime spree.