“Agentic AI systems are being weaponized.”
That’s one of the first lines of Anthropic’s new Threat Intelligence report, out today, which details the wide range of cases in which Claude — and likely many other leading AI agents and chatbots — is being abused.
First up: “Vibe-hacking.” One sophisticated cybercrime ring that Anthropic says it recently disrupted used Claude Code, Anthropic’s AI coding agent, to extort data from at least 17 different organizations around the world within one month. The hacked parties included healthcare organizations, emergency services, religious institutions, and even government entities.
“If you’re a sophisticated actor, what would have otherwise required maybe a team of sophisticated actors, like the vibe-hacking case, to conduct — now, a single individual can conduct, with the assistance of agentic systems,” Jacob Klein, head of Anthropic’s threat intelligence team, told The Verge in an interview. He added that in this case, Claude was “executing the operation end-to-end.”
Anthropic wrote in the report that in cases like this, AI “serves as both a technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually.” For example, Claude was specifically used to write “psychologically targeted extortion demands.” Then the cybercriminals figured out how much the data — which included healthcare data, financial information, government credentials, and more — would be worth on the dark web and made ransom demands exceeding $500,000, per Anthropic.
“This is the most sophisticated use of agents I’ve seen … for cyber offense,” Klein said.
In another case study, Claude helped North Korean IT workers fraudulently obtain jobs at Fortune 500 companies in the U.S. in order to fund the country’s weapons program. Typically, in such cases, North Korea tries to leverage people who have been to college, have IT experience, or have some ability to communicate in English, per Klein — but he said that with AI in the mix, the barrier for people in North Korea to pass technical interviews at big tech companies, and then keep those jobs, is now much lower.
With the assistance of Claude, Klein said, “we’re seeing people who don’t know how to write code, don’t know how to communicate professionally, know very little about the English language or culture, who are just asking Claude to do everything … and then once they land the job, most of the work they’re actually doing with Claude is maintaining the job.”