Anthropic, the company behind the popular AI model Claude, said in a new Threat Intelligence report that it disrupted a "vibe hacking" extortion scheme. In the report, the company details how the attacker used Claude to scale a mass extortion campaign against 17 targets, including organizations in government, healthcare, emergency services and religious institutions.
(You can read the full report in this PDF file.)
Anthropic says its Claude AI model served as both a "technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually." Claude was used to "automate reconnaissance, credential harvesting, and network penetration at scale," the report said.
The findings are all the more disturbing because vibe hacking had been considered a future threat, one that some experts believed was not yet possible. What Anthropic shared in its report may mark a major shift in how AI models and agents can be used to scale up massive cyberattacks, ransomware schemes and extortion scams.
Separately, Anthropic has recently contended with other AI troubles, settling a lawsuit from authors who claimed Claude was trained on their copyrighted materials. Another company, Perplexity, has faced security problems of its own after its Comet AI browser was shown to have a major vulnerability.