Anthropic reports that a Chinese state-sponsored threat group, tracked as GTG-1002, carried out a cyber-espionage operation that was largely automated through the abuse of the company's Claude Code AI tool.
However, Anthropic's claims immediately sparked widespread skepticism, with security researchers and AI practitioners calling the report "made up" and accusing the company of overstating the incident's significance. Others argued that the report exaggerates what current AI systems can realistically accomplish.
"This Anthropic thing is marketing guff. AI is a super boost but it's not skynet, it doesn't think, it's not actually artificial intelligence (that's a marketing thing people came up with)," posted cybersecurity researcher Daniel Card.
Much of the skepticism stems from the fact that Anthropic has provided no indicators of compromise (IOCs) for the campaign. Furthermore, BleepingComputer's requests for technical information about the attacks went unanswered.
Claims attacks were 80-90% AI-automated
Despite the criticism, Anthropic claims that the incident represents the first publicly documented case of large-scale autonomous intrusion activity conducted by an AI model.
The campaign, which Anthropic says it disrupted in mid-September 2025, used the company's Claude Code tool to target 30 entities, including large tech firms, financial institutions, chemical manufacturers, and government agencies.
Although the firm says only a small number of the intrusions succeeded, it highlights the operation as the first of its kind at this scale, with the AI allegedly carrying out nearly every phase of the cyber-espionage workflow autonomously.
"The actor achieved what we believe is the first documented case of a cyberattack largely executed without human intervention at scale—the AI autonomously discovered vulnerabilities… exploited them in live operations, then performed a wide range of post-exploitation activities," Anthropic explains in its report.