A report was recently published by Anthropic, the AI-research company notably behind Claude, an AI coding assistant. Personally, I don't use it, but that is beside the point. Before we start, it's important to say I don't have anything against them, or against AI in general. I do have some documented concerns, but I am not "Anti-AI" or whatever. What bothers me is not the technology itself but the industry's perception of it, and the way it gets inserted everywhere, even when unnecessary. However, that too is a bit beside the point.
Today, I want to discuss the paper (or report, whatever you want to call it) that they recently published. Looking at the executive summary, this paragraph jumps out immediately:
In mid-September 2025, we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI. Our investigation revealed a well-resourced, professionally coordinated operation involving multiple simultaneous targeted intrusions. The operation targeted roughly 30 entities and our investigation validated a handful of successful intrusions.
This is extremely interesting for many reasons:
Anthropic seemingly disrupted an APT’s campaign, though a number of companies and government entities were affected,
This highly advanced APT doesn't use its own infra, but rather relies on Claude to coordinate its automation (??? Why, though?),
I assume they ran exploits and custom tools? If so, what are they?
Anthropic was able to attribute this attack to a Chinese-affiliated state-sponsored group.
If you’re like me, you then eagerly read the rest of the paper, hoping to find clues and technical details on the TTPs (Tactics, Techniques and Procedures), or IoCs (Indicators of Compromise) to advance the research. However, the report very quickly falls flat, which sucks.
where are the IoCs, Mr. Claude?