
AI firm claims Chinese spies used its tech to automate cyber attacks



Joe Tidy, Cyber correspondent, BBC World Service


The makers of artificial intelligence (AI) chatbot Claude claim to have caught hackers sponsored by the Chinese government using the tool to perform automated cyber attacks against around 30 global organisations. Anthropic said the hackers tricked the chatbot into carrying out automated tasks under the guise of cyber security research. The company claimed in a blog post this was the "first reported AI-orchestrated cyber espionage campaign". But sceptics are questioning the accuracy of that claim - and the motive behind it.

Anthropic said it discovered the hacking attempts in mid-September. Pretending to be legitimate cyber security workers, the hackers gave the chatbot small automated tasks which, when strung together, formed a "highly sophisticated espionage campaign".

Researchers at Anthropic said they had "high confidence" the people carrying out the attacks were "a Chinese state-sponsored group". They said humans chose the targets - large tech companies, financial institutions, chemical manufacturing companies, and government agencies - but the company would not be more specific.

The hackers then built an unspecified programme using Claude's coding assistance to "autonomously compromise a chosen target with little human involvement". Anthropic claims the chatbot was able to successfully breach various unnamed organisations, extract sensitive data and sort through it for valuable information. The company said it had since banned the hackers from using the chatbot and had notified affected companies and law enforcement.

But Martin Zugec from cyber firm Bitdefender said the cyber security world had mixed feelings about the news. "Anthropic's report makes bold, speculative claims but doesn't supply verifiable threat intelligence evidence," he said. "Whilst the report does highlight a growing area of concern, it's important for us to be given as much information as possible about how these attacks happen so that we can assess and define the true danger of AI attacks."

AI hackers