Hackers used AI 'to commit large-scale theft'

Imran Rahman-Jones, Technology reporter

A top artificial intelligence (AI) company says the technology has been "weaponised" by hackers to carry out sophisticated cyber-attacks.

Anthropic, which makes the chatbot Claude, says its tools were used by hackers "to commit large-scale theft and extortion of personal data".

The firm said its AI was used to help write code that carried out cyber-attacks, while in another case North Korean scammers used Claude to fraudulently obtain remote jobs at top US companies.

Anthropic says it was able to disrupt the threat actors, has reported the cases to the authorities and has improved its detection tools.

Using AI to help write code has grown in popularity as the technology has become more capable and accessible.

Anthropic says it detected a case of so-called "vibe hacking", in which its AI was used to write code that could hack into at least 17 different organisations, including government bodies.

It said the hackers "used AI to what we believe is an unprecedented degree".

They used Claude to "make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands". It even suggested ransom amounts for the victims.

Agentic AI - where the technology operates autonomously - has been touted as the next big step in the field. But these examples show some of the risks such powerful tools pose to potential victims of cyber-crime.

The use of AI means "the time required to exploit cybersecurity vulnerabilities is shrinking rapidly", said Alina Timofeeva, an adviser on cyber-crime and AI.

"Detection and mitigation must shift towards being proactive and preventative, not reactive after harm is done," she said.

'North Korean operatives'