
The Fact That Anthropic Has Been Boasting About How Much Its Development Now Relies on Claude Makes It Very Interesting That It Just Suffered a Catastrophic Leak of Its Source Code

Why This Matters

The leak of Anthropic's Claude source code highlights the cybersecurity risks associated with AI-driven development, especially when companies rely heavily on AI for critical tasks. This incident underscores the importance of robust security measures in AI research and development to protect sensitive information and maintain industry trust. For consumers and the tech industry, it serves as a cautionary tale about the vulnerabilities inherent in AI-powered systems and the need for vigilant security practices.


Earlier this year, Boris Cherny, the head of Anthropic’s blockbuster Claude Code AI agent, boasted that “pretty much 100 percent” of the entire company’s code is AI-generated.

“For me personally, it has been 100 percent for two plus months now, I don’t even make small edits by hand,” he tweeted at the time.

But the glaring cybersecurity implications of giving an AI agent full access to a computer to carry out complex tasks (something experts have been ringing alarm bells over for a while now) aren’t coinciding with a period of competence for the company: on Tuesday, it confirmed that parts of the internal source code for Claude Code had leaked, which is extremely bad.

“No sensitive customer data or credentials were involved or exposed,” a spokesperson told CNBC, in an apparent effort to focus on the bright side.

The news comes less than a week after details of Anthropic’s upcoming “Claude Mythos” AI model, which the company claimed poses “unprecedented cybersecurity risks,” leaked to the public.

Unsurprisingly, Anthropic attempted to downplay the latest situation and blame human agents, not AI ones, for the leak.

“This was a release packaging issue caused by human error, not a security breach,” the spokesperson added. “We’re rolling out measures to prevent this from happening again.”

A file the company shared on the coding platform GitHub included a link back to the source code, allowing anybody with an internet connection to download it. How the file ended up there, or whether an AI agent was involved in the process that led to the leak, remains unclear.

“Claude code source code has been leaked via a map file in their npm registry!” reads an X post, which was viewed tens of millions of times in less than a day.
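Map files of this kind are source maps, metadata files shipped alongside minified JavaScript bundles that, when generated with source embedding enabled, carry the complete unminified originals inline. A minimal sketch of why that matters for a leak like this one (the file name, path, and contents below are hypothetical illustrations, not the actual leaked artifact):

```python
import json

# A hypothetical source map of the kind published alongside a minified
# bundle. When a build tool embeds "sourcesContent", the map contains the
# full original source files, not just line-number mappings.
source_map_text = json.dumps({
    "version": 3,
    "file": "cli.min.js",
    "sources": ["src/cli.ts"],  # hypothetical original path
    "sourcesContent": ["export const main = () => console.log('hello');"],
    "mappings": "AAAA",
})

def extract_sources(map_text: str) -> dict:
    """Recover original file names and their contents from a source map."""
    source_map = json.loads(map_text)
    names = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    return dict(zip(names, contents))

# Anyone who can download the .map file can read the originals back out.
recovered = extract_sources(source_map_text)
print(recovered["src/cli.ts"])
```

This is why accidentally packaging a source map (for example, in an npm release) can expose proprietary source even when the shipped JavaScript itself is minified.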
