Over the past 18 months, the largest AI companies in the world have quietly settled on an approach to building the next generation of apps and services — an approach that would allow AI agents from any company to easily access information and tools across the internet in a standardized way. It’s a key step toward building a usable ecosystem of AI agents that might actually pay off some of the enormous investments these companies have made, and it all starts with three letters: MCP.
MCP, or Model Context Protocol, began as a passion project from two Anthropic employees, but since its creation in mid-2024, it’s been widely adopted by companies like OpenAI, Google, Microsoft, and Cursor. There are even hints that Apple will use MCP in its forthcoming AI-enabled version of Siri. There have been competitors to MCP, but so far it’s been a standards war without any real battle — MCP has quickly taken over the industry.
And now it’s official: This week, Anthropic is donating MCP to the Linux Foundation — and joining OpenAI, Google, Microsoft, AWS, Block, Bloomberg, and Cloudflare in establishing a new group called the Agentic AI Foundation (AAIF), whose goal is to “advance open-source agentic AI.” The donation, which places MCP under a neutral governing body, will likely supercharge its growth.
It’s also a move that could change how AI systems as we know them operate. For AI companies, MCP is the new standard for how these systems should access apps, tools, and information — and by extension, how people use the internet.
A “ping-pong of intelligence.”
MCP essentially tells AI models which external tools, data sources, and workflows they’re able to access, then allows them to connect and perform tasks. When someone uses Claude to perform tasks in Slack, for example, MCP is what authorizes and establishes the connection between services. It’s what lets Claude redirect you to Slack and get notified once you’ve logged in. And it lets Slack tell Claude which tools, resources, and features it can access — “essentially a ‘show me what you’ve got,’” Conor Kelly, a product marketing manager for MCP at Anthropic, says.
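To make that discovery step concrete, here is a minimal sketch of what an MCP server can look like in code, written with the protocol's official Python SDK. The server name and the send_message tool are illustrative stand-ins rather than Slack's actual implementation, but the shape is the same: the server registers tools with names, descriptions, and typed inputs, and a connected model can ask for that list before deciding what to call.

```python
# A minimal sketch of an MCP server, using the official MCP Python SDK's
# FastMCP helper. The server name and the send_message tool are illustrative
# stand-ins (Slack's real MCP server exposes its own tools), but the idea is
# the same: whatever is registered here is what a connected model sees when
# it asks the server what it can do.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("slack-demo")


@mcp.tool()
def send_message(channel: str, text: str) -> str:
    """Post a message to a channel (stubbed out for illustration)."""
    # A real server would call Slack's API here; this sketch just echoes back.
    return f"Sent to #{channel}: {text}"


if __name__ == "__main__":
    # Serve over stdio so a host app (such as a desktop AI assistant) can connect.
    mcp.run()
```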
From the user’s side, this simply means Slack and Claude can easily work together — a “ping-pong of intelligence,” as Anthropic CPO Mike Krieger describes the effect of MCP. When somebody prompts Claude to send a Slack message to a colleague, Claude knows that the Slack MCP server is connected, that a tool exists for sending messages, and that it can access that tool. Once the message goes through, Slack tells Claude it was sent successfully, and Claude tells the user. Message sent.
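And here, roughly, is the other half of that ping-pong from the host application's side, again sketched with the MCP Python SDK and assuming the demo server above is saved as slack_demo_server.py. A real host like Claude runs this handshake, discovery, and tool call for the user behind the scenes.

```python
# A rough sketch of the host side of an MCP exchange: connect to a server,
# ask what tools it offers, then call one. The server file name, tool name,
# and arguments are assumptions carried over from the demo server above.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["slack_demo_server.py"])


async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()             # handshake: versions, capabilities
            listing = await session.list_tools()   # "show me what you've got"
            print([tool.name for tool in listing.tools])

            # The model picks a tool and calls it with structured arguments.
            result = await session.call_tool(
                "send_message",
                arguments={"channel": "general", "text": "Hello from MCP"},
            )
            print(result.content)                  # the server reports back: message sent


asyncio.run(main())
```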
If you’re familiar with how computers generally worked before AI, this might all sound like a bunch of APIs — and you might recall that web apps and services opening their APIs to one another was the underpinning of the Web 2.0 era, and eventually the enormously lucrative explosion of mobile apps in the app store era. Moving users (and their money) from apps and websites to AI agents is one of the few ways AI companies can even begin to pay off their enormous investments. But AI agents need new kinds of APIs, and MCP looks like the standard those APIs will follow. MCP’s webpage, aspirationally, likens it to the ubiquitous USB-C.
MCP started as a pet project by two Anthropic engineers, David Soria Parra and Justin Spahr-Summers. The initial goal wasn’t to build an industry-wide standard. The pair simply wanted Anthropic’s own staff to use Claude more in everyday work. They felt like something was missing in the chatbot: the ability, Soria Parra tells The Verge, to connect “to the outer world that you actually deeply care about, the things you interact with.” His initial name for the service was Claude Connect.