Despite the hype, the Model Context Protocol (MCP) isn't magic or revolutionary. But it's simple, well-timed, and well-executed. At Stainless, we're betting it's here to stay.
“MCP helps you build agents and complex workflows on top of LLMs.” If you've been paying attention, you know we've been here before: there have been numerous past attempts at connecting the world to an LLM in a structured, automatic way.
- Function/tool calling: Write a JSON schema, the model picks a function. But you had to manually wire each function per request and assume most of the responsibility for implementing retry logic (see the sketch after this list).
- ReAct / LangChain: Let the model emit an `Action:` string, then parse it yourself. Often flaky and hard to debug.
- ChatGPT plugins: Fancy, but gated. You had to host an OpenAPI server and obtain approval.
- Custom GPTs: Lower barrier to entry, but still stuck inside OpenAI's runtime.
- AutoGPT, BabyAGI: Agents with ambition, but a mess of configuration, loops, and error cascades.
Heck, even MCP itself isn't new: Anthropic released the spec in November 2024, but it only blew up in February 2025, three months later.
Interest over time for MCP in Google Trends (link)
Why is MCP taking off where previous attempts fell short?