Nobody in the MCP debate is arguing about the same thing. The people defending MCP point to Claude Desktop connecting to your Jira in two clicks, or a docs server giving an agent perfect context about a framework's API. The people criticizing it point to their agent codebase becoming an unreliable mess of JSON-RPC boilerplate. Both are right. They're describing different use cases.
MCP is genuinely good software for two specific problems: connecting GUI AI clients to external tools, and giving agents access to documentation. Where it falls apart is when developers use it for programmatic tool calling inside agent loops.
MCP's core idea is sound. Without a standard, every AI client (Claude, Cursor, whatever comes next) needs a custom integration for every tool (GitHub, Jira, Notion, your internal API). That's an N×M problem. MCP makes it N+M. One server works with every client. That's real value.
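The arithmetic is worth spelling out. A back-of-the-envelope sketch, using hypothetical counts (5 clients, 20 tools) purely for illustration:

```python
# Without a shared protocol, every (client, tool) pair needs its own
# custom integration. With a protocol like MCP, each client and each
# tool implements the protocol once. Counts here are hypothetical.
clients, tools = 5, 20

custom_integrations = clients * tools   # N x M: one build per pair
protocol_adapters = clients + tools     # N + M: one build per party

print(custom_integrations)  # 100
print(protocol_adapters)    # 25
```

The gap widens as the ecosystem grows: adding a 21st tool costs one server under MCP, versus five new custom integrations without it.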
The LSP analogy people keep reaching for is apt. The Language Server Protocol solved exactly this problem for editor integrations — one language server works in VS Code, Neovim, Emacs. Before LSP, every combination was a custom build. MCP is attempting the same thing for AI tools, and for the GUI client use case — Claude Desktop, Cursor, whatever you have running in a sidebar — it works well.
If you are a non-developer and you want your AI assistant to read your calendar, update a ticket, or query a database, MCP is the right answer. The tooling is there, the clients support it, and you don't need to write a line of code.
Documentation is where MCP quietly does its best work, and where much of the criticism misses the mark.
A docs MCP server gives an agent structured, up-to-date knowledge about a framework, API, or internal system. The agent asks for what it needs, the server returns clean markdown with code examples, type signatures, and usage patterns. Unlike shoving an entire docs site into the context window upfront, MCP lets the agent pull specific pages on demand. The context stays small. The information stays current.
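The difference is easy to see in miniature. A sketch of the two context strategies, with an in-memory dict standing in for a real docs MCP server (the topic names and page contents are invented for illustration):

```python
# Hypothetical in-memory docs store standing in for an MCP docs server.
DOCS = {
    "routing": "## Routing\nDefine routes under app/ ... (kilobytes of markdown)",
    "data":    "## Data Loading\nFetch in load() ... (kilobytes of markdown)",
    "deploy":  "## Deployment\nRun the build step ... (kilobytes of markdown)",
}

def stuff_everything() -> str:
    """Upfront approach: the entire docs site goes into the context window."""
    return "\n\n".join(DOCS.values())

def pull_on_demand(topic: str) -> str:
    """MCP-style approach: the agent requests only the page it needs."""
    return DOCS.get(topic, "Page not found")

full_context = stuff_everything()
one_page = pull_on_demand("routing")
print(len(one_page) < len(full_context))  # True: the on-demand context stays small
```

A real server would fetch pages matching the installed framework version rather than a static dict, but the shape of the interaction is the same: small, targeted requests instead of one giant upfront dump.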
This matters because LLMs are trained on a snapshot of the internet. Libraries ship breaking changes. New APIs launch. Internal tools have no public documentation at all. A docs MCP server bridges that gap in a way that's both standardized and composable — the same server works whether the agent runs in Claude Desktop, Cursor, or a custom client.
The pattern works especially well for:
Framework docs that change between versions — a Next.js or SvelteKit MCP server can serve docs matching the version in your package.json, not whatever version the LLM was trained on