The Model Context Protocol (MCP) has become the standard for tool calling when building agents, but contrary to popular belief, your LLM does not need to understand MCP. You may have heard the term "context engineering": the idea that you, as the person interacting with an LLM, are responsible for providing the right context to help it answer your questions. To gather that context, you can use tool calling to give the LLM access to a set of tools it can use to fetch information or take actions.
MCP helps by standardizing how your agent connects to these tools. But to your LLM, there’s no difference between “regular” tool calling and using a standard like MCP. It only sees a list of tool definitions; it doesn’t know or care what’s happening behind the scenes. And that’s a good thing.
By using MCP, you get access to thousands of tools without writing custom integration logic for each one. It greatly simplifies setting up an agentic loop that involves tool calling, often with almost zero development time. You, the developer, are responsible for calling the tools; the LLM only generates a snippet describing which tool(s) to call and with which input parameters.
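To make the "no difference to the LLM" point concrete, here is a minimal sketch. The tool entries are hand-written placeholders (a real MCP client would fetch them from a server, and the tool names here are hypothetical), but the key idea holds: tools discovered via MCP and tools you define locally end up in the same flat list of definitions in the prompt.

```python
# Hypothetical tool definitions "discovered" from an MCP server.
# In practice an MCP client fetches these over the protocol; the
# entries below are hand-written stand-ins for illustration.
mcp_tools = [
    {
        "name": "github_search_issues",
        "description": "Search issues in a GitHub repository.",
        "input_schema": {"type": "object", "properties": {"query": {"type": "string"}}},
    },
]

# A tool you wired up yourself, with the same definition shape.
local_tools = [
    {
        "name": "get_current_time",
        "description": "Return the current time as an ISO 8601 string.",
        "input_schema": {"type": "object", "properties": {}},
    },
]

# One uniform list goes into the prompt. The LLM cannot tell
# (and does not need to know) where each tool came from.
available_tools = mcp_tools + local_tools
for tool in available_tools:
    print(tool["name"])
```

The merged list is all the model ever sees; the MCP plumbing stays entirely on your side of the conversation.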
In this blog post, I’ll break down how tool calling works, what MCP actually does, and how both relate to context engineering.
Tool Calling
LLMs understand the concept of tool calling, sometimes also called tool use or function calling. You provide a list of tool definitions as part of your prompt. Each tool includes a name, description, and expected input parameters. Based on the question and the available tools, the LLM may generate a tool call.
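As a sketch, a single tool definition might look like the following. The exact field names vary by provider, but the common shape is a name, a description, and a JSON Schema describing the input parameters; the `get_weather` tool here is a made-up example.

```python
# A hypothetical tool definition in the common
# name / description / JSON Schema parameters shape.
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Amsterdam'"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```

The description and parameter schema are what the model actually reasons over when deciding whether, and how, to call the tool, so they are worth writing carefully.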
What is Tool Calling? Connecting LLMs to Your Data
But here’s the important part: LLMs don’t know how to use tools. They don’t have native tool calling support. They just generate text that represents a function call.
Input and output when interacting with an LLM
In the diagram above, you can see what the LLM actually sees: a prompt made up of instructions, previous user messages, and a list of available tools. Based on that, the LLM generates a text response which might include a tool that your system should call. It doesn’t understand tools in any meaningful way; it’s just making a prediction.
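The division of labor above can be sketched in a few lines. The `model_output` string is a hand-written stand-in for what a model might generate (real providers return structured tool-call objects, but it is still just generated content); everything after it is your code doing the actual work.

```python
import json

# Stand-in for the model's generated tool call: just text.
# A real API response would carry the same information in a
# structured field, but the model never executes anything itself.
model_output = '{"tool": "get_weather", "arguments": {"city": "Amsterdam"}}'

def get_weather(city: str) -> str:
    # Placeholder implementation; a real tool would call a weather API.
    return f"18°C and cloudy in {city}"

# Your system owns the mapping from tool names to actual functions.
TOOLS = {"get_weather": get_weather}

call = json.loads(model_output)                     # parse the generated text
result = TOOLS[call["tool"]](**call["arguments"])   # your code runs the tool
print(result)
```

The result would then be appended to the conversation and sent back to the model in the next request, closing the agentic loop.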