LLM function calls don't scale; code orchestration is simpler, more effective

Published on: 2025-06-28 20:18:52

20 May, 2025

TL;DR: Giving LLMs the full output of tool calls is costly and slow. Output schemas will enable us to get structured data, so we can let the LLM orchestrate processing with generated code. Tool calling in code is simpler and more effective.

One common practice when working with MCP tool calls is to feed the output of a tool back to the LLM as a message and ask the LLM for the next step. The hope is that the model figures out how to interpret the data and identifies the correct next action to take. This can work beautifully when the amount of data is small, but when we tried MCP servers with real-world data, we found that it quickly breaks down.

MCP in the real world

We use Linear and Intercom at our company. We connected to their latest official MCP servers, released last week, to understand how they return tool-call results. It turns out that both servers returned large JSON blobs ...
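To make the contrast concrete, here is a minimal sketch of the code-orchestration idea the article argues for. The payload shape, field names (`id`, `title`, `status`), and the `summarize_issues` helper are all hypothetical, not taken from the Linear or Intercom MCP servers; the point is only that orchestration code can reduce a large structured tool result to the few fields the LLM actually needs, instead of pasting the whole blob into the conversation.

```python
import json

def summarize_issues(raw_tool_output: str, max_items: int = 5) -> str:
    """Reduce a large JSON issue list to a compact summary for the LLM.

    Hypothetical example: instead of appending the raw tool output to the
    message history, orchestration code parses it and keeps only what the
    next step needs.
    """
    issues = json.loads(raw_tool_output)
    lines = [
        f"- {item['id']}: {item['title']} ({item['status']})"
        for item in issues[:max_items]
    ]
    return f"{len(issues)} issues total; first {len(lines)}:\n" + "\n".join(lines)

# Simulated tool output: 200 issues, each carrying a long description,
# standing in for the "large JSON blobs" the article describes.
blob = json.dumps([
    {"id": f"ENG-{i}", "title": f"Issue {i}", "status": "open",
     "description": "x" * 500}
    for i in range(200)
])

summary = summarize_issues(blob)
print(summary)
```

The summary here is a few hundred characters, while the raw blob is roughly 100 KB; only the former enters the LLM's context, which is the cost and latency win the TL;DR points at.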