
The Bitter Lesson of LLM Extensions


11/24/2025

Three years ago, "using an LLM" meant pasting a wall of text into a chat box and hoping for something useful back. Today, we point agents at our codebases and our browsers and let them act on our behalf. A question has been brewing under the surface the whole time: how do we let end users actually customize these systems?

As models have become more capable, the mechanisms end users have for customizing them have expanded as well. We've gone from simple system prompts to complex client-server protocols and back again.

I wanted to take a moment to reflect on the history of LLM extension over the last three years and where I see it going in the future.

ChatGPT Plugins (March 2023)

Just four months after launch, OpenAI announced ChatGPT Plugins. Looking back, these were wildly ahead of their time.

The idea was ambitious: give the LLM a link to an OpenAPI spec and let it "run wild" calling REST endpoints. It was a direct line to AGI-style thinking: universal tool use via standard APIs.

```json
{
  "schema_version": "v1",
  "name_for_human": "TODO Manager",
  "name_for_model": "todo_manager",
  "description_for_human": "Manages your TODOs!",
  "description_for_model": "An app for managing a user's TODOs",
  "api": { "url": "/openapi.json" },
  "auth": { "type": "none" },
  "logo_url": "https://example.com/logo.png",
  "legal_info_url": "http://example.com",
  "contact_email": "[email protected]"
}
```
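The manifest's `api.url` pointed at a standard OpenAPI document describing the plugin's REST endpoints, which the model would read directly. Here's a minimal sketch of what a TODO manager's `/openapi.json` could have looked like; the paths, operation IDs, and schemas below are illustrative, not taken from any actual plugin:

```json
{
  "openapi": "3.0.1",
  "info": { "title": "TODO Manager", "version": "v1" },
  "paths": {
    "/todos": {
      "get": {
        "operationId": "listTodos",
        "summary": "List the user's TODOs",
        "responses": {
          "200": {
            "description": "A list of TODO items",
            "content": {
              "application/json": {
                "schema": { "type": "array", "items": { "type": "string" } }
              }
            }
          }
        }
      },
      "post": {
        "operationId": "addTodo",
        "summary": "Add a new TODO",
        "requestBody": {
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": { "todo": { "type": "string" } }
              }
            }
          }
        },
        "responses": { "200": { "description": "OK" } }
      }
    }
  }
}
```

The model was expected to use the `operationId` and `summary` fields to pick an endpoint, then construct the HTTP request itself. That's exactly where the spec-navigation problem described below came from.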

The problem? The models weren't ready. GPT-3.5 (and even early GPT-4) struggled to navigate massive API specs without hallucinating or getting lost in context. Plus, the UX was clunky. You had to manually toggle plugins for every chat!

Here's what that looked like:
