Tech News

Optimizing Content for Agents

Why This Matters

Optimizing content for AI agents matters because it lets them access and interpret information efficiently, which improves both their performance and the experience of the people relying on them. The approach is to align how content is presented with how agents actually process data, making interactions more effective for developers and consumers alike.

Key Takeaways

"Just as useless of an idea as LLMs.txt was. It's all dumb abstractions that AI doesn't need, because AIs are as smart as humans, so they can just use what was already there, which is APIs."

LLMs.txt is indeed useless, but that's the only thing correct in this statement. I'm here once again being rage-baited into addressing brainless takes on social media. This one is about content optimization.

Short and to the point: you should be optimizing content for agents, just as you optimize things for people. How you do that is an ever-evolving subject, but there are some common things we see:

order of content

content size

depth of nodes

Frontier models and the agents built on top of them all behave similarly, with similar constraints and optimizations. For example, one thing they’re known to do, to avoid context bloat, is to only read parts of files. The first N lines, or bytes, or characters. They’re also known to behave very differently when they’re told information exists somewhere vs. having to discover it on their own. Both of those concerns are actually why LLMs.txt was a valuable idea, but it was the wrong implementation.
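A minimal sketch of why this matters: an agent trying to avoid context bloat reads something like the following (the function name and byte limit here are illustrative, not any specific agent's implementation), so anything you put below its cutoff simply never gets seen. Front-load the important content.

```python
def read_head(path: str, max_bytes: int = 4096) -> str:
    """Read only the first max_bytes of a file, as a context-frugal agent might.

    Everything past the cutoff is invisible to the model, which is why
    document order matters so much for agent-facing content.
    """
    with open(path, "rb") as f:
        return f.read(max_bytes).decode("utf-8", errors="replace")
```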

The implementation today is simple: content negotiation. When a request comes in with Accept: text/markdown, you can confidently assume you have an agent. That's your hook, and from there it's up to you how you optimize. I'll keep this brief and just give a few examples of how we do that at Sentry.
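The detection side of that hook is just parsing the Accept header. A sketch (the helper name is mine, not Sentry's; real servers should also weigh q-values and wildcards):

```python
def wants_markdown(accept_header: str) -> bool:
    """Return True when the client's Accept header asks for text/markdown."""
    # Split "text/markdown;q=0.9, text/html" into bare media types,
    # dropping parameters like q-values for this simple check.
    types = [part.split(";")[0].strip().lower() for part in accept_header.split(",")]
    return "text/markdown" in types
```

Once this returns True, you route the request to the markdown rendering path instead of the browser-facing HTML.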

We’ve put a bunch of time into optimizing our docs for agents, for obvious reasons. The primary optimizations are mostly simple:

Serve true markdown content - massive tokenization savings as well as improved accuracy

Strip out things that only make sense in the context of the browser, especially navigation and JavaScript bits

Optimize various pages to focus more on link hierarchy - our index, for example, is mostly a sitemap, completely different from the non-markdown version
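The second point can be sketched roughly like this (assuming regex-level stripping is good enough for illustration; a real pipeline would use an HTML parser or render markdown from source):

```python
import re

def strip_browser_chrome(html: str) -> str:
    """Remove elements that only make sense in a browser.

    A rough sketch: drops <script>, <style>, and <nav> blocks wholesale
    before the remaining HTML is converted to markdown.
    """
    for tag in ("script", "style", "nav"):
        html = re.sub(rf"<{tag}\b.*?</{tag}>", "", html, flags=re.S | re.I)
    return html
```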
