## **Human search and AI search solve different problems**
Traditional search engines were built for humans. They rank URLs, assuming someone will click through and navigate to a page; the search engine's job ends at the link. The system optimizes for keyword searches, click-through rates, and page layouts designed for browsing, all delivered in milliseconds and as cheaply as possible.
The first wave of web search APIs used in AI-based search made this human search paradigm programmatically accessible, but it failed to solve the underlying problem: how to design search for an AI agent's needs.
AI search has to solve a different problem: **what tokens should go in an agent's context window to help it complete the task? We're not ranking URLs for humans to click; we're optimizing context and tokens for models to reason over.**
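To make that objective concrete, here is a minimal sketch of context packing under a token budget: given candidate excerpts with relevance scores, greedily keep the ones that deliver the most relevance per token. The `Excerpt` shape, the scores, and the budget value are assumptions for illustration, not part of any specific API.

```python
from dataclasses import dataclass

@dataclass
class Excerpt:
    url: str
    text: str
    tokens: int       # token count of the excerpt
    relevance: float  # relevance to the agent's objective, higher is better

def pack_context(excerpts: list[Excerpt], budget_tokens: int) -> list[Excerpt]:
    """Greedily fill the agent's context window with the highest-signal excerpts.

    This is the AI-search objective in miniature: maximize useful information
    per token spent, rather than ranking whole pages for a human to click.
    """
    chosen: list[Excerpt] = []
    used = 0
    # Prefer excerpts with the most relevance per token, not per click.
    for ex in sorted(excerpts, key=lambda e: e.relevance / max(e.tokens, 1), reverse=True):
        if used + ex.tokens <= budget_tokens:
            chosen.append(ex)
            used += ex.tokens
    return chosen
```

A production ranker would be far more sophisticated than a greedy relevance-per-token sort, but the objective it optimizes is the same.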
This requires a fundamentally different search architecture:
- **Semantic objectives** that capture intent beyond keyword matching, so agents can specify what they need to accomplish rather than guessing at search terms
- **Token-relevance ranking** to prioritize webpages most directly relevant to the objective, not pages optimized for human engagement metrics
- **Information-dense excerpts** compressed and prioritized for reasoning quality, so LLMs have the highest-signal tokens in their context window
- **Single-call resolution** for complex queries that normally require multiple search hops
With this search architecture built from the ground up for AIs, agents get access to the most information-dense web tokens in their context. The result is fewer search calls, higher accuracy, lower cost, and lower end-to-end latency.
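To show how the four properties above might surface at the API boundary, here is a hypothetical request/response shape: the agent states a semantic objective and a token budget, and gets back relevance-ranked, pre-compressed excerpts in a single call. Every field name, URL, and value below is a placeholder assumption, not a real endpoint or real data.

```python
import json

# Hypothetical request: the agent states a semantic objective and a token
# budget, rather than guessing at keywords. Field names are illustrative only.
request = {
    "objective": "Find primary sources comparing the approaches described in the task",
    "max_context_tokens": 2000,  # budget the caller is willing to spend
    "num_results": 5,
}

# Hypothetical response: excerpts are ranked by relevance to the objective and
# already compressed, so a single call can resolve what would otherwise take
# several keyword searches plus page fetches.
response = {
    "results": [
        {
            "url": "https://example.com/some-source",
            "excerpt": "...the passage most relevant to the objective...",
            "relevance": 0.93,
            "tokens": 180,
        },
        # ...more excerpts, ordered by relevance per token...
    ]
}

# The agent can drop the returned excerpts straight into its context window.
print(json.dumps(request, indent=2))
```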