Unless someone wrote an article about that exact thing, a plain full-text search engine cannot answer a question like this: what animal is featured on the flag of a country where the first small British colony was established in the same year that Sweden's King Gustav IV Adolf declared war on France? But ChatGPT got the correct answer in a few seconds: the flag of Dominica features the Sisserou parrot, which is found only in Dominica, and Great Britain established a small colony on the island in 1805. Google's AI widget failed miserably, by the way.

One of the best applications of modern LLM-based AI is surfacing answers from the chaos of the internet. Its success can be partly attributed to our failure to build systems that organize information well in the first place.

This product pattern is not new. Take Google Drive: a glorified file system in the cloud with folders and files, yet it offers a worse experience than almost any desktop file manager of the last 30 years. Organizing your stuff there is hard and tedious. So Google took a shortcut: full-text search. Just dump everything in and type to find it later.

The pattern of giving up on structure and relying on search has quietly become the dominant paradigm. "Search" here is a broad term: it can mean classic text matching across indexed data, or complex multi-dimensional token matching across unwieldy models and weights. Why build a well-organized e-commerce site when you can just add a search bar and oversaturate each item's page with keywords? Why write high-quality user documentation when you can just add a support chatbot?

Remember the Semantic Web? The web was supposed to evolve into semantically structured, linked, machine-readable data that would enable amazing opportunities. That never happened. Not only does data remain unstructured and lacking metadata, but even the representation of that unstructured data became harder for machines to read, thanks to the switch from plain, somewhat-structured HTML to JS-driven dynamic piles of divs.

We also never achieved truly personal computing. Computers could have been personal knowledge bases with structured semantic connections, akin to HyperCard, building on the semantic web and open standards.

My point is that if all knowledge were stored in a structured way with rich semantic linking, then very primitive natural language processing algorithms could parse a question like the one at the beginning of this article and find the answer using orders of magnitude fewer computational resources (see the sketch at the end of this piece). And most importantly: the knowledge and the connections would remain accessible and comprehensible, not hidden within impenetrable AI models.

AI is not a triumph of elegant design but a brute-force workaround. LLMs like ChatGPT can infer structure from chaos. They scan the unstructured web and build ephemeral semantic maps across everything. It's not knowledge in the classic sense... or perhaps it is exactly what knowledge is?
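To make that concrete, here is a minimal sketch in Python of what answering the opening question could look like over structured data. The triples, the predicate names, and the little query helper are all invented for illustration; a real system would sit on top of something like Wikidata and RDF rather than a hand-written dictionary.

```python
# A toy "knowledge base" of hand-written triples standing in for what a
# semantic web could provide. All entity and predicate names are made up.
facts = {
    ("Sweden", "declared_war_on", "France"): {"year": 1805},
    ("Great Britain", "established_colony_in", "Dominica"): {"year": 1805},
    ("Dominica", "flag_features", "Sisserou parrot"): {},
}

def query(subject=None, predicate=None, obj=None, **attrs):
    """Yield every (subject, predicate, object, attributes) tuple that
    matches the given pattern; None acts as a wildcard."""
    for (s, p, o), a in facts.items():
        if subject is not None and s != subject:
            continue
        if predicate is not None and p != predicate:
            continue
        if obj is not None and o != obj:
            continue
        if any(a.get(k) != v for k, v in attrs.items()):
            continue
        yield (s, p, o, a)

# Step 1: in which year did Sweden declare war on France?
(_, _, _, attrs), = query("Sweden", "declared_war_on", "France")
year = attrs["year"]

# Step 2: which country did Great Britain colonize in that year?
(_, _, country, _), = query("Great Britain", "established_colony_in", year=year)

# Step 3: what animal is featured on that country's flag?
(_, _, animal, _), = query(country, "flag_features")

print(animal)  # -> Sisserou parrot
```

The point is not the code but the shape of the work: three trivial lookups over explicit links, instead of a brute-force pass through opaque weights.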