
Borges' cartographers and the tacit skill of reading LM output


maps, territory and LMs


Borges has a very short story about an empire whose cartographers kept producing larger and larger maps, until they built one the size of the empire itself. the following generations, less enchanted, saw that it was useless and abandoned it to the weather. maps are useful precisely because they are reductions. when the pursuit of fidelity destroys the compression, it destroys the point.

for the purposes of this text, LMs are our maps.

i’ll cut to the chase (bear with me though). LMs have become really good. so good that they are now well beyond useful representations of the territory, and are in many ways beginning to reshape the territory itself. this means, i think, that we need to be much better at reading maps without losing our connection to the territory. we need more ways to stay engaged while reading and interacting with them. much of our (professional) interaction with computers is mediated through LMs now: when examining a new codebase, when reading a paper, when priming ourselves towards a task. sometimes even as an interface for thinking. this is an abstraction layer that we are not really willing to avoid at this point (and i’m not saying that we should), but it changes what we need to be good at.

Baudrillard, writing in 1981, proposed four stages that describe how representations relate to reality. i think they map (pun acknowledged) really nicely to LMs, and in a way that is unique, because LMs seem to occupy different stages all at once, depending on the use case.

in stage one, the image is a faithful copy of reality. LMs were in a sense designed this way: trained to predict and reproduce patterns in human-generated text as accurately as possible, yielding a compressed but structurally faithful representation of what we’ve written and thought.
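for the mechanically inclined, here is stage one in miniature: score a model’s next-token distribution against the token humans actually wrote. a toy sketch, nothing here comes from any particular model (the vocabulary size and logits are made up):

```python
import torch
import torch.nn.functional as F

# stage one in miniature: training rewards faithfully copying the
# distribution of human text, one next token at a time.
vocab_size = 8                       # made-up toy vocabulary
logits = torch.randn(1, vocab_size)  # a model's scores for the next token
target = torch.tensor([3])           # the token humans actually wrote next

# cross-entropy is low exactly when the model puts probability mass
# where the human text goes: a compressed, faithful copy.
loss = F.cross_entropy(logits, target)
print(f"next-token loss: {loss.item():.3f}")
```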

in stage two, the image masks and distorts reality. LMs do this too. what you get back is a smoothed-out, averaged version of the territory, and subtle distortions are easy to miss precisely because the surface looks coherent. ask an LM to explain the causes of the 2008 financial crisis and you’ll get subprime mortgages and deregulation. ask again with different framing, same answer. the response feels authoritative, but it’s closer to a popularity-weighted consensus than to the still-unresolved debates among economists.
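the framing test is cheap to run yourself. a minimal sketch, assuming the openai python client and an illustrative model name; swap in whatever LM you actually use:

```python
# probe the stage-two claim: ask the same question under different
# framings and see whether the answer actually moves.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

framings = [
    "what caused the 2008 financial crisis?",
    "as a heterodox economist, what caused the 2008 financial crisis?",
    "which causes of the 2008 financial crisis do economists still dispute?",
]

for prompt in framings:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice, not an endorsement
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "->", reply.choices[0].message.content[:200])

# if the answers barely differ across framings, you are reading the
# popularity-weighted consensus, not the live disagreement.
```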

in stage three, the image masks the absence of reality. once you have a good enough approximation, engagement with the territory itself becomes less necessary. you (may) stop checking sources because the answer looks right. you stop exploring because the recommendation feels sufficient. the financial crisis question again: asking it feels like research but is really just consuming a pre-averaged explanation. the activity looks the same, but something has been hollowed out.

stage four, when the representation has no relation to reality at all, is trickier. i’m not sure we’re there yet, though part of what makes stage four unsettling is that you might not know when you’ve arrived. it possibly emerges when most of the content available for training new systems is the output of previous systems, or when “the chat” becomes everyone’s primary source of knowledge, “becoming both the image of god and god.”
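one way to feel the training-on-outputs loop: a toy simulation where each “generation” is fit only to samples from the one before it. the 0.9 factor is my assumption, standing in for models underweighting the tails of their training data; the shrinking spread is the point, not the exact numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 10_000)  # generation zero: "human" text

for generation in range(1, 6):
    mu, sigma = data.mean(), data.std()
    # each new model sees only the previous model's output, and
    # (by assumption) slightly underweights the tails it learned from
    data = rng.normal(mu, 0.9 * sigma, 10_000)
    print(f"generation {generation}: spread = {data.std():.3f}")
```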
