HTTP caching never quite made sense to me until AI tools made it legible enough to actually implement. And the reason it finally mattered: the audience had quietly changed.
I have a jar of screws on my workbench. For years, I would fish through it looking for the right size, usually not finding it.
Last week I sorted them, by type, by thread, by length. I used ChatGPT to help: photographed a handful, asked what I was looking at, got the taxonomy straight. Once I could name them, I could organise them. You can only sort what you understand.
HTTP caching was my jar of screws.
Thirty years of fog
I have been building for the web since the early nineties. Caching was always there, somewhere in the background, doing something. I knew enough to be aware of it, not enough to actually control it. Cache-Control headers, TTL values, edge behaviour, the difference between what a CDN caches and what a browser holds, what gets invalidated when and why. Every time I approached it seriously, I ran into a wall of context I did not quite have.
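That browser-versus-CDN split, for instance, comes down to a pair of directives in the same header. The values below are purely illustrative, not a recommendation:

```
Cache-Control: public, max-age=300, s-maxage=86400
```

Here `max-age=300` tells a browser it may reuse the response for five minutes, while `s-maxage=86400` tells a shared cache, such as a CDN edge, it may hold the same response for a day. Two audiences, one header, and for years I could not have told you which number governed which.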
The documentation exists. The concepts are not secret. But caching is one of those domains where the gap between understanding the vocabulary and being able to apply it correctly is surprisingly wide. I would read, nod, implement something plausible, and move on with lingering doubt.
This year, working with Claude, that changed. The parallel is closer than it sounds. I had the pieces in front of me for years. What I was missing was someone to explain what I was looking at.
New instruments
We went through the whole thing together. What my Cloudflare Workers were actually doing. What headers were being sent and why. What a browser would cache versus what the edge would cache. Where the inconsistencies were. What a coherent strategy would look like for a site like mine: a moderate personal blog with a global readership, running on Ghost, served through Cloudflare.
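The shape of what we arrived at can be sketched as a split policy: long-lived, aggressive caching for fingerprinted assets, short edge TTLs with browser revalidation for HTML. This is an illustrative sketch, not the exact policy my site uses; the function name (`cachePolicyFor`) and all TTL values are my own placeholders:

```javascript
// Hypothetical sketch of a split-TTL cache policy for a blog behind a CDN.
// The helper name and the numbers are illustrative assumptions.
function cachePolicyFor(path) {
  // Static assets (ideally fingerprinted): safe to cache aggressively
  // everywhere, browser and edge alike.
  if (/\.(css|js|woff2|png|jpg|svg)$/.test(path)) {
    return "public, max-age=31536000, immutable";
  }
  // HTML: let the edge hold it for an hour (s-maxage), but keep browsers
  // revalidating (max-age=0) so a republished post shows up promptly.
  return "public, max-age=0, s-maxage=3600, stale-while-revalidate=60";
}
```

In a Cloudflare Worker, a function like this would feed the `Cache-Control` header on outgoing responses; the point of the split is that invalidating the edge is something you control, while a year-long browser cache is something you cannot claw back.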