
LLM=True


Hi HN! Discussion thread here.

Disclaimer: this post has been written without AI. (Oh how the turns have tabled… (╯°□°)╯︵ ┻━┻ )

Your Best Friend

AI coding agents are like dogs: our best friends! I have lots of them. I go for walks with them every day and try to get them to perform neat tricks. However, sometimes they misbehave and don't do the tricks we want them to do. This bad behaviour often comes from distractions in the environment around them. After all, our dogs perform best when they are hyper-focused on their task, without distractions. That's when they really shine ✨🐶✨.

The TURBO Case

When working with Claude Code, keeping an eye on your context window is a must. Seeing those context windows fill up pretty quickly, you start to realize this needs to be optimized so you can have longer/better sessions (with fewer distractions, a.k.a. context rot). Then a brick hits you in the face when it dawns on you that all of our tools are dumping crazy amounts of irrelevant context into stdout, thereby polluting your context window. Let's illustrate this with a very specific use case.

We have a TypeScript monorepo which uses turbo for build management. It works fine. However, it used to dump all of its build output to stdout for each individual package. Here is a small snippet of the output of a single npm run build command (package names replaced):
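The original snippet was lost in extraction, but a representative (abridged, hypothetical package names) turbo log looks roughly like this, matching the three sections discussed below:

```text
• Packages in scope: @acme/app, @acme/ui, @acme/utils
• Running build in 3 packages
• Remote caching disabled
@acme/utils:build: > @acme/utils@0.0.1 build
@acme/utils:build: > tsc
@acme/ui:build: > @acme/ui@0.0.1 build
@acme/ui:build: > tsc
@acme/app:build: > @acme/app@0.0.1 build
@acme/app:build: > tsc

 Tasks:    3 successful, 3 total
Cached:    3 cached, 3 total
  Time:    1.2s >>> FULL TURBO
```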

The build output totals 1,005 words (roughly 750 tokens). All completely irrelevant for an LLM (if the build passes, but I'll come back to this). The turbo output contains 3 different sections which you want to stuff into a black hole:

- the update block
- the packages being built
- the build output for each package

So you start to optimize…
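As a first step, turbo itself can be told to stay quiet unless something fails. A minimal sketch (assuming a turbo 2.x turbo.json; in older versions the key is "pipeline"/"outputMode" instead of "tasks"/"outputLogs"):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "outputLogs": "errors-only"
    }
  }
}
```

The same effect is available ad hoc on the CLI via turbo run build --output-logs=errors-only, so a passing build contributes almost nothing to stdout.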
