
A Claude Code skill that makes Claude talk like a caveman, cutting token use

Why This Matters

The Caveman Claude skill reduces token usage by transforming verbose technical explanations into terse, caveman-style speech, cutting roughly 75% of tokens without sacrificing technical accuracy. For developers, that means cheaper and faster interactions with the model for the same answers.

Key Takeaways

caveman

why use many token when few do trick


A Claude Code skill/plugin and Codex plugin that makes agent talk like caveman — cutting ~75% of tokens while keeping full technical accuracy.

Based on the viral observation that caveman-speak dramatically reduces LLM token usage without losing technical substance. So we made it a one-line install.
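The article excerpt doesn't show the skill's actual contents, but Claude Code skills are plain markdown files with YAML frontmatter. A plausible sketch of what such a skill could look like (the file contents and wording here are illustrative guesses, not the published skill):

```markdown
---
name: caveman
description: Respond in terse caveman-speak while keeping full technical accuracy
---

When answering, drop filler words, articles, and pleasantries.
Keep every technical term, identifier, number, and fix intact.

Example transformation:
  Before: "You should memoize this object with useMemo to avoid re-renders."
  After:  "Wrap object in useMemo. No re-render."
```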

Before / After

🗣️ Normal Claude (69 tokens)

"The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object."

🪨 Caveman Claude (19 tokens)

"New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo."

🗣️ Normal Claude

"Sure! I'd be happy to help you with that. The issue you're experiencing is most likely caused by your authentication middleware not properly validating the token expiry. Let me take a look and suggest a fix."

🪨 Caveman Claude

"Bug in auth middleware. Token expiry check use < not <=. Fix:"

Same fix. 75% less word. Brain still big.
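The claimed savings can be sanity-checked with a rough word-count proxy. This uses whitespace splitting, not Claude's actual tokenizer, so the exact figures (69 vs. 19 tokens in the README) won't match, but the ratio lands in the same ballpark:

```python
# Crude proxy for the token comparison above: whitespace word counts,
# not real tokenizer output. Texts are the first before/after pair.

normal = (
    "The reason your React component is re-rendering is likely because "
    "you're creating a new object reference on each render cycle. When you "
    "pass an inline object as a prop, React's shallow comparison sees it as "
    "a different object every time, which triggers a re-render. I'd "
    "recommend using useMemo to memoize the object."
)
caveman = (
    "New object ref each render. Inline object prop = new ref = re-render. "
    "Wrap in useMemo."
)

ratio = len(caveman.split()) / len(normal.split())
print(f"caveman uses about {ratio:.0%} of the words")
```

With real tokenizers the savings vary by model, but any reasonable counting method shows the caveman version well under half the length.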

Install

npx skills add JuliusBrussee/caveman
