Tech News

Talk like caveman

Why This Matters

This 'caveman' mode for Claude cuts token usage by roughly 75%, enabling cheaper, more efficient interactions without sacrificing technical accuracy. It installs with a single command and can be toggled on or off, making it accessible for developers and users who want to optimize their language model usage in technical contexts.

Key Takeaways

caveman

why use many token when few token do trick


A Claude Code skill that makes Claude talk like a caveman — cutting ~75% of tokens while keeping full technical accuracy.

Based on the viral observation that caveman-speak dramatically reduces LLM token usage without losing technical substance. So we made it a one-line install.

Before / After

🗣️ Normal Claude (69 tokens)

"The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object."

🪨 Caveman Claude (19 tokens)

"New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo."

🗣️ Normal Claude

"Sure! I'd be happy to help you with that. The issue you're experiencing is most likely caused by your authentication middleware not properly validating the token expiry. Let me take a look and suggest a fix."

🪨 Caveman Claude

"Bug in auth middleware. Token expiry check use < not <=. Fix:"
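The first answer boils down to reference equality: an inline object prop is a fresh reference on every render, so React's shallow comparison always sees a change. A minimal sketch of the caching behavior useMemo relies on — recompute only when the dependency list changes, otherwise return the same reference — using a hypothetical `makeMemo` helper for illustration (not React's actual implementation):

```typescript
// Sketch of useMemo-style caching: re-run the factory only when the
// dependency list changes; otherwise hand back the cached reference,
// so a shallow (===) prop comparison sees no change.
function makeMemo<T>() {
  let lastDeps: unknown[] | null = null;
  let lastValue!: T;
  return (factory: () => T, deps: unknown[]): T => {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => d !== lastDeps![i]);
    if (changed) {
      lastValue = factory();
      lastDeps = deps;
    }
    return lastValue;
  };
}

const memo = makeMemo<{ theme: string }>();

// Same deps on consecutive "renders" → the exact same object reference.
const first = memo(() => ({ theme: "dark" }), ["dark"]);
const second = memo(() => ({ theme: "dark" }), ["dark"]);
console.log(first === second); // true

// Changed deps → factory re-runs, producing a new reference.
const third = memo(() => ({ theme: "light" }), ["light"]);
console.log(first === third); // false
```

An inline `{ theme: "dark" }` prop is the equivalent of calling the factory on every render, which is exactly what triggers the re-renders.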

Same fix. 75% less word. Brain still big.
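The savings can be sanity-checked with a rough word count over the quotes above (a real BPE tokenizer counts subword tokens and will give different absolute numbers, but the ratio comes out similar):

```typescript
// Rough token estimate via whitespace splitting. Quotes taken from the
// Before/After example; the exact percentage depends on the tokenizer.
const normal =
  "The reason your React component is re-rendering is likely because you're " +
  "creating a new object reference on each render cycle. When you pass an " +
  "inline object as a prop, React's shallow comparison sees it as a different " +
  "object every time, which triggers a re-render. I'd recommend using useMemo " +
  "to memoize the object.";
const caveman =
  "New object ref each render. Inline object prop = new ref = re-render. " +
  "Wrap in useMemo.";

const count = (s: string) => s.split(/\s+/).filter(Boolean).length;
const saving = 1 - count(caveman) / count(normal);
console.log(
  `${count(normal)} vs ${count(caveman)} words, ~${Math.round(saving * 100)}% saved`
);
```

Word counts understate the gap slightly versus token counts (caveman-speak also drops filler punctuation and hedging phrases), which is how the headline figure lands near 75%.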

Install

npx skills add JuliusBrussee/caveman
