
Human typing habits and token counts

Why This Matters

This article highlights how human typing habits—such as typos, shorthand, and filler words—can significantly impact token counts in AI prompts, affecting costs and processing efficiency. Understanding these nuances helps users optimize their interactions with AI models, potentially reducing expenses and improving response accuracy.


May 8, 2026 | Reading Time: 3 min

Humans type for speed, tone, and habit. Tokenizers split text based on common patterns, and providers bill per token. That means ordinary habits like typos, shorthand, filler words, pasted IDs, and stray whitespace can change token counts without changing intent much.

I started noticing this on a tiny prompt: 5 words, 2 spelling mistakes, 13 tokens. I fixed the spelling and sent it again: 6 tokens, including the full stop.

Counts below use OpenAI’s tokenizer and Claude’s API-based token counter. In my usage, Claude generally produces more tokens than OpenAI on the same text. Counts here are for isolated strings; in real prompts they can shift slightly with surrounding spaces, punctuation, and casing.

Typos

Swapped letters, dropped letters, doubled letters, nearby-key misses: all normal typing habits, all billable.

template → 1, tempalte → 3

loaded → 1, lodaed → 2 (Claude: 3)

assistant → 1, assitant → 2 (Claude: 3)

simple → 1, simpel → 2
