
Without benchmarking LLMs, you're likely overpaying 5-10x


Jan 20, 2026

Last month I helped a friend cut his LLM API bill by 80%.

He's a non-technical founder building an AI-powered business. Like most people, he picked GPT-5 because it's the default: you already have the API, it has solid benchmarks, everyone uses it, so why bother looking further?

But as usage grew, so did his bill. $1,500/month for API calls alone.

So we benchmarked his actual prompts against 100+ models and quickly realized that while GPT-5 is a solid choice, it's almost never the cheapest, and there's usually a cheaper option with comparable quality. Figuring out which of those options fit his prompts saved him thousands of dollars. Here's how we did it.

The Problem: Benchmarks don't predict performance on your task

When picking an LLM, most people just choose a model from their favorite provider. For me, that's Anthropic, so depending on the task, I pick Opus, Sonnet, or Haiku. If you're sophisticated, you might check Artificial Analysis, or LM Arena, or whatever benchmark seems relevant: GPQA Diamond, AIME, SWE Bench, MATH 500, Humanity's Last Exam, ARC-AGI, MMLU...

But let's not fool ourselves here: none of these predict performance on your specific task. A model that tops reasoning benchmarks might be mediocre at damage cost estimation. Or customer support in your customers' native language. Or data extraction via Playwright. Or whatever you're actually building.

At best, they're a rough indicator of performance. And they do not account for costs at all.

The only way to know is to test on your actual prompts, then decide based on quality, cost, and latency.
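To make that concrete, here's a minimal sketch of what "testing on your actual prompts" can look like: send the same prompt to a handful of candidate models through an OpenAI-compatible endpoint (OpenRouter is shown here as one option), time each call, and estimate cost from the reported token usage. The model names and per-million-token prices below are placeholders, not recommendations; substitute your own prompts, models, and current pricing.

```python
import time
from openai import OpenAI

# Any OpenAI-compatible endpoint works; OpenRouter is one way to reach many models.
client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_API_KEY")

# Placeholder (input_price, output_price) in USD per million tokens -- check real pricing.
MODELS = {
    "openai/gpt-5": (1.25, 10.00),
    "anthropic/claude-haiku-4.5": (1.00, 5.00),
    "google/gemini-2.5-flash": (0.30, 2.50),
}

# Use one of your real production prompts here, not a synthetic benchmark question.
PROMPT = "Extract the total damage cost from this claim: ..."

for model, (in_price, out_price) in MODELS.items():
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    latency = time.perf_counter() - start

    # Estimate per-call cost from token usage and the price table above.
    usage = resp.usage
    cost = (usage.prompt_tokens * in_price + usage.completion_tokens * out_price) / 1_000_000

    answer = resp.choices[0].message.content
    print(f"{model}: {latency:.2f}s, ${cost:.6f}, answer={answer[:80]!r}")
```

Run this over a representative sample of your prompts (not just one), eyeball or score the answers for quality, and the cheapest model that still meets your quality bar usually becomes obvious.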
