A comprehensive new study has revealed that open-source artificial intelligence models consume significantly more computing resources than their closed-source competitors when performing identical tasks, potentially undermining their cost advantages and reshaping how enterprises evaluate AI deployment strategies.
The research, conducted by AI firm Nous Research, found that open-weight models use 1.5 to 4 times more tokens — the basic units of AI computation — than closed models like those from OpenAI and Anthropic. For simple knowledge questions, the gap widened dramatically, with some open models using up to 10 times more tokens.
"Measuring Thinking Efficiency in Reasoning Models: The Missing Benchmark https://t.co/b1e1rJx6vZ
We measured token usage across reasoning models: open models output 1.5-4x more tokens than closed models on identical tasks, but with huge variance depending on task type (up to…" — Nous Research (@NousResearch), August 14, 2025
“Open weight models use 1.5–4× more tokens than closed ones (up to 10× for simple knowledge questions), making them sometimes more expensive per query despite lower per‑token costs,” the researchers wrote in their report published Wednesday.
The findings challenge a prevailing assumption in the AI industry that open-source models offer clear economic advantages over proprietary alternatives. While open-source models typically cost less per token to run, the study suggests this advantage can be “easily offset if they require more tokens to reason about a given problem.”
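To see how a lower per-token price can be erased by higher token usage, here is a minimal back-of-the-envelope sketch in Python. The prices and token counts are hypothetical placeholders chosen only to illustrate the arithmetic; they are not figures from the Nous Research report.

```python
# Sketch: effective per-query cost = price per token * tokens consumed.
# All prices and token counts are hypothetical, for illustration only.

def cost_per_query(price_per_million_tokens: float, tokens_used: int) -> float:
    """Return the dollar cost of a single query."""
    return price_per_million_tokens * tokens_used / 1_000_000

# Hypothetical case: an open-weight model priced at $0.50 per million
# output tokens, but consuming 4x the tokens of a closed model at $2.00.
closed_cost = cost_per_query(price_per_million_tokens=2.00, tokens_used=1_000)
open_cost = cost_per_query(price_per_million_tokens=0.50, tokens_used=4_000)

print(f"closed model: ${closed_cost:.4f} per query")  # $0.0020
print(f"open weights: ${open_cost:.4f} per query")    # $0.0020, advantage erased
```

In this toy scenario, a 4x token multiplier means any per-token discount smaller than 75% still leaves the open model more expensive per query.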