ZDNET's key takeaways
MIT estimated the computing power for 809 large language models.
Total compute affected AI accuracy more than any algorithmic tricks.
Computing power will continue to dominate AI development.
It's well known that artificial intelligence models such as GPT-5.2 improve their benchmark scores as more compute is added -- a phenomenon known as "scaling laws," the AI rule of thumb that accuracy improves in proportion to computing power.
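The rule of thumb is commonly expressed as a power law in compute. The sketch below is a minimal illustration of that idea, assuming a simple form error ≈ a · C^(-b) with made-up coefficients; none of the numbers come from the MIT study.

```python
def scaling_law_error(compute: float, a: float = 1.0, b: float = 0.3) -> float:
    """Predicted error under a hypothetical power-law scaling assumption.

    compute: total training compute (e.g., FLOPs); a, b: illustrative
    coefficients, not fitted to any real model data.
    """
    return a * compute ** (-b)

# Under a power law, doubling compute shrinks the predicted error by a
# constant factor of 2**(-b), independent of the scale you start from.
e1 = scaling_law_error(1e21)
e2 = scaling_law_error(2e21)
ratio = e2 / e1  # equals 2 ** -0.3 for any value of a
```

The constant-ratio property is what makes power-law scaling easy to spot on a log-log plot: accuracy versus compute falls on a straight line.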
But how much effect does computing power have relative to the other things that OpenAI, Google, and their rivals bring -- such as better algorithms or different data?