Artificial intelligence may have impressive inferencing powers, but don't count on it to have anything close to human reasoning powers anytime soon. The march to so-called artificial general intelligence (AGI) -- AI capable of applying reasoning across changing tasks and environments the way humans do -- is still a long way off. Large reasoning models (LRMs), while not perfect, do offer a tentative step in that direction.
In other words, don't count on your meal-prep service robot to react appropriately to a kitchen fire or a pet jumping on the table and slurping up food.
Also: Meta's new AI lab aims to deliver 'personal superintelligence for everyone' - whatever that means
The holy grail of AI has long been to think and reason as humanly as possible -- and industry leaders and experts agree that we still have a long way to go before we reach such intelligence. Today's large language models (LLMs) and their slightly more advanced LRM offspring operate on predictive analytics drawn from data patterns, not complex human-like reasoning.
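To make that distinction concrete, here is a minimal, illustrative Python sketch -- a toy bigram model, not how any production LLM actually works -- of pattern-based next-token prediction. The "model" simply emits whichever token most often followed the current one in its training data, with no understanding behind the words:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each token follows each other token (a bigram table).
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the statistically most likely next token -- pure pattern matching."""
    followers = next_counts.get(token)
    if not followers:
        return "<end>"
    return followers.most_common(1)[0][0]

# Generate text by repeatedly predicting the next token.
token, output = "the", ["the"]
for _ in range(5):
    token = predict_next(token)
    output.append(token)

print(" ".join(output))  # "the cat sat on the cat" -- fluent-looking, but mindless
```

The output is statistically plausible yet meaningless, which is the gap critics point to: scale and post-training make real LLMs vastly more capable than this toy, but the underlying mechanism remains prediction from patterns.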
Nevertheless, the chatter around AGI and LRMs keeps growing, and the hype has inevitably far outpaced the technology actually available.
"We're currently in the middle of an AI success theatre plague," said Robert Blumofe, chief technology officer and executive VP at Akamai. "There's an illusion of progress created by headline-grabbing demos, anecdotal wins, and exaggerated capabilities. In reality, truly intelligent, thinking AI is a long ways away."
A recent paper written by Apple researchers downplayed LRMs' readiness. The researchers concluded that LRMs, as they currently stand, aren't really conducting much reasoning above and beyond the standard LLMs now in widespread use. (My ZDNET colleagues Lester Mapp and Sabrina Ortiz provide excellent overviews of the paper's findings.)
Also: Apple's 'The Illusion of Thinking' is shocking - but here's what it missed
LRMs are "derived from LLMs during the post-training phase, as seen in models like DeepSeek-R1," said Xuedong Huang, chief technology officer at Zoom. "The current generation of LRMs optimizes only for the final answer, not the reasoning process itself, which can lead to flawed or hallucinated intermediate steps."