
Sutskever and LeCun: Scaling LLMs Won't Yield More Useful Results


When two of the most influential people in AI both say that today’s large language models are hitting their limits, it’s worth paying attention.

In a recent long-form interview, Ilya Sutskever – co-founder of OpenAI and now head of Safe Superintelligence Inc. – argued that the industry is moving from an “age of scaling” to an “age of research”. At the same time, Yann LeCun, VP & Chief AI Scientist at Meta, has been loudly insisting that LLMs are not the future of AI at all and that we need a completely different path based on “world models” and architectures like JEPA.

As developers and founders, we’re building products right in the middle of that shift.

This article breaks down Sutskever’s and LeCun’s viewpoints and what they mean for people actually shipping software.

1. Sutskever’s Timeline: From Research → Scaling → Research Again

Sutskever divides the modern era of AI into three phases:

1.1. 2012–2020: The first age of research

This was the era of “try everything”:

convolutional nets for vision

sequence models and attention

... continue reading