
The AI industry’s biggest week: Google’s rise, RL mania, and a party boat


Andy Konwinski, founder of the Laude Institute: “I did a lot of interviews over the week, and when I asked people what felt overhyped to them, I heard agentic AI, RL, and world models, though I also heard RL and world models as areas people think are up-and-coming and most interesting to watch.”

Thomas Wolf, cofounder of Hugging Face: “AI x science, interpretability, RL long rollouts”

Roon, member of technical staff, OpenAI: “you can learn more from twitter than from literally being there / the tweets are saying the buzz is about continual learning / That’s possibly true / I can’t guarantee / mostly my on-the-ground feeling was ‘this is too much’”

Maya Bechler-Speicher, research scientist at Meta: “I can’t say with certainty what the buzziest topic was — the conference is massive, and my exposure was naturally limited — but tabular foundation models were undoubtedly gaining significant traction, and I expect this momentum to continue into 2026. After years in which decision-tree–based methods dominated generalization on tabular data, we are finally seeing foundation-model approaches that consistently outperform them. Another area drawing considerable attention is physical AI, which remains full of open research questions and opportunities.”

Anonymous researcher at a big AI lab: “I’m biased here, but AI for the physical world (robotics, engineering, etc, not just AI for science) looks like it’s finally taking off.”

Nathan Lambert, senior researcher at the Allen Institute for AI: “It was accepted that [Ilya Sutskever]‘s proclamation on the Dwarkesh Podcast that it’s now ‘The Age of Research’ rather than the age of scaling is a good moniker. No one area of the poster sessions or workshops was obviously labeled as the most important topic (e.g., last year’s NeurIPS was obsessed with reinforcement learning and reasoning after the launch of o1). Some groups reflected solemnly on how this was the first NeurIPS since DeepSeek R1 and a year of open model transformation, but most of the conference didn’t feel like it had an active role to play in it.”

Brian Wilt, head of data at Waymo: “The buzziest topic among my friends was how much research was happening in frontier labs vs. academia and was likely unpublished. From my perspective at Waymo, many of the (applied) problems I need to solve only emerge at scale (e.g., data, performance). However, there’s also a deep sense that we need another fundamental breakthrough besides scaling current architectures (as Ilya/[Andrej] Karpathy/others have alluded to).”

Evgenii Nikishin, member of technical staff at OpenAI: “Continual learning was certainly among the buzziest topics. I don’t know yet how many scientific advances there will be in 2026 — maybe some, maybe little — but I think more people will be talking about it.”

Paige Bailey, developer lead for Google DeepMind: “Definitely sovereign open models, especially deploying them on-prem with fine-tuning + RL. In terms of what people will be talking about in 2026, I think world models and robotics are the big ones.”

Sachin Dharashivkar, CEO of AthenaAgent: “Designing RL environments and training agents was the most discussed topic.”
