Some elements here build on the RL env post. Also, please read the footnotes: since there is a lot to cover, I moved many clarifications to the bottom.
Something is happening across all major labs simultaneously and it’s not a coincidence.
Yann LeCun announced he is leaving Meta to start a new lab focused entirely on World Models. He also has a technical lecture on what world models are and what they do.
Ilya Sutskever, on Dwarkesh's podcast, described emotions as value functions, a framing that makes a lot of sense when you move away from pattern matching toward planning and simulation.
Google announced Genie 3, their approach to world simulation.
In a probably throwaway line, Demis Hassabis revealed that he spends most of his research time on World Models.
Anthropic's interpretability research shows that current models already develop internal world representations, but they are implicit, emergent, and unreliable.
OpenAI, when launching Sora, insisted it was a world simulator, not just a video model.
Similarly, Veo 3 is also described as a physics model or a world model.
Meta released a paper on the Code World Model (CWM) last September, in which a 32B model matched or outperformed larger models on execution-dependent benchmarks.