This is an excerpt of Sources by Alex Heath, a newsletter about AI and the tech industry, syndicated just for The Verge subscribers once a week.
Around the middle of last year, Pim de Witte started reaching out to a handful of prominent AI labs to see if they’d be interested in using data from Medal, his popular video game clipping platform, to train their agents.
Within weeks, it became clear that Medal’s data was more valuable to the labs than he expected. “We received multiple acquisition offers very quickly,” he told me. (He declined to name names, but it has been reported that OpenAI offered $500 million.) “Initially, we were quite interested in them,” he said of the offers, but that “was mostly a result of us not understanding what we were sitting on.”
He had read the Google DeepMind research paper showing that gaming data can be used to teach AI how to navigate a 3D environment. But the interest from AI labs made him realize that his data from Medal, which receives roughly 2 billion video uploads per year from tens of thousands of video games, could be used to develop a unique foundation model for extending AI to the real world.
Today, de Witte announced that Medal is spinning out a new AI lab called General Intuition, which has raised a $133.7 million seed round. The money comes primarily from Vinod Khosla, founder of Khosla Ventures and one of the first investors in OpenAI. Other investors include General Catalyst and the Raine Group. Moritz Baier-Lentz, who oversees Lightspeed’s gaming investments, is also joining the startup part-time as a founding team member.
Khosla believes that General Intuition could have the same impact on AI agents that OpenAI had on how people use large language models. It’s his firm’s largest seed check since it backed OpenAI in 2018. “It’s a pretty big bet,” he told me. “They have a unique dataset and a unique team.”
Unless you’re steeped in the AI world, you probably haven’t heard much about world models yet. It’s a branch of research that trains AI to have human-like spatial understanding. The idea is that a robot could, for example, predict that a glass of water knocked off a table will spill, and grab it before it falls. More practically, AI researchers are increasingly looking to world models as a way to train agents that can reliably generate and interact with a 3D space.
Among prominent AI leaders, Google DeepMind CEO Demis Hassabis has been the most vocal advocate for world models and their importance in achieving AGI. Google recently demoed Genie 3, a model that generates a video game-like environment from scratch as you navigate through it. There are also a handful of startups working on similar models, including Fei-Fei Li’s World Labs, which this week released its own demo of a model that generates interactive video in real time.