Reflection, a startup founded just last year by two former Google DeepMind researchers, has raised $2 billion at an $8 billion valuation, a whopping 15x leap from its $545 million valuation just seven months ago. The company, which originally focused on autonomous coding agents, is now positioning itself as both an open-source alternative to closed frontier labs like OpenAI and Anthropic, and a Western equivalent to Chinese AI firms like DeepSeek.
The startup was launched in March 2024 by Misha Laskin, who led reward modeling for DeepMind’s Gemini project, and Ioannis Antonoglou, who co-created AlphaGo, the AI system that famously beat the world champion in the board game Go in 2016. Their track record building these advanced systems is central to their pitch: that the right AI talent can build frontier models outside the established tech giants.
Along with its new round, Reflection announced that it has recruited a team of top talent from DeepMind and OpenAI, and built an advanced AI training stack that it promises will be open for all. Perhaps most importantly, Reflection says it has “identified a scalable commercial model that aligns with our open intelligence strategy.”
Reflection’s team currently numbers about 60 people — mostly AI researchers and engineers across infrastructure, data training, and algorithm development, per Laskin, the company’s CEO. Reflection has secured a compute cluster and hopes to release a frontier language model next year that’s trained on “tens of trillions of tokens,” he told TechCrunch.
“We built something once thought possible only inside the world’s top labs: a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoEs) models at frontier scale,” Reflection wrote in a post on X. “We saw the effectiveness of our approach first-hand when we applied it to the critical domain of autonomous coding. With this milestone unlocked, we’re now bringing these methods to general agentic reasoning.”
MoE (Mixture-of-Experts) refers to an architecture that powers many frontier LLMs, and until recently only large, closed AI labs could train such models at scale. DeepSeek had a breakthrough moment when it showed these models could be trained at scale in the open, followed by Qwen, Kimi, and other models in China.
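The core idea behind an MoE layer is that a small "router" sends each token to only a few specialist sub-networks ("experts"), so the model can hold far more parameters than it activates per token. A toy sketch of top-k routing, using made-up dimensions and plain NumPy (illustrative only, not Reflection's or DeepSeek's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; real MoE models are vastly larger).
d_model, d_hidden, n_experts, top_k = 8, 16, 4, 2

# Each "expert" is an independent two-layer MLP.
experts = [
    (rng.standard_normal((d_model, d_hidden)) * 0.1,
     rng.standard_normal((d_hidden, d_model)) * 0.1)
    for _ in range(n_experts)
]
# A learned router scores each token against every expert.
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_forward(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                            # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # chosen expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # Softmax over only the selected experts' scores.
        w = np.exp(logits[t, top[t]])
        w /= w.sum()
        for weight, e in zip(w, top[t]):
            w1, w2 = experts[e]
            # Each expert processes the token; outputs are weight-mixed.
            out[t] += weight * (np.maximum(x[t] @ w1, 0) @ w2)
    return out

tokens = rng.standard_normal((5, d_model))
y = moe_forward(tokens)
print(y.shape)  # (5, 8)
```

Because only `top_k` of the `n_experts` MLPs run per token, compute per token stays roughly constant while total parameter count scales with the number of experts, which is why this architecture is attractive for frontier-scale training.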
“DeepSeek and Qwen and all these models are our wake up call because if we don’t do anything about it, then effectively, the global standard of intelligence will be built by someone else,” Laskin said. “It won’t be built by America.”
Laskin added that this puts the U.S. and its allies at a disadvantage because enterprises and sovereign states often won’t use Chinese models due to potential legal repercussions.
“So you can either choose to live at a competitive disadvantage or rise to the occasion,” Laskin said.