ZDNET's key takeaways

- A Gemini model won gold at a challenging coding competition.
- The model correctly answered 10 out of 12 problems.
- The win could have major implications for AGI, says Google.

In recent years, large language models (LLMs) have become an integral part of many software developers' toolkits, helping them build, refine, and deploy apps more quickly and effectively. Now, Google says that one of its most advanced models has achieved a major coding breakthrough that could help lead to new scientific discoveries -- including, potentially, the attainment of artificial general intelligence, or AGI.

Also: Will AI think like humans? We're not even close - and we're asking the wrong question

Gemini 2.5 Deep Think, a state-of-the-art version of Google's flagship AI model that uses advanced reasoning capabilities to break problems down into multiple components, achieved gold medal performance at the 2025 International Collegiate Programming Contest (ICPC) World Finals, the company announced Wednesday.

Google wrote in a blog post that the "advanced version" of Gemini 2.5 Deep Think operates as a kind of automated, integrated team. "To tackle a problem, multiple Gemini agents each propose their own solutions using terminals to execute code and tests, and then iterate the solutions based on all of the attempts," the company wrote. (A sketch of that propose-and-iterate loop appears below.)

Gemini's surprising victory

The ICPC is widely recognized as the world's most prestigious and difficult university-level coding competition. Teams from close to 3,000 universities across 103 countries competed in this year's finals, held Sept. 4 in Baku, Azerbaijan. Each team must solve a set of complex problems within a five-hour window, and there's no room for error: only perfect answers earn points.

Also: I did 24 days of coding in 12 hours with a $20 AI tool - but there's one big pitfall

Gemini correctly solved 10 of the 12 problems in this year's ICPC finals, achieving gold medal-level performance and a score that would have ranked second overall among the human teams. Gemini 2.5 Deep Think, along with an experimental reasoning model from OpenAI, also achieved gold medal-level performance at this year's International Mathematical Olympiad, the companies announced in July.

"Together, these breakthroughs in competitive programming and mathematical reasoning demonstrate Gemini's profound leap in abstract problem-solving -- marking a significant step on our path toward artificial general intelligence (AGI)," Google wrote in its blog post.

The model's breakthrough

In what Google describes in its blog post as "an unprecedented moment," Gemini quickly and correctly solved one of the competition's 12 problems that stymied every human team. The two problems Gemini failed to solve, on the other hand, were answered successfully by other teams.

Also: OpenAI has a new agentic coding partner for you now: GPT-5-Codex

The third problem in the challenge, Problem C, asked competitors to devise a scheme for distributing liquid through a network of interconnected ducts so that the reservoirs connected to those ducts would be filled as quickly as possible. Each duct could be closed, open, or partially open, meaning there was an infinite number of possible configurations.
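Google hasn't published implementation details of how the agent team quoted earlier is orchestrated, but the pattern it describes -- propose candidate solutions, execute them against tests, and iterate on the failures -- is easy to sketch. The toy Python below is purely illustrative: the guessing-game "problem," the shared feedback bounds, and every name in it are hypothetical stand-ins, not any published Gemini API.

```python
# A toy, hypothetical sketch of the propose/execute/iterate pattern Google
# describes. The "problem" here is just guessing a hidden number; the real
# system drafts and runs code. Nothing below is a published Gemini API.
import random

HIDDEN_ANSWER = 42  # stands in for the judge's expected output

def run_tests(candidate):
    """Stand-in for executing a candidate solution against the tests."""
    if candidate == HIDDEN_ANSWER:
        return "pass", None
    return "fail", candidate < HIDDEN_ANSWER  # feedback: was the guess too low?

# Shared feedback: every agent's next proposal is conditioned on bounds
# learned from *all* previous attempts, echoing Google's description.
low, high = 0, 100
solved = False
for round_num in range(1, 21):
    for agent_id in range(3):  # several agents propose each round
        candidate = random.randint(low, high)  # stand-in for drafting code
        verdict, too_low = run_tests(candidate)
        if verdict == "pass":
            print(f"agent {agent_id} solved it in round {round_num}: {candidate}")
            solved = True
            break
        # Fold this failed attempt back into the shared feedback.
        if too_low:
            low = max(low, candidate + 1)
        else:
            high = min(high, candidate - 1)
    if solved:
        break
```

The point is only the shape of the loop: many proposals per round, automatic test execution, and feedback pooled across agents.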
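The full ICPC problem statement is far richer than the summary above, and Gemini's actual solution hasn't been released as runnable code, but the task has a recognizable max-min shape: choose a duct configuration that makes the slowest-filling reservoir fill as fast as possible. Here is a minimal Python sketch of that structure, with a made-up three-duct topology and supply rate; a real solver would have to optimize over the continuum of settings rather than a coarse grid.

```python
# A toy max-min version of the duct problem: split a single supply among
# ducts to maximize the fill rate of the worst-off reservoir. The topology,
# supply rate, and grid resolution are all invented for illustration.
import itertools

SUPPLY = 6.0  # total liquid flow from the source, in made-up units/sec
FEEDS = {"A": [0], "B": [0, 1], "C": [2]}  # reservoir -> ducts feeding it
NUM_DUCTS = 3

def fill_rates(shares):
    """Fill rate of each reservoir, given each duct's share of the supply."""
    return {r: SUPPLY * sum(shares[d] for d in ducts)
            for r, ducts in FEEDS.items()}

def worst_rate(shares):
    """The inner 'min': how fast the slowest reservoir fills."""
    return min(fill_rates(shares).values())

# The outer 'max': brute-force search over ways to split the supply,
# with each duct's share a multiple of 0.05 and shares summing to 1.
grid = [i / 20 for i in range(21)]
configs = [s for s in itertools.product(grid, repeat=NUM_DUCTS)
           if abs(sum(s) - 1.0) < 1e-9]
best = max(configs, key=worst_rate)
print("share of supply per duct:", best)
print("worst-case fill rate:", worst_rate(best))
```

The nested structure here -- an outer search over configurations, an inner worst case -- is the maximize-the-minimum shape that the minimax reference in the next paragraph points to.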
In its search for the optimal configuration, Gemini took a surprising approach: it began by assigning each reservoir a numerical value to determine its priority relative to the others. The model then applied an algorithm together with the minimax theorem, a concept from game theory, to find a solution. The whole process took less than half an hour. No human competitor was able to solve the problem.

Also: I built a business plan with ChatGPT and it turned into a cautionary tale

Though less momentous, this kind of problem-solving is reminiscent of the famous Move 37 in AlphaGo's 2016 match against Go world champion Lee Sedol, when the Google DeepMind model adopted a strategy that surprised human experts in the moment but proved decisive to its victory. "Move 37" has since become shorthand for moments in which AI acts in creative or unexpected ways that challenge conventional notions of intelligent problem-solving.

What Gemini's win means

Gemini's top-tier performance at the 2025 ICPC has implications far beyond software development, according to Google. "The skills needed for the ICPC -- understanding a complex problem, devising a multi-step logical plan, and implementing it flawlessly -- are the same skills needed in many scientific and engineering fields, such as designing new drugs, or microchips," the company wrote in its blog post, adding that the development shows AI could help solve difficult problems for the benefit of humanity (a familiar pseudo-promise AI companies often make).

Also: Google's new open protocol secures AI agent transactions - and 60 companies already support it

The notion that AI could eventually assist with scientific discovery has long been a dream for many computer scientists. Earlier this month, OpenAI launched an internal initiative aimed at this very goal, and Harvard Medical School researchers designed an AI model that could help target treatments for degenerative diseases and cancer.

According to Google, the best path forward will likely be some form of human-AI collaboration, in which advanced agentic models like Gemini 2.5 Deep Think suggest novel solutions to particularly difficult technical problems.