The students participating in the annual International Mathematical Olympiad (IMO) represent some of the most talented young mathematical minds in the world. This year, they faced down a newly enhanced array of powerful AI models, including Google's Gemini Deep Think. The company says it put its model to the test under the same rules as human participants, and it improved on an already solid showing from last year.
Google says its specially tuned math AI got five of the six questions correct, which is good enough for gold medal status. And unlike OpenAI, Google played by the rules set forth by the IMO.
A new Gemini
The Google DeepMind team participated in last year's IMO using an AI composed of the AlphaProof and AlphaGeometry 2 models. That setup solved four of the six problems, earning silver medal status; for context, only about half of the human participants earn any medal at all.
In 2025, Google DeepMind was among a group of companies that worked with the IMO to have their models officially graded and certified by the competition's coordinators. Google came prepared with a new model for the occasion. Gemini Deep Think was announced earlier this year as a more analytical take on simulated reasoning models. Rather than following a single linear chain of "thought," Deep Think runs multiple reasoning processes in parallel, comparing and integrating the results before giving a final answer.
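Google has not published the internals of Deep Think, but the general idea of sampling several reasoning paths at once and then reconciling them can be sketched in a few lines. Everything below is a hypothetical illustration: the function names, the stubbed model call, and the majority-vote reconciliation are assumptions, not DeepMind's method.

```python
# A minimal, illustrative sketch of "parallel thinking" with majority-vote
# reconciliation. This is NOT Google's implementation; the function names,
# the stubbed model call, and the voting step are all assumptions.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor


def sample_reasoning_path(problem: str, seed: int) -> str:
    """Stand-in for one independent chain of reasoning.

    A real system would call a language model here, varying the sampling
    settings or prompt per path to get diverse attempts.
    """
    return f"candidate-answer-{seed % 2}"  # toy variation between paths


def parallel_think(problem: str, num_paths: int = 8) -> str:
    """Run several reasoning paths concurrently, then reconcile them.

    Reconciliation here is simple majority voting; a production system
    could instead compare and merge intermediate reasoning steps.
    """
    with ThreadPoolExecutor(max_workers=num_paths) as pool:
        candidates = list(
            pool.map(lambda s: sample_reasoning_path(problem, s), range(num_paths))
        )
    best_answer, _count = Counter(candidates).most_common(1)[0]
    return best_answer


if __name__ == "__main__":
    print(parallel_think("Find all positive integers n such that ..."))
```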
According to Thang Luong, DeepMind senior scientist and head of the IMO team, this is a paradigm shift from last year's effort. In 2024, an expert had to translate the natural language questions into a "domain-specific language," and a human expert then had to interpret the system's output at the end of the process. Deep Think, by contrast, works in natural language end to end, and it was not specifically designed to do math.
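For a sense of what "domain-specific language" meant in the 2024 pipeline: AlphaProof worked in the Lean proof language, where even a toy statement has to be written out formally before a prover can attempt it. The snippet below is a deliberately simple illustration (the sum of two even numbers is even), not a real IMO formalization.

```lean
-- Illustrative only: a toy formal statement in Lean, the kind of
-- domain-specific language the 2024 pipeline required. Actual IMO
-- problems are vastly harder to state and prove than this.
theorem even_add_even (a b : Nat)
    (ha : ∃ k, a = 2 * k) (hb : ∃ k, b = 2 * k) :
    ∃ k, a + b = 2 * k :=
  match ha, hb with
  | ⟨m, hm⟩, ⟨n, hn⟩ =>
    -- Witness: k = m + n, since 2*m + 2*n = 2*(m + n).
    ⟨m + n, by rw [hm, hn, Nat.mul_add]⟩
```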