However, a growing number of teams around the world are trying to address the AI evaluation crisis.
One result is a new benchmark called LiveCodeBench Pro. It draws problems from international algorithmic olympiads, competitions in which elite high school and university programmers solve challenging problems without external tools. The top AI models currently solve only about 53% of medium-difficulty problems on their first attempt, and none of the hardest ones. These are tasks where human experts routinely excel.
Zihan Zheng, a junior at NYU and a world finalist in competitive coding, led the project to develop LiveCodeBench Pro with a team of olympiad medalists. They’ve published both the benchmark and a detailed study showing that top-tier models like GPT-4o mini and Google’s Gemini 2.5 perform at a level comparable to the top 10% of human competitors. Across the board, Zheng observed a pattern: AI excels at making plans and executing tasks, but it struggles with nuanced algorithmic reasoning. “It shows that AI is still far from matching the best human coders,” he says.
LiveCodeBench Pro might set a new ceiling. But what about the floor? Earlier this month, a group of researchers from multiple universities argued that LLM agents should be evaluated primarily on the basis of their riskiness, not just how well they perform. In real-world, application-driven environments, especially ones involving AI agents, unreliability, hallucinations, and brittleness are ruinous. One wrong move could spell disaster when money or safety is on the line.
There are other new attempts to address the problem. Some benchmarks, like ARC-AGI, now keep part of their data set private to prevent AI models from being optimized excessively for the test, a problem called “overfitting.” Meta’s Yann LeCun has created LiveBench, a dynamic benchmark where questions evolve every six months. The goal is to evaluate models not just on knowledge but on adaptability.
Xbench, a Chinese benchmark project developed by HongShan Capital Group (formerly Sequoia China), is another of these efforts; I just wrote a story about it. Xbench was initially built in 2022, right after ChatGPT's launch, as an internal tool for evaluating models for investment research. Over time, the team expanded the system and brought in external collaborators, and just last week it made parts of its question set publicly available.
Xbench is notable for its dual-track design, which tries to bridge the gap between lab-based tests and real-world utility. The first track evaluates technical reasoning skills by testing a model’s STEM knowledge and ability to carry out Chinese-language research. The second track aims to assess practical usefulness—how well a model performs on tasks in fields like recruitment and marketing. For instance, one task asks an agent to identify five qualified battery engineer candidates; another has it match brands with relevant influencers from a pool of more than 800 creators.