OpenAI released a new benchmark on Thursday that tests how its AI models perform compared to human professionals across a wide range of industries and jobs. The test, GDPval, is an early attempt at understanding how close OpenAI’s systems are to outperforming humans at economically valuable work — a key part of the company’s founding mission to develop artificial general intelligence, or AGI.
OpenAI says it found that its GPT-5 model and Anthropic’s Claude Opus 4.1 “are already approaching the quality of work produced by industry experts.”
That’s not to say that OpenAI’s models are going to start replacing humans in their jobs immediately. Despite some CEOs’ predictions that AI will take the jobs of humans in just a few years, OpenAI admits that GDPval today covers a very limited number of tasks people do in their real jobs. However, it is one of the latest ways the company is measuring AI’s progress towards this milestone.
GDPval is based on the nine industries that contribute the most to America’s gross domestic product, including domains such as healthcare, finance, manufacturing, and government. The benchmark tests an AI model’s performance in 44 occupations across those industries, ranging from software engineers to nurses to journalists.
For the first version of the test, GDPval-v0, OpenAI asked experienced professionals to compare AI-generated reports against reports produced by other professionals and choose the better one. For example, one task asked for a competitive landscape of the last-mile delivery industry; investment bankers then judged the AI-generated reports against human-written ones. OpenAI averages an AI model’s “win rate” against the human reports across all 44 occupations.
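The aggregation described above — pairwise human-versus-AI judgments rolled up into a single win rate across occupations — can be sketched in a few lines. The grading rubric, tie handling, and the choice to weight occupations evenly are assumptions for illustration, not OpenAI’s published methodology:

```python
# Minimal sketch of a GDPval-style aggregate score.
# Verdicts, tie handling, and even per-occupation weighting are
# assumptions, not OpenAI's documented method.
from collections import defaultdict

def gdpval_style_score(judgments):
    """judgments: list of (occupation, verdict) pairs, where verdict is
    'win', 'tie', or 'loss' for the AI report versus the human report.
    Returns the win-or-tie rate averaged evenly across occupations."""
    by_occupation = defaultdict(list)
    for occupation, verdict in judgments:
        by_occupation[occupation].append(verdict in ("win", "tie"))
    # Average within each occupation first, then across occupations,
    # so occupations with more graded tasks don't dominate the score.
    per_occ = [sum(v) / len(v) for v in by_occupation.values()]
    return sum(per_occ) / len(per_occ)

sample = [
    ("investment banker", "win"),
    ("investment banker", "loss"),
    ("nurse", "tie"),
    ("journalist", "loss"),
]
print(gdpval_style_score(sample))  # 0.5 on this toy data
```

On this toy data the banker occupation scores 0.5, the nurse 1.0, and the journalist 0.0, averaging to 0.5 — the kind of headline figure (e.g., GPT-5-high’s 40.6%) the benchmark reports.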
For GPT-5-high, a souped-up version of GPT-5 with extra computational power, the company says the model was ranked as better than or on par with industry experts 40.6% of the time.
OpenAI also tested Anthropic’s Claude Opus 4.1 model, which was ranked as better than or on par with industry experts in 49% of tasks. OpenAI says it believes Claude scored so high because of its tendency to produce pleasing graphics, rather than because of sheer performance.
It’s worth noting that most working professionals do a lot more than submit research reports to their bosses, which is all that GDPval-v0 tests for. OpenAI acknowledges this and says it plans to create more robust tests in the future that account for more industries and interactive workflows.
Nonetheless, the company sees the progress on GDPval as notable.
In an interview with TechCrunch, OpenAI’s chief economist Dr. Aaron Chatterji said GDPval’s results suggest that people in these jobs can now use AI models to spend time on more meaningful tasks.
“[Because] the model is getting good at some of these things,” Chatterji says, “people in those jobs can now use the model, increasingly as capabilities get better, to offload some of their work and do potentially higher value things.”
OpenAI’s evaluations lead Tejal Patwardhan tells TechCrunch that she’s encouraged by the rate of progress on GDPval. OpenAI’s GPT-4o model, released roughly 15 months ago, scored just 13.7% (wins and ties versus humans). GPT-5 now scores nearly triple that, a trend Patwardhan expects to continue.
Silicon Valley has a wide range of benchmarks it uses to measure the progress of AI models and assess whether a given model is state-of-the-art. Among the most popular are AIME 2025 (a test of competition math problems) and GPQA Diamond (a test of PhD-level science questions). However, several AI models are nearing saturation on some of these benchmarks, and many AI researchers have cited the need for better tests that can measure AI’s proficiency on real-world tasks.
Benchmarks like GDPval could become increasingly important in that conversation, as OpenAI makes the case that its AI models are valuable for a wide range of industries. But OpenAI may need a more comprehensive version of the test to definitively say its AI models can outperform humans.