These new AI benchmarks could help make models less biased
Published on: 2025-06-21 05:00:00
“We have been sort of stuck with outdated notions of what fairness and bias means for a long time,” says Divya Siddarth, founder and executive director of the Collective Intelligence Project, who did not work on the new benchmarks. “We have to be aware of differences, even if that becomes somewhat uncomfortable.”
The work by Wang and her colleagues is a step in that direction. “AI is used in so many contexts that it needs to understand the real complexities of society, and that’s what this paper shows,” says Miranda Bogen, director of the AI Governance Lab at the Center for Democracy and Technology, who wasn’t part of the research team. “Just taking a hammer to the problem is going to miss those important nuances and [fall short of] addressing the harms that people are worried about.”
Benchmarks like the ones proposed in the Stanford paper could help teams better judge fairness in AI models, but actually fixing those models may require other techniques.