Amsterdam thought it was on the right track. City officials in the welfare department believed they could build technology that would prevent fraud while protecting citizens’ rights. They followed emerging best practices and invested a vast amount of time and money in a project that eventually processed live welfare applications. But in their pilot, they found that the system they’d developed was still neither fair nor effective. Why?
Lighthouse Reports, MIT Technology Review, and the Dutch newspaper Trouw have gained unprecedented access to the system to try to find out. Read about what we discovered.
—Eileen Guo, Gabriel Geiger & Justin-Casimir Braun
This story is a partnership between MIT Technology Review, Lighthouse Reports, and Trouw, and was supported by the Pulitzer Center.
Why humanoid robots need their own safety rules
While humanoid robots are taking their first tentative steps into industrial applications, the ultimate goal is to have them operating in close quarters with humans.