Shortly after joining MIT in 2012, Williams created the Civic Data Design Lab to bridge that divide. Over the years, she and her colleagues have pushed the narrative and expository bounds of urban planning data using the latest technologies available—making numbers vivid and accessible through human stories and striking graphics. One project she was involved in, on rates of incarceration in New York City by neighborhood, is now in the permanent collection of the Museum of Modern Art in New York. Williams’s other projects have tracked the spread and impact of air pollution in Beijing using air quality monitors and mapped the daily commutes of Nairobi residents using geographic information systems.
In recent years, as AI became more accessible, Williams was intrigued by what it could reveal about cities. “I really started thinking, ‘What are the implications for urban planning?’” she says. These tools have the potential to organize and illustrate vast amounts of data instantaneously. But having more information also increases the risks of misinformation and manipulation. “I wanted to help guide cities in thinking about the positives and negatives of these tools,” she says.
In 2024, that inquiry led to a collaboration with the city of Boston, which was exploring, through its Office of Emerging Technology, whether and how to apply AI across various government functions. Over the course of the year, Williams and her team followed along as Boston experimented with several new applications for AI in government, and they gathered feedback at community meetings.
On the basis of these findings, Williams and the Civic Data Design Lab published the Generative AI Playbook for Civic Engagement in the spring—a publicly available document that helps city governments take advantage of AI's capabilities while navigating its attendant risks. This kind of guidance is especially important as the federal government takes an increasingly laissez-faire approach to AI regulation.
“That gray zone is where nonprofits and academia can create research to help guide states and private institutions,” Williams says.