
AI helps add 10k more photos to OldNYC

Over the past two years I’ve quietly rebuilt major parts of the OldNYC photo viewer. The result: 10,000 additional historic photos on the map, more accurate locations, and a site that’s cheaper and easier to run—thanks to modern AI tools and the OpenStreetMap ecosystem.

OldNYC had about 39,000 photos in 2016. Today it has 49,000.

Most of these changes happened in 2024, but I’m only writing about them now in 2026. (I got distracted by an unrelated project.) If you haven’t visited OldNYC in a while, take a look—you might find some photos you missed.

Here are the three biggest improvements: better geolocation, dramatically improved OCR, and a switch to an open mapping stack.

Better Geolocation with OpenAI and OpenStreetMap

OldNYC works by geocoding historical descriptions—turning text like “Broad Street, south from Wall Street” into a latitude and longitude.

Originally this mostly meant extracting cross streets from titles and sending them to the Google Maps Geocoding API. That worked well when the streets still existed—but many historical intersections don’t.

Two changes in 2024 improved this dramatically.

GPT for hard geocodes

Some images include useful location details only in the description. I now use the OpenAI API (gpt-4o) to extract locations from that text.
