At The Verge, we like to ask “What is a photo?” when we’re trying to sort out real and unreal images, especially those taken with phone cameras. But I think there’s another question we’ll want to add to the mix starting right now: what is a camera? With the introduction of the Pixel 10 Pro and Pro XL, that answer is wilder and more complicated than ever, because generative AI isn’t just something you can use to edit a photo you’ve already taken; it’s baked right into the camera itself.
I’m talking about Pro Res Zoom, which is not to be confused with Apple’s ProRes video format or Google’s Super Res Zoom, so help us all. Pro Res Zoom kicks in once you push past 30x, all the way up to 100x digital zoom. Typically, the camera uses an algorithm to help fill in the gaps left by upscaling a small portion of your photo to the original resolution. And typically, the results look like hot garbage, especially when you get all the way to 75x or 100x, despite every camera maker’s best efforts over the past two decades. Pro Res Zoom aims to give you a usable image where you wouldn’t have gotten one before, and that’s where the diffusion model comes in.
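For a rough sense of what that conventional approach looks like under the hood, here’s a minimal sketch in Python using Pillow: crop the tiny region a 100x zoom actually sees, then interpolate it back up to full size. The file names and numbers are placeholders for illustration, not anything from Google’s pipeline.

```python
# Minimal sketch of conventional digital zoom: crop a small region
# and interpolate it back up to full resolution. Illustrative only.
from PIL import Image

full = Image.open("photo.jpg")           # e.g. the full-resolution capture
w, h = full.size
zoom = 100                               # 100x digital zoom

# Crop the tiny central region that a 100x zoom actually "sees".
cw, ch = w // zoom, h // zoom
left, top = (w - cw) // 2, (h - ch) // 2
crop = full.crop((left, top, left + cw, top + ch))

# Interpolate it back to the original resolution. This invents no new
# detail, which is why extreme digital zoom tends to look soft and blocky.
upscaled = crop.resize((w, h), Image.BICUBIC)
upscaled.save("zoomed_100x.jpg")
```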
It’s a latent diffusion model, Google’s Pixel camera product manager Isaac Reynolds tells me. He doesn’t see it as an entirely new process so much as a variation on what phone cameras have done for years. Algorithms have long helped identify subjects and improve detail, producing unwanted artifacts as a byproduct that engineers then squash in subsequent updates. “Generative AI is just a different algorithm with different artifacts,” he says. But unlike a more conventional neural network, a diffusion model is “pretty good at killing the artifacts.”
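To make the idea concrete, here’s a hedged sketch using the open-source diffusers library as a stand-in: run the soft, upscaled crop back through a latent diffusion model at low strength so it regenerates fine texture without straying far from the captured pixels. Google’s on-device model, checkpoint, and settings aren’t public; every name and number below is an assumption for illustration.

```python
# Illustrative stand-in, not Google's pipeline: refine an upscaled crop
# with an off-the-shelf latent diffusion model via image-to-image.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

blurry = Image.open("zoomed_100x.jpg").convert("RGB").resize((768, 768))

# Low strength keeps the result anchored to the captured pixels and only
# regenerates fine detail; crank it up and the model starts inventing
# things that were never in the frame.
result = pipe(
    prompt="a sharp, detailed photograph",
    image=blurry,
    strength=0.3,
    guidance_scale=7.5,
).images[0]

result.save("refined.jpg")
```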
That might be an understatement. In the handful of demos I saw, Pro Res Zoom cleaned up some pretty gnarly 100x zoom photos remarkably well. The processing all happens on device after you take the photo. Reynolds tells me that when Google started developing the feature, it took around a minute to run the diffusion model on the phone; his team has since gotten the runtime down to four or five seconds. Once the processing is done, the new version is saved alongside the original. It’s a small sample size, sure, but the results looked pretty darn good.
[Image gallery: the original photo before Pro Res Zoom.]
Pro Res Zoom has one important guardrail: it doesn’t work on people. If it detects a person in the image, it’ll work around them and enhance everything else, leaving the person untouched. This is a good idea, not only because I do not want a phone camera hallucinating different features onto my face, but also because letting AI sharpen up far-off strangers gets creepy fast.
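Mechanically, that kind of guardrail can be as simple as compositing: enhance the whole frame, then paste the original pixels back wherever a person was detected. Here’s a minimal sketch of that idea with Pillow; the person mask is assumed to come from some segmentation step that isn’t shown, and none of this reflects how Google actually implements it.

```python
# Illustrative only: keep the original pixels where people are,
# and use the AI-enhanced pixels everywhere else.
from PIL import Image

original = Image.open("zoomed_100x.jpg").convert("RGB")
enhanced = Image.open("refined.jpg").convert("RGB").resize(original.size)

# White (255) where a person was detected, black (0) elsewhere.
person_mask = Image.open("person_mask.png").convert("L").resize(original.size)

# Image.composite picks from the first image where the mask is white and
# from the second where it's black, so people keep their untouched pixels.
guarded = Image.composite(original, enhanced, person_mask)
guarded.save("pro_res_zoom_guarded.jpg")
```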
Google has also taken the responsible step of tagging photos taken with the phone with C2PA Content Credentials, labeling Pro Res Zoom photos as “edited with AI tools.” But it doesn’t stop there: all photos taken with the Pixel 10 get tagged to indicate that they were taken with a camera and whether AI played a role. If a photo is the result of merging multiple frames, like a panorama, that’ll be noted in the content credentials, too.