ZDNET's Kerry Wan takes a photo with the Google Pixel 10 Pro camera. Sabrina Ortiz/ZDNET
Isaac Reynolds has been working on the Pixel Camera team at Google for almost a decade, since the first Google Pixel phone launched in 2016. And yet, I think it's fair to say that he's never been more bullish about the technology Google has built into a phone camera than he is with this year's Pixel 10 Pro. A wave of AI breakthroughs over the past year has allowed Google to combine large language models, machine learning, and generative AI imaging to power another meaningful leap forward in phone photography.
I got the chance to sit down with Reynolds as he was still catching his breath from the launch of the Pixel 10 phones and, at the same time, ramping up for the camera upgrades planned for the 2026 Pixel phones.
Also: Pixel just zoomed ahead of iPhone in the camera photography race
I peppered Reynolds with all of my burning questions about Pro Res Zoom, Conversational Editing, Camera Coach, AI models, the Tensor G5 chip, Auto Best Take, and the larger ambitions of the Pixel Camera team. In turn, he surprised me with information I didn't expect on Telephoto Panoramas, C2PA AI metadata, Guided Frame, and educating the public about AI.
We unpacked how the Google team engineered such big advances in the Pixel 10 Pro camera system, delving far deeper into the new photography features than Google did at its 2025 Made by Google event or in its published blog post.
Here's my reporter's notebook on what I learned.
Mission of the Pixel Camera team
"I think the major thing our team has always been focused on is what I call durable [photography] problems -- low light, zoom, dynamic range, and detail," said Reynolds. "And every generation [of Pixel] has brought new technologies."
Camera Coach