
Reality is losing the deepfake war


Today, we’re going to talk about reality, and whether we can label photos and videos to protect our shared understanding of the world around us. No really, we’re gonna go there. It’s a deep one.

To do this, I’m going to bring on Verge reporter Jess Weatherbed, who covers creative tools for us — a space that’s been totally upended by generative AI in a huge variety of ways, prompting an equally wide range of responses from artists, creatives, and the enormous number of people who consume their work out in the world.

If you’ve been listening to this show or my other show The Vergecast, or even just been reading The Verge these past several years, you know we’ve been talking for years about how the photos and videos taken by our phones are getting more and more processed and AI-generated. Here in 2026, we’re in the middle of a full-on reality crisis, as ultra-believable fake and manipulated images and videos flood social platforms at scale and without regard for responsibility, norms, or even basic decency. The White House is sharing AI-manipulated images of people getting arrested and defiantly saying it simply won’t stop when asked about it. We have gone totally off the deep end now.


Whenever we cover this, we get the same question from a lot of different parts of our audience: why isn’t there a system to help people tell real photos and videos apart from fake ones? Some people even propose systems to us, and in fact, Jess has spent a lot of time covering a few of the systems that exist in the real world. The most promising is something called C2PA, and her view is that, so far, it’s been almost entirely a failure.

In this episode, we’re going to focus on C2PA, because it’s the one with the most momentum. C2PA is a labeling initiative spearheaded by Adobe with buy-in from some of the biggest players in the industry, including Meta, Microsoft, and OpenAI. But C2PA, also sometimes referred to as Content Credentials, has some pretty serious flaws.

First, it was designed as more of a photography metadata tool, not an AI detection system. And second, it’s been only half-heartedly adopted by a handful, but not nearly all, of the players you would need to make it work across the internet. We’re at the point now where Instagram chief Adam Mosseri is publicly posting that the default should shift, and that you should not trust images or videos the way you maybe could before.
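To make the “metadata tool” point concrete: C2PA attaches a signed manifest to the file itself — in a JPEG, the manifest is embedded in APP11 marker segments as a JUMBF box. Here’s a minimal, illustrative sketch of what it looks like to even detect that label at the byte level (the function name and heuristic are my own; a real verifier, such as the official c2patool, must also parse the manifest and validate its cryptographic signatures — and of course, a stripped or re-encoded copy of the image simply loses the label):

```python
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristic check: does this JPEG carry a JUMBF box in an APP11
    segment, where C2PA embeds its Content Credentials manifest?

    Illustrative only -- presence of the box says nothing about whether
    the manifest's signature chain is valid or trustworthy.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker in (0xD8, 0x01) or 0xD0 <= marker <= 0xD7:
            i += 2  # standalone markers carry no length field
            continue
        if marker == 0xD9:  # EOI: end of image
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"jumb" in payload:  # APP11 + JUMBF superbox
            return True
        if marker == 0xDA:  # SOS: metadata segments all precede the scan
            break
        i += 2 + length
    return False
```

The point of the sketch is how thin this layer is: the “label” lives in ordinary, strippable file metadata, which is exactly why half-hearted adoption across the pipeline breaks the whole scheme.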

Think about that for one second. That’s a huge, pivotal shift in how society evaluates photos and videos, and an idea I’m sure we’ll be coming back to a lot this year. But we have to start with the idea that we can solve this problem with metadata and labels, that we can label our way into a shared reality, and why that idea might simply never work.

Okay, Verge reporter Jess Weatherbed on C2PA and the effort to label our way into reality. Here we go.

This interview has been lightly edited for length and clarity.
