Tech News

If Big Tech cared about fighting AI slop, it wouldn’t be drowning us in it


As 2025 drew to a close, Instagram head Adam Mosseri ended the year by doom-posting about AI. “Authenticity is becoming infinitely reproducible,” Mosseri lamented. “Everything that made creators matter — the ability to be real, to connect, to have a voice that couldn’t be faked — is now accessible to anyone with the right tools.” But people, Mosseri insisted, still wanted “content that feels real.” His proposed solution was finding a way to label real media. “Camera manufacturers will cryptographically sign images at capture, creating a chain of custody,” he said. The result would be a trustworthy system for determining what’s not AI.

The good news is that Mosseri’s solution already exists: it’s called C2PA. The bad news is that Instagram is already using it, and it’s not doing shit to actually help. If anything, it’s starting to feel like a substitute for actual action, as Instagram goes full speed ahead on building generative AI tools.

AI is getting extremely good at mimicking reality, which threatens the culture and business models that many social media platforms have fostered around content creators. AI can copy dance trends and photo shoots, make artists and influencers who don’t exist, and generally replicate any of the same-y looking content that social media is already overrun with. Creators are fighting against this by leaning into aesthetics that look raw and imperfect, but AI is pretty good at that too. More concerningly, it can also be used to quickly spread misinformation about important events like the ICE protests in Minnesota, or the killing of Renee Nicole Good and Alex Pretti.

Over the past several years, some of the biggest names in tech have nominally fought this by adopting a system called Content Credentials, built on the C2PA standard. C2PA — short for Coalition for Content Provenance and Authenticity — is a group founded in 2021 by Adobe, Intel, Microsoft, ARM, Truepic, and the BBC to develop a provenance-based standard. As Mosseri suggested, C2PA addresses deepfakes not by directly labeling fake material, but by authenticating media that’s not AI-generated. It does this by attaching cryptographically signed metadata to images, videos, and audio at the point of creation or editing, allowing us to verify who made something, how and when it was made, and whether AI was used in the process. Meta joined the C2PA Steering Committee in September 2024 to support and promote the standard, noting that the ability to understand digital content is “critical to maintaining the health of the digital ecosystem.”
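For the curious, the mechanics are fairly concrete: per the C2PA specification, a manifest in a JPEG file is carried in JUMBF boxes embedded in APP11 (0xFFEB) marker segments. As a rough, unofficial sketch — not how Instagram or any real verifier works — here is a minimal check for whether a JPEG byte stream even appears to carry a C2PA manifest (the function name is mine):

```python
# Illustrative sketch: scan a JPEG's marker segments for an embedded
# C2PA manifest. C2PA stores its manifest in JUMBF boxes inside APP11
# (0xFFEB) segments; a crude check is to look for an APP11 segment
# whose payload mentions the "c2pa" label.
def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    i = 2  # skip the SOI marker (0xFFD8) at the start of the file
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost sync with the marker stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start-of-scan: compressed pixel data follows
            break
        # segment length is big-endian and includes its own two bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 segment
            return True
        i += 2 + length
    return False
```

A real verifier would go much further — parsing the JUMBF boxes and validating the manifest’s cryptographic signatures — while this only detects that the marker is present at all.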

While C2PA has the backing of Microsoft, Meta, Google, OpenAI, TikTok, Qualcomm, and many other large tech companies, it’s just one system that’s trying to separate real from fake. And while the system has its place, it clearly isn’t being implemented in a way that’s actually helping to protect people from AI slop or misleading deepfakes. Even if more synthetic content is embedded with C2PA information, everyday people are still largely expected to manually hunt for it themselves across the images and videos they see online, despite many not even being aware that C2PA exists. If anything, it seems like AI providers are using C2PA to distance themselves from the problem, while continuing work on their own slop factories.

Companies have thrown their weight behind C2PA and other provenance-based solutions like Google’s SynthID watermarking system. (There are also inference-based solutions available that scan for subtle signs of synthetic generation — like Reality Defender, which is also a member of the C2PA initiative — but those can only rank the likelihood that AI was used.) But provenance-based solutions have pitfalls. For one thing, absolutely everyone involved with every stage of media creation and hosting needs to be on board, which is laughably unachievable. C2PA, for instance, has been only gradually adopted by camera companies like Canon, Nikon, Sony, Fujifilm, and Leica, with support slow and mostly limited to new camera releases.

“Older cameras that do not support C2PA will continue to produce important and valid photographs,” Leica Camera USA spokesperson Nathan Kellum-Pathe told The Verge. “For these images, trust will still rely on context, reputation, and editorial responsibility.”

Provenance metadata is also so flimsy that OpenAI — a steering member of C2PA — points out it can “easily be removed either accidentally or intentionally.” LinkedIn and TikTok still fail to reliably tag content that’s supposed to carry C2PA metadata. YouTube uses C2PA, Google’s SynthID, and other systems for proactive AI labeling, but those labels are also inconsistent and difficult to spot. And nobody even knows what a photo is these days, so boiling down what actually counts as real or fake is far easier said than done. Meta learned this the hard way by slapping real photographs on Instagram with “Made by AI” labels, pissing off a lot of photographers.
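OpenAI’s point about fragility is easy to demonstrate: because the manifest lives in ordinary metadata segments, dropping those segments removes all provenance while leaving the image’s pixel data untouched. A hedged illustration (the segment layout follows the JPEG and C2PA specs; the function name is mine):

```python
# Illustrative sketch of how fragile embedded provenance is: deleting
# the APP11 (0xFFEB) segments from a JPEG byte stream removes any C2PA
# manifest without touching the compressed image data at all.
def strip_app11(jpeg_bytes: bytes) -> bytes:
    out = bytearray(jpeg_bytes[:2])  # keep the SOI marker (0xFFD8)
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start-of-scan: copy the rest verbatim below
            break
        # segment length is big-endian and includes its own two bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker != 0xEB:  # copy every segment except APP11
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    out += jpeg_bytes[i:]  # pixel data and everything after it survives
    return bytes(out)
```

Re-encoding an image — or just running it through a screenshot tool — has the same effect, which is why provenance only works when every hop in the chain preserves it.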

Meta has long since renamed these labels as “AI info” and made them far harder to spot. You should find this label in teeny text below someone’s account name when looking at AI-generated or manipulated content on the Instagram app, but it can intermittently be replaced with song names and other information about the post. If you spot it, you still need to open the three-dot menu on images and videos to actually read the AI info label. These AI labels also may not appear at all on Instagram’s desktop website, even on posts that feature the “AI info” label on the platform’s mobile apps. If there are no labels or visual indicators of C2PA at all, you’re expected to scan suspicious content using a Chrome browser extension or by manually uploading it to one of the official C2PA checker websites.

The “AI info” label location under this Instagram account name is also used to display information about location and audio details. And while the label appears for this image on the Instagram app, nothing appears if you view it on the web. Image by Chaosdreamland / Jess Weatherbed / The Verge
