
Really, you made this without AI? Prove it

Why This Matters

As AI-generated content becomes increasingly indistinguishable from human-created work, the tech industry faces an urgent challenge: authenticating and labeling genuine content to protect creators and maintain trust. Reliable identification standards could help consumers discern original works from AI output, and this piece underscores the need for industry-wide solutions to preserve authenticity online.


“This looks like AI.”

It’s a phrase I dread seeing as a writer who dabbles in illustration and amateur photography. In a world where generative AI technology is increasingly adept at mimicking the work of humans, people are naturally skeptical when online platforms refuse to label even obvious AI content.

This leads me to one conclusion: maybe we should start labeling human-made text, images, audio, and video with something akin to a universally recognized Fair Trade logo. The machines sure as hell aren’t motivated to label their work, but the creators at risk of being displaced most definitely are.

Fortunately, I’m not alone in my thinking.

Instagram head Adam Mosseri suggested as much in December, saying it would be “more practical to fingerprint real media than fake media” as AI technology improves to the point of making content that’s visually indistinguishable from that made by creative professionals.

Nobody can say for sure how much of what we find on the internet is AI-generated, but there’s widespread perception that news sites, social media platforms, and search engine results are rife with it, according to a recent Reuters Institute survey.

Authenticating human-made works was something the C2PA content credentials standard — which is already used by Meta’s platforms — was supposed to do. But so far, its implementation has been wholly ineffectual, despite having received broad industry support. It turns out that lots of people making and platforming AI content are motivated to hide its origins because of the clicks, chaos, and cash it can generate.

To help human creatives distinguish their work from that spat out by AI generators, a number of solutions have emerged in recent years. And like C2PA, they face significant obstacles to widespread adoption.

Here are just a handful of the badges being offered by organizations trying to distinguish human-made works from AI-generated content. Image compiled by The Verge
