
Gemini's AI Image Detector Only Scratches the Surface. That's Not Good Enough


When you ask AI to look in the mirror, it doesn't always see itself. That's the sense you get when you ask it to determine whether an image is genuine or AI-generated.

Google last week took a stab at helping us distinguish real from deepfake, albeit an extremely limited one. In the Gemini app, you can share an image and ask if it's real, and Gemini will check for a SynthID -- a digital watermark -- to tell you whether it was made by Google's AI tools. (On the other hand, that same week Google also rolled out Nano Banana Pro, its new image model, which makes it even harder to spot a fake with the naked eye.)

Within this limited scope, Google's reality check functions pretty well. Gemini responds quickly and will tell you if something was made by Google's AI. In my testing, it even worked on a screenshot of an image. And the answer is to the point: yes, this image, or at least more than half of it, is fake.

But ask it about an image made by any other image generator and you won't get that smoking-gun answer. What you get instead is a review of the evidence: The model looks for the typical tells of something being artificial. At that point, it's basically doing what we do with our own eyes, and we still can't totally trust its results.

As reliable and necessary as Google's SynthID check is, asking a chatbot to evaluate something that lacks a watermark is almost worthless. Google has provided a useful tool for checking the provenance of an image, but if we're going to be able to trust our own eyes on the internet again, every AI interface we use should be able to check images from every kind of AI model.

I hope that soon we'll be able to just drop an image into, say, Google Search and find out if it's fake. The deepfakes are getting too good not to have that reality check.

Checking images with chatbots is a mixed bag

There's very little to say about Google's SynthID check. When you ask Gemini (in the app) to evaluate a Google-generated image, it knows what it's looking at. It works. I'd like to see it rolled out across all the places Gemini appears -- like the browser version and Google Search -- and according to Google's blog post on the feature, that's already in the works.

Because Gemini in the browser doesn't have this functionality yet, we can see how the model itself, without SynthID, responds when asked whether an AI-generated image is real. I asked the browser version of Gemini to evaluate an infographic Google provided to reporters as a handout showing its new Nano Banana Pro model in action. The image was AI-generated -- it even said so in its metadata. Gemini in the app used SynthID to suss it out. Gemini in the browser was wishy-washy: It said the design could be from AI or a human designer, and that its SynthID tool didn't find anything indicating AI. (When I asked it to try again, it said it encountered an error with the tool.) The bottom line? It couldn't tell.
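
If you're curious to reproduce this no-watermark test yourself, you can put the same question to Gemini programmatically. Here's a minimal sketch using Google's google-genai Python SDK -- the API key, file path and model name are placeholder assumptions, and as with the browser version, the call presumably returns only the model's visual judgment, not the in-app SynthID watermark check:

    # Sketch: ask Gemini via the API whether an image looks AI-generated.
    # Requires: pip install google-genai pillow
    from google import genai
    from PIL import Image

    client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

    # Placeholder path: any image you want to vet
    image = Image.open("suspect_image.png")

    # The prompt mirrors what you'd type into the chatbot; the model weighs
    # visual tells rather than verifying a SynthID watermark
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # assumed model name; use whichever Gemini model you can access
        contents=[image, "Is this image real, or was it generated by AI? Explain your reasoning."],
    )
    print(response.text)

Expect the same wishy-washy verdict the browser version gave me: without a watermark to verify, the model can only weigh visual tells.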

What about other chatbots? I had Nano Banana Pro generate an image of a tuxedo cat lying on a Monopoly board. At a glance, the image was plausibly realistic; unsuspecting coworkers I sent it to thought it was my cat. But if you look more closely, you'll see the errors: The Monopoly set, for example, makes no sense -- Park Place appears in multiple wrong places, and the colors are off.
