
AI image generators are getting better by getting worse


Allison Johnson is a senior reviewer with over a decade of experience writing about consumer tech. She has a special interest in mobile photography and telecom. Previously, she worked at DPReview.

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on smartphones and digital imagery — real or otherwise — follow Allison Johnson. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

How it started

Remember the early days of AI image generation? Oh, how we laughed when our prompts resulted in people with too many fingers, rubbery limbs, and other telltale details that gave the fakes away. But if you haven’t been keeping up, I regret to inform you that the joke is over. AI image generators are getting way better at creating realistic fakes, partly thanks to a surprising new development: making image quality a little bit worse.

If you can believe it, OpenAI debuted its image generation tool DALL-E a little less than five years ago. In its first iteration, it could only generate 256 x 256 pixel images: tiny thumbnails, basically. A year later, DALL-E 2 arrived as a huge leap forward. Images were 1024 x 1024 and surprisingly real-looking. But there were always tells.

In Casey Newton’s hands-on with DALL-E 2 just after it launched in beta, he included an image made from his prompt: “A shiba inu dog dressed as a firefighter.” It’s not bad, and at a glance it might fool you. But the contours of the dog’s fur are fuzzy, the patch on its (adorable little) coat is just some nonsense scribbles, and a weird, chunky collar tag hangs off to the side of the dog’s neck where it doesn’t belong. The cinnamon rolls with eyes from the same article were easier to believe.

Midjourney and Stable Diffusion also came to prominence around this time, embraced by AI artists and people with, uh, less savory designs. New, better models emerged over the next couple of years, minimizing the flaws and rendering text somewhat more accurately. But most AI-generated images still carried a certain look: a little too smooth and perfect, with a kind of glow you’d associate with a stylized portrait more than a candid photo. Some AI images still look that way, but there’s a new trend toward actual realism that tones down the gloss.

How it’s going

OpenAI is a relative newcomer in the tech world when you compare it to the likes of Google and Meta, but those established companies haven’t been standing still as AI ascends. In the latter half of 2025, Google released a new image model in its Gemini app called Nano Banana. It went viral when people started using it to generate images of themselves as realistic miniature figurines. My colleague Robert Hart tried out the trend and noticed something interesting: the model preserved his actual likeness more faithfully than other AI tools.

That’s the thing about AI images: they tend toward a neutral, bland middle ground. Ask for an image of a table and the result will look basically right, but it will also feel like a computer averaging out every table it’s ever seen into something lacking any actual character. The things that make an image of a table look like the real thing — or a reproduction of your own facial features — are actually imperfections. I don’t mean the bizarre artifacts of AI trying to understand letters of the alphabet. I mean a little clutter, some messiness, and lighting that’s less than ideal. And lately, that also means imitating the imperfections of our most popular cameras.
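To make that idea concrete, here’s a minimal, hypothetical sketch in Python (using Pillow and NumPy) of what “imperfections” mean in practice: take a too-clean image, add a bit of sensor-style grain, and run it back through lossy JPEG compression, the same kinds of artifacts a phone camera pipeline leaves behind. This is purely illustrative; it is not how Google or OpenAI build realism into their models, and the function name and parameters are invented for this example.

```python
# Toy illustration (not any vendor's actual pipeline): degrading a
# too-clean image with camera-like imperfections using Pillow and NumPy.
import io

import numpy as np
from PIL import Image


def add_camera_imperfections(img: Image.Image,
                             noise_sigma: float = 6.0,
                             jpeg_quality: int = 78) -> Image.Image:
    """Add sensor-style grain, then recompress as JPEG to mimic
    the artifacts a phone camera leaves in a candid photo."""
    arr = np.asarray(img.convert("RGB")).astype(np.float32)

    # Gaussian "sensor noise" -- real sensor noise is more complex
    # (shot noise scales with brightness), but this is the gist.
    arr += np.random.normal(0.0, noise_sigma, arr.shape)
    noisy = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

    # Round-trip through lossy JPEG to pick up compression artifacts.
    buf = io.BytesIO()
    noisy.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


# Usage: result = add_camera_imperfections(Image.open("render.png"))
```

The point of the sketch is the direction of travel: each step makes the image technically worse, and that slight degradation is exactly what reads as “shot on a phone” rather than “rendered by a model.”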
