
I tested ChatGPT Images 2.0 vs. Gemini Nano Banana to see which is better - this model wins

Why This Matters

The recent advancements in ChatGPT Images 2.0 demonstrate significant improvements in context-aware and basic image generation, positioning it as a competitive tool in AI-driven visual content creation. However, challenges like Nano Banana's struggles with text prompts and privacy concerns related to Gemini's personalization highlight ongoing industry hurdles. These developments are crucial for consumers and developers seeking more accurate, versatile, and privacy-conscious AI image generation solutions.


David Gewirtz / Elyse Betters Picaro / ZDNET


ZDNET's key takeaways

ChatGPT's image generation has improved dramatically.

Nano Banana stumbled on text and prompt discipline.

Gemini's personalization surprise raised privacy concerns.

Last week, OpenAI unveiled two major releases with some astounding capabilities. First, the company released ChatGPT Images 2.0, which goes beyond basic image generation and adds the ability to include text and context derived from real data. Second, the company introduced its latest frontier model, GPT-5.5, which is a better-and-faster spec bump from GPT-5.4.

Also: I tried ChatGPT Images 2.0: A fun, huge leap - and surprisingly useful for real work

After its release last week, I ran ChatGPT Images 2.0 through a series of tests to put its context-aware capabilities through their paces, and it did a great job. But what about basic image generation? Did it get better, stay at the same level, or somehow get worse?

To find out, I went back to the basic image-generator testing protocols I usually use and compared the new ChatGPT Images 2.0 to Google Gemini's Nano Banana. When I ran these tests in December 2025, Nano Banana scored an impressive 93%, compared to ChatGPT's fairly disappointing 74%. ChatGPT's score was so low mostly because the model refused to complete our pop-culture tests.
