ZDNET's key takeaways
Google's state-of-the-art image model is available for all.
It first became famous under the codename "nano-banana."
The model now comes with 10 different aspect ratios.
Gemini 2.5 Flash Image, Google's state-of-the-art image-generating AI model better known by its codename "nano-banana" and formally introduced in August, is officially out of its testing phase and ready for full-scale, real-world use, the company announced Thursday.
Also: OpenAI's Sora 2 launches with insanely realistic video and an iPhone app
In addition to being generally available, Gemini 2.5 Flash Image now comes with 10 aspect ratios across four styles (landscape, square, portrait, and "flexible"), enabling "effortless content creation across various formats, from cinematic landscapes to vertical social media posts," Google wrote in its announcement.
The company also published developer docs and a "cookbook" to help users get started with Gemini 2.5 Flash Image, which costs $0.039 per image.
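For developers, a minimal call looks roughly like the sketch below, which uses Google's google-genai Python SDK. The model ID, prompt, and output filename are illustrative assumptions rather than values confirmed here; the official developer docs and cookbook carry the canonical parameters, including the new aspect-ratio controls.

```python
# Minimal sketch of generating an image with Gemini 2.5 Flash Image via the
# google-genai Python SDK. Model ID, prompt, and filename are assumptions;
# consult Google's developer docs for the exact GA parameters.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # expects an API key, e.g. via the GEMINI_API_KEY env var

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed GA model ID
    contents="A photorealistic product shot of a ceramic mug on a marble counter",
)

# Image bytes come back as inline data alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("mug.png")
```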
Why it stands apart
Available now through the Gemini API on Google AI Studio, and for enterprise use through Vertex AI, the model is known for its ability to maintain subject consistency across sets of images.
Also: My new favorite Photoshop AI tool lets me combine images in one click - and I can't stop
Brands, for example, can create images of the same product in multiple environments, giving them more options to choose from. Likewise, users can generate pictures of themselves or fictional characters wearing different outfits, say, without having to worry about the model adding rogue fingers, or falling prey to the other hallucinatory quirks for which image-generating AI tools have become notorious.
Gemini 2.5 Flash Image also specializes in making minor edits to images based on natural language instructions ("Remove that marinara stain from my shirt, please") and fusing multiple images together, among other technical abilities.
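In the API, that kind of edit amounts to sending the source image alongside a natural-language instruction. The snippet below is a rough sketch under the same assumptions as above; the filenames and model ID are placeholders, not values from Google's documentation.

```python
# Sketch of a natural-language edit: pass the original image plus an instruction.
# Filenames and the model ID are placeholders, not values from Google's docs.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()
source = Image.open("shirt_photo.png")  # hypothetical input image

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed GA model ID
    contents=[source, "Remove the marinara stain from the shirt"],
)

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("shirt_clean.png")
```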
Before its official debut -- and before Google had even publicly claimed credit for it -- the model went by the codename "nano-banana" and quickly became a superstar preview model on LMArena. On the same day that it was publicly introduced in late August, Adobe announced that it could be accessed through its Firefly and Express tools.
Watermarking and deepfakes
Many AI developers are investing heavily in image-generating tools, aiming to sell them to creators and businesses as cost-efficient alternatives to lengthy, expensive in-person photo shoots. Just this week, OpenAI released Sora 2, the latest model behind its Sora AI video generator, which showed a massive improvement in photorealism.
Also: Meta gives advertisers new AI personalization tools - while using your chats to target content
As a result, the technology has advanced rapidly, producing a flurry of tools that can generate photorealistic images in seconds, along with a torrent of deepfakes. In the absence of comprehensive federal regulation, tech companies have had to take responsibility for building transparency measures into their image-generating tools so that users know when they're seeing something generated by a machine.
Any image created or edited with Gemini 2.5 Flash Image carries an invisible SynthID watermark, according to Google, which means it can be identified as AI-generated by another model trained specifically for that purpose, even though the watermark is imperceptible to the human eye.