Google set the bar high for Gemini 3. It’s promising a bunch of upgraded features in its shiny new AI model, from generating code that produces interactive 3D visualizations to “agentic” capabilities that complete tasks. But as we’ve seen in the past, what’s advertised doesn’t always match up to reality. So we put some of Google’s claims to the test and found that Gemini 3 delivers reasonably well — with caveats.
Google announced the Gemini 3 family of models earlier this week, with the flagship Gemini 3 Pro rolling out to users first. Gemini 3 Pro is supposed to come with big upgrades to reasoning, along with the ability to provide more concise and direct responses compared to Google’s previous models.
Some of the biggest promised improvements are to Canvas, the built-in workspace inside the Gemini app, where you can ask the AI chatbot to generate code, as well as preview the output. When building in Canvas, Google says Gemini 3 can interpret material from different kinds of sources at the same time, like text, images, and videos. The model can handle more complex prompts as well, allowing it to generate richer, more interactive user interfaces, models, and simulations, according to Google. The company says Gemini 3 is “exceptional” at zero-shot generation, too, which means it’s better at completing tasks it hasn’t been explicitly trained on or given examples of.
For my first test, I tried out one of the more complex requests that Google showed off in one of its demos: I asked Gemini 3 to create a 3D visualization of the difference in scale between a subatomic particle, an atom, a DNA strand, a beach ball, the Earth, the Sun, and the galaxy, as shown here.
The Earth is mostly land, apparently. Screenshot: The Verge
Gemini 3 created an interactive visual similar to what Google demonstrated, letting me scroll through and compare the sizes of the different objects, which appeared to be correctly ordered from small to large, starting at the proton and maxing out at the cosmic web. (To be fair, I’d hope Gemini could figure out that a beach ball is much smaller than the Sun.) It included almost everything shown in the demo, but its image quality fell short in a couple of areas, as the 3D models of the DNA strand and beach ball were quite dim compared to what Google showed. I saw much the same when feeding Google’s other demos into Gemini. The model spit out the correct concept, but it was always a little shoddier, whether it had lower resolution or was just a little more disorganized.
Gemini 3’s output didn’t quite stack up to Google’s demo when I tried something a little simpler, either. I asked it to re-create a model of a voxel-art eagle sitting on a tree branch, and while my results were quite similar to the demo, I couldn’t help but notice that the eagle didn’t have any eyes, and the trees were trunkless. Branching out from Google’s example, a voxel-style panda came out alright, but standard 3D models of a penguin and a turtle were quite primitive, with little to no detail.
This eagle has no eyes. Screenshot: The Verge
But Gemini 3 isn’t just built for prototyping and modeling; Google is testing a new “generative UI” feature for Pro subscribers that packages its responses inside a “visual” magazine-style interface, or in the form of a “dynamic” interactive webpage. I only got access to Gemini 3’s visual layout, which Google showed off as a way to envision your travel plans, like a three-day trip to Rome.