
I tested Google’s upcoming Gemini Nano 4 — its faster, smarter AI isn’t what I expected

Why This Matters

Google's Gemini Nano 4 models represent a significant advancement in on-device AI, offering faster, smarter capabilities that can run directly on smartphones. This development enhances user privacy, reduces reliance on cloud processing, and paves the way for more powerful mobile AI applications, impacting both consumers and the broader tech industry. The integration of these models into upcoming devices signals a shift toward more autonomous and efficient AI-powered smartphones.

Key Takeaways

Earlier this month, Google lifted the lid on its latest and most powerful Gemma 4 AI models, which you can run on your own hardware. Gemma competes on performance with open models like GLM5 and Qwen3.5, though Google's closed Gemini model remains its flagship against OpenAI and Anthropic. Still, the exciting news is that Gemma 4 comes in versions small enough to run on your smartphone.

Specifically, Gemma 4 E2B and E4B are distilled down to effective two- and four-billion-parameter footprints. At just 4.2GB and 5.9GB, these can more easily fit into phones with 12GB of RAM or more. These are also the foundations for Google’s next-generation Gemini Nano smartphone models — Gemini Nano 4 Fast and Nano 4 Full — scheduled to launch later this year.
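Those file sizes hint at how aggressively the weights are compressed. A quick back-of-the-envelope sketch of bytes per effective parameter, using the figures above (note the assumption that the download size maps directly to the in-memory weight footprint; runtime overhead is ignored):

```python
# Rough estimate: bytes of model file per effective parameter.
# File sizes and parameter counts are the article's figures; treating
# file size as the weight footprint is a simplifying assumption.

def bytes_per_param(file_size_gb: float, effective_params_billion: float) -> float:
    """Approximate storage cost per parameter, in bytes."""
    return (file_size_gb * 1e9) / (effective_params_billion * 1e9)

models = {
    "E2B": (4.2, 2.0),  # 4.2 GB file, ~2B effective parameters
    "E4B": (5.9, 4.0),  # 5.9 GB file, ~4B effective parameters
}

for name, (size_gb, params_b) in models.items():
    print(f"{name}: ~{bytes_per_param(size_gb, params_b):.2f} bytes/param")
    # E2B works out to ~2.1 bytes/param, E4B to ~1.5 bytes/param
```

Anything near or below two bytes per parameter implies the weights are quantized well under 16-bit precision, which is how these models squeeze into a 12GB phone alongside the OS and other apps.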

Google notes that the new models offer improved reasoning, math skills, time understanding, and image capabilities. The Full model retains greater reasoning power for complex tasks, while Fast is optimized for lower latency responses. In fact, Google claims the Fast model is up to 4x speedier than previous versions and consumes up to 60% less battery when running on the TPU.

Gemma 4 promises to deliver Google's fastest and smartest on-device AI tools yet.

That all sounds pretty impressive, and to help developers get a head start on integrating these models into their Android apps, Google has released early access to Gemini Nano 4 via an AICore Developer Preview. I grabbed a copy of the app on my Google Pixel 10 Pro XL, where it runs these AI models on the Tensor G5's TPU, to see just what sort of improvements might be on offer when Nano 4 arrives for prime time.

The AICore Developer Preview app offers access to Gemini Nano 3, Nano 4 Fast, and Nano 4 Full, so I decided to run comparisons against the existing mainstream model to better gauge exactly what’s changing. Of course, things might be tweaked here and there before full release, but let’s jump in anyway.

Poll: Is Google focusing too much on AI at the cost of hardware? (604 votes)

Yes, hardware matters more: 56%
No, AI is the future: 3%
We need a better balance of both: 39%
Not sure: 2%

Testing some prompts


The first thing I wanted to check was how well all of these models perform on tasks you might reasonably run on an on-device AI model. Nothing huge or multi-step. Instead, I focused on logic, math, and text-summary prompts to see how they fared.
