
I Tried Vibe Coding the Same Project Using Different Gemini Models. The Results Were Dramatic


Vibe coding is a lot of fun when you know the general gist of the process. It's as easy as talking to an AI chatbot and having it code up an app for you, but it requires time and patience to iron out issues. I've created several vibe coding projects, but there are always new ways to test how good these outputs can be, especially when you consider the model you're using.

With so many AI models to tinker with, results can vary significantly, especially if you don't have a solid plan in mind. I wanted to see how the lighter models compare to the "thinking" models, as Google and OpenAI refer to them. These lighter models go by different names: Google's Gemini interface labels them Fast (though the underlying model is actually called, for example, Gemini 2.5 Flash), while OpenAI calls its version Instant.

I decided to run an experiment using two models to create the same project. First, I built a project from beginning to end using Google's Gemini 3 Pro, then I tried to replicate it with one of Google's lighter models by attempting to have the same conversation. At the time, the most recent of the light models was Gemini 2.5 Flash. The results were telling: Both technically produced the same output, but the journey to get there was very different.

I was lacking inspiration for this experiment, so I just offloaded the brainstorming to Gemini. I asked it to come up with interesting vibe coding projects that I could run with, and I opted for one called "Trophy Display Case." I asked Gemini to display a list of horror movies instead of trophies and to provide more information about each film when you clicked on its poster. Outside of those requirements, I gave both Gemini models creative control.

Fast vs. thinking AI models: What's the difference?

If Google gives us a choice between the Flash and Pro models, they must be substantially different, right? Yes and no. They're both large language models, but they operate differently. To the everyday user, "fast" and "thinking" define the differences between the two well enough: speed versus depth.

A reasoning model is an LLM that's been fine-tuned to break complex problems into smaller steps before generating the final output. It does this by working through an internal chain-of-thought reasoning path before producing a response. Both Gemini 2.5 Flash and Gemini 3 Pro are reasoning models, but Gemini 2.5 Flash takes a hybrid approach: It offers a balancing act between speed and reasoning.

Gemini 3 Pro is the stronger reasoning model, and is optimized for diving deep to find answers. As a result, it's slower than more efficient models like 2.5 Flash. Google has since released Gemini 3 Flash, a more powerful base model that replaced 2.5 Flash. Gemini 3 Pro remains the most powerful reasoning model available in Gemini for most people.

Gemini 3 Pro model did most of the work

The final project Gemini 3 Pro made wasn't perfect, but it was better than my original idea and about a mile ahead of what Gemini 2.5 Flash produced.

Google Gemini/Screenshot by Blake Stimac/CNET
