
I Tried Vibe Coding With Different Gemini Models. Here's What I Learned


Vibe coding can be a lot of fun with the right mindset. All you need is an idea, and chatbots such as Claude, Gemini and ChatGPT can generate workable code for you based on your instructions. I've spent a fair amount of time vibe coding event calendars and retro games just by chatting with LLMs, and it can open up a world to people who never thought they'd be able to create something out of code.


However, the model you use can have a dramatic impact on the quality of the project output. I wanted to see how the lighter models compare to the "thinking" models, as Google and OpenAI refer to them. The lighter models go by different names: Google's Gemini interface labels its option Fast (the underlying model is actually, for example, Gemini 2.5 Flash), while OpenAI calls its equivalent Instant.

To get a feel for how different each model is for vibe coding, I ran a loose experiment. I first built a project using Gemini's Thinking model -- Gemini 3 Pro -- and then tried to replicate it with the fast model, reusing the prompts from the first project. Since there's no way to guarantee the responses from each model, I knew there would be differences and the conversations would fork, but for the most part, I used identical prompts for both projects.

At the time of this testing, the fast model was Gemini 2.5 Flash. I expected the finished results to be different, and they were, though not nearly as much as I anticipated. What really differed was how I got from A to Z with each model.

I really didn't know where to start with my project, so I just asked Gemini to come up with some interesting vibe coding projects for me. One of them was a "Trophy Display Case," and I took that as a jumping-off point. I asked Gemini to display a list of horror movies, instead of trophies, and provide more information about them when you clicked on one of the posters. Outside of those requirements, I gave both Gemini models creative control.
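For flavor, here's a minimal sketch of the data model a project like that boils down to. The movie titles and details below are placeholder examples, not anything either Gemini model actually generated, and `details_for` is a hypothetical stand-in for the click handler a real page would attach to each poster:

```python
# Toy data model for a clickable horror-movie "display case".
# These entries are placeholders, not output from either Gemini model.
MOVIES = [
    {"title": "The Thing", "year": 1982, "director": "John Carpenter"},
    {"title": "Alien", "year": 1979, "director": "Ridley Scott"},
    {"title": "Hereditary", "year": 2018, "director": "Ari Aster"},
]

def details_for(title):
    """Return the info panel shown when a poster is clicked, or None."""
    for movie in MOVIES:
        if movie["title"] == title:
            return f'{movie["title"]} ({movie["year"]}), dir. {movie["director"]}'
    return None

print(details_for("Alien"))  # Alien (1979), dir. Ridley Scott
```

In the real project, all of this lived in a single HTML page that Gemini generated; the interesting part of the experiment was how each model chose to flesh out everything beyond those two requirements.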


Fast vs. thinking AI models: What's the difference?

If Google gives us a choice between the Flash and Pro models, they must be substantially different, right? Yes and no. They're both large language models, but they operate differently. To the everyday user, "fast" and "thinking" define the differences between the two well enough: speed versus depth.

A reasoning model is an LLM that's been fine-tuned to break complex problems into smaller steps before generating the final output. It does this by working through an internal chain-of-thought reasoning process. Both Gemini 2.5 Flash and Gemini 3 Pro are reasoning models, but Gemini 2.5 Flash takes a hybrid approach, balancing speed against depth of reasoning.
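As a loose analogy -- not how either model actually works internally -- the difference is like producing an answer in a single pass versus writing out intermediate steps and only then committing to a final answer:

```python
# Toy analogy for "fast" vs. "thinking" answering. Not a real LLM,
# just an illustration of decomposing a problem into explicit steps.

def fast_answer(a, b, c):
    """Single pass: jump straight to the result."""
    return a * b + c

def thinking_answer(a, b, c):
    """Break the problem into steps, keep a visible trace, then answer."""
    trace = []
    product = a * b
    trace.append(f"step 1: {a} * {b} = {product}")
    total = product + c
    trace.append(f"step 2: {product} + {c} = {total}")
    return total, trace

print(fast_answer(23, 17, 5))   # 396
total, steps = thinking_answer(23, 17, 5)
print(total, steps)             # same answer, plus the worked steps
```

Both paths land on the same answer here; the point is that the "thinking" path spends extra work making each intermediate step explicit, which is roughly the trade a reasoning model makes in exchange for slower responses.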
