Vibe coding can be a lot of fun when you have the right mindset. It doesn't take an engineer to have a good idea that could turn into something great, and that's what makes creating apps using only natural language so appealing -- anyone can pick it up. I've dabbled in a fair amount of vibe coding myself, making event calendars and recreating childhood video games in my web browser just by chatting with a chatbot, but I know I've barely scratched the surface.
The model you use can have a dramatic impact on the quality of the project output, and I witnessed this firsthand. I wanted to see how the lighter models compare with the "thinking" models, as Google and OpenAI refer to them. The lighter models go by different names: Google's Gemini interface labels its option Fast (the underlying model is, for example, Gemini 2.5 Flash), while OpenAI calls its version Instant.
To get a feel for how different each model is for vibe coding, I ran an experiment. I started by creating a project using Gemini's Thinking model -- Gemini 3 Pro -- then tried to replicate the same project with the fast model using the identical prompts from the first run. Since there's no way to guarantee the same responses from each model, I knew there would be variations and the conversations would fork, but I kept the prompts identical wherever I could.
At the time of this testing, the fast model was Gemini 2.5 Flash. I expected the finished results to differ, and they did, but not nearly as much as I anticipated. What really differed was how I got from A to Z with each model.
I was lacking inspiration for this experiment, so I offloaded it to Gemini. I asked it to come up with interesting vibe coding projects I could run with, and I opted for one called "Trophy Display Case." I asked Gemini to display a list of horror movies instead of trophies and to provide more information about each one when you clicked on its poster. Outside of those requirements, I gave both Gemini models creative control.
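To give a rough sense of what that spec boils down to under the hood, here's a minimal sketch of a clickable poster gallery in TypeScript. The movie entries, element IDs and file paths are placeholders of my own for illustration, not what either Gemini model actually generated.

```typescript
// Sketch of the "Trophy Display Case" idea, reimagined as a horror-movie gallery.
// Assumes the page contains <div id="gallery"></div> and <div id="details"></div>.
interface Movie {
  title: string;
  year: number;
  posterUrl: string;
  synopsis: string;
}

// Placeholder data -- a real vibe-coded version would list many more films.
const movies: Movie[] = [
  { title: "Halloween", year: 1978, posterUrl: "posters/halloween.jpg",
    synopsis: "A masked killer stalks babysitters on Halloween night." },
  { title: "The Thing", year: 1982, posterUrl: "posters/the-thing.jpg",
    synopsis: "An Antarctic research crew battles a shape-shifting alien." },
];

const gallery = document.getElementById("gallery")!;
const details = document.getElementById("details")!;

for (const movie of movies) {
  const poster = document.createElement("img");
  poster.src = movie.posterUrl;
  poster.alt = movie.title;
  // Clicking a poster reveals more information about that film.
  poster.addEventListener("click", () => {
    details.textContent = `${movie.title} (${movie.year}): ${movie.synopsis}`;
  });
  gallery.appendChild(poster);
}
```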
Fast vs. thinking AI models: What's the difference?
If Google gives us a choice between the Flash and Pro models, they must be substantially different, right? Yes and no. They're both large language models, but they operate differently. To the everyday user, "fast" and "thinking" define the differences between the two well enough: speed versus depth.
A reasoning model is an LLM that's been fine-tuned to break complex problems into smaller steps before generating the final output, working through an internal chain-of-thought reasoning process along the way. Both Gemini 2.5 Flash and Gemini 3 Pro are reasoning models, but Gemini 2.5 Flash takes a hybrid approach: It balances speed against reasoning depth.