ChatGPT has become the Xerox of the AI world, but OpenAI no longer has the stage to itself. Google has been pushing Gemini deeper into its ecosystem, and every few weeks it feels like another rival pops up with promises of being faster, smarter, or more useful. So when the latest GPT-5 model launched last month, it felt like a release OpenAI couldn’t afford to get wrong.
While OpenAI’s benchmarks claim that GPT-5 is better than the competition, the numbers only tell us half the story. For proof, look no further than the widespread backlash over GPT-5’s responses at launch. OpenAI was forced to alter the model and make other changes, but more on how that drama unfolded later.
Now that the dust has settled on GPT-5’s rocky debut, is it really better than Google’s best model, Gemini 2.5 Pro, in the real world? I’ve shuttled extensively between the two chatbots, so here’s my honest assessment.
Which AI chatbot do you prefer, Gemini or ChatGPT? (135 votes)
ChatGPT: 33%
Gemini: 36%
I alternate between both chatbots: 24%
I don't use either chatbot: 7%
GPT-5 vs Gemini: Is one really better than the other?
The biggest and most impactful difference between the latest versions of ChatGPT and Gemini is that GPT-5 can detect on its own when a prompt is complex enough to warrant deeper analysis before it responds. This feature, which OpenAI has dubbed its model routing system, lets ChatGPT answer simpler queries quickly and even hand them off to a smaller model variant like GPT-5 mini to save time.
With Gemini’s most capable 2.5 Pro model, however, you typically have to wait upwards of ten seconds before a response shows up. It’s either that, or you’ll have to switch to the less capable Gemini 2.5 Flash model ahead of time. Most people don’t know the difference between Flash and Pro, and even though I do, I usually forget to switch back and forth.
All in all, I prefer the faster response times I typically get from GPT-5. And you can always prompt ChatGPT to use its larger model and engage in reasoning by saying something like “Think hard about this question.”
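To make that routing idea concrete, here’s a minimal sketch of how a prompt router could choose between a fast model and a slower reasoning model. OpenAI hasn’t published how its router actually works, so the keyword heuristic and the call_fast_model/call_reasoning_model helpers below are purely illustrative assumptions.

```python
# Illustrative sketch only: OpenAI hasn't published its router, so the
# heuristic and helper functions here are assumptions for demonstration.

def call_fast_model(prompt: str) -> str:
    # Hypothetical stand-in for a small, fast model (e.g. a "mini" variant).
    return f"[fast model] quick answer to: {prompt[:40]}..."

def call_reasoning_model(prompt: str) -> str:
    # Hypothetical stand-in for a larger model that "thinks" before replying.
    return f"[reasoning model] considered answer to: {prompt[:40]}..."

def looks_complex(prompt: str) -> bool:
    # Crude proxy for the routing decision: long prompts or explicit
    # requests for deeper thinking get routed to the reasoning model.
    cues = ("think hard", "step by step", "prove", "derive", "plan out")
    lowered = prompt.lower()
    return len(prompt.split()) > 80 or any(cue in lowered for cue in cues)

def route(prompt: str) -> str:
    # Simple prompts get a fast reply; complex ones accept the latency
    # cost of deeper reasoning -- the trade-off described above.
    if looks_complex(prompt):
        return call_reasoning_model(prompt)
    return call_fast_model(prompt)

print(route("What's the capital of France?"))
print(route("Think hard about this question: how should I plan a week-long trip?"))
```

The point of the sketch is the trade-off, not the heuristic: the router spends extra latency only on prompts it judges will benefit from it, which is roughly what ChatGPT now does automatically and what Gemini still leaves to a manual Pro/Flash switch.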