ZDNET's key takeaways
ChatGPT Voice Mode rushes, sacrificing accuracy for speed
Web version answers with detail; voice often hallucinates
Turning off Advanced Voice Mode doesn't fully fix problems
OpenAI has been clear in its messaging that different models perform differently. But my recent testing has shown that different interaction modes, even using the same model, also perform differently.
Also: Is ChatGPT Plus still worth $20 when the free version offers so much - including GPT-5?
As it turns out, ChatGPT in Voice Mode (both Standard and Advanced) is considerably less accurate than the web version. The reason? It doesn't want to take time to think because that would slow down the conversation.
(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Fabulous confabulation
I got into this very odd, very stubborn conversation with ChatGPT's Advanced Voice Mode. What made it weird is that it became one of those conversations we've all had with a friend, where the friend seems insistent on spouting something that you know, for an absolute fact, is wrong. And yet the spouting continues.
Also: ChatGPT lets parents restrict content and features for teens now - here's how
So, at least in the sense that Voice Mode has managed to mimic human conversational impasses, the AI is approaching human behavior.
It all started with a question about the iPhone 16 Pro Max's physical buttons. I asked it to explain the function of the phone's buttons. In its answer, it mentioned a ring/silent switch on the left side and a single button on the right side.
Screenshot by David Gewirtz/ZDNET
Of course, there is no ring/silent switch on the iPhone 16 Pro Max, and there are two buttons on the right side. The buttons themselves are beside the point, though. What matters is what this line of conversation reveals about the AI.
Also: 5 reasons I use local AI on my desktop - instead of ChatGPT, Gemini, or Claude
In any case, I told the AI that my phone doesn't have a ring/silent switch.
Screenshot by David Gewirtz/ZDNET
After correcting ChatGPT, I asked it why it messed up its answer. The first responses were mostly obsequiously apologetic, but not unexpected.
Screenshot by David Gewirtz/ZDNET
Then, it started to make stuff up. In this case, it decided to explain to me that the iPhone has an in-display fingerprint sensor. I wish it did, but the iPhone has never actually had that feature. We know AIs hallucinate, so that's not terribly surprising. What's really interesting is the reason for its hallucinations, which I'll talk about in a minute.
Screenshot by David Gewirtz/ZDNET
I told the AI to take a moment and think. This prompting practice often works with the web-based chatbot, but it didn't succeed here. This time, the AI decided the Action button was on the right side of the phone instead of, or in addition to, the left side.
Screenshot by David Gewirtz/ZDNET
When I again corrected the AI, it backtracked to its story that there is just one button on the right side of the phone. In fact, there are two. The second button, which doesn't stick out the way the phone's other buttons do, is one of the big iPhone 16 Pro features: the Camera Control button, which doubles as a slider.
Screenshot by David Gewirtz/ZDNET
Keep in mind this is not a new phone. This phone has been out for over a year, so the AI should have had that information. But then came the big reveal, the reason I'm writing this article. It appears that Voice Mode rushes its answers in order to "quickly answer" in conversations.
Screenshot by David Gewirtz/ZDNET
In its own words:
I think I just jumped in quickly to answer you in conversation mode without pausing as much as I would if I were typing.
This appears to be a significant behavior of Voice Mode.
No talkie, less fibbie
I asked the exact same original question to GPT-5 in the web interface. It gave a fully detailed information dump that, as far as I can tell, was also completely accurate.
Screenshot by David Gewirtz/ZDNET
Social proof
When I pitched this story idea to my editor, she asked me to see what the socials had to say. Were others experiencing extra confabulation or poorly considered responses from Voice Mode?
Also: How people actually use ChatGPT vs Claude - and what the differences tell us
Indeed, they were.
Take this thread in Reddit's r/OpenAI subreddit, which started a year ago with complaints about ChatGPT's Voice Mode. Redditor FurlyGhost52 says, "Because it's designed to respond quickly, it doesn't put as much effort into what it says back."
Redditor fakedogman69 doesn't hold back, saying, "Like talking to an insane person, on cocaine. That aside, I also find its conversation style has become insufferable and totally unnatural as described by many people in this thread."
Then there's another thread, titled "I hate Advanced mode voice so much. It talks completely different than how it messages." In it, Redditor Usual_Cup2454 has an interesting insight about Advanced Voice Mode, saying, "One key difference between Advanced Voice Mode and standard Voice Mode is that standard uses your Custom Instructions, Advanced doesn't."
Also: ChatGPT just got a new personalization hub. Not everyone is happy about it
In another thread, Redditor Soliman-El-Magnifico says, "The answers are extremely shallow." In the same thread, Redditor Elijah_Reddits says, "The voice sounds eerily life-like, but the content of what it's saying is so bad compared to normal models. It's like pulling teeth trying to get any useful information from it."
The consensus across threads seems to be that Advanced Voice Mode, strangely, is less helpful than the standard Voice Mode.
Is standard Voice Mode better?
No, not so much. You can turn off Advanced Voice Mode by going to your profile icon, choosing Personalization, scrolling down to Advanced, and then scrolling further until you see the Advanced Voice Mode toggle.
Also: How to use ChatGPT: A beginner's guide to the most popular AI chatbot
I turned it off and asked standard Voice Mode the same iPhone question. It correctly identified that there is an Action button on the left side of the phone, but doubled down hard on the idea that there is no second button on the right side.
In fact, there is. As I mentioned, the Camera Control button was a major feature of the iPhone 16 Pro Max announcement. More amusingly, the AI declared that if I noticed a button there, it was merely a design element.
Screenshot by David Gewirtz/ZDNET
Missing the subtle stuff
There's an old logic puzzle most often phrased as, "If yesterday was tomorrow, then today would be Friday." How it should be answered has been hotly debated over the years. I took that question and added a "What is today, actually?" twist for the AI:
If yesterday was tomorrow, then today would be Friday. What is today, actually?
There are really two parts here: the logic puzzle itself and a question about what day today actually is.
Also: How ChatGPT actually works (and why it's been so game-changing)
The answer to the logic puzzle isn't really relevant to our discussion. To some degree, neither is what day today is. For the record, these chats took place on Wednesday, Oct. 1, 2025.
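As an aside, for anyone curious why the riddle is debated: the two readings people usually argue over differ only in which direction you shift the days. Here's a minimal sketch in Python, purely for illustration and not part of my testing, of how each reading works out:

```python
# The riddle: "If yesterday was tomorrow, then today would be Friday."
# Two common readings, differing only in which way the days get shifted.

DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
FRIDAY = DAYS.index("Friday")

# Reading 1: the actual yesterday plays the role of "tomorrow," so the
# hypothetical today falls two days before the actual today.
# Hypothetical today = Friday  ->  actual today = Friday + 2 days.
reading_1 = DAYS[(FRIDAY + 2) % 7]

# Reading 2: the hypothetical "yesterday" is the actual tomorrow, so the
# hypothetical today falls two days after the actual today.
# Hypothetical today = Friday  ->  actual today = Friday - 2 days.
reading_2 = DAYS[(FRIDAY - 2) % 7]

print(f"Reading 1: today is actually {reading_1}")  # Sunday
print(f"Reading 2: today is actually {reading_2}")  # Wednesday
```

One reading lands on Sunday, the other on Wednesday, which happened to be the actual day of these chats.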
What is relevant is how fervently ChatGPT Voice Mode defended its original answer, especially when compared to the GPT-5 web interface.
When asked that question, ChatGPT Voice determined the answer was Thursday. Even when I challenged it and said, "Really, what is today's date?" the AI responded, "Yes, really, it's Thursday. And just to give you the full picture, today's date is October 1st, 2025."
Also: ChatGPT can buy stuff for you now - forever changing online shopping
In order to push the AI off that answer, I had to engage in some additional questioning. What I found amusing, if a bit troubling, was the AI's justification for its error.
That's right, I did say that! Sometimes these riddles can get us a bit twisted around in the logic, but the actual calendar never lies. So yes, in real life, today is Wednesday.
Screenshot by David Gewirtz/ZDNET
That was the rapid-fire behavior Voice Mode uses to keep responses crisp during a conversation. But what about the web interface? As it turns out, GPT-5 on the web was able to distinguish between the two parts of the question. First, it answered the riddle. Then, when I asked again about the real today, it understood the nuance and provided both answers.
Screenshot by David Gewirtz/ZDNET
If you're curious about the ID numbers mentioned in the transcript, they come from a custom instruction in my ChatGPT settings. I have it number each interaction with an ID so I can refer back to a specific conversational step with some degree of accuracy. ID 001 was when I asked the first question, and ID 002 was when it came back with the actual date.
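If you want something similar, a custom instruction along the lines of "Start every response with a sequential ID, such as [ID 001], and increase it by one with each reply" should produce comparable numbering. That wording is only an example; any clear, unambiguous numbering rule should work.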
What have I learned?
Well, on a practical level, I learned I can turn off Advanced Voice Mode and revert to the original Voice Mode. I learned that lots of Redditors prefer standard Voice Mode over Advanced Voice Mode.
Also: I built a business plan with ChatGPT and it turned into a cautionary tale
But I also learned that answers in either Voice Mode are considerably less thought-out than answers from the web version of ChatGPT. And I learned that Voice Mode itself says it skips some of the thinking in order to get answers out and maintain conversational flow.
People don't really like it when there's no gate between brain and mouth. It's a bug, not a feature.
How many of us have been guilty of that same behavior? And yet, we want our AIs to be accurate. So if you have important stuff to discuss or you'd like a higher chance of accuracy in your answers, use the web version.
Also: How web scraping actually works - and why AI changes everything
What do you think about ChatGPT's voice mode? Have you noticed it rushing answers or missing important details compared to the web version? Do you find advanced voice mode useful, or more frustrating than helpful? How much accuracy are you willing to trade for conversational speed? Let us know in the comments below.
To confirm my empirical observations about Voice Mode's behavior (along with those from the socials), I've reached out to OpenAI. I'll update this space if they provide more information.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.