is a senior reporter focusing on wearables, health tech, and more with 13 years of experience. Before coming to The Verge, she worked for Gizmodo and PC Magazine.
I dream of a gadget that can do it all. Instead, when I leave for the office, I pack one or two phones, a portable battery bank, a laptop, a Kindle, a new product I’m testing, and at least one pair of earbuds. In my backpack, there’s a pouch full of cords and adapters. On my body, I usually sport between two and four wearable devices. I know mine is a “gadget maximalist” life. But, surely, one day, the powers that be will convene and society will decide on the Next Must-Have Gizmo — one all-powerful, do-everything device that will replace the phone.
Google doesn’t seem to think so. At least, not based on what I witnessed at last week’s Made by Google event.
At a studio in Brooklyn Navy Yard, Google showed off four phones, a smartwatch, and a pair of earbuds. That’s fairly typical for a product launch, but something about this year’s updates was different. It wasn’t just the odd keynote format, or the latent anxiety of Gemini getting stuffed into every single corner of every product. It was the uncanny feeling that AI won’t be the thing that tears down walled gardens. It’ll strengthen them. Instead of streamlining the number of gadgets we carry around, it might make them multiply.
The word “vibe” gets overused these days, but in that weird Brooklyn TV studio, I felt a palpable vibe shift in mobile computing. Especially with wearables. Where think pieces and online discourse used to argue that wearables were dead, the category is now being positioned as a vanguard for AI.
“The first 15 years of wearables were very much, ‘gather data, the quantified self.’ That’s where Fitbit started,” explains Sandeep Waraich, Google’s product lead for Pixel wearables. “It was episodic data and that’s fast moving to continuous insights because data only goes so far. It’s moving from highly generic to something very personal.”
You can see where this is all going. Wearables generate a ton of data, but it’s a lot harder than it sounds to turn that data into actionable insights in a way that’s digestible and keeps people engaged long-term. It’s the sort of task that AI would theoretically be good at — which is why you see every fitness tracker and app on the market hopping on the bandwagon.
At the same time, as Waraich describes it, wearables are the “only one device in our computing lives that is guaranteed on-body presence.” Your phone may seem like it’s glued to your hand, but even it might be left behind on a table, stashed in a purse, or turned off at a show. If you want the most personalized, always-available AI assistant, it has to know absolutely everything there is to know about you. Is there a better way to do that than to be on you?
The problem with AI hardware is that we’re in the spaghetti stage. No one knows what the winning formula is, so every idea under the sun is going to get thrown at the wall until something sticks. You have your always-listening life recorders that purport to be your second memory. Meta’s hypothesis is that multimodal smart glasses are the platonic ideal gateway to AI. Jony Ive and Sam Altman can afford to be hyper-vague about whatever project they’re working on because anything they say at this point could be correct.
But to hear Google tell it, no one form factor is going to reign supreme.
“Any religion you have now is probably premature,” says Rishi Chandra, Google’s VP of Fitbit and Health, when I ask what form factors Google is betting on for Gemini. “There’s no doubt in my mind, there’s going to be new form factors that will exist. But I think it’s too early to have conviction. What’s interesting is the AI is moving so fast that any point of view you have on the hardware could change very quickly.”
Instead, Chandra says Google’s leaning into the spaghetti-ness of it all. Part of that is an openness to experimentation. You only need to look at Android XR, its nascent platform for smart glasses, to see that. The other part is to “maximize the devices you already have.” The phone is a starting point. The smartwatch and earbuds are natural extensions, but the full potential of AI hardware has yet to be unlocked. The hope, Waraich says, is that by experimenting and maximizing, you end up with a winning combination that hasn’t been seen just yet.
“The future will be a very diverse set of accessories that people may choose to have that work for them, that’s personalized for them, in their environment and what they care about,” explains Chandra. “Our job is to make it all work [together] so it doesn’t matter.”
Waraich agrees. Google, he says, views these shifts as aligning nicely with its vision for ambient computing, a world where your devices fade into the background, autonomously and proactively answering your every need. (When you walk into a room, for example, your AI lightbulbs might switch to a mellower setting because they can speak to your phone, on which you just texted a friend to say you have a migraine.) But ambient computing will never be unlocked if there’s only one all-powerful gadget. It also won’t work if these gadgets don’t “speak” a common language. So why wouldn’t Google pitch Gemini as the glue holding it all together? If Gemini becomes the must-have AI, and it’s primarily baked into Google hardware — perhaps that’s how you really get people to stop caring about green bubbles.
It’s not enough to just have smartwatches… or earbuds… or a phone. You’ll need them all, and they’ll all have AI.
Some of this shift is because the smartphone is almost 20 years old. Once a gadget that inspired awe, the phone these days feels more like a Toyota Camry than a Ferrari. Verizon CEO Hans Vestberg said last year that the “exciting times are over” and that many people are keeping their phones for three years or more. These numbers fluctuate depending on region and demographic, but Big Tech has publicly admitted that people just aren’t upgrading their phones as often as they used to.
Viewed through that lens, the push toward AI hardware begins to make sense. It just feels out of sync with what people tell me they want. Google’s executives tell me the point of Gemini (and AI in general) is to make people’s lives easier, to return their time to them. It’s a noble quest that seemingly aligns with the exhaustion people feel from always-on modern life. But even if I can see Google’s vision, even if I genuinely see the value in parts of it — it’s hard to see how adding more gadgets with more AI addresses that existential fatigue.
Nevertheless, this is the bet that Google’s going all in on. It’s why the Pixel 10 and Pixel Watch 4 feel like such opinionated devices in a landscape of iterative updates. I’d argue that’s also why we’re seeing smart rings gain traction, and why Meta’s seeing hardware success with its smart glasses after years of failing to convince people to care about the metaverse. Everyone’s looking for the next turn in the story, and for now, it’s converging on AI wearables. Many, many of them.