Rushil Agrawal / Android Authority
Ever since ChatGPT started gaining in popularity, every other tech company has been rushing to keep up with its advancements in generative AI. Although some might argue that OpenAI’s chatbot is still on top, the company that I believe has come closest to surpassing it is Google. After Google CEO Sundar Pichai issued a “code red” in December 2022 to meet the threat OpenAI represented, the company has created a powerful, robust, and wide-reaching set of AI-powered tools that offer tangible benefits for its users.
Circle to Search, for example, is probably one of the best software products Google has created since the launch of Google Photos in 2015. The Pixel-exclusive Recorder app is an AI-powered utility that has made my life so much easier. And Google’s Gemini chatbot has proven itself to be a terrific ChatGPT competitor — I use Gemini far more than ChatGPT, for what it’s worth.
The problem Google faces now, though, is that it’s been so desperate to beat the competition that it’s been forced to leave an organizational mess behind in its wake. This issue is something the company needs to make serious strides to rectify soon if it wants to stay on top of the heap.
Google’s AI tools need a centralized location
Mishaal Rahman / Android Authority
To help illuminate the problem Google has made for itself, let me ask you this seemingly simple question: Does that amazing restaurant you visited in October 2023 have outdoor seating? If you have a great memory, this is likely easy to answer. However, if you’re like me and have a terrible memory for things like this, figuring out the answer is not so simple.
Let’s break this down into steps. I first need to figure out where I was. Usually, I would go to Google Maps and use Timeline to deduce the places I visited in October 2023, but Timeline is dead, and I switch phones so often that I’ve never managed to keep my location history intact, so that’s out. Google Photos would be my next best bet. Assuming I took a photo at the restaurant, I could use the AI-powered search feature to find pictures of food from October 2023, which would at least tell me the general area I was in. Once I have that down, I could then go to Google Maps and hope that one of the restaurant names near the photo’s location would jog my memory. Finally, a quick query in Maps’ AI-powered info search would tell me whether there’s outdoor seating.
AI is supposed to make things faster and easier, but a seemingly simple question about a restaurant I visited still takes multiple app searches to answer.
That’s a whole lot of steps to do something that one search bar should be able to handle on its own. The problem, though, is that no “universal” search bar exists that can perform all these functions for me. I certainly can’t just go to Google.com or use the Google app, as they don’t have any connectivity to my personal information. Even the main Gemini app — which links directly to my accounts in Gmail, Calendar, Maps, Drive, and more — couldn’t help me with this query. In fact, rather than access my information from any of the connected Google apps to help me deduce this, it simply hallucinated three French restaurants (I visited zero French restaurants in October 2023 and haven’t been to France since 2019):
C. Scott Brown / Android Authority
What is even the point of giving Gemini access to my Workspace accounts and personal data if it doesn’t actually use them to perform tasks? That’s a question for another day, but the point here is that the two primary ways people use Google to search for things didn’t work for my relatively simple query. I needed to bounce around from app to app to figure it out, just as I would have before generative AI was a thing.
Why are there numerous AI search tools in Google's repertoire but not one 'universal' search bar that does everything?
I know this is a very specific query that most people won’t have often, but it’s also a realistic one. It’s one that Google’s AI tools can answer, but I need to get the answer from multiple places. Why isn’t there one place to go that can do it all — one search bar that does everything the modern Google can do?
Let me give you another quick example of how Google’s scattershot approach to AI is causing issues. Using Gemini on an Android phone, it’s easy to find a specific photo in your Google Photos account. All you need to do is hold down the power button, say something like, “Find a photo of me and my dad at the zoo last year,” and it will do so (assuming you give it permission to access your Photos library). It’s smart enough to know who my dad is and when we were together at a zoo in 2024.
Likewise, you can ask Gemini to text your wife by saying something like, “Text my wife to let her know I am on my way home.” Once again, it knows who your wife is and can send her a precomposed text. But if you try to combine these two actions, Gemini fails because, for whatever reason, the function that searches through Photos and the one that sends texts through Messages don’t work together:
C. Scott Brown / Android Authority
OK, so Google’s AI tools don’t work well together, so there’s no one-stop shop where you can get everything done. That’s bad, but what if you don’t care? Even if you ignore these problems, there’s still the branding issue to contend with. On my phone, Gemini is a chatbot within the Gemini app that is powered by the Gemini 2.5 Pro large language model (LLM). I just referenced three wholly separate things that all have the same basic name. Oh, and don’t forget that Gemini Live exists and that it is powered by Gemini 2.5 Flash, which is also different. Confused yet?
Google's branding for its AI tools is also very confusing, and AI features in Google Search aren't replicated in Gemini.
I can keep going with this. When you go to Google Search by visiting Google.com, you can opt to try out AI Mode, which Google debuted at I/O this year. This system is not the same as AI Overviews, which are the AI-generated summary answers you sometimes get at the top of general Google searches (sometimes — there’s no way to control when they appear). Either way, AI Overviews and AI Mode are not accessible within Gemini, even though all of them are powered by the Gemini LLM.
I don’t know about you, but this all seems to be the very antithesis of what Google is aiming for with its AI ambitions. With generative AI, I should use fewer tools to get what I need. But here we are with a bunch of tools that all do different things with similar-sounding names that don’t work that well together.
Google isn’t the only company with this problem
C. Scott Brown / Android Authority
Although Google’s tools are my go-to for these kinds of tasks, Google isn’t the only company fighting for relevance in the AI era. Samsung’s Galaxy AI is another suite of fancy AI tools, many of which are riffs on Google products. But Samsung faces the same problem as Google: a slew of great-sounding features that don’t work together at all.
For example, on Galaxy phones with One UI 6.1 or higher, Samsung offers an AI-powered search bar within Android settings. You can use this search bar to find specific settings using natural language. For example, you can say, “My screen is too bright,” and it will bring up the toggles for display brightness, offer to swap you into dark mode, and offer connections to other display-related functions. In this case, you don’t need to know the name of the function you’re looking for — you just need to describe the problem you’re having or what tweak you want to make.
Samsung is making similar mistakes in this realm, with Galaxy AI features only appearing in certain areas and not working well with others.
Without a doubt, this is a game-changing feature, especially for people who aren’t super tech-savvy. Unfortunately, this feature only works in this specific search bar within Android settings. Holding down the power button and then telling Gemini “My screen is too bright” won’t bring up anything in Android settings, because Gemini is a totally different service.
Once again, Samsung is relying on the user to know where to go to get what they need. But how many will actually do so? How many of those people who aren’t super tech-savvy will see the prompt fail in Gemini and then just…stop? My guess would be quite a lot, given that few would even suspect they’d get a wholly different answer from the Settings app’s search bar.
Can this ever really be fixed, though?
Joe Maring / Android Authority
When I brought all this up to my colleague Rita El Khoury, she was quick to point out that this might be an unsolvable problem. What Google will want to avoid is the average user feeling that if Google and/or Gemini is connected to their private information, then everyone else can see it too.
A long time ago, Google had a product called Google Desktop. This free tool worked on Windows, macOS, and Linux. It indexed all your computer’s files and allowed you to “Google” your computer locally in the same way you would “Google” something online. Launched in 2004, the product became controversial almost immediately because people worried that all their private information was now available on the public internet, since this one search bar showed results from both their local machine and the general web. Google tried to address many of these concerns over the years, but by 2011, it had given up, and Google Desktop made its way to the Google Graveyard.
People also forget that around 2016, Google’s search bar widget worked universally across your entire Android phone. Under an “In apps” (later “Personal”) tab, you could find contacts, calendar events, Maps places, Chrome bookmarks and history, conversations, and more. It even had an open API through which developers like Spotify and Todoist could let you search their apps’ data from the same Google widget you used to search the web. But once again, this raised privacy concerns: confused users believed their personal info was available on Google at large rather than just on their phone, which led to the feature’s removal.
Even if Google can never truly solve this fragmentation issue, it can still do a whole lot better than it is right now.
This kind of history would likely prevent Google from ever making Google.com the one-stop shop search bar that I’m envisioning. However, the Gemini app might be a good enough spot? People are a lot more savvy about the internet than they were in the early 2000s or mid-2010s, and most people have become comfortable with their data being aggregated and searchable on non-local machines. Since the Gemini app doesn’t have the same history as Google Search, people might be more willing to accept that just because the Gemini app knows all about them, it doesn’t mean their data is floating around for anyone else to find.
Even if Google never wants to build my proposed universal search tool, that doesn’t let it off the hook for the general mess its AI tools are in at the moment. I understand that the search bar in Photos should be different from the search bar in Maps, which should also be very different from the search bar in Gmail. But if Google really wants AI to live up to its vision, those walls are going to need to come down, and there will need to be one place we can go that accesses the information across all those platforms and then executes commands using a mixture of that data.
Until then, all we have are dozens of little AI tools spread around. I don’t know about you, but that doesn’t sound like the AI revolution Google keeps saying we are supposedly in.