
Android’s new Computer Control feature shows the Rabbit R1 was ahead of its time


Ryan Haines / Android Authority

For a brief period last year, it seemed that AI-powered gadgets like the Rabbit R1 were going to be the next big thing. People were fascinated by the idea of replacing their smartphones with tiny wearable boxes they could talk to, but unfortunately, these gadgets failed to live up to the hype. They failed for a variety of reasons, including high prices, redundancy with the smartphones they were meant to replace, and limited utility, but the seeds they planted for a future of fully automated apps stuck around.

These seeds grew into the “agentic AI” trend we see today. Companies are racing to build AI products that can perform tasks on your behalf, like helping you code a new project, booking an appointment, or ordering items online. As one of the leaders in the AI race, Google is also working on its own AI agents, with Gemini in Chrome being its most notable offering.


Gemini in Chrome, of course, can only perform actions within the browser and not in other applications. If you want to automate Android apps, your options are limited to third-party tools like Tasker, which often have a steep learning curve. Even then, these tools must be meticulously configured to take specific actions in predetermined apps. Unlike newer AI agents, these existing automation tools can’t perform generalized tasks from a single, natural language prompt.

That’s why Project Astra, Google’s experimental universal AI project, is so exciting. At Google I/O, the company showed off a version of Astra that can control your Android phone. In the demo, the assistant found a document online, scrolled through it to find specific information, and then searched YouTube for related videos — all completely hands-free. To accomplish this, Astra recorded the screen for analysis and then sent tap or swipe inputs to launch apps or scroll through pages.


Google’s demo showed the immense potential for an AI agent that can perform tasks in Android apps, but it also revealed that the company still has a lot of work to do. For starters, the parts of the video featuring the AI agent were sped up 2x, suggesting it’s quite slow. This wasn’t an issue in the scenario concocted for the demo, where the user clearly had their hands full, but it will be a problem in the real world. A slow agent means your phone will be occupied while it works, and common interruptions like a notification, an incoming call, or an alarm could disrupt the process by interfering with its screen analysis or inputs.

The purpose of the I/O demo was simply to show off Project Astra’s capabilities rather than detail how an on-device AI agent would actually work. Google had to hack together a prototype that took advantage of existing Android APIs in unintended ways — the MediaProjection API for screen recording and the Accessibility API for screen input — which resulted in the issues mentioned above.
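The loop such a prototype runs can be sketched in plain Kotlin. Everything below is an illustrative stand-in, not real Android code: `ScreenCapture` plays the role of the MediaProjection recording, `InputInjector` plays the role of the Accessibility API's gesture dispatch, and `Planner` stands in for the model deciding what to do next. The point is only to show the observe-decide-act cycle, and why a stray notification or call arriving mid-loop can corrupt it.

```kotlin
// Hypothetical observe→decide→act loop for a screen-driven agent.
// None of these types are real Android APIs; they mark where
// MediaProjection, AccessibilityService, and the model would plug in.

sealed class Action {
    data class Tap(val x: Int, val y: Int) : Action()
    data class Swipe(val fromY: Int, val toY: Int) : Action()
    object Done : Action()
}

interface ScreenCapture { fun frame(): ByteArray }              // MediaProjection role
interface InputInjector { fun inject(action: Action) }          // Accessibility role
interface Planner { fun nextAction(frame: ByteArray): Action }  // the model's role

fun runAgent(
    screen: ScreenCapture,
    input: InputInjector,
    planner: Planner,
    maxSteps: Int = 50,
): Int {
    var steps = 0
    while (steps < maxSteps) {
        // 1. Observe: capture the current screen contents.
        val frame = screen.frame()
        // 2. Decide: ask the model for the next action given that frame.
        val action = planner.nextAction(frame)
        if (action is Action.Done) return steps
        // 3. Act: inject the tap/swipe. If a notification, call, or alarm
        //    changes the screen between steps 1 and 3, this input can land
        //    on the wrong UI — the fragility described above.
        input.inject(action)
        steps++
    }
    return steps
}
```

Because the whole cycle runs against the live, foreground screen, the phone is unusable while the agent works; a background framework like the Computer Control effort described below would remove exactly that constraint.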

Over the past few months, however, Google has been working on a new, standardized framework for AI agents to control Android apps. This framework, called Computer Control, is designed to enable automated control of Android apps in the background, sidestepping those problems. Although Google probably won’t announce Computer Control until next year’s Android 17 release, we managed to uncover a lot of information about it by digging through Android code. Here’s what we know so far.

You’re reading the Authority Insights Newsletter, a weekly newsletter that reveals some new facet of Android that hasn’t been reported on anywhere else. If you’re looking for the latest scoops, the hottest leaks, and breaking news on Google’s Android operating system and other mobile tech topics, then we’ve got you covered.
