
Google says new cloud-based “Private AI Compute” is just as secure as local processing


Google’s current mission is to weave generative AI into as many products as it can, getting everyone accustomed to, and maybe even dependent on, working with confabulatory robots. That means it needs to feed the bots a lot of your data, and that’s getting easier with the company’s new Private AI Compute. Google claims its new secure cloud environment will power better AI experiences without sacrificing your privacy.

The pitch sounds a lot like Apple’s Private Cloud Compute. Google’s Private AI Compute runs on “one seamless Google stack” powered by the company’s custom Tensor Processing Units (TPUs). These chips have integrated secure elements, and the new system allows devices to connect directly to the protected space via an encrypted link.

Google’s TPUs rely on an AMD-based Trusted Execution Environment (TEE) that encrypts and isolates memory from the host. Theoretically, that means no one else—not even Google itself—can access your data. Google says independent analysis by NCC Group shows that Private AI Compute meets its strict privacy guidelines.
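Google hasn't published code for this flow, but the general TEE pattern it describes, where a client verifies a signed "measurement" of the enclave before releasing any data, can be sketched roughly. Everything below is illustrative: the names, the HMAC-based attestation, and the placeholder XOR "encryption" are simplifications, not Google's or AMD's actual protocol.

```python
import hashlib
import hmac
import os

# Hypothetical, simplified sketch -- not Google's or AMD's real API.
# Flow: enclave proves what code it runs (attestation), and only then
# does the client encrypt user data to that specific enclave.

ATTESTATION_KEY = os.urandom(32)  # stands in for the vendor's signing key
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-image-v1").digest()

def attest(measurement: bytes) -> bytes:
    """Enclave side: sign its code measurement (simplified to an HMAC;
    real TEEs use hardware-rooted asymmetric signatures)."""
    return hmac.new(ATTESTATION_KEY, measurement, hashlib.sha256).digest()

def verify_and_send(measurement: bytes, signature: bytes, payload: bytes) -> bytes:
    """Client side: release data only if the enclave proves its identity
    AND is running the exact code image the client expects."""
    expected_sig = hmac.new(ATTESTATION_KEY, measurement, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected_sig):
        raise ValueError("attestation failed: signature invalid")
    if measurement != EXPECTED_MEASUREMENT:
        raise ValueError("attestation failed: unexpected code measurement")
    # In a real system the payload would be encrypted to a key that exists
    # only inside the attested enclave; this XOR keystream is a placeholder.
    key = hashlib.sha256(ATTESTATION_KEY + measurement).digest()
    return bytes(b ^ key[i % 32] for i, b in enumerate(payload))

sig = attest(EXPECTED_MEASUREMENT)
ciphertext = verify_and_send(EXPECTED_MEASUREMENT, sig, b"user prompt")
```

The key property this models is the one Google is claiming: the client (your phone) refuses to hand over data unless the remote environment cryptographically proves it is the isolated, unmodified enclave, so even the cloud operator can't substitute a snooping host.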

According to Google, the Private AI Compute service is just as secure as using local processing on your device. However, Google’s cloud has a lot more processing power than your laptop or phone, enabling the use of Google’s largest and most capable Gemini models.

Edge vs. Cloud

As Google has added more AI features to devices like Pixel phones, it has talked up the power of its on-device neural processing units (NPUs). Pixels and a few other phones run Gemini Nano models, allowing the phone to process AI workloads securely on “the edge” without sending any of your data to the Internet. With the release of the Pixel 10, Google upgraded Gemini Nano to handle even more data with the help of researchers from DeepMind.