Tech News

Google's Vertex AI Has an Over-Privileged Problem

Why This Matters

This article highlights a critical security vulnerability in Google's Vertex AI platform, where excessive default permissions can be exploited by attackers to access sensitive data and internal infrastructure. For the tech industry and consumers, this underscores the importance of proper configuration and least-privilege practices when deploying AI agents to prevent potential breaches and misuse. Ensuring secure deployment of AI tools is vital as organizations increasingly rely on automation and AI-driven workflows.

Key Takeaways

The AI agents many organizations have begun deploying to automate complex business and operational workflows can be quietly turned against them if not properly configured with the right permissions.

Recent research by Palo Alto Networks has shown how the risk can materialize in Google Cloud's Vertex AI platform, where excessive default permissions give attackers a way to abuse a deployed AI agent and use it to steal sensitive data, access restricted internal infrastructure, and potentially execute other unauthorized actions.

Excessive Permissions

After Palo Alto Networks disclosed its findings to the search and cloud giant, Google updated its official documentation to explain more explicitly how Vertex AI uses agents and other resources. Google has also recommended that organizations seeking least-privilege access in their agentic AI environments replace the default service agent on Vertex AI Agent Engine with their own dedicated custom service account.
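Google's recommendation can be sketched with the gcloud CLI. The project ID, account name, and role below are illustrative assumptions, not values from the article; the right minimal role set depends on what a given agent actually needs.

```shell
# Create a dedicated service account instead of relying on the default
# service agent. The account name and project ID are placeholders.
gcloud iam service-accounts create my-agent-sa \
    --project=my-project \
    --display-name="Dedicated Vertex AI agent identity"

# Grant only the narrow role(s) the agent requires -- the Vertex AI user
# role here is an illustrative minimum, not a universal prescription.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-agent-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/aiplatform.user"
```

The custom account is then supplied when deploying the agent to Agent Engine; Google's updated documentation covers the exact deployment parameter.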


Vertex AI is a Google Cloud platform that allows organizations to build, deploy, and manage AI-powered applications. It offers an Agent Engine and an Agent Development Kit (ADK) that developers can use to create autonomous agents for tasks like querying databases, interacting with APIs, managing files, and making automated decisions with minimal human oversight. Many enterprises use these agents, or similar ones on other cloud platforms, to automate workflows, analyze data, power customer service tools, and add AI capabilities to existing cloud services, granting the agents wide access permissions in the process.

And it's that wide access that creates opportunities for attackers to hijack those agents and turn them into double agents, doing the attacker's dirty work while appearing to operate normally to the organizations using them, Palo Alto Networks said in its report.

On Google's Vertex AI platform, the researchers discovered that every deployed Vertex AI agent is tied to a default service account, the Per-Project, Per-Product Service Agent (P4SA), which carries excessive default permissions. An attacker able to extract the agent's service account credentials could use them to gain access to sensitive areas of the customer's cloud environment. The researchers also showed that the same credentials would allow an attacker to download proprietary container images from Google's own internal infrastructure and to discover hardcoded references to internal Google storage buckets for potential follow-on attacks.
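The kind of over-permissioning described above can be audited mechanically: export a project's IAM policy (for example via `gcloud projects get-iam-policy PROJECT --format=json`) and flag any service identity holding overly broad roles. A minimal Python sketch, where the sample policy, the project number, and the service-agent domain are made up for illustration, and the "broad roles" list is an illustrative choice rather than an official Google taxonomy:

```python
import json

# Roles considered overly broad for an AI agent's service identity
# (illustrative list, not an official classification).
BROAD_ROLES = {"roles/owner", "roles/editor", "roles/storage.admin"}

def flag_overprivileged(policy: dict, member_suffix: str) -> list[tuple[str, str]]:
    """Return (member, role) pairs where a matching service identity
    holds one of the broad roles."""
    findings = []
    for binding in policy.get("bindings", []):
        role = binding.get("role", "")
        for member in binding.get("members", []):
            if member.endswith(member_suffix) and role in BROAD_ROLES:
                findings.append((member, role))
    return findings

# Sample policy mimicking `gcloud projects get-iam-policy` JSON output.
# The project number and account names below are fabricated examples.
sample = json.loads("""
{
  "bindings": [
    {"role": "roles/editor",
     "members": ["serviceAccount:service-123456@gcp-sa-example.iam.gserviceaccount.com"]},
    {"role": "roles/logging.viewer",
     "members": ["serviceAccount:auditor@example-project.iam.gserviceaccount.com"]}
  ]
}
""")

print(flag_overprivileged(sample, "gserviceaccount.com"))
```

Only the first binding is flagged: the auditor account holds a narrow viewer role, while the service agent holds project-wide editor rights.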

Significant Security Risk

"This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into an insider threat," Palo Alto researcher Ofir Shaty wrote. "The scopes set by default on the Agent Engine could potentially extend access beyond the GCP environment and into an organization's Google Workspace, including services such as Gmail, Google Calendar, and Google Drive."
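The Workspace exposure Shaty describes can also be checked mechanically: given the OAuth scopes granted to an agent's identity, flag any that reach Workspace services. A minimal Python sketch, where the granted-scope list is a fabricated example (the scope URIs themselves are standard Google OAuth scopes):

```python
# OAuth scopes that reach beyond GCP into Google Workspace data.
# The URIs are standard Google scopes; which ones an agent actually
# holds in practice is an assumption for illustration.
WORKSPACE_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/calendar",
    "https://www.googleapis.com/auth/drive",
}

def workspace_exposure(granted_scopes: list[str]) -> set[str]:
    """Return the subset of granted scopes that touch Workspace services."""
    return set(granted_scopes) & WORKSPACE_SCOPES

# Example grant: a broad cloud-platform scope plus Drive access.
granted = [
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/drive",
]
print(sorted(workspace_exposure(granted)))
```

Here only the Drive scope is flagged as Workspace-reaching; the `cloud-platform` scope is broad within GCP but targets cloud resources rather than Workspace data.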
