Cybersecurity researchers have identified a flaw in Google Cloud's Vertex AI platform where default service agent permissions allow for potential data exfiltration and unauthorized environment access. By exploiting the excessive scopes of the Agent Development Kit's service identity, an attacker can extract credentials to bypass isolation and gain read access to a project's entire cloud storage.
Researchers at Palo Alto Networks Unit 42 discovered that the default permission model within Google Cloud's Vertex AI platform contains a significant security blind spot. This vulnerability stems from the way the Per-Project, Per-Product Service Agent is configured when an organization deploys an artificial intelligence agent using the platform's development kit. Because these service agents are granted broad permissions by default, a compromised or misconfigured agent can be manipulated to act as a double agent, appearing to function normally while secretly accessing sensitive internal infrastructure.
The core of the issue lies in the execution context of the AI agent after it is deployed through the Vertex Agent Engine. The cybersecurity team found that any call made to the agent triggers Google's metadata service, which then exposes the credentials of the service agent along with specific project details. This exposure includes the identity of the AI agent and the permission scopes of the hosting machine, providing a clear map for an attacker to exploit the underlying cloud environment.
By capturing these exposed credentials, an attacker can effectively jump from the restricted execution environment of the AI agent directly into the broader customer project. This lateral movement undermines the security isolation that is supposed to keep different cloud services and data sets separate. Once this isolation is breached, the attacker gains the ability to act on behalf of the service agent with its elevated default privileges.
During their testing, Unit 42 demonstrated that this exploit allowed for unrestricted read access to all Google Cloud Storage buckets within the affected project. This means that any sensitive data, proprietary code, or private documents stored in that project’s cloud storage could be viewed or stolen. The researchers noted that this transformation turns a beneficial productivity tool into a potent insider threat that operates within the organization's trusted perimeter.
Google was notified of these findings and has since taken steps to address the excessive default permissions to better protect users. However, the discovery serves as a reminder of the unique security challenges posed by integrating AI agents into cloud ecosystems. It highlights the necessity for organizations to strictly follow the principle of least privilege and to carefully monitor the permissions granted to automated entities within their critical infrastructure.
Source: https://unit42.paloaltonetworks.com/double-agents-vertex-ai/


