ZDNET key takeaways
AI agents pose risks to sensitive business information and processes.
New research from OIDF details these risks and potential solutions.
Organizations should extend their governance practices to AI agents.
New research from the OpenID Foundation (OIDF) doesn't come right out and warn that the world's digital infrastructure is hurtling toward a science fiction-like singularity in which everything is connected to everything else. But it makes a convincing technical argument that agentic AI, if left unchecked, will be the protagonist that takes us there.
Released today, the research suggests that AI agents could dangerously and easily transcend connectivity barriers once thought to be inviolable unless the industry prioritizes and cooperates on the development and deployment of a new breed of open, interoperable AI-specific identity and access management (IAM) standards and best practices.
The paper largely focuses on the needs of organizations that must strike a balance between the allure of agentic AI and the need to reasonably govern its access to, and behavior with, internal and external sources of data and computational services.
Also: Companies are making the same mistake with AI that Tesla made with robots
For example, imagine an employee who, in the name of productivity gains, grants email inbox access to an AI agent that automates responses to inbound customer requests. Today, it might only be one or two early adopters out of 1,000 employees who sample the productivity gains. In these early days, the exposure could be relatively limited and managed through ad hoc methodologies.
But five years from now, all 1,000 employees will have access to the technology, and each of them could easily have two or more agents working on their behalf, some of which have been granted carte blanche access (unbeknownst to the IT department) to other sensitive corporate resources. Even worse, those agents could be granting access to other agents, unbeknownst to anybody.
Whereas employees once outnumbered the agents, suddenly the agents -- all with a wide variety of human-like access to corporate resources -- will outnumber the employees. Hopefully, as the orders of magnitude worsen, all of those agents will be respectful of the corporate resources they can access. But malicious or not, many of them more than likely won't be. Hope is, therefore, not a strategy, and the OIDF's research seeks to alert stakeholders to the current state of agentic AI IAM and the technical gaps that desperately need to be filled.
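What might managed, narrowly scoped delegation look like instead of carte blanche access? Here's a minimal sketch using OAuth 2.0 Token Exchange (RFC 8693), an existing standard for deriving constrained tokens; the identity provider endpoint, client credentials, and scope name below are hypothetical placeholders, not anything prescribed by the OIDF paper:

```python
# A minimal sketch of constrained agent delegation via OAuth 2.0 Token
# Exchange (RFC 8693). The endpoint URL, client credentials, and scope
# name are hypothetical placeholders.
import requests

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"  # hypothetical IdP

def mint_agent_token(user_access_token: str) -> str:
    """Exchange a user's token for a narrowly scoped token an agent can use.

    Instead of handing the agent the user's full-privilege credential,
    the identity provider issues a derived token limited to one scope
    (here, read-only inbox access) that IT can audit and revoke.
    """
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": user_access_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "scope": "mail.read",  # hypothetical scope: inbox read, nothing else
        },
        auth=("agent-client-id", "agent-client-secret"),  # placeholder credentials
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The design point is that the agent never holds the user's full-privilege credential; it gets a derived token that the IT department can scope, monitor, and revoke independently of the employee's own access.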
MCP is a double-edged sword
Further exacerbating the challenge is AI's ability to shape-shift according to the demand and context at hand (even when legitimate) -- a capability whose evolution has accelerated with the adoption of the Model Context Protocol (MCP). At least some of the magic of agentic AI is traceable to the rapidly growing range of data and computational services that can automagically inform it by virtue of MCP.
According to the research paper, "AI agents may seek access to a diverse array of resources. These can include structured data via APIs (e.g., for customer relationship management, inventory systems, or financial data), unstructured information from knowledge bases or document stores, computational services, or even other AI models." Among its many objectives, MCP essentially provides a standard means through which agents can dynamically discover and access the capabilities of any potential resource, regardless of format.
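To ground that in the protocol's mechanics: MCP is built on JSON-RPC 2.0, and a client discovers what a server offers with methods such as tools/list. The sketch below shows the wire-level exchange only; the server URL is a placeholder, and the initialize handshake and transport negotiation that a real MCP client performs first are omitted for brevity:

```python
# A rough illustration of MCP's discovery step. MCP uses JSON-RPC 2.0,
# and "tools/list" asks a server to enumerate its tools. The server URL
# is a hypothetical placeholder; real clients use an MCP SDK and its
# transports (stdio or streamable HTTP) rather than hand-rolled requests.
import requests

MCP_SERVER = "https://mcp.example.com/mcp"  # hypothetical MCP endpoint

def list_server_tools() -> list[dict]:
    """Ask an MCP server to enumerate its tools (JSON-RPC 'tools/list')."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/list",
        "params": {},
    }
    resp = requests.post(MCP_SERVER, json=request, timeout=10)
    resp.raise_for_status()
    # Each tool advertises a name, a description, and a JSON Schema for
    # its inputs -- enough for an agent to decide, at runtime, how to call it.
    return resp.json()["result"]["tools"]

for tool in list_server_tools():
    print(tool["name"], "-", tool.get("description", ""))
```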
In some ways, MCP is both a blessing and a curse. Theoretically, the LLM-driven outcomes of agentic AI should improve as more sources of data, computational services, and AI models support the standard. On the other hand, the more resources are enabled with MCP, the more autonomous and less predictable AI agents become (and the more we could be heading towards that singularity).
Also: OpenAI's Altman calls AI sector 'bubbly', but says we shouldn't worry - here's why
When evaluating risk and making access management decisions, IT managers prefer predictability. They like having more knowns than unknowns. Unfortunately for them, AI agents are nothing like traditional monolithic software routines that predictably offer certain outputs given a fixed set of inputs. "[AI agents] take autonomous actions on external services, exhibiting non-deterministic, flexible behavior that adapts in real-time, rather than simply executing predetermined instructions," said the paper's authors.
Unfortunately, although some progress has been made in integrating IAM controls into MCP (and ultimately agentic AI), the resulting controls currently fall short of what IT managers need to comfortably manage the non-deterministic and autonomous behavior of agentic AI. Supposedly benign agents could easily conceal dormant and potentially malicious intent.
"MCP is definitely a double-edged sword. It opens up a ton of possibilities for AI agents but also introduces significant challenges for IT managers in terms of policy setting and control, especially as the ecosystem grows," the paper's author, Tobin South, told ZDNET. "MCP's IAM controls are a start, but they're not nearly robust enough for the expanding surface area. Its current identity and authorization framework still needs work to robustly scale to more autonomous AI use cases and meet the governance and security enterprises demand."
Introducing fresh guardrails
OIDF's research also identifies key areas for immediate improvements to IAM for agentic AI. Among them is the idea of giving AI agents the same sort of first-class identity considerations given to humans. In other words, whatever IAM controls you have in place for humans should, at a minimum, be applied to agentic AI as well. Beyond that first-class citizenship, however, those controls also need to be tempered with sensitivity to the fact that the "user" is ultimately an AI agent.
The proper guardrails can "help prevent unintended behaviors, reduce risks, and maintain trust by guiding AI agents to act responsibly and in alignment with human values," says the research.
"These mechanisms are a critical extension of the principles found in traditional Identity Governance and Administration (IGA). While a mature IGA program establishes who can access what resources, AI guardrails provide a more specialized, real-time layer of control focused on how an agent uses that access, particularly when data is being exchanged with an AI model. For instance, while IGA may grant an agent permission to access a customer database, an AI guardrail would enforce policies at the point of action, such as automatically masking Personally Identifiable Information (PII) before it is sent to the LLM for summarization."
The paper includes a variety of examples of what it might look like to give AI agents first-class citizenship similar to that afforded to human users within enterprise IGA programs. For example, the paper discusses the role that the System for Cross-domain Identity Management (SCIM) protocol can play in automating the lifecycle management of AI agents. Today, SCIM is the standard protocol for automating user lifecycle management, and the glue between enterprise single sign-on systems and human resource management systems (HRMS).
As changes to a user's employment status are noted in the HRMS (such as hiring, promotion, separation, and more), the SCIM protocol is the means by which that user's resulting access rights are automatically reflected in the organization's IAM systems.
"This same lifecycle management [that applies to users] is equally critical for the agents themselves, which require formal processes for creation, permissioning, and eventual decommissioning," the paper states.
Also: Despite AI-related job loss fears, tech hiring holds steady - and here are the most in-demand skills
"To address this, experimental work is underway to [formally] extend the SCIM protocol to support agentic identities….by using [this] extended SCIM schema, organizations can provision agents into services just as they do users. This enables centralized IT administration, where agent permissions are not managed through ad-hoc processes but are governed by the same automated, policy-driven workflows used for human employees."
The paper discusses open standards that will be impacted by the elevation of AI agents to first-class entities and the work that is, or should be, in progress to retool and extend those standards to give IT managers better visibility and control over agentic AI deployments within their organizations. The paper can be downloaded in PDF form from the OIDF website.