Active Directory, LDAP, and early PAM were built for humans. AI agents and machines were the exception. Today, they outnumber people 82 to 1, and that human-first identity model is breaking down at machine speed.

AI agents are the fastest-growing and least-governed class of these machine identities, and they don't just authenticate, they act. ServiceNow spent roughly $11.6 billion on security acquisitions in 2025 alone, a signal that identity, not models, is becoming the control plane for enterprise AI risk.

CyberArk's 2025 research confirms what security teams and AI builders have long suspected: Machine identities now outnumber humans by a wide margin. Microsoft Copilot Studio users created over 1 million AI agents in a single quarter, up 130% from the previous period. Gartner predicts that by 2028, 25% of enterprise breaches will trace back to AI agent abuse.

Why legacy architectures fail at machine scale

Builders don't create shadow agents or over-permissioned service accounts out of negligence. They do it because cloud IAM is slow, security reviews don't map cleanly to agent workflows, and production pressure rewards speed over precision. Static credentials become the path of least resistance, right up until they become the breach vector.

Gartner analysts explain the core problem in a report published in May: "Traditional IAM approaches, designed for human users, fall short of addressing the unique requirements of machines, such as devices and workloads."

Their research identifies why retrofitting fails: "Retrofitting human IAM approaches to fit machine IAM use cases leads to fragmented and ineffective management of machine identities, running afoul of regulatory mandates and exposing the organization to unnecessary risks."

The governance gap is stark. CyberArk's 2025 Identity Security Landscape survey of 2,600 security decision-makers reveals a dangerous disconnect: Though machine identities now outnumber humans 82 to 1, 88% of organizations still define only human identities as "privileged users." Yet machine identities actually have higher rates of sensitive access than humans, with 42% of them holding access to sensitive data, per the survey.

That 42% figure represents millions of API keys, service accounts, and automated processes with access to crown jewels, all governed by policies designed for employees who clock in and out.

The visibility gap compounds the problem. A Gartner survey of 335 IAM leaders found that IAM teams are responsible for only 44% of an organization's machine identities, meaning the majority operate outside security's visibility. Without a cohesive machine IAM strategy, Gartner warns, "organizations risk compromising the security and integrity of their IT infrastructure."

The Gartner Leaders' Guide explains why legacy service accounts create systemic risk: They persist after the workloads they support disappear, leaving orphaned credentials with no clear owner or lifecycle.

In several enterprise breaches investigated in 2024, attackers didn't compromise models or endpoints. They reused long-lived API keys tied to abandoned automation workflows: keys no one realized were still active because the agent that created them no longer existed.
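Finding those forgotten keys starts with a basic inventory. The minimal sketch below assumes an AWS environment, the boto3 SDK, and read access to IAM (none of which the reports cited here prescribe); it flags active access keys that are old or have gone unused, and the 90-day threshold is purely illustrative.

```python
# Hypothetical sketch: flag long-lived AWS IAM access keys that may belong to
# abandoned automation. Assumes boto3 credentials with iam:List*/iam:Get* access;
# the 90-day threshold is illustrative, not a recommendation from the reports above.
from datetime import datetime, timedelta, timezone

import boto3

MAX_AGE = timedelta(days=90)
iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            if key["Status"] != "Active":
                continue
            last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            last_date = last_used["AccessKeyLastUsed"].get("LastUsedDate")
            age = now - key["CreateDate"]
            stale = age > MAX_AGE and (last_date is None or now - last_date > MAX_AGE)
            if stale:
                # Candidates for rotation, scoping down, or removal after owner review.
                print(f"{user['UserName']}: {key['AccessKeyId']} created {age.days}d ago, "
                      f"last used {last_date or 'never'}")
```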
Elia Zaitsev, CrowdStrike's CTO, explained why attackers have shifted away from endpoints and toward identity in a recent VentureBeat interview: "Cloud, identity and remote management tools and legitimate credentials are where the adversary has been moving because it's too hard to operate unconstrained on the endpoint. Why try to bypass and deal with a sophisticated platform like CrowdStrike on the endpoint when you could log in as an admin user?"

Why agentic AI breaks identity assumptions

The emergence of AI agents that require their own credentials introduces a category of machine identity that legacy systems never anticipated and were never designed for. Gartner's researchers specifically call out agentic AI as a critical use case: "AI agents require credentials to interact with other systems. In some instances, they use delegated human credentials, while in others, they operate with their own credentials. These credentials must be meticulously scoped to adhere to the principle of least privilege."

The researchers also cite the Model Context Protocol (MCP) as an example of this challenge, the same protocol security researchers have flagged for its lack of built-in authentication. MCP isn't just missing authentication; it collapses traditional identity boundaries by allowing agents to traverse data and tools without a stable, auditable identity surface.

The governance problem compounds when organizations deploy multiple GenAI tools simultaneously. Security teams need visibility into which AI integrations can take action (execute tasks, not just generate text) and whether those capabilities have been scoped appropriately. Platforms that unify identity, endpoint, and cloud telemetry are emerging as the only viable way to detect and contain agent abuse across the full identity attack chain in real time. Fragmented point tools simply can't keep up with machine-speed lateral movement. Machine-to-machine interactions already operate at a scale and speed human governance models were never designed to handle.

Getting ahead of dynamic service identity shifts

Gartner's research points to dynamic service identities as the path forward: ephemeral, tightly scoped, policy-driven credentials that drastically reduce the attack surface. Because of this, Gartner is advising that security leaders "move to a dynamic service identity model, rather than defaulting to a legacy service account model. Dynamic service identities do not require separate accounts to be created, thus reducing management overhead and the attack surface." The ultimate objective is achieving just-in-time access and zero standing privileges.

Practical steps security and AI builders can take today

The organizations getting agentic identity right are treating it as a collaboration problem between security teams and AI builders. Based on Gartner's Leaders' Guide, OpenID Foundation guidance, and vendor best practices, these priorities are emerging for enterprises deploying AI agents.

Conduct a comprehensive discovery and audit of every account and credential. The goal is to establish a baseline of how many accounts and credentials are in use across all machines in IT. CISOs and security leaders tell VentureBeat that this exercise often turns up six to ten times more identities than the security team knew about. One hotel chain found it had been tracking only a tenth of its machine identities before its audit.

Build and tightly manage agent inventory before production. A maintained inventory ensures AI builders know what they're deploying and security teams know what they need to track. When those functions drift too far apart, shadow agents get created and evade governance in the process. A shared registry, sketched below, should track ownership, permissions, data access, and API connections for every agentic identity before agents reach production environments.
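As a minimal sketch of what one registry entry might capture, the example below records those fields and refuses to register an agent without an accountable owner. The AgentRecord structure and its field names are illustrative assumptions, not a schema from Gartner, the OpenID Foundation, or any vendor.

```python
# Hypothetical agent-registry entry: ownership, permissions, data access, and API
# connections are captured before an agent reaches production. Field names are assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AgentRecord:
    agent_id: str                  # stable identifier, distinct from any human owner
    owner: str                     # accountable human or team
    purpose: str                   # what the agent is for
    scopes: list[str] = field(default_factory=list)          # least-privilege permissions
    data_access: list[str] = field(default_factory=list)     # datasets or classifications touched
    api_connections: list[str] = field(default_factory=list) # external systems it can call
    review_by: date | None = None  # forces periodic re-certification
    status: str = "pending"        # pending -> approved -> deployed -> retired

registry: dict[str, AgentRecord] = {}

def register(record: AgentRecord) -> None:
    """Block registration of agents that lack an accountable owner."""
    if not record.owner:
        raise ValueError(f"agent {record.agent_id} has no accountable owner")
    registry[record.agent_id] = record

register(AgentRecord(agent_id="invoice-triage-bot", owner="finance-automation-team",
                     purpose="route vendor invoices",
                     scopes=["erp:read", "ticketing:write"]))
```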
Go all in on dynamic service identities and excel at them. Transition from static service accounts to cloud-native alternatives like AWS IAM roles, Azure managed identities, or Kubernetes service accounts. These identities are ephemeral and need to be tightly scoped, managed, and policy-driven. The goal is to stay compliant while giving AI builders the identities they need to get apps built.

Implement just-in-time credentials over static secrets. Integrating just-in-time credential provisioning, automatic secret rotation, and least-privilege defaults into CI/CD pipelines and agent frameworks is critical; these are foundational elements of zero trust that need to be core to devops pipelines (a minimal sketch of short-lived credential issuance follows this list). Seasoned security leaders defending AI builders often tell VentureBeat to pass along the same advice: never trust perimeter security with any AI devops workflows or CI/CD processes. Go big on zero trust and identity security when protecting AI builders' workflows.

Establish auditable delegation chains. When agents spawn sub-agents or invoke external APIs, authorization chains become hard to track. Make sure humans are accountable for all services, AI agents included. Enterprises need behavioral baselines and real-time drift detection to maintain accountability.

Deploy continuous monitoring. In keeping with the precepts of zero trust, continuously monitor every use of machine credentials, with observability as the explicit goal. Auditing that activity helps detect anomalies such as unauthorized privilege escalation and lateral movement.

Evaluate posture management. Assess potential exploitation pathways, the extent of possible damage (blast radius), and any shadow admin access. This involves removing unnecessary or outdated access and identifying misconfigurations that attackers could exploit.

Start enforcing agent lifecycle management. Every agent needs human oversight, whether it operates as part of a group of agents or within an agent-based workflow. When AI builders move to new projects, their agents should trigger the same offboarding workflows as departing employees. Orphaned agents with standing privileges can become breach vectors.

Prioritize unified platforms over point solutions. Fragmented tools create fragmented visibility. Platforms that unify identity, endpoint, and cloud security give AI builders self-service visibility while giving security teams cross-domain detection.
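As a sketch of the just-in-time pattern referenced above, the example below uses AWS STS to mint credentials that expire in 15 minutes and are narrowed to a single bucket by a session policy. The role ARN, bucket name, and agent ID are placeholders; Azure managed identities, GCP workload identity federation, or a secrets manager with dynamic secrets can serve the same purpose on other platforms.

```python
# Hypothetical sketch: issue short-lived, tightly scoped credentials on demand instead of
# storing a static secret with an agent. Role ARN, bucket, and agent ID are placeholders.
import json

import boto3

def jit_credentials(role_arn: str, agent_id: str, bucket: str) -> dict:
    """Return credentials that expire in 15 minutes and can only read one bucket."""
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        }],
    }
    resp = boto3.client("sts").assume_role(
        RoleArn=role_arn,
        RoleSessionName=f"agent-{agent_id}",  # appears in CloudTrail for auditability
        DurationSeconds=900,                  # minimum STS session length: 15 minutes
        Policy=json.dumps(session_policy),    # further narrows the role's permissions
    )
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

creds = jit_credentials("arn:aws:iam::123456789012:role/agent-reader",
                        "invoice-triage-bot", "invoices-inbound")
```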
Expect to see the gap widen in 2026

The gap between what AI builders deploy and what security teams can govern keeps widening. Every major technology transition has, unfortunately, produced its own generation of security breaches and forced its own industry-wide reckoning. Just as hybrid cloud misconfigurations, shadow AI, and API sprawl continue to challenge security leaders and the AI builders they support, 2026 will widen the distance between the machine identity attacks that can be contained and the defenses that must improve to stop determined adversaries.

The 82-to-1 ratio isn't static. It's accelerating.

Organizations that continue relying on human-first IAM architectures aren't just accepting technical debt; they're building security models that grow weaker with every new agent deployed.

Agentic AI doesn't break security because it's intelligent; it breaks security because it multiplies identity faster than governance can follow. Turning what is, for many organizations, one of their most glaring security weaknesses into a strength starts with recognizing that perimeter-based, legacy identity security is no match for the intensity, speed, and scale of the machine-on-machine attacks that are now the norm and will only proliferate in 2026.