By Ido Shlomo, CTO and Co-Founder, Token Security
Agentic AI has arrived. From custom GPTs to autonomous copilots, AI agents now act on behalf of users and organizations, or even as just another teammate: making decisions, accessing systems, and invoking other agents without direct human intervention.
But with this new level of autonomy comes an urgent security question: if AI is doing the work, how do we know when to trust it?
In traditional systems, Zero Trust architecture assumes no implicit trust: every user, endpoint, workload, and service must continuously prove who it is and what it is authorized to do.
However, in the agentic AI world, these principles break down fast. AI agents often operate under inherited credentials, with no registered owner or identity governance.
The result is a growing population of agents that look trusted but are not, just one of the many risks autonomous AI agents introduce into your infrastructure.
To close this gap, organizations must apply the NIST AI Risk Management Framework (AI RMF) through a Zero Trust lens with identity at the core. Identity has to be the root of trust for AI, and without it, everything else (access controls, auditability, accountability) falls apart.
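To make that identity-root-of-trust idea concrete, here is a minimal Python sketch of a default-deny authorization check for an agent. All of the names (AgentIdentity, AGENT_REGISTRY, authorize_agent_call) are hypothetical, and this is an illustration of the principle rather than a prescribed implementation: an agent with no registered identity or accountable owner gets nothing, regardless of what its inherited credentials might otherwise allow.

```python
# Illustrative sketch only: names and structures here are assumptions,
# not a standard API. The point is that trust flows from a registered
# identity with a named owner and explicit grants, never from borrowed
# credentials.

from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    agent_id: str
    owner: str | None                              # accountable human or team
    scopes: set[str] = field(default_factory=set)  # explicitly granted permissions


# Hypothetical registry of known, governed agents.
AGENT_REGISTRY: dict[str, AgentIdentity] = {}


def authorize_agent_call(agent_id: str, requested_scope: str) -> bool:
    """Zero Trust default: deny unless the agent has its own registered
    identity, an accountable owner, and an explicit grant for this scope."""
    identity = AGENT_REGISTRY.get(agent_id)
    if identity is None:        # unknown agent running on inherited credentials
        return False
    if identity.owner is None:  # no accountable owner means no trust
        return False
    return requested_scope in identity.scopes


# Example: a registered copilot with a scoped grant is allowed; an
# unregistered agent is denied even if its borrowed service-account
# credentials would otherwise permit the action.
AGENT_REGISTRY["billing-copilot"] = AgentIdentity(
    agent_id="billing-copilot",
    owner="finance-platform-team",
    scopes={"invoices:read"},
)
assert authorize_agent_call("billing-copilot", "invoices:read") is True
assert authorize_agent_call("shadow-gpt-42", "invoices:read") is False
```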
Identity Risk in the Agentic Era
NIST’s AI RMF provides a high-level guide to managing AI risk across four functions: Map, Measure, Manage, and Govern. But interpreting these through the lens of identity governance reveals where AI-specific risks are hiding.
Take the “Map” function. How many AI agents are currently active in your organization? Who created them and who owns them? What access do they have to enterprise systems and services? Most security teams can’t answer these questions today.
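As a rough illustration, those Map questions can be modeled as a simple inventory query, sketched below. The AgentRecord fields and the sample data are hypothetical; a real inventory would be assembled from identity providers, cloud platforms, and SaaS audit logs rather than a hand-built list.

```python
# Hypothetical agent inventory sketch for the "Map" function:
# how many agents exist, who created and owns them, and what they can reach.

from dataclasses import dataclass


@dataclass
class AgentRecord:
    agent_id: str
    created_by: str
    owner: str | None          # None marks an orphaned agent, a governance gap
    systems: tuple[str, ...]   # enterprise systems the agent can access


# Illustrative sample data, not real discovery output.
inventory = [
    AgentRecord("support-gpt", created_by="alice", owner="cx-team",
                systems=("zendesk", "slack")),
    AgentRecord("etl-agent", created_by="bob", owner=None,
                systems=("snowflake", "s3")),
]

# How many agents are active, and which ones lack an accountable owner?
print(f"{len(inventory)} active agents")
for record in inventory:
    if record.owner is None:
        print(f"ORPHANED: {record.agent_id} (created by {record.created_by}) "
              f"can access {', '.join(record.systems)}")
```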