Tuesday, March 10, 2026

Human-centric IAM is failing: agentic AI requires a modern identity control plane


The race is on to implement agent-based artificial intelligence. Across the enterprise, systems that can plan, act, and collaborate across business applications promise unprecedented efficiency. However, the rush to automate misses a critical element: scalable security. We are creating a digital workforce without giving it a secure way to log in, access data, and do its job without creating catastrophic risk.

The fundamental problem is that traditional identity and access management (IAM), designed for humans, fails at agent scale. Controls such as static roles, long-lived passwords, and one-time approvals are useless when non-human identities can outnumber human identities ten to one. To harness the power of agentic AI, identity must evolve from a simple login gatekeeper into a dynamic control plane for the entire AI operation.

“The fastest path to responsible AI is to avoid real data. Use synthetic data to prove value, then earn the right to touch reality.” — Shawn Kanungo, keynote speaker and innovation strategist; author of the bestseller “The Bold Ones”.

Why Your Human-Centered IAM Is a Sitting Duck

Agentic AI doesn’t just use software; it behaves like a user. It authenticates to systems, assumes roles, and calls APIs. If you treat these agents as ordinary application functions, you invite hidden permission creep and invisible activity. A single agent with excessive privileges can exfiltrate data or run erroneous business processes at machine speed, without anyone realizing until it’s too late.

The primary vulnerability is the static nature of legacy permissions. You cannot define a fixed role in advance for an agent whose tasks and data needs may change daily. The only way to keep access decisions accurate is to shift policy enforcement from a one-time assignment to continuous evaluation at runtime.
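To make the contrast concrete, here is a minimal Python sketch of runtime policy evaluation. The agent ID, resource names, and policy table are all hypothetical; the point is that every request is checked against current policy at the moment of access, rather than against a role assigned at login.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    agent_id: str
    resource: str
    action: str
    purpose: str

# Hypothetical policy table: what each agent may do, and for what purpose.
# Because it is consulted on every request, edits take effect immediately.
POLICIES = {
    "invoice-agent": {
        "resource": "billing-db",
        "actions": {"read"},
        "purpose": "invoice-reconciliation",
    },
}

def authorize(req: AccessRequest) -> bool:
    """Evaluate policy at request time, not at login time."""
    policy = POLICIES.get(req.agent_id)
    if policy is None:
        return False  # unknown agents are denied by default
    return (
        req.resource == policy["resource"]
        and req.action in policy["actions"]
        and req.purpose == policy["purpose"]
    )

allowed = authorize(AccessRequest("invoice-agent", "billing-db", "read", "invoice-reconciliation"))
denied = authorize(AccessRequest("invoice-agent", "billing-db", "delete", "invoice-reconciliation"))
```

Revoking or narrowing the policy entry changes the outcome of the very next call, which is the property a one-time role assignment cannot give you.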

Prove value before touching production data

Kanungo’s advice offers a practical on-ramp. Start with synthetic or masked data sets to validate agent workflows, scopes, and guardrails. Once your policies, logs, and break-glass procedures have proven themselves in this sandbox, you can move agents to real-world data with confidence and clear audit evidence.

Building an identity-centric AI operating model

Securing this new workforce requires a shift in mindset. Every AI agent must be treated as a first-class citizen in your identity ecosystem.

First, each agent needs a unique, verifiable identity. This is not just a technical identifier; it must be tied to a human owner, a specific business use case, and a software bill of materials (SBOM). The era of shared service accounts is over; they are tantamount to handing the master key to a faceless crowd.

Second, replace set-it-and-forget-it roles with session-based, risk-aware permissions. Access should be granted just in time, scoped to the immediate task and the minimum necessary data set, and then automatically revoked when the task completes. Think of it as giving an agent the key to a single room for one meeting, rather than the master key to the entire building.
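A minimal sketch of that just-in-time pattern, assuming an in-memory grant store and illustrative agent and scope names. Each grant is bound to one scope, expires on its own, and can be revoked the moment the task finishes.

```python
import secrets
import time

class JustInTimeGrants:
    """Sketch: task-scoped access grants that expire and can be revoked immediately."""

    def __init__(self):
        self._grants = {}

    def grant(self, agent_id: str, scope: str, ttl_seconds: float) -> str:
        """Issue a short-lived token bound to one agent and one scope."""
        token = secrets.token_urlsafe(16)
        self._grants[token] = {
            "agent": agent_id,
            "scope": scope,
            "expires": time.time() + ttl_seconds,
        }
        return token

    def check(self, token: str, scope: str) -> bool:
        """Valid only if the token exists, matches the scope, and has not expired."""
        g = self._grants.get(token)
        return g is not None and g["scope"] == scope and time.time() < g["expires"]

    def revoke(self, token: str) -> None:
        """Called when the task completes -- the 'meeting' is over."""
        self._grants.pop(token, None)

jit = JustInTimeGrants()
token = jit.grant("report-agent", "read:sales-q3", ttl_seconds=300)
allowed_now = jit.check(token, "read:sales-q3")    # valid while the task runs
wrong_scope = jit.check(token, "write:sales-q3")   # outside the grant
jit.revoke(token)
after_revoke = jit.check(token, "read:sales-q3")   # the key no longer opens the room
```

A production system would mint signed tokens from a credential broker rather than keep grants in process memory, but the lifecycle — scoped issue, expiry, immediate revocation — is the same.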

Three pillars of a scalable agent security architecture

First, context-aware authorization. Authorization can no longer be a simple “yes” or “no” at the door; it has to be an ongoing conversation. Systems should evaluate context in real time. Has the agent’s security posture been verified? Is it requesting data specific to its purpose? Is the access occurring within the normal operating window? This dynamic evaluation delivers both security and speed.
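The three questions above can be sketched as independent signals combined into one decision. This is a simplified illustration; the operating window and signal names are assumptions, and a real system would draw them from posture-management and policy services.

```python
from datetime import datetime, time

def context_allows(
    posture_verified: bool,
    purpose_matches: bool,
    now: datetime,
    window: tuple = (time(6, 0), time(22, 0)),  # illustrative operating window
) -> bool:
    """Any failing signal denies the request; all must hold at evaluation time."""
    in_window = window[0] <= now.time() <= window[1]
    return posture_verified and purpose_matches and in_window

daytime = datetime(2026, 3, 10, 14, 30)
midnight = datetime(2026, 3, 10, 2, 0)

allowed = context_allows(True, True, daytime)      # all signals pass
off_hours = context_allows(True, True, midnight)   # outside the normal window
bad_posture = context_allows(False, True, daytime) # posture check failed
```

Because the check runs on every request, an agent that drifts out of its normal window or fails a posture check loses access mid-session, not at the next quarterly review.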

Second, purpose-bound data access at the edge. The last line of defense is the data layer itself. By embedding policy enforcement directly into the data query engine, you can enforce row- and column-level security based on the agent’s stated purpose. A customer service agent should be automatically blocked from running a query that looks like financial analysis. Purpose binding ensures that data is used as intended, not merely accessed by an authorized identity.
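A minimal sketch of column-level purpose binding, with a hypothetical purpose-to-columns policy. The query result is projected down to only the columns the declared purpose entitles the agent to see, so a support agent never receives financial columns even if its identity could technically reach the table.

```python
# Hypothetical policy: which columns each declared purpose may see.
PURPOSE_COLUMNS = {
    "customer-support": {"customer_id", "name", "open_tickets"},
    "financial-analysis": {"customer_id", "lifetime_value", "outstanding_balance"},
}

def project_for_purpose(rows: list, purpose: str) -> list:
    """Drop every column the declared purpose is not entitled to.
    Unknown purposes get an empty allow-list, i.e. no data."""
    allowed = PURPOSE_COLUMNS.get(purpose, set())
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]

rows = [
    {"customer_id": 1, "name": "Acme", "lifetime_value": 1200, "open_tickets": 2},
]
support_view = project_for_purpose(rows, "customer-support")
# financial columns are stripped before the result ever reaches the support agent
```

In a real deployment this projection lives inside the query engine (row filters and column masks), not in application code, but the principle is identical: the purpose travels with the query and shapes what comes back.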

Third, tamper-evident logging by default. In a world of autonomous operations, auditability is non-negotiable. Every access decision, data request, and API call should be immutably logged, capturing who, what, where, and why. Correlate logs so they are easily replayable and legible to auditors and incident responders, giving a clear account of each agent’s activity.
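One common way to make a log tamper-evident is a hash chain, where each entry commits to the digest of the previous one; altering any past entry breaks every later hash. A minimal sketch, with hypothetical field names matching the who/what/where/why capture above:

```python
import hashlib
import json

class TamperEvidentLog:
    """Hash-chained audit log: editing any past entry invalidates the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []          # list of (record, digest) pairs
        self._prev = self.GENESIS

    def append(self, who: str, what: str, where: str, why: str) -> str:
        record = {"who": who, "what": what, "where": where, "why": why,
                  "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append((record, digest))
        self._prev = digest
        return digest

    def verify(self) -> bool:
        """Recompute the whole chain; any edit or reorder returns False."""
        prev = self.GENESIS
        for record, digest in self.entries:
            if record["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True

log = TamperEvidentLog()
log.append("invoice-agent", "read", "billing-db", "invoice-reconciliation")
log.append("invoice-agent", "read", "billing-db", "monthly-close")
intact = log.verify()                      # chain checks out
log.entries[0][0]["why"] = "edited-later"  # simulate tampering
tampered = log.verify()                    # verification now fails
```

Production systems typically anchor the chain head to external storage (or a transparency log) so even a wholesale rewrite is detectable.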

A practical action plan to get you started

Start with an identity inventory. Catalog all non-human identities and service accounts. You will likely find shared and over-privileged accounts. Begin issuing a unique identity for each agent workload.

Pilot a just-in-time access platform. Implement a tool that grants short-lived, scoped credentials for a specific project. This proves the concept and demonstrates operational benefits.

Require short-lived credentials. Issue tokens that expire in minutes, not months. Find and remove static API keys and secrets from code and configuration.

Create a synthetic data sandbox. Validate agent workflows, scopes, prompts, and policies on synthetic or masked data first. Promote agents to real data only after checks, logging, and egress policies pass.
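A minimal sketch of the masking half of that sandbox, assuming a hypothetical record shape and field list. Deterministic pseudonyms let agent workflows join and deduplicate records exactly as they would in production, without ever seeing real values.

```python
import hashlib

def mask_record(record: dict, sensitive_fields=("email", "ssn")) -> dict:
    """Replace sensitive values with deterministic pseudonyms.
    The same input always maps to the same pseudonym, so joins still work."""
    masked = dict(record)
    for field in sensitive_fields:
        if field in masked:
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()[:8]
            masked[field] = f"masked-{digest}"
    return masked

customer = {"customer_id": 7, "email": "jane@example.com", "plan": "pro"}
sandbox_row = mask_record(customer)
# the email is pseudonymized; non-sensitive fields pass through unchanged
```

Real masking pipelines should use a keyed hash or tokenization service (a plain hash of low-entropy values like emails can be reversed by brute force); this sketch only illustrates the shape of the sandbox step.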

Conduct a tabletop exercise for agent incidents. Practice responses to exposed credentials, prompt injection, and tool escalation. Prove you can revoke access, rotate credentials, and isolate an agent in minutes.

The bottom line

An AI-driven future cannot be managed with human-era identity tools. The organizations that win will treat identity as the central nervous system of AI operations. Make identity the control plane, move authorization to runtime, bind data access to purpose, and prove value on synthetic data before touching real data. Do this, and you can scale to a million agents without scaling your breach risk.

Michelle Buckner is a former NASA Information Systems Security Specialist (ISSO).
