Your AI Agents Have Too Much Access. You Just Can't See It Yet

A few years ago, I was doing a security review at a mid-sized financial services company. They had a mature IAM program, a dedicated cloud security team, and had just completed a major access certification campaign. Clean bill of health.

Then we pulled the service account inventory.

Over 400 service accounts across their AWS and Snowflake environments. Roughly 60% hadn't been used in over 90 days. Several had admin-level privileges on production data pipelines. One, created for a PoC that never went to prod, had read access to their entire customer data warehouse. Nobody knew it existed.

This wasn't negligence. It was entropy. Access sprawl isn't a policy failure; it's what happens when you treat access as a configuration detail rather than a system that needs to be continuously understood.

That was before AI agents entered the picture. Now, that same pattern plays out 10x faster.

What "Shadow AI" Actually Looks Like in Practice

Shadow IT used to mean a rogue Dropbox account or an unauthorized SaaS subscription. Security teams learned to deal with it: DLP policies, CASB tooling, strongly worded all-hands emails.

Shadow AI is different in kind, not just degree. It's not just ungoverned storage; it's ungoverned access plus action.

Here's what I've seen firsthand in enterprise environments over the last 18 months:

The "just for testing" OAuth grant that never died. A developer connects an AI coding assistant to the internal GitHub org "just to evaluate it." The OAuth grant gets broad repo access. The eval period ends; the integration doesn't get revoked. Six months later, nobody remembers it's there, but it still has read access to every private repo in the org.

The no-code automation with human-level credentials. A business analyst builds a workflow automation using their own credentials to call internal APIs. The workflow runs on a schedule, silently, long after the analyst has moved to a different team.
The access was never designed to outlive the person's role.

The agent framework that caches more than you think. Open-source agent frameworks that retain conversation context and credentials between sessions, deployed by an engineering team that didn't read that part of the docs. Context windows contain API keys. Sessions persist. Nobody audited what was being stored or who could access it.

None of these show up in your IAM dashboard. None trigger your DLP policies. They live in the seam between "approved tooling" and "obvious incident."

The Non-Human Identity Problem Is Already Bigger Than Most Teams Realize

Here's a number that tends to land hard: in most enterprise environments, non-human identities already outnumber human users, often by a factor of 5 to 10.

Service accounts. Workload identities. CI/CD pipelines. API keys. And increasingly, AI agents with their own OAuth grants and session tokens. These identities accumulate privileges quietly. They don't hit MFA prompts. They don't get flagged in quarterly access reviews. They don't get offboarded when the system they were created for gets decommissioned.

And because they've historically been managed separately from human access (in a different tool, by a different team, on a different cadence), the governance model never developed the same muscle.

The compounding problem with AI agents specifically is that their access patterns are harder to reason about statically. A human user with access to a data warehouse and a Slack integration uses those things independently. An AI agent with the same access can chain them: pull from the warehouse, reason over the output, and post results somewhere, all in a single task execution that nobody designed end to end.

The blast radius of a misconfigured or compromised agent isn't just the permissions it holds.
It's the transitive surface of everything it can reach and act on.

Access Is Now the Control Plane for AI Trust

The framing I keep coming back to: in an agentic world, access governance is the primary mechanism for controlling what AI systems can actually do.

Model alignment matters. Output filtering matters. But an agent that behaves perfectly and has access to everything is still a risk surface that most security teams aren't ready for. Prompt injection attacks, scope creep, compromised API keys: any of these become catastrophic if the blast radius isn't bounded.

The access model has to evolve from "Is this identity authorized?" to "Is this access pattern consistent with what this identity should be doing right now?"

That's a meaningful shift. The first question is answered at provisioning time and revisited periodically. The second requires continuous evaluation against observed behavior, and it only works if you have a coherent model of what "normal" looks like for each identity.

This is where treating access as a graph rather than a list actually pays off. When users, roles, permissions, resources, service identities, and agents are modeled as interconnected entities, you can ask questions that siloed systems can't answer:

- What can this agent actually reach if it follows transitive permission paths?
- Which identities share an access path to this sensitive dataset?
- If we add this new integration, what does the propagation look like before we deploy it?

That last one matters most. The shift from reactive cleanup to proactive blast-radius modeling is where access intelligence stops being a security function and starts being an architectural input.

What Responsible Agent Deployment Actually Requires

I've sat in enough post-mortems to know that AI-related access incidents rarely start with a dramatic breach.
They start with access that was too broad, too static, and too invisible to catch before something downstream broke.

The pattern is almost always the same:

1. The agent gets provisioned with "enough" access to do its job.
2. Access is never scoped down once the job is better understood.
3. Agent behavior drifts: new integrations, expanded use cases, changed prompts.
4. Nobody is watching the access surface, because the agent passes all the authentication checks.

Fixing this isn't about adding more controls. It's about a few concrete practices:

Scope grants at deployment, not after. Define the minimum access an agent needs to complete its task before it goes anywhere near production. Treat over-provisioning as a deploy blocker, the same way you'd treat an unreviewed network rule.

Model human and non-human identities together. If agents are evaluated on a separate cadence, with separate tools, by a separate team, you will have blind spots. The governance model needs to cover both, consistently.

Baseline behavior, not just provisioned scope. What data sources does this agent actually access? What chaining behavior does it exhibit? Does it stay within the access patterns it was designed for? Anomalies in behavior are often the first signal of drift, compromise, or scope creep, but only if you're watching.

Design access before deployment, not after. The best time to understand the blast radius of a new agent or integration is before it's running in production. Modeling access propagation upfront is an order of magnitude cheaper than cleaning it up after.

The Access Problem Didn't Get Solved

I keep seeing organizations declare access "under control" after completing a certification campaign or deploying a new IAM tool. And then six months later, the same patterns re-emerge: stale permissions, over-privileged accounts, ungoverned machine identities.

The reason is structural: access governance has mostly been a point-in-time activity applied to a continuous problem.
Cloud infrastructure, SaaS sprawl, and now AI agents don't slow down between your quarterly reviews.

The organizations that are actually getting ahead of this are the ones that stopped treating access as a configuration detail and started treating it as a living system, one that needs to be modeled, monitored, and reasoned about continuously across every kind of identity.

Okay, So What Do You Actually Do About It?

The pattern I've described isn't new to most security leaders. The harder question is always: where do we apply force? Here's a concrete action plan, organized by the three layers where access risk actually accumulates.

Layer 1: Get a Real Inventory of Who (and What) Has Access

You can't govern what you can't see. And most organizations have a visibility problem that's worse than they think.

Build a unified identity inventory, human and non-human together. Stop managing service accounts in a separate spreadsheet from human users. Pull every identity type (employees, contractors, service accounts, workload identities, API keys, OAuth grants, CI/CD pipeline credentials, and agent tokens) into a single view. The goal isn't a perfect CMDB. It's enough visibility to ask cross-identity questions: Which identities have access to this dataset? Who shares a permission path with this admin role?

Flag the orphans and the over-privileged immediately. Two filters surface the highest-risk identities fast:

- Any non-human identity unused in 60+ days with privileges still active
- Any identity, human or machine, with permissions that cross more than two system boundaries (e.g., production database + cloud storage + external API)

These are your quickest wins and, more importantly, your most credible signal to leadership that the problem is real.

Map OAuth grants and agent tokens as first-class identities. This is the one most teams skip. Every AI tool, automation, and SaaS integration that has authenticated against your internal systems left a token somewhere.
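Once the inventory is unified, both filters reduce to one-line predicates. A minimal sketch, with invented record fields (map them onto whatever your IAM exports actually provide):

```python
# Toy unified inventory. The fields (kind, last_used_days, privileged,
# systems) and the example identity names are hypothetical.
INVENTORY = [
    {"name": "etl-svc",      "kind": "service", "last_used_days": 142,
     "privileged": True,  "systems": {"snowflake"}},
    {"name": "report-agent", "kind": "agent",   "last_used_days": 2,
     "privileged": False, "systems": {"prod-db", "s3", "external-api"}},
    {"name": "jdoe",         "kind": "human",   "last_used_days": 1,
     "privileged": False, "systems": {"jira"}},
]

# Filter 1: non-human identities unused for 60+ days with privileges still active.
stale_nhi = [i["name"] for i in INVENTORY
             if i["kind"] != "human"
             and i["last_used_days"] >= 60
             and i["privileged"]]

# Filter 2: any identity whose permissions cross more than two system boundaries.
cross_system = [i["name"] for i in INVENTORY if len(i["systems"]) > 2]
```

On the toy data above, `stale_nhi` flags the forgotten service account and `cross_system` flags the agent spanning three systems. The point isn't the code; it's that neither query is answerable until human and non-human identities live in one inventory.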
Treat each one as an identity with an owner, a scope, and an expiration. If it doesn't have all three, that's a gap.

Layer 2: Understand Access as a Graph, Not a List

The reason transitive access risk is so hard to manage is that most teams are querying flat permission lists when the actual risk lives in the relationships between systems.

Model permissions as connected entities. When you represent users, roles, resources, and service identities as nodes in a graph, with grants, inheritance, and delegation as edges, questions that were previously unanswerable become straightforward:

- If this service account is compromised, what's the blast radius?
- What's the shortest path from this external-facing API to our most sensitive data store?
- Which AI agents share an access path to this regulated dataset?

You don't necessarily need a dedicated graph database to start. Even mapping this in a structured document for your five highest-risk systems will surface things that have been invisible for years.

Trace transitive permissions before they become incidents. Direct permissions are easy to audit. Inherited ones aren't. A role that grants access to a data pipeline that has a trust relationship with a storage bucket that contains PII: that's three hops, and most point-in-time reviews miss it entirely. Build the habit of tracing at least two levels of inheritance for any identity you're evaluating.

Identify cross-system access paths as a distinct risk category. Permissions that cross system boundaries, especially between a lower-trust environment and a higher-trust one, deserve their own review cadence. An agent that can read from a dev environment and write to a prod messaging queue is a different risk than one scoped to a single system, even if each individual permission looks benign in isolation.

Layer 3: Make Access a Design Input, Not an Afterthought

This is the hardest shift, but it's where the leverage is.
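(One aside before going further: the Layer 2 reachability questions reduce to standard graph traversal. A minimal blast-radius sketch over a toy access graph; every node name here is hypothetical, and a real model would carry edge types for grants, inheritance, and delegation.)

```python
from collections import deque

# Directed access graph: an edge A -> B means identity/role A can reach,
# assume, or act on B. All names are invented for illustration.
ACCESS_EDGES = {
    "report-agent":       ["warehouse-role", "slack-bot"],
    "warehouse-role":     ["customer-warehouse"],
    "customer-warehouse": ["pii-bucket"],   # pipeline trust relationship
    "slack-bot":          ["sales-channel"],
}

def blast_radius(identity: str) -> set[str]:
    """Everything transitively reachable from an identity (breadth-first search)."""
    seen, queue = set(), deque([identity])
    while queue:
        node = queue.popleft()
        for target in ACCESS_EDGES.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen
```

Here `blast_radius("report-agent")` includes `pii-bucket`, three hops away, even though no direct grant to it exists anywhere: exactly the kind of path a flat permission list never shows.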
The teams that are genuinely ahead of this problem didn't get there by cleaning up access faster; they got there by building access considerations into how systems get designed and deployed.

Require an access scope document before any new agent or automation goes to production. It doesn't need to be long. Four questions:

- What identities does this system use?
- What is the minimum access required for it to function?
- Who is the named owner responsible for its access surface?
- How does it get deprovisioned when it's no longer needed?

Making this a deploy requirement, the same way you'd require a security review for a new external endpoint, shifts the culture from reactive to proactive without adding significant overhead.

Define blast radius as a first-class design constraint. Before deploying a new agent or integration, explicitly model the worst-case impact if it's compromised or behaves unexpectedly. What systems can it reach? What data can it exfiltrate or corrupt? What downstream automations could it trigger? This isn't a theoretical exercise; it's the same kind of threat modeling you'd do for a new network segment, applied to access scope.

Implement just-in-time access for high-privilege operations. Persistent broad access is the enemy. For operations that require elevated privileges (schema changes, bulk data exports, cross-environment access), move toward time-bounded grants that require explicit justification and expire automatically. This applies to AI agents as much as humans. An agent that needs elevated access to complete a specific task should get it for that task, not permanently.

Build behavioral baselines for your highest-risk non-human identities. What data sources does this agent normally access? What's its typical call volume? What systems does it chain together? Establish that baseline at deployment, and build alerting around meaningful deviations.
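A baseline check like that can start very small. A sketch, assuming a hypothetical baseline captured at deployment; the field names and thresholds are invented:

```python
# Baseline captured when the agent was deployed. Names and numbers are
# illustrative, not a real schema.
BASELINE = {
    "report-agent": {
        "sources": {"warehouse", "kb"},   # data sources it was designed to touch
        "max_calls_per_hour": 120,        # typical call volume ceiling
    },
}

def deviations(agent: str, sources_seen: set[str], calls_last_hour: int) -> list[str]:
    """Compare observed behavior against the deployment-time baseline."""
    base = BASELINE[agent]
    alerts = []
    new_sources = sources_seen - base["sources"]
    if new_sources:
        alerts.append(f"new data sources: {sorted(new_sources)}")
    if calls_last_hour > base["max_calls_per_hour"]:
        alerts.append(f"call volume {calls_last_hour} exceeds baseline")
    return alerts
```

An agent suddenly reading a CRM it was never designed to touch, or tripling its call volume overnight, produces alerts; behavior inside the baseline produces none. The hard part isn't the comparison, it's having captured the baseline at deployment at all.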
Anomalous access behavior (new data sources, unusual chaining, access at unexpected times) is often the first signal of prompt injection, credential theft, or scope creep. But only if you defined "normal" first.

Close the feedback loop between access and observed behavior. This is the final step, and the one that separates mature programs from everyone else: use actual usage data to continuously right-size access. If a service account hasn't touched a permission in 90 days, remove it. If an agent's observed behavior only ever touches two of the eight systems it has access to, scope it down. Access should reflect reality, not the broadest possible interpretation of what might someday be needed.

The Question Every Security Leader Should Be Asking Right Now

Most CISOs I talk to are focused on the right things: securing the AI models their organizations are adopting, managing cloud risk, keeping up with compliance mandates. But there's a question that doesn't come up often enough in those conversations:

Do you know what your AI agents can actually reach?

Not what access you intended to grant. What they can reach through transitive permissions, chained integrations, and OAuth grants your team approved six months ago and forgot about.

If you can't answer that with confidence today, you are already behind, because your teams are deploying agents faster than your governance model can track them. And the gap between "what we provisioned" and "what can actually be accessed" is exactly the surface that gets exploited.

The organizations that will navigate the agentic era without a major access-related incident aren't the ones with the most controls. They're the ones that decided, early, to treat access as a first-class system: modeled continuously, owned clearly, and designed before deployment rather than cleaned up after.

That decision starts at the top.
The mandate to govern human and non-human identities together, to make access a design input for AI initiatives, and to hold engineering and business teams accountable for the access surface they create doesn't come from a security engineer. It comes from you.

The access problem didn't get solved. You just inherited a faster version of it. What you do in the next two quarters will determine whether your AI investments become a competitive advantage or your next hard conversation with the board.

Author: Priyanka Neelakrishnan is a Director of Product Management focused on Access Security, with experience building enterprise security and data platforms at Palo Alto Networks and Symantec. She is the author of Autonomous Data Security: Creating a Proactive Enterprise Protection Plan, which explores how organizations can move from policy-centric data protection toward adaptive, AI-enabled security architectures.

Make the world better than how it was yesterday! Good security is an act of respect: for users, for data, for the future.