Finding Shadow AI Agents

AI assistants are evolving into autonomous agents, capable of acting on behalf of users with little to no oversight. While this promises efficiency, it also introduces a major blind spot: the non-human identities (NHIs) that power these agents. From OAuth tokens embedded in servers to long-lived credentials hidden in configurations, AI agents depend on NHIs that often slip past traditional IAM controls.

In this 30-minute session, Field CTO Jonathan Sander is joined by Tal Skverer, Head of Security Research, to break down when AI became an “agent,” the identity risks fueling Shadow AI, and practical methods to fingerprint and secure these agents before they proliferate unchecked.

“Behind every AI agent lies an identity. And when agents run autonomously, organizations end up generating new identities and access on their behalf—often without the oversight or guardrails they expect.”

Key highlights:

AI agents vs. assistants: The critical shift happened when chat assistants were given “tools” (actions, APIs, email access), transforming them into autonomous agents.

Identity as the foundation: Agents aren’t LLMs themselves—they are powered by non-human identities (API keys, OAuth tokens, service accounts) that grant access to resources and actions.

New security risks: Shadow AI Agents introduce unpredictability, permission sprawl, and new opportunities for living-off-the-land attacks, making monitoring and control more difficult.

Fingerprinting techniques: Discover Shadow AI Agents by monitoring traffic against the published IP egress ranges of major AI vendors, checking OAuth app usage, analyzing user-agent strings, and leveraging platform audit logs.
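The IP-range and user-agent checks above can be sketched in a few lines of log analysis. This is a minimal illustration, not a complete detector: the CIDR below is a placeholder (TEST-NET-3), and the user-agent markers are illustrative examples of AI-tooling strings, not an exhaustive or authoritative list. A real deployment would pull current egress ranges and agent signatures from each vendor's published documentation.

```python
import ipaddress
import re

# Placeholder vendor egress range (TEST-NET-3); substitute the CIDRs each
# AI vendor actually publishes.
AI_VENDOR_CIDRS = [ipaddress.ip_network("203.0.113.0/24")]

# Illustrative user-agent substrings associated with AI tooling; not exhaustive.
AI_USER_AGENT_MARKERS = ["GPTBot", "ClaudeBot", "python-openai", "anthropic"]

# Apache/nginx "combined" log format: ip, identity, user, [time], "request",
# status, bytes, "referer", "user-agent".
LOG_PATTERN = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<req>[^"]*)" \d+ \d+ "[^"]*" "(?P<ua>[^"]*)"'
)

def flag_agent_traffic(log_lines):
    """Return entries whose source IP or user-agent suggests AI-agent activity."""
    hits = []
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if not m:
            continue  # skip lines that are not in combined log format
        ip = ipaddress.ip_address(m.group("ip"))
        ua = m.group("ua")
        from_ai_range = any(ip in net for net in AI_VENDOR_CIDRS)
        ai_ua = any(marker.lower() in ua.lower() for marker in AI_USER_AGENT_MARKERS)
        if from_ai_range or ai_ua:
            hits.append({"ip": str(ip), "user_agent": ua, "request": m.group("req")})
    return hits
```

Flagged entries can then be joined against OAuth app registrations and platform audit logs to tie the traffic back to a specific agent and owner.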

Best practices for control: Tag and track NHIs tied to Shadow AI Agents, enforce least privilege and lifecycle management, prohibit NHI reuse across agents, and assign clear ownership to every deployed AI agent.
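Two of the practices above, prohibiting NHI reuse across agents and assigning clear ownership, lend themselves to a simple inventory audit. The sketch below assumes a hypothetical in-house NHI inventory; the record fields and checks are illustrative, not taken from any specific product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NonHumanIdentity:
    """One tagged NHI record; field names are assumptions for illustration."""
    credential_id: str   # e.g. API key ID, OAuth client ID
    kind: str            # "api_key", "oauth_token", "service_account", ...
    agent: str           # the AI agent this credential is tied to
    owner: str           # accountable human or team

def audit(inventory):
    """Flag violations of the practices above: NHIs with no assigned owner,
    and credentials reused across more than one agent."""
    problems = []
    agents_by_cred = {}
    for nhi in inventory:
        if not nhi.owner:
            problems.append(f"{nhi.credential_id}: no assigned owner")
        agents_by_cred.setdefault(nhi.credential_id, set()).add(nhi.agent)
    for cred, agents in agents_by_cred.items():
        if len(agents) > 1:
            problems.append(f"{cred}: reused across agents {sorted(agents)}")
    return problems
```

Running such a check on every newly discovered agent identity keeps ownership gaps and credential sprawl from accumulating silently.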