
Your AI Agent Inventory Is Wrong. Here’s How to Fix It.

Omer Granot March 11, 2026

This post covers how Astrix discovers AI agents across enterprise environments, including the ones security teams don’t know exist. By the end, you’ll understand why traditional tools miss most agents, how Astrix finds them across three distinct data sources, and what actionable visibility actually looks like at enterprise scale.

When we ask security leaders how many AI agents are running in their environment, most have a number ready. Almost every time, that number is wrong by an order of magnitude.

It’s not a tooling gap in the traditional sense. It’s a structural one. Most security tools were built around human users: accounts with names, managed lifecycles, and clear ownership. AI agents don’t work that way. They authenticate with OAuth tokens and API keys. They’re deployed by developers, employees, and third-party vendors, often without IT involvement. They run continuously, touch multiple systems, and leave a footprint traditional discovery tools weren’t built to catch.

Shadow AI Agents Are a Security Problem, Not Just a Governance One

Shadow AI agents aren’t just undocumented. They’re unreviewed. No one has validated what they can do, what credentials they’re using, or how broadly they’re exposed. An employee-deployed ChatGPT agent with create and delete permissions across your entire Google Workspace directory isn’t a theoretical risk. Neither is a deprecated MCP server running on a developer’s endpoint, storing secrets locally, with no one tracking who or what is connecting through it. These AI agents exist in most enterprise environments today. Most security teams have no visibility into them.

How Astrix Builds a Complete AI Agent Inventory

Astrix builds its AI agent inventory from three sources simultaneously.

The first is direct platform integrations with Cursor, ChatGPT, Bedrock, Gemini, Agentforce, and others – surfacing agents that were officially deployed and are actively monitored. 

The second is NHI fingerprinting. Because Astrix already monitors the OAuth and identity layers of your core platforms – Slack, your IDPs, SaaS tools – it can identify agents that were never registered anywhere but left a trace on those layers. These are the shadow agents: the ones no one deployed, and no one is watching.

The third is sensor data. For AI agents and MCP servers running locally on employee endpoints, Astrix uses existing EDR telemetry; no new deployment required. This covers local Claude Code instances, unofficial MCP servers, IDE-connected agents, and anything else operating at the endpoint layer that never touched a central platform.

All three sources feed into an identity correlation layer that produces a single inventory with full context: who owns each agent, what credentials it’s using, what resources it can access, and how broadly it’s exposed. In a typical enterprise, this surfaces thousands of agents and NHIs that security teams had no prior visibility into.
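To make the correlation idea concrete, here is a minimal sketch of merging per-source sightings into a single inventory keyed by credential. The field names, feed shapes, and shadow-agent heuristic are illustrative assumptions, not Astrix's actual schema:

```python
from collections import defaultdict

# Hypothetical discovery records; each source reports
# (agent_name, credential_id, source) tuples. Names are invented.
platform_feed = [("support-bot", "oauth:abc123", "platform")]
oauth_feed = [("support-bot", "oauth:abc123", "oauth_layer"),
              ("unknown-agent", "key:def456", "oauth_layer")]
sensor_feed = [("local-mcp", "key:ghi789", "endpoint")]

def correlate(*feeds):
    """Merge per-source sightings into one inventory keyed by credential."""
    inventory = defaultdict(lambda: {"names": set(), "sources": set()})
    for feed in feeds:
        for name, cred, source in feed:
            inventory[cred]["names"].add(name)
            inventory[cred]["sources"].add(source)
    return dict(inventory)

inv = correlate(platform_feed, oauth_feed, sensor_feed)

# A credential seen on the OAuth layer or an endpoint, but never via an
# official platform integration, is a shadow-agent candidate.
shadow = [cred for cred, rec in inv.items() if "platform" not in rec["sources"]]
print(shadow)
```

The point of keying on the credential rather than the agent name is that shadow agents rarely have a registered name anywhere; the OAuth token or API key is often the only stable identifier that appears in more than one place.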


What Visibility Actually Looks Like

Discovery is only useful if it’s actionable. For each agent Astrix surfaces, the Identity Graph maps the full picture: the agent, the NHI it operates through, the permissions it holds, and the downstream systems it can touch. That context is what turns a flat list of agents into risk-prioritized findings.

For MCP servers, Astrix surfaces the information security teams need to make a trust decision: local or remote, official or unofficial, actively maintained or deprecated. A deprecated GitHub MCP server running on an employee endpoint isn’t necessarily a crisis – but it’s a decision that should be made consciously, not missed entirely.
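Those three trust attributes map naturally onto a simple triage rule. The sketch below is a hedged illustration of that decision, not Astrix's actual MCP metadata model; the field names and priorities are assumptions:

```python
from dataclasses import dataclass

# Illustrative attribute model for an MCP server trust decision.
@dataclass
class MCPServer:
    name: str
    remote: bool      # remote service vs. local process on an endpoint
    official: bool    # vendor-published vs. unofficial/community build
    maintained: bool  # actively maintained vs. deprecated

def triage(server: MCPServer) -> str:
    """Turn the trust attributes into a conscious review decision."""
    if not server.maintained:
        return "review: deprecated"
    if not server.official:
        return "review: unofficial"
    return "allow"

# A deprecated GitHub MCP server on an endpoint: not a crisis, but flagged.
print(triage(MCPServer("github-mcp", remote=False, official=True, maintained=False)))
```

The ordering encodes the article's point: deprecation is surfaced first because an unmaintained server is a decision someone should make deliberately, even if the eventual answer is "allow."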

From there, Astrix scores each agent based on permission breadth, ownership gaps, and active findings. The highest-risk items, such as an AI agent accessible to your entire organization with write access to core systems, get flagged immediately, with remediation guidance attached.
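A rough sketch of that scoring logic, combining the three signals named above. The weights, field names, and thresholds here are invented for illustration only; Astrix's actual scoring model is not public:

```python
# Assumed agent record shape and weights -- illustrative, not Astrix's model.
def risk_score(agent: dict) -> int:
    score = 0
    score += min(len(agent.get("permissions", [])), 10)  # permission breadth
    if agent.get("owner") is None:                       # ownership gap
        score += 5
    score += 3 * len(agent.get("findings", []))          # active findings
    # Org-wide write access is the headline case from the article.
    if agent.get("org_wide_access") and "write" in agent.get("permissions", []):
        score += 10
    return score

agents = [
    {"name": "ci-bot", "owner": "platform-team",
     "permissions": ["read"], "findings": []},
    {"name": "shadow-agent", "owner": None, "org_wide_access": True,
     "permissions": ["read", "write", "delete"], "findings": ["stale-token"]},
]

flagged = sorted(agents, key=risk_score, reverse=True)
print(flagged[0]["name"])  # highest-risk agent surfaces first
```

The unowned, org-wide agent with write access dominates the ranking, which matches the intuition in the text: breadth of exposure plus a missing owner is what pushes an agent to the top of the queue.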

Watch It in Action

The best way to understand what this looks like at enterprise scale is to see it in a live environment. In this demo, we walk through exactly how Astrix discovers AI agents, and what it surfaces when it does.

Watch the video

If you want to see how many AI agents are running in your own environment, including the ones you don’t know about, book a demo today.
