A New Security Category for AI Agents: Inside SACR’s AIAP Report
Software Analyst Cyber Research (SACR) published its latest research report, “Emerging Agentic Identity Access Platforms (AIAP),” and the central argument is one we’ve been making since 2021: securing AI agents starts with identity.
The report introduces a framework for how enterprises should think about governing agents: not as a software or model problem, but as an identity and access problem. Astrix is featured as a representative vendor aligned with this model.
Here’s what the report covers, where Astrix fits, and what it means for security teams building their AI agent security programs today.
What SACR’s AIAP Framework Actually Says
The report defines Agentic Identity Access Platforms as a new identity security category purpose-built for AI agents. The analyst’s framing is direct: legacy IAM and PAM were designed around bounded intent, human oversight, and static credentials. Agents violate all three assumptions.
SACR structures the problem across four operational phases:
- Discover, Inventory, and Register: Before any control is possible, security teams need a continuous, accurate inventory of every AI agent, NHI, MCP server, secret, and API connection across SaaS, cloud, endpoints, and CI/CD. Most enterprises don’t have this. Discovery is the prerequisite, not a nice-to-have.
- Translate and Authorize Intent: Traditional role-based access doesn’t map cleanly to agent behavior. Agents act on intent, not job titles. The report argues that authorization decisions need to shift from role-based to intent-based, evaluating what the agent is trying to do, not just who spun it up.
- Broker and Inject Access: Standing credentials are the primary risk amplifier in agentic systems. The report recommends just-in-time, task-scoped credential issuance with no embedded secrets and no long-lived tokens. Agents should only have access to what they need, for the duration they need it.
- Runtime Enforcement and Response: Authentication at deployment time is not enough. Agent behavior needs to be monitored continuously at runtime — detecting deviations, enforcing policy, revoking access, and maintaining end-to-end audit trails. This is where the report argues legacy IAM fails most clearly.
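To make the role-based vs. intent-based distinction above concrete, here is a minimal sketch of an intent-based authorization check. All names here (`AgentRequest`, the `POLICY` table, the example intent) are illustrative assumptions for this post, not part of the report or any specific product:

```python
from dataclasses import dataclass

@dataclass
class AgentRequest:
    """A single action an agent wants to perform (all fields illustrative)."""
    agent_id: str
    on_behalf_of: str  # the human identity the agent acts for
    intent: str        # declared task, e.g. "summarize_q3_invoices"
    resource: str      # target system or dataset
    action: str        # operation: "read", "write", "delete"

# Intent-based policy: allowed (resource, action) pairs per declared intent,
# rather than a role that carries standing access to everything.
POLICY = {
    "summarize_q3_invoices": {("billing/invoices", "read")},
}

def authorize(req: AgentRequest) -> bool:
    """Grant only if the declared intent covers this resource/action pair."""
    allowed = POLICY.get(req.intent, set())
    return (req.resource, req.action) in allowed

req = AgentRequest("agent-42", "alice@example.com",
                   "summarize_q3_invoices", "billing/invoices", "read")
print(authorize(req))  # read within the declared intent -> True
```

The point of the sketch is the shape of the decision: the policy key is the task the agent claims to be doing, and the human context (`on_behalf_of`) travels with every request, so a write to HR records under an invoicing intent is denied even if the underlying service account could technically do it.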
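The broker-and-inject phase can be sketched the same way. This is a toy illustration of just-in-time, task-scoped, revocable tokens; the class name, scopes, and TTL are assumptions for the example, and a real broker would sit in front of a secrets manager and the target systems' own auth:

```python
import secrets
import time

class CredentialBroker:
    """Issues short-lived, task-scoped tokens instead of standing credentials.
    A sketch of the pattern only; names and TTLs are illustrative."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self._issued = {}  # token -> (scope, expiry timestamp)

    def issue(self, agent_id: str, scope: str) -> str:
        """Mint a fresh token bound to one scope, expiring after the TTL."""
        token = secrets.token_urlsafe(16)
        self._issued[token] = (scope, time.monotonic() + self.ttl)
        return token

    def validate(self, token: str, scope: str) -> bool:
        """Accept only unexpired tokens used within their granted scope."""
        entry = self._issued.get(token)
        if entry is None:
            return False
        granted_scope, expiry = entry
        return granted_scope == scope and time.monotonic() < expiry

    def revoke(self, token: str) -> None:
        """Runtime response: cut access immediately, mid-task if needed."""
        self._issued.pop(token, None)

broker = CredentialBroker(ttl_seconds=300)
tok = broker.issue("agent-42", "billing:read")
print(broker.validate(tok, "billing:read"))   # True: in scope, within TTL
print(broker.validate(tok, "billing:write"))  # False: outside granted scope
broker.revoke(tok)
print(broker.validate(tok, "billing:read"))   # False: revoked at runtime
```

Note how the last two calls connect the "broker and inject" phase to the "runtime enforcement" phase: because every credential is issued per task and tracked centrally, out-of-scope use fails by construction and revocation takes effect immediately.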
The analyst’s core conclusion: identity governance has to shift from “who logged in” to “what is acting, why, and for how long.” That shift requires infrastructure that doesn’t exist in traditional IAM stacks.
As Francis Odum put it during our joint webinar: “You can’t just have a fragmented way of thinking about it. When you have an agent acting on behalf of a user, you need to be able to interconnect and have full context into what they’re doing — who is this user, what are they spinning up — you need that human identity context to match controls.”
How Astrix Is Positioned in the Report
SACR profiles Astrix as an identity-first platform for AI agent security, operating at the non-human identity layer. The analyst’s framing centers on what we’ve called the convergence thesis: AI agents cannot be secured without first securing the non-human identities that power them.
The report identifies Astrix’s core strength as the ability to span two areas that most vendors address separately — NHI security and AI agent deployment — within a single integrated platform. In the analyst’s words:
“Astrix’s unique competitive advantage stems from being one of the few emerging identity platforms that addresses both legacy NHI remediation and agent AI deployment within a single integrated solution. This approach anchors AI agent security in non-human identity control, treating AI agents as enterprise identities whose risk is defined by the access, permissions, and downstream impact they hold across systems.”
This matters in practice. Most enterprises encountering agent sprawl today already have an NHI problem underneath it. Agents inherit the same credentials, OAuth apps, and service accounts that were already ungoverned. Treating them as a new, separate category means building controls from scratch on top of a foundation that was never secured to begin with. Astrix’s approach addresses both layers together.
What the report highlights in the Astrix profile:
- Deep discovery across cloud, SaaS, CI/CD, endpoints, and developer environments, covering custom, third-party, and shadow agents, including MCP servers and hardcoded secrets
- Identity attribution that maps agents back to owners, permissions, consumed resources, and accessed systems
- Continuous posture management and threat detection grounded in behavioral drift, not static policy
- Agent Control Plane (ACP) for secure-by-design deployment with just-in-time, short-lived, precisely scoped credentials
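To illustrate what "behavioral drift, not static policy" can mean in practice, here is a deliberately simple sketch: compare an agent's observed actions against its historical baseline and flag activity that falls outside it. The metric, names, and data are illustrative assumptions, not Astrix's actual detection logic:

```python
from collections import Counter

def drift_score(baseline: Counter, observed: Counter) -> float:
    """Fraction of observed actions absent from the agent's historical
    baseline. An intentionally simple, illustrative drift metric."""
    total = sum(observed.values())
    if total == 0:
        return 0.0
    novel = sum(n for action, n in observed.items() if action not in baseline)
    return novel / total

# Baseline: this agent historically only reads and updates CRM records.
baseline = Counter({"crm:read": 950, "crm:update": 50})
# Observed window: mostly a bulk-export action the agent has never used.
observed = Counter({"crm:read": 40, "db:export_all": 60})
print(drift_score(baseline, observed))  # 0.6 -> most activity is novel
```

A static policy would pass this agent as long as its credential allowed the export; a drift check catches that the behavior itself no longer matches the identity's history, which is the signal the report argues runtime enforcement has to act on.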
Customer case study: Fortune 500 travel company
The Astrix case study in the report covers a Fortune 500 travel company where the CEO had mandated aggressive AI adoption across approximately 600 developers. The security challenge was real: agents were being spun up faster than governance could track, and existing IAM tools had no visibility into what those agents were connecting to or what permissions they carried. The organization had already deployed Astrix for NHI security before expanding into AI agent use cases. That foundation meant discovery was immediate and context-rich from day one.
With Astrix, the security team got a complete inventory of deployed GPTs and their connectors, including which agents had access to confidential data, which were open to all employees, and which were running with over-privileged API keys. The team identified high-risk scenarios, including HR employees connecting GPTs to systems containing confidential employee records and database administrators granting ChatGPT access to BigQuery with excessive permissions. Astrix routed alerts directly into the enterprise SIEM, where they were processed through existing SOC workflows; no new response processes were required.
The Identity Problem Doesn’t Change, The Scale Does
One question we hear often is whether AI agents represent a fundamentally new security problem or an acceleration of the NHI challenges enterprises already face. The answer, based on what we see in customer environments, is both.
The underlying primitives are familiar: API keys, OAuth apps, service accounts, secrets. What’s different is the scale, the autonomy, and the blast radius. An agent doesn’t just use a credential. It uses a credential to take actions across multiple systems, often without human review, at a speed and volume no human operator could match.
Idan Gour, Astrix co-founder and president, put it this way during “The Promise of the AI Agent Security Market” webinar: “If we think about these AI agents as an insider threat, employees that can go rogue… a regular employee would say ‘I shouldn’t use this credential to harvest the entire database.’ But agents don’t have that judgment. If they find the credential, they will use it. That’s a different risk profile.”
That judgment gap is what makes agent security an access problem. That’s the operational reality security teams are facing. The credential was always a risk, but AI agents remove the human check that kept it contained. The answer isn’t a new category of tooling bolted on top of the old stack; it’s a security foundation that covers both layers, one that starts with access and works outward.
Read the Full Report and Watch the Webinar
SACR’s AIAP report is required reading for CISOs and security leaders mapping their AI agent security programs. It provides a clear framework for evaluating where your controls stand today and what’s required as agent deployments scale.
We also hosted a webinar with Francis Odum where we went deeper on the market, the AIAP framework, and what a real AI agent security program looks like in practice, including shadow AI, MCP risk, and where to start if you’re early in the process.