What’s in it for me?
What AI Agents Are (A Practical Definition)
Enterprises are rapidly shifting from basic AI assistants to semi- or even fully autonomous agents: software workers that can interpret intent, make decisions, and take actions on enterprise systems. This is a step-change from earlier generations of AI, where models simply produced text, analyzed data, or generated recommendations. Agents now execute workflows. They read, write, update, and orchestrate resources across cloud, SaaS, data platforms, engineering environments, and internal APIs.
Importantly, agents don’t just follow pre-coded automation rules. They reason. They evaluate objectives. They choose which tools or systems to call next. And in many cases, they perform these actions at machine speed with minimal, and sometimes no, human oversight.
In large organizations, agents appear in many forms:
- Native SaaS agents: Salesforce Agentforce, Microsoft Copilot, and HubSpot agents, embedded in the systems employees already use.
- Internal copilots: custom-built agents trained on proprietary data for engineering, customer support, procurement, finance, or operations.
- Local and developer-created agents: agents built inside IDEs like Cursor or Windsurf, browser extensions, or local scripts that use enterprise credentials.
- Plugins, tools, and connectors: AI-powered automations built into workflow systems like Workato or ServiceNow.
- Shadow agents: unregistered, self-built agents using personal API keys or credentials to interact with corporate systems.
This variety makes agents both powerful and unpredictable. There is no single “agent platform,” no unified logging standard, and no default governance layer. Each agent’s behavior depends on how the developer or user configured it, which tools it has access to, and, crucially, which non-human identity credentials it uses.
This is the conceptual foundation of the AI Agent Adoption Blueprint.
The AI Agent Adoption Blueprint
The architecture behind AI agents becomes much clearer when you break it down into three layers. This is the essence of the AI Agent Adoption Blueprint: Agents → NHIs → Enterprise Systems.
AI Agents
At the top are the agents themselves: the reasoning layer. Agents interpret prompts or tasks, determine the sequence of steps needed, and interact with external systems to execute work.
Key characteristics:
- Autonomy: Agents choose actions dynamically; their execution path is not scripted line-by-line.
- Speed: They interact with multiple systems in seconds, making rapid decisions.
- Breadth: They can combine data from CRM, cloud logs, ticketing systems, or databases in ways humans might not anticipate.
- Opacity: Their reasoning trails are difficult to reconstruct unless specifically instrumented.
Non-Human Identities (NHIs)
Agents never directly log into systems as “the agent.” They act through Non-Human Identities (NHIs):
- API keys
- OAuth apps
- Service accounts
- Personal access tokens
- Secrets stored in CI/CD pipelines or vaults
- Webhooks and connector configurations
It is the NHIs that define the exact permissions and access rights the agent inherits. They determine:
- Which files the agent can read
- Which records it can update
- Which tickets it can close
- Which infrastructure it can provision
- Which systems it can access across cloud, SaaS, and internal APIs
This means that AI agent security is inseparable from NHI security. NHIs already outnumber human identities by orders of magnitude and largely fall outside traditional IAM controls. They determine precisely what agents can and cannot do, yet most enterprises don’t track their creation, purpose, usage, or ownership.
Enterprise Systems
The bottom layer contains the systems agents interact with:
- SaaS platforms (Salesforce, ServiceNow, HubSpot, Slack)
- Cloud infrastructure (AWS, Azure, GCP)
- CI/CD systems (GitHub, GitLab, Jenkins)
- Data warehouses and analytics systems (Snowflake, BigQuery)
- Internal APIs and proprietary systems
This is where agents produce real-world effects: updating customer records, deploying code, manipulating cloud infrastructure, or accessing confidential data.
Every action in these systems maps back to an NHI, and every NHI maps back to an agent. Without clarity on these mappings, enterprises lose visibility into who, or what, is actually touching their systems. This opacity creates not only cybersecurity risk but also compliance risk: organizations without deep visibility and governance will struggle to understand what is happening in their environments and the risks created by their agents’ actions.
The Blueprint illustrates this dependency chain clearly: agent → NHI → system. Understanding this chain is the first step toward governing it.
MCP: The Rise of the AI Integration Layer
The Model Context Protocol (MCP) emerged to solve a practical problem: agents need structured, secure interfaces to internal and external tools and data. Instead of embedding brittle credentials into code or relying on ad-hoc APIs, MCP allows developers to expose tools, data sources, and actions to agents through a standardized gateway.
In practice, MCP servers act as translators and access brokers:
- They expose capabilities to the agent (“create ticket,” “query database,” “read folder”).
- They map those capabilities to real systems.
- They use NHIs behind the scenes to authenticate and authorize actions.
- They create a single point where an agent can access numerous tools.
This makes agent development dramatically easier, but it also expands risk.
Astrix research into MCP servers shows:
- 88% require credentials to operate
- 53% rely on static, hard-coded secrets
- 79% pass secrets via insecure environment variables
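The environment-variable pattern from the findings above is easy to audit locally. The sketch below flags variables whose names look secret-like; the regex is a heuristic assumption, not a complete detector, and it is run against a fake environment rather than the real one.

```python
import re

# Heuristic name patterns that usually indicate a secret (assumed, not exhaustive).
SECRET_NAME = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.IGNORECASE)

def suspect_env_secrets(env: dict) -> list:
    """Return names of environment variables that look like injected secrets."""
    return sorted(name for name, value in env.items()
                  if SECRET_NAME.search(name) and value)

# Example against a fabricated environment (pass os.environ to audit your own):
fake_env = {"PATH": "/usr/bin", "GITHUB_TOKEN": "ghp_xxx", "DB_PASSWORD": "hunter2"}
print(suspect_env_secrets(fake_env))  # ['DB_PASSWORD', 'GITHUB_TOKEN']
```

Any hit is a secret visible to every child process the MCP server spawns, which is why vault-backed, short-lived credentials are the safer alternative.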
MCP servers deployed by individuals (not teams) have the power to bridge external models with internal systems. When untracked or misconfigured, they create hidden tunnels of access that no traditional security tool is monitoring.
Enterprises adopting agents need to treat MCP servers as first-class entities, discovering and governing them just as they govern AI agents and NHIs. Organizations also need visibility into the MCP servers being consumed within their enterprise, especially given novel MCP attack vectors (e.g., rug-pull and tool-description attacks) and intentionally malicious MCP servers already observed in the wild.
The Interconnected Nature of AI Agents & NHIs
Every AI agent’s behavior is ultimately determined by the NHI it uses. If an NHI has admin permissions, the agent has admin permissions. If the NHI has write access to production systems, the agent inherits that write access. Given how historically difficult it has been for organizations to enforce least-privilege access and proper entitlement management across identity lifecycles, this has concerning implications as agents, and the NHIs behind them, proliferate.
This explains several patterns enterprises routinely observe:
NHI Sprawl Becomes AI Agent Sprawl
Each new agent often triggers the creation of:
- A new API key
- A new OAuth app
- A new service account
- A new secret in a CI/CD pipeline
Over time, organizations accumulate thousands of NHIs—many undocumented, unowned, or unmonitored. Agents come and go, but the NHIs remain, expanding the attack surface.
Misconfigured NHIs → Overprivileged AI Agents: The agent is not the direct problem. The overprivileged NHI is.
Orphaned NHIs → Persistent Access: When employees leave or agents are replaced, their NHIs often remain active indefinitely. This creates lingering backdoors into sensitive systems.
NHIs Become the Root Cause of Most Agentic Failures: Whether due to misconfiguration, prompt manipulation, or tool chaining, agent actions become dangerous only when the NHI is dangerously privileged.
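The sprawl and orphaning patterns above suggest an obvious first audit: flag any NHI that has no clear owner or has gone unused past a staleness threshold. The inventory records, field names, and 90-day threshold below are assumptions for illustration; a real source would be a vault, IdP, or cloud provider API.

```python
# Hypothetical NHI inventory records.
INVENTORY = [
    {"id": "svc-deploy", "owner": "platform-team", "last_used_days_ago": 2},
    {"id": "pat-jsmith", "owner": None,            "last_used_days_ago": 3},
    {"id": "key-legacy", "owner": "ex-employee?",  "last_used_days_ago": 400},
]

def flag_risky(nhis, stale_after_days=90):
    """Flag NHIs that are unowned (orphaned) or unused long enough to be stale."""
    flagged = []
    for nhi in nhis:
        reasons = []
        # Missing or uncertain ownership is the orphaned-NHI signal.
        if not nhi["owner"] or nhi["owner"].endswith("?"):
            reasons.append("no clear owner")
        if nhi["last_used_days_ago"] > stale_after_days:
            reasons.append("stale")
        if reasons:
            flagged.append((nhi["id"], reasons))
    return flagged

print(flag_risky(INVENTORY))
# [('pat-jsmith', ['no clear owner']), ('key-legacy', ['no clear owner', 'stale'])]
```

Even this crude pass surfaces the two failure modes discussed above: credentials nobody owns, and credentials nothing uses but that still grant access.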
Bringing It Together: The Blueprint as the Starting Point for Agentic Governance
The AI Agent Adoption Blueprint provides a clear map of how autonomous agents operate in real enterprise environments: AI agents determine logic and behavior, NHIs determine access and permissions, and enterprise systems reflect the outcomes and the risks.
Understanding this chain helps enterprises:
- Identify where governance must begin
- Avoid agent sprawl and shadow deployments
- Establish ownership early
- Audit and trace agent actions
- Enforce least privilege using NHIs as the control point
- Prepare for emerging regulations and risk frameworks
- Scale AI adoption without compounding access risk
The blueprint is not a security model by itself—it is the scaffolding.
In the next chapter, we build on it to define the full scope of AI Agent Security as a discipline, outline the market landscape that is forming around it, and introduce a lifecycle model that enterprises can apply at scale.