Astrix Research Presents: Touchpoints Between AI and Non-Human Identities

Tal Skverer, June 9, 2025

Non-human identities (NHIs) such as service accounts, API keys, and OAuth applications are foundational elements of modern enterprise identity infrastructure. They facilitate automated processes, grant access, and authorize actions across organizational systems. With the accelerating adoption of AI, particularly autonomous AI Agents, a critical question emerges: how do AI systems interact with NHIs?

While it might seem that AI would use NHIs just like any other automated process, AI is not just another script or scheduled task: AI Agents in particular can make autonomous, nondeterministic decisions, mimic human behavior, and even request access dynamically. This behavior raises deeper concerns:

  • Will AI require expanded access compared to traditional NHIs?
  • Will organizations experience a surge in NHIs to support AI adoption?
  • How will AI influence NHI behavior, blurring distinctions between automated and human-like access?
  • Can attackers exploit AI-associated NHIs, leading to novel cyber threats?

Tal Skverer, Head of Research at Astrix Research, and Ophir Oren, Cyber and AI Security Innovation Squad Leader at Bayer, examined these concerns. This article outlines their findings on the implications of AI for security, governance, and identity management.

Defining AI Agents

An AI Agent is an autonomous or semi-autonomous system leveraging AI technologies to perform tasks requiring specific permissions. Examples include AI-driven user onboarding processes, automated calendar summaries, and meeting scheduling without human intervention.

AI categories and practical uses

Before we examine how AI interacts with NHIs, we need to break down AI usage into distinct categories. AI is not some monolithic entity – it exists and operates in different environments, with varying levels of autonomy and access.

Some AI systems are designed for human interaction, while others operate on their own in the background, asking for commands or assistance only when needed.

Understanding these categories will help us map out where AI relies on NHIs, how access is granted, and where the potential risks lie in each architecture. Below are the key AI categories shaping modern organizations.

Chatbot AI

What it is: Chatbots engage users through natural language interactions, often integrating with multiple enterprise tools:

  • Dedicated service accounts and internally created OAuth applications facilitate platform access.
  • API keys empower chatbots with advanced functionality, allowing them to act on users’ behalf.
  • Webhooks enable chatbots to trigger workflows or send data to logging systems.

NHIs used: OAuth apps, API keys, webhooks 
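
As a minimal sketch of this pattern, the snippet below uses two of these NHI types: an API key to act on the chat platform, and a webhook URL to push events to a logging system. The endpoints, key names, and payloads are all hypothetical.

```python
import os

import requests

# Hypothetical endpoints: a chat platform's message API and an inbound
# webhook registered with a logging system.
CHAT_API = "https://chat.example.com/api/v1/messages"
LOG_WEBHOOK = "https://logs.example.com/hooks/ingest"

# The API key is the chatbot's identity (an NHI) on the chat platform.
api_key = os.environ["CHATBOT_API_KEY"]

# Act on the user's behalf: post a reply into a channel.
requests.post(
    CHAT_API,
    headers={"Authorization": f"Bearer {api_key}"},
    json={"channel": "#support", "text": "Your ticket has been updated."},
    timeout=10,
)

# Trigger a downstream workflow: send an audit event over the webhook.
# The webhook URL itself acts as a bearer credential, another NHI to protect.
requests.post(
    LOG_WEBHOOK,
    json={"event": "chatbot_reply", "channel": "#support"},
    timeout=10,
)
```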

RAG (Retrieval-Augmented Generation)

What it is: RAG combines information retrieval with generative AI models to enhance the accuracy and context of responses. Instead of relying solely on pre-trained knowledge, RAG retrieves relevant external documents or data based on a user’s query. These external resources often reside behind authentication layers, requiring the AI model to use access credentials such as database connection strings, usernames and passwords, or API keys to obtain the necessary information.

RAG systems can be seen as the simplest tier of agents that utilize NHIs.

NHIs used: API keys, database service accounts
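
A minimal sketch of the retrieval step, assuming the knowledge base sits behind a Postgres connection string (one of the NHIs listed above); the schema, query, and variable names are hypothetical:

```python
import os

import psycopg2  # third-party driver: pip install psycopg2-binary

# The connection string is the pipeline's database NHI; it is read from
# the environment rather than hard-coded (hypothetical variable name).
conn = psycopg2.connect(os.environ["KB_DATABASE_URL"])

def retrieve(query: str, limit: int = 5) -> list[str]:
    """Fetch candidate passages for a user query (hypothetical schema)."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT body FROM documents WHERE body ILIKE %s LIMIT %s",
            (f"%{query}%", limit),
        )
        return [row[0] for row in cur.fetchall()]

# Retrieved passages are prepended to the prompt as grounding context.
context = "\n".join(retrieve("quarterly revenue policy"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```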

AI Cloud Models (e.g., SageMaker, Vertex AI)

What it is: Cloud providers offer managed services for deploying large language models (LLMs), typically accessed via API keys or built-in cloud identities (roles or managed identities). These identities let models integrate seamlessly with other cloud resources.

NHIs used: API keys, IAM Roles/Managed Identities 
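
With Amazon SageMaker, for example, the calling workload often carries no API key at all: boto3 resolves the IAM role (or other ambient credentials) attached to the compute it runs on, and that role is the model caller's NHI. A minimal sketch, with a hypothetical endpoint name:

```python
import json

import boto3

# No static key appears in the code: boto3 resolves credentials from the
# environment, e.g., the IAM role attached to the EC2 instance, Lambda
# function, or EKS pod this runs on. That role is the caller's NHI.
runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-llm-endpoint",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"inputs": "Summarize today's meetings."}),
)
print(response["Body"].read().decode())
```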

API to LLM (e.g., API connections to SaaS-based LLMs)

What it is: An API connection that allows applications to interact with large language models (LLMs), regardless of whether the LLM is deployed on-prem, in the cloud, or in a hybrid setup.

Because LLMs usually don’t have their own identity, the API key used to access the model is the only identity that matters.

NHIs used: API key 
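
In other words, the entire trust relationship reduces to a single secret. A generic sketch of this pattern (the endpoint, header scheme, and payload follow common SaaS LLM conventions but are assumptions, not any specific vendor's API):

```python
import os

import requests

# The bearer key below is the only identity the LLM provider sees; whoever
# holds it *is* the application, which is why rotation and scoping matter.
LLM_ENDPOINT = "https://llm.example.com/v1/chat/completions"  # hypothetical

resp = requests.post(
    LLM_ENDPOINT,
    headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
    json={
        "model": "example-model",
        "messages": [{"role": "user", "content": "Draft a status update."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```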

Enterprise AI Platforms

What it is: Enterprise AI platforms offer internal AI services accessed through Single Sign-On (SSO) or direct login credentials. Once authenticated, users interact with various integrated tools (e.g., SharePoint, internet search). Access to the enterprise platform itself requires an NHI, which should be managed and governed.

NHIs used: OAuth tokens, API keys
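
A minimal sketch of how such a platform might obtain its own OAuth token via the client-credentials grant before calling an integrated tool; the token endpoint, client ID, and scope are hypothetical:

```python
import os

import requests

# Client-credentials grant: the platform authenticates as itself (an NHI),
# not as any particular end user. Endpoint and scope are hypothetical.
token_resp = requests.post(
    "https://idp.example.com/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": os.environ["PLATFORM_CLIENT_ID"],
        "client_secret": os.environ["PLATFORM_CLIENT_SECRET"],
        "scope": "sharepoint.read",
    },
    timeout=10,
)
access_token = token_resp.json()["access_token"]

# The short-lived access token is then used against the integrated tool.
docs = requests.get(
    "https://sharepoint.example.com/api/documents",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
```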

Browser Agents

What it is: Browser agents operate within web browsers, accessing user data and sometimes authenticating as users via provided credentials. They manipulate browser data directly or via session cookies, enabling actions like summarizing emails or translating languages.

NHIs used: User service accounts, session tokens
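
A simplified sketch of the session-token pattern: the agent replays a cookie issued to the user's authenticated browser session, so the cookie itself becomes an NHI worth protecting. The cookie name and endpoint are hypothetical.

```python
import os

import requests

# The session cookie was issued to a human login; the agent replays it.
# From the server's perspective, the agent *is* the user.
session = requests.Session()
session.cookies.set("SESSIONID", os.environ["USER_SESSION_COOKIE"])  # hypothetical name

# e.g., fetch the inbox the agent has been asked to summarize
inbox = session.get(
    "https://mail.example.com/api/messages?folder=inbox", timeout=10
)
for msg in inbox.json().get("messages", []):
    print(msg["subject"])
```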

Computer Agents

What it is: Computer agents possess broader capabilities than browser agents, enabling them to control entire machines, install software, access files, and closely mimic human activities. Like browser agents, computer agents are provided with credentials, including usernames and passwords for local or domain user accounts, as well as additional credentials necessary for accessing various applications.

NHIs used: User service accounts, local service account passwords, session tokens

Risks and potential attack paths

From these observations, we learned that the different AI categories rely on similar types of NHIs despite their varied applications and connection methods. Having identified the identity types common across applications, we began to explore the potential dangers and common attack paths associated with them.

Emergence of multi-NHI agents

Today, NHI programs within organizations typically focus on identifying and mitigating risks associated with individual non-human identities. This focus arises because most tasks traditionally require only a single or a small number of NHIs. For example:

  • OAuth applications usually operate on behalf of just a single user.
  • Workloads accessing cloud resources typically require only one cloud identity.
  • CI/CD processes generally use two identities – one for the code platform and another for managing cloud deployment services.

However, the introduction of AI Agents significantly complicates this landscape. AI Agents often require broader access to execute tasks effectively across multiple platforms: business applications, cloud services, databases, chat platforms, and email systems. Unlike traditional automated processes, AI Agents have a far more comprehensive functional scope, requiring numerous NHIs to fulfill their roles.

The complexity increases further due to limited oversight regarding AI Agent creation. Nearly every employee within an organization can create AI Agents and grant them extensive access to their personal and organizational data. Consequently, organizations frequently end up with multiple NHIs whose associations with specific AI Agents remain unclear or unknown.

Monitoring these NHIs individually can lead security teams to incorrectly assume an identity is safe and suitable for organizational approval. In reality, a comprehensive risk assessment should account for all permissions held by each AI Agent collectively. Security evaluations should consider the total access footprint of the AI Agent, who has access to it, and by extension, who can leverage its permissions.
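
As a toy illustration of that shift in perspective, the sketch below scores an agent by the union of permissions reachable through all of its NHIs rather than by any single identity; the permission names and risk labels are invented.

```python
# Toy model: an agent's effective access is the union of what all of its
# NHIs can do, not what any single identity can do. Names are illustrative.
agent_nhis = {
    "oauth-app-crm": {"crm:read_contacts", "crm:export"},
    "api-key-mail": {"mail:read", "mail:send"},
    "db-svc-account": {"db:read", "db:write"},
}

SENSITIVE = {"crm:export", "mail:send", "db:write"}

footprint = set().union(*agent_nhis.values())
sensitive_held = footprint & SENSITIVE

# Each NHI alone might pass review, but the combined footprint holds three
# sensitive permissions that should be assessed together.
print(f"total permissions: {len(footprint)}, sensitive: {sorted(sensitive_held)}")
```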

Increasing NHI-to-human ratio

Currently, organizations exhibit a 1:40 human-to-NHI ratio. AI adoption could drastically increase this ratio, potentially reaching 1:100. This surge amplifies risks:

  • Expanding the organizational attack surface
  • Increasing the sensitive permissions granted
  • Reducing visibility and control over NHIs
  • Allowing third-party access outside official approval processes
  • Complicating lifecycle management, including provisioning, rotation, and decommissioning

Under optimistic scenarios, widespread adoption of AI Agents would likely result in each human having their own dedicated AI agent, alongside multiple autonomous agents embedded within organizational systems. This scenario could feasibly result in AI Agents outnumbering humans at a ratio of 3:1.

Since each AI Agent inherits permissions across the platforms its human counterpart uses (approximately 20 platforms per user), each agent requires multiple NHIs. Three agents per human, each touching roughly 20 platforms, adds on the order of 60 identities on top of the existing 40, which is how the human-to-NHI ratio could escalate from today's 1:40 to around 1:100. Such a dramatic increase would significantly magnify existing risks related to non-human identity management and security within organizations.

Sensitive permissions sprawl

The rapid adoption of Cloud and SaaS platforms within organizations has significantly increased sensitive permissions granted to users. IAM programs typically use policies to limit and control these permissions, focusing specifically on reducing the number of identities holding highly sensitive administrative privileges. Such permissions commonly include extensive write and administrative actions.

Sensitive permissions are traditionally controlled by restricting them to a minimal number of administrative users or specialized service accounts, alongside stringent activity auditing. Often, administrators must intentionally switch to highly privileged accounts, thereby creating an additional control layer.

However, the rise of autonomous AI Agents is poised to disrupt this practice. As AI Agents begin replacing manual administrative tasks within enterprise applications, their associated NHIs will necessarily be granted the extensive, sensitive permissions previously reserved for a small set of administrative roles. This shift expands the sensitive-permissions perimeter beyond traditional IAM controls. Organizations must now also secure the AI Agents themselves, particularly by ensuring that their administrative-level access cannot be leveraged by lower-permissioned users, preventing unauthorized privilege escalation.

Non-best practices in AI-NHI access

AI is the hottest trend around, and its adoption is accelerating rapidly across organizations. Unfortunately, this haste often results in insecure methods of connecting AI to services and platforms.

Over the past decade, identity security practices have advanced substantially, adopting safer methods for non-human connectivity, such as OAuth for scoped application access, and OpenID Connect (OIDC), which enables identity-aware code to securely access cloud resources without relying on static credentials or direct vault access.

Despite these advancements, outdated access-granting methods persist widely, retained for backward compatibility and as transitional measures until the newer standards are supported across all platforms. Unfortunately, these legacy methods are commonly used when rapidly deploying new AI capabilities, harming the organization's security posture.
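
As one example of the safer pattern described above, the sketch below exchanges a short-lived, platform-issued OIDC token for temporary cloud credentials instead of shipping a static key with an AI workload; the role ARN and token path are hypothetical.

```python
import boto3

# Instead of a long-lived static key baked into the AI workload, exchange a
# short-lived OIDC token (issued to the workload by its platform) for
# temporary credentials. Role ARN and token path are hypothetical.
with open("/var/run/secrets/oidc/token") as f:
    oidc_token = f.read().strip()

sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/ai-agent-role",
    RoleSessionName="ai-agent",
    WebIdentityToken=oidc_token,
    DurationSeconds=900,  # credentials expire on their own
)["Credentials"]

# Use the temporary credentials for the actual work.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```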

Attacks on AI Agents as resources

AI Agents, being accessible resources within enterprise platforms, present additional security considerations. Protecting these agents involves applying multifactor authentication (MFA), enforcing least-privilege access, and defining strict access policies.

Living off the land through AI-associated NHIs

As mentioned, the widespread use of AI is expected to significantly increase the number of NHIs within organizations, creating new operational spaces for attackers. A critical threat is the emergence of “Living Off the Land” attacks.

In these attacks, adversaries exploit legitimate tools and existing identities within systems rather than deploying dedicated attack tools, greatly complicating the detection of malicious activities. Attackers who compromise NHIs associated with AI agents can:

  • Exploit legitimate permissions held by NHIs to conduct malicious actions.
  • Mask their activities as normal AI agent behavior.
  • Move laterally through organizational networks using multiple permissions assigned to AI agents.

The issue is exacerbated because the high frequency and volume of AI agent activity make it easy to dismiss as background noise, complicating security teams' efforts to identify abnormal behavior. Consequently, attackers' dwell time within networks can extend significantly, enabling them to gather sensitive information, strengthen their foothold within organizational systems, and execute sustained malicious operations.

Without dedicated monitoring mechanisms specifically designed for AI agents and their associated identities, organizations may find themselves vulnerable to sophisticated attacks that are challenging to detect in real-time.

Recommended security measures for AI-associated NHIs

The intersection of AI adoption, non-human identities, and the rapid rise of AI Agents presents serious security risks for organizations.

As AI becomes more prevalent, NHI usage will increase, opening the door to “Living Off the Land” (LOTL) attacks via these identities. Such attacks are harder to detect and can significantly extend attacker dwell time in a compromised environment.

To support secure AI adoption, protecting NHIs and AI Agents is essential.

NHI provisioning for AI Agents

Organizations should establish clear provisioning workflows for NHIs used by new AI Agents. These workflows must account for the fact that employees without identity management expertise will often be involved. Key elements to define include:

  • Ownership: Who is responsible for creating the NHI?
  • NHI type: What kind of identity should be created per platform or service?
  • Access scope: What is the maximum level of permissions the AI Agent should receive?
  • Credential handling: How are credentials distributed and delivered to the agent?

To reduce the risk of orphaned NHIs after an agent is no longer active, always set expiration times for permissions or credentials. When supported, decommission NHIs automatically once the AI Agent completes its task.
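
A minimal sketch of a provisioning record that bakes these questions in, with a mandatory expiration so orphaned identities age out; the fields and the decommissioning sweep are assumptions about what such an internal workflow might look like, not an existing API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class NHIRequest:
    """Hypothetical provisioning record for an AI Agent's NHI."""
    owner: str             # who is responsible for this identity
    agent: str             # which AI Agent it serves
    platform: str          # where the identity lives
    nhi_type: str          # e.g., "oauth_app", "api_key"
    max_scopes: list[str]  # ceiling on the permissions granted
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(days=30)
    )  # expiration is mandatory, never open-ended

def sweep(records: list[NHIRequest]) -> list[NHIRequest]:
    """Return identities past their expiry, due for decommissioning."""
    now = datetime.now(timezone.utc)
    return [r for r in records if r.expires_at <= now]
```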

Gaining visibility into NHIs

Maintain an up-to-date inventory of NHIs used by AI Agents. This inventory should be built from provisioning records and include:

  • NHI owner
  • Granted permissions
  • Expected usage timeframe

Maintaining NHI Posture

Ensure strong security posture for all AI-related NHIs:

  • Use unique identities for each agent to ensure traceability.
  • Apply security policies such as IP restrictions based on the expected execution environment (one is sketched after this list), human approval for sensitive write actions, and least-privilege access based on resource needs.
  • Monitor all activity of AI-related NHIs, maintain accessible event logs, and configure alerts for abnormal or unauthorized API usage.
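
The sketch below illustrates the first of these policies, an IP restriction check: any call from an AI-related NHI outside its expected execution environment raises an alert. The CIDR ranges and event shape are invented.

```python
import ipaddress

# Expected execution environment per NHI (hypothetical CIDR allowlists).
EXPECTED_SOURCES = {
    "api-key-mail": ipaddress.ip_network("10.20.0.0/16"),    # agent runner subnet
    "db-svc-account": ipaddress.ip_network("10.30.4.0/24"),  # batch cluster
}

def check_event(nhi: str, source_ip: str) -> None:
    """Alert when an NHI is used outside its expected network location."""
    allowed = EXPECTED_SOURCES.get(nhi)
    if allowed is None or ipaddress.ip_address(source_ip) not in allowed:
        # In practice this would page the security team or open a ticket.
        print(f"ALERT: {nhi} used from unexpected source {source_ip}")

check_event("api-key-mail", "203.0.113.7")  # triggers an alert
```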

Preventing “Living Off the Land” Attacks

AI-driven environments increase the risk of LOTL attacks via NHIs, making it difficult for security teams to distinguish between malicious and legitimate behavior.

To counter this:

  • Monitor AI-associated NHIs with tools that can differentiate benign from suspicious behavior.
  • Establish behavioral baselines and alert on deviations (see the sketch after this list).
  • Use specialized detection mechanisms to identify malicious actors attempting to mimic AI activity.
  • Audit agent activity regularly to uncover anomalies and potential compromises.
  • Implement automated response playbooks to contain breaches involving AI-linked NHIs quickly.
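
As a toy illustration of baselining, the sketch below flags NHI activity whose hourly call volume deviates sharply from its history; a real deployment would model far richer features (resources touched, time of day, source network), and the numbers here are invented.

```python
from statistics import mean, stdev

# Hourly API-call counts observed for one AI-associated NHI (invented data).
baseline = [110, 95, 120, 105, 98, 112, 101, 117, 93, 108]

def is_anomalous(observed: int, history: list[int], z_threshold: float = 3.0) -> bool:
    """Flag counts more than z_threshold standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > z_threshold * sigma

# A compromised NHI used for bulk exfiltration often spikes well above its
# normal cadence, even while every individual call looks legitimate.
print(is_anomalous(460, baseline))  # True: investigate this identity
print(is_anomalous(104, baseline))  # False: within normal behavior
```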

Conclusion

The intersection of AI and NHIs introduces new complexities and security challenges. To stay ahead of attackers while reaping the benefits of this game-changing technology, organizations must proactively implement robust identity management strategies and targeted security controls. By systematically addressing these challenges through structured NHI provisioning, comprehensive visibility, strict posture management, and continuous monitoring, organizations can securely enable AI-driven innovation while minimizing the associated risks.

Big thanks to Ophir Oren, Cyber and AI Security Innovation Squad Leader at Bayer, co-lead and co-author of this research.
