Securing AI Agents at Scale: What’s New in Astrix

Oleg Mogilevsky January 14, 2026

Over the past quarter, Astrix delivered a set of focused product updates designed to help security, IAM, and AI teams better govern and secure AI agents and the Non-Human Identities (NHIs) powering them.

These updates strengthen three core areas teams consistently struggle with: discovering AI agents where they actually run, understanding which agents pose real risk, and responding to agent-driven threats with speed and context. 

Together, they move AI agent security from fragmented visibility to operational control.

Below is a closer look at the key capabilities we released, why we built them, and why they matter.

MCP Discovery: Visibility into MCP servers and agent connections

As AI agents proliferate, Model Context Protocol (MCP) servers have become a common integration layer, often running on developer endpoints and operating outside centralized visibility. Traditional IAM and security tools don't track them, leaving teams unaware of which agents exist, how they connect, or what credentials they use.

Astrix MCP Discovery is built to close this gap. Using EDR-based discovery, it identifies MCP servers across endpoints, classifies them as official or shadow, and maps connected AI agents, credentials, usage patterns, and last activity. This approach provides immediate visibility without requiring new agents or disruptive deployment changes.
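To make the official-versus-shadow distinction concrete, here is a minimal sketch of how such a classification could work. The JSON shape mirrors the common `mcpServers` configuration format used by many MCP clients; the allowlist and classification logic are hypothetical illustrations, not Astrix's actual implementation:

```python
import json

# Hypothetical allowlist of sanctioned MCP servers (illustrative only).
APPROVED_SERVERS = {"github", "jira"}

def classify_mcp_servers(config_text: str) -> dict:
    """Label each configured MCP server as 'official' or 'shadow'."""
    config = json.loads(config_text)
    servers = config.get("mcpServers", {})
    return {
        name: "official" if name in APPROVED_SERVERS else "shadow"
        for name in servers
    }

# Example config resembling what an MCP client stores on a developer endpoint.
sample = json.dumps({
    "mcpServers": {
        "github": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"]},
        "local-db": {"command": "python", "args": ["db_server.py"]},
    }
})
print(classify_mcp_servers(sample))
# {'github': 'official', 'local-db': 'shadow'}
```

A real discovery pipeline would of course enrich this with credentials, usage patterns, and last activity, as described above.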

Why this matters

You can’t govern what you can’t see. MCP Discovery gives teams a reliable inventory of AI agents and their integration paths, reducing blind spots and exposing unmanaged or risky deployments before they turn into incidents.

For a deeper dive, read: https://astrix.security/learn/blog/astrixs-mcp-discovery/

Threat Center: Unified threat detection for AI agents and NHIs

As AI agents operate autonomously, they generate signals across tokens, service accounts, integrations, and activity logs. These signals often surface as disconnected alerts, making it difficult for security teams to understand what’s actually happening or where to focus.

Astrix Threat Center was developed to unify these signals into coherent, actionable Threat Cases. It consolidates detections related to AI agents and NHIs, enriches them with context, and provides a structured investigation workflow with ownership, status tracking, exclusions, and automation hooks.
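The core idea of consolidation, turning many per-identity alerts into one case, can be sketched in a few lines. The field names and case structure here are illustrative assumptions, not Astrix's schema:

```python
from collections import defaultdict

def build_threat_cases(alerts: list) -> dict:
    """Consolidate raw alerts into one Threat Case per identity,
    with ownership and status fields for the investigation workflow."""
    grouped = defaultdict(list)
    for alert in alerts:
        grouped[alert["identity"]].append(alert["signal"])
    return {
        identity: {"signals": signals, "status": "open", "owner": None}
        for identity, signals in grouped.items()
    }

# Hypothetical raw detections across tokens and activity logs.
alerts = [
    {"identity": "svc-agent-1", "signal": "anomalous_token_use"},
    {"identity": "svc-agent-1", "signal": "new_ip_range"},
    {"identity": "svc-agent-2", "signal": "privilege_escalation"},
]
cases = build_threat_cases(alerts)
# Three disconnected alerts become two coherent cases.
```

The point is the shape of the workflow: analysts triage a handful of enriched cases rather than a stream of isolated alerts.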

Why this matters

Instead of chasing isolated alerts, teams can focus on prioritized cases that reflect real risk. Threat Center reduces noise, accelerates investigations, and enables faster, more confident response to agent-driven threats.

Learn more about the thinking behind Threat Center here: https://astrix.security/learn/blog/built-differenta-unified-threat-center-for-ai-agent-security/

AI Agent Risk Engine: Prioritizing what actually poses risk

Not all AI agents are equally risky. Some operate with minimal permissions, while others have broad access to sensitive systems. Treating them the same leads to wasted effort and missed priorities.

The AI Agent Risk Engine was built to help teams distinguish between low- and high-risk agents. It continuously evaluates each agent based on configuration, permissions, and observed behavior, producing an explainable risk score that updates as conditions change. These scores can be used to guide approvals, trigger workflows, or restrict access when risk increases.
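"Explainable" is the key word: a score is only actionable if you can see which factors produced it. A toy sketch of that idea, with entirely hypothetical weights and factor names (not Astrix's scoring model):

```python
def score_agent(agent: dict) -> tuple:
    """Return a 0-100 risk score plus the factors that contributed to it.
    Weights and factors are illustrative assumptions only."""
    score, reasons = 0, []
    if agent.get("admin_scopes"):
        score += 40
        reasons.append("holds admin-level scopes")
    if agent.get("credential_age_days", 0) > 90:
        score += 30
        reasons.append("long-lived credential (>90 days)")
    if agent.get("anomalous_activity"):
        score += 30
        reasons.append("recent anomalous activity")
    return min(score, 100), reasons

# An agent with broad access and a stale credential scores high,
# and the reasons list explains exactly why.
risk, why = score_agent({"admin_scopes": True, "credential_age_days": 120})
# risk == 70, why lists both contributing factors
```

Because the score decomposes into named factors, it can feed approval gates or automated restrictions without being a black box.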

Why this matters

Risk-based prioritization allows teams to focus where it counts. Instead of reacting to everything, they can address the agents most likely to cause impact before issues escalate.

Read more about the AI Agent Risk Engine here: https://astrix.security/learn/blog/dont-just-discover-ai-agents-understand-their-risk/

Agent Control Plane: Secure AI Agents From Day One

Discovery and detection are necessary, but they work best when agents are deployed securely from the start. Many organizations still provision AI agents with broad permissions, long-lived credentials, and little lifecycle oversight.

Astrix built the Agent Control Plane (ACP) to address this. ACP enables teams to provision AI agents with least-privilege access, short-lived credentials, enforced policies, and full auditability. It provides consistent guardrails while allowing developers to deploy agents quickly using pre-approved profiles.
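The combination of pre-approved profiles and short-lived credentials is worth making concrete. Here is a minimal sketch of the pattern, where the profile names, scopes, and TTLs are hypothetical examples rather than ACP's API:

```python
import secrets
import time

# Hypothetical pre-approved profiles mapping an agent role to
# least-privilege scopes and a short credential lifetime.
PROFILES = {
    "support-bot": {"scopes": ["tickets:read", "tickets:comment"], "ttl_seconds": 900},
}

def provision_agent(profile_name: str) -> dict:
    """Issue a scoped, short-lived credential from a pre-approved profile."""
    profile = PROFILES[profile_name]
    return {
        "token": secrets.token_urlsafe(32),
        "scopes": list(profile["scopes"]),
        "expires_at": time.time() + profile["ttl_seconds"],
    }

cred = provision_agent("support-bot")
# The agent gets only ticket-related scopes and a token that
# expires in 15 minutes, so access drift cannot accumulate.
```

Developers self-serve against the profile catalog, while security teams control what each profile can grant: guardrails without a deployment bottleneck.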

Why this matters

Secure-by-design deployment reduces risk before agents ever reach production. ACP helps teams prevent access drift, enforce governance consistently, and scale AI adoption without sacrificing control.

For a deeper dive into Astrix’s ACP, read: https://astrix.security/learn/blog/astrixs-agent-control-plane-acp-secure-ai-agents-from-day-one/

Bringing it together

Together, these capabilities give organizations a structured approach to AI agent security:

  • Discover every agent and integration path
  • Understand which agents pose real risk
  • Respond to threats with context and clarity
  • Enforce secure defaults from deployment onward

As AI agents continue to scale, visibility alone isn’t enough. Control, prioritization, and secure-by-design governance are what make AI adoption sustainable.

See It in Action

If you want to see how these product updates work together, request a demo and learn how Astrix helps teams discover, secure, govern, and deploy AI agents with confidence.

Learn more

Introducing Astrix’s OpenClaw Scanner: A Practical Step Toward Reducing AI Agent Risk

OpenClaw: The Rise, Chaos, and Security Nightmare of the First Real AI Agent

Astrix Recognized in Gartner 2026 Emerging Tech Impact Radar for Identity and Access Management for AI Agents