The Invisible Identities Behind AI Agents — and How to Secure Them
In this Hacker News webinar, Jonathan Sander, Field CTO at Astrix Security, joined the conversation to explore the growing security challenges posed by AI agents. Unlike traditional applications, these autonomous systems introduce new access patterns, unpredictable behavior, and an identity challenge that security teams aren’t fully prepared for—yet.
Sander shares why these agents deserve the same level of scrutiny as human users, how organizations can get visibility into AI agent activity today, and why treating them like Non-Human Identities (NHIs) is a practical starting point.
“You’re going to see these agentic systems being overprovisioned. And one dangerous habit we’ve had for a long time is trusting application logic to act as the guardrails. That doesn’t work when your AI agent is powered by an LLM, because LLMs don’t stop and think when they’re about to do something wrong. They just do it.”
Key highlights:
Why AI agents aren’t just another workload: Autonomy changes everything. Sander breaks down how AI agents behave differently from traditional apps—and why security teams need to treat them like human users with sensitive access.
What makes LLM-powered systems uniquely risky: Unlike symbolic code, LLMs don’t follow predictable rules. That means you can’t always forecast how they’ll use the access you give them—making missteps harder to catch and more dangerous.
A pragmatic approach: start with identity: AI agents rely on the same systems and credentials you’re already using (OAuth apps, service accounts, API keys). That’s why treating them like NHIs is a practical, low-friction way to regain control; a brief sketch of what that can look like in practice follows this list.
A live walkthrough of Astrix’s AI visibility: Sander demos how Astrix maps AI agents across environments, surfaces overprovisioned permissions, and identifies risky access patterns—before they become breach material.
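To make the NHI framing concrete, here is a minimal, hypothetical sketch in Python. It treats each agent as an inventory record of granted versus actually observed permissions and flags the gap as overprovisioning. The agent names, scope strings, and data structures are illustrative assumptions, not Astrix functionality or any vendor API; in a real environment the inventory would come from your IdP, cloud IAM, or secrets-management exports.

```python
# Hypothetical sketch: treating AI agents as Non-Human Identities (NHIs).
# All names and scopes below are made-up example data, not a real API.

from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    name: str                       # the agent's OAuth app / service account / API key owner
    granted_scopes: set[str]        # permissions provisioned for it
    observed_scopes: set[str] = field(default_factory=set)  # permissions it actually used

    @property
    def unused_scopes(self) -> set[str]:
        # Scopes granted but never exercised: candidates for removal.
        return self.granted_scopes - self.observed_scopes


# Example inventory (entirely illustrative).
agents = [
    AgentIdentity(
        name="support-triage-agent",
        granted_scopes={"tickets:read", "tickets:write", "users:read", "billing:write"},
        observed_scopes={"tickets:read", "tickets:write"},
    ),
    AgentIdentity(
        name="report-summarizer-agent",
        granted_scopes={"docs:read"},
        observed_scopes={"docs:read"},
    ),
]

for agent in agents:
    if agent.unused_scopes:
        print(f"[overprovisioned] {agent.name}: never used {sorted(agent.unused_scopes)}")
    else:
        print(f"[ok] {agent.name}")
```

The same granted-versus-used comparison applies whether the credential behind the agent is an OAuth grant, a service-account role, or a raw API key; the point of the NHI framing is that these controls already exist and simply need to be pointed at agents.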