AI Agent Governance at Scale with NHI Security: Case Study
A long-time Astrix customer, already securing non-human identities (NHIs) like API keys and service accounts across their enterprise, faced a growing challenge: governing AI agents developed internally and through ChatGPT.
The CISO was concerned about potential security and compliance risks stemming from the lack of visibility and policy enforcement. Seeing Astrix as a natural partner, they extended the collaboration to govern GPTs as part of their NHI strategy.
Within the first three months of implementation, Astrix significantly enhanced the customer’s ability to adopt AI agents while maintaining strong governance.
This is a major brand — widely recognized, and chances are you’ve used their services yourself.
The Challenge: Accelerated AI Agent Adoption
Initially, the company decided to pilot three agentic AI platforms:
- Vertex AI
- ChatGPT Enterprise
- Glean
They granted enterprise licenses to 200 developers out of a broader engineering organization of thousands. Our focus here is on ChatGPT Enterprise.
Within a month and a half, those developers had built around 400 GPTs. Of those, 240 were published, while the rest remained in draft.
At this point, the GRC manager and the CISO asked the obvious question: “What do these GPTs have access to?”
It was a simple question, but the customer already felt they were losing their grip on the environment.
The Solution: Astrix for AI Agent Governance
Astrix deployed continuous monitoring and governance tools for AI agent activity, including:
- Inventory mapping of all GPTs and their NHIs, secrets, and permissions.
- Detection of shared files and connected external systems.
- Real-time alerts on access and usage changes.
- Policy enforcement recommendations (e.g., restricting “link-based” sharing); a minimal sketch of this kind of check follows this list.
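To make the policy idea concrete, here is a minimal sketch of the kind of check such tooling might run against a GPT inventory. Everything in it is illustrative: the `gpt_inventory.json` export, its field names, and the `anyone_with_link` sharing value are assumptions for this example, not an actual Astrix or OpenAI schema.

```python
import json

# Illustrative policy: GPTs shared with "anyone with a link" must not be
# connected to sensitive external systems. The inventory schema used here
# is a hypothetical export format, not an actual Astrix or OpenAI API.
SENSITIVE_SYSTEMS = {"bigquery", "jira", "atlassian"}

def flag_risky_gpts(inventory_path: str) -> list[dict]:
    """Return GPTs whose sharing mode plus connections violate the policy."""
    with open(inventory_path) as f:
        inventory = json.load(f)

    flagged = []
    for gpt in inventory:
        link_shared = gpt.get("sharing") == "anyone_with_link"
        connections = {c.lower() for c in gpt.get("connections", [])}
        sensitive = SENSITIVE_SYSTEMS & connections
        if link_shared and sensitive:
            flagged.append({
                "name": gpt["name"],
                "owner": gpt.get("owner", "unknown"),
                "sensitive_connections": sorted(sensitive),
            })
    return flagged

if __name__ == "__main__":
    for finding in flag_risky_gpts("gpt_inventory.json"):
        print(f"RISK: {finding['name']} (owner: {finding['owner']}) "
              f"is link-shared and connects to {finding['sensitive_connections']}")
```

In practice, a platform like Astrix builds this inventory continuously from the workspace itself rather than from a one-off export, so a new link-shared GPT with a sensitive connection triggers an alert rather than waiting for a scheduled scan.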
Security Issues Detected in the First Week
- 250+ GPTs identified in the organization’s workspace.
- ~25% of agents were accessible by any user, some with direct access to sensitive resources like production BigQuery tables.
- Sensitive files were used as training data by publicly accessible GPTs.
- Some GPTs held admin-level scopes and could interact with external platforms such as Jira.
- 10% of GPTs had direct API access to external systems such as BigQuery and Atlassian.
- Astrix traced one BigQuery connection back to a developer who had connected a credential with good intentions but, in doing so, had unintentionally exposed PII from BigQuery to ChatGPT. The GRC manager and CISO immediately flagged this as a nightmare scenario; a sketch of how such a credential’s reach can be audited follows below.
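As a rough illustration of that last finding, the sketch below enumerates what a given BigQuery service-account credential can actually read and flags columns whose names hint at PII. The key-file path and the `PII_HINTS` heuristic are assumptions for this example; it checks top-level column names only, and a real audit would rely on proper data classification rather than name matching.

```python
# Requires: pip install google-cloud-bigquery
from google.cloud import bigquery
from google.oauth2 import service_account

# Heuristic only: production audits should use data classification, not names.
PII_HINTS = ("email", "phone", "ssn", "address", "birth")

def audit_credential(key_path: str) -> None:
    """Print every PII-looking column readable with the given credential."""
    creds = service_account.Credentials.from_service_account_file(key_path)
    client = bigquery.Client(credentials=creds, project=creds.project_id)

    # Walk every dataset and table visible to this credential.
    for dataset in client.list_datasets():
        for table_item in client.list_tables(dataset.dataset_id):
            table = client.get_table(table_item)
            # Top-level columns only; nested RECORD fields are not traversed.
            pii_cols = [field.name for field in table.schema
                        if any(hint in field.name.lower() for hint in PII_HINTS)]
            if pii_cols:
                print(f"EXPOSED: {table.full_table_id} -> {pii_cols}")

if __name__ == "__main__":
    # Hypothetical path to the over-broad credential under investigation.
    audit_credential("developer-service-account.json")
```

If a credential handed to a GPT can see anything this script prints, every user who can reach that GPT can potentially see it too, which is exactly the exposure the customer discovered.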
Impact
- High-risk GPTs were identified and remediated.
- Unrestricted access to sensitive systems and files was curtailed.
- Governance controls were implemented to support compliance.
- Security posture improved as AI agent use expanded responsibly and transparently.
The incident reinforced the critical need to establish governance before scaling beyond the initial 200 developers, heading off significantly greater risk in full production.
Want to see how Astrix can help your organization securely scale AI agent adoption — without losing governance or control? Learn more about our approach to securing non-human identities and governing AI agents here.