How does generative AI impact non-human identity security?

The popularity of generative AI apps such as ChatGPT, Gemini, GPT-4, Adobe Firefly, and many more is undeniably changing how organizations operate. While these AI-powered apps offer exceptional capabilities to automate tasks and boost productivity, they also pose significant threats and expand an organization’s attack surface through various threat vectors – a major one being non-human identity risk.

How does generative AI impact security?

We’ve all been there. We hear about a cool new AI marketing optimization tool or an AI code review app and are eager to give it a try. In fact, downloads of AI-powered apps have grown 1,506% year over year, and the trend keeps ballooning. Employees nowadays rush to download and connect every shiny new generative AI app to their core systems and environments. These integrations are enabled by granting AI apps access via API keys, OAuth tokens, service accounts, and other forms of machine credentials and non-human identities.
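
To make the mechanics concrete, here is a minimal sketch (not Astrix’s tooling) of how an admin could enumerate these grants in a Google Workspace domain using the Admin SDK Directory API. It assumes a service account with domain-wide delegation; the key file path and admin address are hypothetical placeholders.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Scopes for reading users and the OAuth tokens they have issued.
SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
]

# Hypothetical placeholders: the service-account key file and the
# admin user to impersonate via domain-wide delegation.
creds = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=SCOPES
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

# Page through every user in the domain.
request = directory.users().list(customer="my_customer", maxResults=500)
while request is not None:
    response = request.execute()
    for user in response.get("users", []):
        email = user["primaryEmail"]
        # Every token here is a non-human identity: a third-party app
        # the user has authorized to act on the organization's data.
        tokens = directory.tokens().list(userKey=email).execute()
        for token in tokens.get("items", []):
            print(email, "->", token.get("displayText"), token.get("scopes"))
    request = directory.users().list_next(request, response)
```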

The Astrix research team found that 32% of GenAI apps integrated into Google Workspace environments have extensive read, write, and delete permissions. Such broad privileges, combined with the lack of governance and management over the non-human identities these third-party integrations create, are an attacker’s dream. 
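
The breadth of a grant is visible in its OAuth scopes. As a rough illustration of that kind of analysis (a heuristic sketch, not Astrix’s actual classification), Google scopes ending in “.readonly” grant read access only, while bare resource scopes typically imply full read/write/delete access:

```python
def classify_scope(scope: str) -> str:
    """Rough heuristic: Google OAuth scopes ending in ".readonly" grant
    read access only; other scopes on the same resource usually imply
    full read/write/delete access."""
    return "read-only" if scope.endswith(".readonly") else "read/write/delete"

# Scopes a hypothetical GenAI integration might request:
granted = [
    "https://www.googleapis.com/auth/drive",           # full Drive access
    "https://www.googleapis.com/auth/gmail.readonly",  # read-only mail
    "https://www.googleapis.com/auth/calendar",        # full Calendar access
]
for scope in granted:
    print(f"{classify_scope(scope):18} {scope}")
```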

If a threat actor compromises the credentials of a generative AI app, they could exploit its permissions to infiltrate an organization’s entire environment and wreak havoc. For that reason, Gartner ranks the risk of integrating with Gen AI apps among the top risks in the Gen AI field.

In addition to unvetted apps and untrustworthy vendors, whose apps could be malicious or hijacked, data sharing with Gen AI apps is another major risk. Samsung suffered multiple code and data leaks because employees innocently shared sensitive information with legitimate GenAI tools. As more unverified AI apps get integrated, even top organizations can lose track of what has access to their core environments, widening the attack surface for threat actors to exploit. This underscores the need for an ITDR (Identity Threat Detection and Response) solution – one that properly governs non-human identities.

How can you secure your generative AI integrations?

To keep using Gen AI while protecting against the risks AI apps pose, your organization needs effective management policies and best practices for its integrations and non-human identities. These include:

  • Automated discovery and inventory of all gen AI apps integrated into your systems
  • Analyzing the permissions and privilege levels granted to each AI integration
  • Monitoring for any suspicious or anomalous activity stemming from AI app credentials (see the sketch after this list)
  • Implementing least privilege access, removing unused integrations, and vetting AI app vendors
  • Establishing clear policies around the use of allowed generative AI apps
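
Building on the enumeration sketch above, the short example below illustrates the monitoring and least-privilege items (again an illustrative sketch, not Astrix’s workflow): it reads the Workspace audit log for new third-party authorization events via the Admin SDK Reports API, then revokes an over-scoped grant. The key file, admin address, user, and client ID are hypothetical placeholders.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/admin.reports.audit.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
]
# Hypothetical key file and admin, as in the earlier sketch.
creds = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=SCOPES
).with_subject("admin@example.com")

reports = build("admin", "reports_v1", credentials=creds)
directory = build("admin", "directory_v1", credentials=creds)

# Monitor: recent OAuth "authorize" events across all users -- each one
# is a new third-party (often GenAI) app connected to the environment.
events = reports.activities().list(
    userKey="all", applicationName="token", eventName="authorize"
).execute()
for activity in events.get("items", []):
    actor = activity.get("actor", {}).get("email", "unknown user")
    for event in activity.get("events", []):
        # "scope" is a multi-valued parameter; handle both shapes.
        params = {p["name"]: p.get("value", p.get("multiValue"))
                  for p in event.get("parameters", [])}
        print(f'{actor} authorized {params.get("app_name")} '
              f'with scopes {params.get("scope")}')

# Least privilege: revoke an unused or over-scoped grant.
directory.tokens().delete(
    userKey="alice@example.com",
    clientId="1234567890.apps.googleusercontent.com",
).execute()
```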

Astrix’s platform provides real-time detection and tailor-made response workflows to accomplish this. Schedule a demo with Astrix’s experts to see how it works, and protect your organization from the non-human identity risks that unvetted AI app use and integration might pose.
