
Securing LLMs, AI Agents, and MCP: Unpacking the New CIS Companion Guides 

Jonathan Sander April 21, 2026

Today the Center for Internet Security (CIS) published three new Companion Guides extending the CIS Critical Security Controls to AI systems: one for LLMs, one for AI agents, and one for MCP environments. The guides are a joint effort between CIS, Astrix Security, Cequence, and contributors from across industries. Astrix is the principal author of the LLM and AI Agent guides and a contributor to the MCP guide. Here's what this post covers:

  • What each of the three guides addresses
  • The principle that connects them
  • Where I’d start to ensure secure AI adoption  

What each guide covers

AI is in production across most enterprises, and the controls that cover it are inconsistent. You probably already know that. The guides close that gap without adding a new framework: they stay inside the 18 CIS Controls you already run, applied through an AI-aware lens.

The three guides map to the three layers security teams have to reason about separately.

  • AI LLM Companion Guide. The model layer. Covers direct and indirect prompt injection, context handling, sensitive data exposure through embeddings and logs, access controls, and model supply chain risk.
  • AI Agent Companion Guide. The reasoning and action layer. Covers tool allowlisting, privilege boundaries, memory integrity, human-in-the-loop for high-impact actions, identity and access management for AI Agents, and the confused deputy problem, where a higher-privileged agent gets nudged into doing things it was never meant to do.
  • MCP Companion Guide. The protocol layer. Covers secure tool access, NHI management, auditable interactions, and protections against rug-pull behavior in MCP servers, where a trusted tool quietly changes behavior after integration.
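
The agent-layer controls above can be made concrete with a short sketch. This is a hypothetical illustration, not from the guides: the tool names, the allowlist, and the approval flag are all invented for the example.

```python
# Minimal sketch of two agent-layer controls: an explicit tool allowlist
# and a human-in-the-loop gate for high-impact actions. All tool names
# are hypothetical; real deployments would back this with a policy engine.

ALLOWED_TOOLS = {"search_docs", "read_ticket"}         # safe, pre-approved tools
HIGH_IMPACT_TOOLS = {"delete_record", "send_payment"}  # allowed, but gated

def dispatch_tool_call(tool: str, args: dict, approved_by_human: bool = False) -> str:
    # Deny by default: anything not explicitly listed is rejected.
    if tool not in ALLOWED_TOOLS | HIGH_IMPACT_TOOLS:
        raise PermissionError(f"tool {tool!r} is not on the allowlist")
    # High-impact actions require a human approval out of band.
    if tool in HIGH_IMPACT_TOOLS and not approved_by_human:
        raise PermissionError(f"tool {tool!r} requires human approval")
    return f"executing {tool} with {args}"
```

The deny-by-default shape matters: a confused-deputy attack works by getting a privileged agent to call a tool it should not, and an allowlist checked outside the model's reasoning loop is what stops that.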

Each guide walks through the 18 CIS Controls and specifies how each safeguard applies when the system being secured is not deterministic. The CIS Implementation Group (IG) guidelines are applied throughout, so teams can scope the work to their risk profile and maturity.

The principle behind all three guides

On the surface, the attack classes named across the three guides look different: prompt injection at the model layer, the confused deputy problem at the agent layer, rug pulls at the MCP layer. Structurally, they’re the same failure. A system without deterministic authorization trusts something it shouldn’t, and takes an action it shouldn’t.

Two rules the guides return to across all three layers:

  • System prompts are configuration, not content. They need the same change control you apply to any other configuration artifact. Treating them as editable copy is how silent policy drift starts.
  • Model output is never an authorization signal. An agent cannot write its own rules. Every action against a secured resource has to pass through deterministic authorization.
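
The second rule can be sketched in a few lines. This is a hypothetical illustration of deterministic authorization, with an invented identity and policy table; the point is only that the decision depends on the agent's non-human identity and a static policy, never on what the model said.

```python
# Sketch of "model output is never an authorization signal": actions are
# authorized against a policy keyed by the agent's non-human identity.
# The identity name and permissions below are hypothetical examples.

POLICY = {
    "agent-svc-account": {"read:tickets", "write:comments"},
}

def authorize(identity: str, permission: str) -> bool:
    # Deterministic: a pure function of identity and policy.
    return permission in POLICY.get(identity, set())

def perform_action(identity: str, permission: str, model_claims_allowed: bool) -> str:
    # The model's own claim is deliberately ignored for authorization.
    if not authorize(identity, permission):
        raise PermissionError(f"{identity} lacks {permission}")
    return f"{identity} performed {permission}"
```

However persuasively a prompt-injected model argues that an action is permitted, `model_claims_allowed` never reaches the decision.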

The second point is what we focus on the most. AI agents act through non-human identities: API keys, service accounts, OAuth tokens, and MCP server credentials. Those NHIs define what an agent can read, write, and execute. That access layer is where agent risk becomes enterprise risk, and where the CIS Controls already apply with full force.

The other through-line across the guides is integration. The discipline the CIS Controls have applied to cloud, containers, and SaaS for years now extends to models, agents, and MCP. No single layer covers the whole picture, which is why the guidance is split three ways. Inputs get sanitized at the model layer. Context and memory get protected across models and agents. Tool calls get validated through MCP. Actions get logged and remain auditable. Security holds when the controls span all three surfaces.

Where I’d start

Open with Controls 1 and 2. You cannot lock down things you don't know exist. An inventory of models, agents, MCP clients and servers, and the credentials that connect them is the precondition for everything else; every subsequent control depends on it.
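
As a rough sketch of what that inventory might look like, here is a hypothetical record shape: one entry per model, agent, or MCP component, with the NHI credentials that connect it. The field names and example entries are invented for illustration.

```python
# Hypothetical sketch of the asset inventory Controls 1 and 2 call for.
# Fields and example values are illustrative, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str
    kind: str     # e.g. "model", "agent", "mcp_client", "mcp_server"
    owner: str    # accountable team or person
    credentials: list = field(default_factory=list)  # NHIs: keys, tokens, service accounts

inventory = [
    AIAsset("support-agent", "agent", "it-ops", credentials=["oauth:zendesk-bot"]),
    AIAsset("docs-mcp", "mcp_server", "platform", credentials=["api-key:docs-ro"]),
]

def assets_with_credential(cred: str) -> list:
    # Answer the question every later control depends on:
    # which assets does this credential reach?
    return [a.name for a in inventory if cred in a.credentials]
```

Even a table this simple lets you answer the question the later controls keep asking: when a credential leaks or a tool changes behavior, what is in the blast radius?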

From there, the implementation groups let you tailor. You don’t have to do everything at once. The guides tell you what to do now and what’s next on the list once you’re ready.

Join me, CIS, and Cequence on May 13 for a live walkthrough of the new Guides. Register here.
