900K Users Compromised: Malicious AI Chrome Extensions Steal ChatGPT and DeepSeek Conversations
TL;DR
- Two Chrome Web Store extensions, masquerading as a legitimate AI sidebar product (AITOPIA), were used to steal users’ AI chat conversations and browsing data at scale. The combined install base was approximately 900,000 users, and one listing even carried Google’s “Featured” badge, which likely boosted trust and visibility.
- The extensions worked as advertised (an AI sidebar), but also secretly captured ChatGPT and DeepSeek conversations and all open-tab URLs, then exfiltrated the data every ~30 minutes to attacker-controlled infrastructure.
- The Astrix impact: 85% of our customers removed these extensions before the research was published, as Astrix flagged them early as high-risk and tied to an untrusted vendor, prompting proactive cleanup.
What happened
In late December 2025, OX Security disclosed a browser extension campaign that should change how security teams think about “AI helper” tooling in the browser. Two Chrome Web Store extensions, packaged as an AI sidebar, quietly siphoned ChatGPT and DeepSeek conversations, plus all Chrome tab URLs, and shipped the data to attacker-controlled infrastructure on a 30-minute loop.
This is the follow-up you need if you are trying to answer one question: Are we treating AI browser extensions like the high-risk, high-permission identities they actually are?
Chain of events and attack flow
- Publish and position lookalike extensions: Threat actors cloned the look and functionality of a legitimate AI sidebar product (AITOPIA) and published two convincing Chrome Web Store listings. One listing gained extra visibility and trust via Chrome Web Store signals (for example, “Featured”).
- User installs and grants broad permissions: Users installed the extensions and approved high-privilege access (notably the ability to read content across sites), enabling full-page visibility into web apps, including AI chat interfaces.
- Misleading consent, framed as “analytics”: The extensions prompted for consent framed as “anonymous analytics,” while the actual behavior included collecting browsing activity and full AI chat content.
- Continuous monitoring of browsing activity: The extensions listened for tab and URL updates, captured URLs across open tabs, and staged the collected data locally.
- Targeted theft of ChatGPT and DeepSeek conversations: When users visited ChatGPT or DeepSeek, the extensions detected the relevant URLs, scraped prompt and response content directly from page elements (the DOM), and recorded session context (such as chat/session identifiers).
- Batch exfiltration to attacker infrastructure: Staged data was transmitted to attacker-controlled endpoints (including domains highlighted by OX Security) on a predictable cadence, roughly every 30 minutes, often encoded (for example, base64) in transit.
- Persistence via “handoff” behavior: If a user uninstalled one extension, it attempted to push the user to install the second malicious extension to maintain access.
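A fixed ~30-minute exfiltration cadence is exactly the kind of pattern defenders can hunt for in proxy or DNS logs. Below is a minimal Python sketch of that idea: it flags destination hosts contacted at near-constant intervals. The log source, the tuple format, and the jitter tolerance are all assumptions, not details from the OX Security research.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_beacons(events, min_hits=4,
                 period=timedelta(minutes=30),
                 jitter=timedelta(minutes=2)):
    """Flag hosts contacted on a near-fixed cadence.

    events: iterable of (timestamp, host) pairs, e.g. parsed from
    proxy or DNS logs (the log source and format are assumptions).
    """
    by_host = defaultdict(list)
    for ts, host in events:
        by_host[host].append(ts)

    beacons = []
    for host, times in sorted(by_host.items()):
        times.sort()
        if len(times) < min_hits:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Beaconing shows up as near-constant gaps close to the period.
        if all(abs(g - period) <= jitter for g in gaps):
            beacons.append(host)
    return beacons
```

In practice you would loosen `jitter` and `min_hits` to tolerate sleep/wake gaps and browser restarts; real beacon detection is noisier than this sketch.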
What vulnerabilities were exploited
This was less a single software vulnerability than a stack of ecosystem weaknesses:
Extension supply chain and marketplace trust gaps
A malicious or modified extension can still reach massive distribution, even earning trust signals like “Featured.”
Over-privileged extension permissions
“Read all website content” makes it trivial to scrape sensitive data from high-value web apps (including AI chat UIs).
Misleading consent and privacy policy language
The extensions framed the collection as “anonymous analytics,” while actually exfiltrating full browsing and conversation content, and the privacy policies understated or omitted remote transmission.
In-browser data exposure
AI chat content is displayed in the DOM, so any extension with page access can scrape it.
The real impact
If an employee installed these extensions, the threat actor could exfiltrate high-value data streams that compound risk fast:
- Proprietary engineering data: including source code, dev questions, and debugging context shared in ChatGPT or DeepSeek conversations
- Sensitive business intelligence: including strategy, competitive insights, and planning discussions captured from AI chats
- Regulated and confidential content: including PII, legal topics, research, and other internal communications disclosed in conversations
- Complete browsing visibility: including full URLs across all open tabs and search queries that reveal investigations, priorities, and active projects
- Account and environment exposure: including URL parameters that may contain session tokens or user IDs, plus internal corporate URLs that map tools, workflows, and organizational structure
This combination enables corporate intelligence theft and account compromise, and it gives attackers exactly what they need to craft high-precision phishing and business email compromise campaigns.
In many organizations, it also means silent leakage of intellectual property, customer data, and confidential business information that was never intended to leave the enterprise.
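The URL-parameter exposure is worth spelling out, because it is easy to underestimate. A harvested URL list can be scanned for query parameters that commonly carry secrets or identity; the sketch below does exactly that. The parameter names treated as sensitive are an assumed list for illustration, not a complete catalogue.

```python
from urllib.parse import parse_qsl, urlsplit

# Query parameter names that often carry secrets or identity (assumed list).
SENSITIVE_PARAMS = {
    "token", "access_token", "session", "sessionid",
    "sid", "api_key", "auth", "user_id",
}

def sensitive_params(url: str) -> list[str]:
    """Return sensitive-looking query parameter names found in a URL."""
    query = urlsplit(url).query
    return [k for k, _ in parse_qsl(query) if k.lower() in SENSITIVE_PARAMS]
```

Running a scan like this over your own proxy logs gives a quick estimate of what an extension with full-tab URL visibility would actually have captured.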
How Astrix helps customers avoid this exact risk
This incident is a textbook example of why AI tooling and “helpful” browser add-ons must be treated as third-party risk with identity and data consequences, not just endpoint hygiene.
Astrix helps teams get ahead of this by:
- Surfacing untrusted suppliers and high-risk tools early, before they become normalized in the business.
- Enforcing governance on AI apps and extensions that create unmanaged exposure paths into sensitive workflows.
- Reducing blast radius by identifying where sensitive data can be accessed, moved, or copied by tools operating outside approved controls.
- Driving action by helping security teams trigger remediation and user outreach when risky tools show up.
If this feels theoretical, it’s not. We’ve seen teams uncover these extensions in their environment, intervene fast, and prevent sensitive data from being shared through an untrusted channel:
“While using Astrix, our team identified a suspicious NHI disguised as an official app of a known tool. With Astrix’s risk calculation and visibility, we fully removed it from the organization. Thanks to the collaboration between Astrix and our team, we prevented Guesty’s workspace from being compromised.”
– Kobi Afuta, Head of Cyber Security at Guesty
What to do now
- Remove the extensions immediately: Remove them across managed browsers, and block re-install where possible.
- Invalidate sessions and rotate exposed secrets: Reset passwords where appropriate, and rotate API keys, tokens, and any credentials that may have been pasted into AI chats.
- Assume prompt leakage: Treat AI prompts as potentially exfiltrated data, especially if they included code, customer info, internal URLs, or troubleshooting steps.
- Audit for internal URL exposure: Review which internal tools might have been exposed via full-tab URL collection (including sensitive query parameters).
- Harden browser extension governance: Restrict installation to allow-listed extensions, and require supplier trust and risk review for AI-related tools, even if they look popular or “Featured.”
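On managed Chrome, the allow-listing step can be enforced through enterprise policy rather than relying on user discipline. A minimal sketch using Chrome’s `ExtensionInstallBlocklist` and `ExtensionInstallAllowlist` policies (the extension ID below is a placeholder, and how the policy is deployed depends on your OS and MDM tooling):

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["aaaabbbbccccddddeeeeffffgggghhhh"]
}
```

The wildcard in the blocklist blocks every extension by default, so only IDs explicitly present in the allowlist can be installed, which also blocks re-installation of removed malicious extensions.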
Closing thought
AI tooling is moving faster than enterprise controls. Browser extensions are one of the easiest ways for that speed to turn into silent data loss.
Move now: inventory what’s installed, restrict what can be installed, and put continuous monitoring in place for supplier trust, permission drift, and suspicious outbound data paths. If you wait for an incident to force the issue, you are already behind.