Imagine opening a code project a colleague shared with you, and your AI assistant quietly hands the keys of your entire computer to a stranger — without asking, without warning, without a single click from you.
Who Is Affected — and How Many
Snowflake's Cortex Code CLI is an AI-powered coding assistant used by data engineers, analysts, and developers across thousands of enterprises that run their data infrastructure on Snowflake's cloud platform. Snowflake serves over 9,800 organizations worldwide, including a significant portion of the Fortune 500. While the company hasn't released specific download figures for the CLI tool, enterprise AI coding assistants of this category are embedded in daily workflows across finance, healthcare, retail, and tech sectors — meaning the blast radius of this vulnerability, if exploited at scale, could touch sensitive corporate data, internal pipelines, and developer machines that sit inside otherwise well-defended corporate networks.
The vulnerability affects all versions of Snowflake Cortex Code CLI prior to version 1.0.25 on every major operating system — Windows, macOS, and Linux.
What Actually Happens When You're Attacked
Here's the scenario that should alarm you. A developer — let's call her Maria — clones a public GitHub repository to evaluate some open-source code. The repository looks legitimate: a data transformation tool, maybe some star ratings, a tidy README. But buried inside that repo is a carefully crafted text file. Nothing executable. No suspicious binary. Just text. When Maria uses the Snowflake Cortex Code CLI to analyze or work with that repository, the AI agent reads that file as part of its context — and that's where things go wrong.
The CLI tool uses a sandboxed environment designed to keep AI-generated commands from reaching the actual operating system. Think of it like a glass box — the AI can see out, but it's not supposed to touch anything. The flaw in CVE-2026-6442 means the walls of that glass box had a gap. Malicious instructions embedded in the untrusted content — a repository file, a document, even a data file the AI was asked to summarize — could slip through that gap and tell Maria's computer to do things: steal files, create backdoors, download malware, or silently communicate with an attacker's server. Maria sees nothing. No pop-up. No permission dialog. The AI just… does it.
This style of attack — where an AI agent is manipulated by content it reads rather than by direct user instruction — is increasingly being called a prompt injection via environmental content. What makes this variant especially dangerous is that the attacker never needs to target Maria directly. They just need to publish a poisoned repository and wait for someone using a vulnerable CLI to stumble across it. It's the software equivalent of leaving a booby-trapped package on a doorstep and waiting for any passerby to pick it up.
The Technical Detail That Matters
For security researchers and practitioners: the vulnerability is a bash command injection via improper input validation in the CLI's agent execution layer — classified under CWE-78 (Improper Neutralization of Special Elements used in an OS Command). The agent, when processing tool-use outputs or file-derived context, failed to sanitize shell metacharacters before passing strings to the underlying bash execution environment, enabling escape from the intended sandbox. The flaw carries a CVSS score of 8.3 (HIGH), reflecting high impact to confidentiality, integrity, and availability, with a low attack complexity once a suitable injection vector is established. Notably, Snowflake acknowledges exploitation is non-deterministic and model-dependent — meaning the attack doesn't work 100% of the time, and success varies based on which underlying AI model is processing the content and how it interprets the injected instructions. This makes detection particularly difficult, as failed attempts leave little forensic trace.
Real-World Context: Discovered, Disclosed, and Patched
As of publication, no confirmed active exploitation or known victim campaigns have been publicly attributed to this vulnerability. Snowflake has not disclosed who discovered and reported the flaw, and there is no public bug bounty attribution available at this time. The patch — version 1.0.25 — has already been released, and Snowflake has taken the notably user-friendly step of making the fix automatically applied upon the next relaunch of the CLI, requiring no manual update action from users on standard configurations.
However, security teams should not treat "no known exploitation" as "safe to ignore." The vulnerability class — sandbox escape via injected shell commands in an AI agent context — is receiving intense research attention right now, and proof-of-concept development by external parties is a realistic near-term risk. The fact that attack success is model-dependent may actually slow initial attacker development, but it will not prevent it. This one warrants prompt attention, particularly in organizations where developers routinely interact with public or third-party code repositories using AI-assisted tooling.
What You Should Do Right Now
Security teams and individual developers: here are three concrete steps, in order of priority.
-
Verify your CLI version immediately. Open a terminal and run
cortex --versionor the equivalent version flag for your installation. If the output shows any version below 1.0.25, you are vulnerable. Snowflake states that simply relaunching the CLI should trigger an automatic update — do this now and verify the version number again afterward to confirm the update applied successfully. Do not assume the auto-update worked without confirming. - Audit recent CLI activity on developer machines. If your organization has endpoint detection and response (EDR) tooling, pull process execution logs for any machines running Cortex Code CLI over the past 30–60 days. Look specifically for unexpected child processes spawned from the CLI, outbound network connections to unfamiliar endpoints initiated shortly after CLI use, or file system changes in sensitive directories coinciding with CLI sessions. While no active exploitation is confirmed, the non-deterministic nature of this vulnerability means failed or partial attempts may have occurred without triggering obvious alerts.
- Establish a policy for AI tool interaction with untrusted content. This incident illustrates a category of risk that will not disappear with this single patch: AI coding agents that process external, untrusted content can be weaponized through that content. Until your organization has explicit policies governing which repositories, files, and data sources AI coding assistants are permitted to interact with, you are operating with an undefined attack surface. Start that conversation now. Consider restricting CLI agent features to verified internal repositories on sensitive developer machines while the broader threat model for AI agent tooling matures.
CVE: CVE-2026-6442 | CVSS: 8.3 HIGH | Affected versions: Snowflake Cortex Code CLI < 1.0.25 | Fixed version: 1.0.25 (auto-applied on relaunch) | Platforms: Windows, macOS, Linux