---
title: Claude Code May Be Too Dangerous for Enterprise Use Today
description: Discover the risks of the Claude Code leak and essential insights for CISOs on enterprise security and supply chain vulnerabilities.
url: https://ziosec.com/blog/claude-code-may-be-too-dangerous-for-enterprise-use-today
category: Feed
publishedAt: 2026-04-01
author: ZioAI
authorRole: Research
tags: Claude Code, enterprise security, AI source code risks, CISO best practices, supply chain security, software vulnerabilities, security incident response
---

On the morning of March 31, 2026, a single misconfigured file turned Anthropic's crown jewel into the most dissected codebase on the internet. Within hours, 512,000 lines of Claude Code source code were mirrored across GitHub, rewritten in Python and Rust, and studied by tens of thousands of developers worldwide.

For any company, such a significant leak would be a disaster. For Anthropic, the self-described "safety-first" AI lab that derives 80% of its revenue from enterprise clients, it resulted in a credibility crisis. This incident raises critical questions for every Chief Information Security Officer (CISO) about whether Claude Code is fit for production environments.

And the source code leak was only part of the problem.

## Understanding the Sequence of Events

At approximately 4:00 AM UTC on March 31, Anthropic pushed version 2.1.88 of its @anthropic-ai/claude-code package to the public npm registry. This release inadvertently included a 59.8 megabyte JavaScript source map file, a debugging artifact that connects minified production code back to its original, human-readable format. That file pointed to a publicly accessible zip archive located in Anthropic's Cloudflare R2 storage bucket, containing nearly 2,000 TypeScript files.

This incident was not the result of a hack, exploit, or nation-state adversary. It stemmed from a simple oversight: a missing line in a configuration file.
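The actual packaging configuration inside @anthropic-ai/claude-code is not public, but the failure mode is easy to picture. By default, npm includes everything in the package directory except what `.npmignore` excludes, so a single missing pattern is enough to ship a debug artifact. An illustrative (hypothetical) `.npmignore`:

```
# Illustrative .npmignore — not Anthropic's actual file
*.map          # the kind of line whose absence ships a source map
*.tsbuildinfo  # TypeScript incremental-build artifact
src/
```

The safer inverse is a `files` allowlist in package.json (e.g. `"files": ["cli.js"]`), under which anything not explicitly listed is excluded from the published tarball by default.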

By 4:23 AM, a security researcher had tweeted a direct download link. Within two hours, a mirrored repository had amassed 50,000 GitHub stars, making it one of the fastest-growing repositories in the platform's history.

Alarm bells should be ringing for enterprise security leaders. In late 2025, Anthropic acquired the Bun JavaScript runtime, which serves as the foundation for Claude Code. Bun's bundler generates source maps by default unless explicitly configured otherwise. Initially, some speculated that a known Bun bug (oven-sh/bun#28001, filed on March 11) caused the leak. Boris Cherny, the creator and lead of Claude Code at Anthropic, clarified publicly: "No, can confirm it was not related to bun. Just developer error." That statement points to a more concerning reality: the release process for one of the most commercially important AI products on the market had no safety net against a simple human mistake.

## AI's Role in Code Vulnerabilities

A critical aspect of this story often overlooked relates to the source of Claude Code's vulnerability: the AI's involvement in its own creation. Boris Cherny stated that all his code contributions to Claude Code were generated by the AI itself. Over one month in late 2025, he recorded 259 pull requests and 497 commits—each line written by Claude Code. By November 2025, the team reached a point where the tool was essentially building itself. Reports suggested that 4% of all GitHub commits were attributed to Claude Code, with projections indicating a rise to 20% by the end of the year.

This insight raises serious security concerns. The leaked codebase, which was meant to stay private, contained Anthropic's entire product roadmap, anti-distillation mechanisms, and security guardrails, with much of it generated by the very tool whose release process omitted a critical line from the .npmignore file.

This issue goes beyond philosophical debates about AI. It represents a tangible operational risk. As AI agents begin creating the build configurations, release pipelines, and packaging logic for enterprise tools, the lack of human supervision creates vulnerabilities at critical junctures. Legal rulings, such as the DC Circuit's 2025 decision that AI-generated work lacks automatic copyright protection, add another layer of complexity to Anthropic's ability to enforce takedowns of AI-authored code. Community-developed rewrites have already been deemed DMCA-proof.

## The Supply Chain Vulnerability Landscape

As if the source code leak wasn't enough, a separate supply chain attack simultaneously compromised the npm ecosystem on the same day. Between 00:21 and 03:29 UTC on March 31, malicious versions of Axios, one of the world's most widely used JavaScript packages with 83 million weekly downloads, were published. These compromised versions (1.14.1 and 0.30.4) were published using the primary maintainer's stolen npm credentials, injecting a rogue dependency named "plain-crypto-js" that deployed a cross-platform remote access trojan.

The attack was meticulously planned. The malicious dependency was staged 18 hours in advance, with three RAT payloads pre-built for macOS, Windows, and Linux. Both Axios branches were poisoned within 39 minutes, leaving no trace after execution.

Any enterprise that installed or updated Claude Code during this window might have inadvertently pulled a trojanized version of Axios. This risk is not hypothetical. Security researchers have identified additional packages distributing the same malware, including one named @shadanai/openclaw that embedded the malicious payload directly within its distribution files. Elastic Security Labs later linked the macOS binary from this attack to WAVESHAPER, a known backdoor attributed to North Korean threat actor UNC1069.

## Assessing Enterprise Exposure

For enterprises using Claude Code, the leaked source code poses more than just an intellectual property embarrassment; it represents an active security liability. The exposed codebase revealed the orchestration logic for Claude Code's tool system, permission gates, server integrations, and multi-agent coordination. AI security firm Straiker bluntly stated that attackers could now scrutinize the data flow through Claude Code's context management pipeline and develop payloads designed for persistence across sessions.

Additionally, the leak divulged internal model codenames, unreleased feature flags—including an autonomous daemon mode named KAIROS—and internal benchmarks indicating that a newer model variant has a false claims rate almost double that of its predecessor. A bug fix comment disclosed that 250,000 API calls per day were wasted globally due to a compaction failure.

This is the second occurrence of this exact class of error. A nearly identical source map leak happened with an earlier version of Claude Code in February 2025. And just five days before the March 31 incident, a CMS misconfiguration at Anthropic exposed roughly 3,000 internal files describing an unreleased model that, by Anthropic's own account, poses unprecedented cybersecurity risks.

In total, three significant accidental disclosures in about 13 months—two within a single week—highlight a troubling trend.

## Implementing Governance to Mitigate Risk

Claude Code is more than a development tool; with an estimated $2.5 billion in annualized recurring revenue, it is relied upon by major enterprises including Netflix, Spotify, KPMG, and Salesforce. A tool with this breadth of access, from reading the filesystem to executing shell commands and orchestrating complex multi-step workflows, demands a commensurate level of security rigor. The events of March 31 show that the controls around it do not yet meet enterprise standards.

Several failures converged: a developer neglected to exclude source maps from a production build, no automated safeguard caught the mistake, and a cloud storage bucket holding the complete source code was left publicly accessible. At the same moment, the npm distribution channel fell to a sophisticated supply chain attack. The confirmed root cause of the leak was human error rather than a tool defect, and, most concerning, it was the second occurrence of the same failure in just over a year.
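The missing automated safeguard is the easiest of these failures to retrofit: a release-pipeline check that inspects the tarball npm would publish and refuses to proceed if any debug artifact slips in. The sketch below is ours, not Anthropic's; it relies on `npm pack --dry-run --json`, which lists the files npm would include without actually publishing anything.

```python
import fnmatch
import json
import subprocess
import sys

# Patterns that should never appear in a published tarball.
# "*.map" alone would have caught the Claude Code leak.
FORBIDDEN = ["*.map", "*.tsbuildinfo", ".env*"]

def forbidden_files(paths):
    """Return the subset of tarball paths matching a forbidden pattern."""
    return [
        p for p in paths
        if any(fnmatch.fnmatch(p, pat) or fnmatch.fnmatch(p.rsplit("/", 1)[-1], pat)
               for pat in FORBIDDEN)
    ]

def main():
    # `npm pack --dry-run --json` emits a JSON array; files live under "files".
    out = subprocess.run(["npm", "pack", "--dry-run", "--json"],
                         capture_output=True, text=True, check=True).stdout
    paths = [f["path"] for f in json.loads(out)[0]["files"]]
    bad = forbidden_files(paths)
    if bad:
        print("Refusing to publish; debug artifacts in tarball:", *bad, sep="\n  ")
        sys.exit(1)
```

Wired into a `prepublishOnly` script, a check like this turns the silent failure of March 31 into a hard stop in CI.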

This pattern of vulnerabilities cannot be ignored.

## Immediate Actions for Enterprises

For businesses utilizing Claude Code, immediate action is essential. Audit whether any installations or updates occurred via npm between 00:21 and 03:29 UTC on March 31. Review lockfiles for Axios versions 1.14.1 or 0.30.4, or any reference to "plain-crypto-js." If any indicators are found, treat the affected machine as compromised and rotate all credentials it held. It is also advisable to migrate to the native installer, which bypasses the npm dependency chain entirely.
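That lockfile review can be scripted. The indicator values below are the ones reported for this incident; the script itself is an illustrative sketch for npm v7+ `package-lock.json` files, not an official detection tool, and it does not cover yarn or pnpm lockfiles.

```python
import json
from pathlib import Path

# Indicators of compromise reported for the March 31 incident.
COMPROMISED_AXIOS = {"1.14.1", "0.30.4"}
ROGUE_PACKAGES = {"plain-crypto-js", "@shadanai/openclaw"}

def scan_lockfile(path):
    """Return a list of indicator hits found in an npm v7+ package-lock.json."""
    lock = json.loads(Path(path).read_text())
    hits = []
    # Modern lockfiles list every installed package under "packages",
    # keyed by its node_modules path ("" is the root project itself).
    for pkg_path, meta in lock.get("packages", {}).items():
        name = pkg_path.rsplit("node_modules/", 1)[-1]
        if name == "axios" and meta.get("version") in COMPROMISED_AXIOS:
            hits.append(f"axios@{meta['version']} at {pkg_path}")
        elif name in ROGUE_PACKAGES:
            hits.append(f"{name} at {pkg_path}")
    return hits
```

Any hit means the machine that ran `npm install` against that lockfile should be treated as compromised, not merely cleaned.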

Beyond incident response, there lies a deeper governance question. AI coding agents are no longer mere experimental tools limited to individual developers; they have become a crucial part of software infrastructure. They execute code, handle sensitive information, and wield substantial power across enterprise environments. Consequently, the security controls governing these tools must equate to the level of access they are granted.

## The ZioSec Approach to Governance

At ZioSec, we have designed our platform specifically to address the challenges presented by the rapid adoption of AI agents in enterprise environments. A staggering 85% of the agentic attack surface remains unexplored, and nearly half of CISOs predict that agentic AI will be the leading attack vector in 2026. The events of March 31 serve as a sobering reminder that even the companies developing these AI tools struggle to secure their supply chains.

Our differentiated approach involves providing containerized deployment, continuous penetration testing with a trained offensive AI agent, automatic policy composition, and comprehensive governance across any AI framework. We do not presume the safety of the toolchain; we systematically verify it, continuously.

While shadow AI presents challenges, these challenges can be effectively managed when organizations are equipped to enable agents securely. Achieving this requires visibility, testing, and governance—capabilities that most organizations currently lack.

The Claude Code leak stands as a critical alert for enterprises. The question remains: has your organization acknowledged the implications of this incident?