Two Leaks in Five Days: Anthropic's Security Crisis
A CMS misconfiguration on March 26, then an npm source map error on March 31 — two major data exposures in less than a week raise serious questions about Anthropic's operational security.
TL;DR: Between March 26 and March 31, 2026, Anthropic suffered two separate data exposures through two entirely different systems. The first was a CMS misconfiguration that exposed roughly 3,000 unpublished assets, including the draft blog post revealing Claude Mythos. The second was an npm packaging error that shipped a 59.8 MB source map in Claude Code v2.1.88, leading to the exposure of 512,000 lines of proprietary TypeScript. Both incidents were attributed to human error. Both reflected the same underlying problem: insufficient safeguards in deployment and release pipelines. For a company seeking a $60B+ IPO and building AI it describes as “far ahead of any other AI model in cyber capabilities,” the pattern is difficult to dismiss.
Timeline: Two Anthropic Leaks in One Week
March 26, 2026 — CMS Misconfiguration (Leak 1)
Security researchers Roy Paz (LayerX Security) and Alexandre Pauwels (University of Cambridge) discovered that a misconfiguration in Anthropic’s content management system had left approximately 3,000 unpublished assets publicly accessible without authentication. The exposed material included draft blog posts, internal model specifications, and development files. The most consequential item was a draft describing a new model tier called “Capybara” and its first model, Claude Mythos, characterized as “a step change” in AI capabilities. The leak triggered immediate media coverage from Fortune, CNBC, and others. Cybersecurity stocks sold off sharply, with the Global Cybersecurity Index falling 4.2% and an estimated $400 billion in market capitalization erased within 24 hours.
March 31, 2026 — npm Source Map Exposure (Leak 2)
Security researcher Chaofan Shou discovered that version 2.1.88 of the @anthropic-ai/claude-code npm package included a 59.8 MB JavaScript source map file. The map referenced a zip archive on a publicly accessible Cloudflare R2 storage bucket. The archive contained approximately 1,900 TypeScript files totaling over 512,000 lines of code — the complete source tree for Claude Code. A backup repository was forked more than 41,500 times before Anthropic could respond. The code included 44 unreleased feature flags, among them references to an autonomous daemon mode (KAIROS), parallel worker agents (Coordinator Mode), and a remote multi-agent planning system (ULTRAPLAN).
Side by side:
| | Leak 1 (March 26) | Leak 2 (March 31) |
|---|---|---|
| Vector | CMS misconfiguration | npm source map in published package |
| Discoverer | Roy Paz, Alexandre Pauwels | Chaofan Shou |
| Scope | ~3,000 unpublished assets | ~1,900 files, 512,000 lines of source |
| Key exposure | Claude Mythos model details | Claude Code full source tree + 44 feature flags |
| Market impact | $400B cybersecurity selloff | Reputational; 41,500+ GitHub forks |
| Anthropic’s framing | Internal publishing error | “Release packaging issue caused by human error” |
| Days between | — | 5 |
How Each Anthropic Leak Happened
The two leaks exploited different systems through different mechanisms, but the root cause pattern was identical.
Leak 1: CMS configuration error. Anthropic’s content management system stored draft assets in a data store that was publicly searchable. No authentication gate stood between the public internet and roughly 3,000 items that were never intended for external access. The researchers who found it did not need to bypass any security controls. The assets were simply there, indexed and accessible. Anthropic did not disclose how long the misconfiguration had been in place or how the assets came to be stored without access restrictions.
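The missing control here is straightforward to automate. As a sketch, a post-deploy audit could request every draft-asset URL without credentials and fail the deployment if any of them answers with a 200. The helper names, the URL list, and the "refusal" policy below are illustrative assumptions, not details of Anthropic's actual CMS:

```typescript
// Hypothetical post-deploy audit: every draft asset must refuse
// unauthenticated requests. A draft counts as protected only if the
// server turns the request away (401/403) or hides it entirely (404).
export function isProtected(status: number): boolean {
  return status === 401 || status === 403 || status === 404;
}

// Example wiring (Node 18+ global fetch): returns the URLs that are
// publicly readable when they should not be.
export async function auditDraftAssets(urls: string[]): Promise<string[]> {
  const exposed: string[] = [];
  for (const url of urls) {
    const res = await fetch(url, { redirect: "manual" });
    if (!isProtected(res.status)) exposed.push(url); // 200 on a draft = leak
  }
  return exposed;
}
```

Run as a gate after each CMS deployment, a non-empty result would block the release rather than wait for an outside researcher to find the open index.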
Leak 2: npm build pipeline failure. Claude Code’s build process failed to strip a JavaScript source map from the production package before publishing to npm. Source maps are standard development artifacts used to map bundled code back to original source files. They are routinely excluded from production builds precisely because they expose internal code structure. In this case, the source map also referenced a Cloudflare R2 storage bucket that lacked access controls. The combination — a source map that should not have been published pointing to a bucket that should not have been public — created a chain of exposure that neither error alone would have produced.
Both incidents were classified by Anthropic as “human error.” That framing is technically accurate but incomplete. Human error is a constant in every organization. The relevant question is not whether humans make mistakes but whether systems exist to catch those mistakes before they reach production.
In Leak 1, a CMS deployment pipeline apparently lacked a validation step to confirm that draft assets were not publicly accessible. In Leak 2, an npm publishing pipeline apparently lacked a check for anomalous file sizes or the presence of source map files in the distribution bundle. A 59.8 MB file in an npm package is conspicuous by any standard. Neither error was exotic or novel. Both are well-understood failure modes with well-established preventive measures.
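A pre-publish check of this kind fits in a few dozen lines. The sketch below walks a build output directory, flags any `.map` file and any file over a size ceiling; the directory name and the 5 MB threshold are assumptions for illustration, not values from Anthropic's pipeline:

```typescript
// Minimal pre-publish gate (sketch): reject source maps and
// anomalously large files before the package reaches npm.
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

const MAX_FILE_BYTES = 5 * 1024 * 1024; // a 59.8 MB file fails loudly

export function findViolations(dir: string): string[] {
  const violations: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) {
      violations.push(...findViolations(path));
    } else if (entry.name.endsWith(".map")) {
      violations.push(`${path}: source map in production bundle`);
    } else if (statSync(path).size > MAX_FILE_BYTES) {
      violations.push(`${path}: exceeds ${MAX_FILE_BYTES} bytes`);
    }
  }
  return violations;
}

// In CI, a non-empty result would abort the publish:
// if (findViolations("dist").length > 0) process.exit(1);
```

Either rule alone would have stopped the v2.1.88 release: the source map fails the extension check, and its 59.8 MB size fails the threshold check by an order of magnitude.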
The pattern is not that Anthropic’s employees made mistakes. The pattern is that Anthropic’s release and deployment processes did not have adequate guardrails to prevent known categories of mistakes from reaching the public.
The Irony of Anthropic’s Security Lapses
The leaked Mythos draft blog post described the model as “far ahead of any other AI model in cyber capabilities.” Anthropic’s own language positioned Claude Mythos as a generational advance in AI-powered cybersecurity — a system so capable that its release strategy required a “slower, more gradual approach” with early access limited to cyber defenders.
Five days after that description became public through a preventable CMS error, the company’s entire Claude Code source tree became public through a preventable npm packaging error.
Fortune’s headline on the second incident did not mince words: it characterized the Claude Code exposure as Anthropic’s “second major security breach” in five days. Anthropic objected to the word “breach,” drawing a distinction between an accidental exposure caused by human error and a breach involving unauthorized access. The distinction matters for legal and regulatory purposes. It matters less for the narrative that emerged.
The gap on display is not between Anthropic’s AI capabilities and its competitors’. It is between the sophistication of the AI systems Anthropic builds and the operational discipline of the infrastructure those systems are built on. Anthropic is not unique in this regard — the history of technology companies is littered with examples of world-class engineering teams undermined by mundane operational failures. But Anthropic is unique in the specificity of the contrast: a company whose flagship product is described as an unprecedented cybersecurity tool, exposed twice in five days by the kind of configuration and packaging errors that a basic CI/CD audit would catch.
The irony was not lost on the security community. Researchers and commentators noted that the preventive measures for both incidents — access control validation on CMS deployments, source map stripping and file size checks in npm publishing — are standard practices, not cutting-edge techniques. Anthropic did not need its own AI to prevent these leaks. It needed a checklist.
What This Means for Anthropic’s $60B IPO
Anthropic is targeting an initial public offering in Q4 2026, with October widely discussed as the likely window. The company’s most recent private fundraise, in February 2026, valued it at approximately $380 billion on $30 billion raised. Reporting indicates Anthropic has engaged Wilson Sonsini Goodrich & Rosati as IPO counsel, a signal that preparations are well underway.
The offering is expected to raise more than $60 billion in proceeds, which would place it among the largest technology IPOs in history. At this scale, the offering will be scrutinized by institutional investors, regulators, and the financial press with a level of intensity that most private companies never face.
Two data exposures in five days create a narrative problem that is distinct from — and in some ways more damaging than — a single incident.
A single leak can be framed as an isolated mistake. Two leaks through two different systems in less than a week suggest a systemic gap in operational controls. Investors evaluating Anthropic’s IPO will weigh the company’s technical moat — its model capabilities, its research talent, its competitive position — against the operational risk demonstrated by these incidents.
The precedent from other pre-IPO security incidents is not encouraging. When companies approaching public offerings suffer high-profile operational failures, the long-term effect is typically not on valuation multiples but on the terms and conditions investors demand: stronger governance provisions, more detailed risk disclosures, and in some cases, executive accountability measures tied to operational security. The reputational damage from the leaks themselves fades. The structural changes investors require as a condition of participation do not.
For Anthropic, the calculus is specific. The company’s value proposition rests on trust: trust that it can build the most capable AI systems, and trust that it can deploy them responsibly. The second half of that proposition is harder to sustain when basic deployment pipelines fail twice in a week.
The February 2026 raise at a $380 billion valuation demonstrates that private market investors have already priced in Anthropic’s technical leadership. The IPO will test whether public market investors are equally willing to look past operational risk. The two leaks give skeptics a concrete, recent, and easily understood basis for concern.
Can Anthropic Rebuild Trust?
The path to restoring confidence before an IPO filing requires specific, verifiable actions rather than reassuring statements.
A public post-mortem for both incidents. Anthropic has not published a detailed account of what went wrong in either leak, what the root causes were, or what specific changes have been implemented. The company’s statements to date have been limited to classifying the incidents as human error and confirming that the immediate exposures were remediated. For a company that emphasizes transparency as a core value, the absence of a technical post-mortem is conspicuous.
An independent security audit of deployment and release pipelines. The two leaks came from different systems (CMS and npm), suggesting that the gap in controls is not confined to a single pipeline. A credible audit would need to cover the full range of systems through which Anthropic publishes code, content, and artifacts to external platforms.
Process changes with measurable outcomes. Automated checks for public accessibility of draft assets. Mandatory source map stripping and file size validation in npm publishing. Pre-release security review gates. These are not novel practices. Their absence is what made the leaks possible.
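For the npm side specifically, wiring such a check into the publish flow takes a single lifecycle hook. The `prepublishOnly` script is a standard npm hook that runs before every `npm publish`; the `scripts/check-dist.js` path below is a hypothetical location for a bundle validator, not a file in Anthropic's repository:

```json
{
  "scripts": {
    "check:dist": "node scripts/check-dist.js",
    "prepublishOnly": "npm run check:dist"
  }
}
```

With this in place, a publish attempt fails automatically whenever the validator exits non-zero, regardless of which engineer runs the release.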
What Anthropic has said so far amounts to two variations on the same theme: the incidents were caused by human error, not by external attacks or systemic vulnerabilities, and no customer data was compromised. Both statements are likely accurate. Neither addresses the underlying question of why known categories of preventable errors were not caught before reaching production.
The clock is ticking. If Anthropic intends to file for an IPO in Q4 2026, the S-1 filing will need to address operational risk in concrete terms. Underwriters will require it. The SEC will review it. And every prospective investor will read the risk factors section knowing that two leaks in five days happened less than six months before the offering.
The company’s technical capabilities are not in question. Its operational maturity is. The distinction between those two things will determine how the market prices Anthropic’s public debut.
Further Reading
- Claude Mythos Leak Explained — Full reconstruction of the March 26 CMS incident.
- Claude Code Source Code Leak — Complete analysis of the March 31 npm exposure.
- Anthropic IPO and the Mythos Leak Impact — How the leaks affect Anthropic’s path to public markets.
- Leak Timeline — Hour-by-hour reconstruction of both incidents.