TLDR
- Internal source code for Claude Code, Anthropic’s AI coding assistant, was mistakenly made public
- Nearly 2,000 files containing approximately 512,000 lines of code were exposed
- A social media post linking to the leaked code accumulated over 30 million views
- The company confirms no sensitive customer information or authentication credentials were compromised
- Anthropic has now experienced two separate data exposure incidents within a single week
On Tuesday, Anthropic publicly acknowledged the unintentional disclosure of proprietary source code belonging to Claude Code, its AI-driven coding assistant. The company characterized the incident as “a release packaging issue caused by human error, not a security breach.”
Claude code source code has been leaked via a map file in their npm registry!
Code: https://t.co/jBiMoOzt8G pic.twitter.com/rYo5hbvEj8
— Chaofan Shou (@Fried_rice) March 31, 2026
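For context on the leak vector described in the post above: a JavaScript source map published alongside a minified bundle is just a JSON file, and if its optional sourcesContent field is populated, it carries the full original text of every file listed in sources. The sketch below is not Anthropic’s code or tooling; it only illustrates, under the assumption that such a map file is available locally (the file name cli.js.map is a hypothetical placeholder), how embedded sources could be dumped back out of a map.

```typescript
// Minimal sketch: recover original sources embedded in a JavaScript source map.
// The file name "cli.js.map" is a hypothetical placeholder for illustration.
import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { dirname, join } from "node:path";

interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
  mappings: string;
}

const map: SourceMapV3 = JSON.parse(readFileSync("cli.js.map", "utf8"));
let recovered = 0;

map.sources.forEach((source, i) => {
  const content = map.sourcesContent?.[i];
  if (content == null) return; // this entry does not embed its source text

  // Strip bundler prefixes (e.g. "webpack://") and path traversal so files
  // land safely under a local "recovered/" directory.
  const safePath = source
    .replace(/^[a-z+]+:\/\//i, "")
    .split("/")
    .filter((part) => part && part !== "." && part !== "..")
    .join("/");
  if (!safePath) return;

  const outPath = join("recovered", safePath);
  mkdirSync(dirname(outPath), { recursive: true });
  writeFileSync(outPath, content);
  recovered += 1;
});

console.log(`Recovered ${recovered} of ${map.sources.length} source files`);
```

This is also why the standard mitigation is straightforward: publish bundles without their .map files, or strip the sourcesContent field before the package reaches a public registry.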
Cybersecurity professionals analyzing the leak determined that approximately 1,900 individual files were exposed, encompassing roughly 512,000 lines of code. The exposure is significant because of Claude Code’s operational design: it runs inside developer workspaces and has access to potentially sensitive project data, which raises the security stakes.
The news spread quickly. A post on X linking directly to the exposed code went viral, passing 30 million views within hours of its publication early Tuesday morning.
Development community members immediately began examining the exposed codebase to gain insights into Claude Code’s operational mechanisms and Anthropic’s future development roadmap. Security researchers, meanwhile, flagged potential threat vectors that malicious actors might exploit using this newly available information.
AI security company Straiker published an analysis suggesting that threat actors could now reverse-engineer Claude Code’s data processing architecture. Their assessment pointed to potential vulnerabilities that could let adversaries inject persistent malicious code throughout extended development sessions, essentially creating backdoor access points.
A Pattern of Security Lapses
This incident marks Anthropic’s second security-related event in under seven days. Earlier in the week, Fortune reported that the company had inadvertently left thousands of internal documents publicly accessible.
Among those prematurely disclosed materials was an unreleased announcement detailing a forthcoming AI system referenced internally by the codenames “Mythos” and “Capybara.” According to reports, the draft documentation acknowledged potential cybersecurity vulnerabilities associated with the upcoming model.
Anthropic has pledged to implement additional safeguards to prevent similar occurrences. The company emphasized that neither incident resulted in the exposure of protected customer information or authentication credentials.
Claude Code’s Market Position
Anthropic launched Claude Code for public use in May of the previous year. The platform assists software engineers in feature development, debugging operations, and workflow automation.
Adoption has been substantial. By February, the product had achieved an annualized revenue run-rate exceeding $2.5 billion.
This commercial success has intensified competitive pressure across the industry. Major technology firms including OpenAI, Google, and xAI have each committed significant resources toward developing rival AI-assisted programming platforms to challenge Claude Code’s market position.
Founded in 2021 by executives and researchers who previously worked at OpenAI, Anthropic has built its reputation primarily on the Claude family of large language models.
A company representative stated that Anthropic is implementing procedural changes designed to eliminate similar packaging errors in future releases.
