Anthropic has confirmed that internal source code for its AI coding assistant, Claude Code, was accidentally exposed following a release error. The issue surfaced after the company published version 2.1.88 of its npm package, which unintentionally included a source map file from which the underlying codebase could be reconstructed.
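Source maps generated by build tools frequently embed the complete original source in a sourcesContent field, which means a single .map file published by mistake can be enough to recover an entire TypeScript codebase. The following sketch illustrates the general idea under that assumption; the file name cli.js.map and the recovered output directory are illustrative placeholders, not details taken from Anthropic's package:

// Minimal sketch: recover originals embedded in a bundler-generated source map.
// Assumes the map ships a populated `sourcesContent` array; the file name
// "cli.js.map" and the "recovered" output directory are hypothetical.
import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { dirname, join } from "node:path";

interface SourceMapV3 {
  sources: string[];
  sourcesContent?: (string | null)[];
}

const map: SourceMapV3 = JSON.parse(readFileSync("cli.js.map", "utf8"));

map.sources.forEach((source, i) => {
  const content = map.sourcesContent?.[i];
  if (content == null) return; // no embedded original for this entry

  // Bundlers often prefix paths with schemes like "webpack://"; strip them
  // so the entries can be written out as ordinary relative paths.
  const relative = source.replace(/^[\w-]+:\/\//, "").replace(/^\/+/, "");
  const outPath = join("recovered", relative);
  mkdirSync(dirname(outPath), { recursive: true });
  writeFileSync(outPath, content);
});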
A company spokesperson emphasized that no sensitive customer data or credentials were compromised, describing the incident as a “release packaging issue caused by human error,” rather than a deliberate cyberattack. The company also stated that it is implementing safeguards to prevent similar mistakes in the future.
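Anthropic has not detailed what those safeguards involve. One common approach is a pre-publish check that inspects exactly what npm pack would ship and blocks the release if source maps are present. The sketch below assumes the JSON output of npm pack --dry-run --json (an array whose first entry carries a files list); the exact output shape should be treated as an assumption that may vary between npm versions:

// Hedged sketch of a pre-publish guard: fail the build if the tarball that
// `npm pack` would produce contains any .map files. The JSON report shape
// (an array of entries, each with a `files` list) is an assumption about
// current npm behavior and may differ across npm versions.
import { execSync } from "node:child_process";

interface PackedFile {
  path: string;
}

const raw = execSync("npm pack --dry-run --json", { encoding: "utf8" });
const report = JSON.parse(raw);
const files: PackedFile[] = report[0]?.files ?? [];

const sourceMaps = files.filter((f) => f.path.endsWith(".map"));
if (sourceMaps.length > 0) {
  console.error("Refusing to publish: source maps found in package contents:");
  for (const f of sourceMaps) console.error(`  ${f.path}`);
  process.exit(1);
}

A files allowlist in package.json is a simpler, declarative alternative that keeps build artifacts such as source maps out of the published tarball by construction.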
How the Leak Was Discovered
The leak gained widespread attention after security researcher Chaofan Shou shared a link to the exposed files on X (formerly Twitter). His post quickly went viral, attracting over 31 million views and sparking intense discussion within the tech community.
The exposed files reportedly included around 2,000 TypeScript files and more than 512,000 lines of code, offering a detailed look at how Claude Code operates. Although the issue in the affected version has since been addressed, the incident raised concerns about the risks associated with software distribution errors.
Claude code source code has been leaked via a map file in their npm registry!
Code: https://t.co/jBiMoOzt8G pic.twitter.com/rYo5hbvEj8
— Chaofan Shou (@Fried_rice) March 31, 2026
Why This Matters
A source code leak of this scale is significant for a fast-growing AI company like Anthropic. Access to internal code can provide valuable insights to competitors and developers, potentially revealing design choices, architecture, and optimization techniques behind the tool.
While the company insists that no user data was at risk, the exposure of proprietary technology could still impact its competitive advantage in the rapidly evolving AI market.
A Pattern of Recent Issues
This incident marks the second reported data-related issue involving Anthropic within a week. Earlier reports indicated that descriptions of an upcoming AI model and related documents were discovered in a publicly accessible data cache.
Such repeated lapses, even if unintentional, may raise questions about internal controls and release management practices.
About Anthropic
Founded in 2021 by former OpenAI researchers and executives, Anthropic has quickly established itself as a major player in artificial intelligence. Its Claude family of AI models has gained popularity for coding, writing, and conversational tasks.
Although Anthropic has downplayed the leak as a technical oversight, the incident highlights the importance of robust release processes and security checks. In a competitive AI landscape, even minor errors can have outsized consequences, making transparency and swift corrective action essential.