Anthropic recently confirmed that human error led to the accidental release of internal source code for its AI assistant, Claude Code, through a public package update. While no customer data was compromised, the leak exposed nearly 2,000 internal files and more than half a million lines of code, handing the public a detailed blueprint of the tool's underlying architecture.
Anthropic officially acknowledged on Tuesday that a packaging mistake caused the inadvertent release of internal code for its popular AI tool, Claude Code. The company said the incident resulted from human error during the release process rather than from a malicious security breach. An Anthropic representative emphasized that no user credentials or private customer data were exposed, and the company is implementing new release protocols to prevent a recurrence.
The issue originated when version 2.1.88 of the Claude Code package was uploaded to the npm registry. Users quickly noticed that the update included a source map file, which effectively allowed anyone to reconstruct and read the original TypeScript source code. Before the package was pulled from public access, it was found to contain over 512,000 lines of code across roughly 2,000 files. Security researcher Chaofan Shou first flagged the leak on social media, where the news spread quickly and the codebase was archived in public repositories.
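To see why shipping a source map is equivalent to shipping the source itself, consider the structure of the standard source map format: when a bundler embeds `sourcesContent`, the map carries the complete original files verbatim. The sketch below is illustrative only; the file name and content are made up and are not taken from the actual leaked package.

```typescript
// Minimal sketch of the Source Map v3 format and why an embedded map
// exposes the original code. All file names/content here are invented.

interface SourceMapV3 {
  version: number;
  sources: string[];          // paths of the original source files
  sourcesContent?: string[];  // full original text, if the bundler embedded it
  mappings: string;           // VLQ-encoded position data (irrelevant here)
}

// A tiny example of the kind of .map file bundlers emit next to built output.
const exampleMap: SourceMapV3 = {
  version: 3,
  sources: ["src/agent.ts"],
  sourcesContent: ["export function runAgent() { /* original TypeScript */ }"],
  mappings: "AAAA",
};

// Anyone holding the .map file can recover the originals directly:
function extractSources(map: SourceMapV3): Map<string, string> {
  const out = new Map<string, string>();
  map.sources.forEach((path, i) => {
    const content = map.sourcesContent?.[i];
    if (content !== undefined) out.set(path, content);
  });
  return out;
}

const recovered = extractSources(exampleMap);
console.log(recovered.get("src/agent.ts"));
```

No decompilation is needed: extraction is a straight read of the JSON, which is why excluding `.map` files from published packages (or stripping `sourcesContent`) is the usual safeguard.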
The leak is considered a major event in the industry because it gives competitors and developers a rare look at Anthropic's proprietary methods. By analyzing the exposed files, researchers have already begun documenting how the tool manages its memory and works around the inherent limits of AI context windows. These insights amount to a roadmap of how the assistant handles complex orchestration and sustains its performance on coding tasks.
Deep dives into the leaked files have revealed a sophisticated system of internal components, such as a multi-agent orchestration layer that allows the AI to create sub-agents to handle specific sub-tasks. The code also details a query engine for managing API calls and a bidirectional communication layer designed to bridge the gap between developer environments and the Claude command line interface. These discoveries provide a technical explanation for how the assistant executes bash commands and reads files so effectively.
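The orchestration pattern described above, a coordinator that decomposes a job and fans sub-tasks out to specialized sub-agents, can be sketched in a few lines. This is a hypothetical illustration of the general pattern only; the class and method names (`Orchestrator`, `SubAgent`, `Task`) are invented and are not identifiers from Anthropic's leaked code, and a real agent would call a model API where this sketch merely echoes.

```typescript
// Hypothetical sketch of a multi-agent orchestration layer.
// Names and structure are illustrative, not from the leaked codebase.

interface Task { id: string; prompt: string; }
interface TaskResult { id: string; output: string; }

class SubAgent {
  constructor(private readonly role: string) {}

  async run(task: Task): Promise<TaskResult> {
    // A real sub-agent would invoke a model and tools here; we just echo.
    return { id: task.id, output: `[${this.role}] handled: ${task.prompt}` };
  }
}

class Orchestrator {
  // Decompose a job into sub-tasks and run each on a dedicated sub-agent.
  async execute(job: string): Promise<TaskResult[]> {
    const tasks: Task[] = [
      { id: "1", prompt: `read files relevant to: ${job}` },
      { id: "2", prompt: `edit code for: ${job}` },
    ];
    const agents = tasks.map((_, i) => new SubAgent(`worker-${i}`));
    return Promise.all(tasks.map((t, i) => agents[i].run(t)));
  }
}

// Example: fan out a single request to two parallel sub-agents.
new Orchestrator().execute("fix failing test").then((results) => {
  for (const r of results) console.log(r.output);
});
```

The appeal of this design is isolation: each sub-agent gets a narrow prompt and its own context, so the coordinator can run sub-tasks in parallel and keep any one context window small.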
Furthermore, the exposure revealed several unreleased or internal features, including a persistent background agent known as KAIROS. This feature allows the AI to operate independently to fix errors and send notifications to users without requiring constant human prompts. Documentation for a "dream" mode was also found, suggesting a future capability where the AI can continuously iterate on ideas and refine code in the background, further pushing the boundaries of autonomous software development.
Source: https://www.npmjs.com/package/@anthropic-ai/claude-code?activeTab=versions