
Anthropic recently confirmed that the source code for its Claude Code AI agent was accidentally exposed via a JavaScript source map shipped in a public npm package. The leak, totaling more than 500,000 lines of TypeScript, laid bare the tool’s internal orchestration logic and security protocols. Tens of thousands of users quickly sought out the code, fueling a surge of unauthorized forks and re-uploads across platforms such as GitHub.
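The exposure mechanism is worth understanding: a source map’s optional `sourcesContent` field embeds every original source file verbatim, so publishing the `.map` file is effectively publishing the TypeScript itself. A minimal sketch of recovering sources from a map (the map object and file path below are hypothetical illustrations, not Anthropic’s actual files):

```javascript
// A Source Map v3 object; "sourcesContent" carries the full original code.
// This inline map is a hypothetical example, not the leaked file.
const map = {
  version: 3,
  file: "cli.js",
  sources: ["src/agent/orchestrator.ts"], // hypothetical path
  sourcesContent: ["export function plan(task: string) { /* ... */ }"],
  mappings: "AAAA"
};

// Anyone holding the .map file can dump every original file verbatim.
function extractSources(sourceMap) {
  return sourceMap.sources.map((name, i) => ({
    name,
    code: sourceMap.sourcesContent?.[i] ?? null
  }));
}

for (const { name, code } of extractSources(map)) {
  console.log(`--- ${name} ---\n${code}`);
}
```

This is why a single `.map` accidentally included in a published package can expose an entire codebase at once, with original file names and comments intact.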
Cybercriminals capitalized on the interest by creating fraudulent GitHub repositories that surfaced at the top of search engine results for “leaked Claude Code.” These repositories often promised unlocked enterprise features but instead delivered a Rust-based dropper carrying the Vidar information stealer and the GhostSocks proxy malware. Once executed, the malware exfiltrates sensitive credentials and turns the infected machine into a residential proxy for further illicit traffic.
Security researchers at firms such as Zscaler noted that the malicious archives are frequently updated, indicating that attackers are actively refining their delivery methods and payloads. Although GitHub has since removed several of the offending accounts, the incident is a stark reminder of how quickly attackers exploit trending AI news to target developers. Experts urge extreme caution when downloading unofficial source code, as the rapid pace of AI development often outpaces traditional security vetting.









