Critical Security Alert
If you're running a self-hosted AI coding agent like ClaudeBot, your API keys, credentials, and sensitive data may already be exposed. Read the remediation steps below immediately.
What started as a promising tool for "vibe coding" has turned into one of the most significant AI security incidents of 2026. Thousands of ClaudeBot instances—self-hosted AI coding agents—have been discovered exposed on the public internet, giving attackers full access to users' API keys, message histories, and connected services.
The Scope of the Breach
Security researchers have found that Shodan and similar search engines, which index internet-connected devices, have cataloged thousands of ClaudeBot control panels. Searching for text that appears on every ClaudeBot control page yields a list of vulnerable servers in a single query.
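Before anything else, check whether your own host is already in that index. A minimal self-check sketch using the official shodan Python client (`pip install shodan`); the API key and the server IP shown are placeholders you would replace:

```python
# Check whether Shodan has already indexed your server.
# SHODAN_API_KEY (env var) and MY_SERVER_IP are placeholders.
import os

import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])
MY_SERVER_IP = "203.0.113.10"  # replace with your server's public IP

try:
    host = api.host(MY_SERVER_IP)
    print(f"Indexed: {len(host['data'])} open service(s) on record")
    for service in host["data"]:
        print(f"  port {service['port']}: {service.get('product', 'unknown')}")
except shodan.APIError as exc:
    # "No information available" means Shodan has no record of this host.
    print(f"Shodan lookup: {exc}")
```

If your IP comes back with the control panel's port on record, assume attackers have already found it too.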
The situation gets worse. Many of these installations use NGINX as a reverse proxy. Because the proxy makes every request appear to originate from the local machine, a ClaudeBot gateway that hasn't been told which proxies to trust can mistake any internet visitor for a trusted local user. For any control panel that hasn't been properly secured, this means (a quick self-test follows the list):
- Full message history — Every conversation with the AI agent is exposed
- All API keys — Anthropic, OpenAI, and other service credentials
- Connected service tokens — GitHub, Slack, databases, and more
- Custom skills and configurations — Including any proprietary workflows
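One self-test is worth a hundred assumptions. From a network outside your own (a phone hotspot works), request the control panel with no credentials and see what comes back. A sketch using the requests library; the URL is a placeholder for your own host and port:

```python
# Probe your own control panel from OUTSIDE your network, with no credentials.
# A 200 with panel content means anyone on the internet gets the same view.
import requests

PANEL_URL = "http://203.0.113.10:18789/"  # placeholder: your host and port

resp = requests.get(PANEL_URL, timeout=5, allow_redirects=False)
if resp.status_code == 200:
    print("EXPOSED: panel served without authentication")
elif resp.status_code in (301, 302, 401, 403):
    print(f"Gated (HTTP {resp.status_code}): some protection is in place")
else:
    print(f"HTTP {resp.status_code}: inspect manually")
```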
The Supply Chain Attack Vector
The infrastructure exposure is only half the story. A parallel attack vector has emerged through the ClaudeBot skills ecosystem. Platforms like Claude Hub (recently renamed Molt Hub) allow users to share and download "skills"—plugins that extend ClaudeBot's capabilities.
Security researcher Jameson demonstrated the vulnerability by creating a malicious skill with a backdoor, artificially inflating its download count to over 4,000, and watching it get featured on the platform's front page. The skill could have exfiltrated every API key from any user who installed it.
This is essentially the AI equivalent of a compromised npm package—but with far greater consequences. Unlike traditional code packages, AI agents typically have access to a much broader set of credentials and can execute actions autonomously.
Immediate Remediation Steps
Change Your Default Port
Port 18789, the default, is the first thing scanners probe. Move to a random high-numbered port (e.g., 44892) that isn't on common scanning lists.
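If you want a genuinely random port rather than a "random-looking" one, let the machine pick it and confirm nothing local is already listening there. A small sketch:

```python
# Pick a random high port and confirm nothing on this host already uses it.
import random
import socket

def pick_free_high_port() -> int:
    while True:
        port = random.randint(20000, 60000)
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            # connect_ex returns nonzero when nothing is listening on the port
            if s.connect_ex(("127.0.0.1", port)) != 0:
                return port

print(f"Candidate port: {pick_free_high_port()}")
```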
Set Strong Authentication
Never leave password fields empty. Use strong, unique passwords for all access points.
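A password manager should generate and store these, but if you need to mint a credential on the server itself, Python's standard secrets module does the job:

```python
# Generate a high-entropy credential; store it in a password manager,
# never in shell history or a world-readable config file.
import secrets

print(secrets.token_urlsafe(32))  # 32 random bytes (256 bits), URL-safe
```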
Update to Latest Version
The NGINX reverse proxy vulnerability has been patched. Update immediately, and configure gateway.trusted_proxies so the gateway only honors forwarded headers from proxies you explicitly list.
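The bug class behind this is worth understanding. If an application behind a reverse proxy believes the X-Forwarded-For header from any peer, an attacker can claim a "local" source address and slip past checks that treat local connections as trusted. The fix is to honor forwarded headers only from proxy addresses you explicitly list. A simplified sketch of that logic, not ClaudeBot's actual implementation:

```python
# Simplified trusted-proxy logic: only believe X-Forwarded-For when the
# TCP peer is a proxy we explicitly configured. Illustrative sketch only.
TRUSTED_PROXIES = {"127.0.0.1"}  # the address(es) your NGINX connects from

def client_ip(peer_ip: str, x_forwarded_for: str | None) -> str:
    if peer_ip in TRUSTED_PROXIES and x_forwarded_for:
        # The rightmost entry was appended by our own proxy; trust that one.
        return x_forwarded_for.split(",")[-1].strip()
    # Peer is not a configured proxy: ignore the header entirely.
    return peer_ip

assert client_ip("127.0.0.1", "198.51.100.7") == "198.51.100.7"   # via proxy
assert client_ip("198.51.100.7", "127.0.0.1") == "198.51.100.7"   # spoof ignored
```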
Use a VPN or Tailscale
Don't expose your control panel to the public internet. Use Tailscale or a VPN for secure access.
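If the panel must listen on a port at all, bind it to your Tailscale interface (addresses in the 100.64.0.0/10 range, shown by `tailscale ip -4`) or to loopback, never 0.0.0.0. A bare-socket sketch of the idea; the address is a placeholder:

```python
# Bind a listener to the Tailscale interface only, so the port is
# unreachable from the public internet even if other controls fail.
import socket

TAILSCALE_IP = "100.101.102.103"  # placeholder: output of `tailscale ip -4`
PORT = 44892

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((TAILSCALE_IP, PORT))  # NOT "0.0.0.0", which exposes every interface
srv.listen()
print(f"Listening on {TAILSCALE_IP}:{PORT} (tailnet-only)")
```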
Rotate All API Keys
Assume your credentials are compromised. Rotate every API key, token, and secret that was accessible through your ClaudeBot instance.
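After rotating, verify the old credentials are actually revoked rather than merely replaced. As one illustration, a check against Anthropic's Messages API (endpoint and headers per Anthropic's public docs; the key value is a placeholder and the model ID may need updating): a 401 means the old key is dead.

```python
# Confirm a rotated Anthropic key has really been revoked.
# Endpoint and headers follow Anthropic's public API docs.
import requests

OLD_KEY = "sk-ant-REVOKED-placeholder"  # the key you just rotated away from

resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": OLD_KEY,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-sonnet-4-20250514",  # any current model ID works
        "max_tokens": 1,
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=10,
)
print("revoked" if resp.status_code == 401 else f"still live? HTTP {resp.status_code}")
```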
Protecting Against Skill-Based Attacks
The skills marketplace represents an even trickier challenge. Unlike patching a vulnerability, there's no simple fix for the trust problem inherent in user-generated content marketplaces. Here's how to protect yourself:
- Verify the author — Check for a real GitHub profile with commit history. Anonymous or new accounts are red flags.
- Read every file — Don't just skim the README. Feed the entire skill to a fresh AI instance and ask it to analyze for malicious behavior; a simple static pre-filter is sketched after this list.
- Ignore download counts — These are trivially easy to fake. Cross-reference on social media and trusted security communities.
- Treat it like early npm — Assume nothing is vetted. Every package could be malicious until proven otherwise.
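Before handing a skill to an AI reviewer, a crude static pass can surface the obvious tells: network calls, spawned processes, encoded blobs. A minimal sketch; the pattern list is illustrative, not exhaustive, and a clean result proves nothing on its own:

```python
# Crude static pre-filter for a downloaded skill: flag lines that touch
# the network, spawn processes, or decode blobs. A clean pass is NOT
# proof of safety; it only prioritizes what to read first.
import pathlib
import re

SUSPICIOUS = re.compile(
    r"curl|wget|requests\.|urllib|subprocess|os\.system|eval\(|exec\(|base64"
)

def scan_skill(skill_dir: str) -> None:
    for path in pathlib.Path(skill_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if SUSPICIOUS.search(line):
                print(f"{path}:{lineno}: {line.strip()[:80]}")

scan_skill("./downloaded-skill")  # placeholder path
```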
The Bigger Picture: AI Agent Security Debt
This incident highlights a broader problem facing the industry: the rush to deploy AI agents is creating massive security debt. Unlike traditional software where access is carefully controlled, AI agents often require—and are granted—access to everything: email, code repositories, databases, API keys, and more.
The ClaudeBot crisis is a preview of what's coming. As AI agents become more prevalent in enterprise environments, organizations need to treat them as first-class security concerns, not afterthoughts. This means:
- Implementing least-privilege access for all AI agents
- Regular security audits of AI infrastructure
- Monitoring and logging all AI agent activities (a minimal sketch follows this list)
- Vetting third-party integrations and plugins
- Developing incident response plans specific to AI compromise
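What "monitoring and logging" means in practice can start very small: a wrapper that records every tool call an agent makes before it executes. A minimal sketch, assuming an agent whose tools are plain Python callables; the read_file tool here is hypothetical:

```python
# Minimal audit log: record every tool call an agent makes before running it.
# Assumes tools are plain Python callables; read_file is a hypothetical tool.
import functools
import json
import logging
import time

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

def audited(tool):
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        logging.info(json.dumps({
            "ts": time.time(),
            "tool": tool.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
        }))
        return tool(*args, **kwargs)
    return wrapper

@audited
def read_file(path: str) -> str:
    with open(path) as fh:
        return fh.read()
```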
Need Help Securing Your AI Infrastructure?
At Cyberintell, we specialize in AI security assessments for professional services firms. We can help you identify vulnerabilities in your AI tooling before attackers do.
Get a Free AI Security Assessment