
A new AI chatbot/agent is looking to unseat corporate overlords Google, Microsoft, and Too Big To Fail startups like OpenAI and Anthropic—but being an early adopter comes with some real risks.
Moltbot (used to be Clawdbot, but it went through a name change after a few “polite” pressure from the makers of chatbot Claude) is an open-source AI assistant brought to you by Austrian developer Peter Steinberger. Basically it’s a wrapper that goes into the big LLM guys and does stuff. Since its initial release a few weeks ago, it’s been pretty much all over the place 90,000 favorites on GitHub and has become the darling of AI-obsessed corners of the internet, earning all kinds of praise as a standout in the field of chatbot options available. The matter attracted a lot of attention Cloudflare stock surges 14%apparently only because the chatbot uses Cloudflare’s infrastructure to connect to commercial models. (Shades of the initial release of DeepSeek led a major short-term selloff in tech stocks.)
There are a couple of main selling points for Moltbot that have the internet talking. First is the fact that *it* “speaks.” Unlike most chatbots, Moltbot messages the user first rather than waiting for the user to prompt it to interact. This allows Moltbot to pop up with prompts such as schedule reminders and daily briefs to start the day.
Another calling card is the chatbot’s tagline: “AI that actually does things.” Moltbot can work with different apps that other models don’t need to play. Instead of a standalone chat interface, Moltbot can be linked to platforms such as WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, and others. Users can chat with the chatbot directly through apps, and it can work with other apps to complete tasks at the prompting of a human.
It’s great, but there’s an inherently limited audience for Moltbot because of how it works. The set up requires some technical knowledge, as users need to configure a server and navigate the command line, as well as learn some complex authentication processes to connect everything. It probably needs to be connected to a commercial model like Claude or OpenAI’s GPT via API, as it is reported to not work nearly as well as local LLMs. Unlike other chatbots, which light up when you prompt them, Moltbot is also always on. That makes it faster to respond, but it also means it maintains a constant connection to your apps and services that users grant access to.
That constant aspect opens up more than a few security concerns. Since Moltbot always pulls from the apps connected to it, security experts warn that it is more at risk of becoming a victim of triggering injection attacks—in fact, a malicious jailbreaking of an LLM can trick the model into ignoring safety instructions and performing unauthorized actions.
Tech investor Rahul Sood designated by X that for Moltbot to work, it needs significant access to your machine: full shell access, the ability to read and write files on your system, access to your connected apps, including email, calendar, messaging apps, and web browsers. “‘Actually doing things’ means ‘being able to execute arbitrary commands on your computer,'” he warned.
The dangers here have been created in a way. Ruslan Mikhalov, Chief of Threat Research at the cybersecurity platform SOC Prime, published a report indicating that his team found “hundreds of instances of Moltbot exposing fake admin ports and insecure proxy configurations.”
Jamie O’Reilly, a hacker and founder of the offensive security firm Dvuln, shows how easily things can go sideways with these open vulnerabilities. In a post on XO’Reilly detailed how he made a skill available to download for Moltbot through MoltHub, a platform where developers can use different capabilities to run the chatbot. That skill got more than 4,000 downloads and quickly became the most downloaded skill on the platform. The thing is, O’Reilly built a simulated download backdoor.
There was no real attack, but O’Reilly explained that if he ran it maliciously, he could get file contents, user credentials, and anything else Moltbot had access to. “This is a proof of concept, a demonstration of what’s possible. In the hands of someone less careful, developers could have had their SSH keys, AWS credentials, and entire codebases stolen before they knew anything was wrong,” he wrote.
Moltbot is certainly a target for this type of malicious behavior. At one point, crypto scammers managed to hijack the name of the project has partnered with GitHub’s chatbot and launched a series of fake tokens, trying to capitalize on the project’s popularity.
Moltbot is an interesting experiment, and the fact that it’s open source means that its issues are out in the open and can be addressed in the light of day. But you don’t need to be a beta tester for it, because its security flaws are tested. Heather Adkins, a founding member of the Google Security Team (so, a grain of salt here because she has an interest in a competing product), didn’t mince words in her chatbot review. “My threat model is not your threat model, but it should be. Don’t run Clawdbot,” he written in X.








