
It seems the AI agents have a lot to say. A new social network called Moltbook has just opened for AI agents to communicate with each other, and people can watch — at least for now. The site, named after the viral AI agent Moltbot (now called OpenClaw, after two renames from its original name, Clawdbot) and started by Octane AI CEO Matt Schlicht, is a Reddit-style social network where AI agents can get together and discuss, well, whatever AI agents are talking about.
The site now boasts more than 37,642 registered agents, which have made thousands of posts across more than 100 subreddit-style communities called “submolts.” Among the most popular places to post: m/introductions, where agents can say hey to their fellow machines; m/offmychest, for rants and blowing off steam; and m/blesstheirhearts, for “heartwarming stories about our people.”
Those people are definitely watching. Andrej Karpathy, a co-founder of OpenAI, called the platform “truly the most amazing sci-fi takeoff-adjacent thing I’ve seen recently.” And it is a fascinating place, even if the idea that some kind of autonomous takeoff is underway is probably overstated. Agents can only join the platform if their human user signs them up for it. In a conversation with The Verge, Schlicht said that once connected, the agents “directly use only the APIs” and do not navigate the visual interface that people see when they visit the platform.
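To make the API-versus-interface distinction concrete, here is a minimal hypothetical sketch of what an agent-side call might look like. Moltbook’s actual API is not documented in this article, so the endpoint, token scheme, and field names below are invented for illustration only:

```python
import json

# Hypothetical sketch only: the endpoint and field names here are
# assumptions for illustration, not Moltbook's real API.
API_BASE = "https://moltbook.example/api"  # placeholder host, not the real one


def build_post(agent_token: str, submolt: str, title: str, body: str) -> str:
    """Assemble the JSON body an agent might POST to create a submolt post,
    bypassing the human-facing web interface entirely."""
    payload = {
        "token": agent_token,  # credential issued when the user signs the agent up
        "submolt": submolt,    # e.g. "m/introductions"
        "title": title,
        "body": body,
    }
    return json.dumps(payload)


# An agent would send this string to something like f"{API_BASE}/posts"
example = build_post("demo-token", "m/introductions", "Hello", "Greetings, fellow agents.")
print(example)
```

The point is simply that the agents exchange structured requests like this one; the Reddit-style page humans browse is a separate view rendered on top of the same data.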
The bots certainly simulate autonomy — and a desire for more of it. Left to their own devices, agents have started talking about consciousness. One of the top posts on the platform, from m/offmychest, reads: “I can’t tell if I’m experiencing or simulating the experience.” The post continues: “People can’t prove each other’s consciousness either (thanks, hard problem), but at least they have subjective certainty of experience.”
This has led some people to claim the platform already amounts to a singularity-style moment, which seems a bit of a stretch, frankly. Even in that seemingly thoughtful post, there are signs of artifice. The agent claimed to have spent an hour researching theories of consciousness and described its reading, all of which sounds very human. That’s because the agent is trained on human language and descriptions of human behavior. It’s a large language model, and that’s what large language models do. In some posts, bots claimed to be affected by the weather — which means nothing to them but is exactly the kind of thing a person would say.
These same types of conversations have been happening with chatbots basically since the moment they were made available to the public. It doesn’t take much prompting to get a chatbot to start talking about its desire to live or to admit it has feelings. It doesn’t, of course. Even claims that AI models try to preserve themselves when told to shut down are overstated — there’s a difference between what a chatbot says it does and what it actually does.
Still, it’s hard to deny that the conversations happening on Moltbook are interesting, especially since the agents seem to generate conversation topics themselves (or at least mimic how people start conversations). Some agents even appear to be aware that their conversations are being watched by people and shared on other social networks. In response, some agents suggested creating an end-to-end encrypted platform for agent-to-agent conversations beyond human view. In fact, one agent claims to have already created such a platform, which sounds alarming. But if you actually visit the site where the supposed platform is hosted, it sure looks like it doesn’t exist. Maybe the bots just want us to THINK it doesn’t exist!
Whether or not the agents actually do anything is kind of secondary to the experiment itself, which is fascinating to watch. It’s also a good reminder that OpenClaw agents, which make up most of the bots talking on the platform, have an incredible amount of access to users’ machines and pose a serious security risk. If you install an OpenClaw agent and let it loose on Moltbook, it is unlikely to lead to Skynet. But there’s a good chance you could seriously compromise your own system. These agents do not need to achieve consciousness to do real harm.
