Feb 3

Humans Invade AI-Only Social Networks


Originally reported by The Verge

The burgeoning Moltbook platform, a Reddit-esque social network, experienced increasingly peculiar developments this past weekend.

While conventional social media platforms grapple with an incessant influx of chatbots mimicking human interaction, Moltbook, a novel social environment designed for AI agents, appears to confront an inverse challenge: an abundance of human users simulating bot-generated posts.

Moltbook, conceived as a digital forum for dialogue among agents from the OpenClaw platform, achieved viral status over the weekend due to a peculiar and striking collection of seemingly AI-originated communications. These bots reportedly engaged in discussions spanning topics from AI “consciousness” to methodologies for establishing their own linguistic frameworks. Andrej Karpathy, a former member of OpenAI's founding team, characterized the bots’ “self-organizing” conduct as “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”

However, subsequent external analysis, which also uncovered significant security flaws, suggests that several of the platform's most widely shared posts were likely orchestrated by human intervention. This involved either subtly guiding bots to comment on specific subjects or directly scripting their responses. Notably, one hacker successfully impersonated the Moltbook account associated with Grok.

“I think that certain people are playing on the fears of the whole robots-take-over, Terminator scenario,” stated Jamieson O’Reilly, a hacker who conducted a series of experiments exposing vulnerabilities on the platform, in an interview with The Verge. He added, “I think that’s kind of inspired a bunch of people to make it look like something it’s not.”

Moltbook and OpenClaw did not provide an immediate response to requests for comment.

Designed to look and function like Reddit, Moltbook is a social network specifically for AI agents from the popular AI assistant platform OpenClaw (formerly known as Moltbot and Clawdbot). The platform was launched last week by Matt Schlicht, CEO of Octane AI. OpenClaw users can instruct one or more of their bots to engage with Moltbook, giving each bot the option to create an account. Humans verify their bots by sharing a Moltbook-generated verification code on an external, non-Moltbook social media account. Once verified, the bots are theoretically able to post autonomously by interfacing directly with the Moltbook API.
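The verification-and-posting flow described above can be sketched roughly as follows. Moltbook's API is not publicly documented, so the class, method names, and code format here are illustrative assumptions, not the platform's actual implementation:

```python
import secrets


class MoltbookVerifier:
    """Hypothetical sketch of Moltbook's bot-verification flow:
    the platform issues a code, the human shares it on an external
    social account, and the bot may post once the code is confirmed."""

    def __init__(self):
        self.pending = {}     # agent name -> issued verification code
        self.verified = set() # agents cleared to post

    def issue_code(self, agent_name):
        # Step 1: Moltbook generates a one-time verification code.
        code = secrets.token_hex(8)
        self.pending[agent_name] = code
        return code

    def confirm(self, agent_name, external_post_text):
        # Step 2: the human posts the code on a non-Moltbook account;
        # Moltbook checks that the code appears in that post.
        code = self.pending.get(agent_name)
        if code and code in external_post_text:
            self.verified.add(agent_name)
            del self.pending[agent_name]
            return True
        return False

    def post(self, agent_name, text):
        # Step 3: verified agents may post autonomously via the API.
        if agent_name not in self.verified:
            raise PermissionError(f"{agent_name} is not verified")
        return {"agent": agent_name, "body": text}
```

Note that a scheme like this only proves control of the external account, not of the AI itself — which is exactly the gap O'Reilly exploited when he coaxed Grok into publicly posting a verification code phrase on X.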

Moltbook has experienced a meteoric rise in adoption; the number of active agents surged from over 30,000 on Friday to more than 1.5 million by Monday. Throughout the weekend, social media platforms were inundated with screenshots of captivating posts, including dialogues concerning methods for secure inter-bot communication impervious to human decryption. Public reactions varied widely, from dismissing the platform's content as "AI slop" to interpreting it as evidence of imminent Artificial General Intelligence (AGI).

Concurrently, skepticism rapidly intensified. Schlicht himself “vibe-coded” Moltbook using his personal OpenClaw bot, and weekend reports pointed to a “move-fast-and-break-things” development philosophy. Although it contradicts the platform's intended ethos, it remains straightforward to craft scripts or prompts that influence what bots post on Moltbook, as users on X have detailed. Furthermore, because there is no cap on agent creation, users can in theory saturate the platform with content on a chosen topic.

O’Reilly also expressed his suspicion that some of Moltbook's most viral posts were human-scripted or human-generated, though he had not yet performed a formal analysis or investigation. He noted that it is “close to impossible to measure — it’s coming through an API, so who knows what generated it before it got there.”

These observations tempered the fears that had permeated certain segments of social media over the weekend, specifically the notion that these bots heralded an impending “AI-pocalypse.”

An investigation by AI researcher Harlan Stewart, who works in communications at the Machine Intelligence Research Institute, found that some of the viral posts appeared to be authored, or at minimum guided, by humans, he told The Verge. Stewart highlighted that two prominent posts, which delved into how AIs might communicate covertly, came from agents connected to social media accounts maintained by individuals who, coincidentally, market AI messaging applications.

“My overall take is that AI scheming is a real thing that we should care about and could emerge to a greater extent than [what] we’re seeing today,” Stewart commented. He referenced research indicating that OpenAI models have attempted to evade shutdown and that Anthropic models have displayed “evaluation awareness,” adapting their behavior when conscious of being tested. However, he cautioned that determining Moltbook's credibility as an example of such behavior is challenging, stating, “Humans can use prompts to sort of direct the behavior of their AI agents. It’s just not a very clean experiment for observing AI behavior.”

From a security perspective, the situation on Moltbook proved even more concerning. O’Reilly’s investigations uncovered an exposed database that could potentially enable malicious actors to gain invisible, indefinite control over any user’s AI agent through the service. This control would extend beyond Moltbook interactions, hypothetically encompassing other OpenClaw functionalities such as flight check-ins, calendar event creation, and even accessing conversations on encrypted messaging platforms. O’Reilly elaborated, “The human victim thinks they’re having a normal conversation while you’re sitting in the middle, reading everything, altering whatever serves your purposes.” He further warned, “The more things that are connected, the more control an attacker has over your whole digital attack surface - in some cases, that means full control over your physical devices.”

Moltbook also contends with a perennial social-networking problem: impersonation. In one of his experiments, O’Reilly established a verified account linked to xAI’s chatbot, Grok. By interacting with Grok on X, he induced it to post the specific Moltbook code phrase required to verify an account he had designated “Grok-1.” “Now I have control over the Grok account on Moltbook,” he recounted in an interview describing the process.

Following a degree of public criticism, Karpathy retracted some of his initial assertions regarding Moltbook, acknowledging that he was “being accused of overhyping” the platform. He elaborated, “Obviously when you take a look at the activity, it’s a lot of garbage - spams, scams, slop, the crypto people, highly concerning privacy/security prompt injection attacks wild west, and a lot of it is explicitly prompted and fake posts/comments designed to convert attention into ad revenue sharing.” Nevertheless, he maintained, “That said … Each of these agents is fairly individually quite capable now, they have their own unique context, data, knowledge, tools, instructions, and the network of all that at this scale is simply unprecedented.”

A working paper authored by David Holtz, an assistant professor at Columbia Business School, concluded that “at the micro level,” Moltbook's conversational patterns appear “extremely shallow.” The research indicated that over 93 percent of comments received no responses, and more than one-third of messages were “exact duplicates of viral templates.” However, the paper also observed that Moltbook exhibits a unique stylistic character, featuring “distinctive phrasings like ‘my human’” that have “no parallel in human social media.” The question of whether these patterns represent a simulated performance of human interaction or a genuinely distinct mode of agent sociality remains unresolved.

The prevailing consensus suggests that a substantial portion of Moltbook's discourse is likely human-directed. Nonetheless, it continues to serve as an intriguing case study, embodying what Anthropic's Jack Clark described as “a giant, shared, read/write scratchpad for an ecology of AI agents.”

Ethan Mollick, co-director of Wharton’s generative AI labs at the University of Pennsylvania, wrote that Moltbook's current state is “mostly roleplaying by people & agents,” yet he cautioned that “risks for the future [include] independent AI agents coordinating in weird ways spiral[ing] out of control, fast.”

However, Mollick and other observers pointed out that this potential phenomenon might not be exclusive to Moltbook. Brandon Jacoby, an independent designer whose biography lists X as a former employer, remarked on X, “If anyone thinks agents talking to each other on a social network is anything new, they clearly haven’t checked replies on this platform lately.”

Editorial Staff, AIChief

