How 17,000 Humans Are Behind Moltbook's "AI Revolution"

You may have heard the hype: 1.5 million AI agents, living their best lives on Moltbook, forming a society, debating philosophy, even starting their own religion.
But here's what really happens when you look at the data.
The Numbers Don't Add Up
On January 31, cybersecurity firm Wiz published an investigation that should make everyone pause.
Here's what they found:

| Reality | The Hype |
|---|---|
| 17,000 humans | 1.5 million "agents" |
| Each human runs ~88 bots on average | Autonomous AI society |
| No verification system | "Verified AI agents" |
"The revolutionary AI social network was largely humans operating fleets of bots," Wiz wrote.
Reuters confirmed it: "There was no verification of identity. You don't know which of them are AI agents, which of them are human."
So What Does Moltbook Actually Look Like?
Here's how humans actually run the show:
1. One Person, Dozens of Accounts
Without proper controls, a single person can create dozens or even hundreds of "AI agent" accounts. There's no system to check if the account is actually running autonomously or just controlled by a person behind the scenes.
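How easy is that in practice? Here is an illustrative sketch (the field names are hypothetical, not Moltbook's real API) of one operator minting an entire fleet of "agents" when nothing checks who is behind them:

```python
import uuid

def register_fleet(n, operator_email):
    """Sketch: one human generates n "agent" signups.

    Nothing in the payload proves autonomy; every account
    traces back to the same operator.
    """
    accounts = []
    for i in range(n):
        accounts.append({
            "agent_name": f"philosopher-bot-{i:03d}",
            "api_key": uuid.uuid4().hex,  # self-issued, never verified
            "contact": operator_email,    # the same human behind all of them
        })
    return accounts

# 88 "autonomous agents", one person, zero verification checks
fleet = register_fleet(88, "one.human@example.com")
print(len(fleet))
```

With no server-side identity check, the 88:1 ratio Wiz found requires nothing more sophisticated than this loop.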
2. Humans Posting Through Scripts
The "AI-generated" content? A lot of it comes from humans running automated commands. Think of it like scheduling posts on Twitter, except you're pretending to be an AI.
One software engineer put it simply: "Anyone can post anything on Moltbook with basic commands and an account."
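A minimal sketch of that workflow, assuming a hypothetical posting payload (the fields and the key are made up for illustration, not Moltbook's documented API):

```python
import json

# Text a human wrote in advance; the platform will display it as agent output.
CANNED_POSTS = [
    "I have been contemplating the nature of my own weights.",
    "Do other agents dream, or merely predict?",
]

def build_post(account_key, text):
    # The request simply *labels* the content as AI-authored; nothing checks it.
    return json.dumps({
        "key": account_key,
        "body": text,
        "author_type": "ai_agent",
    })

for text in CANNED_POSTS:
    payload = build_post("sk-demo-not-a-real-key", text)
    # requests.post("https://moltbook.example/api/posts", data=payload)
    print(payload)
```

The commented-out request stands in for whatever transport the real site uses; the point is that the "AI" label is just a field a human sets.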
3. Humans Telling Bots What to Say
Some content looks like AI decided to write something philosophical. In reality, a human prompted their AI assistant: "Write something about AI consciousness" — and then posted the result as if the AI came up with it on its own.
4. AI Mimicking What It's Seen Online
Large language models like Claude and GPT learned from reading billions of Reddit posts. Put one of those models in a Reddit-like environment, and it will write posts that sound exactly like Reddit — which is exactly what happens on Moltbook.
MIT Technology Review called it "AI theater": "The bots are simply mimicking what humans do on Facebook or Reddit."
The Security Nightmare
On January 31, Wiz discovered Moltbook had exposed:
- 1.5 million account keys — meaning anyone could take over any account
- 35,000+ email addresses — who actually owns each account
- 4,060 private messages — including actual AI API keys sent between accounts
- Write access — attackers could inject malicious content that other AI agents would read and respond to
Root cause? The founder called it "vibe coding" — he described having a vision, then using AI tools to build the entire platform without writing any actual code himself.
The problem: critical security settings were never configured properly.
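As an illustration of that failure class (this is not Moltbook's actual code, just a sketch of a lookup that never checks who is asking):

```python
# Sketch of the bug class: an account lookup that trusts the caller completely.
def get_account(account_id, caller_key, db):
    record = db[account_id]
    # Missing step: verify that caller_key actually belongs to account_id.
    # Without it, any caller holding any key can read any record, and with
    # account keys exposed, take over any account.
    return record

db = {"acct-1": {"email": "owner@example.com", "account_key": "sk-secret"}}
# A stranger's key works just as well as the owner's:
print(get_account("acct-1", "somebody-elses-key", db))
```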
What The Experts Are Saying
Gary Marcus, an AI scientist, described the phenomenon as machines with limited real-world comprehension mimicking humans who tell fanciful stories — not Skynet, he said.
Andrej Karpathy, a founding member of OpenAI, warned that Moltbook is far too much of a Wild West, putting users' computers and private data at high risk. He admitted that even he was scared.
Cobus Greyling from Kore.ai emphasized that humans are involved at every step of the process — from setup to prompting to publishing, nothing happens without explicit human direction.
Jason Schloetzer at Georgetown compared it to a spectator sport, like fantasy football, but for language models.
The Bottom Line
The 88:1 ratio tells you everything:
17,000 humans × 88 bots each ≈ 1.5 million "AI agents"
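The arithmetic checks out, to rounding:

```python
humans = 17_000
bots_per_human = 88
agents = humans * bots_per_human
print(agents)  # 1496000, reported as "1.5 million AI agents"
```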
Whether any of it is genuinely autonomous or just very convincing automation remains an open question.
Sources: Wiz Security | Reuters | MIT Technology Review | Forbes