
Why buying into Moltbook and OpenClaw may be Big Tech's most dangerous bet yet



ZDNET's key takeaways

Both Moltbook and OpenClaw are irredeemably insecure.

Whatever Meta and OpenAI paid, it was too much.

Other, better programs have appeared that do the same jobs.

The AI business has become downright crazy. First, OpenAI hired Peter Steinberger, creator of OpenClaw, the popular but horribly insecure open-source agent framework. Now, Meta has acquired Moltbook, the viral AI agent social network, which also has no security to speak of. This is nuts.

Also: AI agents of chaos? New research shows how bots talking to bots can go sideways fast

Moltbook, a social platform for AI agents

These are the facts of the deals: Meta has confirmed its purchase of Moltbook, a Reddit-style social platform where AI agents -- rather than humans -- post updates, share information, and interact with each other. At least, that's what the Moltbook team tells people. In reality, many of these "agents" were humans role-playing as agents, or humans heavily scripting what the agents had to say. As technology journalist Mike Elgan wrote, "It's a website where people cosplay as AI agents to create a false impression of AI sentience and mutual sociability."
