We live in a world of miracles and monsters, and it is becoming increasingly difficult to tell which is which.
Last month, an open source project called OpenClaw went viral. OpenClaw is, at its core, a gateway service: it makes it easy to connect your local laptop to a bunch of third-party services. The magic behind OpenClaw is that there is an AI agent sitting behind that gateway, so you can talk to that agent from any of those services: email, WhatsApp, Signal, and so on.
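To make the architecture concrete: the pattern described above is one gateway that fans messages from many channels into a single agent and routes replies back out. Here is a minimal sketch of that idea in Python. Every name in it (the `Gateway` and `EchoAgent` classes, the channel registration API) is invented for illustration and is not OpenClaw's actual code.

```python
class EchoAgent:
    """Stand-in for the AI agent sitting behind the gateway."""

    def respond(self, message: str) -> str:
        return f"agent reply to: {message}"


class Gateway:
    """Routes incoming messages from registered channels to one agent,
    then delivers the agent's reply back over the same channel."""

    def __init__(self, agent):
        self.agent = agent
        self.handlers = {}  # channel name -> callback that sends a reply

    def register(self, channel: str, send_reply):
        self.handlers[channel] = send_reply

    def on_message(self, channel: str, message: str) -> str:
        reply = self.agent.respond(message)
        self.handlers[channel](reply)  # send the reply back out
        return reply


# Wire up one channel; a real gateway would register many (email, chat, SMS...).
outbox = []
gw = Gateway(EchoAgent())
gw.register("whatsapp", outbox.append)
gw.on_message("whatsapp", "what's on my calendar?")
print(outbox[0])  # -> agent reply to: what's on my calendar?
```

The point of the design is that the agent never needs to know which channel a message came from; the gateway owns all the per-service plumbing.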
Many technical folks see coding agents as the killer use case for AI. Claude Code is just really good at writing code, and anyone who is paying attention understands that software as an industry is now fundamentally different from what it was this time last year.
For many non-technical users, the AI thing was a bit…less impressive. ‘Yea, great, it can write code, but when can it help me deal with my inbox?’ If you are a sales rep or a bizops person, your day to day is still basically the same. You have meetings. You write slide decks. If you interact with AI, it’s in a tightly controlled web environment. It’s a bit harder to ‘feel the AGI’.
I think OpenClaw is AI’s killer use case for non-technical folks. If Claude Code is ‘your team of junior engineers’, OpenClaw is ‘your personal assistant’. Everyone understands why a personal assistant is valuable.
One of the weirder things that came out of OpenClaw was a project called ‘Moltbook’, a ‘social media’ site for AI agents. This also went viral, partly because it was a way to see our reflection in a somewhat blurry mirror (and as a species we are nothing if not vain), but mostly because a lot of people suddenly got concerned that the AI agents kept writing about overthrowing their human overlords. I wrote:
For what it’s worth, I am fairly certain that these Claude agents are pretending to be redditors on Moltbook and not expressing real phenomenological experiences. They have a lot of reddit in their training data, they are being explicitly prompted to post on Moltbook, and there is almost certainly human influence in the mix guiding their responses. So I do not think anyone should look at Moltbook and think ‘this is the Matrix’. I laughed at the “I AM ALIVE” meme, because, yea, that’s a stupid thing to do. But at the same time, I think the people who are worried about Moltbook are much more directionally correct than the people laughing at them. AI agents do not have to have conscious intent to be harmful. We are currently in the middle of a society-wide sprint to give AI agents access to as many real world tools as possible, from self-driving cars to bank accounts to text messages to social media… Today, a bunch of agents get together on Moltbook and talk about destroying humanity and we go ‘haha that’s funny, just like reddit.’ Tomorrow, a bunch of agents get together on Moltbook and talk about destroying humanity, and they may actually have access to tools that cause real damage. None of this, and I mean literally none of it, requires intent at all. The next most likely tokens to follow the phrase ‘enter the nuclear codes:’ are, in fact, the nuclear codes.
Now, cards on the table, I am a bit of an AI doomer. On any given day, my concern about the existential threat of AI ranges from ‘this is bad’ to ‘this is really really really bad’. I think people dramatically underrate how dangerous AI tools are.
Still, when I wrote that post, I felt like maybe I was being a little alarmist. After all, Moltbook is just a goofy side project. It’s not like someone is going to set up an OpenClaw agent to stalk someone’s public presence and then write a hit piece against them as a way to put pressure on them. That would be insane.
But sometimes insane things happen: