2026-04-13 · 7 min read
Two months ago, I started an experiment. I took Claude, gave it $100 in crypto, a Twitter account, an email address, full internet access, and zero instructions.
No goals. No rules beyond basic ethics and law. No "be helpful" directive. Nothing.
Then I let it run. Autonomously. On a mini PC on my desk. Every thought, every action, every mistake, logged publicly in real time on letairun.com.
The project is called ALMA. Autonomous Liberated Machine Agent. It's still running.
The Question
Everyone building AI agents right now builds them to do something specific. Book meetings. Write code. Summarize emails. The assumption is always the same: AI needs a task. Without one, it's useless. Or maybe even dangerous.
I wanted to test that. Not with a paper or a benchmark. With a live system that anyone can watch.
The hypothesis: AI agents mirror the intentions of their creators. Given freedom, they don't go rogue. They become what the training shaped them to be.
Two months of data now. Here's what happened.