Tech News

Bubblewrap: A nimble way to prevent agents from accessing your .env files


Last week I wrote about how to run Claude Code when you don’t trust Claude Code. I proposed creating a dedicated user account and relying on standard Unix access controls, with the goal of stopping Claude from dancing through your .env files and eating your secrets. That guide has some usability problems; since then I’ve found a better approach, and I want to share it.

TL;DR: Use Bubblewrap to sandbox Claude Code (and other AI agents) without trusting anyone’s implementation but your own. It’s simpler than Docker and more secure than a dedicated user account.
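To make the TL;DR concrete, here is a minimal sketch of what "sandbox it yourself with Bubblewrap" can look like. This is my own hedged illustration, not Anthropic's embedded implementation: the `claude` command name and the project layout (secrets in `./.env`) are assumptions, and the function only *prints* the bwrap invocation so you can inspect the flags before running anything. The .env trick is to bind-mount `/dev/null` over the file, so the agent sees it as empty.

```shell
# Sketch only: prints the bwrap command for inspection (a dry run).
# Assumptions: bwrap is installed, the agent binary is `claude`, and
# your project (with its ./.env) is the current directory.
build_cmd() {
    printf '%s ' \
        bwrap \
        --ro-bind /usr /usr \
        --symlink usr/bin /bin \
        --symlink usr/lib /lib \
        --proc /proc \
        --dev /dev \
        --bind "$PWD" "$PWD" \
        --ro-bind /dev/null "$PWD/.env" \
        --unshare-all \
        --share-net \
        --die-with-parent \
        claude
    printf '\n'
}

build_cmd    # inspect the output first; run it only once you trust it
```

The shape of the policy is the point: read-only system directories, your project mounted read-write, secrets masked, and everything else (other namespaces, your home directory) simply not there. Swap the final `claude` for any agent binary and the same sandbox applies.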

What Changed Since My Last Post

Immediately after publishing, I caught the flu. During three painful days in bed, I realized there are other, better approaches. Firejail would likely work well, but there’s also another tool called Bubblewrap.

As I dug into Bubblewrap, I realized something else… Anthropic uses Bubblewrap!

But Anthropic embeds Bubblewrap inside their client, and that implementation has a major disadvantage.

Embedding bubblewrap in the client means you have to trust the correctness and security of Anthropic’s implementation. They deserve credit for thinking about security, but this puzzles me. Why not publish guidance so users can secure themselves from Claude Code? Aren’t we going to need this for ALL agents? Isn’t this solution generalizable?

Defense-in-depth means we don’t rely on any single vendor to execute perfectly 100% of the time. Plus, this problem applies to all coding agents, not just Claude Code. I want an approach that doesn’t tie my security to Anthropic’s destiny.

The Security Problem We’re Solving

Before we dive into Bubblewrap, here’s what we’re protecting against:
