Skip to content
Tech News
← Back to articles

Anthropic’s Claude Code gets ‘safer’ auto mode

read original get AI Safety Coding Kit → more articles
Why This Matters

Anthropic's introduction of 'auto mode' for Claude Code marks a significant step toward safer autonomous AI operations, providing a balance between control and independence. This development is crucial for the tech industry as it aims to mitigate risks associated with AI making permissions-level decisions, enhancing safety for consumers and developers alike.

Key Takeaways

is a London-based reporter at The Verge covering all things AI and Senior Tarbell Fellow. Previously, he wrote about health, science and tech for Forbes.

Posts from this author will be added to your daily email digest and your homepage feed.

Anthropic has launched an “auto mode” for Claude Code, a new tool that lets AI make permissions-level decisions on users’ behalf. The company says the feature offers vibe coders a safer alternative between constant handholding or giving the model dangerous levels of autonomy.

Claude Code is capable of acting independently on users’ behalf, a useful but risky feature as it can also do things users don’t want, like deleting files, sending out sensitive data, and executing malicious code or hidden instructions. Auto mode is designed to prevent this, flagging and blocking potentially risky actions before they run and offering the agent a chance to try again or ask a user to intervene.

Right now, auto mode is only available as a research preview for Team plan users. Anthropic says access will expand to include Enterprise and API users in “the coming days.”

Anthropic warns the tool is experimental and “doesn’t eliminate” risk entirely, recommending developers use it in “isolated environments.”