
Anthropic nuked a company's access to Claude, stopping 60 employees dead in their tracks — support via Google Form is the only recourse for vague usage policy violation

Why This Matters

The incident highlights the risk of relying heavily on AI service providers with opaque policies and automated enforcement, which can disrupt business operations without warning. It underscores how important it is for companies and consumers to diversify their tools and establish clear communication channels with service providers, and it raises broader concerns about transparency and customer support in the rapidly evolving AI industry.

Key Takeaways

"Never put your eggs in one basket," said Belo CEO Patricio Molina on Twitter, on the aftermath of having his company frozen still by Anthropic. The AI maker cut off Belo's access to Claude with no explanation other than a vague message about breaking the service's usage policy, and magnanimously offered a Google Form as the only point of contact.

The shutdown happened last Friday, and the only communication from Anthropic was the presumably automated email, with zero detail about exactly which rules were broken and how. Turning off the Claude tap left Belo's 60 employees dead in the water, as their daily workflows reportedly rely on the AI assistant's integrations, skills, and conversation histories.

The situation was covered across many outlets and generated quite the blowback for the Claude creators. Amid the public outcry, Belo's access was thankfully restored after 15 hours, but there's no telling whether that happened of Anthropic's own volition or because of the bad PR.


A cursory look at comments on the original X thread and elsewhere seems to indicate that Anthropic's modus operandi is to shoot first and ask questions never, with many users claiming they had been filling out the aforementioned Google Form for months to no avail. After the service was restored, the only apparent justification was that the ban had been a false positive, likely from some automated system Anthropic uses.

@claudeai you took down our entire organization with 60+ accounts belonging to a legitimate company for no apparent reason, without any explanations. The only way to appeal the decision is by filling out a Google Form? Very bad UX and customer service. pic.twitter.com/lV4IXiI3B5
April 17, 2026

For his part, Molina sees this as the lesson to "never put all your eggs in one basket." Many commenters were quick to point out that he should never have coupled his company so closely with Claude to begin with, a reasonable critique in itself. However, it's worth noting that the story could easily have played out the same way with Amazon Web Services, Azure, or an authentication provider like Okta.

Many users suggested that Molina would have been better off running his own models locally, of which there are plenty, like OpenClaw. That's not as easy a sell as it seems: it involves running your own infrastructure, and Claude is seemingly head and shoulders above the competition, at least in programming and automation. Even so, the old adage of "better safe than sorry" applies, and even open-source models are getting pretty darn competent these days.
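To give a sense of what "running your own models" looks like in practice, here is a minimal sketch that queries a locally hosted model through an OpenAI-compatible chat endpoint, assuming a server like Ollama is running on its default port. The endpoint, port, and model name below are illustrative assumptions, not details from the article.

```python
# Minimal sketch: query a locally hosted model via an OpenAI-compatible
# chat completions endpoint. Assumes a local server (e.g., Ollama) is
# listening on its default port; the model name is illustrative.
import requests

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send one chat message to the local server and return the reply text."""
    response = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,  # local inference can be slow on modest hardware
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarize this week's deployment checklist."))
```

Nothing here depends on a specific vendor: any server that speaks the OpenAI chat completions format can sit behind that URL, which is exactly why local setups are pitched as a hedge against a single provider's ban hammer.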

Some commenters pointed out that having one sole provider for anything is a supply chain risk, and suggested always keeping more than one AI service active and running, even if it involves partially or completely duplicating work. As with most big tech companies, customers are at their mercy, should they want to take the service down for whatever reason.
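The multi-provider failover the commenters describe can be as simple as trying services in order until one answers. The sketch below is a generic illustration of that pattern; the provider functions are hypothetical stubs, not any real vendor's API.

```python
# Hedged sketch of multi-provider failover: try each AI service in order
# and return the first successful response. Provider functions are
# hypothetical stubs, not any real vendor's SDK.
from typing import Callable

def call_with_fallback(prompt: str, providers: list[Callable[[str], str]]) -> str:
    """Try each provider in turn; raise only if every one of them fails."""
    last_error: Exception | None = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # e.g., an HTTP 403 after a surprise account ban
            last_error = exc
    raise RuntimeError("All AI providers failed") from last_error

def primary_provider(prompt: str) -> str:
    raise NotImplementedError("call your main vendor's API here")

def backup_provider(prompt: str) -> str:
    raise NotImplementedError("call a second vendor, or a local model, here")

# Usage:
# reply = call_with_fallback("Draft the release notes", [primary_provider, backup_provider])
```

The duplicated work the commenters mention is the price of keeping the second account warm, with billing active and prompts tuned per model, so the fallback actually works on the day the primary disappears.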

