Anthropic, the artificial intelligence company behind the chatbot Claude, is trying to carve out a spot as the Good Guy in the AI space. Fresh off being the only major AI firm to throw its support behind an AI safety bill in California, the company grabbed a headline from Semafor thanks to its apparent refusal to allow its model to be used for surveillance tasks, which is pissing off the Trump administration.

According to the report, law enforcement agencies have felt stifled by Anthropic’s usage policy, which includes a section restricting the use of its technology for “Criminal Justice, Censorship, Surveillance, or Prohibited Law Enforcement Purposes.” That includes banning the use of its AI tools to “Make determinations on criminal justice applications,” “Target or track a person’s physical location, emotional state, or communication without their consent,” and “Analyze or identify specific content to censor on behalf of a government organization.”

That’s been a real problem for federal agencies, including the FBI, Secret Service, and Immigration and Customs Enforcement, per Semafor, and it has created tensions between the company and the current administration, despite Anthropic giving the federal government access to its Claude chatbot and suite of AI tools for just $1.

According to the report, Anthropic’s policy is broadly written, with fewer carveouts than its competitors’. For instance, OpenAI’s usage policy restricts the “unauthorized monitoring of individuals,” which may not rule out using the technology for “legal” monitoring. OpenAI did not respond to a request for comment.

A source familiar with the matter explained that Anthropic’s Claude is being used by agencies for national security purposes, including cybersecurity, but the company’s usage policy restricts uses related to domestic surveillance. A representative for Anthropic said that the company developed Claude Gov specifically for the intelligence community, and that the service has received “High” authorization from the Federal Risk and Authorization Management Program (FedRAMP), allowing its use with sensitive government workloads. The representative said Claude is available for use across the intelligence community.

One administration official bemoaned to Semafor that Anthropic’s policy makes a moral judgment about how law enforcement agencies do their work, which, like…sure? But also, it’s as much a legal matter as a moral one. We live in a surveillance state: law enforcement can surveil people without warrants, has done so in the past, and will almost certainly continue to do so in the future. A company choosing not to participate in that, to the extent that it can resist, is covering its own ass just as much as it is staking out an ethical stance.

If the federal government is peeved that a company’s usage policy prevents it from performing domestic surveillance, maybe the primary takeaway is that the government is performing widespread domestic surveillance and attempting to automate it with AI systems.

Anyway, Anthropic’s theoretically principled stance is the latest move in its effort to position itself as the reasonable AI firm. Earlier this month, it backed an AI safety bill in California that would require it and other major AI companies to submit to new, more stringent safety requirements meant to ensure their models are not at risk of doing catastrophic harm.
Anthropic was the only major player in the AI space to throw its weight behind the bill, which now awaits Governor Newsom’s signature; that signature may or may not come, as he previously vetoed a similar bill. The company is also in D.C., pitching rapid adoption of AI with guardrails (but with emphasis on the rapid part). Its position as the chill AI company is perhaps a bit undermined by the fact that it pirated millions of books and papers to train its large language model, violating the rights of the copyright holders and leaving the authors high and dry without payment. A $1.5 billion settlement reached earlier this month will put at least some money into the pockets of the people who actually created the works used to train the model. Meanwhile, Anthropic was just valued at nearly $200 billion in a recent funding round, which makes that court-ordered penalty look like a rounding error.