Anthropic is in the midst of a splashy media tour in Washington, but its refusal to allow its models to be used for some law enforcement purposes has deepened hostility toward the company inside the Trump administration, two senior officials told Semafor.

Anthropic recently declined requests from contractors working with federal law enforcement agencies because the company will not make an exception to its policy barring the use of its AI tools for certain tasks, including surveillance of US citizens, said the officials, who spoke to Semafor on the condition of anonymity.

The tensions come at a moment when Donald Trump’s White House has championed American AI companies as patriotic bulwarks of global competition, and expects those companies to repay that loyalty.

The officials said they worried that Anthropic was selectively enforcing its policies based on politics and using vague terminology that allows its rules to be interpreted broadly. Anthropic currently limits how the FBI, the Secret Service, and Immigration and Customs Enforcement can use its AI models, for instance, because those agencies conduct surveillance, which is prohibited by Anthropic’s usage policy.

One of the officials said Anthropic’s position, which has long been in effect, amounts to making a moral judgment about how law enforcement agencies do their jobs. The policy does not define what it means by “domestic surveillance” in a law enforcement context and appears to use the term broadly, leaving room for interpretation.

Other AI model providers also restrict surveillance, but they offer more specific examples and often carve out exceptions for law enforcement activities. OpenAI’s policy, for instance, prohibits “unauthorized monitoring of individuals,” implying that legally authorized monitoring by law enforcement is permitted.

Anthropic declined to comment.