In the so-called "constitution" for its chatbot Claude, AI company Anthropic claims to be committed to principles based on the Universal Declaration of Human Rights, instructing the chatbot to prioritize freedom, equality, freedom of thought, and adequate standards of living in its responses.

Anthropic has long attempted to distinguish itself as putting safety first. The firm was founded by former OpenAI employees with a commitment to advancing AI ethically and responsibly. But four years on, the company's human leaders are singing a dramatically different tune.

As Wired reports, CEO Dario Amodei acknowledged in a Slack message that taking funding from Middle East leaders — which Anthropic is poised to do, having made overtures to the United Arab Emirates and Qatar — would line the pockets of "dictators."

"This is a real downside and I'm not thrilled about it," Amodei wrote. "Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on."

Amodei's remarks highlight how even AI companies that built their brands on ethical practices are abandoning those commitments as they race to secure funding for enormous — and environmentally damaging — AI infrastructure expansion projects. Even Anthropic, which has long touted itself as a more ethical alternative to the likes of OpenAI, is giving in to the temptation of Gulf State money.

That's despite Amodei citing national security concerns when he turned down Saudi Arabian funds last year. In a major reversal, the CEO is now seeing dollar signs — and can't let his conscience win out again.

"There is a truly giant amount of capital in the Middle East, easily $100B or more," Amodei wrote in the Slack messages, as quoted by Wired. "If we want to stay on the frontier, we gain a very large benefit from having access to this capital," he wrote. "Without it, it is substantially harder to stay on the frontier."
Amodei tried to spin the narrative in his favor by arguing that the company would only be taking money from Gulf countries, not building out infrastructure there. He argued that it's "dangerous" to hand "authoritarian governments" key AI hardware.

The news comes after OpenAI announced it would be part of Trump's $500 billion AI infrastructure project, dubbed Stargate. The massive venture, which is currently struggling to get off the ground, is backed by the United Arab Emirates' royal family, which has an abysmal human rights record, from detaining prisoners of conscience to the torture of migrant workers.

Amodei also threw competing AI companies under the bus, accusing the United States of having "failed to prevent" a "race to the bottom where companies gain a lot of advantage by getting deeper and deeper in bed with the Middle East."

The terrible optics of Anthropic taking Gulf State money were apparent to Amodei himself, who accused the media of "always looking for hypocrisy, while also being very stupid and therefore having a poor understanding of substantive issues." Given the blowback following the leak of his memo, Amodei appears to have accurately predicted the resulting "comms headache."

In the same breath, he revealed that his AI company has a keen interest in serving the Middle East "commercially," which he believes is a "pure positive as long as we don't build data centers there and as long as we enforce our [acceptable use policy]."

The memo paints a picture of an AI industry caught between a rock and a hard place. As ambitions grow, companies are throwing out the rulebook as they scramble to lock down enough funding to realize their grandiose dreams. Even Anthropic, once seen as a more principled alternative to OpenAI, has fallen to the temptation of a virtually endless stream of Middle Eastern funding, marking the beginning of a new chapter — ethics be damned.
More on Anthropic: OpenAI and Anthropic Are Horrified by Elon Musk's "Reckless" and "Completely Irresponsible" Grok Scandal