Amid a weekslong conflict with the Pentagon, resulting in a blacklist and a lawsuit, Anthropic is shaking up its C-suite and research initiatives. The company announced Wednesday that it’s launching a new internal think tank, called the Anthropic Institute, that combines three of Anthropic’s current research teams. It will focus on researching AI’s large-scale implications, such as “what happens to jobs and economies, whether AI makes us safer or introduces new dangers, how its values might shape ours, and whether we can retain control,” per the company.
The news comes with C-suite changes, too. Anthropic cofounder Jack Clark is moving into a new role leading the think tank. His new title will be head of public benefit, after more than five years as head of public policy. The public policy team — which tripled in size in 2025, per Anthropic — will now be led by Sarah Heck, who was formerly head of external affairs. Anthropic will also open its planned office in Washington, DC, and the public policy team will continue to focus on issues like national security, AI infrastructure, energy, and “democratic leadership in AI.”
Clark told The Verge that the Anthropic Institute’s debut has been in the works for a while, and that he’s been thinking about moving into a role like this since November. But the timing comes just days after Anthropic sued the US government over its designation as a supply-chain risk, which would bar its clients from using Anthropic’s tech at all in their own work with the Department of Defense. The suit alleges that the Trump administration illegally blacklisted the company for setting “red lines” on mass domestic surveillance and fully autonomous lethal weapons.
When asked about it, Clark said, “It’s never dull working in AI here at Anthropic — there’s always something going on … The pace of AI progress isn’t slowing itself down for external events, and neither are we.” Clark said the situation hasn’t “directly changed” the planned research agenda but that he felt it “has affirmed” Anthropic’s decision to release more information to the public. “What we’re experiencing with the last few weeks just sort of shows you how much hunger there is for a larger national conversation by the public about this technology,” he said.
The Anthropic Institute launches with about 30 people, including founding members Matt Botvinick, formerly of Google DeepMind; Anton Korinek, a professor on leave from the University of Virginia’s department of economics; and Zoe Hitzig, a researcher who left OpenAI after its decision to introduce ads within ChatGPT. The new think tank combines Anthropic’s societal impacts team, which studies AI’s effects on different areas of society; its frontier red team, which stress-tests AI systems for vulnerabilities and other issues; and its economic research team, which tracks AI’s implications for the economy and the labor market. The Anthropic Institute also plans to “incubate” new teams, such as one led by Botvinick studying how AI will affect the legal system. Hitzig and Korinek will lead large economic research projects. Clark said he expects the think tank’s headcount to double every year for the foreseeable future.
This year in particular, pressure is mounting on high-valuation AI companies like Anthropic, which reportedly plans to IPO this year. Anthropic’s court filings revealed that the company has generated more than $5 billion in all-time commercial revenue and has spent $10 billion to date on model training and inference. The filings also said that the company has “received outreach from numerous outside partners … expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic” and that “dozens of companies have contacted Anthropic” seeking guidance and, “in some cases, an understanding of their termination rights.” Anthropic said that depending on exactly how the government’s prohibition is interpreted, at minimum “hundreds of millions of 2026 revenue is at risk,” and in the most severe case, the figure would be multiple billions.
Is Anthropic concerned about devoting more resources to long-term research when it’s very likely to lose some portion of its income in the short term? When The Verge asked Clark, he said he had “no concerns.”
“People tend to buy trust,” Clark said. “A lot of what we can produce are the sorts of research that help businesses trust us … Long-term, Anthropic has always viewed its investment in safety — and studying and reporting on the safety of its systems — as being not a cost center but a profit center.”
Clark also said he believes that powerful AI (essentially Anthropic’s own term for AGI, or artificial general intelligence) will arrive by the end of this year or in early 2027, and that he decided to change roles largely due to the “pace of AI progress.” He added that when he looked back at his work last year, he had focused more on policy matters, like SB 53, than he had on AI R&D and other areas he wanted to give attention to. Anthropic said in a release that the Anthropic Institute is specifically dedicated to answering the “hardest questions posed by powerful AI.”