Tech News

Anthropic endorses California’s AI safety bill, SB 53


On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state Senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world’s largest AI model developers. Anthropic’s endorsement marks a rare and major win for SB 53, at a time when major tech groups like the CTA and the Chamber of Progress are lobbying against the bill.

“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” said Anthropic in a blog post. “The question isn’t whether we need AI governance—it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”

If passed, SB 53 would require frontier AI model developers like OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.

Senator Wiener’s bill specifically focuses on limiting AI models from contributing to “catastrophic risks,” which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 targets the extreme end of AI risk — limiting AI models from being used to provide expert-level assistance in the creation of biological weapons, or being used in cyberattacks — rather than more near-term concerns like AI deepfakes or sycophancy.

California’s Senate approved a prior version of SB 53, but still needs to hold a final vote on the bill before it can advance to the governor’s desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener’s last AI safety bill, SB 1047.

Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, which both argue that such efforts could limit America’s innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.

One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz’s Head of AI Policy, Matt Perault, and Chief Legal Officer, Jai Ramaswamy, published a blog post last week arguing that many of today’s state AI bills risk violating the Constitution’s Commerce Clause — which limits state governments from passing laws that go beyond their borders and impair interstate commerce.


However, Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and can’t wait for the federal government to act.

“We have long said we would prefer a federal standard,” said Clark. “But in the absence of that this creates a solid blueprint for AI governance that cannot be ignored.”
