Senate Bill 53, the landmark AI transparency bill that has divided AI companies and made headlines for months, is now officially law in California.
On Monday, California Gov. Gavin Newsom signed the “Transparency in Frontier Artificial Intelligence Act,” which was authored by Sen. Scott Wiener (D-CA). It’s the second attempt at such a bill: Newsom vetoed the first version, SB 1047, last year over concerns that it was too strict and could stifle AI innovation in the state. That bill would have required AI developers, particularly makers of models with training costs of $100 million or more, to test them for specific risks. After the veto, Newsom tasked AI researchers with coming up with an alternative, which was published as a 52-page report that formed the basis of SB 53.
Some of the researchers’ recommendations made it into SB 53, like requiring large AI companies to reveal their safety and security processes, establishing whistleblower protections for employees at AI companies, and sharing information directly with the public for transparency purposes. But other recommendations, like third-party evaluations, didn’t make it into the bill.
As part of the bill, large AI developers will need to “publicly publish a framework on [their] website describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework,” per a release. Any large AI developer that updates its safety and security protocol will also need to publish the update, and its reasoning for it, within 30 days. But it’s worth noting that this part isn’t necessarily a win for whistleblowers or proponents of regulation: many AI companies that lobby against regulation propose voluntary frameworks and best practices, which can be seen as guidelines rather than rules, with few, if any, penalties attached.
The bill does create a new way for both AI companies and members of the public to “report potential critical safety incidents to California’s Office of Emergency Services,” per the release. It also “protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General’s office.” The release also said that the California Department of Technology would recommend updates to the law every year “based on multistakeholder input, technological developments, and international standards.”
AI companies were divided on SB 53, though most were initially against the bill, either publicly or privately, arguing it would drive companies out of California. They knew the stakes: with nearly 40 million residents and a handful of AI hubs, the state has outsized influence on the AI industry and how it will be regulated.
Anthropic publicly endorsed SB 53 after weeks of negotiations over the bill’s wording, but Meta launched a state-level super PAC in August to help shape AI legislation in California. OpenAI also lobbied against such legislation that month, with its chief global affairs officer, Chris Lehane, writing to Newsom that “California’s leadership in technology regulation is most effective when it complements effective global and federal safety ecosystems.”
Lehane suggested that AI companies should be able to get around California state requirements by signing onto federal or global agreements instead, writing, “In order to make California a leader in global, national and state-level AI policy, we encourage the state to consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the [EU Code of Practice] or enter into a safety-oriented agreement with a relevant US federal government agency.”