The nascent AI industry has attracted hundreds of billions of dollars in investment over the past few years, but it's still operating in a near-total regulatory vacuum. That's not to say it's been harmless: the tech has been linked to a wave of mental health breakdowns, suicides, and even murder, and that's before getting into allegations of user surveillance, copyright violations, and other harms to users and society.

Now, lawmakers are starting to play catch-up. This week, California Governor Gavin Newsom signed what proponents say is the first AI safety and transparency law in the US. The Transparency in Frontier Artificial Intelligence Act, also known as SB 53, requires AI companies with over $500 million in annual revenue to publicly disclose their safety and security protocols in fairly granular detail.

"California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said in a statement. "This legislation strikes that balance."

Critics might disagree. It's true that in some senses, the bill's scope is sweeping: it requires AI companies to do everything from sharing how they plan to mitigate potential "Terminator"-esque scenarios of rogue AI rising up against humanity, to reporting critical safety incidents, to creating new protections for whistleblowers. On the other hand, its penalties feel distinctly feeble.

Newsom vetoed a previous attempt at similar AI regulation last year, which would've demanded far more from the industry: it would've applied to a vastly larger number of companies by targeting any that spend upwards of $100 million on an AI model, for instance, and its penalties could've reached hundreds of millions of dollars for severe transgressions. The bill Newsom just signed, in contrast, caps fines at $1 million per violation, a mosquito bite to a centibillion-dollar company like OpenAI.

Tellingly, AI companies are trumpeting their support for the new law. An OpenAI spokesperson said the company was "pleased to see that California has created a critical path toward harmonization with the federal government — the most effective approach to AI safety." Meta and Anthropic quickly parroted that praise.

State lawmakers have proposed similar legislation in New York, and there have been piecemeal attempts to regulate aspects of AI elsewhere. But as the home of virtually every important AI company, California is in a unique position to set the agenda for meaningful regulation.

This bill might be better than nothing. But as a rule of thumb, if new regulation is greeted with open arms by the industry it's supposed to oversee, it's probably not much of a threat.

More on AI legislation: It's Now Officially Illegal to Use AI to Impersonate a Human Actor in Hollywood