A group of tech companies and academic institutions spent tens of thousands of dollars in the past month, likely between $17,000 and $25,000, on an ad campaign against New York’s landmark AI safety bill. The campaign may have reached more than two million people, according to Meta’s Ad Library.
The landmark bill is called the RAISE Act, or the Responsible AI Safety and Education Act, and days ago, a version of it was signed by New York Governor Kathy Hochul. The closely watched law dictates that AI companies developing large models (OpenAI, Anthropic, Meta, Google, DeepSeek, and others) must outline safety plans and meet transparency requirements for reporting large-scale safety incidents to the attorney general. But the version Hochul signed, which differs from the one passed by both the New York State Senate and the Assembly in June, was a rewrite that made it much more favorable to tech companies. A group of more than 150 parents had sent the governor a letter urging her to sign the bill without changes. And the group of tech companies and academic institutions, called the AI Alliance, was part of the charge to defang it.
The AI Alliance, the organization behind the opposition ad campaign, counts Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face among its members, which is not necessarily surprising. The group sent a letter to New York lawmakers in June expressing its “deep concern” about the bill, which it deemed “unworkable.” But the group isn’t just made up of tech companies. Its members include colleges and universities around the world, among them New York University, Cornell University, Dartmouth College, Carnegie Mellon University, Northeastern University, Louisiana State University, and the University of Notre Dame, as well as Penn Engineering and Yale Engineering.
The ads began running on November 23 under the title “The RAISE Act will stifle job growth.” They said the legislation “would slow down the New York technology ecosystem powering 400,000 high-tech jobs and major investments. Rather than stifling innovation, let’s champion a future where AI development is open, trustworthy, and strengthens the Empire State.”
When The Verge asked the academic institutions listed above whether they were aware they had been inadvertently made part of an ad campaign against widely discussed AI safety legislation, none responded to the request for comment except Northeastern, which replied but did not provide a comment by publication time. In recent years, OpenAI and its competitors have increasingly courted academic institutions, inviting them into research consortiums or offering technology directly to students for free.
Many of the academic institutions that are part of the AI Alliance aren’t directly involved in one-on-one partnerships with AI companies, but some are. For instance, Northeastern’s partnership with Anthropic this year translated to Claude access for 50,000 students, faculty, and staff across 13 global campuses, per Anthropic’s announcement in April. In 2023, OpenAI funded a journalism ethics initiative at NYU. Dartmouth announced a partnership with Anthropic earlier this month, a Carnegie Mellon University professor currently serves on OpenAI’s board, and Anthropic has funded programs at Carnegie Mellon.
The initial version of the RAISE Act stated that developers must not release a frontier model “if doing so would create an unreasonable risk of critical harm,” which the bill defines as the death or serious injury of 100 people or more, or $1 billion or more in damages to rights in money or property, stemming from the creation of a chemical, biological, radiological, or nuclear weapon. The definition also extends to an AI model that “acts with no meaningful human intervention” in a way that “would, if committed by a human,” constitute certain crimes. The version Hochul signed removed this clause. Hochul also extended the disclosure deadline for safety incidents and reduced fines, among other changes.
The AI Alliance has previously lobbied against AI safety policies, including the RAISE Act, California’s SB 1047, and President Biden’s AI executive order. It states that its mission is to “bring together builders and experts from various fields to collaboratively and transparently address the challenges of generative AI and democratize its benefits,” especially via “member-driven working groups.” Some of the group’s projects beyond lobbying have involved cataloguing and managing “trustworthy” datasets and creating a ranked list of AI safety priorities.
The AI Alliance wasn’t the only organization opposing the RAISE Act with ad dollars. As The Verge wrote recently, Leading the Future, a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), Palantir cofounder Joe Lonsdale, and OpenAI president Greg Brockman, has spent money on ads targeting the cosponsor of the RAISE Act, New York State Assemblymember Alex Bores. But Leading the Future is a super PAC with a clear agenda, whereas the AI Alliance is a nonprofit that’s partnered with a trade association and describes its mission as “developing AI collaboratively, transparently, and with a focus on safety, ethics, and the greater good.”