In early January, a group of 90 or so political, community and thought leaders gathered in a New Orleans Marriott for a secret conference on artificial intelligence — so secret, in fact, that no one knew who else had been invited until they walked into the room. Church leaders and conservative academics were sitting next to labor union representatives. Progressive power brokers who’d drafted Bernie Sanders to run for president suddenly found themselves breathing the same air as MAGA talking heads. And the AI thought leaders who’d invited them to New Orleans were hoping that none of them would kill each other.
On Wednesday, the Future of Life Institute, one of the most authoritative voices in the world of AI safety, released the results of that meeting: the Pro-Human AI Declaration, a concise document with five guidelines on how AI development must be centered on humanity first, with a pointed focus on avoiding the concentration of power in the hands of the powerful; preserving the well-being of children, families and communities; and preserving human agency and liberty. It has the broadest range of signatories that I personally have ever seen on a single political document.
Powerful civic organizations well outside the tech world have signed onto the Declaration: major unions like the AFL-CIO, the American Federation of Teachers, and the Screen Writers Guild; religious organizations like the G20 Interfaith Forum Association and the Congress of Christian Leaders; the Progressive Democrats of America, the group that drafted Bernie Sanders to run as a Democrat in 2016; think tanks like the conservative Institute for Family Studies and advocacy groups like Parents RISE!.
The individual signatories range even further: former presidential candidate Ralph Nader, AFT president Randi Weingarten, Signal Foundation president Meredith Whittaker, The Blaze’s Glenn Beck, War Room’s Steve Bannon, Virgin Group founder Sir Richard Branson, former National Security Advisor Susan Rice, SAG-AFTRA members, and leaders of major evangelical organizations. More are expected to sign on in the next several days.
The meeting was held under the Chatham House Rule, and the list of attendees remains private. But the participants who agreed to speak to The Verge about the experience said that they’d been invited by Max Tegmark, the co-founder of FLI and an MIT professor who had been named to the TIME 100 AI list. “We spent a lot of time talking to him over the course of the last few months,” Weingarten, a powerful teachers’ union advocate, told The Verge in a phone interview. Though she was unable to make it to New Orleans, she was involved in drafting the document, and she’d found remarkable similarities between FLI’s worldview and AFT’s own “common sense guardrails” for using AI in schools. “We’ve been on parallel tracks for quite a while without knowing it.”
Joe Allen, the co-founder of Humans First and a former correspondent for Bannon’s show War Room, told The Verge that Tegmark had also invited him to New Orleans, as well as to an earlier proof-of-concept meeting in Manhattan. Though the wide range of attendees was jarring and the political tensions never fully dissipated, Allen was surprised by how quickly they all converged on the same positions: lethal weapons should not be controlled solely by AI; AI companies should not leverage children’s emotional attachment for profit; AI should not be granted legal personhood. (Even the least popular position in the Declaration was approved by 94 percent of attendees.)
“I think about it like, if there’s knowledge that there’s poison in the water supply, or that drugs are flooding schools — anything like that, in general — most people are going to be against it and it isn’t partisan,” he said. AI was slightly trickier in that people’s general opinion about specific AI models divided along party lines — Grok was the “based” AI and Anthropic was the “woke” AI — but to Allen, the distinction was meaningless. “Like, what does ‘based’ and ‘woke’ even mean at this point?”
“‘We will not have the luxury of debating all of those other issues if we don’t get this thing right. So let’s get this thing right.’”
Nearly a decade ago, FLI laid out a more optimistic set of 23 principles for AI research, written during the 2017 Asilomar Conference on Beneficial AI, which drew over 100 tech luminaries of the day. Signatories and endorsers of the Asilomar AI Principles included AI leaders like Sam Altman, Elon Musk, and Demis Hassabis; luminaries like Stephen Hawking and Ray Kurzweil; and representatives from major companies like Google, Intel, and Apple.
But this time, no one from the industry was invited, to say nothing of people on the level of Altman and Musk. “That was actually a very deliberate design choice,” Emilia Javorsky, the director of the Futures Program at FLI, told The Verge. Whenever she’d attended conferences and events about AI’s impact across society, she noticed that corporate interests would eventually become the dominant perspective in the room, “just by nature of their size and weight and funding capabilities.” Instead, the invitees were from civil society organizations, all of whom were experiencing mass disruption due to artificial intelligence, and all of whom were fed up with Big Tech shrugging off their concerns.