Before allegedly throwing a Molotov cocktail at OpenAI CEO Sam Altman’s home, the 20-year-old accused attacker wrote about his fear that the AI race would cause humans to go extinct, The San Francisco Chronicle found. Two days later, Altman’s home appeared to be targeted a second time, according to The San Francisco Standard. Only a week earlier, an Indianapolis councilman reported 13 shots fired at his door, with a note that read, “No Data Centers,” after he’d supported a rezoning petition for a data center developer.
These unsettling incidents have set off alarms in and around the AI industry. There’s long been a vocal resistance to the technology, fueled by fears of job displacement, climate impact, and development without safety guardrails. AI workers themselves have warned about serious risks. The vast majority of critiques and demonstrations against AI have been nonviolent — including local resistance to energy-intensive AI data centers and protests urging a slowdown of the rapidly accelerating technology. Protesters have targeted AI companies directly with tactics like hunger strikes.
Groups that advocate against accelerated AI development explicitly denounced violence following the attacks on Altman’s home. Investigators have yet to determine the attackers’ motivations. But the limited information made public so far suggests an escalation of the backlash against the technology — and, perhaps, risk to industry players themselves.
Over the past few years, there have been a handful of other notable incidents rising to the level of threats and harassment aimed at local officials, according to a database of reports compiled by Princeton University’s Bridging Divides Initiative. Last year, for example, a community utility authority board member in Ypsilanti, Michigan, reported that masked protesters visited his home to protest a “high performance computing facility,” according to MLive, and one protester allegedly smashed a printer on his lawn.
Shortly after the first attack on Altman’s home, the CEO appeared to partially blame critical media coverage for the violence. Days earlier, The New Yorker had published a lengthy investigation that compiled over a hundred interviews and found that many people who had worked with him distrusted him and found inconsistencies in his actions. “There was an incendiary article about me a few days ago,” Altman wrote on his personal blog. “Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside. Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.” (He later walked back his rhetoric toward the article in response to a critique on X, writing, “That was a bad word choice and i wish i hadn’t used it.”)
Others took up the theme as well. White House AI adviser Sriram Krishnan, for example, wrote on X, “I think the doomers need to take a serious look at what they have helped incite and not just rely on ‘we condemn this and have said this is not the rational response’. This is the logical outcome of ‘If we build it everyone dies’” — a reference to a 2025 book by AI researchers Eliezer Yudkowsky and Nate Soares.
“A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology”
But Altman also recognized the way his industry could fuel highly emotional reactions from the general public. “A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology,” he wrote. “This is quite valid, and we welcome good-faith criticism and debate… While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally.”
Even beyond apocalyptic scenarios, AI is reshaping the world’s social fabric in unpredictable ways. Many reports have detailed the psychological spirals that talking to an AI system for days on end can send people down, including allegations of AI-induced psychosis, suicide, and murder. That’s layered on top of real-life experiences of job loss due to AI, plus more existential concern about the world AI will create. “Take any labor movement that has been potentially rightly concerned about disruption and change, and then supercharge that with the AI apocalypse, and then supercharge that with chatbot sycophancy and romantic partners that are telling you to kill your ex-husband or telling you to marry your therapist or whatever it is. It’s not a huge surprise that we’re seeing scary acts like this,” says Purdue University assistant political science professor Daniel Schiff.