University of Montreal professor Yoshua Bengio is considered one of the "godfathers of AI," whose academic work laid the groundwork for today's red-hot AI arms race. He's also the founder and scientific adviser of Mila, an AI research institute in Quebec, and recently launched a nonprofit research organization called LawZero, where he plans to develop ways to build safe AI models.

Now, in a new conversation with the Wall Street Journal, Bengio didn't mince words: at this rate, he believes we're headed down a dark path that could lead to the end of humankind.

"If we build machines that are way smarter than us and have their own preservation goals, that's dangerous," he told the paper. "It's like creating a competitor to humanity that is smarter than us."

"The thing with catastrophic events like extinction, and even less radical events that are still catastrophic like destroying our democracies, is that they're so bad that even if there was only a 1 percent chance it could happen, it's not acceptable," he continued.

Back in 2023, Bengio and hundreds of other AI experts called for a moratorium on AI development, saying the research community needed time to establish and standardize safety and ethical protocols. That never happened: rather than listen to concerned luminaries of the field, ambitious founders and their investors continued pouring hundreds of billions of dollars into advancing AI models.

As AI has grown more capable, Bengio worries that it's learning to behave deceitfully because it's been trained to "mostly imitate humans," who will themselves "lie and deceive and will try to protect themselves." The result can be all too human: AI that acts in its own self-interest above that of its creators.
"Recent experiments show that in some circumstances where the AI has no choice but between its preservation," Bengio told the WSJ, "which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals."

It's easy to fixate on "Terminator"-esque scenarios of rogue AI turning on humankind in some grand fashion, but Bengio warned that the risks could be subtler escalations of the misinformation and manipulation we've seen for years on social media. Or, rather than gaining any particular agency of its own, AI could end up being one more tool that humans use to hurt other humans.

"[AI] could influence people through persuasion, through threats, through manipulation of public opinion," he fretted to the paper. "There are all sorts of ways that they can get things to be done in the world through people. Like, for example, helping a terrorist build a virus that could create new pandemics that could be very dangerous for us."