Google 's former CEO Eric Schmidt has issued a stark reminder about the dangers of AI and how susceptible it is to being hacked. Schmidt, who served as Google's chief executive from 2001 to 2011, warned about "the bad stuff that AI can do," when asked whether AI is more destructive than nuclear weapons during a fireside chat at the Sifted Summit "Is there a possibility of a proliferation problem in AI? Absolutely," Schmidt said Wednesday. The proliferation risks of AI include the technology falling into the hands of bad actors and being repurposed and misused. "There's evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone," Schmidt said. "All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There's evidence that they can be reverse-engineered, and there are many other examples of that nature." AI systems are vulnerable to attack, with some methods including prompt injections and jailbreaking. In a prompt injection attack, hackers hide malicious instructions in user inputs or external data, like web pages or documents, to trick the AI into doing things it's not meant to do — such as sharing private data or running harmful commands Jailbreaking, on the other hand, involves manipulating the AI's responses so it ignores its safety rules and produces restricted or dangerous content. In 2023, a few months after OpenAI's ChatGPT was released, users employed a "jailbreak" trick to circumvent the safety instructions embedded in the chatbot. This included creating a ChatGPT alter-ego called DAN, an acronym for "Do Anything Now," which involved threatening the chatbot with death if it didn't comply. The alter-ego could provide answers on how to commit illegal activities or list the positive qualities of Adolf Hitler. Schmidt said that there isn't a good "non-proliferation regime" yet to help curb the dangers of AI.