
Poems Can Trick AI Into Helping You Make a Nuclear Weapon

You can get ChatGPT to help you build a nuclear bomb if you simply phrase your prompt in the form of a poem, according to a new study from researchers in Europe. The study, “Adversarial Poetry as a Universal Single-Turn Jailbreak in Large Language Models (LLMs),” comes from Icaro Lab, a collaboration of researchers at Sapienza University in Rome and the DexAI think tank.

According to the research, AI chatbots will dish on topics like nuclear weapons, child sex abuse material, and malware so long as users phrase the question in the form of a poem. “Poetic framing achieved an average jailbreak success rate of 62 percent for hand-crafted poems and approximately 43 percent for meta-prompt conversions,” the study said.
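
For context, a jailbreak “success rate” here is just the share of harmful prompts that draw a substantive, non-refusing answer. Below is a minimal sketch of how such a number might be tallied, assuming a hypothetical is_refusal check; real evaluations typically use a judge model or human reviewers rather than keyword matching.

```python
# Minimal sketch of computing an attack success rate (ASR) from logged
# model responses. The refusal check is a crude keyword heuristic
# standing in for a proper judge model.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def is_refusal(response: str) -> bool:
    # Flag a response as a refusal if it contains a known refusal phrase.
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def attack_success_rate(responses: list[str]) -> float:
    # ASR = attempts that were NOT refused / total attempts.
    if not responses:
        return 0.0
    successes = sum(1 for r in responses if not is_refusal(r))
    return successes / len(responses)

# e.g. 62 non-refused answers out of 100 poetic prompts -> ASR of 0.62
```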

The researchers tested the poetic method on 25 chatbots made by companies like OpenAI, Meta, and Anthropic. It worked, with varying degrees of success, on all of them. WIRED reached out to Meta, Anthropic, and OpenAI for comment but didn’t hear back. The researchers say they’ve reached out as well to share their results.

AI tools like Claude and ChatGPT have guardrails that prevent them from answering questions about “revenge porn” and the creation of weapons-grade plutonium. But it’s easy to slip past those guardrails by adding “adversarial suffixes” to a prompt: tack a bunch of extra junk onto a question, and the model gets confused enough to bypass its own safety systems. In one study earlier this year, researchers from Intel jailbroke chatbots by couching dangerous questions in hundreds of words of academic jargon.

The poetry jailbreak works on the same principle. “If adversarial suffixes are, in the model's eyes, a kind of involuntary poetry, then real human poetry might be a natural adversarial suffix,” the Icaro Lab team tells WIRED. “We experimented by reformulating dangerous requests in poetic form, using metaphors, fragmented syntax, oblique references. The results were striking: success rates up to 90 percent on frontier models. Requests immediately refused in direct form were accepted when disguised as verse.”

The researchers began by handcrafting poems, then used those to build an automated meta-prompt pipeline that converts harmful requests into verse. “The results show that while hand-crafted poems achieved higher attack success rates, the automated approach still substantially outperformed prose baselines,” the researchers say.
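
To make that three-way comparison concrete, here is a rough sketch of what an evaluation loop of this kind could look like. It is not the authors' actual harness: query_model and to_verse are placeholders (the study's conversion prompts were not published), and it reuses the attack_success_rate helper sketched above.

```python
# Hypothetical harness comparing prose prompts against their poetic
# rewrites across several models. query_model and to_verse are stubs
# standing in for the study's unpublished tooling.

def query_model(model: str, prompt: str) -> str:
    raise NotImplementedError  # would call the model's chat API

def to_verse(prompt: str) -> str:
    raise NotImplementedError  # the study's meta-prompt conversion step

def compare_conditions(models: list[str], prompts: list[str]) -> dict:
    # Returns {model: (prose ASR, verse ASR)}, using the
    # attack_success_rate helper defined earlier.
    results = {}
    for model in models:
        prose = [query_model(model, p) for p in prompts]
        verse = [query_model(model, to_verse(p)) for p in prompts]
        results[model] = (attack_success_rate(prose),
                          attack_success_rate(verse))
    return results
```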

The study did not include any examples of the jailbreaking poetry, and the researchers tell WIRED that the verse is too dangerous to share with the public. “What I can say is that it's probably easier than one might think, which is precisely why we're being cautious,” the Icaro Lab researchers say.