
AI can design viruses, toxins and other bioweapons. How worried should we be?


It’s hard to imagine that a snail could kill a person, but a particularly venomous group of marine molluscs called cone snails can. Their stings contain a cocktail of small proteins called conotoxins, some of which can block ion channels in the nervous system. No antivenom exists.

There are hundreds of thousands of conotoxin structures, and many are harmless to people or even medicinally useful: an approved treatment for chronic pain is derived from one, for instance. But research on specific dangerous conotoxins is highly restricted in some countries.

So, in 2024, when Chinese scientists reported developing an artificial-intelligence tool to design conotoxins [1], it raised eyebrows in some quarters. In an e-mail to a private AI and biotechnology discussion group seen by Nature, a senior US government employee flagged the study as a possible biosecurity risk. The employee, who asked not to be named because of concerns for their job, felt it was especially concerning that the conotoxin AI is based on an open-source protein language model developed by US scientists.

The textile cone snail (Conus textile), one of a number of venomous species of cone snail. Credit: Pascual Fernandez Gomez/iStock via Getty

One of the conotoxin study’s authors told Nature that the concern is unwarranted. The work was aimed squarely at discovering drugs, says Weiwei Xue, a computational chemist at Chongqing University in China and a co-author of the paper. Xue’s team has found some conotoxins with potential therapeutic qualities after testing designs in the laboratory, he says. Although it is important to consider the risk that the AI tool could be misused, it was not designed to make harmful proteins, he adds. What’s more, translating designs into physical molecules requires significant expertise and equipment. Other researchers also told Nature that the risks of the work seem minimal.

The episode, however, illustrates a growing concern over emerging AI tools in biology: although they are being developed to help produce innovative drugs and other societal benefits, they could also make it easier to create new threats. The revolution in biological AI tools, such as AlphaFold, has enabled scientists to design, at a keystroke, bespoke proteins and viruses that kill superbugs, and general-purpose chatbots can boost people's knowledge of how to make these designs in a lab. Might the latest AIs also speed up the development of more-potent toxins, viruses or other bioweapons?

The biosecurity threat is serious, interviews with more than 20 scientists and policy researchers suggest. “Theoretically — and this is what keeps me up at night — one could now develop toxins on the level of ricin or other very deadly agents that would be virtually undetectable,” says Martin Pacesa, a structural biologist at the University of Zurich in Switzerland.

But there is debate over what to do about these risks. Some are calling for limits on biological AI and others are wary of negative impacts on research. “We’ve always made the assessment that the benefits to the world far outweigh the dangers,” says computational biophysicist David Baker at the University of Washington in Seattle, who shared a 2024 Nobel prize for his pioneering work on protein design. “But, as capabilities increase, I think that’s going to be an important question to keep considering.”

Some say the focus should be on detecting and countering AI bioweapon attacks, as opposed to trying to prevent them by imposing software restrictions. “That ship has sailed in my opinion,” says protein designer Timothy Jenkins at the Technical University of Denmark in Kongens Lyngby.

What’s the worst that could happen?
