Alarmed by what companies are building with artificial intelligence models, a handful of industry insiders are calling for those opposed to the current state of affairs to undertake a mass data poisoning effort to undermine the technology.
Their initiative, dubbed Poison Fountain, asks website operators to add links to their websites that feed AI crawlers poisoned training data. It's been up and running for about a week.
AI crawlers visit websites and scrape data that ends up being used to train AI models, a parasitic relationship that has prompted pushback from publishers. When scaped data is accurate, it helps AI models offer quality responses to questions; when it's inaccurate, it has the opposite effect.
Data poisoning can take various forms and can occur at different stages of the AI model building process. It may follow from buggy code or factual misstatements on a public website. Or it may come from manipulated training data sets, like the Silent Branding attack, in which an image data set has been altered to present brand logos within the output of text-to-image diffusion models. It should not be confused with poisoning by AI – making dietary changes on the advice of ChatGPT that result in hospitalization.
Poison Fountain was inspired by Anthropic's work on data poisoning, specifically a paper published last October that showed data poisoning attacks are more practical than previously believed because only a few malicious documents are required to degrade model quality.
The individual who informed The Register about the project asked for anonymity, "for obvious reasons" – the most salient of which is that this person works for one of the major US tech companies involved in the AI boom.
Our source said that the goal of the project is to make people aware of AI's Achilles' Heel – the ease with which models can be poisoned – and to encourage people to construct information weapons of their own.
We're told, but have been unable to verify, that five individuals are participating in this effort, some of whom supposedly work at other major US AI companies. We're told we'll be provided with cryptographic proof that there's more than one person involved as soon as the group can coordinate PGP signing.
The Poison Fountain web page argues the need for active opposition to AI. "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species," the site explains. "In response to this threat we want to inflict damage on machine intelligence systems."
It lists two URLs that point to data designed to hinder AI training. One URL points to a standard website accessible via HTTP. The other is a "darknet" .onion URL, intended to be difficult to shut down.
... continue reading