In 2016, hundreds of Russians filed into a modern office building at 55 Savushkina Street in St. Petersburg every day; they were part of the now-infamous troll farm known as the Internet Research Agency. Day and night, seven days a week, these employees would manually comment on news articles, post on Facebook and Twitter, and generally seek to rile up Americans about the then-upcoming presidential election.
When the scheme was finally uncovered, there was widespread media coverage and Senate hearings, and social media platforms changed the way they verified users. But in reality, for all the money and resources poured into the IRA, its impact was minimal—certainly compared to that of another Russia-linked campaign that saw Hillary Clinton’s emails leaked just before the election.
A decade on, while the IRA is no more, disinformation campaigns have continued to evolve, including through the use of AI to create fake websites and deepfake videos. A new paper, published in Science on Thursday, predicts an imminent step change in how disinformation campaigns will be conducted. Instead of hundreds of employees sitting at desks in St. Petersburg, the paper posits, one person with access to the latest AI tools will be able to command “swarms” of thousands of social media accounts, capable not only of crafting unique posts indistinguishable from human content but also of evolving independently and in real time—all without constant human oversight.
These AI swarms, the researchers believe, could deliver society-wide shifts in viewpoint that not only sway elections but ultimately bring about the end of democracy—unless steps are taken now to prevent it.
“Advances in artificial intelligence offer the prospect of manipulating beliefs and behaviors on a population-wide level,” the paper says. “By adaptively mimicking human social dynamics, they threaten democracy.”
The paper was authored by 22 experts from across the globe, drawn from fields including computer science, artificial intelligence, and cybersecurity, as well as psychology, computational social science, journalism, and government policy.
Other experts in the field who have reviewed the paper share this pessimistic outlook on how AI will change the information environment.
“To target chosen individuals or communities is going to be much easier and powerful,” says Lukasz Olejnik, a visiting senior research fellow at King’s College London’s Department of War Studies and the author of Propaganda: From Disinformation and Influence to Operations and Information Warfare. “This is an extremely challenging environment for a democratic society. We’re in big trouble.”
Even those who are optimistic about AI’s potential to help humans believe the paper highlights a threat that needs to be taken seriously.
“AI-enabled influence campaigns are certainly within the current state of advancement of the technology, and as the paper sets out, this also poses significant complexity for governance measures and defense response,” says Barry O’Sullivan, a professor at the School of Computer Science and IT at University College Cork.