Online social networks are crawling with autonomous computer programs that spread propaganda in an attempt to manipulate voters and otherwise influence political processes. Researchers are beginning to understand how these “bot” accounts are used to try to manipulate public sentiment on contentious issues including gun control and the 2016 U.S. presidential election. But they are still working out whether large numbers of computer-generated tweets can really sway policies or elections, and how such influence might be countered.
Real-world political outcomes are beginning to demonstrate the reach and power of bot-driven Twitter campaigns, in which a core group of tweets spreads information rapidly by encouraging large numbers of retweets. Recent investigations have uncovered, for example, Russia-backed bots programmed to automatically tweet animosity-stoking messages in the U.S. gun control debate following last month’s school shooting in Parkland, Fla. That followed a wave of bots in January demanding (via the #ReleaseTheMemo campaign) the public release of a controversial House of Representatives document accusing the FBI of political bias in its surveillance activities during Pres. Donald Trump’s 2016 campaign. Just weeks after tweets hashtagged #ReleaseTheMemo went viral, Trump released the memo—despite objections from the U.S. Department of Justice.
Other bot campaigns have sought to influence the U.K.’s “Brexit” referendum and manipulate voters in recent elections in France, Germany, Austria and Italy. Ahead of Catalonia’s referendum on independence from Spain last year, bots appeared en masse to harass those favoring the move, says Emilio Ferrara, an assistant research professor at the University of Southern California (U.S.C.). After studying nearly four million tweets related to the Catalonia ballot measure that were posted in late September and early October, Ferrara and colleagues noticed a trend. Accounts deemed highly likely to be bots repeatedly tweeted negative comments—some including the hashtag #sonunesbesties, or “#they are beasts”—at prominent Twitter accounts belonging to people favoring independence. “We believe that helped to polarize this conversation even more,” Ferrara says. (Catalans voted overwhelmingly to become an independent state, but the Spanish government nullified the measure less than a month later.)
“We are seeing that bots are effective in putting stuff into people’s feeds, and in amplifying messages. This has been found by many people, not just us,” says Filippo Menczer, a professor of informatics and computer science at Indiana University’s School of Informatics, Computing and Engineering.
Menczer is part of a group of researchers, based in the U.S. and China, who are studying what they call a “misinformation network” related to the 2016 U.S. presidential election. The researchers wrote software they called “Hoaxy” to find tweets with links to unverified claims related to the election. Hoaxy identified two million retweets produced by several hundred thousand accounts, spreading misinformation over the six months leading up to the election.
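The core of what Hoaxy does at this step, as described above, is matching the links in tweets against sources of unverified claims. A minimal sketch of that kind of link filtering might look like the following; the domain list and function names here are illustrative assumptions, not Hoaxy’s actual code or data.

```python
from urllib.parse import urlparse

# Placeholder watch list; a real tracker would use a curated list of
# low-credibility sources (these domains are invented for illustration).
LOW_CREDIBILITY = {"example-fakenews.com", "hoax-site.net"}

def domain(url):
    """Extract the host from a URL, dropping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def flags_misinformation(tweet_urls):
    """True if any link in the tweet points at a watch-listed domain."""
    return any(domain(u) in LOW_CREDIBILITY for u in tweet_urls)
```

A tweet linking to `https://example-fakenews.com/story` would be flagged, while one linking to a fact-checking site would not; the research system then aggregates such flagged tweets and their retweets into a network.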
The researchers then used a program called Botometer, developed at Indiana University, to analyze the timing, text and other characteristics of those retweets to estimate how likely it was that the account producing them was a bot. Both programs helped the researchers identify a small group of “core” accounts that were likely bots and were retweeting the largest amounts of false information. “As we got closer and closer to the core, we found more and more bots”—and fewer references to fact-checking sites such as snopes.com—Menczer explains. When core accounts did reference fact-checking sites, it was usually to mock them or to falsely state the fact checkers found a bogus claim to be true.
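The intuition behind scoring accounts on timing and behavior can be sketched with a toy heuristic. This is loosely inspired by the kinds of signals a tool like Botometer examines—posting regularity and retweet-heavy activity—but the weights and formula below are invented for illustration, not the published model.

```python
from statistics import pstdev

def bot_score(tweet_intervals_sec, n_retweets, n_tweets):
    """Return a 0-to-1 score; higher means more bot-like (toy heuristic).

    tweet_intervals_sec: seconds between an account's consecutive tweets.
    n_retweets / n_tweets: how much of the account's output is retweets.
    """
    # Bots often post at machine-regular intervals, so low variance
    # in inter-tweet timing is suspicious.
    regularity = 1.0 / (1.0 + pstdev(tweet_intervals_sec))
    # A feed that is almost entirely retweets suggests pure amplification.
    retweet_ratio = n_retweets / n_tweets if n_tweets else 0.0
    return 0.5 * regularity + 0.5 * retweet_ratio
```

An account tweeting exactly every 60 seconds with 95 retweets out of 100 tweets scores near 1.0, while an account with irregular, human-looking gaps and few retweets scores near 0. Real classifiers use hundreds of such features with trained weights rather than a hand-set formula.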
Bot-filled social media misinformation networks like the one Menczer and his colleagues studied on Twitter play into a broader debate about partisan online content’s ability to polarize public opinion—and affect the outcome of elections. The stronger a person’s partisan identity, the more likely that person is to actually vote as opposed to merely complaining about the opposition, says Liz Suhay, an assistant professor in American University’s Department of Government. Suhay’s research into online content and political beliefs has shown people’s desire to distinguish themselves socially from “the other side” seems to harden as they read partisan-biased articles.
Whereas Menczer and his colleagues have studied the bot problem by analyzing a large number of accounts, another group of researchers is attempting to sniff out bots at the individual tweet level. Sneha Kudugunta, an undergraduate at the Indian Institute of Technology, is working with U.S.C.’s Ferrara on an artificial intelligence system to identify social media bots based on certain characteristics of a tweet. These include the text (things like specific words used and patterns of capitalizing letters), how many hashtags it includes and how often it is retweeted. This system could correctly identify bots based on a single tweet with greater than 90 percent accuracy, according to the researchers.
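The per-tweet signals the paragraph above lists—word usage, capitalization patterns, hashtag counts, retweet counts—can be turned into a feature vector for a classifier. The sketch below shows illustrative feature extraction only; the feature names are assumptions, and the actual system by Kudugunta and Ferrara is a trained model, not this hand-written code.

```python
import re

def tweet_features(text, retweet_count):
    """Extract simple per-tweet features a bot classifier might consume."""
    words = text.split()
    # Count fully capitalized words of more than one letter (shouting style).
    caps = sum(1 for w in words if w.isupper() and len(w) > 1)
    return {
        "n_words": len(words),
        "caps_ratio": caps / len(words) if words else 0.0,
        "n_hashtags": len(re.findall(r"#\w+", text)),
        "retweet_count": retweet_count,
    }
```

For the tweet “RELEASE the #memo NOW #ReleaseTheMemo”, this yields two hashtags and a 0.4 all-caps ratio; a trained model would map such vectors to a bot/human decision, which is how single-tweet accuracy above 90 percent becomes possible.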
Twitter had not responded to Scientific American’s requests for comment about research into politically oriented bots on its site by the time of publication. The company has tried to address the problem by purging bot accounts. In January it announced it had identified and suspended accounts potentially connected to a propaganda effort by a Russian government–linked organization known as the Internet Research Agency. Twitter then notified about 1.4 million people in the U.S. who followed at least one of those accounts, or had retweeted or liked a tweet from the accounts, during the election period.
CEO Jack Dorsey pledged earlier this month to continue fighting bots and other abuse on Twitter, and to encourage “healthy” dialogue among its users. With U.S. midterm elections just a few months away, it is likely this resolve will be put to the test by lawmakers, advertisers and users.