Computers can manipulate public opinion. This is the conclusion of a new study by the Oxford Internet Institute, which highlights the significant spread of computer-generated political messages on social media, as reported by euvsdisinfo.eu. A group of researchers gathered empirical evidence on the scale of automated message relaying on social media in “attempts to artificially shape public life”. The authors conclude that bots – software designed to artificially amplify messages on social networks – “are often key tools in propelling disinformation across sites like Twitter, Facebook, Reddit and beyond”.
Russia: 45% of Twitter accounts automated
According to the authors, “in authoritarian countries, social media are a primary means of social control”, while in democracies automated amplification is used to “manipulate public opinion” or particular segments of society. The study contains examples based on the analysis of large samples of online activity, in particular on Twitter, in various countries. Among the findings: in Russia, 45% of Twitter accounts are automated. See the chapter on Russia in the study.
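Estimates like this typically rest on behavioural signals, for example accounts posting political content at a volume no human could plausibly sustain. Purely as an illustration of that kind of heuristic, and not as the study’s actual methodology, the Python sketch below flags accounts that exceed an assumed daily posting threshold; the 50-posts-per-day cut-off, the Post structure and the account names are invented for this example.

    from collections import Counter
    from dataclasses import dataclass
    from typing import Iterable, Set

    # Assumed cut-off for this sketch only; not a figure taken from the study.
    DAILY_POST_THRESHOLD = 50

    @dataclass(frozen=True)
    class Post:
        account: str  # screen name of the posting account
        day: str      # calendar day of the post, e.g. "2017-06-19"

    def flag_highly_automated(posts: Iterable[Post]) -> Set[str]:
        """Flag accounts whose posting volume reaches the threshold on any single day."""
        per_account_day = Counter((p.account, p.day) for p in posts)
        return {account
                for (account, _day), count in per_account_day.items()
                if count >= DAILY_POST_THRESHOLD}

    # Tiny usage example: one account posts 60 times in a day, another only 3 times.
    sample = [Post("suspiciously_busy", "2017-06-19")] * 60 \
           + [Post("ordinary_user", "2017-06-19")] * 3
    print(flag_highly_automated(sample))  # {'suspiciously_busy'}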
Ukraine: most advanced use of computational propaganda
The researchers say that Ukraine is “the most globally advanced case of computational propaganda”, with the country being “the frontline of numerous disinformation campaigns in Europe”. The analysis showed that manually maintained fake accounts were more prevalent in Ukrainian online campaigns than automated message relaying. At the same time, bots were used for a wide variety of functions: increasing social media audiences, faking engagement to amplify messages, automatically registering new accounts, and reporting other accounts to the networks to get them blocked. Around 15% of all Twitter accounts in Ukraine are generated by bots rather than people, the study notes.
The disinformation around the downing of flight MH17 over Ukraine is an example of this combination of tools: a fake Twitter account of an alleged Spanish air traffic controller named Carlos, supposedly working at the Kyiv airport, claimed to have seen a Ukrainian military aircraft near the site of the catastrophe. In the wake of this tweet, Russian media and official sources alleged that the Malaysian plane had been shot down by the Ukrainian army. Ukrainian fact-checkers were able to show that the story was a fake, since only Ukrainian nationals are permitted to work in Ukrainian air traffic control. Bots were programmed to report the Facebook accounts of journalists posting about MH17 and thus helped get their accounts blocked. Bot activity increased again when the results of the joint investigation into the downing of MH17 were presented in September 2016: each time someone used the #MH17 hashtag in Russian, “a bot would join the conversation and reply with a link to a fake article questioning the results of the investigation”. This occurred even when the tweet text had nothing to do with MH17. See the chapter on Ukraine in the study.
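The reply pattern described above is simple to picture in code. As a minimal sketch, assuming the only trigger conditions were the presence of the #MH17 hashtag and Russian-language (Cyrillic) text, the Python snippet below shows why even tweets unrelated to MH17 would receive the canned reply; the link and the trigger rules are placeholders for this illustration, not details taken from the study.

    import re
    from typing import Optional

    # Placeholder reply ("See the article: ..."); the URL is invented for the example.
    CANNED_REPLY = "См. статью: https://example.com/fake-article-about-mh17"

    def should_trigger(tweet_text: str) -> bool:
        """Trigger on any tweet containing #MH17 together with Cyrillic text."""
        has_hashtag = "#mh17" in tweet_text.lower()
        has_cyrillic = re.search(r"[\u0400-\u04FF]", tweet_text) is not None
        return has_hashtag and has_cyrillic

    def bot_reply(tweet_text: str) -> Optional[str]:
        """Return the canned reply for matching tweets, regardless of their topic."""
        return CANNED_REPLY if should_trigger(tweet_text) else None

    # A Russian tweet about the weather still gets the reply because it carries
    # the hashtag; an English tweet with the same hashtag does not.
    print(bot_reply("Сегодня хорошая погода #MH17"))
    print(bot_reply("Weather report #MH17"))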
Germany: Facebook the platform of choice for misinformation
The researchers characterise Germany as a “leader” in traditional news consumption, with only 20% of the population claiming to get news on social media. All major German parties have committed to refraining from using bots in the campaign ahead of the parliamentary elections in September. It is therefore no surprise that there is only limited evidence of high-frequency bots amplifying political messages in the German Twitter sphere. The hyperactive bots the researchers did identify seemed to focus on spreading negative messages about migrants and on supporting the right-wing party AfD. However, when the researchers analysed Twitter content related to the election of the German Federal President, they found that 20% of it contained misinformation. This still compares favourably with the situation before the US elections, where the ratio of genuine information to misinformation was 1:1. It is Facebook, rather than Twitter, that seems to be the platform where disinformation is disseminated in Germany, so much so that German media refer to it as “Fakebook”. See the chapter on Germany in the study.
The researchers end their study with a call for social media organisations to meet their responsibilities: “Computational propaganda is now one of the most powerful tools against democracy. Social media firms may not be creating this nasty content, but they are the platform for it. They need to significantly redesign themselves if democracy is going to survive social media.”