Yuriy Senchyna

How bots on Twitter are affecting the elections in the United States


Our Facebook and Twitter feeds have so far been largely free of the continuous stream of posts about Donald Trump and the US elections, and we decided to correct this omission. Emilio Ferrara, Associate Professor of Computer Science at the University of Southern California, has published a short study of bot activity on Twitter during the Trump vs. Clinton presidential campaign.


Investigating the activity of these bots, he came to a disappointing conclusion: in most cases, people cannot distinguish tweets written by bots from tweets written by real users, and this is exactly what makes bots such an effective tool of manipulation. Below is a translation of his article.


The key to democracy is community participation: people discussing topical issues with each other openly, honestly, and without external influence. But what happens when a large number of participants in this discussion are "biased" robots, controlled by unseen groups with unknown agendas? As my research shows, this is exactly what happened during this election campaign.


Since 2012 I have been studying how people discuss social, political, and ideological questions online. In particular, I have been looking at how social media is used for manipulation.


It turns out that much of the political content Americans see on social media every day is not made by people. Approximately one in five election-related tweets posted between September 16 and October 21 was generated by "social bots."


These software systems can be quite simple or very complex, but they all share one trait: they are configured to produce content according to a specific pattern and plan, developed by groups that are difficult to identify. These bots have influenced the discussion of the presidential election, including how that online activity is perceived by the media and the public.


How active are they?

[chart]

To determine which accounts are bots, we used the Bot Or Not service, which I developed together with colleagues at Indiana University. It uses machine learning algorithms to analyze many parameters, including profile metadata, the content and topics the account owner writes about, the structure of their social connections, their schedule of activity, and much more.


After analyzing about 1,000 factors, Bot Or Not produces a similarity score that indicates how closely the account under study resembles a bot account. This approach achieves about 95% accuracy. There are many examples of bot-generated tweets supporting one candidate or another, or attacking their opponents. Here are a few:

[embedded examples of bot tweets]
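Bot Or Not itself is a hosted service, but the general idea behind it, a supervised classifier trained on account features, can be sketched roughly as follows. The file names, the feature set, and the choice of model here are illustrative assumptions, not the actual Bot Or Not implementation.

```python
# Illustrative sketch of feature-based bot detection. This is NOT the actual
# Bot Or Not pipeline: the input files, features and model choice are
# assumptions made for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row describes one account. In the real system roughly 1,000 features
# are extracted from profile metadata, tweet content, the follower graph and
# the timing of activity (e.g. followers count, statuses per day,
# retweet ratio, mean seconds between tweets, hashtags per tweet, ...).
X = np.loadtxt("account_features.csv", delimiter=",", skiprows=1)  # accounts x features
y = np.loadtxt("account_labels.csv", delimiter=",", skiprows=1)    # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# After fitting, predict_proba yields a bot-likeness score between 0 and 1
# for previously unseen accounts, analogous to the similarity score above.
clf.fit(X, y)
new_accounts = np.loadtxt("unlabelled_features.csv", delimiter=",", skiprows=1)
print(clf.predict_proba(new_accounts)[:, 1])
```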

How effective are they?

The effectiveness of social bots depends on the reactions of real people. Unfortunately, we have learned that people neither ignore bots nor show anything resembling immunity to them.


It turned out that most human users cannot recognize a tweet written by a bot. We draw this conclusion from the fact that bots are retweeted about as often as humans. Retweeting bot-generated content without verifying it leads to real consequences, including the spread of rumors, conspiracy theories, and misinformation.


The distribution of retweets and replies between bots and humans is illustrated by plots of the complementary cumulative distribution function (CCDF). The graph shows that bots are much less likely to receive replies from real people, and more often "communicate" in replies with other bots (bots on the left, humans on the right):

[chart]

But the number of retweets that bots and real people receive is about the same (bots on the left, humans on the right):

[chart]
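For readers unfamiliar with the term: a complementary cumulative distribution function shows, for each value x, the share of observations that are at least x. Here is a minimal sketch of how such curves could be computed from per-account retweet counts; the input files are assumptions standing in for counts collected separately for bots and humans.

```python
# Minimal sketch: empirical CCDF of per-account retweet counts.
import numpy as np
import matplotlib.pyplot as plt

def ccdf(values):
    """Return sorted values x and the empirical probability P(X >= x)."""
    x = np.sort(np.asarray(values))
    p = 1.0 - np.arange(len(x)) / len(x)
    return x, p

bot_retweets = np.loadtxt("bot_retweet_counts.txt")      # assumed input
human_retweets = np.loadtxt("human_retweet_counts.txt")  # assumed input

for label, counts in [("bots", bot_retweets), ("humans", human_retweets)]:
    x, p = ccdf(counts)
    plt.loglog(x, p, label=label)  # CCDFs are usually drawn on log-log axes

plt.xlabel("retweets received")
plt.ylabel("P(X >= x)")
plt.legend()
plt.show()
```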

Some bots are very primitive and do nothing but retweet content from genuine supporters of a particular candidate. Others are more sophisticated: they produce new tweets and join existing conversations using popular hashtags like #NeverHillary or #NeverTrump.


People who follow these hashtags will see bot-generated content blended seamlessly into the stream of tweets from ordinary people.


Bots generate content automatically, which means very quickly and constantly. This allows them to keep up a consistent and ubiquitous share of the debate throughout the campaign. As a result, they gain significant influence, collecting many followers and having their tweets retweeted by thousands of real people.


A better understanding of bots

Our research has also uncovered more subtle nuances of bot behavior. One of them: bots are biased from the start, and this skews the overall assessment of the situation. For example, pro-Trump bots systematically generate hyper-positive tweets, which distorts perception, in particular by making it seem that there is genuine, grassroots enthusiasm in the community for one candidate or another.


Analyzing location data taught us another lesson. Twitter provides metadata about the physical location of the device used for posting. By aggregating and analyzing these "footprints," we learned that bots are spread unevenly across the US. Some states, such as Georgia and Mississippi, have many more of them. This may mean that the organizations controlling the botnets are located in those states.

[map of the USA]
Green marks the states with the largest number of politically active Twitter bots, pink marks those with the most humans
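As a rough sketch of the kind of aggregation described above: geotagged tweets carry place metadata, and accounts already labelled as bots or humans can simply be counted per state. The field names, the precomputed is_bot flag, and the input file below are assumptions made for illustration.

```python
# Rough sketch: count bot-labelled vs human-labelled geotagged tweets per
# US state. Field names and the input file are illustrative assumptions.
import json
from collections import Counter

bot_counts, human_counts = Counter(), Counter()

with open("geotagged_tweets.jsonl") as fh:
    for line in fh:
        tweet = json.loads(line)
        place = tweet.get("place") or {}
        # For US tweets the place object usually carries the state in
        # full_name, e.g. "Atlanta, GA" -> "GA".
        if place.get("country_code") != "US" or not place.get("full_name"):
            continue
        state = place["full_name"].split(",")[-1].strip()
        if tweet.get("is_bot"):          # assumed precomputed bot label
            bot_counts[state] += 1
        else:
            human_counts[state] += 1

print("states with most bot tweets:", bot_counts.most_common(5))
print("states with most human tweets:", human_counts.most_common(5))
```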

We also found that bots can operate in different modes. For example, when they are not busy creating content in support of a candidate, they can focus on working against opponents, for example by posting negative content under certain hashtags (#NeverHillary or #NeverTrump).


This strategy exploits well-known human biases. For example, it is known that negative content spreads faster on social networks. We found that, on average, negative tweets are retweeted 2.5 times faster than positive ones.
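One simple way to put a number like this on spreading speed is to compare how quickly tweets of each sentiment accumulate retweets. Below is a hypothetical sketch, assuming tweets have already been labelled by sentiment and their age and retweet counts collected; the input format is an assumption.

```python
# Sketch: compare how fast negative vs positive tweets accumulate retweets.
# Input columns (sentiment, age_hours, retweet_count) are assumed to come
# from a preprocessing step not shown here.
import csv
from statistics import mean

rates = {"negative": [], "positive": []}

with open("labelled_tweets.csv") as fh:
    for row in csv.DictReader(fh):
        sentiment = row["sentiment"]
        age_hours = float(row["age_hours"])
        if sentiment in rates and age_hours > 0:
            # retweets gained per hour since the tweet was posted
            rates[sentiment].append(float(row["retweet_count"]) / age_hours)

neg, pos = mean(rates["negative"]), mean(rates["positive"])
print(f"negative: {neg:.2f} rt/h, positive: {pos:.2f} rt/h, ratio {neg / pos:.1f}x")
```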


It is difficult to calculate the impact of bot activity on the election results, but it is reasonable to assume that bots influence turnout in some places. For example, people in a particular state who read Twitter may come to believe that their candidate (or their opponent) has so much support that there is no point in going to vote. In reality, what they are seeing is an artificial wave of support created by bots.


Our study pushes the limits of what can currently be achieved in the study of bots using computational methods. Social media plays an increasingly important role in shaping society's political views, affecting people's lives both online and offline. The research community must continue to study these platforms in order to protect their users from manipulation.