Twitter Bots Are Effective Even in Small Numbers
https://www.nbcnews.com/tech/tech-news/relatively-few-twitter-bots-were-needed-spread-misinformation-overwhelm-fact-n939021
Research Study: https://www.nature.com/articles/s41467-018-06930-7#Sec7
A recent NBC News article covered a new research study of Twitter activity linking to low-credibility and fact-checking content over ten months spanning 2016 and 2017. The study found that the vast majority of shared links pointed to low-credibility sources: 389,569 of the roughly 400,000 articles analyzed. Moreover, 31% of that low-credibility content owed its popularity to just 6% of Twitter accounts, accounts that were identified as bots. These bots were said to be effective because of two behaviors they tend to follow when spreading low-credibility content.
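To put those two percentages together (my own back-of-the-envelope arithmetic, not a figure from the study): if 6% of accounts produced 31% of the low-credibility shares, the remaining 94% of accounts produced 69%, so a bot account spread roughly (31/6)/(69/94) ≈ 7 times as much low-credibility content as a typical non-bot account.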
First, bots lend early support to low-credibility articles when they are just starting to be shared, to promote an information cascade as more users buy into the false claim. Second, bots target people who tend to be influential to others. Based on what we learned in Networks, both are clearly effective ways to spread information.
For the first behavior: by backing new articles that have not yet spread widely, bots put content in front of viewers who have no way of knowing whether it is true or false. Lacking any other information about the article, the best signal people have for deciding whether it is worth sharing is how many others have shared it already. If enough accounts back the article, a potential user becomes more likely to reshare it, which further enlarges the information cascade. The best way to counter this behavior is likely to give people verified information before they decide to spread a false article themselves; while such a feat can be quite difficult, it would prevent accidental reshares made only because other users had shared the same information.
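To make that social-proof dynamic concrete, here is a minimal Python sketch. It is my own toy model, not the one used in the paper: each arriving user reshares with a probability that grows with the share count they observe, and the parameters (base, social, the bot share counts) are invented purely for illustration.

```python
import random

def simulate_cascade(n_users, bot_shares, base=0.005, social=0.02):
    """Toy social-proof cascade: each arriving user sees the current
    share count and reshares with probability base + social * shares,
    capped at 1. bot_shares inflates the count before any real user
    decides, mimicking early bot support for a new article."""
    shares = bot_shares
    human_shares = 0
    for _ in range(n_users):
        p = min(1.0, base + social * shares)
        if random.random() < p:
            shares += 1
            human_shares += 1
    return human_shares

random.seed(1)
print("human reshares, no bot support:", simulate_cascade(1000, bot_shares=0))
print("human reshares, 20 bot shares: ", simulate_cascade(1000, bot_shares=20))
```

Exact numbers vary with the random seed, but the pattern is stable: a small block of early bot shares pushes the perceived popularity past the point where real users start resharing, and the cascade becomes self-sustaining.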
For the second behavior, we can assume that a potential influencer has many connections to other people. In a social network, a node with many edges to other nodes is an effective trigger for an information cascade: each of its many neighbors may adopt the information, and each adoption in turn raises the fraction of adopting neighbors that any connected node observes. By getting an influencer to reshare false information, a bot lets the article reach nodes far beyond the initial cluster, spreading the misinformation well past where it started. That is why many bots try to push articles at these kinds of people in hopes of a reshare. The best way to prevent this sort of attack is, again, to equip influential people with the information needed to distinguish verified articles from unverified ones.
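This degree-based targeting can also be sketched in code. The simulation below is again my own construction rather than the paper's model: it assumes an independent-cascade-style spreading rule with an invented reshare probability p, and uses networkx's Barabási–Albert generator to get a scale-free network with a few highly connected hubs.

```python
import random
import networkx as nx

def spread(G, seed_node, p=0.1):
    """Independent-cascade-style spread: every node that reshares
    gets one chance to pass the article to each neighbor with
    probability p. Returns the number of nodes that reshared."""
    active, frontier = {seed_node}, [seed_node]
    while frontier:
        new = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and random.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return len(active)

random.seed(0)
G = nx.barabasi_albert_graph(2000, 3, seed=1)  # scale-free: a few hubs
hub = max(G.nodes, key=G.degree)      # best-connected "influencer"
other = random.choice(list(G.nodes))  # an arbitrary ordinary user

trials = 200
hub_reach = sum(spread(G, hub) for _ in range(trials)) / trials
rand_reach = sum(spread(G, other) for _ in range(trials)) / trials
print(f"average reach seeding the hub:   {hub_reach:.1f}")
print(f"average reach seeding at random: {rand_reach:.1f}")
```

Seeding the hub consistently reaches many more nodes than seeding an arbitrary user, which is exactly why bots aim their links at influential accounts.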
While removing bots from Twitter is a great first step toward curbing the spread of low-credibility content, the bots themselves are not the underlying cause of information cascades. If people cannot tell a valid source from fake news, false articles will keep spreading even if the majority of bots are removed. Still, since we have no definitive way to attach verification to every article, removing bots is worthwhile: it slows how fast misinformation spreads and gives fact-checkers a chance to keep up with new articles.