The Spread of Fake News and Diffusion

Link: https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/

This article discusses how algorithms take advantage of our cognitive biases to spread fake news. Each of us can process only a limited amount of information, yet social media overloads us with far more than we can actually comprehend or evaluate. Users often read only the first few posts in their feed before information overload sets in, which makes the content the algorithm recommends to us extremely important: when we read only a few articles, we are biased toward whatever the algorithm happened to put at the top of our feed. This would not be a problem if the algorithm recommended high-quality, unbiased content, but it usually does not. The algorithm often recommends whatever content is most popular, and it may also recommend content based on our past activity. When looking for music, this saves time, since the algorithm recommends music similar to our tastes. But when we are recommended news articles that match our beliefs, it becomes problematic: we see information that simply confirms our biases and reinforces our past views. This is especially harmful when someone already believes a conspiracy theory and keeps seeing information that confirms it. With recommendation algorithms, exactly the people who should not be seeing more fake news are the ones who will see it, and they will not change their minds.
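To make the overload problem concrete, here is a minimal Python sketch of a popularity-ranked feed. The post pool, the quality scores, and the five-post attention budget are toy assumptions of mine, not anything measured in the article; the point is simply that when attention is scarce and the ranking ignores quality, what a user sees is whatever happens to be popular.

```python
import random

# Toy model (my own illustration, not the article's): each post has a hidden
# "quality" score and a visible popularity count. The feed ranks purely by
# popularity, and an overloaded user only reads the top few posts.
random.seed(0)

posts = [{"id": i,
          "quality": random.random(),             # invisible to the ranker
          "popularity": random.randint(0, 1000)}  # likes/shares
         for i in range(50)]

ATTENTION_BUDGET = 5  # posts actually read before overload sets in

feed = sorted(posts, key=lambda p: p["popularity"], reverse=True)
seen = feed[:ATTENTION_BUDGET]

avg_seen = sum(p["quality"] for p in seen) / len(seen)
avg_all = sum(p["quality"] for p in posts) / len(posts)
print(f"avg quality of the posts the user saw: {avg_seen:.2f}")
print(f"avg quality of the whole pool:         {avg_all:.2f}")
```

Because quality and popularity are independent here, the handful of posts the user actually reads is, on average, no better than the rest of the pool.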

Information overload not only makes us more vulnerable to algorithms; it also drives us to seek out information that confirms our biases. We are naturally biased toward information that agrees with our beliefs, because experiencing cognitive dissonance, having to reconcile information that conflicts with what we believe, is tiring. People will therefore search for content online that confirms what they already think. When we are overloaded with information, we have less brainpower to consider many different perspectives on a topic, so we instead do what is most comfortable and easiest: limit our media consumption to whatever makes us feel confident in our current beliefs. On a micro level, someone who believes fake news will continue to consume fake news and low-quality content. On a macro level, everyone who is already connected to others in the fake news community will stay there. When everyone interacts only with people who hold similar views, there are no local bridges between clusters. This means that if a cascade of information starts, it will only spread within a cluster of users with similar beliefs, and those users won't have access to information outside their bubble that could have informed their understanding of a topic.
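The claim that cascades stay inside a cluster can be illustrated with the standard threshold model of cascades: a node adopts a behavior once at least a fraction q of its neighbors has adopted it, and a known result is that a cluster whose internal density exceeds 1 − q blocks any cascade starting outside it. The seven-node graph and the threshold q = 0.5 below are invented for illustration.

```python
# Threshold-model sketch (graph and numbers invented for illustration):
# a node adopts once at least a fraction q of its neighbors have adopted.
# A cluster whose internal density exceeds 1 - q blocks any cascade that
# starts outside it.
graph = {
    # cluster A: the echo chamber, densely connected internally
    "a1": ["a2", "a3", "a4"], "a2": ["a1", "a3", "a4"],
    "a3": ["a1", "a2", "a4"], "a4": ["a1", "a2", "a3", "b1"],  # one outside tie
    # cluster B: everyone else
    "b1": ["b2", "b3", "a4"], "b2": ["b1", "b3"], "b3": ["b1", "b2"],
}
q = 0.5                       # adoption threshold
adopted = {"b1", "b2", "b3"}  # a correction starts outside the echo chamber

changed = True
while changed:
    changed = False
    for node, nbrs in graph.items():
        if node not in adopted and sum(n in adopted for n in nbrs) / len(nbrs) >= q:
            adopted.add(node)
            changed = True

print(sorted(adopted))  # cluster A never adopts: a4 has only 1 of 4 neighbors in
```

Here cluster A has internal density 3/4, which is greater than 1 − q = 1/2, so the cascade never crosses the single tie between the two clusters.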

This article also discusses how much people are influenced by others' beliefs. Studies show that if we can see what music other people in our social group listen to, our taste in music develops differently than if we had listened on our own. We are also quick to believe that if a post is popular, it must be high quality and credible. Each of us has a threshold for how many people around us must support a topic or believe in something before we do too. When we are recommended content similar to our beliefs and everyone we are connected with shares those beliefs, we see that content trending and believe it is true. The article also mentions bots and their large influence on the spread of misinformation. Bots are fake accounts created to spread fake news; they typically follow each other, post fake news, and interact with each other's posts, which lets them make posts trend artificially. Many users find it difficult to tell a bot's posts from a real person's. Because we treat popularity as a sign of credibility, these artificially trending fake news posts attract even more attention and become still more popular, and they are also more likely to be recommended by an algorithm.
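As a rough illustration of this bot feedback loop, the snippet below compares a post with modest organic engagement against one boosted by a coordinated botnet. The trending cutoff, the botnet size, and all the counts are hypothetical numbers I chose, not data from the article.

```python
# Toy sketch (hypothetical numbers): a botnet's coordinated engagement can
# push a fabricated story past a "trending" cutoff that a ranking heuristic
# might use, even when organic interest in it is low.
from dataclasses import dataclass

@dataclass
class Post:
    label: str
    organic_likes: int
    bot_likes: int = 0

    @property
    def total(self) -> int:
        return self.organic_likes + self.bot_likes

TRENDING_CUTOFF = 500   # assumed platform heuristic
BOTNET_SIZE = 400       # bots that each like the post once

real_story = Post("well-sourced article", organic_likes=320)
fake_story = Post("fabricated story", organic_likes=150, bot_likes=BOTNET_SIZE)

for post in (real_story, fake_story):
    status = "TRENDING" if post.total >= TRENDING_CUTOFF else "buried"
    print(f"{post.label}: {post.total} likes -> {status}")
```

The fabricated story trends while the better-sourced one stays buried, and once it trends, real users' engagement compounds the effect.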

Another issue with being stuck in an echo chamber is that your threshold for believing in or supporting an issue is easily crossed. When you interact only with people who share the same ideas, the fraction of people you know who support an issue will likely be either very high or very low. When it is very low, you may be biased against the issue and believe it is not worth supporting; and because your cluster is disconnected from clusters with differing beliefs, the issue can't cascade into your cluster at all. When the fraction of people you know who support an issue is very high, you are very likely to believe in it as well. Even if your threshold is high and you aren't normally someone who is easily influenced by others, you can become vulnerable to this cognitive bias. Another interesting point is that, when viewing diffusion in a social network, every edge can carry roughly equal weight. Although you may be closer to some of the people you follow than to others, that doesn't necessarily mean you will see a close friend's posts in your feed more often than posts from a random bot you follow. If anything, you are more likely to see the bot's posts, because its posts are probably more popular. This means your threshold for believing in an issue is more likely to be pushed over by a popular bot than by a friend. Because we believe popularity equals quality, the fact that a bot (which appears to be a real person) supports an issue could mean more to us than even our closest friend supporting it.
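Finally, a sketch of the "every edge counts equally" point: in an unweighted threshold model, a bot you follow contributes to your adoption threshold exactly as much as a close friend, while a popularity-ranked feed makes the bot's endorsement the one you actually encounter first. The accounts, popularity numbers, and threshold below are all hypothetical.

```python
# Toy sketch (my own numbers): in an unweighted diffusion model, a bot you
# follow counts toward your adoption threshold exactly as much as a close
# friend -- and a popularity-ranked feed surfaces the bot's endorsement first.
neighbors = {
    # name: (supports_issue, avg_post_popularity)
    "close_friend": (True, 40),
    "coworker": (False, 25),
    "cousin": (False, 30),
    "bot_account": (True, 900),  # inflated by coordinated bot engagement
}

THRESHOLD = 0.5  # adopt once half the accounts you follow support the issue

support_fraction = sum(s for s, _ in neighbors.values()) / len(neighbors)
print(f"fraction of neighbors supporting: {support_fraction:.2f}")
print("you adopt the belief:", support_fraction >= THRESHOLD)

# What the feed actually surfaces first: endorsements ranked by popularity.
ranked = sorted(neighbors.items(), key=lambda kv: kv[1][1], reverse=True)
first = next(name for name, (s, _) in ranked if s)
print("first endorsement you see comes from:", first)
```

The bot and the close friend each count as one neighbor toward the threshold, but the bot's inflated popularity means its endorsement is the one the feed shows first.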
