“Artificial” Information Cascades Created By Social Networks
WeChat is a widely used social network application that has spread across the world, amassing more than triple Snapchat’s user base and roughly half that of Facebook. It serves a wide variety of purposes, including instant messaging, personal finance, and even takeout orders. As the dominant social media platform in China, WeChat has also spread to other countries, including the US, where its influence has been seen in recent elections and in political polarization.
Why has the spread of WeChat caused such polarization? Examining the structure and operation of the network itself suggests an answer. Unlike Facebook and Twitter, WeChat distributes news through information outlets called official accounts (OAs), which post articles and other information on the platform. Anyone can create an OA. However, because WeChat lacks search and hashtag functions, information flows from OAs to their subscribers and is then shared within smaller groups and conversations, so users are exposed to only a limited set of articles and references. Moreover, WeChat’s design makes it difficult to share views with the public from a personal account: there is simply no way to post publicly.
As a result, discussion of any specific news topic is confined to private and semi-private networks, where only a select few can view what is being shared. Information cascades therefore occur within small networks of friends and family rather than across the entire platform, in contrast with Facebook and other social media sites that let all users post publicly and hold open discussions. In addition, WeChat does not allow “free commenting,” in which any user can comment on any post. OAs can enable or disable comments on a post, and even when comments are enabled, the OA chooses which comments are displayed, so skeptical or disagreeing comments are often filtered out of public view. This creates “artificial information cascades” throughout WeChat.
When users come across an article on WeChat, they see an increasingly one-sided viewpoint, because the OA behind the article controls both its content and which responses are displayed beneath it. Seeing so many comments that seemingly agree with one another, a user is more likely to adopt that viewpoint as well, without realizing they are seeing only one side of the story. They cannot tell how many dissenting responses actually exist, because the OA may have hidden all of them. The user thus falsely believes the article is well supported and is more inclined to follow its viewpoint; they may even comment on the post themselves, adding to the article’s apparent “credibility” if the OA chooses to display the comment, which drives the cascade further. Because of these features, it is much easier on WeChat for information sources to filter the public view so that a cascade in support of their position arises. The result is widely believed but incorrect viewpoints about real-world events, and misinformation spreading through the population.
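The mechanism described above can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters, not WeChat’s actual algorithm: each user privately leans toward the OA’s claim with some probability, reads the comments the OA has left visible, and posts whichever opinion holds a weak majority among those comments plus their own leaning. When the OA hides every dissenting comment, the first visible agreement starts a cascade that all later readers follow.

```python
import random

def simulate_thread(n_users=200, p_agree=0.3, filter_dissent=True, seed=0):
    """Toy model of a comment thread under an OA that may hide dissent.

    Each user privately leans toward the OA's claim with probability
    p_agree, looks at the visible comments plus their own leaning, and
    posts whichever opinion holds a weak majority. If filter_dissent is
    True, the OA hides every disagreeing comment, so later readers see
    a thread of unanimous support. Returns the fraction of users who
    end up publicly agreeing. All parameters are illustrative
    assumptions, not measurements of WeChat.
    """
    rng = random.Random(seed)
    visible = []       # comments the next reader can actually see
    agree_count = 0
    for _ in range(n_users):
        leaning = 1 if rng.random() < p_agree else 0   # 1 = agrees with OA
        votes = visible + [leaning]
        post = 1 if 2 * sum(votes) >= len(votes) else 0  # ties go to the OA
        if post == 1:
            agree_count += 1
            visible.append(1)
        elif not filter_dissent:
            visible.append(0)  # dissent is shown only if the OA allows it
        # if filter_dissent and post == 0: the comment is silently hidden
    return agree_count / n_users
```

Even when only 30% of users privately agree with the claim, hiding dissent means the first agreeing comment makes the thread look unanimous, and nearly every subsequent reader publicly goes along with it; the same population with an unfiltered thread agrees far less often.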
Source: https://www.cjr.org/tow_center/wechat-misinformation-china.php