Information Cascades in Online Movie Rankings
Source: https://repository.upenn.edu/cgi/viewcontent.cgi?article=1324&context=marketing_papers
Online product ratings are widely available on the Web and are known to influence the decisions of prospective buyers. Because these ratings are intended to serve as a reliable source of information for consumers, they are of notable importance to markets, and it is therefore important to examine potential problems in how they are generated.
This article analyzes sequential user movie ratings to examine how new ratings are influenced by prior ones. In particular, the authors are interested in herding behavior in online movie ratings, as suggested by the theory of information cascades. An information cascade occurs when people make decisions in sequence and, having observed the actions of those before them, begin to follow the crowd rather than their own private information. For a cascade to begin, an individual must face a decision that can be influenced by outside factors, such as the observed actions and outcomes of others in similar situations. It is important to assess the possible influence of information cascades on the generation of online product reviews, since cascades may cause the reviews to be biased and unrepresentative of the true sentiment surrounding the product.
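To see how a cascade can form, consider the classic sequential-decision model of Bikhchandani, Hirshleifer, and Welch. The Python sketch below illustrates that textbook model, not the paper's analysis: each agent receives a noisy private signal about which of two choices is better and observes all earlier choices. With symmetric binary signals, the Bayesian-optimal rule reduces to simple vote counting, and once prior choices lead by two or more, private signals are ignored and a cascade locks in.

```python
import random

def simulate_cascade(n_agents=20, signal_accuracy=0.7, true_choice=1, seed=0):
    """Textbook sequential-choice cascade with binary signals (illustrative)."""
    random.seed(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal: correct with probability `signal_accuracy`.
        signal = true_choice if random.random() < signal_accuracy else 1 - true_choice
        lead = sum(1 if c == 1 else -1 for c in choices)  # (#1s - #0s) so far
        if lead >= 2:
            choice = 1       # up-cascade: prior actions outweigh any one signal
        elif lead <= -2:
            choice = 0       # down-cascade
        else:
            choice = signal  # own signal is still decisive
        choices.append(choice)
    return choices

# Often locks into all-1s (or all-0s) after just a few agents; if the first
# two agents happen to draw wrong signals, the wrong choice can cascade.
print(simulate_cascade())
```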
In this case, the authors use the time series of user ratings to identify whether new online movie reviews appear to be influenced by the crowd of previous reviews or by reviews from friends, since the time position of a rating indicates how much observational learning the user was exposed to before rating. The authors gathered data from several public websites, sampling all movies released in theaters in 2007, and collected the observable information of each user who rated at least one of those movies, including friendship ties among users. This allows them to observe the friendship network alongside the rating sequence.
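As a rough sketch of the preprocessing this identification strategy requires (column names here are hypothetical, not taken from the paper), each rating can be paired with the information its author could have observed at posting time, such as the running mean and count of earlier ratings for the same movie:

```python
import pandas as pd

# Hypothetical time-ordered ratings table; friend-based features would
# additionally require a join against a friendship edge list (omitted here).
ratings = pd.DataFrame({
    "movie_id": [1, 1, 1, 1],
    "user_id":  [10, 11, 12, 13],
    "rating":   [5, 4, 4, 2],
    "timestamp": pd.to_datetime(
        ["2007-06-01", "2007-06-02", "2007-06-03", "2007-06-04"]),
})
ratings = ratings.sort_values(["movie_id", "timestamp"])

# Running mean of all ratings posted before this one, per movie
# (shift(1) excludes the current rating itself).
ratings["prior_crowd_mean"] = (
    ratings.groupby("movie_id")["rating"]
           .transform(lambda s: s.expanding().mean().shift(1))
)

# How many earlier ratings the user could have seen (their "time position").
ratings["prior_count"] = ratings.groupby("movie_id").cumcount()

print(ratings)
```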
By applying estimation methods that address heterogeneity at the user and movie levels, the authors find that higher previous ratings increase the likelihood that a subsequent user provides a higher rating. This is herding: individuals converge on a uniform social behavior by giving similar movie ratings. However, the authors also find that the influence of previous crowd ratings weakens as the volume of friend ratings increases. Taken together, this indicates that moviegoers are susceptible to giving ratings similar to those who came before them, but that they are more likely to align with their friends' reviews than with the general sentiment of the crowd. There are many reasons an individual might imitate the behavior of others, in this case by emulating their reviews, such as the belief that echoing a common opinion enhances their own credibility (an example of a direct-benefit effect).
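A minimal sketch of the kind of specification this finding suggests is below; the variable names are assumed, and the paper's actual treatment of user- and movie-level heterogeneity is more sophisticated than the movie fixed effects used here. The idea is to regress each rating on the mean of prior crowd ratings, the volume of prior friend ratings, and their interaction: a positive coefficient on the prior crowd mean indicates herding, and a negative interaction indicates that friend ratings weaken the crowd's pull.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data built to mimic the paper's finding, purely for illustration.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "movie_id": rng.integers(0, 50, n),
    "prior_crowd_mean": rng.uniform(1, 5, n),
    "n_friend_ratings": rng.poisson(2, n),
})
movie_quality = rng.normal(3.5, 0.5, 50)
df["rating"] = (
    movie_quality[df["movie_id"]]                          # movie heterogeneity
    + 0.4 * df["prior_crowd_mean"]                         # herding on the crowd
    - 0.1 * df["prior_crowd_mean"] * df["n_friend_ratings"]  # friends attenuate it
    + rng.normal(0, 0.5, n)
)

# OLS with movie fixed effects; `a * b` expands to a + b + a:b.
model = smf.ols(
    "rating ~ prior_crowd_mean * n_friend_ratings + C(movie_id)", data=df
).fit()
print(model.params[["prior_crowd_mean",
                    "prior_crowd_mean:n_friend_ratings"]])
```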
This analysis suggests that observational learning from others’ ratings can trigger herding behavior, and that aggregate consumer-generated rating information (such as an average user rating) may be a biased indicator of product quality. Because many users base their reviews largely on the sentiment of previous reviews (whether from the crowd or from friends), this may help explain why your perception of a movie often differs from what you see online, such as when a movie’s Rotten Tomatoes reviews are much higher or lower than your own.
The authors note that the presence of observational learning and information cascades in online user ratings lowers the overall quality and integrity of the reviews, since each rating carries some degree of herding-induced bias. This is consistent with the possibility of incorrect cascades, in which a wrong behavior propagates through the system. In the case of online reviews, if the first few reviews on a site are skewed well above or below what a typical viewer would think, that uncommon perception is likely to be perpetuated through the system in a domino-like effect.
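A simple way to see this persistence, separate from the paper's model, is a toy anchoring simulation: if each reviewer's posted score blends a private perception of quality with the displayed running average, a few skewed seed reviews keep the average biased long after ordinary averaging would have washed them out.

```python
import random

def average_after(n_reviews, true_quality=3.0, seed_reviews=(5.0, 5.0, 5.0),
                  weight_on_crowd=0.5, noise=0.5, seed=1):
    """Toy model: each posted score is a blend of a private perception and
    the current running average (the anchoring/herding effect)."""
    random.seed(seed)
    scores = list(seed_reviews)  # skewed early reviews
    for _ in range(n_reviews):
        private = random.gauss(true_quality, noise)
        posted = ((1 - weight_on_crowd) * private
                  + weight_on_crowd * (sum(scores) / len(scores)))
        scores.append(posted)
    return sum(scores) / len(scores)

# Even after 200 reviews, the average remains noticeably above the true
# quality of 3.0; anchoring slows the correction of the early skew.
print(round(average_after(200), 2))
```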
To alleviate this bias, the authors recommend minimizing the influence of aggregate information, such as average user ratings, in online review forums, in the hope that doing so will increase the quality and integrity of online user reviews.