


Complicated real-life information cascade on Amazon

http://snap.stanford.edu/class/cs224w-2012/projects/cs224w-033-final.v08.pdf

 

In this paper, the researchers analyzed 548,552 Amazon products and their 7,781,990 reviews, with the data gathered in 2006. The data show an interesting pattern: the majority of ratings clustered at 4 and 5 stars, while a smaller but noticeable group clustered at 0 stars. As for review counts, even in this large data set with so many products to look at, almost all products, regardless of how well they sold, had fewer than 400 reviews, and the significant majority fell in the 0-50 review range. The paper then goes deep into mathematical proofs and modeling, which is hard to present and analyze here, so I will instead focus on the descriptive findings above, which connect closely to what we learned in class.

 

In class, we used marble drawing as an example to learn about information cascades, and a big difference here is that a cascade can play out differently online than face to face. In a face-to-face interaction, everyone has to give a guess or evaluation, but online, participation tends to be more polarized, as this research shows: people with strong opinions, whether good or bad, are more likely to leave a review. This makes the entire cascade more prone to mistakes. If the first person who bought the product did not like it, and the second person had no strong opinion, the product will likely become less popular because of Amazon's ranking algorithm; later customers will never hear the neutral view, and the information they do see will be biased.

We also talked in class about how a cascade can start once the lead for one option reaches a threshold of two. This works in theory, but it is not something we should typically expect in real life. The fact that liking a product is subjective (compared to guessing the marble urn, which is objective) pushes the effective threshold higher, because two people saying a product is good does not necessarily imply that it is a good product for me.
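To make the threshold-of-two idea concrete, here is a minimal Python sketch of the marble-drawing setup from class (the function name, the signal accuracy p, and the simple "lead of two" rule are my own simplifications, not anything from the Amazon paper): each person follows their own private signal until earlier public guesses lead by two for one answer, after which everyone copies the crowd and the cascade locks in.

import random

def run_cascade(true_urn="good", p=2/3, n_people=20, seed=0):
    """Simulate the classroom marble-drawing cascade (simplified sketch).

    Each person privately draws a signal that matches the true urn with
    probability p, then announces a guess. They follow their own signal
    unless earlier public guesses already lead by two or more for one urn;
    once that happens, a cascade starts and later signals are ignored.
    """
    rng = random.Random(seed)
    guesses = []
    for _ in range(n_people):
        signal = true_urn if rng.random() < p else ("bad" if true_urn == "good" else "good")
        lead = guesses.count("good") - guesses.count("bad")
        if lead >= 2:
            guess = "good"   # cascade on "good": private signal ignored
        elif lead <= -2:
            guess = "bad"    # cascade on "bad": private signal ignored
        else:
            guess = signal   # no cascade yet: follow own signal
        guesses.append(guess)
    return guesses

# Trying different seeds shows how a couple of unlucky early signals can
# lock every later person into the wrong guess, even when p > 1/2.
print(run_cascade(seed=3))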

Furthermore, we talked about how people go with their own judgment when there is a tie. In real life, however, good and bad reviews are weighted differently. For instance, when 3 out of 10 reviews of a product are bad, I start to wonder about its quality, and sometimes, when alternatives exist, even 1 bad review out of 10 is enough to change my mind.
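As a purely illustrative sketch of that asymmetry (the weighting factor here is my own assumption, not anything measured in the paper), one could give bad reviews extra weight when tallying, so a product with a nominal majority of good reviews can still read as risky:

def weighted_lean(good_reviews, bad_reviews, bad_weight=2.0):
    """Toy asymmetric tally: a bad review counts bad_weight times as much
    as a good one, so a nominal majority of good reviews can still 'lose'.
    The bad_weight value is purely illustrative."""
    return good_reviews - bad_weight * bad_reviews

# 7 good vs. 3 bad looks clearly positive by raw counts, but once bad
# reviews are weighted more heavily, the impression tips quickly.
print(weighted_lean(7, 3))                  # 1.0  -> barely positive
print(weighted_lean(7, 3, bad_weight=2.5))  # -0.5 -> tips negative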
