


Marbles, fake news, and how to stop it

By Victor Odouard

 

We talked in class about the marble experiment. To summarize it again: we have a bag that contains either majority red (2/3 red, 1/3 blue) or majority blue (2/3 blue, 1/3 red) marbles. Students stand in line, draw a marble, and announce a guess as to whether they think the bag is majority red or majority blue. They can base that guess on their own draw and on the guesses of the students who came before them. If the bag is equally likely to be majority red or majority blue, then once the first two people guess the same color, it is rational for everyone after them to say the same thing, no matter what they draw.

This is true despite the fact that the bag may still be majority red. If the first two guesses were blue, for example, then both students presumably drew blue, and the chance of drawing two blues from a majority-red bag is (1/3)² = 1/9; running that through Bayes' rule, there is still a 1/5 chance that the bag is majority red and that everyone in the class will be wrong. The scary part is that an incorrect cascade can easily form and leave everyone wrong. The reassuring part is that, given two blue guesses in a row, the bag is much more likely to be majority blue than majority red. This hints that information cascades should at least be correct most of the time.
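For concreteness, here is the Bayes' rule step, writing bb for "the first two draws were blue" and using the 50/50 prior over bag types:

```latex
\[
P(\text{majority red} \mid bb)
  = \frac{P(bb \mid \text{red})\,P(\text{red})}
         {P(bb \mid \text{red})\,P(\text{red}) + P(bb \mid \text{blue})\,P(\text{blue})}
  = \frac{\tfrac{1}{9} \cdot \tfrac{1}{2}}
         {\tfrac{1}{9} \cdot \tfrac{1}{2} + \tfrac{4}{9} \cdot \tfrac{1}{2}}
  = \frac{1}{5}
\]
```

So two matching guesses are strong, but far from conclusive, evidence about the bag.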

However, a recent study from MIT found that fake news actually travels much more quickly than true news. The researchers, Vosoughi, Roy, and Aral, put numbers on this worrying trend. As reported on MIT's website, fake news is 70% more likely to be retweeted than true news is. Fake news also reaches chains of 10 retweets about twenty times faster than real news does, and it reaches 1,500 people in one-sixth the time. One strong explanatory factor was novelty: people were more likely to retweet fake news because it seemed more novel (which makes sense, since it isn't constrained by what actually happened).

It’s clearly bad that fake news travels faster than real news. But how can we see this more concretely? I’d like to make an analogy between the marble experiment and the spread of fake news. Here are the correspondences:

Majority red vs. majority blue – "Obama was born in the US" vs. "Obama was born outside the US" (a MECE pair: mutually exclusive, collectively exhaustive)

Drawing of marbles – people getting signals from "reliable" sources, ones we can reasonably expect to give the correct information most of the time

People giving their guesses – people posting stories on social media in favor of one view or the other

Like before, everyone can guess based on their "marble draw" and on what they hear from other people. But unlike before, people don't hear all the guesses of all the people who came before them. In fact, the probability that a guess is heard given that it is correct is lower than the probability that it is heard given that it is incorrect, because, as the MIT study found, fake news is more likely to be spread than real news. So the outside information is less reliable than it would have been had people heard everyone's guesses, as in the original experiment.

This means that an incorrect cascade, which was already possible in the marble experiment, becomes even more likely, since any given incorrect guess is heard more often than any given correct one.
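To see this qualitatively, here is a minimal simulation sketch. The hearing probabilities, the simple vote-counting rule, and all names here are illustrative assumptions of mine, not parameters from the study or the exact Bayesian model from class; the point is only that skewing who gets heard toward incorrect guesses makes incorrect cascades more common.

```python
import random

def run_cascade(n_agents=50, p_hear_correct=1.0, p_hear_wrong=1.0, seed=None):
    """One round: the bag is majority blue (2/3 blue). Each agent draws a
    marble, hears each earlier guess only with some probability, and
    announces whichever color wins a simple tally of heard guesses plus
    their own draw (ties broken by the agent's own draw)."""
    rng = random.Random(seed)
    guesses = []
    for _ in range(n_agents):
        draw = "blue" if rng.random() < 2 / 3 else "red"
        heard = [g for g in guesses
                 if rng.random() < (p_hear_correct if g == "blue" else p_hear_wrong)]
        blue = heard.count("blue") + (draw == "blue")
        red = heard.count("red") + (draw == "red")
        guess = "blue" if blue > red else "red" if red > blue else draw
        guesses.append(guess)
    return guesses.count("red") / n_agents  # fraction of wrong guesses

def wrong_cascade_rate(trials=2000, **kw):
    """Fraction of rounds in which most agents guessed wrong."""
    return sum(run_cascade(seed=i, **kw) > 0.5 for i in range(trials)) / trials

# Classroom version: every guess is heard by everyone who follows.
print(wrong_cascade_rate(p_hear_correct=1.0, p_hear_wrong=1.0))
# Fake-news version: wrong guesses travel better than correct ones.
print(wrong_cascade_rate(p_hear_correct=0.5, p_hear_wrong=0.85))
```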

This simplification also assumes that there is a “reliable” news source that people can reasonably expect to hear from at least once, but sometimes even this may not be the case, making the spread of wrong information even more likely.

This partially explains why so many people thought Obama was born outside of the United States despite clear facts to the contrary.

While this is a worrisome finding, it at least gives some hints on how to deal with the spread of fake news more effectively. Many hypotheses posited that bots were the primary purveyors of fake news, and we now know that this is not actually true: real people are the primary retweeters of fake news. That gives us some hope, because while bots don't care about doing the right thing, people usually do. So if social media platforms built in functionality that evaluated the reliability of a news story and informed users of it before they retweet, users might be discouraged from reposting questionable content. There could also be a reputation system based on the reliability of the stories you repost, which real people would presumably care about more than bots would. It turns out that getting humans to do the right thing might be easier than getting bots to do the right thing, so perhaps the finding isn't as grim as it first seems.
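As a rough illustration of what that might look like, here is a tiny sketch. The scoring scale, the threshold, and the function names are all hypothetical inventions for this post, not features of any real platform:

```python
from statistics import mean

# Assumed scale: a story's reliability score runs from 0 (almost surely
# false) to 1 (well sourced), supplied by some external fact-checking signal.

def warning_before_retweet(reliability: float, threshold: float = 0.3):
    """Return a warning to show the user before retweeting, or None."""
    if reliability < threshold:
        return f"This story scored {reliability:.2f} for reliability. Retweet anyway?"
    return None

def reputation(reposted_reliabilities):
    """A user's reputation: the average reliability of what they repost."""
    return mean(reposted_reliabilities) if reposted_reliabilities else 1.0

print(warning_before_retweet(0.12))       # low score triggers a warning
print(reputation([0.9, 0.8, 0.95, 0.2]))  # one bad repost drags this down
```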

https://sap.mit.edu/news/study-twitter-false-news-travels-faster-true-stories
