Bayes and the Effects of Bias and Belief

The author begins his article by recalling that in his previous post he ventured into statistics to explain why the “five sigma” criterion for a discovery is so heavily used in particle physics. As part of that explanation, he quoted the statistician Louis Lyons as saying that the five sigma criterion includes a “subconscious Bayes factor”.

Bayes’ theorem is, on one level, an uncontroversial statement about conditional probabilities. For example, say I have a bag that contains three stones: two blue and one red. Without looking, you and I each pick and keep one stone. What are the chances that I have a blue stone and you have the red one?

This question can be solved in two ways. First: if you have the red stone (probability 1/3), then I definitely have a blue stone (probability 1), so the probability is 1 × 1/3 = 1/3.

On the other hand, if it is revealed that I have a blue stone (probability 2/3), then you have the red stone with probability 1/2, so the probability is 2/3 × 1/2 = 1/3, again.

The answers are the same, and the two calculations can be phrased as follows: “the probability of me having a blue stone if you have the red stone” multiplied by “the probability of you having the red stone” equals “the probability of you having the red stone if I have a blue stone” multiplied by “the probability of me having a blue stone”. This, in essence, is Bayes’ theorem.
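The identity can be checked by brute-force enumeration. The following Python sketch (illustrative code, not from the article) lists every equally likely ordered draw from the two-blue-one-red bag and confirms that both products recover the same joint probability of 1/3:

```python
from fractions import Fraction
from itertools import permutations

stones = ["B", "B", "R"]  # two blue, one red
# Every ordered draw (my stone, your stone) is equally likely.
draws = list(permutations(range(3), 2))

def prob(event):
    """Probability of an event over the six equally likely draws."""
    return Fraction(sum(1 for m, y in draws if event(m, y)), len(draws))

p_joint = prob(lambda m, y: stones[m] == "B" and stones[y] == "R")  # 1/3
p_me_blue = prob(lambda m, y: stones[m] == "B")                     # 2/3
p_you_red = prob(lambda m, y: stones[y] == "R")                     # 1/3

# Conditional probabilities, by definition:
p_me_blue_given_you_red = p_joint / p_you_red   # 1
p_you_red_given_me_blue = p_joint / p_me_blue   # 1/2

# Bayes' theorem: both products equal the joint probability.
assert p_me_blue_given_you_red * p_you_red == p_joint
assert p_you_red_given_me_blue * p_me_blue == p_joint
print(p_joint)  # 1/3
```

Exact fractions are used throughout so the equality holds without floating-point rounding.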

To see how this theorem can be useful, imagine we only know that I have a blue stone. Bayes’ theorem can be rearranged to calculate the chance that you have the red stone: “the probability of you having the red stone if I have a blue stone” equals “the probability that I have a blue stone if you have the red stone” multiplied by “the probability of you having the red stone”, divided by “the probability of me having a blue stone”.
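Plugging the numbers from the bag example into that rearrangement (a minimal illustrative sketch, with variable names of my own choosing) gives 1/2, matching the direct calculation:

```python
from fractions import Fraction

# Known pieces from the two-blue-one-red bag:
p_me_blue_given_you_red = Fraction(1)   # if you hold the red stone, mine is certainly blue
p_you_red = Fraction(1, 3)
p_me_blue = Fraction(2, 3)

# Bayes' theorem, rearranged for the quantity we don't directly know:
p_you_red_given_me_blue = p_me_blue_given_you_red * p_you_red / p_me_blue
print(p_you_red_given_me_blue)  # 1/2
```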

To a statistician, this theorem is fundamental and is widely used in many applications. Things become more complicated in the common, real-life scenario where we don’t have enough information and are trying to work out what is happening.

Say that we don’t know what colors the stones in the bag are (blue or red), or the color of the stone that you have, but we do know that I have a blue stone. What’s the probability that you have a red stone? Unfortunately, we don’t know the prior probabilities: how many of the three stones are blue and how many are red.

Bayesian inference frames these prior probabilities in terms of belief. Unless you have some belief about the initial contents of the bag, or previous evidence about them, you would be best off assuming a 50-50 chance that each stone is blue or red. With that assumption, you can work through the rest of the calculation and find the probability that you have the red stone.
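Under the 50-50 assumption the calculation can again be done by enumeration. This Python sketch (illustrative, with assumed names) weights all eight possible colorings of the bag equally, conditions on my stone being blue, and finds the probability that yours is red:

```python
from fractions import Fraction
from itertools import product, permutations

# Uniform prior: each stone is independently blue or red with probability
# 1/2, so all 8 colorings of the bag are equally likely, as is every
# ordered draw (my stone, your stone) within each coloring.
outcomes = [(colors, my_i, your_i)
            for colors in product("BR", repeat=3)
            for my_i, your_i in permutations(range(3), 2)]

# Condition on the evidence: my stone is blue.  (This automatically rules
# out the all-red bag, whose likelihood for that evidence is zero.)
consistent = [o for o in outcomes if o[0][o[1]] == "B"]
you_red = [o for o in consistent if o[0][o[2]] == "R"]

p = Fraction(len(you_red), len(consistent))
print(p)  # 1/2
```

Interestingly, the answer comes out to 1/2 under this prior, not the 1/3 we got when the bag's contents were known.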

You can even “feed in” other facts. For example, we can exclude the possibility that all three stones are red, because then I couldn’t have a blue stone, which I do. We might know that in the global population of stones one color is more abundant than the other. Or perhaps the favorite color of the person who prepared the bag was red, making it more probable that the bag contains more red stones than blue. Maybe all bags contain the same proportion of colors. And we will update our prior belief based on previous experiments… and now everything’s a mess, and scientific objectivity seems to have disappeared.

Putting aside the strength of the evidence, Bayes’ rule acknowledges that even given good evidence, the credibility you grant a theory will depend on your prior assessment of the theories involved. In practice this is true in science, whether scientists want to admit it or not – hence the “subconscious Bayes factor” in the five sigma criterion.

Climate change is a good example. If you hold the prior stereotype that modern life is bad and technology is evil, you will place a high prior probability on carbon dioxide emissions dooming us all. If instead you lean towards the idea that there is an all-out scheme by large multinational environmental corporations and academics to deprive you of the right to drive your kids to school, you will most likely not place high credibility on the mounting evidence of anthropogenic climate change.

One important caveat is that a prior probability of zero can never be updated. For example, if you firmly believe that the Earth is five millennia old, no amount of evidence can change your mind; your belief makes you impenetrable. In this sense, Bayesian statistics provides a mathematical definition of a closed mind: anyone with a prior of zero on a proposition can never learn from any amount of evidence, because anything multiplied by zero is zero.
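The closed-mind point can be made concrete with the odds form of Bayes’ rule (posterior odds = prior odds × likelihood ratio). In this small sketch, assumed for illustration, no likelihood ratio, however extreme, moves a zero prior:

```python
from fractions import Fraction

prior = Fraction(0)               # absolute certainty the hypothesis is false
prior_odds = prior / (1 - prior)  # 0

# Odds form of Bayes' rule: posterior odds = prior odds x likelihood ratio.
# However strong the evidence, zero prior odds stay zero.
for likelihood_ratio in (2, 1000, 10**12):
    posterior_odds = prior_odds * likelihood_ratio
    assert posterior_odds == 0

print("posterior probability remains 0, whatever the evidence")
```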

On the brighter side, Bayes’ rule lets us accommodate both our biases and the weight of prior evidence.

 

This article is relevant to what we have learned about Bayes’ rule in class, going more in depth into how the theorem lets us calculate probabilities from given information (prior and conditional probabilities). It also explores how an individual’s biases and beliefs can affect their assessment of those probabilities. Overall, the article reviews what we have learned in class while shedding new light on the subject of Bayes’ rule.

 

Works Cited:

Butterworth, Jon. “Belief, Bias and Bayes.” The Guardian, Guardian News and Media, 28 Sept. 2014. Web. 21 Nov. 2015. <http://www.theguardian.com/science/life-and-physics/2014/sep/28/belief-bias-and-bayes>.

 

