How Unbiased is Google Search?

We like to think that increased access to information necessarily means less biased information. The problem remains, however, that we have a limited ability to consume information, and thus some selection (either manually by a human, or automatically by an algorithm) must occur.

In a captivating TEDx talk, Swedish author and journalist Andreas Ekstrom first explains how Google Images (at least, an earlier iteration) works: images that are captioned with XYZ and have file names like XYZ.jpeg are more likely to show up in a search for XYZ. (Of course, the reality is likely more complicated; in class, we’ve talked about the importance of hubs and authorities in determining which pages get ranked more highly — a sketch of that idea follows below.)

Ekstrom then recounts two incidents that on their face (pun intended; you’ll see why shortly) are somewhat different. First, as part of a racist smear campaign in 2009, some nefarious individuals exploited these attributes to push a photo of Michelle Obama, photoshopped to more closely resemble a monkey, to the top of the search results. Google, being a conscientious company, deemed this racist and removed it manually. Two years later, when Anders Behring Breivik committed mass murder by blowing up a government building and shooting children on an island, some activists used the same technique to carry out vigilante justice — they captioned and named photos of feces with Breivik’s name. It worked. This time, however, the photos remained; Google did not intervene. Evidently, Ekstrom asserts, our search results are inherently imbued with moral biases.
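To make the hubs-and-authorities idea above a bit more concrete, here is a minimal Python sketch of HITS-style updates, using a made-up toy link graph (the page names and iteration count are invented for illustration; real Google ranking is, again, far more elaborate). A page’s authority score grows when good hubs link to it, and a page’s hub score grows when it links to good authorities:

```python
def hits(graph, iterations=50):
    """graph: dict mapping each page to the list of pages it links to."""
    pages = set(graph) | {p for targets in graph.values() for p in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}

    for _ in range(iterations):
        # Authority update: a page is authoritative if good hubs point to it.
        auth = {p: 0.0 for p in pages}
        for page, targets in graph.items():
            for t in targets:
                auth[t] += hub[page]
        # Hub update: a page is a good hub if it points to good authorities.
        hub = {p: sum(auth[t] for t in graph.get(p, [])) for p in pages}
        # Normalize so the scores don't grow without bound.
        norm_a = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        norm_h = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        auth = {p: v / norm_a for p, v in auth.items()}
        hub = {p: v / norm_h for p, v in hub.items()}
    return hub, auth

# Hypothetical toy web: two blogs pointing at one popular page.
toy_graph = {"blogA": ["result"], "blogB": ["result"], "result": ["blogA"]}
hub, auth = hits(toy_graph)
print(max(auth, key=auth.get))  # "result" emerges as the top authority
```

The point is simply that rank comes from link structure, not from captions and file names alone — which is why gaming captions works only up to a point.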

Whether or not this discrepancy was due to conscious editorial decisions, as Ekstrom suggests, it highlights an important issue of information bias in search and media consumption. Many political scholars and sociologists lament the increasingly polarized landscape of American government, while psychologists such as Jonathan Haidt prescribe opposite-view-taking and ideological diversity in order to reduce hostility and bias. Do the internet and the advancement of algorithms help us do this? The answer appears to be no. Increasingly, individuals get their news from sites and apps like Facebook and Flipboard that employ algorithms that use past viewing histories to curate future feeds. In other words, if you read many liberal-slanting articles, then, in order to keep you a happy user, the algorithm will present you with more liberal articles in the future. Even without these algorithms, the same effect manifests through homophily in networks: individuals embedded in a liberal network are more likely to see liberal articles or viewpoints shared by their friends. Clearly, this is bad for ideological diversity.
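To illustrate that feedback loop, here is a hypothetical sketch of history-based feed curation. The topic tags, scoring rule, and article titles are all invented for illustration — this is not Facebook’s or Flipboard’s actual algorithm, just the general shape of the idea:

```python
from collections import Counter

def curate_feed(click_history, candidates, k=3):
    """click_history: lists of topic tags the user has read.
    candidates: (title, tags) pairs. Returns the top-k best matches."""
    # Build a "taste profile" from everything the user has clicked before.
    taste = Counter(tag for tags in click_history for tag in tags)
    # Score each candidate by its overlap with that profile.
    scored = [(sum(taste[t] for t in tags), title) for title, tags in candidates]
    return [title for _, title in sorted(scored, reverse=True)[:k]]

# A user who has mostly read liberal-leaning pieces...
history = [["liberal", "politics"], ["liberal", "economy"], ["sports"]]
candidates = [
    ("Progressive tax plan explained", ["liberal", "economy"]),
    ("Conservative case for tariffs", ["conservative", "economy"]),
    ("Weekend football roundup", ["sports"]),
]
# ...sees the liberal-leaning article ranked first.
print(curate_feed(history, candidates, k=2))
```

Because the score only rewards overlap with past clicks, the top of the feed drifts toward whatever the user already reads — exactly the filter-bubble dynamic described above.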

The question is: do Facebook and Flipboard have an obligation to present their users with opposing viewpoints?
