The Problem with Bias at Google
NY Times Article: “Here’s the Conversation We Really Need to Have About Bias at Google” by Farhad Manjoo
The New York Times article, “Here’s the Conversation We Really Need to Have About Bias at Google”, discusses the unintended, pervasive bias behind Google’s search engine and highlights Google’s powerful impact on our public discourse, purchasing decisions, and the information people have access to. A “monopolistic platform controlling the information landscape”, Google owns major communications platforms such as YouTube and Gmail as well as the Android operating system and app store. It is the world’s biggest internet advertising company, holding enormous power and influence over the market for digital news. And the way it shapes public discourse through algorithms is kept secret — no one outside the company really knows how they work. Meanwhile, ample evidence of bias has been discovered on Google’s platforms; for example, pictures of black people were labeled as “gorillas” by Google’s Photos app, and image results for the query “CEO” returned pictures of which only 11% were women, despite the fact that 27% of CEOs are women. These examples show how Google’s algorithms reflect the biases and blind spots of their human creators, but also how Google’s search engine, which learns from real-world user data, amplifies the biases found in society. As a corporation with financial and political goals, Google’s algorithms are also guilty of commercial bias, favoring the company’s own properties — for example, surfacing Google reviews above those of its competitor Yelp. Finally, Google’s algorithm, which favors recency and activity, is susceptible to misinformation in the aftermath of major news events.
I think this topic shows a more nuanced side to the search engine ranking systems we learned about in this course. The intuition behind hubs and authorities, for example, is based on the idea that certain pages are “high-value” lists that point to the “correct” pages with the most votes, and that these pages are the proper “answers” for the given search query. The notions of “right” and “wrong” answers, derived from people’s behavior (the blogs and websites maintained by many different groups of people), are dangerous because they carry an implied objectivity. They are mathematical methods of “sifting truth from trash”, yet they are inherently vulnerable to bias. The principle of repeated improvement — iterating the PageRank or hubs-and-authorities updates until a refined list of “good answers” is obtained — can in practice be a process that amplifies the voices and ideas of the majority while skimming off the conversations of the minority; because many view Google as an objective source of information and news, this process helps perpetuate society’s existing biases. It also gives Google an incredible amount of power over our beliefs and behaviors — it could be quietly steering us toward using more of its platforms, promoting its own products, or insidiously pushing political agendas that benefit its bottom line. It’s important to be aware that consumer well-being, fairness, and equity are not necessarily on the list of priorities for large companies like Google that have an enormous impact on our social, economic, and political landscape.
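The amplification effect described above can be made concrete with a small sketch of the hubs-and-authorities (HITS) repeated-improvement loop. The toy link graph and page names below are entirely hypothetical, chosen so that three “majority” hubs endorse one cluster of pages while a lone hub endorses a different page; iterating the score updates drives the minority page’s authority toward zero even though it received a genuine vote.

```python
# Hypothetical link graph: each page maps to the pages it links to.
# blogA-blogC (the majority) endorse news1/news2; blogD alone endorses news3.
links = {
    "blogA": ["news1", "news2"],
    "blogB": ["news1", "news2"],
    "blogC": ["news1"],
    "blogD": ["news3"],  # the lone minority voice
    "news1": [], "news2": [], "news3": [],
}

hubs = {page: 1.0 for page in links}
auths = {page: 1.0 for page in links}

for _ in range(50):  # repeated improvement until scores stabilize
    # Authority update: sum the hub scores of every page linking in.
    auths = {p: sum(hubs[q] for q in links if p in links[q]) for p in links}
    # Hub update: sum the authority scores of every page linked to.
    hubs = {p: sum(auths[q] for q in links[p]) for p in links}
    # Normalize so the scores stay bounded across iterations.
    a_norm = sum(auths.values()) or 1.0
    h_norm = sum(hubs.values()) or 1.0
    auths = {p: v / a_norm for p, v in auths.items()}
    hubs = {p: v / h_norm for p, v in hubs.items()}
```

After the loop, `news1` (backed by three hubs) dominates the authority scores, while `news3` — endorsed by only one hub — has been driven essentially to zero: the iteration does not merely rank the minority page lower, it progressively erases it, which is exactly the “skimming off” concern raised above.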