


Algorithms, Networks, and Discrimination

Without a doubt, an ever-increasing number of services and companies are using data and analytics to target individuals. An article from The Atlantic titled “What happens when biases are inadvertently built into algorithms” sheds light on how software and data analysis may inadvertently create bias and racial discrimination. In the article, author Lauren Kirchner describes “disparate impact” theory, a legal tool often used to address discrimination. Historically, disparate impact theory has been used by lawyers to challenge policies that negatively impact certain groups of people, regardless of the policies’ original intent. The term originated in Griggs v. Duke Power Company (1971), in which the Supreme Court ruled it illegal for companies to use intelligence tests and high school diploma requirements in hiring and promotion when they disproportionately disqualified people of color, regardless of whether the company had intended that effect. A key open issue today is how courts will address algorithmic bias. More importantly, in the context of networks and our technology-driven world, the real question is how disparate impact theory can be applied to the algorithms that target consumers and users.
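To make the idea of disparate impact concrete, it is typically quantified by comparing selection rates across groups; under the EEOC’s “four-fifths rule,” a rate for one group below 80% of the highest group’s rate is treated as evidence of disparate impact. The following is a minimal Python sketch of that comparison applied to a hypothetical automated hiring screen; the numbers and group labels are invented for illustration, and the four-fifths threshold comes from EEOC guidance rather than from the article itself.

```python
# Minimal sketch: quantifying disparate impact with the EEOC four-fifths rule.
# All data below is hypothetical, invented purely for illustration.

def selection_rate(selected, applicants):
    """Fraction of applicants who passed the screen."""
    return selected / applicants

# Hypothetical outcomes of an automated hiring screen, by group.
outcomes = {
    "group_a": {"applicants": 200, "selected": 120},  # rate 0.60
    "group_b": {"applicants": 150, "selected": 60},   # rate 0.40
}

rates = {g: selection_rate(o["selected"], o["applicants"])
         for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # impact ratio relative to the best-treated group
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```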

Data analytics and predictive software are growing at an exponential rate. Yet, as Kirchner points out, algorithms that make decisions based on data such as an individual’s phone location tag can “reflect, or even amplify the results of historical or institutional discrimination.” The article references examples where Flickr’s auto-tagging labeled pictures of black men as “animal” or “ape,” and where researchers determined that searches for African-American-sounding names are more frequently paired with ads relating to criminal activity.

With this in mind, this “inadvertent bias” of algorithms relates directly to graph theory. The Flickr example suggests that an edge was created between two nodes: one being the tag “black men” and the other being “ape.” As more nodes and edges are created, nodes can become inadvertently linked in ways that attach negative connotations to tags, as in the Flickr example from the article. Furthermore, there is also the distinction between a network held together by a few strong ties and one held together by many weaker ties. As with any network, a single strong link often exerts far more influence than many weak links. It is therefore important to consider how these algorithms assign the strength of ties and the connotations between tags, and ultimately how they shape the “discrimination” or “bias” presented to the user; a rough sketch of how such a weighted tag graph could arise appears below. The trouble with this kind of bias is that it is hard for consumers to detect, and even harder to quantitatively challenge and prove. Hence, with the ever-growing number of decisions being made by algorithms, these distinctions of discrimination will, for the time being, remain largely blurred.
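As a rough illustration of how such tag networks and tie strengths could arise, here is a minimal Python sketch assuming a simple co-occurrence model: every pair of tags appearing on the same photo gains an edge, and repeated co-occurrence strengthens the tie. The tags and counts are hypothetical, and real auto-tagging systems are far more complex.

```python
# Minimal sketch: building a weighted tag graph from co-occurrence counts.
# Tags and data are hypothetical, invented purely for illustration.
from collections import defaultdict

# Each photo's machine-generated tag set (hypothetical data).
photo_tags = [
    {"person", "portrait"},
    {"person", "ape"},   # a single mislabeled photo...
    {"person", "ape"},   # ...repeated, creates a strong tie
    {"person", "outdoor"},
    {"animal", "ape"},
]

# Build an undirected weighted graph: edge weight = co-occurrence count.
weights = defaultdict(int)
for tags in photo_tags:
    tags = sorted(tags)
    for i in range(len(tags)):
        for j in range(i + 1, len(tags)):
            weights[(tags[i], tags[j])] += 1

# A strong tie (high weight) dominates the associations the system learns,
# even when it originates from a handful of mislabeled examples.
for (a, b), w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: weight {w}")
```

Run as-is, the edge “ape — person” ends up with twice the weight of every other edge, which is the point: a few bad labels can produce the single strongest tie in the graph.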

Source: http://www.theatlantic.com/business/archive/2015/09/discrimination-algorithms-disparate-impact/403969/
