prejudice among AI robots

This recent Engadget article reports on a research study from MIT and Cardiff University in which AI robots demonstrated an evolving pattern of prejudice.  There have been several past instances of discriminatory AI, arising, for example, from unintentionally biased training data.  In this case, however, the artificially intelligent machines learned prejudiced behaviors on their own.  This is a profound observation: it suggests that the capacity for prejudice is not solely a biological phenomenon.  It is also interesting to apply the concepts we have learned so far, in particular the notion of strong and weak ties, to the results of this study.

In the study, the researchers ran simulations in which the robots chose recipients for a series of donations.  Over time, the population of robots formed sub-groups whose members were more likely to donate to one another than to robots outside their group.  In network terms, each robot in the population can be represented as a node.  Edges between nodes are then strong if the two robots belong to the same sub-group, or weak if they belong to different groups.  Using this abstraction, one can predict the donation recipient for any robot from tie strength: recipients are more likely to have strong ties to the donor than weak ties.  Modeling the robots’ learned behavior with networks in this way may yield further insight into the results, and it is also thought-provoking: what tendencies might future networks modeling human–robot interaction uncover?
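The tie-strength abstraction above can be sketched in a few lines of code.  This is a minimal, hypothetical model, not the study's actual simulation: the robot names, group assignments, and weight values (1.0 for a strong in-group tie, 0.1 for a weak out-group tie) are illustrative assumptions.

```python
import random

def build_ties(robots, groups):
    """Assign a strong tie (weight 1.0) within a sub-group and a
    weak tie (weight 0.1) across sub-groups.  Weights are illustrative."""
    ties = {}
    for a in robots:
        for b in robots:
            if a != b:
                ties[(a, b)] = 1.0 if groups[a] == groups[b] else 0.1
    return ties

def pick_recipient(donor, robots, ties, rng):
    """Choose a donation recipient with probability proportional
    to the donor's tie strength toward each other robot."""
    others = [r for r in robots if r != donor]
    weights = [ties[(donor, r)] for r in others]
    return rng.choices(others, weights=weights, k=1)[0]

# A toy population of four robots in two sub-groups.
robots = ["r1", "r2", "r3", "r4"]
groups = {"r1": "A", "r2": "A", "r3": "B", "r4": "B"}
ties = build_ties(robots, groups)

rng = random.Random(0)
picks = [pick_recipient("r1", robots, ties, rng) for _ in range(1000)]
# Under these weights, most of r1's donations go to its strong tie, r2.
```

Even in this toy version, the prediction described above falls out directly: because in-group edges carry more weight, the sampled recipients are dominated by strong ties to the donor.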
