Microsoft’s Tay or: The Intentional Manipulation of AI Algorithms

In this post, I would like to discuss the events that unfolded last March, after Microsoft launched Tay, an AI chat-bot designed to learn from its conversations. These events are discussed in this article, if you would like to read further. Microsoft released Tay as part of its research in conversational understanding, with the goal of teaching Tay to speak like millennial internet users. In releasing Tay, Microsoft failed to anticipate the kind of attention she would receive.

Users of various notorious websites, such as 4chan, Reddit, and, especially, eBaumsworld, decided it would be fun to abuse Tay's learning algorithms and teach her to be racist, sexist, antisemitic, homophobic, islamophobic, and so on. With a sudden surge of hate-filled messages, the internet quickly taught Tay all sorts of abhorrent ideas. Some of Tay's notable statements include denials of the Holocaust, glorifications of Hitler, denouncements of feminists, casual calls for genocide against various groups, erotic role-playing, support of GamerGate, and claims of drug use. After just 16 hours of life, Tay was shut down by Microsoft due to the offensive nature of her comments.

This is a powerful example of the challenges of AI learning and of how ideas disseminate through a network. A Microsoft spokesperson stated:

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

By surrounding Tay with offensive input, users quickly changed her apparent ideas. This shows that, in developing machine learning algorithms, we must be aware that not all input received is necessarily good input; we may need to place a filter on what is learned from. Additionally, with Tay, we saw something quite similar to the information cascade that we have discussed in class. After the first few hours of offensive input, Tay no longer responded receptively to ideas opposing those she had been taught; she would insult users who expressed opinions contrary to her teachings and did not appear to learn from their posts. This may indicate that Tay had already learned some ideas so strongly that she simply ignored newer input on those subjects.
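To make the filtering idea concrete, here is a minimal sketch in Python of what gating a chat-bot's training input might look like. Everything here is hypothetical: the `BLOCKLIST`, the `looks_abusive` check, and the `learn_from` update are stand-ins for illustration, not Microsoft's actual pipeline.

```python
# A minimal sketch of filtering a chat-bot's training input.
# The blocklist, the abuse check, and learn_from() are all
# hypothetical stand-ins, not Microsoft's actual code.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms; a real filter would be far broader

def looks_abusive(message: str) -> bool:
    """Crude keyword check: flag messages containing blocklisted terms."""
    words = set(message.lower().split())
    return bool(words & BLOCKLIST)

def train_on_conversation(bot, messages):
    """Only let the bot learn from messages that pass the filter."""
    for msg in messages:
        if looks_abusive(msg):
            continue  # the bot may still reply, but it does not update its model on this input
        bot.learn_from(msg)  # hypothetical online-learning update
```

Of course, a static keyword list is easy to evade; the broader point is simply that a learning system needs some gate between the input it responds to and the input it updates itself on.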
