Iterated Prisoner’s Dilemma

Imagine you really hate the iClicker questions in Networks class. I say imagine, because no one actually hates iClicker questions, right? Turns out, the guy who sits next to you every day also hates iClicker questions. The two of you turn to a hard life of crime and decide to steal the iClicker receiver from Statler Hall in the middle of the night. Unfortunately, you two aren't good burglars and the police catch you in the act. They place you in separate interrogation rooms, where you can't communicate. Fortunately, the police can't convict you of intending to steal the iClicker and can only charge you with the lesser act of trespassing (robbing students of iClicker questions is a greater crime than trespassing). However, they offer you a deal: if you confess to the crime and your partner doesn't, you don't have to go to jail at all, but your partner will be sent to prison for a very long time. You assume your partner in crime has been offered the same deal. If you both confess, you both get sent to prison for a long time. This is called the Prisoner's Dilemma, and in class, we determined that the rational thing for you to do is to confess to the crime, even though mutual cooperation would yield a better outcome for both of you.

Now imagine that you're really dumb and you try again with the same person, after you both serve your prison sentences and re-enroll in the class. In fact, you keep trying to steal the iClicker receiver from Statler Hall 185 and you keep getting caught. This situation is called the Iterated Prisoner's Dilemma (1). In a study published in 1980, Robert Axelrod showed that in an iterated Prisoner's Dilemma game, there is an effective strategy that can result in both of you getting the lesser trespassing sentence every time you get caught (2).

The strategy is known as Tit-for-Tat, and was "discovered" by Axelrod when he conducted a computer tournament that pitted strategies submitted by different scientists against each other and tallied the total scores. The payoffs for Axelrod's game are as follows: if both players "cooperate" (refuse to confess), they both receive 3 points. If both players "defect" (confess), they both receive 1 point. If one player cooperates but the other defects, the cooperator gets nothing and the defector gets 5 points. Tit-for-Tat, which embodies the idea of an "eye for an eye," came out the winner against strategies ranging from always cooperating to always defecting.
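
For concreteness, here is the payoff scheme written out as a small lookup table. This is just a minimal sketch in Python (the names and encoding are my own, not anything from Axelrod's tournament):

```python
# A toy encoding of the tournament payoffs described above.
# 'C' = cooperate (refuse to confess), 'D' = defect (confess).
# Each entry maps (my move, your move) to (my points, your points).
PAYOFFS = {
    ('C', 'C'): (3, 3),  # mutual cooperation: both score the reward
    ('D', 'D'): (1, 1),  # mutual defection: both score the punishment
    ('C', 'D'): (0, 5),  # sucker's payoff vs. the temptation to defect
    ('D', 'C'): (5, 0),
}
```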

In Tit-for-Tat, you always start out cooperative. So, in your iClicker situation, you would not confess to the police. If the other player does the same, you both win. However, if the other player defects (your accomplice confesses), you get screwed. The next time you play the game (when you get caught again), you defect because the other player betrayed you in the previous game. If the other player continues to defect, you continue to defect. However, if the other player cooperates, you choose to cooperate in the next round, which, in essence, means that you forgive him for backstabbing you in the previous game.
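
To make the decision rule concrete, here is a minimal sketch of repeated play, building on the PAYOFFS table from the earlier snippet (again, the function names are my own illustration):

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    if not opponent_history:
        return 'C'               # always start out cooperative
    return opponent_history[-1]  # copy whatever the opponent did last round

def play(strategy_a, strategy_b, rounds=10):
    """Score two strategies against each other over repeated rounds."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        # Each player sees only the other player's past moves.
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Example: Tit-for-Tat against an unconditional defector. After being
# betrayed once in round 1, Tit-for-Tat defects for the rest of the game.
always_defect = lambda history: 'D'
print(play(tit_for_tat, always_defect))  # (9, 14) over 10 rounds
```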

The results of Axelrod's study are very interesting, because they show that in iterated Prisoner's Dilemma games, the players who are "nice" (they never defect first) almost always come out on top, despite the saying "nice guys finish last." Tit-for-Tat is a "nice" strategy because it starts by assuming good faith, and it fosters cooperation and clemency. Axelrod's research is also very relevant to our discussion of Nash equilibria in the Prisoner's Dilemma game (Chapter 6) because it shows that the outcome of the Prisoner's Dilemma can be different if the game is repeated.

Of course, the situation I described with the iClickers is very silly and unrealistic. However, the Tit-for-Tat strategy had been in practice long before Axelrod confirmed its efficacy. In World War I, soldiers in the trenches would often not fire on the other side unless the other side shot first (3). This "live and let live" strategy led to lower casualties for both sides. During the Cold War, the U.S. and the Soviet Union followed the doctrine of Mutually Assured Destruction, which was essentially a Tit-for-Tat strategy (you bomb me, I bomb you). Iterated Prisoner's Dilemma strategies can be applied to many situations, ranging from business deals to international relations to friendships.

Thus, the next time you find yourself in a Prisoner's Dilemma, remember that cooperation is a very good strategy if you know you'll repeatedly find yourself in the same situation. Or, you can just get out of bed at 10:00 a.m. and click a few buttons three times a week.

(1) "Prisoner's dilemma," Wikipedia: https://en.wikipedia.org/wiki/Prisoner’s_dilemma

(2) Robert Axelrod, "Effective Choice in the Prisoner's Dilemma" (1980): http://www.jstor.org/stable/173932

(3) "Live and let live (World War I)," Wikipedia: https://en.wikipedia.org/wiki/Live_and_let_live_(World_War_I)
