Autonomous Killing and Game Theory

Peter Finn wrote an interesting article for the Washington Post detailing a recent military demonstration and the likely future of a highly automated military. During the demonstration, two unmanned autonomous (not human-controlled) aircraft flew over a military base in search of their target: a large orange tarp. They worked together, as programmed, to identify the tarp and signal a ground vehicle to take a closer look. Stunts like these are not far from becoming common battlefield activities. This is because, as the rest of the article outlines, the pros outweigh the cons when it comes to developing these thinking, automated systems, at least from the military's perspective.

So let's say your future job is to program a drone for autonomous military operations. For example, it should know when it's prudent to terminate a top terrorist leader on its own (i.e., without some CIA official calling the shots). How would you do it? First, let's assume the drone has the same access to surveillance (maybe from its other robotic friends) and internal intelligence as a human making the call would. OK, but there are still hundreds of variables to consider before deciding to launch a missile at this guy. How certain am I that he actually is in this building? What about the collateral damage and civilian loss? What do I risk by waiting until nightfall? Until next week (when I may be better informed)? What of the cost of the missile I launch, or the chances I'll be shot down? And, if you want to get really fancy: is taking him out worth the retaliatory terrorism that might follow?
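To make that concrete, here is a minimal Python sketch of how a drone might weigh a few of these variables as an expected-value comparison between striking now and waiting. Every number in it is hypothetical, invented for the sketch rather than taken from the article:

    # Hypothetical estimates for the "strike now" action
    p_target_present = 0.7    # chance the leader is actually in the building
    value_of_kill = 100.0     # benefit if the strike succeeds
    collateral_cost = -40.0   # expected civilian/collateral cost of any strike
    missile_cost = -5.0       # cost of the munition itself

    # Hypothetical estimates for the "wait until nightfall" action
    p_better_shot = 0.6       # chance waiting yields a cleaner shot
    value_if_waited = 80.0    # discounted benefit (the target may slip away)

    # Compare the two actions by expected value and pick the larger.
    ev_strike_now = p_target_present * value_of_kill + collateral_cost + missile_cost
    ev_wait = p_better_shot * value_if_waited

    best = "strike now" if ev_strike_now > ev_wait else "wait"
    print(f"EV(strike now) = {ev_strike_now:.1f}, EV(wait) = {ev_wait:.1f} -> {best}")

With these made-up numbers the drone would wait, and you can see how sensitive the decision is to each estimate, which is exactly why there are "hundreds of variables" to get right.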

So what is the best way to manage these variables and arrive at the best decision? Hopefully you see that war tactics like these are no different from a (very bloody) game, and thus game theory is required to make the best decisions. Humans use it all the time to make wise choices, and any thinking robot will need to do the same. At a very high level, the logic employed by these automated systems can be viewed as a very large payoff matrix in which each decision is mapped to a cost and a value (or points in the game) based on the estimated response to that decision. The drone in the example would want to maximize the value of the operation (the benefits of killing or not killing the terrorist leader) and minimize the cost (retaliation, collateral damage, etc.). Sure, the setup and logic would be slightly different from what we learned in class, but the basic principles (like the existence of a best choice) would be the same.
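As a toy illustration of that payoff-matrix view (again, all payoffs are made up for the sketch), rows can be the drone's actions and columns the adversary's estimated responses. One cautious way to define the "best choice" is the maximin action: the one whose worst-case payoff is highest:

    # Rows: the drone's actions. Columns: [no retaliation, retaliation].
    # Each cell is a net payoff (value minus cost); all numbers hypothetical.
    payoffs = {
        "strike now":     [55.0, -20.0],
        "wait for intel": [40.0,  10.0],
        "abort mission":  [ 0.0,   0.0],
    }

    # Maximin: choose the action whose worst-case outcome is best.
    best_action = max(payoffs, key=lambda a: min(payoffs[a]))
    print(f"Maximin choice: {best_action}")

A real system would be far richer, weighting columns by the probability of each response rather than assuming the worst, but the basic idea of tabulating decisions against outcomes is the same.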

Perhaps the most interesting part of all of this is that, today, the major holdup in creating such a sophisticated system is implementing the artificial intelligence. We already have drones in the air, and we can gather enormous amounts of information through surveillance. What's left is teaching the machine the game theory it needs to make smart choices.
