The Moral Dilemma of Self-Driving Cars
A brand new Tesla Model S carrying four passengers is cruising down the highway with the autopilot on. It trails behind a pickup truck carrying what look like large metal crates. Suddenly, the straps holding the crates on the truck break, and the crates fly toward the Tesla. An accident is inevitable. To the car’s left and right are motorcycles, and the AI powering the self-driving car must make a difficult decision. Does the car continue on its path, crashing into the metal crates and potentially fatally injuring its passengers, or does it swerve to one side, killing a motorcyclist but saving its four passengers?
Autonomous vehicles will undoubtedly increase safety on the road. Most car accidents are the result of human error, and self-driving cars will not succumb to human flaws. However, self-driving cars will inevitably have to resolve moral dilemmas such as the one above, and their decision-making process must be quantitatively defined; in other words, engineers will have to consciously decide what self-driving cars should do in these situations. A game-theoretic approach could provide a potential solution to this problem. The first question to ask is: do we want to optimize the outcome for the car’s own passengers (a locally “rational” approach), or do we want to find a global optimum across everyone involved? Studies have shown that human moral behavior is a mix of these two approaches, but humans aren’t perfect; should the morals of autonomous cars be modeled after humans, or should they follow a different standard?
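To make “quantitatively defined” concrete, here is a minimal Python sketch of one possible decision rule: a single weight blends a passenger-only (“rational”) utility with a global utility that counts everyone affected. The action names, harm estimates, and the weight are hypothetical illustrations, not values taken from any real system.

```python
# Hypothetical sketch: choosing a maneuver by blending a "rational"
# (passenger-only) utility with a global utility over everyone affected.
# Action names, harm estimates, and the weight are illustrative assumptions.

def blended_utility(passenger_harm, others_harm, selfishness=0.5):
    """selfishness=1 counts only the passengers; selfishness=0 counts
    everyone equally, i.e. it seeks the global optimum."""
    passenger_utility = -passenger_harm                # harm to the car's occupants
    global_utility = -(passenger_harm + others_harm)   # harm to everyone involved
    return selfishness * passenger_utility + (1 - selfishness) * global_utility

# Expected-harm estimates for the crate scenario (made-up numbers).
actions = {
    "stay_in_lane": {"passenger_harm": 4.0, "others_harm": 0.0},  # hit the crates
    "swerve":       {"passenger_harm": 0.5, "others_harm": 5.0},  # hit a motorcyclist
}

chosen = max(actions, key=lambda a: blended_utility(**actions[a]))
print(chosen)  # which maneuver wins flips as the selfishness weight is varied
```

Sweeping the weight from 1 down to 0 moves the rule from purely self-interested to purely utilitarian, which mirrors the mix of the two approaches observed in human behavior.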
Fortunately, there are no reported cases of a self-driving car facing this exact dilemma, but the time will likely come. Autonomous vehicles learn to make decisions through machine learning, often training on data from human drivers; in that sense their decision-making is modeled after ours, so it follows that they will have similar morals to humans. But human morals are imperfect, and are often a cause of car accidents, so the question still stands: how should self-driving cars react if an accident is unavoidable?
Applying game theory, we can model this problem as a zero-sum game, in which a win-win outcome is impossible. In the example above, either the passengers die and the motorcyclist survives, or the passengers survive and the motorcyclist dies. Unfortunately, no one can definitively decide what self-driving cars should do in these situations, and manufacturers are hesitant to give a clear answer, so the question remains unanswered.
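To make the zero-sum framing explicit, here is a minimal payoff table for the crate scenario, again in Python and with purely illustrative ±1 payoffs; the point is only that in every outcome one party’s gain is exactly the other’s loss.

```python
# Hypothetical zero-sum payoff table for the crate scenario.
# Payoffs are purely illustrative: +1 for surviving, -1 for being harmed.
payoffs = {
    "stay_in_lane": {"passengers": -1, "motorcyclist": +1},
    "swerve":       {"passengers": +1, "motorcyclist": -1},
}

# The defining property of a zero-sum game: in every outcome the payoffs sum
# to the same constant (here zero), so one side's gain is the other's loss.
assert all(sum(outcome.values()) == 0 for outcome in payoffs.values())
```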
Before we hand over power to autonomous vehicles, we must address this issue and devise a clear code of ethics for autonomous technology. A game-theoretic approach is arguably the fairest solution, as it provides a quantitative framework for the problem. There is no perfect solution, so it is important to weigh all the possibilities and prioritize accordingly. This issue relates to the course material as it involves the application of game theory to a serious problem that will only become more pertinent in the future. Additionally, unlike the prisoner’s dilemma, where mutual cooperation leaves both players better off, there is no outcome here in which everyone wins. It is essentially a zero-sum situation, and game theory is crucial in determining an optimal solution.
https://users.cs.duke.edu/~conitzer/moralAAAI17.pdf
https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/