Man vs. Machine: On the Road Edition
Self-driving cars: whether you’re for or against them, they’ve started to attract quite a bit of media attention. It seems that every major player, from Google to Toyota, has thrown its hat into the ring when it comes to automobile automation, and this somewhat strange mix of software giants and car manufacturers has started to produce some exciting results, with Google’s cars now being tested around the US and many other companies running trials of their own. But public apprehension towards these new technologies is, as usual, fairly high, stemming mostly from the potential danger of letting machines perform what is (statistically) the most dangerous thing most of us do every day. People still worry about the potential for horrible crashes, but as this Popular Science article points out, the robots aren’t the ones to blame.
The fact that the majority of crashes involving driverless cars aren’t caused by the machine highlights one of the key issues with these novel devices. Once all the fearmongering is stripped away, these cars simply aren’t as good at handling tricky situations as human drivers are. On a normal, boring commute, the cars are always attentive and alert, something that can’t always be said for their passengers. However, they lack the complex reasoning needed to handle tricky situations, which are usually caused by another driver’s mistakes. So the question the article really asks is: why should I buy a self-driving car if other people will just crash into it?
Consider a toy model: only two people in the world drive cars, and each has to weigh the possible risks and rewards of getting a driverless car. (Higher numbers represent higher risk or more cost, so each player’s goal here is minimization.)
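The payoff matrix itself is easy to sketch in code. The diagonal values below (5 when both drive manually, 2 when both go driverless) are illustrative assumptions of mine; the off-diagonal values (1 for the manual driver, 6 for the driverless owner) come from the first D/ND example discussed later. Under those assumptions, a brute-force check confirms that both players driving manually is the only pure-strategy equilibrium:

```python
# Toy cost matrix for the two-driver game (lower = better).
# cost[my_choice][their_choice] -> my cost.
# The diagonal values (5, 2) are assumed for illustration; the
# off-diagonal values follow the article's first D/ND example.
cost = {
    "manual":     {"manual": 5, "driverless": 1},
    "driverless": {"manual": 6, "driverless": 2},
}

def pure_equilibria(cost):
    """Find pure-strategy Nash equilibria: outcomes where neither
    player can lower their own cost by unilaterally switching."""
    return [
        (a, b)
        for a in cost for b in cost
        if cost[a][b] == min(cost[x][b] for x in cost)
        and cost[b][a] == min(cost[x][a] for x in cost)
    ]

print(pure_equilibria(cost))  # [('manual', 'manual')]
```

Note that both going driverless is not an equilibrium here: with these numbers, either player could cut their cost from 2 to 1 by defecting back to manual driving.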
Clearly, both people are mediocre drivers: the risk when both drive manually is fairly high. On the other hand, if both were to adopt driverless cars, their driving escapades would become much safer, because each car effectively knows what the other plans to do and the two can communicate, though also more expensive. The off-diagonal corners are where it gets interesting, and I’d argue those values depend heavily on how good the self-driving cars are and how much more they cost. If another person’s driverless car benefits you more when you don’t have one yourself, or if the cost and risk of getting a driverless car while the other player doesn’t exceeds the cost of the status quo, the equilibrium of this game isn’t clear. (From here on, I’m using D and ND for the driving and driverless players, respectively.) In a D/ND game where D has, for example, a cost of 1 and ND a cost of 6, the equilibrium leans towards both driving manually. If the costs in the D/ND outcome are lower (for instance, D = 1 and ND = 3), then a mixed strategy must be employed, and (after the math) each driver would choose the driverless car with probability 2/3. An improvement, but still not ideal. Only once a driverless car benefits an individual more than manual driving, regardless of what the other people on the road choose to do (in other words, once going driverless becomes a dominant strategy), would these automated cars be fully adopted.
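The 2/3 figure falls out of a standard indifference calculation. The both-manual and both-driverless costs below (5 and 2) are illustrative values I’ve assumed, chosen to be consistent with the stated result; the 1 and 3 are the second D/ND example above. In the mixed equilibrium, each player picks the probability p of going driverless that makes the opponent indifferent between their two options:

```python
# Mixed-strategy equilibrium for the symmetric two-driver cost game.
from fractions import Fraction

# cost[my_choice][their_choice] -> my cost (lower = better).
# The diagonal values (5, 2) are assumed; the off-diagonal values
# (1, 3) are the article's second D/ND example.
cost = {
    "manual":     {"manual": Fraction(5), "driverless": Fraction(1)},
    "driverless": {"manual": Fraction(3), "driverless": Fraction(2)},
}

def driverless_probability(cost):
    """Solve for the opponent's indifference point: the p with
        p*c[m][d] + (1-p)*c[m][m] = p*c[d][d] + (1-p)*c[d][m]
    where p is the probability the other player goes driverless."""
    m, d = "manual", "driverless"
    num = cost[d][m] - cost[m][m]
    den = cost[m][d] - cost[m][m] - cost[d][d] + cost[d][m]
    return num / den

print(driverless_probability(cost))  # 2/3
```

With these numbers, neither pure outcome is stable (each player would defect from both-driverless to save 1, and from both-manual to save 2), which is exactly why the equilibrium has to be mixed.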
Of course, far more than two people drive cars, and the actual progression of this industry also needs to factor in public beliefs, regulation, and other such hairy things. But I feel this game illustrates that, as driverless cars get better at handling tricky situations and start to enter the market, it will become more and more sensible to ditch the wheel.