Let’s face it. Human beings are terrible drivers. We drive under the influence, we fall asleep at the wheel, we text while driving, and we don’t always react promptly or appropriately to oncoming danger. More than 20 million people are injured or disabled and nearly 1.3 million people die in road crashes every year – that’s 3,287 deaths a day! That’s why Tesla, Ford, BMW, Uber, and tech giants like Apple, Google, and Baidu are working on a safer and easier way to get around: self-driving cars.
Begun as part of the Google X project to develop technology for electric cars, Google’s prototype driverless car has been seen around California, Michigan, Florida, and Nevada since 2014. Those who had the pleasure of riding in Google’s earlier self-driving cars say the system is cautious and non-aggressive, but having traveled extensively, the cars have been involved in several accidents, such as being rear-ended at a stop sign or side-swiped by another driver.
The project evolved into Waymo, which now operates under Google’s parent company Alphabet. It was the first to put fully self-driving cars on the road in November 2017. The Verge recently reported that Waymo “is ahead in the self-driving car race by most metrics”, having driven the most miles and collected valuable data in the process. That data feeds deep (machine) learning models that improve the perception and behavior of self-driving cars. Waymo’s fleet in Arizona consists of 600 autonomous minivans.
Other contenders, German automaker BMW and Chinese search engine and web services company Baidu, have also joined the race. Baidu’s self-driving car, a modified BMW 3-Series, hit the road in 2015. Unlike Google, whose vision is to make cars capable in every situation so that no human will ever have to drive again, Baidu geographically limited its fully autonomous vehicles, thereby limiting the challenges it had to deal with. However, BMW and Baidu ended their partnership on the project in late 2016, after jointly developing an automatic overtaking capability. Both firms are targeting a wide deployment of their fully autonomous cars by 2021.
How soon can we purchase these self-driving cars, whether through an operating lease, a novated lease, or other means of financing? 2021 is perhaps a very optimistic target. Presently, semiautonomous and fully autonomous vehicles have yet to show a satisfactory safety track record. Tesla, the renowned automotive technology pioneer, has seen its semiautonomous Autopilot feature come under heavy scrutiny following fatal crashes in California earlier this year. Self-driving vehicles from Waymo and Uber have also reportedly been involved in serious crashes, casting doubt on the technology’s development and safety.
The biggest benefit of fully autonomous vehicles is that they minimize or completely remove human error, ideally making commutes more convenient and safe, and even improving mobility and independence for the elderly and disabled. But doing so creates an ethical dilemma and shifts the burden to the technology developers, and possibly insurance companies. Some point out that when human drivers meet with an unavoidable accident, their actions will be understood as a reaction arising from instinct or panic. However, when a programmer instructs the car to make the same move through an algorithm, it becomes a deliberate decision.
In Uber’s case, in which a pedestrian was killed, the self-driving car reportedly ‘decided’ not to swerve or take evasive action. After the incident, Uber’s testing efforts were suspended. Some blamed Uber’s decision to reduce the number of critical sensors in its test cars, while partners like Nvidia pointed to Uber’s software as the cause.
In the rare cases that the car has to make a decision due to unforeseen circumstances, whose safety will the system prioritize? Will it seek to minimize danger to others, even if it means injuring its passengers, or will it be programmed to target (or sacrifice) specific groups? The ethics of technology has yet to be thoroughly explored, while new technologies like artificial intelligence (AI) are bringing up novel ethical dilemmas that seem impossible to solve. Inevitably, policymakers will have to face these complicated ethical considerations and someone will have to bear the consequences of their decisions.
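To see why an algorithmic choice reads as deliberate rather than instinctive, consider a toy sketch of how such a prioritization might be encoded. Everything below is hypothetical: the function, the maneuver names, the risk numbers, and the `passenger_weight` parameter are invented for illustration and do not reflect any vendor’s actual software.

```python
# Purely illustrative: a toy "maneuver selector" showing how an ethical
# trade-off becomes an explicit, programmed decision. All names and
# numbers are hypothetical, not drawn from any real autonomous-car stack.

def choose_maneuver(options, passenger_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm.

    options: list of (name, passenger_risk, bystander_risk) tuples,
             with risks on a 0..1 scale.
    passenger_weight: how much passenger risk counts relative to
             bystander risk -- the 'deliberate decision' lives here.
    """
    def expected_harm(option):
        _, passenger_risk, bystander_risk = option
        return passenger_weight * passenger_risk + bystander_risk

    return min(options, key=expected_harm)[0]

options = [
    ("brake_straight", 0.8, 0.1),  # stay in lane: riskier for passengers
    ("swerve_left",    0.1, 0.6),  # protects passengers, endangers others
]

print(choose_maneuver(options))                       # -> swerve_left
print(choose_maneuver(options, passenger_weight=0.2)) # -> brake_straight
```

The point of the sketch is that the outcome flips depending on a single weight a developer chose in advance, which is exactly why critics argue the responsibility shifts from the driver’s split-second instinct to the programmer’s design decision.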
It seems that autonomous vehicle technology is still fraught with risk, and regulation will be as complex as it is ethically challenging. Despite these setbacks, executives in the tech industry remain very optimistic about autonomous cars’ potential contribution to humanity. Nvidia’s Chief Executive Officer Jen-Hsun Huang summed it up while unveiling the company’s newest autonomous-car brain in 2016: “Self-driving cars surely will make a huge contribution to society. We’ll be able to redesign the urban environment so that parks will replace parking lots. Think of the money we’ll save, the reduction in accidents and the incredible freedom this will provide people who can’t drive today.”