Driving is rapidly changing, in the United States and around the world, due to Silicon Valley’s investment in self-driving cars.
Companies like Google, Uber, Apple, and Tesla have been pouring resources into the self-driving game, joined by traditional automakers like Mercedes and General Motors. The industry has enjoyed increased media attention over the past few years as autonomous vehicles have become less of a dream and more of a reality.
Announcements from the driverless car space are exciting, but each update to the evolving technology sparks discussion about personal safety.
Influential voices in public office have already given their input on the matter.
In an interview with Popular Science and XPrize, U.S. Secretary of Transportation Anthony Foxx said, “I see a future where we have 80 percent fewer accidents than we have today.”
Additionally, an op-ed from President Obama in the Pittsburgh Post-Gazette insists that self-driving technology has “the potential to save tens of thousands of lives each year.”
Is reducing the number of car-related deaths the only benchmark for safety? Or is the safety of self-driving cars more complicated than that?
What happens when the car is in a scenario where, no matter what, someone will get hurt? How will cars be programmed to make life-or-death decisions?
Consider an autonomous car driving down a busy city street. On either side of the car there may be buildings, pedestrians, other cars, or barriers. Suddenly a pedestrian jumps in front of the car and an accident cannot be avoided. What should the car do?
Should the car slam on the brakes and risk killing the pedestrian? Should it swerve away from the pedestrian, potentially killing the driver or other bystanders?
This issue has become the elephant in the room in discussions of self-driving cars. Everyone knows the dilemma exists, but few in the media or the tech industry want to discuss it.
“I’ve not heard it discussed at all,” said Ed Blazina of the Pittsburgh Post-Gazette in an interview with MediaFile. “It’s the monster in the room no one wants to talk about.”
Pittsburgh has become a hub for the self-driving car industry ever since Uber started offering autonomous car rides for free.
“Government needs to be heavily involved in writing the rules,” Blazina said when asked about government’s role in the ethical dilemma of self-driving cars. An interesting issue in Pittsburgh, as Blazina has reported before, is that Uber is exempt from many regulations because it is not charging a fee for the rides.
When it comes to the media’s role, Blazina noted that “these issues are very hard to talk about just in theory.” He stressed that this is not to say that the media should wait until a tragedy occurs.
This past May, an Ohio man was killed when the autopilot feature on his Tesla Model S failed to detect the side of a tractor-trailer and drove the car into it at 65 miles per hour. Media articles circulated, questioning whether Tesla's Autopilot was road-ready.
“Safety is certainly an issue that should be brought up,” Blazina said.
Safety has been discussed to some degree in the media, but the issue of self-driving ethics has been relegated to only a handful of discussions.
In the industry, Mercedes-Benz has been the only company to tackle the issue head-on. It announced in October that its algorithm would always prioritize the driver's life over the lives of those outside the vehicle.
In government, the U.S. Department of Transportation published an Automated Vehicles Policy in September that lays out guidelines for how the government should regulate autonomous vehicles going forward. The policy has a section on "Ethical Considerations," but it merely acknowledges the issue without offering concrete guidance. "Algorithms for resolving [dangerous situations] should be developed transparently," the policy says. "The resolution of these conflicts should be broadly acceptable."
The policy offers an answer of sorts to the moral question, but is it the right one? And should more be done to press the question? One answer, given by Car and Driver, is that autonomous cars are "safer all around, regardless."
The ethics of self-driving cars is being studied in depth at the MIT Media Lab. The Moral Machine is an online quiz that aims to create "a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas," per the Moral Machine's website.
The way it works is quite simple. The user is presented with a scenario and asked to decide who in that scenario lives and dies. One scenario, for example, asks whether the car should kill the woman in the car or the woman crossing the street.
Other scenarios change the pedestrians and occupants to see how people respond to the idea of killing a doctor versus a robber, or an elderly man versus a child.
Who should be talking about this ethical dilemma? Maybe the companies should be stepping up, as Mercedes did. Or perhaps the government needs to make this a key part of new regulations, with clearer definitions of what the ethical answer is. And with all the technological advancements being touted by the media, is it the responsibility of the media to step up and address these questions?
There are many achievements of self-driving cars to applaud and to point to as evidence of the technology's inevitability. But if everyone will one day be letting a robot drive them around, shouldn't we give more attention to how our cars make life-or-death decisions?