(Reuters/Elijah Nouvelage) 

Should Driverless Cars Kill Their Own Passengers To Save A Pedestrian? 

A recent Quartz.com article asked this question as part of a growing discussion of the ethical and moral decisions driverless cars will be required to make once we allow them on the road. The question has been quietly emerging from Silicon Valley and elsewhere as ethicists become engaged in the development of driverless car technology, and it is starting to take hold among government, the media and the general public as people begin to better understand what a truly driverless car's operation on the road may look like. However, the level of public discussion is not nearly sufficient given the importance of the question at hand -- whose life has priority, the car's passengers or the world outside? 

At its core, the question of whose life takes priority is a fascinating one, because it raises two further questions: i) what do we want driverless cars to truly be capable of (and do we want them to exist at all, even if they can exist); and ii) will we expect the decisions of driverless cars to be better than the decisions most human drivers would make in the same situations? 

Robot Driver

How much control do we want to hand over? 

The first question is fairly straightforward. Given the progress and direction of driverless car technology, fully driverless cars will soon be entirely possible. So the question will soon be whether we want completely driverless cars on our roads in the near future, and what extent of driverless technology we will find legally and morally acceptable as a society. (e.g. A fully automated driverless car? A car that can drive on its own, but with manual steering still in place to override at a moment's notice and a licensed driver behind the wheel? etc.)

Toyota Driverless Cars

Should driverless cars be better 'people' than we are? 

The second question is the more interesting one. Should a driverless car, in a situation where there is mortal danger both to a pedestrian, cyclist or other car and to its own occupants, be allowed to judge the best course of action and take one that will likely kill its own occupants? That question goes to the heart of what it is to be a driver on ethical, moral and practical levels. If it isn't possible now, it soon will be possible to build the complex programming necessary for a driverless car to recognize that such a situation exists and make a decision that may, or will, result in a mortal outcome for one or more of those involved. So we know this isn't simply a theoretical exercise for ethicists.

So the real question is how we expect a driverless car to make its decisions: based on a primary responsibility to preserve the lives of its passengers, regardless of the outcome for anyone else involved in the situation, or based on a responsibility to greater society and our communal laws, which may result in the deaths of its passengers. 

Although I don't have hard data to back this up, I'd wager the first scenario is the one most drivers operate their vehicles on (preservation of their own lives first). I don't know many people who consciously take the position that they're willing to give up their own lives to save another when they're behind the wheel. There is ample evidence that in such situations people will make decisions that try to save the life of another, but I'd guess those actions are taken, 99% of the time, with the belief that the driver will preserve their own life as well. (It's almost impossible to test this hypothesis, as, Haley Joel Osment aside, nobody has been proven able to talk to dead people.) 

The challenge is that our traffic laws are written, by and large, from the perspective of the greater good of the community writ large. We have codified in law such things as pedestrians having right-of-way in most situations and which cars have right-of-way in specific situations (e.g. four-way stops), with the goal of ensuring no one is put in a situation of mortal danger. But equally, such laws are designed to assess and assign culpability when drivers make the wrong decisions, and we have created a hierarchy of priority for most driving situations that tends to put others ahead of the driver of the car in question. This is taught in every driving school in the Province of Ontario, and all prospective drivers must take multiple tests to demonstrate that they understand (and by some extension agree with) the decisions codified in our communal traffic laws. 

Crash Dummy

The problem of the 'no-win' scenario

The knee-jerk reaction of most people is likely to say that we want our driverless cars to behave as a better version of how we would drive, with ourselves as the absolute priority in questions of potential injury or death. However, such programming would run contrary to our laws in most situations. Take, for example, a situation where the driverless car is faced with either driving into a car accident that has suddenly occurred in front of it, potentially causing its passengers injury or death, or swerving to the right to avoid the accident and driving into a crowded bus stop in the process, with the highest probability that the car's passengers escape injury. 

Again without hard statistics on this scenario, I'd wager human drivers would be rather split in their reactions. Some would reactively swerve to the right to avoid the accident, only to plow into the crowded bus stop and kill or injure a number of people, while others would take the accident, either conscious of the bus stop or simply due to slower reaction times. The law would likely expect the driver to take the accident to avoid killing or injuring pedestrians (although it's unlikely the driver would be charged for such injuries or deaths unless it was proven they were driving under the influence). The point, however, is that human reactions would likely be split between the two options, and that is the understanding our traffic laws are, in part, based on. With few exceptions, human nature is not a question of absolutes, and so behavioural laws for things such as the operation of vehicles are constructed with a margin of flexibility to account for multiple potential responses. (This is not to say we don't still assign a moral weighting to these choices, but we allow that a range of choices exists.) 

In the case of driverless cars, we're dealing not with a question of differing human responses, but with the absolutes of computer programming. Despite the sophistication of driverless car programming, at its core it still relies on a binary "on or off" basis for decision-making. In theory, if we were to run the same scenario in exactly the same manner 100 times, a driverless car should make the same decision 100% of the time. As such, it becomes reasonable to expect a higher level of accountability from driverless cars in their decision-making, but it also means a higher level of thought needs to be built into their decision-making processes. 

In the above scenario, if the car were mandated to preserve the health of its passengers above all else, it is highly likely the driverless car would plow into the bus stop crowded with pedestrians 100% of the time, regardless of the fact that this action may kill or injure a number of people outside the car. If the driverless car were mandated to obey traffic laws and the 'hierarchy of life' those laws dictate for drivers in the Province of Ontario, it is highly likely the driverless car would take the accident that may result in its passengers' injury or death 100% of the time (assuming these were the only two options available). Again, this may be the preferred option prescribed under current Provincial traffic laws. 
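To make the contrast concrete, here is a rough, purely hypothetical sketch of what such a hard-coded priority rule might look like. The policy names, risk numbers and structure are my own invention for illustration only; no real driverless car system is this simple. The point is just that whichever rule is chosen, the identical scenario produces the identical outcome every single time.

```python
# Hypothetical sketch only: a fixed priority policy applied to the
# "crowded bus stop" scenario. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Option:
    name: str                  # e.g. "brake into the accident" or "swerve right"
    risk_to_passengers: float  # estimated probability of passenger injury/death
    risk_to_bystanders: float  # estimated probability of bystander injury/death

def choose(options, policy):
    """Pick a manoeuvre according to a fixed priority policy."""
    if policy == "passengers_first":
        # Minimise risk to the car's own occupants, ignoring everyone else.
        return min(options, key=lambda o: o.risk_to_passengers)
    elif policy == "traffic_law_hierarchy":
        # Minimise risk to those outside the car first, passengers second.
        return min(options, key=lambda o: (o.risk_to_bystanders, o.risk_to_passengers))
    raise ValueError(f"unknown policy: {policy}")

scenario = [
    Option("brake into the accident ahead", risk_to_passengers=0.7, risk_to_bystanders=0.0),
    Option("swerve right into the bus stop", risk_to_passengers=0.1, risk_to_bystanders=0.9),
]

# Run the identical scenario any number of times: the answer never varies.
print(choose(scenario, "passengers_first").name)       # always the swerve
print(choose(scenario, "traffic_law_hierarchy").name)  # always the braking
```

Unlike a human driver, there is no split-second judgment here, and no split in behaviour across a population of drivers; the outcome is decided the moment someone chooses which policy to write in.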

But when you can control and dictate the decision-making processes of drivers with the precision that driverless cars offer (again, at nearly 100% consistency, assuming no defects are present), the question becomes: do we program the car to be better than us for the greater good of society (while potentially, unknowingly, putting ourselves in mortal danger), or to put ourselves first, with the knowledge that this may put the rest of society in mortal danger every time we take to the road? Further, it raises the question for government of whether traffic laws should be re-written with a higher expectation of correct driver decision-making, and if so, what will constitute "correct" under traffic laws given that at least a portion of vehicles will make the same decision 100% of the time. (It must be remembered that for at least a generation the roads will continue to contain a mix of "smart" driverless cars and "dumb" manually driven cars that, at best, have only a limited ability to assist the driver -- e.g. blind-spot recognition technology. Government laws must reflect this, lest the courts toss out violations on the basis that they are unreasonably strict for human drivers.) 

Driverless Car Options

Why we need significant public discourse on the ethical questions of driverless cars

As driverless cars will follow the rules of the road as a core function of their programming, it must be understood that this question of who is fundamentally more important will apply only in situations where the potential for mortal danger exists (i.e. it won't change whether the car drives more aggressively in everyday driving, whichever scenario is chosen). Even so, this debate must be taken beyond the realm of ethicists, programmers, driverless car builders and government bureaucrats. This issue goes to the core of the public's understanding of how driverless cars will operate in public spaces, and of what, literally, they're getting into when they step into a driverless car. (There is an additional question of who bears liability for accidents in the case of driverless cars, but that's for another blog post.) 

To be clear, I'm not opposed to driverless cars. In fact, I'm very much a supporter of driverless car technology and its operation in everyday society. I think we will actually be a safer society for it. However, this issue must first be fundamentally addressed and the solution must be broadly understood, for all our sakes. To pretend that this is just a minor or technical issue dramatically underplays its importance to all of us and to the moral and ethical underpinnings of our society. Are we a society that puts individual rights above all else, including other people's health and well-being; or are we a society that values our collective health and well-being to the point that it may result in our individual deaths? Driverless car technology is an "either/or" technology, and there will be traffic situations where we face only these choices. How we proceed says as much about ourselves as it does about the technology we embrace. 
