Since its beginnings in the late 1800s, the automotive industry has grown to produce over 70 million vehicles per year. With this many vehicles on the road, there are nearly 1.25 million deaths due to car accidents each year worldwide, and another 50 million people are injured or disabled ("Road Safety Facts"). According to recent statistics from the National Highway Traffic Safety Administration, approximately 94% of fatal accidents in the United States are attributed to driver error. In response to this fact, and in an effort to protect the public, the industry has started automating various aspects of the driving process by equipping vehicles with helpful tools such as blind-spot detectors and even semi-autonomous autopilot modes. As these technologies are successfully developed and implemented, more and more companies are focusing on the creation of fully autonomous vehicles. The development and use of autonomous vehicles to eliminate the faulty decision-making of drivers can be justified using a utilitarian framework; when analyzed within this framework, the implementation of such vehicles becomes morally obligatory.
Every major car manufacturer in the United States and across the world has a hand in research and development for autonomous vehicles. Beyond that, nearly every technology company in Silicon Valley is involved in some aspect of the process: creating microcontroller chips for sensor processing and connectivity, writing algorithms for deep learning and artificial intelligence, or developing the light detection and ranging (LIDAR) systems that allow the vehicles to take in the information necessary to function independently.
The question that all of these companies and their engineers are facing is this: Is it morally right to take away the control of passenger vehicles from humans and give it to machines? If so, what moral lens should be applied to the decision-making algorithm for these vehicles in the event of inevitable accidents?
A key factor in the successful development of fully autonomous vehicles will be the ability of the on-board computer system to analyze data about the vehicle's surroundings and make an instantaneous decision. Because of this system's processing speed and computational accuracy, a decision made by a computer algorithm can maximize utility better than one made by a human being (Zhang). Fully autonomous vehicles equipped with these state-of-the-art decision-making algorithms will reduce fatal accidents and are therefore a better alternative than continuing to rely on human drivers. Utilitarianism seeks to maximize utility whenever possible: if one choice produces more utility than another, there is an obligation to take that option. For this reason, the replacement of human drivers with fully autonomous vehicles is not only morally permissible but morally obligatory.
The primary moral issue that engineers and mathematicians are contemplating while programming the decision-making algorithms of autonomous vehicles is analogous to the 50-year-old trolley problem. In this problem, a trolley is fast approaching five people tied to the tracks in front of it. A bystander stands near a switch that, when flipped, will send the oncoming trolley down a different track; there is, however, a single person tied to that track. In a split second, the bystander must decide either to do nothing and allow the five people to die, or to flip the switch and thereby cause one person's death. In the same way that an individual must decide between taking an action that will kill one person and letting a course of events proceed that kills five, the decision-making algorithm of a fully autonomous vehicle must be programmed to handle a situation in which an accident is unavoidable, that is, in which any action the vehicle takes will result in an accident. In such a scenario, the vehicle's decision might make the difference between killing a bystander and killing the driver. This is no longer a split-second human reaction driven more by adrenaline than by conscious choice; it is a decision that must be programmed into the car in advance.
Take, for instance, a scenario in which a large truck pulls out onto the road in front of a self-driving car. The vehicle must either continue straight on its path, killing the driver or the truck driver, or swerve up onto the sidewalk, killing five pedestrians. To analyze this scenario from an ethical rather than a technological perspective, assume that the vehicle in question is fully autonomous and can accurately and reliably detect and identify objects in the road. When a human driver is in control of the decision, his split-second tendency would probably be to err on the side of protecting himself regardless of the harm to other individuals, such as swerving to avoid a crash and risking the lives of pedestrians or other drivers on the road. When the vehicle is in control of the decision, it will process all data inputs, such as speed, road conditions, and surrounding vehicles, and take an action according to a pre-determined algorithm. The car is therefore not really making a decision; it is taking an action in response to external circumstances, because the decision has already been made. However unlikely it may be in the moment of an accident, a human driver retains the possibility of changing his mind and choosing a different action. No such possibility exists for the autonomous vehicle: because the decision is pre-determined, it is final. The engineers and software developers are in effect the ones who actually made the decision, likely months or even years before the accident in question. In producing this algorithm, engineers must decide what action autonomous vehicles should take in these no-win situations. The root issue that must be addressed is the value of one human life relative to another, as the software must indicate whose life to protect first and foremost: the primary driver, other drivers on the road, passengers, or bystanders.
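To make the pre-determined nature of this decision concrete, consider the following minimal sketch (the function, its inputs, and its rules are hypothetical, not any manufacturer's actual software). At runtime, the vehicle only evaluates a policy that its engineers fixed when the code was written:

```python
def policy(speed_mps: float, road_condition: str, detected_objects: set) -> str:
    """A fixed, pre-determined mapping from sensor inputs to a maneuver."""
    if "obstacle_ahead" in detected_objects and road_condition == "dry":
        return "brake_hard"
    return "continue"

# The same inputs always produce the same action; the car cannot
# "change its mind" at the moment of the crash, because the choice
# was encoded here, long before the accident.
print(policy(27.0, "dry", {"obstacle_ahead"}))  # brake_hard
```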
The utilitarian perspective is one lens through which the development of this algorithm can be viewed. This perspective is fundamentally ingrained in the fully autonomous vehicle industry, even apart from the conversation about the decision-making abilities of the vehicle. As mentioned earlier, a motivating force in the push to develop self-driving cars is the potential to significantly increase the overall safety of the population by reducing fatal traffic accidents. This provides precedent for applying the utilitarian view to the decision-making algorithm.
Utilitarianism is a family of ethical theories about what makes an action right or wrong. It is a form of consequentialism, in which the moral acceptability of an action depends solely on its consequences. It takes into account the prospective outcomes of an action for the entire audience, i.e., all those affected by the action, and it seeks to maximize utility. In this context, utility is the safety and preservation of human life. The financial cost of damages could also be included in the definition of utility, but it will not be considered in this paper.
There are two primary types of utilitarianism: act utilitarianism and rule utilitarianism. Act utilitarianism states that an action is morally right if, and only if, it produces the most utility in a given situation. Rule utilitarianism, on the other hand, states that people ought to act according to a set of rules that, if adopted by an overwhelming majority of people in society, would lead to optimal consequences. Act utilitarianism thus determines the morality of an act by its individual consequences, while rule utilitarianism evaluates the same action by the results of its universal adoption. The effect of autonomous vehicles on society is best considered when their use is adopted as a rule, that is, when all cars and drivers are replaced with self-driving vehicles. Therefore, for the purposes of this argument, rule utilitarianism will be considered.
Through the prism of rule utilitarianism, the morally right decision is the one that produces the greatest good for the greatest number of people. In this view, all lives have equal value, and the vehicle should therefore select the action that minimizes harm and saves the most lives, regardless of whose lives they are.
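As a purely illustrative sketch of this principle (the data representation is hypothetical), a rule-utilitarian algorithm reduces to counting expected deaths and choosing the action with the fewest, here applied to the truck scenario described above:

```python
# Each candidate action pairs a name with its expected fatalities per party.
actions = [
    ("continue_straight", {"driver": 1}),
    ("swerve_to_sidewalk", {"pedestrians": 5}),
]

def total_deaths(action):
    """Rule-utilitarian cost: every life counts equally, so an action's
    cost is simply its total expected fatality count."""
    _name, deaths = action
    return sum(deaths.values())

# The utilitarian vehicle continues straight: one death instead of five.
print(min(actions, key=total_deaths)[0])  # continue_straight
```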
There is also the question of whether a driver should purchase or ride in a vehicle that might kill him to save others. The ethical egoist perspective states that it is morally right to act in the way that brings about the best possible consequences for the individual taking the action, in this case the driver. This perspective explains why a driver is likely to swerve to avoid a crash even when that action might risk the lives of others. Applied to the decision algorithm, egoism dictates that the driver's life should be valued more highly than the lives of pedestrians, passengers, or other drivers on the road. A computer programmed from this egoist perspective could maximize utility for the individual driver alone. Taken to the extreme, egoism would leave the world solely with vehicles that protect the lives of their drivers at all costs, which has the potential to be disastrous for human society. As such, this perspective should not weigh heavily in the decision algorithm.
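For contrast, a minimal egoist variant of the same hypothetical sketch weights any death of the driver so heavily that the algorithm will sacrifice any number of bystanders to avoid it:

```python
# Same hypothetical representation as in the utilitarian sketch above.
actions = [
    ("continue_straight", {"driver": 1}),
    ("swerve_to_sidewalk", {"pedestrians": 5}),
]

def egoist_cost(action, driver_weight=1_000_000):
    """Egoist cost: the driver's life outweighs any number of others."""
    _name, deaths = action
    return sum(count * (driver_weight if party == "driver" else 1)
               for party, count in deaths.items())

# The egoist vehicle swerves, killing five pedestrians to spare one driver.
print(min(actions, key=egoist_cost)[0])  # swerve_to_sidewalk
```

The only difference between the two policies is the weight assigned to the driver's life, which is precisely the value judgment engineers must encode in advance.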
A computer designed within a utilitarian framework is better able to make an unbiased decision that maximizes utility for the most people without being inherently self-focused. However, the egoist perspective reveals a common argument against programming the algorithm with this framework: someone must inevitably be sacrificed to the system in the quest to maximize utility. This is illustrated by a study performed in the Department of Psychology and Social Behavior at the University of California, Irvine. Participants agreed that a vehicle should be programmed to minimize fatalities, regardless of whether the fatalities were passengers or pedestrians. However, these same participants indicated that they would be less inclined to purchase an autonomous vehicle for fear that they or their families would be sacrificed for the greater good. While they know that autonomous vehicles better maximize utility, their innate desire to protect themselves and those they care about, over strangers, prevents them from wanting to use this technology.
Even though people recognize the utility-maximizing effects of autonomous vehicles, they indicate, out of a sense of self-preservation, that they would not buy one. This illustrates that people are psychological egoists: they tend to act in their own interest. It does not necessarily mean that they are ethical egoists, that is, that they believe they should always do so as a matter of morality. It also shows that people are irrational: not buying an autonomous car for yourself is no protection against being on the road with an autonomous car that, in the event of an accident, might kill you anyway. This is not an argument against self-driving cars but an example of why humans might be reluctant to adopt the technology. This human irrationality must be acknowledged as an obstacle to fulfilling the moral obligation of implementing these vehicles, since it is evident that if self-driving cars were implemented systematically and consistently, driving would be much safer overall.
In conclusion, fully autonomous vehicles with decision-making algorithms centered on utilitarianism maximize utility by preserving the most lives. Therefore, their implementation is not only morally right but morally obligatory.