Name: Asha Latha Amara Date: June 7th, 2017
Course Name and Number: COEN 288 Software Ethics
Instructor’s Name: Brian Green Word Count: 1685
The topic I have chosen for my second essay is robotic cars and ethics. The textbook defines a robotic or autonomous car as “a vehicle that is capable of sensing its environment and navigating without human input.” These cars have multiple benefits: they can reduce traffic collisions by implementing algorithms based on geographical conditions and local traffic updates, they lower insurance costs, and they increase mobility for children, and researchers have also estimated a reduction in crime. Though these points sound highly promising, there are also quite a few issues associated with robotic cars.
Even though artificial intelligence has seen a great deal of advancement in recent years, it still isn’t capable of functioning reliably in chaotic city environments. This is where the engineering problem of robotic cars comes in. A car’s computer can be compromised and potentially forced to drive in the wrong direction. We also need to consider the susceptibility of a car’s navigation system to different weather and climate conditions, which may result in jamming and spoofing. These cars need very high-quality maps, and if the maps are not up to date, the cars cannot function properly. Road infrastructure changes have to be pushed to the software very frequently, which is a taxing and often expensive procedure.
Joshua Brown’s accident in May 2016 resulted in his death and illustrates just how dangerous the consequences of the engineering problem can be. To briefly describe the accident: the 40-year-old was driving a Tesla Model S, which slammed into the side of a white tractor-trailer on a highway in central Florida. The car, which was in “Autopilot” mode at the time, had a camera system that failed to recognize the white truck against a white sky. The computer wasn’t trained to recognize the flat slab of the truck’s side as a threat and as a result didn’t find it necessary to hit the brakes.
Apart from the engineering problem, there are a few other disadvantages worth discussing in this context. An increase in autonomous cars will potentially cost many people their jobs in the transport industry, the public transit industry, and crash repair shops, and there may also be a direct impact on the automobile insurance industry, whose services people will not need as much as they did before these cars came into existence.
Now let us consider a scenario: a man is driving to work in his autonomous SUV when his front tire suddenly bursts and the brakes engage. The computer system tries to override and slow down but cannot overcome the momentum. This leaves the SUV with two choices: either drive into the opposing lane of traffic and risk colliding head-on with a smaller vehicle carrying two people, or veer to the right and drive off a cliff. The robot’s collision response algorithm decides that one person dying is better than two, even if that one person is the owner of the SUV, the person who paid for the robot.
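The decision rule this scenario attributes to the car can be sketched in a few lines of code. This is purely an illustration of a "minimize total deaths" policy; the maneuver names and casualty counts are my own assumptions, not any manufacturer's actual algorithm:

```python
# Illustrative sketch of a collision-response policy that minimizes
# expected deaths, regardless of whether one of them is the owner.
# The options and numbers below are hypothetical assumptions.

def choose_maneuver(options):
    """Return the maneuver with the fewest expected deaths."""
    return min(options, key=lambda o: o["expected_deaths"])

options = [
    # Oncoming smaller vehicle with two people in it
    {"name": "swerve_into_opposing_lane", "expected_deaths": 2},
    # The owner alone goes off the cliff
    {"name": "veer_right_off_cliff", "expected_deaths": 1},
]

print(choose_maneuver(options)["name"])  # prints veer_right_off_cliff
```

Under this policy the car sacrifices its owner, which is exactly the outcome the scenario finds ethically troubling.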
This scenario explores the ethical dimensions of robotic cars. Having a robot make a decision in circumstances like these is ethically questionable. Should the robot veer off the cliff to save two lives or should it stay loyal to its owner, even if it means killing two innocent people?
The ethical tool that I chose to evaluate this ethical problem is the Risk Equation. According to this equation, Risk = Harm × Probability. Anything that threatens our physical, emotional or mental safety can be construed as harm. Applying this ethical tool to our situation, we have to ask three important questions. The first is: are robotic cars safe? Safety cannot be decided by one particular individual; it is a decision that society makes as a whole. Considering our SUV scenario, the robotic car’s decision can harm the owner if it chooses to drive off the cliff, whereas if it chooses to save the owner, two bystanders can get killed. When we think of safety as a decision that society makes as a whole, then no matter what decision the robot takes in this situation, one or more people are bound to get hurt, and society as a whole is not safe.
The second question is: how do we know if robotic cars are safe? Robotic cars have a lot of power. Their computers have far greater capacity than humans, their processing speed is much faster, and they have better sensors. Given these facts, one can assume that in some situations robotic cars can make better decisions for society as a whole than humans can. But it is also important to note that the robotic cars of today are still at an intermediate stage. The technology is advancing rapidly, and these cars will likely soon be able to do what people can’t. This advancement raises the question of whether robots should be allowed to have such power and the ability to decide how many human lives to save.
The third question is: how safe is safe enough? Keeping our perspective on the SUV example alone, I think this question is rather open ended. When we consider safety for society as a whole, robotic cars are safe enough to protect the greater good. When safety is a social construction that applies to individuals, robotic cars can never be safe enough.
Having discussed the three questions, we can apply the risk equation to our scenario. The probability of a robotic car ending up in a trolley-style situation is very high due to factors like traffic, unsuitable weather conditions, unforeseen engineering issues with the vehicle, and so on. Multiplying this high probability by the harm involved brings us to the conclusion that the risk factor of robotic cars is very high. In terms of ethics, the manufacturers should have calculated this risk factor before releasing such products to the public.
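The arithmetic of the Risk Equation can be made concrete with a minimal sketch. The rating scales and numbers here are my own illustrative assumptions (harm on a 0–10 scale, probability between 0 and 1), not values from the course or any real safety analysis:

```python
# Minimal sketch of the Risk Equation: Risk = Harm x Probability.
# Scales and numbers are assumptions for illustration only.

def risk(harm, probability):
    """Risk as the product of harm and its probability of occurring."""
    return harm * probability

# In the SUV scenario, the harm is severe (someone dies either way)
# and the essay argues the probability of such situations is high.
trolley_risk = risk(harm=9, probability=0.8)

# Compare with a mundane, low-harm failure such as a delayed map update.
map_risk = risk(harm=2, probability=0.3)

print(trolley_risk > map_risk)  # prints True
```

Even with rough numbers, the point of the equation survives: a severe harm paired with a high probability dominates the overall risk, which is the essay's argument for why manufacturers should have run this calculation before release.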
The second ethical tool I chose to evaluate the ethical problem of robotic cars is Parentalism vs. Autonomy. According to this ethical tool, I need to evaluate whether the engineers who built robotic cars should have been more parentalistic or more autonomy-respecting in their approach. If the engineers chose to be more parentalistic, they would have made sure that all control of the car is given to the robot in a dangerous situation, leaving no possibility for a user to manually take over in an unforeseen situation and do as he sees fit.
Applying this ethical tool to our previous autonomous SUV scenario, in my opinion the makers of the software should have favored autonomy in their approach: they should have provided a manual override that would have let the driver make the best possible decision for his own safety.
The last ethical tool I chose to evaluate this problem is the four approaches to ethics. I will apply this tool to Tesla’s self-driving car accident, which claimed the life of Joshua Brown. According to deontology (the first approach to ethics), one must always follow the rules, and in this particular context the engineers of the automobile company did not follow the rules: they released a car to the public that was not thoroughly tested. Had it been tested completely, the engineers would have discovered the problems with the software’s color recognition.
The second approach to ethics is consequentialism, according to which one must always be aware of the consequences irrespective of the rules. From a consequentialist standpoint, since there were many unknown factors associated with this new technology, the engineers should have recognized that new technologies always have more consequences than can be predicted, and that new equipment can be dangerous and harmful in unpredictable conditions.
Casuistry is the third approach to ethics. According to casuistry, when in doubt on an ethical question, one should look for analogous cases and, if the outcome of a similar situation was accepted by the general public, try to accommodate what was done there in one’s own situation. The engineers of the automobile company should have done thorough research to find similar cases where autonomous or self-driving cars had previously been used, and then made changes to or tested their own software accordingly. Had they been responsible and done this, they could have saved a life.
The last approach to ethics is virtue ethics, which states that people should be good, while the previous three approaches talk about people doing good things. According to this, the manufacturers should have been more responsible and accountable for the accident that occurred and the life it claimed. The company’s initial reaction was to make sure the blame did not fall on them, and they rushed to avoid getting hit with a lawsuit. In the aftermath of the accident, they should have claimed responsibility, given the victim’s family a detailed explanation of how this happened, and stopped production and sales of this product until the software problem was fixed and thoroughly tested again.
After applying the three ethical tools to robotic cars and ethics, in my opinion the world of artificial intelligence and robotic cars is as dangerous as it is innovative. As a solution, I think it is essential that before any more robotic cars are released, these vehicles be thoroughly tested not just in simulated conditions but in the real world among real traffic conditions (with safety precautions). When in doubt about new robotic car software, engineers should also favor autonomy over parentalism, leaving more power with the user than with the software, and in the future they should look at previous cases of robotic car accidents and learn not to repeat the same mistakes.