How should we program autonomous cars in order for them to be morally correct?
by Natycpp01 on September 9, 2016 - 11:43am
With our rapidly evolving technology, the world will soon see automobiles that drive themselves. Indeed, companies such as Google, Mercedes and Volkswagen are already testing these driverless machines (Pedwell). These cars promise to increase safety, provide mobility solutions for disabled people, ease traffic congestion and free up time for other activities while the vehicle drives itself. Yet, as convenient as this invention might be, it also opens the door to some ethical issues. During a test drive on February 14, 2016, Google’s Autonomous Vehicle (AV) was partly blamed for causing an accident with a bus. This incident led the government to declare that strong regulations needed to be established to address the ethical issues raised by the new driverless vehicles (Pedwell). In other words, a moral code should be created and programmed into these AVs before they can be presented to the public.
The Ethical Problem
Imagine this: an AV realizes that it can either divert itself in a way that will kill the driver but save a bus full of children, or it can continue on its trajectory and save the driver but kill the children. In a situation such as this one, what course of action should the artificially intelligent system take? Ethicists must look at both sides and decide which arguments are the strongest. Arguments for killing the driver might include the fact that the school bus carries children, who should be protected at all costs (always protect the young), and that one life is being sacrificed for the sake of many (the greater good). Arguments against killing the driver might include the fact that the driver’s life should be protected, since his life is worth as much as any other (human life is fundamentally valuable; the sanctity of life), and also the fact that some people believe it is the car’s duty to protect its owner (always act in your best interest).
Different people will choose different sides according to their values. One who values the good of the community will probably opt for killing the driver, while another who values his own best interest will vote for the second option. Both arguments are certainly debatable, but what is the absolute, morally right course of action? That question has not yet been answered, yet it must be, since scenarios similar to this one are bound to happen.
This issue is still very new, and mostly theoretical. Yet if AVs are ever to be presented to the public, all of these questions must be answered in an ethical and fair way, foreseeing every possible scenario. In my opinion, the responsibility of creating this moral code should fall not only to the government, but also to a group of specialists including ethics experts, philosophers and even the general public. In the scenario I provided, I would save the children and kill the driver. To me, sacrificing one life to save many is ethical. I value the greater good and the ideas of selflessness and altruism. Indeed, the AV would save us both in a perfect world, but if necessary it could kill me in order to save others. Thinking only of oneself seems to me selfish and unethical. Kindness and doing the least possible harm should be prioritized.
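To make the "greater good" principle concrete, here is a minimal, purely hypothetical sketch of what such a rule might look like in code. The function name, the maneuver labels and the casualty numbers are all illustrative assumptions for the bus scenario, not part of any real AV system:

```python
# Hypothetical sketch of a utilitarian ("greater good") decision rule:
# among the available maneuvers, choose the one with the fewest expected deaths.

def choose_maneuver(options):
    """options: dict mapping maneuver name -> expected number of deaths."""
    # A purely harm-minimizing chooser: pick the maneuver whose
    # expected loss of life is smallest.
    return min(options, key=options.get)

# The bus scenario described above: swerving kills the driver (1 death);
# staying on course kills the children on the bus (say, 20 deaths).
scenario = {"swerve": 1, "stay_on_course": 20}
print(choose_maneuver(scenario))  # -> swerve
```

Even this toy version exposes the real difficulty: someone must decide, in advance, that "expected deaths" is the right quantity to minimize, which is exactly the ethical choice under debate.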
Other Ethical Issues to Consider
Now that you know my opinion on the bus scenario, I invite you to ponder other ethical issues concerning AVs.
The AV must also be programmed to react to mundane events: should it cross a double-yellow line to avoid a parked car, even if that technically means breaking the law? A human driver might naturally do it, yet should the AV be deliberately programmed to break the law? Is it ethical to break the law, and if so, when? And to what degree?
Another issue is who, if anyone, should have control over the car. How much is the government allowed to interfere with it? Will the police, for safety and security, have a special code to force an AV to stop if they suspect that the person inside is a criminal? Or would that infringe on people’s liberty and their right to privacy? And what if a rapist or another criminal managed to gain access to the code and control the car to carry out their crimes? (Doctorow).
These issues also need to be addressed when programming the driverless car, and the answers are far from being easy.
Works Cited
Doctorow, Cory. “The Problem with Self-Driving Cars: Who Controls the Code?” The Guardian, 23 Dec. 2015, www.theguardian.com/technology/2015/dec/23/the-problem-with-self-driving.... Accessed 5 Sept. 2016.
Pedwell, Terry. “Driverless Car Makers, Government Regulators, Face Ethical Dilemma.” The Canadian Press, 4 Apr. 2016. ProQuest, http://search.proquest.com.ezproxy.champlaincollege.qc.ca/cbcacomplete/d.... Accessed 5 Sept. 2016.