How should we program autonomous cars in order for them to be morally correct?

by Natycpp01 on September 9, 2016 - 11:43am

With our rapidly evolving technology, the world will soon see automobiles that drive themselves. Indeed, companies such as Google, Mercedes and Volkswagen are already testing these driverless machines (Pedwell). These cars promise to increase safety, provide mobility solutions for disabled people, ease traffic congestion and free up time for other activities while the vehicle drives itself. Yet, as convenient as this invention might be, it also opens the door to some ethical issues. During a test drive on February 14th, 2016, Google's Autonomous Vehicle (AV) was partly blamed for causing an accident with a bus. This incident led the government to declare that strong regulations needed to be established to address the ethical issues raised by the new driverless vehicles (Pedwell). In other words, a moral code should be created and programmed into these AVs before they can be presented to the public.

The Ethical Problem

Imagine this: an AV realizes that it can either divert itself in a way that will kill the driver but save a bus full of children, or it can continue on its trajectory and save the driver but kill the children. In a situation such as this one, what course of action should the artificially intelligent system take? Ethicists must look at both sides and decide which arguments are the strongest. Arguments for killing the driver might include the fact that the school bus carries children, who should be protected at all costs (Always protect the young), and that one life is being sacrificed for the sake of many (Greater good). Arguments against killing the driver might include the fact that the driver's life should be protected, since it is worth as much as any other (Human life is fundamentally valuable, sanctity of life), and also the fact that some people believe the car has a duty to protect its owner (Always act in your best interest).

Different people will choose different sides according to their values. One who values the good of the community will probably opt for killing the driver, while another who values his own best interest will vote for the second option. Both arguments are certainly debatable, but what is the absolutely and morally right course of action? That question has not yet been answered, but it has to be, since scenarios similar to this one are bound to happen.
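To see why the choice of values matters so much for programming, the dilemma can be imagined as a toy decision rule. This is a purely hypothetical sketch: the `Outcome` structure, the death counts and the utilitarian scoring are my own illustration of the "greater good" position, not anything an actual AV manufacturer has published.

```python
# Hypothetical sketch: a purely utilitarian rule that picks the
# trajectory expected to cost the fewest lives. This only illustrates
# the "greater good" value from the bus scenario above.

from dataclasses import dataclass

@dataclass
class Outcome:
    name: str
    expected_deaths: int  # lives lost if this trajectory is taken

def choose_trajectory(options):
    """Return the option that minimizes expected deaths (greater good)."""
    return min(options, key=lambda o: o.expected_deaths)

# The bus scenario from the article (numbers are invented):
swerve = Outcome("swerve: kill the driver, save the bus", expected_deaths=1)
stay = Outcome("stay: save the driver, kill the children", expected_deaths=20)

best = choose_trajectory([swerve, stay])
print(best.name)  # the utilitarian rule sacrifices the one driver
```

An "always protect the owner" rule would instead hard-code the second option no matter how many lives are at stake, which is exactly the disagreement described here: the car's behaviour changes entirely depending on which value the programmer encodes.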

My Opinion

This issue is still very new, and mostly theoretical. Yet if AVs are ever to be presented to the public, all of these questions must be answered in an ethical and fair way, foreseeing every possible scenario. In my opinion, the responsibility for creating this moral code should fall not only to the government, but also to a group of specialists including ethics experts, philosophers and even the general public. In the scenario I provided, I would save the children and kill the driver. To me, sacrificing one life to save many is ethical. I value the greater good and the ideas of selflessness and altruism. Indeed, the AV would save us both in a perfect world, but if necessary it could kill me in order to save others. Thinking only of oneself seems to me selfish and unethical. Kindness and doing the least possible harm should be prioritized.

Other Ethical Issues to Consider

Now that you know my opinion on the bus scenario, I invite you to ponder other ethical issues concerning AVs.

The AV must also be programmed to react to mundane events: should it cross a double-yellow line to avoid a parked car, even if it technically means breaking the law? A driver might naturally do it, yet should the AV be consciously programmed to break the law? Is it ethical to break the law, and if so when? And to what degree?

Another issue is who, if anyone, should have control over the car. How much is the government allowed to interfere with the car? Will the police, for safety and security, have a certain code to force the AV to stop if they suspect that the person inside is a criminal? Or would that be infringing on people's liberty and their right to privacy? What if a rapist or another criminal managed to get access to the code and control the car to carry out their evil deeds? (Doctorow).

These issues also need to be addressed when programming the driverless car, and the answers are far from being easy.

Works Cited

Pedwell, Terry. "Driverless Car Makers, Government Regulators, Face Ethical Dilemma." The Canadian Press, 4 Apr. 2016. ProQuest, http://search.proquest.com.ezproxy.champlaincollege.qc.ca/cbcacomplete/d.... Accessed 5 Sept. 2016.

Doctorow, Cory. "The Problem with Self-Driving Cars: Who Controls the Code?" The Guardian, 23 Dec. 2015. www.theguardian.com/technology/2015/dec/23/the-problem-with-self-driving.... Accessed 5 Sept. 2016.

Comments

The topic you have chosen is extremely interesting and your article was very well done. I agree with you completely on the scenario of the bus vs. the car. If there were such a situation, I would 100% side with the value of the greater good, choosing to save the bus in an accident as opposed to the car. Yes, ideally I would want to save both, but I think, first, that saving the bus over the car is better because choosing the bus would save many individuals while choosing the car would save just one life. Second, I think the lives of children should be protected over the life of an older human. And last, if I were to put myself in the position of being in the AV, I would rather the children be saved over myself; I would look out for the best interest of others. This is such a difficult topic to get a final answer on, seeing as there are so many reasons we can come up with to support both sides and an endless number of problems that may be encountered, such as the ones you presented. Do you think allowing AVs to become available to the public is a good idea?

Thank you for your positive feedback! I do believe that we are not yet ready to make AVs public, because a lot of debates have not been completely settled. Maybe in the future, but as for now, there is still some work to do.

Wow, you brought up a very difficult situation! I enjoyed your post; it was easy to understand but very interesting. Having an outside perspective, I would say that the bus full of children must be saved, which means the death of the driver. I believe that the lives of children are more valuable, in this case, since they represent our future. Also, I would rather kill one person and save many others than the opposite; for me the greater good is more important than acting in your own best interest. But if I were the driver, I instantly would not want to save the bus; I would rather save my life. The same thing applies if you personally know the driver (family, friends, etc.). It is difficult when you are engaged in the situation. In the scenario of the car vs. the bus, no matter the "winner", someone or many individuals are still dying (sanctity of life). If autonomous cars have accidents, which will probably occur often, it will cost the life of someone, whether it is the driver or a bus full of children; there is no right or wrong because in the end the result is the same. If you were the driver, would you sacrifice your life for the lives of the children?

Thank you for your answer! Yes, the situation does become complicated when you personally know the driver! I did not think of that... Would you sacrifice a dear someone's life? It is a difficult question.

You picked a very interesting topic and treated it perfectly! I appreciated the reading a lot and it made me learn so much more about the topic and the ethical issues related to it. You have a nice writing style and you brought some very strong arguments on both sides of the debate. It is great that you inserted a lot of ethical concepts related to the issues observed in your text.

I totally agree with you when you say that you prioritize the greater good over the sake of only one person. Yet we should not be deciding these things; there should not be any question like this to ask. A new technology such as this one needs to be safe and on point at all costs, because if it is not, why should we risk our lives and put them in the hands of a machine instead of driving safely on our own? It is obvious that an autonomous car will not have the same natural reflexes as a human, and this is what needs to be rethought. Will there be more accidents on the road just because people want to have more time for themselves and are too lazy to drive their cars by themselves anymore?

However, if the new technology of autonomous cars is so controversial and raises such important ethical questions concerning people's lives, does that mean we are still not ready to put it on the market and it still requires work? Maybe if one autonomous car is put on the market, then all the other non-autonomous ones should have a detector or sensor inside, and they could all send signals in case of an upcoming danger?

Thank you for your positive comment! Like you said, I believe that we are still not ready for this technology to hit the market. We need to perfect it before presenting it to the public.

You chose a great topic and explained it impressively! Our society is evolving rapidly and we now have all types of machines doing things for us. I agree with you that the value of the greater good and selflessness is necessary in society. It is better to save the lives of children over one life, although we are still losing a life because we did not have control of the vehicle. Unfortunately, we would not be able to choose the outcome of the situation. I believe AVs should be programmed to break laws if it is for the greater good of an individual. If a human's life is at stake and the car is able to avoid the accident without creating greater harm, then it should be allowed to break laws. Should autonomous vehicles be making life-changing decisions for us?

Thank you very much for your comment! That question is definitely something to think about: should machines decide for us? It seems like a strange concept.

Your article is one of the best I have seen on this site; I can see that you did a lot of research and put a lot of work into it. The subject is also very pertinent, what with the advent of AV technology and the implications it entails. Though you give a clear opinion on the subject, you still manage to present both sides without bias, which can be very difficult, and for which I must applaud you. As for what I think of the topic, I agree with you in valuing the good of the many above that of the few, and that saving yourself rather than a busload is selfish. However, I believe that instead of putting resources into ethicists and philosophers, it would be wiser to try to improve the AI driving the cars so that more options would be available if a dangerous situation arises. I also propose that a law should be made requiring that the driver be capable of taking control of the vehicle while it drives itself, so as to help minimize such events. It is indeed a very interesting and hot topic right now, and I'm sure discussion and debate on this subject will be very prevalent in the near future. To conclude, I must ask you: in the event that everyone owned self-driving vehicles, do you think it would ultimately eliminate road accidents, since there would technically be no more room for human error?