Would You Drive into a Wall to Avoid a Pedestrian at Your Own Risk?

by mnsr on February 10, 2017 - 5:19pm

In an age of continuous scientific and technological advancement, it is not uncommon for morals to act as a brake on seemingly revolutionary ideas and concepts. This is the case for autonomous cars. Indeed, while human error is the source of more than 90% of car accidents, a car that requires almost no human input has a much faster reaction time and cannot be drunk or distracted by a cellphone. However, the use of self-driving cars represents a moral dilemma insofar as these vehicles can face dilemmas themselves.

Some examples of these dilemmas can be found on MIT’s Moral Machine, a website dedicated to “gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars”. While programming is a hard discipline, a car’s algorithm still depends on variable real-world conditions: heavy snow, for instance, could cover a speed limit sign and make the algorithm behave dangerously. Moreover, in the event of a brake failure, if your car could either run a group of pedestrians over or crash into a wall at your own risk, what would you want it to do?

From a deontological standpoint, using an autonomous car would be justifiable, as long as it does not imply violating “formalistic rules, principles, or maxims” (Merrill). For example, the solution to the aforementioned dilemma would be to crash into the wall to avoid using the pedestrians as a “means to an end” (Kant) by killing them. But sacrificing the person inside the car would also be using them as a “means to an end”, which makes the deontological approach ineffective. By contrast, teleology is less “clear-cut and simple” (Merrill). While the practicality of a self-driving car would surely result in “the greatest happiness [for] the greatest number” (Mill) of users, a teleological thinker would “choose the action that will bring the most good to the party the actor deems most important. The altruist thinks of good to others; the egoist considers good to the self, with perhaps some benefits spinning off to others” (Merrill). Therefore, within this ethical framework, the solution to this dilemma depends on the actor’s set of values, which leads us to virtue ethics. Some advocates of this moral system would suggest crashing into the wall to stand by the virtue of consideration and “habitualise the best character traits” (Hendricks). But the degree of subjectivity involved in determining the best character traits is a reason to approach the dilemma from a teleological point of view instead, which fully accounts for the fact that different actors might have different sets of values.

Using teleology and acting for “the greatest happiness [for] the greatest number” (Mill), the use of autonomous cars is not as beneficial as it seems. Different car manufacturers lean toward different moral systems and standards. For example, while some companies would deem it ethical to make a car swerve into one pedestrian to avoid hitting three, others would not. In the case of the aforementioned dilemma, some companies might want their cars to crash into the wall to put fewer lives at risk, but Mercedes-Benz executive Christoph von Hugo suggested that his company “simply intends to program its self-driving cars to save the people inside the car” (Taylor). From a teleological point of view, this divergence of opinions between car brands is problematic. How would autonomous cars, as a whole, provide the greatest good for the greatest number if different models are programmed with such contrasting priorities in mind?
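To make that divergence concrete, here is a minimal hypothetical sketch of how two contrasting priorities might be hard-coded into otherwise identical vehicles. The function name and policy labels are invented for illustration and are not drawn from any manufacturer’s actual software:

```python
# Hypothetical sketch: two contrasting crash-avoidance policies.
# Neither reflects any real manufacturer's code; the names and logic
# are invented to illustrate how divergent ethics could be hard-coded.

def choose_action(occupants: int, pedestrians: int, policy: str) -> str:
    """Return 'swerve_into_wall' (risk the occupants) or 'stay_course'
    (risk the pedestrians) under a given hard-coded policy."""
    if policy == "minimize_total_deaths":
        # Utilitarian-style rule: sacrifice the smaller group.
        return "swerve_into_wall" if occupants < pedestrians else "stay_course"
    if policy == "protect_occupants":
        # Occupant-first rule, as attributed to Mercedes-Benz above.
        return "stay_course"
    raise ValueError(f"unknown policy: {policy}")

# The same situation yields opposite outcomes under the two policies:
print(choose_action(occupants=1, pedestrians=3, policy="minimize_total_deaths"))
print(choose_action(occupants=1, pedestrians=3, policy="protect_occupants"))
```

The point of the sketch is that both branches are internally consistent, yet a road shared by cars running different branches cannot deliver a single “greatest good” outcome.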

To conclude, computer programming is a hard discipline and ethics are soft, which makes programming ethics an extremely complex task. The most effective way to approach the dilemma surrounding the use of autonomous cars is a utilitarian perspective, and even though these vehicles would be very practical for their users, the variety of philosophies around which they are programmed makes it difficult for them to achieve the greatest good for the greatest number (Mill) in some situations. Their use can therefore be considered unethical from a teleological point of view.


Works Cited

“Moral Machine.” Massachusetts Institute of Technology, 9 Feb. 2017, www.moralmachine.mit.edu/

Hendricks, Scotty. “Virtue Ethics: A Moral System You’ve Never Heard of — But Probably Use.” Big Think, 12 Dec. 2016, http://bigthink.com/scotty-hendricks/virtue-ethics-the-moral-system-you-...

Merrill, John. “Overview: Theoretical Foundations for Media Ethics.” Media Ethics, edited by Sarah Waurechen, Eastman, 2017.

Mill, John Stuart. Utilitarianism. Handout. Marianopolis College. Westmount, Quebec. 1861. Print.

Kant, Immanuel. Fundamental Principles of the Metaphysics of Morals. Handout. Marianopolis College. Westmount, Quebec. 1785. Print.

Taylor, Michael. "Self-Driving Mercedes-Benzes Will Prioritize Occupant Safety Over Pedestrians." Car and Driver, 7 Oct. 2016, http://blog.caranddriver.com/self-driving-mercedes-will-prioritize-occup...


I personally don't want to drive a car (I fear wearing glasses in public & realized how many controls a car has), but if I were driving I'd likely be the same type of person as when I'm walking on my own. I usually let cars pass before me if I know I'll have an opening not long after to cross the road. In a car that would be more difficult, because of other drivers' reactions to what I'm doing. I'd probably be annoyed if someone I let pass takes it slow, because I usually scurry across the road if a car lets me pass. I also wouldn't have any reason to crash into someone in a car, because I don't smoke and suck at multi-tasking anyway. My move would likely be avoiding pedestrians if I were about to hit someone.