Setting the Moral Laws on LAWS
by SimoneDB on February 10, 2017 - 5:28pm
With the rapid advancement and proliferation of weapons technology, machines are increasingly replacing humans on the battlefield. Many now consider Lethal Autonomous Weapons Systems (LAWS) – machines with the ability to choose how and when to attack – the future of warfare. Most of us are uncomfortable with the idea of sentient machines, especially ones engineered to kill. After all, humans should be making the decision to take human lives, right? There is something deeply unsettling about a machine working through the rational decision-making process to kill, which is why many NGOs and human rights groups are campaigning to ban them. Nonetheless, there is a moral case for robot soldiers – one that views the situation through a teleological ethical framework. Because LAWS are far more capable of achieving the summum bonum of safety for the greatest number of people, they should not be banned.
The first issue in this moral dilemma deals with the following question: if machines are the ones making the decisions, then who is to blame for their actions? There would be no one to hold accountable for what has been done and no morally reprehensible actor. This issue, commonly known as the “responsibility vacuum” (The New York Times), confronts the lack of ethical responsibility that would ensue once the human operator is removed from the battlefield.
Moreover, there is the fear that autonomous weapons could lead to a world where people are morally detached from the use of force (Politico). In other words, by removing human soldiers from physical harm and eliminating human decision-making from the chain of command, we weaken the disincentive to resort to armed force, setting a harmful precedent for conflicts to come.
That being said, there is another side to this debate, one that focuses on the tangible benefits LAWS can provide to humankind. One of the most pertinent weapons developments of the twentieth century has been the precision-guided munition, which has significantly lowered the number of civilian casualties and helped make warfare more humane. Human Rights Watch recently declared that the use of unguided munitions in populated areas violates international law. LAWS would hypothetically never act out of fear or impulse for their own safety, nor would they fire out of vengeance or spite. Robots would also never rape or torture victims (Politico). Thus, if programmed correctly, LAWS could abide by the principles of just war far better than humans ever could.
Furthermore, numerous studies have shown that disenfranchised groups are more likely to be recruited into the army. In the United States, low-income men and women and minorities are disproportionately enrolled in the military (Wyant). Is it not also morally wrong for governments to target the most vulnerable groups in society? What does that say about the value of their lives in the eyes of the general public?
Deontological ethics would prevent us from ever deploying such weapons because of the theory's formalistic maxims (Merill 11), which hold that an action is right or wrong depending on whether it fulfills our duty. Kant's deontological theory of ethical rationalism would be entirely opposed to the use of LAWS. Under no circumstances would his clear-cut categorical imperative give us the moral clearance to undermine the gravity of armed force, nor would it permit us to shift the moral burden of committing such acts of violence onto machines.
From a teleological point of view, and more specifically a utilitarian one, we should use LAWS to our advantage because they guarantee the summum bonum of safety, and thus happiness, for the greatest number of people. All humans should be weighed equally in the calculation to achieve the greatest good; safety should not be a privilege reserved for those who are well integrated into society. Moreover, weapons with greater autonomy are more accurate, cause fewer civilian casualties, and would not abuse prisoners of war.
Ultimately, the teleological perspective is more convincing. We must recognize that human actions do not occur in a vacuum. There is a broader context in which we should judge whether an action is moral, and the context of warfare simply does not comply with absolutist standards. As dystopian and frightening as LAWS seem, the appropriate response is not to forgo potentially useful and revolutionary technology. Granted, LAWS do allow for a sense of detachment from violence, but they also make militaries more efficient and thus cause fewer deaths, which is what really matters. Although these machines have an unsettling amount of unprecedented autonomy, the harms identified by the deontological analysis do not outweigh the benefits established by the teleological analysis, which secures the principles of just war and civilian safety.
Horowitz, Michael C., and Paul Scharre. "Do Killer Robots Save Lives?" Politico. Politico, 19 Nov. 2014. Web. 10 Feb. 2017.
Horowitz, Michael C., and Paul Scharre. "The Morality of Robotic War." The New York Times. The New York Times, 26 May 2015. Web. 10 Feb. 2017.
Merill, John C. "Overview: Theoretical Foundations for Media Ethics." Controversies in Media Ethics. Ed. A. David Gordon, John M. Kittros, John C. Merill, William Babcock, and Michael Dorsher. 3rd ed. New York: Routledge, 2011. 3-32. Print.
Wyant, Carissa. "Who’s Joining the US Military? Poor, Women And Minorities Targeted." MintPress News. MintPress News, 18 Dec. 2012. Web. 23 Jan. 2017.