Setting the Moral Laws on LAWS

by SimoneDB on February 10, 2017 - 5:28pm

With the rapid advancement and proliferation of weapons technology, machines are increasingly replacing humans on the battlefield. Many now consider Lethal Autonomous Weapons Systems (LAWS), machines with the ability to choose how and when to attack, to be the future of warfare. Most of us are uncomfortable with the idea of sentient machines, especially ones engineered to kill. After all, humans should be making the decision to take human lives, right? There is something quite unsettling about a machine going through a rational decision-making process in order to kill, which is why many NGOs and human rights groups are campaigning to ban them. Nonetheless, there is a moral case for robot soldiers, namely one that looks at the situation from a teleological ethical framework. Because LAWS are far more capable of achieving the summum bonum of safety for the greatest number of people, they should not be banned.

The first issue in this moral dilemma deals with the following question: if machines are the ones making the decisions, then who is to blame for their actions? There would be no one to hold accountable for what's been done and no morally reprehensible actor. This issue, commonly known as the “responsibility vacuum” (The New York Times), concerns the lack of ethical responsibility that would ensue once we eliminate the human operator from the battlefield.

Moreover, there is the fear that autonomous weapons could lead to a world where people are morally detached from the use of force (Politico). In other words, removing human soldiers from physical harm and eliminating human decision-making from the chain of command weakens the disincentive to resort to armed force, which sets a harmful precedent for conflicts to come.

That being said, there is another side to this debate, one that focuses on the tangible benefits LAWS can provide to humankind. One of the most pertinent developments of the twentieth century has been the precision-guided munition, a weapon that has significantly lowered the number of civilian casualties and has helped make warfare more humane. Human Rights Watch recently declared that the use of unguided munitions in populated areas violates international law. LAWS would hypothetically never act out of fear or impulse for their own safety, nor would they fire out of vengeance and spite. Robots would also never rape or torture victims (Politico). Thus, if programmed correctly, LAWS could abide by the principles of just war far better than humans ever could.

Furthermore, numerous studies have shown that disenfranchised groups are more likely to be recruited into the military. In the United States, lower-income men and women and minorities are disproportionately enrolled in the armed forces (Wyant). Is it not also morally wrong for governments to target the most vulnerable groups of society? What does that say about the value of their lives in the eyes of the general public?

Deontological ethics would prevent us from ever implementing such weapons due to the formalistic maxims of this theory (Merill 11), which tell us that an action is right or wrong depending on whether it fulfills our duty. Kant's deontological theory of ethical rationalism would be completely against the use of LAWS. Under no circumstances would his clear-cut categorical imperative give us the moral clearance to undermine the gravity of armed force, nor would it permit us to shift the moral burden of committing such acts of violence onto machines.

From a teleological point of view, more specifically a utilitarian one, we should use LAWS to our advantage because they guarantee the summum bonum of safety, and thus happiness, for the greatest number of people. All humans should be considered equal in the calculation to achieve the greatest good; safety should not be a privilege reserved for those who are well integrated into society. Not to mention, weapons with greater autonomy are more accurate, cause fewer civilian casualties, and would not abuse prisoners of war.

Ultimately, the teleological perspective is more convincing. We must recognize that human actions do not occur in a vacuum. There is a broader context in which we should consider whether or not an action is moral, and the context of warfare simply does not comply with absolutist standards. As dystopian and frightening as LAWS seem, the appropriate response is not to forgo potentially useful and revolutionary technology. Granted, LAWS do allow for a sense of detachment from violence, but they also make militaries more efficient and thus cause fewer deaths, which is what really matters. Although these machines have an unsettling and unprecedented degree of autonomy, the harms under the deontological analysis do not outweigh the benefits revealed by the teleological analysis, the latter of which secures the principles of just war and civilian safety.

Horowitz, Michael C., and Paul Scharre. "Do Killer Robots Save Lives?" Politico. Politico, 19 Nov. 2014. Web. 10 Feb. 2017.

Horowitz, Michael C., and Paul Scharre. "The Morality of Robotic War." The New York Times. The New York Times, 26 May 2015. Web. 10 Feb. 2017.

Merill, John C. "Overview: Theoretical Foundations for Media Ethics." Controversies in Media Ethics. Ed. A. David Gordon, John M. Kittros, John C. Merill, William Babcock, and Michael Dorsher. 3rd ed. New York: Routledge, 2011. 3-32. Print.

Wyant, Carissa. "Who's Joining the US Military? Poor, Women and Minorities Targeted." MintPress News. MintPress News, 18 Dec. 2012. Web. 23 Jan. 2017.

Comments

Though I largely agree, there are a few things I want to point out:
“The first issue in this moral dilemma deals with the following question: if machines are the ones making the decisions, then who is to blame for their actions? There would be no one to hold accountable for what’s been done and no morally reprehensible actor.”
I disagree there; I do believe there should be someone to blame. Just because there is a debate about who should be held more accountable doesn't mean that nobody should be held accountable.
“LAWS would hypothetically never act out of fear or impulse for their own safety, nor would they fire out of vengeance and spite. Robots would also never rape or torture victims (Politico).”
As an example, there are laws that get made to prevent the repeal of other laws, such as the enforcement of the constitution and the rare (or not so rare) situations in which people violate it and face consequences. Additionally, I won't pretend that robots are programmed to rape people, but hypothetically, wouldn't you have to teach a robot not to rape or torture, as well as program it with the knowledge of what those acts are? Admittedly, I don't know much about AI.
“Is it not also morally wrong for governments to target the most vulnerable groups of society? What does that say about the value of their lives in the eyes of the general public?”
It depends on who you ask; that is the basis of moral questioning. Not everyone is going to agree with that statement.

Very interesting topic! It's crazy how war and battle have become less and less man versus man and more machine versus machine… The first robots and machines ever created by man were designed with the sole purpose of aiding humankind, and any machine that can autonomously go against this has been frowned upon. A man in California has challenged this and started HUGE debates about the ethics of robots by making a robot with the sole purpose of hurting people (see the first link below). This also raises the question of whether robots that are controlled by humans are ethical or not. Some say that it takes the risk out of war and encourages more reckless behaviour, as there is no risk to the operator's life. Another side to this argument is that it can allow for more equality among genders. This would let women who have wanted to serve their country but were never ready to leave their families for months to go overseas, or who aren't physically fit, finally serve their country in other ways. Machines can provide huge opportunities for humans as a race but also come with challenges and will need strict guidelines.

https://www.fastcompany.com/3059484/this-robot-intentionally-hurts-peopl...