Killer robots: the future of this world?

by doriane.d on September 2, 2013 - 7:51pm

 

Technology in general interests me. The advances humans have made over the past decades baffle me, and what they come up with every day challenges ethics in extreme ways. While I was reading about robots and their abilities, I came across an interesting article:

http://www.roughtype.com/?p=3371

This article is about a type of robot called Lethal Autonomous Robots. These robots would have the ability to make military decisions that humans normally make, such as the decision to kill a target. The article follows the presentation of this new technology to the Human Rights Council by Christof Heyns, a special rapporteur for the United Nations, and lays out Heyns' arguments, as well as the UN's, concerning the dangers surrounding the future deployment of these robots in wars.

These robots, if eventually created, would forever change the way war as we have always known it functions. If this project goes through, it would affect not only countries currently in armed conflict but the whole world. After reading this article, I asked myself this question:

Should Lethal Autonomous Robots be allowed to exist?

I do not think that LARs should be created, because of the coldly rational decisions these robots would make and the awful impact that would have on humans. These robots would have to choose whether or not to eliminate someone using algorithms; in other words, they would act in the most rational and beneficial way possible. Rational decisions sometimes mean accepting losses on the side one is fighting for, or among people who have nothing to do with the conflict. Humans tend to weigh important decisions according to their feelings. Losing people on one's own side is an emotional pain that humans try to avoid through different methods that take more time, thus delaying the outcome of a war. LARs would be designed to make decisions as fast as possible and would not consider the emotional side of human collateral damage. Is time really such an immense factor that it justifies these damages? Is a human life so worthless that being in the wrong place at the wrong time will automatically get one killed? Algorithms do not take these questions into consideration, and if these robots are deployed in a war zone, I believe more innocent lives will be taken, because those lives will have been reduced to an equation.

Those in favor of creating Lethal Autonomous Robots will argue that fewer of the atrocities usually found in wars fought by humans would take place. Indeed, since humans differ so much from one another in personality, the ways they act also vary. Some soldiers kill only when necessary, while others justify killing everything that moves under the pretext of a "war atmosphere". It is not uncommon to hear awful war stories, like rape or torture, in the news. Robots would eliminate a target in the most rapid and painless way possible, since for them killing is not an answer but merely a means to an end. They would not get distracted from their goal and, unlike humans, would not commit atrocities.

After reading both my point of view and my opponents' concerning LARs, and the different ways a rational kill can affect a war, what do you think about the place of rationality in wars, and how would it affect one?