Robots: ASCIIng the real questions. Addressing the bugs in moral codes.

by aurea.mediocritas on February 13, 2017 - 2:09am

In 2015, with an increase in research on and access to artificial neural networks, an unprecedented amount of progress took place with respect to artificial intelligence (AI) (Clark, 2015). Artificial neural networks aim to replicate the human brain, allowing them to acquire the brain’s unique capacity for complex problem-solving and decision-making. While this has been beneficial in a range of fields, from finance to medicine to telecommunications and beyond, there is an inherent risk in the continued development of AI (CNN, 2006). As scientists perfect artificial neural networks, these machines, commonly referred to as robots, become more and more human-like. With their intellectual and emotional capabilities beginning to parallel our own, a serious ethical issue has arisen: should they be granted fundamental rights? Teleological ethics are driven by the outcome associated with a decision, while deontological ethics suggest that a decision should be made based on what is morally right, disregarding its consequences. Recognizing the lack of accurate insight humans possess with regard to the future of AI, while also taking into consideration the moral responsibility we hold, robot rights and responsibilities should be established (Greenwald, 2015). This essentially suggests that dealing with this ethical issue requires the implementation of deontological principles over teleological ones.
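For readers curious what "replicating the brain" means in practice, here is a minimal, purely illustrative sketch of a single artificial neuron, the building block of the neural networks mentioned above. The weights and inputs are made up for illustration; real networks learn these values from data.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term,
    # loosely mirroring how signals accumulate in a biological neuron.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A sigmoid activation squashes the result into (0, 1),
    # which can be read as a soft yes/no decision.
    return 1 / (1 + math.exp(-total))

# Hypothetical example: two inputs with arbitrary weights.
decision = neuron([1.0, 0.5], [0.6, -0.4], 0.1)
```

Stacking many such neurons in layers, and adjusting their weights by training, is what gives these networks their decision-making ability.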

Technological advancement with respect to artificial intelligence is seen in a very limited portion of the world. The reality of software and hardware development is that it is often kept confidential until a project has reached completion (Aftergood). This, however, limits societies’ time to react, should robots be as intellectually equipped as they are projected to be. Since research on AI began, the idea of a machine having a brain of its own has raised serious concerns, concerns that are echoed across the world. Some wonder if robots will overpower humans, while others are concerned about their privacy in the face of such powerful computers. Regardless of the concerns society may have, the reality is that teleology is limited to decisions based on outcomes and, since it is impossible to know how AI will manifest itself in robots, no conclusion can be drawn. This, fortunately, is not the case for all ethical frameworks.

Ethical rationalism, a form of deontology, applies the theories of Immanuel Kant, an 18th-century German philosopher. While Kant proposed a multi-faceted approach to ethics, he predominantly argued that the moral righteousness of an action is founded in the intentions of the person who performs it. He states that “a good will is not good because of what it effects or accomplishes, because of its fitness to attain some proposed end, but only because of its volition” (Kant, 1949). From animal rights to environmental rights, humans have used their intellectual ability and their sound moral compass, fine-tuned by the preset standards of society, to establish systems that will benefit the community as a whole for generations to come. Given that robots may soon be roaming among us, and that, as a result of AI, their level of intellect may surpass our own, it is our responsibility to set, in a similar manner, the guidelines under which they must function. We must set a precedent and act in a just fashion, as it is only then that we are morally in the clear.

As we move forward in an increasingly technologically bound world, it is of utmost importance to make morally sound decisions. Teleology, being an outcome-oriented framework, prevents decisions from being made in time for the arrival of super-powered robots, thus proving to be an ineffective moral model. Ultimately, robots must be granted rights similar to those of humans since, under the umbrella of deontology, that is truly the ethical solution.

*Aside - Title Explained: ASCII stands for the American Standard Code for Information Interchange. 
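To see what ASCII actually does, here is a small illustrative Python snippet (not part of the original post) showing how the standard maps characters to numeric codes in the range 0–127, using Python's built-in `ord()` and `chr()` functions:

```python
# ASCII assigns each character a number from 0 to 127.
# ord() looks up a character's code; chr() goes back the other way.
codes = [ord(c) for c in "AI"]           # 'A' is 65, 'I' is 73
text = "".join(chr(n) for n in codes)    # reconstructs "AI"
```

So "ASCIIng the real questions" is a pun on "asking": the essay's questions about machines, encoded the way machines themselves represent text.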

WORKS CITED

Aftergood, Steven. "Invention Secrecy." Invention Secrecy. N.p., n.d. Web. 13 Feb. 2017.

Clark, Jack. "Why 2015 Was a Breakthrough Year in Artificial Intelligence." Bloomberg.com. Bloomberg, 08 Dec. 2015. Web. 13 Feb. 2017.

CNN. "AI set to exceed human brain power." CNN. Cable News Network, n.d. Web. 13 Feb. 2017.

Greenwald, Ted. "Does Artificial Intelligence Pose a Threat?" The Wall Street Journal. Dow Jones & Company, 10 May 2015. Web. 13 Feb. 2017.

Kant, Immanuel, and Thomas K. Abbott. Fundamental Principles of the Metaphysic of Morals. New York: Liberal Arts Press, 1949. Print.