Programming ethics in AI

by Error on February 10, 2017 - 2:46pm

                Over the last few years there has been an increasingly intense debate over the ethical viability of self-driving cars, a symptom of a larger debate about artificial intelligence and about putting computers in positions of power. Progress in the field of AI has come with a single key question: what will a machine do when it must choose between two or more human lives? Would a self-driving car hit someone standing in the middle of a bridge, or would it swerve out of the way, fall off the bridge, and kill its passenger? Is it ethical to put a machine in a position of such power? Through the lens of utilitarian ethics, putting a computer in such a situation is not only ethical, but more ethical than letting a human decide.

                Most opponents of AI claim that since a computer program has code, not a conscience, it is impossible for it to make ethical choices, owing to its lack of a moral compass. The idea is that a robot can only calculate its own best possible outcome, but how do we know that this best possible outcome is ethical, even from a utilitarian point of view? How can a program, with its limited knowledge, assign values to human lives in a way that makes the outcome of its calculation ethical? Since a car cannot accumulate virtues, it cannot be ethically viable via virtue ethics. Since it cannot formulate a single axiom, it cannot be ethical via deontological ethics, as it cannot make its own rules on the fly. A computer cannot even make a very good utilitarian calculation, because it cannot easily assign values to different lives.
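To make the kind of "utilitarian calculation" under discussion concrete, here is a purely illustrative toy sketch. Every option name and probability below is an invented assumption for the bridge scenario described above, not a real autonomous-driving system: the program simply picks the action with the lowest expected harm, which is exactly the step critics say is hard, since someone must first supply those numbers.

```python
# Toy sketch of a utilitarian choice: pick the action whose
# expected harm is lowest. All option names and probabilities
# are invented assumptions for illustration only.

def choose_action(options):
    """Return the action with the lowest expected harm.

    options: dict mapping an action name to its expected harm,
    e.g. probability of a fatality times number of people at risk.
    """
    return min(options, key=options.get)

# Hypothetical bridge scenario: continue and risk the pedestrian,
# swerve off the bridge and risk the passenger, or brake hard.
scenario = {
    "continue": 0.9 * 1,  # assumed ~90% chance one pedestrian dies
    "swerve":   0.5 * 1,  # assumed ~50% chance one passenger dies
    "brake":    0.2 * 1,  # assumed ~20% chance of a fatality
}

print(choose_action(scenario))  # -> brake
```

The calculation itself is trivial; the ethical weight lies entirely in the numbers fed into it, which is the essay's point about assigning values to different lives.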

                In the end, the virtues of the computer may not exist, but they are still a product of the virtues of the programmer, so the worry is alleviated by looking past the machine itself: a car has no moral compass, but the programmer does. A robot may sometimes fail to make the right choice because of its inability to distinguish between the values of lives, seeing them all as equal, but this way of looking at things leads to a statistically more effective moral approach than unreliable humans. Although it may also be impossible to place a deontological axiom into a machine, it would be entirely logical for everyone surrounding that machine to follow the axiom of never putting it in a position where it needs one.

In the end, a compelling argument can be made for the ethical viability of AIs in everyday life, as they can satisfy the requirements of most ethical frameworks. The simple way of looking at it is that if the programmer is moral, then so is the program, as it will inevitably fall in line with the coder’s will. Not to mention that a machine is always consistent in its ethical choices, something that is nigh impossible for a person to achieve.


This raises many questions about how people will deal with such systems. One must question whether it is even possible to make an ethical program, seeing as most people's values change with their experiences. Would it be better for the AI to make the most logical decision and follow a utilitarian code, and if so, how would it calculate how much happiness people derive from the decision it makes?