Google Terminator

by Scimson on February 2, 2014 - 10:00pm

Last week, Google acquired DeepMind, a company that develops artificial intelligence. One of the firm's co-founders said in 2011 that artificial intelligence was the number one technology-related threat to human survival. The fear is that machines could one day become more intelligent than humans and take control. For now, though, the software we call AI is barely intelligent enough to deserve the label, so it poses no serious direct threat. As part of the purchase, Google agreed to set up an ethics board to regulate what can and cannot be done in intelligence research and to make sure no killer robots get created.

A harder issue remains, however: an AI may have to make choices that require ethical judgment when every possible outcome is life-threatening. The best example of this, given by Bianca Bosker in her article “Google’s New A.I. Ethics Board Might Save Humanity From Extinction” published on January 29th, 2014, is a self-driving car that has to choose between crashing into a single car or into a school bus full of kids, with no other possible outcome. A human's ethical judgment would be to save the kids first, but a machine would have to be given some equivalent judgment of its own, and that is one of the real problems with putting AI on the road. So who can we trust with the development of such a moral code?
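To see why this is so uncomfortable to encode, here is a minimal, purely hypothetical Python sketch of the school bus dilemma. The scenario, the harm scores, and the choose_maneuver function are all my own assumptions for illustration; nothing here comes from Google or DeepMind.

```python
# Hypothetical sketch: a driving AI forced to pick the "least bad" maneuver.
# The harm estimates and the decision rule below are invented for illustration.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    estimated_harm: float  # made-up score: expected number of serious injuries


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the lowest estimated harm.

    This single line is really the ethics board's problem: someone has to
    decide that minimizing one number is an acceptable moral rule, and
    someone has to decide how that number gets computed in the first place.
    """
    return min(options, key=lambda m: m.estimated_harm)


if __name__ == "__main__":
    options = [
        Maneuver("swerve into the single car", estimated_harm=1.5),
        Maneuver("hit the school bus", estimated_harm=20.0),
    ]
    print(choose_maneuver(options).name)  # -> "swerve into the single car"
```

The uncomfortable part is not the code, which is trivial, but the fact that a programmer had to write down those numbers at all.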

Nobody can really give a straight answer to this, because values differ so much among humans themselves that agreeing on a single moral code would probably be nearly as hard as achieving world peace. As a computer science student, I think current and future AI researchers need to evaluate the risks of their code before executing it. Setting up an ethics board in every company doing artificial intelligence research would be a real improvement in safety.