Sadist Behind the Screen

by TheIdealCynic on March 21, 2017 - 5:39pm

We are in the midst of a troll epidemic. Though virtual, the Internet has become an increasingly dangerous place where anyone can become a victim of psychological abuse. Cyberbullying, however, has ramifications that go beyond an individual's self-esteem: it threatens social progress by deterring free speech. Research has shown that certain topics attract a disproportionate amount of abuse, silencing entire groups who cannot tolerate the constant influx of hostility. A recent Guardian study found that of its ten most-trolled writers, eight were women and the only two men were black. The sexism and racism pervading the real world, it seems, are reflected and perpetuated within the virtual one.

While an ideal computer world would drag trolls straight into the trash bin, drawing the boundary between censorship and free speech is far easier said than done. The moral dilemma therefore remains: how can we preserve and promote free speech without compromising it through censorship? Google's Counter Abuse Technology team has launched a tool called Perspective, which aims to undercut free-speech-deterring trolls by limiting their ability to cyberbully. The tool uses machine learning to analyze comments and compare them to statements previously deemed "toxic". Publishers can then use the resulting "toxicity score" to filter out comments that damage positive dialogue.
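
To make that mechanism concrete, here is a minimal sketch of how a publisher might request a score. It assumes the publicly documented Perspective comments:analyze REST endpoint and its TOXICITY attribute; the API key, the sample comments, and the helper name toxicity_score are placeholders for illustration, not details taken from Google's announcement.

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; a real key comes from Google's API console
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    def toxicity_score(text):
        # Ask the scoring service how likely this comment is to be
        # perceived as toxic; the result is a value between 0.0 and 1.0.
        payload = {
            "comment": {"text": text},
            "languages": ["en"],
            "requestedAttributes": {"TOXICITY": {}},
        }
        response = requests.post(URL, json=payload)
        response.raise_for_status()
        scores = response.json()
        return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    print(toxicity_score("Thanks for a thoughtful piece."))    # expected to score low
    print(toxicity_score("Shut up, nobody wants you here."))   # expected to score high

Note that the score itself deletes nothing; it is simply handed to the publisher, who decides what to hide, hold for review, or publish.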

The program's ability to regulate trolls effectively can be assessed against several ethical frameworks. First, the software borrows ideas from the virtue ethics school of thought. According to virtue ethics, actions and people should be valued according to their virtue. Google's Perspective is built on this notion, valuing ideas by the virtue they bring to the conversation. Virtue ethics is often criticized for its inability to define virtues and vices in a way everyone can agree on. In spite of this flaw, virtue ethics' focus on the self's ability to internalize morality is a major win for society. Indeed, by providing publishers with tools to erase malicious content, organizations can embrace their own ethical identities and develop collective characters or codes of conduct, a win for humanity.

Deontological ethical systems, in which moral decisions are made in accordance with universal maxims, are unfortunately not very useful as a practical solution here. A deontological moral decision must pass the categorical imperative; that is, it must be acceptable as a universal moral rule. Censorship does not pass the categorical imperative, because it is impossible to draw a precise, universal line between censorship and free speech.

Finally, teleological ethical theory plays the most fundamental role in justifying Google's Perspective tool. According to utilitarianism, a subset of teleological ethical theory, morality's ultimate goal, or summum bonum, is to produce the greatest quality and quantity of happiness for the greatest number of people. Google's Perspective works according to that logic, optimizing society's ability to protect discriminated-against groups. In other words, by scoring comment toxicity so that publishers can prioritize protecting those who are most targeted online (women and people of color), the greatest good can be achieved. Teleology is often critiqued on the grounds that society cannot accurately predict the outcomes of moral decisions. In this case, however, the reality of cyberbullying, and the effect of moderating it, can be made clear-cut and predictable in the form of statistics and data. By knowing and strategically pursuing the outcome we want to achieve (less hateful dialogue directed at women and people of color), Google's Perspective tool may be the best way of containing the awful troll epidemic that pervades the digital world.
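
As a hypothetical illustration of that publisher-side prioritization, the helper below builds on the toxicity_score sketch above; the threshold value is an assumption, a policy knob each publisher would tune for its own community rather than anything Perspective prescribes.

    def moderate(comments, threshold=0.8):
        # Partition comments into those published immediately and those
        # held for human review because their toxicity score is high.
        published, held_for_review = [], []
        for text in comments:
            if toxicity_score(text) >= threshold:
                held_for_review.append(text)
            else:
                published.append(text)
        return published, held_for_review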
