Since 2017, Myanmar’s military has carried out a genocide against Rohingya Muslims, forcing more than 700,000 people into crowded refugee camps and inciting violence that has killed over 25,000 people, and Facebook has accepted much of the blame.
In late 2018, Facebook released a report showing how its platform had been used to spread hate speech and lies about the Rohingya people that helped trigger the genocide. Facebook is extremely important in Myanmar; millions of people use it as their sole point of contact with news sources or anything else on the Internet. This dominance, combined with Facebook’s very few Burmese-speaking content moderators and little oversight of hate speech in Myanmar, led to an unchecked spread of misinformation, fake news, and fear-mongering, and eventually to violent attacks offline.
Massacres like these, stemming from chaos fueled by unchecked technology, seem to go far beyond the questions of “bullshit” or the ethical concerns that Frankfurt and Moor pose. What happens when misinformation goes beyond a little “bullshit” or distortion of the truth and instead becomes a full, malicious, and unbridled narrative about an entire group of people? We can sit back and ponder the ethical implications of spreading fake news to a few gullible voters online, but how should we respond to (and prevent) problems that pose an imminent threat to the safety of millions of people?
I do not believe that Moor (and possibly the discourse on computer ethics as a whole) properly addresses this. In his outline of recommended improvements to our ethical approach to technology, he mainly discusses how technologists and ethicists can work to preemptively identify and address potential ethical problems. However, I see Facebook’s contribution to the Rohingya genocide not as a failure to identify ethical problems with hate speech or fake news, but as a failure to check Facebook’s power and its ability to create such problems in the first place. Companies are designed only to gain profit and market power, even when they are not equipped to handle the authority and responsibility that come with both. It is not enough for technology companies simply to deliberate on potential ethical problems beforehand; their control must be consistently checked to prevent them from having the absolute power to enable these problems in the first place.