Friday, February 21, 2020

Beyond Bullshit: Facebook, Genocide, and Chaos Fueled by Technology

Since 2017, Myanmar’s military has carried out a genocide against the Muslim Rohingya people, forcing more than 700,000 of them into crowded refugee camps and inciting violence that has killed over 25,000, and Facebook has accepted much of the blame.

By Moor’s definition in “Why We Need Better Ethics for Emerging Technologies,” the spread of Facebook as a social medium constitutes an enormous sub-revolution within the technological revolution in Myanmar. Facebook is firmly in the power stage of its revolution: millions of people in Myanmar use Facebook as their sole point of contact with news sources or anything else on the Internet. In late 2018, Facebook released a report showing how its platform had been used to spread hate speech and lies about the Rohingya people that triggered a genocide, and it is easy to see how these atrocities are the direct result of a (somewhat willful) policy vacuum on Facebook’s part. Facebook’s well-established revolution in Myanmar, combined with very few Burmese-speaking content moderators and little oversight of hate speech, led to a spiral of misinformation, fake news, and fear-mongering, and eventually to violent attacks offline against the Rohingya people.
I believe that the nature of Facebook’s policy vacuum and its detrimental social impact in Myanmar demonstrate a shortcoming in Moor’s argument about how policy vacuums occur and how better ethics can be instated during a technological revolution. Moor argues that we experience policy vacuums because we cannot anticipate the consequences of new technologies in the early stages of their revolutions; however, Facebook and its potential consequences were well known in the United States before the platform became established in Myanmar. The decision not to hire enough Burmese-speaking moderators or to prevent hate speech in Myanmar, despite knowing the consequences from other parts of the world, was a willful choice by Facebook that cost thousands of lives. Contrary to what Moor suggests, Facebook’s contribution to the Rohingya genocide was not a failure to identify ethical problems with hate speech or fake news, but a failure to control Facebook’s power to create such problems in the first place. It is not enough for technology companies simply to deliberate on potential ethical problems beforehand; their power must be consistently checked to prevent them from having the absolute power to enable these crises.

1 comment:

  1. Hi Rachel, interesting post! The genocide of the Rohingya is an appalling crime, and I liked that you took a more objective stance in examining Facebook's involvement. I also liked that you mentioned Moor's article early in the post rather than later on, but I would still take the time to explain who Moor is, to establish why his ideas are relevant to this topic. Bringing in your own opinion in the final paragraph was great, and I liked the way you structured your argument. However, I would suggest breaking it up a little more, as you present a lot of important ideas that would be easier to follow with less text per paragraph. Overall, good work!

