Google and Twitter have since overhauled their political advertising policies, banning microtargeted ads and dramatically limiting the presence of political advertisements on their platforms.
However, Facebook has refused to change its political advertisement policy, claiming that doing so would amount to censorship. The company has doubled down on the issue by refusing to ban politicians from lying in political ads.
Many people hold the misconception that social media, as a technology, cannot be biased or morally bad; rather, the people voicing their opinions on social media platforms are solely responsible for the spread of disinformation. I disagree with this position completely.
According to Philip Brey in The Cambridge Handbook of Information and Computer Ethics, the design of computer systems has moral consequences. In developing software such as social media, application designers embed moral values and norms into their systems. These embedded values express themselves as tendencies that enable or constrain things such as privacy or freedom of information. In this way, a technology itself can support or oppose moral values.
In the case of Facebook, by deliberately allowing political ads to spread lies on its platform, the company provides algorithmic infrastructure for the spread of disinformation. By refusing to change its policies, Facebook is not fighting censorship; it is supporting policies that threaten the democratic process.
References
Brey, Philip. “Values in Technology and Disclosive Computer Ethics.” The Cambridge Handbook of Information and Computer Ethics, edited by Luciano Floridi, Cambridge University Press, Cambridge, 2010, pp. 41–58.
Ruth, I found your article interesting to read and believe you make a very good point on Facebook's political ad policy. I like how you reformatted your article from the original version to make it fit the page better. I also believe that your revisions added a better flow to your article, while still explaining your argument coherently. I believe it may have been beneficial if you explained embedded values first and then went on to use Facebook's ad policy as your example of how social media has those embedded values, but overall I think you did a nice job on your revision!
It's really infuriating to see how careless Facebook seems to be in acknowledging the massive influence it has. It just seems like a thinly veiled excuse to keep the technology that makes them the most money despite its morally questionable nature. I like that you added more context to understand Brey's ideas. The small adjustments to wording really make a difference, and the conclusion is more impactful. Great job!
Hi Ruth, I really enjoyed reading this post! I can really see the work you put into this revision: you fixed the formatting, broke up the post into more paragraphs to help the flow, and refined and expanded on what you wrote previously. I found your post super engaging and easy to read. I agree with Stmiallr that it may have been better for the flow of the piece to explain embedded values first and then use Facebook's ad policy as an example of how embedded values exist in social media. However, I think this was very well done overall!
Hi Ruth! Loved this blog! The changes that you made from your original post have had a lasting effect on the format, structure, and overall flow of your blog post, so great work! Additionally, this was a really interesting blog post, because your topic is something that is very relevant to our current technology age. The morality of a technological agent is quite an important topic, and the fact that you addressed this topic with recognizable companies made it quite engaging for the reader. Thanks for the post, enjoyed reading it!
Your discussion of Facebook's political ad policy was intriguing. Comparing this version of your blog to the original, it's clear that you made a lot of good improvements. I like that you added in the quote about how the design of computer systems has moral consequences, as it added a lot to your blog. Breaking up the longer paragraphs into shorter chunks was a good move also. Great work.