Friday, February 7, 2020

Why Deepfakes Must Die

In 2019, the CEO of a U.K. energy firm transferred 220,000 euros to a fraudster. He had received a phone call from someone he believed to be his boss, asking for the money; the boss's voice had been mimicked using deepfake technology. Ever since I heard about deepfakes, I have been deeply concerned about their ethical use. According to Wikipedia, “Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else's likeness.” They have gained widespread attention as a result of their use in celebrity pornography, revenge porn, and fake news, along with hoaxes and financial fraud. In my research, I have not been able to find a positive use for deepfakes (other than improving deep learning itself), which is very concerning. Companies are currently trying to build AIs to detect deepfake videos, but many experts believe that, as the technology improves, this will become an impossible task.

Since 1985, James H. Moor has been researching the field of Information Ethics. In one of his papers, he describes three stages of technological revolutions. The first stage is Introduction, which is often the intellectual curiosity stage. Deepfakes began in scientific research, with the idea of helping to improve deep learning. The problem now, I believe, lies in the second stage, called Permeation: the cost of using deepfake technology has dropped radically, and the design of deepfake software has become more standardized. An example of this is the DeepNude software, which was introduced and then removed as a result of public pushback. The final stage, which I fear we are approaching, is the Power stage, where, as Moor says, “Most people in the culture are affected directly or indirectly by it.” In this case, we must fear the indirect problems caused by deepfakes: as the technology improves, it will become nearly impossible for society to know whether a video is real or not. Moor states, “As technological revolutions increase their social impact, ethical problems increase.” In an era of disinformation and fake news, deepfake technology will be used especially in the political realm, with dangerous consequences.
Moor argues that current ethical practice is not doing enough to address these issues proactively. Lawmakers and companies wait until a problem exists before creating laws or standards to address it. This has dangerous consequences, especially since once a technology reaches the Power stage, it is very hard to control. By being proactive at each step, Moor believes, we will learn about a technology as it develops, and we will be able to choose whether or not to adopt it by weighing its benefits against its consequences.


In the case of deepfakes, some states have taken initiatives to ban the technology. Until deepfakes are banned worldwide, however, the technology will continue to strengthen. While deepfakes may offer practical benefits for improving deep learning, the ethical ramifications are too large, and the consequences far too great.

2 comments:

  1. Deepfakes are a great example of the ethical dilemma many researchers face: the concern that their invention may fall into the wrong hands and be used for evil. However, this is in no way a reason to ban or restrict technological development. The fear of AI has held many philosophers and ethicists back from accepting technological innovation, but the only way to prepare for it is to accept that the tech is going to come, and to develop countermeasures in a technological arms race. If one chooses to suppress innovation, it leaves them naked and unprepared when an opponent decides to invest in the research and weaponize the tech. This is currently what is happening with deepfakes: as the realism of these videos increases, so does the accuracy of the programs designed to detect fraudulent ones. The question, then, is this: given that the technology cannot be suppressed, how should governments and lawmakers prepare for it?

    ReplyDelete
  2. Hi, I enjoyed reading your post! You did a good job deriving information from the Moor reading and connecting it to the idea of deepfakes. I also like that you provided many links to external information. To improve, I suggest splitting your paragraphs/ideas into smaller chunks in order to improve readability.

    ReplyDelete

Note: Only a member of this blog may post a comment.