Friday, February 21, 2020

Why Deepfakes Must Die

In 2019, the CEO of a U.K. energy firm was tricked into transferring 220,000 euros to a fraudster. He had received a phone call from what he believed was his boss asking for the money; the voice had in fact been created with deepfake technology. Deepfakes are extremely concerning from an ethical standpoint. According to Wikipedia, “Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else's likeness.” They have gained widespread attention as a result of their use in celebrity pornography, revenge porn, and fake news, along with hoaxes and financial fraud. I believe this technology is going through a technological revolution, a process described by James Moor, a researcher in information ethics. Moor identifies three stages of a technological revolution. As deepfakes progress through these stages, it will become impossible to control them. For this reason, I believe we need to ban deepfake technology before it is too late.

The first stage is the Introduction stage, often called the intellectual curiosity stage. Deepfakes began in the scientific research community as a way to improve deep learning. The problem now, I believe, is that we are in the second stage, Permeation. The cost of using deepfake technology has dropped radically, and deepfake software has become more standardized. An example of this is the DeepNude application, which was introduced and then removed as a result of public pushback.


The DeepNude Twitter account before it was shut down: "The superpower you always wanted."

The final stage, which I am concerned we are approaching, is the Power stage, where, as Moor says, “Most people in the culture are affected directly or indirectly by it.” In this case, we must fear the indirect problems caused by deepfakes. As the technology improves, it will become nearly impossible for society to know whether a video is real. Moor states, “As technological revolutions increase their social impact, ethical problems increase.” In an era of disinformation and fake news, deepfake technology will be used in the political realm especially, with dangerous consequences.

A recent example of how a modified video can be believed is a video of Speaker of the House Nancy Pelosi that circulated on the web. In the video, she is seen drunkenly slurring her words. An original video of Pelosi speaking had been slowed down to make her seem drunk, yet the video went viral on Facebook, and many people believed it was real. This shows just how dangerous deepfakes will be in the political arena as the technology improves. Companies are currently trying to create AIs to detect deepfake videos, but many experts believe that, as the technology improves, this will become an impossible task. So we face a future where what is real and what is fake may not be known.


The altered video made national news after going viral on Facebook.

Some states have already taken steps to ban this technology. Until deepfakes are banned worldwide, though, the technology will continue to grow stronger. While deepfakes may have practical benefits for improving deep learning, the ethical ramifications are too large, and the consequences far too great.



3 comments:

  1. Hi David, I think the topic you chose for your article is super interesting - deep fakes are a huge issue today, especially with things like fake news. I think your post could benefit from a little bit of reorganization, for example outlining what the stages of a technological revolution are, then explaining how deep fakes fit into them, and then stating that you think the technology should be banned. Because we know where the article is headed from your title, I think this structure could help your argument's flow. Overall great job!

  2. Hey David, awesome topic. Deep fakes are indeed threatening in more ways than one. I thought that your approach of bringing in an example of fraud from deep fakes for an introduction was a great strategy that was both informative and interesting. I think that more focus could be given to the final stage that you mentioned was the most worrisome. You bring up Moor's quotes here, but it could do with more analysis. I think a way to do that would be to reduce the effort spent on the introduction and permeation stages. Aside from that, this was a great read!

  3. Hey David, good revision! I think you did an excellent job improving several of the points you made, and I was happy to see your use of images with captions improve. One thing that I noticed is that the sentence from your first version, the one that cites your personal thoughts on ethical use, is absent from this version. I think that sentence did a good job of describing your personal worries about the technology. Your use of the recent deepfaked political video is an excellent example, and I was glad to see it used in this post. Overall I think you did a great job on your revision, and your revised sources were a great addition. Good job!

