Friday, February 7, 2020

DeepFake and frightening advancements in technology

A couple of months ago, a friend sent me a link to a YouTube video. I immediately clicked on the link out of curiosity and saw the title “Spot on impressions of Al Pacino and Arnold Schwarzenegger by Bill Hader”. “This should be fun”, I thought to myself, but as I watched the video, I saw Bill Hader’s face seamlessly morph into a young Al Pacino and Arnold Schwarzenegger. One second I saw Bill Hader, the next I saw Al Pacino. I didn’t know whether to be amazed or terrified. The video showcased both incredible technological prowess and the worrying ethical implications that come with it.

DeepFake is an AI technology that can synthetically replace one person’s face with another’s in photos and videos. It has enormous potential for misuse, with applications in celebrity pornography, false news and even financial fraud. It’s no surprise that, given these possibilities for misuse, an ethical discussion has emerged on how to appropriately regulate this technology.

What’s also disturbing about DeepFake is its ability to further confirmation bias in people. Eli Saslow’s article is a perfect example of this: it only took one glance at a clearly photoshopped picture of Chelsea Clinton and Michelle Obama giving Donald Trump ‘the finger’ to confirm some people’s pre-existing beliefs that Democrats are ungrateful and classless. If one photoshopped picture could affect people that much, imagine how much more adversely DeepFake could affect our society.

DeepFake does have useful applications. Film studios have been using DeepFake to insert younger versions of actors into films, which ironically adds a level of authenticity to them. However, it’s still unclear whether this niche benefit of DeepFake outweighs its potential to harm society.

While all technological advancement calls for greater ethical discussion, I believe that some technologies have greater potential for harm than others, and those need to be at the forefront of the conversation. As Moor states in his paper, “We at least collectively can affect our futures by choosing which technologies to have and which not to have and by choosing how technologies that we pursue will be used”. Artificial Intelligence is revolutionizing this world, but without this discussion, the consequences of this technological revolution could be disastrous.


2 comments:

  1. This post, along with the topic of DeepFakes in general, is very intriguing, and the possibilities for misusing this technology are almost limitless. I think bringing up the difference between lies and bullshit would have been an interesting take on this topic as well, as a DeepFake video has the potential to fall under either category. Your comparison with Saslow's article is very insightful, and I think you could have expanded on this idea of "seeing is believing" when it comes to online media. One option would be to add an image of an America's Last Line of Defense article, then compare the reactions to that with reactions to the DeepFake video you showed. Additionally, your very last sentence came a bit out of nowhere, as you never mentioned AI previously (unless you're regarding DeepFakes to be artificially intelligent). Overall, this is a great post; it just needs a few small changes.

    Replies
    1. I agree that it would be interesting to hear the author's thoughts on whether DeepFakes should fall under lies or bullshit - or perhaps we're at a point of technological advancement where they deserve their own category? It would also be super cool to tie this in with Moor's argument of emerging tech bringing about more ethical issues.

      Although this tech has a lot of malicious potential as you mentioned, just wanted to point out that there are already validation measures against GANs (generative adversarial networks, of which DeepFake is an example) that scrutinize audio and visual data to determine whether or not it's fake. And yup - GANs are a type of AI, hence the author's last sentence

