Friday, February 21, 2020

Floridi Redefines Evil: An Oxford Philosopher's Analysis of Intelligent Warfare

We’ve all seen hit flicks such as Terminator and wondered how we’d measure up against rogue, omnipotent demi-robots unleashed by an overly ambitious programming team chasing military superiority. With unforeseen developments in the area of intelligent warfare, should we start searching for a modern-day John Connor to protect, or simply halt the development of modern weaponry due to its clear breach of moral responsibility?
To keep humanity from resembling a figment of James Cameron’s imagination, renowned Oxford philosopher Luciano Floridi has teamed up with fellow researcher J.W. Sanders to address the morality of emerging warfare in their paper “Artificial evil and the foundation of computer ethics” (2001). The English duo notes that evil has traditionally been divided into two roots: human-engineered and naturally occurring. With the rise of artificial intelligence, however, the two devise a new categorical evil - one reserved for beings with not a heart but a CPU. They frame this cyber evil as unique, yet are quick to implicate us in its origin, illustrating how the emergence of artificial intelligence is impossible without human involvement.
Looking around, it’s hard to deny this sense of responsibility. In the interest of counterterrorism, we’ve developed the capacity to eliminate a target thousands of miles away with only a single photograph of their appearance. When these missions go awry, as with the 160 Pakistani children unintentionally killed by drones during George Bush’s aerial campaign1, it is our hands holding the remote that caused their demise, further solidifying Floridi’s argument about man’s involvement in the rise of the machine.
Yet despite these technical mishaps and ethical blunders, nearly every global actor has dabbled in intelligent warfare for strategic purposes, with Russia going as far as to develop land robots equipped with turret arms and an autonomous “search-and-destroy” mode2. This innovation is the most disturbing, as it creates a sense of distance from evil for the humans who are technically not pulling the trigger. Floridi, however, knows that this physical distance is not a moral one, and that we are approaching an era that compromises our morality further than ever before. Whether or not you agree with Floridi’s assessment and his implied call to de-escalate advanced weaponry, it’s officially time to say “hasta la vista” to traditional warfare as we know it.

                    Our future, if we’re not careful. One day, she’ll know the answer.

3 comments:

  1. Hi Dominick. Great post! I enjoyed the Terminator references, as well as the examples you used in the post. Is there a footer missing for the George Bush aerial campaign and Russian robot references? I understand your 'distance from evil' stance regarding the Russian robot, but I believe there is still accountability within the designers and engineers of the robot's algorithm. Again, nice job!

  2. Hi Dominick,
    Really well written, except I think a lot of this feels more like it could be in an essay than a short blog post. I would suggest shortening the first half of the post, and really setting the tone for your discussion about technological warfare.
    What interests me the most about this topic, which I think you left out, was the ethical side of this advancement in technology. It's not just an algorithm that discriminates unintentionally, or a self-driving car that has encoded bias against pedestrians, but these machines are specifically designed to kill. Who is accountable, and who is responsible? I challenge you to take a side, rather than just signing off by saying that things will never be the same.

  3. Hi Dominick, your blog post was entertaining. 80s films like Terminator exemplify many of the fears and fantasies people had about the future of human-technology integration. However, the transition between your introduction and the rest of your post could use improvement. I think you should change the tone and include more details of what you’ll be discussing in the rest of the post. It seemed awkward to me because later in the post you mentioned heavy topics like the death of innocent civilians due to drone strikes. I really like your topic, there is so much you could say about the ethics of artificial intelligence in military weaponry. Since the word limit for blog posts is so small, you could improve by focusing on a smaller aspect of the topic instead.

