Wednesday, February 5, 2020

Who Lives? Who Dies? Your car will decide.

Let’s jump 50 years into the future: everyone has a self-driving car now, and oh man, they are cool. After class, you take a relaxing ride home in yours, but on the way your car’s brakes fail! With an intersection approaching fast and no foreseeable way to save everyone, your car has a very important decision to make.
Your car can either: 
A. Continue through the intersection, saving yourself, but killing one street-crossing pedestrian.
B. Swerve into a barrier, saving the pedestrian, but killing you, the passenger.
As the passenger in this scenario, what would you want your car to do?

Personally, I would want my self-driving car to save itself and its owner, even if that means killing the pedestrian. But what if the pedestrian were a child? Or a homeless person? Or a doctor? Or what if there were three pedestrians? What about the scenario shown below?
Five people are inside the car, five people are in the crosswalk, and the pedestrians are crossing illegally against a red signal.
These dilemmas are variations of the classic trolley problem, but now the stakes are much higher, and the scenarios could be playing out daily. In 2070, we will expect our self-driving cars to make these kinds of moral decisions on the spot, yet without a strong ethical and moral foundation for decision-making, a car would likely just choose whichever option saves the most people, which is not always viewed as the most ethical move. This is the dilemma that Luciano Floridi and J.W. Sanders identified in their 2004 article, On the Morality of Artificial Agents. Frances Grodzinsky built on their ideas in her 2008 paper, Ethics of Designing Artificial Agents, which directly discusses how developers should design artificial agents with learning and intentionality so that the result is a morally motivated artificial intelligence.
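To see why that worries researchers, here is a tiny sketch in Python of that naive "save the most people" rule. It is purely illustrative, not any manufacturer's actual code: it only counts bodies, so it cannot even break the tie in the one-passenger-versus-one-pedestrian scenario at the top of this post.

# A minimal sketch (not any real autopilot logic) of the naive
# "save the most people" rule: each option is just the number of
# lives that would be lost if the car picks it.
def choose_outcome(options):
    """Pick the option that kills the fewest people.

    options: dict mapping an option name to its fatality count,
    e.g. {"continue straight": 1, "swerve into barrier": 1}.
    Ties are broken arbitrarily, which is exactly the problem: the
    rule knows nothing about who the people are or how they got there.
    """
    return min(options, key=options.get)

# The intersection scenario from the start of this post:
print(choose_outcome({"continue straight": 1, "swerve into barrier": 1}))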

This is the problem that researchers at MIT set out to solve in 2016 when they created the Moral Machine, an online experiment designed to crowdsource a human foundation for moral decision-making in autonomous vehicles. Since the Moral Machine was released onto the internet, over 40 million decisions have been recorded in ten languages from people in 233 countries and territories. Using machine learning on this data, MIT’s Moral Machine team has mapped how people around the world weigh these trade-offs, giving autonomous vehicles an educated, human-grounded basis for deciding who lives and who dies. I highly recommend heading to the Moral Machine website and judging a full set of 13 scenarios; at the end, the site will give you an in-depth breakdown of your decision-making preferences. Here’s my report; post links to your reports in the comments below!
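Just to illustrate how that crowdsourced data could plug into an actual decision, here is one more rough sketch. The category weights below are numbers I made up for the example, not figures published by the Moral Machine team; the point is only that aggregated human preferences could become per-person weights that break the ties a plain body count cannot.

# A hedged sketch of preference-weighted decision-making. The weights are
# hypothetical, illustrative values, not data from the Moral Machine: a
# higher weight means the public is more reluctant to sacrifice that person.
HYPOTHETICAL_WEIGHTS = {
    "child": 1.4,
    "adult": 1.0,
    "jaywalking adult": 0.8,  # crossing against the red signal
}

def moral_cost(casualties):
    """Sum the preference weights of everyone an option would kill."""
    return sum(HYPOTHETICAL_WEIGHTS[person] for person in casualties)

def choose_weighted_outcome(options):
    """Pick the option with the lowest weighted moral cost.

    options: dict mapping an option name to the list of people it kills.
    """
    return min(options, key=lambda name: moral_cost(options[name]))

# The five-versus-five scenario pictured above: passengers vs. jaywalkers.
scenario = {
    "continue straight": ["jaywalking adult"] * 5,
    "swerve into barrier": ["adult"] * 5,  # the five passengers
}
print(choose_weighted_outcome(scenario))  # "continue straight" under these weights

Whether that output is actually the ethical answer is, of course, exactly the debate Floridi, Sanders, and Grodzinsky are having.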

1 comment:

  1. I loved reading your blog, and I think your application of Floridi’s and Grodzinsky’s papers to self-driving cars was really interesting. The ethical decisions required in these scenarios also seem a lot more ambiguous than some of the other things we’ve discussed, so I’m glad you decided to write about it. I really liked how you set up your blog with the car’s decision/diagrams and context from MIT’s Moral Machine, but I think it would be useful if you incorporated a little bit more from Floridi’s and Grodzinsky’s readings for analysis. If I remember correctly, an important part of Grodzinsky’s paper was that, even if an artificial agent like a self-driving car has learning and intentionality, the designer still has some moral responsibility for the actions of the agent. How do you think that applies to this scenario? What stake do you think designers have in the moral decisions of self-driving cars?

