Thursday, February 6, 2020

The Automotive Industry: A Bunch of Dummies

In "Values in Technology," Philip Brey defines bias in information systems as "unfair discrimination against certain individuals or groups who may be users or stakeholders." Never has that been more evident than in the automotive industry, where technical biases embedded (unintentionally) in a product's design can mean the difference between life and death for particular groups of people.

In a TechCrunch article, Carol Reiley discusses the use of crash-test dummies in safety testing. For years, the automotive industry has used crash-test dummies as the standard measure of how well a car protects its driver in an accident. The issue, though ridiculously obvious in hindsight, is that these dummies were designed to resemble a male of average build. So as automakers rolled out new cars to the world, potential consumers who didn't match that build were at a higher risk of being fatally injured in a crash. In fact, Reiley states that female drivers were 47 percent more likely to be seriously injured in a car crash. One can assume that the engineers behind the testing were mostly males of average build; the bias thus reveals itself as a potentially deadly flaw in safety testing.

Philip Brey also raises the idea that a product's embedded values do not necessarily reflect the values of those responsible for its design. With a learning system, the engineer designs the product to learn, but what it actually learns is largely out of the engineer's hands unless he chooses to restrict its learning capabilities, and sacrifice some efficiency, for the sake of eliminating potential biases.

Within the automotive industry, self-driving cars are the hot new topic. Unfortunately, these cars may rely on software and hardware with the same flaw as the infamous "Whites Only" automatic sink, leaving the vehicles similarly inept at recognizing who is and is not human. If the artificial agents embedded in these self-driving cars fail to properly identify humans because of their skin color, then people in certain racial groups face a higher risk of being hit by the vehicle. Imagine a scenario where the vehicle must prioritize certain lives over others (a popular moral dilemma): the vehicle may unintentionally hit a group of people because it failed to recognize them as human in the first place, negating the dilemma entirely.
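To make the detection problem concrete, here is a minimal, purely illustrative Python sketch of the kind of disaggregated check that could surface such a bias. The data, group labels, and detector results are all hypothetical (they are not from Reiley's article or Brey's chapter); the point is simply that measuring a pedestrian detector's hit rate per skin-tone group reveals a gap that a single aggregate number would hide.

```python
# Hypothetical audit of a pedestrian detector, disaggregated by skin-tone group.
# All values below are made up for illustration only.

from collections import defaultdict

# Each record: (skin_tone_group, whether the detector found the pedestrian)
results = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", True),
]

# Tally detections and totals per group.
counts = defaultdict(lambda: {"detected": 0, "total": 0})
for group, detected in results:
    counts[group]["total"] += 1
    counts[group]["detected"] += int(detected)

# Report the detection rate for each group separately.
for group, c in counts.items():
    rate = c["detected"] / c["total"]
    print(f"{group}: detection rate = {rate:.2f} ({c['detected']}/{c['total']})")

# A single overall detection rate across all records would mask the
# difference that the per-group numbers make visible.
```

This is the sort of transparency the next paragraph calls for: the flaw only becomes visible once performance is broken out by the groups the design might be failing.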

As particular technical designs prove discriminatory, we must demand transparency. We need to identify and analyze morally problematic features that would otherwise remain hidden so that these flaws can be eliminated. If policies were established around these design practices, we could prevent flaws that unintentionally inflict harm on particular groups of people.

1 comment:

  1. The article does well with bringing the readings in very early, and I also enjoy its title. It gives information about the readings and shows how that information leads up to the argument at the end. A possible recommendation might be to change the wording or the position of the paragraph explaining Brey's embedded values. I feel this paragraph is awkwardly placed between the two examples you are using to support your argument, and it may be good to explain further what embedded values in technology are.

