Friday, February 7, 2020

Facial Recognition, Law Enforcement, and Technological Racism


For the past few years, many American cities, including Detroit, have been using unreliable facial recognition technology for law enforcement. Facial recognition systems have been shown to misidentify people of color and women at much higher rates: a study published by the National Institute of Standards and Technology found that some facial recognition algorithms are 10 to 100 times more likely to falsely match Asian and Black faces than white faces. The study also found that Black women in particular were more likely than any other group to be falsely identified in a large database of FBI mugshots, revealing how dangerous facial recognition technology is in perpetuating racial discrimination in our criminal justice system.
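To make the disparity concrete, the kind of measurement behind findings like NIST's can be sketched in a few lines: given a set of face-verification trials labeled with each subject's demographic group, you compute the false match rate separately per group and compare. The following is a minimal, hypothetical illustration (the trial data, group labels, and threshold are all made-up assumptions), not NIST's actual evaluation code.

```python
# Minimal sketch: per-group false match rate (FMR) from labeled
# verification trials. Hypothetical data; not NIST's methodology.
from collections import defaultdict

# Each trial: (similarity_score, same_person, demographic_group)
trials = [
    (0.91, False, "white"), (0.35, False, "white"),
    (0.88, False, "black"), (0.93, False, "black"),
    (0.97, True,  "white"), (0.95, True,  "black"),
]

THRESHOLD = 0.80  # assumed decision threshold

impostor_total = defaultdict(int)  # non-matching pairs seen per group
impostor_hits = defaultdict(int)   # non-matching pairs wrongly accepted

for score, same_person, group in trials:
    if not same_person:            # impostor comparison
        impostor_total[group] += 1
        if score >= THRESHOLD:     # false match
            impostor_hits[group] += 1

for group in impostor_total:
    fmr = impostor_hits[group] / impostor_total[group]
    print(f"{group}: FMR = {fmr:.2f}")
```

A large gap between the per-group rates is exactly the kind of disparity the NIST study quantified at scale.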

These racial biases are built into the technology from the beginning. In "Values in technology and disclosive computer ethics," Philip Brey discusses how pre-existing biases (one of the three origins of bias outlined by Friedman and Nissenbaum) emerge from "values and attitudes that exist prior to the design of a system." The disparities in facial recognition technology, I would argue, arise from societal, pre-existing biases that treat white men as the favored default. One popular benchmark used to train facial recognition software, Labeled Faces in the Wild (LFW), contains more than 13,000 face photos; however, 83% of the photos are of white people and 78% are of men. When the training sets for facial recognition software are overwhelmingly white and male, a bias against women and people of color is built directly into the software.
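The skew itself is easy to surface once you count. As a rough sketch, auditing a training set's demographic composition is a one-pass tally; the records and label names below are hypothetical stand-ins for real dataset metadata, not LFW's actual annotations.

```python
# Minimal sketch: audit the demographic makeup of a face-photo
# training set. Records and labels are hypothetical stand-ins.
from collections import Counter

dataset = [
    {"file": "img_0001.jpg", "race": "white", "gender": "male"},
    {"file": "img_0002.jpg", "race": "white", "gender": "male"},
    {"file": "img_0003.jpg", "race": "black", "gender": "female"},
    {"file": "img_0004.jpg", "race": "asian", "gender": "male"},
]

for attribute in ("race", "gender"):
    counts = Counter(record[attribute] for record in dataset)
    total = sum(counts.values())
    print(attribute)
    for value, n in counts.most_common():
        print(f"  {value}: {n / total:.0%}")
```

Run against LFW's metadata, a tally like this is what yields the 83% white / 78% male figures cited above.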

So what can we do from here? In "The ethics of designing artificial agents," Grodzinsky argues that even when an artificial agent is capable of learning and intentionality, the designer retains a strong moral responsibility for the agent's work. I would argue that beyond trying to design ethical technology and bearing moral responsibility for their work, designers sometimes have a greater moral responsibility to prevent the creation and use of potentially dangerous technology in the first place. In "Information ethics," Floridi asserts that entropy ought to be prevented, and removed from the infosphere when it occurs. Similarly, I believe that instead of attempting to fix racial discrimination in facial recognition-based policing, which is the product of centuries of societal, pre-existing biases, we should stop using the technology for law enforcement entirely, preventing future moral problems before they arise.

1 comment:

  1. I like how you used visuals with your post. I feel like they worked very well at complementing the urgency and importance of the point you're trying to get across.

    You should try to better incorporate and relate ideas from your choice of reading into your argument. You briefly mention a few definitions but don't really expand on the importance of these ideas and the implications they may have in relation to what you're talking about.
