Friday, February 7, 2020

There are Racists and Sexists in the Tech Industry, and They're Not People

When you hear the names IBM and Microsoft, what comes to mind? Most people think of two of the oldest and most reputable tech companies of the modern era. Both have seen massive success in recent years, with Microsoft installing Satya Nadella as CEO in 2014 and Arvind Krishna set to take the helm at IBM in two months. The two companies are also part of one of the most exciting races in the tech industry today: facial recognition and its applications. Many companies are trying to make their mark in this multi-billion-dollar space, but nobody seems able to perform as well as the two aforementioned tech giants. At least, nobody seems able to perform as well as the two aforementioned tech giants when it comes to classifying white males.

Philip Brey, a professor of philosophy of technology at the University of Twente, proposed the embedded values approach in 2010. Brey states that "the embedded values approach holds that computer systems and software are not morally neutral and that it is possible to identify tendencies in them to promote or demote particular moral values and norms." You may be wondering why this approach is relevant to implicit biases in facial recognition, and in technology in general. The results of a study comparing accuracy rates across the facial recognition systems of some of the largest tech companies in the world may shock you.

Joy Buolamwini is a researcher at the MIT Media Lab
Joy Buolamwini was completing her MIT thesis, Gender Shades, when she found a huge disparity in the accuracy rates of different facial recognition platforms. She found that IBM's and Microsoft's platforms performed significantly better on males than on females, and that every company's platform performed better on lighter-skinned subjects than on darker-skinned ones. Looking back to what Professor Brey said about embedded values, we can see that the norms these technologies may be promoting favor lighter skin tones and male-dominant features. In his 2010 paper, Values in Technology and Disclosive Computer Ethics, Brey also discusses three kinds of bias that arise in information systems: preexisting, technical, and emergent biases.
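The core method behind these findings is simple: instead of reporting one overall accuracy number, break the evaluation results down by demographic subgroup and compare. A minimal sketch of that disaggregation, using made-up records rather than the actual Gender Shades data:

```python
# Sketch of disaggregated accuracy: group evaluation records by
# demographic subgroup and compute accuracy per group.
# The records below are hypothetical, not real benchmark results.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, truth in records:
        total[group] += 1
        if pred == truth:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical gender-classifier outputs for two subgroups.
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "male", "female"),   # misclassification
    ("darker_female", "female", "female"),
]
print(accuracy_by_group(records))
# {'lighter_male': 1.0, 'darker_female': 0.5}
```

A single aggregate accuracy over these four records would read 75% and hide the gap; the per-group breakdown is what exposes it.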

This is a situation in which we see aspects of both preexisting and technical bias. Given that there are tens of thousands of engineers between the two tech giants, it is entirely possible that individuals with preexisting biases designed these systems. It is also possible that the flaw lies in the algorithms themselves, implying a technical bias. Whichever kind of bias is at work, it is clear that the systems are flawed. IBM and Microsoft responded to these findings swiftly, claiming to be reworking the data sets on which the systems were trained and improving the underlying algorithms. Even so, the damage has been done, and we see that racism and sexism are making their way into one of the most progressive industries in the world.

1 comment:

  1. Your article interested me because of the facial recognition advancements happening with multiple tech giants recently. I really love your choice in images and your initial quote from Brey. Your title confuses me a little because there are pre-existing biases in engineers, who are humans, from these large tech companies that are unintentionally designing their systems. Also, maybe instead of talking about IBM and Microsoft, you could focus on one to be more specific in their facial recognition biases. It would be cool to see some company statistics in how well their datasets perform.

