Friday, February 21, 2020

There are Racists and Sexists in the Tech Industry, and They're Not People

When you hear the names IBM and Microsoft, what comes to mind? Most people think of two of the oldest and most reputable tech giants of the modern era. Both companies have seen massive success recently thanks to changes in leadership and major innovations in technology. One of those innovations is facial recognition, which has found applications throughout everyday life. Many large tech firms are trying their hand at this multi-billion-dollar industry, but few can perform as well as these two giants. Despite several breakthroughs and impressive feats of engineering, there are still some very troubling aspects of this exciting and expanding sector of technology.

Algorithmic bias is not a new topic of discussion; it has been a concern for several years. However, without knowing what is "under the hood" of the underlying algorithm, it is difficult to tell whether the bias is introduced by the engineers who built the technology or by the technology itself. One explanation for this issue was proposed by Philip Brey, a professor of philosophy of technology at the University of Twente. Brey introduced the Embedded Values Approach, in which he states that "computer systems and software are not morally neutral and that it is possible to identify tendencies in them to promote or demote particular moral values and norms." You may be wondering why this approach is relevant to implicit biases in facial recognition and in technology more generally. The results of a study conducted by an MIT student, comparing the accuracy of facial recognition systems from some of the largest tech companies in the world, may shock you.

In 2018, Joy Buolamwini set out to determine whether algorithmic bias was a legitimate problem. While completing her MIT thesis, titled Gender Shades, she found a huge disparity in the accuracy rates of different facial recognition platforms. Buolamwini noticed that IBM's and Microsoft's platforms performed significantly better on males than on females, and all of the companies performed better on lighter-skinned subjects than on darker-skinned subjects. This has been a known and well-documented issue with another large platform, Snapchat, which has faced backlash because its filters, designed to manipulate people's faces for quick and "silly" pictures sent to friends, cannot pick up the faces of people with darker skin tones. This is problematic for many reasons, but it also helps to validate Brey's embedded values approach.
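
To make the kind of comparison Gender Shades drew more concrete, here is a minimal sketch in Python (with entirely hypothetical group labels and data, not Buolamwini's actual code or dataset) of a disaggregated audit: instead of reporting one overall accuracy number, it reports accuracy separately for each demographic group, which is where disparities like the ones described above become visible.

    from collections import defaultdict

    def accuracy_by_group(records):
        # records: iterable of (group, predicted_label, true_label) tuples
        correct = defaultdict(int)
        total = defaultdict(int)
        for group, predicted, actual in records:
            total[group] += 1
            if predicted == actual:
                correct[group] += 1
        return {group: correct[group] / total[group] for group in total}

    # Hypothetical gender-classification results: overall accuracy looks fine,
    # but the error is concentrated in a single subgroup.
    sample = [
        ("lighter_male", "male", "male"),
        ("lighter_female", "female", "female"),
        ("darker_male", "male", "male"),
        ("darker_female", "male", "female"),  # misclassified
    ]
    print(accuracy_by_group(sample))

The point of disaggregating is simply that an aggregate score can hide the fact that nearly all of a system's errors fall on one group of people.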

Looking back to what Professor Philip Brey said about embedded values, we now see that the norms these technologies may be promoting favor lighter skin tones and male-dominant features. In his 2010 paper, Values in Technology and Disclosive Computer Ethics, Brey also discusses three different kinds of bias that arise in information systems: pre-existing, technical, and emergent bias.

As mentioned earlier, it is difficult to tell whether these biases were created by the algorithms themselves or whether pre-existing biases were embedded into the systems by the people who created them. This is a situation in which we certainly see aspects of both pre-existing and technical bias at work. Given that there are tens of thousands of engineers between the two tech giants mentioned, it is entirely possible that individuals with pre-existing biases designed the systems. It is also possible that the flaw lies in the algorithm itself, implying a technical bias. Regardless of which kind of bias these systems display, it is clear that they are flawed and demand correction.
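
One concrete pathway for pre-existing bias is the training data itself: if the faces a system learns from skew heavily toward one group, the finished product will tend to work best for that group regardless of any individual engineer's intent. Below is a minimal sketch (with hypothetical labels and counts, not any company's real data) of how such a skew could be surfaced before training.

    from collections import Counter

    def composition(group_labels):
        # Fraction of the training set belonging to each demographic group.
        counts = Counter(group_labels)
        n = sum(counts.values())
        return {group: count / n for group, count in counts.items()}

    # Hypothetical training set: 70% lighter-skinned male faces.
    training_groups = (["lighter_male"] * 700 + ["lighter_female"] * 200
                       + ["darker_male"] * 70 + ["darker_female"] * 30)
    print(composition(training_groups))

A check like this does not fix the bias, but it makes the imbalance visible so it can be corrected before the system ships.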


2 comments:

  1. Hi Aditya, I found your post to be very interesting, as it is something I had not heard of or thought about before. You do a good job of connecting your topic to concepts from the class, and even relating it to the readings. I also liked how you were able to connect the topic to a more mainstream audience by using Snapchat as an example, because it is an easy form of facial recognition that people use daily but don't really think much about. I do think that you did a good job introducing your topic; however, I was a little confused about the direction you were going with it at first. I think if you introduced the topic of sexism and racism in facial recognition a little bit earlier on, it would help with the transition and flow of the post. Other than that, I think you have a great post and I enjoyed reading it.

  2. Hi Aditya, I felt your post was quite focused and well written. I think it would have been nice to include recommendations about how companies can embed better values into their systems (perhaps by using more diverse groups of people for sample datasets). Using IBM and Microsoft in the first paragraph also seemed a bit out of place, as you did not elaborate on the systems created by these companies until the third paragraph.

    I wonder if the situation with IBM and Microsoft could stem from the fact that more developers there are male (as STEM is traditionally male-dominated), so the algorithm would have mostly male faces to draw from as sample data. In that case it may not be that the developers themselves have pre-existing biases, but rather the organization or society as a whole.

