If you have Snapchat on your phone, you have probably sent a friend a selfie using one of its filters, also known as lenses. The photo messaging app uses facial recognition technology to let users transform their faces into child-like versions of themselves, swap genders, and more. Unfortunately, these filters are not so fun for everyone. Snapchat lenses have a hard time identifying women and darker-skinned people because of bias in the app's facial recognition algorithms.
How is this possible? In his reading "Values in technology and disclosive computer ethics," Philip Brey identifies three origins of bias: pre-existing, technical, and emergent. Pre-existing bias comes "from values and attitudes" the designer holds before designing the application, technical bias comes "from technical constraints or considerations," and emergent bias arises when the "system is used in a way not intended by designers."
Pre-existing bias and technical bias are the most relevant to why Snapchat lenses do not work for every demographic. Atima Lui, an African American woman with a darker complexion, has had trouble using Snapchat filters. She states that "if developers are racially biased, the technology they produce is likely to be racially biased, too." The technology industry is heavily dominated by white men, so the engineers at Snapchat could be unintentionally designing systems that work best for faces like their own. This reflects the developers' own attitudes, which is pre-existing bias.
Joy Buolamwini's research with the MIT Media Lab found "that darker-skinned females are the most misclassified group (with error rates of up to 34.4%)." Notably, these are the same demographics that are underrepresented in the tech industry.
On the other hand, technical bias may arise when there are not enough diverse datasets to train the algorithm. The facial recognition software may have been trained largely on datasets of fair-skinned people because those images were more readily available than images of minorities.
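To make the idea of disaggregated error rates concrete, here is a minimal Python sketch (not Snapchat's actual code) of the kind of audit research like Buolamwini's describes: run a face classifier on a labeled test set and report the error rate separately for each demographic group. The `predict` function, the group labels, and the sample data are all hypothetical placeholders.

```python
# Sketch of a per-group audit of a face classifier. The classifier,
# group labels, and data below are hypothetical, for illustration only.
from collections import defaultdict

def error_rate_by_group(samples, predict):
    """samples: list of (image, true_label, group); predict: image -> label."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for image, true_label, group in samples:
        totals[group] += 1
        if predict(image) != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Tiny illustration with made-up data and a stand-in classifier:
def dummy_predict(image):
    return image["guess"]

samples = [
    ({"guess": "face"}, "face", "lighter-skinned male"),
    ({"guess": "face"}, "face", "darker-skinned female"),
    ({"guess": "no face"}, "face", "darker-skinned female"),
]
print(error_rate_by_group(samples, dummy_predict))
# {'lighter-skinned male': 0.0, 'darker-skinned female': 0.5}
```

If one group's error rate is several times higher than another's, that gap is the measurable trace of the pre-existing and technical bias described above, and it often points back to training data dominated by a single group.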
Although Snapchat lenses are a fun way to connect with your friends and family, they have flaws that are the result of the different types of bias in the tech world that Brey describes. It is unfair that not everyone gets to experience technological advances because of their skin tone or gender.
I really enjoyed reading this article about the bias of Snapchat filters. First, I think you did a great job of explaining the three origins of bias from the Brey reading. I would suggest that you refer to another reading that is from the first 3 weeks of class, since it is a requirement for the blog posts. Another suggestion would be to create a better introduction to grab the reader's attention from the start. Other than that, I think the concept of bias was used very well with the example of bias in Snapchat filters. It felt like the post was about Brey's concept of bias, rather than being focused on the filters.
Hi Aashia,
The strongest part of this article was the contrast between the different ways these filter issues come up. It is good for us to consider not only how the filters are trained on datasets but also the tech world's lack of diversity.
"Snapchat Filters Don't Work" seems to be a rather boring title and the introduction dives into the issue too fast. Breaking up the paragraphs more and using better pictures (Such as one comparing how bad the software is at recognizing faces) would make the article feel and start better.