“Please have
your eyes open in the photo.”
When I got this feedback from the ID application platform after submitting my photo, I struggled
to contain my sigh. It was my third submission and I was getting very
annoyed. Anyone could tell my eyes were open, but my photo had to be processed
by an algorithm before it could be sent to HR, so I was stuck in a vicious
cycle.
“Smile!” After
taking over 20 professional headshots, I was too tired to do so. Ironically, it
was the one where I gave up on smiling that ended up getting approved.
When I saw
Joz Wang’s post about how her Nikon Coolpix S630 thought she was blinking, I recalled
my own experience and shuddered. On further reflection, I realized that
my experience had become so normal to me that I had failed to
acknowledge how flawed the technology behind it was.
The software fixated on the racial characteristics of my face, and its only response was
to request a photo that better fit its requirements.
David
Gelernter argues in “The Second Coming - A Manifesto” that we “don’t believe
in technological change,” and as a result we accept bad computer products and adapt
to their flaws.
This
conformist approach is structurally flawed: the facial recognition
software simply flags the features of my face that do not conform to the
metrics it currently uses.
Philip
Brey champions the embedded values approach, which holds that computer systems
are not morally neutral. From my experience, we are certainly training racist
ML systems, because we lack the metrics that would have made facial
recognition accurate across different races.
However,
I believe our tendency to conform and comply has also made us too accepting
of these flawed systems. Perhaps, from the beginning, we trained racist ML
systems unknowingly because of a lack of backtesting. Unfortunately, these racist ML systems have also trained us to be racist,
by getting us to simply accept that we will not receive the appropriate treatment
because of our differences.
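To make the point about missing metrics and backtesting concrete, here is a minimal sketch of the kind of disaggregated check that could catch this failure: measuring how often a classifier wrongly rejects open-eyed photos, separately for each demographic group instead of as one overall average. Everything in it is an assumption for illustration; the classifier, the group labels, and the function names are hypothetical, and nothing about the actual ID platform’s software is known.

```python
from collections import defaultdict

def disaggregated_rejection_rates(examples, predict_eyes_open):
    """Report the false 'eyes closed' rate per demographic group.

    examples: iterable of (photo, group, eyes_actually_open) tuples
    predict_eyes_open: a hypothetical classifier, photo -> bool
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for photo, group, eyes_open in examples:
        if not eyes_open:
            continue  # only measure photos where the eyes really are open
        totals[group] += 1
        if not predict_eyes_open(photo):
            errors[group] += 1  # wrongly rejected as "eyes closed"
    # Per-group error rates; a single aggregate number can hide a group
    # whose photos are rejected far more often than everyone else's.
    return {g: errors[g] / totals[g] for g in totals if totals[g]}
```

Comparing these per-group rates, rather than one headline accuracy figure, is the kind of backtesting this post argues was missing.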
Only by
questioning the status quo can we break out of this vicious cycle of racism
training. The question is whether we are truly willing to put in the effort
required to be heard on this issue, or whether technology has pampered us into a comfort so deep that we will do nothing
about it.
Great post. I like the point you make and the questions you uncover by inspecting how these technologies function. I also like the way you wrote your post - well written but in a blog style that is easy to read. I do think you could do a better job of incorporating the readings throughout your post though.