Friday, January 24, 2020

Big Brother 2020: The Shady Dealings of Clearview AI

Graphic of large eye in front of a background of smaller pixel art eyes.
I think many of us can agree that privacy is rapidly becoming a lost dream, if not
completely forfeited by now. A few days ago I read a piece by New York Times reporter
Kashmir Hill about Clearview AI, an elusive startup that boasts of its facial recognition
abilities. Clearview's technology is built on a database of over 3 billion images collected
from social media sites such as Facebook and YouTube. The app is currently available only to
law enforcement, but its CEO, Hoan Ton-That, has hinted at a public release, and the company
has reportedly prototyped glasses augmented with the app.
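
Clearview has not said publicly how its matching works, so the sketch below is only a generic illustration of how a reverse face search over a scraped photo database is typically built: a face-embedding model turns each collected image into a vector, and a query photo is compared against the whole index by similarity. The embed_face() function is a hypothetical stand-in, not anything from Clearview.

```python
# Generic sketch of a face search over a scraped photo database.
# Clearview's actual implementation is not public; embed_face() below is a
# hypothetical placeholder for a real face-embedding model.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical: map a cropped face image to a unit-length feature vector."""
    raise NotImplementedError("placeholder for a real face-embedding model")

def build_index(photos: list[np.ndarray]) -> np.ndarray:
    """Embed every scraped photo once; each row of the matrix is one face vector."""
    return np.stack([embed_face(p) for p in photos])

def search(index: np.ndarray, urls: list[str], query: np.ndarray, top_k: int = 5):
    """Return the source URLs whose faces are most similar to the query photo."""
    q = embed_face(query)
    scores = index @ q                       # cosine similarity (unit vectors)
    best = np.argsort(scores)[::-1][:top_k]  # highest-scoring matches first
    return [(urls[i], float(scores[i])) for i in best]
```

What makes this so dangerous is less the math than the data: each match carries the URL it was scraped from, which is exactly what would let a stranger's photo lead back to a name, a profile, and from there an address.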

The prospect of this terrifies me, and it should terrify anyone else who values the last shreds
of privacy they have left. Clearview AI's capabilities conjure up images of stalking,
misidentification and doxxing. The company also raises many of the issues James Moor
has brought forth about computer ethics. Moor's Law states that as technological revolutions
increase their social impact, ethical problems increase. With this breakthrough, I feel that
facial recognition is on the cusp of its power stage, where the technology becomes widely
available and creates major social impact. It is imperative that an ethical standard be
placed on these apps to prevent abuse, if they should be allowed at all.

Hoan Ton-That, Clearview AI's CEO, shows the results of a search for a photo of himself in New York.
Photo credit: Amr Alfiky c.2020 The New York Times Company

What’s even more concerning is the questionable history of both the company and Ton-That.
Before his claim to fame, Ton-That was known for running viddyho.com, a phishing website
that spammed the contacts of unsuspecting instant-messenger users after they entered their
Gmail login information. Clearview also has curious connections to right-wing political
figures. Richard Schwartz, Clearview's co-founder and an early backer, met Ton-That at an
event held by the Manhattan Institute, a conservative think tank, and Schwartz used his
connections with Republican politicians to drum up interest at police departments. The most
shocking example, however, comes from one of their early pitches of the app to Wisconsin
congressional candidate Paul Nehlen. Nehlen is known for his neo-Nazi views and has been
banned from Twitter for racist and anti-Semitic tweets. Schwartz and Ton-That approached him
with their app for use in "extreme opposition research," I suppose to collect sensitive
information on those who dared to oppose white nationalism. This should be reason enough to
believe that Clearview AI cannot be trusted with anyone's data.

When Hill confronted Ton-That about the possibility that someone with this technology could
use it to find the names and addresses of people walking down the street, his reply was
“I have to think about that.” I find it astounding that in the almost four years Ton-That has
been developing this app, such ethical questions have not been considered. This is either a
result of gross oversight, or Ton-That is talking major bullshit.

3 comments:

  1. I like the way you incorporate hyperlinks as references in the article. It helps me understand what is going on. I agree with you that a company should consider what could result from its product when developing it. Building a product without thinking about its possible uses is unethical.

  2. Your post is very nicely formatted and aesthetically pleasing to scroll through. I like how you hyperlink many articles throughout the blog. Also, it's good that you gave Kashmir Hill's full name. However, I would give a one to two sentence explanation as to who this is, if possible, because I have never heard of him. You also include class content throughout the article, which is great. Also, something that you don't necessarily have to change at all, but that I noticed is that the sentence after "racist and anti-Semitic tweets" is black colored font and the rest of the text is a dark grey.

  3. I also wrote about Clearview AI in my blogpost and really like the take you took on this topic and your personal voice throughout the blog. The connections you made to political figures and views I believe is very important and you do a good job of explaining how this company is looking past ethical problems by giving examples of the CEO and his past. You do a great job of tying all of these points to Moor's points. The blog is very engaging!

