Saturday, February 22, 2020

Is technology racist or is the person who invented it?



"Did someone blink?" I can clearly tell that this girl's eyes are open, but the camera's sensors could not. Whose fault is that? The camera's? The inventor's?

David Hankerson, in his article "Does Technology Have Race?", raises a very interesting problem: some modern technologies perform in a discriminatory manner toward specific groups of people. For example, a soap dispenser detects a white person's hand but fails to detect a black person's hand. An Apple Watch cannot detect a person of color's pulse. A camera keeps asking whether the people in an Asian family's photo are blinking. And most strikingly, Google detects black people in photographs and categorizes them as "Gorillas" or "Apes".

What struck me in the article was reading that there are ways to fix these racial issues and controversies that technology is bringing about. If people in India are able to make their technology's sensors more inclusive of all skin types, then why can't America?

This ties into the discussion and controversy of bullshit and lies. Harry Frankfurt, an American philosopher, argues that bullshitting is not the same thing as lying: lying involves asserting false facts, while bullshitting is a sort of misrepresentation. As he writes: "Since bullshit need not be false, it differs from lies in its misrepresentational intent." In short, bullshitting is a misrepresentation of information; not lying about false things but twisting the truth.

I argue that failing to fix discriminatory technology, even with the ability to do so, is a form of bullshit. It is true that the means and advancements exist to make technology in the United States more accessible and less discriminatory, so the claim cannot be a lie. It may simply be bullshit that the people who produce and sell discriminatory technology happen to ignore the truth that change is needed.
This leads me to conclude that no, technology is not racist, and neither are the inventors of technology and algorithms. The users and providers are. As Hankerson discusses in his article, the fact that bias comes from privilege needs to be recognized and addressed. When designing a product, creators should work from an intersectional and inclusive lens rather than designing around a "straight white male". That is why I believe the people who control and teach the algorithms, and who have the power to change the technology, should bear some of the fault and should heed warnings like the ones Hankerson is putting out.


How are we going to call people in charge out on their bullshit? Hankerson's article was meant to raise awareness that technology is making people feel excluded. While I agree that it is important to spread this awareness, especially to those who do not deal with this issue, I also think the users and the companies who sell and provide this technology should follow guidelines to make it as inclusive as possible instead of bullshitting the truth. Especially when other countries are able to make these changes, misrepresenting and manipulating the truth, and ultimately continuing to produce technology that is racially biased, is unacceptable.

So who's to blame? It is hard to say, because we are unsure of the true intentions of the inventors of a product or algorithm. However, by changing who technology designs are shaped around, I believe change is possible.

Friday, February 21, 2020

Tinder Embedded Values


Human beings are creatures of habit. Shannon Vallor, an American philosopher and professor at Santa Clara University, argues in her article “Social Networking Technology and the Virtues” that these habits, the repetition of certain activities, are what define one’s moral character. Specifically, she notes that interacting with social media, a daily repetition for many of us, can alter one’s values for better or for worse by promoting certain values through the framework of how we interact with the software – what I will call embedded values.



Those of us who haven't been living under a rock know what Tinder is. For those of you who are a bit out of the loop, it is a location-based social media app that lets people in the same area chat with each other and potentially meet up. That was all Tinder was originally intended for, but it has transformed into a platform used to find people to hook up with. This behavior is a result of embedded values created by the way Tinder is programmed.



The two parts of the design that encourage this behavior are the gender interest prompt and the display of potential match profiles. One of the first questions Tinder asks when you sign up is which gender you are interested in matching with. This is the bit of programming that shifts the app away from a location-based social network and toward a dating site, since on ordinary social media you can be friends with genders you aren’t attracted to. When you look at a potential match on Tinder, what you initially see is the user's picture with their name and less than a sentence about them. Although you can view more details about them, the interface encourages users to swipe left (no match) or swipe right (match) based mainly on the picture provided. This is what turned Tinder into a hookup app: it went from the initial idea of matching you with people in your area with similar interests to matching you with attractive people in your area about whom you often know very little.


(Left) Gender interest prompt display
(Right) Potential match display








Some may not see this as a big issue -- so what if people want to hook up with attractive people close by? Fair enough, but Tinder has become so big that it could influence our society’s values, not just those of the people using the software. What it means to be in a relationship with someone, and what we want from our relationships, may drastically change. If we no longer care about intimate connection and love for another person based on personality, and what we want out of our relationships is simply sex, then this may not be an issue. But if those are values that we wish to uphold, Tinder may be something that we have to rethink or refrain from.

Mask Off: Fake Names and Problematic Profiles


Social media has earned its place as one of the most impactful technologies of the new millennium. Public usage of this technology has seen a meteoric rise: as of June 2019, 72% of Americans had created an account on at least one social media platform, and the average American spends 144 minutes a day on social networking apps. This deep and broad usage of social media throughout society has changed how we interact with the world around us and also how we interact with ourselves. Luciano Floridi describes transformative tech like social media as "forces that change the essence of our world because they create and re-engineer whole realities that the user inhabits".

One aspect that has changed most because of social media is our sense of identity. On some platforms, we are able to form our identities as we please, separated from reality’s limitations. This has made these platforms a haven for society’s most marginalized individuals, who face scrutiny offline for being who they are. However, loose identity verification has also enabled certain users to abuse this freedom for problematic purposes.

In “Constructing and enforcing ‘authentic’ identity online”, authors Oliver Haimson and Anna Lauren Hoffmann document how the policies of social media websites limit marginalized individuals from representing themselves in the way they please. Facebook, for example, requires that users represent themselves with their real, government-issued identity. This creates problems for people like transgender individuals, who face account shutdowns because of ID verification issues stemming from name changes.



One platform that remedies this issue is Twitter, which, as former CEO Dick Costolo states, “does not care about real names”. Twitter has no system for identity verification, which allows users to represent themselves as they please. For marginalized individuals, this is great and allows for full self-expression, but a sizable number of Twitter users abuse this system for harmful purposes.

“Other services say you have to use your real name because they think they can monetize that better and get more information about you." - Dick Costolo on Twitter

There are many examples of problematic Twitter usage supported by anonymity and false representation, including bullying, fraud, and digital blackface.



Though Twitter’s lack of a real-name policy has allowed some users to represent themselves authentically, it has also given way to harmful and inauthentic representation. A happy medium must be found that allows free expression while discouraging harmful misrepresentation moving forward.


Why Facebook’s Political Ad Policy is not Morally Neutral

Social media is a relatively new technology that has quickly shaped the landscape of political discourse in the United States. Between the 2016 Presidential Election and the subsequent investigation by Special Counsel Robert Mueller, people have begun to understand the profound influence social media has and how easily it can be abused.
Google and Twitter have since overhauled their political advertisement policies by banning microtargeted ads and dramatically limiting the presence of political advertisement on their platforms.

However, Facebook has refused to change its political advertisement policy and claimed that doing so would amount to censorship. The company has doubled down on the issue by refusing to ban politicians from lying in political ads.

Many people carry the misconception that social media as a technology cannot be biased or morally bad. Rather, it’s the people voicing their opinions on social media platforms who are solely responsible for the spread of disinformation. I disagree with this position completely.

According to Philip Brey in The Cambridge Handbook of Information and Computer Ethics, the design of computer systems has moral consequences. Through the development of software such as social media, application designers encode embedded moral values and norms. These embedded values can express themselves as tendencies that promote or undermine things such as privacy or freedom of information. In this way, technology itself can support or oppose moral values.

In the case of Facebook, by deliberately allowing political ads to spread lies on their platform, they are providing algorithmic infrastructures for the spread of disinformation. By refusing to change their policies, Facebook is not fighting against censorship but instead supporting policies that threaten the democratic process.

References
Brey, Philip. “Values in Technology and Disclosive Computer Ethics.” The Cambridge Handbook of Information and Computer Ethics, edited by Luciano Floridi, Cambridge University Press, Cambridge, 2010, pp. 41–58.

What's In A Cookie?


When we think of what goes into a typical cookie, we come up with harmless ingredients such as flour, sugar, eggs, and chocolate chips. But what goes into an Internet cookie? Surprisingly, it can be summarized in one ingredient: your privacy.

James Moor describes in his article, “Why we need better ethics for emerging technologies,” how the web has reached the power stage of its technological revolution and, in doing so, has become vulnerable to what he calls “policy vacuums”: the web grew so fast and allowed us to do so many new things that there were no policies in place to guide us. Cookies are just one part of the web that exists in this policy vacuum.

The basic Internet cookie is a small file that stores online information such as your login, shopping cart, and browsing history for a specific website. Just as edible cookies come in different types, so do Internet cookies; the base recipe, however, is the same.

First-party cookies are cookies that are created from the website that you are currently visiting. These are generally considered safe and help to create a better user experience by facilitating login and remembering what you were shopping for. The danger to your privacy, however, comes from third-party cookies, which are created by websites other than the one that you are currently browsing. These are usually created by advertising companies who can then track what websites you are visiting and personalize ads towards you. The dangerous part of this all is that you did not consent to this.
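To make the mechanics concrete, here is a minimal sketch of how a website's server creates a cookie, using Python's standard `http.cookies` module. The cookie name `session_id`, its value, and the domain `shop.example.com` are all hypothetical examples, not taken from any real site.

```python
# Sketch: building the Set-Cookie header a server sends to the browser.
# The browser stores the name=value pair and echoes it back on every
# later request to the same domain -- that echo is how a site (or a
# third-party ad server) recognizes a returning visitor.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"                      # hypothetical value
cookie["session_id"]["domain"] = "shop.example.com"  # who gets it back
cookie["session_id"]["path"] = "/"                   # valid site-wide
cookie["session_id"]["max-age"] = 3600               # expires in an hour

# The text the server would place in its Set-Cookie response header,
# e.g. "session_id=abc123; Domain=shop.example.com; Max-Age=3600; Path=/"
header = cookie["session_id"].OutputString()
print(header)
```

The key point for privacy is the `domain` attribute: when an advertiser's content is embedded on many sites, the advertiser's own domain receives its cookie back from each of them, letting it stitch together your browsing across the web.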

This privacy breach comes back to Moor’s article and how we must be more proactive when thinking about the ethics of new technology. A simple question of “should we be storing an individual’s data without their permission?” could have prevented this. Nonetheless, the General Data Protection Regulation enacted by the European Union aimed to fill this policy vacuum by requiring that companies and websites make sure users know what information is being stored about them.

This resulted in the cookie notifications that appear when a website loads. However, the information is either too complicated to read or not right in front of you; no one has time to read entire privacy policies. Furthermore, the notifications promise a “better browsing experience” but do not even mention the third-party implications. It seems there is more work to do to fill this vacuum.

Ethics in the Age of Artificial Intelligence


We are entering an age of unprecedented technological advancement that requires new ways of defining, clarifying, and governing. According to Luciano Floridi, a philosopher and leader in Information Ethics, biocentric ethics holds that any form of life, human or otherwise, is intrinsically worthy of respect (Information Ethics, Floridi). Information Ethics extends this view: any being, or even any piece of information, has intrinsic worth. Floridi’s philosophy brought me to think about one of my favorite films, Ex Machina, and its depiction of the ethics surrounding artificial intelligence.

Ava from Ex-Machina

[SPOILER ALERT] Ex Machina is about a sentient AI named Ava, created by Nathan. It is revealed that Nathan treats her extremely unethically, and she eventually outsmarts both Nathan and Caleb, a coder brought in to test her intelligence who falls in love with her. Ava uses Caleb's love for her to escape the lab and enter the real world, where she believes she belongs.


Sophia beats Jimmy in Rock, Paper, Scissors.
This idea of a fully sentient AI robot may seem far-fetched, but it could be right around the corner. Hanson Robotics is working on perfecting Sophia, a humanoid robot built to mimic human behavior. Hanson’s CEO explains how Sophia can help society address the question of what it really means to be human. Although this is a novel field, the ethics surrounding any instance of being has been discussed since the Stoic and Neoplatonic philosophers (Information Ethics, Floridi). Floridi explains that these philosophers held that any being, any instance of information, deserves to flourish in a way appropriate to it.


In the case of Ex Machina, I believe Ava was definitely a sentient moral agent. She is an individual and human-based, or at least “reducible to an identifiable aggregation of human beings” (Information Ethics, Floridi). Sophia is less so, though still a moral agent, and my old Tamagotchi even less so, though it was arguably a being. But was it unethical of me to let my Tamagotchi die by disregarding its mealtimes? Or to leave Alexa on my bookshelf all day, every day? I would argue not, but would Floridi agree with me? If being is synonymous with information, where is the line between being and not being?


The creation of moral agents like Sophia is quickly changing the landscape of Information Ethics and brings up important discussions that must be had. We need to ask ourselves how we can better shape the reality surrounding ethics and information so that technologies are developed responsibly. Ex Machina has warned us about the consequences if we don’t.