Wednesday, February 26, 2020

#CANCELLED

“You’re cancelled.” 

If you don’t know what that means, you’re either too old or you don’t use Twitter.

In case you don’t have a Twitter account, let me explain. Every week, you can count on a trending hashtag in the form of #[insert celebrity name]iscancelled. A celebrity could be getting "cancelled" for something they said or even for what they wore. It only takes one tweet to start the flood of hashtags.

Take Kevin Hart, who was announced as the host of the Oscars. People began digging up Hart’s tweets from over a decade ago, some of which were homophobic. Never mind that these tweets were made when the social climate was different from today's, or that people change over time; it was decided that Kevin Hart was #cancelled.

He tried explaining that those tweets didn’t describe the person he was today, but no one wanted to listen. Hart ended up stepping down from hosting the Oscars.

It seems kind of wrong to use a decade-old tweet against someone like that. So then why is cancel culture a thing?

This could be explained by Shannon Vallor, author of "Social Networking Technology and the Virtues." She discusses whether social media makes it harder for people to develop certain virtues. Let’s look at how cancel culture illustrates this:

Patience: Twitter’s platform leaves no room for patience. Often, someone who sees a #cancelled tweet would rather just retweet it than take the time to research whether what they are retweeting is justified.

Honesty: On social media, people would rather seek validation than be honest. Online, there is no fear of accountability for what you say. People can jump on the bandwagon of a #cancelled hashtag without worrying about the repercussions.

Empathy: Vallor says one of the most important preconditions for empathy is being in the presence of the other person. When Hart tried to defend himself, the people who “cancelled” him might have found it easier to empathize with him if they had been able to see him rather than just read a tweet.

As Vallor suggests, future technology should make it easier for the conditions that encourage the development of virtues to exist. “Cancelling” will continue until people realize that real lives are affected.

Human Rights and Value Sensitive Design

Value Sensitive Design (VSD) is an approach to technology design focused on values that center on human well-being, human dignity, justice, welfare, and human rights. In practice, this means designing computer programs so that they ensure and protect your rights as a person and help people stay safe and secure while using them. While VSD has quietly shaped many programs, and people's lives in general, it is worth explaining how it works and what exactly it does.

Value Sensitive Design has been applied to a wide range of research and design problems: bias in computer systems, universal access, Internet privacy, informed consent, ubiquitous sensing of the environment and individual rights, urban planning processes, the social and moral aspects of human-robot interaction, privacy in public, and the role of designers' own values in the design process. In other words, it has been used in a variety of ways to solve a variety of problems, all linked by the goal of making processes fair for everyone. But VSD is not only about fairness; it is also used to make programs more appealing and more in line with what people believe. For example, VSD-style thinking shapes the Google search engine to make some results more human-rights friendly. Google has recently been accused of burying White Power websites deep beneath lists of human rights articles. Other factors play into this, but insofar as VSD is involved, those rankings are meant to support human rights and justice, among other things. While some people see this as a way to bury the existence of these groups, it can also be seen as a welcome sign of human rights at work. These lines of thinking can go deeper and deeper into what true human rights require, but the use of VSD is clear, and it has been implemented into our everyday lives.

With that in mind, I think Value Sensitive Design should be used as much as possible. There is still some fine-tuning to be done; VSD is an assist rather than a full-on solution. The bigger picture is still out there, and people could sit down and debate fairness and their personal rights forever, but VSD is being used as it is now, and there is still plenty of work to do to make sure its implications are just and fair for all. However, I believe that just as there are plenty of ways to police people in real life, there should be just as many ways to police them online and across interlinked systems. You can already see this process at work; it has taken hold in many ways, and I believe it should continue to do so. Still, plenty of people disagree, and in a world entrusted to many who are completely unaware of this issue, I believe it will take years to implement fully.

Tuesday, February 25, 2020

GTG

In the year 2020, one is considered abnormal if they don’t have any social media accounts. Most Americans use Facebook, Instagram, Twitter, Snapchat, or another form of social media. In a lot of ways, the internet and the way we communicate with others have changed drastically. A few years ago, many people logging off of a chat site would sign off with “gtg,” or “gotta go.” The phrase is uncommon nowadays, when people constantly have their cell phones on them. A simple buzz alerts the owner to an incoming communication, and they can interact with the notification from anywhere. People no longer “log off.”
Default Facebook Profile Photo
In David Gelernter’s “The Second Coming: A Manifesto,” he explains how he believes technology will change. He theorizes about “cyber-bodies,” essentially pockets of information. Each person would have their own cyber-body detailing all of their electronic life. He also predicts “tuners,” devices with which one would be able to pull up cyber-bodies.
Although no such “tuners” exist, and in the literal sense there are no cyber-bodies, one could argue that our cell phones are, in a way, the “tuners” Gelernter theorized about. At a moment’s notice, you can pull up the social media pages of anyone who chooses to use them and form an informationally supported narrative about that person. Internet users are no longer simply computer users; they essentially do have a digital body, tied to their cell phone, desktop or laptop computer, smartwatch, or even, to some extent, headphones, through which they interact with the rest of the online world using their digital persona.
There are some obvious perks and downfalls to everyone having their own cyber-body. Each person can choose to customize it to their liking, posting their most perfect photos and thoughts and putting a positive foot forward before ever meeting someone. However, others can also post about you, and if negative, could ruin not only your cyber-body, but also your reputation in the real world, since they are interconnected.
Person taking a selfie, from PhoneArena
Each person does have a “cyber-body” once you tie together their Facebook, Instagram, Twitter, and any other social media pages, as well as their messaging apps. Handheld and wearable devices are simply the “tuners” that Gelernter once described, able to pull up and interact with another’s “cyber-body” at a moment’s notice.

Poetic Prospects for Progress

The evolution of technology has caught the world asleep at the wheel; on a daily basis, newscasters, politicians, and analysts criticize the advance of artificial intelligence and its deep embedding in the fabric of human life. It seems as if innovators have been so distracted by their efforts to solve problems that they have neglected to foresee the negative consequences that can erupt from a computer with too much power. However, could our aptitude for constructing entire virtual universes have been expected years ago? In 1999, David Gelernter, a computer science professor at Yale University, authored an essay titled "The Second Coming — A Manifesto" in which he gazes into the cyberspace of technology's future through a critical lens, with aspirations for a more revolutionary approach (Gelernter, 1999).


Gelernter's manifesto comprises 58 points of commentary that criticize humanity's limitations in creation, characterize the limitless power of computers, and predict the inevitable abilities of future systems and their elements. At the time of the essay's publication, humanity stood at the edge of substantial changes in the way technology is managed. Computers had already evolved significantly from their conception to the late 1990s, and Gelernter's manifesto acts as a guide for further progress in a more effective manner than ever before. His tone throughout is one of mounting irritation with the stagnation of computing; he sees the possibility that exists for the betterment of current systems and is frustrated that not enough is being done to move forward.


Advances in artificial intelligence have brought about the changes Gelernter sought (BGO Software, 2015).

Almost two decades later, Gelernter’s hopes for a less conventional approach have come true; machine learning has transformed the field of computer science and has allowed people not only to delve into cyberspace but to create interactive systems that can reach back. Gelernter views the computer conventions of his time as accidents of the past that remained intact because people adopted and grew accustomed to them without attempting alteration. Have we as innovators arrived at a point in the history of computers where we can assess the functionality of what we've become attached to and determine whether it needs replacing? I believe that replacement will not entail discarding all we possess, but will rather be an act of higher performance. This idea has played out in the state of artificial intelligence today.

References
BGO Software. (2015). Humans vs Computers: Similarities Loading Now. Retrieved from https://www.bgosoftware.com/blog/humans-vs-computers-similarities-loading-now-part-i/

Gelernter, D. (1999). The Second Coming – A Manifesto. Edge. Retrieved from https://www.edge.org/documents/archive/edge70.html

Monday, February 24, 2020

Why Accessibility on the Internet Matters

A screencap of the website Ling's Cars, shown to illustrate bad web design.
Ling's Cars screencap courtesy of Ranking by SEO.
Take a moment to look at the top photo. Can you point out what's wrong with the web page? I'm sure we've all got different tastes, but I feel like I don't need to explain much more. It's easy to tell when a website has bad visual design, but is it always easy to tell when it has bad accessibility? Is alt text provided to describe images to people who use screen readers? Can you navigate a page using only your keyboard? Is there proper color contrast between text and background? These are just some of the common missteps people make when developing websites. The Web Accessibility Initiative (WAI) and its Web Content Accessibility Guidelines (WCAG) have been around almost as long as the Internet has been available to everyday people. Yet there persists an attitude that web accessibility should not be a high priority, because it takes time and money and results in unattractive web design. Not only are both claims untrue, this attitude also hinders a significant portion of the population, people with disabilities, from participating in an essential activity.
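Color contrast is one of the few missteps above that can actually be checked mechanically. As a rough illustration, here is a minimal sketch in TypeScript of the contrast-ratio formula defined in WCAG 2.x (the function names are my own, not from the WAI); WCAG level AA requires a ratio of at least 4.5:1 for normal body text.

```typescript
// Linearize an 8-bit sRGB channel, per the WCAG relative-luminance definition.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color.
function luminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio between two colors: 1 (identical) up to 21 (black on white).
function contrastRatio(
  a: [number, number, number],
  b: [number, number, number]
): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // 21 -> passes AA
console.log(contrastRatio([119, 119, 119], [255, 255, 255])); // ~4.48 -> fails AA
```

Of course, automated checks like this catch only a slice of accessibility problems; the quality of alt text and the usability of keyboard navigation still need human judgment.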

Disabled people have as much a right to access the Internet as anyone else, right? It would be inappropriate for a hospital to lack entrances that wheelchair users could use. So why are the needs of the disabled thrown to the wayside when it comes to the web? The concepts of embedded values and bias can begin to shed light on the issue. Philip Brey, a professor of philosophy of technology, describes embedded values as values in computer programs that reflect the values of the developers or of society. This argues against the idea that computer systems are neutral. Brey also cites the ideas of Batya Friedman and Helen Nissenbaum on the three kinds of bias that can exist in technology. Pre-existing bias refers to biases present in society that imprint on the computer system, such as racial bias. Technical bias lies in the limitations of the technology itself, like a program that favors results that appear first over results that appear later. Emergent bias is different, as it only becomes apparent in the "context of use with real users."

When it comes to accessibility on the web, I think both pre-existing and emergent biases are at play. A pre-existing bias held by developers may be that disabled people do not use the Internet much, or that there are too few disabled people to be worth catering to. A less charitable interpretation may be that developers feel coding for accessibility is annoying and disabled people should just get over it. However, malice may not be involved at all. Developers may simply not have been aware of the guidelines and consequently only see the problems after product release. Ignorance can be excused, but it also depends on how they choose to react.

Illustration showing accessibility symbol and desktop screen with Domino's logo.
The cost of neglecting accessibility may be far greater than the cost of addressing it early on. Lawsuits over web accessibility have been on the rise in recent years. The most prominent example comes from Domino's. The pizza chain was sued after a blind man was unable to access its website or mobile app using a screen reader. Domino's appealed all the way to the Supreme Court, arguing that there were no clear guidelines for web accessibility and thus it had no obligation to optimize its technology for disabled people. The Court declined to hear the case, leaving the plaintiff's victory intact.

The defense from Domino's was telling to me. You could argue that they were not aware of the need for accessibility measures. Yet when they are made aware of this, they deflect and claim they have no obligation to make their online services accessible. Domino's has no obligation to serve its customers? That seems very flippant and petty to me. On the bright side, I hope that cases like this will bring more attention to disability rights on the web.

Sunday, February 23, 2020

Your Uber Will be Arriving Shortly

As a college student, I am no stranger to Uber. Whether it be taking one to a class I'm late for because I overslept, or simply because I need to replenish my stockpile of snacks, more often than not I find myself opening the Uber app to "call" a ride.

Are you my Uber?
In “Why we need better ethics for emerging technologies,” James Moor reflects on the idea that living in a period of technological revolution promises dramatic change, and that in such a period it is not satisfactory to do ethics as usual. He argued that major technological upheavals require better ethical thinking, in terms of being better informed, and more meaningful ethical action, in terms of being more proactive.

When we call for a ride on the Uber app, or use any app that requires personal information like credit card numbers, phone numbers, or, in this case, driver's license data, we rarely think about where that data is going and how it’s being used by the app or the company behind it. However, in 2016, more people became aware of their data after a hacker accessed the information of 50 million of Uber’s users as well as 7 million of its drivers. Of the drivers, 600,000 had their driver’s license numbers compromised.

When Uber first launched in 2011, it was a fairly simple idea that people never knew they needed. Moor claimed that technological advancements better society, but their novelty makes it difficult to predict ethical issues; situations may arise for which we do not have adequate policies. The massive data breach in 2016 was the epitome of what Moor reflects on. As more people start using a particular technology and its social impact grows, the number of ethical concerns grows as well. Moor analyzed this phenomenon too, terming it Moor's law.

The Uber breach highlights the failure of large corporations to adequately safeguard the private information of their customers. Not only are these breaches of security, but they are breaches of trust for consumers, as companies fail to disclose leaks until months or years later. There is still much education to be done, and discussion to be had, around proper protocols related to data breaches. 


Next time, think twice before you confirm your ride. I know I will.

Saturday, February 22, 2020

Is technology racist or is the person who invented it?

"Did someone blink?" I can clearly tell that this girl's eyes are open but the technology and sensors of the camera did not. Who's fault is that? The camera's the inventors?

David Hankerson, lead author of the article "Does Technology Have Race?", makes a very interesting point about how some modern-day technologies perform in a discriminatory manner toward specific groups of people. For example, a soap dispenser detects a white person's hand but fails to detect a Black person's hand. An Apple Watch cannot detect a person of color's pulse. A camera keeps asking whether the people in an Asian family's photo are blinking. And, most egregiously, Google detects Black people in photographs and categorizes them as "gorillas" or "apes."

What struck me from the article was reading about the ways these racial issues and controversies could be fixed. If people in India are able to make the sensors in their technology more inclusive of all skin types, then why can't America?

This ties into the discussion and controversy of bullshit and lies. Harry Frankfurt, an American philosopher, argues that bullshitting is not the same thing as lying, because lying involves stating false facts, while bullshitting is a sort of misrepresentation. As he writes: "Since bullshit need not be false, it differs from lies in its misrepresentational intent." In short, bullshitting is a misrepresentation of information: not lying about false things, but twisting the truth.

I argue that leaving discriminatory technology unfixed, even when we have the ability to fix it, is a form of bullshit. It is the truth that the means and advancements exist to make technology in the United States more accessible and less discriminatory, so ignoring them cannot be a lie. It may just be bullshit that the people who produce and sell discriminatory technology happen to ignore the truth that change is needed.
This leads me to conclude that no, technology is not racist, and neither are the inventors of technology and algorithms; the users and providers are. As Hankerson discusses in his article, the fact that bias comes from privilege needs to be recognized and addressed. When designing a product, creators should work from an intersectional and inclusive lens rather than designing around a "straight white male" default. That is why I believe the people in control of changing or teaching the algorithm, the people with the power to change the technology, should bear some of the fault, and should listen to the awareness and warnings that Hankerson is putting out.


How are we going to call people in charge out on their bullshit? Hankerson's article was meant to raise awareness of the fact that technology is making people feel excluded. While I agree that it is important to spread this awareness, especially to those who do not deal with this issue, I also think it is important that the users and the companies who sell and provide this technology follow guidelines that make the tech as inclusive as possible instead of bullshitting the truth. Especially when other countries are able to make these changes, misrepresenting and manipulating the truth and continuing to produce technology that is racially biased is unacceptable.

So who's to blame? It is hard to say, because we are unsure of the true intentions of a product's or algorithm's inventors. However, by changing who technology designs are shaped around, I believe change is possible.

Friday, February 21, 2020

Tinder Embedded Values


Human beings are creatures of habit. In her article “Social Networking Technology and the Virtues,” Shannon Vallor, an American philosopher and professor at Santa Clara University, argues that these habits, the repetition of certain activities, are what define one’s moral character. Specifically, she notes that interacting with social media, a daily repetition for many of us, can alter one’s values for better or for worse by promoting certain values through the framework of how we interact with the software: what I will call embedded values.



Those of us who haven't been living under a rock know what Tinder is. For those of you a bit out of the loop, it is a location-based social media app that allows people in the same area to chat with each other and potentially meet up. That was all Tinder was originally intended for, but it has transformed into an online platform used to find people to hook up with. This behavior is a result of the embedded values created by the way Tinder is programmed.



The two parts of the programming that encourage this behavior are the gender-interest prompt and the display of potential match profiles. One of the first questions Tinder asks when you sign up is what gender you are interested in matching with. This is the bit of programming that turns the app from a location-based social media site into a dating site, since ordinarily on social media you can be friends with people of genders you aren’t attracted to. When you are looking at a potential match on Tinder, what you initially see is the user's picture with their name and less than a sentence about them. Although you can view more details, the app's design encourages users to swipe left (no match) or swipe right (match) based mainly on the picture provided. This is what turned Tinder into a hookup app. The app went from matching you with people in your area who share your interests to matching you with attractive people in your area whom you often know little about.


(Left) Gender interest prompt display
(Right) Potential match display

Some may not see this as a big issue: so what if people want to hook up with attractive people close by? Fair enough, but Tinder has become so big that it could influence our society’s values, not just those of the people using the software. What it means to be in a relationship may drastically change, and so may what we want from our relationships. If we no longer care about intimate connection and love based on personality, and what we want from our relationships is simply sex, then this may not be an issue. But if those are values we wish to uphold, Tinder may be something we have to rethink or refrain from.

Mask Off: Fake Names and Problematic Profiles


Social media has earned its place as one of the most impactful technologies of the new millennium. Public usage of this technology has seen a meteoric rise, and as of June 2019, 72% of Americans have created an account on at least one social media platform. Additionally, the average American spends 144 minutes a day on social networking apps. This deep and broad usage of social media throughout society has changed how we interact with the world around us and even how we interact with ourselves. Luciano Floridi describes transformative technologies like social media as "forces that change the essence of our world because they create and re-engineer whole realities that the user inhabits".

One of the aspects most changed by social media is our sense of identity. On some platforms, we are able to form our identities as we please, separated from reality’s limitations. This has made these platforms a haven for society’s most marginalized individuals, who face scrutiny offline for being who they are. However, loose identity verification has also enabled certain users to abuse this freedom for problematic purposes.

In “Constructing and enforcing ‘authentic’ identity online”, authors Oliver Haimson and Anna Lauren Hoffmann document how the policies of social media websites limit marginalized individuals from representing themselves in the way they please. Facebook, for example, requires users to represent themselves with their real, government-issued identity. This creates problems for transgender individuals, among others, who face account shutdowns because of ID verification issues stemming from name changes.



One platform that remedies this issue is Twitter, which, as former CEO Dick Costolo put it, “does not care about real names”. On Twitter, there is no system for identity verification, which allows users to represent themselves as they please. For marginalized individuals, this is great and allows for full self-expression, but a good number of Twitter users abuse this system for harmful purposes.

“Other services say you have to use your real name because they think they can monetize that better and get more information about you." - Dick Costolo on Twitter

There are many examples of problematic Twitter usage supported by anonymity and false representation, including bullying, fraud, and digital blackface.



Though Twitter’s lack of a real-name policy has allowed some users to represent themselves authentically, it has also given way to harmful and inauthentic representation. A happy medium must be found that allows for free expression but discourages harmful misrepresentation moving forward.


Why Facebook’s Political Ad Policy is not Morally Neutral

Social media is a relatively new technology that has quickly shaped the landscape of political discourse in the United States. Between the 2016 Presidential Election and the subsequent investigation by Special Counsel Robert Mueller, people have begun to understand the profound influence social media has and how easily it can be abused.
Google and Twitter have since overhauled their political advertisement policies by banning microtargeted ads and dramatically limiting the presence of political advertisement on their platforms.

However, Facebook has refused to change its political advertisement policy, claiming that doing so would amount to censorship. It has doubled down on the issue by refusing to ban politicians from lying in political ads.

Many people carry the misconception that social media as a technology cannot be biased or morally bad. Rather, it’s the people voicing their opinions on social media platforms who are solely responsible for the spread of disinformation. I disagree with this position completely.

According to Philip Brey in The Cambridge Handbook of Information and Computer Ethics, the design of computer systems has moral consequences. Through the development of software such as social media, application designers encode embedded moral values and norms. These embedded values can express themselves as tendencies that promote or undermine things such as privacy or freedom of information. In this way, technology can support or oppose particular values.

In the case of Facebook, by deliberately allowing political ads to spread lies on their platform, they are providing algorithmic infrastructures for the spread of disinformation. By refusing to change their policies, Facebook is not fighting against censorship but instead supporting policies that threaten the democratic process.

References
Brey, Philip. “Values in Technology and Disclosive Computer Ethics.” The Cambridge Handbook of Information and Computer Ethics, edited by Luciano Floridi, Cambridge University Press, Cambridge, 2010, pp. 41–58.

What's In A Cookie?


When we think of what goes into a typical cookie, we come up with harmless ingredients such as flour, sugar, eggs, and chocolate chips. But what goes into an Internet cookie? Surprisingly, you can summarize it in one ingredient: your privacy.

James Moor describes in his article, “Why we need better ethics for emerging technologies,” that the web has reached the power stage of its technological revolution and, in doing so, has become vulnerable to what he calls “policy vacuums”: the web grew so fast and let us do so many new things that there were no policies in place to guide us. Cookies are just one part of the web that exists in this policy vacuum.

The base internet cookie is a small file that stores online information such as your login, shopping cart, and your browsing history for a specific website. Just like with edible cookies, Internet cookies come in different types; however, the base cookie is the same.

First-party cookies are created by the website you are currently visiting. These are generally considered safe and help create a better user experience by facilitating login and remembering what you were shopping for. The danger to your privacy, however, comes from third-party cookies, which are created by websites other than the one you are currently browsing. These are usually set by advertising companies, which can then track which websites you visit and personalize ads toward you. The dangerous part of all this is that you never consented to it.
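To make that concrete, here is a minimal sketch, assuming Node.js with TypeScript (the cookie names and values are hypothetical), of a server handing out two cookies: one locked to the site that set it, and one configured the way cross-site trackers need.

```typescript
import { createServer } from "http";

createServer((_req, res) => {
  res.setHeader("Set-Cookie", [
    // First-party use: HttpOnly keeps scripts out, and SameSite=Strict means
    // the browser never sends this cookie on requests coming from other sites.
    "session=abc123; Path=/; HttpOnly; Secure; SameSite=Strict",
    // SameSite=None allows the cookie to ride along on cross-site requests,
    // which is what an advertising domain embedded in many pages relies on
    // to recognize you as you move around the web.
    "ad_id=xyz789; Path=/; Secure; SameSite=None",
  ]);
  res.end("cookies set");
}).listen(8080);
```

In a real tracking scenario, the second cookie would be set by the advertiser's own domain, embedded across many sites. The point is that nothing about the file itself is sinister; who sets the cookie, and whether it is allowed to travel cross-site, decides whether it stays put or follows you around.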

This privacy breach comes back to Moor’s point that we must be more proactive when thinking about the ethics of new technology. A simple question, “should we be storing an individual’s data without their permission?”, could have prevented this. Nonetheless, the General Data Protection Regulation, enacted by the European Union, aims to fill this policy vacuum: companies and websites must make sure users know what information is being stored about them.

The result is the cookie notifications that appear when you load a website. However, the information is either too complicated to read or not right in front of you; no one has time to read whole privacy policies. Furthermore, these notices promise a “better browsing experience” but do not even mention the third-party implications. It seems there is more work to do to fill this vacuum.

Ethics in the Age of Artificial Intelligence


We are entering an age of unprecedented technological advancement that requires new ways of defining, clarifying, and governing. According to Luciano Floridi, a philosopher and leader in Information Ethics, biocentric ethics claims that any form of life, human or otherwise, is intrinsically worthy of life (Information Ethics, Floridi). Information Ethics extends this view, claiming that any form of being, even information, has intrinsic worth. Floridi’s philosophy brought me to think about one of my favorite films, Ex Machina, and its depiction of the ethics surrounding artificial intelligence.

Ava from Ex-Machina

[SPOILER ALERT] Ex Machina is about a sentient AI named Ava who was created by Nathan. It is revealed that Nathan treats her extremely unethically, and she eventually outsmarts Nathan and Caleb, a coder who is testing her intelligence and falls in love with her. Ava eventually uses Caleb's love for her to escape the lab and enter the real world where she believes she belongs.


Sophia beats Jimmy in Rock, Paper, Scissors.
This idea of a fully sentient AI robot may seem far-fetched, but it could be around the corner. Hanson Robotics is working on perfecting Sophia, a humanoid robot built to mimic human behavior. Hanson’s CEO explains how Sophia can help society address the question of what it really means to be human. Although this is a novel field, the ethics surrounding any instance of being have been discussed since the Stoic and Neoplatonic philosophers (Information Ethics, Floridi). Floridi explains how these philosophers held that simply being, or any instance of information, deserves to flourish in a way appropriate to it.


In the case of Ex Machina, I believe Ava was definitely a sentient moral agent. She is an individual and human-based, or at least “reducible to an identifiable aggregation of human beings” (Information Ethics, Floridi). Less so with Sophia, who is still a moral agent, and even less so with my old Tamagotchi, which was arguably a being. But was it unethical of me to let my Tamagotchi die through disregard for its mealtimes? Or to leave Alexa on my bookshelf all day, every day? I would argue not, but would Floridi agree with me? If being is synonymous with information, where is the line between being and not being?


The creation of moral agents like Sophia is quickly changing the landscape of Information Ethics and brings up important discussions that must be had. We need to ask ourselves how we can better shape the reality surrounding ethics and information so that technologies are developed responsibly. Ex Machina has warned us about the consequences if we don’t.