Wednesday, February 26, 2020

#CANCELLED

“You’re cancelled.” 

If you don’t know what that means, you’re either too old or you don’t use Twitter.

If you don’t have a Twitter account, let me explain. Every week, you can count on a trending hashtag in the form of #[insert celebrity name]iscancelled. The celebrity could be getting “cancelled” for something they said or even something they wore. It only takes one tweet to start the flood of hashtags.

Take Kevin Hart, who was announced to host the Oscars. People began digging up Hart’s tweets from over a decade ago, some of which were homophobic. Never mind the fact that these tweets were made at a time when the social climate was different from what it is today, or the fact that people change over time; it was decided that Kevin Hart was #cancelled.

He tried explaining that those tweets didn’t describe the person he was today, but no one wanted to listen. Hart ended up stepping down from hosting the Oscars.

It seems kind of wrong to use a decade-old tweet against someone like that. So then why is cancel culture a thing?

This could be explained by Shannon Vallor, author of “Social Networking Technology and the Virtues.” She discusses whether social media makes it harder for people to develop certain virtues. Let’s take a look at how cancel culture illustrates this:

Patience: Twitter’s platform leaves no room for patience. Often, someone will see a #cancelled tweet and retweet it rather than take the time to research whether the accusation is justified.

Honesty: On social media, people would rather seek validation than be honest. Online, there is little fear of accountability for what you say, so people can jump on the bandwagon of a #cancelled hashtag without worrying about the repercussions.

Empathy: Vallor says one of the most important preconditions for empathy is being in the presence of the other person. When Hart tried to defend himself, the people who “cancelled” him might have found it easier to empathize with him if they could see him rather than just read a tweet.

As Vallor suggests, future technology should be designed to create the conditions that encourage the development of these virtues. “Cancelling” will continue until people realize that real lives are affected.

Human Rights and Value Sensitive Design

Value Sensitive Design (VSD) is an approach to technology design focused on values that center on human well-being, human dignity, justice, welfare, and human rights. In practice, this means that many computer programs built in recent years have been designed to ensure and protect your rights as a person and to help you stay safe and secure while using them. While VSD has quietly become part of many programs, and of people's lives in general, it is worth explaining how it works and what exactly it does.

Value Sensitive Design has been applied to a wide range of research and design problems: bias in computer systems, universal access, Internet privacy, informed consent, ubiquitous sensing of the environment and individual rights, certain urban planning processes, the social and moral aspects of human-robot interaction, privacy in public, and the designer's own values in the design process. In other words, it has been used in many ways to solve many problems, all of them linked to building processes that are fair for everyone. But VSD is not only about fairness; it is also used to make programs more pleasing and more in line with what people believe.

For example, VSD shapes how the Google search engine presents some results so that they are more human-rights friendly. Google has recently been accused of hiding white-power websites deep beneath a list of human rights articles. Other factors play into this, but as far as VSD is concerned, those results are arranged to support human rights and justice, among other things. While some people see this as a way to bury the existence of these groups, it can also be seen as a hopeful sign for humanity as a whole, a welcome glimpse of human rights at work. These lines of thinking can go on and on, deeper and deeper into what human rights truly require, but the use of VSD is clear, and it has been woven into our everyday lives.





With that in mind, I think Value Sensitive Design should be used as much as possible. That said, some fine-tuning still needs to be done; VSD is an aid rather than a full-on solution. The bigger picture is still out there, and people could sit down and debate fairness and their personal rights forever, but VSD is already in use as it stands. There is still plenty of work to do to make sure its implications are just and fair for all. I do believe that just as there are plenty of ways to police people in real life, there should be just as many ways to police them online and across interlinked systems. You can see this process at work already; it has taken hold in many ways, and I believe it should continue to do so. However, plenty of people disagree, and in a world entrusted to many who are completely unaware of this issue, I believe it will take years to implement fully.

Tuesday, February 25, 2020

GTG

In the year 2020, you are considered abnormal if you don’t have any social media accounts. Most Americans use Facebook, Instagram, Twitter, Snapchat, or another platform. In a lot of ways, the internet and the way we communicate with others have changed drastically. A few years ago, many people would type “gtg,” or “gotta go,” when logging off of a chat site. The phrase is uncommon nowadays, when people constantly have their cell phones on them. A simple buzz alerts the owner to an incoming message, and they can interact with the notification from anywhere. People no longer “log off.”
Default Facebook Profile Photo
In David Gelernter’s “The Second Coming: A Manifesto,” he explains how he believes technology will change. He theorizes about “cyber-bodies,” essentially pockets of information. Each person would have their own cyber-body detailing their entire electronic life, and he predicts there will be “tuners,” devices through which one could pull up any cyber-body.
Although no such “tuners” exist, and in the literal sense there are no cyber-bodies, one could argue that our cell phones are, in a way, the “tuners” Gelernter theorized about. At a moment’s notice, you can pull up the social media pages of anyone who chooses to use them and form an informationally supported narrative about that person. Internet users are no longer simply computer users; they essentially do have a digital body, tied to their cell phone, desktop or laptop computer, smartwatch, or even, to some extent, headphones, through which they interact with the rest of the online world using their digital persona.
There are some obvious perks and drawbacks to everyone having their own cyber-body. Each person can customize it to their liking, posting their most flattering photos and thoughts and putting their best foot forward before ever meeting someone. However, others can also post about you, and negative posts could ruin not only your cyber-body but also your reputation in the real world, since the two are interconnected.
Person taking a selfie, from PhoneArena
Each person does have a “cyber-body” once you tie together their Facebook, Instagram, Twitter, and any other social media pages, along with their messaging apps. Handheld and wearable devices are simply the “tuners” Gelernter once described, able to pull up and interact with another’s “cyber-body” at a moment’s notice.

Poetic Prospects for Progress

The evolution of technology has caught the world asleep at the wheel; on a daily basis, newscasters, politicians, and analysts criticize the advance of artificial intelligence and how deeply it is embedded in the fabric of human life. It seems as if innovators, absorbed in solving problems, have neglected to foresee the negative consequences that can erupt from a computer with too much power. But could our aptitude for constructing entire virtual universes have been anticipated years ago? In 1999, David Gelernter, a computer science professor at Yale University, authored an essay titled "The Second Coming — A Manifesto" in which he gazes into the cyberspace of technology's future through a critical lens, with aspirations for a more revolutionary approach (Gelernter, 1999).


Gelernter's manifesto comprises 58 points of commentary that criticize humanity's limitations in creation, characterize the limitless power of computers, and predict the inevitable abilities of future systems and their elements. At the time of the essay's publication, humanity stood at the edge of substantial changes in the way technology is managed. Computers had already evolved significantly from their conception to the late 1990s, and Gelernter's manifesto acts as a direction for further, more effective progress. His tone throughout is one of growing irritation with the stagnation of computers; he sees the possibility for the betterment of current systems and is frustrated that not enough is being done to move forward.


Advances in artificial intelligence have brought about the changes Gelernter sought (BGO Software, 2015).

Two decades later, Gelernter’s hopes for a less conventional approach have come true; machine learning has altered the field of computer science and has allowed people not only to delve into cyberspace but to create interactive systems that can reach back. Gelernter views the computer conventions of his time as accidents of the past that remained intact because people adopted them and grew accustomed to them without attempting any alteration. Have we as innovators arrived at a point in the history of computers where we can assess the functionality of what we've become attached to and decide whether replacement is needed? I believe that replacement will not mean discarding everything we possess, but will rather be an act of higher performance. This idea has played out in the state of artificial intelligence today.






References
BGO Software. (2015). Humans vs Computers: Similarities Loading Now. Retrieved from https://www.bgosoftware.com/blog/humans-vs-computers-similarities-loading-now-part-i/

Gelernter, D. (1999). The Second Coming – A Manifesto. Edge. Retrieved from https://www.edge.org/documents/archive/edge70.html

Monday, February 24, 2020

Why Accessibility on the Internet Matters

A screencap of the website Ling's Cars, shown to illustrate bad web design.
Ling's Cars screencap courtesy of  Ranking by SEO.
Take a moment to look at the top photo. Can you point out what's wrong with the web page? I'm sure we've all got different tastes, but I feel like I don't need to explain much more. It's easy to tell when a website has bad visual design, but is it always easy to spot bad accessibility? Is alt text provided to describe images to people who use screen readers? Can you navigate the page using only your keyboard? Is there proper color contrast between text and background? These are just some of the common missteps people make when developing websites. The Web Accessibility Initiative (WAI) and its Web Content Accessibility Guidelines (WCAG) have been around nearly as long as the Internet has been available to everyday people. Yet there persists an attitude that web accessibility should not be a high priority because it takes time and money and results in unattractive web design. Not only are both claims untrue, this attitude also keeps a significant portion of the population, people with disabilities, from participating in an essential activity.
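Two of those checks can even be automated with a few lines of code. What follows is a minimal sketch in plain Python (standard library only), not a full audit tool: the class name, the HTML snippet, and the color values are made up for illustration. The first part flags img tags that omit the alt attribute entirely, and the second computes the WCAG contrast ratio between a text color and its background (WCAG AA expects at least 4.5:1 for normal text).

from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every img tag that has no alt attribute at all.

    Note: an empty alt="" is acceptable for purely decorative images, so only
    images that omit the attribute entirely are flagged here.
    """
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "<no src>"))

def relative_luminance(rgb):
    """WCAG relative luminance for an (R, G, B) color with 0-255 channels."""
    def linearize(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """WCAG contrast ratio; AA requires at least 4.5:1 for normal text."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

# Hypothetical page fragment and colors, used purely for illustration.
checker = MissingAltChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="banner.png">')
print("Images missing alt text:", checker.missing)  # prints ['banner.png']
print("Contrast ratio:", round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))

Real audits rely on established tools and the full WCAG criteria, but even a small script like this shows that the basic checks are neither expensive nor mysterious.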

Disabled people have as much right to access the Internet as anyone else, right? It would be unacceptable for a hospital to lack entrances that wheelchair users can get through. So why are the needs of the disabled cast aside when it comes to the web? The concepts of embedded values and bias can begin to shed light on the issue. Philip Brey, a professor of philosophy and technology, describes embedded values as values in computer programs that reflect the values of the developers or of society, which argues against the idea that computer systems are neutral. Brey also cites the ideas of Batya Friedman and Helen Nissenbaum on the three kinds of bias that can exist in technology. Pre-existing bias refers to biases present in society that imprint themselves on the computer system, such as racial bias. Technical bias stems from the limitations of the technology itself, like a program that favors results that appear first over results that appear later. Emergent bias is different, as it only becomes apparent in the "context of use with real users."

When it comes to accessibility on the web, I think both pre-existing and emergent biases are at play. A pre-existing bias held by developers may be that disabled people do not use the Internet as much as everyone else, or that there are too few disabled users to be worth catering to. A less charitable interpretation is that developers find coding for accessibility annoying and think disabled people should just get over it. However, malice may not be involved at all. Developers may simply not have been aware of the guidelines and consequently only see the problems after the product is released. That ignorance might excuse the initial oversight, but what matters is how they choose to react once they know.

Illustration showing accessibility symbol and desktop screen with Domino's logo.
The cost of neglecting accessibility can end up far higher than the cost of addressing it early on. Lawsuits over web accessibility have been on the rise in recent years. The most prominent example comes from Domino's. The pizza chain was sued after a blind man was unable to use its website or mobile app with a screen reader. Domino's appealed to the Supreme Court, arguing that there were no clear guidelines for web accessibility and thus it had no obligation to optimize its technology for disabled people. The Court was not amused; it declined to hear the appeal, leaving the lower court's ruling in the plaintiff's favor in place.

The defense from Domino's was telling to me. You could say the company was not aware of the need for accessibility measures. Yet when it was made aware, it deflected and claimed it had no obligation to make its online services accessible. Domino's has no obligation to serve its customers? That just seems flippant and petty to me. On the bright side, I hope cases like this bring more attention to disability rights on the web.

Sunday, February 23, 2020

Your Uber Will be Arriving Shortly

As a college student, I am no stranger to Uber. Whether it be taking one to a class I'm late for because I overslept, or simply because I need to replenish my stockpile of snacks, more often than not I find myself opening the Uber app to "call" a ride.

Are you my Uber?
In “Why We Need Better Ethics for Emerging Technologies,” James Moor reflects on the idea that we live in a period of technological revolution that promises dramatic change, and that in such a period it is not satisfactory to do ethics as usual. He argues that major technological upheavals require better ethical thinking, meaning being better informed, and more meaningful ethical action, meaning being more proactive.

When we call for a ride on the Uber app, or use any app that requires personal information like credit card numbers, phone numbers, or, in this case, driver’s license data, we rarely think about where that data is going and how it is being used by the app or the company behind it. However, in 2016 more people became aware of their data after a hacker accessed the information of 50 million of Uber’s users as well as 7 million of its drivers. Of the drivers, 600,000 had their driver’s license numbers compromised.

When Uber first launched in 2011, it was a fairly simple idea that people never even knew they needed. Moor claimed that technological advancements better society, but their novelty makes it difficult to predict ethical issues, because situations may arise for which we do not have adequate policies. The massive data breach in 2016 is the epitome of what Moor reflects on. As more people start using a particular technology and it increases its social impact, the number of ethical concerns increases as well. Moor analyzed this phenomenon too, terming it Moor's Law.

The Uber breach highlights the failure of large corporations to adequately safeguard the private information of their customers. Not only are these breaches of security, but they are breaches of trust for consumers, as companies fail to disclose leaks until months or years later. There is still much education to be done, and discussion to be had, around proper protocols related to data breaches. 


Next time, think twice before you confirm your ride. I know I will.