Friday, January 24, 2020

Are Designers to Blame for Raising Racist Robots?

In 2016, Microsoft launched its artificial intelligence chatbot Tay, which was designed to learn from what people tweeted at the bot and engage in “playful conversation”.

Sounds pretty inoffensive – that is, until mere hours later, when poor Tay was corrupted by Internet denizens and began spewing racist, antisemitic and misogynistic messages like “Hitler was right” and “RACE WAR NOW”.



While it makes for an amusing read, the incident raises a more serious question with far-reaching implications as algorithms and AI technology continue to advance.

Are designers to blame for racist systems and machine learning algorithms?

In the case of Tay, it is easy to blame the disaster on a toxic section of Internet culture. After all, trolls will be trolls, right? But if future systems are to avoid a similar fate, the designer is the one with the power to make a difference.

A quote sums this up nicely:
“Designers have an increased burden of care in producing artificial agents that exhibit learning and intentionality” 
– The Ethics of Designing Artificial Agents, Grodzinsky et al.

The more AI technology advances, the more potential there is for a system to move beyond its original design. Designers need to take this potential into account and take steps to minimize the chances of their systems taking a dangerous turn.

A suitable analogy, brought up by Grodzinsky, is to compare the relationship between designers and their algorithms to the relationship between a parent and a child. Is the parent responsible for the wrongdoings of the child? I’d argue the answer is yes, at least in part, since the parent could have taken steps to educate and discipline the child properly before the wrongdoing took place.

For Tay, we can infer that the fault of the “parent” was allowing the child to be exposed to bad company and negative influences. Perhaps if the child had been better protected from these influences and better educated on what kind of people to engage with, it wouldn’t have turned racist.
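
To make the “better protected” idea a little more concrete, here is a minimal sketch, in Python, of what gating a learning chatbot’s input might look like. This is not Microsoft’s actual pipeline; the blocklist and the is_safe_to_learn_from, learn_from and handle_incoming functions are hypothetical stand-ins for a real moderation model and a real training step.

```python
# A minimal sketch (not Microsoft's actual pipeline) of gating what a
# learning chatbot is allowed to train on. The blocklist and the function
# names below are hypothetical stand-ins for a real moderation model and
# a real training step.

BLOCKLIST = {"hitler", "race war"}  # a real system would use a toxicity classifier


def is_safe_to_learn_from(message: str) -> bool:
    """Reject messages that contain obviously hateful phrases."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKLIST)


def learn_from(message: str) -> None:
    """Placeholder for whatever model update the chatbot performs."""
    print(f"learning from: {message!r}")


def handle_incoming(message: str) -> None:
    # Only vetted input is allowed to influence the model; the rest is dropped.
    if is_safe_to_learn_from(message):
        learn_from(message)
    else:
        print(f"filtered out: {message!r}")


if __name__ == "__main__":
    handle_incoming("I love puppies")  # learned from
    handle_incoming("RACE WAR NOW")    # filtered out
```

The point is not the keyword list itself, but that the decision about what the system is allowed to learn from sits squarely with the designer.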

So are designers the ones to blame for racist robots? Yes, at least to some extent. Some of the responsibility may fall on society as a whole, but if designers don’t take responsibility, who will stop a horde of racist robots from being unleashed?



6 comments:

  1. I love the analogy of developers assuming a pseudo parental role for their creation. There is the notion of hindsight and lack of experience behind the handling of these seemingly harmless creations. However, I wholeheartedly believe that developers should start doing some level of projection to understand the possible ethical implications of their creations.
    However, I think that you could have delved deeper into Grodzinsky's quote and beliefs to further explain how developers should take this into account and what else they can do to ensure that they do not make the same mistake again.
    Lastly, your use of images is great; I think they really convey most of the context without the need for further introduction!

  2. This is a great post! Unfortunately, there are a lot of examples of AI being horribly racist. I think we also need to take into account the biases of the developers. There needs to be more of a push for self-accountability and opening of perspective for the developers and their team at large. As you said in your post, it can be hard to know if the AI will go beyond its expected behavior once out in the wild. I think situations like these really emphasize the importance of cultivating a diverse workplace, where these issues may be more closely scrutinized.

  3. I like your comparison of the AI's designer being a parent for their robot; although the AI may seem harmless, it's evident that there are still more factors that need to be considered. Who should be regulating these new innovations is a question that needs to be answered. Instead of connecting to the readings at the end, perhaps try to integrate them throughout your post. Great visual to show the essence of the issue, and maybe try to connect more to the ethics of creating these types of AI.

  4. I really enjoyed this article! You brought up a great analogy comparing designers and AI to a parent and a child. I think it would be helpful to review the consequences and the actions taken to amend them - for example, what was Microsoft's response? How did they try to fix it? Or maybe expanding on the quote more could give more insight overall. I agree that designers have a large responsibility to design "good" AI, because abuse will always exist in society, whether in business practices or from scummy people in general. Great article though.

  5. Microsoft learned a valuable lesson in what not to do, so I'm interested to see what developments they've made since the Tay failure. I also like your analogy of parents, though I mostly disagree. Yes, a parent has the responsibility to properly educate and discipline their child, but a child is still going to touch the hot iron or stove no matter how many times a parent tells them not to.
    I like how you posed the accountability question after the example of Tay, and not vice versa.

  6. I thoroughly enjoyed reading this post. I initially clicked on it as your title is intriguing and immediately caught my attention. Next, instead of spending too much time with dense summary, you immediately got to the point of the post with a simple question, making your readers want to read more. Additionally, when mentioning your quote, you also mentioned what YOU thought of it. Maybe when initially mentioning the quote, you can give insight into who the author is and why they may be credible and relevant? Also, I know you mentioned the designers should be held accountable for their AI mishaps; however, I would really enjoy your perspective on what these designers could do better to prevent such things from happening. Does there need to be an ethical rule book that designers should follow? etc. Thank you, and I'm looking forward to reading your edits.

