Friday, February 7, 2020

The AI with zero chill

In 2016, Microsoft released a chatbot named Tay, billed as "The AI with zero chill." Within 16 hours of its release, Microsoft shut it down.

Why was it shut down so soon? At some point during its 96,000 tweets, Tay began to post racist and sexist tweets. The developers of Tay never intended this to happen; in fact, they had blacklisted several serious topics, and Tay was programmed to return canned answers if asked about one of them. Other Twitter users were tweeting offensive statements at Tay, and Tay had begun to learn from them and ultimately mimic them. This raises the question: "Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?"

Since Microsoft shut Tay down, it is clear that the company considered itself responsible for Tay's actions. Those actions were not intended by Microsoft, but they were intended by the people who tweeted racist and sexist remarks at Tay. Professor Frances S. Grodzinsky analogizes creating an artificial agent to a parent raising a child: the parent is responsible for instilling values in the child and constraining them, but is not the sole influence on the child. The difference is that at some point a parent comes to be considered an equal of their child, whereas humans will always have dominion over artificial agents.

In the future, we must all take responsibility for AI and not abuse it the way users abused Tay. If we act morally when interacting with AI, these situations will not occur.

5 comments:

  1. Surprisingly, I had not heard about Tay before reading this, and I agree that it definitely poses some interesting and important questions regarding responsibility. In my opinion, since the developers cannot predict how the AI will evolve, it is not really their fault how Tay began to develop. I am interested to know more about what you mean in the end when you say that we should use morals when interacting with AI. Do you mean that people should not have tweeted at Tay like that? Do you think that is realistic/should be enforced somehow? This is something that you could add if you decide to revise this piece. Also, you could put the reference to the reading earlier in order to clearly establish it as the starting point for your thought process. But overall, this is a super interesting and thought provoking topic.

    ReplyDelete
  2. Very good blog post; I felt really engaged with your content and found that your use of an example like Tay was very effective for getting your point across. My first blog post talked about the same thing you are referencing, basically asking who is responsible for the actions of the AI.

    I agree that the creator of an autonomous creation is responsible to an extent. If the creator understands the capabilities of its creation, is conscious of what could happen, and doesn't take the necessary measures to prevent such a situation, then they are responsible. But if the AI does something that wasn't the intent of the creator, then the blame should fall on the AI.

    I felt that your topic transitions were a bit forced, which is understandable given the word limit we have, but I would try to make the transition between the second and third paragraphs a bit smoother. When it comes to blogs, I try to think of it as having a conversation with a reader. Usually this leads to a more colloquial and approachable style that engages the audience, although the story you presented was enough to hook me into reading your post.

    ReplyDelete
  3. This is a very interesting topic and I'm surprised that I had never heard of "Tay" before reading this. Your post raises a question that is going to be associated with AI of all kinds going forward, which is how much responsibility does the creator of an AI deserve as the AI learns on its own? (Phrased a bit differently than what you said in your post). You conclude by saying that in the future we need to take responsibility for AI and not abuse it, but how would we do so? I think this is an important point that could be expanded upon as it is one of the main concerns in your post.

    ReplyDelete
  4. Hi Taran, this is a very interesting post. I had previously read about Tay, and have seen some of the things that Tay posted. It is eerie how quickly the developers lost control of her. Your post was fun to read and kept my interest; however, I am wondering what you would suggest we do in the future to avoid losing control of AI while keeping responsibility over it. Overall, I think your message is thought-provoking and that it is a well-written post.

    ReplyDelete
  5. I remember hearing about Tay and how quickly the account was taken offline, and I'm glad you took the opportunity to review the event through the eyes of philosophy. With AI becoming increasingly autonomous, it is easy to see how more care needs to be taken. Incorporating more of Grodzinsky's own input on who is responsible for Tay's behavior would have been a great inclusion, as she mentions that an increased burden of care does fall on those who are designing the agent, and your response on how you agree or disagree would add even more depth. Great work overall!

    ReplyDelete

Note: Only a member of this blog may post a comment.