Thursday, February 20, 2020

Google Duplex: Using A.I. to Mimic Humans

In his research article, "Why We Need Better Ethics for Emerging Technologies," James Moor, Professor of Philosophy at Dartmouth College, hypothesized that "as technological revolutions increase their social impact, ethical problems increase." With tech companies such as Tesla, Waymo, and Google incorporating machine learning into their software, there is no doubt that artificial intelligence (A.I.) will soon become part of our everyday lives. Ethics are needed in A.I. because, unlike humans, machines do not have a moral compass; they are only as ethical as their developers.

Take Google Duplex for example. In 2018, Google unveiled an early version of its new A.I. assistant that can make appointments and purchases with local businesses for its users. 

Demo of Google Duplex
For years, virtual assistants such as Siri and Alexa were used only for minor tasks such as reciting facts, setting reminders, or adding events to your calendar. When using them, you knew you were talking to a robot. Duplex is different: it successfully requested a haircut appointment and a restaurant reservation from real people over the phone, and by adding filler words such as "umm" and "aah," it completed the tasks without the recipients ever realizing it was a bot. As incredible as that sounds, the issue at hand is that the technology can be seen as deceptive, since no ethical rules were put in place. Although the demo was pre-recorded, it has been reported that the bot was unleashed on unsuspecting business staff and that Duplex made no attempt to disclose that it was not a real human. Google could easily have programmed Duplex to open with, "Hi, I am Google Assistant..." but instead chose to prioritize the "wow" factor and treat ethics as an afterthought.

Artificial Intelligence is expanding
As the development of Duplex and other A.I. systems continues to grow, it is crucial that ethics be treated as a core part of their design. Deception is just one example; others include sexism and racism arising from biased training data, and the questionable desirability of robot-human relationships. Ultimately, we need transparency about who is responsible for the behavior of any virtual assistant or robot, regardless of whether it is autonomous. Only if we establish these ethical principles can we fully take advantage of A.I.; otherwise, we are only stripping ourselves of human context and societal consideration.

2 comments:

  1. Hi Steven, I really enjoyed reading your post, and I can see why A.I. could have ethical implications if no policies are in place in the future. I liked how you introduced the reading at the beginning to make clear how your topic connects to the readings. Something you could have done to really hit home your point is to add another reading or another quote from the reading.

  2. Hi Steven, great take on the need for ethics in A.I. However, I think you could have pushed for a stronger approach that more closely follows your title. You mention that the Google A.I. mimics humans, and you describe the process of it doing so. However, I think your point on A.I. ethics is not fleshed out as well as it could be. Besides the deception of the staff involved, you could also talk more about the biases and unethical tendencies that A.I. can possess. What else is there in mimicking humans that these A.I. do that might be worrisome? I think being able to answer that question would improve the connection your blog post has with your title. Besides that, great job!

