Friday, January 24, 2020

Artificial intelligence revolution and ethics

“Beep! Beep! Beep! Beep!” “Hey Siri, turn off the alarm.” “Hey Siri, how is the weather today?” “It might get a bit slippery out there.” This is how my day gets started. AI is already reliably taking care of everyday tasks like these. Artificial intelligence (AI) is intelligence exhibited by machines, and AI research defines itself as the study of “intelligent agents.” In the future, AI will help us do things more efficiently and make our lives easier. “Artificial intelligence agents will impact our lives in every society on Earth. Technology and commerce will see to that,” Alberto Ibargüen, the president of the Knight Foundation, said in a statement. “Since even algorithms have parents and those parents have values that they instill in their algorithmic progeny, we want to influence the outcome by ensuring ethical behavior, and governance that includes the interests of the diverse communities that will be affected.”

As stronger AI develops, there are some important things we should be careful about. Moor's Law states: "As technological revolutions increase their social impact, ethical problems increase." The Three Laws of Robotics are rules devised by the science fiction author Isaac Asimov, introduced in his 1942 short story "Runaround". These three basic laws offer a snappy moral framework that could help us control murderous robots. But is that enough for our high-speed technology revolution?

The European Union recently published a set of guidelines on how companies and governments should develop ethical applications of artificial intelligence. For example, if an AI system diagnoses you with cancer sometime in the future, the EU's guidelines would want to make sure that several things take place: that the software wasn't biased by your race or gender, that it didn't override the objections of a human doctor, and that it gave the patient the option to have the diagnosis explained to them. Ethical rules like these are important to keep in mind as we develop stronger technology.

In order to address concerns related to the ethics and safety of AI, a group led by LinkedIn co-founder Reid Hoffman announced the creation of a $27 million research fund called the Ethics and Governance of Artificial Intelligence Fund. Funds like this will help AI grow in a healthy direction.

“To be a leader in ethical AI, you first have to lead in AI itself,” Eline Chivot, a senior policy analyst at the Center for Data Innovation think tank, told The Verge.

2 comments:

  1. You do a good job of giving a personal example, but you might want to consider making an argument earlier in the blog. It is not until the third paragraph that ethics is brought up. The blog also seems a little choppy because each paragraph does not really link to the others in a smooth fashion. You mention Moor once, but lean on other evidence more. You might consider talking a bit more about the readings from class in order to help your readers understand where you are coming from.

  2. I agree with Sean in that your examples were great, but overall the piece needs a little more cohesion. There seems to be no very clear driving argument that is then backed by a reading. You speak about Moor briefly, about his claim that ethical challenges will increase as technology advances. Maybe talk about how these ethical decisions will be made. In our first reading, Floridi addresses ways in which we could tackle these ethical problems in the world of ICTs.
