Friday, February 7, 2020

Why We Should Be Careful When Developing Artificial Agents

‘‘Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?’’ This is a question I often wonder about. We all want artificial agents to become smarter and smarter, and if they can think and learn, they will probably come to know themselves better than their designers do. At that point, they might find a better way to program themselves, just as a human can self-adjust. But there is a downside: the artificial agent will have greater control over its own behavior, and that is where morality comes in.


Grodzinsky’s article “The Ethics of Designing Artificial Agents” addresses this question. The authors suggest that we distinguish LoA1 (the user view) from LoA2 (the designer view). Distinguishing these two views allows us to control the granularity at which we analyze what Floridi and Sanders call the ‘‘observables.’’ If the designer has all the details of the computation available for inspection, the mapping at LoA2 simplifies to a mapping from an initial state, which includes new values due to both external and temporal inputs, to a new state whose values are externally observable at LoA1. In this way, all the actions of any artificial agent can be modeled as a mathematical function between states.
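To make the state-mapping idea concrete, here is a minimal sketch in Python. The names (Loa2State, step, observe) are my own illustrative inventions, not notation from the article; the point is only that a designer-level transition function, together with a projection onto the observables, yields the user-level view.

```python
# A minimal sketch of the LoA2/LoA1 idea: the agent's behavior is a
# function from states to states (the designer view), and the user
# only sees a projection of each state (the user view). All names
# here are illustrative assumptions, not terminology from the paper.

from dataclasses import dataclass

@dataclass(frozen=True)
class Loa2State:
    """Full designer-level state: internal values plus observables."""
    internal_counter: int   # hidden from the user at LoA1
    displayed_output: str   # externally observable at LoA1

def step(state: Loa2State, external_input: str) -> Loa2State:
    """LoA2 mapping: (state, inputs) -> new state, as a pure function."""
    counter = state.internal_counter + 1
    return Loa2State(counter, f"{external_input} (response #{counter})")

def observe(state: Loa2State) -> str:
    """LoA1 view: the user sees only the externally observable part."""
    return state.displayed_output

s = Loa2State(0, "")
s = step(s, "hello")
print(observe(s))  # the user sees only "hello (response #1)"
```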


To understand how a designer works on artificial agents, we need to consider learning and intentionality in artificial agents. As for learning, there are three types: the learning of facts, the learning of new processes, and the discovery of new processes. In each case, the agent can change its subsequent behavior based on new information that is either purposefully given to the agent by a designer or generated by the agent itself in response to environmental information, as sketched below.
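As a rough illustration of the first type, the learning of facts, here is a hypothetical Python sketch of an agent whose subsequent behavior changes once it receives new information; the class and method names are my own, not the article's.

```python
# Hypothetical sketch: an agent that learns facts and changes its
# subsequent behavior accordingly. Names are illustrative only.

class FactLearningAgent:
    def __init__(self):
        self.facts = {}  # what the agent has learned so far

    def learn(self, key, value):
        """New information, from the designer or from the environment."""
        self.facts[key] = value

    def act(self, query):
        """Behavior depends on the facts learned so far."""
        return self.facts.get(query, "I don't know yet.")

agent = FactLearningAgent()
print(agent.act("capital of France"))       # "I don't know yet."
agent.learn("capital of France", "Paris")   # designer-supplied fact
print(agent.act("capital of France"))       # "Paris" -- behavior changed
```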

Designers have an increased burden of care in producing artificial agents that exhibit learning and intentionality. The more an artificial agent exhibits learning and intentionality, the harder it becomes for its designer to predict the agent's future behavior accurately. Designers should therefore hold themselves to a very high ethical standard when they design artificial agents, to keep the balance between smarter and safer.

3 comments:

  1. The blog has a great opening paragraph starting with the author's question, which demonstrates his critical thinking. The question is closely related to Grodzinsky's viewpoint, and he explained Grodzinsky's opinion clearly. Besides, the blog has a good structure. One suggestion is that the author could make a stronger ending.

  2. What happens when the accountability gets removed from the designer? Is there a possible future in which Artificial Agents hold responsibility over their actions, and can be punished for them? We currently hold computers to a completely different ethical standard. Computers, essentially, can do no wrong; faults must be attributed to humans. Ironically, so much of science fiction is interested in "evil" machines that turn sentient. What do you think is the turning point at which an artificial agent, which has no accountability, suddenly becomes accountable?

  3. This was a very thought-provoking post. Some parts of the blog were just a bit confusing to me. One part is the thesis statement "What is the disadvantage between this, the artificial agent will have higher access, so the Morality comes in?". I do not know what you are trying to say here, and this is the sentence that should make your whole paper clear. Additionally, you need to assume that the reader has not read the same posts we have. Introduce the author and their views before incorporating them into your writing.

