Thursday, February 20, 2020

Why We Should Be Careful When Developing AA

‘‘Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?’’ This is a question I often wonder about. We all want artificial agents to become smarter and smarter, and if they can think and learn, they will probably come to know themselves better than their designer does. At that point, they might find a better way to program themselves: just as a human can self-adjust, they can do it as well. But what is the downside? The artificial agent will have higher-level access to its own programming, and this is where morality comes in.




Grodzinsky is a Professor Emerita of Computer Science and Information Technology at Sacred Heart University. Her article “The Ethics of Designing Artificial Agents” discusses ethical problems. She suggests we should distinguish LoA1 (the user view) from LoA2 (the designer view). We distinguish between these two views to control the granularity at which we analyze what Floridi calls the ‘‘observables.’’ Floridi is currently Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the University of Oxford; he is deeply engaged with emerging policy initiatives on the socio-ethical value and implications of digital technologies and their applications. So, suppose the designer has all the details of the computation available for inspection. Then the mapping at LoA2 can be simplified to the process of mapping from an initial state, which includes new values due to inputs (both external and temporal), to a new state that includes values externally observable at LoA1. In this way, all the actions of any artificial agent can be described as a mathematical function between states.
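The state-mapping idea above can be sketched as a simple transition function. This is only an illustration of the general idea, not Grodzinsky and Floridi's actual formalism; the names (`step`, `observe`, the state fields) are mine.

```python
# LoA2 (designer view): the agent's behavior is a pure function mapping an
# initial state plus new input values (external and temporal) to a new state.
# LoA1 (user view): only part of that state is externally observable.
# All names and fields here are hypothetical, for illustration only.

def step(state: dict, external_input: int, time_input: int) -> dict:
    """Map an initial state (plus new inputs) to a new state."""
    new_state = dict(state)  # pure function: do not mutate the old state
    new_state["counter"] = state.get("counter", 0) + external_input
    new_state["clock"] = time_input
    return new_state

def observe(state: dict) -> int:
    """The user at LoA1 sees only the externally observable value."""
    return state["counter"]

s0 = {"counter": 0, "clock": 0}
s1 = step(s0, external_input=3, time_input=1)
print(observe(s1))  # the user sees 3, not the whole internal state
```

The point of writing it this way is that, as long as the designer can inspect every detail of the computation, the agent's whole behavior is just this function applied repeatedly.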


To understand how a designer works on artificial agents, we need to understand learning and intentionality in artificial agents. As for learning, there are three types: the learning of facts, the learning of new processes, and the discovery of new processes. Through any of these, the agent can change its subsequent behavior based on new information that is either purposefully given to it by a designer or generated by the agent itself in response to environmental information.
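A toy sketch of that last point: the same agent behaves differently after new information arrives, whether a designer supplied it or the environment did. The class and method names below are my own invention, not terminology from the article.

```python
# Hypothetical illustration: an agent whose subsequent behavior depends on
# facts that were either taught by a designer or sensed from the environment.

class LearningAgent:
    def __init__(self):
        self.facts = {}  # learned facts that alter later behavior

    def teach(self, situation: str, response: str) -> None:
        """Learning of facts purposefully given by the designer."""
        self.facts[situation] = response

    def sense(self, situation: str, response: str) -> None:
        """Facts the agent generates itself from environmental information."""
        self.facts[situation] = response

    def act(self, situation: str) -> str:
        """Subsequent behavior depends on what has been learned so far."""
        return self.facts.get(situation, "explore")

agent = LearningAgent()
print(agent.act("obstacle"))      # before learning: "explore"
agent.sense("obstacle", "avoid")  # new environmental information
print(agent.act("obstacle"))      # after learning: "avoid"
```

Even in this trivial form, the designer who wrote `act` can no longer predict the agent's behavior without also knowing what it has sensed, which is exactly why learning raises the responsibility question.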

We Need to Develop Responsible Artificial Agents
Designers have an increased burden of care in producing artificial agents that exhibit learning and intentionality. The more an artificial agent exhibits learning and intentionality, the more difficult it will be for its designer to accurately predict the agent’s future behavior. Designers should hold themselves to a very high ethical standard when they design artificial agents, to keep the balance between making them smarter and keeping them safe. Therefore, we need to shift our focus from rapidly developing the most advanced AA to developing responsible AA, where organizations exercise strict control, supervision, and monitoring of the performance and actions of AA.

1 comment:

  1. I like the last paragraph of your new post. In this part you add your own opinion on ethics and AA. But I think you could revise the post further. You could also adjust your tone: since your post is academic, a more formal tone would make it easier to read.

