
The Responsibility Quantification Model of Human Interaction with Automation

2019-04-30
Nir Douer, Joachim Meyer

Abstract

Intelligent systems and advanced automation are involved in information collection and evaluation, in decision-making, and in the implementation of chosen actions. In such systems, human responsibility becomes equivocal. Understanding human responsibility is particularly important when intelligent autonomous systems can harm people, as with autonomous vehicles or, most notably, with Advanced Weapon Systems (AWS). Using information theory, we develop a responsibility quantification (ResQu) model of human involvement in intelligent automated systems and demonstrate its application to decisions regarding AWS. The analysis reveals that human comparative responsibility is often low, even when major functions are allocated to the human. Thus, broadly stated policies of keeping humans in the loop and having meaningful human control are misleading and cannot truly direct decisions on how to involve humans in intelligent systems and advanced automation. Our responsibility model can guide system design decisions and can aid policy and legal decisions regarding human responsibility in intelligent systems.
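
To make the information-theoretic idea concrete, here is a minimal toy sketch. It assumes, purely for illustration, that the human's comparative responsibility is the share of the entropy of the final action A that is not explained by the automation's output Z, i.e. H(A|Z) / H(A). The variable names, the joint distribution, and this specific ratio are assumptions of the sketch, not necessarily the exact ResQu measure defined in the paper.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability vector p."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def conditional_entropy(joint):
    """H(A | Z) for a joint table P(Z = row, A = column)."""
    h = 0.0
    for row in joint:
        p_z = row.sum()
        if p_z > 0:
            h += p_z * entropy(row / p_z)
    return h

def responsibility_share(joint):
    """Illustrative responsibility share: H(A | Z) / H(A)."""
    h_a = entropy(joint.sum(axis=0))  # marginal entropy of the human's action
    return conditional_entropy(joint) / h_a if h_a > 0 else 0.0

# Toy joint distribution: rows = automation output (no alert, alert),
# columns = human action (hold, engage). The human follows the
# automation's recommendation 96% of the time.
joint = np.array([[0.48, 0.02],
                  [0.02, 0.48]])
print(f"illustrative human responsibility share: {responsibility_share(joint):.2f}")
```

In this toy case the human nearly always follows the automation, so only about a quarter of the action's uncertainty remains attributable to the human. This mirrors the abstract's point that human comparative responsibility can be low even when the final action is formally allocated to the human.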

URL

http://arxiv.org/abs/1810.12644

PDF

http://arxiv.org/pdf/1810.12644

