
Model-Free Model Reconciliation

2019-03-17
Sarath Sreedharan, Alberto Olmo, Aditya Prasad Mishra, Subbarao Kambhampati

Abstract

Designing agents capable of explaining complex sequential decisions remains a significant open problem in automated decision-making. Recently, there has been considerable interest in developing approaches for generating such explanations for various decision-making paradigms. One such approach is the idea of "explanation as model reconciliation". The framework hypothesizes that a common reason for the user's confusion is a mismatch between the user's model of the task and the one used by the system to generate the decisions. While this is a general framework, most works explicitly built on this explanatory philosophy have focused on settings where the model of the user's knowledge is available in a declarative form. Our goal in this paper is to adapt the model reconciliation approach to cases where such user models are no longer explicitly provided. We present a simple and easy-to-learn labeling model that can help an explainer decide what information could help achieve model reconciliation between the user and the agent.
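The abstract does not detail the labeling model, but the idea it names can be sketched: the explainer holds a set of candidate model differences (pieces of its model the user may be missing) and uses a learned scorer to label which of them are worth revealing. The sketch below is purely illustrative; the class and function names (`ModelDifference`, `select_explanation`), the linear scoring form, and the features are assumptions, not details from the paper.

```python
from dataclasses import dataclass

# Hypothetical sketch of a labeling model for model reconciliation.
# All names and the linear form are illustrative assumptions.

@dataclass(frozen=True)
class ModelDifference:
    """One piece of model information the user may be missing,
    e.g. a precondition or effect of an action in the agent's model."""
    description: str
    features: tuple  # numeric features, e.g. (relevance to the plan, novelty)

def score(diff, weights, bias):
    """Linear score: higher means revealing this difference is judged
    more likely to reconcile the user's model with the agent's."""
    return sum(w * f for w, f in zip(weights, diff.features)) + bias

def select_explanation(candidates, weights, bias, threshold=0.0):
    """Label each candidate and return those judged helpful to reveal."""
    return [d for d in candidates if score(d, weights, bias) > threshold]
```

In a learned setting, `weights` and `bias` would be fit from user feedback on past explanations; here they are fixed only to make the selection step concrete.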

URL

http://arxiv.org/abs/1903.07198

PDF

http://arxiv.org/pdf/1903.07198
