
Towards a Characterization of Explainable Systems

2019-01-31
Dimitri Bohlender, Maximilian A. Köhl

Abstract

As software-driven systems grow ever more complex and autonomous, building systems that are easily understood becomes a challenge. Accordingly, recent research efforts strive to aid in designing explainable systems. Nevertheless, a common notion of what it takes for a system to be explainable is still missing. To address this problem, we propose a characterization of explainable systems that consolidates existing research. By providing a unified terminology, we lay a basis for the classification of both existing and future research, and for the formulation of precise requirements towards such systems.

URL

http://arxiv.org/abs/1902.03096

PDF

http://arxiv.org/pdf/1902.03096

