
Multimodal Explanations by Predicting Counterfactuality in Videos

2019-05-20
Atsushi Kanehira, Kentaro Takemoto, Sho Inayoshi, Tatsuya Harada

Abstract

This study addresses generating counterfactual explanations with multimodal information. Our goal is not only to classify a video into a specific category, but also to explain why it is not categorized into a specific class, using combinations of visual and linguistic information. The requirements that the expected output should satisfy are referred to as counterfactuality in this paper: (1) compatibility of visual-linguistic explanations, and (2) positiveness/negativeness for the specific positive/negative class. Exploiting a spatio-temporal region (tube) and an attribute as visual and linguistic explanations, respectively, the explanation model is trained to predict the counterfactuality for possible combinations of multimodal information in a post-hoc manner. The optimization problem that appears during training/inference can be solved efficiently by inserting a novel neural network layer, namely the maximum subpath layer. We demonstrated the effectiveness of this method by comparison with a baseline on action recognition datasets extended for this task. Moreover, we provide information-theoretic insight into the proposed method.
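The abstract's "maximum subpath" optimization can be illustrated with a small sketch. Assuming the layer reduces to finding the contiguous temporal segment with maximal summed per-frame score (a Kadane-style maximum-subarray search; the function name and reduction are illustrative assumptions, not the paper's actual API), it might look like:

```python
def max_subpath(scores):
    """Return (best_sum, start, end) for the contiguous segment of
    per-frame `scores` with the maximum total; `end` is inclusive.

    Hypothetical sketch of a maximum-subpath selection, not the
    authors' implementation.
    """
    best_sum = float("-inf")
    best_start = best_end = 0
    cur_sum = 0.0
    cur_start = 0
    for i, s in enumerate(scores):
        if cur_sum <= 0:
            # Restarting a new segment here beats extending the old one.
            cur_sum = s
            cur_start = i
        else:
            cur_sum += s
        if cur_sum > best_sum:
            best_sum, best_start, best_end = cur_sum, cur_start, i
    return best_sum, best_start, best_end
```

For example, `max_subpath([-1.0, 2.0, 3.0, -5.0, 4.0])` selects frames 1 through 2 with total score 5.0. Running this search as a differentiable-friendly layer over score maps is what would let the model pick the most counterfactual tube efficiently.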

URL

http://arxiv.org/abs/1812.01263

PDF

http://arxiv.org/pdf/1812.01263
