
Interpretable Deep Neural Networks for Patient Mortality Prediction: A Consensus-based Approach

2019-05-14
Shaeke Salman, Seyedeh Neelufar Payrovnaziri, Xiuwen Liu, Zhe He

Abstract

Deep neural networks have achieved remarkable success in challenging tasks. However, the black-box approach to training and testing such networks is not acceptable in critical applications. In particular, the existence of adversarial examples and their overgeneralization to irrelevant inputs make it difficult, if not impossible, to explain the decisions of commonly used neural networks. In this paper, we analyze the underlying mechanism of generalization in deep neural networks and propose an ($n$, $k$) consensus algorithm that is insensitive to adversarial examples and, at the same time, able to reject irrelevant samples. Furthermore, the consensus algorithm can improve classification accuracy by using multiple trained deep neural networks. To handle the complexity of deep neural networks, we cluster linear approximations and use the cluster means to capture feature importance. Due to weight symmetry, a small number of clusters is sufficient to produce a robust interpretation. Experimental results on a health dataset demonstrate the effectiveness of our algorithm in enhancing both the prediction accuracy and the interpretability of deep neural network models for one-year patient mortality prediction.
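
The ($n$, $k$) consensus rule is described only at a high level in the abstract, so the following is a minimal sketch of one plausible reading: run an input through $n$ independently trained networks and accept the prediction only if at least $k$ of them agree on the same class, rejecting the input otherwise as a likely adversarial or irrelevant sample. The ensemble interface, the vote counting, and the toy usage below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def consensus_predict(models, x, k):
    """Sketch of an (n, k) consensus decision over an ensemble.

    `models` is a list of n callables, each mapping an input batch to a
    vector of class scores (an assumed interface). The prediction is
    accepted only if at least k of the n models vote for the same class;
    otherwise the sample is rejected (None), which is how adversarial or
    irrelevant inputs would be filtered out.
    """
    votes = [int(np.argmax(m(x))) for m in models]          # each model's predicted class
    labels, counts = np.unique(votes, return_counts=True)   # tally votes per class
    best = int(np.argmax(counts))
    if counts[best] >= k:                                    # consensus reached
        return int(labels[best])
    return None                                              # no (n, k) consensus: reject

# Toy usage with stand-in linear "models" over 10 features and 2 classes.
rng = np.random.default_rng(0)
models = [(lambda W: (lambda x: x @ W))(rng.normal(size=(10, 2))) for _ in range(5)]
x = rng.normal(size=(1, 10))
print(consensus_predict(models, x, k=4))  # predicted class index, or None if rejected
```

In this reading, a larger $k$ trades coverage for robustness: more samples are rejected, but any accepted prediction requires broader agreement across the trained networks.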

URL

http://arxiv.org/abs/1905.05849

PDF

http://arxiv.org/pdf/1905.05849

