
Graph Spectral Regularization for Neural Network Interpretability

2019-01-24
Alexander Tong, David van Dijk, Jay S. Stanley III, Matthew Amodio, Guy Wolf, Smita Krishnaswamy

Abstract

While neural networks have been used to classify or embed data into lower-dimensional spaces, they are often regarded as black boxes with uninterpretable features. Here we propose a novel class of Graph Spectral Regularizations for making hidden layers interpretable — and even informative of phenotypic state space. This regularization uses a graph Laplacian and encourages activations to be smooth either on a predetermined graph or on a feature-space graph learned from the data itself via co-activations of a hidden layer of the neural network. We show numerous uses for this, including 1) cluster indication, 2) trajectory finding, and 3) visualization in biological and image data sets.
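The penalty the abstract describes — making hidden activations smooth with respect to a graph Laplacian — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the unnormalized Laplacian, and the batch shapes are all assumptions for the example.

```python
import numpy as np

def graph_laplacian(adjacency: np.ndarray) -> np.ndarray:
    """Unnormalized graph Laplacian L = D - A (an assumption; a
    normalized Laplacian could be used instead)."""
    degree = np.diag(adjacency.sum(axis=1))
    return degree - adjacency

def spectral_penalty(activations: np.ndarray, laplacian: np.ndarray) -> float:
    """Smoothness penalty tr(H L H^T), summed over a batch.

    activations: (batch, n_units) hidden-layer activations; the graph
    is defined over the n_units hidden units. For one sample h,
    h^T L h = (1/2) * sum_{ij} A_ij (h_i - h_j)^2, which is small when
    units connected in the graph co-activate.
    """
    return float(np.trace(activations @ laplacian @ activations.T))

# Tiny example: a 3-node path graph over 3 hidden units.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = graph_laplacian(A)

smooth = np.array([[1.0, 1.0, 1.0]])   # constant on the graph
rough  = np.array([[1.0, -1.0, 1.0]])  # oscillates across edges

print(spectral_penalty(smooth, L))  # 0.0  (perfectly smooth)
print(spectral_penalty(rough, L))   # 8.0  (penalized)
```

In training, this scalar would be added to the task loss with a weight hyperparameter, so gradient descent trades task accuracy against smoothness of the hidden representation on the graph.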

URL

http://arxiv.org/abs/1810.00424

PDF

http://arxiv.org/pdf/1810.00424
