Abstract
While neural networks have been used to classify or embed data into lower-dimensional spaces, they are often regarded as black boxes with uninterpretable features. Here we propose a novel class of Graph Spectral Regularizations for making hidden layers interpretable — and even informative of phenotypic state space. This regularization uses a graph Laplacian and encourages activations to be smooth either on a predetermined graph or on a feature-space graph learned from the data itself via co-activations of a hidden layer of the neural network. We show numerous uses for this, including (1) cluster indication, (2) trajectory finding, and (3) visualization in biological and image data sets.
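As a rough illustration only (not the paper's implementation), the core quantity behind such a regularizer is the Laplacian quadratic form a^T L a, which is small when an activation vector a varies little across the edges of a graph. A minimal NumPy sketch, with hypothetical function names and a toy path graph as the "predetermined graph":

```python
import numpy as np

def laplacian(adj):
    # Unnormalized graph Laplacian L = D - W,
    # where D is the diagonal degree matrix of adjacency W.
    deg = np.diag(adj.sum(axis=1))
    return deg - adj

def spectral_penalty(activations, L):
    # Sum of a^T L a over a batch of activation vectors.
    # Equals the sum over graph edges of (a_i - a_j)^2, so it
    # penalizes activations that vary sharply across edges.
    return np.einsum('bi,ij,bj->', activations, L, activations)

# Toy example: 4 hidden units arranged in a path graph 0-1-2-3.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
L = laplacian(adj)

smooth = np.ones((1, 4))                  # constant across the graph
rough = np.array([[1., -1., 1., -1.]])    # flips sign on every edge

print(spectral_penalty(smooth, L))  # → 0.0
print(spectral_penalty(rough, L))   # → 12.0 (three edges, each (2)^2... wait: (1-(-1))^2 = 4 per edge)
```

In training, this penalty would be added (with a weight) to the task loss, with L built either from a fixed graph or from a graph learned from hidden-layer co-activations, as the abstract describes.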
URL
http://arxiv.org/abs/1810.00424