Fisher-Rao Metric, Geometry, and Complexity of Neural Networks

2019-02-23
Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, James Stokes

Abstract

We study the relationship between geometry and capacity measures for deep neural networks from an invariance viewpoint. We introduce a new notion of capacity — the Fisher-Rao norm — that possesses desirable invariance properties and is motivated by Information Geometry. We discover an analytical characterization of the new capacity measure, through which we establish norm-comparison inequalities and further show that the new measure serves as an umbrella for several existing norm-based complexity measures. We discuss upper bounds on the generalization error induced by the proposed measure. Extensive numerical experiments on CIFAR-10 support our theoretical findings. Our theoretical analysis rests on a key structural lemma about partial derivatives of multi-layer rectifier networks.
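As a rough illustration of the quantity the abstract refers to: the Fisher-Rao norm is the quadratic form ||theta||_fr^2 = theta^T I(theta) theta, where I(theta) is the Fisher information matrix of the model parameters. The sketch below is a minimal Monte Carlo estimate of this quadratic form for a small bias-free rectifier network, assuming an empirical (loss-gradient) approximation of the Fisher information; the network size, synthetic data, and cross-entropy loss are illustrative choices, not the paper's experimental setup on CIFAR-10.

# Sketch: Monte Carlo estimate of the Fisher-Rao norm
# ||theta||_fr^2 = theta^T I(theta) theta = E[(<theta, grad_theta loss>)^2],
# using the empirical (loss-gradient outer-product) Fisher as an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A small bias-free multi-layer rectifier network, the model class studied in the paper.
model = nn.Sequential(
    nn.Linear(10, 32, bias=False), nn.ReLU(),
    nn.Linear(32, 32, bias=False), nn.ReLU(),
    nn.Linear(32, 3, bias=False),
)

# Synthetic data standing in for the true input distribution.
X = torch.randn(256, 10)
y = torch.randint(0, 3, (256,))

params = list(model.parameters())

def fisher_rao_norm_sq(model, X, y):
    # Averages (<theta, d loss / d theta>)^2 over examples, which equals
    # theta^T I(theta) theta when I(theta) is the empirical Fisher.
    total = 0.0
    for xi, yi in zip(X, y):
        loss = F.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        inner = sum((g * p).sum() for g, p in zip(grads, params))
        total += inner.item() ** 2
    return total / len(X)

print("estimated squared Fisher-Rao norm:", fisher_rao_norm_sq(model, X, y))

The paper's structural lemma (a homogeneity property of the partial derivatives of bias-free rectifier networks) yields an analytical characterization that avoids forming gradients layer by layer; the direct per-example estimate above is only meant to make the definition concrete.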

URL

http://arxiv.org/abs/1711.01530

PDF

http://arxiv.org/pdf/1711.01530

