
The role of a layer in deep neural networks: a Gaussian Process perspective

2019-02-06
Oded Ben-David, Zohar Ringel

Abstract

A fundamental question in deep learning concerns the role played by individual layers in a deep neural network (DNN) and the transferable properties of the data representations which they learn. To the extent that layers have clear roles, one should be able to optimize them separately using layer-wise loss functions. Such loss functions would describe the set of good data representations at each depth of the network and provide a target for layer-wise greedy optimization (LEGO). Here we introduce the Deep Gaussian Layer-wise loss functions (DGLs) which, we believe, are the first supervised layer-wise loss functions that are both explicit and competitive in terms of accuracy. The DGLs have a solid theoretical foundation, they become exact for wide DNNs, and we find that they can monitor standard end-to-end training. Being highly structured and symmetric, the DGLs provide a promising analytic route to understanding the internal representations generated by DNNs.
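To make the layer-wise greedy optimization (LEGO) setup concrete, here is a minimal sketch in NumPy. It does not implement the paper's DGL objective; instead it uses a hypothetical stand-in layer-wise loss (the error of the best linear readout from a layer's output) and trains each layer separately with all earlier layers frozen, which is the general scheme the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-class problem with one-hot targets (hypothetical stand-in data).
X = rng.standard_normal((200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]

def layer_loss(H, T):
    """Stand-in supervised layer-wise loss: mean squared error of the best
    linear readout from representation H. This is NOT the paper's DGL,
    just a generic proxy for 'how good is this layer's representation'."""
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return np.mean((H @ W - T) ** 2)

def train_layer(H_in, width, n_trials=50):
    """Greedy step: among random candidate weight matrices, keep the one
    whose output representation minimizes the layer-wise loss.
    (Random search keeps the sketch dependency-free; in practice one
    would use gradient descent on the layer-wise loss.)"""
    best_W, best_loss = None, np.inf
    for _ in range(n_trials):
        W = rng.standard_normal((H_in.shape[1], width)) / np.sqrt(H_in.shape[1])
        loss = layer_loss(np.tanh(H_in @ W), T)
        if loss < best_loss:
            best_W, best_loss = W, loss
    return best_W, best_loss

# LEGO-style training: each layer is optimized against its own loss,
# with all earlier layers frozen, rather than by end-to-end backprop.
H = X
for depth in range(3):
    W, loss = train_layer(H, width=16)
    H = np.tanh(H @ W)
    print(f"layer {depth}: layer-wise loss = {loss:.4f}")
```

The point of the sketch is the control flow: because each layer has its own explicit loss, the layers can be optimized one at a time, which is what makes an explicit, competitive layer-wise loss such as the DGL useful.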

URL

http://arxiv.org/abs/1902.02354

PDF

http://arxiv.org/pdf/1902.02354
