
Bayesian Learning of Neural Network Architectures

2019-01-27
Georgi Dikov, Patrick van der Smagt, Justin Bayer

Abstract

In this paper we propose a Bayesian method for estimating architectural parameters of neural networks, namely layer size and network depth. We do this by learning concrete distributions over these parameters. Our results show that regular networks with a learnt structure can generalise better on small datasets, while fully stochastic networks can be more robust to parameter initialisation. The proposed method relies on standard neural variational learning and, unlike randomised architecture search, does not require retraining of the model, thus keeping the computational overhead to a minimum.
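As a rough illustration of the idea behind the abstract, the sketch below shows how a Concrete (Gumbel-Softmax) relaxation over a discrete architectural choice, here the number of active units in one layer, can be made differentiable and trained alongside the weights. This is not the authors' implementation; the class and parameter names are hypothetical, and it assumes PyTorch.

```python
# Minimal sketch (not the paper's code): a Concrete/Gumbel-Softmax relaxation
# over the number of active units in a single layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConcreteLayerSize(nn.Module):
    """Learns a distribution over layer sizes {1, ..., max_units} and gates
    the layer output with a soft mask sampled from it."""

    def __init__(self, in_features, max_units, temperature=0.5):
        super().__init__()
        self.linear = nn.Linear(in_features, max_units)
        # Unnormalised logits of the categorical over possible layer sizes.
        self.size_logits = nn.Parameter(torch.zeros(max_units))
        self.temperature = temperature

    def forward(self, x):
        # Sample a relaxed one-hot vector over layer sizes (Concrete distribution).
        size_sample = F.gumbel_softmax(self.size_logits, tau=self.temperature)
        # Convert it to a soft "keep the first k units" mask via a reversed cumsum:
        # if the sample peaks at index k, units 0..k stay near 1, the rest near 0.
        mask = torch.flip(torch.cumsum(torch.flip(size_sample, [0]), 0), [0])
        return F.relu(self.linear(x)) * mask


# Usage: a new mask is sampled on every forward pass, so the size logits
# receive gradients through the reparameterised sample.
layer = ConcreteLayerSize(in_features=16, max_units=64)
out = layer(torch.randn(8, 16))  # shape (8, 64), trailing units softly zeroed
```

Because the relaxation keeps the architectural choice inside the computation graph, the same variational objective used for the network weights can update the size logits, which is what lets the method avoid the retraining loop of randomised architecture search.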

URL

https://arxiv.org/abs/1901.04436

PDF

https://arxiv.org/pdf/1901.04436

