papers AI Learner

Formal derivation of Mesh Neural Networks with their Forward-Only gradient Propagation

2019-05-16
Federico A. Galatolo, Mario G.C.A. Cimino, Gigliola Vaglini

Abstract

This paper proposes the Mesh Neural Network (MNN), a novel architecture that allows neurons to be connected in any topology in order to route information efficiently. In MNNs, information is propagated between neurons through a state transition function. State and error gradients are then computed directly from the state updates, without a backward computation pass. The MNN architecture and its error propagation schema are formalized and derived in tensor algebra. The proposed computational model can fully supply a gradient descent process and, due to its expressivity and training efficiency, is suitable for very large scale NNs, compared with NNs based on back-propagation over computational graphs.
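The core idea (gradients obtained during the forward state updates, with no backward pass over a computational graph) can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's exact formulation: it models a "mesh" of n neurons with arbitrary connectivity in a weight matrix W, a state transition s ← tanh(W·s), and a sensitivity ds/dW[i, j] propagated forward alongside the state itself.

```python
import numpy as np

def forward_with_sensitivity(W, s0, i, j, steps):
    """Propagate the state s and its sensitivity d = ds/dW[i, j] forward.

    Hypothetical state transition s_{t+1} = tanh(W @ s_t); the derivative
    w.r.t. a single weight W[i, j] is carried along with the state, so no
    backward pass is needed (forward-mode differentiation).
    """
    n = len(s0)
    s = s0.copy()
    d = np.zeros(n)              # ds_0 / dW[i, j] = 0: initial state is constant
    for _ in range(steps):
        pre = W @ s              # pre-activation of every neuron
        dpre = W @ d             # indirect dependence through the previous state
        dpre[i] += s[j]          # direct dependence of pre[i] on W[i, j]
        s = np.tanh(pre)         # state update
        d = (1.0 - s**2) * dpre  # chain rule through tanh (tanh' = 1 - tanh^2)
    return s, d

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.5, size=(n, n))   # arbitrary mesh topology
s0 = rng.normal(size=n)

s, d = forward_with_sensitivity(W, s0, i=1, j=2, steps=4)

# Sanity check against a finite-difference estimate of ds/dW[1, 2]
eps = 1e-6
Wp = W.copy()
Wp[1, 2] += eps
sp, _ = forward_with_sensitivity(Wp, s0, i=1, j=2, steps=4)
fd = (sp - s) / eps
print(np.allclose(d, fd, atol=1e-4))
```

Propagating one sensitivity per scalar parameter is expensive for large W; the point of the sketch is only that the gradient information flows in the same direction as the state, which is the property the paper formalizes in tensor algebra.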

URL

http://arxiv.org/abs/1905.06684

PDF

http://arxiv.org/pdf/1905.06684
