
Out-of-the-box neural networks can support combinatorial generalization

2019-03-29
Ivan Vankov, Jeffrey Bowers

Abstract

Combinatorial generalization - the ability to understand and produce novel combinations of already familiar elements - is considered a core capacity of the human mind and a major challenge for neural network models. A significant body of research suggests that conventional neural networks cannot solve this problem unless they are endowed with mechanisms specifically engineered for representing symbols. In this paper we introduce a novel way of representing symbolic structures in connectionist terms - the vectors approach to representing symbols (VARS) - which allows standard neural architectures to be trained to encode symbolic knowledge explicitly at their output layers. In two simulations, we show that out-of-the-box neural networks not only can learn to produce VARS representations, but in doing so achieve combinatorial generalization. This adds to other recent work that has shown improved combinatorial generalization under specific training conditions, and raises the question of whether special mechanisms are indeed needed to support symbolic processing.
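To make the core idea concrete, here is a minimal sketch of a slot-structured vector encoding of symbols in the general spirit of the abstract: each familiar symbol gets a fixed vector, and a structure is represented by concatenating its fillers' vectors in role order at the output layer, so novel combinations of familiar symbols remain decodable slot by slot. This is not the authors' actual VARS scheme; the vocabulary, dimensions, and function names below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a VARS-like output code (illustrative, not the paper's code).
rng = np.random.default_rng(0)

SYMBOLS = ["john", "mary", "loves", "fears"]  # hypothetical vocabulary
DIM = 16                                      # assumed per-symbol vector size

# Assign each familiar symbol a fixed dense vector.
codebook = {s: rng.standard_normal(DIM) for s in SYMBOLS}

def encode(structure):
    """Encode an ordered role-filler structure, e.g. ("john", "loves", "mary"),
    as the concatenation of its fillers' vectors: one output slot per role."""
    return np.concatenate([codebook[s] for s in structure])

def decode(vector, n_roles):
    """Read each role slot back out by nearest-neighbour (cosine) lookup."""
    slots = vector.reshape(n_roles, DIM)
    decoded = []
    for slot in slots:
        best = max(
            SYMBOLS,
            key=lambda s: float(slot @ codebook[s])
            / (np.linalg.norm(slot) * np.linalg.norm(codebook[s])),
        )
        decoded.append(best)
    return tuple(decoded)

# A novel combination of familiar symbols decodes exactly, because each role
# occupies an explicit, position-coded slot of the output vector.
target = encode(("mary", "fears", "john"))
print(decode(target, n_roles=3))  # -> ('mary', 'fears', 'john')
```

In a setup like this, a standard feedforward network would be trained to produce such concatenated target vectors at its output layer; the combinatorial-generalization test is then whether it produces correct slot codes for role-filler combinations never seen during training.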

URL

http://arxiv.org/abs/1903.12354

PDF

http://arxiv.org/pdf/1903.12354

