
Automatically Composing Representation Transformations as a Means for Generalization

2019-05-08
Michael B. Chang, Abhishek Gupta, Sergey Levine, Thomas L. Griffiths

Abstract

A generally intelligent learner should generalize to more complex tasks than it has previously encountered, but the two common paradigms in machine learning – either training a separate learner per task or training a single learner for all tasks – both have difficulty with such generalization because they do not leverage the compositional structure of the task distribution. This paper introduces the compositional problem graph as a broadly applicable formalism to relate tasks of different complexity in terms of problems with shared subproblems. We propose the compositional generalization problem for measuring how readily old knowledge can be reused and hence built upon. As a first step for tackling compositional generalization, we introduce the compositional recursive learner, a domain-general framework for learning algorithmic procedures for composing representation transformations, producing a learner that reasons about what computation to execute by making analogies to previously seen problems. We show on a symbolic and a high-dimensional domain that our compositional approach can generalize to more complex problems than the learner has previously encountered, whereas baselines that are not explicitly compositional do not.
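
To make the recursion concrete, here is a minimal, hypothetical sketch of the loop the abstract describes: a controller inspects the current representation, picks a reusable transformation module (or halts), and the chosen module rewrites the problem into a smaller subproblem. All names here (crl_solve, reduce_leftmost, the toy one-digit-addition domain) are illustrative assumptions, not the authors' actual implementation.

from typing import Callable, Dict
import re

HALT = "halt"

def crl_solve(x: str,
              modules: Dict[str, Callable[[str], str]],
              controller: Callable[[str], str],
              max_steps: int = 20) -> str:
    """Apply controller-chosen modules until the controller halts."""
    for _ in range(max_steps):
        action = controller(x)
        if action == HALT:
            break
        x = modules[action](x)  # one representation transformation
    return x

def reduce_leftmost(s: str) -> str:
    """Evaluate the leftmost 'a+b' pair, shrinking the problem by one term."""
    m = re.match(r"(\d+)\+(\d+)(.*)", s)
    return str(int(m.group(1)) + int(m.group(2))) + m.group(3)

modules = {"reduce": reduce_leftmost}
controller = lambda s: "reduce" if "+" in s else HALT

print(crl_solve("2+3+4+5", modules, controller))  # prints "14"

Because the same module is reused at every step, the step budget rather than the architecture bounds problem complexity: a controller trained on short sums can, in principle, compose the same transformation more times to handle longer expressions than it has previously encountered, which is the compositional generalization the paper measures.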

URL

http://arxiv.org/abs/1807.04640

PDF

http://arxiv.org/pdf/1807.04640

