
Are Disentangled Representations Helpful for Abstract Visual Reasoning?

2019-05-29
Sjoerd van Steenkiste, Francesco Locatello, Jürgen Schmidhuber, Olivier Bachem

Abstract

A disentangled representation encodes information about the salient factors of variation in the data independently. Although it is often argued that this representational format is useful for learning to solve many real-world down-stream tasks, there is little empirical evidence to support this claim. In this paper, we conduct a large-scale study that investigates whether disentangled representations are more suitable for abstract reasoning tasks. Using two new tasks similar to Raven's Progressive Matrices, we evaluate the usefulness of the representations learned by 360 state-of-the-art unsupervised disentanglement models. Based on these representations, we train 3600 abstract reasoning models and observe that disentangled representations do in fact lead to better down-stream performance. In particular, they appear to enable quicker learning using fewer samples.

URL

http://arxiv.org/abs/1905.12506

PDF

http://arxiv.org/pdf/1905.12506

