
Quantifying the visual concreteness of words and topics in multimodal datasets

2018-05-23
Jack Hessel, David Mimno, Lillian Lee

Abstract

Multimodal machine learning algorithms aim to learn visual-textual correspondences. Previous work suggests that concepts with concrete visual manifestations may be easier to learn than concepts with abstract ones. We give an algorithm for automatically computing the visual concreteness of words and topics within multimodal datasets. We apply the approach in four settings, ranging from image captions to images/text scraped from historical books. In addition to enabling explorations of concepts in multimodal datasets, our concreteness scores predict the capacity of machine learning algorithms to learn textual/visual relationships. We find that 1) concrete concepts are indeed easier to learn; 2) the large number of algorithms we consider have similar failure cases; 3) the precise positive relationship between concreteness and performance varies between datasets. We conclude with recommendations for using concreteness scores to facilitate future multimodal research.
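The core idea of the paper is to score a word's visual concreteness by how tightly the images associated with it cluster in visual feature space. As a rough illustration of that nearest-neighbor intuition, here is a minimal sketch in Python: it scores a word by how often the visual nearest neighbors of its images also contain that word, normalized by the word's chance rate. The function `concreteness_scores` and the inputs `image_feats` (one visual feature vector per image) and `image_tokens` (one token set per image) are hypothetical names chosen for illustration; this is an approximation of the idea, not the authors' exact estimator.

```python
# Hypothetical sketch of a chance-normalized visual concreteness score.
# Assumes `image_feats` is an (n_images, d) array of visual features and
# `image_tokens` is a list of n_images token sets describing each image.
from collections import Counter

from sklearn.neighbors import NearestNeighbors


def concreteness_scores(image_feats, image_tokens, k=25, min_count=5):
    n = len(image_tokens)

    # k nearest visual neighbors for every image (k+1 because the
    # nearest neighbor of an image is the image itself).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(image_feats)
    _, idx = nn.kneighbors(image_feats)
    neighbors = idx[:, 1:]  # drop the self-match in column 0

    # Document frequency of each word: its chance rate of appearing
    # among any image's neighbors under random assignment.
    df = Counter(w for toks in image_tokens for w in set(toks))

    scores = {}
    for w, count in df.items():
        if count < min_count:
            continue  # skip rare words; their estimates are noisy
        # Observed rate: among images containing w, what fraction of
        # their visual neighbors also contain w?
        hits, total = 0, 0
        for i, toks in enumerate(image_tokens):
            if w not in toks:
                continue
            hits += sum(w in image_tokens[j] for j in neighbors[i])
            total += k
        observed = hits / total
        expected = count / n  # chance rate of w over the whole corpus
        scores[w] = observed / expected  # >1 means visually clustered
    return scores
```

Under this sketch, a visually concrete word like "tiger" should score well above 1 (images mentioning it are close together in feature space), while an abstract word like "freedom" should score near 1, since its images are scattered roughly at chance.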

URL

https://arxiv.org/abs/1804.06786

PDF

https://arxiv.org/pdf/1804.06786

