
Semantic Image Retrieval via Active Grounding of Visual Situations

2017-10-31
Max H. Quinn, Erik Conser, Jordan M. Witte, Melanie Mitchell

Abstract

We describe a novel architecture for semantic image retrieval—in particular, retrieval of instances of visual situations. Visual situations are concepts such as “a boxing match,” “walking the dog,” “a crowd waiting for a bus,” or “a game of ping-pong,” whose instantiations in images are linked more by their common spatial and semantic structure than by low-level visual similarity. Given a query situation description, our architecture—called Situate—learns models capturing the visual features of expected objects as well as the expected spatial configuration of relationships among those objects. Given a new image, Situate uses these models in an attempt to ground (i.e., locate with a bounding box) each expected component of the situation in the image via an active search procedure. Situate uses the resulting grounding to compute a score indicating the degree to which the new image is judged to contain an instance of the situation. Such scores can be used to rank images in a collection as part of a retrieval system. In the preliminary study described here, we demonstrate the promise of this system by comparing Situate’s performance with that of two baseline methods, as well as with a related semantic image-retrieval system based on “scene graphs.”
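
The abstract describes a three-step retrieval loop: ground each expected component of the situation in the image, score the resulting grounding, and rank the collection by that score. The minimal Python sketch below illustrates that loop in outline only; all names (`Grounding`, `situation_score`, `rank_images`, `ground_situation`) and the simple support-averaging score are assumptions made for illustration, not Situate’s actual interface or scoring function.

```python
# Minimal sketch of the retrieval loop described in the abstract.
# All names here are hypothetical placeholders, not Situate's API.

from dataclasses import dataclass


@dataclass
class Grounding:
    """A bounding box locating one expected situation component."""
    label: str            # e.g., "dog", "leash", "dog-walker"
    box: tuple            # (x, y, width, height)
    support: float        # how well the box matches the learned models


def situation_score(groundings):
    """Combine per-component support into one image-level score.

    A simple stand-in: average support across expected components.
    The paper's actual score also reflects the learned models of
    spatial relationships among objects.
    """
    if not groundings:
        return 0.0
    return sum(g.support for g in groundings) / len(groundings)


def rank_images(images, ground_situation):
    """Rank a collection by how strongly each image is judged to
    contain an instance of the query situation.

    `ground_situation(image)` is assumed to run the active search,
    returning a list of Grounding objects, one per expected component.
    """
    scored = [(situation_score(ground_situation(img)), img) for img in images]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored
```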

URL

https://arxiv.org/abs/1711.00088

PDF

https://arxiv.org/pdf/1711.00088

