
Improving Visual Relation Detection using Depth Maps

2019-05-02
Sahand Sharifzadeh, Max Berrendorf, Volker Tresp

Abstract

State-of-the-art visual relation detection methods have relied on features extracted from RGB images, including objects’ 2D positions. In this paper, we argue that the 3D positions of objects in space can provide additional valuable information about object relations. This information helps not only to detect spatial relations, such as “standing behind”, but also non-spatial relations, such as “holding”. Since 3D information of a scene is not easily accessible, we propose incorporating a pre-trained RGB-to-Depth model within visual relation detection frameworks. We discuss different feature extraction strategies from depth maps and show their critical role in relation detection. Our experiments confirm that the performance of state-of-the-art visual relation detection approaches can be significantly improved by utilizing depth map information.
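As a rough illustration of the idea in the abstract, the sketch below fuses depth-derived features with RGB features for a (subject, object) pair by simple concatenation before a relation classifier. All names, feature dimensions, and relation labels here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-object feature vectors (dimensions are illustrative).
rgb_feat_subj = rng.standard_normal(256)    # RGB appearance features of the subject
rgb_feat_obj = rng.standard_normal(256)     # RGB appearance features of the object
depth_feat_subj = rng.standard_normal(64)   # features from the predicted depth map
depth_feat_obj = rng.standard_normal(64)

# Late fusion by concatenation: one joint feature for the (subject, object) pair.
pair_feat = np.concatenate(
    [rgb_feat_subj, rgb_feat_obj, depth_feat_subj, depth_feat_obj]
)

# A toy linear relation classifier over hypothetical relation labels.
relations = ["standing behind", "holding", "next to"]
W = rng.standard_normal((len(relations), pair_feat.size)) * 0.01
logits = W @ pair_feat
predicted = relations[int(np.argmax(logits))]
print(pair_feat.shape, predicted)
```

In practice the depth features would come from a network over the output of the pre-trained RGB-to-Depth model, and the classifier would be learned; this snippet only shows the fusion-by-concatenation pattern.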

URL

http://arxiv.org/abs/1905.00966

PDF

http://arxiv.org/pdf/1905.00966
