Abstract
State-of-the-art visual relation detection methods rely on features extracted from RGB images, including objects' 2D positions. In this paper, we argue that the 3D positions of objects in space can provide additional valuable information about object relations. This information helps detect not only spatial relations, such as "standing behind", but also non-spatial relations, such as "holding". Since the 3D information of a scene is not easily accessible, we propose incorporating a pre-trained RGB-to-Depth model within visual relation detection frameworks. We discuss different strategies for extracting features from depth maps and show their critical role in relation detection. Our experiments confirm that the performance of state-of-the-art visual relation detection approaches can be significantly improved by utilizing depth map information.
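The pipeline the abstract describes — predict a depth map from RGB with a pre-trained model, then extract relation features from it — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: MiDaS (loaded via torch.hub) stands in for the paper's RGB-to-Depth model, and `DepthBoxEncoder`, the crop size, the file `scene.jpg`, and the example boxes are all hypothetical.

```python
import cv2
import torch
import torch.nn as nn
import torch.nn.functional as F

# --- Step 1: predict a depth map from an RGB image. ---
# MiDaS is used here only as a convenient, publicly available
# monocular depth estimator; the paper's own RGB-to-Depth model
# is not reproduced.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))                        # [1, h', w']
    depth = F.interpolate(pred.unsqueeze(1), size=img.shape[:2],
                          mode="bicubic", align_corners=False)[0, 0]

# --- Step 2: extract per-object depth features (hypothetical design). ---
class DepthBoxEncoder(nn.Module):
    """Crops each object's bounding box from the depth map and encodes
    it with a small CNN, yielding a feature vector per object."""
    def __init__(self, out_dim=128, crop=32):
        super().__init__()
        self.crop = crop
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, depth_map, boxes):
        # depth_map: [H, W]; boxes: iterable of (x1, y1, x2, y2) pixels
        patches = [
            F.interpolate(depth_map[y1:y2, x1:x2][None, None],
                          (self.crop, self.crop), mode="bilinear",
                          align_corners=False)
            for x1, y1, x2, y2 in boxes
        ]
        return self.net(torch.cat(patches))             # [N, out_dim]

encoder = DepthBoxEncoder()
depth_feats = encoder(depth, [(40, 60, 180, 300), (200, 80, 420, 310)])
```

In a full relation detector, one plausible fusion strategy is to concatenate these depth features with the RGB and 2D-position features of each subject-object pair before relation classification; the paper's actual fusion architecture is not reproduced here.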
URL
http://arxiv.org/abs/1905.00966