Abstract
Over the past few years, deep learning techniques have achieved tremendous success in many visual understanding tasks such as object detection, image segmentation, and caption generation. Despite this success in computer vision and natural language processing, deep learning has not yet had a comparable impact in robotics. Owing to the gap between theory and application, many challenges arise when applying the results of deep learning to real robotic systems. In this study, our long-term goal is to bridge the gap between computer vision and robotics by developing visual methods that can be used on real robots. In particular, this work tackles two fundamental visual problems for autonomous robotic manipulation: affordance detection and fine-grained action understanding. Theoretically, we propose different deep architectures that further improve the state of the art on each problem. Empirically, we show that the outcomes of our proposed methods can be deployed on real robots, enabling them to perform useful manipulation tasks.
URL
http://arxiv.org/abs/1903.09761