
V2CNet: A Deep Learning Framework to Translate Videos to Commands for Robotic Manipulation

2019-03-23
Anh Nguyen, Thanh-Toan Do, Ian Reid, Darwin G. Caldwell, Nikos G. Tsagarakis

Abstract

We propose V2CNet, a new deep learning framework to automatically translate demonstration videos into commands that can be directly used in robotic applications. Our V2CNet has two branches and aims to understand the demonstration video in a fine-grained manner. The first branch uses an encoder-decoder architecture to encode the visual features and sequentially generate the output words as a command, while the second branch uses a Temporal Convolutional Network (TCN) to learn the fine-grained actions. By jointly training both branches, the network is able to model the sequential information of the command while effectively encoding the fine-grained actions. The experimental results on our new large-scale dataset show that V2CNet outperforms recent state-of-the-art methods by a substantial margin, and its output can be applied in real robotic applications. The source code and trained models will be made available.
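
The abstract describes the two-branch design only at a high level. Below is a minimal sketch, assuming PyTorch, of how such an architecture could be wired up: an LSTM encoder-decoder branch that generates the command word by word, a TCN-style branch built from 1-D temporal convolutions that classifies the fine-grained action, and a joint loss that trains both branches together. All module names, layer sizes, and the specific LSTM/Conv1d choices are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical two-branch video-to-command model in the spirit of V2CNet.
# Dimensions, layer choices, and loss weighting are assumptions for illustration.
import torch
import torch.nn as nn


class CommandDecoderBranch(nn.Module):
    """Encoder-decoder branch: encodes per-frame visual features and
    sequentially generates the words of the output command."""

    def __init__(self, feat_dim=512, hidden_dim=256, vocab_size=1000):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.word_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, command_tokens):
        # frame_feats: (batch, n_frames, feat_dim); command_tokens: (batch, n_words)
        _, (h, c) = self.encoder(frame_feats)      # summarize the video
        dec_in = self.embed(command_tokens)        # teacher forcing on ground-truth words
        dec_out, _ = self.decoder(dec_in, (h, c))
        return self.word_head(dec_out)             # (batch, n_words, vocab_size)


class TCNActionBranch(nn.Module):
    """TCN branch: dilated 1-D temporal convolutions over frame features,
    classifying the fine-grained action shown in the demonstration."""

    def __init__(self, feat_dim=512, n_actions=10):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
        )
        self.action_head = nn.Linear(256, n_actions)

    def forward(self, frame_feats):
        x = self.tcn(frame_feats.transpose(1, 2))  # (batch, channels, n_frames)
        return self.action_head(x.mean(dim=2))     # pool over time -> (batch, n_actions)


def joint_loss(cmd_branch, act_branch, frame_feats, command_tokens, action_labels):
    """Joint training objective: word-level captioning loss plus an
    action-classification loss, so both branches are optimized together."""
    word_logits = cmd_branch(frame_feats, command_tokens[:, :-1])
    cmd_loss = nn.functional.cross_entropy(
        word_logits.reshape(-1, word_logits.size(-1)),
        command_tokens[:, 1:].reshape(-1))
    act_loss = nn.functional.cross_entropy(act_branch(frame_feats), action_labels)
    return cmd_loss + act_loss
```

In this sketch the two branches share only the input frame features; how (or whether) V2CNet shares deeper layers between the branches, and how the two losses are weighted, would need to be taken from the paper itself.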

URL

http://arxiv.org/abs/1903.10869

PDF

http://arxiv.org/pdf/1903.10869

