
Improved Optical Flow for Gesture-based Human-robot Interaction

2019-05-21
Jen-Yen Chang, Antonio Tejero-de-Pablos, Tatsuya Harada

Abstract

Gesture interaction is a natural way of communicating with a robot as an alternative to speech. Gesture recognition methods leverage optical flow in order to understand human motion. However, while accurate (i.e., traditional) optical flow estimation methods are costly in terms of runtime, fast (i.e., deep learning-based) methods leave room for improvement in accuracy. In this paper, we present a pipeline for gesture-based human-robot interaction that uses a novel optical flow estimation method in order to achieve an improved speed-accuracy trade-off. Our optical flow estimation method introduces four improvements over previous deep learning-based methods: strong feature extractors, attention to contours, midway features, and a combination of these three. This results in a better understanding of motion and a finer representation of silhouettes. In order to evaluate our pipeline, we generated our own dataset, MIBURI, which contains gestures to command a house service robot. In our experiments, we show that our method improves not only optical flow estimation but also gesture recognition, offering a speed-accuracy trade-off better suited to practical robot applications.
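To make the general idea concrete, the sketch below illustrates an optical-flow-based gesture pipeline in its simplest classical form: compute dense flow between consecutive frames of a gesture clip and aggregate it into a motion descriptor that a downstream classifier could consume. This is not the paper's method; the Farneback estimator, the `gesture_flow_descriptor` function, the histogram-of-flow descriptor, and the video path are illustrative assumptions, whereas the paper replaces the flow estimator with its improved deep network.

```python
# Minimal sketch (assumptions: opencv-python and numpy installed; descriptor
# design and function name are hypothetical, not from the paper).
import cv2
import numpy as np


def gesture_flow_descriptor(video_path: str) -> np.ndarray:
    """Aggregate dense optical flow over a gesture clip into one feature vector."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"Could not read {video_path}")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    histograms = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Classical (Farneback) dense flow stands in here for the learned estimator.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        # 8-bin histogram of flow orientation, weighted by magnitude (HOF-like).
        hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
        histograms.append(hist / (hist.sum() + 1e-8))
        prev_gray = gray
    cap.release()
    # Average the per-frame histograms into a clip-level motion descriptor.
    return np.mean(histograms, axis=0) if histograms else np.zeros(8)
```

The resulting descriptor could then be fed to any classifier (e.g., an SVM or a small neural network) to predict the gesture class; in the paper, the improved flow estimator plays the role of the Farneback step while preserving the same flow-then-recognize structure.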

URL

http://arxiv.org/abs/1905.08685

PDF

http://arxiv.org/pdf/1905.08685

