
Affordance Learning for End-to-End Visuomotor Robot Control

2019-03-10
Aleksi Hämäläinen, Karol Arndt, Ali Ghadirzadeh, Ville Kyrki

Abstract

Training end-to-end deep robot policies requires large amounts of domain-, task-, and hardware-specific data, which is often costly to provide. In this work, we propose to tackle this issue by employing a deep neural network with a modular architecture, consisting of separate perception, policy, and trajectory parts. Each part of the system is trained fully on synthetic data or in simulation. The data is exchanged between parts of the system as low-dimensional latent representations of affordances and trajectories. The performance is then evaluated in a zero-shot transfer scenario using a Franka Panda robot arm. Results demonstrate that a low-dimensional representation of scene affordances extracted from an RGB image is sufficient to successfully train manipulator policies. We also introduce a method for affordance dataset generation, which is easily generalizable to new tasks, objects, and environments, and requires no manual pixel labeling.
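The modular pipeline described in the abstract can be sketched as three stages connected only by low-dimensional latent vectors. The sketch below is illustrative, not the paper's implementation: the stage names, dimensions, and the use of plain linear maps in place of the separately trained neural networks are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper's actual latent sizes are not stated here.
IMG_DIM, AFF_DIM, TRAJ_DIM, N_WAYPOINTS, DOF = 64, 8, 5, 10, 7

# Stand-in linear maps; in the paper each stage is a separately trained
# network (perception, policy, trajectory generator).
W_perception = rng.normal(size=(AFF_DIM, IMG_DIM)) * 0.1
W_policy = rng.normal(size=(TRAJ_DIM, AFF_DIM)) * 0.1
W_decoder = rng.normal(size=(N_WAYPOINTS * DOF, TRAJ_DIM)) * 0.1

def perception(rgb_features):
    """Encode image features into a low-dimensional affordance latent."""
    return W_perception @ rgb_features

def policy(affordance_latent):
    """Map the affordance latent to a latent trajectory code."""
    return W_policy @ affordance_latent

def trajectory_decoder(traj_latent):
    """Decode the trajectory latent into joint-space waypoints."""
    return (W_decoder @ traj_latent).reshape(N_WAYPOINTS, DOF)

image = rng.normal(size=IMG_DIM)  # stand-in for an RGB observation
waypoints = trajectory_decoder(policy(perception(image)))
print(waypoints.shape)  # (10, 7)
```

The key property this sketch illustrates is that each stage only sees the previous stage's latent code, so each can be trained independently on synthetic data and swapped out without retraining the others.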

URL

https://arxiv.org/abs/1903.04053

PDF

https://arxiv.org/pdf/1903.04053
