Abstract
In this work, we present an interaction-based approach to learning semantically rich representations for the task of slicing vegetables. Unlike previous approaches, we focus on object-centric representations and use auxiliary tasks to learn rich representations in a two-step process. First, we use simple auxiliary tasks, such as predicting the thickness of a cut slice, to learn an embedding space that captures object properties important for the task of slicing vegetables. In the second step, we use these learned latent embeddings to learn a forward model. Learning a forward model allows us to plan online in the latent embedding space and forces our model to improve its representations while performing the slicing task. To show the efficacy of our approach, we perform experiments on two different vegetables: cucumbers and tomatoes. Our experimental evaluation shows that our method captures important semantic properties for the slicing task, such as the thickness of the vegetable being cut. We further show that, using our learned forward model, we can plan for the task of vegetable slicing.
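The two-step pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the module names (Encoder, ThicknessHead, ForwardModel), dimensions, losses, and single gradient steps are all illustrative assumptions about how an auxiliary-task embedding and a latent forward model might be trained.

```python
# Minimal sketch of the two-step training described in the abstract.
# All names, dimensions, and losses are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an observation (e.g. image features) to a latent embedding z."""
    def __init__(self, obs_dim=128, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))

    def forward(self, obs):
        return self.net(obs)

class ThicknessHead(nn.Module):
    """Auxiliary task: predict the thickness of the cut slice from z."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Linear(latent_dim, 1)

    def forward(self, z):
        return self.net(z)

class ForwardModel(nn.Module):
    """Predicts the next latent embedding given the current one and an action."""
    def __init__(self, latent_dim=32, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim))

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

# Step 1: learn the embedding space via the auxiliary thickness task.
enc, head = Encoder(), ThicknessHead()
opt1 = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-3)
obs, thickness = torch.randn(16, 128), torch.rand(16, 1)  # placeholder batch
loss1 = nn.functional.mse_loss(head(enc(obs)), thickness)
opt1.zero_grad(); loss1.backward(); opt1.step()

# Step 2: learn a forward model in the latent space; the encoder keeps
# training, so representations improve while the slicing task is performed.
fwd = ForwardModel()
opt2 = torch.optim.Adam(list(fwd.parameters()) + list(enc.parameters()), lr=1e-3)
obs_t, action, obs_t1 = torch.randn(16, 128), torch.randn(16, 4), torch.randn(16, 128)
loss2 = nn.functional.mse_loss(fwd(enc(obs_t), action), enc(obs_t1).detach())
opt2.zero_grad(); loss2.backward(); opt2.step()
```

With such a forward model in hand, online planning could, for example, score candidate action sequences by rolling them out in the latent space and picking the sequence whose predicted embedding best matches a goal embedding.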
URL
http://arxiv.org/abs/1904.00303