
Detecting Features of Tools, Objects, and Actions from Effects in a Robot using Deep Learning

2018-09-23
Namiko Saito, Kitae Kim, Shingo Murata, Tetsuya Ogata, Shigeki Sugano

Abstract

We propose a tool-use model that can detect the features of tools, target objects, and actions from the provided effects of object manipulation. Taking infant learning as our guiding concept, we construct a model that enables robots to manipulate objects with tools. To realize this, we train a deep learning model on sensory-motor data recorded while a robot performs a tool-use task. The experiments involve four factors, which the model considers simultaneously: (1) tools, (2) objects, (3) actions, and (4) effects. For evaluation, the robot generates predicted images and motions given information about the effects of using unknown tools and objects. We confirm that the robot is capable of detecting features of tools, objects, and actions by learning the effects and executing the task.
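The abstract describes a model that, conditioned on a desired effect, predicts the images and motions (sensory-motor states) a robot should produce. The paper's actual architecture is not specified here, so the following is only a minimal illustrative sketch under assumed structure: a toy recurrent network (hypothetical name `EffectConditionedRNN`, arbitrary layer sizes, untrained random weights) that rolls out a predicted sensory-motor trajectory conditioned on a fixed effect vector.

```python
import numpy as np

class EffectConditionedRNN:
    """Toy recurrent predictor: given an 'effect' vector and the current
    sensory-motor state, predict the next state. Illustrative only --
    the structure and sizes are assumptions, not the paper's model."""

    def __init__(self, state_dim, effect_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = state_dim + effect_dim + hidden_dim
        self.W_h = rng.normal(0.0, 0.1, (hidden_dim, in_dim))   # recurrent weights
        self.W_o = rng.normal(0.0, 0.1, (state_dim, hidden_dim))  # output weights
        self.hidden_dim = hidden_dim

    def rollout(self, state0, effect, steps):
        """Closed-loop prediction: each predicted state is fed back in,
        with the effect vector held fixed over the whole trajectory."""
        h = np.zeros(self.hidden_dim)
        s = state0
        trajectory = []
        for _ in range(steps):
            x = np.concatenate([s, effect, h])  # condition on state + effect + context
            h = np.tanh(self.W_h @ x)           # update hidden (context) state
            s = np.tanh(self.W_o @ h)           # predict next sensory-motor state
            trajectory.append(s)
        return np.stack(trajectory)

model = EffectConditionedRNN(state_dim=6, effect_dim=3, hidden_dim=16)
traj = model.rollout(np.zeros(6), np.ones(3), steps=10)
print(traj.shape)  # (10, 6): one predicted 6-D sensory-motor state per step
```

In the paper's setting, the predicted trajectory would correspond to generated images and motor commands, and the effect vector to the specified manipulation outcome; here both are reduced to small placeholder vectors.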

URL

https://arxiv.org/abs/1809.08613

PDF

https://arxiv.org/pdf/1809.08613
