
Reasoning on Grasp-Action Affordances

2019-05-25
Paola Ardón, Èric Pairet, Ron Petrick, Subramanian Ramamoorthy, Katrin Lohan

Abstract

Artificial intelligence is essential to succeed in challenging activities that involve dynamic environments, such as object manipulation tasks in indoor scenes. Most of the state-of-the-art literature explores robotic grasping methods by focusing exclusively on attributes of the target object. In human perceptual learning, these physical qualities are inferred not only from the object itself, but also from the characteristics of its surroundings. This work proposes a method that includes environmental context to reason about an object's affordance and then deduce its grasping regions. The affordance is inferred from a ranked association of visual semantic attributes harvested in a knowledge base graph representation. The framework is assessed using standard learning evaluation metrics and the zero-shot affordance prediction scenario. The resulting grasping areas are compared with unseen labelled data to assess their matching accuracy. The outcome of this evaluation suggests that the proposed method is suited to autonomous object interaction applications in indoor environments.
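The core idea of a "ranked association of visual semantic attributes in a knowledge base graph" can be illustrated with a minimal sketch. This is not the authors' implementation: the attributes, affordances, and association weights below are invented for illustration, and the real system learns these associations from data rather than hard-coding them.

```python
# Hedged sketch: rank candidate affordances for an object by summing
# the association weights of its observed visual semantic attributes
# in a tiny hand-written knowledge-base graph (attribute -> affordance).
# All entries and weights are illustrative, not from the paper.

KB = {
    "handle":    {"grasp-handle": 1.0, "pour": 0.2},
    "container": {"pour": 0.4, "contain": 0.9},
    "flat-top":  {"support": 0.7},
}

def rank_affordances(attributes):
    """Accumulate association weights for each observed attribute and
    return (affordance, score) pairs sorted from strongest to weakest."""
    scores = {}
    for attr in attributes:
        for affordance, weight in KB.get(attr, {}).items():
            scores[affordance] = scores.get(affordance, 0.0) + weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# e.g. a mug-like object observed with a handle and a container body
ranking = rank_affordances(["handle", "container"])
print(ranking[0][0])  # -> grasp-handle
```

In the paper, the top-ranked affordance would then condition where on the object the grasping regions are predicted; here the ranking step alone is shown.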

URL

http://arxiv.org/abs/1905.10610

PDF

http://arxiv.org/pdf/1905.10610
