TY - GEN
T1 - Reasoning on Grasp-Action Affordances
AU - Ardon, Paola
AU - Pairet, Èric
AU - Petrick, Ronald
AU - Ramamoorthy, Subramanian
AU - Lohan, Katrin
N1 - Funding Information:
Acknowledgements. Thanks to the support of the EPSRC IAA 455791 along with ORCA Hub EPSRC (EP/R026173/1, 2017–2021) and consortium partners.
Publisher Copyright:
© Springer Nature Switzerland AG 2019.
PY - 2019/6/28
Y1 - 2019/6/28
N2 - Artificial intelligence is essential to succeed in challenging activities that involve dynamic environments, such as object manipulation tasks in indoor scenes. Most of the state-of-the-art literature explores robotic grasping methods by focusing exclusively on attributes of the target object. In human perceptual learning, however, these physical qualities are inferred not only from the object but also from the characteristics of the surroundings. This work proposes a method that includes environmental context to reason about an object's affordance and then deduce its grasping regions. The affordance is inferred using a ranked association of visual semantic attributes harvested in a knowledge base graph representation. The framework is assessed using standard learning evaluation metrics and a zero-shot affordance prediction scenario. The resulting grasping areas are compared with unseen labelled data to assess their matching accuracy. The outcome of this evaluation suggests that the proposed method is suitable for autonomous object interaction applications in indoor environments.
AB - Artificial intelligence is essential to succeed in challenging activities that involve dynamic environments, such as object manipulation tasks in indoor scenes. Most of the state-of-the-art literature explores robotic grasping methods by focusing exclusively on attributes of the target object. In human perceptual learning, however, these physical qualities are inferred not only from the object but also from the characteristics of the surroundings. This work proposes a method that includes environmental context to reason about an object's affordance and then deduce its grasping regions. The affordance is inferred using a ranked association of visual semantic attributes harvested in a knowledge base graph representation. The framework is assessed using standard learning evaluation metrics and a zero-shot affordance prediction scenario. The resulting grasping areas are compared with unseen labelled data to assess their matching accuracy. The outcome of this evaluation suggests that the proposed method is suitable for autonomous object interaction applications in indoor environments.
UR - http://www.scopus.com/inward/record.url?scp=85068991313&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-23807-0_1
DO - 10.1007/978-3-030-23807-0_1
M3 - Conference contribution
AN - SCOPUS:85068991313
SN - 9783030238063
T3 - Lecture Notes in Computer Science
SP - 3
EP - 15
BT - Towards Autonomous Robotic Systems. TAROS 2019
A2 - Althoefer, Kaspar
A2 - Konstantinova, Jelizaveta
A2 - Zhang, Ketao
PB - Springer
T2 - 20th Towards Autonomous Robotic Systems Conference 2019
Y2 - 3 July 2019 through 5 July 2019
ER -