TY - GEN
T1 - Capturing Frame-Like Object Descriptors in Human Augmented Mapping
AU - Faridghasemnia, Mohamadreza
AU - Vanzo, Andrea
AU - Nardi, Daniele
PY - 2019
Y1 - 2019
AB - The model of an environment plays a crucial role for autonomous mobile robots, as it provides them with the necessary task-relevant information. As robots become more intelligent, they require a richer and more expressive environment model. This model is a map containing a structured description of the environment, which serves as the robot’s knowledge for several tasks, such as planning and reasoning. In this work, we propose a framework that captures important environment descriptors, such as the functionality and ownership of the objects surrounding the robot, through verbal interaction. Specifically, we propose a corpus of verbal descriptions annotated with frame-like structures, and we use it to train two multi-task neural architectures, which we compare through an experimental evaluation while discussing the design choices. Finally, we describe a simple interactive interface to our system, implemented through the trained model. The novelties of this work are: (i) the definition of a new problem, i.e., capturing different object descriptors that play a crucial role in the robot’s task accomplishment; (ii) a specialized corpus supporting the creation of rich Semantic Maps; (iii) the design of different neural architectures and their experimental evaluation on the proposed dataset; (iv) a simple interface for the practical use of the proposed resources.
KW - Corpus annotator
KW - Human-robot interaction
KW - Natural language understanding
KW - Neural networks
KW - Semantic mapping
KW - Semantic mapping corpus
UR - http://www.scopus.com/inward/record.url?scp=85076727535&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-35166-3_28
DO - 10.1007/978-3-030-35166-3_28
M3 - Conference contribution
AN - SCOPUS:85076727535
SN - 9783030351656
T3 - Lecture Notes in Computer Science
SP - 392
EP - 404
BT - Advances in Artificial Intelligence
A2 - Alviano, Mario
A2 - Greco, Gianluigi
A2 - Scarcello, Francesco
PB - Springer
T2 - 18th International Conference of the Italian Association for Artificial Intelligence 2019
Y2 - 19 November 2019 through 22 November 2019
ER -