TY - JOUR
T1 - Predicting apparent personality from body language
T2 - benchmarking deep learning architectures for adaptive social human–robot interaction
AU - Romeo, Marta
AU - Hernández García, Daniel
AU - Han, Ting
AU - Cangelosi, Angelo
AU - Jokinen, Kristiina
N1 - Funding Information:
This work was partially supported by a grant from AIST-AIRC (Japan) for the collaboration with the University of Manchester. The study is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO). The work was also supported by the EPSRC UKRI TAS Node on Trust and the European Research Council (H2020) projects PERSEO ETN and eLADDA ETN.
Publisher Copyright:
© 2021 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
PY - 2021
Y1 - 2021
N2 - First impressions of personality traits can be inferred from non-verbal behaviours such as head pose, body postures, and hand gestures. Enabling social robots to infer the apparent personalities of their users from such non-verbal cues would allow robots to adapt to their users, constituting a further step towards the personalisation of human–robot interactions. Deep learning architectures such as residual networks, 3D convolutional networks, and long short-term memory networks have been applied to classify human activities and actions in computer vision tasks. These same architectures are beginning to be applied to the study of human emotions and personality, focusing mainly on facial features in video recordings. In this work, we exploit body language cues to predict apparent personality traits for human–robot interaction. We customised four state-of-the-art neural network architectures for the task and benchmarked them on a dataset of short side-view videos of dyadic interactions. Our results show the potential of deep learning architectures to predict apparent personality traits from body language cues. While performance varied between models and personality traits, our results indicate that these models may still be able to predict individual personality traits, as exemplified by the results on the conscientiousness trait.
AB - First impressions of personality traits can be inferred from non-verbal behaviours such as head pose, body postures, and hand gestures. Enabling social robots to infer the apparent personalities of their users from such non-verbal cues would allow robots to adapt to their users, constituting a further step towards the personalisation of human–robot interactions. Deep learning architectures such as residual networks, 3D convolutional networks, and long short-term memory networks have been applied to classify human activities and actions in computer vision tasks. These same architectures are beginning to be applied to the study of human emotions and personality, focusing mainly on facial features in video recordings. In this work, we exploit body language cues to predict apparent personality traits for human–robot interaction. We customised four state-of-the-art neural network architectures for the task and benchmarked them on a dataset of short side-view videos of dyadic interactions. Our results show the potential of deep learning architectures to predict apparent personality traits from body language cues. While performance varied between models and personality traits, our results indicate that these models may still be able to predict individual personality traits, as exemplified by the results on the conscientiousness trait.
KW - adaptive robotics
KW - deep learning
KW - Personality computing
KW - video classification
UR - http://www.scopus.com/inward/record.url?scp=85118275214&partnerID=8YFLogxK
U2 - 10.1080/01691864.2021.1974941
DO - 10.1080/01691864.2021.1974941
M3 - Article
AN - SCOPUS:85118275214
SN - 0169-1864
VL - 35
SP - 1167
EP - 1179
JO - Advanced Robotics
JF - Advanced Robotics
IS - 19
ER -