TY - JOUR
T1 - Identification of Low-engaged Learners in Robot-led Second Language Conversations with Adults
AU - Engwall, Olov
AU - Cumbal, Ronald
AU - Lopes, José
AU - Ljung, Mikael
AU - Månsson, Linnea
N1 - Funding Information:
This work was supported by the Swedish Research Council under Grant 2016-03698 and Marcus and Amalia Wallenberg foundation under grant MAW 2020.0052. Authors’ addresses: O. Engwall and R. Cumbal, KTH Royal Institute of Technology, Department of Speech, Music and Hearing, Lindstedtsv. 24, Stockholm, Sweden, SE-10044; emails: {engwall, ronaldgc}@kth.se; J. Lopes, Heriot-Watt University, Edinburgh, United Kingdom; email: [email protected]; M. Ljung and L. Månsson, KTH Royal Institute of Technology, Stockholm, Sweden; emails: {milju, lman}@kth.se.
Publisher Copyright:
© 2022 Copyright held by the owner/author(s).
PY - 2022/6
Y1 - 2022/6
N2 - The main aim of this study is to investigate if verbal, vocal, and facial information can be used to identify low-engaged second language learners in robot-led conversation practice. The experiments were performed on voice recordings and video data from 50 conversations, in which a robotic head talks with pairs of adult language learners using four different interaction strategies with varying robot-learner focus and initiative. It was found that these robot interaction strategies influenced learner activity and engagement. The verbal analysis indicated that learners with low activity rated the robot significantly lower on two out of four scales related to social competence. The acoustic vocal and video-based facial analysis, based on manual annotations or machine learning classification, both showed that learners with low engagement rated the robot's social competencies consistently, and in several cases significantly, lower, and in addition rated the learning effectiveness lower. The agreement between manual and automatic identification of low-engaged learners based on voice recordings or face videos was further found to be adequate for future use. These experiments constitute a first step towards enabling adaptation to learners' activity and engagement through within- and between-strategy changes of the robot's interaction with learners.
AB - The main aim of this study is to investigate if verbal, vocal, and facial information can be used to identify low-engaged second language learners in robot-led conversation practice. The experiments were performed on voice recordings and video data from 50 conversations, in which a robotic head talks with pairs of adult language learners using four different interaction strategies with varying robot-learner focus and initiative. It was found that these robot interaction strategies influenced learner activity and engagement. The verbal analysis indicated that learners with low activity rated the robot significantly lower on two out of four scales related to social competence. The acoustic vocal and video-based facial analysis, based on manual annotations or machine learning classification, both showed that learners with low engagement rated the robot's social competencies consistently, and in several cases significantly, lower, and in addition rated the learning effectiveness lower. The agreement between manual and automatic identification of low-engaged learners based on voice recordings or face videos was further found to be adequate for future use. These experiments constitute a first step towards enabling adaptation to learners' activity and engagement through within- and between-strategy changes of the robot's interaction with learners.
KW - facial emotion expressions
KW - Robot-assisted language learning
KW - speech emotion recognition
KW - user engagement
UR - http://www.scopus.com/inward/record.url?scp=85127492914&partnerID=8YFLogxK
U2 - 10.1145/3503799
DO - 10.1145/3503799
M3 - Article
AN - SCOPUS:85127492914
SN - 2573-9522
VL - 11
JO - ACM Transactions on Human-Robot Interaction
JF - ACM Transactions on Human-Robot Interaction
IS - 2
M1 - 18
ER -