TY - GEN
T1 - Follow Me
T2 - 16th International Conference on Social Robotics 2024
AU - Sherer, Jeffrey
AU - McPherson, Robbie
AU - Mohanty, Sattwik
AU - Santé, Guilhem
AU - Gandolfi, Greta
AU - Romeo, Marta
AU - Suglia, Alessandro
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025/3/25
Y1 - 2025/3/25
AB - While robots are perceived as reliable in delivering factual data, their ability to achieve meaningful alignment with humans during subjective interactions remains unclear. Gaining insights into this alignment is vital to integrating robots more deeply into decision-making frameworks and enhancing their roles in social interactions. This study examines the impact of personality-prompted large language models (LLMs) on alignment in human-robot interactions. Participants interacted with a Furhat robot under two conditions: a baseline control condition and an experimental condition using personality prompts designed to simulate distinct personality traits through the LLM. Alignment was assessed by measuring changes in similarity between participants’ rankings and the robot’s rankings of factual (objective) and contestable (subjective) concepts before and after interaction. The findings indicate that participants aligned more with the robot on objective, factual concepts than on subjective, contestable ones, regardless of personality prompts. These results suggest that the current personality prompting method may be insufficient to significantly influence alignment in subjective interactions. This may be attributed to the conveyed traits lacking sufficient impact or the limitations of current system capabilities, which may not yet be advanced enough to foster the desired influence on participants’ perceptions.
KW - Alignment
KW - Human-Robot Interaction (HRI)
KW - LLM
KW - Personality Prompting (P)
UR - http://www.scopus.com/inward/record.url?scp=105002143611&partnerID=8YFLogxK
U2 - 10.1007/978-981-96-3519-1_44
DO - 10.1007/978-981-96-3519-1_44
M3 - Conference contribution
AN - SCOPUS:105002143611
SN - 9789819635184
T3 - Lecture Notes in Computer Science
SP - 487
EP - 496
BT - Social Robotics. ICSR + AI 2024
A2 - Palinko, Oskar
A2 - Bodenhagen, Leon
A2 - Cabibihan, John-John
A2 - Fischer, Kerstin
A2 - Šabanović, Selma
A2 - Winkle, Katie
A2 - Behera, Laxmidhar
A2 - Ge, Shuzhi Sam
A2 - Chrysostomou, Dimitrios
A2 - Jiang, Wanyue
A2 - He, Hongsheng
PB - Springer
Y2 - 23 October 2024 through 26 October 2024
ER -