Abstract
Personality is a vital factor in understanding acceptance of, trust in, and emotional attachment to a robot. From R2-D2 to WALL-E, there is a rich history of robots in film using semantic-free utterances (SFUs), sounds such as squeaks, clicks and tones, as audio gestures to communicate and to convey emotion and personality. However, unlike in a film, where an actor can pretend to understand non-verbal noises, in practical applications synthesised speech is often used to communicate information, intention and status. Here we present a pilot study exploring the impact of mixing speech synthesis with SFUs on perceived personality. In a listening test, subjects were presented with short synthesised utterances, with and without SFUs, together with a picture of the agent as either a tabletop social robot or a young man. Both the picture and the SFUs had an impact on perceived personality. However, no interaction was seen between SFUs and picture, suggesting that listeners failed to fuse the perceptions of the two audio elements, perceiving the SFUs as background noise rather than as audio gestures generated by the agent.
| Original language | English |
| --- | --- |
| Title of host publication | Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction |
| Publisher | Association for Computing Machinery |
| Pages | 110-112 |
| Number of pages | 3 |
| ISBN (Print) | 9781450370578 |
| DOIs | |
| Publication status | Published - 1 Apr 2020 |