Abstract
Technology companies have produced varied responses to concerns about the effects of the design of their conversational AI systems. Some have claimed that their voice assistants are in fact not gendered or human-like, despite design features suggesting the contrary. We compare these claims to user perceptions by analysing the pronouns users choose when referring to AI assistants. We also examine the systems' responses and the extent to which they generate output that is gendered and anthropomorphic. We find that, while some companies appear to be addressing the ethical concerns raised, in some cases their claims do not seem to hold true. In particular, our results show that system outputs are ambiguous as to the humanness of the systems, and that users tend to personify and gender them as a result.
Original language | English
---|---
Title of host publication | Proceedings of the 3rd Workshop on Gender Bias in Natural Language Processing
Editors | Marta R. Costa-jussa, Hila Gonen, Christian Hardmeier, Kellie Webster
Publisher | Association for Computational Linguistics
Pages | 24-33
Number of pages | 10
ISBN (Electronic) | 9781954085619
Publication status | Published - Aug 2021
Event | 3rd Workshop on Gender Bias in Natural Language Processing 2021 - Virtual, Online, Thailand
Duration | 5 Aug 2021 → …
Conference
Conference | 3rd Workshop on Gender Bias in Natural Language Processing 2021
---|---
Abbreviated title | GeBNLP 2021
Country/Territory | Thailand
City | Virtual, Online
Period | 5/08/21 → …
ASJC Scopus subject areas
- General Psychology
- Gender Studies
- Computer Science Applications
- Information Systems
- Computational Theory and Mathematics