SafetyKit: First Aid for Measuring Safety in Open-domain Conversational Systems

Emily Dinan, Gavin Abercrombie, A. Stevie Bergman, Shannon Spruit, Dirk Hovy, Y-Lan Boureau, Verena Rieser

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The social impact of natural language processing and its applications has received increasing attention. In this position paper, we focus on the problem of safety for end-to-end conversational AI. We survey the problem landscape therein, introducing a taxonomy of three observed phenomena: the Instigator, Yea-Sayer, and Impostor effects. We then empirically assess the extent to which current tools can measure these effects and current systems display them. We release these tools as part of a “first aid kit” (SafetyKit) to quickly assess apparent safety concerns. Our results show that, while current tools are able to provide an estimate of the relative safety of systems in various settings, they still have several shortcomings. We suggest several future directions and discuss ethical considerations.
Original language: English
Title of host publication: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Publisher: Association for Computational Linguistics
Pages: 4113–4133
Number of pages: 21
DOIs
Publication status: Published - May 2022
