Guiding the Release of Safer E2E Conversational AI through Value Sensitive Design

A. Stevie Bergman, Gavin Abercrombie, Shannon Spruit, Dirk Hovy, Emily Dinan, Y-Lan Boureau, Verena Rieser

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Over the last several years, end-to-end neural conversational agents have vastly improved their ability to carry unrestricted, open-domain conversations with humans. However, these models are often trained on large datasets from the Internet and, as a result, may learn undesirable behaviours from this data, such as toxic or otherwise harmful language. Thus, researchers must wrestle with how and when to release these models. In this paper, we survey recent and related work to highlight tensions between values, potential positive impact, and potential harms. We also provide a framework to support practitioners in deciding whether and how to release these models, following the tenets of value-sensitive design.
Original language: English
Title of host publication: Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Publisher: Association for Computational Linguistics
Pages: 39–52
Number of pages: 14
ISBN (Print): 9781955917667
DOIs
Publication status: Published - 1 Sept 2022
