Voice Puppetry: Speech Synthesis Adventures in Human Centred AI

Matthew P. Aylett, Yolanda Vazquez-Alvarez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

State-of-the-art speech synthesis owes much to modern machine learning, with recurrent neural networks becoming the new standard. However, how you say something is just as important as what you say. If we draw inspiration from human dramatic performance, ideas such as artistic direction can help us design interactive speech synthesis systems which can be finely controlled by a human voice. This "voice puppetry" has many possible applications, from film dubbing to the pre-creation of prompts for a conversational agent. Previous work in voice puppetry has raised the question of how such a system should work and how we might interact with it. Here, we share the results of a focus group discussing voice puppetry and responding to a voice puppetry demo. The results highlight a central challenge in human-centred AI: where is the trade-off between control and automation, and how may users control this trade-off?
Original language: English
Title of host publication: Companion Proceedings of the 25th International Conference on Intelligent User Interfaces
Publisher: Association for Computing Machinery
Pages: 108-109
Number of pages: 2
ISBN (Print): 9781450375139
DOIs
Publication status: Published - 17 Mar 2020
