Designing speech interaction for the Sony Xperia Ear and Oakley Radar Pace smartglasses

Matthew P. Aylett, David A. Braude

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Speech synthesis is a key enabling technology for wearable devices. We discuss the design challenges in customising speech synthesis for the Sony Xperia Ear and the Oakley Radar Pace smartglasses. To support speech interaction designers working on novel interactive eyes-free mobile devices, specific functionality is required, including: flexibility in terms of performance, memory footprint, disk requirements, and server or local configurations; methods for personification and branding; architectures for fast, reactive interfaces; and customisation for content, genres and speech styles. We describe implementations of this functionality, how it can be made available to engineers and designers working on third-party devices, and the impact it can have on user experience. To conclude, we discuss why some customers are reluctant to depend on speech services from well-known providers such as Google and Amazon, and consider the barriers to entry for custom-built personal digital advisors.
Original language: English
Title of host publication: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct
Publisher: Association for Computing Machinery
Pages: 379-384
Number of pages: 6
ISBN (Print): 9781450359412
Publication status: Published - 3 Sept 2018

