Abstract
Speech synthesis is a key enabling technology for wearable devices. We discuss the design challenges in customising speech synthesis for the Sony Xperia Ear and the Oakley Radar Pace smartglasses. To support speech interaction designers working on novel interactive eyes-free mobile devices, specific functionality is required, including: flexibility in terms of performance, memory footprint, disk requirements, and server or local configurations; methods for personification and branding; architectures for fast, reactive interfaces; and customisation for content, genres and speech styles. We describe implementations of this required functionality, how it can be made available to engineers and designers working on third-party devices, and the impact it can have on user experience. To conclude, we discuss why some customers are reluctant to depend on speech services from well-known providers such as Google and Amazon, and consider the barrier to entry for custom-built personal digital advisors.
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct |
| Publisher | Association for Computing Machinery |
| Pages | 379-384 |
| Number of pages | 6 |
| ISBN (Print) | 9781450359412 |
| DOIs | |
| Publication status | Published - 3 Sept 2018 |