Towards making NLG a voice for interpretable Machine Learning

James Forrest, Somayajulu Sripada, Wei Pang, George Coghill

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents a study to understand the issues related to using NLG to humanise explanations from a popular interpretable machine learning framework called LIME. Our study shows that the self-reported rating of the NLG explanation was higher than that of a non-NLG explanation. However, when tested for comprehension, the results were not as clear-cut, showing the need for further studies to uncover the factors responsible for high-quality NLG explanations.
Original language: English
Title of host publication: Proceedings of the 11th International Conference on Natural Language Generation
Publisher: Association for Computational Linguistics
Pages: 177–182
Number of pages: 6
ISBN (Print): 9781948087865
Publication status: Published - Nov 2018
Event: 11th International Conference on Natural Language Generation 2018 - Tilburg, Netherlands
Duration: 5 Nov 2018 – 8 Nov 2018

Conference

Conference: 11th International Conference on Natural Language Generation 2018
Abbreviated title: INLG 2018
Country/Territory: Netherlands
City: Tilburg
Period: 5/11/18 – 8/11/18
