Learning a Neural Semantic Parser from User Feedback

Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, Luke Zettlemoyer

Research output: Contribution to conference › Paper › peer-review


We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback and requires minimal intervention. To achieve this, we adapt neural sequence models to map utterances directly to SQL with its full expressivity, bypassing any intermediate meaning representations. These models are immediately deployed online to solicit feedback from real users, who flag incorrect queries. Finally, the popularity of SQL facilitates gathering crowd annotations for incorrect predictions, which are directly used to improve our models. This complete feedback loop, without intermediate representations or database-specific engineering, opens up new ways of building high-quality semantic parsers. Experiments suggest that this approach can be deployed quickly for any new target domain, as we show by learning a semantic parser for an online academic database from scratch.
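The feedback loop the abstract describes (predict SQL, let users flag wrong queries, fold crowd-annotated corrections back into the training data) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and method names are hypothetical, and the neural sequence-to-sequence model is replaced by a trivial memorizing parser so the loop's mechanics stand out.

```python
class FeedbackLoopParser:
    """Hypothetical sketch: maps utterances to SQL and learns from flagged corrections."""

    def __init__(self):
        # Utterance -> gold SQL pairs gathered from user feedback and crowd workers.
        self.training_data = {}

    def parse(self, utterance):
        # Stand-in for neural decoding: return a memorized query if one exists,
        # otherwise a (possibly wrong) default guess.
        return self.training_data.get(utterance, "SELECT * FROM papers")

    def collect_feedback(self, utterance, predicted_sql, is_correct, gold_sql=None):
        # Predictions users confirm become new training examples; flagged ones
        # are annotated by the crowd, and the corrected SQL is added instead.
        if is_correct:
            self.training_data[utterance] = predicted_sql
        elif gold_sql is not None:
            self.training_data[utterance] = gold_sql


parser = FeedbackLoopParser()
utt = "papers by Zettlemoyer since 2015"
pred = parser.parse(utt)  # initial guess is the default query

# A user flags the prediction as incorrect; a crowd worker supplies the gold SQL.
gold = "SELECT title FROM papers WHERE author = 'Zettlemoyer' AND year >= 2015"
parser.collect_feedback(utt, pred, is_correct=False, gold_sql=gold)
print(parser.parse(utt))  # the corrected query is now returned for this utterance
```

In the real system, adding a corrected pair triggers retraining of the neural model rather than a table update, but the data flow is the same: every flagged query becomes a new supervised example.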
Original language: English
Publication status: Published - 2017
Event: 55th Annual Meeting of the Association for Computational Linguistics 2017 - Vancouver, Canada
Duration: 1 Jul 2017 – 4 Aug 2017


Conference: 55th Annual Meeting of the Association for Computational Linguistics 2017
Abbreviated title: ACL 2017

