We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback and which requires minimal intervention. To achieve this, we adapt neural sequence models to map utterances directly to SQL with its full expressivity, bypassing any intermediate meaning representations. These models are immediately deployed online to solicit feedback from real users, who flag incorrect queries. Finally, the popularity of SQL facilitates gathering annotations for incorrect predictions from the crowd, which are directly used to improve our models. This complete feedback loop, without intermediate representations or database-specific engineering, opens up new ways of building high-quality semantic parsers. Experiments suggest that this approach can be deployed quickly for any new target domain, as we show by learning a semantic parser for an online academic database from scratch.
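The feedback loop described above can be sketched in miniature. The snippet below is an illustrative toy, not the paper's implementation: the neural sequence model is replaced by a memorizing stand-in, and `crowd_oracle` is a hypothetical callable standing in for crowd annotation of flagged queries. It only shows the shape of the loop: predict SQL, let users flag incorrect results, send flagged utterances to the crowd for gold SQL, and fold those annotations back into the training data.

```python
class FeedbackLoopParser:
    """Toy sketch of the deploy-flag-annotate-retrain loop."""

    def __init__(self, seed_data):
        # Stand-in for a neural seq2seq model: a parser that simply
        # memorizes (utterance, SQL) training pairs.
        self.training_data = dict(seed_data)

    def predict(self, utterance):
        # A real model would decode SQL token by token; here we fall
        # back to a placeholder query for unseen utterances.
        return self.training_data.get(utterance, "SELECT NULL")

    def feedback_round(self, utterances, user_flags, crowd_oracle):
        # user_flags[u] is True when a user marked the query as wrong.
        flagged = [u for u in utterances if user_flags.get(u)]
        # Crowd workers supply gold SQL for the flagged utterances...
        new_labels = {u: crowd_oracle(u) for u in flagged}
        # ...which is folded back into the training data ("retraining").
        self.training_data.update(new_labels)
        return new_labels
```

In the paper's setting the retraining step updates a neural model rather than a lookup table, but the data flow is the same: user flags decide what gets annotated, and crowd annotations become new supervision.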
Publication status: Published - 2017
Event: 55th Annual Meeting of the Association for Computational Linguistics 2017 (ACL 2017), Vancouver, Canada
Duration: 1 Jul 2017 → 4 Aug 2017
Iyer, S., Konstas, I., Cheung, A., Krishnamurthy, J., & Zettlemoyer, L. (2017). Learning a Neural Semantic Parser from User Feedback. Paper presented at 55th Annual Meeting of the Association for Computational Linguistics 2017, Vancouver, Canada. https://arxiv.org/pdf/1704.08760.pdf