SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for Unsupervised Abstractive Sentence Compression

Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, Alexandros Potamianos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Neural sequence-to-sequence models are currently the dominant approach in several natural language processing tasks, but require large parallel corpora. We present a sequence-to-sequence-to-sequence autoencoder (SEQ^3), consisting of two chained encoder-decoder pairs, with words used as a sequence of discrete latent variables. We apply the proposed model to unsupervised abstractive sentence compression, where the first and last sequences are the input and reconstructed sentences, respectively, while the middle sequence is the compressed sentence. Constraining the length of the latent word sequences forces the model to distill important information from the input. A pretrained language model, acting as a prior over the latent sequences, encourages the compressed sentences to be human-readable. Continuous relaxations enable us to sample from categorical distributions, allowing gradient-based optimization, unlike alternatives that rely on reinforcement learning. The proposed model does not require parallel text-summary pairs, achieving promising results in unsupervised sentence compression on benchmark datasets.
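
To make the mechanism concrete, here is a minimal sketch (not the authors' released code) of the core idea: a compressor seq2seq emits a short sequence of relaxed one-hot word samples via the straight-through Gumbel-softmax, and a reconstructor seq2seq consumes those samples to rebuild the input, so the reconstruction loss backpropagates through the discrete word choices. All names and sizes (VOCAB, EMB, HID, SOS, the GRU layers, the greedy decoding loops) are illustrative assumptions, and the length constraint and language-model prior described in the abstract are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, SOS = 1000, 64, 128, 0   # hypothetical sizes; <sos> word id = 0

emb   = nn.Embedding(VOCAB, EMB)              # embeddings shared by both seq2seqs
enc_c = nn.GRU(EMB, HID, batch_first=True)    # compressor encoder
dec_c, out_c = nn.GRUCell(EMB, HID), nn.Linear(HID, VOCAB)   # compressor decoder
enc_r = nn.GRU(EMB, HID, batch_first=True)    # reconstructor encoder
dec_r, out_r = nn.GRUCell(EMB, HID), nn.Linear(HID, VOCAB)   # reconstructor decoder

def compress(x, summary_len, tau=0.5):
    # Emit `summary_len` relaxed one-hot word samples: the latent summary.
    _, h = enc_c(emb(x))
    h = h.squeeze(0)
    inp = emb(torch.full((x.size(0),), SOS, dtype=torch.long))
    words = []
    for _ in range(summary_len):
        h = dec_c(inp, h)
        # Straight-through Gumbel-softmax: a one-hot sample in the forward
        # pass, the soft relaxation's gradient in the backward pass, keeping
        # the discrete word choice differentiable (no reinforcement learning).
        y = F.gumbel_softmax(out_c(h), tau=tau, hard=True)
        words.append(y)
        inp = y @ emb.weight                  # embed the sampled word
    return torch.stack(words, dim=1)          # (batch, summary_len, VOCAB)

def reconstruct(z, target):
    # Rebuild the input from the latent summary; returns per-step logits.
    _, h = enc_r(z @ emb.weight)              # embed relaxed samples, then encode
    h = h.squeeze(0)
    inp = emb(torch.full((z.size(0),), SOS, dtype=torch.long))
    logits = []
    for t in range(target.size(1)):
        h = dec_r(inp, h)
        logits.append(out_r(h))
        inp = emb(target[:, t])               # teacher forcing on the original words
    return torch.stack(logits, dim=1)

x = torch.randint(1, VOCAB, (8, 20))          # toy batch: 8 "sentences" of length 20
z = compress(x, summary_len=10)               # latent summaries half the input length
rec = reconstruct(z, x)
loss = F.cross_entropy(rec.reshape(-1, VOCAB), x.reshape(-1))
loss.backward()                               # gradients flow through the word samples

In the paper, this reconstruction objective would be combined with the additional signals described in the abstract: a constraint on the latent sequence length and a pretrained language-model prior that keeps the compressed sentences human-readable.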
Original language: English
Title of host publication: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics
Subtitle of host publication: Human Language Technologies, Volume 1 (Long and Short Papers)
Publisher: Association for Computational Linguistics
Pages: 673–681
Number of pages: 9
ISBN (Electronic): 9781950737130
DOIs
Publication status: Published - Jun 2019
Event: 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics - Minneapolis, United States
Duration: 3 Jun 2019 – 5 Jun 2019

Conference

Conference: 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Abbreviated title: NAACL 2019
Country/Territory: United States
City: Minneapolis
Period: 3/06/19 – 5/06/19
