2019 TextSummarizationwithPretrained
- (Liu & Lapata, 2019) ⇒ Yang Liu, and Mirella Lapata. (2019). “Text Summarization with Pretrained Encoders.” In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP-IJCNLP 2019). doi:10.48550/arXiv.1908.08345
Subject Headings: Neural Summarization Algorithm.
Notes
- It employs BERT to encode document semantics for text summarization.
- It introduces a novel document-level encoder based on BERT that expresses the semantics of a document and yields representations for its individual sentences.
- It adapts BERT for both extractive and abstractive text summarization.
- It builds the extractive model by stacking inter-sentence Transformer layers on top of the document encoder (see the sketch after this list).
- It proposes a distinct fine-tuning schedule for abstractive summarization, with separate optimizers for the pretrained encoder and the randomly initialized decoder.
- It achieves state-of-the-art results on three text summarization benchmark datasets.
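In the paper's extractive variant, each sentence is prefixed with its own [CLS] token so the document encoder emits one vector per sentence; stacked Transformer layers over those vectors then model inter-sentence relations before a sigmoid head scores each sentence for selection. Below is a minimal PyTorch sketch of such a scoring head; the class name, layer counts, and hidden size are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class InterSentenceScorer(nn.Module):
    """Scores sentences for extractive selection.

    Input: the per-sentence [CLS] vectors from a BERT-style document
    encoder. Stacked Transformer layers model relations between
    sentences; a linear + sigmoid head yields selection probabilities.
    """

    def __init__(self, hidden_size: int = 768, num_layers: int = 2, num_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=num_heads, batch_first=True
        )
        self.inter_sentence = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, cls_vectors: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # cls_vectors: (batch, num_sentences, hidden); pad_mask: True at padded slots
        refined = self.inter_sentence(cls_vectors, src_key_padding_mask=pad_mask)
        return torch.sigmoid(self.scorer(refined)).squeeze(-1)

# Toy usage: 2 documents, up to 5 sentences each, BERT-base hidden size.
scorer = InterSentenceScorer()
cls_vectors = torch.randn(2, 5, 768)
pad_mask = torch.tensor([[False] * 5, [False, False, False, True, True]])
print(scorer(cls_vectors, pad_mask).shape)  # torch.Size([2, 5])
```

At training time the scores would be fit with binary cross-entropy against oracle sentence labels, which is how the paper trains its extractive model.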
Cited By
Quotes
Abstract
Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models, which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT, which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at this https URL.
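The two-optimizer fine-tuning schedule described in the abstract can be made concrete with a small sketch. The base learning rates and warmup steps (2e-3 with 20,000 warmup steps for the encoder, 0.1 with 10,000 for the decoder) follow the Adam settings reported in the paper; the placeholder modules and dummy loss below are illustrative only.

```python
import torch
import torch.nn as nn

def noam_lr(base_lr: float, warmup: int):
    """Noam-style schedule: linear warmup, then inverse-sqrt decay."""
    def schedule(step: int) -> float:
        step = max(step, 1)
        return base_lr * min(step ** -0.5, step * warmup ** -1.5)
    return schedule

# Stand-ins for the pretrained encoder and the freshly initialized decoder.
encoder = nn.Linear(768, 768)  # illustrative placeholder for BERT
decoder = nn.Linear(768, 768)  # illustrative placeholder for the Transformer decoder

# Two Adam optimizers so the pretrained encoder is updated more gently
# than the decoder; base rates and warmups follow the paper's settings.
enc_opt = torch.optim.Adam(encoder.parameters(), lr=0.0, betas=(0.9, 0.999))
dec_opt = torch.optim.Adam(decoder.parameters(), lr=0.0, betas=(0.9, 0.999))
enc_sched = noam_lr(base_lr=2e-3, warmup=20_000)
dec_sched = noam_lr(base_lr=0.1, warmup=10_000)

for step in range(1, 101):                    # shortened toy training loop
    x = torch.randn(4, 768)
    loss = decoder(encoder(x)).pow(2).mean()  # dummy loss for illustration
    loss.backward()
    for opt, sched in ((enc_opt, enc_sched), (dec_opt, dec_sched)):
        for group in opt.param_groups:
            group["lr"] = sched(step)         # set the per-step learning rate
        opt.step()
        opt.zero_grad()
```

Keeping the encoder's learning rate small and its warmup long protects the pretrained weights from being overwritten while the decoder is still producing noisy gradients, which is the mismatch the paper's schedule is designed to alleviate.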
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2019 TextSummarizationwithPretrained | Mirella Lapata, Yang Liu | | | Text Summarization with Pretrained Encoders | | | | 10.48550/arXiv.1908.08345 | | 2019 |