See-Liu-Manning Text Summarization Task
A See-Liu-Manning Text Summarization Task is an Abstractive Text Summarization Task that uses a pointer-generator model to produce summaries from textual data.
- AKA: Text Summarization via Pointer-Generator Network.
- Context:
- Task Input(s): a Text Document (e.g., a news article).
- Task Output(s): an automatically generated summary.
- Task Requirement(s):
- Benchmark Datasets: the CNN/Daily Mail Dataset (Hermann et al., 2015; Nallapati et al., 2016).
- Benchmark Performance Metrics: ROUGE scores (Lin, 2004) and METEOR scores (Denkowski & Lavie, 2014).
- Baseline Models:
- Sequence-to-Sequence Attentional Model (seq-to-seq + attn);
- Pointer-Generator Model with and without a Coverage Mechanism (pointer-generator, pointer-generator+coverage); the defining equations are sketched after this list.
- It can be solved by a See-Liu-Manning Text Summarization System that implements a See-Liu-Manning Text Summarization Algorithm.
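The pointer-generator model's defining computation, as given in See et al. (2017), is the following: at each decoder step $t$, a generation probability $p_{gen}$ interpolates between generating a word from the fixed vocabulary and copying a source word via the attention distribution $a^t$; the coverage variant additionally tracks cumulative attention and penalizes re-attending to the same source positions.

```latex
% Final distribution over the extended vocabulary (See et al., 2017):
P(w) = p_{gen}\, P_{vocab}(w) \;+\; (1 - p_{gen}) \sum_{i \,:\, w_i = w} a_i^t

% Generation probability from context vector h_t^*, decoder state s_t, decoder input x_t:
p_{gen} = \sigma\!\left( w_{h^*}^{\top} h_t^* + w_s^{\top} s_t + w_x^{\top} x_t + b_{ptr} \right)

% Coverage vector and coverage loss (added to the negative log-likelihood with weight \lambda):
c^t = \sum_{t'=0}^{t-1} a^{t'}, \qquad \mathrm{covloss}_t = \sum_i \min\!\left(a_i^t,\, c_i^t\right)
```

A minimal NumPy sketch of the final-distribution step (function and variable names here are illustrative, not taken from the authors' released code):

```python
import numpy as np

def final_distribution(p_vocab, attention, p_gen, src_ids, extended_vocab_size):
    """One decoder step of the pointer-generator mixture.

    p_vocab:   (V,) softmax over the fixed vocabulary.
    attention: (T,) attention weights over the T source tokens.
    p_gen:     scalar generation probability in [0, 1].
    src_ids:   (T,) ids of the source tokens in the extended vocabulary;
               source-side OOV words get temporary ids >= V.
    """
    p_final = np.zeros(extended_vocab_size)
    p_final[: p_vocab.shape[0]] = p_gen * p_vocab
    # Scatter-add the copy distribution: a word occurring several times
    # in the source accumulates all of its attention mass.
    np.add.at(p_final, src_ids, (1.0 - p_gen) * attention)
    return p_final
```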
- Example(s):
The table below reproduces the ROUGE and METEOR scores reported by See et al. (2017) on the CNN/Daily Mail test set. Models marked with * were trained and evaluated on the anonymized version of the dataset, so their scores are not strictly comparable to the others.

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR (exact match) | METEOR (+ stem/syn/para) |
|---|---|---|---|---|---|
| abstractive model (Nallapati et al., 2016)* | 35.46 | 13.30 | 32.65 | - | - |
| seq-to-seq + attn baseline (150k vocab) | 30.49 | 11.17 | 28.08 | 11.65 | 12.86 |
| seq-to-seq + attn baseline (50k vocab) | 31.33 | 11.81 | 28.83 | 12.03 | 13.20 |
| pointer-generator | 36.44 | 15.66 | 33.42 | 15.35 | 16.65 |
| pointer-generator + coverage | 39.53 | 17.28 | 36.38 | 17.32 | 18.72 |
| lead-3 baseline (See et al., 2017) | 40.34 | 17.70 | 36.57 | 20.48 | 22.21 |
| lead-3 baseline (Nallapati et al., 2017)* | 39.2 | 15.7 | 35.5 | - | - |
| extractive model (Nallapati et al., 2017)* | 39.6 | 16.2 | 35.3 | - | - |
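For orientation on the metric columns above, here is a minimal pure-Python sketch of ROUGE-N F1 over whitespace tokens. It is a simplification: the official ROUGE toolkit (Lin, 2004) also handles stemming, sentence splitting, and multiple references, so this sketch will not reproduce the table's scores exactly.

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """F1-style ROUGE-N between a candidate summary and one reference."""
    def ngrams(text: str) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example: shared unigrams "the"(x2), "cat", "on", "mat" give F1 = 5/6.
print(rouge_n("the cat sat on the mat", "the cat was on the mat"))  # ~0.833
```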
- Counter-Example(s):
- an Extractive Text Summarization Task, which selects sentences from the source document rather than generating new text.
- See: Natural Language Generation Task, Text Generation Task, Text Summarization Task, Sequence-to-Sequence Model, Neural Machine Translation, Encoder-Decoder Neural Network, Artificial Neural Network, Natural Language Processing Task, Language Model, Coverage Model.
References
2017
- (See et al., 2017) ⇒ Abigail See, Peter J. Liu, and Christopher D. Manning. (2017). “Get To The Point: Summarization with Pointer-Generator Networks.” In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). DOI:10.18653/v1/P17-1099.
- (Nallapati et al., 2017) ⇒ Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. (2017). “SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents.” In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI 2017).
2016
- (Nallapati et al., 2016) ⇒ Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos Santos, Caglar Gulcehre, and Bing Xiang. (2016). “Abstractive Text Summarization Using Sequence-to-sequence RNNs and Beyond.” In: Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL 2016). DOI:10.18653/v1/K16-1028.
2015
- (Hermann et al., 2015) ⇒ Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. (2015). “Teaching Machines to Read and Comprehend.” In: Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS'15). arXiv:1506.03340v3.
2014
- (Denkowski & Lavie, 2014) ⇒ Michael J. Denkowski, and Alon Lavie. (2014). “Meteor Universal: Language Specific Translation Evaluation for Any Target Language". In: Proceedings of the Ninth Workshop on Statistical Machine Translation (WMT@ACL 2014). DOI:10.3115/v1/W14-3348.
2004
- (Lin, 2004) ⇒ Chin-Yew Lin. (2004). “Looking for a Few Good Metrics: Automatic Summarization Evaluation - How Many Samples Are Enough?” In: Proceedings of the Fourth NTCIR Workshop on Research in Information Access Technologies: Information Retrieval, Question Answering and Summarization (NTCIR 2004).