Sentence Shortening Task
A Sentence Shortening Task is an Automatic Text Summarization Task that reduces the length of a sentence while preserving its most important content.
- AKA: Sentence Compression Task.
- Context:
- Task Input: Text Item,
- Task Output: a shorter sentence (or sentences) that preserves the input's core meaning (a minimal sketch follows this list).
- Task Requirements:
- a Sentence Extraction System (optional),
- a Sentence Shortening System.
- Example(s):
- Counter-Example(s):
- See: Maximum Entropy-based Text Summarization Task, TextRank, LexRank, Term Frequency-Inverse Document Frequency, Latent Semantic Analysis (LSA), Non-Negative Matrix Factorization (NMF), Text Mining, Document Compression, Extractive Summarization.
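The input/output contract in the Context list above can be illustrated with a small, self-contained sketch. The heuristics below (dropping parenthetical asides and a short, assumed list of filler adverbs) are illustration-only choices, not a method from any of the referenced systems.

```python
# A minimal, illustrative sketch of the Task Input -> Task Output contract
# described in the Context list above. The heuristics (dropping parenthetical
# asides and a small, assumed list of filler adverbs) are illustration-only
# choices, not a method from any of the referenced systems.
import re

FILLER_WORDS = {"very", "really", "quite", "basically", "actually"}  # assumed stop-list


def shorten_sentence(sentence: str) -> str:
    """Return a shorter sentence by deleting low-content spans (toy heuristic)."""
    # Delete parenthetical asides such as "(after a very long debate)".
    compressed = re.sub(r"\s*\([^)]*\)", "", sentence)
    # Delete filler adverbs while keeping the remaining word order.
    tokens = [t for t in compressed.split()
              if t.lower().strip(",.") not in FILLER_WORDS]
    return " ".join(tokens)


if __name__ == "__main__":
    text_item = "The committee, quite predictably, approved the proposal (after a very long debate)."
    print(shorten_sentence(text_item))
    # -> The committee, predictably, approved the proposal.
```

In practice, a Sentence Shortening System decides which tokens or constituents to delete (or how to paraphrase) using syntactic rules or learned models, such as the LSTM-based deletion model of Filippova et al. (2015) in the references below.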
References
2021
- (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Automatic_summarization Retrieved:2021-9-19.
- Automatic summarization is the process of shortening a set of data computationally, to create a subset (a summary) that represents the most important or relevant information within the original content.
In addition to text, images and videos can also be summarized. Text summarization finds the most informative sentences in a document; various methods of image summarization are the subject of ongoing research, with some looking to display the most representative images from a given collection or generating a video; video summarization extracts the most important frames from the video content.
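As a rough illustration of the "most informative sentences" idea in the passage above, the following sketch scores each sentence by the average document frequency of its words and keeps the top-scoring ones. The tokenizer and scoring scheme are simplifying assumptions for illustration, not the method of any summarizer cited here.

```python
# A frequency-based extractive scoring sketch: rank sentences by the average
# document frequency of their words and keep the top k. Tokenization and
# scoring are simplifying assumptions for illustration only.
import re
from collections import Counter


def summarize(document: str, k: int = 1) -> list:
    """Return the k sentences whose words are, on average, most frequent in the document."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    word_freq = Counter(re.findall(r"[a-z']+", document.lower()))

    def score(sentence: str) -> float:
        words = re.findall(r"[a-z']+", sentence.lower())
        return sum(word_freq[w] for w in words) / max(len(words), 1)

    return sorted(sentences, key=score, reverse=True)[:k]


if __name__ == "__main__":
    doc = ("Automatic summarization shortens text. "
           "Summarization keeps the most informative sentences. "
           "The weather was pleasant yesterday.")
    print(summarize(doc, k=1))
    # -> ['Summarization keeps the most informative sentences.']
```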
2020a
- (Malireddy et al., 2020) ⇒ Chanakya Malireddy, Tirth Maniar, and Manish Shrivastava (2020). “SCAR: Sentence Compression using Autoencoders for Reconstruction.” In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop.
2020b
- (Hou et al., 2020) ⇒ Weiwei Hou, Hanna Suominen, Piotr Koniusz, Sabrina Caldwell, and Tom Gedeon (2020). “A Token-wise CNN-based Method for Sentence Compression.” In: International Conference on Neural Information Processing (pp. 668-679). Springer, Cham.
2016
- (Moritz et al., 2016) ⇒ Maria Moritz, Barbara Pavlek, Greta Franzini, and Gregory Crane (2016). “Sentence Shortening via Morpho-Syntactic Annotated Data in Historical Language Learning.” In: Journal on Computing and Cultural Heritage (JOCCH), 9(1), 1-9.
2015
- (Filippova et al., 2015) ⇒ Katja Filippova, Enrique Alfonseca, Carlos A. Colmenares, Lukasz Kaiser, and Oriol Vinyals. (2015). “Sentence Compression by Deletion with LSTMs.” In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.
2013
- (Li et al., 2013) ⇒ Chen Li, Fei Liu, Fuliang Weng, and Yang Liu. (2013). “Document Summarization via Guided Sentence Compression.” In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 490-500.
- QUOTE:
- ...
2010
- (Filippova, 2010) ⇒ Katja Filippova. (2010). “Multi-sentence Compression: Finding Shortest Paths in Word Graphs.” In: Proceedings of the 23rd International Conference on Computational Linguistics.