Automated Text-Item(s) Summarization Task


An Automated Text-Item(s) Summarization Task is a text summarization task that is also an automated NLG task (its output is a text summary).



References

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Automatic_summarization Retrieved:2023-9-16.
    • Automatic summarization is the process of shortening a set of data computationally, to create a subset (a summary) that represents the most important or relevant information within the original content. Artificial intelligence algorithms are commonly developed and employed to achieve this, specialized for different types of data.

      Text summarization is usually implemented by natural language processing methods, designed to locate the most informative sentences in a given document.[1] On the other hand, visual content can be summarized using computer vision algorithms. Image summarization is the subject of ongoing research; existing approaches typically attempt to display the most representative images from a given image collection, or generate a video that only includes the most important content from the entire collection. Video summarization algorithms identify and extract from the original video content the most important frames (key-frames), and/or the most important video segments (key-shots), normally in a temporally ordered fashion.[2] [3] [4] [5] Video summaries simply retain a carefully selected subset of the original video frames and, therefore, are not identical to the output of video synopsis algorithms, where new video frames are being synthesized based on the original video content.

  1. Torres-Moreno, Juan-Manuel (1 October 2014). Automatic Text Summarization. Wiley. pp. 320–. ISBN 978-1-848-21668-6.
  2. Sankar K. Pal; Alfredo Petrosino; Lucia Maddalena (25 January 2012). Handbook on Soft Computing for Video Surveillance. CRC Press. pp. 81–. ISBN 978-1-4398-5685-7.
  3. Elhamifar, Ehsan; Sapiro, Guillermo; Vidal, Rene (2012). "See all by looking at a few: Sparse modeling for finding representative objects". 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 1600–1607. doi:10.1109/CVPR.2012.6247852. ISBN 978-1-4673-1228-8. S2CID 5909301. Retrieved 4 December 2022.
  4. Mademlis, Ioannis; Tefas, Anastasios; Nikolaidis, Nikos; Pitas, Ioannis (2016). "Multimodal stereoscopic movie summarization conforming to narrative characteristics". IEEE Transactions on Image Processing. IEEE. 25 (12): 5828–5840. Bibcode:2016ITIP...25.5828M. doi:10.1109/TIP.2016.2615289. hdl:1983/2bcdd7a5-825f-4ac9-90ec-f2f538bfcb72. PMID 28113502. S2CID 18566122. Retrieved 4 December 2022.
  5. Mademlis, Ioannis; Tefas, Anastasios; Pitas, Ioannis (2018). "A salient dictionary learning framework for activity video summarization via key-frame extraction". Information Sciences. Elsevier. 432: 319–331. doi:10.1016/j.ins.2017.12.020. Retrieved 4 December 2022.
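
      The extractive approach described in the quoted passage (locating the most informative sentences in a document) can be made concrete with a minimal, illustrative sketch. The function name, sentence-splitting rule, and word-frequency scoring heuristic below are illustrative assumptions, not the method of any particular system cited above.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Return the top-scoring sentences, in original order, as a summary.

    Sentences are scored by the average document frequency of their words,
    a deliberately simple stand-in for "locating the most informative
    sentences" in extractive text summarization.
    """
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Document-level word frequencies.
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)
    keep = sorted(ranked[:num_sentences])  # restore original sentence order
    return " ".join(sentences[i] for i in keep)

if __name__ == "__main__":
    doc = ("Automatic summarization shortens a document computationally. "
           "Extractive methods select the most informative sentences. "
           "Abstractive methods instead generate new sentences. "
           "The resulting summary should keep the most relevant information.")
    print(extractive_summary(doc, num_sentences=2))
```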

1982

  • (DeJong, 1982) ⇒ G. F. DeJong. (1982). “An Overview of the FRUMP System.” In: Strategies for Natural Language Processing, W.G. Lehnert & M.H. Ringle (Eds).
    • A domain-specific system.
    • Skimmed and summarized news articles.
    • Produced summaries by template instantiation (see the sketch below).
    • Identified which articles belonged to a particular domain.
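
      The template-instantiation idea behind such skimming systems can be illustrated with a toy sketch. The domain name, trigger, slot patterns, and summary template below are hypothetical examples for illustration only; they are not DeJong's actual FRUMP scripts or rules.

```python
import re

# Hypothetical domain templates (not DeJong's actual FRUMP scripts): each
# template has a trigger that decides whether an article belongs to the
# domain, slot patterns to fill from the text, and a summary to instantiate.
TEMPLATES = {
    "earthquake": {
        "trigger": re.compile(r"\bearthquake\b", re.IGNORECASE),
        "slots": {
            "location": re.compile(r"\bin ([A-Z][a-z]+)"),
            "magnitude": re.compile(r"\bmagnitude (\d+(?:\.\d+)?)"),
        },
        "summary": "An earthquake of magnitude {magnitude} struck {location}.",
    },
}

def skim_and_summarize(article):
    """Skim an article: pick the first matching domain template and fill its slots."""
    for name, template in TEMPLATES.items():
        if template["trigger"].search(article):
            filled = {}
            for slot, pattern in template["slots"].items():
                match = pattern.search(article)
                filled[slot] = match.group(1) if match else "unknown"
            return template["summary"].format(**filled)
    return None  # article does not belong to any known domain

if __name__ == "__main__":
    print(skim_and_summarize("A magnitude 6.1 earthquake was reported in Chile on Tuesday."))
```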