Text Summarization Evaluation Task
A Text Summarization Evaluation Task is a summarization evaluation task that assesses the output quality of a text summarization task.
- Context:
- It can (typically) reference a Text Summarization Performance Measure.
- ...
- See: Text Summarization Faithfulness Evaluation, Content Selection Evaluation, Coherence Evaluation, Relevance Evaluation, Compression Evaluation.
References
2023
- (Foysal & Böck, 2023) ⇒ Abdullah Al Foysal, and Ronald Böck. (2023). “Who Needs External References? - Text Summarization Evaluation Using Original Documents.” In: AI, 4(4). doi:10.3390/ai4040049
- NOTEs:
- It introduces a new metric, SUSWIR (Summary Score without Reference), which evaluates automatic text summarization quality by considering Semantic Similarity, Relevance, Redundancy, and Bias Avoidance, without requiring human-generated reference summaries (a minimal sketch of such a reference-free score appears after this list).
- It emphasizes the limitations of traditional text summarization evaluation methods like ROUGE, BLEU, and METEOR, particularly in situations where no reference summaries are available, motivating the need for a more flexible and unbiased approach.
- It demonstrates SUSWIR's effectiveness through extensive testing on various datasets, including CNN/Daily Mail and BBC Articles, showing that this new metric provides reliable and consistent assessments compared to traditional methods.
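The sketch below illustrates the general idea of a reference-free summary score: it compares a candidate summary against its source document rather than against a human reference. It combines a document-summary semantic-similarity term (TF-IDF cosine similarity) with a redundancy penalty over summary sentences. The equal weighting and the choice of TF-IDF are assumptions for illustration; they are not the SUSWIR formula from Foysal & Böck (2023), which also incorporates relevance and bias-avoidance factors.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def reference_free_score(document: str, summary: str) -> float:
    """Score a summary against its source document; no reference summary needed.

    Illustrative sketch only: the factors and weights below are
    assumptions, not the published SUSWIR formulation.
    """
    vectorizer = TfidfVectorizer(stop_words="english")

    # Semantic similarity: cosine similarity between the TF-IDF vectors
    # of the full document and the candidate summary.
    doc_sum = vectorizer.fit_transform([document, summary])
    semantic_similarity = float(cosine_similarity(doc_sum[0], doc_sum[1])[0, 0])

    # Redundancy: mean pairwise similarity among summary sentences;
    # a higher value means the summary repeats itself.
    sentences = [s.strip() for s in summary.split(".") if s.strip()]
    redundancy = 0.0
    if len(sentences) > 1:
        sent_vecs = vectorizer.fit_transform(sentences)
        sims = cosine_similarity(sent_vecs)
        n = len(sentences)
        # Exclude the n self-similarity entries on the diagonal.
        redundancy = (sims.sum() - n) / (n * (n - 1))

    # Hypothetical equal-weight combination: reward similarity to the
    # source, penalize internal repetition.
    return 0.5 * semantic_similarity + 0.5 * (1.0 - redundancy)
```

For example, `reference_free_score(article_text, model_summary)` returns a value in roughly the 0-1 range, with higher scores for summaries that cover the document's content without repeating themselves; no human-written reference summary is consulted at any point, which is the property that motivates metrics like SUSWIR.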