Round-Trip Factual Consistency

A Round-Trip Factual Consistency is a natural language generation evaluation metric that measures whether factual information remains accurate and unchanged after a transformation (e.g., summarization, translation, or paraphrasing) and reconstruction back to its original form. A minimal sketch of this generic transform-reconstruct-compare loop follows the list below.

* It can be measured by approaches such as:
** [[AlignScore]] ([[#2023|Zha et al., 2023]]), which uses a unified alignment function to evaluate factual consistency.
** [[QAFactEval]] ([[#2022a|Fabbri et al., 2022]]), which assesses [[factual consistency]] in text [[summarization]] via question generation and answering.
** [[TRUE (Factual Consistency Evaluation Metric)|TRUE]] ([[#2022b|Honovich et al., 2022]]), which re-evaluates [[factual consistency evaluation metric]]s for text generation.
** [[Summarization Consistency Check]]: Reconstructing a summary back to its original article to verify retained facts.
** [[Translation Round-Trip Test]]: Translating text to another language and back to assess fidelity (e.g., EN→FR→EN); a translation-specific sketch appears as the second example below.
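The following is a minimal Python sketch of the generic round-trip loop described above (transform, reconstruct, compare), assuming hypothetical transform and reconstruct callables supplied by the caller. The token-overlap scorer is a deliberately simple stand-in, not an implementation of the alignment-, QA-, or entailment-based metrics listed above.

<syntaxhighlight lang="python">
from typing import Callable


def token_f1(original: str, reconstructed: str) -> float:
    """Crude consistency proxy: F1 over unique lowercased tokens.
    Real round-trip metrics would use entailment, QA, or alignment models instead."""
    a, b = set(original.lower().split()), set(reconstructed.lower().split())
    overlap = len(a & b)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(b), overlap / len(a)
    return 2 * precision * recall / (precision + recall)


def round_trip_consistency(
    text: str,
    transform: Callable[[str], str],    # e.g., summarize, or translate EN->FR
    reconstruct: Callable[[str], str],  # e.g., expand the summary, or translate FR->EN
    compare: Callable[[str, str], float] = token_f1,
) -> float:
    """Score in [0, 1]: how much surface content survives the round trip."""
    reconstructed = reconstruct(transform(text))
    return compare(text, reconstructed)


if __name__ == "__main__":
    # Identity stand-ins just show the call pattern; plug in real models to use it.
    score = round_trip_consistency(
        "The Eiffel Tower was completed in 1889.",
        transform=lambda t: t,
        reconstruct=lambda t: t,
    )
    print(f"round-trip consistency: {score:.2f}")
</syntaxhighlight>

Because the comparison function is pluggable, the same loop covers both the summarization consistency check and the translation round-trip test listed above.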

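As a more concrete illustration of the Translation Round-Trip Test (EN→FR→EN), the sketch below checks whether numbers and capitalized, entity-like tokens survive the round trip. The en_to_fr and fr_to_en callables are hypothetical placeholders for a machine translation system, and the regex-based fact inventory is an assumed, deliberately coarse proxy for a real factual consistency checker.

<syntaxhighlight lang="python">
import re
from typing import Callable, Set


def surface_facts(text: str) -> Set[str]:
    """Rough 'fact' inventory: numbers plus capitalized tokens (entity candidates)."""
    numbers = set(re.findall(r"\d+(?:[.,]\d+)?", text))
    entities = {tok.strip(".,;:!?") for tok in text.split() if tok[:1].isupper()}
    return numbers | entities


def translation_round_trip_check(
    text_en: str,
    en_to_fr: Callable[[str], str],  # hypothetical EN->FR machine translation call
    fr_to_en: Callable[[str], str],  # hypothetical FR->EN machine translation call
) -> float:
    """Fraction of surface facts in the source that survive EN -> FR -> EN."""
    back_translated = fr_to_en(en_to_fr(text_en))
    source_facts = surface_facts(text_en)
    if not source_facts:
        return 1.0  # nothing checkable; treat as trivially consistent
    return len(source_facts & surface_facts(back_translated)) / len(source_facts)


if __name__ == "__main__":
    # Identity stubs stand in for a real translator; an actual test would call an MT system.
    score = translation_round_trip_check(
        "Marie Curie won the Nobel Prize in 1903 and again in 1911.",
        en_to_fr=lambda s: s,
        fr_to_en=lambda s: s,
    )
    print(f"facts surviving the round trip: {score:.0%}")
</syntaxhighlight>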



References

2025

2024

2023

(Zha et al., 2023) ⇒ Yuheng Zha, Yichi Yang, Ruichen Li, and Zhiting Hu (2023). "AlignScore: Evaluating Factual Consistency with a Unified Alignment Function." In: Proceedings of ACL 2023.

2022a

(Fabbri et al., 2022) ⇒ Alexander R. Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong (2022). "QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization." In: Proceedings of NAACL 2022.

2022b

(Honovich et al., 2022) ⇒ Or Honovich, Roee Aharoni, Jonathan Herzig, et al. (2022). "TRUE: Re-evaluating Factual Consistency Evaluation." In: Proceedings of NAACL 2022.

2021

2021b

2020