Factual Consistency Evaluation for Text Summarization via Counterfactual Estimation

by Yuexiang Xie, Fei Sun, Yang Deng, Yaliang Li, Bolin Ding

Released as an article.

2021  

Abstract

Although significant progress has been achieved in text summarization, factual inconsistency in generated summaries still severely limits its practical applications. Among the key factors for ensuring factual consistency, a reliable automatic evaluation metric is the first and most crucial one. However, existing metrics either neglect the intrinsic cause of factual inconsistency or rely on auxiliary tasks, leading to an unsatisfactory correlation with human judgments or reduced convenience of usage in practice. In light of these challenges, we propose a novel metric to evaluate factual consistency in text summarization via counterfactual estimation, which formulates the causal relationship among the source document, the generated summary, and the language prior. We remove the effect of the language prior, which can cause factual inconsistency, from the total causal effect on the generated summary, providing a simple yet effective way to evaluate consistency without relying on other auxiliary tasks. We conduct a series of experiments on three public abstractive text summarization datasets and demonstrate the advantages of the proposed metric in both its correlation with human judgments and its convenience of usage. The source code is available at https://github.com/xieyxclack/factual_coco.
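The core idea in the abstract — subtracting the effect of the language prior from the total causal effect on the generated summary — can be sketched as a difference of conditional scores: a summary token should score higher when conditioned on the source document than on a counterfactual (e.g., masked) document. The toy `token_logprob` function below is a hypothetical stand-in for a real language model's conditional log-probability, not the authors' implementation.

```python
import math

def token_logprob(token: str, context: str) -> float:
    """Toy conditional log-probability: higher if the token appears in the context.
    A hypothetical stand-in for a pretrained LM's score."""
    return math.log(0.9) if token in context.split() else math.log(0.1)

def consistency_score(summary: str, document: str, masked_document: str = "") -> float:
    """Average per-token difference between the factual score (conditioned on the
    document) and the counterfactual score (conditioned on a masked document).
    A summary generated from the language prior alone gets a low score."""
    tokens = summary.split()
    total = sum(
        token_logprob(t, document) - token_logprob(t, masked_document)
        for t in tokens
    )
    return total / len(tokens)

doc = "the cat sat on the mat"
grounded = "cat sat mat"   # supported by the source document
hallucinated = "dog ran away"  # plausible text, but not grounded in the source

print(consistency_score(grounded, doc) > consistency_score(hallucinated, doc))  # True
```

Under this sketch, a summary that the model can reproduce only because of its language prior scores near zero, while document-grounded content scores positively.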

Archived Files and Locations

application/pdf  412.2 kB
file_4m7qcxe54zgqpexj4zhkopbcru
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2021-08-30
Version   v1
Language   en
arXiv  2108.13134v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 4a41d8eb-2f44-45e3-97e8-c386a7cd7eb9
API URL: JSON