Answers Unite! Unsupervised Metrics for Reinforced Summarization Models
by
Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano
2019
Abstract
Abstractive summarization approaches based on Reinforcement Learning (RL)
have recently been proposed to overcome classical likelihood maximization. RL
makes it possible to consider complex, possibly non-differentiable, metrics that
globally assess the quality and relevance of the generated outputs. ROUGE, the
most commonly used summarization metric, is known to suffer from a bias towards
lexical similarity as well as from suboptimal accounting of the fluency and
readability of the generated abstracts. We thus explore and propose alternative
evaluation measures: the reported human-evaluation analysis shows that the
proposed metrics, based on Question Answering, compare favorably to ROUGE --
with the additional property of not requiring reference summaries. Training an
RL-based model on these metrics leads to improvements, in terms of both human
and automated metrics, over current approaches that use ROUGE as a reward.
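
The abstract describes a reference-free, Question Answering-based metric used as an RL reward: questions derived from the source document should be answerable from a good summary. Below is a minimal, self-contained Python sketch of that idea. It is an illustration, not the authors' implementation: the question generator and answerer are toy stand-ins for the trained neural models the paper relies on, and all function names here are hypothetical.

    from collections import Counter


    def f1_overlap(prediction: str, reference: str) -> float:
        """Token-level F1 between two answer strings, as commonly used
        in extractive QA evaluation (e.g. SQuAD)."""
        pred_tokens = prediction.lower().split()
        ref_tokens = reference.lower().split()
        common = Counter(pred_tokens) & Counter(ref_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(ref_tokens)
        return 2 * precision * recall / (precision + recall)


    def toy_question_gen(document: str) -> list[str]:
        """Placeholder question generator: one trivial question per
        sentence. The paper uses a trained neural model instead."""
        return [f"What does the text say about: {s.strip()}?"
                for s in document.split(".") if s.strip()]


    def toy_question_answerer(question: str, context: str) -> str:
        """Placeholder extractive QA model: return the context sentence
        with the largest word overlap with the question."""
        sentences = [s.strip() for s in context.split(".") if s.strip()]
        q_tokens = set(question.lower().split())
        return max(sentences,
                   key=lambda s: len(q_tokens & set(s.lower().split())),
                   default="")


    def qa_reward(source: str, summary: str,
                  question_gen, question_answerer) -> float:
        """Reference-free reward: generate questions from the source
        document, answer each question against the source and against
        the summary, and score the summary by the average agreement
        between the two answers."""
        questions = question_gen(source)
        if not questions:
            return 0.0
        scores = [
            f1_overlap(question_answerer(q, summary),
                       question_answerer(q, source))
            for q in questions
        ]
        return sum(scores) / len(scores)


    if __name__ == "__main__":
        source = "The cat sat on the mat. The dog barked at the mailman."
        summary = "A cat was sitting on a mat."
        r = qa_reward(source, summary, toy_question_gen, toy_question_answerer)
        print(f"QA-based reward: {r:.3f}")

In actual training, this scalar would replace ROUGE as the non-differentiable reward weighting the log-probability of a sampled summary in a REINFORCE-style policy-gradient update.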
Archived Files and Locations
application/pdf 184.0 kB
arxiv.org (repository) · web.archive.org (webarchive)
1909.01610v1