Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting

by Yen-Chun Chen, Mohit Bansal

Released as an article.

2018  

Abstract

Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary. We use a novel sentence-level policy gradient method to bridge the non-differentiable computation between these two neural networks in a hierarchical way, while maintaining language fluency. Empirically, we achieve the new state-of-the-art on all metrics (including human evaluation) on the CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores. Moreover, by first operating at the sentence-level and then the word-level, we enable parallel decoding of our neural generative model that results in substantially faster (10-20x) inference speed as well as 4x faster training convergence than previous long-paragraph encoder-decoder models. We also demonstrate the generalization of our model on the test-only DUC-2002 dataset, where we achieve higher scores than a state-of-the-art model.
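The pipeline in the abstract, in which a sentence extractor makes a discrete (non-differentiable) selection and an abstractor rewrites the chosen sentence, is trained with a sentence-level policy gradient. Below is a minimal PyTorch sketch of one such REINFORCE step; the names (SentenceExtractor, rewrite_and_score) and the toy reward are illustrative assumptions, not the authors' implementation, and the constant baseline stands in for the paper's learned critic.

# Sketch of the sentence-level policy-gradient idea: an extractor samples
# a salient sentence, a black-box abstractor rewrites and scores it, and
# REINFORCE propagates the reward through the non-differentiable choice.
# All module and function names here are hypothetical.

import torch
import torch.nn as nn

class SentenceExtractor(nn.Module):
    """Scores encoded sentences; the scores define a categorical policy."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, sent_reprs: torch.Tensor) -> torch.distributions.Categorical:
        # sent_reprs: (num_sentences, hidden_dim) sentence encodings
        logits = self.scorer(sent_reprs).squeeze(-1)  # (num_sentences,)
        return torch.distributions.Categorical(logits=logits)

def reinforce_step(extractor, sent_reprs, rewrite_and_score, optimizer, baseline=0.0):
    """One policy-gradient update over a single extraction decision.

    rewrite_and_score: callable mapping a sentence index to a scalar reward,
    e.g. ROUGE of the abstractor's rewrite against the reference summary
    (treated as a black box, so no gradient flows through it).
    """
    policy = extractor(sent_reprs)
    idx = policy.sample()                    # non-differentiable selection
    reward = rewrite_and_score(idx.item())   # e.g. ROUGE-L of the rewrite
    loss = -(reward - baseline) * policy.log_prob(idx)  # REINFORCE estimator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward

# Toy usage with random sentence encodings and a dummy reward:
extractor = SentenceExtractor(hidden_dim=128)
optimizer = torch.optim.Adam(extractor.parameters(), lr=1e-3)
sent_reprs = torch.randn(12, 128)                 # 12 sentences, 128-dim each
dummy_reward = lambda i: 1.0 if i == 3 else 0.0   # pretend sentence 3 is salient
for _ in range(5):
    reinforce_step(extractor, sent_reprs, dummy_reward, optimizer)

Because the reward enters only as a scaling factor on the log-probability of the sampled index, the abstractor and the ROUGE scorer stay outside the gradient path, which is what lets the two networks be bridged hierarchically while each is decoded (and here, trained) independently.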

Archived Files and Locations

application/pdf  765.1 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2018-05-28
Version   v1
Language   en
arXiv  1805.11080v1