Summary Level Training of Sentence Rewriting for Abstractive Summarization

Sanghwan Bae, Taeuk Kim, Jihoon Kim, Sang-goo Lee

Abstract

As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of first extracting salient sentences from a document and then paraphrasing the selected ones to generate a summary. However, the existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between the training objective and the evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its capacity for natural language understanding. In extensive experiments, we show that a combination of our proposed model and training procedure obtains new state-of-the-art performance on both the CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on the DUC-2002 test set.
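The core idea of the training signal can be illustrated with a small sketch. The snippet below is a simplified illustration, not the paper's implementation: it uses a bare unigram ROUGE-1 F1 (the paper uses full ROUGE with policy-gradient updates), and the function names `rouge1_f` and `summary_level_reward` are hypothetical. The key point it shows is that the reward scores the *concatenated* rewritten sentences against the whole reference summary, rather than summing per-sentence scores.

```python
from collections import Counter


def rouge1_f(candidate: str, reference: str) -> float:
    """Simplified unigram-overlap ROUGE-1 F1 between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def summary_level_reward(rewritten_sentences: list[str], reference_summary: str) -> float:
    """Summary-level reward: score the whole generated summary at once,
    instead of rewarding each rewritten sentence independently."""
    return rouge1_f(" ".join(rewritten_sentences), reference_summary)
```

Under a sentence-level reward, each sentence is scored in isolation, so redundant sentences can each look good individually; scoring the joined summary penalizes that redundancy, which is the mismatch the paper's training signal is designed to remove.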

Benchmarks

Benchmark: abstractive-text-summarization-on-cnn-daily
Methodology: BERT-ext + abs + RL + rerank
Metrics: ROUGE-1: 41.90 | ROUGE-2: 19.08 | ROUGE-L: 39.64

Benchmark: extractive-document-summarization-on-cnn
Methodology: BERT-ext + RL
Metrics: ROUGE-1: 42.76 | ROUGE-2: 19.87 | ROUGE-L: 39.11
