

A New Approach to Overgenerating and Scoring Abstractive Summaries

Kaiqiang Song Bingqing Wang Zhe Feng Fei Liu


Abstract

We propose a new approach that generates multiple variants of the target summary with diverse content and varying lengths, then scores and selects admissible ones according to users' needs. Abstractive summarizers trained on single reference summaries may struggle to produce outputs that achieve multiple desirable properties, i.e., capturing the most important information, remaining faithful to the original, and being grammatical and fluent. In this paper, we propose a two-stage strategy: generate a diverse set of candidate summaries from the source text in stage one, then score and select admissible ones in stage two. Importantly, our generator gives precise control over the length of the summary, which is especially well-suited when space is limited. Our selectors are designed to predict the optimal summary length and put special emphasis on faithfulness to the original text. Both stages can be effectively trained, optimized, and evaluated. Our experiments on benchmark summarization datasets suggest that this paradigm can achieve state-of-the-art performance.
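The two-stage paradigm can be sketched in miniature. The candidate generator, scorer, and length handling below are toy stand-ins, not the paper's neural models: candidates are produced by truncating the source to different target lengths, and the selector combines a crude source-overlap score (a faithfulness proxy) with a penalty for deviating from a desired length.

```python
# Toy sketch of overgenerate-and-score: generate length-diverse candidates
# (stage one), then score and select (stage two). All components here are
# illustrative stand-ins for the paper's trained generator and selectors.

def generate_candidates(source_tokens, lengths):
    """Stage 1: produce one candidate summary per target length."""
    return [source_tokens[:n] for n in lengths if n <= len(source_tokens)]

def score_candidate(candidate, source_tokens, target_length):
    """Stage 2 (toy): reward overlap with the source, penalize
    deviation from the desired summary length."""
    overlap = len(set(candidate) & set(source_tokens)) / max(len(candidate), 1)
    length_penalty = abs(len(candidate) - target_length) / target_length
    return overlap - length_penalty

def select_best(source_tokens, lengths, target_length):
    candidates = generate_candidates(source_tokens, lengths)
    return max(
        candidates,
        key=lambda c: score_candidate(c, source_tokens, target_length),
    )

source = "the model generates several summary variants and picks one".split()
best = select_best(source, lengths=[3, 5, 8], target_length=5)
print(best)  # the 5-token candidate scores highest here
```

With all candidates equally "faithful" under the toy scorer, the length penalty dominates and the candidate matching the target length wins, mirroring how the paper's selectors predict an optimal summary length.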


Benchmarks

Benchmark: text-summarization-on-gigaword
Methodology: Best Summary Length
Metrics: ROUGE-1: 39.27 | ROUGE-2: 20.40 | ROUGE-L: 37.75
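The ROUGE scores above measure n-gram overlap between the system summary and a reference. A minimal ROUGE-1 F1 sketch is shown below (unigram overlap with clipped counts; published evaluations use the official ROUGE toolkit, which adds stemming and other normalization):

```python
# Toy ROUGE-1 F1: unigram precision/recall with counts clipped by the
# reference, combined as a harmonic mean. Illustrative only.
from collections import Counter

def rouge_1_f1(candidate, reference):
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge_1_f1("police kill gunman", "police killed the gunman"), 3))
```

Here 2 of 3 candidate unigrams match (precision 2/3) and 2 of 4 reference unigrams are covered (recall 1/2), giving F1 = 4/7 ≈ 0.571.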

