
Neural Latent Extractive Document Summarization

Xingxing Zhang, Mirella Lapata, Furu Wei, Ming Zhou


Abstract

Extractive summarization models require sentence-level labels, which are usually created heuristically (e.g., with rule-based methods) given that most summarization datasets only have document-summary pairs. Since these labels might be suboptimal, we propose a latent variable extractive model where sentences are viewed as latent variables and sentences with activated variables are used to infer gold summaries. During training the loss comes directly from gold summaries. Experiments on the CNN/Dailymail dataset show that our model improves over a strong extractive baseline trained on heuristically approximated labels and also performs competitively with several recent models.
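The abstract describes treating sentence selections as binary latent variables and computing the training loss directly against the gold summary instead of heuristic sentence labels. The snippet below is a minimal, illustrative sketch of that general idea, not the authors' actual model: it scores sentences, samples Bernoulli latent selection variables, and uses a REINFORCE-style objective with a crude unigram-overlap reward as a stand-in for summary quality. The `LatentExtractor` class, the reward function, and all hyperparameters are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's architecture) of latent-variable
# extractive summarization: sentence selections are sampled Bernoulli latents,
# and the training signal comes from comparing the selected sentences with the
# gold summary rather than from heuristic per-sentence labels.
import torch
import torch.nn as nn


class LatentExtractor(nn.Module):
    """Scores each sentence; the scores parameterise Bernoulli latent variables."""

    def __init__(self, sent_dim: int, hidden: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(sent_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, sent_embs: torch.Tensor) -> torch.Tensor:
        # sent_embs: (num_sentences, sent_dim) -> selection probabilities (num_sentences,)
        return torch.sigmoid(self.scorer(sent_embs)).squeeze(-1)


def unigram_recall(selected_tokens: set, summary_tokens: set) -> float:
    """Crude unigram-overlap proxy for how well the selection covers the gold summary."""
    if not summary_tokens:
        return 0.0
    return len(selected_tokens & summary_tokens) / len(summary_tokens)


def reinforce_step(model, optimizer, sent_embs, doc_sents, gold_summary):
    probs = model(sent_embs)
    dist = torch.distributions.Bernoulli(probs=probs)
    z = dist.sample()  # latent selection variables (0/1 per sentence)
    selected = [s for s, keep in zip(doc_sents, z.tolist()) if keep > 0.5]
    reward = unigram_recall(
        set(" ".join(selected).lower().split()),
        set(gold_summary.lower().split()),
    )
    # Score-function (REINFORCE) estimator: push up the log-probability of
    # selections that yield a high reward against the gold summary.
    loss = -(reward * dist.log_prob(z).sum())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward


if __name__ == "__main__":
    doc_sents = ["the cat sat on the mat", "stocks fell sharply today", "the mat was red"]
    gold_summary = "a cat sat on a red mat"
    sent_embs = torch.randn(len(doc_sents), 32)  # toy sentence embeddings
    model = LatentExtractor(sent_dim=32)
    optim = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(50):
        reward = reinforce_step(model, optim, sent_embs, doc_sents, gold_summary)
    print("final reward:", reward)
```

A score-function estimator is used here only because the sentence selections are discrete and therefore not differentiable; it is one standard way to train such latent variables, and the reward could be swapped for an actual ROUGE score in practice.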

Benchmarks

Benchmark: extractive-document-summarization-on-cnn
Methodology: Latent
Metrics: ROUGE-1: 41.05 | ROUGE-2: 18.77 | ROUGE-L: 37.54
