Calibrating Sequence Likelihood Improves Conditional Language Generation

Yao Zhao Misha Khalman Rishabh Joshi Shashi Narayan Mohammad Saleh Peter J. Liu

Abstract

Conditional language models are predominantly trained with maximum likelihood estimation (MLE), giving probability mass to sparsely observed target sequences. While MLE-trained models assign high probability to plausible sequences given the context, the model probabilities often do not accurately rank-order generated sequences by quality. This has been observed empirically in beam search decoding, where output quality degrades with large beam sizes and decoding strategies benefit from heuristics such as length normalization and repetition blocking. In this work, we introduce sequence likelihood calibration (SLiC), in which the likelihoods of model-generated sequences are calibrated to better align with reference sequences in the model's latent space. With SLiC, decoding heuristics become unnecessary and the quality of decoded candidates improves significantly regardless of the decoding method. Furthermore, SLiC shows no sign of diminishing returns with model scale and offers alternative ways to improve quality under limited training and inference budgets. With SLiC, we exceed or match SOTA results on a wide range of generation tasks spanning abstractive summarization, question generation, abstractive question answering, and data-to-text generation, even with modest-sized models.
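The abstract does not spell out the calibration objective, so the sketch below is only one plausible reading of it: decode several candidates from an MLE-trained model, score each candidate's closeness to the reference in the model's latent space (here a token-level cosine similarity over decoder hidden states, which is an assumption), and apply a pairwise margin loss that pushes the model's sequence log-likelihoods to rank candidates in the same order as those similarities. The function names, tensor shapes, margin form, and similarity measure are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal, illustrative sketch of a pairwise rank-calibration loss in the spirit
# of SLiC (PyTorch). All specifics below (similarity function, margin hinge,
# shapes) are assumptions for illustration, not the paper's exact loss.
import torch
import torch.nn.functional as F


def latent_similarity(cand_states, ref_states):
    """Similarity between a candidate and the reference in the model's latent
    space: symmetric max-over-tokens cosine similarity of decoder hidden states
    (assumed form).  cand_states: [Tc, H], ref_states: [Tr, H]."""
    cand = F.normalize(cand_states, dim=-1)
    ref = F.normalize(ref_states, dim=-1)
    sim = cand @ ref.T  # [Tc, Tr] token-level cosine similarities
    return 0.5 * (sim.max(dim=1).values.mean() + sim.max(dim=0).values.mean())


def rank_calibration_loss(log_likelihoods, similarities, margin=1.0):
    """Encourage the model's sequence log-likelihoods to rank candidates the
    same way their latent-space similarities to the reference do.

    log_likelihoods: [N] sequence log-probabilities of N decoded candidates.
    similarities:    [N] latent-space similarity of each candidate to the reference.
    """
    ll_i = log_likelihoods.unsqueeze(1)  # [N, 1]
    ll_j = log_likelihoods.unsqueeze(0)  # [1, N]
    # Mask of pairs (i, j) where candidate i is closer to the reference than j.
    better = (similarities.unsqueeze(1) > similarities.unsqueeze(0)).float()
    # Hinge penalty when the "better" candidate is not more likely by a margin.
    pairwise = F.relu(margin - (ll_i - ll_j)) * better
    return pairwise.sum() / better.sum().clamp(min=1.0)


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy example: 4 decoded candidates with made-up log-likelihoods and states.
    log_likelihoods = torch.randn(4, requires_grad=True)
    ref_states = torch.randn(7, 16)
    cand_states = [torch.randn(5, 16) for _ in range(4)]
    sims = torch.stack([latent_similarity(c, ref_states) for c in cand_states])
    loss = rank_calibration_loss(log_likelihoods, sims)
    loss.backward()
    print("calibration loss:", loss.item())
```

In practice one would also keep the calibrated model close to the original MLE-trained checkpoint with a regularization term; that is omitted here for brevity.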

Benchmarks

Benchmark                                   | Methodology       | ROUGE-1 | ROUGE-2 | ROUGE-L
abstractive-text-summarization-on-cnn-daily | Pegasus           | 47.36   | 24.02   | 44.45
text-summarization-on-reddit-tifu           | PEGASUS 2B + SLiC | 32.03   | 11.13   | 25.51
text-summarization-on-samsum-corpus         | PEGASUS 2B + SLiC | 54.37   | 29.88   | 45.89
