MASS: Masked Sequence to Sequence Pre-training for Language Generation

Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu

Abstract

Pre-training and fine-tuning, e.g., BERT, have achieved great success in language understanding by transferring knowledge from a rich-resource pre-training task to low/zero-resource downstream tasks. Inspired by the success of BERT, we propose MAsked Sequence to Sequence pre-training (MASS) for encoder-decoder based language generation tasks. MASS adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes as input a sentence with a randomly masked fragment (several consecutive tokens), and its decoder predicts this masked fragment. In this way, MASS jointly trains the encoder and decoder, developing both representation-extraction and language-modeling capabilities. By further fine-tuning on a variety of zero/low-resource language generation tasks, including neural machine translation, text summarization, and conversational response generation (three tasks, eight datasets in total), MASS achieves significant improvements over baselines without pre-training or with other pre-training methods. Notably, we achieve state-of-the-art accuracy (37.5 BLEU) on unsupervised English-French translation, even surpassing early attention-based supervised models.
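The core of MASS is the fragment-masking objective described above: replace a consecutive span of tokens on the encoder side and ask the decoder to reconstruct exactly that span. The following Python sketch illustrates only the masking step, under the assumption of a simple token list; names such as mass_mask and frag_ratio are illustrative and not taken from the official microsoft/MASS code.

import random

MASK = "[MASK]"

def mass_mask(tokens, frag_ratio=0.5):
    """Mask one consecutive fragment of `tokens` for MASS-style pre-training.

    Returns (encoder_input, decoder_target):
      - encoder_input: the sentence with the fragment replaced by [MASK]
      - decoder_target: the consecutive fragment the decoder must predict
    """
    n = len(tokens)
    k = max(1, int(n * frag_ratio))    # fragment length (the paper masks ~50%)
    start = random.randint(0, n - k)   # random start position of the fragment
    encoder_input = tokens[:start] + [MASK] * k + tokens[start + k:]
    decoder_target = tokens[start:start + k]
    return encoder_input, decoder_target

tokens = "the quick brown fox jumps over the lazy dog".split()
enc_in, dec_tgt = mass_mask(tokens)
print(enc_in)   # e.g. ['the', 'quick', '[MASK]', '[MASK]', '[MASK]', '[MASK]', 'the', 'lazy', 'dog']
print(dec_tgt)  # e.g. ['brown', 'fox', 'jumps', 'over']

Predicting a consecutive fragment (rather than isolated tokens, as in BERT) forces the decoder to model dependencies among the masked tokens, which is what makes the objective a good fit for sequence-to-sequence generation.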

Code Repositories

microsoft/MPNet (PyTorch)
michael-wzhu/mpnet_zh (PyTorch)
microsoft/MASS (official, PyTorch)
jiaruncao/BioCopyMechanism

Benchmarks

Benchmark | Methodology | Metrics
text-summarization-on-gigaword | MASS | ROUGE-1: 38.73, ROUGE-2: 19.71, ROUGE-L: 35.96
unsupervised-machine-translation-on-wmt2014-1 | MASS (6-layer Transformer) | BLEU: 34.9
unsupervised-machine-translation-on-wmt2014-2 | MASS (6-layer Transformer) | BLEU: 37.5
unsupervised-machine-translation-on-wmt2016 | MASS (6-layer Transformer) | BLEU: 28.3
unsupervised-machine-translation-on-wmt2016-1 | MASS (6-layer Transformer) | BLEU: 35.2
unsupervised-machine-translation-on-wmt2016-2 | MASS (6-layer Transformer) | BLEU: 35.2
unsupervised-machine-translation-on-wmt2016-3 | MASS (6-layer Transformer) | BLEU: 33.1
