Unified Language Model Pre-training for Natural Language Understanding and Generation

Li Dong; Nan Yang; Wenhui Wang; Furu Wei; Xiaodong Liu; Yu Wang; Jianfeng Gao; Ming Zhou; Hsiao-Wuen Hon

Abstract

This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UniLM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UniLM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
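
The masking scheme described above can be pictured with a short, illustrative PyTorch sketch. This is not the official UniLM code; the helper names, shapes, and the toy example at the bottom are assumptions made purely for exposition of how one shared Transformer can be driven by three different self-attention masks.

```python
import torch

def bidirectional_mask(seq_len: int) -> torch.Tensor:
    # Cloze-style (BERT-like) objective: every token may attend to every token.
    return torch.ones(seq_len, seq_len, dtype=torch.bool)

def unidirectional_mask(seq_len: int) -> torch.Tensor:
    # Left-to-right LM: each token attends only to itself and earlier tokens.
    return torch.tril(torch.ones(seq_len, seq_len)).bool()

def seq2seq_mask(src_len: int, tgt_len: int) -> torch.Tensor:
    # Source tokens attend bidirectionally within the source segment; target
    # tokens attend to the full source plus the already-generated target prefix.
    total = src_len + tgt_len
    mask = torch.zeros(total, total, dtype=torch.bool)
    mask[:src_len, :src_len] = True
    mask[src_len:, :src_len] = True
    mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len)).bool()
    return mask

def apply_mask(attn_scores: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Disallowed positions are set to -inf before softmax, so the same shared
    # Transformer weights can serve all three pre-training objectives.
    return attn_scores.masked_fill(~mask, float("-inf"))

if __name__ == "__main__":
    scores = torch.randn(7, 7)  # toy attention scores for a 7-token sequence
    probs = apply_mask(scores, seq2seq_mask(src_len=4, tgt_len=3)).softmax(dim=-1)
    print(probs)
```

In this reading, switching between understanding-style and generation-style fine-tuning is only a matter of which mask is fed to the shared network, which is what lets a single pre-trained model cover both task families.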

Code Repositories

facebookresearch/data2vec_vision (PyTorch)
KnightZhang625/BERT_TF (TensorFlow)
YunwenTechnology/Unilm (PyTorch)
LeonZh0u/Chatbot (PyTorch)
microsoft/unilm (Official, PyTorch)
robinsongh381/unilm_pytorch_korean (PyTorch)
jiaruncao/BioCopyMechanism

Benchmarks

Benchmark | Methodology | Metrics
abstractive-text-summarization-on-cnn-daily | UniLM | ROUGE-1: 43.08, ROUGE-2: 20.43, ROUGE-L: 40.34
document-summarization-on-cnn-daily-mail | UniLM (Abstractive Summarization) | ROUGE-1: 43.08, ROUGE-2: 20.43, ROUGE-L: 40.34
generative-question-answering-on-coqa | UniLM | F1-Score: 82.5
question-generation-on-squad11 | UniLM | BLEU-4: 22.78, METEOR: 25.1, ROUGE-L: 51.1
text-summarization-on-gigaword | UniLM | ROUGE-1: 38.90, ROUGE-2: 20.05, ROUGE-L: 36.00
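
For reference, the ROUGE-L figures above are longest-common-subsequence F-measures between a system output and a reference. The sketch below is a minimal, whitespace-tokenized illustration only; the beta weighting and the absence of stemming and multi-reference handling are simplifying assumptions, and the reported numbers come from the standard ROUGE toolkit.

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate: str, reference: str, beta: float = 1.2) -> float:
    # LCS-based precision/recall combined into an F-measure; beta is an
    # illustrative choice here, not the toolkit's exact setting.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return (1 + beta ** 2) * prec * rec / (rec + beta ** 2 * prec)

print(rouge_l("the model summarizes the article", "the model summarizes articles"))
```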
