Direct Output Connection for a High-Rank Language Model
Sho Takase; Jun Suzuki; Masaaki Nagata

Abstract
This paper proposes a state-of-the-art recurrent neural network (RNN) language model that combines probability distributions computed not only from the final RNN layer but also from middle layers. Our proposed method raises the expressive power of a language model based on the matrix factorization interpretation of language modeling introduced by Yang et al. (2018). The proposed method improves the current state-of-the-art language model and achieves the best scores on the Penn Treebank and WikiText-2 standard benchmark datasets. Moreover, we show that our proposed method also improves two application tasks: machine translation and headline generation. Our code is publicly available at: https://github.com/nttcslab-nlp/doc_lm.
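The sketch below illustrates the core idea stated in the abstract: computing a softmax distribution from each RNN layer, not only the final one, and mixing those distributions into a single output distribution. It is a minimal PyTorch sketch, not the released implementation; the class name, layer count, and the single learned weight per layer are illustrative assumptions, and the repository linked above (which builds on AWD-LSTM, as the benchmark entries indicate) is the authoritative code.

```python
# Minimal sketch (not the authors' released code) of mixing per-layer softmax
# distributions from an RNN language model. All hyperparameters are illustrative.
import torch
import torch.nn as nn


class DirectOutputConnectionLM(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_layers=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Stack of LSTM layers; each layer's output is kept so that middle
        # layers can also produce probability distributions.
        self.layers = nn.ModuleList(
            [nn.LSTM(embed_dim if i == 0 else hidden_dim, hidden_dim, batch_first=True)
             for i in range(num_layers)]
        )
        # One output projection (softmax head) per layer.
        self.output_heads = nn.ModuleList(
            [nn.Linear(hidden_dim, vocab_size) for _ in range(num_layers)]
        )
        # Learned mixture weights over the per-layer distributions
        # (assumption: a single global weight per layer).
        self.mixture_logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self, tokens):
        x = self.embedding(tokens)
        layer_probs = []
        for lstm, head in zip(self.layers, self.output_heads):
            x, _ = lstm(x)
            # Probability distribution computed from this layer's hidden states.
            layer_probs.append(torch.softmax(head(x), dim=-1))
        # Mixing in probability space keeps the result a valid distribution and
        # lifts the low-rank restriction of a single softmax output layer.
        weights = torch.softmax(self.mixture_logits, dim=0)
        probs = sum(w * p for w, p in zip(weights, layer_probs))
        return torch.log(probs + 1e-9)  # log-probabilities for an NLL loss
```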
Code Repositories
https://github.com/nttcslab-nlp/doc_lm
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| constituency-parsing-on-penn-treebank | LSTM Encoder-Decoder + LSTM-LM | F1 score: 94.47 |
| language-modelling-on-penn-treebank-word | AWD-LSTM-DOC | Params: 23M; Validation perplexity: 54.12; Test perplexity: 52.38 |
| language-modelling-on-penn-treebank-word | AWD-LSTM-DOC x5 | Params: 185M; Validation perplexity: 48.63; Test perplexity: 47.17 |
| language-modelling-on-wikitext-2 | AWD-LSTM-DOC | Params: 37M; Validation perplexity: 60.29; Test perplexity: 58.03 |
| language-modelling-on-wikitext-2 | AWD-LSTM-DOC x5 | Params: 185M; Validation perplexity: 54.19; Test perplexity: 53.09 |