Improved Language Modeling by Decoding the Past

Siddhartha Brahma

Abstract

Highly regularized LSTMs achieve impressive results on several benchmark datasets in language modeling. We propose a new regularization method based on decoding the last token in the context using the predicted distribution of the next token. This biases the model towards retaining more contextual information, in turn improving its ability to predict the next token. With negligible overhead in the number of parameters and training time, our Past Decode Regularization (PDR) method achieves a word level perplexity of 55.6 on the Penn Treebank and 63.5 on the WikiText-2 datasets using a single softmax. We also show gains by using PDR in combination with a mixture-of-softmaxes, achieving a word level perplexity of 53.8 and 60.5 on these datasets. In addition, our method achieves 1.169 bits-per-character on the Penn Treebank Character dataset for character level language modeling. These results constitute a new state-of-the-art in their respective settings.
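
To make the idea concrete, here is a minimal PyTorch sketch of one plausible way such a regularizer could be wired into an LSTM language model. This is an illustration under assumptions, not the paper's implementation: it takes the expected token embedding under the predicted next-token distribution and linearly decodes the previous context token from it, and the names `PDRLanguageModel`, `past_decoder`, and the weight `pdr_lambda` are illustrative, not from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PDRLanguageModel(nn.Module):
    """LSTM language model with a sketched Past Decode Regularization term.

    Assumption: the regularizer forms the expected token embedding under the
    predicted next-token distribution and decodes the *previous* token from
    it with an auxiliary linear layer. `pdr_lambda` is an illustrative value.
    """

    def __init__(self, vocab_size, emb_dim=400, pdr_lambda=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, emb_dim, batch_first=True)
        self.decoder = nn.Linear(emb_dim, vocab_size)
        self.decoder.weight = self.embed.weight   # weight tying, as in AWD-LSTM
        self.past_decoder = nn.Linear(emb_dim, vocab_size)  # decodes the last context token
        self.pdr_lambda = pdr_lambda

    def forward(self, tokens, targets):
        # tokens: (batch, seq) context; targets: (batch, seq) next tokens
        h, _ = self.lstm(self.embed(tokens))
        logits = self.decoder(h)                   # next-token logits
        lm_loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

        # Past decoding: expected embedding under the predicted distribution,
        # then try to recover the token that was just read (tokens[:, t]).
        probs = logits.softmax(dim=-1)
        expected_emb = probs @ self.embed.weight   # (batch, seq, emb_dim)
        past_logits = self.past_decoder(expected_emb)
        pdr_loss = F.cross_entropy(
            past_logits.reshape(-1, past_logits.size(-1)), tokens.reshape(-1))

        return lm_loss + self.pdr_lambda * pdr_loss

# Usage with random data, just to show the shapes involved:
model = PDRLanguageModel(vocab_size=10000)
x = torch.randint(0, 10000, (8, 35))   # context tokens
y = torch.randint(0, 10000, (8, 35))   # next-token targets
loss = model(x, y)
loss.backward()
```

Because the extra past-decoding branch reuses the predicted distribution and adds only one small linear layer, it is consistent with the abstract's claim of negligible overhead in parameters and training time.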

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| Language modelling on Penn Treebank (character level) | Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | Bits per Character (BPC): 1.169; Params: 13.8M |
| Language modelling on Penn Treebank (word level) | Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | Validation perplexity: 48.0; Test perplexity: 47.3; Params: 22M |
| Language modelling on WikiText-2 | Past Decode Reg. + AWD-LSTM-MoS + dyn. eval. | Validation perplexity: 42.0; Test perplexity: 40.3; Params: 35M |
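
For reference, both metrics above are standard monotone transforms of the model's average cross-entropy (these are the usual definitions, not specific to this paper):

\[
\mathrm{PPL} = \exp\!\Big(-\tfrac{1}{N}\textstyle\sum_{t=1}^{N}\ln p(x_t \mid x_{<t})\Big),
\qquad
\mathrm{BPC} = -\tfrac{1}{N}\textstyle\sum_{t=1}^{N}\log_2 p(c_t \mid c_{<t}),
\]

where the \(x_t\) are words and the \(c_t\) are characters. For example, a BPC of 1.169 corresponds to a per-character perplexity of \(2^{1.169} \approx 2.25\).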
