Partially Shuffling the Training Data to Improve Language Models

Ofir Press

Abstract

Although SGD requires shuffling the training data between epochs, currently none of the word-level language modeling systems do this. Naively shuffling all sentences in the training data would not permit the model to learn inter-sentence dependencies. Here we present a method that partially shuffles the training data between epochs. This method makes each batch random, while keeping most sentence ordering intact. It achieves new state of the art results on word-level language modeling on both the Penn Treebank and WikiText-2 datasets.
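The idea lends itself to a very short implementation. Below is a minimal PyTorch sketch of a per-epoch partial shuffle, assuming the corpus has already been reshaped into a [tokens_per_stream, batch_size] tensor as in the usual AWD-LSTM-style batching pipeline; the function name and details here are illustrative rather than copied from the official ofirpress/PartialShuffle repository. Each column (continuous token stream) is rotated at a random point, so the batches differ every epoch while the token order within each rotated segment stays intact.

```python
import torch

def partial_shuffle(data: torch.Tensor) -> torch.Tensor:
    """Rotate each column of the batched token stream by a random offset.

    `data` is assumed to have shape [tokens_per_stream, batch_size], where
    each column is a contiguous slice of the training corpus.
    """
    tokens_per_stream, batch_size = data.size()
    shuffled = data.clone()
    # One random rotation point per column (per continuous stream).
    offsets = torch.randint(low=0, high=tokens_per_stream, size=(batch_size,))
    for col in range(batch_size):
        i = offsets[col].item()
        # Move the prefix to the end: ordering inside both pieces is preserved,
        # so most inter-sentence dependencies remain learnable.
        shuffled[:, col] = torch.cat((data[i:, col], data[:i, col]))
    return shuffled

# Called once at the start of every epoch, e.g.:
# train_data = partial_shuffle(train_data)
```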

Code Repositories

ofirpress/PartialShuffle (official, PyTorch)

Benchmarks

Benchmark                                  | Method                         | Params | Test perplexity | Validation perplexity
Language Modelling on Penn Treebank (word) | AWD-LSTM-MoS + Partial Shuffle | 22M    | 53.92           | 55.89
Language Modelling on Penn Treebank (word) | AWD-LSTM-DOC + Partial Shuffle | 23M    | 52.0            | 53.79
Language Modelling on WikiText-2           | AWD-LSTM-MoS + Partial Shuffle | 35M    | 59.98           | 62.38
Language Modelling on WikiText-2           | AWD-LSTM-DOC + Partial Shuffle | 37M    | 57.85           | 60.16
