Dynamic Evaluation of Neural Sequence Models

Ben Krause; Emmanuel Kahembwe; Iain Murray; Steve Renals

Abstract

We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our comparisons. Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies on the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char respectively.
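The core idea — taking gradient steps on recently evaluated text so the model assigns higher probability to re-occurring patterns — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: it uses a softmax bigram character model updated by plain SGD per segment (the paper's update rule is more elaborate), but the evaluation loop has the same shape: score each segment first, then adapt to it before scoring the next.

```python
import numpy as np

def dynamic_eval(text, vocab, lr=0.1, seg_len=5):
    """Score `text` under a bigram softmax model, adapting the
    model to each segment after it has been scored (dynamic eval).
    Returns average negative log-likelihood in nats per character."""
    V = len(vocab)
    idx = {c: i for i, c in enumerate(vocab)}
    W = np.zeros((V, V))                    # bigram logits, W[prev, next]
    seq = [idx[c] for c in text]
    total_nll, n = 0.0, 0
    for s in range(0, len(seq) - 1, seg_len):
        pairs = list(zip(seq[s:s + seg_len], seq[s + 1:s + seg_len + 1]))
        grad = np.zeros_like(W)
        for prev, nxt in pairs:
            logits = W[prev]
            p = np.exp(logits - logits.max())
            p /= p.sum()
            total_nll -= np.log(p[nxt])     # score BEFORE adapting
            n += 1
            g = p.copy()
            g[nxt] -= 1.0                   # softmax cross-entropy gradient
            grad[prev] += g
        W -= lr * grad / max(len(pairs), 1) # adapt to the segment just seen
    return total_nll / n
```

Setting `lr=0` recovers static evaluation; on text with recurring patterns, the adapted model should achieve a lower average NLL than the static one, which is the effect dynamic evaluation exploits at scale.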

Code Repositories

benkrause/dynamic-evaluation (official, PyTorch)
sacmehta/PRU (PyTorch)

Benchmarks

Benchmark                                    Methodology               Metrics
Language modelling on Hutter Prize           mLSTM + dynamic eval      1.08 BPC; 46M params
Language modelling on Penn Treebank (word)   AWD-LSTM + dynamic eval   51.1 test / 51.6 validation perplexity; 24M params
Language modelling on text8                  mLSTM + dynamic eval      1.19 BPC; 45M params
Language modelling on WikiText-2             AWD-LSTM + dynamic eval   44.3 test / 46.4 validation perplexity; 33M params
