Character-Level Language Modeling with Deeper Self-Attention
Rami Al-Rfou; Dokook Choe; Noah Constant; Mandy Guo; Llion Jones

Abstract
LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.
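As a rough illustration of the two auxiliary-loss ideas named in the abstract, the sketch below (not the authors' implementation) trains a causally masked, fixed-context character transformer with the next-character loss applied at every sequence position, plus extra softmax heads on intermediate layers whose losses are added to the final-layer loss. The layer count, model dimensions, head placement (`aux_every`), and the 0.5 weighting on intermediate-layer losses are illustrative assumptions, not values from the paper.

```python
# Sketch of fixed-context character-level modeling with auxiliary losses at
# intermediate layers and at every sequence position. Hyperparameters and the
# 0.5 intermediate-loss weight are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeepCharTransformer(nn.Module):
    def __init__(self, vocab_size=256, d_model=512, n_heads=8, n_layers=8,
                 context_len=128, aux_every=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(context_len, d_model))
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                       batch_first=True)
            for _ in range(n_layers)])
        # One prediction head per layer; heads below the top supply auxiliary losses.
        self.heads = nn.ModuleList([nn.Linear(d_model, vocab_size)
                                    for _ in range(n_layers)])
        self.aux_every = aux_every

    def forward(self, chars, targets=None):
        # chars, targets: (batch, seq_len) integer character ids, seq_len <= context_len
        t = chars.size(1)
        x = self.embed(chars) + self.pos[:t]
        # Fixed-context causal mask: each position attends only to earlier positions.
        causal = torch.triu(torch.full((t, t), float("-inf"), device=chars.device),
                            diagonal=1)
        loss, logits = None, None
        for i, (layer, head) in enumerate(zip(self.layers, self.heads)):
            x = layer(x, src_mask=causal)
            is_last = i == len(self.layers) - 1
            if is_last or (targets is not None and (i + 1) % self.aux_every == 0):
                logits = head(x)  # next-char prediction at *every* position
                if targets is not None:
                    step = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                           targets.reshape(-1))
                    # Intermediate-layer losses are down-weighted (assumed schedule).
                    step = step if is_last else 0.5 * step
                    loss = step if loss is None else loss + step
        return logits, loss


# Toy usage: next-character targets are the input shifted left by one position.
model = DeepCharTransformer()
chars = torch.randint(0, 256, (4, 128))
targets = torch.roll(chars, shifts=-1, dims=1)
logits, loss = model(chars, targets)
loss.backward()
```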
Benchmarks
| Benchmark | Model | Bit per Character (BPC) | Number of params |
|---|---|---|---|
| language-modelling-on-enwiki8 | 12-layer Character Transformer Model | 1.11 | 44M |
| language-modelling-on-enwiki8 | 64-layer Character Transformer Model | 1.06 | 235M |
| language-modelling-on-hutter-prize | 64-layer Character Transformer Model | 1.06 | 235M |
| language-modelling-on-hutter-prize | 12-layer Character Transformer Model | 1.11 | 44M |
| language-modelling-on-text8 | 12-layer Character Transformer Model | 1.18 | 44M |
| language-modelling-on-text8 | 64-layer Character Transformer Model | 1.13 | 235M |
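For context on the metric, bits per character is the model's average per-character cross-entropy expressed in base 2; a loss reported in nats (as most frameworks report it) converts by dividing by ln 2. A minimal sketch:

```python
import math

def bits_per_character(mean_cross_entropy_nats: float) -> float:
    """Convert an average per-character cross-entropy in nats to bits per character."""
    return mean_cross_entropy_nats / math.log(2)

# A per-character loss of about 0.735 nats corresponds to roughly 1.06 BPC.
print(round(bits_per_character(0.735), 2))  # 1.06
```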