Character-Level Language Modeling with Deeper Self-Attention

Rami Al-Rfou; Dokook Choe; Noah Constant; Mandy Guo; Llion Jones

Abstract

LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.
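
To make the two kinds of auxiliary losses concrete, here is a minimal sketch (not the authors' released code) of a causal character-level transformer that predicts the next character at every sequence position and attaches an extra classifier to every intermediate layer. Layer count, dimensions, and the loss-weighting schedule below are illustrative assumptions, not the published hyperparameters.

```python
# Sketch: deep causal char-level transformer with per-layer and per-position
# auxiliary losses, in the spirit of the paper (hyperparameters are assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharTransformerWithAuxLosses(nn.Module):
    def __init__(self, vocab_size=256, d_model=512, n_heads=8, n_layers=12, context=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(context, d_model))
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
            for _ in range(n_layers)
        )
        # One classifier per layer, so intermediate layers also predict the
        # next character (the intermediate-layer auxiliary losses).
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab_size) for _ in range(n_layers))

    def forward(self, x, targets):
        # x, targets: (batch, context) character ids; targets are x shifted by one.
        seq_len = x.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len).to(x.device)
        h = self.embed(x) + self.pos[:seq_len]
        total_loss = 0.0
        for i, (layer, head) in enumerate(zip(self.layers, self.heads)):
            h = layer(h, src_mask=causal)
            logits = head(h)  # a prediction at every position, not just the last one
            # Per-position cross entropy = the intermediate-sequence-position losses.
            loss_i = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
            # Down-weight lower layers (illustrative schedule; the paper anneals
            # the intermediate losses away over training).
            weight = (i + 1) / len(self.layers)
            total_loss = total_loss + weight * loss_i
        return logits, total_loss
```

At evaluation time only the top layer's predictions are used; the intermediate heads exist solely to keep gradients flowing through a very deep stack during training.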

Benchmarks

Benchmark | Methodology | Bit per Character (BPC) | Number of params
Language Modelling on enwik8 | 12-layer Character Transformer Model | 1.11 | 44M
Language Modelling on enwik8 | 64-layer Character Transformer Model | 1.06 | 235M
Language Modelling on Hutter Prize | 64-layer Character Transformer Model | 1.06 | 235M
Language Modelling on Hutter Prize | 12-layer Character Transformer Model | 1.11 | 44M
Language Modelling on text8 | 12-layer Character Transformer Model | 1.18 | 44M
Language Modelling on text8 | 64-layer Character Transformer Model | 1.13 | 235M
