Hungry Hungry Hippos: Towards Language Modeling with State Space Models

Daniel Y. Fu, Tri Dao, Khaled K. Saab, Armin W. Thomas, Atri Rudra, Christopher Ré

Abstract

State space models (SSMs) have demonstrated state-of-the-art sequence modeling performance in some modalities, but underperform attention in language modeling. Moreover, despite scaling nearly linearly in sequence length instead of quadratically, SSMs are still slower than Transformers due to poor hardware utilization. In this paper, we make progress on understanding the expressivity gap between SSMs and attention in language modeling, and on reducing the hardware barrier between SSMs and attention. First, we use synthetic language modeling tasks to understand the gap between SSMs and attention. We find that existing SSMs struggle with two capabilities: recalling earlier tokens in the sequence and comparing tokens across the sequence. To understand the impact on language modeling, we propose a new SSM layer, H3, that is explicitly designed for these abilities. H3 matches attention on the synthetic languages and comes within 0.4 PPL of Transformers on OpenWebText. Furthermore, a hybrid 125M-parameter H3-attention model that retains two attention layers surprisingly outperforms Transformers on OpenWebText by 1.0 PPL. Next, to improve the efficiency of training SSMs on modern hardware, we propose FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on sequences up to 8K, and introduces a novel state passing algorithm that exploits the recurrent properties of SSMs to scale to longer sequences. FlashConv yields 2× speedup on the long-range arena benchmark and allows hybrid language models to generate text 2.4× faster than Transformers. Using FlashConv, we scale hybrid H3-attention language models up to 2.7B parameters on the Pile and find promising initial results, achieving lower perplexity than Transformers and outperforming Transformers in zero- and few-shot learning on a majority of tasks in the SuperGLUE benchmark.
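
The mechanisms the abstract refers to can be illustrated concretely: an SSM applies a long convolution to the input sequence, which can be computed in O(L log L) time with the FFT (the operation FlashConv's fused block FFT accelerates), and the H3 layer combines a shift SSM and a diagonal SSM with multiplicative query/key/value interactions so the layer can recall earlier tokens and compare tokens across the sequence. The sketch below is a minimal, simplified illustration of these two ideas in PyTorch, not the authors' implementation; the kernels (`shift_kernel`, `diag_kernel`) are placeholder learned filters rather than kernels derived from SSM state matrices.

```python
import torch
import torch.nn as nn


def fft_causal_conv(u: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Causal convolution of u (batch, length, dim) with kernel k (length, dim)
    in O(L log L) time via the FFT, versus the O(L^2) cost of attention."""
    L = u.shape[1]
    k_f = torch.fft.rfft(k, n=2 * L, dim=0)          # kernel spectrum
    u_f = torch.fft.rfft(u, n=2 * L, dim=1)          # input spectrum
    y = torch.fft.irfft(u_f * k_f, n=2 * L, dim=1)   # zero-padded -> linear conv
    return y[:, :L, :]                               # keep the causal part


class ToyH3(nn.Module):
    """Simplified H3-style layer: K passes through a short "shift" filter, the
    shifted K multiplies V (comparison), the product passes through a long
    "diagonal SSM" convolution (recall/aggregation), and the result is gated by Q."""

    def __init__(self, dim: int, max_len: int, shift_len: int = 4):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        # Placeholder kernels; H3 derives its kernels from SSM state matrices.
        self.shift_kernel = nn.Parameter(0.1 * torch.randn(shift_len, dim))
        self.diag_kernel = nn.Parameter(0.01 * torch.randn(max_len, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        L = x.shape[1]
        # Pad the short shift filter to full length so both SSMs use the same FFT conv.
        pad = torch.zeros(L - self.shift_kernel.shape[0], x.shape[-1],
                          device=x.device, dtype=x.dtype)
        k = fft_causal_conv(k, torch.cat([self.shift_kernel, pad], dim=0))
        y = fft_causal_conv(k * v, self.diag_kernel[:L])
        return self.out_proj(q * y)


if __name__ == "__main__":
    layer = ToyH3(dim=64, max_len=128)
    x = torch.randn(2, 128, 64)
    print(layer(x).shape)  # torch.Size([2, 128, 64])
```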

Code Repositories

hazyresearch/safari (PyTorch)
lindermanlab/S5 (JAX)
hazyresearch/h3 (official; PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
coreference-resolution-on-winograd-schema | Hybrid H3 125M (3-shot, logit scoring) | Accuracy: 43.3
coreference-resolution-on-winograd-schema | H3 125M (0-shot, rank classification) | Accuracy: 61.5
coreference-resolution-on-winograd-schema | H3 125M (3-shot, rank classification) | Accuracy: 63.5
language-modelling-on-the-pile | Transformer 125M | Test perplexity: 10.7
language-modelling-on-the-pile | Hybrid H3 125M | Test perplexity: 10.2
language-modelling-on-wikitext-103 | Hybrid H3 (355M) | Number of params: 355M; Test perplexity: 16.9
language-modelling-on-wikitext-103 | Hybrid H3 125M | Test perplexity: 18.5
language-modelling-on-wikitext-103 | Hybrid H3 (1.3B) | Number of params: 1300M; Test perplexity: 12.5
language-modelling-on-wikitext-103 | Hybrid H3 (2.7B) | Number of params: 2700M; Test perplexity: 10.6
language-modelling-on-wikitext-103 | Hybrid H3 (125M) | Number of params: 125M; Test perplexity: 23.7
natural-language-inference-on-rte | H3 125M (0-shot, rank classification) | Accuracy: 53.1%
natural-language-inference-on-rte | Hybrid H3 125M (3-shot, rank classification) | Accuracy: 58.1%
natural-language-inference-on-rte | Hybrid H3 125M (3-shot, logit scoring) | Accuracy: 58.1%
natural-language-inference-on-rte | H3 125M (3-shot, rank classification) | Accuracy: 52.3%
natural-language-inference-on-rte | Hybrid H3 125M (0-shot, logit scoring) | Accuracy: 59.2%
question-answering-on-boolq | Hybrid H3 125M (0-shot, logit scoring) | Accuracy: 59.6
question-answering-on-boolq | Hybrid H3 2.7B (3-shot, logit scoring) | Accuracy: 60.6
question-answering-on-boolq | Hybrid H3 1.3B (0-shot, logit scoring) | Accuracy: 61.7
question-answering-on-boolq | Hybrid H3 125M (3-shot, logit scoring) | Accuracy: 56.1
question-answering-on-boolq | Hybrid H3 125M (3-shot, rank classification) | Accuracy: 56.1
question-answering-on-copa | Hybrid H3 125M (0-shot, rank classification) | Accuracy: 67
question-answering-on-copa | H3 125M (0-shot, rank classification) | Accuracy: 51
question-answering-on-copa | Hybrid H3 125M (0-shot, logit scoring) | Accuracy: 67
question-answering-on-copa | Hybrid H3 2.7B (3-shot, logit scoring) | Accuracy: 77
question-answering-on-copa | Hybrid H3 2.7B (0-shot, logit scoring) | Accuracy: 81
question-answering-on-multirc | Hybrid H3 125M (3-shot, logit scoring) | EM: 48.9
question-answering-on-multirc | Hybrid H3 355M (0-shot, logit scoring) | EM: 59.5
question-answering-on-multirc | Hybrid H3 355M (3-shot, logit scoring) | EM: 59.7
question-answering-on-multirc | Hybrid H3 125M (0-shot, logit scoring) | EM: 51.4
word-sense-disambiguation-on-words-in-context | Hybrid H3 125M (0-shot, rank classification) | Accuracy: 51.4
word-sense-disambiguation-on-words-in-context | Hybrid H3 125M (3-shot, logit scoring) | Accuracy: 49.1
word-sense-disambiguation-on-words-in-context | Hybrid H3 125M (0-shot, logit scoring) | Accuracy: 51.4
