LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale

Tim Dettmers; Mike Lewis; Younes Belkada; Luke Zettlemoyer

Abstract

Large language models have been widely adopted but require significant GPU memory for inference. We develop a procedure for Int8 matrix multiplication for the feed-forward and attention projection layers in transformers, which cuts the memory needed for inference by half while retaining full precision performance. With our method, a 175B parameter 16/32-bit checkpoint can be loaded, converted to Int8, and used immediately without performance degradation. This is made possible by understanding and working around properties of highly systematic emergent features in transformer language models that dominate attention and transformer predictive performance. To cope with these features, we develop a two-part quantization procedure, LLM.int8(). We first use vector-wise quantization with separate normalization constants for each inner product in the matrix multiplication to quantize most of the features. For the emergent outliers, however, we also include a new mixed-precision decomposition scheme, which isolates the outlier feature dimensions into a 16-bit matrix multiplication while more than 99.9% of values are still multiplied in 8-bit. Using LLM.int8(), we show empirically that it is possible to perform inference in LLMs with up to 175B parameters without any performance degradation. This result makes such models much more accessible, for example making it possible to use OPT-175B/BLOOM on a single server with consumer GPUs. We open-source our software.
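
As a rough illustration of the two-part procedure described in the abstract, the sketch below implements vector-wise Int8 quantization plus the mixed-precision outlier decomposition in plain PyTorch. This is a minimal sketch of the idea only, not the bitsandbytes kernels: the function name llm_int8_matmul, the tensor shapes, and the outlier threshold of 6.0 are assumptions chosen for the example, and a dense int32 matmul stands in for the actual Int8 tensor-core kernel.

```python
import torch

def llm_int8_matmul(X: torch.Tensor, W: torch.Tensor, threshold: float = 6.0) -> torch.Tensor:
    """Approximate X @ W.T with mostly-Int8 arithmetic (illustrative sketch).

    X: (tokens, hidden) activations in fp16/fp32; W: (out_features, hidden) weights.
    """
    # 1. Mixed-precision decomposition: hidden dimensions where any activation
    #    exceeds the threshold are treated as outlier feature dimensions and kept
    #    in 16/32-bit; all remaining dimensions go through the Int8 path.
    outlier_cols = (X.abs() > threshold).any(dim=0)
    regular_cols = ~outlier_cols

    X_reg, W_reg = X[:, regular_cols], W[:, regular_cols]
    X_out, W_out = X[:, outlier_cols], W[:, outlier_cols]

    # 2. Vector-wise quantization: one absmax scale per row of X and per row of W
    #    (i.e., separate normalization constants for each inner product).
    cx = X_reg.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0  # (tokens, 1)
    cw = W_reg.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0  # (out, 1)
    X_i8 = torch.round(X_reg / cx).to(torch.int8)
    W_i8 = torch.round(W_reg / cw).to(torch.int8)

    # Integer matmul accumulated in int32, then dequantized with the outer
    # product of the two scale vectors.
    acc = X_i8.to(torch.int32) @ W_i8.to(torch.int32).T
    out_int8_path = acc.to(X.dtype) * (cx @ cw.T)

    # 3. The few outlier dimensions stay in higher precision.
    out_fp16_path = X_out @ W_out.T

    return out_int8_path + out_fp16_path

# Example usage: the result should be close to the full-precision product X @ W.T.
X = torch.randn(4, 512)
W = torch.randn(1024, 512)
Y = llm_int8_matmul(X, W)
```

For typical inputs only a handful of hidden dimensions cross the threshold, so almost all multiply-accumulate work stays in the Int8 path, consistent with the more-than-99.9% figure quoted in the abstract.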

Code Repositories

timdettmers/bitsandbytes (official, PyTorch; mentioned on GitHub); see the usage sketch after this list
kohjingyu/fromage (PyTorch; mentioned on GitHub)
alextmallen/adaptive-retrieval (PyTorch; mentioned on GitHub)
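
As a usage sketch for the official repository above, assuming the Hugging Face transformers integration of bitsandbytes (the model id and prompt are placeholders, and device_map="auto" additionally requires the accelerate package), a 16-bit checkpoint can be loaded and converted to Int8 at load time:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "facebook/opt-13b"  # placeholder; any supported 16-bit causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# load_in_8bit=True loads the fp16 weights and replaces the Linear layers with
# 8-bit LLM.int8() layers from bitsandbytes during loading.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```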

Benchmarks

Benchmark | Methodology | Metric
language-modelling-on-c4 | Zeropoint LLM.int8 13B (vector-wise + decomp) | Perplexity: 12.45
language-modelling-on-c4 | LLM.float32 2.7B | Perplexity: 14.43
language-modelling-on-c4 | LLM.float32 1.3B | Perplexity: 15.91
language-modelling-on-c4 | LLM.float32 6.7B | Perplexity: 13.3
linguistic-acceptability-on-cola | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Accuracy: 68.6%
natural-language-inference-on-multinli | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Matched accuracy: 90.2
natural-language-inference-on-qnli | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Accuracy: 94.7%
natural-language-inference-on-rte | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Accuracy: 85.4%
semantic-textual-similarity-on-mrpc | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Accuracy: 91.0%
semantic-textual-similarity-on-sts-benchmark | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Pearson correlation: 0.919
sentiment-analysis-on-sst-2-binary | RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | Accuracy: 96.4
