
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean

Abstract

The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
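
The layer described in the abstract computes a weighted sum of expert outputs, y = Σ_i G(x)_i E_i(x), where the gating output G(x) is sparse, so only the selected experts need to be evaluated for a given example. Below is a minimal, illustrative PyTorch sketch of such a sparsely-gated MoE layer with plain top-k gating. The class and parameter names (SparseMoE, d_model, d_hidden, num_experts, k) are our own, and the paper's noisy top-k gating and load-balancing auxiliary losses are omitted for brevity; this is a sketch of the idea, not the authors' implementation.

```python
# Minimal sketch of a sparsely-gated Mixture-of-Experts layer with top-k gating.
# Illustrative simplification (not the paper's exact method): noisy gating and
# the load-balancing losses are omitted, and routing uses simple masking loops
# for clarity rather than the batched expert dispatch used in practice.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        self.num_experts = num_experts
        # Each expert is a small feed-forward sub-network.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        ])
        # Trainable gating network producing one logit per expert.
        self.gate = nn.Linear(d_model, num_experts, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model). Keep only the top-k gate values per example;
        # all other gate values are zero, so those experts are never evaluated.
        logits = self.gate(x)                              # (batch, num_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)  # (batch, k)
        weights = F.softmax(topk_vals, dim=-1)             # renormalise over the k chosen experts

        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = topk_idx[:, slot]                        # expert chosen in this slot, per example
            w = weights[:, slot].unsqueeze(-1)             # its gate weight
            for e in range(self.num_experts):
                mask = idx == e
                if mask.any():
                    # Run expert e only on the examples routed to it.
                    out[mask] += w[mask] * self.experts[e](x[mask])
        return out


# Example: route a batch of 8 vectors through 16 experts, 2 active per example.
moe = SparseMoE(d_model=32, d_hidden=64, num_experts=16, k=2)
y = moe(torch.randn(8, 32))
print(y.shape)  # torch.Size([8, 32])
```

In the paper's setup, a layer of this kind is applied convolutionally between stacked LSTM layers and the experts are spread across devices, which is how the total parameter count can grow into the billions while the computation per example stays roughly constant.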

Code Repositories

unconst/MACH (TensorFlow, mentioned in GitHub)
jsuarez5341/Efficient-Dynamic-Batching (PyTorch, mentioned in GitHub)
davidmrau/mixture-of-experts (PyTorch, mentioned in GitHub)
ma921/XRDidentifier (PyTorch, mentioned in GitHub)

Benchmarks

Benchmark: language-modelling-on-one-billion-word
Methodology: Low-Budget MoE
Metrics: Number of params: 5B; PPL: 34.1

Benchmark: language-modelling-on-one-billion-word
Methodology: High-Budget MoE
Metrics: Number of params: 5B; PPL: 28.0

Benchmark: machine-translation-on-wmt2014-english-french
Methodology: MoE
Metrics: BLEU score: 40.56; Hardware Burden: 142G; Operations per network pass:

Benchmark: machine-translation-on-wmt2014-english-german
Methodology: MoE
Metrics: BLEU score: 26.03; Hardware Burden: 24G; Operations per network pass:
