Factorization tricks for LSTM networks
Oleksii Kuchaiev; Boris Ginsburg

Abstract
We present two simple ways of reducing the number of parameters and accelerating the training of large Long Short-Term Memory (LSTM) networks: the first is "matrix factorization by design" of the LSTM matrix into the product of two smaller matrices, and the second is partitioning of the LSTM matrix, its inputs, and its states into independent groups. Both approaches allow us to train large LSTM networks significantly faster to near state-of-the-art perplexity while using significantly fewer RNN parameters.
Code Repositories
- rdspring1/PyTorch_GBW_LM (PyTorch)
- okuchaiev/f-lm (official, TensorFlow)
Benchmarks
| Benchmark | Model | Metric |
|---|---|---|
| language-modelling-on-one-billion-word | BIG G-LSTM-2 | Perplexity (PPL): 36.0 |