Factorization tricks for LSTM networks

Oleksii Kuchaiev; Boris Ginsburg

Abstract

We present two simple ways of reducing the number of parameters and accelerating the training of large Long Short-Term Memory (LSTM) networks: the first is "matrix factorization by design" of the LSTM matrix into the product of two smaller matrices, and the second is partitioning of the LSTM matrix, its inputs, and its states into independent groups. Both approaches allow us to train large LSTM networks significantly faster to near state-of-the-art perplexity while using significantly fewer RNN parameters.
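
The two tricks in the abstract lend themselves to a short illustration. Below is a minimal PyTorch sketch (PyTorch is used by one of the repositories listed further down; the official code is TensorFlow). The class and argument names (FLSTMCell, GLSTMCell, rank, groups) are our own labels for this sketch, not identifiers from the paper or its code. The first cell factorizes the single LSTM matrix W of shape (4n, 2p) "by design" into the product W2 @ W1 of shapes (4n, r) and (r, 2p), with r < p:

```python
import torch
import torch.nn as nn

class FLSTMCell(nn.Module):
    """Sketch of an F-LSTM cell: the affine map W @ [x; h] of a
    standard LSTM is replaced by the product W2 @ (W1 @ [x; h]),
    cutting the weight count from 4n*2p to roughly r*(2p + 4n)."""

    def __init__(self, input_size: int, hidden_size: int, rank: int):
        super().__init__()
        # W1 projects the concatenated [x; h] down to the rank-r space.
        self.w1 = nn.Linear(input_size + hidden_size, rank, bias=False)
        # W2 expands back to the four gate pre-activations (i, f, g, o).
        self.w2 = nn.Linear(rank, 4 * hidden_size, bias=True)

    def forward(self, x, state):
        h, c = state
        z = self.w2(self.w1(torch.cat([x, h], dim=-1)))
        i, f, g, o = z.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)
```

The second trick partitions the LSTM matrix, together with its inputs and states, into k independent groups, each handled by its own small LSTM over its own slice; the weight count drops by roughly a factor of k:

```python
class GLSTMCell(nn.Module):
    """Sketch of a G-LSTM cell: inputs, states, and the LSTM matrix
    are split into `groups` independent parts, each run through its
    own small LSTM, and the group outputs are concatenated."""

    def __init__(self, input_size: int, hidden_size: int, groups: int):
        super().__init__()
        assert input_size % groups == 0 and hidden_size % groups == 0
        self.groups = groups
        self.cells = nn.ModuleList(
            nn.LSTMCell(input_size // groups, hidden_size // groups)
            for _ in range(groups)
        )

    def forward(self, x, state):
        h, c = state
        xs = x.chunk(self.groups, dim=-1)
        hs = h.chunk(self.groups, dim=-1)
        cs = c.chunk(self.groups, dim=-1)
        # Each group sees only its own slice of the input and state.
        outs = [cell(xi, (hi, ci))
                for cell, xi, hi, ci in zip(self.cells, xs, hs, cs)]
        h = torch.cat([o[0] for o in outs], dim=-1)
        c = torch.cat([o[1] for o in outs], dim=-1)
        return h, (h, c)
```

Either cell is a drop-in replacement for an nn.LSTMCell in a step-by-step loop, e.g. h, (h, c) = cell(x_t, (h, c)) at each time step, starting from zero-initialized h and c.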

Code Repositories

rdspring1/PyTorch_GBW_LM (PyTorch, mentioned on GitHub)
okuchaiev/f-lm (TensorFlow, official implementation)

Benchmarks

Benchmark                               Methodology   Metrics
language-modelling-on-one-billion-word  BIG G-LSTM-2  PPL: 36.0
