Addressing Some Limitations of Transformers with Feedback Memory

Angela Fan, Thibaut Lavril, Edouard Grave, Armand Joulin, Sainbayar Sukhbaatar

Abstract

Transformers have been successfully applied to sequential, auto-regressive tasks despite being feedforward networks. Unlike recurrent neural networks, Transformers use attention to capture temporal relations while processing input tokens in parallel. While this parallelization makes them computationally efficient, it restricts the model from fully exploiting the sequential nature of the input. The representation at a given layer can only access representations from lower layers, rather than the higher level representations already available. In this work, we propose the Feedback Transformer architecture that exposes all previous representations to all future representations, meaning the lowest representation of the current timestep is formed from the highest-level abstract representation of the past. We demonstrate on a variety of benchmarks in language modeling, machine translation, and reinforcement learning that the increased representation capacity can create small, shallow models with much stronger performance than comparable Transformers.
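To make the feedback idea concrete: every layer at the current timestep attends over a single shared memory built from all layers of past timesteps, so even the lowest layer can read high-level past representations. The following is a minimal, assumption-based sketch of that mechanism in PyTorch, not the authors' released implementation; the class name FeedbackMemorySketch, the learned layer-mixing weights, and all hyperparameters are illustrative choices.

```python
# Minimal sketch of a feedback-memory Transformer layer stack (assumed design,
# not the paper's reference code). Tokens are processed strictly left to right.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeedbackMemorySketch(nn.Module):
    """Each step, every layer attends over one shared memory formed from all
    layer outputs of past steps, so the lowest layer of the current timestep
    can access high-level representations of the past."""

    def __init__(self, vocab_size, d_model=64, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(d_model, n_heads, batch_first=True) for _ in range(n_layers)]
        )
        self.ff = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model))
             for _ in range(n_layers)]
        )
        # Learned softmax weights that mix all layer outputs into one memory vector per step.
        self.mix = nn.Parameter(torch.zeros(n_layers + 1))
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids.
        batch, seq_len = tokens.shape
        memory = []   # one feedback vector per past timestep: (batch, 1, d_model)
        logits = []
        for t in range(seq_len):
            h = self.embed(tokens[:, t:t + 1])           # (batch, 1, d_model)
            states = [h]
            for attn, ff in zip(self.attn, self.ff):
                # Attend over the shared feedback memory plus the current state.
                kv = torch.cat(memory + [h], dim=1) if memory else h
                a, _ = attn(h, kv, kv)
                h = h + a
                h = h + ff(h)
                states.append(h)
            # Collapse this step's layer outputs into a single memory vector.
            w = F.softmax(self.mix, dim=0)
            m = sum(wi * si for wi, si in zip(w, states))
            memory.append(m.detach())                    # detached only to keep the sketch simple
            logits.append(self.out(h))
        return torch.cat(logits, dim=1)                  # (batch, seq_len, vocab_size)


if __name__ == "__main__":
    model = FeedbackMemorySketch(vocab_size=100)
    x = torch.randint(0, 100, (2, 8))
    print(model(x).shape)  # torch.Size([2, 8, 100])
```

Because each step attends over a memory that already contains the top-layer information of earlier steps, the step-by-step loop above trades the Transformer's usual parallelism over time for the richer recurrence described in the abstract.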

Benchmarks

Benchmark | Method | Metrics
Language modelling on enwik8 | Feedback Transformer | Bit per Character (BPC): 0.96; Params: 77M
Language modelling on Penn Treebank (character-level) | Feedback Transformer | Bit per Character (BPC): 1.160; Params: 10.7M
Language modelling on WikiText-103 | Feedback Transformer (8 layers) | Test perplexity: 18.2; Validation perplexity: 17.5; Params: 139M
Language modelling on WikiText-103 | Feedback Transformer (4 layers) | Test perplexity: 22.4; Validation perplexity: 21.4; Params: 44M
