Fastformer: Additive Attention Can Be All You Need

Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie


Abstract

The Transformer is a powerful model for text understanding. However, it is inefficient because its complexity is quadratic in the input sequence length. Although many methods have been proposed to accelerate the Transformer, they are still either inefficient on long sequences or not effective enough. In this paper, we propose Fastformer, an efficient Transformer model based on additive attention. Instead of modeling the pair-wise interactions between tokens, Fastformer first uses an additive attention mechanism to model global contexts, and then further transforms each token representation based on its interaction with the global context representations. In this way, Fastformer achieves effective context modeling with linear complexity. Extensive experiments on five datasets show that Fastformer is much more efficient than many existing Transformer models while achieving comparable or even better long-text modeling performance.
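
The mechanism described in the abstract can be summarized in a few matrix operations. Below is a minimal single-head sketch in numpy, based only on that description: per-token queries are pooled into a single global query by additive attention, the keys interact with it element-wise and are pooled into a global key, and the values interact with the global key before a final projection. Every step touches each token once, so the cost is linear in sequence length. The parameter names, the scaling factor, and the residual connection to the query are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal single-head sketch of a Fastformer-style additive-attention layer (numpy).
# Shapes, parameter names, and the residual connection are assumptions drawn from
# the abstract, not the authors' released code.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fastformer_layer(X, Wq, Wk, Wv, wq, wk, Wo):
    """X: (n, d) token embeddings; Wq/Wk/Wv/Wo: (d, d) projections; wq/wk: (d,) attention vectors."""
    d = X.shape[1]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # per-token query / key / value

    # 1) Additive attention pools all queries into one global query vector (O(n) cost).
    alpha = softmax(Q @ wq / np.sqrt(d))        # (n,) attention weights over tokens
    q_global = alpha @ Q                        # (d,) global query

    # 2) Each key interacts with the global query via element-wise product,
    #    then additive attention pools the results into a global key vector.
    P = q_global * K                            # (n, d)
    beta = softmax(P @ wk / np.sqrt(d))         # (n,)
    k_global = beta @ P                         # (d,) global key

    # 3) Each value interacts with the global key; a final projection plus a
    #    residual connection to the query gives the per-token outputs.
    U = k_global * V                            # (n, d)
    return U @ Wo + Q                           # (n, d)

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
n, d = 8, 16
X = rng.normal(size=(n, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
wq, wk = (rng.normal(size=(d,)) * 0.1 for _ in range(2))
print(fastformer_layer(X, Wq, Wk, Wv, wq, wk, Wo).shape)  # (8, 16)
```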

Benchmarks

Benchmark                                 Methodology   Metrics
text-summarization-on-cnn-daily-mail-2    Fastformer    ROUGE-1: 38.54 | ROUGE-2: 16.22 | ROUGE-L: 36.21
text-summarization-on-pubmed-1            Fastformer    ROUGE-1: 38.09 | ROUGE-2: 15.44 | ROUGE-L: 34.81
