Lightweight Adapter Tuning for Multilingual Speech Translation

Hang Le, Juan Pino, Changhan Wang, Jiatao Gu, Didier Schwab, Laurent Besacier

Abstract

Adapter modules were recently introduced as an efficient alternative to fine-tuning in NLP. Adapter tuning consists of freezing the pretrained parameters of a model and injecting lightweight modules between layers, adding only a small number of task-specific trainable parameters. While adapter tuning has been investigated for multilingual neural machine translation, this paper proposes a comprehensive analysis of adapters for multilingual speech translation (ST). Starting from different pre-trained models (a multilingual ST model trained on parallel data or a multilingual BART (mBART) trained on non-parallel multilingual data), we show that adapters can be used to: (a) efficiently specialize ST to specific language pairs at a low extra cost in parameters, and (b) transfer from an automatic speech recognition (ASR) task and an mBART pre-trained model to a multilingual ST task. Experiments show that adapter tuning offers results competitive with full fine-tuning, while being much more parameter-efficient.
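
The abstract does not spell out the adapter architecture, so the sketch below assumes the common bottleneck design (layer norm, down-projection, non-linearity, up-projection, residual connection) inserted between frozen Transformer layers, implemented in PyTorch. The class name `Adapter`, the dimensions, and the helper `freeze_backbone_and_collect_adapter_params` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: layer norm, down-project, ReLU, up-project, residual.

    A minimal sketch; d_model and bottleneck_dim are illustrative choices,
    not values taken from the paper.
    """

    def __init__(self, d_model: int = 512, bottleneck_dim: int = 64):
        super().__init__()
        self.layer_norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's representation
        # largely intact; only the small projection matrices are trained.
        return x + self.up(torch.relu(self.down(self.layer_norm(x))))


def freeze_backbone_and_collect_adapter_params(model: nn.Module):
    """Freeze all pretrained parameters and return only adapter parameters
    (identified here by name) for the optimizer."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name
    return [p for p in model.parameters() if p.requires_grad]
```

In this setup, only the adapter projections (and any per-language adapter copies) are updated, which is what makes language-pair specialization cheap in parameters compared with full fine-tuning.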

Benchmarks

Benchmark | Methodology | Metrics
speech-to-text-translation-on-must-c-1 | Transformer with Adapters | SacreBLEU: 26.61
speech-to-text-translation-on-must-c-en-de | Transformer with Adapters | Case-sensitive sacreBLEU: 24.63
speech-to-text-translation-on-must-c-en-es | Transformer with Adapters | Case-sensitive sacreBLEU: 28.73
