ASR is all you need: cross-modal distillation for lip reading

Triantafyllos Afouras Joon Son Chung Andrew Zisserman

Abstract

The goal of this work is to train strong models for visual speech recognition without requiring human-annotated ground truth data. We achieve this by distilling from an Automatic Speech Recognition (ASR) model that has been trained on a large-scale audio-only corpus. We use a cross-modal distillation method that combines Connectionist Temporal Classification (CTC) with a frame-wise cross-entropy loss. Our contributions are fourfold: (i) we show that ground truth transcriptions are not necessary to train a lip reading system; (ii) we show how arbitrary amounts of unlabelled video data can be leveraged to improve performance; (iii) we demonstrate that distillation significantly speeds up training; and (iv) we obtain state-of-the-art results on the challenging LRS2 and LRS3 datasets when training only on publicly available data.
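
The training objective described in the abstract combines a CTC term, computed on transcripts produced by the ASR teacher (so no human labels are needed), with a frame-wise distillation term against the teacher's per-frame posteriors. The sketch below is an illustrative PyTorch version of that idea; the function name, tensor shapes and the weighting factor `alpha` are assumptions for clarity, not the authors' exact training code.

```python
# Minimal sketch of a combined CTC + frame-wise distillation loss.
# Shapes, names and the weighting factor `alpha` are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_log_probs,  # (T, B, C) log-softmax outputs of the lip-reading student
                      teacher_probs,      # (T, B, C) per-frame posteriors from the frozen ASR teacher
                      transcripts,        # (B, S) token ids decoded by the ASR model (pseudo-labels)
                      input_lengths,      # (B,) number of valid video frames per sample
                      target_lengths,     # (B,) length of each pseudo-transcript
                      alpha=0.5):         # assumed weight between the two terms
    # CTC loss against the ASR-decoded transcription (no ground truth required).
    ctc = F.ctc_loss(student_log_probs, transcripts, input_lengths, target_lengths,
                     blank=0, zero_infinity=True)

    # Frame-wise cross-entropy between teacher and student posteriors,
    # written here as a KL divergence over the class dimension.
    kd = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

    return alpha * ctc + (1.0 - alpha) * kd
```

Because the pseudo-transcripts and posteriors both come from the audio-only ASR teacher, any amount of unlabelled talking-head video can be fed through this objective, which is what enables contributions (i) and (ii).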

Benchmarks

Benchmark                Methodology     Metrics
lipreading-on-lrs2       CTC + KD ASR    Word Error Rate (WER): 53.2
lipreading-on-lrs3-ted   CTC + KD        Word Error Rate (WER): 59.8
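
Word Error Rate is the word-level edit distance (substitutions + deletions + insertions) between hypothesis and reference, normalised by the reference length. The snippet below is a generic illustration of that computation, not the evaluation code used for these benchmarks.

```python
# Generic WER computation via a standard edit-distance dynamic programme.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

Under this definition, the 53.2 WER reported on LRS2 corresponds to roughly one word error for every two reference words.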
