Do We Still Need Automatic Speech Recognition for Spoken Language Understanding?

Lasse Borgholt, Jakob Drachmann Havtorn, Mostafa Abdou, Joakim Edin, Lars Maaløe, Anders Søgaard, Christian Igel

Abstract

Spoken language understanding (SLU) tasks are usually solved by first transcribing an utterance with automatic speech recognition (ASR) and then feeding the output to a text-based model. Recent advances in self-supervised representation learning for speech data have focused on improving the ASR component. We investigate whether representation learning for speech has matured enough to replace ASR in SLU. We compare learned speech features from wav2vec 2.0, state-of-the-art ASR transcripts, and the ground truth text as input for a novel speech-based named entity recognition task, a cardiac arrest detection task on real-world emergency calls, and two existing SLU benchmarks. We show that learned speech features are superior to ASR transcripts on three classification tasks. For machine translation, ASR transcripts are still the better choice. We highlight the intrinsic robustness of wav2vec 2.0 representations to out-of-vocabulary words as key to better performance.
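
To make the "learned speech features" setup concrete, the sketch below shows one common way to extract wav2vec 2.0 representations and pool them for a downstream classifier, using the Hugging Face transformers API. This is a minimal illustration, not the authors' exact pipeline: the checkpoint name, mean pooling over time, and the linear task head are all assumptions.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Assumed checkpoint; the paper's exact wav2vec 2.0 variant may differ.
CKPT = "facebook/wav2vec2-base"

extractor = Wav2Vec2FeatureExtractor.from_pretrained(CKPT)
encoder = Wav2Vec2Model.from_pretrained(CKPT)
encoder.eval()

def utterance_features(waveform, sampling_rate=16_000):
    """Encode a 1-D float waveform into one utterance-level vector.

    Mean pooling over frames is one simple choice; the paper's
    pooling strategy may differ.
    """
    inputs = extractor(waveform, sampling_rate=sampling_rate,
                       return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, frames, 768)
    return hidden.mean(dim=1)  # (1, 768)

# Hypothetical downstream head, e.g. binary cardiac arrest detection;
# it stands in for whatever task-specific classifier is trained on top.
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)

# Example with a dummy 1-second utterance at 16 kHz.
waveform = torch.zeros(16_000).numpy()
logits = classifier(utterance_features(waveform))
```

The point of this design is that the classifier consumes the encoder's hidden states directly, so no ASR transcription step (and no closed vocabulary) sits between the audio and the downstream task.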

Benchmarks

Benchmark                                | Methodology      | Metrics
spoken-language-understanding-on-fluent | Wav2vec 2.0 SSL  | Accuracy (%): 99.6
