SpeechBERT: An Audio-and-text Jointly Learned Language Model for End-to-end Spoken Question Answering

Yung-Sung Chuang, Chi-Liang Liu, Hung-Yi Lee, Lin-shan Lee

Abstract

While various end-to-end models for spoken language understanding tasks have been explored recently, this paper is probably the first known attempt to challenge the very difficult task of end-to-end spoken question answering (SQA). Learning from the highly successful BERT model for various text processing tasks, we propose an audio-and-text jointly learned SpeechBERT model. This model outperforms the conventional approach of cascading ASR with a text question answering (TQA) model on datasets whose answer spans contain ASR errors, because the end-to-end model can extract information from the audio before ASR introduces errors. Ensembling the proposed end-to-end model with the cascade architecture achieves even better performance. Beyond its potential for end-to-end SQA, SpeechBERT can also be considered for many other spoken language understanding tasks, just as BERT is for many text processing tasks.
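
The contrast between the two architectures compared in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' released code; all function names (`asr_transcribe`, `text_qa`, `speechbert_qa`) are hypothetical stand-ins for the real components.

```python
from typing import Tuple

# Hypothetical stubs standing in for the real components; the actual
# SpeechBERT implementation is not reproduced here.

def asr_transcribe(audio: bytes) -> str:
    """Stand-in ASR module; real systems introduce recognition errors here."""
    return "transcript with possible asr errors"

def text_qa(question: str, transcript: str) -> Tuple[int, int]:
    """Stand-in text QA model (e.g., BERT fine-tuned on SQuAD).
    Returns a (start, end) answer span over the transcript."""
    return (0, 3)

def speechbert_qa(question: str, audio: bytes) -> Tuple[int, int]:
    """Stand-in end-to-end SQA model; it reads the audio directly,
    so no ASR errors are introduced before answer extraction."""
    return (0, 3)

def cascade_sqa(question: str, audio: bytes) -> Tuple[int, int]:
    # Cascade: transcribe first, then run text QA on the
    # (possibly erroneous) transcript.
    transcript = asr_transcribe(audio)
    return text_qa(question, transcript)

def end_to_end_sqa(question: str, audio: bytes) -> Tuple[int, int]:
    # End-to-end: the audio-and-text jointly learned model answers
    # from the audio without an intermediate transcript.
    return speechbert_qa(question, audio)
```

Per the abstract, the reported ensemble simply combines the predictions of these two pipelines, which yields better performance than either alone.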

Benchmarks

| Benchmark | Methodology | Metrics |
|---|---|---|
| spoken-language-understanding-on-spoken-squad | SpeechBERT | F1 score: 71.75 |
