SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks

Kai-Wei Chang, Yu-Kai Wang, Hua Shen, Iu-thing Kang, Wei-Cheng Tseng, Shang-Wen Li, Hung-yi Lee

Abstract

Prompt tuning is a technique that tunes a small set of parameters to steer a pre-trained language model (LM) to directly generate the output for downstream tasks. Recently, prompt tuning has demonstrated its storage and computational efficiency in both natural language processing (NLP) and speech processing. These advantages also make prompt tuning a candidate approach for serving a pre-trained LM across multiple tasks in a unified manner. For speech processing, SpeechPrompt has shown high parameter efficiency and competitive performance on a few speech classification tasks. However, whether SpeechPrompt can serve a large number of tasks remains an open question. In this work, we propose SpeechPrompt v2, a prompt tuning framework capable of performing a wide variety of speech classification tasks, covering multiple languages and prosody-related tasks. Experimental results show that SpeechPrompt v2 achieves performance on par with prior works with fewer than 0.15M trainable parameters in a unified framework.
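To make the core idea concrete, below is a minimal sketch of prompt tuning: trainable prompt vectors are prepended to the input embeddings while the pre-trained LM stays frozen. This is an illustration under stated assumptions, not the authors' implementation; the class name `PromptTunedLM`, the prompt length, the embedding dimension, and the assumption that the backbone LM accepts input embeddings directly are all hypothetical.

```python
# Minimal prompt-tuning sketch (PyTorch). Only the prompt embeddings are
# trained; the pre-trained LM is frozen. Names and sizes are illustrative.
import torch
import torch.nn as nn

class PromptTunedLM(nn.Module):
    def __init__(self, lm: nn.Module, embed_dim: int, prompt_len: int = 20):
        super().__init__()
        self.lm = lm
        # Freeze every parameter of the pre-trained LM.
        for p in self.lm.parameters():
            p.requires_grad = False
        # Trainable prompt vectors, prepended to every input sequence.
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim), e.g. embedded speech units.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Assumes the backbone accepts a sequence of embeddings directly.
        return self.lm(torch.cat([prompt, input_embeds], dim=1))
```

With a prompt of 20 vectors and a 768-dimensional embedding space, only 20 × 768 = 15,360 parameters are updated per task, which illustrates how a framework like this can stay well under a 0.15M trainable-parameter budget.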

Benchmarks

Benchmark                                 | Methodology | Metrics
spoken-language-understanding-on-fluent  | pGSLM+      | Accuracy (%): 98.2
