Mao Li, Bo Yang, Joshua Levy, Andreas Stolcke, Viktor Rozgic, Spyros Matsoukas, Constantinos Papayiannis, Daniel Bone, Chao Wang

Abstract
Speech emotion recognition (SER) is a key technology to enable more natural human-machine communication. However, SER has long suffered from a lack of public large-scale labeled datasets. To circumvent this problem, we investigate how unsupervised representation learning on unlabeled datasets can benefit SER. We show that the contrastive predictive coding (CPC) method can learn salient representations from unlabeled datasets, which improves emotion recognition performance. In our experiments, this method achieved state-of-the-art concordance correlation coefficient (CCC) performance for all emotion primitives (activation, valence, and dominance) on IEMOCAP. Additionally, on the MSP-Podcast dataset, our method obtained considerable performance improvements compared to baselines.
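To make the CPC idea concrete, the sketch below shows a toy CPC-style training step on unlabeled frame-level speech features: an encoder produces local latents, an autoregressive network summarizes context, and per-step predictors are trained with an InfoNCE loss to distinguish the true future latent from other candidates. This is a minimal illustration assuming PyTorch; the module names, layer sizes, and prediction horizon are hypothetical, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCPC(nn.Module):
    """Toy CPC model: encoder + GRU context network + per-step predictors."""

    def __init__(self, feat_dim=40, z_dim=64, c_dim=64, horizon=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, z_dim), nn.ReLU())
        self.ar = nn.GRU(z_dim, c_dim, batch_first=True)
        # One linear prediction head per future step k = 1..horizon.
        self.predictors = nn.ModuleList(
            [nn.Linear(c_dim, z_dim) for _ in range(horizon)]
        )
        self.horizon = horizon

    def forward(self, x):
        # x: (batch, time, feat_dim) frame-level features (e.g. filterbanks).
        z = self.encoder(x)        # (B, T, z_dim) local latent representations
        c, _ = self.ar(z)          # (B, T, c_dim) context representations
        B, T, _ = z.shape
        loss = 0.0
        for k, head in enumerate(self.predictors, start=1):
            pred = head(c[:, : T - k])     # predicted latents for step t+k
            target = z[:, k:]              # actual future latents
            # InfoNCE: score each prediction against all candidates pooled
            # over batch and time; the matching pair is the positive.
            pred = pred.reshape(-1, pred.size(-1))
            target = target.reshape(-1, target.size(-1))
            logits = pred @ target.t()     # (N, N) similarity matrix
            labels = torch.arange(logits.size(0))
            loss = loss + F.cross_entropy(logits, labels)
        return loss / self.horizon, c

# Toy usage: only unlabeled speech features are needed; the learned context
# representations c can later feed a downstream emotion regressor.
x = torch.randn(8, 100, 40)
model = TinyCPC()
loss, context = model(x)
loss.backward()
```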
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| speech-emotion-recognition-on-msp-podcast | preCPC | CCC: 0.377 |
| speech-emotion-recognition-on-msp-podcast-1 | preCPC | CCC: 0.706 |
| speech-emotion-recognition-on-msp-podcast-2 | preCPC | CCC: 0.639 |
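The metric reported in the table above is the concordance correlation coefficient. As a brief illustration, the following minimal NumPy sketch computes CCC between predicted and reference emotion-primitive scores; the function name and toy arrays are illustrative only.

```python
import numpy as np

def ccc(pred: np.ndarray, ref: np.ndarray) -> float:
    """CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    pred_mean, ref_mean = pred.mean(), ref.mean()
    covariance = np.mean((pred - pred_mean) * (ref - ref_mean))
    return 2 * covariance / (pred.var() + ref.var() + (pred_mean - ref_mean) ** 2)

# Toy check: identical sequences give CCC = 1; a constant offset lowers it.
ref = np.array([0.1, 0.5, 0.9, 0.3])
print(ccc(ref, ref))          # 1.0
print(ccc(ref + 0.2, ref))    # < 1.0 because of the mean shift
```

Unlike plain Pearson correlation, CCC also penalizes bias and scale mismatch, which is why it is the standard metric for continuous emotion primitives such as activation, valence, and dominance.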