Masked Modeling Duo: Towards a Universal Audio Pre-training Framework

Daisuke Niizumi; Daiki Takeuchi; Yasunori Ohishi; Noboru Harada; Kunio Kashino

Abstract

Self-supervised learning (SSL) using masked prediction has made great strides in general-purpose audio representation. This study proposes Masked Modeling Duo (M2D), an improved masked-prediction SSL method that learns by predicting representations of masked input signals, which serve as training signals. Unlike conventional methods, M2D obtains the training signal by encoding only the masked part, encouraging the two networks in M2D to model the input. While M2D improves general-purpose audio representations, a specialized representation is essential for real-world applications, such as those in industrial and medical domains. The often confidential and proprietary data in such domains is typically limited in size and distributed differently from pre-training datasets. Therefore, we propose M2D for X (M2D-X), which extends M2D to enable pre-training of representations specialized for an application X. M2D-X learns from M2D and an additional task, and takes background noise as input. The additional task is configurable to serve diverse applications, while the background noise helps the model learn from small data and forms a denoising task that makes the representation robust. With these design choices, M2D-X should learn representations specialized to serve various application needs. Our experiments confirmed that representations for general-purpose audio, for the highly competitive AudioSet and speech domains, and for a small-data medical task all achieve top-level performance, demonstrating the potential of our models as a universal audio pre-training framework. Our code is available online for future studies at https://github.com/nttcslab/m2d.
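
As a rough illustration of the training scheme sketched in the abstract, the following PyTorch snippet shows how a loss could be computed when the online encoder sees only the visible patches while the target encoder encodes only the masked patches. It is a minimal sketch, not the authors' implementation (see the repository below for that): the toy linear encoder, the mask-token predictor, the normalized-MSE loss, and all names are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F
from torch import nn


class M2DSketch(nn.Module):
    """Minimal sketch of an M2D-style training step (illustrative assumptions only)."""

    def __init__(self, patch_dim=256, embed_dim=128, ema=0.99):
        super().__init__()
        self.online_encoder = nn.Linear(patch_dim, embed_dim)      # encodes visible patches only
        self.target_encoder = copy.deepcopy(self.online_encoder)   # encodes masked patches only
        for p in self.target_encoder.parameters():
            p.requires_grad_(False)                                # updated by EMA, not by gradients
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.predictor = nn.Linear(embed_dim, embed_dim)
        self.ema = ema

    def forward(self, visible, masked):
        # visible: (B, Nv, patch_dim) spectrogram patches kept after random masking
        # masked:  (B, Nm, patch_dim) patches hidden from the online encoder
        z_v = self.online_encoder(visible)
        # Predict the representations at masked positions from the visible ones
        # plus a learnable mask token.
        mask_tokens = self.mask_token.expand(masked.size(0), masked.size(1), -1)
        pred = self.predictor(torch.cat([z_v, mask_tokens], dim=1))[:, -masked.size(1):]
        # Training signal: the target encoder encodes ONLY the masked patches,
        # the key difference from conventional masked prediction noted in the abstract.
        with torch.no_grad():
            tgt = self.target_encoder(masked)
            tgt = F.layer_norm(tgt, tgt.shape[-1:])  # standardized targets (assumed)
        return F.mse_loss(pred, tgt)

    @torch.no_grad()
    def update_target(self):
        # Keep the target encoder as an exponential moving average of the online encoder.
        for pt, po in zip(self.target_encoder.parameters(), self.online_encoder.parameters()):
            pt.mul_(self.ema).add_(po, alpha=1.0 - self.ema)


# Example: 70% of the patches are masked, e.g. visible (B, 30, 256) and masked (B, 70, 256).
loss = M2DSketch()(torch.randn(4, 30, 256), torch.randn(4, 70, 256))
```

In practice, the two inputs would come from randomly partitioning the spectrogram patches at a masking ratio such as 0.6 or 0.7 (likely what the /0.6 and /0.7 suffixes in the benchmark entries below denote), and update_target() would be called after each optimizer step.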

Code Repositories

nttcslab/m2d (official, PyTorch)

Benchmarks

| Benchmark | Methodology | Metrics |
|---|---|---|
| audio-classification-on-audio-set | M2D-AS/0.7 | Mean AP: 48.5 |
| audio-classification-on-audioset | M2D-AS/0.7 | Test mAP: 0.485 |
| audio-classification-on-audioset | M2D/0.7 | Test mAP: 0.479 |
| audio-classification-on-esc-50 | M2D/0.7 | Accuracy (5-fold): 96.0; Top-1 Accuracy: 96.0 |
| audio-classification-on-esc-50 | M2D-AS/0.7 | Accuracy (5-fold): 97.2; Top-1 Accuracy: 97.2 (pre-training dataset: AudioSet) |
| audio-classification-on-icbhi-respiratory | M2D/0.7 (e=0.3) | ICBHI Score: 62.73 |
| speaker-identification-on-voxceleb1 | M2D/0.6 | Accuracy: 96.5; Top-1 (%): 96.5 |
| speaker-identification-on-voxceleb1 | MSM-MAE | Accuracy: 96.6; Top-1 (%): 96.6 |
| speaker-identification-on-voxceleb1 | M2D/0.7 | Accuracy: 96.3; Top-1 (%): 96.3 |
