Temporal-Channel Modeling in Multi-head Self-Attention for Synthetic Speech Detection
Truong Duc-Tuan, Tao Ruijie, Nguyen Tuan, Luong Hieu-Thi, Lee Kong Aik, Chng Eng Siong

Abstract
Recent synthetic speech detectors leveraging the Transformer model achieve superior performance compared to their convolutional neural network counterparts. This improvement could be due to the powerful modeling ability of multi-head self-attention (MHSA) in the Transformer model, which learns the temporal relationship of each input token. However, artifacts of synthetic speech can be located in specific regions of both frequency channels and temporal segments, while MHSA neglects this temporal-channel dependency of the input sequence. In this work, we propose a Temporal-Channel Modeling (TCM) module to enhance MHSA's capability for capturing temporal-channel dependencies. Experimental results on ASVspoof 2021 show that with only 0.03M additional parameters, the TCM module outperforms the state-of-the-art system by 9.25% in EER. A further ablation study reveals that utilizing both temporal and channel information yields the largest improvement in detecting synthetic speech.
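The abstract does not describe the internals of the TCM module. Purely as an illustration of the general idea of pairing channel-wise attention with temporal MHSA, here is a minimal PyTorch sketch; the class name `TemporalChannelAttention`, the channel projection, and the gated fusion are assumptions for this sketch, not the paper's actual design.

```python
import torch
import torch.nn as nn

class TemporalChannelAttention(nn.Module):
    """Illustrative sketch only: self-attention applied along both the
    temporal axis and the channel axis of a (batch, time, channel)
    feature map, with the two views fused by a learned, zero-initialized
    gate. This is NOT the paper's TCM module, whose details are not
    given in the abstract."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Standard temporal MHSA: tokens are time steps, features are channels.
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Channel attention: treat each channel as a token whose "feature"
        # is its temporal profile, projected to a fixed width.
        self.channel_proj = nn.LazyLinear(dim)  # lazily maps T -> dim
        self.channel_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # fusion starts disabled

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channel)
        t_out, _ = self.temporal_attn(x, x, x)           # temporal dependencies
        xc = x.transpose(1, 2)                            # (batch, channel, time)
        c_tok = self.channel_proj(xc)                     # (batch, channel, dim)
        c_out, _ = self.channel_attn(c_tok, c_tok, c_tok) # channel dependencies
        # Collapse channel-attended tokens to a per-channel scale and use it
        # to modulate the temporal output.
        c_scale = torch.sigmoid(c_out.mean(dim=-1))       # (batch, channel)
        fused = t_out * c_scale.unsqueeze(1)              # broadcast over time
        return x + t_out + torch.tanh(self.gate) * fused  # residual fusion

# Usage: a 100-frame utterance embedding with 64 channels.
x = torch.randn(2, 100, 64)
tca = TemporalChannelAttention(dim=64)
print(tca(x).shape)  # torch.Size([2, 100, 64])
```

The zero-initialized gate is a common trick for adding a new branch to a pretrained attention block without disturbing its initial behavior; whether the paper uses anything like it is not stated in the abstract.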
Benchmarks
| Benchmark | Method | 21DF EER (%) | 21LA EER (%) |
|---|---|---|---|
| audio-deepfake-detection-on-asvspoof-2021 | TCM-Add | 2.14 | 2.99 |
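For reference, the EER reported above is the operating point at which the false-acceptance rate on spoofed trials equals the false-rejection rate on bona fide trials. The sketch below shows one standard way to approximate it from detector scores; it assumes higher scores mean "more likely bona fide" and is not the evaluation code used by the paper.

```python
import numpy as np

def compute_eer(bonafide_scores: np.ndarray, spoof_scores: np.ndarray) -> float:
    """Approximate the equal error rate: sweep thresholds over all observed
    scores and return the point where false-acceptance and false-rejection
    rates are closest."""
    thresholds = np.sort(np.concatenate([bonafide_scores, spoof_scores]))
    far = np.array([(spoof_scores >= t).mean() for t in thresholds])    # spoof accepted
    frr = np.array([(bonafide_scores < t).mean() for t in thresholds])  # bona fide rejected
    idx = np.argmin(np.abs(far - frr))
    return float((far[idx] + frr[idx]) / 2.0)

# Usage with synthetic, well-separated score distributions:
rng = np.random.default_rng(0)
bona = rng.normal(2.0, 1.0, 1000)
spoof = rng.normal(-2.0, 1.0, 1000)
print(f"EER: {compute_eer(bona, spoof):.4f}")
```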