Alexandros Stergiou Dima Damen

Abstract
A key function of auditory cognition is the association of characteristic sounds with their corresponding semantics over time. Humans attempting to discriminate between fine-grained audio categories often replay the same discriminative sounds to increase their prediction confidence. We propose an end-to-end attention-based architecture that, through selective repetition, attends to the most discriminative sounds across the audio sequence. Our model initially uses the full audio sequence and iteratively refines the replayed temporal segments based on slot attention. At each playback, the selected segments are replayed using a smaller hop length, which yields higher-resolution features within these segments. We show that our method consistently achieves state-of-the-art performance across three audio-classification benchmarks: AudioSet, VGG-Sound, and EPIC-KITCHENS-100.
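The abstract describes an iterative refinement loop: start from the full sequence, select the most discriminative segment, and replay it with a smaller hop length at each playback. The sketch below illustrates only that control flow under stated assumptions; the function name, the fixed hop-length schedule, and the `score_fn` scoring callback are all hypothetical stand-ins (the paper's actual selection module is slot attention, not a per-segment score).

```python
# Hypothetical sketch of the iterative "playback" loop described in the
# abstract. A per-segment score function stands in for slot attention,
# and the hop-length schedule is an illustrative assumption.

def playback_classify(audio, score_fn, num_playbacks=3, num_segments=4,
                      hop_lengths=(512, 256, 128)):
    """Iteratively re-examine the highest-scoring segment at a finer
    hop length (i.e. higher temporal resolution) on each playback."""
    start, end = 0, len(audio)
    selections = []
    for p in range(num_playbacks):
        hop = hop_lengths[p]  # finer hop at each successive playback
        seg_len = max(1, (end - start) // num_segments)
        segments = [(start + i * seg_len,
                     min(end, start + (i + 1) * seg_len))
                    for i in range(num_segments)]
        # Stand-in for slot attention: keep the highest-scoring segment.
        scores = [score_fn(audio[s:e]) for s, e in segments]
        best = max(range(num_segments), key=scores.__getitem__)
        start, end = segments[best]
        selections.append((hop, start, end))
    return selections

# Toy usage: a burst of energy late in the signal; energy as a naive score.
audio = [0.0] * 900 + [1.0] * 100 + [0.0] * 24
sel = playback_classify(audio, score_fn=lambda seg: sum(x * x for x in seg))
```

Each entry of `sel` records the hop length used and the segment bounds replayed at that playback; the ranges are nested, narrowing in on the energetic region while the hop length shrinks.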
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| audio-classification-on-audioset | PlayItBackX3 | Test mAP: 0.477 |
| audio-classification-on-epic-kitchens-100 | PlayItBackX3 | Top-1 Verb: 47.0, Top-1 Noun: 23.1, Top-1 Action: 15.9, Top-5 Verb: 78.7, Top-5 Noun: 45.1, Top-5 Action: 29.2 |
| audio-classification-on-vggsound | PlayItBackX3 | Top-1 Accuracy: 53.7, Top-5 Accuracy: 79.2, Mean AP: 56.1, AUC: 97.8, d-prime: 2.846 |