Hear Me Out: Fusional Approaches for Audio Augmented Temporal Action Localization
Anurag Bagchi Jazib Mahmood Dolton Fernandes Ravi Kiran Sarvadevabhatla

Abstract
State-of-the-art architectures for Temporal Action Localization (TAL) in untrimmed videos have considered only the RGB and Flow modalities, leaving the information-rich audio modality entirely unexploited. Audio fusion has been explored for the related but arguably easier problem of trimmed (clip-level) action recognition; TAL, however, poses a unique set of challenges. In this paper, we propose simple but effective fusion-based approaches for TAL. To the best of our knowledge, our work is the first to jointly consider the audio and video modalities for supervised TAL. We show experimentally that our schemes consistently improve the performance of state-of-the-art video-only TAL approaches. Specifically, they help achieve new state-of-the-art results on the large-scale benchmark datasets ActivityNet-1.3 (54.34 mAP@0.5) and THUMOS14 (57.18 mAP@0.5). Our experiments include ablations over multiple fusion schemes, modality combinations and TAL architectures. Our code, models and associated data are available at https://github.com/skelemoa/tal-hmo.
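The abstract does not specify how the fusion is performed; as a rough illustration only, the sketch below shows one common baseline, feature-level (early) fusion, in which temporally aligned per-snippet video and audio embeddings are concatenated before being passed to a localization head. The function name, dimensions, and design are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def fuse_features(video_feats, audio_feats):
    """Concatenate temporally aligned per-snippet features.

    video_feats: (T, Dv) array of per-snippet video embeddings
    audio_feats: (T, Da) array of per-snippet audio embeddings
    returns:     (T, Dv + Da) fused features fed to a TAL head

    All names and shapes here are hypothetical, chosen for illustration.
    """
    assert video_feats.shape[0] == audio_feats.shape[0], "snippet counts must align"
    return np.concatenate([video_feats, audio_feats], axis=1)

# Example: 100 snippets, 1024-dim video features, 128-dim audio features
T, Dv, Da = 100, 1024, 128
fused = fuse_features(np.random.randn(T, Dv), np.random.randn(T, Da))
print(fused.shape)  # (100, 1152)
```

Late-fusion variants instead combine per-modality predictions (e.g., averaging proposal scores) rather than features; which scheme works best is exactly the kind of question the paper's ablations address.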
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| temporal-action-localization-on-activitynet | AVFusion | mAP: 36.82; mAP IoU@0.5: 54.34; mAP IoU@0.75: 37.66; mAP IoU@0.95: 8.93 |
| temporal-action-localization-on-thumos-14 | AVFusion | mAP IoU@0.5: 57.18 |
| temporal-action-localization-on-thumos14 | AVFusion | Avg mAP (0.3:0.7): 53.3; mAP IoU@0.3: 70.1; mAP IoU@0.4: 64.9; mAP IoU@0.5: 57.1; mAP IoU@0.6: 45.4; mAP IoU@0.7: 28.8 |