Le Yang; Ziwei Zheng; Yizeng Han; Hao Cheng; Shiji Song; Gao Huang; Fan Li

Abstract
Recently proposed neural network-based Temporal Action Detection (TAD) models, which rely on shared-weight detection heads, are inherently limited in extracting discriminative representations and in modeling action instances of various lengths from complex scenes. Inspired by the success of dynamic neural networks, in this paper we build a novel Dynamic Feature Aggregation (DFA) module that can simultaneously adapt kernel weights and receptive fields at different timestamps. Based on DFA, the proposed dynamic encoder layer aggregates the temporal features within action time ranges and guarantees the discriminability of the extracted representations. Moreover, DFA enables a Dynamic TAD head (DyHead) that adaptively aggregates multi-scale features with adjusted parameters and learned receptive fields to better detect action instances of diverse durations. With the proposed encoder layer and DyHead, the new dynamic TAD model, DyFADet, achieves promising performance on a series of challenging TAD benchmarks, including HACS-Segment, THUMOS14, ActivityNet-1.3, EPIC-Kitchens 100, Ego4D-Moment Queries v1.0, and FineAction. Code is released at https://github.com/yangle15/DyFADet-pytorch.
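The abstract describes DFA only at a high level. As a rough intuition for how a single layer could adapt both its kernel response and its receptive field per timestamp, here is a minimal PyTorch sketch. It is an illustrative approximation, not the authors' implementation: every name in it (`DynamicFeatureAggregation`, `gate_proj`, the mixture over dilated branches) is hypothetical, and the real DFA module lives in the linked repository.

```python
# Illustrative sketch only; NOT the official DyFADet code.
# See https://github.com/yangle15/DyFADet-pytorch for the actual implementation.
import torch
import torch.nn as nn


class DynamicFeatureAggregation(nn.Module):
    """Per-timestamp feature modulation plus an adaptive receptive field,
    approximated here as a learned mixture over parallel dilated 1-D convs."""

    def __init__(self, channels: int, kernel_size: int = 3, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel depthwise branches with different dilations emulate
        # a range of receptive fields the layer can choose between.
        self.branches = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size,
                      padding=d * (kernel_size // 2), dilation=d, groups=channels)
            for d in dilations
        ])
        # Predicts, for every timestamp, (a) a channel-wise modulation of the
        # aggregated response and (b) mixing weights over the dilation branches.
        self.gate_proj = nn.Conv1d(channels, channels + len(dilations), 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        gates = self.gate_proj(x)
        chan_gate = torch.sigmoid(gates[:, : x.size(1)])           # (B, C, T)
        branch_gate = torch.softmax(gates[:, x.size(1):], dim=1)   # (B, n_branches, T)
        # Timestamp-dependent blend of receptive fields.
        out = sum(branch_gate[:, i : i + 1] * branch(x)
                  for i, branch in enumerate(self.branches))
        # Timestamp-dependent channel modulation of the aggregated features.
        return chan_gate * out


if __name__ == "__main__":
    feats = torch.randn(2, 256, 128)  # (batch, channels, timestamps)
    print(DynamicFeatureAggregation(256)(feats).shape)  # torch.Size([2, 256, 128])
```

Because both gates are recomputed at every timestamp, the effective kernel weighting and receptive field vary along the temporal axis, which is the behavior the abstract attributes to DFA.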
Code Repositories
https://github.com/yangle15/DyFADet-pytorch (official)
Benchmarks
| Benchmark | Model | Avg. mAP | mAP IoU@0.5 | mAP IoU@0.75 | mAP IoU@0.95 |
|---|---|---|---|---|---|
| Temporal Action Localization on FineAction | DyFADet (VideoMAE v2-g) | 23.8 | 37.1 | 23.7 | 5.9 |
| Temporal Action Localization on HACS | DyFADet (VideoMAE v2) | 44.3 | 64.0 | 44.8 | 14.1 |