Bowen Deng, Dongchang Liu

Abstract
Temporal Action Detection (TAD) is a crucial but challenging task in video understanding. It aims to detect both the type and the start and end frames of each action instance in a long, untrimmed video. Most current models use both RGB and optical-flow streams for TAD, so raw RGB frames must first be converted into optical-flow frames at extra computation and time cost, which is an obstacle to real-time processing. Many models also adopt two-stage strategies, which slow inference and require intricate tuning of proposal generation. By contrast, we propose a one-stage, anchor-free temporal localization method that uses the RGB stream only, built on a novel Newtonian Mechanics-MLP architecture. It achieves accuracy comparable to existing state-of-the-art models while surpassing their inference speed by a large margin, with a typical speed of 4.44 videos per second on THUMOS14. In applications the advantage is even larger, since no optical flow needs to be computed. The results also show that MLPs have great potential in downstream tasks such as TAD. The source code is available at https://github.com/BonedDeng/TadML
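To make the one-stage, anchor-free idea concrete, here is a minimal NumPy sketch of a generic anchor-free temporal detection head: every temporal location predicts class scores plus non-negative distances to the segment start and end, and segments are decoded directly without anchors or proposals. All names, shapes, and the single-layer heads are illustrative assumptions for exposition, not the paper's actual TadML implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, C, num_classes = 16, 8, 4  # toy temporal length, feature dim, class count

# Toy per-frame features; in a real model these come from an RGB video backbone.
feats = rng.standard_normal((T, C))

# Single-layer stand-ins for the classification and regression MLP heads.
W_cls = rng.standard_normal((C, num_classes)) * 0.1
W_reg = rng.standard_normal((C, 2)) * 0.1

cls_logits = feats @ W_cls                  # (T, num_classes): per-location class scores
reg_out = np.log1p(np.exp(feats @ W_reg))   # softplus -> non-negative (d_start, d_end)

# Anchor-free decoding: location t directly yields the segment [t - d_start, t + d_end].
t_idx = np.arange(T)
starts = t_idx - reg_out[:, 0]
ends = t_idx + reg_out[:, 1]
scores = 1.0 / (1.0 + np.exp(-cls_logits))  # sigmoid confidence per class
labels = scores.argmax(axis=1)              # predicted action class per location
```

In a full detector these raw per-location segments would then be filtered by score and merged with non-maximum suppression; the key point of the anchor-free design is that no predefined anchor segments or proposal stage is needed.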
Benchmarks
Results on THUMOS14, reported as mAP (%) at each tIoU threshold and averaged over 0.3:0.7:

| Method | mAP@0.3 | mAP@0.4 | mAP@0.5 | mAP@0.6 | mAP@0.7 | Avg mAP (0.3:0.7) |
|---|---|---|---|---|---|---|
| TadML (two-stream) | 73.29 | 69.73 | 62.53 | 53.36 | 39.60 | 59.70 |
| TadML (RGB-only) | 68.78 | 64.66 | 56.61 | 45.40 | 31.88 | 53.46 |