PERF-Net: Pose Empowered RGB-Flow Net
Yinxiao Li Zhichao Lu Xuehan Xiong Jonathan Huang

Abstract
In recent years, many works in the video action recognition literature have shown that two-stream models (combining spatial and temporal input streams) are necessary for achieving state-of-the-art performance. In this paper we show the benefits of including yet another stream based on human pose estimated from each frame -- specifically by rendering pose on input RGB frames. At first blush, this additional stream may seem redundant given that human pose is fully determined by RGB pixel values -- however we show (perhaps surprisingly) that this simple and flexible addition can provide complementary gains. Using this insight, we propose a new model, which we dub PERF-Net (short for Pose Empowered RGB-Flow Net), which combines this new pose stream with the standard RGB and flow input streams via distillation techniques. Our model outperforms the state-of-the-art by a large margin on a number of human action recognition datasets while not requiring flow or pose to be explicitly computed at inference time. The proposed pose stream is also part of the winning solution of the ActivityNet Kinetics Challenge 2020.
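The abstract mentions that the pose and flow streams are folded into the RGB stream via distillation, so that neither pose nor flow needs to be computed at inference time. As a minimal sketch of the general idea (the exact loss used by PERF-Net is not specified here; the function names and the temperature value are illustrative assumptions), a teacher stream's softened class distribution can supervise the student RGB stream with a KL-divergence term:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last (class) axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student class distributions.

    In a PERF-Net-style setup the teacher logits would come from the pose
    and/or flow streams and the student logits from the RGB stream; here
    both are plain arrays of shape (batch, num_classes). All names and the
    temperature are illustrative, not taken from the paper.
    """
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return float(kl.mean())
```

The loss is zero when student and teacher agree exactly and grows as their softened predictions diverge; in practice it would be combined with the usual cross-entropy loss on ground-truth labels.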
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| action-classification-on-kinetics-600 | PERF-Net (distilled ResNet50-G) | Top-1 Accuracy: 82.0%, Top-5 Accuracy: 95.7% |
| action-recognition-in-videos-on-hmdb-51 | PERF-Net (distilled S3D-G) | Average accuracy over 3 splits: 83.2% |
| action-recognition-in-videos-on-ucf101 | PERF-Net (multi-distilled S3D) | 3-fold Accuracy: 98.6% |