Shubham Goel, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa, Jitendra Malik

Abstract
We present an approach to reconstruct humans and track them over time. At the core of our approach, we propose a fully "transformerized" version of a network for human mesh recovery. This network, HMR 2.0, advances the state of the art and shows the capability to analyze unusual poses that have in the past been difficult to reconstruct from single images. To analyze video, we use 3D reconstructions from HMR 2.0 as input to a tracking system that operates in 3D. This enables us to deal with multiple people and maintain identities through occlusion events. Our complete approach, 4DHumans, achieves state-of-the-art results for tracking people from monocular video. Furthermore, we demonstrate the effectiveness of HMR 2.0 on the downstream task of action recognition, achieving significant improvements over previous pose-based action recognition approaches. Our code and models are available on the project website: https://shubham-goel.github.io/4dhumans/.
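The abstract describes a two-stage pipeline: a transformer-based network (HMR 2.0) recovers a 3D human mesh for each detected person in a frame, and these per-frame 3D reconstructions drive a tracker that maintains identities over time, including through occlusions. The sketch below illustrates that flow only; the names used here (`detect_people`-style detector, `hmr2`, `tracker`) are hypothetical placeholders, not the project's released API, which is available on the project website.

```python
# Illustrative sketch of a 4DHumans-style video pipeline:
# per-frame person detection -> HMR 2.0 mesh recovery -> tracking in 3D.
# The callables passed in (detector, hmr2, tracker) are placeholders.

def run_4d_pipeline(video_frames, detector, hmr2, tracker):
    """Reconstruct every person per frame and link identities over time."""
    tracks = []
    for t, frame in enumerate(video_frames):
        boxes = detector(frame)                    # 2D person bounding boxes
        reconstructions = []
        for box in boxes:
            crop = frame.crop(box)                 # person-centered crop
            # HMR 2.0: a transformer network that regresses body pose,
            # shape, and camera parameters from the crop.
            pose, shape, camera = hmr2(crop)
            reconstructions.append({"pose": pose, "shape": shape,
                                    "camera": camera, "box": box})
        # Association happens in 3D, which is what lets identities
        # persist through occlusion events with multiple people.
        tracks = tracker.update(reconstructions, frame_idx=t)
    return tracks
```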
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| 3D Human Pose Estimation on 3DPW | HMR 2.0 | MPJPE: 69.8 mm, MPVPE: 82.2 mm, PA-MPJPE: 44.4 mm |
| 3D Human Pose Estimation on Human3.6M | HMR 2.0a | MPJPE: 44.8 mm, PA-MPJPE: 33.6 mm |
| Pose Tracking on PoseTrack2018 | 4DHumans + ViTDet | IDF1: 79.3, IDs: 367, MOTA: 61.9 |
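The pose metrics in the table are standard: MPJPE is the mean Euclidean distance (in mm) between predicted and ground-truth 3D joints, MPVPE is the same distance computed over mesh vertices, and PA-MPJPE first rigidly aligns the prediction to the ground truth with Procrustes analysis. A minimal NumPy sketch of MPJPE and PA-MPJPE, assuming joints are given as N×3 arrays in millimetres:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance (mm)
    between predicted and ground-truth joints, both (N, 3) arrays."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-Aligned MPJPE: fit a similarity transform (scale,
    rotation, translation) from pred to gt before measuring error."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation from the SVD of the 3x3 cross-covariance matrix.
    U, S, Vt = np.linalg.svd(p.T @ g)
    if np.linalg.det(U @ Vt) < 0:          # avoid a reflection
        Vt[-1] *= -1
        S[-1] *= -1
    R = (U @ Vt).T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g        # pred after similarity alignment
    return mpjpe(aligned, gt)
```

IDF1, IDs (identity switches), and MOTA in the tracking row are the usual multi-object tracking measures and are computed over track associations rather than joint distances.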