Yujin Chen, Zhigang Tu, Di Kang, Linchao Bao, Ying Zhang, Xuefei Zhe, Ruizhi Chen, Junsong Yuan

Abstract
Reconstructing a 3D hand from a single-view RGB image is challenging due to the variety of hand configurations and depth ambiguity. To reliably reconstruct a 3D hand from a monocular image, most state-of-the-art methods rely heavily on 3D annotations at the training stage, but obtaining 3D annotations is expensive. To alleviate the reliance on labeled training data, we propose S2HAND, a self-supervised 3D hand reconstruction network that jointly estimates pose, shape, texture, and the camera viewpoint. Specifically, we obtain geometric cues from the input image through easily accessible 2D detected keypoints. To learn an accurate hand reconstruction model from these noisy geometric cues, we exploit the consistency between 2D and 3D representations and propose a set of novel losses to rationalize the outputs of the neural network. For the first time, we demonstrate the feasibility of training an accurate 3D hand reconstruction network without relying on manual annotations. Our experiments show that the proposed method achieves performance comparable to recent fully-supervised methods while using less supervision data.
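To make the supervision signal concrete, below is a minimal sketch of one possible 2D-3D consistency term: predicted 3D joints are projected with a weak-perspective camera and compared against the noisy detected 2D keypoints, weighted by detection confidence. The function name, tensor layout, and weak-perspective camera model are assumptions for illustration; the actual S2HAND training objective combines several additional losses (e.g., geometric, texture, and regularization terms).

```python
import torch

def reprojection_consistency_loss(joints_3d, keypoints_2d, confidence, cam_scale, cam_trans):
    """Illustrative sketch (not the official S2HAND loss) of a 2D keypoint
    reprojection consistency term under a weak-perspective camera.

    joints_3d:    (B, J, 3) predicted 3D joint positions
    keypoints_2d: (B, J, 2) noisy 2D keypoints from an off-the-shelf detector
    confidence:   (B, J)    per-keypoint detection confidence
    cam_scale:    (B, 1)    predicted camera scale
    cam_trans:    (B, 2)    predicted in-plane camera translation
    """
    # Weak-perspective projection: drop depth, then scale and translate.
    projected_2d = cam_scale.unsqueeze(-1) * joints_3d[..., :2] + cam_trans.unsqueeze(1)
    # Confidence-weighted L1 distance between projected and detected keypoints,
    # so unreliable detections contribute less to the gradient.
    per_joint_err = (projected_2d - keypoints_2d).abs().sum(dim=-1)  # (B, J)
    return (confidence * per_joint_err).sum() / confidence.sum().clamp(min=1e-6)
```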
Benchmarks
| Benchmark | Method | AUC_J | AUC_V | F@5mm | F@15mm | PA-MPJPE (mm) | PA-MPVPE (mm) |
|---|---|---|---|---|---|---|---|
| 3d-hand-pose-estimation-on-ho-3d | S2HAND | 0.773 | 0.777 | 0.450 | 0.930 | 11.4 | 11.2 |
| 3d-hand-pose-estimation-on-ho-3d-v3 | S2HAND | 0.769 | 0.778 | 0.448 | 0.932 | 11.5 | 11.1 |