Hao Tang, Wei Wang, Dan Xu, Yan Yan, Nicu Sebe

Abstract
Hand gesture-to-gesture translation in the wild is a challenging task since hand gestures can have arbitrary poses, sizes, locations and self-occlusions. Therefore, this task requires a high-level understanding of the mapping between the input source gesture and the output target gesture. To tackle this problem, we propose a novel hand Gesture Generative Adversarial Network (GestureGAN). GestureGAN consists of a single generator $G$ and a discriminator $D$, which take as input a conditional hand image and a target hand skeleton image. GestureGAN utilizes the hand skeleton information explicitly and learns the gesture-to-gesture mapping through two novel losses, the color loss and the cycle-consistency loss. The proposed color loss handles the issue of "channel pollution" while back-propagating the gradients. In addition, we present the Fréchet ResNet Distance (FRD) to evaluate the quality of generated images. Extensive experiments on two widely used benchmark datasets demonstrate that the proposed GestureGAN achieves state-of-the-art performance on the unconstrained hand gesture-to-gesture translation task. Meanwhile, the generated images are of high quality and photo-realistic, allowing them to be used for data augmentation to improve the performance of a hand gesture classifier. Our model and code are available at https://github.com/Ha0Tang/GestureGAN.
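The abstract describes the core training signal: a generator conditioned on a hand image plus a target hand-skeleton map, trained with a cycle-consistency loss and a per-channel color loss. The snippet below is a minimal sketch of how such losses could be written in PyTorch; the concatenation-based conditioning, the L1 form of both terms, and all names (`G`, `img_x`, `skel_x`, `skel_y`, `img_y`) are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
# Hedged sketch of the gesture-to-gesture cycle, assuming the generator is
# conditioned by concatenating the source image with the target skeleton map
# along the channel dimension. All names are illustrative; see
# https://github.com/Ha0Tang/GestureGAN for the authors' code.
import torch
import torch.nn.functional as F

def cycle_and_color_losses(G, img_x, skel_x, skel_y, img_y):
    """img_x/img_y: source/target RGB images (N, 3, H, W);
    skel_x/skel_y: source/target hand-skeleton maps (N, C, H, W)."""
    # Forward mapping: source image + target skeleton -> fake target image.
    fake_y = G(torch.cat([img_x, skel_y], dim=1))
    # Backward mapping: fake target + source skeleton -> reconstructed source.
    rec_x = G(torch.cat([fake_y, skel_x], dim=1))

    # Cycle-consistency: the reconstruction should match the original source.
    cycle_loss = F.l1_loss(rec_x, img_x)

    # Per-channel ("color") loss: penalize each RGB channel separately so that
    # gradients from one channel do not leak into the others (assumed form).
    color_loss = sum(
        F.l1_loss(fake_y[:, c], img_y[:, c]) for c in range(3)
    ) / 3.0
    return cycle_loss, color_loss
```

Splitting the reconstruction term per RGB channel is one plausible reading of how a "color loss" could limit cross-channel gradient interference; the paper defines the exact formulation.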
Code Repositories
- https://github.com/Ha0Tang/GestureGAN
Benchmarks
| Benchmark | Methodology | AMT | IS | MSE | PSNR |
|---|---|---|---|---|---|
| gesture-to-gesture-translation-on-ntu-hand | GestureGAN | 26.1 | 2.5532 | 105.7286 | 32.6091 |
| gesture-to-gesture-translation-on-senz3d | GestureGAN | 22.6 | 3.4107 | 169.9219 | 27.9749 |
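Besides the metrics reported above, the abstract introduces the Fréchet ResNet Distance (FRD) for judging generated-image quality. The sketch below assumes FRD follows the standard Fréchet distance between Gaussian fits of deep features (as in FID), with a ResNet backbone supplying the features; this is an assumption about the metric's form, and the exact definition is given in the paper and repository.

```python
# Hedged sketch of a Fréchet-style distance over ResNet features, assuming an
# FID-like formulation: fit Gaussians to real/generated feature sets and
# compute the Fréchet distance between them.
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """feats_*: (N, D) arrays of pooled ResNet features for real/generated images."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the product of covariances; keep the real part to
    # drop tiny imaginary components caused by numerical error.
    covmean = linalg.sqrtm(sigma1 @ sigma2).real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Lower values indicate that the feature statistics of generated images are closer to those of real images.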