Alberto Sabater, Iñigo Alonso, Luis Montesano, Ana C. Murillo

Abstract
Hand action recognition is a special case of action recognition, with applications in human-robot interaction, virtual reality, and life-logging systems. Building action classifiers that work across such heterogeneous action domains is very challenging: changes across different actions within a given application can be very subtle, while variations across domains (e.g., virtual reality vs. life-logging) are large. This work introduces a novel skeleton-based hand motion representation model that tackles this problem. The proposed framework is agnostic to the application domain and to the camera recording view-point. When working on a single domain (intra-domain action classification), our approach performs on par with or better than current state-of-the-art methods on well-known hand action recognition benchmarks. More importantly, when performing hand action recognition on action domains and camera perspectives that our approach has not been trained on (cross-domain action classification), the proposed framework achieves performance comparable to intra-domain state-of-the-art methods. These experiments demonstrate the robustness and generalization capabilities of our framework.
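The benchmark entries below attribute the results to a TCN-based model ("TCN-Summ"). As an illustration only, the following is a minimal numpy sketch of the basic building block of such temporal convolutional networks: a causal dilated 1D convolution applied over a sequence of per-frame hand-skeleton features. The function name `temporal_conv`, the 21-joint hand layout, and all tensor shapes are assumptions for the example, not the paper's actual architecture.

```python
import numpy as np

def temporal_conv(x, w, dilation=1):
    """Causal dilated 1D temporal convolution (illustrative sketch).

    x: (T, C_in) sequence of per-frame skeleton features
    w: (K, C_in, C_out) kernel spanning K time steps
    Returns (T, C_out); the sequence start is zero-padded so that the
    output at time t depends only on frames <= t (causal).
    """
    T, C_in = x.shape
    K, _, C_out = w.shape
    pad = (K - 1) * dilation
    xp = np.vstack([np.zeros((pad, C_in)), x])  # left (causal) padding
    out = np.zeros((T, C_out))
    for t in range(T):
        for k in range(K):
            # kernel tap k reaches dilation*k frames into the past
            out[t] += xp[t + pad - k * dilation] @ w[K - 1 - k]
    return out

# Toy input: 16 frames of a 21-joint hand skeleton, (x, y, z) per joint
rng = np.random.default_rng(0)
frames = rng.standard_normal((16, 21 * 3))
kernel = rng.standard_normal((3, 63, 8)) * 0.1
features = temporal_conv(frames, kernel, dilation=2)
print(features.shape)  # (16, 8)
```

Stacking such layers with increasing dilation lets the receptive field grow exponentially with depth, which is why TCNs are a common choice for modeling skeleton sequences of varying action speed.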
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| skeleton-based-action-recognition-on-first | TCN-Summ | 1:1 Accuracy: 95.93; 1:3 Accuracy: 92.9; 3:1 Accuracy: 96.76; Cross-person Accuracy: 88.70 |
| skeleton-based-action-recognition-on-shrec | TCN-Summ | 14-gesture accuracy: 93.57; 28-gesture accuracy: 91.43 |