Action Capsules: Human Skeleton Action Recognition

Ali Farajzadeh Bavil, Hamed Damirchi, Hamid D. Taghirad

Abstract

Due to the compact and rich high-level representations it offers, skeleton-based human action recognition has recently become a highly active research topic. Previous studies have demonstrated that investigating joint relationships in the spatial and temporal dimensions provides information critical to action recognition. However, effectively encoding the global dependencies of joints during spatio-temporal feature extraction remains challenging. In this paper, we introduce Action Capsules, which identify action-related key joints by considering the latent correlations of joints in a skeleton sequence. We show that, during inference, our end-to-end network attends to a set of joints specific to each action, whose encoded spatio-temporal features are aggregated to recognize the action. Additionally, using multiple stages of action capsules improves the network's ability to distinguish similar actions. Consequently, our network outperforms state-of-the-art approaches on the N-UCLA dataset and obtains competitive results on the NTU RGB+D dataset, while requiring significantly less computation as measured in GFLOPs.
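The abstract describes action capsules that weight action-related key joints and aggregate their spatio-temporal features into per-action capsules. The paper's exact architecture is not given here, so the following is only a minimal numpy sketch of the general capsule idea it builds on (squashing plus routing-by-agreement over per-joint vote vectors); the function names, shapes, and iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Capsule squashing nonlinearity: keeps the vector's direction,
    # maps its norm into [0, 1) so the norm can act as a confidence.
    sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

def route_joints_to_actions(u_hat, n_iters=3):
    # u_hat: (n_joints, n_actions, dim) -- per-joint "vote" vectors
    # for each action capsule (hypothetical layout for illustration).
    # Routing-by-agreement: joints whose votes agree with an action
    # capsule's output get larger coupling weights, which behaves like
    # an attention map over action-related key joints.
    n_joints, n_actions, _ = u_hat.shape
    b = np.zeros((n_joints, n_actions))  # routing logits
    for _ in range(n_iters):
        # Softmax over action capsules for each joint.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted aggregation of joint votes -> (n_actions, dim).
        s = (c[..., None] * u_hat).sum(axis=0)
        v = squash(s)  # action capsule outputs
        # Increase logits where a joint's vote agrees with the output.
        b = b + (u_hat * v[None]).sum(axis=-1)
    return v, c
```

In such a scheme the norm of each row of `v` would serve as the class score, and `c` exposes which joints the network relied on for each action, matching the abstract's claim that the network attends to a set of joints specific to each action.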

Benchmarks

Benchmark: skeleton-based-action-recognition-on-n-ucla
Methodology: Action Capsules
Metrics: Accuracy: 97.3

Benchmark: skeleton-based-action-recognition-on-ntu-rgbd
Methodology: Action Capsules
Metrics: Accuracy (CS): 90, Accuracy (CV): 96.3
