Skeleton-based Action Recognition of People Handling Objects

Sunoh Kim; Kimin Yun; Jongyoul Park; Jin Young Choi

Abstract

In visual surveillance systems, it is necessary to recognize the behavior of people handling objects such as a phone, a cup, or a plastic bag. In this paper, to address this problem, we propose a new framework that recognizes object-related human actions with graph convolutional networks using human and object poses. In this framework, we construct skeletal graphs of reliable human poses by selectively sampling the informative frames in a video, i.e., frames whose human joints have high confidence scores from pose estimation. The skeletal graphs generated from the sampled frames represent human poses relative to the object position in both the spatial and temporal domains, and these graphs serve as inputs to the graph convolutional networks. Experiments on an open benchmark and our own datasets verify the validity of the framework: our method outperforms the state-of-the-art method for skeleton-based action recognition.
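The abstract describes two preprocessing steps, confidence-based sampling of informative frames and construction of a joint human-object skeletal graph, before the graph convolutional network. The sketch below is a minimal illustration of those two steps only, not the authors' code: the function names (select_informative_frames, build_skeletal_graph), the confidence threshold, the number of sampled frames, and the hand-joint indices linked to the object node are all assumptions, and the resulting node features and adjacency would still need to be fed to a spatio-temporal GCN such as OHA-GCN.

```python
# Hypothetical sketch of informative-frame sampling and human+object graph
# construction; names, thresholds, and joint indices are illustrative only.
import numpy as np

def select_informative_frames(joint_conf, conf_threshold=0.5, num_samples=32):
    """Keep frames whose mean joint confidence (from pose estimation) is high,
    then uniformly sample a fixed number of them along the temporal axis."""
    frame_scores = joint_conf.mean(axis=1)                   # (T,) mean confidence per frame
    reliable = np.where(frame_scores >= conf_threshold)[0]   # indices of reliable frames
    if reliable.size == 0:                                    # fall back to all frames
        reliable = np.arange(joint_conf.shape[0])
    idx = np.linspace(0, reliable.size - 1, num_samples).astype(int)
    return reliable[idx]                                      # (num_samples,) frame indices

def build_skeletal_graph(joints, obj_pos, frame_idx, skeleton_edges, hand_ids=(4, 7)):
    """Stack the sampled human joints with the object centre as an extra node and
    extend the adjacency so the object node connects to the hand joints (assumed)."""
    J = joints.shape[1]
    nodes = np.concatenate(
        [joints[frame_idx], obj_pos[frame_idx][:, None, :]], axis=1)  # (S, J+1, 2)
    adj = np.eye(J + 1, dtype=np.float32)                     # self-loops
    for i, j in skeleton_edges:                               # human skeleton bones
        adj[i, j] = adj[j, i] = 1.0
    for h in hand_ids:                                        # object <-> hand links
        adj[J, h] = adj[h, J] = 1.0
    return nodes, adj

# Toy usage: 100 frames, 18 joints, random poses and confidences.
T, J = 100, 18
joints = np.random.rand(T, J, 2).astype(np.float32)
conf = np.random.rand(T, J).astype(np.float32)
obj = np.random.rand(T, 2).astype(np.float32)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7)]  # partial skeleton
sampled = select_informative_frames(conf)
nodes, adj = build_skeletal_graph(joints, obj, sampled, edges)
print(nodes.shape, adj.shape)   # (32, 19, 2) (19, 19)
```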

Benchmarks

Benchmark                               | Methodology                                                 | Metric
action-recognition-in-videos-on-icvl-4  | OHA-GCN (Two stream; HP + OHP-hands + informative samples)  | Accuracy: 91.86%
action-recognition-in-videos-on-ird     | OHA-GCN (Two stream; HP + OHP-hands + informative samples)  | Accuracy: 80.11%
