Adaptive Multi-view and Temporal Fusing Transformer for 3D Human Pose Estimation

Hui Shuai, Lele Wu, Qingshan Liu

Abstract

This paper proposes a unified framework dubbed Multi-view and Temporal Fusing Transformer (MTF-Transformer) to adaptively handle varying numbers of views and video lengths without camera calibration in 3D Human Pose Estimation (HPE). It consists of a Feature Extractor, a Multi-view Fusing Transformer (MFT), and a Temporal Fusing Transformer (TFT). The Feature Extractor estimates the 2D pose from each image and fuses the predictions according to their confidence. It provides pose-focused feature embeddings and keeps the subsequent modules computationally lightweight. MFT fuses the features of a varying number of views with a novel Relative-Attention block. It adaptively measures the implicit relative relationship between each pair of views and reconstructs more informative features. TFT aggregates the features of the whole sequence and predicts the 3D pose via a transformer. It adaptively handles videos of arbitrary length and fully utilizes the temporal information. The adoption of transformers enables our model to learn spatial geometry better and to remain robust across varying application scenarios. We report quantitative and qualitative results on Human3.6M, TotalCapture, and KTH Multiview Football II. Compared with state-of-the-art methods that use camera parameters, MTF-Transformer obtains competitive results and generalizes well to dynamic capture with an arbitrary number of unseen views.
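The key property of MFT is that it fuses features from any number of views without retraining for a fixed camera count. The paper's Relative-Attention block is more elaborate, but the idea can be illustrated with plain scaled dot-product self-attention over per-view feature vectors; the function name `fuse_views` and all shapes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_views(feats):
    """Fuse per-view features of shape (V, D) with dot-product attention.

    Each view attends to every other view, so the same code handles any
    view count V. This is a plain self-attention stand-in for the paper's
    Relative-Attention block (hypothetical sketch, not the original code).
    """
    V, D = feats.shape
    scores = feats @ feats.T / np.sqrt(D)   # (V, V) pairwise relevance
    weights = softmax(scores, axis=-1)      # rows sum to 1
    return weights @ feats                  # (V, D) fused features

# The same function accepts 1, 2, or 4 views without shape changes:
for v in (1, 2, 4):
    fused = fuse_views(np.random.randn(v, 128))
    assert fused.shape == (v, 128)
```

Because the attention weights are computed from the views themselves, no camera calibration enters the fusion step, which is what lets the model generalize to unseen camera setups.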

Benchmarks

Benchmark      Method                              Avg MPJPE (mm)  Setting     2D GT joints
Human3.6M      MTF-Transformer (M=0.4, T=7, N=1)   49.4            Monocular   No
Human3.6M      MTF-Transformer (M=0.4, T=1, N=1)   50.7            Monocular   No
Human3.6M      MTF-Transformer (M=0.4, T=1)        29.4            Multi-View  No
Human3.6M      MTF-Transformer (M=0.4, T=7)        28.5            Multi-View  No
TotalCapture   MTF-Transformer (M=0.4, T=7)        29.2
