Jointly Modeling Motion and Appearance Cues for Robust RGB-T Tracking

Pengyu Zhang; Jie Zhao; Dong Wang; Huchuan Lu; Xiaoyun Yang

Abstract

In this study, we propose a novel RGB-T tracking framework by jointly modeling both appearance and motion cues. First, to obtain a robust appearance model, we develop a novel late fusion method to infer the fusion weight maps of both RGB and thermal (T) modalities. The fusion weights are determined by offline-trained global and local multimodal fusion networks and are then used to linearly combine the response maps of the RGB and T modalities. Second, when the appearance cue is unreliable, we take motion cues, i.e., target and camera motions, into account to keep the tracker robust. We further propose a tracker switcher that flexibly switches between the appearance and motion trackers. Extensive results on three recent RGB-T tracking datasets show that the proposed tracker performs significantly better than other state-of-the-art algorithms.
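To make the late-fusion and switching idea above concrete, below is a minimal sketch assuming the per-modality response maps and fusion weight maps (as would be produced by the offline-trained fusion networks) are already given. The function names and the peak-to-sidelobe reliability check are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_responses(resp_rgb: np.ndarray,
                   resp_t: np.ndarray,
                   w_rgb: np.ndarray,
                   w_t: np.ndarray) -> np.ndarray:
    """Linearly combine RGB and thermal response maps with per-pixel weights."""
    # Normalize the weight maps so they sum to one at every location.
    total = w_rgb + w_t + 1e-12
    return (w_rgb / total) * resp_rgb + (w_t / total) * resp_t

def peak_to_sidelobe_ratio(resp: np.ndarray) -> float:
    """Simple reliability score of a response map (higher = more confident)."""
    peak = resp.max()
    sidelobe = np.delete(resp.ravel(), resp.argmax())
    return float((peak - sidelobe.mean()) / (sidelobe.std() + 1e-12))

def track_step(resp_rgb, resp_t, w_rgb, w_t, motion_prediction, psr_threshold=5.0):
    """Switch between the fused appearance response and a motion-based prediction."""
    fused = fuse_responses(resp_rgb, resp_t, w_rgb, w_t)
    if peak_to_sidelobe_ratio(fused) >= psr_threshold:
        # Appearance cue is reliable: take the peak of the fused response map.
        return np.unravel_index(fused.argmax(), fused.shape)
    # Otherwise fall back to the motion cue (e.g., a motion-model prediction).
    return motion_prediction
```

The threshold `psr_threshold` and the use of the peak-to-sidelobe ratio as the reliability score are placeholder choices; the paper's tracker switcher may use a different confidence measure and decision rule.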

Benchmarks

Benchmark                   Methodology   Precision   Success
rgb-t-tracking-on-gtot      JMMAC         90.2        73.2
rgb-t-tracking-on-rgbt234   JMMAC         79.0        57.3

