Short-term anchor linking and long-term self-guided attention for video object detection

Manuel Mucientes, Víctor M. Brea, Daniel Cores

Abstract

We present a new network architecture that exploits the spatio-temporal information available in videos to boost object detection precision. First, box features are associated and aggregated by linking proposals that come from the same anchor box in nearby frames. Then, we design a new attention module that aggregates these short-term enhanced box features to exploit long-term spatio-temporal information. This module takes advantage of geometrical features in the long term for the first time in the video object detection domain. Finally, a spatio-temporal double head is fed with both spatial information from the reference frame and the aggregated information that accounts for the short- and long-term temporal context. We have tested our proposal on five video object detection datasets with very different characteristics, in order to prove its robustness in a wide range of scenarios. Non-parametric statistical tests show that our approach outperforms the state of the art. Our code is available at https://github.com/daniel-cores/SLTnet.
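To make the two aggregation stages described above concrete, here is a minimal NumPy sketch. It is not the authors' implementation: the function names, the mean-based short-term fusion, and the additive geometric bias in the attention scores are illustrative assumptions; the paper's actual modules are learned network components.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def anchor_link_aggregate(ref_feats, nbr_feats):
    """Short-term step (sketch): proposals generated by the same anchor
    box in nearby frames share a row index, so their features can be
    fused row-wise. Here we simply average the reference features with
    the temporal mean of the linked features (a hypothetical choice).

    ref_feats: (N, D) box features in the reference frame
    nbr_feats: (T, N, D) linked box features in T nearby frames
    """
    return (ref_feats + nbr_feats.mean(axis=0)) / 2.0

def self_guided_attention(ref_feats, long_feats, geom_bias):
    """Long-term step (sketch): attention over distant-frame features,
    where the score combines appearance similarity (scaled dot product)
    with a geometric term, mimicking the paper's use of geometrical
    features in the long term. geom_bias is a hypothetical precomputed
    (N, M) score derived from box geometry.
    """
    d = ref_feats.shape[1]
    scores = ref_feats @ long_feats.T / np.sqrt(d) + geom_bias  # (N, M)
    weights = softmax(scores, axis=1)
    return weights @ long_feats  # (N, D) aggregated long-term features
```

A spatio-temporal double head would then receive both the original reference-frame features and the aggregated output for classification and box regression.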

Benchmarks

| Benchmark | Methodology | Metrics |
|---|---|---|
| Video object detection on ImageNet VID | SLTnet FPN-X101 | mAP: 82.4 |
| Video object detection on USC-GRAD-STDdb | SLTnet FPN-X101 | AP: 16.6, AP@0.5: 44.9 |
