PTSEFormer: Progressive Temporal-Spatial Enhanced TransFormer Towards Video Object Detection

Han Wang, Jun Tang, Xiaodong Liu, Shanyan Guan, Rong Xie, Li Song


Abstract

Recent years have seen a trend of leveraging context frames to boost object detection performance, i.e., video object detection. Existing methods usually aggregate features in a single step, which tends to discard spatial information from neighboring frames and leads to insufficient feature aggregation. To address these issues, we adopt a progressive scheme that introduces both temporal and spatial information for an integrated enhancement. Temporal information is introduced by the temporal feature aggregation model (TFAM), which applies an attention mechanism between the context frames and the target frame (i.e., the frame to be detected). Meanwhile, we employ a Spatial Transition Awareness Model (STAM) to convey the location transition information between each context frame and the target frame. Built upon the transformer-based detector DETR, our PTSEFormer also follows an end-to-end design that avoids heavy post-processing procedures, while achieving 88.1% mAP on the ImageNet VID dataset. Code is available at https://github.com/Hon-Wong/PTSEFormer.
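The TFAM described above boils down to cross-attention in which the target frame's features act as queries and a context frame's features act as keys and values. The paper does not publish this exact interface, so the module below is a minimal, hypothetical PyTorch sketch of that idea (class name, dimensions, and the residual + LayerNorm arrangement are assumptions, not the official implementation):

```python
import torch
import torch.nn as nn


class TFAMSketch(nn.Module):
    """Hypothetical sketch of temporal feature aggregation:
    target-frame tokens attend to context-frame tokens, and the
    attended result enhances the target features."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target_feat: torch.Tensor, context_feat: torch.Tensor) -> torch.Tensor:
        # target_feat:  (B, N, C) tokens of the frame to be detected
        # context_feat: (B, M, C) tokens of a neighboring (context) frame
        aggregated, _ = self.attn(
            query=target_feat, key=context_feat, value=context_feat
        )
        # residual connection keeps the original target features intact
        return self.norm(target_feat + aggregated)


# toy usage: enhance 100 target tokens with 100 context tokens
tfam = TFAMSketch(dim=64, heads=4)
target = torch.randn(2, 100, 64)
context = torch.randn(2, 100, 64)
out = tfam(target, context)
```

In a progressive scheme, one such module would be applied per context frame, with the enhanced target features feeding the next aggregation step rather than fusing all frames at once.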

Code Repositories

hon-wong/ptseformer (Official, PyTorch)

Benchmarks

Benchmark: video-object-detection-on-imagenet-vid
Methodology: PTSEFormer (ResNet-101)
Metrics: mAP 88.1
