Temporal Context Aggregation for Video Retrieval with Contrastive Learning
Jie Shao Xin Wen Bingchen Zhao Xiangyang Xue

Abstract
Current research on Content-Based Video Retrieval requires higher-level video representations that describe the long-range semantic dependencies of relevant incidents, events, etc. However, existing methods commonly process the frames of a video as individual images or short clips, making the modeling of long-range semantic dependencies difficult. In this paper, we propose TCA (Temporal Context Aggregation for Video Retrieval), a video representation learning framework that incorporates long-range temporal information between frame-level features using the self-attention mechanism. To train it on video retrieval datasets, we propose a supervised contrastive learning method that performs automatic hard negative mining and utilizes the memory bank mechanism to increase the capacity of negative samples. Extensive experiments are conducted on multiple video retrieval tasks, such as CC_WEB_VIDEO, FIVR-200K, and EVVE. The proposed method shows a significant performance advantage (~17% mAP on FIVR-200K) over state-of-the-art methods with video-level features, and delivers competitive results with 22x faster inference time compared with frame-level features.
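The abstract describes two components: self-attention over frame-level features to produce a video-level descriptor, and a contrastive loss with a memory bank of negatives. The sketch below illustrates both ideas in PyTorch under stated assumptions; the module and function names, dimensions, and hyperparameters are illustrative placeholders, not the authors' released implementation.

```python
# A minimal sketch of the two ideas in the abstract, assuming a standard
# Transformer encoder for temporal self-attention and an InfoNCE-style
# contrastive loss. Names and hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalContextAggregator(nn.Module):
    """Aggregates frame-level features into a single video-level
    descriptor via self-attention over the temporal axis."""

    def __init__(self, dim=512, heads=8, layers=1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, frames):              # frames: (batch, time, dim)
        ctx = self.encoder(frames)          # contextualized frame features
        video = ctx.mean(dim=1)             # temporal pooling
        return F.normalize(video, dim=-1)   # unit-norm video descriptor


def contrastive_loss(anchor, positive, memory_bank, tau=0.07):
    """InfoNCE-style loss: pull the anchor toward its positive, push it
    away from negatives drawn from a memory bank. The softmax weighting
    gives harder negatives larger gradients, one common reading of
    "automatic hard negative mining"."""
    pos = (anchor * positive).sum(-1, keepdim=True) / tau  # (B, 1)
    neg = anchor @ memory_bank.t() / tau                   # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long,
                         device=anchor.device)             # positive at index 0
    return F.cross_entropy(logits, labels)
```

A quick usage example under the same assumptions: encode two augmented views of a batch of videos and score them against a queue of stored negatives.

```python
model = TemporalContextAggregator(dim=512)
a = model(torch.randn(4, 30, 512))                   # 4 videos, 30 frames
p = model(torch.randn(4, 30, 512))                   # positives for each
bank = F.normalize(torch.randn(4096, 512), dim=-1)   # queued negatives
loss = contrastive_loss(a, p, bank)
```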
Benchmarks
| Benchmark | Methodology | mAP (CSVR) | mAP (DSVR) | mAP (ISVR) |
|---|---|---|---|---|
| video-retrieval-on-fivr-200k | TCAc | 0.553 | 0.570 | 0.473 |
| video-retrieval-on-fivr-200k | TCAf | 0.830 | 0.877 | 0.703 |
| video-retrieval-on-fivr-200k | TCAsym | 0.698 | 0.728 | 0.592 |