CenterCLIP: Token Clustering for Efficient Text-Video Retrieval

Shuai Zhao, Linchao Zhu, Xiaohan Wang, Yi Yang

Abstract

Recently, large-scale pre-training methods like CLIP have made great progress in multi-modal research such as text-video retrieval. In CLIP, transformers are vital for modeling complex multi-modal relations. However, in the vision transformer of CLIP, the essential visual tokenization process, which produces discrete visual token sequences, generates many homogeneous tokens due to the redundant nature of consecutive, similar frames in videos. This significantly increases computation costs and hinders the deployment of video retrieval models in web applications. In this paper, to reduce the number of redundant video tokens, we design a multi-segment token clustering algorithm that finds the most representative tokens and drops the non-essential ones. As frame redundancy occurs mostly in consecutive frames, we divide videos into multiple segments and conduct segment-level clustering. Center tokens from each segment are then concatenated into a new sequence, while their original spatio-temporal relations are well maintained. We instantiate two clustering algorithms to efficiently find deterministic medoids and iteratively partition groups in high-dimensional space. Through this token clustering and center selection procedure, we successfully reduce computation costs by removing redundant visual tokens. The method further enhances segment-level semantic alignment between video and text representations by enforcing spatio-temporal interactions among tokens from within-segment frames. Our method, coined CenterCLIP, surpasses the existing state of the art by a large margin on typical text-video benchmarks, while reducing training memory cost by 35% and accelerating inference speed by 14% in the best case. The code is available at https://github.com/mzhaoshuai/CenterCLIP.
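The core idea above can be sketched in a few lines of NumPy: split the frame tokens into temporal segments, run a k-medoids-style clustering inside each segment, and keep only the medoid tokens, concatenated in segment order. This is a minimal illustration, not the paper's official implementation; the function name `segment_medoid_tokens`, the fixed random initialization, and the plain Euclidean distance are all assumptions made for the sketch.

```python
import numpy as np

def segment_medoid_tokens(tokens, num_segments, centers_per_segment, iters=10):
    """Keep only medoid (center) tokens per temporal segment.

    tokens: (num_frames, tokens_per_frame, dim) array of visual patch tokens.
    Returns a (num_segments * centers_per_segment, dim) token sequence,
    with segments concatenated in their original temporal order.
    """
    num_frames = tokens.shape[0]
    frames_per_segment = num_frames // num_segments
    kept = []
    rng = np.random.default_rng(0)  # fixed seed: illustrative, not from the paper
    for s in range(num_segments):
        seg = tokens[s * frames_per_segment:(s + 1) * frames_per_segment]
        pts = seg.reshape(-1, seg.shape[-1])  # flatten all tokens in this segment
        # initialize medoids with random distinct tokens from the segment
        idx = rng.choice(len(pts), centers_per_segment, replace=False)
        medoids = pts[idx]
        for _ in range(iters):
            # assign every token to its nearest medoid
            d = np.linalg.norm(pts[:, None] - medoids[None], axis=-1)
            assign = d.argmin(axis=1)
            # update each medoid to the member minimizing total in-cluster distance
            for k in range(centers_per_segment):
                members = pts[assign == k]
                if len(members) == 0:
                    continue
                dd = np.linalg.norm(members[:, None] - members[None], axis=-1).sum(axis=1)
                medoids[k] = members[dd.argmin()]
        kept.append(medoids)
    # concatenation preserves segment order, i.e. coarse temporal structure
    return np.concatenate(kept, axis=0)
```

With, say, 8 frames, 4 tokens per frame, 2 segments, and 3 centers per segment, the 32 input tokens shrink to 6 representative tokens, which is the source of the memory and speed savings described in the abstract.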

Code Repositories

mzhaoshuai/CenterCLIP (official, PyTorch)

Benchmarks

Benchmark: video-retrieval-on-activitynet | Methodology: CenterCLIP (ViT-B/16)
  text-to-video:  R@1 46.2 | R@5 77.0 | R@10 87.6 | Median Rank 2 | Mean Rank 5.7
  video-to-text:  R@1 46.7 | R@5 77.1 | R@10 88.0 | Median Rank 2 | Mean Rank 5.5

Benchmark: video-retrieval-on-lsmdc | Methodology: CenterCLIP (ViT-B/16)
  text-to-video:  R@1 24.2 | R@5 46.2 | R@10 55.9 | Median Rank 8 | Mean Rank 47.3
  video-to-text:  R@1 24.5 | R@5 46.4 | R@10 55.8 | Median Rank 7 | Mean Rank 41.3

Benchmark: video-retrieval-on-msr-vtt-1ka | Methodology: CenterCLIP (ViT-B/16)
  text-to-video:  R@1 48.4 | R@5 73.8 | R@10 82.0 | Median Rank 2 | Mean Rank 13.8
  video-to-text:  R@1 47.7 | R@5 75.0 | R@10 83.3 | Median Rank 2 | Mean Rank 10.2

Benchmark: video-retrieval-on-msvd | Methodology: CenterCLIP (ViT-B/16)
  text-to-video:  R@1 50.6 | R@5 80.3 | R@10 88.4 | Median Rank 1 | Mean Rank 8.4
  video-to-text:  R@1 68.4 | R@5 90.1 | R@10 95.0 | Median Rank 1 | Mean Rank 3.0
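For readers unfamiliar with these metrics, R@K, median rank, and mean rank are all derived from a text-video similarity matrix whose diagonal holds the ground-truth pairs. The following is a generic sketch of how such metrics are typically computed, not code from the CenterCLIP repository; the function name `retrieval_metrics` is an assumption.

```python
import numpy as np

def retrieval_metrics(sim):
    """Compute standard retrieval metrics from a similarity matrix.

    sim[i, j] = similarity(text_i, video_j); the correct match for
    text_i is video_i (ground truth on the diagonal).
    """
    n = len(sim)
    # sort candidates by descending similarity, find the rank (1 = best)
    # of the ground-truth video for each text query
    order = (-sim).argsort(axis=1)
    ranks = np.where(order == np.arange(n)[:, None])[1] + 1
    return {
        "R@1": np.mean(ranks <= 1) * 100,    # % of queries ranked first
        "R@5": np.mean(ranks <= 5) * 100,
        "R@10": np.mean(ranks <= 10) * 100,
        "Median Rank": float(np.median(ranks)),
        "Mean Rank": float(ranks.mean()),
    }
```

Higher R@K is better, while lower median and mean rank are better, which is why the MSVD video-to-text mean rank of 3.0 above is a particularly strong result.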
