VTG-GPT: Tuning-Free Zero-Shot Video Temporal Grounding with GPT

Yifang Xu; Yunzhuo Sun; Zien Xie; Benxiang Zhai; Sidan Du


Abstract

Video temporal grounding (VTG) aims to locate specific temporal segments in an untrimmed video based on a linguistic query. Most existing VTG models are trained on extensive annotated video-text pairs, a process that not only introduces human biases from the queries but also incurs significant computational costs. To tackle these challenges, we propose VTG-GPT, a GPT-based method for zero-shot VTG without training or fine-tuning. To reduce bias in the original query, we employ Baichuan2 to generate debiased queries. To lessen redundant information in videos, we apply MiniGPT-v2 to transform visual content into more precise captions. Finally, we devise a proposal generator and post-processing to produce accurate segments from the debiased queries and image captions. Extensive experiments demonstrate that VTG-GPT significantly outperforms SOTA methods in zero-shot settings and surpasses unsupervised approaches. More notably, it achieves performance competitive with supervised methods. The code is available at https://github.com/YoucanBaby/VTG-GPT
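The abstract describes a four-stage pipeline: query debiasing with Baichuan2, frame captioning with MiniGPT-v2, proposal generation, and post-processing. The sketch below only illustrates that overall structure under simplifying assumptions: the LLM and captioner calls are replaced by hypothetical placeholders, and caption-query matching uses a toy word-overlap similarity rather than the scoring used in the paper.

```python
# Minimal sketch of a zero-shot VTG pipeline in the spirit of VTG-GPT.
# `debias_query` and `caption_frame` are hypothetical stand-ins for the
# Baichuan2 and MiniGPT-v2 steps; only the proposal/post-processing logic
# is actually exercised here.

from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds
    end: float
    score: float

def debias_query(query: str) -> str:
    """Stand-in for the LLM query-rewriting step; here it only lowercases."""
    return query.lower()

def caption_frame(frame) -> str:
    """Stand-in for the image captioner; the demo uses pre-computed captions."""
    raise NotImplementedError

def similarity(query: str, caption: str) -> float:
    """Toy word-overlap score; a real system would use a language model
    or sentence embeddings to match captions against the query."""
    q, c = set(query.split()), set(caption.lower().split())
    return len(q & c) / max(len(q | c), 1)

def generate_proposals(scores, fps=1.0, threshold=0.3):
    """Merge consecutive frames whose caption-query score exceeds
    `threshold` into candidate segments (one frame per 1/fps seconds)."""
    proposals, start = [], None
    for i, s in enumerate(scores + [0.0]):  # trailing 0 flushes the last run
        if s >= threshold and start is None:
            start = i
        elif s < threshold and start is not None:
            run = scores[start:i]
            proposals.append(Segment(start / fps, i / fps, sum(run) / len(run)))
            start = None
    return proposals

def post_process(proposals, min_len=1.0):
    """Drop very short segments and rank the rest by score."""
    kept = [p for p in proposals if p.end - p.start >= min_len]
    return sorted(kept, key=lambda p: p.score, reverse=True)

if __name__ == "__main__":
    # One caption per sampled frame (stand-ins for captioner outputs).
    captions = [
        "a man opens the fridge",
        "a man opens the fridge and takes milk",
        "a dog sleeps on the couch",
        "a man pours milk into a glass",
    ]
    query = debias_query("The man takes milk from the fridge")
    scores = [similarity(query, c) for c in captions]
    print(post_process(generate_proposals(scores)))
```

Running the script merges the two consecutive milk-related captions into a single candidate segment, mirroring how runs of high-similarity frames are grouped into proposals before post-processing ranks them.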

Code Repositories

YoucanBaby/VTG-GPT (official, PyTorch)

Benchmarks

Benchmark: Zero-Shot Moment Retrieval on QVHighlights
Methodology: VTG-GPT
Metrics:
  R1@0.5: 54.26
  R1@0.7: 38.45
  mAP: 30.91
  mAP@0.5: 54.17
  mAP@0.75: 29.73
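For reference, the R1@IoU numbers above follow the standard moment-retrieval convention: a query counts as correct if the top-ranked predicted segment overlaps the ground-truth segment with temporal IoU at or above the threshold. A minimal sketch of that computation, using made-up example segments rather than any values from the paper:

```python
def iou(pred, gt):
    """Temporal IoU between two (start, end) segments in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_1(predictions, ground_truths, threshold=0.5):
    """Percentage of queries whose top-1 prediction reaches the IoU
    threshold (i.e. R1@0.5, R1@0.7, ...)."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(predictions, ground_truths))
    return 100.0 * hits / len(ground_truths)

# Example: R1@0.5 over three queries with illustrative segments
preds = [(10.0, 25.0), (3.0, 8.0), (40.0, 60.0)]
gts   = [(12.0, 24.0), (0.0, 20.0), (41.0, 58.0)]
print(recall_at_1(preds, gts, threshold=0.5))  # 66.67
```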

