
Self-Adaptive Sampling for Efficient Video Question-Answering on Image--Text Models

Wei Han Hui Chen Min-Yen Kan Soujanya Poria


Abstract

Video question-answering is a fundamental task in the field of video understanding. Although current vision--language models (VLMs) equipped with Video Transformers have enabled temporal modeling and yielded superior results, they come at the cost of huge computational power and are thus too expensive to deploy in real-time application scenarios. An economical workaround samples only a small portion of frames to represent the main content of the video and tunes an image--text model on these sampled frames. However, recent video understanding models usually sample frames or clips randomly, disregarding both the internal correlations among their visual contents and their relevance to the question. We argue that such aimless sampling may omit the key frames from which the correct answer can be deduced, and the situation gets worse as sampling sparsity increases, which always happens as video lengths grow. To mitigate this issue, we propose two frame sampling strategies, namely the most domain frames (MDF) and most implied frames (MIF), to maximally preserve the frames that are most likely vital to the given question. MDF passively minimizes the risk of key-frame omission in a bootstrap manner, while MIF actively searches for key frames customized to each video--question pair with the assistance of auxiliary models. Experimental results on three public datasets with three advanced VLMs (CLIP, GIT and All-in-one) demonstrate that our proposed strategies can boost the performance of image--text pretrained models. The source code for the proposed method is publicly available at https://github.com/declare-lab/sas-vqa.
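The abstract only names the two strategies without giving their algorithms. As a loose illustration of the distinction it draws (question-agnostic vs. question-aware selection), here is a minimal sketch under our own assumptions: it scores frames by cosine similarity over precomputed feature vectors, picking frames that dominate the video's visual content for the MDF-style case and frames closest to a question embedding from an auxiliary model for the MIF-style case. Function names and the scoring heuristics are hypothetical, not the authors' implementation.

```python
import numpy as np

def _unit(x):
    """L2-normalize rows so dot products become cosine similarities."""
    x = np.asarray(x, dtype=float)
    return x / np.clip(np.linalg.norm(x, axis=-1, keepdims=True), 1e-8, None)

def mdf_sample(frame_features, k):
    """Question-agnostic sampling in the spirit of MDF (hypothetical sketch):
    keep the k frames whose features are, on average, most similar to the
    rest of the video, so dominant visual content is unlikely to be omitted."""
    unit = _unit(frame_features)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)  # ignore self-similarity when scoring
    scores = sim.mean(axis=1)
    # top-k by score, returned in temporal order
    return sorted(np.argsort(scores)[::-1][:k].tolist())

def mif_sample(frame_features, question_feature, k):
    """Question-aware sampling in the spirit of MIF (hypothetical sketch):
    score each frame by cosine similarity to a question embedding produced
    by an auxiliary model, and keep the top-k frames in temporal order."""
    unit = _unit(frame_features)
    scores = unit @ _unit(question_feature)
    return sorted(np.argsort(scores)[::-1][:k].tolist())
```

In a real pipeline the frame and question features would come from the VLM's own encoders (e.g. CLIP image/text embeddings); the sketch only shows how the two selection criteria differ.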

Code Repositories

declare-lab/sealing (official, PyTorch)
declare-lab/sas-vqa (official, PyTorch)

Benchmarks

Benchmark                                   Method    Accuracy
visual-question-answering-on-msrvtt-qa-1    AIO+MDF   0.438
visual-question-answering-on-msrvtt-qa-1    AIO+MIF   0.440
visual-question-answering-on-msrvtt-qa-1    GIT+MDF   0.423
visual-question-answering-on-msvd-qa-1      AIO+MIF   0.467
visual-question-answering-on-msvd-qa-1      GIT+MDF   0.469
