FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment Sampling

Haoning Wu Chaofeng Chen Jingwen Hou Liang Liao Annan Wang Wenxiu Sun Qiong Yan Weisi Lin

Abstract

Current deep video quality assessment (VQA) methods usually incur high computational costs when evaluating high-resolution videos, which hinders them from learning better video-quality-related representations via end-to-end training. Existing approaches typically resort to naive sampling schemes, such as resizing and cropping, to reduce this cost. However, these schemes visibly corrupt quality-related information in videos and are therefore suboptimal for learning good representations for VQA. There is thus an urgent need for a new quality-retaining sampling scheme for VQA. In this paper, we propose Grid Mini-patch Sampling (GMS), which accounts for local quality by sampling patches at their raw resolution and covers global quality and contextual relations via mini-patches sampled on uniform grids. These mini-patches are spliced and aligned temporally into what we name fragments. We further build the Fragment Attention Network (FANet), specially designed to take fragments as inputs. Together, fragments and FANet form the proposed FrAgment Sample Transformer for VQA (FAST-VQA), which enables efficient end-to-end deep VQA and learns effective video-quality-related representations. It improves state-of-the-art accuracy by around 10% while reducing FLOPs by 99.5% on 1080p high-resolution videos. The learned video-quality-related representations can also be transferred to smaller VQA datasets, boosting performance in those scenarios. Extensive experiments show that FAST-VQA performs well on inputs of various resolutions while retaining high efficiency. We publish our code at https://github.com/timothyhtimothy/FAST-VQA.
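The Grid Mini-patch Sampling idea described in the abstract can be sketched as follows. This is an illustrative NumPy reconstruction based only on the abstract's description, not the authors' implementation; the function name, grid size, and patch size are assumptions (the official code is at the repository linked above). Each frame is divided into a uniform grid, one raw-resolution mini-patch is sampled per grid cell, and the mini-patches are spliced into a small "fragment" whose patch positions are held fixed across time to keep the clip temporally aligned:

```python
import numpy as np

def sample_fragments(video, grid=7, patch=32, rng=None):
    """Illustrative grid mini-patch sampling (hypothetical sketch).

    video: (T, H, W, C) array with H, W >= grid * patch.
    Each frame is split into a grid x grid layout; one patch x patch
    mini-patch is taken per cell at raw resolution, and all cells are
    spliced into a (T, grid*patch, grid*patch, C) fragment. Offsets are
    drawn once and shared by every frame, keeping time aligned.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, H, W, C = video.shape
    ch, cw = H // grid, W // grid  # raw-resolution grid cell size
    out = np.empty((T, grid * patch, grid * patch, C), video.dtype)
    for i in range(grid):
        for j in range(grid):
            # one random top-left offset per cell, fixed across frames
            y = i * ch + rng.integers(0, max(ch - patch, 0) + 1)
            x = j * cw + rng.integers(0, max(cw - patch, 0) + 1)
            out[:, i * patch:(i + 1) * patch,
                   j * patch:(j + 1) * patch] = video[:, y:y + patch,
                                                         x:x + patch]
    return out
```

Note how the cost becomes independent of input resolution: a 1080p clip and a 4K clip both reduce to a 224x224-per-frame fragment (with the assumed defaults grid=7, patch=32), while each mini-patch still preserves raw-resolution texture for local quality.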

Code Repositories

timothyhtimothy/fast-vqa-and-fastervqa (PyTorch)
timothyhtimothy/fast-vqa (official, PyTorch)
VQAssessment/FAST-VQA-and-FasterVQA (PyTorch)

Benchmarks

| Benchmark (dataset)             | Method                              | Metrics                                                    |
|---------------------------------|-------------------------------------|------------------------------------------------------------|
| KoNViD-1k                       | FAST-VQA (trained on LSVQ only)     | PLCC: 0.855                                                |
| KoNViD-1k                       | FAST-VQA (finetuned on KoNViD-1k)   | PLCC: 0.892                                                |
| LIVE-FB LSVQ                    | FAST-VQA                            | PLCC: 0.877                                                |
| LIVE-VQC                        | FAST-VQA (trained on LSVQ only)     | PLCC: 0.844                                                |
| LIVE-VQC                        | FAST-VQA (finetuned on LIVE-VQC)    | PLCC: 0.862                                                |
| MSU Video Quality               | FasterVQA                           | KLCC: 0.5645, PLCC: 0.8087, SRCC: 0.7508, Type: NR         |
| MSU Video Quality               | FAST-VQA                            | KLCC: 0.6498, PLCC: 0.8613, SRCC: 0.8308, Type: NR         |
| YouTube-UGC                     | FAST-VQA (trained on LSVQ only)     | PLCC: 0.748                                                |
| YouTube-UGC                     | FAST-VQA (finetuned on YouTube-UGC) | PLCC: 0.852                                                |
