VinVL: Revisiting Visual Representations in Vision-Language Models

Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, Jianfeng Gao

Abstract

This paper presents a detailed study of improving visual representations for vision-language (VL) tasks and develops an improved object detection model to provide object-centric representations of images. Compared to the most widely used bottom-up and top-down model (Anderson et al., 2018), the new model is bigger, better designed for VL tasks, and pre-trained on much larger training corpora that combine multiple public annotated object detection datasets. Therefore, it can generate representations of a richer collection of visual objects and concepts. While previous VL research focuses mainly on improving the vision-language fusion model and leaves the object detection model untouched, we show that visual features matter significantly in VL models. In our experiments we feed the visual features generated by the new object detection model into a Transformer-based VL fusion model, OSCAR (Li et al., 2020), and utilize an improved approach, OSCAR+, to pre-train the VL model and fine-tune it on a wide range of downstream VL tasks. Our results show that the new visual features significantly improve performance across all VL tasks, creating new state-of-the-art results on seven public benchmarks. We will release the new object detection model to the public.
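The pipeline the abstract describes, region features from an object detector fused with text by a Transformer, can be sketched roughly as follows. This is a minimal illustration of the OSCAR-style input construction (caption tokens, object tags, and region features concatenated into one sequence); the function names, embedding tables, and dimensions here are illustrative assumptions, not the released API.

```python
# Illustrative sketch (not the official VinVL/OSCAR code): the fusion
# Transformer consumes word tokens, detected object tags, and region
# features as a single input sequence. Dimensions are assumptions.
import numpy as np

def build_vl_input(caption_tokens, object_tags, region_features, hidden=768):
    """Concatenate text, tag, and visual segments into one input sequence.

    caption_tokens:  list of token ids for the caption/question
    object_tags:     list of token ids for the detected object classes
    region_features: (num_regions, feat_dim) array from the detector
    """
    rng = np.random.default_rng(0)
    # Hypothetical embedding table and projection (random stand-ins for
    # learned weights in the real model).
    vocab_embed = rng.standard_normal((30522, hidden)) * 0.02
    visual_proj = rng.standard_normal((region_features.shape[1], hidden)) * 0.02

    text_seg = vocab_embed[caption_tokens]       # (T, hidden)
    tag_seg = vocab_embed[object_tags]           # (K, hidden)
    vis_seg = region_features @ visual_proj      # (R, hidden)

    # One sequence [caption ; tags ; regions] is fed to the Transformer.
    return np.concatenate([text_seg, tag_seg, vis_seg], axis=0)

# Example: 5 caption tokens, 3 object tags, 10 regions of 2054-d features.
feats = np.zeros((10, 2054))
seq = build_vl_input([101, 2023, 2003, 1037, 102], [3899, 4937, 2482], feats)
print(seq.shape)  # (18, 768)
```

The point of the paper is that improving the detector that produces `region_features` (bigger backbone, larger detection corpora, richer object vocabulary) improves every downstream task, even with the fusion model held fixed.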

Code Repositories

- microsoft/Oscar (PyTorch)
- cattidea/VinVL-Paddle (Paddle)
- mkhalil1998/EC601_Group_Project (PyTorch)
- pzzhang/VinVL (official)
- yaolinli/capenrich (PyTorch)
- JoshuaPlacidi/MS-COCO-Tags

Benchmarks

Benchmark (Methodology): Metrics

- image-captioning-on-coco-captions (VinVL): BLEU-4: 41.0, CIDEr: 140.9, METEOR: 31.1, SPICE: 25.2
- image-captioning-on-nocaps-entire (VinVL, Microsoft Cognitive Services + MSR): B1: 81.59, B2: 65.15, B3: 45.04, B4: 26.15, CIDEr: 92.46, METEOR: 27.57, ROUGE-L: 56.96, SPICE: 13.07
- image-captioning-on-nocaps-in-domain (VinVL, Microsoft Cognitive Services + MSR): B1: 83.24, B2: 68.04, B3: 49.68, B4: 30.62, CIDEr: 97.99, METEOR: 29.51, ROUGE-L: 58.54, SPICE: 13.63
- image-captioning-on-nocaps-near-domain (VinVL, Microsoft Cognitive Services + MSR): B1: 82.77, B2: 66.94, B3: 47.02, B4: 27.97, CIDEr: 95.16, METEOR: 28.24, ROUGE-L: 57.95, SPICE: 13.36
- image-captioning-on-nocaps-out-of-domain (VinVL, Microsoft Cognitive Services + MSR): B1: 75.78, B2: 56.1, B3: 34.02, B4: 15.86, CIDEr: 78.01, METEOR: 23.55, ROUGE-L: 51.99, SPICE: 11.48
- image-captioning-on-nocaps-val-in-domain (VinVL): CIDEr: 103.1, Pre-train (#images): 5.7M, SPICE: 14.2
- image-captioning-on-nocaps-val-near-domain (VinVL): CIDEr: 96.1, Pre-train (#images): 5.7M, SPICE: 13.8
- image-captioning-on-nocaps-val-out-domain (VinVL): CIDEr: 88.3, Pre-train (#images): 5.7M, SPICE: 12.1
- image-captioning-on-nocaps-val-overall (VinVL): CIDEr: 95.5, Pre-train (#images): 5.7M, SPICE: 13.5
- image-text-matching-on-commercialadsdataset (VinVL): ADD(S) AUC: 88.56
- visual-question-answering-on-gqa-test2019 (Single Model): Accuracy: 64.65, Binary: 82.63, Consistency: 94.35, Distribution: 4.72, Open: 48.77, Plausibility: 84.98, Validity: 96.62
- visual-question-answering-on-vqa-v2-test-std (MSR + MS Cog. Svcs.): number: 61.5, other: 66.68, overall: 76.63, yes/no: 92.04
- visual-question-answering-on-vqa-v2-test-std (MSR + MS Cog. Svcs., X10 models): number: 62.55, other: 67.87, overall: 77.45, yes/no: 92.38
