ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision

Wonjae Kim, Bokyung Son, Ildoo Kim

Abstract

Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP rely heavily on image feature extraction processes, most of which involve region supervision (e.g., object detection) and convolutional architectures (e.g., ResNet). Although disregarded in the literature, we find this problematic in terms of both (1) efficiency/speed, in that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded by the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to the same convolution-free manner in which we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet achieves competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt.
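
To make the convolution-free pipeline concrete, here is a minimal PyTorch sketch of ViLT's single-stream design: word embeddings and linearly projected image patches, each tagged with a modality-type embedding, are concatenated and fed through one shared transformer. Dimensions, module names, and the 384x384 image / 32-pixel-patch setup are illustrative assumptions; the [class] token and pre-training heads are omitted (see the official repository for the actual implementation).

```python
# Minimal sketch of ViLT's single-stream, convolution-free input pipeline.
# Dimensions and module names are illustrative, not the authors' exact code.
import torch
import torch.nn as nn

class ViLTSketch(nn.Module):
    def __init__(self, vocab_size=30522, dim=768, patch=32, img_size=384,
                 depth=12, heads=12, max_text_len=40):
        super().__init__()
        num_patches = (img_size // patch) ** 2
        self.patch = patch
        # Text side: standard BERT-style word embeddings.
        self.word_emb = nn.Embedding(vocab_size, dim)
        # Image side: a single linear projection of flattened patches
        # replaces any CNN backbone or region detector.
        self.patch_emb = nn.Linear(3 * patch * patch, dim)
        # Learned position and modality-type embeddings.
        self.text_pos = nn.Parameter(torch.zeros(1, max_text_len, dim))
        self.img_pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.type_emb = nn.Embedding(2, dim)  # 0 = text, 1 = image
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, token_ids, images):
        # token_ids: (B, L) with L <= max_text_len; images: (B, 3, 384, 384).
        B, p = images.size(0), self.patch
        t = self.word_emb(token_ids) + self.text_pos[:, :token_ids.size(1)]
        t = t + self.type_emb(torch.zeros_like(token_ids))
        # Flatten non-overlapping patches: (B, 3, H, W) -> (B, N, 3*p*p).
        patches = images.unfold(2, p, p).unfold(3, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, 3 * p * p)
        v = self.patch_emb(patches) + self.img_pos
        v = v + self.type_emb(torch.ones(B, v.size(1), dtype=torch.long,
                                         device=images.device))
        # One shared transformer models all multimodal interaction.
        return self.encoder(torch.cat([t, v], dim=1))
```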

Code Repositories

guilk/vlc (PyTorch)
glamor-usc/climb (PyTorch)
huggingface/transformers (PyTorch; see the usage example after this list)
dandelin/vilt (official, PyTorch)
wglab/gestaltmml (PyTorch)
wglab/gestaltmml-gestaltgpt (PyTorch)

Benchmarks

Benchmark (Methodology): Metrics

cross-modal-retrieval-on-coco-2014 (ViLT-B/32)
    Image-to-text: R@1 61.5, R@5 86.3, R@10 92.7
    Text-to-image: R@1 42.7, R@5 72.9, R@10 83.1

cross-modal-retrieval-on-flickr30k (ViLT-B/32)
    Image-to-text: R@1 83.5, R@5 96.7, R@10 98.6
    Text-to-image: R@1 64.4, R@5 88.7, R@10 93.8

image-retrieval-on-photochat (ViLT)
    R@1 11.5, R@5 25.6, R@10 33.8, Sum(R@1,5,10) 71.0

multimodal-intent-recognition-on-mmdialog (ViLT)
    F1 55.8

multimodal-intent-recognition-on-photochat (ViLT)
    F1 52.4, Precision 55.4, Recall 58.9

visual-question-answering-on-vqa-v2-test-dev (ViLT-B/32)
    Accuracy 71.26

visual-reasoning-on-nlvr2-dev (ViLT-B/32)
    Accuracy 75.7

visual-reasoning-on-nlvr2-test (ViLT-B/32)
    Accuracy 76.13

zero-shot-cross-modal-retrieval-on-coco-2014 (ViLT-B/32)
    Image-to-text: R@1 56.5, R@5 82.6, R@10 89.6
    Text-to-image: R@1 40.4, R@5 70.0, R@10 81.1

zero-shot-cross-modal-retrieval-on-flickr30k (ViLT-B/32)
    Image-to-text: R@1 73.2, R@5 93.6, R@10 96.5
    Text-to-image: R@1 55.0, R@5 82.5, R@10 89.8
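
For reference, the retrieval rows above report Recall@K (R@K): the percentage of queries whose ground-truth match appears among the top K retrieved candidates. A minimal sketch, assuming a precomputed similarity matrix with one matching caption per image (COCO and Flickr30K actually provide five captions per image, and a hit on any of them counts):

```python
import torch

def recall_at_k(sim: torch.Tensor, k: int) -> float:
    """sim: (N, N) image-to-text similarities; row i's true caption is column i."""
    topk = sim.topk(k, dim=1).indices                 # (N, k) retrieved candidates
    targets = torch.arange(sim.size(0)).unsqueeze(1)  # (N, 1) ground-truth ids
    hits = (topk == targets).any(dim=1)               # did each query hit in top k?
    return 100.0 * hits.float().mean().item()

sim = torch.randn(1000, 1000)  # placeholder similarity scores
for k in (1, 5, 10):
    print(f"R@{k}: {recall_at_k(sim, k):.1f}")
```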
