Align before Fuse: Vision and Language Representation Learning with Momentum Distillation

Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Steven Hoi


Abstract

Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks. Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens. Because the visual tokens and word tokens are unaligned, it is challenging for the multimodal encoder to learn image-text interactions. In this paper, we introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention, which enables more grounded vision and language representation learning. Unlike most existing methods, our method requires neither bounding box annotations nor high-resolution images. In order to improve learning from noisy web data, we propose momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model. We provide a theoretical analysis of ALBEF from a mutual information maximization perspective, showing that different training tasks can be interpreted as different ways to generate views for an image-text pair. ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained on datasets orders of magnitude larger. On VQA and NLVR$^2$, ALBEF achieves absolute improvements of 2.37% and 3.84% over the state-of-the-art, while enjoying faster inference speed. Code and pre-trained models are available at https://github.com/salesforce/ALBEF/.
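The two ideas the abstract highlights, contrastive alignment before fusion and momentum distillation, can be sketched compactly. The following is an illustrative NumPy sketch, not the official implementation (which is in PyTorch at the repository above): the image-text contrastive (ITC) loss aligns unimodal features in both retrieval directions, and momentum distillation mixes the usual one-hot matching targets with soft pseudo-targets produced by an EMA ("momentum") copy of the encoders. Function names, the fixed temperature, and the default weights here are my assumptions for illustration; the paper's exact hyperparameter schedule differs.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def itc_loss_with_distillation(img_f, txt_f, img_f_m, txt_f_m,
                               temp=0.07, alpha=0.4):
    """Image-text contrastive loss with momentum-distilled soft targets.

    img_f, txt_f:     L2-normalized features from the online encoders, shape (n, d)
    img_f_m, txt_f_m: features from the momentum (EMA) encoders, shape (n, d)
    alpha:            weight on the momentum model's soft pseudo-targets
    """
    n = img_f.shape[0]
    hard = np.eye(n)  # one-hot targets: the i-th image matches the i-th text
    # similarity logits for both retrieval directions
    sim_i2t = img_f @ txt_f.T / temp
    sim_t2i = txt_f @ img_f.T / temp
    # soft pseudo-targets from the momentum encoders (treated as constants,
    # i.e. no gradient would flow through them in a real training loop)
    tgt_i2t = alpha * softmax(img_f_m @ txt_f_m.T / temp) + (1 - alpha) * hard
    tgt_t2i = alpha * softmax(txt_f_m @ img_f_m.T / temp) + (1 - alpha) * hard
    # cross-entropy between (soft) targets and the model's predictions
    ce = lambda tgt, sim: -(tgt * np.log(softmax(sim))).sum(axis=1).mean()
    return 0.5 * (ce(tgt_i2t, sim_i2t) + ce(tgt_t2i, sim_t2i))

def ema_update(params, params_m, m=0.995):
    """One momentum-model step: params_m <- m * params_m + (1 - m) * params."""
    return [m * pm + (1 - m) * p for p, pm in zip(params, params_m)]
```

With `alpha=0` this reduces to a standard InfoNCE-style contrastive loss over in-batch negatives; raising `alpha` lets the momentum model's similarity distribution supervise the online model, which is what makes training more robust to noisy web captions.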

Code Repositories

- salesforce/lavis (official, PyTorch)
- salesforce/ALBEF (PyTorch)
- salesforce/pb-ovd (PyTorch)
- amazon-research/mix-generation (PyTorch)
- yuliangcai2022/clumo (PyTorch)
- facebookresearch/multimodal (PyTorch)

Benchmarks

Cross-modal retrieval on COCO 2014 (ALBEF)
  Image-to-text: R@1 77.6, R@5 94.3, R@10 97.2
  Text-to-image: R@1 60.7, R@5 84.3, R@10 90.5

Image-text matching on CommercialAdsDataset (ALBEF)
  ADD(S) AUC: 82.74

Image-to-text retrieval on Flickr30k (ALBEF)
  R@1 95.9, R@5 99.8, R@10 100.0

Open-vocabulary attribute detection on OVAD (ALBEF)
  Mean average precision: 21.0

Visual question answering on VQA v2 test-dev (ALBEF, 14M)
  Accuracy: 75.84

Visual question answering on VQA v2 test-std (ALBEF, 14M)
  Overall accuracy: 76.04

Visual reasoning on NLVR2 dev (ALBEF, 14M)
  Accuracy: 83.14

Visual reasoning on NLVR2 test (ALBEF, 14M)
  Accuracy: 82.55

Zero-shot cross-modal retrieval on COCO 2014 (ALBEF)
  Image-to-text: R@1 68.7, R@5 89.5, R@10 94.7
  Text-to-image: R@1 50.1, R@5 76.4, R@10 84.5

Zero-shot cross-modal retrieval on Flickr30k (ALBEF)
  Image-to-text: R@1 90.5, R@5 98.8, R@10 99.7
  Text-to-image: R@1 76.8, R@5 93.7, R@10 96.7
