Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Steven Hoi

Abstract
Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks. Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens. Because the visual tokens and word tokens are unaligned, it is challenging for the multimodal encoder to learn image-text interactions. In this paper, we introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention, which enables more grounded vision and language representation learning. Unlike most existing methods, our method requires neither bounding box annotations nor high-resolution images. To improve learning from noisy web data, we propose momentum distillation, a self-training method that learns from pseudo-targets produced by a momentum model. We provide a theoretical analysis of ALBEF from a mutual information maximization perspective, showing that different training tasks can be interpreted as different ways to generate views for an image-text pair. ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained on orders-of-magnitude larger datasets. On VQA and NLVR$^2$, ALBEF achieves absolute improvements of 2.37% and 3.84% over the state-of-the-art, while enjoying faster inference speed. Code and pre-trained models are available at https://github.com/salesforce/ALBEF/.
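The two core ideas described above, aligning unimodal representations with an image-text contrastive (ITC) loss before fusion and softening that loss with pseudo-targets from a momentum model, can be sketched in a few lines of PyTorch. The sketch below is illustrative, not the paper's implementation: the encoder interfaces, the fixed distillation weight `alpha`, and the hyperparameter values are assumptions, and it omits ALBEF's momentum queues of negatives, the warm-up schedule for `alpha`, and the masked-language-modeling and image-text-matching objectives that use the same distillation recipe.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignBeforeFuseSketch(nn.Module):
    """Minimal sketch of ALBEF-style ITC loss with momentum distillation.

    `image_encoder` and `text_encoder` are assumed to map a batch to one
    embedding per sample; both are hypothetical stand-ins here.
    """

    def __init__(self, image_encoder, text_encoder,
                 temperature=0.07, momentum=0.995, alpha=0.4):
        super().__init__()
        self.image_encoder = image_encoder
        self.text_encoder = text_encoder
        self.temp = temperature
        self.momentum = momentum
        self.alpha = alpha  # weight on the momentum model's pseudo-targets
        # EMA (momentum) copies of the encoders produce the pseudo-targets.
        self.image_encoder_m = copy.deepcopy(image_encoder)
        self.text_encoder_m = copy.deepcopy(text_encoder)
        for m in (self.image_encoder_m, self.text_encoder_m):
            for p in m.parameters():
                p.requires_grad = False

    @torch.no_grad()
    def _ema_update(self):
        # Slowly track the base encoders: p_m <- m * p_m + (1 - m) * p
        for base, ema in [(self.image_encoder, self.image_encoder_m),
                          (self.text_encoder, self.text_encoder_m)]:
            for p, p_m in zip(base.parameters(), ema.parameters()):
                p_m.data = p_m.data * self.momentum + p.data * (1 - self.momentum)

    def forward(self, images, texts):
        # Unimodal embeddings, L2-normalized before the similarity dot product.
        img_feat = F.normalize(self.image_encoder(images), dim=-1)
        txt_feat = F.normalize(self.text_encoder(texts), dim=-1)

        with torch.no_grad():
            self._ema_update()
            img_feat_m = F.normalize(self.image_encoder_m(images), dim=-1)
            txt_feat_m = F.normalize(self.text_encoder_m(texts), dim=-1)
            # Soft pseudo-targets: interpolate one-hot labels with the
            # momentum model's similarity distribution.
            sim_i2t_m = img_feat_m @ txt_feat_m.t() / self.temp
            sim_t2i_m = txt_feat_m @ img_feat_m.t() / self.temp
            onehot = torch.eye(img_feat.size(0), device=img_feat.device)
            tgt_i2t = self.alpha * F.softmax(sim_i2t_m, 1) + (1 - self.alpha) * onehot
            tgt_t2i = self.alpha * F.softmax(sim_t2i_m, 1) + (1 - self.alpha) * onehot

        sim_i2t = img_feat @ txt_feat.t() / self.temp
        sim_t2i = txt_feat @ img_feat.t() / self.temp
        # Cross-entropy against the soft targets, symmetrized over directions.
        loss_i2t = -(F.log_softmax(sim_i2t, 1) * tgt_i2t).sum(1).mean()
        loss_t2i = -(F.log_softmax(sim_t2i, 1) * tgt_t2i).sum(1).mean()
        return (loss_i2t + loss_t2i) / 2
```

The momentum model is just a slowly moving exponential average of the base encoders, so its similarity distribution gives stabler, softer targets than the noisy one-hot labels of web-crawled pairs; with `alpha = 0` the sketch reduces to a standard in-batch InfoNCE-style contrastive loss, which is where the paper's mutual-information-maximization view applies.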
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| Cross-modal retrieval on COCO 2014 | ALBEF | Image-to-text R@1: 77.6, R@5: 94.3, R@10: 97.2; Text-to-image R@1: 60.7, R@5: 84.3, R@10: 90.5 |
| Image-text matching on CommercialAdsDataset | ALBEF | ADD(S) AUC: 82.74 |
| Image-to-text retrieval on Flickr30k | ALBEF | R@1: 95.9, R@5: 99.8, R@10: 100.0 |
| Open-vocabulary attribute detection on OVAD | ALBEF | Mean average precision: 21.0 |
| VQA v2 (test-dev) | ALBEF (14M) | Accuracy: 75.84 |
| VQA v2 (test-std) | ALBEF (14M) | Overall accuracy: 76.04 |
| Visual reasoning on NLVR$^2$ (dev) | ALBEF (14M) | Accuracy: 83.14 |
| Visual reasoning on NLVR$^2$ (test) | ALBEF (14M) | Accuracy: 82.55 |
| Zero-shot cross-modal retrieval on COCO 2014 | ALBEF | Image-to-text R@1: 68.7, R@5: 89.5, R@10: 94.7; Text-to-image R@1: 50.1, R@5: 76.4, R@10: 84.5 |
| Zero-shot cross-modal retrieval on Flickr30k | ALBEF | Image-to-text R@1: 90.5, R@5: 98.8, R@10: 99.7; Text-to-image R@1: 76.8, R@5: 93.7, R@10: 96.7 |