Vision-Language Pre-Training with Triple Contrastive Learning

Jinyu Yang; Jiali Duan; Son Tran; Yi Xu; Sampath Chanda; Liqun Chen; Belinda Zeng; Trishul Chilimbi; Junzhou Huang

Abstract

Vision-language representation learning largely benefits from image-text alignment through contrastive losses (e.g., InfoNCE loss). The success of this alignment strategy is attributed to its capability to maximize the mutual information (MI) between an image and its matched text. However, simply performing cross-modal alignment (CMA) ignores the potential of data within each modality, which may result in degraded representations. For instance, although CMA-based models are able to map image-text pairs close together in the embedding space, they fail to ensure that similar inputs from the same modality stay close by. This problem can get even worse when the pre-training data is noisy. In this paper, we propose triple contrastive learning (TCL) for vision-language pre-training by leveraging both cross-modal and intra-modal self-supervision. Besides CMA, TCL introduces an intra-modal contrastive objective to provide complementary benefits in representation learning. To take advantage of localized and structural information from image and text input, TCL further maximizes the average MI between local regions of image/text and their global summary. To the best of our knowledge, ours is the first work that takes into account local structure information for multi-modality representation learning. Experimental evaluations show that our approach is competitive and achieves the new state of the art on various common downstream vision-language tasks such as image-text retrieval and visual question answering.
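The cross-modal alignment (CMA) objective described above is a symmetric InfoNCE loss over image and text embeddings. The sketch below is a minimal NumPy illustration of that idea, not the paper's implementation: the function name, the temperature value, and the assumption of L2-normalized embeddings with matched pairs on the diagonal are all illustrative.

```python
import numpy as np

def info_nce(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss for a batch of matched image-text pairs.

    Row i of `image_emb` is assumed to match row i of `text_emb`.
    Illustrative sketch; not the official TCL code.
    """
    # L2-normalize so the dot product is cosine similarity.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by temperature.
    logits = img @ txt.T / temperature
    n = logits.shape[0]
    labels = np.arange(n)  # matched pair sits on the diagonal

    def cross_entropy(l):
        # Numerically stable log-softmax over each row.
        l = l - l.max(axis=1, keepdims=True)
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

The intra-modal objective in TCL follows the same contrastive form, but contrasts two augmented views drawn from the same modality instead of an image with its caption.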

Code Repositories

uta-smile/TCL (Official, PyTorch)

Benchmarks

Benchmark: cross-modal-retrieval-on-coco-2014 (Methodology: TCL)
    Image-to-text   R@1: 75.6   R@5: 92.8   R@10: 96.7
    Text-to-image   R@1: 59.0   R@5: 83.2   R@10: 89.9

Benchmark: zero-shot-cross-modal-retrieval-on-coco-2014 (Methodology: TCL)
    Image-to-text   R@1: 71.4   R@5: 90.8   R@10: 95.4
    Text-to-image   R@1: 53.5   R@5: 79.0   R@10: 87.1
