Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training
Gen Li; Nan Duan; Yuejian Fang; Ming Gong; Daxin Jiang; Ming Zhou

Abstract
We propose Unicoder-VL, a universal encoder that aims to learn joint representations of vision and language in a pre-training manner. Borrowing ideas from cross-lingual pre-trained models such as XLM and Unicoder, we feed both visual and linguistic content into a multi-layer Transformer for cross-modal pre-training, where three pre-training tasks are employed: Masked Language Modeling (MLM), Masked Object Classification (MOC), and Visual-linguistic Matching (VLM). The first two tasks learn context-aware representations for input tokens based on linguistic and visual content jointly. The last task tries to predict whether an image and a text describe each other. After pre-training on large-scale image-caption pairs, we transfer Unicoder-VL to caption-based image-text retrieval and visual commonsense reasoning, with just one additional output layer. We achieve state-of-the-art or comparable results on both tasks and show the powerful ability of cross-modal pre-training.
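To make the three objectives concrete, below is a minimal, hedged sketch (not the authors' implementation) of a single-stream Transformer encoder over concatenated word embeddings and image-region features, with one head per pre-training task. All module names, dimensions, and the random targets are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of the MLM, MOC, and VLM pre-training objectives.
# Hypothetical names and hyper-parameters; targets are random placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalEncoder(nn.Module):
    def __init__(self, vocab_size=30522, num_objects=1600, hidden=768,
                 region_feat_dim=2048, layers=12, heads=12):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, hidden)
        # Project detector-style region features into the Transformer width.
        self.region_proj = nn.Linear(region_feat_dim, hidden)
        layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        # One lightweight head per pre-training task.
        self.mlm_head = nn.Linear(hidden, vocab_size)   # Masked Language Modeling
        self.moc_head = nn.Linear(hidden, num_objects)  # Masked Object Classification
        self.vlm_head = nn.Linear(hidden, 2)            # Visual-linguistic Matching

    def forward(self, token_ids, region_feats):
        text = self.word_emb(token_ids)          # (B, T, H)
        vision = self.region_proj(region_feats)  # (B, R, H)
        seq = torch.cat([text, vision], dim=1)   # joint sequence (B, T+R, H)
        return self.encoder(seq)

# Toy batch: 2 image-caption pairs, 16 text tokens, 36 regions (small model for speed).
model = CrossModalEncoder(layers=2)
token_ids = torch.randint(0, 30522, (2, 16))
region_feats = torch.randn(2, 36, 2048)
hidden = model(token_ids, region_feats)

# MLM: predict masked word ids from contextualized text positions.
mlm_logits = model.mlm_head(hidden[:, :16])
mlm_loss = F.cross_entropy(mlm_logits.reshape(-1, 30522),
                           torch.randint(0, 30522, (2 * 16,)))

# MOC: predict object categories for masked region positions.
moc_logits = model.moc_head(hidden[:, 16:])
moc_loss = F.cross_entropy(moc_logits.reshape(-1, 1600),
                           torch.randint(0, 1600, (2 * 36,)))

# VLM: classify whether the image and caption match, from the first token state.
vlm_logits = model.vlm_head(hidden[:, 0])
vlm_loss = F.cross_entropy(vlm_logits, torch.tensor([1, 0]))

loss = mlm_loss + moc_loss + vlm_loss
```

In this sketch the matching head reuses the first token's hidden state as a pooled representation; the downstream retrieval and reasoning tasks mentioned in the abstract would similarly attach just one additional output layer on top of the shared encoder.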
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| Image-Text Matching on CommercialAdsDataset | Unicoder-VL | ADD(S) AUC: 83.16 |
| Image-to-Text Retrieval on COCO | Unicoder-VL | Recall@10: 97.2 |