Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training

Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, Ming Zhou

Abstract

We propose Unicoder-VL, a universal encoder that aims to learn joint representations of vision and language in a pre-training manner. Borrowing ideas from cross-lingual pre-trained models such as XLM and Unicoder, we feed both visual and linguistic content into a multi-layer Transformer for cross-modal pre-training, where three pre-training tasks are employed: Masked Language Modeling (MLM), Masked Object Classification (MOC), and Visual-linguistic Matching (VLM). The first two tasks learn context-aware representations for input tokens based jointly on linguistic and visual content. The last task predicts whether an image and a text describe each other. After pre-training on large-scale image-caption pairs, we transfer Unicoder-VL to caption-based image-text retrieval and visual commonsense reasoning with just one additional output layer. We achieve state-of-the-art or comparable results on both tasks, demonstrating the power of cross-modal pre-training.
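
To make the architecture concrete, the PyTorch snippet below is a minimal sketch of how the three pre-training heads could sit on top of a shared multi-layer Transformer over a concatenated text-plus-region sequence. All dimensions, class counts, and names (e.g., `UnicoderVLSketch`, `region_feat_dim`, `num_object_classes`) are illustrative assumptions, not the paper's exact settings.

```python
# Sketch only: illustrative dimensions, not the paper's configuration.
import torch
import torch.nn as nn

class UnicoderVLSketch(nn.Module):
    def __init__(self, vocab_size=30522, num_object_classes=1600,
                 hidden=768, layers=12, heads=12, region_feat_dim=2048):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden)
        # Project detected-region visual features into the shared hidden space.
        self.region_proj = nn.Linear(region_feat_dim, hidden)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        # The three pre-training heads described in the abstract.
        self.mlm_head = nn.Linear(hidden, vocab_size)          # Masked Language Modeling
        self.moc_head = nn.Linear(hidden, num_object_classes)  # Masked Object Classification
        self.vlm_head = nn.Linear(hidden, 2)                   # Visual-linguistic Matching

    def forward(self, token_ids, region_feats):
        # Concatenate linguistic and visual inputs into one joint sequence.
        x = torch.cat([self.token_emb(token_ids),
                       self.region_proj(region_feats)], dim=1)
        h = self.encoder(x)
        n_text = token_ids.size(1)
        text_h, region_h = h[:, :n_text], h[:, n_text:]
        return (self.mlm_head(text_h),    # per-token vocabulary logits (MLM)
                self.moc_head(region_h),  # per-region object-class logits (MOC)
                self.vlm_head(h[:, 0]))   # match/no-match logits from the first
                                          # position (an assumed [CLS]-style pooling)

# Usage with random stand-ins: a batch of 2 captions (16 tokens each)
# and 36 detected regions with 2048-d features per image.
model = UnicoderVLSketch()
mlm_logits, moc_logits, vlm_logits = model(
    torch.randint(0, 30522, (2, 16)), torch.randn(2, 36, 2048))
print(mlm_logits.shape, moc_logits.shape, vlm_logits.shape)
```

Note the single shared encoder: only the lightweight output heads differ per task, which is what lets the pre-trained encoder transfer to retrieval or reasoning with just one additional output layer.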

Benchmarks

Benchmark                                      Methodology    Metrics
image-text-matching-on-commercialadsdataset   Unicoder-VL    ADD(S) AUC: 83.16
image-to-text-retrieval-on-coco               Unicoder-VL    Recall@10: 97.2
