Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark

Abstract

Vision-Language Pre-training (VLP) models have shown remarkable performance on various downstream tasks. Their success heavily relies on the scale of pre-trained cross-modal datasets. However, the lack of large-scale datasets and benchmarks in Chinese hinders the development of Chinese VLP models and broader multilingual applications. In this work, we release a large-scale Chinese cross-modal dataset named Wukong, which contains 100 million Chinese image-text pairs collected from the web. Wukong aims to benchmark different multi-modal pre-training methods to facilitate VLP research and community development. Furthermore, we release a group of models pre-trained with various image encoders (ViT-B/ViT-L/SwinT) and also apply advanced pre-training techniques to VLP, such as locked-image text tuning, token-wise similarity in contrastive learning, and reduced-token interaction. Extensive experiments and a benchmark of different downstream tasks, including a new, currently the largest, human-verified image-text test set, are also provided. Experiments show that Wukong can serve as a promising Chinese pre-training dataset and benchmark for different cross-modal learning methods. For the zero-shot image classification task on 10 datasets, $Wukong_{ViT-L}$ achieves an average accuracy of 73.03%. For the image-text retrieval task, it achieves a mean recall of 71.6% on AIC-ICC, which is 12.9% higher than WenLan 2.0. Our Wukong models are also benchmarked against other variants on downstream tasks over multiple datasets, e.g., Flickr8K-CN, Flickr30K-CN, and COCO-CN. More information is available at: https://wukong-dataset.github.io/wukong-dataset/.
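
Among the techniques listed in the abstract, token-wise similarity in contrastive learning (FILIP-style late interaction) replaces the single global image-text dot product with a fine-grained matching over patch and word tokens. The PyTorch sketch below illustrates the idea only; the function names, tensor shapes, and temperature value are illustrative assumptions, not the released Wukong implementation.

```python
import torch
import torch.nn.functional as F


def token_wise_similarity(image_tokens, text_tokens, text_mask):
    """Token-wise (late-interaction) similarity sketch.

    image_tokens: (B, Ni, D) patch-token embeddings from the image encoder
    text_tokens:  (B, Nt, D) token embeddings from the text encoder
    text_mask:    (B, Nt)    1 for real text tokens, 0 for padding
    Returns two (B, B) similarity matrices (image-to-text, text-to-image).
    """
    image_tokens = F.normalize(image_tokens, dim=-1)
    text_tokens = F.normalize(text_tokens, dim=-1)

    # All pairwise token similarities between every image and every text:
    # shape (B_img, B_txt, Ni, Nt).
    sim = torch.einsum("ind,jmd->ijnm", image_tokens, text_tokens)

    # Image-to-text: each image patch attends to its best-matching text token,
    # then scores are averaged over patches.
    i2t = sim.max(dim=-1).values.mean(dim=-1)  # (B, B)

    # Text-to-image: each text token attends to its best-matching patch,
    # then scores are averaged over non-padding text tokens.
    t2i = sim.max(dim=-2).values               # (B_img, B_txt, Nt)
    mask = text_mask.unsqueeze(0).float()      # (1, B_txt, Nt)
    t2i = (t2i * mask).sum(dim=-1) / mask.sum(dim=-1).clamp(min=1)

    return i2t, t2i


def contrastive_loss(i2t, t2i, temperature=0.07):
    """Symmetric in-batch InfoNCE loss over the token-wise similarities."""
    labels = torch.arange(i2t.size(0), device=i2t.device)
    loss_i = F.cross_entropy(i2t / temperature, labels)
    loss_t = F.cross_entropy(t2i.t() / temperature, labels)
    return (loss_i + loss_t) / 2
```

In locked-image text tuning, the same loss would be used while keeping the image encoder frozen and updating only the text tower.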

Benchmarks

Benchmark | Methodology | R@1 | R@5 | R@10 | Mean Recall
Image Retrieval on COCO-CN | Wukong (ViT-L/14) | 74.0 | 94.4 | 98.1 | -
Image Retrieval on COCO-CN | Wukong (ViT-B/32) | 67.0 | 91.4 | 96.7 | -
Image Retrieval on Flickr30K-CN | Wukong (ViT-B/32) | 67.6 | 89.6 | 94.2 | -
Image Retrieval on Flickr30K-CN | Wukong (ViT-L/14) | 77.4 | 94.5 | 97.0 | -
Image Retrieval on MUGE Retrieval | Wukong (ViT-L/14) | 52.7 | 77.9 | 85.6 | 72.1
Image Retrieval on MUGE Retrieval | Wukong (ViT-B/32) | 39.2 | 66.9 | 77.4 | 61.2
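
R@K in the table above is the fraction of queries whose ground-truth item appears among the top-K retrieved candidates, and Mean Recall averages R@1, R@5, and R@10. The sketch below shows one way to compute these from a query-gallery similarity matrix; it assumes a single ground-truth match per query, which simplifies how multi-caption benchmarks such as COCO-CN are typically evaluated.

```python
import numpy as np


def recall_at_k(similarity, ks=(1, 5, 10)):
    """Recall@K sketch for image-text retrieval.

    similarity: (N_queries, N_gallery) array; entry [q, g] scores gallery
    item g for query q. The ground-truth match of query q is assumed to be
    gallery item q (one correct item per query).
    """
    ranks = []
    for q, scores in enumerate(similarity):
        # Position of the ground-truth item in the descending ranking (0 = best).
        order = np.argsort(-scores)
        ranks.append(int(np.where(order == q)[0][0]))
    ranks = np.asarray(ranks)

    recalls = {f"R@{k}": 100.0 * float((ranks < k).mean()) for k in ks}
    recalls["Mean Recall"] = float(np.mean(list(recalls.values())))
    return recalls
```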
