Scaling Vision Transformers

Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, Lucas Beyer

Abstract

Attention-based neural networks such as the Vision Transformer (ViT) have recently attained state-of-the-art results on many computer vision benchmarks. Scale is a primary ingredient in attaining excellent results; therefore, understanding a model's scaling properties is key to designing future generations effectively. While scaling laws for Transformer language models have been studied, it is unknown how Vision Transformers scale. To address this, we scale ViT models and data, both up and down, and characterize the relationships between error rate, data, and compute. Along the way, we refine the architecture and training of ViT, reducing memory consumption and increasing the accuracy of the resulting models. As a result, we successfully train a ViT model with two billion parameters, which attains a new state of the art on ImageNet of 90.45% top-1 accuracy. The model also performs well for few-shot transfer, for example reaching 84.86% top-1 accuracy on ImageNet with only 10 examples per class.
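
The abstract's central quantitative claim is that error rate can be characterized as a function of data and compute, and this line of work summarizes such relationships with saturating power-law fits. The sketch below is a minimal illustration of fitting one such curve, not the paper's code: the (compute, error) pairs are hypothetical placeholders, and the functional form E(C) = a·C^(-b) + c is one common choice in which the additive constant c models the irreducible error that makes the curve flatten out.

```python
# Minimal sketch: fit a saturating power law E(C) = a * C^(-b) + c
# to hypothetical (compute, error-rate) pairs. The data points below
# are illustrative placeholders, not results from the paper.
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(compute, a, b, c):
    """Error rate as a function of compute; c is the irreducible error."""
    return a * compute ** (-b) + c

# Hypothetical measurements: total training compute (arbitrary units)
# and the corresponding top-1 error rate of each trained model.
compute = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
error = np.array([0.42, 0.30, 0.22, 0.17, 0.15])

params, _ = curve_fit(
    saturating_power_law, compute, error,
    p0=(1.0, 0.3, 0.1),  # rough initial guess for (a, b, c)
    maxfev=10_000,
)
a, b, c = params
print(f"fit: E(C) = {a:.3f} * C^(-{b:.3f}) + {c:.3f}")
print(f"estimated irreducible error: {c:.3f}")
```

The design point worth noting is the constant c: a plain power law would predict error going to zero with unbounded compute, whereas a saturating fit acknowledges a floor that more compute alone does not cross.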

Code Repositories

google-research/big_vision (official JAX implementation)

Benchmarks

Benchmark                               Methodology     Metric
image-classification-on-imagenet-real  ViT-G/14        Accuracy: 90.81%
image-classification-on-imagenet-v2    ViT-G/14        Top-1 Accuracy: 83.33%
image-classification-on-objectnet      NS (Eff.-L2)    Top-1 Accuracy: 68.5%
image-classification-on-objectnet      ViT-G/14        Top-1 Accuracy: 70.53%
image-classification-on-vtab-1k-1      ViT-G/14        Top-1 Accuracy: 78.29%
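
The 10-shot ImageNet number quoted in the abstract comes from few-shot linear evaluation on frozen features. A common recipe in this line of work, sketched below under the assumption that embeddings have already been extracted by a pretrained ViT, is a closed-form ridge regression from frozen features to one-hot labels; the array sizes and the random `train_x`/`test_x` inputs are illustrative stand-ins, not real data.

```python
# Minimal sketch of few-shot linear evaluation on frozen features.
# Assumes embeddings were already extracted by a pretrained ViT;
# the random arrays below stand in for real data.
import numpy as np

rng = np.random.default_rng(0)
num_classes, shots, dim = 5, 10, 64  # toy sizes, not the paper's

# Hypothetical frozen features: `shots` training examples per class.
train_x = rng.normal(size=(num_classes * shots, dim))
train_y = np.repeat(np.arange(num_classes), shots)
test_x = rng.normal(size=(100, dim))

# One-hot targets, then ridge-regularized least squares for the probe:
# W = argmin ||X W - Y||^2 + lam * ||W||^2  (closed form, no SGD needed).
onehot = np.eye(num_classes)[train_y]
lam = 1e-3
gram = train_x.T @ train_x + lam * np.eye(dim)
weights = np.linalg.solve(gram, train_x.T @ onehot)

# Predicted class = argmax of the linear scores.
pred = (test_x @ weights).argmax(axis=1)
print("predictions:", pred[:10])
```

Because the probe has a closed-form solution, evaluating few-shot transfer this way is cheap even when sweeping many pretrained checkpoints, which is what makes it practical for scaling studies.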

