Image Classification On Places205
Metric: Top-1 Accuracy
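Top-1 accuracy is the share of test images for which the model's single highest-scoring class matches the ground-truth label. A minimal NumPy sketch of the computation (the function name and toy data are illustrative, not taken from the benchmark):

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class matches the label."""
    preds = logits.argmax(axis=1)           # predicted class per sample
    return float((preds == labels).mean())  # proportion correct

# Toy usage: 4 samples, 3 classes.
logits = np.array([[0.1, 0.7, 0.2],
                   [0.9, 0.05, 0.05],
                   [0.2, 0.3, 0.5],
                   [0.6, 0.3, 0.1]])
labels = np.array([1, 0, 2, 2])
print(top1_accuracy(logits, labels))  # 0.75
```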
Results
Performance of models on this benchmark, sorted by Top-1 accuracy.
| Model Name | Top-1 Accuracy (%) | Paper Title |
| --- | --- | --- |
| InternImage-H | 71.7 | InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions |
| MixMIM-L | 69.3 | MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers |
| SEER (RegNet10B, finetuned, 384px) | 69.0 | Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision |
| MixMIM-B | 68.3 | MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers |
| MAE (ViT-H, 448) | 66.8 | Masked Autoencoders Are Scalable Vision Learners |
| SEER | 66.0 | Self-supervised Pretraining of Visual Features in the Wild |
| SAMix (ResNet-50, supervised) | 64.3 | Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup |
| AutoMix (ResNet-50, supervised) | 64.1 | AutoMix: Unveiling the Power of Mixup for Stronger Classifiers |
| RegNetY-128GF (supervised) | 62.7 | Self-supervised Pretraining of Visual Features in the Wild |
| SwAV | 56.7 | Unsupervised Learning of Visual Features by Contrasting Cluster Assignments |
| Barlow Twins (ResNet-50) | 54.1 | Barlow Twins: Self-Supervised Learning via Redundancy Reduction |
| BYOL | 54.0 | Bootstrap your own latent: A new approach to self-supervised Learning |
| SimCLR | 53.3 | A Simple Framework for Contrastive Learning of Visual Representations |
| ResNet-50 (supervised) | 53.2 | Unsupervised Learning of Visual Features by Contrasting Cluster Assignments |
| MoCo v2 | 52.9 | Improved Baselines with Momentum Contrastive Learning |
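The figures above come from evaluating each pretrained model on the Places205 validation set. A generic PyTorch evaluation loop of the kind used to produce such numbers might look like the sketch below; the model and data loader are placeholders, and since Places205 has no built-in torchvision loader, the dataset wiring is assumed:

```python
import torch

@torch.no_grad()
def evaluate_top1(model, loader, device="cuda"):
    """Run a classifier over a validation loader and return top-1 accuracy."""
    model.eval().to(device)
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        logits = model(images)        # (batch, num_classes)
        preds = logits.argmax(dim=1)  # highest-scoring class per image
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total    # percentage, as reported in the table
```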