Image Classification On Places205
Evaluation metric: Top-1 Accuracy
Results: Top-1 accuracy of each model on this benchmark, sorted from best to worst (a short sketch of how the metric is computed follows the table).
| Model | Top-1 Accuracy (%) | Paper |
| --- | --- | --- |
| InternImage-H | 71.7 | InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions |
| MixMIM-L | 69.3 | MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers |
| SEER (RegNet10B - finetuned - 384px) | 69.0 | Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision |
| MixMIM-B | 68.3 | MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers |
| MAE (ViT-H, 448) | 66.8 | Masked Autoencoders Are Scalable Vision Learners |
| SEER | 66.0 | Self-supervised Pretraining of Visual Features in the Wild |
| SAMix (ResNet-50 Supervised) | 64.3 | Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup |
| AutoMix (ResNet-50 Supervised) | 64.1 | AutoMix: Unveiling the Power of Mixup for Stronger Classifiers |
| RegNetY-128GF (Supervised) | 62.7 | Self-supervised Pretraining of Visual Features in the Wild |
| SwAV | 56.7 | Unsupervised Learning of Visual Features by Contrasting Cluster Assignments |
| Barlow Twins (ResNet-50) | 54.1 | Barlow Twins: Self-Supervised Learning via Redundancy Reduction |
| BYOL | 54.0 | Bootstrap your own latent: A new approach to self-supervised Learning |
| SimCLR | 53.3 | A Simple Framework for Contrastive Learning of Visual Representations |
| ResNet-50 (Supervised) | 53.2 | Unsupervised Learning of Visual Features by Contrasting Cluster Assignments |
| MoCo v2 | 52.9 | Improved Baselines with Momentum Contrastive Learning |
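Top-1 accuracy counts a prediction as correct only when the model's single highest-scoring class matches the ground-truth label. Below is a minimal sketch of the computation, assuming NumPy arrays of per-class logits and integer labels; the function name and toy values are illustrative, not Places205 data.

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose top-scoring class equals the true label.

    logits: (N, C) array of per-class scores.
    labels: (N,) array of integer ground-truth class indices.
    """
    preds = logits.argmax(axis=1)            # single best class per sample
    return float((preds == labels).mean())   # proportion predicted correctly

# Toy example: 3 samples, 3 classes; 2 of 3 predictions match the labels.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3],
                   [0.9, 0.1, 0.4]])
labels = np.array([0, 2, 0])
print(f"{top1_accuracy(logits, labels) * 100:.1f}")  # -> 66.7
```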