Image Classification on ObjectNet
Evaluation Metric
Top-1 Accuracy
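Top-1 accuracy is the fraction of test images for which the model's single highest-scoring class prediction matches the ground-truth label; the scores below are percentages. As a minimal sketch of the computation (in Python with NumPy; the helper name `top1_accuracy` and the toy data are illustrative, not taken from any of the listed papers):

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class equals the true label."""
    predictions = logits.argmax(axis=1)           # top-scoring class per sample
    return float((predictions == labels).mean())  # proportion of exact matches

# Toy example: 4 samples, 3 classes; 3 of the 4 top predictions are correct.
logits = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1],
                   [0.3, 0.3, 0.4],
                   [0.2, 0.5, 0.3]])
labels = np.array([1, 0, 2, 0])
print(f"Top-1 accuracy: {top1_accuracy(logits, labels):.2%}")  # 75.00%
```

ObjectNet is a test-only set whose images control for object rotation, viewpoint, and background, so top-1 accuracies here are substantially lower than the same models achieve on ImageNet (see the ObjectNet paper cited in the table below).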
Benchmark Results
Performance of each model on this benchmark:
| Model Name | Top-1 Accuracy (%) | Paper Title | Repository |
|------------|--------------------|-------------|------------|
| ResNet-50 + MixUp (rescaled) | 28.37 | On Mixup Regularization | |
| MoCo-v2 (BG_Swaps) | 20.8 | Characterizing and Improving the Robustness of Self-Supervised Learning through Background Augmentations | - |
| AR-B (Opt Relevance) | 47.1 | Optimizing Relevance Maps of Vision Transformers Improves Robustness | |
| RegViT (RandAug) | 29.3 | Pyramid Adversarial Training Improves ViT Performance | |
| ViT B/16 (Bamboo) | 53.9 | Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy | |
| CLIP (CC12M pretrain) | 15.24 | Robust Cross-Modal Representation Learning with Progressive Self-Distillation | - |
| MLP-Mixer + Pixel | 24.75 | Pyramid Adversarial Training Improves ViT Performance | |
| ALIGN | 72.2 | Combined Scaling for Zero-shot Transfer Learning | - |
| RegNetY 128GF (Platt) | 64.3 | Revisiting Weakly Supervised Pre-Training of Visual Perception Models | |
| ViT H/14 (Platt) | 60 | Revisiting Weakly Supervised Pre-Training of Visual Perception Models | |
| NASNet-A | 35.77 | ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models | - |
| SWAG (ViT H/14) | 69.5 | Revisiting Weakly Supervised Pre-Training of Visual Perception Models | |
| SwAV (reverse linear probing) | 17.71 | Measuring the Interpretability of Unsupervised Representations via Quantized Reversed Probing | - |
| BYOL (BG_RM) | 23.9 | Characterizing and Improving the Robustness of Self-Supervised Learning through Background Augmentations | - |
| Inception-v4 | 32.24 | ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models | - |
| AlexNet | 6.78 | ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models | - |
| Discrete ViT | 29.95 | Pyramid Adversarial Training Improves ViT Performance | |
| SwAV (BG_RM) | 21.9 | Characterizing and Improving the Robustness of Self-Supervised Learning through Background Augmentations | - |
| MAWS (ViT-H) | 72.6 | The effectiveness of MAE pre-pretraining for billion-scale pretraining | |
| OBoW (reverse linear probing) | 12.23 | Measuring the Interpretability of Unsupervised Representations via Quantized Reversed Probing | - |
Showing 20 of 106 entries.