Image Classification on Flowers-102
Metric: Accuracy (%)
Performance results of various models on this benchmark:

| Model Name | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| Mixer-S/16-SAM | 87.9 | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | - |
| CeiT-S (384 finetune resolution) | 98.6 | Incorporating Convolution Designs into Visual Transformers | - |
| NAT-M1 | - | Neural Architecture Transfer | - |
| CeiT-T | 96.9 | Incorporating Convolution Designs into Visual Transformers | - |
| ResNet-152-SAM | 91.1 | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | - |
| ResMLP-12 | 97.4 | ResMLP: Feedforward networks for image classification with data-efficient training | - |
| ViT-L/16 (Background) | 99.75 | Reduction of Class Activation Uncertainty with Background Information | - |
| NAT-M3 | 98.1 | Neural Architecture Transfer | - |
| NNCLR | 95.1 | With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations | - |
| Bamboo (ViT-B/16) | 99.7 | Bamboo: Building Mega-Scale Vision Dataset Continually with Human-Machine Synergy | - |
| ResMLP-24 | 97.9 | ResMLP: Feedforward networks for image classification with data-efficient training | - |
| CeiT-T (384 finetune resolution) | 97.8 | Incorporating Convolution Designs into Visual Transformers | - |
| ResNet-50x1-AGC (ImageNet-21K) | 98.21 | Effect of Pre-Training Scale on Intra- and Inter-Domain Full and Few-Shot Transfer Learning for Natural and Medical X-Ray Chest Images | - |
| CCT-14/7x2 | 99.76 | Escaping the Big Data Paradigm with Compact Transformers | - |
| CaiT-M-36 U 224 | 99.1 | - | - |
| DAT | 98.9 | Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization | - |
| SEER (RegNet10B) | 96.3 | Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision | - |
| ResNet-152x4-AGC (ImageNet-21K) | 99.49 | Effect of Pre-Training Scale on Intra- and Inter-Domain Full and Few-Shot Transfer Learning for Natural and Medical X-Ray Chest Images | - |
| CeiT-S | 98.2 | Incorporating Convolution Designs into Visual Transformers | - |
| TransBoost-ResNet50 | 97.85 | TransBoost: Improving the Best ImageNet Performance using Deep Transduction | - |
20 of 51 total results shown.