Image Classification on iNaturalist
Metrics: Top-1 Accuracy

Results
Performance results of various models on this benchmark, reported as Top-1 accuracy in percent.
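For reference, top-1 accuracy counts a prediction as correct only when the model's single highest-scoring class matches the ground-truth label. A minimal sketch of the computation (toy tensors, illustrative only):

```python
import torch

def top1_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of rows where the argmax class equals the label."""
    preds = logits.argmax(dim=1)  # highest-scoring class per image
    return (preds == labels).float().mean().item()

# Toy example: 3 images, 4 classes; two of the three argmax picks match.
logits = torch.tensor([[0.1, 2.0, 0.3, 0.0],   # argmax = 1, label = 1 (hit)
                       [1.5, 0.2, 0.1, 0.0],   # argmax = 0, label = 0 (hit)
                       [0.0, 0.1, 0.2, 3.0]])  # argmax = 3, label = 2 (miss)
labels = torch.tensor([1, 0, 2])
print(top1_accuracy(logits, labels))  # ~0.667
```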
| Model Name | Top-1 Accuracy | Paper Title |
|---|---|---|
| MetaSAug | 63.28% | MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition |
| AIMv2-1B | 79.7% | Multimodal Autoregressive Pre-training of Large Vision Encoders |
| SpineNet-143 | 63.6% | SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization |
| iSQRT-COV-Net | - | Deep CNNs Meet Global Covariance Pooling: Better Representation and Generalization |
| MAE (ViT-H, 448) | 83.4% | Masked Autoencoders Are Scalable Vision Learners |
| DeiT-LT | - | DeiT-LT: Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets |
| AIMv2-H | 77.9% | Multimodal Autoregressive Pre-training of Large Vision Encoders |
| Hiera-H (448px) | 83.8% | Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles |
| FixSENet-154 | 75.4% | Fixing the train-test resolution discrepancy |
| IncResNetV2 SE | 67.3% | The iNaturalist Species Classification and Detection Dataset |
| MetaFormer (MetaFormer-2, 384, extra_info) | 83.4% | MetaFormer: A Unified Meta Framework for Fine-Grained Recognition |
| MetaFormer (MetaFormer-2, 384) | 80.4% | MetaFormer: A Unified Meta Framework for Fine-Grained Recognition |
| TransFG | 71.7% | TransFG: A Transformer Architecture for Fine-grained Recognition |
| Graph-RISE (40M) | 31.12% | Graph-RISE: Graph-Regularized Image Semantic Embedding |
| SEB+EfficientNet-B5 | 72.3% | On the Eigenvalues of Global Covariance Pooling for Fine-grained Visual Recognition |
| AIMv2-3B | 81.5% | Multimodal Autoregressive Pre-training of Large Vision Encoders |
| AIMv2-L | 76.0% | Multimodal Autoregressive Pre-training of Large Vision Encoders |
| AIMv2-3B (448 res) | 85.9% | Multimodal Autoregressive Pre-training of Large Vision Encoders |