Image Classification on iNaturalist
Evaluation Metric
Top 1 Accuracy
Evaluation Results
Performance of each model on this benchmark
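Top-1 accuracy is the fraction of test images for which the model's single highest-scoring class matches the ground-truth label. A minimal sketch of the computation, assuming model scores and integer labels are available as NumPy arrays (function and variable names below are illustrative, not part of this benchmark's tooling):

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Top-1 accuracy: fraction of samples whose argmax prediction
    equals the ground-truth class index.

    logits: (N, num_classes) array of class scores.
    labels: (N,) array of integer class indices.
    """
    preds = logits.argmax(axis=1)
    return float((preds == labels).mean())

# Illustrative usage with random scores for a 3-class problem.
rng = np.random.default_rng(0)
fake_logits = rng.normal(size=(8, 3))
fake_labels = rng.integers(0, 3, size=8)
print(f"Top-1 accuracy: {top1_accuracy(fake_logits, fake_labels):.2%}")
```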
| Model Name | Top 1 Accuracy (%) | Paper Title |
| --- | --- | --- |
| MetaSAug | 63.28 | MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition |
| AIMv2-1B | 79.7 | Multimodal Autoregressive Pre-training of Large Vision Encoders |
| SpineNet-143 | 63.6 | SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization |
| iSQRT-COV-Net | - | Deep CNNs Meet Global Covariance Pooling: Better Representation and Generalization |
| MAE (ViT-H, 448) | 83.4 | Masked Autoencoders Are Scalable Vision Learners |
| DeiT-LT (ours) | - | DeiT-LT: Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets |
| AIMv2-H | 77.9 | Multimodal Autoregressive Pre-training of Large Vision Encoders |
| Hiera-H (448px) | 83.8 | Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles |
| FixSENet-154 | 75.4 | Fixing the train-test resolution discrepancy |
| IncResNetV2 SE | 67.3 | The iNaturalist Species Classification and Detection Dataset |
| MetaFormer (MetaFormer-2, 384, extra_info) | 83.4 | MetaFormer: A Unified Meta Framework for Fine-Grained Recognition |
| MetaFormer (MetaFormer-2, 384) | 80.4 | MetaFormer: A Unified Meta Framework for Fine-Grained Recognition |
| TransFG | 71.7 | TransFG: A Transformer Architecture for Fine-grained Recognition |
| Graph-RISE (40M) | 31.12 | Graph-RISE: Graph-Regularized Image Semantic Embedding |
| SEB+EfficientNet-B5 | 72.3 | On the Eigenvalues of Global Covariance Pooling for Fine-grained Visual Recognition |
| AIMv2-3B | 81.5 | Multimodal Autoregressive Pre-training of Large Vision Encoders |
| AIMv2-L | 76 | Multimodal Autoregressive Pre-training of Large Vision Encoders |
| AIMv2-3B (448 res) | 85.9 | Multimodal Autoregressive Pre-training of Large Vision Encoders |