
Image Classification on iNaturalist 2018

Metrics

Top-1 Accuracy
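
Top-1 accuracy is the fraction of test images for which the class with the highest predicted score matches the ground-truth label. A minimal sketch of the computation is shown below using NumPy; the `logits` and `labels` arrays are hypothetical placeholders, not outputs of any model in the results table.

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose highest-scoring class equals the ground-truth label."""
    predictions = logits.argmax(axis=1)           # top-scoring class index per sample
    return float((predictions == labels).mean())

# Hypothetical toy example: 4 samples, 3 classes.
logits = np.array([[0.1, 0.7, 0.2],
                   [0.9, 0.05, 0.05],
                   [0.3, 0.3, 0.4],
                   [0.2, 0.5, 0.3]])
labels = np.array([1, 0, 2, 0])
print(top1_accuracy(logits, labels))  # 0.75 (3 of 4 correct)
```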

Results

Top-1 accuracy reported by various models on the iNaturalist 2018 benchmark; a sketch of a matching evaluation loop follows the table.

| Model Name | Top-1 Accuracy | Paper Title | Repository |
|---|---|---|---|
| µ2Net+ (ViT-L/16) | 80.97% | A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems | - |
| ResNet-50 | 49.7% | ClusterFit: Improving Generalization of Visual Representations | - |
| Barlow Twins (ResNet-50) | 46.5% | Barlow Twins: Self-Supervised Learning via Redundancy Reduction | - |
| LeViT-384 | 66.9% | LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference | - |
| ResNet-50 | 69.8% | Grafit: Learning fine-grained image representations with coarse labels | - |
| BS-CMO (ResNet-50) | 74.0% | The Majority Can Help The Minority: Context-rich Minority Oversampling for Long-tailed Classification | - |
| CaiT-M-36 U 224 | 78% | - | - |
| ResNeXt-101 (SAMix) | 70.54% | Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup | - |
| GPaCo (ResNet-152) | 78.1% | Generalized Parametric Contrastive Learning | - |
| RIDE (ResNet-50) | 72.2% | Long-tailed Recognition by Routing Diverse Distribution-Aware Experts | - |
| ResNet-152 | 69.05% | Class-Balanced Loss Based on Effective Number of Samples | - |
| CeiT-T (384 finetune resolution) | 72.2% | Incorporating Convolution Designs into Visual Transformers | - |
| CeiT-S (384 finetune resolution) | 79.4% | Incorporating Convolution Designs into Visual Transformers | - |
| LeViT-128S | 55.2% | LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference | - |
| RegNet-8GF | 81.2% | Grafit: Learning fine-grained image representations with coarse labels | - |
| ResNet-50 (AutoMix) | 64.73% | AutoMix: Unveiling the Power of Mixup for Stronger Classifiers | - |
| Hiera-H (448px) | 87.3% | Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles | - |
| SWAG (ViT H/14) | 86.0% | Revisiting Weakly Supervised Pre-Training of Visual Perception Models | - |
| LeViT-256 | 66.2% | LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference | - |
| LeViT-192 | 60.4% | LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference | - |
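
For context on how such a number is typically produced, the sketch below runs a standard top-1 evaluation loop in PyTorch over iNaturalist 2018 images loaded through `torchvision.datasets.INaturalist`. The checkpoint path, preprocessing, and batch size are assumptions rather than the settings of any entry above, and the model is assumed to have already been fine-tuned for the dataset's 8,142 categories. The `version="2018"` archive in torchvision combines train and validation images, so restricting evaluation to the official validation split would additionally require the competition's split annotations.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# ImageNet-style preprocessing; the exact resolution and normalization
# differ per paper, so treat this as an assumption.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Downloads the (large) iNaturalist 2018 train/val image archive on first use.
dataset = datasets.INaturalist(root="data", version="2018",
                               transform=preprocess, download=True)
loader = DataLoader(dataset, batch_size=64, num_workers=4)

# Hypothetical checkpoint of a classifier fine-tuned for the 8,142 iNaturalist 2018 classes.
model = torch.load("inat2018_model.pt", map_location="cpu", weights_only=False)
model.eval()

correct, total = 0, 0
with torch.no_grad():
    for images, targets in loader:
        logits = model(images)
        correct += (logits.argmax(dim=1) == targets).sum().item()
        total += targets.size(0)

print(f"Top-1 accuracy: {100.0 * correct / total:.2f}%")
```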