Tune It or Don't Use It: Benchmarking Data-Efficient Image Classification

Lorenzo Brigato; Björn Barz; Luca Iocchi; Joachim Denzler

Abstract

Data-efficient image classification using deep neural networks in settings where only small amounts of labeled data are available has been an active research area in recent years. However, an objective comparison between published methods is difficult, since existing works use different datasets for evaluation and often compare against untuned baselines with default hyper-parameters. We design a benchmark for data-efficient image classification consisting of six diverse datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). Using this benchmark, we re-evaluate the standard cross-entropy baseline and eight methods for data-efficient deep learning published between 2017 and 2021 at renowned venues. For a fair and realistic comparison, we carefully tune the hyper-parameters of all methods on each dataset. Surprisingly, we find that tuning learning rate, weight decay, and batch size on a separate validation split results in a highly competitive baseline, which outperforms all but one specialized method and performs competitively with the remaining one.
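For illustration, the tuning protocol described in the abstract can be sketched as a hyper-parameter search over learning rate, weight decay, and batch size, selecting the configuration that scores best on a separate validation split. The following is a minimal sketch assuming a PyTorch-style training routine; the search grid and the `train_and_evaluate` helper are illustrative placeholders, not the authors' exact search strategy or configuration.

```python
import itertools

def tune_baseline(train_and_evaluate):
    """Grid-search sketch over the three hyper-parameters the paper tunes.

    `train_and_evaluate` is a hypothetical callable that trains the
    cross-entropy baseline with the given hyper-parameters and returns
    balanced accuracy on a held-out validation split.
    """
    # Illustrative search grid; the paper's actual grid may differ.
    learning_rates = [1e-1, 1e-2, 1e-3, 1e-4]
    weight_decays = [1e-3, 1e-4, 1e-5, 0.0]
    batch_sizes = [10, 25, 50, 100]

    best_config, best_score = None, float("-inf")
    for lr, wd, bs in itertools.product(learning_rates, weight_decays, batch_sizes):
        val_score = train_and_evaluate(lr=lr, weight_decay=wd, batch_size=bs)
        if val_score > best_score:
            best_config, best_score = (lr, wd, bs), val_score

    # The selected configuration would then be used for a final training run
    # and evaluated once on the test split.
    return best_config, best_score
```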

Code Repositories

cvjena/deic (official, PyTorch)
Benchmarks

Benchmark | Methodology | Metrics
small-data-image-classification-on-cifair-10-1 | Harmonic Networks | Accuracy: 56.50
small-data-image-classification-on-cifair-10-1 | Cross-entropy baseline | Accuracy: 58.22
small-data-image-classification-on-cifair-10-1 | T-vMF Similarity | Accuracy: 57.50
small-data-image-classification-on-deic | Cross-entropy baseline | Average Balanced Accuracy (across datasets): 67.90
small-data-image-classification-on-deic | T-vMF Similarity | Average Balanced Accuracy (across datasets): 64.67
small-data-image-classification-on-deic | Deep Hybrid Networks | Average Balanced Accuracy (across datasets): 60.33
small-data-image-classification-on-deic | OLÉ | Average Balanced Accuracy (across datasets): 64.15
small-data-image-classification-on-deic | Harmonic Networks | Average Balanced Accuracy (across datasets): 68.70
small-data-image-classification-on-deic | Cosine Loss | Average Balanced Accuracy (across datasets): 62.73
small-data-image-classification-on-deic | DSK Networks | Average Balanced Accuracy (across datasets): 64.64
small-data-image-classification-on-deic | Full Convolution | Average Balanced Accuracy (across datasets): 62.06
small-data-image-classification-on-deic | Grad-l2 Penalty | Average Balanced Accuracy (across datasets): 55.47
small-data-image-classification-on-deic | Cosine + Cross-Entropy Loss | Average Balanced Accuracy (across datasets): 64.92
small-data-image-classification-on-eurosat-50 | DSK Networks | Accuracy: 91.25
small-data-image-classification-on-eurosat-50 | Harmonic Networks | Accuracy: 92.09
small-data-image-classification-on-eurosat-50 | Deep Hybrid Networks | Accuracy: 91.15
small-data-image-classification-on-imagenet | Cross-entropy baseline | 1:1 Accuracy: 44.97
small-data-image-classification-on-imagenet | Harmonic Networks | 1:1 Accuracy: 46.36
small-data-image-classification-on-imagenet | DSK Networks | 1:1 Accuracy: 45.21
small-data-on-cub-200-2011-30-samples-per-1 | Harmonic Networks (no pre-training) | Accuracy: 72.26
small-data-on-cub-200-2011-30-samples-per-1 | Cross-entropy baseline (no pre-training) | Accuracy: 71.44
small-data-on-cub-200-2011-30-samples-per-1 | DSK Networks (no pre-training) | Accuracy: 71.02
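The DEIC rows report "Average Balanced Accuracy (across datasets)", i.e., balanced accuracy computed per dataset and then averaged over all datasets in the benchmark. Below is a minimal sketch of that aggregation, assuming scikit-learn's balanced_accuracy_score; the dataset names and label arrays are placeholders, not benchmark data, and this is not the benchmark's exact evaluation code.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def average_balanced_accuracy(per_dataset_results):
    """per_dataset_results: dict mapping dataset name -> (y_true, y_pred) arrays."""
    scores = [
        balanced_accuracy_score(y_true, y_pred)
        for y_true, y_pred in per_dataset_results.values()
    ]
    return float(np.mean(scores))

# Illustrative usage with placeholder predictions for two datasets:
results = {
    "ciFAIR-10": (np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0])),
    "EuroSAT":   (np.array([2, 2, 1, 0]), np.array([2, 1, 1, 0])),
}
print(average_balanced_accuracy(results))
```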
