Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition
Yifan Zhang Bryan Hooi Lanqing Hong Jiashi Feng

Abstract
Existing long-tailed recognition methods, aiming to train class-balanced models from long-tailed data, generally assume the models would be evaluated on the uniform test class distribution. However, practical test class distributions often violate this assumption (e.g., being either long-tailed or even inversely long-tailed), which may lead existing methods to fail in real applications. In this paper, we study a more practical yet challenging task, called test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test class distribution is agnostic and not necessarily uniform. In addition to the issue of class imbalance, this task poses another challenge: the class distribution shift between the training and test data is unknown. To tackle this task, we propose a novel approach, called Self-supervised Aggregation of Diverse Experts, which consists of two strategies: (i) a new skill-diverse expert learning strategy that trains multiple experts from a single and stationary long-tailed dataset to separately handle different class distributions; (ii) a novel test-time expert aggregation strategy that leverages self-supervision to aggregate the learned multiple experts for handling unknown test class distributions. We theoretically show that our self-supervised strategy has a provable ability to simulate test-agnostic class distributions. Promising empirical results demonstrate the effectiveness of our method on both vanilla and test-agnostic long-tailed recognition. Code is available at \url{https://github.com/Vanint/SADE-AgnosticLT}.
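The test-time aggregation idea in the abstract can be illustrated with a small sketch: given per-expert logits on two augmented views of unlabeled test data, learn simplex weights over the experts by maximizing the cosine similarity between the two aggregated prediction distributions. This is a minimal NumPy illustration of the self-supervised principle, not the paper's implementation; the toy data, the numerical-gradient optimizer, and all function names (`aggregate`, `consistency`, `learn_weights`) are assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate(expert_logits, w):
    # expert_logits: (E, N, C); w: (E,) weights on the simplex -> (N, C)
    return np.tensordot(w, expert_logits, axes=1)

def consistency(w_logits, logits_v1, logits_v2):
    # Self-supervised objective: agreement of aggregated predictions
    # across two augmented views of the same unlabeled test samples.
    w = softmax(w_logits)
    p1 = softmax(aggregate(logits_v1, w))
    p2 = softmax(aggregate(logits_v2, w))
    num = (p1 * p2).sum(axis=1)
    den = np.linalg.norm(p1, axis=1) * np.linalg.norm(p2, axis=1)
    return (num / den).mean()

def learn_weights(logits_v1, logits_v2, steps=300, lr=1.0, eps=1e-4):
    # Gradient ascent on the consistency objective; a central-difference
    # numerical gradient stands in for backprop to keep the sketch tiny.
    w_logits = np.zeros(logits_v1.shape[0])
    for _ in range(steps):
        grad = np.zeros_like(w_logits)
        for i in range(len(w_logits)):
            d = np.zeros_like(w_logits)
            d[i] = eps
            grad[i] = (consistency(w_logits + d, logits_v1, logits_v2)
                       - consistency(w_logits - d, logits_v1, logits_v2)) / (2 * eps)
        w_logits += lr * grad
    return softmax(w_logits)

# Toy setup (hypothetical): expert 0 is stable across views, the other
# two produce view-dependent noise, mimicking experts whose skills do
# or do not match the unknown test class distribution.
E, N, C = 3, 32, 10
shared = 3.0 * rng.normal(size=(N, C))
v1 = np.stack([shared, rng.normal(size=(N, C)), rng.normal(size=(N, C))])
v2 = np.stack([shared, rng.normal(size=(N, C)), rng.normal(size=(N, C))])
w = learn_weights(v1, v2)
# The view-consistent expert should receive the largest weight.
```

The design choice mirrors the abstract's claim: no test labels are needed, because cross-view prediction stability alone identifies which experts suit the (unknown) test class distribution.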
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| image-classification-on-inaturalist-2018 | TADE (ResNet-50) | Top-1 Accuracy: 72.9% |
| long-tail-learning-on-cifar-10-lt-r-10 | TADE | Error Rate: 9.2 |
| long-tail-learning-on-cifar-10-lt-r-10 | RIDE | Error Rate: 10.3 |
| long-tail-learning-on-cifar-10-lt-r-100 | TADE | Error Rate: 16.2 |
| long-tail-learning-on-cifar-100-lt-r-10 | TADE | Error Rate: 36.4 |
| long-tail-learning-on-cifar-100-lt-r-100 | TADE | Error Rate: 50.2 |
| long-tail-learning-on-cifar-100-lt-r-50 | TADE | Error Rate: 46.1 |
| long-tail-learning-on-imagenet-lt | TADE (ResNeXt101-32x4d) | Top-1 Accuracy: 61.4 |
| long-tail-learning-on-imagenet-lt | TADE (ResNeXt-50) | Top-1 Accuracy: 58.8 |
| long-tail-learning-on-inaturalist-2018 | TADE | Top-1 Accuracy: 72.9% |
| long-tail-learning-on-inaturalist-2018 | TADE (ResNet-152) | Top-1 Accuracy: 77% |
| long-tail-learning-on-places-lt | TADE | Top-1 Accuracy: 40.9 / 41.3 |