Donghyun Kim, Byeongho Heo, Dongyoon Han

Abstract
This paper revives Densely Connected Convolutional Networks (DenseNets) and reveals their underrated effectiveness over predominant ResNet-style architectures. We believe DenseNets' potential was overlooked due to untouched training methods and traditional design elements not fully revealing their capabilities. Our pilot study shows dense connections through concatenation are strong, demonstrating that DenseNets can be revitalized to compete with modern architectures. We methodically refine suboptimal components: architectural adjustments, block redesign, and improved training recipes toward widening DenseNets and boosting memory efficiency while keeping concatenation shortcuts. Our models, employing simple architectural elements, ultimately surpass Swin Transformer, ConvNeXt, and DeiT-III, key architectures in the residual learning lineage. Furthermore, our models exhibit near state-of-the-art performance on ImageNet-1K, competing with very recent models, and on downstream tasks: ADE20K semantic segmentation and COCO object detection/instance segmentation. Finally, we provide empirical analyses that uncover the merits of concatenation over additive shortcuts, steering a renewed preference toward DenseNet-style designs. Our code is available at https://github.com/naver-ai/rdnet.
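The abstract contrasts DenseNet-style concatenation shortcuts with ResNet-style additive shortcuts. A minimal sketch of the structural difference, using NumPy arrays as stand-in feature maps (the function names and the growth rate of 32 are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def additive_shortcut(x, block_out):
    # ResNet-style: block output is added to the input,
    # so the channel count stays fixed and features are merged.
    return x + block_out

def concat_shortcut(x, new_features):
    # DenseNet-style: new features are concatenated along the channel
    # axis, so channels grow and earlier features are preserved intact.
    return np.concatenate([x, new_features], axis=0)

# toy feature maps with layout (channels, height, width)
x = np.zeros((64, 8, 8))

res_out = additive_shortcut(x, np.ones((64, 8, 8)))
dense_out = concat_shortcut(x, np.ones((32, 8, 8)))  # growth rate 32 (hypothetical)

print(res_out.shape)    # channel width unchanged by addition
print(dense_out.shape)  # channels grow from 64 to 96 under concatenation
```

The shapes make the trade-off concrete: addition keeps memory flat but overwrites earlier features, while concatenation widens the representation at each block, which is why the paper's refinements target memory efficiency while keeping the concatenation shortcuts.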
Code Repositories
https://github.com/naver-ai/rdnet
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| fine-grained-image-classification-on-stanford | RDNet-T (224 res, IN-1K pretrained) | Accuracy: 93.9%, FLOPs: 5.0G, Params: 24M |
| fine-grained-image-classification-on-stanford | RDNet-L (224 res, IN-1K pretrained) | Accuracy: 94.2%, FLOPs: 34.7G, Params: 186M |
| fine-grained-image-classification-on-stanford | RDNet-S (224 res, IN-1K pretrained) | Accuracy: 94.2%, FLOPs: 8.7G, Params: 50M |
| fine-grained-image-classification-on-stanford | RDNet-B (224 res, IN-1K pretrained) | Accuracy: 94.1%, FLOPs: 15.4G, Params: 87M |
| image-classification-on-cifar-10 | RDNet-L (224 res, IN-1K pretrained) | Percentage correct: 99.31 |
| image-classification-on-cifar-10 | RDNet-T (224 res, IN-1K pretrained) | Percentage correct: 98.88 |
| image-classification-on-cifar-10 | RDNet-B (224 res, IN-1K pretrained) | Percentage correct: 99.31 |
| image-classification-on-imagenet | RDNet-S | GFLOPs: 8.7, Params: 50M, Top-1 Accuracy: 83.7% |
| image-classification-on-imagenet | RDNet-T | GFLOPs: 5.0, Params: 24M, Top-1 Accuracy: 82.8% |
| image-classification-on-imagenet | RDNet-L | GFLOPs: 34.7, Params: 186M, Top-1 Accuracy: 84.8% |
| image-classification-on-imagenet | RDNet-L (384 res) | GFLOPs: 34.7, Params: 186M, Top-1 Accuracy: 85.8% |
| image-classification-on-imagenet | RDNet-B | GFLOPs: 15.4, Params: 87M, Top-1 Accuracy: 84.4% |
| image-classification-on-inaturalist-2018 | RDNet-T (224 res, IN-1K pretrained) | Params: 24M, Top-1 Accuracy: 77.0% |
| image-classification-on-inaturalist-2018 | RDNet-L (224 res, IN-1K pretrained) | Params: 186M, Top-1 Accuracy: 81.8% |
| image-classification-on-inaturalist-2018 | RDNet-S (224 res, IN-1K pretrained) | Params: 50M, Top-1 Accuracy: 79.1% |
| image-classification-on-inaturalist-2018 | RDNet-B (224 res, IN-1K pretrained) | Params: 87M, Top-1 Accuracy: 80.5% |
| image-classification-on-inaturalist-2019 | RDNet-T (224 res, IN-1K pretrained) | Params: 24M, Top-1 Accuracy: 81.2% |
| image-classification-on-inaturalist-2019 | RDNet-S (224 res, IN-1K pretrained) | Params: 50M, Top-1 Accuracy: 82.9% |
| image-classification-on-inaturalist-2019 | RDNet-L (224 res, IN-1K pretrained) | Params: 186M, Top-1 Accuracy: 83.7% |
| image-classification-on-inaturalist-2019 | RDNet-B (224 res, IN-1K pretrained) | Params: 87M, Top-1 Accuracy: 83.5% |