Tingting Liang, Xiaojie Chu, Yudong Liu, Yongtao Wang, Zhi Tang, Wei Chu, Jingdong Chen, Haibin Ling

Abstract
Modern top-performing object detectors depend heavily on backbone networks, whose advances bring consistent performance gains through exploring more effective network structures. In this paper, we propose a novel and flexible backbone framework, namely CBNetV2, to construct high-performance detectors using existing open-source pre-trained backbones under the pre-training and fine-tuning paradigm. In particular, the CBNetV2 architecture groups multiple identical backbones, which are connected through composite connections. Specifically, it integrates the high- and low-level features of multiple backbone networks and gradually expands the receptive field to perform object detection more efficiently. We also propose a better training strategy with assistant supervision for CBNet-based detectors. Without additional pre-training of the composite backbone, CBNetV2 can be adapted to various backbones (CNN-based vs. Transformer-based) and head designs of most mainstream detectors (one-stage vs. two-stage, anchor-based vs. anchor-free). Experiments provide strong evidence that, compared with simply increasing the depth and width of the network, CBNetV2 introduces a more efficient, effective, and resource-friendly way to build high-performance backbone networks. In particular, our Dual-Swin-L achieves 59.4% box AP and 51.6% mask AP on COCO test-dev under the single-model, single-scale testing protocol, significantly better than the state-of-the-art result (57.7% box AP and 50.2% mask AP) achieved by Swin-L, while requiring a 6× shorter training schedule. With multi-scale testing, we push the current best single-model result to a new record of 60.1% box AP and 52.3% mask AP without using extra training data. Code is available at https://github.com/VDIGPKU/CBNetV2.
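To make the composite-backbone idea concrete, here is a minimal PyTorch sketch loosely following the paper's Dense Higher-Level Composition: two identical backbones are run in sequence, and features from the same and higher stages of the assisting backbone are projected and fused into each stage of the lead backbone. The stage layout, channel widths, and fusion point (stage outputs rather than stage inputs) are simplifying assumptions for illustration, not the released implementation; the paper's assistant supervision (an auxiliary detection loss on the assisting backbone's features during training) is omitted here.

```python
import torch.nn as nn
import torch.nn.functional as F


class CompositeConnection(nn.Module):
    """1x1 conv + BN projecting an assisting-backbone feature map to the
    channel width of the receiving lead-backbone stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x, size):
        # Upsample to the receiving stage's spatial resolution.
        return F.interpolate(self.proj(x), size=size, mode="nearest")


class DualBackbone(nn.Module):
    """Composite backbone built from two identical backbones, each given as
    a list of stages (e.g. the four ResNet/Swin stages). Channel widths are
    assumed ResNet-like for illustration."""
    def __init__(self, make_stages, stage_channels=(256, 512, 1024, 2048)):
        super().__init__()
        self.assist = nn.ModuleList(make_stages())
        self.lead = nn.ModuleList(make_stages())
        # One connection per (assist stage j -> lead stage i) pair with
        # j >= i, i.e. dense composition from same- and higher-level stages.
        n = len(stage_channels)
        self.links = nn.ModuleList(
            nn.ModuleList(
                CompositeConnection(stage_channels[j], stage_channels[i])
                for j in range(i, n)
            )
            for i in range(n)
        )

    def forward(self, x):
        # First pass: run the assisting backbone, keep every stage output.
        a_feats, a = [], x
        for stage in self.assist:
            a = stage(a)
            a_feats.append(a)
        # Second pass: each lead-stage output is fused with projected
        # features from all same- and higher-level assisting stages.
        outs, l = [], x
        for i, stage in enumerate(self.lead):
            l = stage(l)
            for k, j in enumerate(range(i, len(a_feats))):
                l = l + self.links[i][k](a_feats[j], l.shape[-2:])
            outs.append(l)
        return outs  # multi-scale features for an FPN / detection head
```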
Code Repositories
- https://github.com/VDIGPKU/CBNetV2 (official)
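The released code builds on MMDetection. Below is a minimal single-image inference sketch, assuming the MMDetection 2.x Python API; the config and checkpoint paths are hypothetical placeholders, to be replaced with the actual files from the repository's model zoo.

```python
# Hedged sketch: assumes MMDetection 2.x; paths are hypothetical placeholders.
from mmdet.apis import init_detector, inference_detector

config_file = "configs/cbnet/htc_cbv2_swin_large.py"     # hypothetical path
checkpoint_file = "checkpoints/htc_cbv2_swin_large.pth"  # hypothetical path

# Build the detector (e.g. Dual-Swin-L HTC) and run it on one image.
model = init_detector(config_file, checkpoint_file, device="cuda:0")
result = inference_detector(model, "demo.jpg")
model.show_result("demo.jpg", result, out_file="demo_out.jpg")
```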
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| instance-segmentation-on-coco | CBNetV2 (Dual-Swin-L HTC, single-scale) | mask AP: 51.6 |
| instance-segmentation-on-coco | CBNetV2 (Dual-Swin-L HTC, multi-scale) | mask AP: 52.3 |
| instance-segmentation-on-coco | CBNetV2 (EVA02, single-scale) | mask AP: 56.1 AP50: 80.3 AP75: 62.1 APS: 39.7 APM: 59.3 APL: 70.9 |
| instance-segmentation-on-coco-minival | CBNetV2 (Dual-Swin-L HTC, multi-scale) | mask AP: 51.8 |
| instance-segmentation-on-coco-minival | CBNetV2 (Dual-Swin-L HTC, single-scale) | mask AP: 51.0 |
| object-detection-on-coco | CBNetV2 (Dual-Swin-L HTC, multi-scale) | box mAP: 60.1 |
| object-detection-on-coco | CBNetV2 (Dual-Swin-L HTC, single-scale) | box mAP: 59.4 |
| object-detection-on-coco-minival | CBNetV2 (Dual-Swin-L HTC, multi-scale) | box AP: 59.6 |
| object-detection-on-coco-minival | CBNetV2 (Dual-Swin-L HTC, single-scale) | box AP: 59.1 |
| object-detection-on-coco-o | CBNetV2 (Swin-L) | Average mAP: 39.0 Effective Robustness: 12.36 |