RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening
Sungha Choi, Sanghun Jung, Huiwon Yun, Joanne Kim, Seungryong Kim, Jaegul Choo

Abstract
Enhancing the generalization capability of deep neural networks to unseen domains is crucial for safety-critical applications in the real world such as autonomous driving. To address this issue, this paper proposes a novel instance selective whitening loss to improve the robustness of the segmentation networks for unseen domains. Our approach disentangles the domain-specific style and domain-invariant content encoded in higher-order statistics (i.e., feature covariance) of the feature representations and selectively removes only the style information causing domain shift. As shown in Fig. 1, our method provides reasonable predictions for (a) low-illuminated, (b) rainy, and (c) unseen structures. These types of images are not included in the training dataset, where the baseline shows a significant performance drop, contrary to ours. Being simple yet effective, our approach improves the robustness of various backbone networks without additional computational cost. We conduct extensive experiments in urban-scene segmentation and show the superiority of our approach to existing work. Our code is available at https://github.com/shachoi/RobustNet.
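The selective whitening idea described in the abstract can be illustrated with a short sketch: compute the per-instance channel covariance of a feature map and penalize only a chosen subset of its entries, i.e., the ones treated as style-sensitive. The PyTorch snippet below is a minimal illustration under assumptions, not the authors' implementation (see the linked repository for that); the class name `InstanceSelectiveWhiteningLoss`, the precomputed `selective_mask`, and the toy usage are hypothetical, and the construction of the mask from how covariance entries respond to style changes is omitted.

```python
import torch
import torch.nn as nn


class InstanceSelectiveWhiteningLoss(nn.Module):
    """Minimal sketch: suppress selected entries of the per-instance
    channel covariance of a feature map.

    `selective_mask` is a (C, C) binary matrix marking the covariance
    entries treated as style-sensitive; building this mask is assumed
    to be done beforehand and is not shown here.
    """

    def __init__(self, selective_mask: torch.Tensor):
        super().__init__()
        self.register_buffer("mask", selective_mask.float())

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (N, C, H, W) intermediate feature map.
        n, c, h, w = feat.shape
        x = feat.reshape(n, c, h * w)
        # Center each channel per instance (instance-wise statistics).
        x = x - x.mean(dim=-1, keepdim=True)
        # Per-instance channel covariance, shape (N, C, C).
        cov = torch.bmm(x, x.transpose(1, 2)) / (h * w - 1)
        # Keep only the selected entries and push them toward zero.
        masked = cov * self.mask  # mask broadcasts over the batch dimension
        denom = self.mask.sum().clamp(min=1.0)
        return masked.abs().sum(dim=(1, 2)).div(denom).mean()


if __name__ == "__main__":
    # Toy usage with a random symmetric off-diagonal mask (illustration only).
    C = 64
    mask = torch.triu(torch.rand(C, C).lt(0.1).float(), diagonal=1)
    mask = mask + mask.t()
    criterion = InstanceSelectiveWhiteningLoss(mask)
    print(criterion(torch.randn(2, C, 32, 32)).item())
```

In an actual training setup the mask would be derived from which covariance entries shift under style perturbations, consistent with the abstract's goal of removing only the style information causing domain shift; the random mask above is purely for demonstration.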
Code Repositories
https://github.com/shachoi/RobustNet
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| domain-generalization-on-gta-to-avg | RobustNet | mIoU: 37.37 |
| robust-object-detection-on-dwd | ISW | mPC [AP50]: 26.3 |