Lee Kyungmoon, Kim Sungyeon, Kwak Suha

Abstract
Domain generalization is the task of learning models that generalize to unseen target domains. We propose a simple yet effective method for domain generalization, named cross-domain ensemble distillation (XDED), that learns domain-invariant features while encouraging the model to converge to flat minima, which recently turned out to be a sufficient condition for domain generalization. To this end, our method generates an ensemble of the output logits from training data with the same label but from different domains and then penalizes each output for the mismatch with the ensemble. Also, we present a de-stylization technique that standardizes features to encourage the model to produce style-consistent predictions even in an arbitrary target domain. Our method greatly improves generalization capability in public benchmarks for cross-domain image classification, cross-dataset person re-ID, and cross-dataset semantic segmentation. Moreover, we show that models learned by our method are robust against adversarial attacks and image corruptions.
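The two ingredients described above can be sketched in code. The snippet below is a minimal NumPy illustration, not the authors' implementation: `xded_loss` builds, for each label, an ensemble target from the softened predictions of same-label samples (which are assumed to come from different domains within the batch) and penalizes each sample's prediction by its KL divergence from that ensemble; `destylize` standardizes features per sample, in the spirit of the paper's de-stylization. The temperature `tau`, the epsilon values, and all function names are illustrative assumptions.

```python
import numpy as np

def softmax(z, tau=1.0):
    # temperature-scaled softmax over the class axis
    z = z / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def xded_loss(logits, labels, tau=4.0):
    # logits: (N, C) mini-batch outputs; labels: (N,) class labels.
    # Samples sharing a label are assumed to come from different domains.
    probs = softmax(logits, tau)
    loss, count = 0.0, 0
    for c in np.unique(labels):
        group = probs[labels == c]
        if len(group) < 2:
            continue  # need at least two samples to form an ensemble
        target = group.mean(axis=0)  # cross-domain ensemble prediction
        for p in group:
            # KL(target || p): mismatch of each output with the ensemble
            loss += np.sum(target * (np.log(target + 1e-12) - np.log(p + 1e-12)))
            count += 1
    return loss / max(count, 1)

def destylize(feat, eps=1e-5):
    # Standardize each sample's feature map over spatial dims,
    # removing per-instance style statistics (mean and std).
    mu = feat.mean(axis=(-2, -1), keepdims=True)
    sigma = feat.std(axis=(-2, -1), keepdims=True)
    return (feat - mu) / (sigma + eps)
```

Note that when all same-label predictions already agree, the ensemble equals each member and the loss vanishes; the penalty therefore pushes predictions from different domains toward a shared, domain-invariant output distribution.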
Benchmarks
| Benchmark | Methodology | Metric | Score |
|---|---|---|---|
| domain-generalization-on-office-home | XDED (ResNet-18) | Average Accuracy | 67.4 |
| domain-generalization-on-pacs-2 | XDED (ResNet-18) | Average Accuracy | 86.4 |
| image-to-sketch-recognition-on-pacs | XDED (ResNet-18) | Accuracy | 51.5 |
| single-source-domain-generalization-on-pacs | XDED (ResNet-18) | Accuracy | 66.5 |