Yiyuan Zhang, Kaixiong Gong, Xiaohan Ding, Kaipeng Zhang, Fangrui Lv, Kurt Keutzer, Xiangyu Yue

Abstract
We propose $\textbf{UniDG}$, a novel and $\textbf{Uni}$fied framework for $\textbf{D}$omain $\textbf{G}$eneralization that significantly enhances the out-of-distribution generalization performance of foundation models regardless of their architectures. The core idea of UniDG is to fine-tune models during the inference stage, which saves the cost of iterative training. Specifically, we encourage models to learn the distribution of the test data in an unsupervised manner and impose a penalty on the update step of the model parameters. The penalty term effectively mitigates catastrophic forgetting, since we aim to maximally preserve the valuable knowledge of the original model. Empirically, across 12 visual backbones, including CNN-, MLP-, and Transformer-based models ranging from 1.89M to 303M parameters, UniDG shows an average accuracy improvement of +5.4% on DomainBed. These results demonstrate the superiority and versatility of UniDG. The code is publicly available at https://github.com/invictus717/UniDG.
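The recipe in the abstract, unsupervised adaptation on unlabeled test data plus a penalty that keeps parameters close to the source model, can be sketched as follows. This is a minimal PyTorch sketch, assuming entropy minimization as the unsupervised objective and an L2 anchor to the frozen source weights as the penalty; the function name and hyperparameters (`lr`, `penalty_weight`, `steps`) are illustrative assumptions, not the official UniDG implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def adapt_at_test_time(model, test_loader, lr=1e-4, penalty_weight=1.0, steps=1):
    """Test-time adaptation sketch: entropy minimization on unlabeled test
    batches, plus an L2 penalty anchoring the adapted parameters to the
    source model to limit catastrophic forgetting.

    NOTE: an illustrative approximation of the idea in the abstract,
    not the official UniDG objective.
    """
    # Freeze a copy of the source weights to serve as the anchor.
    source_params = [p.detach().clone() for p in model.parameters()]
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    model.train()
    for _ in range(steps):
        for x, _ in test_loader:  # labels are never used
            probs = F.softmax(model(x), dim=1)
            # Unsupervised objective: mean prediction entropy on test data.
            entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
            # Penalty on the update step: keep parameters near the source model.
            penalty = sum(
                ((p - p0) ** 2).sum()
                for p, p0 in zip(model.parameters(), source_params)
            )
            loss = entropy + penalty_weight * penalty
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

In this formulation the penalty term bounds how far the adapted weights can drift from the pretrained ones, which is one common way to trade off adaptation to the test distribution against preserving source knowledge.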
Benchmarks
| Benchmark | Method | Average Accuracy (%) |
|---|---|---|
| Domain Generalization on DomainNet | UniDG + CORAL + ConvNeXt-B | 59.5 |
| Domain Generalization on Office-Home | UniDG + CORAL + ConvNeXt-B | 88.9 |
| Domain Generalization on PACS | UniDG + CORAL + ConvNeXt-B | 95.6 |
| Domain Generalization on TerraIncognita | UniDG + CORAL + ConvNeXt-B | 69.6 |
| Domain Generalization on VLCS | UniDG + CORAL + ConvNeXt-B | 84.5 |