Xian-Sheng Hua, Jianqiang Huang, Feng Gao, Xinmei Tian, Zhiheng Yin, Yonggang Zhang, Xu Shen, Chaoqun Wan

Abstract
In single domain generalization, models trained on data from only one domain are required to perform well on many unseen domains. In this paper, we propose a new model, termed meta convolutional neural network, to solve the single domain generalization problem in image recognition. The key idea is to decompose the convolutional features of images into meta features. Acting as "visual words", meta features are defined as universal and basic visual elements for image representation (like words for documents in language). Taking meta features as references, we propose compositional operations that eliminate irrelevant components of local convolutional features through an addressing process and then reformulate the convolutional feature maps as compositions of related meta features. In this way, images are coded universally, without biased information from the unseen domain, and can be processed by subsequent modules trained on the source domain. The compositional operations adopt a regression analysis technique to learn the meta features in an online batch learning manner. Extensive experiments on multiple benchmark datasets verify the superiority of the proposed model in improving single domain generalization ability.
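The addressing-and-composition idea described above can be sketched as a soft lookup against a dictionary of meta features: each local convolutional feature is compared to every meta feature, and the feature map is then rebuilt as a weighted combination of the related meta features, discarding anything outside the dictionary's span. The following NumPy sketch is illustrative only; the function names and the softmax-based addressing are assumptions for clarity, not the paper's exact regression-based formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compose_with_meta_features(feat, meta, temperature=1.0):
    """Address each local feature against a meta-feature dictionary,
    then reformulate it as a combination of related meta features.

    feat: (N, C) local convolutional features (e.g. flattened H*W positions)
    meta: (K, C) meta features ("visual words")
    returns: (N, C) composed features lying in the span of `meta`
    """
    # Addressing: similarity of each local feature to each meta feature.
    scores = feat @ meta.T / temperature   # (N, K)
    weights = softmax(scores, axis=1)      # soft assignment over the dictionary
    # Composition: rebuild each local feature from the meta features,
    # which removes components not representable by the dictionary.
    return weights @ meta                  # (N, C)
```

Because the output is a weighted sum of dictionary rows, every composed feature lies in the subspace spanned by the meta features, which is what makes the coding domain-agnostic in the sense described above.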
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| photo-to-rest-generalization-on-pacs | MetaCNN (AlexNet) | Accuracy: 57.17 |
| single-source-domain-generalization-on-digits | MetaCNN (LeNet) | Accuracy: 78.76 |