Ming-Yu Liu; Oncel Tuzel

Abstract
We propose the coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to existing approaches, which require tuples of corresponding images across domains in the training set, CoGAN learns a joint distribution without any tuples of corresponding images; samples drawn from the marginal distributions suffice. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint-distribution solution over a product of marginal distributions. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. In each task it successfully learns the joint distribution without any tuples of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.
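The weight-sharing idea in the abstract can be sketched as follows: two generators consume the same noise vector, share their early (high-level) layers, and keep separate output layers per domain, so one draw of noise yields a corresponding pair of images. This is a minimal toy illustration, not the paper's architecture; the layer sizes, the tanh activations, and the two-layer structure are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

NOISE_DIM, HIDDEN_DIM, IMG_DIM = 8, 16, 4  # illustrative sizes

# Shared weights: both generators use the SAME first-layer parameters,
# which is what ties the high-level semantics of the two outputs together.
W_shared = rng.standard_normal((NOISE_DIM, HIDDEN_DIM))

# Domain-specific weights: each domain gets its own output layer.
W_domain1 = rng.standard_normal((HIDDEN_DIM, IMG_DIM))
W_domain2 = rng.standard_normal((HIDDEN_DIM, IMG_DIM))

def generate_pair(z):
    """Map one noise vector z to a corresponding pair of images."""
    h = np.tanh(z @ W_shared)       # shared high-level representation
    x1 = np.tanh(h @ W_domain1)     # domain-1 rendering (e.g. color)
    x2 = np.tanh(h @ W_domain2)     # domain-2 rendering (e.g. depth)
    return x1, x2

z = rng.standard_normal(NOISE_DIM)
x1, x2 = generate_pair(z)
```

In training, each domain would also have its own discriminator (with weight sharing in the later, high-level layers), and the shared parameters receive gradients from both domains' adversarial losses; that coupling is what favors a joint distribution over an independent product of marginals.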
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| image-to-image-translation-on-cityscapes | CoGAN | Class IOU: 0.06 Per-class Accuracy: 10% Per-pixel Accuracy: 40% |
| image-to-image-translation-on-cityscapes-1 | CoGAN | Class IOU: 0.08 Per-class Accuracy: 11% Per-pixel Accuracy: 45% |