
First Order Generative Adversarial Networks

Calvin Seward; Thomas Unterthiner; Urs Bergmann; Nikolay Jetchev; Sepp Hochreiter


Abstract

GANs excel at learning high dimensional distributions, but they can update generator parameters in directions that do not correspond to the steepest descent direction of the objective. Prominent examples of problematic update directions include those used in both Goodfellow's original GAN and the WGAN-GP. To formally describe an optimal update direction, we introduce a theoretical framework which allows the derivation of requirements on both the divergence and corresponding method for determining an update direction, with these requirements guaranteeing unbiased mini-batch updates in the direction of steepest descent. We propose a novel divergence which approximates the Wasserstein distance while regularizing the critic's first order information. Together with an accompanying update direction, this divergence fulfills the requirements for unbiased steepest descent updates. We verify our method, the First Order GAN, with image generation on CelebA, LSUN and CIFAR-10 and set a new state of the art on the One Billion Word language generation task. Code to reproduce experiments is available.
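To make the divergence described above more concrete, the following is a minimal sketch of a Wasserstein-style critic objective with a penalty on the critic's first order information. It is not the authors' implementation: the penalty form (a squared gradient norm evaluated at generated samples) and all names (`make_critic`, `critic_loss`, `penalty_weight`) are illustrative assumptions, and the example is written with TensorFlow 2 while the official repository is only tagged as TensorFlow.

```python
# Sketch of a penalized Wasserstein-style critic loss (assumptions labeled below).
import tensorflow as tf

def make_critic():
    """A tiny fully connected critic, for illustration only."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

def critic_loss(critic, real, fake, penalty_weight=10.0):
    """Negative Wasserstein estimate plus a first order regularizer.

    The choice of penalizing the squared gradient norm at the generated
    samples is an assumption made for this sketch, not taken from the paper.
    """
    with tf.GradientTape() as tape:
        tape.watch(fake)
        score_fake = critic(fake)
    # First order information of the critic, evaluated at the generated samples.
    grad = tape.gradient(score_fake, fake)
    penalty = tf.reduce_mean(tf.reduce_sum(tf.square(grad), axis=1))
    wasserstein = tf.reduce_mean(critic(real)) - tf.reduce_mean(score_fake)
    # The critic maximizes the penalized Wasserstein estimate, so minimize its negative.
    return -wasserstein + penalty_weight * penalty

# Usage with random tensors standing in for data and generator output.
critic = make_critic()
real = tf.random.normal((32, 64))
fake = tf.random.normal((32, 64))
print(float(critic_loss(critic, real, fake)))
```

Plugging such a loss into a standard alternating GAN training loop gives the usual critic update; the paper's distinctive contribution is the accompanying generator update direction with its unbiasedness guarantees, which this sketch does not reproduce.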

Code Repositories

zalandoresearch/first_order_gan (official implementation, TensorFlow)

Benchmarks

Benchmark                                  Methodology   Metrics
Image Generation on CIFAR-10               FOGAN         FID: 27.4
Image Generation on LSUN Bedroom 64x64     FOGAN         FID: 11.4
