Learning Canonical Representations for Scene Graph to Image Generation
Roei Herzig, Amir Bar, Huijuan Xu, Gal Chechik, Trevor Darrell, Amir Globerson

Abstract
Generating realistic images of complex visual scenes becomes challenging when one wishes to control the structure of the generated images. Previous approaches showed that scenes with few entities can be controlled using scene graphs, but this approach struggles as the complexity of the graph (the number of objects and edges) increases. In this work, we show that one limitation of current methods is their inability to capture semantic equivalence in graphs. We present a novel model that addresses these issues by learning canonical graph representations from the data, resulting in improved image generation for complex visual scenes. Our model demonstrates improved empirical performance on large scene graphs, robustness to noise in the input scene graph, and generalization on semantically equivalent graphs. Finally, we show improved performance of the model on three different benchmarks: Visual Genome, COCO, and CLEVR.
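The abstract's central point is that different scene graphs can describe the same underlying scene, so a generator should map semantically equivalent graphs to one representation. Below is a minimal illustrative sketch of such a canonicalization over (subject, relation, object) triples; the specific converse pairs and transitive relations are assumptions for the example, not the paper's exact rule set or implementation.

```python
# Illustrative sketch: map semantically equivalent scene graphs to a
# canonical set of (subject, relation, object) triples.
# The CONVERSE and TRANSITIVE rules below are assumed for demonstration.

from itertools import product

# Assumed converse pairs: (s, "right of", o) is equivalent to (o, "left of", s).
CONVERSE = {"right of": "left of", "above": "below", "behind": "in front of"}
# Assumed transitive relations: (a, r, b) and (b, r, c) imply (a, r, c).
TRANSITIVE = {"left of", "below", "in front of"}


def canonicalize(triples):
    """Return a canonical triple set for a scene graph."""
    # 1. Rewrite converse relations so each pair uses a single direction.
    canon = set()
    for s, r, o in triples:
        if r in CONVERSE:
            s, r, o = o, CONVERSE[r], s
        canon.add((s, r, o))

    # 2. Take the transitive closure so equivalent graphs share the same edges.
    changed = True
    while changed:
        changed = False
        for (a, r1, b), (c, r2, d) in product(list(canon), repeat=2):
            if r1 == r2 and r1 in TRANSITIVE and b == c and (a, r1, d) not in canon:
                canon.add((a, r1, d))
                changed = True
    return canon


# Two surface forms of the same scene yield the same canonical graph.
g1 = {("cup", "left of", "plate"), ("plate", "left of", "fork")}
g2 = {("plate", "right of", "cup"), ("fork", "right of", "plate"),
      ("cup", "left of", "fork")}
assert canonicalize(g1) == canonicalize(g2)
```

In the paper's setting, such canonical representations are learned from data rather than hand-specified, but the sketch shows why equivalence handling matters: without it, the two inputs above would be treated as different scenes.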
Benchmarks
| Benchmark | Methodology | FID | Inception Score | LPIPS |
|---|---|---|---|---|
| layout-to-image-generation-on-coco-stuff-4 | AttSPADE | 54.7 | 15.6 | 0.44 |
| layout-to-image-generation-on-visual-genome-4 | AttSPADE | 36.4 | 11 | 0.51 |