High-Resolution Image Synthesis and Semantic Manipulation with
Conditional GANs
by
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, Bryan
Catanzaro
2017
Abstract
We present a new method for synthesizing high-resolution photo-realistic
images from semantic label maps using conditional generative adversarial
networks (conditional GANs). Conditional GANs have enabled a variety of
applications, but the results are often limited to low resolution and still far
from realistic. In this work, we generate 2048×1024 visually appealing results
with a novel adversarial loss, as well as new multi-scale generator and
discriminator architectures. Furthermore, we extend our framework to
interactive visual manipulation with two additional features. First, we
incorporate object instance segmentation information, which enables object
manipulations such as removing/adding objects and changing the object category.
Second, we propose a method to generate diverse results given the same input,
allowing users to edit the object appearance interactively. Human opinion
studies demonstrate that our method significantly outperforms existing methods,
advancing both the quality and the resolution of deep image synthesis and
editing.
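The abstract mentions new multi-scale discriminator architectures, in which the same discriminator design is applied to the image at several resolutions and the resulting scores are combined. A minimal sketch of that idea follows, using NumPy average-pooling for downsampling; the function names (`downsample`, `toy_discriminator`, `multiscale_scores`) and the trivial "realness" score are illustrative assumptions, not the paper's actual networks or API.

```python
# Hedged sketch of the multi-scale discriminator idea: run the same
# discriminator on progressively downsampled copies of an image and
# collect one score per scale. NumPy stand-in; not the paper's code.
import numpy as np

def downsample(img, factor=2):
    """Average-pool a (H, W) image by `factor` (H, W divisible by factor)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def toy_discriminator(img):
    """Placeholder 'realness' score: here simply the mean activation."""
    return float(img.mean())

def multiscale_scores(img, num_scales=3):
    """Apply the same discriminator at each scale of an image pyramid."""
    scores = []
    for s in range(num_scales):
        scores.append(toy_discriminator(img))
        if s < num_scales - 1:          # keep the last scale un-pooled
            img = downsample(img)
    return scores

img = np.arange(16.0).reshape(4, 4)     # tiny 4x4 stand-in image
print(multiscale_scores(img))           # one score per scale: [7.5, 7.5, 7.5]
```

In the paper's setting each scale would use a learned convolutional discriminator and the per-scale adversarial losses would be summed; here the mean-based placeholder just makes the pyramid structure visible (the mean is invariant under average pooling, so all three scores coincide).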
Archived Files and Locations
application/pdf, 9.7 MB
arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1711.11585v1