High-Resolution Deep Convolutional Generative Adversarial Networks

by J. D. Curtó and H. C. Zarza and T. Kim

Released as an article.

2018  

Abstract

Convergence of Generative Adversarial Networks (GANs) [Goodfellow et al. 2014] in a high-resolution setting, under the computational constraint of GPU memory capacity, has been beset with difficulty due to the known instability of the convergence rate. In order to boost network convergence of DCGAN (Deep Convolutional Generative Adversarial Networks) [Radford et al. 2016] and achieve good-looking high-resolution results, we propose a new layered network structure, HDCGAN, that incorporates current state-of-the-art techniques to this end. Glasses, a mechanism to arbitrarily improve the final GAN-generated results by enlarging the input size by a telescope ζ, is also presented. A novel bias-free dataset, Graphics, containing human faces from different ethnic groups in a wide variety of illumination conditions and image resolutions, is introduced. Graphics is enhanced with HDCGAN synthetic images, thus being the first GAN-augmented face dataset. We conduct extensive experiments on CelebA [Liu et al. 2015], CelebA-HQ [Karras et al. 2018] and Graphics. HDCGAN is the current state-of-the-art in synthetic image generation on CelebA, achieving an MS-SSIM of 0.1978 and a Fréchet Inception Distance of 8.44.
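The Fréchet Inception Distance (FID) reported above (8.44 on CelebA) compares the Gaussian statistics of real and generated images in Inception feature space: FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal sketch of that formula, assuming the feature vectors have already been extracted (the random arrays below merely stand in for real Inception activations):

```python
import numpy as np

def fid(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet Inception Distance between two (n_samples, n_features) sets.

    FID = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2})
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr((S_a S_b)^{1/2}) equals the sum of the square roots of the
    # eigenvalues of S_a @ S_b (real and non-negative for PSD matrices;
    # tiny negative/imaginary parts are numerical noise and are clipped).
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b) - 2.0 * tr_sqrt)
```

Identical feature sets give FID ≈ 0; a pure mean shift contributes exactly its squared norm, since equal covariances cancel in the trace term.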

Archived Files and Locations

application/pdf  10.2 MB
file_qxjcyoiw3bh27ivqgnbnky4xai
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2018-05-31
Version   v11
Language   en
arXiv  1711.06491v11
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: d49f503e-4712-40dc-850f-08e8ae4a5727
API URL: JSON