Unsupervised Neural Rendering for Image Hazing
by
Boyun Li, Yijie Lin, Xiao Liu, Peng Hu, Jiancheng Lv, Xi Peng
2021
Abstract
Image hazing aims to render a hazy image from a given clean one, and has a
variety of practical applications such as gaming, filming, photographic
filtering, and image dehazing. To generate plausible haze, we
study two less-touched but challenging problems in hazy image rendering,
namely, i) how to estimate the transmission map from a single image without
auxiliary information, and ii) how to adaptively learn the airlight from
exemplars, i.e., unpaired real hazy images. To this end, we propose a neural
rendering method for image hazing, dubbed as HazeGEN. To be specific, HazeGEN
is a knowledge-driven neural network which estimates the transmission map by
leveraging a new prior, i.e., there exists structural similarity (e.g., in
contour and luminance) between the transmission map and the input clean image.
To adaptively learn the airlight, we build a neural module based on another new
prior, i.e., the rendered hazy image and the exemplar are similar in the
airlight distribution. To the best of our knowledge, this could be the first
attempt to deeply rendering hazy images in an unsupervised fashion. Comparing
with existing haze generation methods, HazeGEN renders the hazy images in an
unsupervised, learnable, and controllable manner, thus avoiding the
labor-intensive efforts in paired data collection and the domain-shift issue in
haze generation. Extensive experiments show the promising performance of our
method comparing with some baselines in both qualitative and quantitative
comparisons. The code will be released on GitHub after acceptance.
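The abstract frames hazing in terms of a transmission map and an airlight, the two quantities of the standard atmospheric scattering model widely used in haze rendering and dehazing work. As a rough illustration only (not HazeGEN itself, whose transmission map and airlight are learned from data), a hazy image can be composed from hand-supplied values as follows:

```python
import numpy as np

def render_haze(clean, transmission, airlight):
    # Standard atmospheric scattering model: I = J * t + A * (1 - t),
    # where J is the clean image, t the per-pixel transmission map,
    # and A the airlight. In HazeGEN both t and A are estimated by
    # neural modules; here they are fixed by hand for illustration.
    t = transmission[..., np.newaxis]  # broadcast t over RGB channels
    return clean * t + airlight * (1.0 - t)

# Toy example: uniform transmission map and a grayish airlight.
clean = np.full((4, 4, 3), 0.5)    # mid-gray "clean" image
t_map = np.full((4, 4), 0.6)       # uniform transmission
A = np.array([0.8, 0.8, 0.8])      # per-channel airlight
hazy = render_haze(clean, t_map, A)
# every pixel: 0.5 * 0.6 + 0.8 * (1 - 0.6) = 0.62
```

The haze becomes denser as the transmission `t` drops toward 0, at which point pixels approach the airlight color; with `t = 1` the clean image is returned unchanged.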
Archived Files and Locations
application/pdf, 19.7 MB (2107.06681v1)
arxiv.org (repository) | web.archive.org (webarchive)