Sdf-GAN: Semi-supervised Depth Fusion with Multi-scale Adversarial Networks
by
Can Pu, Runzi Song, Radim Tylecek, Nanbo Li, Robert B Fisher
2019
Abstract
Refining raw disparity maps from different algorithms to exploit their
complementary advantages is still challenging. Uncertainty estimation and
complex disparity relationships among pixels limit the accuracy and robustness
of existing methods, and there is no standard method for fusing different
kinds of depth data. In this paper, we introduce a new method to fuse disparity
maps from different sources, while incorporating supplementary information
(intensity, gradient, etc.) into a refiner network to better refine raw
disparity inputs. A discriminator network classifies disparities at different
receptive fields and scales. Assuming a Markov Random Field for the refined
disparity map produces better estimates of the true disparity distribution.
Both fully supervised and semi-supervised versions of the algorithm are
proposed. The approach includes a more robust loss function to inpaint invalid
disparity values and requires much less labeled data to train in the
semi-supervised learning mode. The algorithm can be generalized to fuse depths
from different kinds of depth sources. Experiments explored different fusion
opportunities: stereo-monocular fusion, stereo-ToF fusion and stereo-stereo
fusion. The experiments show the superiority of the proposed algorithm over
the most recent algorithms on public synthetic datasets (Scene Flow, SYNTH3,
our synthetic garden dataset) and real datasets (the KITTI 2015 and
Trimbot2020 Garden datasets).
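The fusion task the abstract describes can be illustrated with a naive, non-learned baseline: a per-pixel confidence-weighted average of two raw disparity maps, with non-positive (invalid) disparities excluded. This is a hypothetical sketch for intuition only; the function and confidence inputs are assumptions, not the authors' adversarial refiner, which additionally inpaints pixels that both sources leave invalid.

```python
import numpy as np

def naive_fuse(disp_a, disp_b, conf_a, conf_b):
    """Confidence-weighted fusion of two raw disparity maps.

    Pixels with non-positive disparity are treated as invalid and get
    zero weight; where both inputs are invalid the output stays 0,
    left for an inpainting/refinement step to fill.
    """
    wa = np.where(disp_a > 0, conf_a, 0.0)
    wb = np.where(disp_b > 0, conf_b, 0.0)
    total = wa + wb
    return np.where(total > 0,
                    (wa * disp_a + wb * disp_b) / np.maximum(total, 1e-8),
                    0.0)

# Toy example: each source has one invalid (zero) pixel.
a = np.array([[10.0, 0.0], [12.0, 9.0]])   # e.g. stereo disparity
b = np.array([[11.0, 8.0], [0.0, 9.5]])    # e.g. monocular disparity
ca = np.full_like(a, 0.8)                  # assumed confidence maps
cb = np.full_like(b, 0.2)
print(naive_fuse(a, b, ca, cb))
```

Where one source is invalid the other is used unchanged; where both are valid the result is the confidence-weighted mean (e.g. 0.8·10 + 0.2·11 = 10.2 at the top-left pixel).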
Archived Files and Locations
application/pdf 4.1 MB
arxiv.org (repository), web.archive.org (webarchive)
arXiv: 1803.06657v2