Learning Task-Specific Generalized Convolutions in the Permutohedral Lattice

by Anne S. Wannenwetsch, Martin Kiefel, Peter V. Gehler, Stefan Roth

Released as an article.

2019  

Abstract

Dense prediction tasks typically employ encoder-decoder architectures, but the prevalent convolutions in the decoder are not image-adaptive and can lead to boundary artifacts. Different generalized convolution operations have been introduced to counteract this. We go beyond these by leveraging guidance data to redefine their inherent notion of proximity. Our proposed network layer builds on the permutohedral lattice, which performs sparse convolutions in a high-dimensional space allowing for powerful non-local operations despite small filters. Multiple features with different characteristics span this permutohedral space. In contrast to prior work, we learn these features in a task-specific manner by generalizing the basic permutohedral operations to learnt feature representations. As the resulting objective is complex, a carefully designed framework and learning procedure are introduced, yielding rich feature embeddings in practice. We demonstrate the general applicability of our approach in different joint upsampling tasks. When adding our network layer to state-of-the-art networks for optical flow and semantic segmentation, boundary artifacts are removed and the accuracy is improved.
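To illustrate the core idea of the abstract, the sketch below shows image-adaptive filtering where guidance features redefine proximity: each point is averaged with points that are close in a joint feature space (here, position and intensity), not just in image space. This is a brute-force O(n²) toy in NumPy, not the authors' permutohedral lattice, which achieves the same kind of high-dimensional Gaussian filtering sparsely and near-linearly; the function name, feature scales, and 1-D signal are illustrative assumptions.

```python
import numpy as np

def feature_filter(values, features, sigma=1.0):
    """Average `values` with Gaussian weights in feature space.

    Point i receives sum_j w_ij * values[j] / sum_j w_ij, where
    w_ij = exp(-||f_i - f_j||^2 / (2 sigma^2)). Brute-force analogue
    of sparse high-dimensional (permutohedral-style) filtering.
    """
    diff = features[:, None, :] - features[None, :, :]      # (n, n, d)
    w = np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma**2))  # (n, n)
    return (w @ values) / w.sum(axis=1)

# Toy 1-D "image": a noisy step edge. Features = (position, intensity),
# so averaging stays on each side of the edge (image-adaptive smoothing
# without boundary blurring); the feature scales play the role of the
# hand-crafted features that the paper proposes to learn instead.
x = np.linspace(0, 1, 50)
signal = (x > 0.5).astype(float)
noisy = signal + 0.1 * np.random.default_rng(0).normal(size=x.size)
feats = np.stack([x / 0.1, noisy / 0.3], axis=1)
smoothed = feature_filter(noisy, feats)
```

Because intensity is part of the feature vector, points across the step edge are far apart in feature space and receive negligible weight, so noise is suppressed on each plateau while the edge stays sharp.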

Archived Files and Locations

application/pdf  2.4 MB
file_bbvbz7ycffcmrizh4xovjn4huu
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-09-09
Version   v1
Language   en
arXiv  1909.03677v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 533109b7-94c0-4c4a-b4d5-51b9fe101a8f