Sparse2Dense: From direct sparse odometry to dense 3D reconstruction

by Jiexiong Tang, John Folkesson, Patric Jensfelt

Released as an article.

2019  

Abstract

In this paper, we propose a new deep-learning-based dense monocular SLAM method. Compared to existing methods, the proposed framework constructs a dense 3D model via a sparse-to-dense mapping using learned surface normals. Using single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normals are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
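The abstract's core idea is a single network that jointly predicts dense depth and surface normals, taking sparse visual-odometry depth as an additional input. Below is a minimal, hypothetical PyTorch sketch of such a joint depth-and-normal network; it is an illustrative toy model, not the authors' architecture, and all layer sizes and choices are assumptions.

# Minimal illustrative sketch (PyTorch). This is NOT the paper's network,
# only a toy example of predicting dense depth and surface normals jointly
# from an RGB image plus a sparse depth map (e.g. from visual odometry).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthNormalNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder input has 4 channels: RGB + sparse depth (zeros where no measurement).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Shared features feed two heads: 1-channel dense depth, 3-channel normals.
        self.depth_head = nn.Conv2d(32, 1, 3, padding=1)
        self.normal_head = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, rgb, sparse_depth):
        x = torch.cat([rgb, sparse_depth], dim=1)
        feat = self.decoder(self.encoder(x))
        depth = F.softplus(self.depth_head(feat))              # strictly positive depth
        normals = F.normalize(self.normal_head(feat), dim=1)   # unit-length normals
        return depth, normals

if __name__ == "__main__":
    net = DepthNormalNet()
    rgb = torch.rand(1, 3, 192, 256)
    sparse = torch.zeros(1, 1, 192, 256)   # mostly empty: sparse depth prior
    depth, normals = net(rgb, sparse)
    print(depth.shape, normals.shape)      # (1, 1, 192, 256), (1, 3, 192, 256)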

Archived Files and Locations

application/pdf  4.0 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-03-21
Version   v1
Language   en
arXiv  1903.09199v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: bef04cf7-ab7d-4c54-a83a-a92de93d5b31