Autonomous Navigation in Complex Environments with Deep Multimodal Fusion Network

by Anh Nguyen, Ngoc Nguyen, Kim Tran, Erman Tjiputra, Quang D. Tran

Released as an article.

2020  

Abstract

Autonomous navigation in complex environments is a crucial task in time-sensitive scenarios such as disaster response or search and rescue. However, such environments pose significant challenges for autonomous platforms: constrained narrow passages, unstable pathways with debris and obstacles, irregular geological structures, and poor lighting conditions. In this work, we propose a multimodal fusion approach to address the problem of autonomous navigation in complex environments such as collapsed cities or natural caves. We first simulate the complex environments in a physics-based simulation engine and collect a large-scale dataset for training. We then propose a Navigation Multimodal Fusion Network (NMFNet) with three branches to effectively handle three visual modalities: laser scans, RGB images, and point cloud data. Extensive experimental results show that our NMFNet outperforms the recent state of the art by a fair margin while achieving real-time performance. We further show that the use of multiple modalities is essential for autonomous navigation in complex environments. Finally, we successfully deploy our network on both simulated and real mobile robots.
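The abstract describes NMFNet as a three-branch network that fuses laser, RGB image, and point cloud inputs to produce navigation commands. As a rough illustration only, the sketch below shows one plausible way to wire such a three-branch fusion model in PyTorch; the layer choices, input shapes, branch designs, and the single steering-command output are assumptions for demonstration, not the paper's actual architecture.

    # Minimal sketch of a three-branch multimodal fusion network.
    # NOTE: all layer sizes and input shapes here are illustrative
    # assumptions; they are not taken from the NMFNet paper itself.
    import torch
    import torch.nn as nn

    class ThreeBranchFusionSketch(nn.Module):
        """Hypothetical fusion of laser scan, RGB image, and point cloud."""
        def __init__(self, n_outputs=1):
            super().__init__()
            # Branch 1: 1D convs over a 2D laser scan (assumed 1 x 1080 ranges).
            self.laser = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),          # -> (B, 64)
            )
            # Branch 2: small CNN over RGB images (assumed 3 x 120 x 160).
            self.rgb = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (B, 64)
            )
            # Branch 3: PointNet-style shared MLP over a 3 x N point cloud,
            # with max pooling as the permutation-invariant aggregator.
            self.points = nn.Sequential(
                nn.Conv1d(3, 64, kernel_size=1), nn.ReLU(),
                nn.Conv1d(64, 128, kernel_size=1), nn.ReLU(),
                nn.AdaptiveMaxPool1d(1), nn.Flatten(),          # -> (B, 128)
            )
            # Fusion head: concatenate branch features, regress a command.
            self.head = nn.Sequential(
                nn.Linear(64 + 64 + 128, 128), nn.ReLU(),
                nn.Linear(128, n_outputs),
            )

        def forward(self, laser, rgb, cloud):
            # laser: (B, 1, L); rgb: (B, 3, H, W); cloud: (B, 3, N)
            feats = torch.cat(
                [self.laser(laser), self.rgb(rgb), self.points(cloud)], dim=1)
            return self.head(feats)

    # Smoke test with random inputs of the assumed shapes.
    net = ThreeBranchFusionSketch()
    out = net(torch.randn(2, 1, 1080),
              torch.randn(2, 3, 120, 160),
              torch.randn(2, 3, 1024))
    print(out.shape)  # torch.Size([2, 1])

Late concatenation of per-modality features, as above, is only one possible fusion scheme; the key point the abstract makes is that combining all three modalities outperforms any single one.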

Archived Files and Locations

application/pdf  4.7 MB
file_6xzfcayx2jfv7pludrcwd3j4gq
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2020-07-31
Version: v1
Language: en
arXiv: 2007.15945v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 6007feb6-b148-4178-bbbf-394a3038cdfe
API URL: JSON