Causal Navigation by Continuous-time Neural Networks release_bz6kyrwunjcgnflaeqm5wgv5vm

by Charles Vorbach, Ramin Hasani, Alexander Amini, Mathias Lechner, Daniela Rus

Released as an article.

2021  

Abstract

Imitation learning enables high-fidelity, vision-based learning of policies within rich, photorealistic environments. However, such techniques often rely on traditional discrete-time neural models and face difficulties in generalizing to domain shifts by failing to account for the causal relationships between the agent and the environment. In this paper, we propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks, specifically over their discrete-time counterparts. We evaluate our method in the context of visual-control learning of drones over a series of complex tasks, ranging from short- and long-term navigation, to chasing static and dynamic objects through photorealistic environments. Our results demonstrate that causal continuous-time deep models can perform robust navigation tasks, where advanced recurrent models fail. These models learn complex causal control representations directly from raw visual inputs and scale to solve a variety of tasks using imitation learning.
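As a hedged illustration of the contrast the abstract draws between continuous-time and discrete-time models, the sketch below shows a minimal continuous-time recurrent cell whose hidden state evolves according to an ODE and is integrated with explicit Euler steps. This is not the authors' exact architecture; the network shapes, time constant `tau`, and step size `dt` are illustrative assumptions.

```python
import numpy as np

def ct_rnn_step(h, u, W, U, b, tau, dt):
    """One explicit-Euler step of a generic continuous-time RNN:
    dh/dt = -h / tau + tanh(W @ h + U @ u + b).
    Unlike a discrete-time RNN, the step size dt is an explicit
    parameter, so the same cell can be integrated at any temporal
    resolution."""
    dhdt = -h / tau + np.tanh(W @ h + U @ u + b)
    return h + dt * dhdt

# Illustrative dimensions and randomly initialized weights (assumptions).
rng = np.random.default_rng(0)
n_hidden, n_input = 8, 3
W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
U = rng.normal(scale=0.1, size=(n_hidden, n_input))
b = np.zeros(n_hidden)
h = np.zeros(n_hidden)

# Integrate the hidden state over a synthetic input signal; a smaller
# dt gives a finer approximation of the underlying ODE trajectory.
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])
    h = ct_rnn_step(h, u, W, U, b, tau=1.0, dt=0.1)

print(h.shape)
```

Because `tanh` is bounded and the leak term `-h / tau` pulls the state toward the origin, the hidden trajectory stays well behaved over long rollouts, which is one intuition behind using such cells for long-horizon control.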

Archived Files and Locations

application/pdf  17.8 MB
file_asp2czehmbbqhbfnuowgbajzb4
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2021-08-16
Version: v2
Language: en
arXiv: 2106.08314v2