VR-Goggles for Robots: Real-to-sim Domain Adaptation for Visual Control
by Jingwei Zhang, Lei Tai, Peng Yun, Yufeng Xiong, Ming Liu, Joschka Boedecker, Wolfram Burgard
2018
Abstract
In this paper, we deal with the reality gap from a novel perspective,
targeting the transfer of Deep Reinforcement Learning (DRL) policies learned
in simulated environments to the real-world domain for visual control tasks.
Instead of adopting the common solution of increasing the visual fidelity of
the synthetic images output by simulators during the training phase, we seek
to tackle the problem by translating the real-world image streams back to the
synthetic domain during the deployment phase, to make the robot feel at home.
We propose this as a lightweight, flexible, and efficient solution for visual
control, as 1) no extra transfer steps are required during the expensive
training of DRL agents in simulation; 2) the trained DRL agents are not
constrained to being deployable in only one specific real-world environment;
3) the policy training and the transfer operations are decoupled, and can be
conducted in parallel. Besides this, we propose a simple yet effective shift
loss that is agnostic to the downstream task, to constrain the consistency
between subsequent frames, which is important for consistent policy outputs.
We validate the shift loss on artistic style transfer for videos and on
domain adaptation, and validate our visual control approach in indoor and
outdoor robotics experiments.
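
The deployment-phase pipeline described in the abstract is compact enough to
summarize in code. Below is a minimal sketch, assuming a hypothetical
`goggles` image-translation network (e.g. a GAN generator trained for
real-to-sim translation) and a `policy` trained purely in simulation; the
names and tensor shapes are illustrative and not the authors' API:

```python
import torch

@torch.no_grad()
def act(policy, goggles, real_frame):
    """One control step of the "VR-Goggles" idea (illustrative sketch).

    real_frame: camera image tensor of shape (1, 3, H, W), normalized
    the same way the simulator frames were during policy training.
    """
    # Translate the real-world frame back into the synthetic domain the
    # policy was trained in ("making the robot feel at home").
    sim_frame = goggles(real_frame)
    # Query the simulation-trained policy on the translated frame; the
    # policy itself needs no fine-tuning at deployment time.
    return policy(sim_frame)
```

Because the translation network only wraps the policy at deployment, the two
can be trained independently, which is the decoupling the abstract points out.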
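The shift loss can also be made concrete. One natural reading of the frame
consistency constraint, sketched under my own assumptions rather than the
paper's exact formulation, is shift equivariance: shifting the generator's
input by a few pixels should shift its output by the same amount. The
`shift_loss` helper below is hypothetical and assumes a fully convolutional
`generator` whose output has the same spatial size as its input:

```python
import torch
import torch.nn.functional as F

def shift_loss(generator, x, dx=4, dy=4):
    """Penalize disagreement between G(shift(x)) and shift(G(x)).

    x: input batch of shape (N, C, H, W); dx, dy: pixel shift.
    Assumes a fully convolutional generator preserving H and W.
    """
    y = generator(x)
    # Cropping the top-left corner implements a (dy, dx) shift without
    # introducing wrap-around or padding artifacts at the borders.
    x_shifted = x[:, :, dy:, dx:]
    y_shifted = y[:, :, dy:, dx:]
    # The generator applied to the shifted input should match the
    # shifted generator output on the overlapping region.
    return F.l1_loss(generator(x_shifted), y_shifted)
```

Since the constraint only involves the generator and raw frames, it is
agnostic to the downstream task, as the abstract claims.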
Archived Files and Locations
application/pdf 10.3 MB
arxiv.org (repository) · web.archive.org (webarchive)
arXiv: 1802.00265v2