Unsupervised Reinforcement Learning for Transferable Manipulation Skill Discovery

by Daesol Cho, Jigang Kim, H. Jin Kim

Released as an article.

2022  

Abstract

Current reinforcement learning (RL) in robotics often struggles to generalize to new downstream tasks due to its inherently task-specific training paradigm. To alleviate this, unsupervised RL, a framework that pre-trains the agent in a task-agnostic manner without access to task-specific rewards, leverages active exploration to distill diverse experience into essential skills or reusable knowledge. To exploit these benefits in robotic manipulation as well, we propose an unsupervised method for transferable manipulation skill discovery that ties structured exploration toward interacting behavior to transferable skill learning. It not only enables the agent to learn interaction behavior, a key aspect of robotic manipulation learning, without access to the environment reward, but also allows it to generalize to arbitrary downstream manipulation tasks with the learned task-agnostic skills. Through comparative experiments, we show that our approach achieves the most diverse interacting behavior and significantly improves sample efficiency in downstream tasks, including extensions to multi-object, multi-task problems.
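The abstract does not specify the skill-discovery objective; a common instantiation of task-agnostic skill learning in unsupervised RL is a DIAYN-style mutual-information objective, where a discriminator infers which skill produced a visited state and its log-likelihood serves as intrinsic reward. The sketch below illustrates that family of objectives under this assumption; it is not the authors' method, and the state dimension, skill count, and placeholder environment state are hypothetical.

    # Minimal sketch of DIAYN-style skill discovery (an assumption, not the
    # paper's exact objective). STATE_DIM, N_SKILLS, and the random "state"
    # standing in for an environment observation are illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    STATE_DIM, N_SKILLS = 8, 16

    # Discriminator q(z | s): predicts which skill z produced state s.
    disc = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_SKILLS))
    opt = torch.optim.Adam(disc.parameters(), lr=3e-4)

    def intrinsic_reward(state, skill):
        # r = log q(z|s) - log p(z): high when the state identifies the skill.
        with torch.no_grad():
            log_q = F.log_softmax(disc(state), dim=-1)
        log_p = -torch.log(torch.tensor(float(N_SKILLS)))  # uniform prior p(z)
        return log_q[skill] - log_p

    for step in range(1000):
        z = torch.randint(N_SKILLS, ())   # sample a skill from the prior
        s = torch.randn(STATE_DIM)        # placeholder for an env observation
        r = intrinsic_reward(s, z)        # feed r to any off-the-shelf RL update
        # Train the discriminator to recover z from the visited state.
        loss = F.cross_entropy(disc(s).unsqueeze(0), z.unsqueeze(0))
        opt.zero_grad()
        loss.backward()
        opt.step()

In practice the intrinsic reward r would be passed to a skill-conditioned policy trained with a standard RL algorithm (e.g., SAC), and the resulting skills reused or fine-tuned on downstream manipulation tasks, as the abstract describes.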

Archived Files and Locations

application/pdf  4.2 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type      article
Stage     submitted
Date      2022-04-29
Version   v1
Language  en
arXiv     2204.13906v1