State representation learning with recurrent capsule networks

by Louis Annabi, Michael Garcia Ortiz

Released as an article.

2018  

Abstract

Unsupervised learning of compact and relevant state representations has proved very useful for solving complex reinforcement learning tasks. In this paper, we propose a recurrent capsule network that learns such representations by trying to predict the future observations in an agent's trajectory.
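The abstract describes learning state representations by predicting future observations with a recurrent capsule network. As a rough illustration of that idea (not the paper's actual architecture), the sketch below combines the previous capsule state with the current observation, applies the standard capsule "squash" nonlinearity per capsule, and reads out a prediction of the next observation; all layer sizes and weight names here are hypothetical.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Capsule "squash" nonlinearity: rescales each vector so its
    # norm lies in [0, 1) while preserving its direction.
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

class RecurrentCapsuleSketch:
    """Toy recurrent capsule state model (illustrative only).

    The capsule state is updated from the previous state and the
    current observation, then used to predict the next observation,
    which would serve as the self-supervised training target.
    """

    def __init__(self, obs_dim, n_caps, cap_dim, seed=0):
        rng = np.random.default_rng(seed)
        d = n_caps * cap_dim
        self.W_in = rng.normal(0.0, 0.1, (obs_dim, d))   # observation -> capsules
        self.W_rec = rng.normal(0.0, 0.1, (d, d))        # recurrent capsule update
        self.W_out = rng.normal(0.0, 0.1, (d, obs_dim))  # capsules -> predicted obs
        self.n_caps, self.cap_dim = n_caps, cap_dim

    def step(self, state, obs):
        # Linear combination of recurrent state and observation,
        # squashed independently for each capsule.
        pre = obs @ self.W_in + state.reshape(-1) @ self.W_rec
        new_state = squash(pre.reshape(self.n_caps, self.cap_dim))
        # Prediction of the next observation (the training target).
        pred_next_obs = new_state.reshape(-1) @ self.W_out
        return new_state, pred_next_obs
```

In a training loop, one would roll this model over an agent's trajectory and minimize the error between `pred_next_obs` and the observation actually seen at the next step; the squashed capsule vectors then serve as the learned compact state representation.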

Archived Files and Locations

application/pdf  403.7 kB
file_qmb325nvtnf57bnherrflefa6y
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2018-12-28
Version   v1
Language   en
arXiv  1812.11202v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 74539e76-2c78-4232-85ab-aa46c02597a8