State representation learning with recurrent capsule networks
release_b2xrhupeqrh6znffvdsdzwo7ve
by Louis Annabi, Michael Garcia Ortiz
Released as an article in 2018.
Abstract
Unsupervised learning of compact and relevant state representations has
proven very useful for solving complex reinforcement learning tasks. In this
paper, we propose a recurrent capsule network that learns such representations
by trying to predict the future observations in an agent's trajectory.
Archived Files and Locations
application/pdf, 403.7 kB (file_qmb325nvtnf57bnherrflefa6y)
Available from arxiv.org (repository) and web.archive.org (webarchive)
arXiv: 1812.11202v1
Work Entity: access all versions, variants, and formats of this work (e.g., pre-prints)