Reinforcement Learning in Factored Action Spaces using Tensor Decompositions

by Anuj Mahajan, Mikayel Samvelyan, Lei Mao, Viktor Makoviychuk, Animesh Garg, Jean Kossaifi, Shimon Whiteson, Yuke Zhu, Animashree Anandkumar

Released as an article.

2021  

Abstract

We present an extended abstract for the previously published work TESSERACT [Mahajan et al., 2021], which proposes a novel solution for Reinforcement Learning (RL) in large, factored action spaces using tensor decompositions. The goal of this abstract is twofold: (1) to garner greater interest amongst the tensor research community in creating methods and analysis for approximate RL; (2) to elucidate the generalised setting of factored action spaces where tensor decompositions can be used. We use the cooperative multi-agent reinforcement learning scenario as an exemplary setting, where the action space is naturally factored across agents and learning becomes intractable without resorting to approximation of the underlying hypothesis space of candidate solutions.
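To make the factored-action-space idea concrete, the sketch below shows how a joint Q-value tensor over n agents can be represented with a low-rank CP (CANDECOMP/PARAFAC) model, reducing storage from |A|^n entries to n * |A| * k parameters. This is a minimal illustration under assumed sizes, not the TESSERACT implementation; all names (n_agents, n_actions, rank, q_value) are hypothetical.

import numpy as np

# Minimal sketch (not the TESSERACT implementation): a joint Q-value
# tensor over n agents, each with |A| discrete actions, has |A|^n
# entries. A rank-k CP model instead stores one |A| x k factor matrix
# per agent, i.e. n * |A| * k parameters in total.

n_agents, n_actions, rank = 3, 5, 2  # illustrative sizes

rng = np.random.default_rng(0)
# One factor matrix per agent; in practice these would be produced by
# per-agent networks conditioned on the state.
factors = [rng.standard_normal((n_actions, rank)) for _ in range(n_agents)]

def q_value(joint_action):
    """Q(a_1, ..., a_n) = sum_r prod_i factors[i][a_i, r]."""
    prod = np.ones(rank)
    for agent, action in enumerate(joint_action):
        prod *= factors[agent][action]  # pick row a_i of agent i's factor
    return prod.sum()

# Evaluating one joint action touches only n * k numbers, rather than
# indexing into a dense |A|^n tensor.
print(q_value((0, 3, 1)))

Evaluating or updating such a model scales linearly in the number of agents for a fixed rank, which is what makes low-rank structure attractive when the joint action space grows exponentially.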

Archived Files and Locations

application/pdf  734.9 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2021-10-27
Version   v1
Language   en
arXiv  2110.14538v1