A Review of Cooperative Multi-Agent Deep Reinforcement Learning release_gky4pllebvd4no7nmdu7qvfnzi

by Afshin OroojlooyJadid, Davood Hajinezhad

Released as an article.

2019  

Abstract

Deep Reinforcement Learning has made significant progress in multi-agent systems in recent years. In this review article, we focus mostly on recent papers on Multi-Agent Reinforcement Learning (MARL) rather than older work, except where necessary. Many ideas and papers have been proposed with differing notations; we have tried our best to unify them under a single notation and to categorize them by relevance. In particular, we focus on five common approaches to modeling and solving multi-agent reinforcement learning problems: (I) independent learners, (II) fully observable critic, (III) value function decomposition, (IV) consensus, and (V) learning to communicate. Moreover, we discuss some emerging research areas in MARL along with relevant recent papers. We also discuss some recent real-world applications of MARL. Finally, we provide a list of available environments for MARL research and conclude with proposals for possible future research directions.

Archived Files and Locations

application/pdf  553.6 kB
file_xc4czokgozd3lm4zk7mes3smhu
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2019-09-18
Version   v2
Language   en
arXiv  1908.03963v2