A Review of Cooperative Multi-Agent Deep Reinforcement Learning
by
Afshin OroojlooyJadid, Davood Hajinezhad
2019
Abstract
Deep Reinforcement Learning has made significant progress in multi-agent
systems in recent years. In this review article, we focus mostly on recent
papers on Multi-Agent Reinforcement Learning (MARL) rather than older work,
except where necessary. Since the surveyed ideas and papers use different
notations, we have tried our best to unify them under a single notation and to
categorize them by their relevance. In particular, we focus on five common
approaches to modeling and solving multi-agent reinforcement learning
problems: (I) independent learners, (II) fully observable critic, (III) value
function decomposition, (IV) consensus, and (V) learning to communicate.
Moreover, we discuss some emerging research areas in MARL along with relevant
recent papers. In addition, some recent real-world applications of MARL are
discussed. Finally, a list of available environments for MARL research is
provided, and the paper concludes with proposals for possible research
directions.
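Of the five approaches listed above, independent learners are the simplest to illustrate: each agent runs an ordinary single-agent learning rule and treats the other agents as part of the environment. The sketch below is a hypothetical example, not taken from the paper: two tabular Q-learners play a small cooperative 2x2 matrix game with a shared team reward; the payoff matrix, learning rate, and exploration schedule are all illustrative assumptions.

```python
# Hypothetical sketch of independent learners: two tabular Q-learners
# in a cooperative 2x2 matrix game with a shared team reward.
# Each agent updates its own Q-table and treats the other agent as
# part of the (non-stationary) environment.
import random

# Shared cooperative payoff: both agents must pick action 1 for the
# highest reward. (Illustrative numbers, not from the paper.)
PAYOFF = [[1.0, 0.0],
          [0.0, 5.0]]

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    # One stateless Q-table per agent: q[agent][action].
    q = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(episodes):
        acts = []
        for i in range(2):
            if rng.random() < eps:          # epsilon-greedy exploration
                acts.append(rng.randrange(2))
            else:                           # greedy, ties broken toward 0
                acts.append(0 if q[i][0] >= q[i][1] else 1)
        r = PAYOFF[acts[0]][acts[1]]        # shared team reward
        for i in range(2):
            # Independent Q-update: no knowledge of the other agent's action.
            q[i][acts[i]] += alpha * (r - q[i][acts[i]])
    return q

q = train()
greedy = [0 if qi[0] >= qi[1] else 1 for qi in q]
```

Because each agent ignores the other's learning, the pair may settle on the safer joint action (0, 0) instead of the higher-payoff (1, 1); this non-stationarity and miscoordination is exactly the kind of limitation that motivates the other four families of approaches surveyed in the review.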
Archived Files and Locations
application/pdf, 545.4 kB — arxiv.org (repository), web.archive.org (webarchive)
arXiv:1908.03963v1