UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning

by Tarun Gupta, Anuj Mahajan, Bei Peng, Wendelin Böhmer, Shimon Whiteson

Released as an article.

2020  

Abstract

This paper focuses on cooperative value-based multi-agent reinforcement learning (MARL) in the paradigm of centralized training with decentralized execution (CTDE). Current state-of-the-art value-based MARL methods leverage CTDE to learn a centralized joint-action value function as a monotonic mixing of each agent's utility function, which enables easy decentralization. However, this monotonic restriction leads to inefficient exploration in tasks with nonmonotonic returns due to suboptimal approximations of the values of joint actions. To address this, we present a novel MARL approach called Universal Value Exploration (UneVEn), which uses universal successor features (USFs) to learn, in a sample-efficient manner, policies for tasks that are related to the target task but have simpler reward functions. UneVEn uses novel action-selection schemes between randomly sampled related tasks during exploration, which enables the monotonic joint-action value function of the target task to place more importance on useful joint actions. Empirical results on a challenging cooperative predator-prey task requiring significant coordination amongst agents show that UneVEn significantly outperforms state-of-the-art baselines.
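The two ingredients the abstract names, per-agent values conditioned on task weights via universal successor features and exploration by acting greedily on randomly sampled related tasks, can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the paper's implementation; the names (USFAgent, explore_action) and parameters (feat_dim, sigma) are assumptions made for exposition. It assumes rewards are linear in a feature vector, r = φ · w, so a single network generalizes across tasks parameterized by w.

```python
import torch
import torch.nn as nn

class USFAgent(nn.Module):
    """Per-agent universal successor features (USFs): psi(s, a; w) estimates
    the expected discounted sum of reward features phi under the policy for
    task w, so the utility of any task with linear reward r = phi . w is
    Q(s, a; w) = psi(s, a; w) . w."""

    def __init__(self, obs_dim: int, n_actions: int, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions * feat_dim),
        )
        self.n_actions, self.feat_dim = n_actions, feat_dim

    def q_values(self, obs: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        # obs: (batch, obs_dim); w: (batch, feat_dim) task weight vector.
        psi = self.net(torch.cat([obs, w], dim=-1))
        psi = psi.view(-1, self.n_actions, self.feat_dim)
        return (psi * w.unsqueeze(1)).sum(dim=-1)  # (batch, n_actions)

def explore_action(agent: USFAgent, obs: torch.Tensor,
                   w_target: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    # Perturb the target task's weights to sample a related task, then act
    # greedily with respect to that task's values.
    w_related = w_target + sigma * torch.randn_like(w_target)
    return agent.q_values(obs, w_related).argmax(dim=-1)

# Usage: decentralized agents would each run this on local observations.
agent = USFAgent(obs_dim=8, n_actions=5, feat_dim=4)
actions = explore_action(agent, torch.randn(2, 8), torch.randn(2, 4))
```

Because the sampled task shares reward features with the target task, greedy actions on it can surface useful joint actions that the monotonic joint-action value function of the target task would otherwise undervalue.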

Archived Files and Locations

application/pdf  3.0 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-10-06
Version   v1
Language   en
arXiv  2010.02974v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: d2e6bdb4-b5ea-4aa9-b8b7-09cf79810274