Graph Convolutional Reinforcement Learning for Multi-Agent Cooperation

by Jiechuan Jiang, Chen Dun, Zongqing Lu

Released as an article.

2018  

Abstract

Learning to cooperate is crucially important in multi-agent environments. The key is to understand the mutual interplay between agents. However, multi-agent environments are highly dynamic, which makes it hard to learn abstract representations of this interplay. In this paper, we propose graph convolutional reinforcement learning for multi-agent cooperation, where graph convolution adapts to the dynamics of the underlying graph of the multi-agent environment, and relation kernels capture the interplay between agents through their relation representations. Latent features produced by convolutional layers with gradually increasing receptive fields are exploited to learn cooperation, and cooperation is further boosted by temporal relation regularization for consistency. Empirically, we show that our method substantially outperforms existing methods in a variety of cooperative scenarios.
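
As a rough illustration of the architecture the abstract describes, the following is a minimal PyTorch sketch, not the authors' released code. The class name RelationKernel, the head count, and the helper temporal_relation_reg are hypothetical, and the exact form of the regularizer in the paper may differ; this only shows the general idea of attention-based graph convolution over agent neighborhoods plus a consistency term on the relation weights.

import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationKernel(nn.Module):
    # One graph-convolutional layer: multi-head dot-product attention over
    # each agent's neighborhood, so the receptive field grows by one hop
    # per stacked layer. Feature size and head count are illustrative.
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor):
        # h: (N, dim) agent features; adj: (N, N) 0/1 neighborhood mask,
        # which should include self-loops so every row attends somewhere.
        n, d = h.shape
        dk = d // self.heads
        q = self.q(h).view(n, self.heads, dk).transpose(0, 1)  # (H, N, dk)
        k = self.k(h).view(n, self.heads, dk).transpose(0, 1)
        v = self.v(h).view(n, self.heads, dk).transpose(0, 1)
        scores = q @ k.transpose(1, 2) / dk ** 0.5             # (H, N, N)
        scores = scores.masked_fill(adj.unsqueeze(0) == 0, float('-inf'))
        attn = scores.softmax(dim=-1)       # relation weights over neighbors
        out = (attn @ v).transpose(0, 1).reshape(n, d)
        return F.relu(out), attn


def temporal_relation_reg(attn_next: torch.Tensor,
                          attn_curr: torch.Tensor,
                          eps: float = 1e-8) -> torch.Tensor:
    # One plausible form of temporal relation regularization (an assumption,
    # not necessarily the paper's exact loss): a KL term penalizing abrupt
    # changes of the relation distribution between consecutive timesteps.
    p = attn_next.clamp_min(eps)
    q = attn_curr.clamp_min(eps)
    return (p * (p / q).log()).sum(dim=-1).mean()


Toy usage of the sketch: stacking two layers widens each agent's receptive field to two hops, and the regularizer compares relation weights from consecutive observations of the same layer.

# 5 agents, 32-d features, ring-plus-self adjacency (all values illustrative).
n, dim = 5, 32
h_t = torch.randn(n, dim)
adj = torch.eye(n) + torch.roll(torch.eye(n), 1, dims=0)
layer = RelationKernel(dim)
feat, attn_t = layer(h_t, adj)                      # one hop of information
feat2, _ = layer(feat, adj)                         # two hops after stacking
h_tp1 = h_t + 0.01 * torch.randn_like(h_t)          # toy next-step features
_, attn_tp1 = layer(h_tp1, adj)
reg = temporal_relation_reg(attn_tp1, attn_t)       # consistency penalty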

Archived Files and Locations

application/pdf  1.6 MB
file_6xctsr7xfnestfzwvmqilml63u
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2018-10-22
Version   v1
Language   en
arXiv  1810.09202v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 33992831-6362-4e8f-a194-603a401a6f49