Graph Convolutional Reinforcement Learning for Multi-Agent Cooperation
by Jiechuan Jiang, Chen Dun, Zongqing Lu (2018)
Abstract
Learning to cooperate is crucially important in multi-agent environments. The
key is to understand the mutual interplay between agents. However, multi-agent
environments are highly dynamic, which makes it hard to learn abstract
representations of their mutual interplay. In this paper, we propose graph
convolutional reinforcement learning for multi-agent cooperation, where graph
convolution adapts to the dynamics of the underlying graph of the multi-agent
environment, and relation kernels capture the interplay between agents by their
relation representations. Latent features produced by convolutional layers from
gradually increased receptive fields are exploited to learn cooperation, and
the cooperation is further boosted by temporal relation regularization for
consistency. Empirically, we show that our method substantially outperforms
existing methods in a variety of cooperative scenarios.
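
The abstract describes these components only at a high level. Below is a minimal PyTorch sketch of how a relation-kernel graph-convolution layer, stacked receptive fields, and temporal relation regularization could look; it is an illustration based on the abstract, not the authors' implementation, and names such as RelationKernel, encoder, q_head, and the toy adjacency matrix are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationKernel(nn.Module):
    # One graph-convolutional layer: multi-head dot-product attention over
    # each agent's neighborhood, returning mixed features and relation weights.
    def __init__(self, dim, heads=4):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.d_head = heads, dim // heads
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h: (N, dim) agent features; adj: (N, N) neighborhood mask with self-loops.
        N, dim = h.shape
        q = self.q(h).view(N, self.heads, self.d_head).transpose(0, 1)  # (H, N, d)
        k = self.k(h).view(N, self.heads, self.d_head).transpose(0, 1)
        v = self.v(h).view(N, self.heads, self.d_head).transpose(0, 1)
        scores = q @ k.transpose(1, 2) / self.d_head ** 0.5             # (H, N, N)
        scores = scores.masked_fill(adj.unsqueeze(0) == 0, float('-inf'))
        attn = torch.softmax(scores, dim=-1)                            # relation weights
        mixed = (attn @ v).transpose(0, 1).reshape(N, dim)
        return F.relu(self.out(mixed)), attn

# Stacking layers grows each agent's receptive field one hop at a time;
# latent features from every layer jointly feed the Q-network head.
N, obs_dim, dim, n_actions = 5, 10, 64, 4
encoder = nn.Linear(obs_dim, dim)          # hypothetical observation encoder
layer1, layer2 = RelationKernel(dim), RelationKernel(dim)
q_head = nn.Linear(3 * dim, n_actions)

obs = torch.randn(N, obs_dim)
adj = torch.eye(N)                         # self-loops; add edges for observed neighbors
adj[0, 1] = adj[1, 0] = 1.0

h0 = F.relu(encoder(obs))
h1, _ = layer1(h0, adj)                    # 1-hop receptive field
h2, attn = layer2(h1, adj)                 # 2-hop receptive field
q_values = q_head(torch.cat([h0, h1, h2], dim=-1))  # (N, n_actions)

# Temporal relation regularization (sketch): keep the upper layer's relation
# (attention) distribution consistent across consecutive time steps. The
# next-step attention here is a placeholder; in training it would come from
# the target network applied to the next observations.
attn_next = attn.detach()
reg = F.kl_div(attn.clamp_min(1e-9).log(), attn_next, reduction='batchmean')

Because the environment is dynamic, the neighborhood mask adj would be rebuilt from the current positions or observations of the agents at every step, which is how the graph convolution adapts to the underlying graph.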
Archived Files and Locations
application/pdf 1.6 MB
arxiv.org (repository), web.archive.org (webarchive)
arXiv:1810.09202v1