Mutual Information Maximization in Graph Neural Networks release_pj3zvlsykndjflw6louwy3bw44

by Xinhan Di, Pengqian Yu, Rui Bu, Mingchao Sun

Released as an article.

2020  

Abstract

A variety of graph neural network (GNN) frameworks for representation learning on graphs have been developed recently. These frameworks rely on an aggregation-and-iteration scheme to learn node representations. However, information between nodes is inevitably lost under this scheme during learning. To reduce this loss, we extend GNN frameworks by analyzing the aggregation-and-iteration scheme through the methodology of mutual information. We propose a new approach that enlarges the normal neighborhood in the aggregation step of GNNs, with the aim of maximizing mutual information. Through a series of experiments conducted on several benchmark datasets, we show that the proposed approach improves state-of-the-art performance on four types of graph tasks: supervised and semi-supervised graph classification, graph link prediction, and graph edge generation and classification.
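To illustrate the idea behind the abstract, the following is a minimal sketch (not the paper's actual method) of mean-aggregation over a node's neighborhood, contrasting the standard 1-hop neighborhood with an enlarged k-hop neighborhood; the function name `aggregate` and the toy path graph are assumptions made here for illustration only.

```python
import numpy as np

def aggregate(features, adj, hops=1):
    """Mean-aggregate node features over a k-hop neighborhood.

    features: (n, d) node feature matrix
    adj:      (n, n) binary adjacency matrix
    hops:     neighborhood radius (1 = standard GNN aggregation,
              >1 = an enlarged neighborhood in the spirit of the paper)
    """
    n = len(adj)
    reach = np.eye(n)       # nodes reachable so far (including self)
    frontier = np.eye(n)
    for _ in range(hops):
        frontier = frontier @ adj
        reach = reach + frontier
    reach = (reach > 0).astype(float)        # k-hop reachability mask
    deg = reach.sum(axis=1, keepdims=True)   # neighborhood sizes
    return reach @ features / deg            # mean over the neighborhood

# Toy path graph 0-1-2-3 with one-hot node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.eye(4)
h1 = aggregate(x, adj, hops=1)  # node 0 only mixes with node 1
h2 = aggregate(x, adj, hops=2)  # node 0 also receives node 2's features
```

With `hops=1`, node 0's representation carries no information about node 2; enlarging the neighborhood to `hops=2` lets that information survive a single aggregation step, which is the intuition behind maximizing mutual information between nodes.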

Archived Files and Locations

application/pdf  153.0 kB
file_uyefr2xk3ffydbsldwmcvf3674
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-03-24
Version   v4
Language   en
arXiv  1905.08509v4
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 219414d6-6db3-45f9-8e55-cdbb73572e6e