Accurate Learning of Graph Representations with Graph Multiset Pooling
by
Jinheon Baek, Minki Kang, Sung Ju Hwang
2021
Abstract
Graph neural networks have been widely used for modeling graph data, achieving
impressive results on node classification and link prediction tasks. Yet,
obtaining an accurate representation for a graph further requires a pooling
function that maps a set of node representations into a compact form. A simple
sum or average over all node representations treats all node features equally,
without considering their task relevance or any structural dependencies among
them. Recently proposed hierarchical graph pooling methods,
on the other hand, may yield the same representation for two different graphs
that are distinguished by the Weisfeiler-Lehman test, as they suboptimally
preserve information from the node features. To tackle these limitations of
existing graph pooling methods, we first formulate the graph pooling problem as
a multiset encoding problem with auxiliary information about the graph
structure, and propose a Graph Multiset Transformer (GMT), a multi-head
attention based global pooling layer that captures the interaction between
nodes according to their structural dependencies. We show that GMT satisfies
both injectiveness and permutation invariance, such that it is at most as
powerful as the Weisfeiler-Lehman graph isomorphism test. Moreover, our method
can easily be extended to previous node clustering approaches for hierarchical
graph pooling. Our experimental results show that GMT
significantly outperforms state-of-the-art graph pooling methods on graph
classification benchmarks with high memory and time efficiency, and obtains
an even larger performance gain on graph reconstruction and generation tasks.
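
The abstract describes GMT as a multi-head attention based global pooling layer over the multiset of node representations. Below is a minimal PyTorch sketch of that idea, assuming a padded batch of node embeddings and learnable seed queries; the class name, hyperparameters, and the omission of the paper's structure-aware attention (where keys and values are propagated by a GNN) are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    # Sketch of attention-based set pooling: a few learnable "seed" queries
    # attend over all node embeddings, yielding a fixed-size graph vector
    # that does not depend on the ordering of the nodes.
    def __init__(self, dim: int, num_heads: int = 4, num_seeds: int = 1):
        super().__init__()
        self.seeds = nn.Parameter(torch.randn(1, num_seeds, dim))  # learnable queries
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, node_feats, key_padding_mask=None):
        # node_feats: (batch, num_nodes, dim); mask flags padded (dummy) nodes.
        queries = self.seeds.expand(node_feats.size(0), -1, -1)
        pooled, _ = self.attn(queries, node_feats, node_feats,
                              key_padding_mask=key_padding_mask)
        return pooled  # (batch, num_seeds, dim): a compact graph representation

if __name__ == "__main__":
    x = torch.randn(2, 10, 64)   # 2 graphs, 10 nodes each, 64-dim features
    print(AttentionPooling(dim=64)(x).shape)  # torch.Size([2, 1, 64])

Because the pooled output is a weighted sum over node embeddings, it is invariant to node permutations, unlike schemes that depend on node ordering.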
Archived Files and Locations
application/pdf, 2.3 MB
arxiv.org (repository) · web.archive.org (webarchive)
arXiv:2102.11533v3