GNNExplainer: Generating Explanations for Graph Neural Networks
by
Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, Jure Leskovec
2019
Abstract
Graph Neural Networks (GNNs) are a powerful tool for machine learning on
graphs. GNNs combine node feature information with the graph structure by
recursively passing neural messages along edges of the input graph. However,
incorporating both graph structure and feature information leads to complex
models, and explaining predictions made by GNNs remains unsolved. Here we
propose GNNExplainer, the first general, model-agnostic approach for providing
interpretable explanations for predictions of any GNN-based model on any
graph-based machine learning task. Given an instance, GNNExplainer identifies a
compact subgraph structure and a small subset of node features that have a
crucial role in the GNN's prediction. Further, GNNExplainer can generate consistent
and concise explanations for an entire class of instances. We formulate
GNNExplainer as an optimization task that maximizes the mutual information
between a GNN's prediction and the distribution of possible subgraph structures.
Experiments on synthetic and real-world graphs show that our approach can
identify important graph structures as well as node features, and outperforms
baselines by 17.1% on average. GNNExplainer provides a variety of benefits,
from the ability to visualize semantically relevant structures to
interpretability, to giving insights into errors of faulty GNNs.
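The abstract describes the method as an optimization that maximizes the mutual information between the GNN's prediction and a distribution over subgraphs. One common reading of that objective is: learn a soft mask over the input graph's edges that preserves the model's prediction while penalizing mask size, so only the edges crucial to the prediction survive. Below is a minimal NumPy sketch under that reading. The tiny linear message-passing model (`gnn_prob`), the mask optimizer (`explain`), and all hyperparameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a *trained* GNN: one message-passing step followed
# by a fixed linear readout for a target node. Any black-box model
# mapping (adjacency, features) -> class probability would do.
rng = np.random.default_rng(0)
n_nodes, n_feats = 6, 4
X = rng.normal(size=(n_nodes, n_feats))   # node features
W = rng.normal(size=n_feats)              # frozen readout weights
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 0, 0],
              [1, 0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0, 1],
              [0, 0, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

def gnn_prob(adj, target=0):
    h = adj @ X                  # one round of neighbor aggregation
    return sigmoid(h[target] @ W)  # probability for the target node

# Class the frozen model predicts on the full, unmasked graph.
y_hat = gnn_prob(A) >= 0.5

def explain(adj, steps=200, lr=0.5, lam=0.05, eps=1e-4):
    """Learn a soft edge mask (one logit per existing edge) that keeps
    the model's prediction while penalizing total mask weight -- a
    simple surrogate for the mutual-information objective."""
    theta = np.zeros_like(adj)   # mask logits; sigmoid(0) = 0.5

    def loss(t):
        mask = sigmoid(t) * adj              # masked adjacency
        p = gnn_prob(mask)
        nll = -np.log(p + 1e-12) if y_hat else -np.log(1.0 - p + 1e-12)
        return nll + lam * mask.sum()        # fidelity + sparsity

    edges = list(zip(*np.nonzero(adj)))
    for _ in range(steps):
        # Numerical gradient over existing edges (keeps the sketch
        # dependency-free; an autograd framework would do this exactly).
        base = loss(theta)
        grad = np.zeros_like(theta)
        for i, j in edges:
            theta[i, j] += eps
            grad[i, j] = (loss(theta) - base) / eps
            theta[i, j] -= eps
        theta -= lr * grad
    return sigmoid(theta) * adj   # soft mask; threshold it to read off
                                  # the explanatory subgraph

M = explain(A)
```

Edges whose mask weight stays high after optimization form the compact explanatory subgraph; the same masking trick applied to feature dimensions yields the node-feature explanation the abstract mentions.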
Archived Files and Locations
application/pdf, 2.6 MB — arXiv preprint 1903.03894v4, available at arxiv.org (repository) and web.archive.org (webarchive)