Daoze committed on
Commit 37e1c69 · verified · 1 Parent(s): e055beb

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. papers/LOG/LOG 2022/LOG 2022 Conference/UqamDYtuh9/Initial_manuscript_md/Initial_manuscript.md +0 -0
  2. papers/LOG/LOG 2022/LOG 2022 Conference/UqamDYtuh9/Initial_manuscript_tex/Initial_manuscript.tex +222 -0
  3. papers/LOG/LOG 2022/LOG 2022 Conference/VBXRMnRBfRF/Initial_manuscript_md/Initial_manuscript.md +370 -0
  4. papers/LOG/LOG 2022/LOG 2022 Conference/VBXRMnRBfRF/Initial_manuscript_tex/Initial_manuscript.tex +266 -0
  5. papers/LOG/LOG 2022/LOG 2022 Conference/Vbfr1jiMxYS/Initial_manuscript_md/Initial_manuscript.md +923 -0
  6. papers/LOG/LOG 2022/LOG 2022 Conference/Vbfr1jiMxYS/Initial_manuscript_tex/Initial_manuscript.tex +353 -0
  7. papers/LOG/LOG 2022/LOG 2022 Conference/W2OStztdMhc/Initial_manuscript_md/Initial_manuscript.md +285 -0
  8. papers/LOG/LOG 2022/LOG 2022 Conference/W2OStztdMhc/Initial_manuscript_tex/Initial_manuscript.tex +134 -0
  9. papers/LOG/LOG 2022/LOG 2022 Conference/W59BHjEDfz/Initial_manuscript_md/Initial_manuscript.md +122 -0
  10. papers/LOG/LOG 2022/LOG 2022 Conference/W59BHjEDfz/Initial_manuscript_tex/Initial_manuscript.tex +129 -0
  11. papers/LOG/LOG 2022/LOG 2022 Conference/WtFobB28VDey/Initial_manuscript_md/Initial_manuscript.md +769 -0
  12. papers/LOG/LOG 2022/LOG 2022 Conference/WtFobB28VDey/Initial_manuscript_tex/Initial_manuscript.tex +354 -0
  13. papers/LOG/LOG 2022/LOG 2022 Conference/YCgwkDo56q/Initial_manuscript_md/Initial_manuscript.md +292 -0
  14. papers/LOG/LOG 2022/LOG 2022 Conference/YCgwkDo56q/Initial_manuscript_tex/Initial_manuscript.tex +274 -0
  15. papers/LOG/LOG 2022/LOG 2022 Conference/YXHoPO33rk/Initial_manuscript_md/Initial_manuscript.md +225 -0
  16. papers/LOG/LOG 2022/LOG 2022 Conference/YXHoPO33rk/Initial_manuscript_tex/Initial_manuscript.tex +100 -0
  17. papers/LOG/LOG 2022/LOG 2022 Conference/YcnAf3cEvH3/Initial_manuscript_md/Initial_manuscript.md +115 -0
  18. papers/LOG/LOG 2022/LOG 2022 Conference/YcnAf3cEvH3/Initial_manuscript_tex/Initial_manuscript.tex +69 -0
  19. papers/LOG/LOG 2022/LOG 2022 Conference/ZBsxA6_gp3/Initial_manuscript_md/Initial_manuscript.md +541 -0
  20. papers/LOG/LOG 2022/LOG 2022 Conference/ZBsxA6_gp3/Initial_manuscript_tex/Initial_manuscript.tex +294 -0
  21. papers/LOG/LOG 2022/LOG 2022 Conference/Zg8y2-v8ia/Initial_manuscript_md/Initial_manuscript.md +199 -0
  22. papers/LOG/LOG 2022/LOG 2022 Conference/Zg8y2-v8ia/Initial_manuscript_tex/Initial_manuscript.tex +159 -0
  23. papers/LOG/LOG 2022/LOG 2022 Conference/ZuMgYX1irC/Initial_manuscript_md/Initial_manuscript.md +327 -0
  24. papers/LOG/LOG 2022/LOG 2022 Conference/ZuMgYX1irC/Initial_manuscript_tex/Initial_manuscript.tex +284 -0
  25. papers/LOG/LOG 2022/LOG 2022 Conference/_nlbNbawXDi/Initial_manuscript_md/Initial_manuscript.md +249 -0
  26. papers/LOG/LOG 2022/LOG 2022 Conference/_nlbNbawXDi/Initial_manuscript_tex/Initial_manuscript.tex +307 -0
  27. papers/LOG/LOG 2022/LOG 2022 Conference/vEbUaN9Z2V8/Initial_manuscript_md/Initial_manuscript.md +148 -0
  28. papers/LOG/LOG 2022/LOG 2022 Conference/vEbUaN9Z2V8/Initial_manuscript_tex/Initial_manuscript.tex +95 -0
  29. papers/LOG/LOG 2022/LOG 2022 Conference/wY_IYhh6pqj/Initial_manuscript_md/Initial_manuscript.md +0 -0
  30. papers/LOG/LOG 2022/LOG 2022 Conference/wY_IYhh6pqj/Initial_manuscript_tex/Initial_manuscript.tex +325 -0
  31. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/B1xPG55qZS/Initial_manuscript_md/Initial_manuscript.md +193 -0
  32. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/B1xPG55qZS/Initial_manuscript_tex/Initial_manuscript.tex +172 -0
  33. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BJxZ3ZH1-S/Initial_manuscript_md/Initial_manuscript.md +163 -0
  34. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BJxZ3ZH1-S/Initial_manuscript_tex/Initial_manuscript.tex +170 -0
  35. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/Bkgwe3GnZB/Initial_manuscript_md/Initial_manuscript.md +175 -0
  36. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/Bkgwe3GnZB/Initial_manuscript_tex/Initial_manuscript.tex +197 -0
  37. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BkxeSkO7ZB/Initial_manuscript_md/Initial_manuscript.md +143 -0
  38. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BkxeSkO7ZB/Initial_manuscript_tex/Initial_manuscript.tex +139 -0
  39. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BkxqiK5h0V/Initial_manuscript_md/Initial_manuscript.md +149 -0
  40. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BkxqiK5h0V/Initial_manuscript_tex/Initial_manuscript.tex +142 -0
  41. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/H1eTF7BqZr/Initial_manuscript_md/Initial_manuscript.md +169 -0
  42. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/H1eTF7BqZr/Initial_manuscript_tex/Initial_manuscript.tex +167 -0
  43. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/H1l9RiIVWr/Initial_manuscript_md/Initial_manuscript.md +183 -0
  44. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/H1l9RiIVWr/Initial_manuscript_tex/Initial_manuscript.tex +189 -0
  45. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/HJxydABnbS/Initial_manuscript_md/Initial_manuscript.md +139 -0
  46. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/HJxydABnbS/Initial_manuscript_tex/Initial_manuscript.tex +146 -0
  47. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/HklExX79-S/Initial_manuscript_md/Initial_manuscript.md +179 -0
  48. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/HklExX79-S/Initial_manuscript_tex/Initial_manuscript.tex +137 -0
  49. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/Hkx63bWjZr/Initial_manuscript_md/Initial_manuscript.md +121 -0
  50. papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/Hkx63bWjZr/Initial_manuscript_tex/Initial_manuscript.tex +121 -0
papers/LOG/LOG 2022/LOG 2022 Conference/UqamDYtuh9/Initial_manuscript_md/Initial_manuscript.md ADDED
The diff for this file is too large to render. See raw diff
 
papers/LOG/LOG 2022/LOG 2022 Conference/UqamDYtuh9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,222 @@
+ § DIGRAC: DIGRAPH CLUSTERING BASED ON FLOW IMBALANCE
+
+ Anonymous Author(s)
+
+ Anonymous Affiliation
+
+ Anonymous Email
+
+ § ABSTRACT
+
+ Node clustering is a powerful tool in the analysis of networks. We introduce a graph neural network framework to obtain node embeddings for directed networks in a self-supervised manner, including a novel probabilistic imbalance loss, which can be used for network clustering. Here, we propose directed flow imbalance measures, which are tightly related to directionality, to reveal clusters in the network even when there is no density difference between clusters. In contrast to standard approaches in the literature, in this paper, directionality is not treated as a nuisance, but rather contains the main signal. DIGRAC optimizes directed flow imbalance for clustering without requiring label supervision, unlike existing graph neural network methods, and can naturally incorporate node features, unlike existing spectral methods. Extensive experimental results on synthetic data, in the form of directed stochastic block models, and real-world data at different scales, demonstrate that our method, based on flow imbalance, attains state-of-the-art results on directed graph clustering when compared against 10 state-of-the-art methods from the literature, for a wide range of noise and sparsity levels, graph structures and topologies, and even outperforms supervised methods.
+
+ § 1 INTRODUCTION
+
+ Revealing an underlying community structure of directed networks (digraphs) is an important problem in many applications, see for example [1] and [2], such as detecting influential social groups [3] and analyzing migration patterns [4]. While most existing methods that could be applied to directed clustering use local edge densities as the main signal and directionality (i.e., edge orientation) as an additional signal, we argue that even in the absence of any edge density differences, directionality can play a vital role in directed clustering, as it can reveal latent properties of network flows. The underlying intuition is that homogeneous clusters of nodes form meta-nodes in a meta-graph, with the meta-graph directing the flow between clusters; directed core-periphery structure is such an example [5]. Fig. 1(a) is an example of flow imbalance between two clusters, here on an unweighted network for simplicity: while $80\%$ of the edges flow from the Transient cluster to the Sink cluster, only $20\%$ flow in the other direction. As a real-world example, Fig. 1(b) shows the strongest flow imbalances between clusters detected by our method in a network of US migration flow [4]; most edges flow from the red cluster (label 1) to the blue one (label 2). Figures 1(c-d) show examples on a synthetic meta-graph. We could also think of a social network in which a set of fake accounts $\mathcal{A}$ has been created, and these target another subset $\mathcal{B}$ of real accounts by sending them messages. There would then most likely be many more messages from $\mathcal{A}$ to $\mathcal{B}$ than from $\mathcal{B}$ to $\mathcal{A}$, hinting that $\mathcal{A}$ is comprised of fake accounts.
+
+ Thus, instead of finding relatively dense groups of nodes in digraphs with a relatively small amount of flow between the groups, as in [6-11], our main goal is to recover clusters with strongly imbalanced flow among them, in the spirit of [12, 13], where directionality is the main signal. This task is not addressed by most methods for node clustering in digraphs, including community detection methods. Those methods that do lay emphasis on directionality are usually spectral methods, for which incorporating features is non-trivial, or graph neural network (GNN) methods that require labeling information. An exception is the network community detection method InfoMap [14] which uses directed random walks; however, it still relies on some edge density information within clusters.
+
+ Figure 1: Visualization of cut flow imbalance and meta-graph: (a) ${80}\%$ of edges flow from Transient to Sink, while ${20}\%$ of edges flow in the opposite direction; (b) top pair imbalanced flow on Migration data [4]: most edges flow from red (1) to blue (2); (c) & (d) are for a Directed Stochastic Block Model with a cycle meta-graph with ambient nodes, for a total of 5 clusters. Most edges flow in direction $0 \rightarrow 1 \rightarrow 2 \rightarrow 3 \rightarrow 0$ , while few flow in the opposite direction. Cluster 4 is the ambient cluster. In (a) and (c), blue lines indicate flows with random, equally likely directions; these flows do not exist in the meta-graph adjacency matrix $\mathbf{F}$ . For (d), the lighter the color, the stronger the flow.
+
+ Here we introduce DIGRAC, a GNN framework to obtain node embeddings for clustering digraphs (allowing weighted edges and self-loops, but no multiple edges). We propose a novel probabilistic imbalance loss that acts, in a self-supervised manner, on the digraph induced by all training nodes. The global imbalance score, whose complement (one minus the score) serves as the self-supervised loss function, is aggregated from pairwise normalized cut imbalances. The method is end-to-end in combining embedding generation and clustering without an intermediate step. To the best of our knowledge, this is the first GNN method which derives node embeddings for digraphs by directly maximizing flow imbalance between pairs of clusters. With an emphasis on the use of a direction-based flow imbalance objective, experimental results on synthetic data and real-world data at different scales demonstrate that our method can achieve leading performance for a wide range of network densities and topologies.
+
+ DIGRAC's main novelty is the ability to cluster based on direction-based flow imbalance, instead of using classical criteria such as maximizing relative densities within clusters. Compared with prior methods that focus on directionality, DIGRAC can easily consider node features and also does not require known node clustering labels. DIGRAC complements existing approaches in various aspects: (1) Our results show that DIGRAC complements classical community detection by detecting alternative patterns in the data, such as meta-graph structures, which are otherwise not detectable by existing methods. This aspect of detecting novel structures in directed graphs has also been emphasized in [12]. (2) DIGRAC complements existing spectral methods, through the possibility of including exogenous information, in the form of node-level features or labels, thus borrowing their strength. (3) DIGRAC complements existing GNN methods by introducing an imbalance-based objective. (4) DIGRAC introduces imbalance measures for evaluation when ground-truth is unavailable.
+
+ DIGRAC's applicability extends beyond settings where the input data is a digraph: with time series data as input, the digraph construction mechanism can accommodate any procedure that encodes a pairwise directional association between the corresponding time series, such as lead-lag relationships and Granger causality [15], with applications such as in the analysis of information flow in brain networks [16], biology [17], finance [18, 19] and earth sciences [20]. DIGRAC could also facilitate tasks in ranking and anomaly detection, as it allows one to extrapolate from local pairwise (directed) interactions to a global structure inference, in the high-dimensional low signal-to-noise ratio regime.
+
+ Main contributions. Our main contributions are as follows. • (1) We propose a GNN framework for self-supervised end-to-end node clustering on (possibly attributed and weighted) digraphs that explicitly takes into account the directed flow imbalance. • (2) We propose a family of probabilistic global imbalance scores to serve as the self-supervised loss function and evaluation objective, including one based on hypothesis testing for a directionality signal. To the best of our knowledge, this is the first method directly maximizing flow imbalance for node clustering in digraphs using GNNs. • (3) We extend our method to the semi-supervised setting when label information is available.
+
+ § 2 RELATED WORK
+
+ Directed clustering has been explored by non-GNN methods. [21] performs directed clustering that hinges on symmetrizations of the adjacency matrix, but is not scalable, as it requires large matrix multiplications. [22] proposes a spectral co-clustering algorithm for asymmetry discovery that relies on in-degree and out-degree. Whenever direction is the sole information, such as in a complete network with lead-lag structure derived from time series [18], a purely degree-based method cannot detect the clusters. While [23] produces two partitions of the node set, one based on out-degree and one based on in-degree, our partition simultaneously takes both directions into account. The directed graph Laplacians introduced by [2] are only applicable to strongly connected digraphs, which is rarely the case in sparse networks arising in applications. InfoMap by [14] assumes that there is a "map" underlying the network, similar to a meta-graph in DIGRAC. InfoMap aims to minimize the expected description length of a random walk and is recommended for networks where edges encode patterns of movement among nodes. While related to DIGRAC, InfoMap still relies on some amount of density-based signal being present within each of the modules. [12] seeks to uncover clusters characterized by a strongly imbalanced flow circulating among them, based on eigenvectors of the Hermitian matrix $(\mathbf{A} - \mathbf{A}^{T}) \cdot i$, where $\mathbf{A}$ is the (normalized) adjacency matrix and $i$ the imaginary unit. [12] is a purely spectral method and is not able to naturally incorporate any available node features or label information; in contrast, DIGRAC is a GNN-based method that is naturally able to account for such information. Moreover, [12] is not driven by an optimization function, but only proposes evaluation metrics that capture the imbalance of pairs of clusters. In contrast, inspired by [12], in DIGRAC a family of novel imbalance loss functions is proposed, with a probabilistic interpretation, rendering DIGRAC a fully trainable end-to-end pipeline. Furthermore, the rich class of imbalance evaluation and training objectives/losses proposed in this paper goes far beyond the evaluation metrics considered in [12]. [13] uncovers higher-order structural information among clusters in digraphs while maximizing the imbalance of the edge directions, but its definition of the flow ratio restricts the underlying meta-graph to a path.
+
+ GNNs have been applied to digraph node classification, which is similar to digraph clustering but requires known clustering labels. [24] uses first- and second-order proximity and constructs three Laplacians, but the method is inefficient in both memory and speed. [25] simplifies [24], builds a directed Laplacian based on PageRank, and aggregates information dependent on higher-order proximity. Building on [12, 26], [27] constructs a Hermitian matrix that encodes undirected geometric structure in the magnitude of its entries and directional information in their phase. [28] introduces a digraph data augmentation method called Laplacian perturbation and conducts digraph contrastive learning. [29] proposes a spectral graph convolution network for digraphs, yet is restricted to strongly connected digraphs, which are usually not realistic. [30] utilizes convolution-like anisotropic filters based on local subgraph structures (motifs) for semi-supervised node classification tasks in digraphs, but relies on pre-defined structures and fails to handle complex networks.
+
+ In particular, [24, 25, 27, 28, 30] all require known labels, which are not generally available for real-world data. [2, 12, 13, 21, 22] cannot trivially incorporate node attributes or node labels. In contrast, we propose an efficient GNN-based method that maximizes a probabilistic flow imbalance objective in a self-supervised manner, and which can naturally analyze attributed weighted digraphs.
+
+ To avoid potential misunderstanding, we briefly mention several related works that we are aware of, but do not compare against in our experiments in the main text. While DIGRAC addresses the task of partitioning the nodes into disjoint sets, [31] locates a certain community within a network. In particular, [31] proposes a local algorithm while this paper proposes a global one. OSLOM by [32] is very flexible but based on a density heuristic and hence a comparison to DIGRAC on networks without density signal would not be fair to begin with. [33] introduces directionality in the Louvain algorithm. This algorithm optimizes a modularity-type function that compares the number of edges within communities to the expected number of edges under a specified model. It is thus an approach that aims to find denser-than-expected groups of vertices. When all groups have the same density, as in our synthetic data sets, and the only structure lies in the directionality of the edges, this method simply cannot be expected to perform well. The Leiden algorithm in [34] also builds on the Louvain method, again optimizing a modularity-type function that compares the number of edges within communities to the expected number of edges under a specified model. It is a powerful method for that task, but cannot be fairly compared to DIGRAC which is tailored to find imbalances.
+
+ We also do not compare DIGRAC against graph pooling methods [35], which are inspired by pooling in CNNs and developed to discard information which is superfluous for the task at hand; a partition of the nodes which can be interpreted as a clustering is only a byproduct. Moreover, graph pooling methods are usually developed only for undirected networks. While graph matching, as in [36-38] and [39], can be viewed as a clustering method for networks, matching the graph of interest to a disconnected graph by connecting each node in the observed graph with an isolated node of the disconnected graph, this approach is not developed for directed networks. The underlying idea of these papers is complementary to the meta-graph idea which underpins DIGRAC; in the meta-graph, the components are connected, and estimating the directionality of these connections is the main focus. Hence this work addresses a very different task. We emphasize that these are all excellent methods, but they address different objectives and tasks. As confirmed by our experiments in Appendix (App.) E, comparing these methods to DIGRAC is not appropriate. DIGRAC is tailored to detect an imbalance signal in directed networks; since such a signal cannot be present in an undirected network, DIGRAC is not applicable to undirected networks.
+
+ § 3 THE DIGRAC FRAMEWORK
+
+ Problem definition. Denote a (possibly weighted) digraph with node attributes as $\mathcal{G} = \left(\mathcal{V},\mathcal{E},w,\mathbf{X}\right)$, with $\mathcal{V}$ the set of nodes, $\mathcal{E}$ the set of directed edges or links, and $w \in [0,\infty)^{\left|\mathcal{E}\right|}$ the vector of edge weights. $\mathcal{G}$ may have self-loops, but no multiple edges. The number of nodes is $n = \left|\mathcal{V}\right|$, and $\mathbf{X} \in \mathbb{R}^{n \times d_{\text{in}}}$ is a matrix whose rows encode the nodes' attributes. Such a network can be represented by the attribute matrix $\mathbf{X}$ and the adjacency matrix $\mathbf{A} = \left(\mathbf{A}_{ij}\right)_{i,j \in \mathcal{V}}$, with $\mathbf{A}_{ij} = 0$ if no edge exists from $v_{i}$ to $v_{j}$; if there is an edge $e$ from $v_{i}$ to $v_{j}$, we set $\mathbf{A}_{ij} = w_{e}$, the edge weight.
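As a minimal sketch of this representation, a toy adjacency matrix can be assembled from a hypothetical weighted edge list (numpy assumed; all names and values are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical weighted digraph on n = 4 nodes; each tuple (i, j, w) is an
# edge e from v_i to v_j with weight w_e. A self-loop is allowed.
edges = [(0, 1, 0.5), (1, 2, 1.0), (2, 0, 0.8), (0, 0, 0.3)]

n = 4
A = np.zeros((n, n))
for i, j, w in edges:
    A[i, j] = w  # A_ij = w_e; entries stay 0 where no edge exists

# Directionality: A is in general not symmetric.
assert A[0, 1] == 0.5 and A[1, 0] == 0.0
```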
+
+ Digraphs often lend themselves to interpreting weighted directed edges as flows, with a meta-graph on clusters of vertices describing the overall flow directions; see Fig. 1. A clustering is a partition of the set of nodes into $K$ disjoint sets (clusters) $\mathcal{V} = {\mathcal{C}}_{0} \cup {\mathcal{C}}_{1} \cup \cdots \cup {\mathcal{C}}_{K - 1}$ (ideally, $K \geq 2$ ). Intuitively, nodes within a cluster should be similar to each other with respect to flow directions, while nodes across clusters should be dissimilar. In a self-supervised setting, only the number of clusters $K$ is given. In a semi-supervised setting, for each of the $K$ clusters, a fraction set ${\mathcal{V}}^{\text{ seed }} \subseteq {\mathcal{V}}^{\text{ train }} \subset \mathcal{V}$ of the set ${\mathcal{V}}^{\text{ train }}$ of all training nodes is selected to serve as the set of seed nodes, for which the cluster membership labels are known before training. The goal of semi-supervised clustering is to assign each node $v \in \mathcal{V}$ to a cluster containing some known seed nodes, without knowledge of the underlying flow meta-graph. The corresponding self-supervised clustering task does not use seed nodes.
+
+ § 3.1 SELF-SUPERVISED LOSS FOR CLUSTERING
+
+ Our self-supervised loss function is inspired by [12], aiming to cluster the nodes by maximizing a normalized form of cut imbalance across clusters. We first define probabilistic versions of cuts, imbalance flows, and probabilistic volumes. For $K$ clusters, the assignment probability matrix $\mathbf{P} \in {\mathbb{R}}^{n \times K}$ has as row $i$ the probability vector ${\mathbf{P}}_{\left( \mathbf{i}, : \right) } \in {\mathbb{R}}^{K}$ with entries denoting the probabilities of each node to belong to each cluster; its ${k}^{\text{ th }}$ column is denoted by ${\mathbf{P}}_{\left( : ,k\right) }$ .
+
+ • $\forall k,l \in \{0,\ldots,K-1\}$, where $K \geq 2$, the probabilistic cut from cluster $\mathcal{C}_{k}$ to $\mathcal{C}_{l}$ is defined as
+
+ $$
+ W\left(\mathcal{C}_{k},\mathcal{C}_{l}\right) = \sum_{i,j} \mathbf{A}_{i,j}\,\mathbf{P}_{i,k}\,\mathbf{P}_{j,l} = \left(\mathbf{P}_{(:,k)}\right)^{T}\mathbf{A}\,\mathbf{P}_{(:,l)}.
+ $$
+
+ • The imbalance flow between $\mathcal{C}_{k}$ and $\mathcal{C}_{l}$ is defined as $\left|W\left(\mathcal{C}_{k},\mathcal{C}_{l}\right) - W\left(\mathcal{C}_{l},\mathcal{C}_{k}\right)\right|$.
+
+ For interpretability and ease of comparison, we normalize the imbalance flows to obtain an imbalance score with values in $\left\lbrack {0,1}\right\rbrack$ as follows (we defer additional details to App. B.2).
+
+ • The probabilistic volume for cluster $\mathcal{C}_{k}$ is defined as
+
+ $$
+ \operatorname{VOL}\left(\mathcal{C}_{k}\right) = \operatorname{VOL}^{(\text{out})}\left(\mathcal{C}_{k}\right) + \operatorname{VOL}^{(\text{in})}\left(\mathcal{C}_{k}\right) = \sum_{i,j}\left(\mathbf{A}_{j,i} + \mathbf{A}_{i,j}\right) \mathbf{P}_{j,k}.
+ $$
+
+ Then $\operatorname{VOL}\left(\mathcal{C}_{k}\right) \geq W\left(\mathcal{C}_{k},\mathcal{C}_{l}\right)$ for all $l \in \{0,\ldots,K-1\}$, and
+
+ $$
76
+ \min \left( {\operatorname{VOL}\left( {\mathcal{C}}_{k}\right) ,\operatorname{VOL}\left( {\mathcal{C}}_{l}\right) }\right) \geq \left| {W\left( {{\mathcal{C}}_{k},{\mathcal{C}}_{l}}\right) - W\left( {{\mathcal{C}}_{l},{\mathcal{C}}_{k}}\right) }\right| . \tag{1}
77
+ $$
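With assignments in hand, the probabilistic cuts reduce to the matrix product $\mathbf{P}^{T}\mathbf{A}\mathbf{P}$ and the volumes to a degree-weighted sum, so inequality (1) can be checked numerically. A sketch on random toy data (numpy assumed; all names and values are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 6, 2
A = rng.random((n, n)) * (rng.random((n, n)) < 0.5)  # toy weighted digraph
P = rng.dirichlet(np.ones(K), size=n)                # rows: assignment probabilities

W = P.T @ A @ P                     # W[k, l] = probabilistic cut from C_k to C_l
d = A.sum(axis=0) + A.sum(axis=1)   # in-degree + out-degree of each node
vol = d @ P                         # vol[k] = VOL(C_k)

# Inequality (1): min(VOL(C_k), VOL(C_l)) >= |W(C_k, C_l) - W(C_l, C_k)|.
for k in range(K):
    for l in range(K):
        assert min(vol[k], vol[l]) >= abs(W[k, l] - W[l, k]) - 1e-12
```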
+
+ The imbalance term, which is used in most of our experiments, denoted $\mathrm{CI}^{\text{vol\_sum}}$, is defined as
+
+ $$
82
+ {\mathrm{{CI}}}^{\text{ vol\_sum }}\left( {k,l}\right) = 2\frac{\left| W\left( {\mathcal{C}}_{k},{\mathcal{C}}_{l}\right) - W\left( {\mathcal{C}}_{l},{\mathcal{C}}_{k}\right) \right| }{\operatorname{VOL}\left( {\mathcal{C}}_{k}\right) + \operatorname{VOL}\left( {\mathcal{C}}_{l}\right) } \in \left\lbrack {0,1}\right\rbrack . \tag{2}
83
+ $$
+
+ The aim is to find a partition which maximizes the imbalance flow under the constraint that the partition has at least two sets, to capture groups of nodes which could be viewed as representing clusters in the meta-graph. The normalization by the volumes penalizes partitions that put most nodes into a single cluster. The range $\left\lbrack {0,1}\right\rbrack$ follows from Eq. (1). Other variants are discussed in App. B.3.
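Eq. (2) can then be evaluated for every unordered pair of clusters. A sketch on random toy data (numpy assumed; all names and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 8, 3
A = rng.random((n, n)) * (rng.random((n, n)) < 0.4)  # toy weighted digraph
P = rng.dirichlet(np.ones(K), size=n)                # rows: assignment probabilities

W = P.T @ A @ P                                      # probabilistic cuts
vol = (A.sum(axis=0) + A.sum(axis=1)) @ P            # probabilistic volumes

def ci_vol_sum(k, l):
    # Eq. (2): 2 |W(C_k,C_l) - W(C_l,C_k)| / (VOL(C_k) + VOL(C_l))
    return 2.0 * abs(W[k, l] - W[l, k]) / (vol[k] + vol[l])

pairs = [(k, l) for k in range(K) for l in range(k + 1, K)]  # the set T
scores = {(k, l): ci_vol_sum(k, l) for k, l in pairs}
assert all(0.0 <= s <= 1.0 for s in scores.values())  # normalization holds
```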
+
+ To obtain a global probabilistic imbalance score, based on ${\mathrm{{CI}}}^{\mathrm{{vol}}\_ \mathrm{{sum}}}$ from Eq. (2), we average over pairwise imbalance scores of different pairs of clusters. Since the scores discussed are symmetric and the cut difference before taking absolute value is skew-symmetric, we only need to consider the pairs in the set $\mathcal{T} = \left\{ {\left( {{\mathcal{C}}_{k},{\mathcal{C}}_{l}}\right) : 0 \leq k < l \leq K - 1,k,l \in \mathbb{Z}}\right\}$ .
+
+ A naive approach, which we call the "naive" variant, considers all possible $\binom{K}{2}$ pairwise cut imbalance values. However, due to potentially high noise levels in certain data sets, one may only be interested in pairs that are not just noise, but exhibit true signal. To this end, we introduce a "std" variant, which only considers pairwise cut imbalance values that are 3 standard deviations away from the observed purely noisy imbalance values; the standard deviation is calculated under the null hypothesis that the between-cluster relationship has no direction preference, i.e. $\mathbf{F}_{k,l} = \mathbf{F}_{l,k}$ (entries of the meta-graph adjacency matrix $\mathbf{F}$, to be introduced later in this section), as follows.
+
+ Suppose two clusters $\mathcal{C}_{k}$ and $\mathcal{C}_{l}$ have only noisy links between them, with no edge in the meta-graph $\mathbf{F}$, i.e. $\mathbf{F}_{kl} = 0$. Assume also that the underlying network is fixed in terms of the number of nodes and locations of edges; the only randomness stems from the directions of the edges. Then we can provide the following theoretical guarantee.
+
+ Proposition 1. Suppose that $\mathcal{C}_{k}$ and $\mathcal{C}_{l}$ are two clusters of $n_{k}$ and $n_{l}$ nodes, respectively, with $m\left(k,l\right)$ edges between them, edge weights $w_{ij} = w_{ji} \in \left[0,1\right]$, and edge directions drawn independently at random with equal probability $\frac{1}{2}$ for each direction. We assume that the edge weights satisfy $\max_{e}\left|w_{e}\right| \left(\sum_{e} w_{e}^{2}\right)^{-\frac{1}{2}} = o\left(m\left(k,l\right)\right)$. Then $W\left(\mathcal{C}_{k},\mathcal{C}_{l}\right) - W\left(\mathcal{C}_{l},\mathcal{C}_{k}\right)$ is approximately normally distributed with mean 0 and variance $\parallel w\parallel^{2}$ as $m\left(k,l\right) \rightarrow \infty$.
+
+ A consequence of Proposition 1, which is proved in App. B.1, is that under its assumptions, approximately ${99.7}\%$ of the observations fall within 3 standard deviations from 0 . While Proposition 1 makes many assumptions and ignores reciprocal edges, the resulting threshold is still a useful guideline for restricting attention to pairwise imbalance values which are very likely to capture a true signal. In particular, we use it as motivation for our "std" variant to pick cluster pairs from $\mathcal{T}$ that satisfy ${\left( W\left( {\mathcal{C}}_{k},{\mathcal{C}}_{l}\right) - W\left( {\mathcal{C}}_{l},{\mathcal{C}}_{k}\right) \right) }^{2} > 9\left( {W\left( {{\mathcal{C}}_{k},{\mathcal{C}}_{l}}\right) + W\left( {{\mathcal{C}}_{l},{\mathcal{C}}_{k}}\right) }\right)$ .
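The "std" selection rule above is a one-line filter on the pairwise cuts. A sketch on a hand-picked cut matrix (numpy assumed; the numbers are hypothetical, chosen so that exactly one pair shows a genuine imbalance):

```python
import numpy as np

# Hypothetical probabilistic cut matrix W for K = 3 clusters;
# W[k, l] is the probabilistic cut from C_k to C_l.
W = np.array([[ 0.0, 90.0, 12.0],
              [10.0,  0.0, 11.0],
              [12.0, 11.0,  0.0]])

pairs = [(k, l) for k in range(3) for l in range(k + 1, 3)]
# "std" rule: keep (k, l) only if (W_kl - W_lk)^2 > 9 (W_kl + W_lk),
# i.e. the cut difference exceeds 3 standard deviations under the null.
selected = [(k, l) for k, l in pairs
            if (W[k, l] - W[l, k]) ** 2 > 9.0 * (W[k, l] + W[l, k])]
assert selected == [(0, 1)]  # only the strongly imbalanced pair survives
```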
+
+ As we are mainly concerned about the top pairs (i.e., those exhibiting the largest imbalance flow), another option is the "sort" variant, which selects the largest $\beta$ pairwise cut imbalance values, where $\beta$ is half the number of nonzero off-diagonal entries of the meta-graph adjacency matrix $\mathbf{F}$, if the meta-graph is known or can be approximated. For example, for a "cycle" meta-graph with three clusters and no ambient nodes, $\beta = 3$. When the meta-graph is a "path" with three clusters and ambient nodes, then $\beta = 1$. When considering the "sort" variant, with $\mathcal{T}\left(\beta\right) = \left\{\left(\mathcal{C}_{k},\mathcal{C}_{l}\right) \in \mathcal{T} : \mathrm{CI}^{\text{vol\_sum}}\left(k,l\right) \text{ is among the top } \beta \text{ values}\right\}$, where $1 \leq \beta \leq \binom{K}{2}$, we set
98
+
99
+ $$
100
+ {\mathcal{O}}_{\text{ vol\_sum }}^{\text{ sort }} = \frac{1}{\beta }\mathop{\sum }\limits_{{\left( {{\mathcal{C}}_{k},{\mathcal{C}}_{l}}\right) \in \mathcal{T}\left( \beta \right) }}{\mathrm{{CI}}}^{\text{ vol\_sum }}\left( {k,l}\right) ,\;\text{ and }\;{\mathcal{L}}_{\text{ vol\_sum }}^{\text{ sort }} = 1 - {\mathcal{O}}_{\text{ vol\_sum }}^{\text{ sort }}, \tag{3}
101
+ $$
102
+
103
+ as the corresponding loss function. Definitions of meta-graph structures are discussed in Section 4.1. For the other variants, the corresponding scores and loss functions are defined analogously. We apply the "std" variant when we have no prior knowledge of the meta-graph structure during training, and the "sort" variant when we know the number of pairs to count.
+
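The "sort" objective in Eq. (3) reduces to a top-$\beta$ average of pairwise imbalance scores; a minimal sketch under the stated definition of $\beta$ (function names are ours, for illustration only):

```python
import numpy as np

def beta_from_meta_graph(F):
    """Half the number of nonzero off-diagonal entries of the meta-graph
    adjacency matrix F."""
    off = F - np.diag(np.diag(F))
    return int(np.count_nonzero(off) // 2)

def sort_variant_loss(ci_values, beta):
    """O^sort = mean of the top-beta pairwise imbalance scores; L = 1 - O."""
    top = np.sort(np.asarray(ci_values, dtype=float))[::-1][:beta]
    return 1.0 - top.mean()

# A 3-cluster "cycle" meta-graph with eta = 0.1: all six off-diagonal
# entries are nonzero, hence beta = 3, matching the example in the text.
F = np.array([[0.5, 0.9, 0.1],
              [0.1, 0.5, 0.9],
              [0.9, 0.1, 0.5]])
beta = beta_from_meta_graph(F)                                    # -> 3
loss = sort_variant_loss([0.5, 0.4, 0.3, 0.1, 0.05, 0.0], beta)   # -> 0.6 (up to rounding)
```

Here `ci_values` stands in for the $\mathrm{CI}^{\text{vol\_sum}}(k,l)$ scores over all pairs in $\mathcal{T}$.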
+ When using the "std" variant for training, for the initial 50 epochs we apply the "sort" variant with $\beta = 3$ to obtain a reasonable starting clustering probability matrix, as otherwise during the initial training epochs possibly no pairs could be picked out. During the epochs actually utilizing the "std" variant, if no pairs can be picked out, we temporarily switch to the "naive" variant for that epoch.
+
+ Regarding complexity, the objective mainly involves matrix-vector multiplications and element-wise matrix divisions, which are at most quadratic in the number of nodes, but usually faster with our sparsity-aware implementation.
+
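One sparsity-aware way to realize this computation: with the $n \times K$ assignment probability matrix $\mathbf{P}$ and adjacency matrix $\mathbf{A}$, all pairwise expected cut weights are $\mathbf{P}^\top(\mathbf{A}\mathbf{P})$. A sketch (dense NumPy here for self-containedness; with $\mathbf{A}$ stored sparse, the cost is $O(\operatorname{nnz}(\mathbf{A})\,K + nK^2)$ rather than quadratic in $n$):

```python
import numpy as np

def pairwise_cut_weights(A, P):
    """Expected cut weights between all cluster pairs: W = P^T (A P).
    Evaluating A @ P first avoids ever forming an n x n intermediate."""
    return P.T @ (A @ P)

rng = np.random.default_rng(0)
n, K = 100, 4
A = (rng.random((n, n)) < 0.05).astype(float)                   # toy digraph
logits = rng.normal(size=(n, K))
P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row softmax
W = pairwise_cut_weights(A, P)
# Probability mass is conserved: total cut weight equals total edge weight.
assert np.isclose(W.sum(), A.sum())
```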
+ § 3.2 INSTANTIATION OF DIGRAC
+
+ To instantiate DIGRAC, any aggregation scheme able to take directionality into account could be incorporated into our general framework, as long as it can output the node embedding matrix $\mathbf{Z}$. Here, by default, we adapt the Signed Mixed Path Aggregation (SIMPA) scheme from [40]. We remove the signed parts and devise a simple yet effective directed mixed path aggregation scheme, which we call Directed Mixed Path Aggregation (DIMPA); we obtain the probability assignment matrix $\mathbf{P}$ by applying a linear layer followed by a unit softmax function to the generated embedding, and feed it to the loss function. Details of DIMPA are provided in App. A. A framework diagram is provided in Fig. 2, and an instantiation using DIMPA is visualized in Fig. 5.
+
+ Figure 2: DIGRAC overview: from the feature matrix $\mathbf{X}$, adjacency matrix $\mathbf{A}$ and number of clusters $K$, we first apply a directed GNN aggregator to obtain the node embedding matrix $\mathbf{Z}$, then apply a linear layer followed by a unit softmax function to get the probability matrix $\mathbf{P}$. Applying argmax to each row of $\mathbf{P}$ yields node cluster assignments. Green circles involve our proposed imbalance objective, while the yellow circles can only be used when ground-truth labels are provided.
+
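A minimal NumPy sketch of the final stage of this pipeline (the directed GNN aggregator producing $\mathbf{Z}$ is abstracted away; the weights are random stand-ins, not learned parameters):

```python
import numpy as np

def assignment_head(Z, weight, bias):
    """Linear layer followed by a row-wise (unit) softmax, mapping node
    embeddings Z (n x d) to cluster probabilities P (n x K)."""
    logits = Z @ weight + bias
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
n, d, K = 8, 16, 3
Z = rng.normal(size=(n, d))                       # node embeddings from the GNN
P = assignment_head(Z, rng.normal(size=(d, K)), np.zeros(K))
clusters = P.argmax(axis=1)                       # hard cluster assignments
```

During training, $\mathbf{P}$ (not the hard assignments) is fed to the imbalance loss, so the whole pipeline stays differentiable.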
+ § 4 EXPERIMENTS
+
+ In our synthetic experiments, where ground truth is available by design, performance is assessed by the Adjusted Rand Index (ARI) [41]. Normalized Mutual Information (NMI) results give almost the same ranking of the best-performing methods as the ARI, with an average Kendall tau value of $83.8\%$ and standard deviation 24.9% for pairwise ranking comparisons over the methods compared in our experiments. We do not focus on NMI in the main text due to its shortcomings [42].
+
+ Clustering tasks have different ground truths, depending on the pattern they are trying to detect. Many network clustering methods focus on detecting relatively dense clusters, and try to optimize classical network clustering measures, such as directed modularity or partition density. Ground truth for these clustering algorithms then relates to relatively densely connected subgroups in the data. DIGRAC is a novel method that addresses a novel task, namely that of detecting flow imbalances. To the best of our knowledge, real-world data sets with ground-truth flow imbalances are not available to date, and hence we introduce normalized imbalance scores to evaluate clustering performance based on flow imbalance. As ARI and NMI require ground-truth labels, they cannot be applied to the available real-world data sets. To address this shortcoming, for the real-world data sets, in Table 1, we include three performance measures which we introduce in the paper, and the appendix contains an additional 11 performance measures. Implementation details are provided in App. C. Anonymized code and preprocessed data are available at https://anonymous.4open.science/r/DIGRAC.
+
+ We compare DIGRAC against the most recent related methods from the literature for clustering digraphs. The 10 methods are (1) InfoMap [14], (2) Bibliometric and (3) Degree-discounted introduced in [21], (4) DI_SIM [22], (5) Herm and (6) Herm_rw introduced in [12], (7) MagNet [27], (8) DGCN [24], (9) DiGCN [25], and (10) DiGCL [28]. The abbreviations of these methods, when reported in the numerical experiments, are InfoMap, Bi_sym, DD_sym, DISG_LR, Herm, Herm_rw, MagNet, DGCN, DiGCN, DiGCL, respectively. DGCN is the least efficient method in terms of speed and space complexity, followed by DiGCN, which involves the so-called inception blocks. We use the same hyperparameter settings stated in these papers. Methods (7), (8), (9), (10) are trained with $80\%$ of nodes under label supervision, while all the other methods are trained without label supervision. DIGRAC further restricts itself to be trained on the subgraph induced by only the training nodes. A runtime comparison is provided in App. C.2, illustrating that DIGRAC is among the fastest of the competing GNNs. Implementation details for competitors are provided in App. C.10.
+
+ Table 1: Performance comparison on real-world data sets. The best is marked in bold red and the second best is marked in underline blue. The objectives are defined in Section 3.1.
+
+ | Metric | Data set | InfoMap | Bi_sym | DD_sym | DISG_LR | Herm | Herm_rw | DIGRAC |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | $\mathcal{O}_{\text{vol\_sum}}^{\text{sort}}$ | Telegram | 0.04 ± 0.00 | 0.21 ± 0.00 | 0.21 ± 0.00 | 0.21 ± 0.01 | 0.20 ± 0.01 | 0.14 ± 0.00 | **0.32 ± 0.01** |
+ | | Blog | 0.07 ± 0.00 | 0.07 ± 0.00 | 0.00 ± 0.00 | 0.05 ± 0.00 | 0.37 ± 0.00 | 0.00 ± 0.00 | **0.44 ± 0.00** |
+ | | Migration | N/A | 0.03 ± 0.00 | 0.01 ± 0.00 | 0.02 ± 0.00 | 0.04 ± 0.00 | 0.02 ± 0.00 | **0.05 ± 0.00** |
+ | | WikiTalk | N/A | N/A | N/A | 0.18 ± 0.03 | 0.15 ± 0.02 | 0.00 ± 0.00 | **0.24 ± 0.05** |
+ | | Lead-Lag | N/A | 0.07 ± 0.01 | 0.07 ± 0.01 | 0.07 ± 0.01 | 0.07 ± 0.02 | 0.07 ± 0.02 | **0.15 ± 0.03** |
+ | $\mathcal{O}_{\text{vol\_sum}}^{\text{std}}$ | Telegram | 0.01 ± 0.00 | 0.26 ± 0.00 | 0.26 ± 0.00 | 0.26 ± 0.01 | 0.25 ± 0.02 | **0.35 ± 0.00** | 0.28 ± 0.01 |
+ | | Blog | 0.00 ± 0.00 | 0.07 ± 0.00 | 0.00 ± 0.00 | 0.05 ± 0.00 | 0.37 ± 0.00 | 0.00 ± 0.00 | **0.44 ± 0.00** |
+ | | Migration | N/A | 0.01 ± 0.00 | 0.01 ± 0.00 | 0.01 ± 0.00 | 0.02 ± 0.00 | 0.02 ± 0.00 | **0.04 ± 0.01** |
+ | | WikiTalk | N/A | N/A | N/A | **0.17 ± 0.04** | 0.06 ± 0.01 | 0.01 ± 0.00 | 0.14 ± 0.02 |
+ | | Lead-Lag | N/A | 0.04 ± 0.01 | 0.04 ± 0.01 | 0.04 ± 0.01 | 0.04 ± 0.01 | 0.04 ± 0.01 | **0.12 ± 0.03** |
+ | $\mathcal{O}_{\text{vol\_sum}}^{\text{naive}}$ | Telegram | 0.01 ± 0.00 | 0.26 ± 0.00 | 0.26 ± 0.00 | 0.26 ± 0.01 | 0.25 ± 0.02 | 0.23 ± 0.00 | **0.27 ± 0.01** |
+ | | Blog | 0.00 ± 0.00 | 0.07 ± 0.00 | 0.00 ± 0.00 | 0.05 ± 0.00 | 0.37 ± 0.00 | 0.00 ± 0.00 | **0.44 ± 0.00** |
+ | | Migration | N/A | 0.01 ± 0.00 | 0.01 ± 0.00 | 0.01 ± 0.00 | 0.02 ± 0.00 | 0.01 ± 0.00 | **0.04 ± 0.01** |
+ | | WikiTalk | N/A | N/A | N/A | 0.10 ± 0.02 | 0.04 ± 0.00 | 0.00 ± 0.00 | **0.12 ± 0.01** |
+ | | Lead-Lag | N/A | 0.30 ± 0.06 | 0.28 ± 0.06 | 0.27 ± 0.06 | 0.29 ± 0.05 | 0.29 ± 0.05 | **0.32 ± 0.11** |
+
+ § 4.1 DATA SETS
+
+ Synthetic data: Directed Stochastic Block Models A standard directed stochastic blockmodel (DSBM) is often used to represent a network cluster structure, see for example [1]. Its parameters are the number $K$ of clusters and the edge probabilities; given the cluster assignment of the nodes, the edge indicators are independent. The DSBMs used in our experiments also depend on a meta-graph adjacency matrix $\mathbf{F} = (\mathbf{F}_{k,l})_{k,l = 0,\ldots,K-1}$, a filled version of it, $\widetilde{\mathbf{F}} = (\widetilde{\mathbf{F}}_{k,l})_{k,l = 0,\ldots,K-1}$, and a noise-level parameter $\eta \leq 0.5$. The meta-graph adjacency matrix $\mathbf{F}$ is generated from the given meta-graph structure, called $\mathcal{M}$. To include an ambient background, the filled meta-graph adjacency matrix $\widetilde{\mathbf{F}}$ replaces every zero in $\mathbf{F}$ that is not part of the imbalance structure by 0.5. The filled meta-graph thus creates a number of ambient nodes which correspond to entries that are not part of $\mathcal{M}$ and thus are not part of a meaningful cluster; this set of ambient nodes is also called the ambient cluster. First, we provide examples of structures of $\mathbf{F}$ without any ambient nodes, where $\mathbb{1}$ denotes the indicator function.
+
+ - (1) "cycle": $\mathbf{F}_{k,l} = (1-\eta)\,\mathbb{1}(l = (k+1) \bmod K) + \eta\,\mathbb{1}(l = (k-1) \bmod K) + \frac{1}{2}\,\mathbb{1}(l = k)$.
+
+ - (2) "path": $\mathbf{F}_{k,l} = (1-\eta)\,\mathbb{1}(l = k+1) + \eta\,\mathbb{1}(l = k-1) + \frac{1}{2}\,\mathbb{1}(l = k)$.
+
+ - (3) "complete": assign diagonal entries $\frac{1}{2}$. For each pair $(k,l)$ with $k < l$, let $\mathbf{F}_{k,l}$ be $\eta$ or $1-\eta$ with equal probability, then assign $\mathbf{F}_{l,k} = 1 - \mathbf{F}_{k,l}$.
+
+ - (4) "star", following [43]: select the center cluster as $\omega = \lfloor \frac{K-1}{2} \rfloor$ and set $\mathbf{F}_{k,l} = (1-\eta)\,\mathbb{1}(k = \omega,\ l \text{ odd}) + \eta\,\mathbb{1}(k = \omega,\ l \text{ even}) + (1-\eta)\,\mathbb{1}(l = \omega,\ k \text{ odd}) + \eta\,\mathbb{1}(l = \omega,\ k \text{ even})$.
+
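The four structures can be transcribed directly from the definitions above; the helper below is our illustrative implementation (no ambient nodes), with diagonal entries set to $\frac{1}{2}$ throughout:

```python
import numpy as np

def meta_graph(structure, K, eta, seed=0):
    """Meta-graph adjacency matrix F for structures (1)-(4), no ambient
    nodes. Illustrative helper; names are ours, not from the paper's code."""
    rng = np.random.default_rng(seed)
    F = np.zeros((K, K))
    np.fill_diagonal(F, 0.5)
    if structure in ("cycle", "path"):
        for k in range(K):
            for l in range(K):
                if l == k:
                    continue
                fwd = l == ((k + 1) % K if structure == "cycle" else k + 1)
                bwd = l == ((k - 1) % K if structure == "cycle" else k - 1)
                if fwd:
                    F[k, l] = 1 - eta
                elif bwd:
                    F[k, l] = eta
    elif structure == "complete":
        for k in range(K):
            for l in range(k + 1, K):
                F[k, l] = rng.choice([eta, 1 - eta])
                F[l, k] = 1 - F[k, l]
    elif structure == "star":
        w = (K - 1) // 2                  # center cluster omega
        for l in range(K):
            if l != w:
                F[w, l] = F[l, w] = (1 - eta) if l % 2 == 1 else eta
    return F

print(meta_graph("cycle", K=3, eta=0.1))
```

For a 3-cluster "cycle" with $\eta = 0.1$, each row sends weight $0.9$ to the next cluster and $0.1$ to the previous one; for "complete", $\mathbf{F} + \mathbf{F}^\top$ is the all-ones matrix by construction.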
+ When ambient nodes are present, the construction involves two steps, with the first step the same as the above but with the following change: for the "cycle" meta-graph structure, $\mathbf{F}_{k,l} = (1-\eta)\,\mathbb{1}(l = (k+1) \bmod (K-1)) + \eta\,\mathbb{1}(l = (k-1) \bmod (K-1)) + 0.5\,\mathbb{1}(l = k)$. The second step is to assign 0 (0.5, resp.) to the last row and the last column of $\mathbf{F}$ ($\widetilde{\mathbf{F}}$, resp.). Figures 1(c-d) display a "cycle" meta-graph structure with ambient nodes (in cluster 4). The majority of edges flow in the form $0 \rightarrow 1 \rightarrow 2 \rightarrow 3 \rightarrow 0$, while few flow in the opposite direction. Fig. 1(d) illustrates the meta-graph adjacency matrix corresponding to this $\mathbf{F}$.
+
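For instance, the two-step construction for a "cycle" meta-graph with an ambient cluster can be sketched as follows (our helper, assuming $K \geq 4$; every zero of $\mathbf{F}$, including the ambient row and column, is filled with 0.5 in $\widetilde{\mathbf{F}}$):

```python
import numpy as np

def cycle_meta_graph_ambient(K, eta):
    """'Cycle' meta-graph on the first K - 1 clusters plus one ambient
    cluster (assumes K >= 4). Returns (F, F_filled): F has zeros in its
    last row and column; F_filled replaces every zero of F by 0.5."""
    F = np.zeros((K, K))
    for k in range(K - 1):                  # step 1: cycle on K - 1 clusters
        F[k, k] = 0.5
        F[k, (k + 1) % (K - 1)] = 1 - eta
        F[k, (k - 1) % (K - 1)] = eta
    F[K - 1, :] = 0.0
    F[:, K - 1] = 0.0                       # step 2: ambient row/column of F
    F_filled = np.where(F == 0.0, 0.5, F)   # filled version F~
    return F, F_filled

F, F_filled = cycle_meta_graph_ambient(K=5, eta=0.1)
```

Note that every pair of entries satisfies $\widetilde{\mathbf{F}}_{k,l} + \widetilde{\mathbf{F}}_{l,k} = 1$, which is what keeps the overall edge probability identical across all node pairs in the DSBM below.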
+ In our experiments, we choose the number of clusters, the (approximate) ratio, $\rho$, between the largest and the smallest cluster size, and the number, $n$, of nodes. To tackle the hardest clustering task and also focus on directionality, all pairs of nodes within a cluster and all pairs of nodes between clusters have the same edge probability, $p$. Note that for $\mathcal{M} =$ "cycle", even the expected in-degree and out-degree of all nodes are identical. Our DSBM, which we denote by DSBM($\mathcal{M}, \mathbb{1}(\text{ambient}), n, K, p, \rho, \eta$), is built similarly to [12] but with possibly unequal cluster sizes, with more details in App. C.3. For each node $v_i \in \mathcal{C}_k$ and each node $v_j \in \mathcal{C}_l$, we independently sample an edge from node $v_i$ to node $v_j$ with probability $p \cdot \widetilde{\mathbf{F}}_{k,l}$. The parameter settings in our experiments are $p \in \{0.001, 0.01, 0.02, 0.1\}$, $\rho \in \{1, 1.5\}$, $K \in \{3, 5, 10\}$, $\mathbb{1}(\text{ambient}) \in \{\mathrm{T}, \mathrm{F}\}$ (True and False), $n \in \{1000, 5000, 30000\}$, and we also vary the direction flip probability $\eta$ from 0 to 0.45, with a step size of 0.05.
+
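The sampling step just described can be sketched as follows (our helper names; the filled meta-graph below is a 3-cluster "cycle" with $\eta = 0.1$, no ambient cluster, and cluster-size ratio $\rho = 1.5$):

```python
import numpy as np

def sample_dsbm(F_filled, sizes, p, seed=0):
    """Sample one DSBM digraph: an edge v_i -> v_j appears independently with
    probability p * F_filled[k, l], where k, l are the clusters of v_i, v_j."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    prob = p * F_filled[labels[:, None], labels[None, :]]
    A = (rng.random((labels.size, labels.size)) < prob).astype(float)
    np.fill_diagonal(A, 0.0)   # no self-loops
    return A, labels

F_filled = np.array([[0.5, 0.9, 0.1],
                     [0.1, 0.5, 0.9],
                     [0.9, 0.1, 0.5]])
A, labels = sample_dsbm(F_filled, sizes=[10, 10, 15], p=0.5)
```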
+ Figure 3: Test ARI comparison on synthetic data. Dashed lines highlight DIGRAC's performance. Error bars are given by one standard error.
+
+ Figure 4: Ablation study. (c-d) are on DSBM("cycle", $\mathrm{F}$, $n = 1000$, $K = 5$, $p = 0.02$, $\rho = 1$).
+
+ Real-world data We perform experiments on five real-world digraph data sets with sizes ranging from 245 to over 2 million nodes: Telegram [3], Blog [44], Migration [4], WikiTalk [45], and Lead-Lag [18], with details in App. C.3. We set the number of clusters $K$ to 4, 2, 10, 10, 10, respectively, and the values of $\beta$ to 5, 1, 9, 10, 3, respectively. Note that Lead-Lag comprises 19 separate networks constructed from yearly financial time series, rendering a total of 23 real-world networks.
+
+ § 4.2 EXPERIMENTAL RESULTS
+
+ Training set-up We use 10% of all nodes from each cluster as test nodes, 10% as validation nodes to select the model, and the remaining $80\%$ as training nodes. In each setting, unless otherwise stated, we carry out 10 experiments with different data splits. Error bars are given by one standard error. When no node attributes are given, the matrix $\mathbf{X}$ for DIGRAC is taken as the stacked eigenvectors corresponding to the largest $K$ eigenvalues of the random-walk symmetrized Hermitian matrix used in the comparison method Herm_rw. The imbalance loss function acts on the subgraph induced by the training nodes. To further clarify the training setup, DIGRAC uses 0% of the labels in training. As DIGRAC is a self-supervised method, in principle we could use all nodes for training; however, for a fair comparison with other GNN methods we use only $80\%$ of the nodes for training. For supervised methods, our $80\%$-$10\%$-$10\%$ split is standard. For the non-GNN methods, all nodes are used for training. The default loss function for DIGRAC is $\mathcal{L}_{\text{vol\_sum}}^{\text{sort}}$.
+
+ Results on synthetic data Fig. 3 compares the numerical performance of DIGRAC with that of the other methods on synthetic data. For this figure, we generate 5 DSBM networks under each parameter setting and use 10 different data splits for each network, then average over the 50 runs. Error bars are given by one standard error. App. C provides additional implementation details. We conclude that DIGRAC compares favorably against state-of-the-art methods on a wide range of network densities and noise levels, on different network sizes, and with different underlying meta-graph structures, with and without ambient nodes. Being a self-supervised method, DIGRAC even attains comparable or better performance than fully supervised GNN competitors.
+
+ Results on real-world data For our real-world data sets, the node in- and out-degrees may not be identical across clusters. Moreover, as these data sets do not contain node attributes, DIGRAC uses the eigenvectors corresponding to the largest $K$ eigenvalues of the Hermitian matrix from [12] to construct an input feature matrix. Table 1 reveals that DIGRAC provides competitive global imbalance scores on the three objectives discussed and across all real-world data sets; it outperforms all other methods in 13 out of 15 instances, while attaining the second-best performance in the remaining two. The N/A entries for WikiTalk are caused by memory errors, and the N/A entries for InfoMap on Migration and Lead-Lag are due to its predicting only a single cluster. For Migration, as detailed in Fig. 1(b) and App. D.4, DIGRAC is able to uncover nontrivial migration patterns, such as migration from California to Arizona, as discovered by [4]. Lead-Lag results in each year are averaged over ten runs, while the mean and standard deviation values are calculated with respect to the 19 years. The experiments indicate that edge directionality contains an important signal that DIGRAC is able to capture. As App. D.2 illustrates, DIGRAC is able to provide comparable or higher pairwise imbalance scores for the leading pairs. The fitted meta-graph plots in App. D.3 reveal that DIGRAC is able to recover a directed flow imbalance between clusters in all of the selected data sets. A comprehensive numerical comparison in App. D reveals similar conclusions.
+
+ § 4.3 ABLATION STUDY
+
+ Figures 4(a-b) compare the performance of DiGCN when replacing its loss function by $\mathcal{L}_{\text{vol\_sum}}^{\text{sort}}$ from Eq. (3), indicated by "CI", or by the sum of the supervised and self-supervised losses, on two synthetic models. We find that replacing the supervised loss function with $\mathcal{L}_{\text{vol\_sum}}^{\text{sort}}$ leads to comparable results, and that adding $\mathcal{L}_{\text{vol\_sum}}^{\text{sort}}$ to the loss can be beneficial, indicating that the imbalance objectives are more general than only being applicable to DIMPA. Fig. 4(c) compares the test ARI performance using three variants of the loss function on the same digraph. The current choice, "sort", performs best among these variants, indicating a benefit in only considering the top pairs of individual imbalance scores; the "std" variant is comparable, but "sort" performs best given prior knowledge of the network structure. More details on loss functions, comparisons with other variants, and evaluation on additional metrics are discussed in App. B, with similar conclusions. As illustrated in Fig. 4(d), again on the same digraph, we also experiment with adding seed nodes, where the seed ratio is defined as the ratio of the number of seed nodes to the number of training nodes. A supervised loss, following [40], is then applied to these seeds; App. C.8 contains additional details. In conclusion, seed nodes with a supervised loss function enhance performance, and we infer that our model can further boost its performance when additional label information is available.
+
+ § 5 CONCLUSION, LIMITATIONS AND OUTLOOK
+
+ DIGRAC provides an end-to-end pipeline to create node embeddings and perform directed clustering, with or without additional node features or cluster labels. We illustrate DIGRAC on publicly available data without any personally identifiable information. DIGRAC could potentially have societal impact, for example, in detecting clusters of fake accounts in social networks. While we do not envision our work to have any negative societal impact, vigilance is of course required.
+
+ Current limitations that could be addressed by future work include detecting the number of clusters [10, 46], instead of specifying it a priori, as this is typically not available in real-world applications. The relatively small sizes of the networks used in the paper (the largest has 2 million nodes) also open a future direction of adapting our pipeline to extremely large networks, possibly combined with sampling methods or mini-batching [47], rendering DIGRAC applicable to large-scale industrial applications. We also intend to further explore the effect of normalization terms in our objectives, and to design more powerful objectives that can explicitly account for varying edge density.
+
+ Another future direction pertains to additional experiments in the semi-supervised setting, when there exist seed nodes with known cluster labels, or when additional information is available in the form of must-link and cannot-link constraints, popular in the constrained clustering literature [48, 49]. Further research will also address performance in the sparse regime, where spectral methods are known to underperform and various regularizations have proven effective both theoretically and empirically; e.g., see regularization in the sparse regime for undirected settings [50-52].
papers/LOG/LOG 2022/LOG 2022 Conference/VBXRMnRBfRF/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,370 @@
+ # Metric Based Few-Shot Graph Classification
+
+ Anonymous Author(s)
+
+ Anonymous Affiliation
+
+ Anonymous Email
+
+ ## Abstract
+
+ Few-shot graph classification is a novel yet promising emerging research field that still lacks the soundness of well-established research domains. Existing works often consider different benchmarks and evaluation settings, hindering comparison and, therefore, scientific progress. In this work, we start by providing an extensive overview of the possible approaches to solving the task, comparing the current state-of-the-art and baselines via a unified evaluation framework. Our findings show that while graph-tailored approaches have a clear edge on some distributions, easily adapted few-shot learning methods generally perform better. In fact, we show that it is sufficient to equip a simple metric learning baseline with a state-of-the-art graph embedder to obtain the best overall results. We then show that straightforward additions at the latent level lead to substantial improvements by introducing i) a task-conditioned embedding space and ii) a MixUp-based data augmentation technique. Finally, we release a highly reusable codebase to foster research in the field, offering modular and extensible implementations of all the relevant techniques.
+
+ ## 1 Introduction
+
+ Graphs have ruled digital representations since the dawn of computer science. Their structure is simple and general, and their structural properties are well studied. Given the success of deep learning in different domains that enjoy a regular structure, such as those found in computer vision [4, 47, 71] and natural language processing [9, 14, 38, 54], a recent line of research has sought to extend it to manifolds and graph-structured data [3, 8, 25]. Nevertheless, the expressivity brought by deep learning comes at a cost: deep models require vast amounts of data to search the complex hypothesis spaces they define. When data is scarce, these models end up overfitting the training set, hindering their generalization capability on unseen samples. While annotations are usually abundant in computer vision and natural language processing, they are harder to obtain for graph-structured data because the annotation process is often infeasible or expensive [28, 49, 51]. This is particularly true when the samples come from specialized domains such as biology, chemistry and medicine [27], where graph-structured data are ubiquitous. A prominent example is drug testing, which requires expensive in-vivo testing and laborious wet experiments to label drug and protein graphs [36].
+
+ To address this problem, the field of Few-Shot Learning (FSL) [17, 19] aims at designing models which can effectively operate in scarce data scenarios. While this well-established research area enjoys a plethora of mature techniques, robust benchmarks and libraries, its intersection with graph representation learning is still at an embryonic stage. As such, the field suffers from a lack of uniformity: existing works often consider different benchmarks and evaluation settings, with no two works considering the same set of datasets or evaluation hyperparameters. This scenario results in a fragmented understanding, hindering comparison and, therefore, scientific progress in the field. In an attempt to mitigate this issue and facilitate new research, we provide a modular and easily extensible codebase with re-implementations of the most relevant baselines and state-of-the-art works. The latter allows both for straightforward use by practitioners and for a fair comparison of the techniques in a unified evaluation setting. Our findings show that kernel methods achieve impressive results on particular distributions but are too rigid to be used as an overall solution. On the other hand, few-shot learning techniques can be easily adapted to the graph setting by employing a graph neural network as the encoder. Contrary to existing works, we argue that the latter is sufficient to capture the complexity of the structure, relieving the remaining pipeline of the burden. Once in the latent space, standard techniques behave as expected, and no further tailoring to the graph domain is needed.
+
+ ![01963ed9-99db-7614-977d-e8fed6978e5a_1_490_195_842_505_0.jpg](images/01963ed9-99db-7614-977d-e8fed6978e5a_1_490_195_842_505_0.jpg)
+
+ Figure 1: An $N$-way $K$-shot episode. In this example, there are $N = 3$ classes. Each class has $K = 4$ supports, yielding a support set of size $N \times K = 12$. The class information provided by the supports is exploited to classify the queries. We test the classification accuracy on all $N$ classes. In the figure, there are $Q = 2$ queries for each class, thus the query set has size $N \times Q = 6$.
+
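The episode construction in Figure 1 can be sketched with plain Python (our illustrative helper; `dataset` maps each class label to its list of graph samples):

```python
import random

def sample_episode(dataset, N, K, Q, seed=None):
    """Sample an N-way K-shot episode with Q queries per class.
    Returns (support, query) lists of (graph, label) pairs."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), N)
    support, query = [], []
    for c in classes:
        chosen = rng.sample(dataset[c], K + Q)
        support += [(g, c) for g in chosen[:K]]
        query += [(g, c) for g in chosen[K:]]
    return support, query

# Toy dataset: 5 classes with 10 (dummy) graphs each.
data = {c: [f"g{c}_{i}" for i in range(10)] for c in range(5)}
support, query = sample_episode(data, N=3, K=4, Q=2, seed=0)
```

With $N = 3$, $K = 4$ and $Q = 2$, the support set has $N \times K = 12$ elements and the query set $N \times Q = 6$, exactly as in the figure.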
+ In this direction, we show that a simple Prototypical Networks [48] architecture outperforms existing works when equipped with a state-of-the-art graph embedder. As typical in few-shot learning, we frame tasks as episodes, where an episode is defined by a set of classes and several supervised samples (supports) for each of them [56]. Such an episode is depicted in Figure 1. This setting favors a straightforward addition to the architecture: while a standard Prototypical Network would embed the samples in the same way independently of the episode, we can take inspiration from [39] and empower the graph embeddings by conditioning them on the particular set of classes seen in the episode. This way, the intermediate features and the final embeddings can be modulated according to what is best for the current episode. Finally, we propose to augment the training dataset using a MixUp-based [70] online data augmentation technique. The latter creates artificial samples from two existing ones as a mix-up of their latent representations, probing unexplored regions of the latent space that can accommodate samples from unseen classes. We finally show that these additions are beneficial for the task both qualitatively and quantitatively.
+
+ Summarizing, our contribution is four-fold:
+
+ 1. We provide an extensive overview of the possible approaches to solving the task, comparing all the existing works and baselines in a unified evaluation framework;
+
+ 2. We release a highly reusable codebase to foster research in the field, offering modular and extensible implementations of all the relevant techniques;
+
+ 3. We show that it is enough to equip existing few-shot pipelines with graph encoders to obtain competitive results, proposing in particular a metric learning baseline for the task;
+
+ 4. We equip the latter with two supplementary modules: an episode-adaptive embedder and a novel online data augmentation technique, proving their benefits qualitatively and quantitatively.
+
+ ## 2 Related work
+
+ Few-Shot Learning. Data-scarce tasks are usually tackled by one of the following paradigms: i) transfer learning techniques [1, 33, 34], which aim at transferring the knowledge gained from a data-abundant task to a task with scarce data; ii) meta-learning techniques [20, 41, 69], which more generally introduce a meta-learning procedure to gradually learn meta-knowledge that generalizes across several tasks; iii) data augmentation works [21, 53, 65], which seek to augment the data by applying transformations to the available samples to generate new ones preserving specific properties. We refer the reader to [61] for an extensive treatment of the matter. Particularly relevant to our work are distance metric learning approaches: in this direction, [56] suggest embedding both supports and queries and then labeling each query with the label of its nearest neighbor in the embedding space. By obtaining a class distribution for the query using a softmax over the distances from the supports, they then learn the embedding space by minimizing the negative log-likelihood. [48] generalize this intuition by allowing the $K$ supports per class to be aggregated to form prototypes. Given its effectiveness and simplicity, we chose this approach as the starting point for our architecture.
+
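The prototype-based classification of [48] can be sketched as follows (a NumPy stand-in for the learned embedder, with Euclidean distances, $N = 3$ ways and $K = 4$ shots; the synthetic embeddings are ours, for illustration only):

```python
import numpy as np

def prototypical_predict(support_emb, support_lab, query_emb, N):
    """Softmax over negative squared distances to class prototypes,
    each prototype being the mean of that class's K support embeddings."""
    protos = np.stack([support_emb[support_lab == c].mean(axis=0)
                       for c in range(N)])                        # (N, d)
    d2 = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -d2
    expl = np.exp(logits - logits.max(axis=1, keepdims=True))
    return expl / expl.sum(axis=1, keepdims=True)   # train by NLL of true class

rng = np.random.default_rng(0)
centers = 5.0 * rng.normal(size=(3, 8))              # well-separated class means
support = np.concatenate([centers[c] + 0.1 * rng.normal(size=(4, 8))
                          for c in range(3)])        # K = 4 shots per class
support_lab = np.repeat(np.arange(3), 4)
queries = centers + 0.1 * rng.normal(size=(3, 8))    # one query per class
probs = prototypical_predict(support, support_lab, queries, N=3)
```

Since the class means are well separated relative to the noise, each query is assigned to its own class with near-certainty.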
+ Graph Data Augmentation. Data augmentation follows the idea that, in the working domain, there exist transformations that can be applied to samples to generate new ones in a controlled way (e.g., preserving the sample class in a classification setting while changing its content). Therefore, synthetic samples can meet the needs of large neural networks that require training with high volumes of data [61]. In Euclidean domains (e.g., images), this can often be achieved by simple rotations and translations [5, 42]. Unfortunately, in the graph domain, it is challenging to define such transformations on a given graph sample while keeping control of its properties. To this end, a line of works takes inspiration from Mix-Up [37, 70] to create new artificial samples as a combination of two existing ones: [23, 26, 40, 63] propose to augment graph data directly in the data space, while [64] interpolates latent representations to create novel ones. We also operate in the latent space but, differently from [64], we suggest creating a new sample by selecting only certain features of one representation and the remaining ones from the other, by employing a random gating vector. This allows obtaining synthetic samples as random compositions of the features of the existing samples rather than a linear interpolation of them. We also argue that the proposed Mix-Up is tailored for distance metric learning, making full use of the similarity among samples and class prototypes.
+
+ Few-Shot Graph Representation Learning. Few-shot graph representation learning is concerned with applying graph representation learning techniques in scarce-data scenarios. Similarly to standard graph representation learning, it tackles tasks at different levels of granularity: node-level [15, 58, 68, 72, 73], edge-level [2, 35, 43, 59] and graph-level [12, 24, 29, 32, 36, 60, 62]. Concerning the latter, GSM [12] proposes a hierarchical approach, AS-MAML [36] adapts the well-known MAML [20] architecture to the graph setting, and SMF-GIN [29] uses a Prototypical Network (PN) variant with domain-specific priors. Differently from the latter, we employ a more faithful formulation of PN that shows far superior performance. Most recently, FAITH [60] proposes to capture episode correlations with an inter-episode hierarchical graph, and SP-NP [32] suggests employing neural processes [22] for the task.
42
+
43
+ ## 3 Approach
44
+
45
+ Setting and Notation. In few-shot graph classification each sample is a tuple $(\mathcal{G}, y)$, where $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ is a graph with node set $\mathcal{V}$ and edge set $\mathcal{E}$, and $y$ is a graph-level class. Given a set of data-abundant base classes ${C}_{\mathrm{b}}$, we aim to classify a set of data-scarce novel classes ${C}_{\mathrm{n}}$. We cast this problem in an episodic framework [57]: during training, we mimic the few-shot setting by dividing the base training data into episodes. Each episode $e$ is an $N$-way $K$-shot classification task with its own train (${D}_{\text{train}}$) and test (${D}_{\text{test}}$) data. For each of the $N$ classes, ${D}_{\text{train}}$ contains $K$ corresponding support graphs, while ${D}_{\text{test}}$ contains $Q$ query graphs. A schematic visualization of an episode is depicted in Figure 1.
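To make the episodic setup concrete, here is a minimal sketch of episode sampling; the dictionary-based dataset and all helper names are our illustrative assumptions, not part of the paper's codebase:

```python
import random

def sample_episode(data, n_way, k_shot, n_query, rng):
    """Sample one N-way K-shot episode: for each of the N sampled classes,
    draw K support graphs (D_train) and Q query graphs (D_test)."""
    classes = rng.sample(sorted(data), n_way)
    support, query = {}, {}
    for c in classes:
        picks = rng.sample(data[c], k_shot + n_query)
        support[c], query[c] = picks[:k_shot], picks[k_shot:]
    return support, query

# Toy base-class pool: 5 classes with 20 placeholder "graphs" each.
pool = {c: [f"graph_{c}_{i}" for i in range(20)] for c in range(5)}
supp, qry = sample_episode(pool, n_way=3, k_shot=5, n_query=2, rng=random.Random(0))
```

During training, many such episodes are drawn from the base classes so that the model always faces the same $N$-way $K$-shot problem it will see at test time on the novel classes.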
46
+
47
+ Prototypical Network (PN) Architecture. We build our network upon the simple-yet-effective idea of Prototypical Networks [48], originally proposed for few-shot image classification. We employ a state-of-the-art Graph Neural Network as node embedder, composed of a stack of GIN convolution layers [67], each equipped with an MLP regularized with GraphNorm [10]. In practice, each sample is passed through this stack of convolutions, obtaining a hidden representation ${\mathbf{h}}^{\left( l\right) }$ at each layer $l$. Following [67], the hidden representation of node $v$ is updated at each layer as
48
+
49
+ $$
50
+ {\mathbf{h}}_{v}^{\left( l\right) } = {\operatorname{MLP}}^{\left( l\right) }\left( {\left( {1 + {\epsilon }^{\left( l\right) }}\right) \cdot {\mathbf{h}}_{v}^{\left( l - 1\right) } + \mathop{\sum }\limits_{{u \in \mathcal{N}\left( v\right) }}{\mathbf{h}}_{u}^{\left( l - 1\right) }}\right) \tag{1}
51
+ $$
52
+
53
+ where ${\epsilon }^{\left( l\right) }$ is a learnable parameter. Following [66], the final $d$-dimensional node embedding ${\mathbf{h}}_{v} \in {\mathbb{R}}^{d}$ is given by the concatenation of the outputs of all the layers. The graph-level embedding is then obtained by employing a global pooling function, such as the mean or the sum. While the sum is a more expressive pooling function for GNNs [67], we observed the mean to behave better for the task on most considered datasets; it is therefore the aggregation function of choice when not specified otherwise. The $K$ embedded supports ${\mathbf{s}}_{1}^{\left( n\right) },\ldots ,{\mathbf{s}}_{K}^{\left( n\right) }$ for each class $n$ are then aggregated to form the class prototypes ${\mathbf{p}}^{\left( n\right) }$ ,
54
+
55
+ $$
56
+ {\mathbf{p}}^{\left( n\right) } = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{\mathbf{s}}_{k}^{\left( n\right) } \tag{2}
57
+ $$
58
+
59
+ ![01963ed9-99db-7614-977d-e8fed6978e5a_3_305_124_1179_662_0.jpg](images/01963ed9-99db-7614-977d-e8fed6978e5a_3_305_124_1179_662_0.jpg)
60
+
61
+ Figure 2: Prototypical Networks architecture. A graph encoder embeds the support graphs; the embeddings that belong to the same class are averaged to obtain the class prototype $p$ . To classify a query graph $q$ , it is embedded in the same space as the supports. The distances in the latent space between the query and the prototypes determine the similarities, and thus the probability distribution of the query over the different classes, computed as in Equation (3).
62
+
63
+ In the same way, the $Q$ query graphs for each class $n$ are embedded to obtain ${\mathbf{q}}_{1}^{\left( n\right) },\ldots ,{\mathbf{q}}_{Q}^{\left( n\right) }$ . To compare each query graph embedding $\mathbf{q}$ with the class prototypes ${\mathbf{p}}_{1},\ldots ,{\mathbf{p}}_{N}$ , we use an ${\mathcal{L}}_{2}$ metric scaled by a learnable temperature factor $\alpha$ , as suggested in [39]; we refer to this metric as ${d}_{\alpha }$ . The class probability distribution $\mathbf{\rho }$ for the query is finally computed by taking the softmax over these distances
66
+
67
+ $$
68
+ {\mathbf{\rho }}_{n} = \frac{\exp \left( {-{d}_{\alpha }\left( {\mathbf{q},{\mathbf{p}}_{n}}\right) }\right) }{\mathop{\sum }\limits_{{{n}^{\prime } = 1}}^{N}\exp \left( {-{d}_{\alpha }\left( {\mathbf{q},{\mathbf{p}}_{{n}^{\prime }}}\right) }\right) }. \tag{3}
69
+ $$
70
+
71
+ The model is then trained end-to-end by minimizing via SGD the negative log-probability $\mathcal{L}\left( \phi \right) = - \log {\mathbf{\rho }}_{n}$ of the true class $n$ . We will refer to this approach without additions as PN in the experiments.
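A minimal sketch of the classification step of Equation (3) and the training loss, with toy prototypes and a fixed temperature $\alpha$ (an illustrative assumption; in the model $\alpha$ is learned):

```python
import math

def classify(query, prototypes, alpha=1.0):
    """Class distribution for a query (Eq. 3): softmax over negative
    temperature-scaled squared-L2 distances to the class prototypes."""
    neg = {c: -alpha * sum((q - pc) ** 2 for q, pc in zip(query, p))
           for c, p in prototypes.items()}
    z = sum(math.exp(v) for v in neg.values())
    return {c: math.exp(v) / z for c, v in neg.items()}

protos = {"A": [0.0, 0.0], "B": [2.0, 2.0]}
rho = classify([0.4, 0.2], protos)
loss = -math.log(rho["A"])  # NLL of the true class, minimized during training
```

The query lands closest to prototype "A", so the softmax assigns it the highest probability for that class.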
72
+
73
+ Task-Adaptive Embedding (TAE). Until now, our module computes the embeddings regardless of the specific composition of the episode. Our intuition is that the context in which a graph appears should influence its representation. In practice, inspired by [39], we condition the embeddings on the particular task (episode) for which they are computed. Such influence will be expressed by a translation $\beta$ and a scaling $\gamma$ .
74
+
75
+ First of all, given an episode $e$ we compute an episode representation ${\mathbf{p}}_{\mathbf{e}}$ as the mean of the prototypes ${\mathbf{p}}_{n}$ for the classes $n = 1,\ldots , N$ in the episode. We consider ${\mathbf{p}}_{\mathbf{e}}$ as a prototype for the episode and a proxy for the task. Then, we feed it to a Task Embedding Network (TEN), composed of two distinct residual MLPs. These output a shift vector ${\mathbf{\beta }}_{\ell }$ and a scale vector ${\gamma }_{\ell }$ respectively for each layer of the graph embedding module. At layer $\ell$ , the output ${\mathbf{h}}_{\ell }$ is then conditioned on the episode by transforming it as
76
+
77
+ $$
78
+ {\mathbf{h}}_{\ell }^{\prime } = \gamma \odot {\mathbf{h}}_{\ell } + \mathbf{\beta }. \tag{4}
79
+ $$
80
+
81
+ As in [39], at each layer $\gamma$ and $\beta$ are multiplied by two ${L}_{2}$ -penalized scalars ${\gamma }_{0}$ and ${\beta }_{0}$ , so as to promote significant conditioning only when useful. Wrapping up, defining ${g}_{\Theta }$ and ${h}_{\Phi }$ to be the predictors for the shift and scale vectors respectively, the vectors applied to the hidden representation are $\mathbf{\beta } = {\beta }_{0}{g}_{\Theta }\left( {\mathbf{p}}_{\mathbf{e}}\right)$ and $\mathbf{\gamma } = {\gamma }_{0}{h}_{\Phi }\left( {\mathbf{p}}_{\mathbf{e}}\right) + \mathbf{1}$ . When we use this improvement in our experiments, we add the label TAE to the method name.
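A sketch of the conditioning of Equation (4), with toy lambdas standing in for the two residual MLPs of the TEN; all names and values below are illustrative assumptions:

```python
def tae_condition(h, p_e, g_theta, h_phi, beta0=0.5, gamma0=0.5):
    """FiLM-style task conditioning (Eq. 4): h' = gamma * h + beta, where
    beta = beta0 * g_theta(p_e) and gamma = gamma0 * h_phi(p_e) + 1."""
    beta = [beta0 * b for b in g_theta(p_e)]
    gamma = [gamma0 * g + 1.0 for g in h_phi(p_e)]
    return [g * x + b for g, x, b in zip(gamma, h, beta)]

p_e = [0.2, -0.4]   # episode prototype: mean of the class prototypes
h = [1.0, 2.0]      # a layer's hidden representation
# Toy TEN predictors (identity / negation) in place of the two residual MLPs.
h_cond = tae_condition(h, p_e, g_theta=lambda p: p, h_phi=lambda p: [-x for x in p])
```

Note that when the penalized scalars shrink to zero, $\gamma$ collapses to $\mathbf{1}$ and $\beta$ to $\mathbf{0}$, so the conditioning reduces to the identity, which is exactly the behavior the $L_2$ penalty encourages when the episode context is not informative.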
82
+
83
+ ![01963ed9-99db-7614-977d-e8fed6978e5a_4_490_186_820_421_0.jpg](images/01963ed9-99db-7614-977d-e8fed6978e5a_4_490_186_820_421_0.jpg)
84
+
85
+ Figure 3: Mixup procedure. Each graph is embedded into a latent representation. We generate a random boolean mask $\mathbf{\sigma }$ and its complement $\mathbf{1} - \mathbf{\sigma }$ , which describe the features to select from ${\mathbf{s}}_{1}$ and ${\mathbf{s}}_{2}$ . The selected features are then recomposed to generate the novel latent vector $\widetilde{\mathbf{s}}$ .
86
+
87
+ MixUp (MU) Embedding Augmentation. Typical learning pipelines rely on data augmentation to overcome limited variability in the dataset. While this is mainly performed to obtain invariance to specific transformations, we use it to improve our embedding representation, promoting generalization to unseen feature combinations. In practice, given an episode $e$ , we randomly sample for each pair of classes ${n}_{1},{n}_{2}$ two graphs ${\mathcal{G}}^{\left( 1\right) }$ and ${\mathcal{G}}^{\left( 2\right) }$ from the corresponding support sets. Then, we compute their embeddings ${\mathbf{s}}^{\left( 1\right) }$ and ${\mathbf{s}}^{\left( 2\right) }$ , as well as their class probability distributions ${\mathbf{\rho }}^{\left( 1\right) }$ and ${\mathbf{\rho }}^{\left( 2\right) }$ according to Equation (3). Next, we randomly sample a boolean mask $\mathbf{\sigma } \in \{ 0,1{\} }^{d}$ . We can then obtain a novel synthetic example by mixing the features of the two graphs in the latent space
88
+
89
+ $$
90
+ \widetilde{\mathbf{s}} = \mathbf{\sigma } \odot {\mathbf{s}}^{\left( 1\right) } + \left( {\mathbf{1} - \mathbf{\sigma }}\right) \odot {\mathbf{s}}^{\left( 2\right) }, \tag{5}
91
+ $$
92
+
93
+ where 1 is a $d$ -dimensional vector of ones. Finally, we craft a synthetic class probability $\widetilde{\mathbf{\rho }}$ for this example by linear interpolation
94
+
95
+ $$
96
+ \widetilde{\mathbf{\rho }} = \lambda {\mathbf{\rho }}^{\left( 1\right) } + \left( {1 - \lambda }\right) {\mathbf{\rho }}^{\left( 2\right) },\;\lambda = \left( {\frac{1}{d}\mathop{\sum }\limits_{{i = 1}}^{d}{\mathbf{\sigma }}_{i}}\right) \tag{6}
97
+ $$
98
+
99
+ where $\lambda$ represents the fraction of features sampled from the first sample. If we then compute the class distribution $\mathbf{\rho }$ for $\widetilde{\mathbf{s}}$ according to Equation (3), we can encourage it to match Equation (6) by adding the following regularizing term to the training loss
100
+
101
+ $$
102
+ {\mathcal{L}}_{\mathrm{{MU}}} = \parallel \mathbf{\rho } - \widetilde{\mathbf{\rho }}{\parallel }_{2}^{2}. \tag{7}
103
+ $$
104
+
105
+ Intuitively, by adopting this online data augmentation procedure, the network is faced with new feature combinations during training, helping to explore unseen regions of the embedding space. The overall procedure is summarized in Figure 3.
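The procedure of Equations (5)-(7) can be sketched as follows, with toy embeddings and class distributions standing in for the encoder outputs (all names are our illustrative assumptions):

```python
import random

def mask_mixup(s1, s2, rho1, rho2, rng):
    """Feature-mask MixUp: compose a synthetic embedding from a random boolean
    mask (Eq. 5) and interpolate the class distributions with the fraction of
    features kept from the first sample (Eq. 6)."""
    d = len(s1)
    sigma = [rng.randint(0, 1) for _ in range(d)]
    s_mix = [m * a + (1 - m) * b for m, a, b in zip(sigma, s1, s2)]
    lam = sum(sigma) / d
    rho_mix = [lam * a + (1 - lam) * b for a, b in zip(rho1, rho2)]
    return s_mix, rho_mix

def mu_loss(rho, rho_mix):
    """Regularizer (Eq. 7): squared L2 distance between the predicted and
    the interpolated class distributions."""
    return sum((a - b) ** 2 for a, b in zip(rho, rho_mix))

s1, s2 = [1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]
rho1, rho2 = [0.9, 0.1], [0.2, 0.8]
s_mix, rho_mix = mask_mixup(s1, s2, rho1, rho2, random.Random(0))
```

Each coordinate of the synthetic embedding is copied verbatim from one of the two parents, so the augmentation explores new feature combinations rather than points on the segment between the two embeddings.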
106
+
107
+ ## 4 Experiments
108
+
109
+ ### 4.1 Datasets
110
+
111
+ We benchmark our approach over two sets of datasets. The first was introduced in [12] and consists of: (i) TRIANGLES, a collection of graphs labeled $i = 1,\ldots ,{10}$ , where $i$ is the number of triangles in the graph. (ii) ENZYMES, a dataset of tertiary protein structures from the BRENDA database [11]; each label corresponds to a different top-level enzyme. (iii) Letter-High, a collection of graph-represented letter drawings from the English alphabet; each drawing is labeled with the corresponding letter. (iv) Reddit-12K, a social network dataset where graphs represent threads, with edges connecting interacting users; the label of a thread is given by the corresponding discussion forum. We will refer to this set of datasets as ${\mathcal{D}}_{\mathrm{A}}$ . The second set of datasets was introduced in [36] and consists of: (i) Graph-R52, a textual dataset in which each graph represents a different text, with words being connected by an edge if they appear together in a sliding window. (ii) COIL-DEL, a collection of graph-represented images obtained through corner detection and Delaunay triangulation. We will refer to this set of datasets as ${\mathcal{D}}_{\mathrm{B}}$ . The overall dataset statistics are reported in appendix A.
112
+
113
+ <table><tr><td colspan="2" rowspan="2">Model</td><td colspan="2">TRIANGLES</td><td colspan="2">Letter-High</td><td colspan="2">ENZYMES</td><td colspan="2">Reddit</td><td colspan="2">mean</td></tr><tr><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td></tr><tr><td rowspan="3">Kernel</td><td>WL</td><td>${59.3} \pm {7.7}$</td><td>${64.5} \pm {7.4}$</td><td>${69.8} \pm {7.2}$</td><td>${74.1} \pm {5.8}$</td><td>${54.9} \pm {9.1}$</td><td>${57.0} \pm {9.1}$</td><td>${29.3} \pm {4.5}$</td><td>${34.2} \pm {4.9}$</td><td>53.3</td><td>57.5</td></tr><tr><td>SP</td><td>${61.0} \pm {8.0}$</td><td>${66.7} \pm {7.4}$</td><td>${67.3} \pm {6.8}$</td><td>${71.2} \pm {6.6}$</td><td>${58.8} \pm {9.1}$</td><td>${61.5} \pm {8.8}$</td><td>${51.0} \pm {5.8}$</td><td>${52.7} \pm {4.9}$</td><td>59.5</td><td>63.0</td></tr><tr><td>Graphlet</td><td>${69.2} \pm {10.2}$</td><td>${79.3} \pm {8.1}$</td><td>${35.4} \pm {4.2}$</td><td>${39.4} \pm {4.4}$</td><td>${58.8} \pm {10.6}$</td><td>${59.8} \pm {9.8}$</td><td>${42.7} \pm {11.3}$</td><td>${45.4} \pm {11.2}$</td><td>51.5</td><td>56.0</td></tr><tr><td rowspan="3">Meta</td><td>MAML</td><td>$\mathbf{{87.8}} \pm {4.9}$</td><td>${88.2} \pm {4.5}$</td><td>${69.6} \pm {7.9}$</td><td>${73.8} \pm {5.7}$</td><td>${52.7} \pm {8.9}$</td><td>${54.9} \pm {8.5}$</td><td>${26.0} \pm {6.0}$</td><td>${37.0} \pm {6.9}$</td><td>59.0</td><td>63.5</td></tr><tr><td>AS-MAML [36]</td><td>${86.4} \pm {0.7}$</td><td>${87.2} \pm {0.6}$</td><td>${76.2} \pm {0.8}$</td><td>${77.8} \pm {0.7}$</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>AS-MAML*</td><td>${79.2} \pm {5.9}$</td><td>${84.0} \pm {5.3}$</td><td>${71.8} \pm {7.6}$</td><td>${73.0} \pm {5.2}$</td><td>${45.1} \pm {8.2}$</td><td>${53.1} \pm {8.1}$</td><td>${33.7} \pm {10.8}$</td><td>${37.4} \pm {10.8}$</td><td>57.4</td><td>61.9</td></tr><tr><td rowspan="3">Metric</td><td>SMF-GIN [29]</td><td>${79.8} \pm {0.7}$</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>FAITH [60]</td><td>${79.5} \pm {4.0}$</td><td>${80.7} \pm {3.5}$</td><td>${71.5} \pm {3.5}$</td><td>${76.6} \pm {3.2}$</td><td>${57.8} \pm {4.6}$</td><td>${62.1} \pm {4.1}$</td><td>${42.7} \pm {4.1}$</td><td>${46.6} \pm {4.0}$</td><td>62.9</td><td>66.5</td></tr><tr><td>SPNP [32]</td><td>${85.2} \pm {0.7}$</td><td>${86.8} \pm {0.7}$</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td rowspan="5">Transfer</td><td>GIN</td><td>${82.1} \pm {6.3}$</td><td>${83.6} \pm {5.4}$</td><td>${68.4} \pm {7.3}$</td><td>${74.5} \pm {5.7}$</td><td>${54.2} \pm {9.3}$</td><td>${55.9} \pm {9.4}$</td><td>${49.8} \pm {7.0}$</td><td>$\mathbf{{53.4} \pm {6.3}}$</td><td>63.6</td><td>66.8</td></tr><tr><td>GAT</td><td>${82.8} \pm {6.1}$</td><td>${83.4} \pm {5.5}$</td><td>${74.1} \pm {6.2}$</td><td>${76.4} \pm {5.1}$</td><td>${53.6} \pm {9.4}$</td><td>${55.4} \pm {9.1}$</td><td>${39.0} \pm {6.7}$</td><td>${41.7} \pm {6.1}$</td><td>62.4</td><td>64.2</td></tr><tr><td>GCN</td><td>${82.0} \pm {6.1}$</td><td>${82.7} \pm {5.5}$</td><td>${71.3} \pm {6.8}$</td><td>${74.9} \pm {5.5}$</td><td>${53.4} \pm {9.3}$</td><td>${54.6} \pm {9.4}$</td><td>${44.7} \pm {7.4}$</td><td>${50.8} \pm {6.3}$</td><td>62.8</td><td>65.7</td></tr><tr><td>GSM [12]</td><td>${71.4} \pm {4.3}$</td><td>${75.6} \pm {3.6}$</td><td>${69.9} \pm {5.9}$</td><td>${73.2} \pm {3.4}$</td><td>${55.4} \pm {5.7}$</td><td>${60.6} \pm {3.8}$</td><td>${41.5} \pm {4.1}$</td><td>${45.6} \pm {3.6}$</td><td>59.5</td><td>63.8</td></tr><tr><td>GSM*</td><td>${79.2} \pm {5.7}$</td><td>${81.0} \pm {5.6}$</td><td>${72.9} \pm {6.4}$</td><td>${75.6} \pm {5.6}$</td><td>${56.8} \pm {10.3}$</td><td>${58.4} \pm {9.7}$</td><td>${40.7} \pm {6.8}$</td><td>${46.4} \pm {6.3}$</td><td>62.4</td><td>65.4</td></tr><tr><td>Ours</td><td>PN+TAE+MU</td><td>${87.4} \pm {0.9}$</td><td>${87.5} \pm {0.8}$</td><td>${77.2} \pm {5.5}$</td><td>${79.2} \pm {4.8}$</td><td>$\approx {57} \pm {10.1}$</td><td>${59.3} \pm {9.4}$</td><td>${45.7} \pm {6.7}$</td><td>${48.5} \pm {6.3}$</td><td>66.8</td><td>68.7</td></tr></table>
114
+
115
+ Table 1: Macro accuracy scores over different $k$ -shot settings and architectures. They are partitioned into baselines (upper section) and our full architecture (lower section). The best scores are in bold. We report standard deviation values in blue and 0.9 confidence intervals in orange. Cells filled with - indicate lack of results in the original works for the corresponding datasets.
116
+
117
+ It is important to note that only the datasets in ${\mathcal{D}}_{\mathrm{B}}$ have enough classes to permit a disjoint set of classes for validation. For the first four datasets, existing works instead use a disjoint subset of the training samples as a validation set. We argue that this setting is critically unfit for few-shot learning: since the validation classes are not novel, the validation set is not a good proxy for the actual testing environment. Moreover, the lack of a reliable validation set prevents the usage of early stopping, as there is no way to decide on a good stopping criterion for samples from unseen classes. We nevertheless report the outcomes of this evaluation setting for the sake of comparison.
118
+
119
+ ### 4.2 Baselines
120
+
121
+ We group the considered approaches according to their category. We note, however, that the taxonomy is not strict, and some works may be considered to belong to more than one category.
122
+
123
+ Graph kernels. Starting from graph kernel methods, we consider Weisfeiler-Lehman (WL) [45], Shortest Path (SP) [7] and Graphlet [44]. These well-known methods compute similarity scores between pairs of graphs and can be understood as computing an inner product between graph representations. We refer the reader to [31] for a thorough treatment. In our implementation, an SVM is used as the head classifier for all the methods. More implementation details can be found in appendix B.
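For intuition, a simplified pure-Python sketch of the WL subtree kernel (real implementations compress the refined labels via hashing, and in the setup described above the resulting similarities feed an SVM; this toy version is our own illustration, not the paper's code):

```python
from collections import Counter

def wl_features(adj, labels, iters=2):
    """Multiset of WL-refined node labels accumulated over `iters` rounds:
    each round relabels a node by its label plus the sorted neighbor labels."""
    feats = Counter(labels.values())
    for _ in range(iters):
        labels = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in adj}
        feats.update(labels.values())
    return feats

def wl_kernel(fa, fb):
    """Similarity score: inner product of the two WL feature multisets."""
    return sum(fa[k] * fb[k] for k in fa.keys() & fb.keys())

# Toy unlabeled graphs (constant node labels): a triangle and a 3-node path.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
f_tri = wl_features(triangle, {v: "a" for v in triangle})
f_path = wl_features(path, {v: "a" for v in path})
```

Isomorphic graphs produce identical feature multisets, so their kernel value equals the self-similarity, while structurally different graphs score strictly lower.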
124
+
125
+ Meta learning. Regarding the meta-learning approaches, we consider both vanilla Model-Agnostic Meta-Learning (MAML) [20] and its graph-tailored variant AS-MAML [36]. The former employs a meta-learner trained by optimizing the sum of the losses from a set of downstream tasks, encouraging the learning of features that can be adapted with a small number of optimization steps. The latter builds upon MAML by integrating a reinforcement-learning-based step controller that adaptively decides the number of inner optimization steps.
126
+
127
+ Metric learning. For the metric-based approaches, the considered works are SMF-GIN [29], FAITH [60] and SPNP [32]. In SMF-GIN, a GNN is employed to encode both global (via an attention over different GNN layer encodings) and local (via an attention over different substructure encodings) properties. We point out that they include a ProtoNet-based baseline. However, their implementation does not accurately follow the original one and, differently from us, leverages domain-specific prior knowledge. FAITH proposes to capture correlations among meta-training tasks via a hierarchical task graph to better transfer meta-knowledge to the target task. For each meta-training task, a set of additional tasks is sampled according to its classes to build the hierarchical graph. Subsequently, the knowledge from the embeddings extracted by the hierarchical task graph is aggregated to classify the query graph samples. Finally, SPNP makes use of Neural Processes (NPs) by introducing an encoder capable of constructing stochastic processes considering the graph structure information extracted by a GNN, and a prototypical decoder that provides a metric space where classification is performed.
128
+
129
+ <table><tr><td rowspan="2">Category</td><td rowspan="2">Model</td><td colspan="4">Graph-R52</td><td colspan="4">COIL-DEL</td><td colspan="2">mean</td></tr><tr><td colspan="2">5-shot</td><td colspan="2">10-shot</td><td colspan="2">5-shot</td><td colspan="2">10-shot</td><td>5-shot</td><td>10-shot</td></tr><tr><td rowspan="3">Kernel</td><td>WL</td><td>88.2</td><td>$\pm {10.9}$</td><td>91.4</td><td>±9.1</td><td>56.5</td><td>$\pm {12.7}$</td><td>64.0</td><td>$\pm {12.8}$</td><td>72.4</td><td>77.7</td></tr><tr><td>SP</td><td>84.3</td><td>$\pm {11.3}$</td><td>88.9</td><td>$\pm {9.6}$</td><td>39.6</td><td>$\pm {9.6}$</td><td>45.5</td><td>$\pm {11.3}$</td><td>61.9</td><td>67.2</td></tr><tr><td>Graphlet</td><td>57.4</td><td>$\pm {10.3}$</td><td>58.3</td><td>$\pm {10.1}$</td><td>57.6</td><td>$\pm {12.2}$</td><td>61.3</td><td>$\pm {11.5}$</td><td>57.5</td><td>59.8</td></tr><tr><td rowspan="3">Meta</td><td>MAML</td><td>64.9</td><td>$\pm {13.3}$</td><td>70.1</td><td>$\pm {12.7}$</td><td>76.7</td><td>$\pm {12.6}$</td><td>78.8</td><td>$\pm {11.5}$</td><td>70.8</td><td>74.4</td></tr><tr><td>AS-MAML [36]</td><td>75.3</td><td>$\pm {1.1}$</td><td>78.3</td><td>$\pm {1.1}$</td><td>81.5</td><td>$\pm {1.3}$</td><td>84.7</td><td>$\pm {1.3}$</td><td>78.4</td><td>81.5</td></tr><tr><td>AS-MAML*</td><td>72.3</td><td>$\pm {14.8}$</td><td>72.0</td><td>$\pm {15.5}$</td><td>77.2</td><td>$\pm {11.1}$</td><td>80.1</td><td>$\pm {9.9}$</td><td>74.7</td><td>76.0</td></tr><tr><td rowspan="4">Transfer</td><td>GIN</td><td>67.2</td><td>$\pm {13.9}$</td><td>66.4</td><td>$\pm {13.7}$</td><td>72.3</td><td>$\pm {11.4}$</td><td>74.0</td><td>$\pm {11.3}$</td><td>69.8</td><td>74.4</td></tr><tr><td>GAT</td><td>75.2</td><td>$\pm {12.8}$</td><td>77.5</td><td>$\pm {12.4}$</td><td>79.3</td><td>$\pm {10.3}$</td><td>80.8</td><td>$\pm {9.9}$</td><td>77.2</td><td>79.1</td></tr><tr><td>GCN</td><td>75.1</td><td>$\pm {13.0}$</td><td>74.1</td><td>$\pm {14.5}$</td><td>75.2</td><td>$\pm {11.4}$</td><td>77.1</td><td>$\pm 
{10.8}$</td><td>75.1</td><td>75.6</td></tr><tr><td>GSM*</td><td>70.3</td><td>$\pm {15.7}$</td><td>71.6</td><td>$\pm {14.9}$</td><td>74.9</td><td>$\pm {11.4}$</td><td>79.2</td><td>$\pm {10.3}$</td><td>72.6</td><td>75.4</td></tr><tr><td>Metric</td><td>SPNP [32]</td><td>-</td><td/><td>-</td><td/><td>84.8</td><td>$\pm {1.6}$</td><td>87.3</td><td>$\pm {1.6}$</td><td>-</td><td>-</td></tr><tr><td rowspan="4">Ours</td><td>PN</td><td>73.1</td><td>$\pm {12.1}$</td><td>78.0</td><td>$\pm {10.6}$</td><td>85.5</td><td>±9.8</td><td>87.2</td><td>±9.3</td><td>79.3</td><td>82.6</td></tr><tr><td>PN+TAE</td><td>77.9</td><td>$\pm {11.8}$</td><td>81.3</td><td>$\pm {10.6}$</td><td>86.4</td><td>$\pm {9.6}$</td><td>88.8</td><td>$\pm {8.5}$</td><td>82.1</td><td>85.0</td></tr><tr><td rowspan="2">PN+TAE+MU</td><td rowspan="2">77.9</td><td>$\pm {11.8}$</td><td rowspan="2">81.5</td><td>$\pm {10.4}$</td><td rowspan="2">87.7</td><td>$\pm {9.2}$</td><td rowspan="2">90.5</td><td>$\pm {7.7}$</td><td rowspan="2">82.8</td><td rowspan="2">86.0</td></tr><tr><td>$\pm 3\mathrm{e} - 3$</td><td>$\pm 4\mathrm{e} - 3$</td><td>$\pm 4\mathrm{e} - 3$</td><td>$\pm 3\mathrm{e} - 3$</td></tr></table>
130
+
131
+ Table 2: Macro accuracy scores over different $k$ -shot settings and architectures. The best scores are in bold. We report standard deviation values in blue and 0.9 confidence intervals in orange. Cells filled with - indicate lack of results in the original works for the corresponding datasets.
132
+
133
+ Transfer learning. Finally, transfer learning approaches include GSM [12] and three simple baselines built on top of varying GNN architectures, namely GIN [67], GAT [55] and GCN [30]. The latter follow the standard fine-tuning procedure, i.e. training the embedder backbone over the base classes and fine-tuning the classifier head over the $k$ supports. In GSM, graph prototypes are computed as a first step and then clustered based on their spectral properties to create super-classes. These are then used to generate a super-graph which is employed to separate the novel graphs. The original work, however, does not follow an episodic framework, making the results not directly comparable. For this reason, we also re-implemented it to cast it in the episodic framework. We refer the reader to appendix B for more details.
134
+
135
+ ### 4.3 Experimental details
136
+
137
+ Our graph embedder is composed of two layers of GIN followed by a mean pooling layer, and the dimension of the resulting embeddings is set to 64. Furthermore, both the latent mixup regularizer and the L2 regularizer of the task-adaptive embedding are weighted at 0.1. The framework is trained with a batch size of 32 using the Adam optimizer with a learning rate of 0.0001. We implement our framework with Pytorch Lightning [16] using Pytorch Geometric [18], and WandB [6] to log the experiment results. The specific configurations of all our approaches are reported in appendix B.
138
+
139
+ ## 5 Results
140
+
141
+ We report in this section the results over the two sets of benchmark datasets ${\mathcal{D}}_{A},{\mathcal{D}}_{B}$ . Given the lack of homogeneity in the evaluation settings of previous works, we report both the standard deviation of our results across episodes and the 0.95 confidence interval. Moreover, when possible, we provide our re-implementation of the methods, indicating them with a * in their name.
142
+
143
+ Benchmark ${\mathcal{D}}_{\mathbf{A}}$ . As can be seen in Table 1, there is no one-fits-all approach for the considered datasets. In fact, the best results for each are obtained with approaches belonging to different categories, including graph kernels. However, the proposed approach obtains the best results if we consider the average performance for both $k = 5,{10}$ . In fact, considering previously published works, we obtain an overall margin of $+ {7.3}\% , + {4.9}\%$ accuracy for $k = 5,{10}$ compared to GSM [12], $+ {9.4}\% , + {6.8}\%$ compared to AS-MAML* [36], and $+ {3.9}\% , + {2.2}\%$ with respect to FAITH [60]. However, we again stress the partial inadequacy of these datasets as a realistic evaluation tool, given the lack of a disjoint set of classes for the validation set. Interestingly, our re-implementation GSM* obtains slightly better results than the original over Reddit and Letter-High, a significant improvement over TRIANGLES and a comparable result over ENZYMES. The difference may be attributed to the change in evaluation setting, as the non-episodic framework employed in GSM does not have a fixed number of queries per class, and batches are sampled without episodes.
144
+
145
+ Benchmark ${\mathcal{D}}_{\mathbf{B}}$ . Table 2 shows the results for the two datasets in the benchmark. Most surprisingly, graph kernels exhibit superior performance over R-52, outperforming all the considered deep learning models. It must be noted, however, that R-52 is characterized by a very skewed sample distribution, with few classes accounting for most of the samples. In this regard, deep learning methods may end up overfitting the most frequent classes, while graph kernel methods are less prone to this thanks to their smaller parameter count and stronger inductive bias. Nevertheless, this bias also hinders their adaptivity to different distributions: we can see, in fact, how the same methods perform poorly on COIL-DEL. This can also be observed in the mean results over both sets of datasets, where graph kernels generally perform the worst. Compared to existing works, our approach obtains an average margin of $+ {4.37}\%$ and $+ {4.53}\%$ over AS-MAML [36] and $+ {10.2}\% , + {10.6}\%$ over GSM for $k = 5,{10}$ respectively. Finally, the last three rows of Table 2 show the efficacy of the proposed improvements. Task-adaptive embedding (TAE) provides the most significant gain, yielding an average increment of $+ {2.82}\%$ and $+ {2.42}\%$ for the 5-shot and 10-shot cases, respectively. The proposed online data augmentation technique (MU) then provides an additional boost, especially on COIL-DEL; its addition yields a +0.65% and +1.72% improvement in accuracy for $k = 5,{10}$ . Remarkably, a vanilla Prototypical Network (PN) architecture with the proposed graph embedder is already sufficient to obtain state-of-the-art results.
146
+
147
+ Qualitative analysis. The latent space learned by the graph embedder is the core element of our approach, since it determines the prototypes and the subsequent sample classification. To provide better insight into our method's peculiarities, Figure 4 depicts a T-SNE representation of the learned embeddings for novel classes. Each row represents a different episode, while the columns show the embeddings obtained with our approach and its further refinements. We also highlight the queries (crosses), the supports (circles) and the prototypes (stars). As can be seen, our approach separates samples belonging to novel classes into clearly defined clusters. Already in PN, some classes naturally cluster in different regions of the embedding space. The TAE regularization improves the class separation without significantly changing the disposition of the clusters in the space. Our insight is that the context may let the network reorganize the already seen space without moving far from the representation already obtained. Finally, MU allows better use of previously unexplored regions, as expected from this kind of data augmentation. We observe that our feature recombination helps the network generalize better and anticipate the arrival of novel classes.
148
+
149
+ ## 6 Conclusions
150
+
151
+ Limitations. Employing a graph neural network embedder, the proposed approach may inherit known issues such as the presence of information bottlenecks [52] and over-smoothing [13]. These may be aggravated by the additional aggregation required to compute the prototypes, as the readout function used to obtain a graph-level representation is already an aggregation of the node embeddings. Also, the nearest-neighbour association in the final embedding space assumes that it enjoys a Euclidean metric. While this is an excellent local approximation, we expect it may lead to imprecision. To overcome this, further improvements can be inspired by the Computer Vision community [50].
152
+
153
+ Future works. In future work, we aim to enrich the latent space defined by the architecture, for instance by forcing the class prototypes in each episode to be sampled from a learnable distribution rather than computed directly as the mean of the supports. Moreover, it may be worth introducing an attention layer to have supports (or prototypes) affect each other explicitly, rather than implicitly as now happens through the task embedding module. We also believe data augmentation is a crucial technique for the future of this task: the capacity to meaningfully inflate the small available datasets may result in a significant performance improvement. In this regard, we plan to extensively test the existing graph data augmentation techniques in the few-shot scenario and to build upon MixUp to exploit different mixing strategies, such as non-linear interpolation.
154
+
155
+ ![01963ed9-99db-7614-977d-e8fed6978e5a_8_315_194_1159_918_0.jpg](images/01963ed9-99db-7614-977d-e8fed6978e5a_8_315_194_1159_918_0.jpg)
156
+
157
+ Figure 4: Visualization of latent spaces from the COIL-DEL dataset, through T-SNE dimensionality reduction. Each row is a different episode; the colors represent novel classes, the crosses are the queries, the circles are the supports and the stars are the prototypes. The left column is produced with the base model PN, the middle one with the PN+TAE model, and the right one with the full model PN+TAE+MU. This comparison shows that the TAE and MU regularizations improve the class separation in the latent space, with MU proving essential to obtain accurate latent clusters.
158
+
159
+ Conclusions. In this paper, we tackle the problem of few-shot graph classification, an under-explored problem in the broader machine learning community. We provide a modular and extensible codebase to facilitate practitioners in the field and set a stable ground for fair comparisons. The latter contains re-implementations of the most relevant baselines and state-of-the-art works, allowing us to provide an overview of the possible approaches. Our findings show that, while there is no one-fits-all approach for all the datasets, the overall best results are obtained by a distance metric learning baseline. We then suggest valuable additions to the architecture, adapting a task-adaptive embedding procedure and designing a novel online graph data augmentation technique, and demonstrate their benefits over several datasets. We hope this work encourages a reconsideration of the effectiveness of distance metric learning when dealing with graph-structured data. In fact, we believe metric learning to be remarkably fit for graphs, considering that the latent spaces encoded by graph neural networks are known to capture both topological features and node signals effectively. Most importantly, we hope this work and its artifacts will facilitate practitioners in the field and encourage new ones to approach it.
160
+
161
+ ## References
162
+
163
+ [1] S. Azadi, M. Fisher, V. Kim, Z. Wang, E. Shechtman, and T. Darrell. Multi-content gan for few-shot font style transfer. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7564-7573, Los Alamitos, CA, USA, jun 2018. IEEE Computer Society. 2
164
+
165
+ [2] Jinheon Baek, Dong Bok Lee, and Sung Ju Hwang. Learning to extrapolate knowledge: Trans-ductive few-shot out-of-graph link prediction. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. 3
166
+
167
+ [3] Peter Battaglia, Jessica Blake Chandler Hamrick, Victor Bapst, Alvaro Sanchez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andy Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Jayne Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. Relational inductive biases, deep learning, and graph networks. arXiv, 2018. URL https://arxiv.org/pdf/1806.01261.pdf. 1
168
+
169
+ [4] Christian F Baumgartner, Lisa M Koch, Marc Pollefeys, and Ender Konukoglu. An exploration of 2D and 3D deep learning techniques for cardiac MR image segmentation. In International Workshop on Statistical Atlases and Computational Models of the Heart, pages 111-119. Springer, 2017. 1
170
+
171
+ [5] Sagie Benaim and Lior Wolf. One-shot unsupervised cross domain translation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, page 2108-2118, Red Hook, NY, USA, 2018. Curran Associates Inc. 3
172
+
173
+ [6] Lukas Biewald. Experiment tracking with weights and biases, 2020. URL https://www.wandb.com/. Software available from wandb.com. 7
174
+
175
+ [7] K.M. Borgwardt and H.P. Kriegel. Shortest-path kernels on graphs. In Fifth IEEE International Conference on Data Mining (ICDM'05), 8 pp., 2005. doi: 10.1109/ICDM.2005.132. 6
176
+
177
+ [8] Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: Going beyond euclidean data. IEEE Signal Processing Magazine, 34 (4):18-42, 2017. doi: 10.1109/MSP.2017.2693418. 1
178
+
179
+ [9] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc., 2020. 1
180
+
181
+ [10] Tianle Cai, Shengjie Luo, Keyulu Xu, Di He, Tie-Yan Liu, and Liwei Wang. Graphnorm: A principled approach to accelerating graph neural network training. In 2021 International Conference on Machine Learning, 2021. 3
182
+
183
+ [11] Antje Chang, Lisa Jeske, Sandra Ulbrich, Julia Hofmann, Julia Koblitz, Ida Schomburg, Meina Neumann-Schaal, Dieter Jahn, and Dietmar Schomburg. BRENDA, the ELIXIR core data resource in 2021: new developments and updates. Nucleic Acids Research, 49(D1):D498-D508, 11 2020. ISSN 0305-1048. doi: 10.1093/nar/gkaa1025. URL https://doi.org/10.1093/nar/gkaa1025. 5
184
+
185
+ [12] Jatin Chauhan, Deepak Nathani, and Manohar Kaul. Few-shot learning on graphs via super-classes based on graph spectral measures. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. 3, 5, 6, 7
186
+
187
+ [13] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):3438-3445, Apr. 2020. doi: 10.1609/aaai.v34i04.5747. URL https://ojs.aaai.org/index.php/AAAI/article/view/5747. 8
188
+
189
+ [14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT
190
+
191
+ 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1423. URL https://doi.org/10.18653/v1/n19-1423. 1
192
+
193
+ [15] Kaize Ding, Jianling Wang, Jundong Li, Kai Shu, Chenghao Liu, and Huan Liu. Graph prototypical networks for few-shot learning on attributed networks. In Mathieu d'Aquin, Stefan Dietze, Claudia Hauff, Edward Curry, and Philippe Cudré-Mauroux, editors, CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 295-304. ACM, 2020. 3
194
+
195
+ [16] William Falcon et al. Pytorch lightning. GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning, 3, 2019. 7
196
+
197
+ [17] Li Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594-611, 2006. doi: 10.1109/TPAMI. 2006.79.1
198
+
199
+ [18] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. 7
200
+
201
+ [19] Michael Fink. Object classification from a single example utilizing class relevance metrics. In L. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 17. MIT Press, 2004. URL https://proceedings.neurips.cc/paper/2004/file/ef1e491a766ce3127556063d49bc2f98-Paper.pdf. 1
202
+
203
+ [20] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1126-1135. PMLR, 2017. 2, 3, 6
204
+
205
+ [21] Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, and Shih-Fu Chang. Low-shot learning via covariance-preserving adversarial augmentation networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, page 983-993, Red Hook, NY, USA, 2018. Curran Associates Inc. 2
206
+
207
+ [22] Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, and Yee Whye Teh. Neural processes. CoRR, abs/1807.01622, 2018. URL http: //arxiv.org/abs/1807.01622.3
208
+
209
+ [23] Hongyu Guo and Yongyi Mao. Intrusion-Free graph mixup. 2021. 3
210
+
211
+ [24] Zhichun Guo, Chuxu Zhang, Wenhao Yu, John Herr, Olaf Wiest, Meng Jiang, and Nitesh V Chawla. Few-shot graph learning for molecular property prediction. arXiv preprint arXiv:2102.07916, 2021. 3
212
+
213
+ [25] William L. Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. ArXiv, abs/1709.05584, 2017. 1
214
+
215
+ [26] Xiaotian Han, Zhimeng Jiang, Ninghao Liu, and Xia Hu. G-Mixup: Graph data augmentation for graph classification. 2022. 3
216
+
217
+ [27] Kaveh Hassani. Cross-domain few-shot graph classification. 2022. 1
218
+
219
+ [28] Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay S. Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. 1
220
+
221
+ [29] Shunyu Jiang, Fuli Feng, Weijian Chen, Xiang Li, and Xiangnan He. Structure-enhanced meta-learning for few-shot graph classification. AI Open, 2:160-167, 2021. 3, 6
222
+
223
+ [30] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017. 7
224
+
225
+ [31] Nils M Kriege, Fredrik D Johansson, and Christopher Morris. A survey on graph kernels. Applied Network Science, 5(1):1-42, January 2020. 6
226
+
227
+ [32] Xixun Lin, Zhao Li, Peng Zhang, Luchen Liu, Chuan Zhou, Bin Wang, and Zhihong Tian. Structure-Aware prototypical neural process for Few-Shot graph classification. IEEE Trans Neural Netw Learn Syst, PP, May 2022. 3, 6, 7
228
+
229
+ [33] B. Liu, X. Wang, M. Dixit, R. Kwitt, and N. Vasconcelos. Feature space transfer for data augmentation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9090-9098, Los Alamitos, CA, USA, jun 2018. IEEE Computer Society. 2
230
+
231
+ [34] Zelun Luo, Yuliang Zou, Judy Hoffman, and Li Fei-Fei. Label efficient learning of transferable representations across domains and tasks. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, page 164-176, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. 2
232
+
233
+ [35] Xin Lv, Yuxian Gu, Xu Han, Lei Hou, Juanzi Li, and Zhiyuan Liu. Adapting meta knowledge graph information for multi-hop reasoning over few-shot relations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3376-3381, Hong Kong, China, 2019. Association for Computational Linguistics. 3
234
+
235
+ [36] Ning Ma, Jiajun Bu, Jieyu Yang, Zhen Zhang, Chengwei Yao, Zhi Yu, Sheng Zhou, and Xifeng Yan. Adaptive-step graph meta-learner for few-shot graph classification. In Mathieu d'Aquin, Stefan Dietze, Claudia Hauff, Edward Curry, and Philippe Cudré-Mauroux, editors, CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 1055-1064. ACM, 2020. 1, 3, 5, 6, 7, 8
236
+
237
+ [37] Puneet Mangla, Nupur Kumari, Abhishek Sinha, Mayank Singh, Balaji Krishnamurthy, and Vineeth N Balasubramanian. Charting the right manifold: Manifold mixup for few-shot learning. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 2218-2227, 2020. 3
238
+
239
+ [38] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA, 2013. Curran Associates Inc. 1
240
+
241
+ [39] Boris N. Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. TADAM: task dependent adaptive metric for improved few-shot learning. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 719-729, 2018. 2, 4
242
+
243
+ [40] Joonhyung Park, Hajin Shim, and Eunho Yang. Graph transplant: Node Saliency-Guided graph mixup with local structure preservation. In Proceedings of the First MiniCon Conference, 2022. 3
244
+
245
+ [41] Sachin Ravi and H. Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017. 2
246
+
247
+ [42] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, page 1842-1850. JMLR.org, 2016. 3
248
+
249
+ [43] Jiawei Sheng, Shu Guo, Zhenyu Chen, Juwei Yue, Lihong Wang, Tingwen Liu, and Hongbo Xu. Adaptive Attentional Network for Few-Shot Knowledge Graph Completion. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1681-1691, Online, 2020. Association for Computational Linguistics. 3
250
+
251
+ [44] Nino Shervashidze, SVN Vishwanathan, Tobias Petri, Kurt Mehlhorn, and Karsten Borgwardt. Efficient graphlet kernels for large graph comparison. In David van Dyk and Max Welling, editors, Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics, volume 5 of Proceedings of Machine Learning Research, pages 488-495, Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA, 16-18 Apr 2009. PMLR. URL https://proceedings.mlr.press/v5/shervashidze09a.html. 6
252
+
253
+ [45] Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M. Borgwardt. Weisfeiler-lehman graph kernels. Journal of Machine Learning Research, 12(77):2539-2561, 2011. URL http://jmlr.org/papers/v12/shervashidze11a.html. 6
254
+
255
+ [46] Giannis Siglidis, Giannis Nikolentzos, Stratis Limnios, Christos Giatsidis, Konstantinos Skianis, and Michalis Vazirgiannis. Grakel: A graph kernel library in python. Journal of Machine Learning Research, 21(54):1-5, 2020. 16
256
+
257
+ [47] Dmitriy Smirnov and Justin Solomon. Hodgenet: learning spectral geometry on triangle meshes. ACM Transactions on Graphics (TOG), 40(4):1-11, 2021. 1
258
+
259
+ [48] Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4077-4087, 2017. 2, 3
260
+
261
+ [49] Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, and Jian Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. 1
262
+
263
+ [50] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H.S. Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 8
264
+
265
+ [51] Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. Adversarial graph augmentation to improve graph contrastive learning. NeurIPS, 2021. 1
266
+
267
+ [52] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature, 2021. 8
268
+
269
+ [53] Yao-Hung Hubert Tsai and Ruslan Salakhutdinov. Improving one-shot learning through fusing side information. CoRR, abs/1710.08347, 2017. URL http://arxiv.org/abs/1710.08347. 2
270
+
271
+ [54] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf. 1
272
+
273
+ [55] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJXMpikCZ. Accepted as poster. 7
274
+
275
+ [56] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3630-3638, 2016. 2, 3
276
+
277
+ [57] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3630-3638, 2016. 3
278
+
279
+ [58] Ning Wang, Minnan Luo, Kaize Ding, Lingling Zhang, Jundong Li, and Qinghua Zheng. Graph few-shot learning with attribute matching. In Mathieu d'Aquin, Stefan Dietze, Claudia Hauff, Edward Curry, and Philippe Cudré-Mauroux, editors, CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 1545-1554. ACM, 2020. 3
280
+
281
+ [59] Song Wang, Xiao Huang, Chen Chen, Liang Wu, and Jundong Li. REFORM: Error-Aware Few-Shot Knowledge Graph Completion, page 1979-1988. Association for Computing Machinery, New York, NY, USA, 2021. ISBN 9781450384469. 3
282
+
283
+ [60] Song Wang, Yushun Dong, Xiao Huang, Chen Chen, and Jundong Li. Faith: Few-shot graph classification with hierarchical task graphs. In Lud De Raedt, editor, Proceedings of
284
+
285
+ the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 2284-2290. International Joint Conferences on Artificial Intelligence Organization, 7 2022. doi: 10.24963/ijcai.2022/317. URL https://doi.org/10.24963/ijcai.2022/317. Main Track. 3, 6, 8
286
+
287
+ [61] Yaqing Wang, Quanming Yao, James T. Kwok, and Lionel M. Ni. Generalizing from a few examples: A survey on few-shot learning. ACM Comput. Surv., 53(3), 2020. ISSN 0360-0300. 2,3
288
+
289
+ [62] Yaqing Wang, Abulikemu Abuduweili, Quanming Yao, and Dejing Dou. Property-aware relation networks for few-shot molecular property prediction. In Advances in Neural Information Processing Systems, 2021. 3
290
+
291
+ [63] Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, and Bryan Hooi. GraphCrop: Subgraph cropping for graph classification. 2020. 3
292
+
293
+ [64] Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, and Bryan Hooi. Mixup for node and graph classification. In Proceedings of the Web Conference 2021, WWW '21, page 3663-3674, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383127. 3
294
+
295
+ [65] Yu Wu, Yutian Lin, Xuanyi Dong, Yan Yan, Wanli Ouyang, and Yi Yang. Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5177-5186, 2018. doi: 10.1109/CVPR.2018.00543. 2
296
+
297
+ [66] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 5449-5458. PMLR, 2018. 3
298
+
299
+ [67] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. 3, 7
300
+
301
+ [68] Huaxiu Yao, Chuxu Zhang, Ying Wei, Meng Jiang, Suhang Wang, Junzhou Huang, Nitesh V. Chawla, and Zhenhui Li. Graph few-shot learning via knowledge transfer. CoRR, abs/1910.03053, 2019. 3
302
+
303
+ [69] Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian model-agnostic meta-learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/e1021d43911ca2c1845910d84f40aeae-Paper.pdf. 2
304
+
305
+ [70] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017. 2, 3
306
+
307
+ [71] Jiaying Zhang, Xiaoli Zhao, Zheng Chen, and Zhejun Lu. A review of deep learning-based semantic segmentation for point cloud. IEEE Access, 7:179118-179133, 2019. 1
308
+
309
+ [72] Shengzhong Zhang, Ziang Zhou, Zengfeng Huang, and Zhongyu Wei. Few-shot classification on graphs with structural regularized GCNs, 2019. 3
310
+
311
+ [73] Fan Zhou, Chengtai Cao, Kunpeng Zhang, Goce Trajcevski, Ting Zhong, and Ji Geng. Meta-gnn: On few-shot node classification in graph meta-learning. In Wenwu Zhu, Dacheng Tao, Xueqi Cheng, Peng Cui, Elke A. Rundensteiner, David Carmel, Qi He, and Jeffrey Xu Yu, editors, Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, November 3-7, 2019, pages 2357-2360. ACM, 2019. 3
312
+
313
+ ## A Data statistics
314
+
315
+ We report in Table 3 general statistics of the datasets considered in this work.
316
+
317
+ <table><tr><td/><td>Dataset</td><td>avg # nodes</td><td>avg # edges</td><td># samples</td><td># samples / class</td><td># classes</td><td># base</td><td># val</td><td># novel</td></tr><tr><td rowspan="2">${\mathcal{D}}_{\mathrm{B}}$</td><td>COIL-DEL</td><td>21.54</td><td>54.24</td><td>3900</td><td>39</td><td>96</td><td>60</td><td>16</td><td>20</td></tr><tr><td>Graph-R52</td><td>30.92</td><td>165.78</td><td>8214</td><td>unbalanced</td><td>28</td><td>18</td><td>5</td><td>5</td></tr><tr><td rowspan="4">${\mathcal{D}}_{\mathrm{A}}$</td><td>TRIANGLES</td><td>20.85</td><td>35.5</td><td>2010</td><td>201</td><td>10</td><td>7</td><td>0</td><td>3</td></tr><tr><td>ENZYMES</td><td>32.63</td><td>62.14</td><td>600</td><td>100</td><td>6</td><td>4</td><td>0</td><td>2</td></tr><tr><td>Letter_high</td><td>4.67</td><td>4.5</td><td>2250</td><td>150</td><td>15</td><td>11</td><td>0</td><td>4</td></tr><tr><td>Reddit-12K</td><td>391.41</td><td>456.89</td><td>1111</td><td>101</td><td>11</td><td>7</td><td>0</td><td>4</td></tr></table>
318
+
319
+ Table 3: Statistics of all the considered datasets. These are grouped according to whether they encompass a disjoint set of classes to be used for validation. Graph-R52 is the only one with a skewed distribution of samples over its classes.
320
+
321
+ ## B Additional details
322
+
323
+ ### B.1 Evaluation setting
324
+
325
+ The models are trained in an episodic framework by considering $N$-way $K$-shot episodes with the same $N$ and $K$ used for the novel classes at test time. For each dataset we use the same $N$ and $K$ proposed by the work in which it was introduced. In particular, $K = 5, 10$ for all the datasets, while the number of classes $N$ is reported in Table 4. The model used for evaluation is selected via early stopping on the validation set. For datasets in ${\mathcal{D}}_{A}$, the validation set is a random $20\%$ subset of the base samples; for datasets in ${\mathcal{D}}_{B}$, it consists of samples from a disjoint set of novel classes, different from those used for testing.
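As a concrete illustration of the episodic setup described above, the following sketch samples one $N$-way $K$-shot episode with $Q$ queries per class. It is a minimal, self-contained example: the function name `sample_episode` and the dictionary-based dataset layout are illustrative assumptions, not the actual codebase API.

```python
import random

def sample_episode(dataset, n_way, k_shot, n_query, seed=None):
    """Sample one N-way K-shot episode from a {class_label: [graph, ...]} dict.

    Returns (support, query) lists of (graph, label) pairs, with k_shot
    supports and n_query queries for each of the n_way sampled classes.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)          # pick N novel classes
    support, query = [], []
    for label in classes:
        drawn = rng.sample(dataset[label], k_shot + n_query)
        support += [(g, label) for g in drawn[:k_shot]]   # K shots per class
        query += [(g, label) for g in drawn[k_shot:]]     # Q queries per class
    return support, query

# Toy dataset: 10 classes with 40 placeholder "graphs" each.
toy = {c: [f"g{c}_{i}" for i in range(40)] for c in range(10)}
sup, qry = sample_episode(toy, n_way=3, k_shot=5, n_query=15, seed=0)
```

With $N=3$, $K=5$ and $Q=15$ as above, the episode contains $N \cdot K = 15$ supports and $N \cdot Q = 45$ queries, and support and query sets are disjoint by construction.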
326
+
327
+ <table><tr><td/><td>$N$</td><td>Train (base classes)</td><td>Validation</td><td>Test (novel classes)</td></tr><tr><td>Graph-R52</td><td>2</td><td>{3, 4, 6, 7, 8, 9, 10, 12, 15, 18, 19, 21, 22, 23, 24, 25, 26, 27}</td><td>{2, 5, 11, 13, 14}</td><td>{0, 1, 16, 17, 20}</td></tr><tr><td>COIL-DEL</td><td>5</td><td>{0, 1, ..., 63}</td><td>{64, 65, ..., 79}</td><td>{80, 81, ..., 99}</td></tr><tr><td>ENZYMES</td><td>2</td><td>{1, 3, 5, 6}</td><td>*</td><td>{2, 4}</td></tr><tr><td>Letter-High</td><td>4</td><td>{1, 9, 10, 2, 0, 3, 14, 5, 12, 13, 7}</td><td>*</td><td>{4, 6, 11, 8}</td></tr><tr><td>Reddit</td><td>4</td><td>{1, 3, 5, 6, 7, 9, 11}</td><td>*</td><td>{2, 4, 8, 10}</td></tr><tr><td>TRIANGLES</td><td>3</td><td>{1, 3, 4, 6, 7, 8, 9}</td><td>*</td><td>{2, 5, 10}</td></tr></table>
328
+
329
+ Table 4: Split between base and novel classes for each dataset, chosen to be the same as the competitors. Datasets marked with a (*) do not have a disjoint set of classes for validation, so the validation set is a disjoint subsample of samples from the base classes.
330
+
331
+ Each epoch contains 2000, 500, and 1 episodes for training, validation, and testing, respectively. Finally, the number of queries $Q$ is set to 15 per class for each dataset, so each episode has $N \cdot Q$ queries in total. The number of episodes in a batch is set to 32 for all the datasets except Reddit, for which it is set to 8.
332
+
333
+ We follow the same base-novel splits used by GSM and AS-MAML, shown in Table 4. The model configurations are described in Table 5. Hyperparameter values for TRIANGLES and Letter-High were found via Bayesian parameter search, while those for Graph-R52, COIL-DEL, ENZYMES and Reddit were set to the same manually chosen values, after observing only a small benefit from the searched parameters. For the evaluation, we randomly sample 5000 episodes containing support and query samples from the novel classes, and compute the accuracy over the query samples.
334
+
335
+ In GSM, the reported standard deviation is computed over different runs of the same pretrained model with different support and query sets. Since GSM employs an episodic framework neither for training nor for evaluation, its setting is not directly comparable to ours, which led us to re-implement it. We used the same hyperparameters employed in the original manuscript for the datasets in ${\mathcal{D}}_{A}$. For the datasets in ${\mathcal{D}}_{B}$, on which the original model had never been evaluated, we chose the number of superclasses to match the larger number of classes in these datasets, using a value of 4 for Graph-R52 and 10 for COIL-DEL. Furthermore, for the transfer learning baselines we use the same setting as our re-implementation of GSM, but repeat the fine-tuning phase on the supports 10 times. For the graph kernel methods, we use the Grakel library [46]. An SVM is used as the classifier for all three approaches, with the kernel set to "precomputed", since the graph kernel methods pass it the similarity matrix. We employ the default parameters for all graph kernels on all datasets, except for Graphlet on Graph-R52 and Reddit, where we use a graphlet size of 3 instead of the default 5, as the computational cost was otherwise infeasible due to the size of the graphs.
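To clarify the "precomputed" kernel setting: the classifier never sees the graphs themselves, only their pairwise kernel similarities (the Gram matrix produced by Grakel). The dependency-free sketch below illustrates that input contract with a simple 1-nearest-neighbour rule standing in for the SVM; `predict_from_gram` is a hypothetical helper, not part of Grakel or scikit-learn.

```python
def predict_from_gram(test_train_sim, train_labels):
    """Classify test graphs from their kernel similarities to the training set.

    test_train_sim[i][j] is the graph-kernel similarity between test graph i
    and training graph j. Each test graph is labeled like its most similar
    training graph (1-NN stand-in for the SVM used in the paper).
    """
    preds = []
    for row in test_train_sim:
        best = max(range(len(row)), key=row.__getitem__)  # most similar train graph
        preds.append(train_labels[best])
    return preds

# Toy Gram matrix: 2 test graphs vs. 3 training graphs.
sim = [[0.9, 0.1, 0.2],
       [0.0, 0.3, 0.8]]
labels = ["A", "B", "C"]
print(predict_from_gram(sim, labels))  # -> ['A', 'C']
```

An SVM consumes the same matrix, but learns a weighted combination of training similarities instead of taking the single maximum.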
336
+
337
+ <table><tr><td rowspan="2"/><td colspan="4">${D}_{\mathrm{A}}$</td><td colspan="2">${\mathcal{D}}_{\mathrm{B}}$</td></tr><tr><td>ENZYMES</td><td>Letter-High</td><td>Reddit</td><td>TRIANGLES</td><td>COIL-DEL</td><td>Graph-R52</td></tr><tr><td>LR</td><td>1e-4</td><td>1e-2</td><td>1e-4</td><td>1e-3</td><td>1e-4</td><td>1e-4</td></tr><tr><td>Scaling factor</td><td>7.5</td><td>90.0</td><td>7.5</td><td>7.5</td><td>7.5</td><td>7.5</td></tr><tr><td>${\gamma }_{0}$ init.</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td></tr><tr><td>${\beta }_{0}$ init.</td><td>1.0</td><td>5.0</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.0</td></tr><tr><td>${\lambda }_{\text{mixup }}$</td><td>0.1</td><td>0.1</td><td>0.1</td><td>0.6</td><td>0.1</td><td>0.1</td></tr><tr><td>${\lambda }_{\text{reg }}$</td><td>0.1</td><td>0.3</td><td>0.1</td><td>0.8</td><td>0.1</td><td>0.1</td></tr><tr><td>Global Pooling</td><td>mean</td><td>sum</td><td>mean</td><td>mean</td><td>mean</td><td>mean</td></tr><tr><td>Embedding dim.</td><td>64</td><td>32</td><td>64</td><td>64</td><td>64</td><td>64</td></tr><tr><td>#convs</td><td>2</td><td>3</td><td>2</td><td>2</td><td>2</td><td>2</td></tr><tr><td>Dropout</td><td>0.0</td><td>0.7</td><td>0.0</td><td>0.5</td><td>0.0</td><td>0.0</td></tr><tr><td>#GIN MLP layers</td><td>2</td><td>2</td><td>2</td><td>1</td><td>2</td><td>2</td></tr></table>
338
+
339
+ Table 5: Model hyperparameters for the various datasets.
340
+
341
+ Finally, since AS-MAML reports the 0.95 confidence interval, we also re-implement this work using the same hyperparameters as the original, allowing us to retrieve results on the remaining datasets.
342
+
343
+ ### B.2 Efficiency analysis
344
+
345
+ Table 6 reports the training time and number of episodes of our approach on each dataset. Table 7 shows how the model compares with the other considered models in training and inference time on Graph-R52.
346
+
347
+ <table><tr><td rowspan="3"/><td colspan="8">${\mathcal{D}}_{\mathrm{A}}$</td><td colspan="4">${\mathcal{D}}_{\mathrm{B}}$</td></tr><tr><td colspan="2">ENZYMES</td><td colspan="2">Letter-High</td><td colspan="2">Reddit</td><td colspan="2">TRIANGLES</td><td colspan="2">COIL-DEL</td><td colspan="2">Graph-R52</td></tr><tr><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td></tr><tr><td>Time (seconds)</td><td>1058</td><td>817</td><td>8493</td><td>3698</td><td>1846</td><td>2156</td><td>1600</td><td>1252</td><td>4269</td><td>5948</td><td>1449</td><td>1388</td></tr><tr><td>Episodes</td><td>192</td><td>192</td><td>8320</td><td>1792</td><td>128</td><td>64</td><td>4608</td><td>3072</td><td>1856</td><td>4544</td><td>1920</td><td>1536</td></tr></table>
348
+
349
+ Table 6: Training time in seconds and number of episodes on the various datasets with varying number of shots $k$. These include the whole training time with early stopping enabled. All computation was carried out on an NVIDIA 2080Ti GPU with an Intel(R) Core(TM) i7-9700K CPU.
350
+
351
+ ## C Qualitative Analysis
352
+
353
+ More insight into the learned latent space is provided in Figures 5 to 7. Figure 5 shows the latent space of different episodes from the Graph-R52 dataset for the three presented models. It is worth noting that, on Graph-R52, the PN+TAE model creates better clusters than the PN model, and these are slightly improved by the addition of MU. Nevertheless, the benefits of adding MU are not as clearly visible as they are for COIL-DEL, which is also reflected in the less prominent gain in accuracy. In Figure 6 we present the latent space of a novel episode for the datasets belonging to ${\mathcal{D}}_{\mathrm{A}}$, namely ENZYMES, Letter-High, Reddit and TRIANGLES. We compare the T-SNE obtained by our full model with the one obtained by ${\mathrm{{GSM}}}^{ \star }$ (our re-implementation of GSM). As can be seen, our model is more successful at separating samples into clusters than ${\mathrm{{GSM}}}^{ \star }$. Finally, in Figure 7 we show the latent space of a novel episode for the datasets belonging to ${\mathcal{D}}_{\mathrm{B}}$. As before, the T-SNE plots demonstrate the better separation ability of our full model compared to ${\mathrm{{GSM}}}^{ \star }$ on these datasets as well.
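The prototypes shown as stars in these plots are simply the class means of the support embeddings, and each query is assigned to its nearest prototype. A minimal sketch with toy 2-D embeddings (in the paper, embeddings come from a GNN; names here are illustrative):

```python
def prototypes(support):
    """Compute one prototype (mean embedding) per class from (embedding, label) pairs."""
    sums, counts = {}, {}
    for emb, label in support:
        acc = sums.setdefault(label, [0.0] * len(emb))
        for i, v in enumerate(emb):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {l: [v / counts[l] for v in s] for l, s in sums.items()}

def classify(query_emb, protos):
    """Assign a query embedding to the class of the nearest (Euclidean) prototype."""
    def sq_dist(proto):
        return sum((a - b) ** 2 for a, b in zip(query_emb, proto))
    return min(protos, key=lambda l: sq_dist(protos[l]))

# Two classes, two support embeddings each (toy 2-D latent space).
support = [([0.0, 0.0], "x"), ([0.2, 0.0], "x"),
           ([1.0, 1.0], "y"), ([1.2, 1.0], "y")]
protos = prototypes(support)
print(classify([0.1, 0.1], protos))  # -> x
```

This nearest-prototype rule is why tighter, better-separated clusters in the T-SNE plots translate directly into higher query accuracy.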
354
+
355
+ ![01963ed9-99db-7614-977d-e8fed6978e5a_17_297_223_1170_458_0.jpg](images/01963ed9-99db-7614-977d-e8fed6978e5a_17_297_223_1170_458_0.jpg)
356
+
357
+ Figure 6: T-SNE visualization of a novel episode’s latent space from the datasets belonging to ${\mathcal{D}}_{\mathrm{A}}$ . The first row shows the T-SNE produced with our full model (PN+TAE+MU), while the second one shows the plots produced with ${\mathrm{{GSM}}}^{ \star }$ . In each plot, the colors represent novel classes, the crosses are the queries and the circles are the supports. In addition, since our model works with prototypes, these are represented by the stars only in the plots of the first row.
358
+
359
+ ![01963ed9-99db-7614-977d-e8fed6978e5a_17_304_971_1147_880_0.jpg](images/01963ed9-99db-7614-977d-e8fed6978e5a_17_304_971_1147_880_0.jpg)
360
+
361
+ Figure 7: T-SNE visualization of a novel episode’s latent space from the datasets belonging to ${\mathcal{D}}_{\mathrm{B}}$ . The first row shows the T-SNE produced with our full model (PN+TAE+MU), while the second one shows the plots produced with ${\mathrm{{GSM}}}^{ \star }$ . In each plot, the colors represent novel classes, the crosses are the queries and the circles are the supports. In addition, since our model works with prototypes, these are represented by the stars only in the plots of the first row.
362
+
363
+ <table><tr><td rowspan="2"/><td colspan="2">GSM*</td><td colspan="2">MAML</td><td colspan="2">PN</td><td colspan="2">PN+TAE</td><td colspan="2">$\mathrm{{PN}} + \mathrm{{TAE}} + \mathrm{{MU}}$</td></tr><tr><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td><td>5-shot</td><td>10-shot</td></tr><tr><td>Training time</td><td>$0 : {50} : {03}$</td><td>0:56:03</td><td>0:32:57</td><td>0:32:28</td><td>0:12:07</td><td>0:19:11</td><td>0:16:21</td><td>0:25:15</td><td>0:24:09</td><td>$0 : {23} : {08}$</td></tr><tr><td>Inference time</td><td>2.82s</td><td>3.18s</td><td>0.05s</td><td>0.05s</td><td>0.05s</td><td>0.07s</td><td>0.05s</td><td>0.06s</td><td>0.06s</td><td>0.06s</td></tr></table>
364
+
365
+ Table 7: Training and inference times of the considered models.
366
+
367
+ ![01963ed9-99db-7614-977d-e8fed6978e5a_16_309_192_1166_922_0.jpg](images/01963ed9-99db-7614-977d-e8fed6978e5a_16_309_192_1166_922_0.jpg)
368
+
369
+ Figure 5: Visualization of novel episodes' latent spaces from the Graph-R52 dataset, through T-SNE dimensionality reduction. Each row is a different episode, the colors represent novel classes, the crosses are the queries, the circles are the supports and the stars are the prototypes. The left column is produced with the base model PN, the middle one with the PN+TAE model, the right one with the full model PN+TAE+MU. This comparison shows that the TAE and MU regularizations improve the class separation in the latent space, although less remarkably than in COIL-DEL.
370
+
papers/LOG/LOG 2022/LOG 2022 Conference/VBXRMnRBfRF/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,266 @@
1
+ § METRIC BASED FEW-SHOT GRAPH CLASSIFICATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Few-shot graph classification is a promising emerging research field that still lacks the soundness of well-established research domains. Existing works often consider different benchmarks and evaluation settings, hindering comparison and, therefore, scientific progress. In this work, we start by providing an extensive overview of the possible approaches to solving the task, comparing the current state-of-the-art and baselines via a unified evaluation framework. Our findings show that while graph-tailored approaches have a clear edge on some distributions, easily adapted few-shot learning methods generally perform better. In fact, we show that it is sufficient to equip a simple metric learning baseline with a state-of-the-art graph embedder to obtain the best overall results. We then show that straightforward additions at the latent level lead to substantial improvements by introducing i) a task-conditioned embedding space and ii) a MixUp-based data augmentation technique. Finally, we release a highly reusable codebase to foster research in the field, offering modular and extensible implementations of all the relevant techniques.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Graphs have ruled digital representations since the dawn of computer science. Their structure is simple and general, and their structural properties are well studied. Given the success of deep learning in different domains that enjoy a regular structure, such as those found in computer vision $\left\lbrack {4,{47},{71}}\right\rbrack$ and natural language processing $\left\lbrack {9,{14},{38},{54}}\right\rbrack$ , a recent line of research has sought to extend it to manifolds and graph-structured data $\left\lbrack {3,8,{25}}\right\rbrack$ . Nevertheless, the expressivity brought by deep learning comes at a cost: deep models require vast amounts of data to search the complex hypothesis spaces they define. When data is scarce, these models end up overfitting the training set, hindering their generalization capability on unseen samples. While annotations are usually abundant in computer vision and natural language processing, they are harder to obtain for graph-structured data due to the impossibility or expensiveness of the annotation process $\left\lbrack {{28},{49},{51}}\right\rbrack$ . This is particularly true when the samples come from specialized domains such as biology, chemistry and medicine [27], where graph-structured data are ubiquitous. A telling example is drug testing, which requires expensive in-vivo testing and laborious wet-lab experiments to label drug and protein graphs [36].
16
+
17
+ To address this problem, the field of Few-Shot Learning (FSL) [17, 19] aims at designing models which can effectively operate in scarce data scenarios. While this well-established research area enjoys a plethora of mature techniques, robust benchmarks and libraries, its intersection with graph representation learning is still at an embryonic stage. As such, the field suffers from a lack of uniformity: existing works often consider different benchmarks and evaluation settings, with no two works considering the same set of datasets or evaluation hyperparameters. This scenario results in a fragmented understanding, hindering comparison and, therefore, scientific progress in the field. In an attempt to mitigate this issue and facilitate new research, we provide a modular and easily extensible codebase with re-implementations of the most relevant baselines and state-of-the-art works. The latter allows both for straightforward use by practitioners and for a fair comparison of the techniques in a unified evaluation setting. Our findings show that kernel methods achieve impressive results on particular distributions but are too rigid to be used as an overall solution. On the other hand, few-shot learning techniques can be easily adapted to the graph setting by employing a graph neural network as encoder. Contrary to existing works, we argue that the latter is sufficient to capture the complexity of the structure, relieving the rest of the pipeline of that burden: once in the latent space, standard techniques behave as expected, and no further tailoring to the graph domain is needed.
18
+
19
+ [graphics]
20
+
21
+ Figure 1: An $N$ -way $K$ -shot episode. In this example, there are $N = 3$ classes. Each class has $K = 4$ supports, yielding a support set of size $N \cdot K = {12}$ . The class information provided by the supports is exploited to classify the queries. We test the classification accuracy on all $N$ classes. In the figure there are $Q = 2$ queries for each class, thus the query set has size $N \cdot Q = 6$ .
22
+
23
+ In this direction, we show that a simple Prototypical Networks [48] architecture outperforms existing works when equipped with a state-of-the-art graph embedder. As typical in few-shot learning, we frame tasks as episodes, where an episode is defined by a set of classes and several supervised samples (supports) for each of them [56]. Such an episode is depicted in Figure 1. This setting favors a straightforward addition to the architecture: in fact, while a standard Prototypical Network would embed the samples in the same way independently of the episode, we can take inspiration from [39] and empower the graph embeddings by conditioning them on the particular set of classes seen in the episode. This way, the intermediate features and the final embeddings may be modulated according to what is best for the current episode. Finally, we propose to augment the training dataset using a MixUp-based [70] online data augmentation technique. The latter creates artificial samples from two existing ones as a mix-up of their latent representations, probing unexplored regions of the latent space that can accommodate samples from unseen classes. We finally show that these additions are beneficial for the task both qualitatively and quantitatively.
24
+
25
+ Summarizing, our contribution is four-fold:
26
+
27
+ 1. We provide an extensive overview of the possible approaches to solve the task, comparing all the existing works and baselines in a unified evaluation framework;
28
+
29
+ 2. We release a highly reusable codebase to foster research in the field, offering modular and extensible implementations of all the relevant techniques;
30
+
31
+ 3. We show that it is enough to equip existing few-shot pipelines with graph encoders to obtain competitive results, proposing in particular a metric learning baseline for the task;
32
+
33
+ 4. We equip the latter with two supplementary modules: an episode-adaptive embedder and a novel online data augmentation technique, proving their benefits qualitatively and quantitatively.
34
+
35
+ § 2 RELATED WORK
36
+
37
+ Few-Shot Learning. Data-scarce tasks are usually tackled by using one of the following paradigms: i) transfer learning techniques $\left\lbrack {1,{33},{34}}\right\rbrack$ that aim at transferring the knowledge gained from a data-abundant task to a task with scarce data; ii) meta-learning [20, 41, 69] techniques that more generally introduce a meta-learning procedure to gradually learn meta-knowledge that generalizes across several tasks; iii) data augmentation works [21, 53, 65] that seek to augment the data applying transformations on the available samples to generate new ones preserving specific properties. We refer the reader to [61] for an extensive treatment of the matter. Particularly relevant to our work are distance metric learning approaches: in this direction, [56] suggest embedding both supports and queries and then labeling the query with the label of its nearest neighbor in the embedding space. By obtaining a class distribution for the query using a softmax over the distances from the supports, they then learn the embedding space by minimizing the negative log-likelihood. [48] generalize this intuition by allowing the $K$ supports per class to be aggregated to form prototypes. Given its effectiveness and simplicity, we chose this approach as the starting point for our architecture.
38
+
39
+ Graph Data Augmentation. Data augmentation follows the idea that in the working domain, there exist transformations that can be applied to samples to generate new ones in a controlled way (e.g., preserving the sample class in a classification setting while changing its content). Therefore, synthetic samples can meet the needs of large neural networks that require training with high volumes of data [61]. In Euclidean domains (e.g., images), this can often be achieved by simple rotations and translations [5, 42]. Unfortunately, in the graph domain, it is challenging to define such transformations on a given graph sample while keeping control of its properties. To this end, a line of works takes inspiration from Mix-Up [37, 70] to create new artificial samples as a combination of two existing ones: $\left\lbrack {{23},{26},{40},{63}}\right\rbrack$ propose to augment graph data directly in the data space, while [64] interpolates latent representations to create novel ones. We also operate in the latent space, but differently from [64], we suggest creating a new sample by selecting only certain features of one representation and the remaining ones from the other by employing a random gating vector. This allows for obtaining synthetic samples as random compositions of the features of the existing samples rather than a linear interpolation of them. We also argue that the proposed Mix-Up is tailored for distance metric learning, making full use of the similarity among samples and class prototypes.
40
+
41
+ Few-Shot Graph Representation Learning. Few-shot graph representation learning is concerned with applying graph representation learning techniques in scarce data scenarios. Similarly to standard graph representation learning, it tackles tasks at different levels of granularity: node-level [15, 58, 68, 72, 73], edge-level [2, 35, 43, 59] and graph-level [12, 24, 29, 32, 36, 60, 62]. Concerning the latter, GSM [12] proposes a hierarchical approach, AS-MAML adapts the well known MAML [20] architecture to the graph setting and SMF-GIN [29] uses a Prototypical Network (PN) variant with domain-specific priors. Differently from the latter, we employ a more faithful formulation of PN that shows far superior performance. Most recently, FAITH [60] proposes to capture episode correlations with an inter-episode hierarchical graph and SP-NP [32] suggests employing neural processes [22] for the task.
42
+
43
+ § 3 APPROACH
44
+
45
+ Setting and Notation. In few-shot graph classification each sample is a tuple $\left( {\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right) ,y}\right)$ where $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ is a graph with node set $\mathcal{V}$ and edge set $\mathcal{E}$ , while $y$ is a graph-level class. Given a set of data-abundant base classes ${C}_{\mathrm{b}}$ , we aim to classify a set of data-scarce novel classes ${C}_{\mathrm{n}}$ . We cast this problem through an episodic framework [57]: during training, we mimic the few-shot setting by dividing the base training data into episodes. Each episode $e$ is an $N$ -way $K$ -shot classification task, with its own train $\left( {D}_{\text{ train }}\right)$ and test $\left( {D}_{\text{ test }}\right)$ data. For each of the $N$ classes, ${D}_{\text{ train }}$ contains $K$ corresponding support graphs, while ${D}_{\text{ test }}$ contains $Q$ query graphs. A schematic visualization of an episode is depicted in Figure 1.
46
+
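The episodic protocol above can be made concrete with a small sampler. The following sketch is illustrative only: the function name and the flat `(graph, label)` dataset layout are our assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=3, k_shot=4, n_query=2, rng=random):
    """Sample one N-way K-shot episode from a list of (graph, label) pairs.

    Graphs are opaque objects here; class labels are remapped to 0..N-1
    within the episode, as is standard in episodic training.
    """
    by_class = defaultdict(list)
    for graph, label in dataset:
        by_class[label].append(graph)
    # Only classes with enough samples for K supports + Q queries qualify.
    eligible = [c for c, graphs in by_class.items()
                if len(graphs) >= k_shot + n_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        drawn = rng.sample(by_class[c], k_shot + n_query)
        support += [(g, episode_label) for g in drawn[:k_shot]]
        query += [(g, episode_label) for g in drawn[k_shot:]]
    return support, query
```

With the defaults above, one call yields the episode of Figure 1: a support set of size $N \cdot K = 12$ and a query set of size $N \cdot Q = 6$.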
47
+ Prototypical Network (PN) Architecture. We build our network upon the simple-yet-effective idea of Prototypical Networks [48], originally proposed for few-shot image classification. We employ a state-of-the-art Graph Neural Network as node embedder, composed of a set of layers of GIN convolutions [67], each equipped with a MLP regularized with GraphNorm [10]. In practice, each sample is first passed through a set of convolutions, obtaining a hidden representation ${h}^{\left( l\right) }$ for each layer. According to [67], the latter is obtained by updating at each layer its hidden representation as
48
+
49
+ $$
50
+ {\mathbf{h}}_{v}^{\left( l\right) } = {\operatorname{MLP}}^{\left( l\right) }\left( {\left( {1 + {\epsilon }^{\left( l\right) }}\right) \cdot {\mathbf{h}}_{v}^{\left( l - 1\right) } + \mathop{\sum }\limits_{{u \in \mathcal{N}\left( v\right) }}{\mathbf{h}}_{u}^{\left( l - 1\right) }}\right) \tag{1}
51
+ $$
52
+
53
+ where ${\epsilon }^{\left( l\right) }$ is a learnable parameter. Following [66], the final node $d$ -dimensional embedding ${\mathbf{h}}_{v} \in {\mathbb{R}}^{d}$ is then given by the concatenation of the outputs of all the layers. The graph-level embedding is then obtained by employing a global pooling function, such as mean or sum. While the sum is a more expressive pooling function for GNNs [67], we observed the mean to behave better for the task on most considered datasets; it will therefore be the aggregation function of choice when not specified otherwise. The $K$ embedded supports ${\mathbf{s}}_{1}^{\left( n\right) },\ldots ,{\mathbf{s}}_{K}^{\left( n\right) }$ for each class $n$ are then aggregated to form the class prototypes ${\mathbf{p}}^{\left( n\right) }$ ,
54
+
55
+ $$
56
+ {\mathbf{p}}^{\left( n\right) } = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{\mathbf{s}}_{k}^{\left( n\right) } \tag{2}
57
+ $$
58
+
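A minimal NumPy sketch of one GIN convolution (Equation 1) and the concatenate-then-pool readout described above; the identity "MLP" and the toy triangle graph are illustrative stand-ins for the learned components.

```python
import numpy as np

def gin_layer(h, adj, mlp, eps=0.0):
    """One GIN convolution (Equation 1):
    h_v' = MLP((1 + eps) * h_v + sum of neighbour features)."""
    return mlp((1.0 + eps) * h + adj @ h)  # adj @ h sums the neighbour rows

def graph_embedding(h_layers, pool="mean"):
    """Readout: concatenate every layer's node features, then pool over
    nodes (mean by default, as in the paper) into one graph-level vector."""
    h = np.concatenate(h_layers, axis=1)   # (n_nodes, sum of layer widths)
    return h.mean(axis=0) if pool == "mean" else h.sum(axis=0)

# Triangle graph, one-hot node features, identity "MLP" for illustration.
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
h0 = np.eye(3)
h1 = gin_layer(h0, adj, mlp=lambda x: x)   # equals h0 + adj here (eps = 0)
g = graph_embedding([h0, h1])              # 6-dimensional graph vector
```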
59
+ [graphics]
60
+
61
+ Figure 2: Prototypical Networks architecture. A graph encoder embeds the support graphs; the embeddings belonging to the same class are averaged to obtain the class prototype $p$ . To classify a query graph $q$ , it is embedded in the same space as the supports. The distances in the latent space between the query and the prototypes determine the similarities and thus the probability distribution of the query among the different classes, computed as in Equation (3).
62
+
63
+ In the same way, the $Q$ query graphs for each class $n$ are embedded to obtain ${\mathbf{q}}_{1}^{\left( n\right) },\ldots ,{\mathbf{q}}_{Q}^{\left( n\right) }$ . To compare each query graph embedding $\mathbf{q}$ with the class prototypes ${\mathbf{p}}_{1},\ldots ,{\mathbf{p}}_{N}$ , we use an ${\mathcal{L}}_{2}$ -metric scaled by a learnable temperature factor $\alpha$ as suggested in [39]. We refer to this metric as ${d}_{\alpha }$ . The class probability distribution $\rho$ for the query is finally computed by taking the softmax over these distances
66
+
67
+ $$
68
+ {\mathbf{\rho }}_{n} = \frac{\exp \left( {-{d}_{\alpha }\left( {\mathbf{q},{\mathbf{p}}_{n}}\right) }\right) }{\mathop{\sum }\limits_{{{n}^{\prime } = 1}}^{N}\exp \left( {-{d}_{\alpha }\left( {\mathbf{q},{\mathbf{p}}_{{n}^{\prime }}}\right) }\right) }. \tag{3}
69
+ $$
70
+
71
+ The model is then trained end-to-end by minimizing via SGD the negative log-likelihood $\mathcal{L}\left( \phi \right) = - \log {\rho }_{n}$ of the true class $n$ . We will refer to this approach without additions as PN in the experiments.
72
+
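Equations (2) and (3) amount to a few lines of array code. This is an illustrative NumPy sketch, not the paper's implementation (which learns the embedder and the temperature $\alpha$ end-to-end); the function names are ours.

```python
import numpy as np

def prototypes(support_emb, support_y, n_way):
    """Equation 2: each prototype is the mean of its K support embeddings."""
    return np.stack([support_emb[support_y == n].mean(axis=0)
                     for n in range(n_way)])

def query_distribution(query_emb, protos, alpha=1.0):
    """Equation 3: softmax over negative temperature-scaled L2 distances."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    logits = -alpha * d
    logits -= logits.max(axis=1, keepdims=True)   # for numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

# Training then minimises -log rho[true class], averaged over the queries.
```

A query lying close to a prototype receives almost all of the probability mass for that class, which is exactly the behaviour the loss rewards.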
73
+ Task-Adaptive Embedding (TAE). Until now, our module computes the embeddings regardless of the specific composition of the episode. Our intuition is that the context in which a graph appears should influence its representation. In practice, inspired by [39], we condition the embeddings on the particular task (episode) for which they are computed. Such influence will be expressed by a translation $\beta$ and a scaling $\gamma$ .
74
+
75
+ First of all, given an episode $e$ we compute an episode representation ${\mathbf{p}}_{\mathbf{e}}$ as the mean of the prototypes ${\mathbf{p}}_{n}$ for the classes $n = 1,\ldots ,N$ in the episode. We consider ${\mathbf{p}}_{\mathbf{e}}$ as a prototype for the episode and a proxy for the task. Then, we feed it to a Task Embedding Network (TEN), composed of two distinct residual MLPs. These output a shift vector ${\mathbf{\beta }}_{\ell }$ and a scale vector ${\gamma }_{\ell }$ respectively for each layer of the graph embedding module. At layer $\ell$ , the output ${\mathbf{h}}_{\ell }$ is then conditioned on the episode by transforming it as
76
+
77
+ $$
78
+ {\mathbf{h}}_{\ell }^{\prime } = \gamma \odot {\mathbf{h}}_{\ell } + \mathbf{\beta }. \tag{4}
79
+ $$
80
+
81
+ As in [39], at each layer $\gamma$ and $\beta$ are multiplied by two ${L}_{2}$ -penalized scalars ${\gamma }_{0}$ and ${\beta }_{0}$ so as to promote significant conditioning only when useful. Wrapping up, defining ${g}_{\Theta }$ and ${h}_{\Phi }$ to be the predictors for the shift and scale vectors respectively, the actual vectors applied to the hidden representation are $\mathbf{\beta } = {\beta }_{0}{g}_{\Theta }\left( {\mathbf{p}}_{\mathbf{e}}\right)$ and $\mathbf{\gamma } = {\gamma }_{0}{h}_{\Phi }\left( {\mathbf{p}}_{\mathbf{e}}\right) + \mathbf{1}$ . When we use this improvement in our experiments, we add the label TAE to the method name.
82
+
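A sketch of the conditioning step in Equation (4), with the TEN's two residual MLPs abstracted as arbitrary callables; the function and argument names (`g_theta`, `h_phi`) are hypothetical, not taken from the paper's code.

```python
import numpy as np

def task_adaptive(h, p_e, g_theta, h_phi, beta0=0.1, gamma0=0.1):
    """Equation 4 with the scaling of [39]: h' = gamma * h + beta, where
    beta = beta0 * g_theta(p_e) and gamma = gamma0 * h_phi(p_e) + 1.

    g_theta / h_phi stand in for the TEN's two residual MLPs; any callables
    mapping the episode prototype p_e to a vector of matching width work.
    """
    beta = beta0 * g_theta(p_e)
    gamma = gamma0 * h_phi(p_e) + 1.0
    return gamma * h + beta
```

With ${\beta }_{0} = {\gamma }_{0} = 0$ the transformation reduces to the identity, which is the behaviour the ${L}_{2}$ penalty on the two scalars favours when conditioning does not help.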
83
+ [graphics]
84
+
85
+ Figure 3: MixUp procedure. Each graph is embedded into a latent representation. We generate a random boolean mask $\mathbf{\sigma }$ and its complement $\mathbf{1} - \mathbf{\sigma }$ , which describe the features to select from ${\mathbf{s}}_{1}$ and ${\mathbf{s}}_{2}$ . The selected features are then recomposed to generate the novel latent vector $\widetilde{\mathbf{s}}$ .
86
+
87
+ MixUp (MU) Embedding Augmentation. Typical learning pipelines rely on data augmentation to overcome limited variability in the dataset. While this is mainly performed to obtain invariance to specific transformations, we use it to improve our embedding representation, promoting generalization on unseen feature combinations. In practice, given an episode $e$ , we randomly sample for each pair of classes ${n}_{1},{n}_{2}$ two graphs ${\mathcal{G}}^{\left( 1\right) }$ and ${\mathcal{G}}^{\left( 2\right) }$ from the corresponding support sets. Then, we compute their embeddings ${\mathbf{s}}^{\left( 1\right) }$ and ${\mathbf{s}}^{\left( 2\right) }$ , as well as their class probability distributions ${\mathbf{\rho }}^{\left( 1\right) }$ and ${\mathbf{\rho }}^{\left( 2\right) }$ according to Equation (3). Next, we randomly draw a boolean mask $\mathbf{\sigma } \in \{ 0,1{\} }^{d}$ . We can then obtain a novel synthetic example by mixing the features of the two graphs in the latent space
88
+
89
+ $$
90
+ \widetilde{\mathbf{s}} = \mathbf{\sigma }{\mathbf{s}}^{\left( 1\right) } + \left( {\mathbf{1} - \mathbf{\sigma }}\right) {\mathbf{s}}^{\left( 2\right) }, \tag{5}
91
+ $$
92
+
93
+ where 1 is a $d$ -dimensional vector of ones. Finally, we craft a synthetic class probability $\widetilde{\mathbf{\rho }}$ for this example by linear interpolation
94
+
95
+ $$
96
+ \widetilde{\mathbf{\rho }} = \lambda {\mathbf{\rho }}^{\left( 1\right) } + \left( {1 - \lambda }\right) {\mathbf{\rho }}^{\left( 2\right) },\;\lambda = \left( {\frac{1}{d}\mathop{\sum }\limits_{{i = 1}}^{d}{\mathbf{\sigma }}_{i}}\right) \tag{6}
97
+ $$
98
+
99
+ where $\lambda$ represents the fraction of features sampled from the first sample. If we then compute the class distribution $\rho$ for $\widetilde{\mathrm{s}}$ according to Equation (3), we can encourage it to match the target in Equation (6) by adding the following regularizing term to the training loss
100
+
101
+ $$
102
+ {\mathcal{L}}_{\mathrm{{MU}}} = \parallel \mathbf{\rho } - \widetilde{\mathbf{\rho }}{\parallel }_{2}^{2}. \tag{7}
103
+ $$
104
+
105
+ Intuitively, by adopting this online data augmentation procedure, the network is faced with new feature combinations during training, helping to explore unseen regions of the embedding space. The overall procedure is summarized in Figure 3.
106
+
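Equations (5) to (7) can be sketched as follows; the function names are ours, and the snippet omits the sampling of the two supports per class pair.

```python
import numpy as np

def mixup(s1, s2, rho1, rho2, rng):
    """Equations 5-6: gate features with a random boolean mask sigma and mix
    the class distributions by lambda, the fraction of features kept from s1."""
    sigma = rng.integers(0, 2, size=s1.shape).astype(float)
    s_mix = sigma * s1 + (1.0 - sigma) * s2
    lam = sigma.mean()
    rho_mix = lam * rho1 + (1.0 - lam) * rho2
    return s_mix, rho_mix

def mixup_regulariser(rho_pred, rho_mix):
    """Equation 7: squared L2 distance between the distribution predicted
    for the synthetic embedding and its interpolated target."""
    return float(np.sum((rho_pred - rho_mix) ** 2))
```

Note the contrast with standard MixUp: the embedding is a feature-wise *composition* of the two samples rather than a linear interpolation; only the label distribution is interpolated, with weight $\lambda$ tied to the mask.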
107
+ § 4 EXPERIMENTS
108
+
109
+ § 4.1 DATASETS
110
+
111
+ We benchmark our approach over two sets of datasets: the first one was introduced in [12], and consists of: (i) TRIANGLES, a collection of graphs labeled $i = 1,\ldots ,{10}$ , where $i$ is the number of triangles in the graph. (ii) ENZYMES, a dataset of tertiary protein structures from the BRENDA database [11]; each label corresponds to a different top-level enzyme. (iii) Letter-High, a collection of graph-represented letter drawings from the English alphabet; each drawing is labeled with the corresponding letter. (iv) Reddit-12K, a social network dataset where graphs represent threads, with edges connecting interacting users. The corresponding discussion forum gives the label of a thread. We will refer to this set of datasets as ${\mathcal{D}}_{\mathrm{A}}$ . The second set of datasets was introduced in [36] and consists of: (i) Graph-R52, a textual dataset in which each graph represents a different text, with words being connected by an edge if they appear together in a sliding window. (ii) COIL-DEL, a collection of graph-represented images obtained through corner detection and Delaunay triangulation. We will refer to this set of datasets as ${\mathcal{D}}_{\mathrm{B}}$ . The overall dataset statistics are reported in appendix A.
112
+
113
+ \begin{tabular}{llcccccccccc}
+ \hline
+  & \multirow{2}{*}{Model} & \multicolumn{2}{c}{TRIANGLES} & \multicolumn{2}{c}{Letter-High} & \multicolumn{2}{c}{ENZYMES} & \multicolumn{2}{c}{Reddit} & \multicolumn{2}{c}{mean} \\
+  & & 5-shot & 10-shot & 5-shot & 10-shot & 5-shot & 10-shot & 5-shot & 10-shot & 5-shot & 10-shot \\
+ \hline
+ \multirow{3}{*}{Kernel} & WL & $59.3 \pm 7.7$ & $64.5 \pm 7.4$ & $69.8 \pm 7.2$ & $74.1 \pm 5.8$ & $54.9 \pm 9.1$ & $57.0 \pm 9.1$ & $29.3 \pm 4.5$ & $34.2 \pm 4.9$ & 53.3 & 57.5 \\
+  & SP & $61.0 \pm 8.0$ & $66.7 \pm 7.4$ & $67.3 \pm 6.8$ & $71.2 \pm 6.6$ & $58.8 \pm 9.1$ & $61.5 \pm 8.8$ & $51.0 \pm 5.8$ & $52.7 \pm 4.9$ & 59.5 & 63.0 \\
+  & Graphlet & $69.2 \pm 10.2$ & $79.3 \pm 8.1$ & $35.4 \pm 4.2$ & $39.4 \pm 4.4$ & $58.8 \pm 10.6$ & $59.8 \pm 9.8$ & $42.7 \pm 11.3$ & $45.4 \pm 11.2$ & 51.5 & 56.0 \\
+ \hline
+ \multirow{3}{*}{Meta} & MAML & $\mathbf{87.8} \pm 4.9$ & $88.2 \pm 4.5$ & $69.6 \pm 7.9$ & $73.8 \pm 5.7$ & $52.7 \pm 8.9$ & $54.9 \pm 8.5$ & $26.0 \pm 6.0$ & $37.0 \pm 6.9$ & 59.0 & 63.5 \\
+  & AS-MAML [36] & $86.4 \pm 0.7$ & $87.2 \pm 0.6$ & $76.2 \pm 0.8$ & $77.8 \pm 0.7$ & - & - & - & - & - & - \\
+  & AS-MAML* & $79.2 \pm 5.9$ & $84.0 \pm 5.3$ & $71.8 \pm 7.6$ & $73.0 \pm 5.2$ & $45.1 \pm 8.2$ & $53.1 \pm 8.1$ & $33.7 \pm 10.8$ & $37.4 \pm 10.8$ & 57.4 & 61.9 \\
+ \hline
+ \multirow{3}{*}{Metric} & SMF-GIN [29] & $79.8 \pm 0.7$ & - & - & - & - & - & - & - & - & - \\
+  & FAITH [60] & $79.5 \pm 4.0$ & $80.7 \pm 3.5$ & $71.5 \pm 3.5$ & $76.6 \pm 3.2$ & $57.8 \pm 4.6$ & $62.1 \pm 4.1$ & $42.7 \pm 4.1$ & $46.6 \pm 4.0$ & 62.9 & 66.5 \\
+  & SPNP [32] & $85.2 \pm 0.7$ & $86.8 \pm 0.7$ & - & - & - & - & - & - & - & - \\
+ \hline
+ \multirow{5}{*}{Transfer} & GIN & $82.1 \pm 6.3$ & $83.6 \pm 5.4$ & $68.4 \pm 7.3$ & $74.5 \pm 5.7$ & $54.2 \pm 9.3$ & $55.9 \pm 9.4$ & $49.8 \pm 7.0$ & $\mathbf{53.4} \pm 6.3$ & 63.6 & 66.8 \\
+  & GAT & $82.8 \pm 6.1$ & $83.4 \pm 5.5$ & $74.1 \pm 6.2$ & $76.4 \pm 5.1$ & $53.6 \pm 9.4$ & $55.4 \pm 9.1$ & $39.0 \pm 6.7$ & $41.7 \pm 6.1$ & 62.4 & 64.2 \\
+  & GCN & $82.0 \pm 6.1$ & $82.7 \pm 5.5$ & $71.3 \pm 6.8$ & $74.9 \pm 5.5$ & $53.4 \pm 9.3$ & $54.6 \pm 9.4$ & $44.7 \pm 7.4$ & $50.8 \pm 6.3$ & 62.8 & 65.7 \\
+  & GSM [12] & $71.4 \pm 4.3$ & $75.6 \pm 3.6$ & $69.9 \pm 5.9$ & $73.2 \pm 3.4$ & $55.4 \pm 5.7$ & $60.6 \pm 3.8$ & $41.5 \pm 4.1$ & $45.6 \pm 3.6$ & 59.5 & 63.8 \\
+  & GSM* & $79.2 \pm 5.7$ & $81.0 \pm 5.6$ & $72.9 \pm 6.4$ & $75.6 \pm 5.6$ & $56.8 \pm 10.3$ & $58.4 \pm 9.7$ & $40.7 \pm 6.8$ & $46.4 \pm 6.3$ & 62.4 & 65.4 \\
+ \hline
+ Ours & PN+TAE+MU & $87.4 \pm 0.9$ & $87.5 \pm 0.8$ & $77.2 \pm 5.5$ & $79.2 \pm 4.8$ & $56.9 \pm 10.1$ & $59.3 \pm 9.4$ & $45.7 \pm 6.7$ & $48.5 \pm 6.3$ & 66.8 & 68.7 \\
+ \hline
+ \end{tabular}
166
+
167
+ Table 1: Macro accuracy scores over different $k$ -shot settings and architectures. They are partitioned into baselines (upper section) and our full architecture (lower section). The best scores are in bold. We report standard deviation values in blue and 0.9 confidence intervals in orange. Cells filled with - indicate lack of results in the original works for the corresponding datasets.
168
+
169
+ It is important to note that only the datasets in ${\mathcal{D}}_{\mathrm{B}}$ have enough classes to permit a disjoint set of classes for validation. For the first four datasets, existing works instead use a disjoint subset of the training samples as a validation set. We argue that this setting is critically unfit for few-shot learning: the validation set is not a good proxy for the actual testing environment, since its classes are not novel. Moreover, the lack of a reliable validation set prevents the usage of early stopping, as there is no way to decide on a good stopping criterion for samples from unseen classes. We nevertheless report the outcomes of this evaluation setting for the sake of comparison.
170
+
171
+ § 4.2 BASELINES
172
+
173
+ We group the considered approaches according to their category. We note, however, that the taxonomy is not strict, and some works may be considered to belong to multiple categories.
174
+
175
+ Graph kernels. Starting from graph kernel methods, we consider Weisfeiler-Lehman (WL) [45], Shortest Path (SP) [7] and Graphlet [44]. These well-known methods compute similarity scores between pairs of graphs and can be understood as computing an inner product between implicit graph representations. We refer the reader to [31] for a thorough treatment. In our implementation, an SVM is used as the head classifier for all the methods. More implementation details can be found in appendix B.
176
+
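For intuition, here is a toy pure-Python version of the WL subtree feature map and the kernel it induces; real experiments would use an optimized implementation, and the names below are illustrative.

```python
from collections import Counter

def wl_features(node_labels, adj_list, iterations=2):
    """Weisfeiler-Lehman subtree features: at each iteration, relabel every
    node by its own label plus the sorted multiset of its neighbours'
    labels, accumulating counts of all labels seen along the way."""
    feats = Counter(node_labels)
    labels = list(node_labels)
    for _ in range(iterations):
        labels = [(labels[v], tuple(sorted(labels[u] for u in adj_list[v])))
                  for v in range(len(labels))]
        feats.update(labels)
    return feats

def wl_kernel(f1, f2):
    """Kernel value: inner product of the two feature histograms; an SVM
    can then be trained on the resulting Gram matrix."""
    return sum(count * f2.get(label, 0) for label, count in f1.items())
```

Comparing a triangle and a 3-node path with identical node labels, the kernel assigns a higher score to the self-comparison because only the triangle's nodes share the same refined labels at every iteration.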
177
+ Meta learning. Regarding the meta-learning approaches, we consider both vanilla Model-Agnostic Meta-Learning (MAML) [20] and its graph-tailored variant AS-MAML [36]. The former employs a meta-learner trained by optimizing the sum of the losses from a set of downstream tasks, encouraging the learning of features that can be adapted with a small number of optimization steps. The latter builds upon MAML by integrating a reinforcement learning-based adaptive step controller to decide the number of inner optimization steps adaptively.
178
+
179
+ Metric learning. For the metric based approaches, the considered works are SMF-GIN [29], FAITH [60] and SPNP [32]. In SMF-GIN, a GNN is employed to encode both global (via an attention over different GNN layer encodings) and local (via an attention over different substructure encodings) properties. We point out that they include a ProtoNet-based baseline. However, their implementation does not accurately follow the original one and, differently from us, leverages domain-specific prior knowledge. FAITH proposes to capture correlations among meta-training tasks via a hierarchical task graph to better transfer meta-knowledge to the target task. For each meta-training task, a set of additional ones is sampled according to its classes to build the hierarchical graph. Subsequently, the knowledge from the embeddings extracted by the hierarchical task graph is aggregated to classify the query graph samples. Finally, SPNP makes use of Neural Processes (NPs) by introducing an encoder capable of constructing stochastic processes considering the graph structure information extracted by a GNN and a prototypical decoder that provides a metric space where classification is performed.
180
+
181
+ \begin{tabular}{llcccccc}
+ \hline
+ \multirow{2}{*}{Category} & \multirow{2}{*}{Model} & \multicolumn{2}{c}{Graph-R52} & \multicolumn{2}{c}{COIL-DEL} & \multicolumn{2}{c}{mean} \\
+  & & 5-shot & 10-shot & 5-shot & 10-shot & 5-shot & 10-shot \\
+ \hline
+ \multirow{3}{*}{Kernel} & WL & $\mathbf{88.2} \pm 10.9$ & $\mathbf{91.4} \pm 9.1$ & $56.5 \pm 12.7$ & $64.0 \pm 12.8$ & 72.4 & 77.7 \\
+  & SP & $84.3 \pm 11.3$ & $88.9 \pm 9.6$ & $39.6 \pm 9.6$ & $45.5 \pm 11.3$ & 61.9 & 67.2 \\
+  & Graphlet & $57.4 \pm 10.3$ & $58.3 \pm 10.1$ & $57.6 \pm 12.2$ & $61.3 \pm 11.5$ & 57.5 & 59.8 \\
+ \hline
+ \multirow{3}{*}{Meta} & MAML & $64.9 \pm 13.3$ & $70.1 \pm 12.7$ & $76.7 \pm 12.6$ & $78.8 \pm 11.5$ & 70.8 & 74.4 \\
+  & AS-MAML [36] & $75.3 \pm 1.1$ & $78.3 \pm 1.1$ & $81.5 \pm 1.3$ & $84.7 \pm 1.3$ & 78.4 & 81.5 \\
+  & AS-MAML* & $72.3 \pm 14.8$ & $72.0 \pm 15.5$ & $77.2 \pm 11.1$ & $80.1 \pm 9.9$ & 74.7 & 76.0 \\
+ \hline
+ \multirow{4}{*}{Transfer} & GIN & $67.2 \pm 13.9$ & $66.4 \pm 13.7$ & $72.3 \pm 11.4$ & $74.0 \pm 11.3$ & 69.8 & 70.2 \\
+  & GAT & $75.2 \pm 12.8$ & $77.5 \pm 12.4$ & $79.3 \pm 10.3$ & $80.8 \pm 9.9$ & 77.2 & 79.1 \\
+  & GCN & $75.1 \pm 13.0$ & $74.1 \pm 14.5$ & $75.2 \pm 11.4$ & $77.1 \pm 10.8$ & 75.1 & 75.6 \\
+  & GSM* & $70.3 \pm 15.7$ & $71.6 \pm 14.9$ & $74.9 \pm 11.4$ & $79.2 \pm 10.3$ & 72.6 & 75.4 \\
+ \hline
+ Metric & SPNP [32] & - & - & $84.8 \pm 1.6$ & $87.3 \pm 1.6$ & - & - \\
+ \hline
+ \multirow{3}{*}{Ours} & PN & $73.1 \pm 12.1$ & $78.0 \pm 10.6$ & $85.5 \pm 9.8$ & $87.2 \pm 9.3$ & 79.3 & 82.6 \\
+  & PN+TAE & $77.9 \pm 11.8$ & $81.3 \pm 10.6$ & $86.4 \pm 9.6$ & $88.8 \pm 8.5$ & 82.1 & 85.0 \\
+  & PN+TAE+MU & $77.9 \pm 11.8\,{\scriptstyle\pm 3\mathrm{e}{-3}}$ & $81.5 \pm 10.4\,{\scriptstyle\pm 4\mathrm{e}{-3}}$ & $\mathbf{87.7} \pm 9.2\,{\scriptstyle\pm 4\mathrm{e}{-3}}$ & $\mathbf{90.5} \pm 7.7\,{\scriptstyle\pm 3\mathrm{e}{-3}}$ & \textbf{82.8} & \textbf{86.0} \\
+ \hline
+ \end{tabular}
237
+
238
+ Table 2: Macro accuracy scores over different $k$ -shot settings and architectures. The best scores are in bold. We report standard deviation values in blue and 0.9 confidence intervals in orange. Cells filled with - indicate a lack of results in the original works for the corresponding datasets.
239
+
240
+ Transfer learning. Finally, transfer learning approaches include GSM [12] and three simple baselines built on top of varying GNN architectures, namely GIN [67], GAT [55] and GCN [30]. The latter follow the most standard fine-tuning procedure, i.e. training the embedder backbone over the base classes and fine-tuning the classifier head over the $k$ supports. In GSM, graph prototypes are computed as a first step and then clustered based on their spectral properties to create super-classes. These are then used to generate a super-graph which is employed to separate the novel graphs. The original work, however, does not follow an episodic framework, making the results not directly comparable. For this reason, we also re-implemented it to cast it in the episodic framework. We refer the reader to appendix B for more details.
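The standard fine-tuning procedure used by the GNN transfer baselines above can be sketched as follows: the backbone is frozen, and only a softmax-regression head is fitted on the $k$ support embeddings. The embeddings and hyper-parameters below are hypothetical toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

def finetune_head(features, labels, n_classes, lr=0.1, steps=300):
    """Fit only a linear classifier head (softmax regression) on the support
    embeddings produced by a frozen, pre-trained embedder."""
    n, d = features.shape
    W = np.zeros((n_classes, d))
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = features @ W.T
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        W -= lr * (p - onehot).T @ features / n   # cross-entropy gradient
    return W

# Frozen "embeddings" of 5 supports per class (toy stand-in values)
feats = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(3, 1, (5, 8))])
ys = np.array([0] * 5 + [1] * 5)
W = finetune_head(feats, ys, n_classes=2)
pred = (feats @ W.T).argmax(1)
print((pred == ys).mean())
```

Because only the head is trained, this baseline adapts cheaply to novel classes but relies entirely on the quality of the pre-trained embedding space.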
241
+
242
+ § 4.3 EXPERIMENTAL DETAILS
243
+
244
+ Our graph embedder is composed of two layers of GIN followed by a mean pooling layer, and the dimension of the resulting embeddings is set to 64. Furthermore, both the latent mixup regularizer and the L2 regularizer of the task-adaptive embedding are weighted at 0.1. The framework is trained with a batch size of 32 using the Adam optimizer with a learning rate of 0.0001. We implement our framework with PyTorch Lightning [16] using PyTorch Geometric [18], and WandB [6] to log the experiment results. The specific configurations of all our approaches are reported in appendix B.
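The embedder just described (GIN-style layers followed by mean pooling) can be sketched in plain NumPy; the actual implementation uses PyTorch Geometric, and the dimensions and random weights here are toy stand-ins for the paper's 64-dimensional configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gin_layer(A, H, W1, W2):
    """One GIN-style update: MLP of (self + neighbour sum), with eps = 0."""
    M = H + A @ H                        # (1 + eps) * h_i + sum over neighbours
    return np.maximum(M @ W1, 0) @ W2    # two-layer MLP with ReLU

def embed_graph(A, X, params):
    """Two GIN layers followed by mean pooling -> one graph-level embedding."""
    H = X
    for W1, W2 in params:
        H = gin_layer(A, H, W1, W2)
    return H.mean(axis=0)                # mean readout

d, hidden = 4, 8
params = [(rng.normal(size=(d, hidden)), rng.normal(size=(hidden, hidden))),
          (rng.normal(size=(hidden, hidden)), rng.normal(size=(hidden, hidden)))]
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph P3
X = rng.normal(size=(3, d))
g = embed_graph(A, X, params)
print(g.shape)  # (8,)
```

The output `g` plays the role of the graph-level embedding from which prototypes and distances are computed downstream.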
245
+
246
+ § 5 RESULTS
247
+
248
+ We report in this section the results over the two sets of benchmark datasets ${\mathcal{D}}_{A},{\mathcal{D}}_{B}$ . Given the lack of homogeneity in the evaluation settings of previous works, we report both the standard deviation of our results between different episodes and the 0.95 confidence interval. Moreover, when possible, we provide re-implementations of the methods, indicated with a $\star$ in their names.
249
+
250
+ Benchmark ${\mathcal{D}}_{\mathbf{A}}$ . As can be seen in Table 1, there is no one-fits-all approach for the considered datasets. In fact, the best results for each are obtained with approaches belonging to different categories, including graph kernels. However, the proposed approach obtains the best results if we consider the average performance for both $k = 5,{10}$ . In fact, considering previously published works, we obtain an overall margin of $+ {7.3}\% , + {4.9}\%$ accuracy for $k = 5,{10}$ compared to GSM [12], $+ {9.4}\%$ and +6.8% compared to AS-MAML* [36], and +3.9%, +2.2% with respect to FAITH [60]. However, we again stress the partial inadequacy of these datasets as a realistic evaluation tool, given the lack of a disjoint set of classes for the validation set. Interestingly, our re-implementation of ${\mathrm{{GSM}}}^{ \star }$ obtains slightly better results than the original over Reddit and Letter-High, a significant improvement over TRIANGLES and a comparable result over ENZYMES. The difference may be attributed to the difference in the evaluation setting, as the non-episodic framework employed in GSM does not have a fixed number of queries per class, and batches are sampled without episodes.
251
+
252
+ Benchmark ${\mathcal{D}}_{\mathbf{B}}$ . Table 2 shows the results for the two datasets in the benchmark. Most surprisingly, graph kernels exhibit superior performance over R-52, outperforming all the considered deep learning models. It must be noted, however, that the latter is characterized by a very skewed sample distribution, with few classes accounting for most of the samples. In this regard, deep learning methods may end up overfitting the most frequent class, while graph kernel methods are less prone to this due to their smaller parameter count and stronger inductive bias. Nevertheless, the latter also hinders their adaptivity to different distributions: we can see, in fact, how the same methods perform poorly on COIL-DEL. This can also be observed in the mean results over both sets of datasets, where graph kernels generally perform the worst. Compared to existing works, our approach obtains an average margin of $+ {4.37}\%$ and $+ {4.53}\%$ over AS-MAML [36] and $+ {10.2}\% , + {10.6}\%$ over GSM for $k = 5,{10}$ respectively. Finally, the last three rows of Table 2 show the efficacy of the proposed improvements. Task-adaptive embedding (TAE) yields the most significant gain, with an average increment of $+ {2.82}\%$ and $+ {2.42}\%$ for the 5-shot and 10-shot cases, respectively. The proposed online data augmentation technique (MU) then provides an additional boost, especially on COIL-DEL: its addition yields a +0.65% and +1.72% accuracy improvement for $k = 5,{10}$ . Remarkably, a vanilla Prototypical Network (PN) architecture with the proposed graph embedder is already sufficient to obtain state-of-the-art results.
253
+
254
+ Qualitative analysis. The latent space learned by the graph embedder is the core element of our approach since it determines the prototypes and the subsequent sample classification. To provide a better insight into our method's peculiarities, Figure 4 depicts a T-SNE representation of the learned embeddings for novel classes. Each row represents a different episode, while the different columns show the different embeddings obtained with our approach and its further refinements. We also highlight the queries (crosses), the supports (circles) and the prototypes (stars). As can be seen, our approach separates samples belonging to novel classes into clearly defined clusters. Already in PN, some classes naturally cluster in different regions of the embedding. The TAE regularization improves the class separation without significantly changing the disposition of the clusters in the space. Our insight is that the context may let the network reorganize the already seen space without moving far from the already obtained representation. Finally, MU allows better use of previously unexplored regions, as expected from this kind of data augmentation. We show that our feature recombination helps the network better generalize and anticipate the coming of novel classes.
255
+
256
+ § 6 CONCLUSIONS
257
+
258
+ Limitations. Employing a graph neural network embedder, the proposed approach may inherit known issues such as the presence of information bottlenecks [52] and over-smoothing [13]. These may be aggravated by the additional aggregation required to compute the prototypes, as the readout function to obtain a graph-level representation is already an aggregation of the node embeddings. Also, the nearest-neighbour association assumes that the final embedding space enjoys a Euclidean metric. While this is an excellent local approximation, we expect it may lead to imprecision. To overcome this, further improvements can be inspired by the Computer Vision community [50].
259
+
260
+ Future works. In future work, we aim to enrich the latent space defined by the architecture, for instance, by forcing the class prototypes in each episode to be sampled from a learnable distribution rather than directly computed as the mean of the supports. Moreover, it may be worth introducing an attention layer to have supports (or prototypes, directly) affect each other explicitly and not implicitly, as now happens with the task embedding module. We also believe data augmentation is a crucial technique for the future of this task: the capacity to meaningfully inflate the small available datasets may result in a significant performance improvement. In this regard, we plan to extensively test the existing graph data augmentation techniques in the few-shot scenario and build upon MixUp to exploit different mixing strategies, such as non-linear interpolation.
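One plausible form of the MixUp-style latent augmentation discussed above is linear interpolation of graph embeddings and their soft labels; the Beta parameter and tensor shapes below are illustrative, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_mixup(z, y_onehot, alpha=0.2):
    """Mix graph-level embeddings (and their soft labels) with a Beta-drawn
    coefficient: a sketch of MixUp applied in the latent space."""
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(z))
    z_mix = lam * z + (1 - lam) * z[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return z_mix, y_mix

z = rng.normal(size=(4, 16))       # four graph embeddings
y = np.eye(2)[[0, 0, 1, 1]]        # their one-hot labels
z_mix, y_mix = latent_mixup(z, y)
print(z_mix.shape, np.allclose(y_mix.sum(1), 1.0))  # (4, 16) True
```

Mixing in the latent space avoids the difficulty of interpolating discrete graph structures directly, which is what makes this augmentation attractive for graph data.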
261
+
+
264
+ Figure 4: Visualization of latent spaces from the COIL-DEL dataset, through T-SNE dimensionality reduction. Each row is a different episode, the colors represent novel classes, the crosses are the queries, the circles are the supports and the stars are the prototypes. The left column is produced with the base model PN, the middle one with the PN+TAE model, the right one with the full model PN+TAE+MU. This comparison shows the TAE and MU regularizations improve the class separation in the latent space, with MU proving essential to obtain accurate latent clusters.
265
+
266
+ Conclusions. In this paper, we tackle the problem of few-shot graph classification, an under-explored problem in the broader machine learning community. We provide a modular and extensible codebase to facilitate practitioners in the field and set a stable ground for fair comparisons. It contains re-implementations of the most relevant baselines and state-of-the-art works, allowing us to provide an overview of the possible approaches. Our findings show that while there is no one-fits-all approach for all the datasets, the overall best results are obtained by a distance metric learning baseline. We then suggest valuable additions to the architecture, adapting a task-adaptive embedding procedure and designing a novel online graph data augmentation technique, and demonstrate their benefits over several datasets. We hope this work encourages a reconsideration of the effectiveness of distance metric learning when dealing with graph-structured data. In fact, we believe metric learning to be remarkably well suited to graphs, considering that the latent spaces encoded by graph neural networks are known to capture both topological features and node signals effectively. Most importantly, we hope this work and its artifacts will facilitate practitioners in the field and encourage new ones to approach it.
papers/LOG/LOG 2022/LOG 2022 Conference/Vbfr1jiMxYS/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,923 @@
1
+ # PatchGT: Transformer over Non-trainable Clusters for Learning Graph Representations
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Recently the Transformer structure has shown good performances in graph learning tasks. However, these Transformer models directly work on graph nodes and may have difficulties learning high-level information. Inspired by the vision transformer, which applies to image patches, we propose a new Transformer-based graph neural network: Patch Graph Transformer (PatchGT). Unlike previous transformer-based models for learning graph representations, PatchGT learns not from nodes directly but from non-trainable graph patches, which saves computation and improves model performance. The key idea is to segment a graph into patches based on spectral clustering without any trainable parameters, with which the model can first use GNN layers to learn patch-level representations and then use Transformer layers to obtain graph-level representations. The architecture leverages the spectral information of graphs and combines the strengths of GNNs and Transformers. Further, we show the limitations of previous hierarchical trainable clusters theoretically and empirically. We also prove that the proposed non-trainable spectral clustering method is permutation invariant and can help address the information bottlenecks in the graph. PatchGT achieves higher expressiveness than 1-WL-type GNNs, and the empirical study shows that PatchGT achieves competitive performances on benchmark datasets and provides interpretability to its predictions.
12
+
13
+ ## 1 Introduction
14
+
15
+ Learning from graph data is ubiquitous in applications such as drug design [14] and social network analysis [34]. The success of a graph learning task hinges on effective extraction of information from graph structures, which often contain combinatorial structures and are highly complex. Early works [7] often need to manually extract features from graphs before applying learning models. In the era of deep learning, Graph Neural Networks (GNNs) [32] are developed to automatically extract information from graphs. Through passing learnable messages between nodes, they are able to encode graph information into vector representations of graph nodes. GNNs have become the standard tool for learning tasks on graph data.
16
+
17
+ While they have achieved good performances in a wide range of tasks, GNNs still have a few limitations. For example, GNNs suffer from issues such as inadequate expressiveness [33], over-smoothing [26], and over-squashing [2]. These issues have been partially addressed by techniques such as improving message-passing functions and expanding node features [5, 19].
18
+
19
+ Another important line of progress is to replace the message-passing network with the Transformer architecture $\left\lbrack {6,{17},{22},{35}}\right\rbrack$ . These models treat graph nodes as tokens and apply the Transformer architecture to nodes directly. The main focus of these models is how to encode node information and how to incorporate adjacency matrices into network calculations. Without the message-passing structure, these models may overcome some associated issues and have shown premium performances in various graph learning tasks. However, they suffer from high computational complexity because of the global attention over all nodes, and it is hard for them to capture the topological information of graphs.
20
+
21
+ As a comparison, the Transformer for image data works on image patches instead of pixels [9, 20]. While this model choice is justified by reduction of computation cost, recent work [29] shows that
22
+
23
+ "patch representation itself may be a critical component to the 'superior' performance of newer architectures like Vision Transformers". One intriguing question is whether patch representation can also improve learning models on graphs. With this question, we consider patches on graphs. Patches over graphs are justified by a "mid-level" understanding of graphs: for example, a molecule graph's property is often decided by some function groups, each of which is a subgraph formed by locally-connected atoms. Therefore, patch representations are able to capture such mid-level concepts and bridge the gap between low-level structures to high-level semantics.
24
+
25
+ Motivated by this question, we propose a new framework, Patch Graph Transformer (PatchGT). It first segments a graph into patches based on spectral clustering, which is a non-trainable segmentation method, then applies GNN layers to learn patch representations, and finally uses Transformer layers to learn a graph-level representation from patch representations. This framework combines the strengths of two types of learning architectures: GNN layers can extract information with message passing, while Transformer layers can aggregate information using the attention mechanism. To the best of our knowledge, we are the first to show several limitations of previous GNN-based trainable clustering methods. We also show that the proposed non-trainable clustering provides more reasonable patches and helps overcome the information bottleneck in graphs.
26
+
27
+ We justify our model architecture with theoretical analysis. We show that our patch structure derived from spectral clustering is superior to patch structures learned by GNNs [4, 12, 36]. We also propose a new mathematical description of the information bottleneck in vanilla GNNs and further show that our architecture has the ability of mitigating this issue when graphs have small graph cuts.
28
+
29
+ We run an extensive empirical study and demonstrate that the proposed model outperforms competing methods on a list of graph learning tasks. The ablation study shows that our PatchGT is able to combine the strengths of GNN layers and Transformer layers. The attention weights in Transformer layers also provide explanations for model predictions.
30
+
31
+ ## 2 Related Work
32
+
33
+ Transformer models have gained remarkable successes in NLP applications [15]. Recently, they have also been introduced to vision tasks [9] and graph tasks [6, 17, 22, 35]. These models all treat nodes as tokens. In particular, memory-based graph networks [1] apply hierarchical attention pooling to the nodes; such models are therefore hard to apply to large graphs because of their high computational complexity. At the same time, image patches have been shown to be useful for Transformer models on image data $\left\lbrack {9,{29}}\right\rbrack$ , so it is not surprising if graph patches are also helpful to Transformer models on graph data. Graph multiset pooling [3] applies a trainable, GNN-based pooling method to the nodes and then adopts a global attention layer over the learned clusters. We will show in this work that such trainable clustering has several limitations for the attention mechanism.
34
+
35
+ Hierarchical pooling models $\left\lbrack {4,{11},{12},{18},{25},{36}}\right\rbrack$ are relevant to our work in that they also aggregate information from node representations in middle layers of networks. However, these methods all form their pooling structures based on representations learned from GNNs. As a result, these pooling structures inherit drawbacks from GNNs [33]. They may also aggregate nodes that are far apart on the graph and thus cannot preserve the global structure of the input graph. Such trainable clustering methods also require substantial computation for training. Furthermore, our main purpose is to use non-trainable patches on graphs as tokens for a Transformer model, which is different from these models.
36
+
37
+ ## 3 Patch Graph Transformer
38
+
39
+ ### 3.1 Background
40
+
41
+ In this work, we consider graph-level learning problems. Let $G = \left( {V, E}\right)$ denote a graph with node set $V$ and edge set $E$ . Let $\mathbf{A}$ denote its adjacency matrix. The graph has both node features $\mathbf{X} = \left( {{\mathbf{x}}_{i} \in {\mathbb{R}}^{d} : i \in V}\right)$ and edge features $\mathbf{E} = \left( {{\mathbf{e}}_{i, j} \in {\mathbb{R}}^{{d}^{\prime }} : \left( {i, j}\right) \in E}\right)$ . Let $y$ denote the label of the graph. This work aims to learn a model that maps $(\mathbf{A}, \mathbf{X}, \mathbf{E})$ to a vector representation $\mathbf{g}$ , which is then used to predict the graph label $y$ .
42
+
43
+ GNN layers. A GNN uses node vectors to represent structural information of the graph. It consists of multiple GNN layers. Each GNN layer passes learnable messages and updates node vectors. Suppose $\mathbf{H} = \left( {{\mathbf{h}}_{i} \in {\mathbb{R}}^{{d}^{\prime \prime }} : i \in V}\right)$ are node vectors, a typical GNN layer updates $\mathbf{H}$ as follows.
44
+
45
+ $$
46
+ {\mathbf{h}}_{i}^{\prime } = \sigma \left( {{\mathbf{W}}_{1}{\mathbf{h}}_{i} + \mathop{\sum }\limits_{{j : \left( {i, j}\right) \in E}}{\mathbf{W}}_{2}{\mathbf{h}}_{j} + {\mathbf{W}}_{3}{\mathbf{e}}_{i, j}}\right) \tag{1}
47
+ $$
48
+
49
+ ![01963ee9-6afc-7b34-900f-7fc043203b0b_2_357_204_1079_557_0.jpg](images/01963ee9-6afc-7b34-900f-7fc043203b0b_2_357_204_1079_557_0.jpg)
50
+
51
+ Figure 1: Model review. We segment a graph into several patch subgraphs by non-trainable clustering. We first extract local information through a GNN, and the initial patch representations are summarized by the aggregation of nodes within the corresponding patches. To further encode structure information, we apply another patch-level GNN to update the representations of patches. Finally, we use Transformer to extract the representation of the entire graph based on patch representations.
52
+
53
+ Here matrices $\left( {{\mathbf{W}}_{1},{\mathbf{W}}_{2},{\mathbf{W}}_{3}}\right)$ are all learnable parameters; and $\sigma$ is the activation function. We denote the layer function by ${\mathbf{H}}^{\prime } = \operatorname{GNN}\left( {\mathbf{A},\mathbf{E},\mathbf{H}}\right)$ . If there are no edge features, then the calculation can be written in matrix form.
54
+
55
+ $$
56
+ {\mathbf{H}}^{\prime } = \sigma \left( {{\mathbf{{HW}}}_{1}^{\top } + {\mathbf{{AHW}}}_{2}^{\top }}\right) \tag{2}
57
+ $$
58
+
59
+ ### 3.2 Model design
60
+
61
+ PatchGT has three components: segmenting the input graph into patches, learning patch representations, and aggregating patch representations into a single graph vector. The overall architecture is shown in Figure 1. The second and third steps are in an end-to-end learning model. Graph segmentation is outside of the learning model, which will be justified by our theoretical analysis later.
62
+
63
+ Forming patches over the graph. We first discuss how to form patches on a graph. One consideration is to include an informative subgraph (e.g., a functional group, a motif) in a single patch instead of segmenting it into pieces. A reasonable approach is to run node clustering on the input graph and treat each cluster as a graph patch. If a meaningful subgraph is densely connected, it has a good chance of being contained in a single cluster.
64
+
65
+ In this work, we consider spectral clustering [28, 37] for graph segmentation. Let $\mathbf{L} = \mathbf{I} - {\mathbf{D}}^{-1/2}\mathbf{A}{\mathbf{D}}^{-1/2}$ be the normalized Laplacian matrix of $G$ , and let its eigen-decomposition be $\mathbf{L} = \mathbf{U}\mathbf{\Lambda }{\mathbf{U}}^{\top }$ , where the eigenvalues $\mathbf{\Lambda } = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{\left| V\right| }}\right)$ are sorted in ascending order. By thresholding the eigenvalues with a small threshold $\gamma$ , we take the first $k = \max \left\{ {{k}^{\prime } : {\lambda }_{{k}^{\prime }} \leq \gamma }\right\}$ eigenvectors ${\mathbf{U}}_{1 : k}$ , and then we run $k$ -means on them to get $k$ clusters (denoted by $\mathcal{P}$ ) of graph nodes. Here $\mathcal{P} = \left\{ {{C}_{{k}^{\prime }} \subseteq V : {k}^{\prime } = 1,\ldots , k}\right\}$ with each ${C}_{{k}^{\prime }}$ representing a cluster/patch. Note that the threshold $\gamma$ is a hyper-parameter, and $k$ varies depending on the underlying graph's topology.
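The segmentation step above can be sketched in NumPy; the tiny k-means (with a naive, deterministic initialization chosen only for this demo) and the example graph of two triangles joined by a bridge are illustrative:

```python
import numpy as np

def graph_patches(A, gamma=0.5):
    """Segment a graph into patches: take eigenvectors of the normalized
    Laplacian whose eigenvalues fall below gamma, then k-means their rows."""
    deg = A.sum(1)
    d_inv = np.where(deg > 0, deg ** -0.5, 0.0)
    L = np.eye(len(A)) - d_inv[:, None] * A * d_inv[None, :]
    lam, U = np.linalg.eigh(L)                  # eigenvalues in ascending order
    k = int((lam <= gamma).sum())
    F = U[:, :k]                                # spectral embedding of the nodes
    # minimal k-means with a naive spread-out initialization (demo only)
    cent = F[np.linspace(0, len(F) - 1, k).astype(int)]
    for _ in range(20):
        assign = ((F[:, None] - cent[None]) ** 2).sum(-1).argmin(1)
        cent = np.stack([F[assign == c].mean(0) if (assign == c).any() else cent[c]
                         for c in range(k)])
    return assign

# Two triangles joined by a single bridge edge: a clear two-cluster structure
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
patches = graph_patches(A)
print(patches[:3], patches[3:])  # nodes 0-2 and 3-5 land in different patches
```

Because only the small cut eigenvalue of the bridge falls below `gamma`, `k` equals 2 here, and k-means on the two leading eigenvectors recovers the two triangles as patches.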
66
+
67
+ Computing patch representations. When we learn representations of patches in $\mathcal{P}$ , we consider both node connections within a patch and connections between patches. Patches form a coarse graph, also referred to as a patch-level graph, by treating patches as nodes and their connections as edges. We first learn node representations using GNN layers. Let ${\mathbf{H}}_{0} = \mathbf{X}$ denote the initial representations of all nodes. Then we apply ${L}_{1}$ GNN layers to get node representations ${\mathbf{H}}_{{L}_{1}}$ .
68
+
69
+ $$
70
+ {\mathbf{H}}_{\ell } = \operatorname{GNN}\left( {\mathbf{A},\mathbf{E},{\mathbf{H}}_{\ell - 1}}\right) ,\ell = 1,\ldots ,{L}_{1} \tag{3}
71
+ $$
72
+
73
+ Here for easier discussion, we apply GNN layers to the entire graph. We have also tried to apply GNN layers within each patch only and found that the performance is similar.
74
+
75
+ Then we read out the initial patch representation by summarizing representations of nodes within this patch. Let ${\mathbf{z}}_{{k}^{\prime }}^{0}$ denote the initial patch representation, then
76
+
77
+ $$
78
+ {\mathbf{z}}_{{k}^{\prime }}^{0} = \frac{\left| {C}_{{k}^{\prime }}\right| }{\left| V\right| } \cdot \operatorname{readout}\left( {{\mathbf{h}}_{i}^{{L}_{1}} : i \in {C}_{{k}^{\prime }}}\right) ,{k}^{\prime } = 1,\ldots , k \tag{4}
79
+ $$
80
+
81
+ Here ${\mathbf{h}}_{i}^{{L}_{1}}$ is node $i$ ’s representation in ${\mathbf{H}}_{{L}_{1}}$ . We collectively denote these patch representations in a matrix ${\mathbf{Z}}_{0} = \left( {{\mathbf{z}}_{{k}^{\prime }}^{0} : {k}^{\prime } = 1,\ldots , k}\right)$ . The function readout $\left( \cdot \right)$ aggregates information from a set of vectors; our implementation uses max pooling. We use the factor $\frac{\left| {C}_{{k}^{\prime }}\right| }{\left| V\right| }$ to assign proper weights to patch representations.
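The weighted readout of Equation (4) can be sketched as follows, using max pooling as the readout and toy node embeddings:

```python
import numpy as np

def patch_readout(H, assign, k):
    """Initial patch representations (Eq. 4): max-pool node embeddings within
    each patch, weighted by the patch's share of nodes, |C_k'| / |V|."""
    n = len(H)
    Z = []
    for c in range(k):
        mask = assign == c
        Z.append((mask.sum() / n) * H[mask].max(axis=0))
    return np.stack(Z)

H = np.array([[1., 0.], [3., 2.], [0., 5.], [2., 2.]])  # toy node embeddings
assign = np.array([0, 0, 1, 1])                         # two patches
Z0 = patch_readout(H, assign, k=2)
print(Z0)  # 0.5 * [3, 2] for patch 0 and 0.5 * [2, 5] for patch 1
```

The size factor down-weights small patches so that tiny clusters do not dominate the patch-level graph.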
82
+
83
+ To further refine patch representations and encode structural information of the entire graph, we apply further GNN layers to the patch-level graph formed by patches. We first compute the adjacency matrix $\widetilde{\mathbf{A}}$ of the patch-level graph. If we convert the partition $\mathcal{P}$ to an assignment matrix $\mathbf{S} = \left( {{S}_{i,{k}^{\prime }} : i \in V,{k}^{\prime } = 1,\ldots k}\right)$ such that ${S}_{i,{k}^{\prime }} = 1\left\lbrack {i \in {C}_{{k}^{\prime }}}\right\rbrack$ , then the adjacency matrix over patches is
84
+
85
+ $$
86
+ \widetilde{\mathbf{A}} = 1\left\lbrack {\left( {{\mathbf{S}}^{\top }\mathbf{A}\mathbf{S}}\right) > 0}\right\rbrack . \tag{5}
87
+ $$
88
+
89
+ Note that $\widetilde{\mathbf{A}}$ only has connections between patches and does not maintain connection strength.
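The coarsening in Equation (5) amounts to one sparse matrix product and a thresholding; the two-triangle example below is illustrative:

```python
import numpy as np

def patch_adjacency(A, assign, k):
    """Coarse adjacency over patches (Eq. 5): A_tilde = 1[(S^T A S) > 0],
    keeping only which patches touch, not connection strength."""
    S = np.eye(k)[assign]               # assignment matrix, S[i, c] = 1[i in C_c]
    return ((S.T @ A @ S) > 0).astype(float)

# Two triangles joined by one bridge edge, segmented into two patches
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
assign = np.array([0, 0, 0, 1, 1, 1])
print(patch_adjacency(A, assign, 2))  # the two patches touch via the bridge
```

Note that the diagonal of the result is nonzero whenever a patch has internal edges; Equation (5) does not zero it out, so the patch-level GNN sees those self-connections.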
90
+
91
+ We then use ${L}_{2}$ GNN layers to refine the patch representations.
92
+
93
+ $$
94
+ {\mathbf{Z}}_{\ell } = \operatorname{GNN}\left( {\widetilde{\mathbf{A}},\mathbf{0},{\mathbf{Z}}_{\ell - 1}}\right) ,\;\ell = 1,\ldots ,{L}_{2} \tag{6}
95
+ $$
96
+
97
+ GNN layers here do not have edge features. From the last layer, we get the patch representations ${\mathbf{Z}}_{{L}_{2}}$ .
98
+
99
+ Graph representation via Transformer layers. Then we use ${L}_{3}$ Transformer layers to extract the representation of the entire graph. Here we use a learnable query vector ${\mathbf{q}}_{0}$ to "retrieve" the global representation $\mathbf{g}$ of the graph from patch representations ${\mathbf{Z}}_{{L}_{2}}$ .
100
+
101
+ $$
102
+ {\mathbf{q}}_{\ell }^{\prime } = \operatorname{MHA}\left( {{\mathbf{q}}_{\ell - 1},{\mathbf{Z}}_{{L}_{2}},{\mathbf{Z}}_{{L}_{2}}}\right) ,\;\ell = 1,\ldots ,{L}_{3} \tag{7}
103
+ $$
104
+
105
+ $$
106
+ {\mathbf{q}}_{\ell } = \operatorname{MLP}\left( {\mathbf{q}}_{\ell }^{\prime }\right) + {\mathbf{q}}_{\ell - 1},\;\ell = 1,\ldots ,{L}_{3} \tag{8}
107
+ $$
108
+
109
+ $$
110
+ \mathbf{g} = \operatorname{LN}\left( {\mathbf{q}}_{{L}_{3}}\right) \tag{9}
111
+ $$
112
+
113
+ Here $\operatorname{MHA}\left( {\cdot ,\cdot , \cdot }\right)$ is the function of a multi-head attention layer (please refer to Chp. 10 of [38]). Its three arguments are the query, key, and value. The two functions $\operatorname{MLP}\left( \cdot \right)$ and $\operatorname{LN}\left( \cdot \right)$ are respectively a multi-layer perceptron and a linear layer. Note that the patch representations ${\mathbf{Z}}_{{L}_{2}}$ are carried through without being updated; only the query token is updated to query information from the patch representations. The final learned graph representation is $\mathbf{g}$ , from which we can perform various graph-level tasks.
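A single-head NumPy sketch of the query-token readout in Equations (7)-(9) follows; the MLP residual and final linear layer are omitted, and the random projections are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_readout(q, Z, Wq, Wk, Wv):
    """Single-head version of the MHA step: a learnable query attends over the
    fixed patch representations Z; only the query side is updated."""
    Q, K, V = q @ Wq, Z @ Wk, Z @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])   # scaled dot-product attention
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V, w                          # updated query, attention weights

d = 8
Z = rng.normal(size=(3, d))                  # three patch representations
q0 = rng.normal(size=(1, d))                 # learnable query token
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
g, w = attention_readout(q0, Z, Wq, Wk, Wv)
print(g.shape, np.isclose(w.sum(), 1.0))  # (1, 8) True
```

The attention weights `w` are the quantities that give the model its interpretability: they indicate which patches the graph-level prediction attends to.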
114
+
115
+ ## 4 Theoretical Analysis
116
+
117
+ In this section, we study the theoretical properties of the proposed model. To save space, we put all proofs in the appendix.
118
+
119
+ ### 4.1 Enhancing model expressiveness with patches
120
+
121
+ On purpose we form graph patches using a clustering method that is not part of the neural network. An alternative is to learn such cluster assignments with GNNs (e.g., DiffPool [36] and MinCutPool [4]). However, cluster assignments learned by GNNs inherit the limitations of GNNs and hinder the expressiveness of the entire model.
122
+
123
+ Theorem 1. Suppose two graphs receive the same coloring by 1-WL algorithm, then DiffPool will compute the same vector representation for them.
124
+
125
+ Although DiffPool and MinCutPool claim to cluster "similar" graph nodes during pooling, these nodes may not be connected. Because of the limitation of GNNs, they may aggregate nodes that are far apart in the graph. For example, nodes in the same orbit always get the same color from the 1-WL algorithm and also the same representations from a GNN, so these nodes always have the same cluster assignment. Merging such nodes into the same cluster does not seem to capture the high-level structure of a graph.
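The 1-WL color refinement that bounds these pooling methods can be sketched directly; the classic indistinguishable pair below (a 6-cycle versus two disjoint triangles) is the standard example:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL color refinement: repeatedly relabel each node by its own color
    together with the multiset of its neighbours' colors."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        table = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: table[sig[v]] for v in adj}
    return Counter(colors.values())           # histogram of final colors

# Both graphs are 2-regular, so 1-WL never separates their nodes.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_colors(c6) == wl_colors(two_triangles))  # True: same color histogram
```

Any pooling whose cluster assignment is a function of these colors must therefore treat the two graphs identically, which is the content of Theorems 1 and 2; spectral clustering, by contrast, separates the two triangles.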
126
+
127
+ ![01963ee9-6afc-7b34-900f-7fc043203b0b_4_324_214_473_145_0.jpg](images/01963ee9-6afc-7b34-900f-7fc043203b0b_4_324_214_473_145_0.jpg)
128
+
129
+ Figure 2: Pooling methods on a pair of graphs that cannot be distinguished by the 1-WL algorithm (nodes are colored by the 1-WL algorithm).
130
+
131
+ ![01963ee9-6afc-7b34-900f-7fc043203b0b_4_824_228_661_148_0.jpg](images/01963ee9-6afc-7b34-900f-7fc043203b0b_4_824_228_661_148_0.jpg)
132
+
133
+ Figure 3: It is hard for a GNN to push signal from one graph cluster to the other, but a patch-level GNN can do so with patch representations.
134
+
135
+ Another prominent pooling method is the Graph U-Net [11], which has similar issues. We briefly introduce its calculation here. Suppose the layer input is $(\mathbf{A}, \mathbf{H})$ ; the model's pooling layer projects $\mathbf{H}$ onto a unit vector $\mathbf{p}$ and gets values $\mathbf{v} = \mathbf{H}\mathbf{p}$ for all nodes; it then chooses the top $k$ nodes with the largest values in $\mathbf{v}$ and keeps only their representations. We will show that this approach is NOT invariant to node orders.
136
+
137
+ For analysis convenience, we also consider a small variant of Graph U-Net. Instead of choosing the $k$ nodes with the top values in $\mathbf{v}$, the variant uses a threshold $\beta$ (either learnable or a hyper-parameter) to choose nodes: $\mathbf{b} = \mathbf{v} \geq \beta$. The output of the layer is then $\left( {\mathbf{A}\left\lbrack {\mathbf{b},\mathbf{b}}\right\rbrack ,\mathbf{H}\left\lbrack \mathbf{b}\right\rbrack }\right)$. We call this thresholded variant Graph U-Net-th and show that it is also bounded by the 1-WL algorithm.
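The thresholded pooling layer can be sketched in a few lines of numpy (our illustration; variable and function names are ours):

```python
import numpy as np

def graph_unet_th_pool(A, H, p, beta):
    """Sketch of the thresholded Graph U-Net pooling layer (Graph U-Net-th).

    Projects node features H onto a unit vector p, keeps nodes whose
    score v_i >= beta, and returns the induced subgraph and features."""
    p = p / np.linalg.norm(p)          # unit projection vector
    v = H @ p                          # scalar score per node
    b = v >= beta                      # boolean node mask
    return A[np.ix_(b, b)], H[b]      # (A[b, b], H[b])

# Toy example: 4-node path graph with 2-d features.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
H = np.array([[1.0, 0.0], [0.5, 0.0], [2.0, 0.0], [0.1, 0.0]])
A_pool, H_pool = graph_unet_th_pool(A, H, p=np.array([1.0, 0.0]), beta=0.5)
print(H_pool.shape)  # → (3, 2): the nodes with scores 1.0, 0.5, 2.0 survive
```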
138
+
139
+ Theorem 2. Suppose two graphs receive the same coloring by the 1-WL algorithm, then Graph U-Net-th will compute the same vector representation for them.
140
+
141
+ The two theorems strongly indicate that pooling structures learned by GNNs share the same drawback. We provide a detailed analysis of Graph U-Net in Appendix A.3.
142
+
143
+ The spectral clustering algorithm, in contrast, injects structural information into the model and has a strength that GNNs lack. By combining the two, our PatchGT helps ease the 1-WL limit on expressiveness associated with trainable pooling methods. We also prove that there exist non-isomorphic graphs that PatchGT can distinguish but the 1-WL algorithm cannot.
144
+
145
+ ### 4.2 Permutation invariance
146
+
147
+ Our model depends on the patch structure formed by the clustering algorithm, which in turn depends on the spectral decomposition of the normalized Laplacian. Note that the spectral decomposition is not unique, but we show that the clustering result is not affected by the sign ambiguity and eigenvalue multiplicities of the decomposition, so our model remains invariant to node permutations.
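As an illustration of the non-trainable clustering step, the sketch below splits a graph into two patches by the sign of the Fiedler vector of the normalized Laplacian. Although the eigenvector's sign is ambiguous, the induced partition is the same either way (this is a simplified two-patch sketch of spectral clustering, not the paper's full procedure):

```python
import numpy as np

def spectral_patches(A):
    """Two-patch split by the sign of the Fiedler vector of the
    normalized Laplacian (sketch of the non-trainable clustering step)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian
    w, U = np.linalg.eigh(L)                            # eigenvalues ascending
    fiedler = U[:, 1]                                   # second-smallest eigenvector
    return fiedler >= 0                                 # boolean patch assignment

# Two triangles joined by a single bridge edge (nodes 2-3): the minimum
# normalized cut separates the triangles, regardless of node order.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
patches = spectral_patches(A)
print(patches[:3], patches[3:])  # the two triangles land in different patches
```

Because only the sign pattern of the eigenvector matters, a global sign flip swaps the two patch labels but leaves the partition, and hence the patch structure, unchanged.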
148
+
149
+ Theorem 3. The network function of PatchGT is invariant to node permutations.
150
+
151
+ ### 4.3 Addressing information bottleneck with patch representations
152
+
153
+ Alon and Yahav [2] recently characterized the information bottleneck in GNNs through empirical methods. Here we consider this issue in the special case where a graph consists of loosely connected node clusters; note that molecular graphs often have this property. We make the first attempt to characterize the information bottleneck through theoretical analysis, and we further show that our PatchGT can partially address this issue.
154
+
155
+ For convenience of analysis, we consider a regular graph with degree $\tau$ . Suppose the node set $V$ of $G$ forms two clusters $S$ and $T : V = S \cup T, S \cap T = \varnothing$ , and there are only $m$ edges between $S$ and $T$ .
156
+
157
+ We consider the difficulty of passing signal from $S$ to $T$. Let ${f}^{\mathrm{{GNN}}}\left( \cdot \right)$ denote the network function of a GNN of $L$ layers with ReLU activation $\sigma$ as in (2), and input $\mathbf{X} = \left( {{\mathbf{x}}_{i} \in {\mathbb{R}}^{d} : i \in V}\right) \in {\mathbb{R}}^{\left| V\right| \times d}$, which contains $d$ -dimensional feature inputs to nodes in $G$. Let ${f}_{i}^{\mathrm{{GNN}}}\left( \cdot \right)$ be the output at node $i$. We can then ask: if we perturb the input to nodes in $S$, how much impact can we observe at the outputs of nodes in $T$? We need to avoid the case where the impact is amplified by scaling up network parameters: in real applications, scaling up network parameters also amplifies signals within $T$ itself, and the signal from $S$ still cannot be well received. Hence we consider the relative impact: the ratio of the impact on $T$ from $S$ over that from $T$ itself.
158
+
159
+ ![01963ee9-6afc-7b34-900f-7fc043203b0b_5_339_197_1125_271_0.jpg](images/01963ee9-6afc-7b34-900f-7fc043203b0b_5_339_197_1125_271_0.jpg)
160
+
161
+ Figure 4: Segmentation results from spectral clustering and trainable clustering.
162
+
163
+ Let $\mathbf{\alpha } \in {\mathbb{R}}^{\left| V\right| \times d}$ be some perturbation on $S$ such that ${\alpha }_{ij} \leq \epsilon$ if $i \in S$ and ${\alpha }_{ij} = 0$ otherwise. Here $\epsilon$ is the scale of the perturbation. Similarly let $\mathbf{\beta } \in {\mathbb{R}}^{\left| V\right| \times d}$ be some perturbation on $T : {\beta }_{ij} \leq \epsilon$ if $i \in T$ and ${\beta }_{ij} = 0$ otherwise. Then the impacts on node representations ${f}_{i}^{\mathrm{{GNN}}}, i \in T$ from $\mathbf{\alpha }$ and $\mathbf{\beta }$ are respectively
164
+
165
+ $$
166
+ {\delta }_{S \rightarrow T} = \mathop{\max }\limits_{\mathbf{\alpha }}\mathop{\sum }\limits_{{i \in T}}{\begin{Vmatrix}{f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X} + \mathbf{\alpha }\right) - {f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) \end{Vmatrix}}_{1} \tag{10}
167
+ $$
168
+
169
+ $$
170
+ {\delta }_{T \rightarrow T} = \mathop{\max }\limits_{\mathbf{\beta }}\mathop{\sum }\limits_{{i \in T}}{\begin{Vmatrix}{f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X} + \mathbf{\beta }\right) - {f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) \end{Vmatrix}}_{1} \tag{11}
171
+ $$
172
+
173
+ where the maxima are also taken over all learnable parameters with ${\begin{Vmatrix}{\mathbf{W}}_{1}\end{Vmatrix}}_{{L}_{1} \rightarrow {L}_{1}},{\begin{Vmatrix}{\mathbf{W}}_{2}\end{Vmatrix}}_{{L}_{1} \rightarrow {L}_{1}} \leq 1$ as in (2). Then we have the following proposition bounding the ratio ${\delta }_{S \rightarrow T}/{\delta }_{T \rightarrow T}$ .
174
+
175
+ Proposition 1. Given a $\tau$ -regular graph $G$ , a node subset $S$ with its complement $T$ such that there are only $m$ edges between $S$ and $T$ , and an $L$ -layer GNN, it holds that
176
+
177
+ $$
178
+ \frac{{\delta }_{S \rightarrow T}}{{\delta }_{T \rightarrow T}} \leq \frac{2mL}{\left| T\right| } \tag{12}
179
+ $$
180
+
181
+ The proposition indicates that when there is a small graph cut between two clusters, it forms an information bottleneck in a GNN: the network needs more layers to pass signal from one group to the other. The bound is still conservative: if the signal is extracted in the middle layers of the network, then passing it on is even harder. The proposition is illustrated in Figure 3.
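As a quick numeric reading of the bound in (12) (the values below are illustrative, not from the paper's experiments):

```python
def bottleneck_bound(m, L, T_size):
    """Upper bound on delta_{S->T} / delta_{T->T} from Proposition 1, eq. (12)."""
    return 2 * m * L / T_size

# A single bridge edge (m = 1) into a 50-node cluster T: even a 5-layer GNN
# can pass at most 20% as much signal from S as T generates internally.
print(bottleneck_bound(m=1, L=5, T_size=50))  # → 0.2
```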
182
+
183
+ In our PatchGT model, communication can happen on the coarse graph, which partially addresses this issue. The coarse graph $\widetilde{\mathbf{A}}$ consists of two nodes (we still denote them by $S, T$ ), and there is an edge between $S$ and $T$ . From the output ${f}^{\mathrm{{GNN}}}$ , we construct the patch representations $\left( {{\mathbf{z}}_{S},{\mathbf{z}}_{T}}\right) = \left( {\frac{1}{\left| V\right| }\mathop{\sum }\limits_{{i \in S}}{f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) ,\frac{1}{\left| V\right| }\mathop{\sum }\limits_{{i \in T}}{f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) }\right) \in {\mathbb{R}}^{2 \times d}$ . Then we apply a GNN layer to get node representations on the coarse graph $\left( {{g}_{S}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) ,{g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) }\right) \in {\mathbb{R}}^{2 \times d}$ :
184
+
185
+ $$
186
+ {g}_{S}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) = \sigma \left( {{\mathbf{z}}_{S}{\mathbf{W}}_{1}^{\top } + {\mathbf{z}}_{T}{\mathbf{W}}_{2}^{\top }}\right) ,\;{g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) = \sigma \left( {{\mathbf{z}}_{T}{\mathbf{W}}_{1}^{\top } + {\mathbf{z}}_{S}{\mathbf{W}}_{2}^{\top }}\right) , \tag{13}
187
+ $$
188
+
189
+ where ${\mathbf{W}}_{1},{\mathbf{W}}_{2} \in {\mathbb{R}}^{d \times d}$ are learnable parameters. To consider the impact of $\mathbf{\alpha }$ on our PatchGT, let
190
+
191
+ $$
192
+ {\eta }_{S \rightarrow T} = \mathop{\max }\limits_{\mathbf{\alpha }}{\begin{Vmatrix}{g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X} + \mathbf{\alpha }\right) - {g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) \end{Vmatrix}}_{1} \tag{14}
193
+ $$
194
+
195
+ $$
196
+ {\eta }_{T \rightarrow T} = \mathop{\max }\limits_{\mathbf{\beta }}{\begin{Vmatrix}{g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X} + \mathbf{\beta }\right) - {g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) \end{Vmatrix}}_{1}, \tag{15}
197
+ $$
198
+
199
+ Then we have the following result on the ratio ${\eta }_{S \rightarrow T}/{\eta }_{T \rightarrow T}$ .
200
+
201
+ Theorem 4. The ratio $\frac{{\eta }_{S \rightarrow T}}{{\eta }_{T \rightarrow T}}$ can be arbitrarily close to 1 in a PatchGT model.
202
+
203
+ This is because $S$ and $T$ are direct neighbors in the coarse graph, so ${\alpha }_{S}$ can directly impact ${\mathbf{z}}_{S}$ , which in turn impacts ${g}_{T}^{\mathrm{{GNN}}}$ through messages passed by GNN layers or the attention mechanism of Transformer layers. The right part of Figure 3 shows that a patch representation can include signals from the other node cluster.
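The mechanism behind Theorem 4 can be seen in a few lines: on the two-node coarse graph of (13), a perturbation of $\mathbf{z}_{S}$ reaches ${g}_{T}^{\mathrm{GNN}}$ after a single layer. The identity weights and toy vectors below are our illustrative choices, not learned values:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def coarse_layer(z_S, z_T, W1, W2):
    """One GNN layer on the two-node coarse graph, following eq. (13)."""
    g_S = relu(z_S @ W1.T + z_T @ W2.T)
    g_T = relu(z_T @ W1.T + z_S @ W2.T)
    return g_S, g_T

d = 4
W1 = W2 = np.eye(d)                    # illustrative weights, not learned ones
z_S = np.array([1.0, 0.0, 0.0, 0.0])   # patch representation of S
z_T = np.array([0.0, 1.0, 0.0, 0.0])   # patch representation of T

_, g_T = coarse_layer(z_S, z_T, W1, W2)
_, g_T_pert = coarse_layer(z_S + 0.1, z_T, W1, W2)  # perturb S only
print(np.abs(g_T - g_T_pert).sum())  # → 0.4: S reaches T in a single layer
```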
204
+
205
+ ### 4.4 Comparison of different segmentation methods
206
+
207
+ Prior work includes many hierarchical pooling models [4, 11, 12, 18, 25, 36]. The most important difference from the proposed method is that their pooling/segmentation is trainable.
208
+
209
+ In particular, the pooling is computed from node representations learned by GNNs. In Theorem 1 and Theorem 2, we prove that such trainable clustering methods compute the same representations for nodes that the 1-WL algorithm cannot differentiate. This causes two serious problems for graph segmentation: first, nodes with the same representation are assigned to the same cluster even if they are not connected to each other; second, a cluster can grow overly large because nodes far away from each other end up in the same cluster.
210
+
211
+ Here we compare two segmentation results: one from spectral clustering and one from Memory-based graph networks [1], a typical trainable clustering method. In the first case, we find that the nodes in the blue cluster from trainable clustering are not connected. Building patch representations by aggregating such disconnected nodes will definitely hurt performance. The same observation applies to other hierarchical pooling methods such as DiffPool, EigenPool, and MinCutPool.
212
+
213
+ In the second case, spectral clustering segments the graph by minimum cuts, which helps resolve the information bottleneck between patches. The Memory-based graph network, however, clusters the two benzene rings together, which makes it difficult for the model to detect the existence of these two rings.
214
+
215
+ ## 5 Empirical Study
216
+
217
+ In this section, we evaluate the effectiveness of PatchGT through experiments.
218
+
219
+ Datasets. We benchmark the performance of PatchGT on several commonly studied graph-level prediction datasets. The first four are from the Open Graph Benchmark (OGB) [13]: ogbg-molhiv, ogbg-molbace, ogbg-molclintox, and ogbg-molsider. These tasks predict molecular attributes, and the evaluation metric is ROC-AUC (%). The second group of six datasets is from the TU datasets [23]: DD, MUTAG, PROTEINS, PTC-MR, ENZYMES, and Mutagenicity. Each dataset contains one classification task for molecules, and the evaluation metric is accuracy (%) on all six datasets. The statistics of the datasets are summarized in Appendix A.11.
220
+
221
+ ### 5.1 Quantitative evaluation
222
+
223
+ Table 1: Results (%) on OGB datasets
224
+
225
+ <table><tr><td/><td>ogbg-molhiv</td><td>ogbg-molbace</td><td>ogbg-molclintox</td><td>ogbg-molsider</td></tr><tr><td>GCN +VN</td><td>${75.99} \pm {1.19}$</td><td>${71.44} \pm {4.01}$</td><td>${88.55} \pm {2.09}$</td><td>${59.84} \pm {1.54}$</td></tr><tr><td>GIN + VN</td><td>77.07±1.49</td><td>${76.41} \pm {2.68}$</td><td>${84.06} \pm {3.84}$</td><td>${57.75} \pm {1.14}$</td></tr><tr><td>Deep LRP</td><td>77.19±1.40</td><td>-</td><td>-</td><td>-</td></tr><tr><td>PNA</td><td>${79.05} \pm {1.32}$</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Nested GIN</td><td>${78.34} \pm {1.86}$</td><td>${74.33} \pm {1.89}$</td><td>${86.35} \pm {1.27}$</td><td>${61.2} \pm {1.15}$</td></tr><tr><td>GRAPHSNN +VN</td><td>79.72±1.83</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Graphormer (pre-trained)</td><td>80.51±0.53</td><td>-</td><td>-</td><td>-</td></tr><tr><td>PatchGT-GCN</td><td>${80.22} \pm {0.84}$</td><td>${86.44} \pm {1.92}$</td><td>$\mathbf{{92.21} \pm {1.35}}$</td><td>${65.21} \pm {0.87}$</td></tr><tr><td>PatchGT-GIN</td><td>${79.99} \pm {1.21}$</td><td>${84.08} \pm {2.03}$</td><td>${86.75} \pm {1.04}$</td><td>${64.90} \pm {0.92}$</td></tr><tr><td>PatchGT-DeeperGCN</td><td>${78.13} \pm {1.89}$</td><td>88.31±1.87</td><td>${89.02} \pm {1.21}$</td><td>65.46±1.03</td></tr></table>
226
+
227
+ Baselines. In this section, we compare the performance of PatchGT against several baselines, including GCN [16], GIN [33], as well as the recent Nested Graph Neural Networks [40] and GraphSNN [31]. To compare with learnable pooling methods, we also include DiffPool [36], MinCutPool [4], Graph U-Nets [11], and EigenGCN [21] as baselines for the TU datasets. We also include the Graphormer model, but note that Graphormer needs large-scale pre-training and cannot be easily applied to a wider range of datasets. We further compare our model with other transformer-based models such as U2GNN [24] and SEG-BERT [39].
228
+
229
+ Settings. We search model hyper-parameters such as the eigenvalue threshold, the learning rate, and the number of graph neural network layers on the validation set. Each OGB dataset has its own data split of training, validation, and test sets. We run ten fold cross-validation on each TU dataset. In each fold, one-tenth of the data is used as the test set, one-tenth is used as the validation set, and the rest is used as training. For the detailed search space, please refer to Appendix A.12.
230
+
231
+ Table 2: Results (%) on TU datasets
232
+
233
+ <table><tr><td/><td>DD</td><td>MUTAG</td><td>PROTEINS</td><td>PTC-MR</td><td>ENZYMES</td><td>Mutagenicity</td></tr><tr><td>GCN</td><td>${71.6} \pm {2.8}$</td><td>73.4±10.8</td><td>${71.7} \pm {4.7}$</td><td>${56.4} \pm {7.1}$</td><td>50.17</td><td>-</td></tr><tr><td>GraphSAGE</td><td>${71.6} \pm {3.0}$</td><td>${74.0} \pm {8.8}$</td><td>${71.2} \pm {5.2}$</td><td>${57.0} \pm {5.5}$</td><td>54.25</td><td>-</td></tr><tr><td>GIN</td><td>${70.5} \pm {3.9}$</td><td>${84.5} \pm {8.9}$</td><td>${70.6} \pm {4.3}$</td><td>${51.2} \pm {9.2}$</td><td>59.6</td><td>-</td></tr><tr><td>GAT</td><td>71.0±4.4</td><td>73.9±10.7</td><td>${72.0} \pm {3.3}$</td><td>${57.0} \pm {7.3}$</td><td>58.45</td><td>-</td></tr><tr><td>DiffPool</td><td>${79.3} \pm {2.4}$</td><td>-</td><td>${72.7} \pm {3.8}$</td><td>-</td><td>62.53</td><td>77.6±2.7</td></tr><tr><td>MinCutPool</td><td>${80.8} \pm {2.3}$</td><td>-</td><td>${76.5} \pm {2.6}$</td><td>-</td><td>-</td><td>${79.9} \pm {2.1}$</td></tr><tr><td>Nested GCN</td><td>${76.3} \pm {3.8}$</td><td>${82.9} \pm {11.1}$</td><td>${73.3} \pm {4.0}$</td><td>${57.3} \pm {7.7}$</td><td>${31.2} \pm {6.7}$</td><td>-</td></tr><tr><td>Nested GIN</td><td>77.8±3.9</td><td>${87.9} \pm {8.2}$</td><td>${73.9} \pm {5.1}$</td><td>${54.1} \pm {7.7}$</td><td>${29.0} \pm {8.0}$</td><td>-</td></tr><tr><td>DiffPool-NOLP</td><td>79.98</td><td>-</td><td>76.22</td><td>-</td><td>61.95</td><td>-</td></tr><tr><td>SEG-BERT</td><td>-</td><td>${90.8} \pm {6.5}$</td><td>77.1±4.2</td><td>-</td><td>-</td><td>-</td></tr><tr><td>U2GNN</td><td>${80.2} \pm {1.5}$</td><td>${89.9} \pm {3.6}$</td><td>${78.5} \pm {4.07}$</td><td>-</td><td>-</td><td>-</td></tr><tr><td>EigenGCN</td><td>78.6</td><td>-</td><td>76.6</td><td>-</td><td>64.5</td><td>-</td></tr><tr><td>Graph U-Nets</td><td>82.43</td><td>-</td><td>77.68</td><td>-</td><td>-</td><td>-</td></tr><tr><td>PatchGT-GCN</td><td>$\mathbf{{83.3}} \pm {3.1}$</td><td>${94.7} \pm {3.5}$</td><td>${80.3} \pm {2.5}$</td><td>${62.5} \pm {4.1}$</td><td>${73.3} \pm {3.3}$</td><td>${78.3} \pm {2.2}$</td></tr><tr><td>PatchGT-GIN</td><td>79.6±3.3</td><td>${89.4} \pm {3.2}$</td><td>79.5±3.1</td><td>${58.4} \pm {2.9}$</td><td>${70.0} \pm {3.5}$</td><td>${80.4} \pm {1.4}$</td></tr><tr><td>PatchGT-DeeperGCN</td><td>${76.1} \pm {2.8}$</td><td>${89.4} \pm {3.7}$</td><td>${77.5} \pm {3.4}$</td><td>${60.0} \pm {2.6}$</td><td>${56.6} \pm {3.1}$</td><td>$\mathbf{{80.6}} \pm {1.5}$</td></tr></table>
234
+
235
+ ![01963ee9-6afc-7b34-900f-7fc043203b0b_7_310_825_1168_293_0.jpg](images/01963ee9-6afc-7b34-900f-7fc043203b0b_7_310_825_1168_293_0.jpg)
236
+
237
+ Figure 5: Analysis of the key design for the proposed PatchGT. All results are based on PatchGT GCN. In the left figure, we show how changing the threshold for eigenvalues affects performance on the ogbg-molclintox and PROTEINS datasets; The middle figure shows the model performances with the removal of patch-GNN or Transformer (replaced by mean pool) on DD and ogbg-molhiv datasets; The right figure shows the effect of the different readout functions for patch representations.
238
+
239
+ Results. Table 1 and Table 2 summarize the performance of PatchGT and other baselines on OGB datasets and TU datasets. We take values from the original papers and the OGB website, except the performance values of Nested GIN on the last three OGB datasets, which we obtained by running Nested GIN ourselves. We also tried to run the contemporary method GRAPHSNN+VN on the other three OGB datasets, but the official implementation was not available at the submission of this work.
240
+
241
+ From the results, we see that the proposed method performs well on almost all datasets and often outperforms competing methods by a large margin. On the ogbg-molhiv dataset, the performance of PatchGT with GCN is only slightly worse than Graphormer, but note that Graphormer needs large-scale pre-training, which limits its applications.
242
+
243
+ PatchGT with GCN outperforms the three baselines on the other three OGB datasets, and the improvements are significant. PatchGT with GCN also outperforms baselines on four out of six TU datasets; where it does not outperform all baselines, its performance is only slightly worse than the best. Similarly, the two other configurations, PatchGT-GIN and PatchGT-DeeperGCN, also perform very well on these datasets.
244
+
245
+ ### 5.2 Ablation study
246
+
247
+ We perform ablation studies to check how different configurations of our model affect its performance. The results are shown in Figure 5.
248
+
249
+ Effect of eigenvalue threshold. The eigenvalue threshold $\gamma$ determines how many patches a graph has after segmentation. Generally speaking, a larger $\gamma$ yields more and smaller patches. When $\gamma$ is large enough, the number of patches $k$ equals the number of nodes $\left| V\right|$ in the graph, and the Transformer effectively works at the node level. When $\gamma$ is 0, the whole graph is treated as one patch, and the model reduces to a GNN with pooling. The left figure shows that there is a sweet spot (depending on the dataset) for the threshold, which means that using patches is a better choice than not using patches.
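The dependence of the patch count $k$ on $\gamma$ can be sketched directly from the Laplacian spectrum, counting eigenvalues below the threshold (the 8-node cycle below is our illustrative example):

```python
import numpy as np

def num_patches(A, gamma):
    """Number of patches k = count of normalized-Laplacian eigenvalues below gamma."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    w = np.linalg.eigvalsh(L)           # eigenvalues in ascending order
    return int((w < gamma).sum())

# 8-node cycle: raising the threshold yields more (smaller) patches,
# up to k = |V| when gamma exceeds the whole spectrum.
A = np.zeros((8, 8))
for i in range(8):
    A[i, (i + 1) % 8] = A[(i + 1) % 8, i] = 1.0
print([num_patches(A, g) for g in (0.05, 0.5, 1.5, 2.5)])  # → [1, 3, 5, 8]
```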
250
+
251
+ ![01963ee9-6afc-7b34-900f-7fc043203b0b_8_301_220_1185_194_0.jpg](images/01963ee9-6afc-7b34-900f-7fc043203b0b_8_301_220_1185_194_0.jpg)
252
+
253
+ Figure 6: Attention visualization of PatchGT on ogbg-molhiv molecules. The second and fourth figures show the attention weights of query tokens on the node patches for the corresponding molecules, which are in the first and third figures. The molecule in the first figure does not inhibit HIV virus, yet the molecule in the third figure does.
254
+
255
+ Effect of GNN layer on the coarse graph and Transformer layers. This ablation study removes either patch-level GNN layers or Transformer layers to check which part of the architecture is important for the model performance. From the middle plot in Figure 5, we see that both types of layers are useful, and Transformer layers are more useful. This is another piece of evidence that PatchGT can combine the strengths of different models.
256
+
257
+ Comparison of readout functions. We compare the performance of the PatchGT model using different readout functions when aggregating node representations at each patch in Equation (4). In the right figure, we observe a remarkable influence of the readout function on performance. Empirical results indicate that max-pooling is the optimal choice under most circumstances.
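The readout choices compared here can be sketched as a per-patch aggregation over node features (our illustration; the function and variable names are ours):

```python
import numpy as np

def patch_readout(H, assign, mode="max"):
    """Aggregate node features H into one vector per patch (readout of eq. (4)).

    assign[i] is the patch index of node i; mode is 'mean', 'max', or 'sum'."""
    ops = {"mean": np.mean, "max": np.max, "sum": np.sum}
    return np.stack([ops[mode](H[assign == c], axis=0) for c in np.unique(assign)])

H = np.array([[1.0, 2.0], [3.0, 0.0], [0.0, 1.0]])
assign = np.array([0, 0, 1])               # nodes 0,1 in patch 0; node 2 in patch 1
print(patch_readout(H, assign, "max"))      # patch 0 → [3., 2.]; patch 1 → [0., 1.]
print(patch_readout(H, assign, "mean"))     # patch 0 → [2., 1.]; patch 1 → [0., 1.]
```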
258
+
259
+ ### 5.3 Understanding the attention
260
+
261
+ Besides improving learning performance, we are also interested in understanding how the attention mechanism helps the model identify graph properties. We train the PatchGT model on the ogbg-molhiv dataset and visualize the attention weights between query tokens and each patch. Interestingly, the attention concentrates only on certain chemical motifs such as ${\mathrm{{ClO}}}_{3}$ and ${\mathrm{{CON}}}_{2}$ and ignores other very common motifs such as benzene rings. Notice that in the first molecule, the two benzene rings are connected by -C-C-, yet the model does not pay any attention to this part. The two rings in the second molecule are connected by -S-S-; this time, the model does attend to this part. This indicates that the Transformer can identify which motifs are informative and which are common. Such a property offers better model interpretability compared to traditional global pooling: the model not only makes accurate predictions but also provides some insight into why decisions are made. In the two examples shown above, we can start from the motifs ${\mathrm{{SO}}}_{3}$ and -S-S- to look for structures meaningful for the classification problem.
262
+
263
+ ## 6 Conclusion and Limitations
264
+
265
+ In this work, we show that graph learning models benefit from modeling patches on graphs, particularly when combined with Transformer layers. We propose PatchGT, a new learning model that uses non-trainable clustering to obtain graph patches and learns graph representations from patch representations. It combines the strengths of GNN layers and Transformer layers, and we theoretically prove that it helps mitigate the information bottleneck of graphs and the limitations of trainable clustering. It shows superior performance on a list of graph learning tasks. Based on graph patches, the Transformer layers also provide a good level of interpretability of model predictions.
266
+
267
+ However, we tested our model mostly on chemical datasets. It is unclear whether the model still performs well when input graphs do not have clear cluster structures.
268
+
269
+ References
270
+
271
+ [1] Amir Hosein Khas Ahmadi. "Memory-based graph networks". PhD thesis. University of Toronto (Canada), 2020. 2, 7
272
+
273
+ [2] Uri Alon and Eran Yahav. "On the bottleneck of graph neural networks and its practical implications". In: arXiv preprint arXiv:2006.05205 (2020). 1, 5
274
+
275
+ [3] Jinheon Baek, Minki Kang, and Sung Ju Hwang. "Accurate learning of graph representations with graph multiset pooling". In: arXiv preprint arXiv:2102.11533 (2021). 2
276
+
277
+ [4] Filippo Maria Bianchi, Daniele Grattarola, and Cesare Alippi. "Spectral Clustering with Graph Neural Networks for Graph Pooling". In: Proceedings of the 37th International Conference on Machine Learning. Ed. by Hal Daumé III and Aarti Singh. Vol. 119. Proceedings of Machine Learning Research. PMLR, 13-18 Jul 2020, pp. 874-883. 2, 4, 6, 7
278
+
279
+ [5] Cristian Bodnar et al. "Weisfeiler and Lehman go cellular: CW networks". In: Advances in Neural Information Processing Systems 34 (2021), pp. 2625-2640. 1
280
+
281
+ [6] Dexiong Chen, Leslie O'Bray, and Karsten Borgwardt. "Structure-Aware Transformer for Graph Representation Learning". In: arXiv preprint arXiv:2202.03036 (2022). 1, 2
282
+
283
+ [7] D.J. Cook and L.B. Holder. Mining Graph Data. Wiley, 2006. ISBN: 9780470073032. URL: https://books.google.com/books?id=bHGy0%5C_HOg8QC.1
284
+
285
+ [8] James Demmel, Ioana Dumitriu, and Olga Holtz. "Fast linear algebra is stable". In: Numerische Mathematik 108.1 (2007), pp. 59-91. 22
286
+
287
+ [9] Alexey Dosovitskiy et al. "An image is worth ${16} \times {16}$ words: Transformers for image recognition at scale". In: arXiv preprint arXiv:2010.11929 (2020). 1, 2, 19
288
+
289
+ [10] Charbel Farhat. Dimensional Reduction of Highly Nonlinear Multiscale Models Using Most Appropriate Local Reduced-Order Bases. Tech. rep. LELAND STANFORD JUNIOR UNIV CA STANFORD United States, 2020. 22
290
+
291
+ [11] Hongyang Gao and Shuiwang Ji. "Graph u-nets". In: international conference on machine learning. PMLR. 2019, pp. 2083-2092. 2, 5-7, 12
292
+
293
+ [12] Daniele Grattarola et al. "Understanding Pooling in Graph Neural Networks". In: arXiv preprint arXiv:2110.05292 (2021). 2, 6
294
+
295
+ [13] Weihua Hu et al. "Open graph benchmark: Datasets for machine learning on graphs". In: Advances in neural information processing systems 33 (2020), pp. 22118-22133. 7, 19
296
+
297
+ [14] Dejun Jiang et al. "Could graph neural networks learn better molecular representation for drug discovery? A comparison study of descriptor-based and graph-based models". In: Journal of cheminformatics 13.1 (2021), pp. 1-23. 1
298
+
299
+ [15] Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, and Sivanesan Sangeetha. "Ammus: A survey of transformer-based pretrained models in natural language processing". In: arXiv preprint arXiv:2108.05542 (2021). 2
300
+
301
+ [16] Thomas N Kipf and Max Welling. "Semi-supervised classification with graph convolutional networks". In: arXiv preprint arXiv:1609.02907 (2016). 7
302
+
303
+ [17] Devin Kreuzer et al. "Rethinking graph transformers with spectral attention". In: Advances in Neural Information Processing Systems 34 (2021). 1, 2
304
+
305
+ [18] Junhyun Lee, Inyeop Lee, and Jaewoo Kang. "Self-attention graph pooling". In: International conference on machine learning. PMLR. 2019, pp. 3734-3743. 2, 6
306
+
307
+ [19] Derek Lim et al. "Sign and Basis Invariant Networks for Spectral Graph Representation Learning". In: arXiv preprint arXiv:2202.13013 (2022). 1
308
+
309
+ [20] Ze Liu et al. "Swin transformer: Hierarchical vision transformer using shifted windows". In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021, pp. 10012-10022. 1, 19
310
+
311
+ [21] Yao Ma et al. "Graph convolutional networks with eigenpooling". In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019, pp. 723-731. 7
312
+
313
+ [22] Grégoire Mialon et al. "Graphit: Encoding graph structure in transformers". In: arXiv preprint arXiv:2106.05667 (2021). 1, 2
314
+
315
+ [23] Christopher Morris et al. "TUDataset: A collection of benchmark datasets for learning with graphs". In: ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020). 2020. arXiv: 2007.08663. URL: www.graphlearning.io. 7, 19
316
+
317
+ [24] Dai Quoc Nguyen, Tu Dinh Nguyen, and Dinh Phung. "Universal graph transformer self-attention networks". In: arXiv preprint arXiv:1909.11855 (2019). 7
318
+
319
+ [25] Emmanuel Noutahi et al. "Towards interpretable sparse graph representation learning with laplacian pooling". In: arXiv preprint arXiv:1905.11577 (2019). 2, 6
320
+
321
+ [26] Hoang Nt and Takanori Maehara. "Revisiting graph neural networks: All we have is low-pass filters". In: arXiv preprint arXiv:1905.09550 (2019). 1
322
+
323
+ [27] Yousef Saad. Iterative methods for sparse linear systems. SIAM, 2003. 22
324
+
325
+ [28] Jianbo Shi and Jitendra Malik. "Normalized cuts and image segmentation". In: IEEE Transactions on pattern analysis and machine intelligence 22.8 (2000), pp. 888-905. 3
326
+
327
+ [29] Asher Trockman and J Zico Kolter. "Patches Are All You Need?" In: arXiv preprint arXiv:2201.09792 (2022). 1, 2
328
+
329
+ [30] Ashish Vaswani et al. "Attention is all you need". In: Advances in neural information processing systems 30 (2017). 16
330
+
331
+ [31] Asiri Wijesinghe and Qing Wang. "A New Perspective on 'How Graph Neural Networks Go Beyond Weisfeiler-Lehman?'". In: International Conference on Learning Representations. 2021. 7
332
+
333
+ [32] Zonghan Wu et al. "A comprehensive survey on graph neural networks". In: IEEE transactions on neural networks and learning systems 32.1 (2020), pp. 4-24. 1
334
+
335
+ [33] Keyulu Xu et al. "How powerful are graph neural networks?" In: arXiv preprint arXiv:1810.00826 (2018). 1, 2, 7, 12
336
+
337
+ [34] Pinar Yanardag and SVN Vishwanathan. "Deep graph kernels". In: Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining. 2015, pp. 1365-1374. 1
338
+
339
+ [35] Chengxuan Ying et al. "Do Transformers Really Perform Badly for Graph Representation?" In: Advances in Neural Information Processing Systems 34 (2021). 1, 2, 21
340
+
341
+ [36] Zhitao Ying et al. "Hierarchical graph representation learning with differentiable pooling". In: Advances in neural information processing systems 31 (2018). 2, 4, 6, 7
342
+
343
+ [37] Habil Zare et al. "Data reduction for spectral clustering to analyze high throughput flow cytometry data". In: BMC bioinformatics 11.1 (2010), pp. 1-16. 3
344
+
345
+ [38] Aston Zhang et al. "Dive into Deep Learning". In: arXiv preprint arXiv:2106.11342 (2021). 4
346
+
347
+ [39] Jiawei Zhang. "Segmented graph-bert for graph instance modeling". In: arXiv preprint arXiv:2002.03283 (2020). 7
348
+
349
+ [40] Muhan Zhang and Pan Li. "Nested Graph Neural Networks". In: Advances in Neural Information Processing Systems 34 (2021). 7
350
+
351
+ ## A Appendix
352
+
353
+ ### A.1 Proof of Theorem 1
354
+
355
+ We prove that DiffPool cannot distinguish graphs that are colored in the same way by the 1-WL algorithm.
356
+
357
+ Proof. The functional form of a pooling layer in DiffPool is
358
+
359
+ $$
360
+ {\mathbf{H}}^{\prime } = {\mathbf{S}}^{\top }\mathbf{A}\mathbf{S}{\mathbf{S}}^{\top }\mathbf{H},\;\mathbf{S} = {\operatorname{gnn}}_{c}\left( {\mathbf{A},\mathbf{X}}\right) ,\;\mathbf{H} = {\operatorname{gnn}}_{r}\left( {\mathbf{A},\mathbf{X}}\right) \tag{16}
361
+ $$
362
+
363
+ Here ${\operatorname{gnn}}_{c}\left( {\cdot , \cdot }\right)$ learns a cluster assignment $\mathbf{S}$ of all nodes in the graph, and ${\operatorname{gnn}}_{r}\left( {\cdot , \cdot }\right)$ learns node representations.
364
+
365
+ Note that ${\operatorname{gnn}}_{r}$ has at most the power of the 1-WL algorithm [33]: two nodes must get the same representation when they have the same color in the 1-WL coloring result. We use an indicator matrix $\mathbf{C}$ to represent the 1-WL coloring of the graph, that is, node $i$ is colored $j$ if ${C}_{i, j} = 1$ ; then we can write
366
+
367
+ $$
368
+ \mathbf{S} = \mathbf{{CB}} \tag{17}
369
+ $$
370
+
371
+ Here the $j$ -th row of $\mathbf{B}$ denotes the vector representation learned for color $j$ .
372
+
373
+ If two graphs represented by $\mathbf{A}$ and $\mathbf{\Lambda }$ cannot be distinguished by the 1-WL algorithm, then they get the same coloring matrix $\mathbf{C}$ (subject to some node permutation that does not affect our analysis here). Now we show that:
374
+
375
+ $$
376
+ {\mathbf{C}}^{\top }\mathbf{{AC}} = {\mathbf{C}}^{\top }\mathbf{{\Lambda C}} \tag{18}
377
+ $$
378
+
379
+ Let us compare the matrices on the two sides of the equation at an arbitrary entry $(k, t)$ . Let ${\alpha }_{k}$ and ${\alpha }_{t}$ denote the nodes colored $k$ and $t$ ; then the entry at $(k, t)$ is $\mathop{\sum }\limits_{{i \in {\alpha }_{k}}}\mathop{\sum }\limits_{{j \in {\alpha }_{t}}}{A}_{i, j}$ , which is the count of edges with one incident node colored $k$ and the other incident node colored $t$ . Since the coloring is obtained by the 1-WL algorithm, each node $i \in {\alpha }_{k}$ has exactly the same number of neighbors colored $t$ . The number of nodes in color $k$ and the number of neighbors in color $t$ are exactly the same for $\mathbf{\Lambda }$ , because $\mathbf{\Lambda }$ receives the same coloring as $\mathbf{A}$ . Therefore, $\mathop{\sum }\limits_{{i \in {\alpha }_{k}}}\mathop{\sum }\limits_{{j \in {\alpha }_{t}}}{A}_{i, j} = \mathop{\sum }\limits_{{i \in {\alpha }_{k}}}\mathop{\sum }\limits_{{j \in {\alpha }_{t}}}{\Lambda }_{i, j}$ , and (18) holds.
380
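As a concrete check of equation (18), the following self-contained sketch (our illustration, not from the paper; the graph choices and function names are ours) runs 1-WL color refinement on two classic 1-WL-indistinguishable graphs, the 6-cycle and the disjoint union of two triangles, builds the indicator matrix $\mathbf{C}$, and verifies $\mathbf{C}^{\top }\mathbf{A}\mathbf{C} = \mathbf{C}^{\top }\mathbf{\Lambda }\mathbf{C}$:

```python
import numpy as np

def wl_colors(A, iters=None):
    """1-WL color refinement; returns an integer color per node."""
    n = A.shape[0]
    colors = [0] * n
    for _ in range(iters or n):
        sigs = [(colors[i], tuple(sorted(colors[j] for j in range(n) if A[i, j])))
                for i in range(n)]
        palette = {s: c for c, s in enumerate(sorted(set(sigs)))}
        new = [palette[s] for s in sigs]
        if new == colors:
            break
        colors = new
    return colors

def cycle(n):
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

A = cycle(6)                                        # 6-cycle
Lam = np.block([[cycle(3), np.zeros((3, 3), int)],
                [np.zeros((3, 3), int), cycle(3)]])  # two disjoint triangles

# Both graphs are 2-regular, so 1-WL gives every node one shared color,
# and the per-graph palettes trivially align.
cols_A, cols_L = wl_colors(A), wl_colors(Lam)
k = len(set(cols_A))
C = np.eye(k, dtype=int)[cols_A]                    # |V| x k indicator matrix
same = np.array_equal(C.T @ A @ C, C.T @ Lam @ C)
print(same)
```

Here $\mathbf{C}^{\top }\mathbf{A}\mathbf{C}$ reduces to the single entry counting (twice) the 6 edges in each graph, so the two products agree even though the graphs are non-isomorphic.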
+
381
+ At the same time, if two graphs cannot be distinguished by 1-WL, they have the same node representations $\mathbf{H}$ , and hence the same ${\mathbf{H}}^{\prime }$ .
382
+
383
+ ### A.2 Proof of Theorem 2
384
+
385
+ We first prove a lemma.
386
+
387
+ Lemma 1. Suppose two graphs represented by $\mathbf{A}$ and $\mathbf{\Lambda }$ obtain the same coloring from the 1-WL algorithm, then
388
+
389
+ i) the resultant two graphs from removal of nodes in the same color still get the same coloring by the 1-WL algorithm; and
390
+
391
+ ii) the two multigraphs represented by ${\mathbf{A}}^{\ell }$ and ${\mathbf{\Lambda }}^{\ell }$ still get the same coloring by the 1-WL algorithm.
392
+
393
+ Here ${\mathbf{A}}^{\ell }$ and ${\mathbf{\Lambda }}^{\ell }$ are the $\ell$ -th power of the two adjacency matrices, and they represent multigraphs that may have self-loops and parallel edges. The 1-WL algorithm is still valid over graphs with self-loops and multi-edges. A 1-WL style GNN defined in Section 3.1 or [11] is still bounded by the 1-WL algorithm on such multigraphs.
394
+
395
+ Proof. i) We first consider updating the 1-WL coloring when the nodes in one color are removed. Suppose we have a stable coloring of the graph represented by $\mathbf{A}$ . Let ${\alpha }_{t}$ and ${\alpha }_{r}$ denote the groups of nodes in colors $t$ and $r$ respectively. We also assume each node in color $r$ has $t$ in its color set; if there is no such case, then we can simply remove the nodes in a color and obtain a stable 1-WL coloring.
396
+
397
+ Suppose we remove the nodes in color $t$ from both graphs. Note that all nodes in ${\alpha }_{r}$ have the same number of neighbors in color $t$ . We update the color set of each $i \in {\alpha }_{r}$ by removing color $t$ from it; then all nodes in ${\alpha }_{r}$ still share one color. Therefore, removing color $t$ from all relevant color groups gives at least a stable coloring, which, however, might not be the coarsest.
398
+
399
+ Then we merge some colors when nodes share the same color set. If a node in color $r$ has the same color set as a node in color ${r}^{\prime }$ , then we assign the same color to both nodes in colors $r$ and ${r}^{\prime }$ . We run merging steps until no nodes in different colors share the same color set, then the coloring is a stable coloring of the graph, and the resultant coloring of the graph can be viewed as the 1-WL coloring of the graph.
400
+
401
+ In the procedure above, the step of removing a color, and the steps of merging colors directly operate on nodes’ color sets. Since nodes in $\mathbf{A}$ and nodes in $\mathbf{\Lambda }$ have the same color sets, therefore, they will have the same color sets after color updates.
402
+
403
+ The update procedure above runs purely on the relations between colors. Since $\mathbf{A}$ and $\mathbf{\Lambda }$ receive the same 1-WL coloring, they have exactly the same color relations; therefore, the update procedure above still gives the same stable coloring to $\mathbf{A}$ and $\mathbf{\Lambda }$ .
404
+
405
+ ii) For the second part of the lemma, we first check the coloring of ${\mathbf{A}}^{\ell }$ . We show that the coloring of $\mathbf{A}$ is a stable coloring of ${\mathbf{A}}^{\ell }$ . Suppose each node $i$ has a color set ${C}_{i}$ . In the graph ${\mathbf{A}}^{\ell }$ , node $i$ ’s neighbors within $\ell$ hops become direct neighbors of $i$ . The color set of $i$ becomes
406
+
407
+ $$
408
+ {C}_{i} \cup \left( {{ \cup }_{{j}_{1} \in N\left( i\right) }{C}_{{j}_{1}}}\right) \cup \ldots \cup \left( {{ \cup }_{{j}_{1} \in N\left( i\right) }\ldots { \cup }_{{j}_{\ell } \in N\left( {j}_{\ell - 1}\right) }{C}_{{j}_{\ell }}}\right) \tag{19}
409
+ $$
410
+
411
+ Two nodes $i$ and ${i}^{\prime }$ have the same color if and only if their color sets are the same. Applying this relation recursively, if $i$ and ${i}^{\prime }$ have the same color in $\mathbf{A}$ , they have the same color set in ${\mathbf{A}}^{\ell }$ . Therefore, the stable coloring of $\mathbf{A}$ is also a stable coloring of ${\mathbf{A}}^{\ell }$ . If necessary, we can also run the merging procedure above and eventually get the 1-WL coloring of ${\mathbf{A}}^{\ell }$ . By the same argument as above, the operations only run on color sets; therefore, ${\mathbf{A}}^{\ell }$ and ${\mathbf{\Lambda }}^{\ell }$ have the same coloring.
412
+
413
+ Now we are ready to prove the main theorem that the Graph U-Net variant cannot distinguish graphs colored in the same way by the 1-WL algorithm.
414
+
415
+ Proof. In the calculation of Graph U-Net-th, the indicator $\mathbf{b}$ for removing nodes is obtained by thresholding $\mathbf{v}$ , which is computed by a 1-WL GNN. Therefore, nodes in the same color are always kept or removed all together in $\mathbf{b}$ .
416
+
417
+ Suppose the inputs to a Graph U-Net layer are $\left( {\mathbf{A},\mathbf{X}}\right)$ and $\left( {\mathbf{\Lambda },\mathbf{X}}\right)$ respectively, where $\mathbf{A}$ and $\mathbf{\Lambda }$ cannot be distinguished by the 1-WL algorithm. The inputs to the next layer are $\left( {{\mathbf{A}}^{\ell }\left\lbrack {\mathbf{b},\mathbf{b}}\right\rbrack ,\mathbf{X}\left\lbrack \mathbf{b}\right\rbrack }\right)$ and $\left( {{\mathbf{\Lambda }}^{\ell }\left\lbrack {\mathbf{b},\mathbf{b}}\right\rbrack ,\mathbf{X}\left\lbrack \mathbf{b}\right\rbrack }\right)$ respectively. By the lemma above, the 1-WL algorithm cannot distinguish ${\mathbf{A}}^{\ell }$ and ${\mathbf{\Lambda }}^{\ell }$ , and it cannot distinguish ${\mathbf{A}}^{\ell }\left\lbrack {\mathbf{b},\mathbf{b}}\right\rbrack$ and ${\mathbf{\Lambda }}^{\ell }\left\lbrack {\mathbf{b},\mathbf{b}}\right\rbrack$ either. Therefore, it still cannot distinguish the inputs to the next layer.
418
+
419
+ Applying this argument recursively, the network cannot distinguish the two graphs at its final outputs if the network inputs $\left( {\mathbf{A},\mathbf{X}}\right)$ and $\left( {\mathbf{\Lambda },\mathbf{X}}\right)$ cannot be distinguished by the 1-WL algorithm.
420
+
421
+ Remark 1. For graphs with noise or low homophily ratios, the aforementioned issue may not be severe and long-distance aggregation is helpful.
422
+
423
+ ### A.3 Analysis for expressiveness of Graph U-Nets
424
+
425
+ In this section we use the example in Fig. 7 to understand how pooling operations affect a graph's global structure. In a pooling step, DiffPool and MinCutPool assign nodes of the same color to the same cluster and merge them into one node. Clearly this does not maintain the global structure of the graph, and they cannot distinguish the two graphs.
426
+
427
+ Graph U-Net always ranks nodes of one color above nodes of the other color. It is not always permutation invariant: for example, it may obtain different structures when it breaks ties to take two green nodes. In many cases, it cannot distinguish the two graphs: when it takes three nodes, either three green nodes or two blue nodes and one green node, it cannot distinguish the two graphs. The Graph U-Net variant considered above always removes the blue or green nodes, so it cannot distinguish the two graphs either. One important observation is that Graph U-Net cannot preserve the global graph structure in its pooling steps. For example, when it removes three nodes, the remaining structure is vastly different from the original graph.
428
+
429
+ ![01963ee9-6afc-7b34-900f-7fc043203b0b_13_430_222_937_1110_0.jpg](images/01963ee9-6afc-7b34-900f-7fc043203b0b_13_430_222_937_1110_0.jpg)
430
+
431
+ Figure 7: Two graphs that cannot be distinguished by the 1-WL algorithm. The colors illustrate the 1-WL coloring of graph nodes. In comparison, PatchGT can differentiate them through the patch-level graph.
432
+
433
+ ### A.4 Examples beyond 1-WL
434
+
435
+ There are two example pairs in Figure 7. The two original graphs ${G}_{1}$ and ${G}_{2}$ , or ${G}_{3}$ and ${G}_{4}$ , are non-isomorphic. However, neither 1-WL nor message-passing GNNs can differentiate them: since the count of each node color (label) is the same, the two graphs share the same representation after pooling. In comparison, PatchGT can discriminate the two graphs by segmenting each graph into two patches and then building a patch-level graph. For ${G}_{1}$ and ${G}_{2}$ , though the two patches are the same, there is an edge connecting the two parts in ${\widetilde{G}}_{1}$ , while there is none in ${\widetilde{G}}_{2}$ . For ${G}_{3}$ and ${G}_{4}$ , the numbers of nodes in the two parts are 4 and 2 in ${\widetilde{G}}_{3}$ , but 3 and 3 in ${\widetilde{G}}_{4}$ , so the representations of the patches are different. These two examples show that the expressiveness of PatchGT is beyond the 1-WL algorithm.
436
+
437
+ ### A.5 Proof of Theorem 4
438
+
439
+ We prove Theorem 4 through the three lemmas below.
440
+
441
+ Lemma 2. The patches split via $k$ -means are invariant to the choice of column vectors in $\mathbf{U}$ within the eigenspaces of repeated eigenvalues:
442
+
443
+ $$
444
+ \text{kmeans}\left( \mathbf{U}\right) = \text{kmeans}\left( \mathbf{{UQ}}\right) \tag{20}
445
+ $$
446
+
447
+ where $\mathbf{Q}$ is a standard block-diagonal rotation matrix.
448
+
449
+ Proof. If we use ${N}_{u}$ eigenvectors for the graph patch splitting, corresponding to the first ${N}_{u}$ smallest eigenvalues, we can write them as $\left( {{\lambda }_{1},{\mathbf{u}}_{1}}\right) ,\ldots ,\left( {{\lambda }_{{N}_{u}},{\mathbf{u}}_{{N}_{u}}}\right)$ . If we have multiplicities in these eigenvalues, we can rotate the eigenvectors by a block-diagonal rotation matrix $\mathbf{Q} \in {\mathbb{R}}^{{N}_{u} \times {N}_{u}}$ to obtain another set of eigenvectors,
450
+
451
+ $$
452
+ {\mathbf{U}}^{\prime } = \left\lbrack {{\mathbf{u}}_{1}^{\prime },\ldots ,{\mathbf{u}}_{{N}_{u}}^{\prime }}\right\rbrack = \left\lbrack {{\mathbf{u}}_{1},\ldots ,{\mathbf{u}}_{{N}_{u}}}\right\rbrack \mathbf{Q} = \mathbf{{UQ}} \tag{21}
453
+ $$
454
+
455
+ where ${\mathbf{u}}_{i},{\mathbf{u}}_{i}^{\prime } \in {\mathbb{R}}^{\left| V\right| \times 1}$ . If we perform $k$ -means on the row vectors of $\left\lbrack {{\mathbf{u}}_{1},\ldots ,{\mathbf{u}}_{{N}_{u}}}\right\rbrack$ , we can write the nodes' coordinates as
456
+
457
+ $$
458
+ \left\lbrack {{\mathbf{x}}_{1};\ldots ;{\mathbf{x}}_{\left| V\right| }}\right\rbrack = \left\lbrack {{\mathbf{u}}_{1},\ldots ,{\mathbf{u}}_{{N}_{u}}}\right\rbrack . \tag{22}
459
+ $$
460
+
461
+ Similarly, we can write down the new coordinates after rotation as
462
+
463
+ $$
464
+ \left\lbrack {{\mathbf{x}}_{1}^{\prime };\ldots ;{\mathbf{x}}_{\left| V\right| }^{\prime }}\right\rbrack = \left\lbrack {{\mathbf{u}}_{1}^{\prime },\ldots ,{\mathbf{u}}_{{N}_{u}}^{\prime }}\right\rbrack . \tag{23}
465
+ $$
466
+
467
+ From the above three equations, it holds that
468
+
469
+ $$
470
+ \left\lbrack {{\mathbf{x}}_{1}^{\prime };\ldots ;{\mathbf{x}}_{\left| V\right| }^{\prime }}\right\rbrack = \left\lbrack {{\mathbf{x}}_{1};\ldots ;{\mathbf{x}}_{\left| V\right| }}\right\rbrack \mathbf{Q}. \tag{24}
471
+ $$
472
+
473
+ So for $i, j \in \{ 1,\ldots ,\left| V\right| \}$ , we have
474
+
475
+ $$
476
+ {\mathbf{x}}_{i}^{\prime } = {\mathbf{x}}_{i}\mathbf{Q}\;{\mathbf{x}}_{j}^{\prime } = {\mathbf{x}}_{j}\mathbf{Q}. \tag{25}
477
+ $$
478
+
479
+ The relative distance between the new coordinates can be calculated as
480
+
481
+ $$
482
+ \left( {{\mathbf{x}}_{i}^{\prime } - {\mathbf{x}}_{j}^{\prime }}\right) {\left( {\mathbf{x}}_{i}^{\prime } - {\mathbf{x}}_{j}^{\prime }\right) }^{\top } = \left( {{\mathbf{x}}_{i}\mathbf{Q} - {\mathbf{x}}_{j}\mathbf{Q}}\right) {\left( {\mathbf{x}}_{i}\mathbf{Q} - {\mathbf{x}}_{j}\mathbf{Q}\right) }^{\top } = \left( {{\mathbf{x}}_{i} - {\mathbf{x}}_{j}}\right) \mathbf{Q}{\mathbf{Q}}^{\top }{\left( {\mathbf{x}}_{i} - {\mathbf{x}}_{j}\right) }^{\top }. \tag{26}
483
+ $$
484
+
485
+ From the property of the rotational matrix, we have
486
+
487
+ $$
488
+ \mathbf{I} = \mathbf{Q}{\mathbf{Q}}^{\top }. \tag{27}
489
+ $$
490
+
491
+ So it holds that
492
+
493
+ $$
494
+ \left( {{\mathbf{x}}_{i}^{\prime } - {\mathbf{x}}_{j}^{\prime }}\right) {\left( {\mathbf{x}}_{i}^{\prime } - {\mathbf{x}}_{j}^{\prime }\right) }^{\top } = \left( {{\mathbf{x}}_{i} - {\mathbf{x}}_{j}}\right) {\left( {\mathbf{x}}_{i} - {\mathbf{x}}_{j}\right) }^{\top }. \tag{28}
495
+ $$
496
+
497
+ So for any node pair, the relative distance is preserved, and thus the rotation does not affect the $k$ -means results.
498
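The distance-preservation argument in the lemma can be checked numerically. The sketch below (our illustration, not part of the paper; sizes and names are our choices) draws random spectral coordinates, applies a random orthogonal $\mathbf{Q}$ obtained from a QR factorization, and verifies that all pairwise distances, and hence any $k$-means objective, are unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 4                       # 20 nodes, 4 spectral coordinates
X = rng.normal(size=(n, k))        # rows x_i: node coordinates from eigenvectors

# Random orthogonal matrix Q (Q Q^T = I) from a QR factorization
Q, _ = np.linalg.qr(rng.normal(size=(k, k)))
Xr = X @ Q                         # rotated coordinates x_i' = x_i Q

# Pairwise squared distances before and after rotation
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
d2r = ((Xr[:, None, :] - Xr[None, :, :]) ** 2).sum(-1)
distances_preserved = np.allclose(d2, d2r)
print(distances_preserved)
```

Since $k$-means depends on the coordinates only through such distances, identical distance matrices imply identical clusterings (up to tie-breaking).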
+
499
+ Lemma 3. The patches split via k-means are invariant to column vectors in $\mathbf{U}$ with different signs.
500
+
501
+ Proof. The sign invariance is a special case of rotation invariance, obtained by taking $\mathbf{Q}$ as a diagonal matrix with entries ${\left( \mathbf{Q}\right) }_{ii} \in \{ - 1,1\}$ .
502
+
503
+ Lemma 4. The patches split via $k$ -means are invariant to permutations of the nodes:
504
+
505
+ $$
506
+ \text{kmeans}\left( \mathbf{U}\right) = \text{kmeans}\left( \mathbf{{PU}}\right) \tag{29}
507
+ $$
508
+
509
+ where $\mathbf{P}$ is a permutation matrix.
510
+
511
+ Proof. We denote ${\mathbf{I}}_{\left| V\right| } = {\left\lbrack 1,\ldots ,1\right\rbrack }^{\top } \in {\mathbb{R}}^{\left| V\right| \times 1}$ . For a permutation of the nodes, there is a corresponding permutation matrix $\mathbf{P}$ such that
512
+
513
+ $$
514
+ {\mathbf{A}}^{\prime } = {\mathbf{P}}^{\top }\mathbf{{AP}} \tag{30}
515
+ $$
516
+
517
+ where $\mathbf{A}$ and ${\mathbf{A}}^{\prime }$ are the adjacency matrices of $G$ and ${G}^{\prime }$ respectively. For the degree matrices of $G$ and ${G}^{\prime }$ ,
518
+
519
+ $$
520
+ \mathbf{D} = \operatorname{diag}\left( {\mathbf{A}{\mathbf{I}}_{\left| V\right| }}\right) ,\;{\mathbf{D}}^{\prime } = \operatorname{diag}\left( {{\mathbf{A}}^{\prime }{\mathbf{I}}_{\left| V\right| }}\right) \tag{31}
521
+ $$
522
+
523
+ Substituting equation 30 into equation 31, and noting that $\mathbf{P}{\mathbf{I}}_{\left| V\right| } = {\mathbf{I}}_{\left| V\right| }$ ,
524
+
525
+ $$
526
+ {\mathbf{D}}^{\prime } = \operatorname{diag}\left( {{\mathbf{P}}^{\top }\mathbf{{AP}}{\mathbf{I}}_{\left| V\right| }}\right) = \operatorname{diag}\left( {{\mathbf{P}}^{\top }\mathbf{A}\left( {\mathbf{P}{\mathbf{I}}_{\left| V\right| }}\right) }\right) = \operatorname{diag}\left( {{\mathbf{P}}^{\top }\mathbf{A}{\mathbf{I}}_{\left| V\right| }}\right) \tag{32}
527
+ $$
528
+
529
+ From the orthogonality of the permutation matrix, it holds that
530
+
531
+ $$
532
+ {\mathbf{P}}^{-1} = {\mathbf{P}}^{\top } \tag{33}
533
+ $$
534
+
535
+ Combining the above equations with the identity $\operatorname{diag}\left( {{\mathbf{P}}^{\top }\mathbf{v}}\right) = {\mathbf{P}}^{\top }\operatorname{diag}\left( \mathbf{v}\right) \mathbf{P}$ for a permutation matrix, we get
536
+
537
+ $$
538
+ {\mathbf{D}}^{\prime } = {\mathbf{P}}^{\top }\operatorname{diag}\left( {\mathbf{{AI}}}_{\left| V\right| }\right) \mathbf{P} = {\mathbf{P}}^{\top }\mathbf{{DP}} \tag{34}
539
+ $$
540
+
541
+ So the permuted Laplacian matrix is
542
+
543
+ $$
544
+ {\mathbf{L}}^{\prime } = \mathbf{I} - {\mathbf{D}}^{\prime - {0.5}}{\mathbf{A}}^{\prime }{\mathbf{D}}^{\prime - {0.5}} = {\mathbf{P}}^{\top }\mathbf{I}\mathbf{P} - {\mathbf{P}}^{\top }{\mathbf{D}}^{-{0.5}}\mathbf{P}{\mathbf{P}}^{\top }\mathbf{A}\mathbf{P}{\mathbf{P}}^{\top }{\mathbf{D}}^{-{0.5}}\mathbf{P}
545
+ $$
546
+
547
+ $$
548
+ = {\mathbf{P}}^{\top }\left( {\mathbf{I} - {\mathbf{D}}^{-{0.5}}\mathbf{A}{\mathbf{D}}^{-{0.5}}}\right) \mathbf{P} = {\mathbf{P}}^{\top }\mathbf{{LP}} \tag{35}
549
+ $$
550
+
551
+ Substituting into the characteristic equation of the Laplacian, we have
552
+
553
+ $$
554
+ {\mathbf{L}}^{\prime } - \lambda \mathbf{I} = {\mathbf{P}}^{\top }\mathbf{L}\mathbf{P} - {\mathbf{P}}^{\top }\lambda \mathbf{I}\mathbf{P} = {\mathbf{P}}^{\top }\left( {\mathbf{L} - \lambda \mathbf{I}}\right) \mathbf{P} \tag{36}
555
+ $$
556
+
557
+ and taking determinants,
558
+
559
+ $$
560
+ \det \left( {{\mathbf{L}}^{\prime } - \lambda \mathbf{I}}\right) = \det \left( {\mathbf{P}}^{\top }\right) \det \left( {\mathbf{L} - \lambda \mathbf{I}}\right) \det \left( \mathbf{P}\right) = \det \left( {\mathbf{L} - \lambda \mathbf{I}}\right) , \tag{37}
561
+ $$
562
+
563
+ so the eigenvalues remain invariant.
564
+
565
+ Next we look at the eigenvectors. For an eigenpair $\left( {\lambda ,{\mathbf{u}}^{\prime }}\right)$ of ${\mathbf{L}}^{\prime }$ , we have
566
+
567
+ $$
568
+ {\mathbf{L}}^{\prime }{\mathbf{u}}^{\prime } = \lambda {\mathbf{u}}^{\prime } \tag{38}
569
+ $$
570
+
571
+ Combining with equation 35 , we get
572
+
573
+ $$
574
+ {\mathbf{P}}^{\top }\mathbf{{LP}}{\mathbf{u}}^{\prime } = \lambda {\mathbf{u}}^{\prime } \Leftrightarrow \mathbf{L}\left( {\mathbf{{Pu}}}^{\prime }\right) = \lambda \left( {\mathbf{{Pu}}}^{\prime }\right) \tag{39}
575
+ $$
576
+
577
+ So the two corresponding eigenvectors are related by
578
+
579
+ $$
580
+ \mathbf{u} = \mathbf{P}{\mathbf{u}}^{\prime } \Leftrightarrow {\mathbf{u}}^{\prime } = {\mathbf{P}}^{\top }\mathbf{u} \tag{40}
581
+ $$
582
+
583
+ and hence the node coordinates satisfy
584
+
585
+ $$
586
+ \left\lbrack {{\mathbf{x}}_{1}^{\prime };\ldots ;{\mathbf{x}}_{\left| V\right| }^{\prime }}\right\rbrack = {\mathbf{P}}^{\top }\left\lbrack {{\mathbf{x}}_{1};\ldots ;{\mathbf{x}}_{\left| V\right| }}\right\rbrack . \tag{41}
587
+ $$
588
+
589
+ Thus there is a bijective mapping $\mathcal{B}$ on the node indices such that ${\left( \mathbf{P}\right) }_{n\mathcal{B}\left( n\right) } = 1$ and ${\mathbf{x}}_{n} = {\mathbf{x}}_{\mathcal{B}\left( n\right) }^{\prime }$ . Then for any node pair $(i, j)$ , we can find $\left( {{i}^{\prime },{j}^{\prime }}\right) = \left( {\mathcal{B}\left( i\right) ,\mathcal{B}\left( j\right) }\right)$ such that
590
+
591
+ $$
592
+ {\mathbf{x}}_{i} = {\mathbf{x}}_{{i}^{\prime }}^{\prime },\;{\mathbf{x}}_{j} = {\mathbf{x}}_{{j}^{\prime }}^{\prime }, \tag{42}
593
+ $$
594
+
595
+ then it clearly holds that
596
+
597
+ $$
598
+ \left( {{\mathbf{x}}_{i} - {\mathbf{x}}_{j}}\right) {\left( {\mathbf{x}}_{i} - {\mathbf{x}}_{j}\right) }^{\top } = \left( {{\mathbf{x}}_{{i}^{\prime }}^{\prime } - {\mathbf{x}}_{{j}^{\prime }}^{\prime }}\right) {\left( {\mathbf{x}}_{{i}^{\prime }}^{\prime } - {\mathbf{x}}_{{j}^{\prime }}^{\prime }\right) }^{\top }. \tag{43}
599
+ $$
600
+
601
+ So for any node pair, the relative distance is preserved, and thus the permutation does not affect the $k$ -means results.
602
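The permutation identities above (equations 30 to 40) admit a quick numerical check. The sketch below (our illustration; the graph construction and variable names are ours) builds a small connected graph, permutes its nodes, and verifies that the normalized Laplacian transforms as $\mathbf{L}^{\prime } = \mathbf{P}^{\top }\mathbf{L}\mathbf{P}$ with identical eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# Random symmetric adjacency matrix without self-loops, plus a path
# backbone so the graph is connected (all degrees positive)
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

def norm_laplacian(A):
    d = A.sum(1)
    Dinv = np.diag(d ** -0.5)
    return np.eye(len(A)) - Dinv @ A @ Dinv

P = np.eye(n)[rng.permutation(n)]       # random permutation matrix
Ap = P.T @ A @ P                        # permuted adjacency (equation 30)

L, Lp = norm_laplacian(A), norm_laplacian(Ap)
laplacian_transforms = np.allclose(Lp, P.T @ L @ P)          # equation 35
eigenvalues_invariant = np.allclose(np.linalg.eigvalsh(L),
                                    np.linalg.eigvalsh(Lp))  # equation 37
print(laplacian_transforms, eigenvalues_invariant)
```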
+
603
+ ### A.6 Multi-head attention
604
+
605
+ The Transformer [30] has proved successful in the NLP and CV fields. Its multi-head attention (MHA) layer is based on the query-key-value (QKV) attention mechanism. Given the packed matrix representations of queries $\mathbf{Q}$ , keys $\mathbf{K}$ , and values $\mathbf{V}$ , the scaled dot-product attention used by the Transformer is given by:
606
+
607
+ $$
608
+ \operatorname{ATTENTION}\left( {\mathbf{Q},\mathbf{K},\mathbf{V}}\right) = \operatorname{softmax}\left( \frac{\mathbf{Q}{\mathbf{K}}^{T}}{\sqrt{{D}_{k}}}\right) \mathbf{V}, \tag{44}
609
+ $$
610
+
611
+ where ${D}_{k}$ is the dimension of the queries and keys.
612
+
613
+ Multi-head attention applies $H$ attention heads, allowing the model to attend to different types of information:
614
+
615
+ $$
616
+ \operatorname{MHA}\left( {\mathbf{Q},\mathbf{K},\mathbf{V}}\right) = \operatorname{CONCAT}\left( {{\operatorname{head}}_{1},\ldots ,{\operatorname{head}}_{H}}\right) \mathbf{W}
617
+ $$
618
+
619
+ $$
620
+ \text{where}{\operatorname{head}}_{i} = \operatorname{ATTENTION}\left( {{\mathbf{{QW}}}_{i}^{Q},{\mathbf{{KW}}}_{i}^{K},{\mathbf{{VW}}}_{i}^{V}}\right) , i = 1,\ldots , H\text{.} \tag{45}
621
+ $$
622
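A minimal NumPy sketch of equations 44 and 45 (our illustration, not the paper's implementation; the shapes are assumptions, e.g. a single batch and one shared model dimension for $\mathbf{Q}$ , $\mathbf{K}$ , $\mathbf{V}$):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, equation 44."""
    dk = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(dk)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights = weights / weights.sum(-1, keepdims=True)   # softmax over keys
    return weights @ V

def mha(Q, K, V, heads, rng):
    """Multi-head attention, equation 45, with random projection weights."""
    D = Q.shape[-1]
    dh = D // heads
    outs = []
    for _ in range(heads):
        WQ, WK, WV = (rng.normal(size=(D, dh)) / np.sqrt(D) for _ in range(3))
        outs.append(attention(Q @ WQ, K @ WK, V @ WV))
    W = rng.normal(size=(heads * dh, D)) / np.sqrt(D)    # output projection
    return np.concatenate(outs, axis=-1) @ W             # CONCAT(...) W

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))           # 5 tokens, model dimension 8
out = mha(X, X, X, heads=2, rng=rng)  # self-attention: Q = K = V = X
print(out.shape)
```

Each head mixes the value rows with row-stochastic weights, and the concatenated head outputs are mapped back to the model dimension by $\mathbf{W}$.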
+
623
+ ### A.7 Proof of Proposition 1
624
+
625
+ Given an $L$ -layer GNN with uniform hidden dimension and initial features ${\mathbf{H}}_{0} = \mathbf{X}$ , for $l = 0,\ldots , L - 1$ , the output of the recurrent GNN layer ${\mathbf{H}}_{l + 1}$ follows
626
+
627
+ $$
628
+ {\mathbf{H}}_{l + 1} = \sigma \left( {{\mathbf{H}}_{l}{\mathbf{W}}_{1l}^{\top } + {\mathbf{{AH}}}_{l}{\mathbf{W}}_{2l}^{\top }}\right) \tag{46}
629
+ $$
630
+
631
+ where ${\mathbf{H}}_{l} \in {\mathbb{R}}^{\left| V\right| \times d}$ and ${\mathbf{W}}_{1l},{\mathbf{W}}_{2l} \in {\mathbb{R}}^{d \times d}$ . We then introduce another recurrence to track how the output of each layer changes under an initial perturbation ${\mathbf{\epsilon }}_{0} \in {\mathbb{R}}^{\left| V\right| \times d}$ on ${\mathbf{H}}_{0}$ ,
632
+
633
+ $$
634
+ {\mathbf{\epsilon }}_{l + 1} = \sigma \left( {{\mathbf{H}}_{l}{\mathbf{W}}_{1l}^{\top } + \mathbf{A}{\mathbf{H}}_{l}{\mathbf{W}}_{2l}^{\top } + {\mathbf{\epsilon }}_{l}{\mathbf{W}}_{1l}^{\top } + \mathbf{A}{\mathbf{\epsilon }}_{l}{\mathbf{W}}_{2l}^{\top }}\right) - \sigma \left( {{\mathbf{H}}_{l}{\mathbf{W}}_{1l}^{\top } + \mathbf{A}{\mathbf{H}}_{l}{\mathbf{W}}_{2l}^{\top }}\right) . \tag{47}
635
+ $$
636
+
637
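The recurrences in equations 46 and 47 imply that a perturbation supported on a node set $S$ can spread at most one hop per layer. The sketch below (our illustration; the path graph, ReLU, and random weights are our choices) tracks $\boldsymbol{\epsilon}_l$ exactly as the difference of the two forward passes and checks this locality:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, L = 10, 4, 3
A = np.zeros((n, n))
for i in range(n - 1):               # path graph 0-1-...-9
    A[i, i + 1] = A[i + 1, i] = 1.0

relu = lambda z: np.maximum(z, 0.0)
X = rng.normal(size=(n, d))
eps0 = np.zeros((n, d)); eps0[0] = 0.1   # perturb only node 0 (S = {0})

H, Hp = X, X + eps0
supports = []
for l in range(L):
    W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    layer = lambda Z: relu(Z @ W1.T + A @ Z @ W2.T)   # equation 46
    H, Hp = layer(H), layer(Hp)
    eps = Hp - H                                      # equation 47
    # after l+1 layers the perturbation reaches at most l+1 hops from node 0,
    # so rows l+2, l+3, ... must still be exactly zero
    supports.append(bool(np.allclose(eps[l + 2:], 0.0)))

locality_holds = all(supports)
print(locality_holds)
```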
+ We denote by $\left| \cdot \right|$ the operator that replaces a matrix's elements with their absolute values, and we write $\left| \mathbf{J}\right| \leq \left| \mathbf{K}\right|$ if $\left| {\left( \mathbf{J}\right) }_{ij}\right| \leq \left| {\left( \mathbf{K}\right) }_{ij}\right|$ for all $i, j$ . Let ${\mathbf{I}}_{S} \in {\mathbb{R}}^{\left| V\right| \times 1}$ be the indicator vector of $S$ , i.e., ${\left( {\mathbf{I}}_{S}\right) }_{i} = 1$ if $i \in S$ and 0 otherwise. We first prove a lemma.
638
+
639
+ Lemma 5. Given ${\epsilon }_{0} = \alpha$ , it holds that
640
+
641
+ $$
642
+ \left| {\mathbf{\epsilon }}_{l}\right| \leq {a}_{l}{\mathbf{I}}_{S}{\mathbf{V}}_{l}^{\top } + {\mathbf{r}}_{l} \in {\mathbb{R}}^{\left| V\right| \times d}\text{ or }\left| {\left( {\mathbf{\epsilon }}_{l}\right) }_{ij}\right| \leq {a}_{l}{\left( {\mathbf{V}}_{l}\right) }_{j}{\left( {\mathbf{I}}_{S}\right) }_{i} + {\left( {\mathbf{r}}_{l}\right) }_{ij} \tag{48}
643
+ $$
644
+
645
+ where ${a}_{l} = \epsilon {\left( \tau + 1\right) }^{l}$ , ${\mathbf{V}}_{l} \in {\mathbb{R}}_{ + }^{d \times 1}$ with ${\begin{Vmatrix}{\mathbf{V}}_{l}\end{Vmatrix}}_{1} \leq d$ , and ${\begin{Vmatrix}{\mathbf{r}}_{l}\end{Vmatrix}}_{1} \leq {2d\epsilon m}\left( {l + 1}\right) {\left( \tau + 1\right) }^{l}$ .
646
+
647
+ Proof. We prove by induction. For $l = 0$ , we can take ${a}_{0} = \epsilon$ , ${\mathbf{V}}_{0} = {\mathbf{I}}_{d} = {\left\lbrack \underset{d}{\underbrace{1,\ldots ,1}}\right\rbrack }^{\top }$ and ${\mathbf{r}}_{0} = \mathbf{0}$ ; then it holds that
648
+
649
+ $$
650
+ \left| {\mathbf{\epsilon }}_{0}\right| \leq {a}_{0}{\mathbf{I}}_{S}{\mathbf{V}}_{0}^{\top } + {\mathbf{r}}_{0}. \tag{49}
651
+ $$
652
+
653
+ From the recurrence in equation 47 , it holds that
654
+
655
+ $$
656
+ {\mathbf{\epsilon }}_{l + 1} = \sigma \left( {\left( {{\mathbf{H}}_{l} + {\mathbf{\epsilon }}_{l}}\right) {\mathbf{W}}_{1l}^{\top } + \mathbf{A}\left( {{\mathbf{H}}_{l} + {\mathbf{\epsilon }}_{l}}\right) {\mathbf{W}}_{2l}^{\top }}\right) - \sigma \left( {{\mathbf{H}}_{l}{\mathbf{W}}_{1l}^{\top } + \mathbf{A}{\mathbf{H}}_{l}{\mathbf{W}}_{2l}^{\top }}\right) . \tag{50}
657
+ $$
658
+
659
+ From the Lipschitz continuity of $\sigma$ (with constant 1, e.g. ReLU), it holds that
660
+
661
+ $$
662
+ \left| {\mathbf{\epsilon }}_{l + 1}\right| \leq \left| {{\mathbf{\epsilon }}_{l}{\mathbf{W}}_{1l}^{\top } + \mathbf{A}{\mathbf{\epsilon }}_{l}{\mathbf{W}}_{2l}^{\top }}\right| . \tag{51}
663
+ $$
664
+
665
+ From the triangle inequality, we have
666
+
667
+ $$
668
+ \left| {\mathbf{\epsilon }}_{l + 1}\right| \leq \left| {\mathbf{\epsilon }}_{l}\right| \left| {\mathbf{W}}_{1l}^{\top }\right| + \mathbf{A}\left| {\mathbf{\epsilon }}_{l}\right| \left| {\mathbf{W}}_{2l}^{\top }\right| . \tag{52}
669
+ $$
670
+
671
+ Assuming the statement holds at the $l$ -th layer, we have
672
+
673
674
+
675
+ $$
676
+ \left| {\mathbf{\epsilon }}_{l}\right| \leq {a}_{l}{\mathbf{I}}_{S}{\mathbf{V}}_{l}^{\top } + {\mathbf{r}}_{l}. \tag{53}
677
+ $$
678
+
679
+ Substituting equation 53 into equation 52 , we have
680
+
681
+ $$
682
+ \left| {\mathbf{\epsilon }}_{l + 1}\right| \leq \left( {{a}_{l}{\mathbf{I}}_{S}{\mathbf{V}}_{l}^{\top } + {\mathbf{r}}_{l}}\right) \left| {\mathbf{W}}_{1l}^{\top }\right| + \mathbf{A}\left( {{a}_{l}{\mathbf{I}}_{S}{\mathbf{V}}_{l}^{\top } + {\mathbf{r}}_{l}}\right) \left| {\mathbf{W}}_{2l}^{\top }\right| \tag{54}
683
+ $$
684
+
685
+ Expanding the above equation,
686
+
687
+ $$
688
+ \left| {\mathbf{\epsilon }}_{l + 1}\right| \leq {a}_{l}\left( {\mathbf{A}{\mathbf{I}}_{S}}\right) \left( {{\mathbf{V}}_{l}^{\top }\left| {\mathbf{W}}_{2l}^{\top }\right| }\right) + \mathbf{A}{\mathbf{r}}_{l}\left| {\mathbf{W}}_{2l}^{\top }\right| + {a}_{l}{\mathbf{I}}_{S}{\mathbf{V}}_{l}^{\top }\left| {\mathbf{W}}_{1l}^{\top }\right| + {\mathbf{r}}_{l}\left| {\mathbf{W}}_{1l}^{\top }\right| \tag{55}
689
+ $$
690
+
691
+ Using the property of the undirected $\tau$ -graph, it holds that
692
+
693
+ $$
694
+ \mathbf{A}{\mathbf{I}}_{S} = \tau {\mathbf{I}}_{S} - \mathop{\sum }\limits_{{\left( {i, j}\right) \in E, i \in S, j \in T}}\left( {{\mathbf{E}}_{i} - {\mathbf{E}}_{j}}\right) = \tau {\mathbf{I}}_{S} + {\mathbf{B}}_{S}, \tag{56}
695
+ $$
696
+
697
+ where we denote
698
+
699
+ $$
700
+ {\mathbf{B}}_{S} = - \mathop{\sum }\limits_{{\left( {i, j}\right) \in E, i \in S, j \in T}}\left( {{\mathbf{E}}_{i} - {\mathbf{E}}_{j}}\right) , \tag{57}
701
+ $$
702
+
703
+ and ${\mathbf{E}}_{i},{\mathbf{E}}_{j} \in {\mathbb{R}}^{\left| V\right| \times 1}$ are unit vectors with the $i$ -th and $j$ -th entries equal to 1 respectively. Then it is straightforward to show that
704
+
705
+ $$
706
+ {\begin{Vmatrix}{\mathbf{B}}_{S}\end{Vmatrix}}_{1} \leq {2m} \tag{58}
707
+ $$
708
+
709
+ Substituting equation 56 into equation 55 , we have
710
+
711
+ $$
712
+ \left| {\mathbf{\epsilon }}_{l + 1}\right| \leq {a}_{l}\tau {\mathbf{I}}_{S}{\mathbf{V}}_{l}^{\top }\left| {\mathbf{W}}_{2l}^{\top }\right| + {a}_{l}{\mathbf{B}}_{S}{\mathbf{V}}_{l}^{\top }\left| {\mathbf{W}}_{2l}^{\top }\right| + \mathbf{A}{\mathbf{r}}_{l}\left| {\mathbf{W}}_{2l}^{\top }\right| + {a}_{l}{\mathbf{I}}_{S}{\mathbf{V}}_{l}^{\top }\left| {\mathbf{W}}_{1l}^{\top }\right| + {\mathbf{r}}_{l}\left| {\mathbf{W}}_{1l}^{\top }\right| . \tag{59}
713
+ $$
714
+
715
+ Let
716
+
717
+ $$
718
+ {a}_{l + 1} = \left( {1 + \tau }\right) {a}_{l},
719
+ $$
720
+
721
+ $$
722
+ {\mathbf{V}}_{l + 1}^{\top } = \frac{\tau }{\tau + 1}{\mathbf{V}}_{l}^{\top }\left| {\mathbf{W}}_{2l}^{\top }\right| + \frac{1}{\tau + 1}{\mathbf{V}}_{l}^{\top }\left| {\mathbf{W}}_{1l}^{T}\right| , \tag{60}
723
+ $$
724
+
725
+ $$
726
+ {\mathbf{r}}_{l + 1} = {a}_{l}{\mathbf{B}}_{S}{\mathbf{V}}_{l}^{\top }\left| {\mathbf{W}}_{2l}^{\top }\right| + \mathbf{A}{\mathbf{r}}_{l}\left| {\mathbf{W}}_{2l}^{\top }\right| + {\mathbf{r}}_{l}\left| {\mathbf{W}}_{1l}^{\top }\right| ,
727
+ $$
728
+
729
+ then we can rewrite equation 59 as
730
+
731
+ $$
732
+ \left| {\mathbf{\epsilon }}_{l + 1}\right| \leq {a}_{l + 1}{\mathbf{I}}_{S}{\mathbf{V}}_{l + 1}^{\top } + {\mathbf{r}}_{l + 1} \tag{61}
733
+ $$
734
+
735
+ From the assumption that
736
+
737
+ $$
738
+ {\begin{Vmatrix}{\mathbf{W}}_{1l}\end{Vmatrix}}_{1} \leq 1,\;{\begin{Vmatrix}{\mathbf{W}}_{2l}\end{Vmatrix}}_{1} \leq 1, \tag{62}
739
+ $$
740
+
741
+ we have
742
+
743
+ $$
744
+ {\begin{Vmatrix}\left( \left| {\mathbf{W}}_{1l}\right| \right) \end{Vmatrix}}_{1} = {\begin{Vmatrix}{\mathbf{W}}_{1l}\end{Vmatrix}}_{1} \leq 1,\;{\begin{Vmatrix}\left( \left| {\mathbf{W}}_{2l}\right| \right) \end{Vmatrix}}_{1} = {\begin{Vmatrix}{\mathbf{W}}_{2l}\end{Vmatrix}}_{1} \leq 1. \tag{63}
745
+ $$
746
+
747
+ Substituting equations 63, 58, and 53 into equation 60, we obtain
748
+
749
+ $$
750
+ {a}_{l + 1} = \left( {\tau + 1}\right) {a}_{l} \leq \epsilon {\left( \tau + 1\right) }^{l + 1}
751
+ $$
752
+
753
+ $$
754
+ {\begin{Vmatrix}{\mathbf{V}}_{l + 1}^{\top }\end{Vmatrix}}_{1} \leq \frac{\tau }{\tau + 1}{\begin{Vmatrix}{\mathbf{V}}_{l}^{\top }\end{Vmatrix}}_{1} + \frac{1}{\tau + 1}{\begin{Vmatrix}{\mathbf{V}}_{l}^{\top }\end{Vmatrix}}_{1} \leq d \tag{64}
755
+ $$
756
+
757
+ and
758
+
759
+ $$
760
+ {\begin{Vmatrix}{\mathbf{r}}_{l + 1}\end{Vmatrix}}_{1} \leq {a}_{l}{\begin{Vmatrix}{\mathbf{B}}_{S}\end{Vmatrix}}_{1}{\begin{Vmatrix}{\mathbf{V}}_{l}^{\top }\end{Vmatrix}}_{1} + \parallel \mathbf{A}{\parallel }_{1}{\begin{Vmatrix}{\mathbf{r}}_{l}\end{Vmatrix}}_{1} + {\begin{Vmatrix}{\mathbf{r}}_{l}\end{Vmatrix}}_{1}
761
+ $$
762
+
763
+ $$
764
+ \leq 2{a}_{l}{md} + \left( {\tau + 1}\right) \left| \right| {\mathbf{r}}_{l}{\left| \right| }_{1} \leq {2md\epsilon }{\left( \tau + 1\right) }^{l} + \left( {\tau + 1}\right) \left| \right| {\mathbf{r}}_{l}{\left| \right| }_{1} \tag{65}
765
+ $$
766
+
767
+ $$
768
+ \leq {2md\epsilon }{\left( \tau + 1\right) }^{l} + {2md\epsilon }\left( {l + 1}\right) {\left( \tau + 1\right) }^{l + 1}
769
+ $$
770
+
771
+ $$
772
+ \leq {2md\epsilon }{\left( \tau + 1\right) }^{l + 1} + {2md\epsilon }\left( {l + 1}\right) {\left( \tau + 1\right) }^{l + 1} = {2md\epsilon }\left( {l + 2}\right) {\left( \tau + 1\right) }^{l + 1}.
773
+ $$
774
+
775
+ This finishes the induction.
776
+
777
+ The above lemma gives
778
+
779
+ $$
780
+ \mathop{\max }\limits_{\substack{{\left| \right| {\mathbf{W}}_{1l}{\left| \right| }_{1}} \\ {\left| \right| {\mathbf{W}}_{2l}{\left| \right| }_{1}} \\ \mathbf{\alpha } }}\left| {\mathbf{\epsilon }}_{l}\right| \leq \epsilon {\left( \tau + 1\right) }^{l}{\mathbf{I}}_{S}{\mathbf{V}}_{l}^{\top } + {\mathbf{r}}_{l} \tag{66}
781
+ $$
782
+
783
+ where ${\begin{Vmatrix}{\mathbf{V}}_{l}^{\top }\end{Vmatrix}}_{1} \leq d$ and ${\begin{Vmatrix}{\mathbf{r}}_{l}\end{Vmatrix}}_{1} \leq {2d\epsilon m}\left( {l + 1}\right) {\left( \tau + 1\right) }^{l}$ . So when only looking at entries ${\left( {\mathbf{\epsilon }}_{l}\right) }_{ij}$ with $i \in T$ , the first term vanishes and it holds that
784
+
785
+ $$
786
+ \mathop{\max }\limits_{\substack{{\begin{Vmatrix}{\mathbf{W}}_{1l}\end{Vmatrix}}_{1},{\begin{Vmatrix}{\mathbf{W}}_{2l}\end{Vmatrix}}_{1} \leq 1 \\ {\mathbf{\epsilon }}_{0} = \mathbf{\alpha }}}\mathop{\sum }\limits_{{i \in T}}{\left| {\mathbf{\epsilon }}_{l}\right| }_{ij} \leq {2d\epsilon m}\left( {l + 1}\right) {\left( \tau + 1\right) }^{l} \tag{67}
787
+ $$
788
+
789
+ For the denominator, we simply construct ${\mathbf{W}}_{1l} = {\mathbf{W}}_{2l}$ as identity matrices and take ${\mathbf{\epsilon }}_{0} = \mathbf{\beta }$ . Then it simply holds that
790
+
791
+ $$
792
+ \left| {\mathbf{\epsilon }}_{0}\right| = {\left( 1 + \tau \right) }^{0}\epsilon {\mathbf{I}}_{T}{\mathbf{I}}_{d}^{\top } \tag{68}
793
+ $$
794
+
795
+ where ${\mathbf{I}}_{T}$ is the indicator vector of the set $T$ . Assume that it holds that
796
+
797
+ $$
798
+ {\mathbf{\epsilon }}_{l} = {\left( 1 + \tau \right) }^{l}\epsilon {\mathbf{I}}_{T}{\mathbf{I}}_{d}^{\top } \tag{69}
799
+ $$
800
+
801
+ then from the Lipschitz continuity of $\sigma$ (ReLU) and the standard $\tau$ -graph structure, it holds that
802
+
803
+ $$
+ {\mathbf{\epsilon }}_{l + 1} = \sigma \left( {\left( {\mathbf{I} + \mathbf{A}}\right) \left( {{\mathbf{H}}_{l} + {\mathbf{\epsilon }}_{l}}\right) }\right) - \sigma \left( {\left( {\mathbf{I} + \mathbf{A}}\right) {\mathbf{H}}_{l}}\right) = {\left( 1 + \tau \right) }^{l}\epsilon \left( {\mathbf{I} + \mathbf{A}}\right) {\mathbf{I}}_{T}{\mathbf{I}}_{d}^{\top } = {\left( 1 + \tau \right) }^{l + 1}\epsilon {\mathbf{I}}_{T}{\mathbf{I}}_{d}^{\top } \tag{70}
+ $$
804
+
805
+ So we can get
806
+
807
+ $$
808
+ \mathop{\sum }\limits_{{i \in T}}\left| {\left( {\mathbf{\epsilon }}_{l}\right) }_{ij}\right| = \epsilon {\left( 1 + \tau \right) }^{l}\mathop{\sum }\limits_{{i \in T}}{\left( {\mathbf{I}}_{T}{\mathbf{I}}_{d}^{\top }\right) }_{ij} = {\left( 1 + \tau \right) }^{l}\epsilon \left| T\right| d \tag{71}
809
+ $$
810
+
811
+ So it holds that
812
+
813
+ $$
814
+ \mathop{\max }\limits_{\substack{{\begin{Vmatrix}{\mathbf{W}}_{1l}\end{Vmatrix}}_{1},{\begin{Vmatrix}{\mathbf{W}}_{2l}\end{Vmatrix}}_{1} \leq 1 \\ {\mathbf{\epsilon }}_{0} = \mathbf{\beta }}}\mathop{\sum }\limits_{{i \in T}}{\left| {\mathbf{\epsilon }}_{l}\right| }_{ij} \geq {\left( 1 + \tau \right) }^{l}\epsilon \left| T\right| d \tag{72}
815
+ $$
816
+
817
+ Combining equations 67 and 72, and substituting the last layer index $L - 1$ , we have
818
+
819
+ $$
820
+ \frac{\mathop{\max }\limits_{\substack{{\begin{Vmatrix}{\mathbf{W}}_{1l}\end{Vmatrix}}_{1},{\begin{Vmatrix}{\mathbf{W}}_{2l}\end{Vmatrix}}_{1} \leq 1 \\ {\mathbf{\epsilon }}_{0} = \mathbf{\alpha }}}\mathop{\sum }\limits_{{i \in T}}{\left| {\mathbf{\epsilon }}_{L - 1}\right| }_{ij}}{\mathop{\max }\limits_{\substack{{\begin{Vmatrix}{\mathbf{W}}_{1l}\end{Vmatrix}}_{1},{\begin{Vmatrix}{\mathbf{W}}_{2l}\end{Vmatrix}}_{1} \leq 1 \\ {\mathbf{\epsilon }}_{0} = \mathbf{\beta }}}\mathop{\sum }\limits_{{i \in T}}{\left| {\mathbf{\epsilon }}_{L - 1}\right| }_{ij}} \leq \frac{2mL}{\left| T\right| }. \tag{73}
821
+ $$
822
+
823
+ ### A.8 Proof of Theorem 5
824
+
825
+ From the proof of Proposition 1 in Appendix A.7, by simply constructing ${\mathbf{W}}_{1l},{\mathbf{W}}_{2l}$ in the node-level GNN as identity matrices, we have
826
+
827
+ $$
828
+ \mathop{\sum }\limits_{{i \in S}}\left| {\left( {\mathbf{\epsilon }}_{S}\right) }_{ij}\right| = {\left( 1 + \tau \right) }^{L}\epsilon \left| S\right| d\;\text{ if }{\mathbf{\epsilon }}_{0} = \mathbf{\alpha }, \tag{74}
829
+ $$
830
+
831
+ $$
832
+ \mathop{\sum }\limits_{{i \in T}}\left| {\left( {\mathbf{\epsilon }}_{T}\right) }_{ij}\right| = {\left( 1 + \tau \right) }^{L}\epsilon \left| T\right| d\;\text{ if }{\mathbf{\epsilon }}_{0} = \mathbf{\beta }.
833
+ $$
834
+
835
+ Then from Lipschitz continuity (ReLU) we have
836
+
837
+ $$
838
+ {\eta }_{S \rightarrow T} = {g}_{T}^{\mathrm{{GNN}}}\left( {\mathbf{X} + \mathbf{\alpha }}\right) - {g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X}\right)
839
+ $$
840
+
841
+ $$
842
+ = \sigma \left( {{\mathbf{z}}_{T}{\mathbf{W}}_{1}^{\top } + \left( {{\mathbf{z}}_{S} + \frac{1}{\left| V\right| }\mathop{\sum }\limits_{{i \in S}}{\left( {\mathbf{\epsilon }}_{S}\right) }_{ij}}\right) {\mathbf{W}}_{2}^{\top }}\right) - \sigma \left( {{\mathbf{z}}_{T}{\mathbf{W}}_{1}^{\top } + {\mathbf{z}}_{S}{\mathbf{W}}_{2}^{\top }}\right) \tag{75}
843
+ $$
844
+
845
+ $$
846
+ = \left( {\frac{1}{\left| V\right| }\mathop{\sum }\limits_{{i \in S}}{\left( {\mathbf{\epsilon }}_{S}\right) }_{ij}}\right) {\mathbf{W}}_{2}^{\top }\;\text{ if }{\mathbf{\epsilon }}_{0} = \mathbf{\alpha }
847
+ $$
848
+
849
+ and
850
+
851
+ $$
852
+ {\begin{Vmatrix}{\eta }_{S \rightarrow T}\end{Vmatrix}}_{1} = {\begin{Vmatrix}\left( \frac{1}{\left| V\right| }\mathop{\sum }\limits_{{i \in S}}{\left( {\mathbf{\epsilon }}_{S}\right) }_{ij}\right) {\mathbf{W}}_{2}^{\top }\end{Vmatrix}}_{1} = {\begin{Vmatrix}{\mathbf{W}}_{2}^{\top }\end{Vmatrix}}_{1}{\left( 1 + \tau \right) }^{L}\epsilon \frac{\left| S\right| }{\left| V\right| }d\;\text{ if }{\mathbf{\epsilon }}_{0} = \mathbf{\alpha } \tag{76}
853
+ $$
854
+
855
+ Similarly, we can get
856
+
857
+ $$
858
+ {\eta }_{T \rightarrow T} = \left( {\frac{1}{\left| V\right| }\mathop{\sum }\limits_{{i \in T}}{\left( {\mathbf{\epsilon }}_{T}\right) }_{ij}}\right) {\mathbf{W}}_{1}^{\top }.\;\text{ if }{\mathbf{\epsilon }}_{0} = \mathbf{\beta }, \tag{77}
859
+ $$
860
+
861
+ and
862
+
863
+ $$
864
+ {\begin{Vmatrix}{\eta }_{T \rightarrow T}\end{Vmatrix}}_{1} = {\begin{Vmatrix}{\mathbf{W}}_{1}^{\top }\end{Vmatrix}}_{1}{\left( 1 + \tau \right) }^{L}\epsilon \frac{\left| T\right| }{\left| V\right| }d\;\text{ if }{\epsilon }_{0} = \mathbf{\beta }, \tag{78}
865
+ $$
866
+
867
+ Then we can simply set $\frac{{\begin{Vmatrix}{\mathbf{W}}_{1}\end{Vmatrix}}_{1}}{{\begin{Vmatrix}{\mathbf{W}}_{2}\end{Vmatrix}}_{1}} = \frac{\left| S\right| }{\left| T\right| }$ , so that the ratio is 1.
868
+
869
+ Remark 2. The assumption of output-norm unification can be achieved by standard normalization techniques, such as batch and layer normalization. Lipschitz continuity holds for widely used activation functions such as ReLU, and most molecules can be modeled as quasi-standard graphs. These are therefore fair assumptions in graph learning. Although it is difficult to obtain a universally precise and tight bound, the existence of such bounds is still helpful for GNN structure design.
870
+
871
+ ### A.9 Graph segmentation
872
+
873
+ Because a graph has an irregular structure and contains rich structural information, forming patches on a graph is not as straightforward as segmenting images. Previous works $\left\lbrack {9,{20}}\right\rbrack$ generally split an image in Euclidean space, whereas graphs are segmented through spectral clustering based on their topology. Figure 8 shows the second eigenvector and patch segmentations based on the algorithm described in Section 3.2. It can be seen that the eigenvectors change along with the graph structures, and the graphs are split into several functional groups. Such patches are useful for discriminating the properties of the given molecule.
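As a rough sketch of this segmentation pipeline (assuming a connected graph; the two-triangle toy graph and the tiny deterministic k-means below are our own illustrative choices, standing in for a library k-means implementation):

```python
import numpy as np

def graph_patches(A, gamma=0.5):
    """Segment a graph into patches via spectral clustering: threshold the
    eigenvalues of the normalized Laplacian at gamma, then k-means on the
    corresponding eigenvectors."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    lam, U = np.linalg.eigh(L)            # eigenvalues in ascending order
    k = int(np.sum(lam <= gamma))         # number of patches
    X = U[:, :k]                          # spectral node embeddings
    # tiny deterministic k-means (illustrative stand-in for a library call)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(50):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == c].mean(0) for c in range(k)])
    return k, labels

# two triangles {0,1,2} and {3,4,5} joined by one bridge edge (2,3):
# the patches should recover the two triangles
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
k, labels = graph_patches(A)
```

With this graph the normalized Laplacian has exactly two eigenvalues below 0.5 (0 and roughly 0.2), so the threshold yields two patches, one per triangle.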
874
+
875
+ ### A.10 More results
876
+
877
+ Table 3 provides the performance of PatchGT on ogbg-moltox21 and ogbg-moltoxcast.
878
+
879
+ ### A.11 Datasets
880
+
881
+ Table 4 contains the statistics for the six datasets from Open Graph Benchmark (OGB) [13], and Table 5 contains the statistics for the six datasets from TU datasets [23].
882
+
883
+ ### A.12 Hyper-parameters selection
884
+
885
+ We report the detailed hyper-parameter settings used for training PatchGT in Table 6. The search space for $\lambda$ is $\{ {0.1},{0.2},{0.4},{0.5},{0.8}\}$ .
886
+
887
+ <table><tr><td/><td>Eigenvectors</td><td>Patch Segmentation</td></tr><tr><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=335&y=478&w=313&h=165&r=0"/> </td><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=721&y=445&w=336&h=232&r=0"/> </td><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=1195&y=466&w=167&h=214&r=0"/> </td></tr><tr><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=340&y=766&w=305&h=193&r=0"/> </td><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=712&y=745&w=343&h=232&r=0"/> </td><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=1137&y=761&w=288&h=219&r=0"/> </td></tr><tr><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=333&y=1043&w=315&h=237&r=0"/> </td><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=711&y=1040&w=345&h=241&r=0"/> </td><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=1156&y=1064&w=237&h=215&r=0"/> </td></tr><tr><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=354&y=1413&w=284&h=78&r=0"/> </td><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=717&y=1341&w=336&h=229&r=0"/> </td><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=1171&y=1354&w=215&h=211&r=0"/> </td></tr><tr><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=335&y=1648&w=313&h=192&r=0"/> </td><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=722&y=1631&w=328&h=230&r=0"/> </td><td> <img src="https://cdn.noedgeai.com/01963ee9-6afc-7b34-900f-7fc043203b0b_19.jpg?x=1190&y=1644&w=173&h=215&r=0"/> </td></tr></table>
888
+
889
+ Figure 8: Examples of eigenvectors, and graph patches for molecules.
890
+
891
+ Table 3: Results (%) on OGB datasets
892
+
893
+ <table><tr><td/><td>ogbg-moltox21</td><td>ogbg-moltoxcast</td></tr><tr><td>GCN +VN</td><td>${75.51} \pm {0.86}$</td><td>${66.33} \pm {0.35}$</td></tr><tr><td>GIN + VN</td><td>${76.21} \pm {0.82}$</td><td>${66.18} \pm {0.68}$</td></tr><tr><td>GRAPHSNN +VN</td><td>${76.78} \pm {1.27}$</td><td>${67.68} \pm {0.92}$</td></tr><tr><td>PatchGT-GCN</td><td>${76.49} \pm {0.93}$</td><td>${66.58} \pm {0.47}$</td></tr><tr><td>PatchGT-GIN</td><td>$\mathbf{{77.26}} \pm {0.80}$</td><td>67.95 ±0.55</td></tr></table>
894
+
895
+ Table 4: Statistics of OGB datasets
896
+
897
+ <table><tr><td>Name</td><td>#Graphs</td><td>#Nodes per graph</td><td>#Edges per graph</td><td>#Tasks</td></tr><tr><td>molhiv</td><td>41,127</td><td>25.5</td><td>27.5</td><td>1</td></tr><tr><td>molbace</td><td>1,513</td><td>34.1</td><td>36.9</td><td>1</td></tr><tr><td>molclintox</td><td>1,477</td><td>26.2</td><td>27.9</td><td>2</td></tr><tr><td>molsider</td><td>1,427</td><td>33.6</td><td>35.4</td><td>27</td></tr><tr><td>ogbg-moltox21</td><td>7,831</td><td>18.6</td><td>19.3</td><td>12</td></tr><tr><td>ogbg-moltoxcast</td><td>8,576</td><td>18.8</td><td>19.3</td><td>617</td></tr></table>
898
+
899
+ Table 5: Statistics of TU datasets
900
+
901
+ <table><tr><td>Name</td><td>#Graphs</td><td>#Nodes per graph</td><td>#Edges per graph</td></tr><tr><td>DD</td><td>1,178</td><td>284.3</td><td>715.7</td></tr><tr><td>MUTAG</td><td>188</td><td>17.9</td><td>19.8</td></tr><tr><td>PROTEINS</td><td>1,113</td><td>39.1</td><td>72.8</td></tr><tr><td>PTC-MR</td><td>344</td><td>14.3</td><td>14.7</td></tr><tr><td>ENZYMES</td><td>600</td><td>32.6</td><td>62.1</td></tr><tr><td>Mutagenicity</td><td>4,337</td><td>30.3</td><td>30.8</td></tr></table>
902
+
903
+ Table 6: Model Configurations and Hyper-parameters
904
+
905
+ <table><tr><td/><td>OGB</td><td>TU</td></tr><tr><td>#GNN layers</td><td>5</td><td>4</td></tr><tr><td>$\#$ patch-GNN layers</td><td>2</td><td>2</td></tr><tr><td>Embedding Dropout</td><td>0.0</td><td>0.1</td></tr><tr><td>Hidden Dimension $d$</td><td>512</td><td>256</td></tr><tr><td>#Attention Heads</td><td>16</td><td>4</td></tr><tr><td>Attention Dropout</td><td>0.1</td><td>0.1</td></tr><tr><td>Batch Size</td><td>512</td><td>256</td></tr><tr><td>Learning Rate</td><td>1e-4</td><td>1e-4</td></tr><tr><td>Max Epochs</td><td>150</td><td>50</td></tr><tr><td>eigenvalue threshold $\lambda$</td><td colspan="2">$\{ {0.1},{0.2},{0.4},{0.5},{0.8}\}$</td></tr></table>
906
+
907
+ ### A.13 Visualization of attention on nodes
908
+
909
+ Figure 9 shows more attention visualizations on graphs. We notice that some patches the model concentrates on are far away from each other. This can help address the information bottleneck in the graph, and it also provides more model interpretability.
910
+
911
+ ### A.14 Analysis of the computational complexity
912
+
913
+ We compare our computational complexity with that of the node-level Transformer, Graphormer [35]. The computational complexity of both frameworks can be divided into two parts. The first part is extracting graph structure information. For PatchGT, the complexity is $O\left( {\left| V\right| }^{3}\right)$ for calculating the eigenvectors and performing $k$ -means for $k$ patches. For Graphormer, the complexity is $O\left( {\left| V\right| }^{4}\right)$ due to the pairwise shortest-path computation between nodes.
914
+
915
+ ![01963ee9-6afc-7b34-900f-7fc043203b0b_21_340_208_1149_948_0.jpg](images/01963ee9-6afc-7b34-900f-7fc043203b0b_21_340_208_1149_948_0.jpg)
916
+
917
+ Figure 9: Attention visualization of PatchGT on ogbg-molhiv molecules.
918
+
919
+ Remark 3. The software and algorithms of eigen-decomposition are being widely developed in many disciplines [10]. The complexity can be reduced to $O\left( {\left| V\right| }^{2}\right)$ if a partial query and approximation of eigenvectors and eigenvalues are allowed [8, 27]. Moreover, spectral clustering does not require all eigenvectors, nor their exact values.
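As an illustration of this remark, SciPy's shift-invert `eigsh` can query just the few smallest eigenpairs of a sparse normalized Laplacian without a full dense decomposition; the path graph and parameter values below are our own illustrative choices:

```python
import numpy as np
from scipy.sparse import csr_matrix, diags, identity
from scipy.sparse.linalg import eigsh

# Normalized Laplacian of a path graph on n nodes, kept sparse throughout.
n = 200
A = csr_matrix((np.ones(n - 1), (np.arange(n - 1), np.arange(1, n))), shape=(n, n))
A = A + A.T
d = np.asarray(A.sum(axis=1)).ravel()
Dinv = diags(1.0 / np.sqrt(d))
L = identity(n) - Dinv @ A @ Dinv

# Query only the 4 smallest eigenpairs via shift-invert, instead of a
# full dense O(|V|^3) decomposition of the whole spectrum.
vals, vecs = eigsh(L, k=4, sigma=-1e-2)
vals = np.sort(vals)
```

The smallest eigenvalue is 0 for a connected graph, and the partial query agrees with the dense decomposition on the eigenpairs it returns.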
920
+
921
+ The second part is the neural network computation. For PatchGT, the complexity is $O\left( \left| E\right| \right)$ for the GNN if the adjacency matrix is sparse and $O\left( {k}^{2}\right)$ for the Transformer. For Graphormer, the complexity of the Transformer is $O\left( {\left| V\right| }^{2}\right)$ . It should be noted that for a large graph, $k \ll \left| V\right|$ . Overall, the complexity of the patch-level Transformer is significantly less than that of applying a Transformer directly at the node level.
922
+
923
+ Other hierarchical pooling methods also need $O\left( {L\left| E\right| }\right)$ to learn the segmentation ($L$ is the number of layers used in the GNN), which is comparable to spectral clustering. Moreover, spectral clustering is easier to parallelize.
papers/LOG/LOG 2022/LOG 2022 Conference/Vbfr1jiMxYS/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,353 @@
1
+ § PATCHGT: TRANSFORMER OVER NON-TRAINABLE CLUSTERS FOR LEARNING GRAPH REPRESENTATIONS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Recently the Transformer structure has shown good performances in graph learning tasks. However, these Transformer models directly work on graph nodes and may have difficulties learning high-level information. Inspired by the vision Transformer, which operates on image patches, we propose a new Transformer-based graph neural network: Patch Graph Transformer (PatchGT). Unlike previous transformer-based models for learning graph representations, PatchGT learns from non-trainable graph patches, not from nodes directly. It can help save computation and improve the model performance. The key idea is to segment a graph into patches based on spectral clustering without any trainable parameters, with which the model can first use GNN layers to learn patch-level representations and then use Transformer to obtain graph-level representations. The architecture leverages the spectral information of graphs and combines the strengths of GNNs and Transformers. Further, we show the limitations of previous hierarchical trainable clustering methods theoretically and empirically. We also prove that the proposed non-trainable spectral clustering method is permutation invariant and can help address information bottlenecks in the graph. PatchGT achieves higher expressiveness than 1-WL-type GNNs, and the empirical study shows that PatchGT achieves competitive performances on benchmark datasets and provides interpretability to its predictions.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Learning from graph data is ubiquitous in applications such as drug design [14] and social network analysis [34]. The success of a graph learning task hinges on effective extraction of information from graph structures, which often contain combinatorial structures and are highly complex. Early works [7] often need to manually extract features from graphs before applying learning models. In the era of deep learning, Graph Neural Networks (GNNs) [32] are developed to automatically extract information from graphs. Through passing learnable messages between nodes, they are able to encode graph information into vector representations of graph nodes. GNNs have become the standard tool for learning tasks on graph data.
16
+
17
+ While they have achieved good performances in a wide range of tasks, GNNs still have a few limitations. For example, GNNs suffer from issues such as inadequate expressiveness [33], over-smoothing [26], and over-squashing [2]. These issues have been partially addressed by techniques such as improving message-passing functions and expanding node features [5, 19].
18
+
19
+ Another important line of progress is to replace the message-passing network with the Transformer architecture $\left\lbrack {6,{17},{22},{35}}\right\rbrack$ . These models treat graph nodes as tokens and apply the Transformer architecture to nodes directly. Their main focus is how to encode node information and how to incorporate adjacency matrices into network calculations. Without the message-passing structure, these models may overcome some associated issues and have shown premium performances in various graph learning tasks. However, they suffer from high computational complexity because of the global attention over all nodes, and it is hard for them to capture the topological information of graphs.
20
+
21
+ As a comparison, the Transformer for image data works on image patches instead of pixels [9, 20]. While this model choice is justified by reduction of computation cost, recent work [29] shows that
22
+
23
+ "patch representation itself may be a critical component to the 'superior' performance of newer architectures like Vision Transformers". One intriguing question is whether patch representation can also improve learning models on graphs. With this question, we consider patches on graphs. Patches over graphs are justified by a "mid-level" understanding of graphs: for example, a molecule graph's property is often decided by some function groups, each of which is a subgraph formed by locally-connected atoms. Therefore, patch representations are able to capture such mid-level concepts and bridge the gap between low-level structures to high-level semantics.
24
+
25
+ Motivated by our question, we propose a new framework, Patch Graph Transformer (PatchGT). It first segments a graph into patches based on spectral clustering, which is a non-trainable segmentation method, then applies GNN layers to learn patch representations, and finally uses Transformer layers to learn a graph-level representation from patch representations. This framework combines the strengths of two types of learning architectures: GNN layers can extract information with message passing, while Transformer layers can aggregate information using the attention mechanism. To the best of our knowledge, we are the first to show several limitations of previous trainable GNN-based clustering methods. We also show that the proposed non-trainable clustering provides more reasonable patches and helps overcome the information bottleneck in graphs.
26
+
27
+ We justify our model architecture with theoretical analysis. We show that our patch structure derived from spectral clustering is superior to patch structures learned by GNNs [4, 12, 36]. We also propose a new mathematical description of the information bottleneck in vanilla GNNs and further show that our architecture has the ability of mitigating this issue when graphs have small graph cuts.
28
+
29
+ We run an extensive empirical study and demonstrate that the proposed model outperforms competing methods on a list of graph learning tasks. The ablation study shows that our PatchGT is able to combine the strengths of GNN layers and Transformer layers. The attention weights in Transformer layers also provide explanations for model predictions.
30
+
31
+ § 2 RELATED WORK
32
+
33
+ Transformer models have gained remarkable successes in NLP applications [15]. Recently, they have also been introduced to vision tasks [9] and graph tasks [6, 17, 22, 35]. These models all treat nodes as tokens. In particular, memory-based graph networks [1] apply a hierarchical attention pooling method on the nodes. Therefore, these models are difficult to apply to large graphs because of their huge computational complexity. At the same time, image patches have been shown to be useful for Transformer models on image data $\left\lbrack {9,{29}}\right\rbrack$ , so it is not surprising if graph patches are also helpful to Transformer models on graph data. Graph multiset pooling [3] applies a trainable, GNN-based pooling method on the nodes and then adopts a global attention layer on the learned clusters. In this work, we will show that such trainable clustering has several limitations for the attention mechanism.
34
+
35
+ Hierarchical pooling models $\left\lbrack {4,{11},{12},{18},{25},{36}}\right\rbrack$ are relevant to our work in that they also aggregate information from node representations in the middle layers of networks. However, these methods all form their pooling structures based on representations learned from GNNs. As a result, the pooling structures inherit drawbacks from GNNs [33]. They may also aggregate nodes that are far apart on the graph and thus cannot preserve the global structure of the input graph. Such trainable clustering methods also require substantial computation for training. Furthermore, our main purpose is to use non-trainable patches on graphs as tokens for a Transformer model, which is different from these models.
36
+
37
+ § 3 PATCH GRAPH TRANSFORMER
38
+
39
+ § 3.1 BACKGROUND
40
+
41
+ In this work, we consider graph-level learning problems. Let $G = \left( {V,E}\right)$ denote a graph with node set $V$ and edge set $E$ . Let $\mathbf{A}$ denote its adjacency matrix. The graph has both node features $\mathbf{X} = \left( {{\mathbf{x}}_{i} \in {\mathbb{R}}^{d} : i \in V}\right)$ and edge features $\mathbf{E} = \left( {{\mathbf{e}}_{i,j} \in {\mathbb{R}}^{{d}^{\prime }} : \left( {i,j}\right) \in E}\right)$ . Let $y$ denote the label of the graph. This work aims to learn a model that maps $\left( {\mathbf{A},\mathbf{X},\mathbf{E}}\right)$ to a vector representation $\mathbf{g}$ , which is then used to predict the graph label $y$ .
42
+
43
+ GNN layers. A GNN uses node vectors to represent structural information of the graph. It consists of multiple GNN layers. Each GNN layer passes learnable messages and updates node vectors. Suppose $\mathbf{H} = \left( {{\mathbf{h}}_{i} \in {\mathbb{R}}^{{d}^{\prime \prime }} : i \in V}\right)$ are the node vectors; then a typical GNN layer updates $\mathbf{H}$ as follows.
44
+
45
+ $$
46
+ {\mathbf{h}}_{i}^{\prime } = \sigma \left( {{\mathbf{W}}_{1}{\mathbf{h}}_{i} + \mathop{\sum }\limits_{{j : \left( {i,j}\right) \in E}}{\mathbf{W}}_{2}{\mathbf{h}}_{j} + {\mathbf{W}}_{3}{\mathbf{e}}_{i,j}}\right) \tag{1}
47
+ $$
48
+
49
50
+
51
+ Figure 1: Model review. We segment a graph into several patch subgraphs by non-trainable clustering. We first extract local information through a GNN, and the initial patch representations are summarized by the aggregation of nodes within the corresponding patches. To further encode structure information, we apply another patch-level GNN to update the representations of patches. Finally, we use Transformer to extract the representation of the entire graph based on patch representations.
52
+
53
+ Here matrices $\left( {{\mathbf{W}}_{1},{\mathbf{W}}_{2},{\mathbf{W}}_{3}}\right)$ are all learnable parameters; and $\sigma$ is the activation function. We denote the layer function by ${\mathbf{H}}^{\prime } = \operatorname{GNN}\left( {\mathbf{A},\mathbf{E},\mathbf{H}}\right)$ . If there are no edge features, then the calculation can be written in matrix form.
54
+
55
+ $$
56
+ {\mathbf{H}}^{\prime } = \sigma \left( {{\mathbf{{HW}}}_{1}^{\top } + {\mathbf{{AHW}}}_{2}^{\top }}\right) \tag{2}
57
+ $$
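A minimal NumPy sketch of this matrix form, taking ReLU as $\sigma$ (the toy path graph and the dimensions are our own illustrative choices):

```python
import numpy as np

def gnn_layer(A, H, W1, W2):
    """One message-passing layer in matrix form, Eq. (2), with ReLU as sigma:
    H' = relu(H W1^T + A H W2^T)."""
    return np.maximum(H @ W1.T + A @ H @ W2.T, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph 0-1-2
H = rng.normal(size=(3, 4))                                   # 4-dim node features
W1 = rng.normal(size=(5, 4))                                  # project to 5-dim output
W2 = rng.normal(size=(5, 4))
H_new = gnn_layer(A, H, W1, W2)
```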
58
+
59
+ § 3.2 MODEL DESIGN
60
+
61
+ PatchGT has three components: segmenting the input graph into patches, learning patch representations, and aggregating patch representations into a single graph vector. The overall architecture is shown in Figure 1. The second and third steps are in an end-to-end learning model. Graph segmentation is outside of the learning model, which will be justified by our theoretical analysis later.
62
+
63
+ Forming patches over the graph. We first discuss how to form patches on a graph. One consideration is to include an informative subgraph (e.g., a function group, a motif) into a single patch instead of segmenting it into pieces. A reasonable approach is to run node clustering on the input graph and treat each cluster as a graph patch. If a meaningful subgraph is densely connected, it has a good chance of being contained in a single cluster.
64
+
65
+ In this work, we consider spectral clustering [28, 37] for graph segmentation. Let $\mathbf{L} = \mathbf{I} - {\mathbf{D}}^{-1/2}\mathbf{A}{\mathbf{D}}^{-1/2}$ be the normalized Laplacian matrix of $G$ , with eigen-decomposition $\mathbf{L} = \mathbf{U}\mathbf{\Lambda }{\mathbf{U}}^{\top }$ , where the eigenvalues $\mathbf{\Lambda } = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{\left| V\right| }}\right)$ are sorted in ascending order. By thresholding the eigenvalues with a small threshold $\gamma$ , we get $k = \max \left\{ {k}^{\prime } : {\lambda }_{{k}^{\prime }} \leq \gamma \right\}$ eigenvectors ${\mathbf{U}}_{1 : k}$ , and then we run $k$ -means to get $k$ clusters (denoted by $\mathcal{P}$ ) of graph nodes. Here $\mathcal{P} = \left\{ {{C}_{{k}^{\prime }} \subseteq V : {k}^{\prime } = 1,\ldots ,k}\right\}$ with each ${C}_{{k}^{\prime }}$ representing a cluster/patch. Note that the threshold $\gamma$ is a hyper-parameter, and $k$ varies depending on the underlying graph’s topology.
66
+
67
+ Computing patch representations. When we learn representations of patches in $\mathcal{P}$ , we consider both node connections within the patch and also connections between patches. Patches form a coarse graph, which is also referred to as a patch-level graph, by treating patches as nodes and their connections as edges. We first learn node representations using GNN layers. Let ${\mathbf{H}}_{0} = \mathbf{X}$ denote the initial representations of all nodes. Then we apply ${L}_{1}$ GNN layers to get node representations ${\mathbf{H}}_{{L}_{1}}$ .
68
+
69
+ $$
70
+ {\mathbf{H}}_{\ell } = \operatorname{GNN}\left( {\mathbf{A},\mathbf{E},{\mathbf{H}}_{\ell - 1}}\right) ,\ell = 1,\ldots ,{L}_{1} \tag{3}
71
+ $$
72
+
73
+ Here for easier discussion, we apply GNN layers to the entire graph. We have also tried to apply GNN layers within each patch only and found that the performance is similar.
74
+
75
+ Then we read out the initial patch representation by summarizing representations of nodes within this patch. Let ${\mathbf{z}}_{{k}^{\prime }}^{0}$ denote the initial patch representation, then
76
+
77
+ $$
78
+ {\mathbf{z}}_{{k}^{\prime }}^{0} = \frac{\left| {C}_{{k}^{\prime }}\right| }{\left| V\right| } \cdot \operatorname{readout}\left( {{\mathbf{h}}_{i}^{{L}_{1}} : i \in {C}_{{k}^{\prime }}}\right) ,{k}^{\prime } = 1,\ldots ,k \tag{4}
79
+ $$
80
+
81
+ Here ${\mathbf{h}}_{i}^{{L}_{1}}$ is node $i$ ’s representation in ${\mathbf{H}}_{{L}_{1}}$ . We collectively denote these patch representations in a matrix ${\mathbf{Z}}_{0} = \left( {{\mathbf{z}}_{{k}^{\prime }}^{0} : {k}^{\prime } = 1,\ldots ,k}\right)$ . The readout function $\operatorname{readout}\left( \cdot \right)$ aggregates information from a set of vectors; our implementation uses max pooling. We use the factor $\frac{\left| {C}_{{k}^{\prime }}\right| }{\left| V\right| }$ to assign proper weights to patch representations.
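A sketch of this weighted readout with max pooling, Eq. (4) (the toy node features and the two-patch partition are hypothetical):

```python
import numpy as np

def patch_readout(H, clusters, n_nodes):
    """Initial patch representations, Eq. (4): max-pool the node vectors in
    each cluster C, weighted by the cluster's share of nodes |C| / |V|."""
    return np.stack([(len(C) / n_nodes) * H[sorted(C)].max(axis=0)
                     for C in clusters])

H = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 1.0],
              [0.5, 0.5]])           # toy node representations H_{L1}
clusters = [{0, 1, 2}, {3}]          # a two-patch partition P
Z0 = patch_readout(H, clusters, n_nodes=4)
```

Here the first patch pools to [3, 2] and is weighted by 3/4, giving [2.25, 1.5]; the singleton patch is weighted by 1/4.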
82
+
83
+ To further refine patch representations and encode structural information of the entire graph, we apply further GNN layers to the patch-level graph formed by patches. We first compute the adjacency matrix $\widetilde{\mathbf{A}}$ of the patch-level graph. If we convert the partition $\mathcal{P}$ to an assignment matrix $\mathbf{S} = \left( {{S}_{i,{k}^{\prime }} : i \in V,{k}^{\prime } = 1,\ldots ,k}\right)$ such that ${S}_{i,{k}^{\prime }} = 1\left\lbrack {i \in {C}_{{k}^{\prime }}}\right\rbrack$ , then the adjacency matrix over patches is
84
+
85
+ $$
86
+ \widetilde{\mathbf{A}} = 1\left\lbrack {\left( {{\mathbf{S}}^{\top }\mathbf{A}\mathbf{S}}\right) > 0}\right\rbrack . \tag{5}
87
+ $$
88
+
89
+ Note that $\widetilde{\mathbf{A}}$ only has connections between patches and does not maintain connection strength.
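This construction can be written directly in NumPy (the toy graph and labels are our own; note that the diagonal of ${\mathbf{S}}^{\top }\mathbf{A}\mathbf{S}$ counts intra-patch edges, so the literal form of Eq. (5) keeps self-connections on patches):

```python
import numpy as np

def patch_adjacency(A, labels, k):
    """Patch-level adjacency, Eq. (5): A~ = 1[S^T A S > 0], where S is the
    one-hot node-to-patch assignment matrix built from cluster labels."""
    S = np.eye(k)[labels]
    return (S.T @ A @ S > 0).astype(float)

# two triangles {0,1,2} and {3,4,5} joined by the bridge edge (2,3)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
labels = np.array([0, 0, 0, 1, 1, 1])
At = patch_adjacency(A, labels, k=2)      # patches connected via the bridge
A2 = A.copy(); A2[2, 3] = A2[3, 2] = 0.0
At2 = patch_adjacency(A2, labels, k=2)    # bridge removed: patches disconnected
```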
90
+
91
+ We then use ${L}_{2}$ GNN layers to refine the patch representations.
92
+
93
+ $$
94
+ {\mathbf{Z}}_{\ell } = \operatorname{GNN}\left( {\widetilde{\mathbf{A}},\mathbf{0},{\mathbf{Z}}_{\ell - 1}}\right) ,\;\ell = 1,\ldots ,{L}_{2} \tag{6}
95
+ $$
96
+
97
+ GNN layers here do not have edge features. From the last layer, we get the patch representations ${\mathbf{Z}}_{{L}_{2}}$ .
98
+
99
+ Graph representation via Transformer layers. Then we use ${L}_{3}$ Transformer layers to extract the representation of the entire graph. Here we use a learnable query vector ${\mathbf{q}}_{0}$ to "retrieve" the global representation $\mathbf{g}$ of the graph from patch representations ${\mathbf{Z}}_{{L}_{2}}$ .
100
+
101
+ $$
102
+ {\mathbf{q}}_{\ell }^{\prime } = \operatorname{MHA}\left( {{\mathbf{q}}_{\ell - 1},{\mathbf{Z}}_{{L}_{2}},{\mathbf{Z}}_{{L}_{2}}}\right) ,\;\ell = 1,\ldots ,{L}_{3} \tag{7}
103
+ $$
104
+
105
+ $$
106
+ {\mathbf{q}}_{\ell } = \operatorname{MLP}\left( {\mathbf{q}}_{\ell }^{\prime }\right) + {\mathbf{q}}_{\ell - 1},\;\ell = 1,\ldots ,{L}_{3} \tag{8}
107
+ $$
108
+
109
+ $$
110
+ \mathbf{g} = \operatorname{LN}\left( {\mathbf{q}}_{{L}_{3}}\right) \tag{9}
111
+ $$
112
+
113
+ Here $\operatorname{MHA}\left( {\cdot ,\cdot , \cdot }\right)$ is the function of a multi-head attention layer (please refer to Chp. 10 of [38]); its three arguments are the query, key, and value. The two functions $\operatorname{MLP}\left( \cdot \right)$ and $\operatorname{LN}\left( \cdot \right)$ are respectively a multi-layer perceptron and a linear layer. Note that the patch representations ${\mathbf{Z}}_{{L}_{2}}$ are carried through without being updated; only the query token is updated to query information from the patch representations. The final learned graph representation is $\mathbf{g}$ , from which we can perform various graph-level tasks.
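A single-head, NumPy-only sketch of this retrieval step from Eqs. (7)-(8), omitting the multi-head structure, MLP, and residual connection for brevity (the identity projections and random inputs are purely illustrative):

```python
import numpy as np

def attention_readout(q, Z, Wq, Wk, Wv):
    """Single-head version of the retrieval step: a query token attends over
    fixed patch representations Z; Z itself is never updated."""
    scores = (q @ Wq) @ (Z @ Wk).T / np.sqrt(Wk.shape[1])
    w = np.exp(scores - scores.max())
    w = w / w.sum()                      # softmax attention weights over patches
    return w @ (Z @ Wv)                  # weighted sum of patch values

rng = np.random.default_rng(1)
k, d = 4, 8
Z = rng.normal(size=(k, d))              # patch representations Z_{L2}
q = rng.normal(size=d)                   # learnable query token q_0
Wq = Wk = Wv = np.eye(d)                 # identity projections, illustration only
g = attention_readout(q, Z, Wq, Wk, Wv)
```

With identity value projections, the retrieved vector is a convex combination of the patch representations.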
114
+
115
+ § 4 THEORETICAL ANALYSIS
116
+
117
+ In this section, we study the theoretical properties of the proposed model. To save space, we put all proofs in the appendix.
118
+
119
+ § 4.1 ENHANCING MODEL EXPRESSIVENESS WITH PATCHES
120
+
121
+ On purpose, we form graph patches using a clustering method that is not part of the neural network. An alternative is to learn such cluster assignments with GNNs (e.g., DiffPool [36] and MinCutPool [4]). However, cluster assignments learned by GNNs inherit the limitations of GNNs and hinder the expressiveness of the entire model.
122
+
123
+ Theorem 1. Suppose two graphs receive the same coloring by 1-WL algorithm, then DiffPool will compute the same vector representation for them.
124
+
125
+ Although DiffPool and MinCutPool claim to cluster "similar" graph nodes during pooling, these nodes may not be connected. Because of the limitation of GNNs, they may aggregate nodes that are far apart in the graph. For example, nodes in the same orbit always get the same color from the 1-WL algorithm and thus the same representations from a GNN, so these nodes always have the same cluster assignment. Merging such nodes into the same cluster does not seem to capture the high-level structure of a graph.
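This 1-WL limitation is easy to reproduce with a small color-refinement routine: a hexagon and a disjoint pair of triangles (both 2-regular) receive identical color histograms even though they are not isomorphic, while spectral clustering trivially separates the two triangles. The graph encodings below are our own:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL color refinement: repeatedly replace each node's color by a
    canonical label of (own color, sorted multiset of neighbor colors)."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: relabel[sig[v]] for v in adj}
    return Counter(colors.values())       # color histogram of the graph

hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
```

A GNN bounded by 1-WL computes the same pooled representation for these two graphs, whereas a path and a triangle, say, are distinguished immediately by their degree histograms.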
126
+
127
128
+
129
+ Figure 2: Pooling methods on a pair of graphs that cannot be distinguished by the 1-WL algorithm (nodes are colored by the 1-WL algorithm).
130
+
131
132
+
133
+ Figure 3: It is hard for a GNN to push signal from one graph cluster to the other, but a patch-level GNN can do so with patch representations.
134
+
135
+ Another prominent pooling method is Graph U-Net [11], which has similar issues. We briefly introduce its calculation here. Suppose the layer input is $\left( {\mathbf{A},\mathbf{H}}\right)$ . The model’s pooling layer projects $\mathbf{H}$ onto a unit vector $\mathbf{p}$ and gets values $\mathbf{v} = \mathbf{H}\mathbf{p}$ for all nodes; it then chooses the $k$ nodes with the largest values in $\mathbf{v}$ and keeps only their representations. We will show that this approach is NOT invariant to node orders.
136
+
137
+ We also consider a small variant of Graph U-Net for convenience of analysis. Instead of choosing the $k$ nodes with the top values in $\mathbf{v}$ , the variant uses a threshold $\beta$ (either learnable or a hyper-parameter) to choose nodes: $\mathbf{b} = \mathbf{v} \geq \beta$ . The output of the layer is then $\left( {\mathbf{A}\left\lbrack {\mathbf{b},\mathbf{b}}\right\rbrack ,\mathbf{H}\left\lbrack \mathbf{b}\right\rbrack }\right)$ . We call the variant with thresholding Graph U-Net-th, and we show that it is also bounded by the 1-WL algorithm.
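Both pooling rules can be sketched in a few lines (the toy inputs are hypothetical; note that `argsort` breaks ties by node index, which is precisely the source of the order-dependence discussed above):

```python
import numpy as np

def unet_pool_topk(A, H, p, k):
    """Graph U-Net pooling: score nodes by v = Hp, keep the top-k scorers.
    With tied scores, the kept set depends on the node ordering."""
    idx = np.argsort(-(H @ p))[:k]
    return A[np.ix_(idx, idx)], H[idx]

def unet_pool_threshold(A, H, p, beta):
    """The thresholded variant (Graph U-Net-th): keep nodes with v >= beta."""
    b = (H @ p) >= beta
    return A[b][:, b], H[b]

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path 0-1-2
H = np.array([[2.0], [1.0], [0.0]])
p = np.array([1.0])                       # unit projection vector
A_th, H_th = unet_pool_threshold(A, H, p, beta=1.0)
A_k, H_k = unet_pool_topk(A, H, p, k=2)   # same surviving nodes here
```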
138
+
139
+ Theorem 2. Suppose two graphs receive the same coloring by 1-WL algorithm, then Graph U-Net-th will compute the same vector representation for them.
140
+
141
+ The two theorems strongly indicate that pooling structures learned by GNNs have the same drawback. We provide detailed analysis for Graph U-Net in Appendix A.3.
142
+
143
+ The spectral clustering algorithm injects structural information into the model and thus has a strength that a GNN lacks. By combining the two, our PatchGT can help ease the 1-WL limit on expressiveness associated with trainable pooling methods. We also prove that there exist non-isomorphic graphs that PatchGT can distinguish but the 1-WL algorithm cannot.
144
+
145
+ § 4.2 PERMUTATION INVARIANCE
+
+ Our model depends on the patch structure formed by the clustering algorithm, which in turn depends on the spectral decomposition of the normalized Laplacian. Note that the spectral decomposition is not unique, but we show that the clustering result is not affected by the sign ambiguity and eigenvalue multiplicities of the decomposition, and our model remains invariant to node permutations.
+
+ Theorem 3. The network function of PatchGT is invariant to node permutations.
+
+ § 4.3 ADDRESSING INFORMATION BOTTLENECK WITH PATCH REPRESENTATIONS
+
+ Alon et al. [2] recently characterized the issue of information bottleneck in GNNs through empirical methods. Here we consider this issue in the special case where a graph consists of loosely connected node clusters; note that molecule graphs often have this property. We make the first attempt to characterize the information bottleneck through theoretical analysis, and we further show that our PatchGT can partially address this issue.
+
+ For ease of analysis, we consider a regular graph with degree $\tau$. Suppose the node set $V$ of $G$ forms two clusters $S$ and $T$: $V = S \cup T, S \cap T = \varnothing$, and there are only $m$ edges between $S$ and $T$.
+
+ We consider the difficulty of passing a signal from $S$ to $T$. Let ${f}^{\mathrm{{GNN}}}\left( \cdot \right)$ denote the network function of a GNN of $L$ layers with ReLU activation $\sigma$ as in (2), and input $\mathbf{X} = \left( {{\mathbf{x}}_{i} \in {\mathbb{R}}^{d} : i \in V}\right) \in {\mathbb{R}}^{\left| V\right| \times d}$, which contains the $d$-dimensional feature inputs to nodes in $G$. Let ${f}_{i}^{\mathrm{{GNN}}}\left( \cdot \right)$ be the output at node $i$. We can ask this question: if we perturb the input to nodes in $S$, how much impact can we observe at the output at nodes in $T$? We need to avoid the case where the impact is amplified by scaling up network parameters: in real applications, scaling up network parameters also amplifies signals within $T$ itself, and the signal from $S$ still cannot be well received. We therefore consider the relative impact: the ratio between the impact on $T$ from $S$ and that from $T$ itself.
+
+ < g r a p h i c s >
+
+ Figure 4: Segmentation results from spectral clustering and trainable clustering.
+
+ Let $\mathbf{\alpha } \in {\mathbb{R}}^{\left| V\right| \times d}$ be some perturbation on $S$ such that ${\alpha }_{ij} \leq \epsilon$ if $i \in S$ and ${\alpha }_{ij} = 0$ otherwise, where $\epsilon$ is the scale of the perturbation. Similarly, let $\mathbf{\beta } \in {\mathbb{R}}^{\left| V\right| \times d}$ be some perturbation on $T$: ${\beta }_{ij} \leq \epsilon$ if $i \in T$ and ${\beta }_{ij} = 0$ otherwise. Then the impacts on the node representations ${f}_{i}^{\mathrm{{GNN}}}, i \in T$ from $\mathbf{\alpha }$ and $\mathbf{\beta }$ are respectively
+
+ $$
+ {\delta }_{S \rightarrow T} = \mathop{\max }\limits_{\mathbf{\alpha }}\mathop{\sum }\limits_{{i \in T}}{\begin{Vmatrix}{f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X} + \mathbf{\alpha }\right) - {f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) \end{Vmatrix}}_{1} \tag{10}
+ $$
+
+ $$
+ {\delta }_{T \rightarrow T} = \mathop{\max }\limits_{\mathbf{\beta }}\mathop{\sum }\limits_{{i \in T}}{\begin{Vmatrix}{f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X} + \mathbf{\beta }\right) - {f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) \end{Vmatrix}}_{1} \tag{11}
+ $$
+
+ where the maximum is also taken over all learnable parameters with ${\begin{Vmatrix}{\mathbf{W}}_{1}\end{Vmatrix}}_{{L}_{1} \rightarrow {L}_{1}},{\begin{Vmatrix}{\mathbf{W}}_{2}\end{Vmatrix}}_{{L}_{1} \rightarrow {L}_{1}} \leq 1$ as in (2). Then we have the following proposition bounding the ratio ${\delta }_{S \rightarrow T}/{\delta }_{T \rightarrow T}$.
+
+ Proposition 1. Given a $\tau$-regular graph $G$, a node subset $S$ with its complement $T$ such that there are only $m$ edges between $S$ and $T$, and an $L$-layer GNN, it holds that
+
+ $$
+ \frac{{\delta }_{S \rightarrow T}}{{\delta }_{T \rightarrow T}} \leq \frac{2mL}{\left| T\right| } \tag{12}
+ $$
+
+ The proposition indicates that a small graph cut between two clusters forms an information bottleneck in a GNN: the network needs more layers to pass a signal from one group to the other. The bound is still conservative: if the signal is extracted in the middle layers of the network, passing the signal is even harder. The proposition is illustrated in Figure 3.
+
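For a quick numerical reading of the bound $2mL/|T|$ (a hypothetical helper, not part of the paper's code):

```python
def bottleneck_bound(m, L, T_size):
    """Upper bound 2mL/|T| on the relative impact delta_{S->T}/delta_{T->T}
    from Proposition 1 (illustrative helper)."""
    return 2 * m * L / T_size

# A cut of m = 2 edges, a 4-layer GNN, and |T| = 100 nodes: the signal
# from S accounts for at most a 0.16 relative impact on T.
ratio_cap = bottleneck_bound(m=2, L=4, T_size=100)
```

A small cut $m$ and a large cluster $T$ drive the cap toward zero, which is exactly the bottleneck regime.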
+ In our PatchGT model, communication can happen on the coarse graph, which partially addresses this issue. The coarse graph $\widetilde{\mathbf{A}}$ consists of two nodes (we still denote them by $S,T$), and there is an edge between $S$ and $T$. From the output ${f}^{\mathrm{{GNN}}}$, we construct the patch representations $\left( {{\mathbf{z}}_{S},{\mathbf{z}}_{T}}\right) = \left( {\frac{1}{\left| V\right| }\mathop{\sum }\limits_{{i \in S}}{f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) ,\frac{1}{\left| V\right| }\mathop{\sum }\limits_{{i \in T}}{f}_{i}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) }\right) \in {\mathbb{R}}^{2 \times d}$. Then we apply a GNN layer to get node representations on the coarse graph $\left( {{g}_{S}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) ,{g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) }\right) \in {\mathbb{R}}^{2 \times d}$ :
+
+ $$
+ {g}_{S}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) = \sigma \left( {{\mathbf{z}}_{S}{\mathbf{W}}_{1}^{\top } + {\mathbf{z}}_{T}{\mathbf{W}}_{2}^{\top }}\right) ,\;{g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) = \sigma \left( {{\mathbf{z}}_{T}{\mathbf{W}}_{1}^{\top } + {\mathbf{z}}_{S}{\mathbf{W}}_{2}^{\top }}\right) , \tag{13}
+ $$
+
+ where ${\mathbf{W}}_{1},{\mathbf{W}}_{2} \in {\mathbb{R}}^{d \times d}$ are learnable parameters. To measure the impact of $\mathbf{\alpha }$ on our PatchGT, let
+
+ $$
+ {\eta }_{S \rightarrow T} = \mathop{\max }\limits_{\mathbf{\alpha }}{\begin{Vmatrix}{g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X} + \mathbf{\alpha }\right) - {g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) \end{Vmatrix}}_{1} \tag{14}
+ $$
+
+ $$
+ {\eta }_{T \rightarrow T} = \mathop{\max }\limits_{\mathbf{\beta }}{\begin{Vmatrix}{g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X} + \mathbf{\beta }\right) - {g}_{T}^{\mathrm{{GNN}}}\left( \mathbf{X}\right) \end{Vmatrix}}_{1}, \tag{15}
+ $$
+
+ Then we have the following result on the ratio ${\eta }_{S \rightarrow T}/{\eta }_{T \rightarrow T}$.
+
+ Theorem 4. The ratio $\frac{{\eta }_{S \rightarrow T}}{{\eta }_{T \rightarrow T}}$ can be arbitrarily close to 1 in a PatchGT model.
+
+ This is because $S$ and $T$ are direct neighbors in the coarse graph, so $\mathbf{\alpha }$ can directly impact ${\mathbf{z}}_{S}$, which in turn impacts ${g}_{T}^{\mathrm{{GNN}}}$ through messages passed by GNN layers or the attention mechanism of Transformer layers. The right part of Figure 3 shows that a patch representation can include signals from the other node cluster.
+
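Equation (13) can be sketched in a few lines of NumPy (names are illustrative): with any nonzero $\mathbf{W}_2$, a perturbation of $\mathbf{z}_S$ reaches $g_T^{\mathrm{GNN}}$ in a single layer.

```python
import numpy as np

def coarse_graph_layer(z_S, z_T, W1, W2):
    """One GNN layer on the two-node coarse graph, as in Equation (13):
    each patch combines its own representation (via W1) with its
    neighbor's (via W2), followed by a ReLU."""
    g_S = np.maximum(z_S @ W1.T + z_T @ W2.T, 0.0)
    g_T = np.maximum(z_T @ W1.T + z_S @ W2.T, 0.0)
    return g_S, g_T
```

This is why no depth is needed to carry the signal across the cut: the coarse edge between $S$ and $T$ replaces the $m$-edge bottleneck.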
+ § 4.4 COMPARISON OF DIFFERENT SEGMENTATION METHODS
+
+ In previous research, many hierarchical pooling models have been proposed [4, 11, 12, 18, 25, 36]. The most obvious difference from the proposed method is that their pooling/segmentation is trainable.
+
+ In particular, the pooling is derived from the node representations learned by GNNs. In Theorem 1 and Theorem 2, we prove that such trainable clustering methods compute the same representations for nodes that the 1-WL algorithm cannot differentiate. This causes two serious problems for graph segmentation: first, nodes with the same representations will be assigned to the same cluster even if they are not connected to each other; second, too many nodes could be assigned to one cluster, grouping nodes that are far away from each other.
+
+ Here we compare two segmentation results: one from spectral clustering and the other from Memory-based graph networks [1], a typical trainable clustering method. In the first case, we find that the nodes in the blue cluster produced by trainable clustering are not connected. Building patch representations by aggregating such disconnected nodes will definitely hurt performance. The same observation applies to other hierarchical pooling methods such as DiffPool, EigenPool, and MinCutPool.
+
+ In the second case, spectral clustering segments the graph by minimum cuts, which helps resolve the information bottleneck between patches. The Memory-based graph networks, however, cluster the two benzene rings together, making it difficult for the model to detect the existence of these two rings.
+
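For intuition, a minimal non-trainable segmentation step can be sketched with the Fiedler vector of the normalized Laplacian (this is a two-patch simplification for illustration; the full method may use more eigenvectors and a clustering step):

```python
import numpy as np

def spectral_bipartition(A):
    """Split a graph into two patches by the sign of the Fiedler vector
    (2nd-smallest eigenvector) of the normalized Laplacian.
    Assumes a connected graph with no isolated nodes."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)               # ascending eigenvalues
    fiedler = eigvecs[:, 1]
    return fiedler >= 0                                # boolean patch assignment
```

On two triangles joined by a single bridge edge, the sign pattern recovers exactly the two loosely connected clusters, i.e., the minimum cut.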
+ § 5 EMPIRICAL STUDY
+
+ In this section, we evaluate the effectiveness of PatchGT through experiments.
+
+ Datasets. We benchmark the performance of PatchGT on several commonly studied graph-level prediction datasets. The first four are from the Open Graph Benchmark (OGB) datasets [13]: ogbg-molhiv, ogbg-molbace, ogbg-molclintox, and ogbg-molsider. These tasks predict molecular attributes, and the evaluation metric for these four datasets is ROC-AUC (%). The second group of six datasets is from the TU datasets [23]: DD, MUTAG, PROTEINS, PTC-MR, ENZYMES, and Mutagenicity. Each dataset contains one classification task for molecules, and the evaluation metric is accuracy (%) over all six datasets. The statistics of the datasets are summarized in Appendix A.11.
+
+ § 5.1 QUANTITATIVE EVALUATION
+
+ Table 1: Results (%) on OGB datasets
+
+ | Method | ogbg-molhiv | ogbg-molbace | ogbg-molclintox | ogbg-molsider |
+ | --- | --- | --- | --- | --- |
+ | GCN + VN | 75.99 ± 1.19 | 71.44 ± 4.01 | 88.55 ± 2.09 | 59.84 ± 1.54 |
+ | GIN + VN | 77.07 ± 1.49 | 76.41 ± 2.68 | 84.06 ± 3.84 | 57.75 ± 1.14 |
+ | Deep LRP | 77.19 ± 1.40 | - | - | - |
+ | PNA | 79.05 ± 1.32 | - | - | - |
+ | Nested GIN | 78.34 ± 1.86 | 74.33 ± 1.89 | 86.35 ± 1.27 | 61.2 ± 1.15 |
+ | GRAPHSNN + VN | 79.72 ± 1.83 | - | - | - |
+ | Graphormer (pre-trained) | 80.51 ± 0.53 | - | - | - |
+ | PatchGT-GCN | 80.22 ± 0.84 | 86.44 ± 1.92 | **92.21 ± 1.35** | 65.21 ± 0.87 |
+ | PatchGT-GIN | 79.99 ± 1.21 | 84.08 ± 2.03 | 86.75 ± 1.04 | 64.90 ± 0.92 |
+ | PatchGT-DeeperGCN | 78.13 ± 1.89 | 88.31 ± 1.87 | 89.02 ± 1.21 | 65.46 ± 1.03 |
+
+ Baselines. We compare the performance of PatchGT against several baselines, including GCN [16] and GIN [33], as well as the recent Nested Graph Neural Networks [40] and GraphSNN [31]. To compare with learnable pooling methods, we also include DiffPool [36], MinCutPool [4], Graph U-Nets [11], and EigenGCN [21] as baselines for the TU datasets. We also include the Graphormer model, but note that Graphormer needs large-scale pre-training and cannot be easily applied to a wider range of datasets. We further compare our model with other Transformer-based models such as U2GNN [24] and SEG-BERT [39].
+
+ Settings. We search model hyper-parameters such as the eigenvalue threshold, the learning rate, and the number of graph neural network layers on the validation set. Each OGB dataset has its own split into training, validation, and test sets. We run ten-fold cross-validation on each TU dataset: in each fold, one tenth of the data is used as the test set, one tenth as the validation set, and the rest for training. For the detailed search space, please refer to Appendix A.12.
+
+ Table 2: Results (%) on TU datasets
+
+ | Method | DD | MUTAG | PROTEINS | PTC-MR | ENZYMES | Mutagenicity |
+ | --- | --- | --- | --- | --- | --- | --- |
+ | GCN | 71.6 ± 2.8 | 73.4 ± 10.8 | 71.7 ± 4.7 | 56.4 ± 7.1 | 50.17 | - |
+ | GraphSAGE | 71.6 ± 3.0 | 74.0 ± 8.8 | 71.2 ± 5.2 | 57.0 ± 5.5 | 54.25 | - |
+ | GIN | 70.5 ± 3.9 | 84.5 ± 8.9 | 70.6 ± 4.3 | 51.2 ± 9.2 | 59.6 | - |
+ | GAT | 71.0 ± 4.4 | 73.9 ± 10.7 | 72.0 ± 3.3 | 57.0 ± 7.3 | 58.45 | - |
+ | DiffPool | 79.3 ± 2.4 | - | 72.7 ± 3.8 | - | 62.53 | 77.6 ± 2.7 |
+ | MinCutPool | 80.8 ± 2.3 | - | 76.5 ± 2.6 | - | - | 79.9 ± 2.1 |
+ | Nested GCN | 76.3 ± 3.8 | 82.9 ± 11.1 | 73.3 ± 4.0 | 57.3 ± 7.7 | 31.2 ± 6.7 | - |
+ | Nested GIN | 77.8 ± 3.9 | 87.9 ± 8.2 | 73.9 ± 5.1 | 54.1 ± 7.7 | 29.0 ± 8.0 | - |
+ | DiffPool-NOLP | 79.98 | - | 76.22 | - | 61.95 | - |
+ | SEG-BERT | - | 90.8 ± 6.5 | 77.1 ± 4.2 | - | - | - |
+ | U2GNN | 80.2 ± 1.5 | 89.9 ± 3.6 | 78.5 ± 4.07 | - | - | - |
+ | EigenGCN | 78.6 | - | 76.6 | - | 64.5 | - |
+ | Graph U-Nets | 82.43 | - | 77.68 | - | - | - |
+ | PatchGT-GCN | **83.3 ± 3.1** | 94.7 ± 3.5 | 80.3 ± 2.5 | 62.5 ± 4.1 | 73.3 ± 3.3 | 78.3 ± 2.2 |
+ | PatchGT-GIN | 79.6 ± 3.3 | 89.4 ± 3.2 | 79.5 ± 3.1 | 58.4 ± 2.9 | 70.0 ± 3.5 | 80.4 ± 1.4 |
+ | PatchGT-DeeperGCN | 76.1 ± 2.8 | 89.4 ± 3.7 | 77.5 ± 3.4 | 60.0 ± 2.6 | 56.6 ± 3.1 | **80.6 ± 1.5** |
+
+ < g r a p h i c s >
+
+ Figure 5: Analysis of the key designs of the proposed PatchGT. All results are based on PatchGT-GCN. The left figure shows how changing the eigenvalue threshold affects performance on the ogbg-molclintox and PROTEINS datasets; the middle figure shows model performance when the patch-level GNN or the Transformer is removed (replaced by mean pooling) on the DD and ogbg-molhiv datasets; the right figure shows the effect of different readout functions for patch representations.
+
+ Results. Table 1 and Table 2 summarize the performance of PatchGT and other baselines on the OGB and TU datasets. We take values from the original papers and the OGB website, except the performance values of Nested GIN on the last three OGB datasets, which we obtained by running Nested GIN ourselves. We also tried to run the contemporary method GRAPHSNN + VN on the other three OGB datasets, but its official implementation was not available at the submission of this work.
+
+ From the results, we see that the proposed method performs well on almost all datasets and often outperforms competing methods by a large margin. On the ogbg-molhiv dataset, the performance of PatchGT with GCN is only slightly worse than Graphormer, but note that Graphormer needs large-scale pre-training, which limits its applications.
+
+ PatchGT with GCN outperforms the three baselines on the other three OGB datasets, and the improvements are significant. PatchGT with GCN also outperforms the baselines on four out of six TU datasets; where it does not outperform all baselines, its performance is only slightly worse than the best. Similarly, the two other configurations, PatchGT-GIN and PatchGT-DeeperGCN, also perform very well on these datasets.
+
+ § 5.2 ABLATION STUDY
+
+ We perform ablation studies to check how different configurations of our model affect its performance. The results are shown in Figure 5.
+
+ Effect of eigenvalue threshold. The eigenvalue threshold $\gamma$ controls how many patches a graph is segmented into. Generally speaking, a larger $\gamma$ produces more and smaller patches. When $\gamma$ is large enough, the number of patches $k$ equals the number of nodes $\left| V\right|$ in the graph, and the Transformer effectively works at the node level. When $\gamma$ is 0, the whole graph is treated as one patch, and the model reduces to a GNN with pooling. The left figure shows that there is a sweet spot (depending on the dataset) for the threshold, which means that using patches is a better choice than not using patches.
+
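Assuming the number of patches is chosen as the count of normalized-Laplacian eigenvalues below $\gamma$ (an assumption for illustration; the exact rule is defined with the segmentation method earlier in the paper, outside this excerpt), the knob can be sketched as:

```python
import numpy as np

def num_patches(A, gamma):
    """Count normalized-Laplacian eigenvalues below the threshold gamma.
    Illustrative assumption: this count is used as the number of patches k.
    Assumes a graph with no isolated nodes."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    eigvals = np.linalg.eigvalsh(L)          # ascending, all in [0, 2]
    return int((eigvals < gamma).sum())
```

A tiny $\gamma$ keeps only the zero eigenvalue of a connected graph (one patch, plain pooling), while $\gamma > 2$ counts every eigenvalue (one patch per node, a node-level Transformer), matching the two extremes discussed above.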
+ < g r a p h i c s >
+
+ Figure 6: Attention visualization of PatchGT on ogbg-molhiv molecules. The second and fourth figures show the attention weights of the query tokens on the node patches of the corresponding molecules, shown in the first and third figures. The molecule in the first figure does not inhibit the HIV virus, while the molecule in the third figure does.
+
+ Effect of GNN layers on the coarse graph and Transformer layers. This ablation study removes either the patch-level GNN layers or the Transformer layers to check which part of the architecture is important for model performance. From the middle plot in Figure 5, we see that both types of layers are useful, and the Transformer layers are more so. This is another piece of evidence that PatchGT combines the strengths of different models.
+
+ Comparison of readout functions. We compare the performance of the PatchGT model with different readout functions for aggregating node representations at each patch in Equation (4). In the right figure, we observe a remarkable influence of the readout function on performance. The empirical results indicate that max-pooling is the optimal choice under most circumstances.
+
+ § 5.3 UNDERSTANDING THE ATTENTION
+
+ Besides improving learning performance, we are also interested in understanding how the attention mechanism helps the model identify graph properties. We train the PatchGT model on the ogbg-molhiv dataset and visualize the attention weights between the query tokens and each patch. Interestingly, the attention concentrates only on some chemical motifs such as ${\mathrm{{ClO}}}_{3}$ and ${\mathrm{{CON}}}_{2}$ and ignores other very common motifs such as benzene rings. Notice that in the first molecule, the two benzene rings are connected to each other by -C-C-, and the model does not pay any attention to this part. The two rings in the second molecule are connected by -S-S-, and this time the model does pay attention to this part. This indicates that the Transformer can identify which motifs are informative and which are common. Such a property offers better model interpretability compared to traditional global pooling: the model not only makes accurate predictions but also provides some insight into why decisions are made. In the two examples above, we can start from the motifs ${\mathrm{{SO}}}_{3}$ and -S-S- to look for structures meaningful for the classification problem.
+
+ § 6 CONCLUSION AND LIMITATIONS
+
+ In this work, we show that graph learning models benefit from modeling patches on graphs, particularly when combined with Transformer layers. We propose PatchGT, a new learning model that uses non-trainable clustering to obtain graph patches and learns graph representations from patch representations. It combines the strengths of GNN layers and Transformer layers, and we theoretically prove that it helps mitigate the information bottleneck in graphs and the limitations of trainable clustering. It shows superior performance on a range of graph learning tasks. Based on graph patches, the Transformer layers also provide a good level of interpretability of model predictions.
+
+ However, we tested our model mostly on chemical datasets. It remains unclear whether the model performs well when input graphs do not have clear cluster structures.
papers/LOG/LOG 2022/LOG 2022 Conference/W2OStztdMhc/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,285 @@
+ # Interpretable Chirality-Aware Graph Neural Network for Quantitative Structure Activity Relationship Modeling
+
+ Anonymous Author(s)
+
+ Anonymous Affiliation
+
+ Anonymous Email
+
+ ## Abstract
+
+ In computer-aided drug discovery, quantitative structure activity relationship models are trained to predict biological activity from chemical structure. Despite the recent success of applying graph neural networks to this task, important chemical information such as molecular chirality is ignored. To fill this crucial gap, we propose Molecular-Kernel Graph Neural Network (MolKGNN) for molecular representation learning, which features conformation invariance, chirality-awareness, and interpretability. For MolKGNN, we first design a molecular graph convolution to capture the chemical pattern by comparing each atom's similarity with learnable molecular kernels. Furthermore, we propagate the similarity score to capture higher-order chemical patterns. To assess the method, we conduct a comprehensive evaluation with nine well-curated datasets spanning numerous important drug targets that feature realistically high class imbalance. Meanwhile, the learned kernels identify patterns that agree with domain knowledge, confirming MolKGNN's pragmatic interpretability.
+
+ ## 1 Introduction
+
+ Developing new drugs is time-consuming and expensive; e.g., it took cabozantinib, an oncologic drug, 8.8 years and \$1.9 billion to get on the market [1]. To assist this process, computer-aided drug discovery (CADD) has been widely used. One branch of CADD constructs Quantitative Structure Activity Relationship (QSAR) models to predict the biological activity of molecules based on their chemical structure [2].
+
+ Graph Neural Networks (GNNs) have successfully been applied in many fields. As molecules can be viewed as graphs with atoms as nodes and chemical bonds as edges, GNNs are a logical choice for constructing QSAR models [3]. A typical GNN architecture for graph classification begins with an encoder extracting node representations by passing neighborhood information, followed by pooling operations that integrate node representations into graph representations, which are fed into a classifier to predict graph classes [4].
+
+ Despite the promise of GNN models applied to molecular representation learning, existing GNN models either blindly follow the message passing framework without considering molecular constraints on graphs [5], fail to integrate chirality [6], or lack interpretability [7]. To address these limitations, we develop a GNN model named MolKGNN that features conformation invariance and chirality-awareness, and provides a form of interpretability. Our contributions are:
+
+ - Interpretable Molecular Convolution: We design a new convolution operation to capture the chemical pattern of each atom by quantifying the similarity between the atom's neighboring subgraph and a learnable molecular kernel, which is inherently interpretable.
+
+ - Chirality Characterization: Rather than listing all permutations of neighbors for a chiral center [8], or using dihedral angles [7], the chirality calculation module in MolKGNN uses a lightweight linear algebra calculation.
+
+ - Realistic Benchmark: We perform a comprehensive evaluation using well-curated datasets spanning numerous important drug targets (that feature realistically high class imbalance) and metrics that bias predicted active molecules toward actual experimental validation. Ultimately, we demonstrate the superiority of MolKGNN over other GNNs in CADD.
+
+ ![01963ee7-f70c-793b-820d-59e78cbca0ba_1_313_201_1178_566_0.jpg](images/01963ee7-f70c-793b-820d-59e78cbca0ba_1_313_201_1178_566_0.jpg)
+
+ Figure 1: (A) An overview of the proposed MolKGNN. (B) An illustration of the molecular convolution that captures three aspects of similarities. (C) An illustration of the chirality calculation.
+
+ ## 2 Related Work and Preliminaries
+
+ Several attempts have been made to leverage GNNs for molecular representation learning. Early models capture the 2D connectivity (i.e., molecular constitution) [9, 10]. However, molecules are not planar but 3D entities, and bond lengths/angles/dihedral angles thus need to be taken into consideration [6, 11, 12]. To account for chirality, reflection-sensitive models have been designed [8, 13].
+
+ In this work, a molecule is represented as an attributed and undirected graph $G = \left( {{\mathcal{V}}^{G},{\mathcal{E}}^{G}}\right)$ where ${\mathcal{V}}^{G},{\mathcal{E}}^{G}$ are the sets of nodes (atoms) and edges (chemical bonds). Let $v \in {\mathcal{V}}^{G}$ denote the node $v$ and ${e}_{vu} \in {\mathcal{E}}^{G}$ denote an edge between $v$ and $u$. Moreover, we represent the node attribute matrix as ${\mathbf{X}}^{G} \in {\mathbb{R}}^{\left| {\mathcal{V}}^{G}\right| \times {d}_{v}}$ and the edge attribute matrix as ${\mathbf{E}}^{G} \in {\mathbb{R}}^{\left| {\mathcal{V}}^{G}\right| \times \left| {\mathcal{V}}^{G}\right| \times {d}_{e}}$ where ${d}_{v},{d}_{e}$ are the dimensions of the node and edge features. The node coordinate matrix is represented as ${\mathbf{P}}^{G} \in {\mathbb{R}}^{\left| {\mathcal{V}}^{G}\right| \times 3}$ and ${\mathbf{P}}_{v}^{G}$ denotes the $3\mathrm{D}$ coordinates of $v$. The graph topology is described by the adjacency matrix ${\mathbf{A}}^{G} \in \{ 0,1{\} }^{\left| {\mathcal{V}}^{G}\right| \times \left| {\mathcal{V}}^{G}\right| }$ where ${\mathbf{A}}_{vu}^{G} = 1$ if ${e}_{vu} \in {\mathcal{E}}^{G}$, and ${\mathbf{A}}_{vu}^{G} = 0$ otherwise. Note that bond types are encoded as edge features.
+
+ ## 3 Molecular-Kernel Graph Neural Network
+
+ In this section, we introduce the framework of MolKGNN, shown in Figure 1 (A). We first describe our molecular convolution, which involves three aspects of similarity and is chirality-aware, and then highlight the entire model architecture.
+
+ ### 3.1 Molecular Convolution
+
+ In $2\mathrm{D}$ images, the convolution operation can be regarded as calculating the similarity between an image patch and an image kernel, with larger output values indicating visually similar patterns such as edges, strips, and curves [14]. Inspired by this, we design a molecular convolution that outputs higher values when a molecular neighborhood and a kernel are more chemically similar.
+
+ However, performing convolution on irregular neighborhood subgraphs requires the learnable molecular kernels to have correspondingly different geometrical structures, which is computationally prohibitive. To handle this challenge, for each atom $v$ of degree $d$ in $G$, we only consider its 1-hop star-like neighborhood subgraph $S = \left( {{\mathcal{V}}^{S},{\mathcal{E}}^{S}}\right)$ where ${\mathcal{V}}^{S} = \{ v\} \cup {\mathcal{N}}_{v}^{G}$ and ${\mathcal{E}}^{S} = \left\{ {{e}_{vu} \mid u \in {\mathcal{N}}_{v}^{G}}\right\}$. To make the molecular convolution feasible, we initialize the molecular kernel to also follow a star structure and denote it as ${S}^{\prime } = \left( {{\mathcal{V}}^{{S}^{\prime }},{\mathcal{E}}^{{S}^{\prime }}}\right)$ where ${\mathcal{V}}^{{S}^{\prime }} = \left\{ {v}^{\prime }\right\} \cup {\mathcal{N}}_{{v}^{\prime }}^{{S}^{\prime }}$ with ${v}^{\prime }$ being the central node without loss of generality and ${\mathcal{E}}^{{S}^{\prime }} = \left\{ {{e}_{{v}^{\prime }{u}^{\prime }} \mid {u}^{\prime } \in {\mathcal{N}}_{{v}^{\prime }}^{{S}^{\prime }}}\right\}$. Let the learnable node feature matrix and edge feature matrix of ${S}^{\prime }$ be ${\mathbf{X}}^{{S}^{\prime }} \in {\mathbb{R}}^{\left( {d + 1}\right) \times {d}_{n}}$ and ${\mathbf{E}}^{{S}^{\prime }} \in {\mathbb{R}}^{d \times {d}_{e}}$, respectively.
+
+ Then we define the operation of molecular convolution between the atom $v$ and the molecular kernel ${S}^{\prime }$ as quantifying the similarity $\phi$ between $v$ ’s neighborhood subgraph $S$ and the kernel ${S}^{\prime }$ : $\phi \left( {S,{S}^{\prime }}\right) = {w}_{\mathrm{{cs}}}{\phi }_{\mathrm{{cs}}}\left( {S,{S}^{\prime }}\right) + {w}_{\mathrm{{ns}}}{\phi }_{\mathrm{{ns}}}\left( {S,{S}^{\prime }}\right) + {w}_{\mathrm{{es}}}{\phi }_{\mathrm{{es}}}\left( {S,{S}^{\prime }}\right)$, where ${\phi }_{\mathrm{{cs}}},{\phi }_{\mathrm{{ns}}},{\phi }_{\mathrm{{es}}}$ quantify the similarity from three different aspects: central similarity, neighborhood similarity, and edge similarity. We combine them with learnable weights ${w}_{\mathrm{{cs}}},{w}_{\mathrm{{ns}}},{w}_{\mathrm{{es}}} \in \left\lbrack {0,1}\right\rbrack$ obtained after softmax normalization.
+
+ Central Similarity. We first capture the chemical property of the atom $v$ itself in $S$ by computing its similarity to the central node ${v}^{\prime }$ in the kernel ${S}^{\prime }$ : ${\phi }_{\mathrm{{cs}}}\left( {S,{S}^{\prime }}\right) = \operatorname{sim}\left( {{\mathbf{X}}_{v}^{S},{\mathbf{X}}_{{v}^{\prime }}^{{S}^{\prime }}}\right)$, where ${\mathbf{X}}_{v}^{S},{\mathbf{X}}_{{v}^{\prime }}^{{S}^{\prime }}$ are the attributes of the central atom $v$ in the subgraph $S$ and the central node ${v}^{\prime }$ in the kernel ${S}^{\prime }$. Here $\operatorname{sim}\left( {\cdot , \cdot }\right)$ is a function measuring vector similarity, and we use cosine similarity throughout this work.
+
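A minimal sketch of the cosine similarity and the softmax-weighted combination above (function names are ours, not the paper's):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity, used as sim(., .) throughout."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def combined_similarity(phi_cs, phi_ns, phi_es, raw_w):
    """phi(S, S') = w_cs*phi_cs + w_ns*phi_ns + w_es*phi_es, with the three
    learnable weights softmax-normalized so they lie in [0, 1] and sum to 1."""
    w = np.exp(raw_w - np.max(raw_w))   # numerically stable softmax
    w_cs, w_ns, w_es = w / w.sum()
    return w_cs * phi_cs + w_ns * phi_ns + w_es * phi_es
```

With equal raw weights, the combined score is simply the mean of the three similarity terms.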
+ Neighboring Node and Edge Similarity. Besides the central node, the chemical property of an atom is also impacted by its neighborhood context, which motivates us to further quantify the similarity between 1) the neighboring nodes ${\mathcal{N}}_{v}^{S}$ in $S$ and ${\mathcal{N}}_{{v}^{\prime }}^{{S}^{\prime }}$ in ${S}^{\prime }$ , and 2) the edges ${\mathcal{E}}^{S}$ and ${\mathcal{E}}^{{S}^{\prime }}$ .
52
+
53
+ Before calculating ${\phi }_{\mathrm{{ns}}},{\phi }_{\mathrm{{es}}}$ between $S$ and ${S}^{\prime }$ , we face a matching problem. For example, in Figure ??, the node ${u}_{1}$ in $S$ has more than one matching candidates, i.e., $\left\{ {{u}_{1}^{\prime },{u}_{2}^{\prime },{u}_{3}^{\prime }}\right\}$ in ${S}^{\prime }$ . Here we seek a bijective matching ${\chi }^{ * } : {\mathcal{N}}_{v}^{S} \rightarrow {\mathcal{N}}_{{v}^{\prime }}^{{S}^{\prime }}$ such that the average attribute similarity between $u \in {\mathcal{N}}_{v}^{S}$ and ${\chi }^{ * }\left( u\right) \in {\mathcal{N}}_{{v}^{\prime }}^{{S}^{\prime }}$ over all neighbors can be maximized: ${\chi }^{ * } = \arg \mathop{\max }\limits_{\chi }\frac{1}{\left| {\mathcal{N}}_{v}^{S}\right| }\mathop{\sum }\limits_{{u \in {\mathcal{N}}_{v}^{S}}}\operatorname{sim}\left( {{\mathbf{X}}_{u}^{S},{\mathbf{X}}_{\chi \left( u\right) }^{{S}^{\prime }}}\right)$ . Given that exhausting all $\left| {\mathcal{N}}_{v}^{S}\right|$ ! possible matchings to find the optimal one is computationally infeasible, we significantly simplify this computation by constraining the searching space according to the inherent structure of molecules, which are: 1) node degrees in drug-like molecule graphs are usually less than 5 , with most atoms having a degree of 1 and few nodes having a degree of 4 [15]; 2) for nodes of degree 4, only 12 among the total 24 possible matchings are valid after considering chirality [8]. After the node matching, the bijective edge matching is defined as: ${\chi }^{e, * } : {\mathcal{E}}^{S} \rightarrow {\mathcal{E}}^{{S}^{\prime }}$ such that the edge ${e}_{vu} \in {\mathcal{E}}^{S}$ if and only if ${e}_{{v}^{\prime }{\chi }^{ * }\left( u\right) } \in {\mathcal{E}}^{{S}^{\prime }}$ . Then, we compute ${\phi }_{\mathrm{{ns}}}$ and ${\phi }_{\mathrm{{es}}}$ as:
54
+
55
+ ${\phi }_{\mathrm{{ns}}} = \frac{1}{\left| {\mathcal{N}}_{v}^{S}\right| }\mathop{\sum }\limits_{{u \in {\mathcal{N}}_{v}^{S}}}\operatorname{sim}\left( {{\mathbf{X}}_{u}^{S},{\mathbf{X}}_{{\chi }^{ * }\left( u\right) }^{{S}^{\prime }}}\right)$ and ${\phi }_{\mathrm{{es}}} = \frac{1}{\left| {\mathcal{N}}_{v}^{S}\right| }\mathop{\sum }\limits_{{u \in {\mathcal{N}}_{v}^{S}}}\operatorname{sim}\left( {{\mathbf{E}}_{vu}^{S},{\mathbf{E}}_{{v}^{\prime }{\chi }^{e, * }\left( u\right) }^{{S}^{\prime }}}\right) .$
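Because $\left| {\mathcal{N}}_{v}^{S}\right| \leq 4$ for drug-like molecules, the bijective matching can be found by direct enumeration. A minimal sketch (the helper names and the cosine choice for $\operatorname{sim}(\cdot,\cdot)$ are illustrative assumptions, not the paper's exact implementation):

```python
import itertools
import math

def cos_sim(a, b):
    # Stand-in for sim(.,.); the paper's exact similarity may differ.
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb) if na and nb else 0.0

def best_neighbor_matching(feats_s, feats_sp):
    """Search all |N|! bijections chi: N_v^S -> N_v'^S' (|N| <= 4 in practice)
    and return (phi_ns, chi*) maximizing the average neighbor similarity."""
    n = len(feats_s)
    best_score, best_perm = -1.0, None
    for perm in itertools.permutations(range(n)):
        score = sum(cos_sim(feats_s[i], feats_sp[perm[i]]) for i in range(n)) / n
        if score > best_score:
            best_score, best_perm = score, perm
    return best_score, best_perm

# Toy 2-neighbor example: the optimal matching swaps the two neighbors.
phi_ns, chi = best_neighbor_matching([[1.0, 0.0], [0.0, 1.0]],
                                     [[0.0, 1.0], [1.0, 0.0]])
```

For degree-4 nodes, the enumeration above would be further restricted to the 12 chirality-valid permutations; with ${\chi }^{ * }$ fixed, ${\phi }_{\mathrm{{es}}}$ is computed analogously over the matched edge features.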
56
+
57
+ Chirality Characterization. Chirality is a key determinant of a molecule's biological activity [16], and mostly arises when the central atom has four unique neighboring substructures (excluding some special scenarios). Given the neighborhood subgraph $S$ of an atom forming the tetrahedron shown in Figure 1 (C), where the four unique neighboring atoms are ${\mathcal{N}}_{v}^{S} = \left\{ {{u}_{1},{u}_{2},{u}_{3},{u}_{4}}\right\}$ , we select ${u}_{1}$ without loss of generality as the anchor neighbor to define the three concurrent sides of the tetrahedron ${\mathbf{a}}^{S} = {\mathbf{P}}_{{u}_{2}}^{S} - {\mathbf{P}}_{{u}_{1}}^{S},{\mathbf{b}}^{S} = {\mathbf{P}}_{{u}_{3}}^{S} - {\mathbf{P}}_{{u}_{1}}^{S},{\mathbf{c}}^{S} = {\mathbf{P}}_{{u}_{4}}^{S} - {\mathbf{P}}_{{u}_{1}}^{S}$ , and further calculate the signed tetrahedral volume of $S$ as ${\xi }^{S} = \frac{1}{6}\left( {{\mathbf{a}}^{S} \times {\mathbf{b}}^{S}}\right) \cdot {\mathbf{c}}^{S}$ . Similarly, we calculate ${\xi }^{{S}^{\prime }}$ for the kernel ${S}^{\prime }$ . Note that the sign of the tetrahedral volume ${\xi }^{S}$ encodes the ordering of its vertices [16]. The similarity is then updated as $\phi \left( {S,{S}^{\prime }}\right) = \left( {\operatorname{sgn}\left( {\xi }^{S}\right) \operatorname{sgn}\left( {\xi }^{{S}^{\prime }}\right) }\right) \phi \left( {S,{S}^{\prime }}\right)$ , with $\operatorname{sgn}\left( \cdot \right)$ being the sign function.
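The signed-volume computation can be sketched as follows (a pure-Python stand-in; the coordinate tuples play the role of ${\mathbf{P}}_{{u}_{i}}^{S}$ and are toy values):

```python
def signed_tetra_volume(p1, p2, p3, p4):
    """xi = 1/6 (a x b) . c with a = P_u2 - P_u1, b = P_u3 - P_u1,
    c = P_u4 - P_u1. The sign encodes the vertex ordering (chirality)."""
    a = [p2[i] - p1[i] for i in range(3)]
    b = [p3[i] - p1[i] for i in range(3)]
    c = [p4[i] - p1[i] for i in range(3)]
    cross = [a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0]]
    return sum(cross[i] * c[i] for i in range(3)) / 6.0

# Swapping two neighbors mirrors the tetrahedron and flips the sign,
# so sgn(xi^S) sgn(xi^S') = -1 for a pair of enantiomers.
xi = signed_tetra_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
xi_mirror = signed_tetra_volume((0, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0))
```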
58
+
59
+ ### 3.2 Model Architecture
60
+
61
+ Let ${\mathcal{S}}^{\prime l} = {\left\{ {S}_{k}^{\prime l}\right\} }_{k = 1}^{K}$ be the set of $K$ kernels at layer $l$ . The proposed molecular convolution applies each molecular kernel ${S}_{k}^{\prime l} \in {\mathcal{S}}^{\prime l}$ over the node representation ${\mathbf{H}}^{l - 1}$ at the previous layer $l - 1$ to obtain the node similarity matrix at layer $l$ , ${\mathbf{\Phi }}^{l} \in {\mathbb{R}}^{\left| \mathcal{V}\right| \times K}$ , where ${\mathbf{\Phi }}_{ik}^{l} = \phi \left( {{S}_{{v}_{i}}^{l - 1},{S}_{k}^{\prime l - 1}}\right)$ is the similarity between the neighborhood subgraph around the atom ${v}_{i}$ and the ${k}^{\text{th }}$ kernel at layer $l - 1$ . We note that $\phi \left( {{S}_{{v}_{i}}^{l - 1},{S}_{k}^{\prime l - 1}}\right)$ is set to 0 if ${S}_{{v}_{i}}^{l - 1}$ and ${S}_{k}^{\prime l - 1}$ have different degrees, so that back-propagation leaves the parameters of kernels of a different degree untouched. The new node representation is ${\mathbf{H}}^{l} = \mathbf{A}{\mathbf{\Phi }}^{l}$ . After recursively alternating between the molecular convolution and message passing for $L$ layers, the final atom representation ${\mathbf{H}}^{L}$ describes the chemical pattern up to $L$ hops away from each atom. The molecular representation $\mathbf{G}$ is obtained via global sum pooling. Ultimately, graph classification is performed using $\widehat{\mathbf{Y}} = \sigma \left( {f\left( \mathbf{G}\right) }\right)$ with a classifier $f\left( \cdot \right)$ , e.g., a multi-layer perceptron, and softmax normalization $\sigma$ . The computational complexity of MolKGNN is given in Appendix A.6.
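A toy sketch of one such layer (here `phi` scores a node representation against a kernel directly; in the paper $\phi$ compares full neighborhood subgraphs as defined in Section 3.1):

```python
def molkgnn_layer(adj, node_reps, kernels, phi):
    """Compute Phi[i][k] = phi(rep of v_i, kernel k), then message-pass:
    new node representation H^l = A @ Phi^l, of shape |V| x K."""
    n_nodes, n_kernels = len(node_reps), len(kernels)
    Phi = [[phi(node_reps[i], kernels[k]) for k in range(n_kernels)]
           for i in range(n_nodes)]
    return [[sum(adj[i][j] * Phi[j][k] for j in range(n_nodes))
             for k in range(n_kernels)]
            for i in range(n_nodes)]

# Two nodes, one kernel matching node 0 exactly: after aggregation each
# node holds the similarity scores of its neighbors.
H1 = molkgnn_layer([[0, 1], [1, 0]],
                   [[1.0], [0.0]],
                   [[1.0]],
                   lambda h, k: 1.0 if h == k else 0.0)
```

A real implementation would batch this as a dense matrix product; the explicit loops only keep the indexing visible.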
62
+
63
+ ## 4 Experiments
64
+
65
+ ### 4.1 Experimental Settings
66
+
67
+ A Realistic Drug Discovery Scenario. High-throughput screening (HTS) enables rapid screening of thousands to millions of molecules for biological activity [17]. QSAR models are trained on HTS results to screen molecules virtually and prioritize acquisition [18]. HTS datasets are large, highly label-imbalanced (with many more inactive than active molecules), and often contain false positives [19]. Moreover, an evaluation metric that favors the molecules with the highest predicted activities is of interest, as only these will be acquired or synthesized and tested.
68
+
69
+ Table 1: Results on $\log\mathrm{AUC}_{[0.001,0.1]}$ over five runs (AUC results are detailed in Appendix A.7).
70
+
71
<table><tr><td>PubChem AID</td><td>MolKGNN (ours)</td><td>SchNet</td><td>$\mathbf{{SphereNet}}$</td><td>DimeNet++</td><td>ChiRo</td><td>KerGNN</td></tr><tr><td>435008</td><td>${0.255} \pm {0.014}$</td><td>${0.187} \pm {0.027}$</td><td>${0.215} \pm {0.024}$</td><td>${0.203} \pm {0.047}$</td><td>${0.168} \pm {0.019}$</td><td>${0.147} \pm {0.015}$</td></tr><tr><td>1798</td><td>${0.174} \pm {0.029}$</td><td>${0.195} \pm {0.025}$</td><td>${0.196} \pm {0.035}$</td><td>${0.208} \pm {0.035}$</td><td>${0.165} \pm {0.040}$</td><td>${0.078} \pm {0.042}$</td></tr><tr><td>435034</td><td>${0.227} \pm {0.022}$</td><td>${0.246} \pm {0.020}$</td><td>${0.230} \pm {0.034}$</td><td>${0.235} \pm {0.044}$</td><td>${0.211} \pm {0.023}$</td><td>${0.179} \pm {0.045}$</td></tr><tr><td>1843</td><td>${0.362} \pm {0.033}$</td><td>${0.358} \pm {0.037}$</td><td>${0.258} \pm {0.048}$</td><td>${0.284} \pm {0.034}$</td><td>${0.326} \pm {0.010}$</td><td>${0.292} \pm {0.027}$</td></tr><tr><td>2258</td><td>${0.301} \pm {0.028}$</td><td>${0.240} \pm {0.037}$</td><td>${0.380} \pm {0.037}$</td><td>${0.340} \pm {0.032}$</td><td>${0.251} \pm {0.010}$</td><td>${0.195} \pm {0.020}$</td></tr><tr><td>463087</td><td>${0.390} \pm {0.056}$</td><td>${0.332} \pm {0.022}$</td><td>${0.399} \pm {0.011}$</td><td>${0.389} \pm {0.026}$</td><td>${0.258} \pm {0.019}$</td><td>${0.150} \pm {0.011}$</td></tr><tr><td>488997</td><td>${0.303} \pm {0.027}$</td><td>${0.319} \pm {0.017}$</td><td>${0.309} \pm {0.029}$</td><td>${0.315} \pm {0.011}$</td><td>${0.193} \pm {0.029}$</td><td>${0.081} \pm {0.023}$</td></tr><tr><td>2689</td><td>${0.415} \pm {0.020}$</td><td>${0.324} \pm {0.020}$</td><td>${0.401} \pm {0.016}$</td><td>${0.367} \pm {0.049}$</td><td>${0.351} \pm {0.048}$</td><td>${0.264} \pm {0.017}$</td></tr><tr><td>485290</td><td>${0.498} \pm {0.015}$</td><td>${0.333} \pm {0.047}$</td><td>${0.450} \pm {0.039}$</td><td>${0.463} \pm {0.040}$</td><td>${0.295} \pm {0.068}$</td><td>${0.223} \pm {0.026}$</td></tr><tr><td>Average</td><td>0.325</td><td>0.282</td><td>0.315</td><td>0.312</td><td>0.247</td><td>0.179</td></tr><tr><td>Avg. Rank</td><td>2.333</td><td>3.222</td><td>2.556</td><td>2.556</td><td>4.556</td><td>5.778</td></tr></table>
72
+
73
+ ![01963ee7-f70c-793b-820d-59e78cbca0ba_3_331_607_1137_306_0.jpg](images/01963ee7-f70c-793b-820d-59e78cbca0ba_3_331_607_1137_306_0.jpg)
74
+
75
+ Figure 2: (A) Visualization of a learned kernel and three examples. (B) Ablation study result for $\phi \left( {S,{S}^{\prime }}\right)$ components. (C) Performance for different kernel numbers.
76
+
77
+ Datasets. The well-curated datasets used are from [20, 21]. Details can be found in Appendix A.1.
78
+
79
+ Baselines. SchNet [6], DimeNet++ [22], SphereNet [13], ChIRo [7] and KerGNN [5] are used. The first four are GNNs for molecular representation learning, and the last is a GNN that is architecturally similar to ours. Further details on the baselines are provided in Appendix A.11.
80
+
81
+ Evaluation Metrics. Two metrics are used, detailed in Appendix A.10. Logarithmic Receiver-Operating-Characteristic Area Under the Curve with the False Positive Rate in [0.001, 0.1] ($\log\mathrm{AUC}_{[0.001,0.1]}$) [23]: this is used because, in consideration of cost, only a small percentage of the molecules predicted to be most active can be selected for experimental testing in a real-world drug campaign [20]. Receiver-Operating-Characteristic Area Under the Curve (AUC): AUC is included since it has historically been used as a general-purpose evaluation metric for graph classification [24].
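A sketch of the log-scaled metric, assuming trapezoidal integration of the ROC curve against $\log_{10}(\mathrm{FPR})$ over $[0.001, 0.1]$ and normalization to $[0, 1]$ (the exact convention follows [23] and Appendix A.10):

```python
import math

def log_auc(fprs, tprs, lo=1e-3, hi=0.1):
    """Trapezoidal AUC over log10(FPR) restricted to [lo, hi], normalized
    so a perfect classifier scores 1.0. fprs must be sorted ascending."""
    def tpr_at(x):
        # Linear interpolation of the ROC curve at false positive rate x.
        for i in range(1, len(fprs)):
            if fprs[i] >= x:
                f0, f1, t0, t1 = fprs[i - 1], fprs[i], tprs[i - 1], tprs[i]
                return t1 if f1 == f0 else t0 + (t1 - t0) * (x - f0) / (f1 - f0)
        return tprs[-1]

    xs = [lo] + [f for f in fprs if lo < f < hi] + [hi]
    ys = [tpr_at(x) for x in xs]
    area = sum((ys[i] + ys[i + 1]) / 2 * (math.log10(xs[i + 1]) - math.log10(xs[i]))
               for i in range(len(xs) - 1))
    return area / (math.log10(hi) - math.log10(lo))

perfect = log_auc([0.0, 1.0], [1.0, 1.0])  # -> 1.0
```

A random classifier (TPR = FPR) scores far below a perfect one under this metric, since low-FPR performance dominates the log-scaled integral.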
82
+
83
+ ### 4.2 Experimental Results
84
+
85
+ From Table 1, we can see that MolKGNN achieves superior results in recovering the active molecules at a high decision threshold. This highlights the ability of the proposed model to perform well on the application-relevant metric. Moreover, we find MolKGNN also performs on par with other GNNs in terms of AUC, which demonstrates its applicability beyond drug discovery in a general setting. It is worth noting that different rankings of models are observed in the two tables. This demonstrates that a model that performs well as measured by AUC could still perform poorly in a specific false-positive-rate region. Moreover, the learned kernel shown in Figure 2 (A) reveals a pattern of a central carbon atom surrounded by three neighboring fluorine atoms and another carbon. This pattern is known as the trifluoromethyl group in medicinal chemistry and has been used in several drugs [25]. Details of the interpretability analysis can be found in Appendix A.8. We also perform an additional experiment to exhibit MolKGNN's ability to distinguish chirality in Appendix A.5.
86
+
87
+ Ablation Studies. Components of $\phi \left( {S,{S}^{\prime }}\right)$ : results are shown in Figure 2 (B). Kernel number: results are shown in Figure 2 (C). We provide a discussion of these results in Appendix A.9.
88
+
89
+ ## 5 Conclusion
90
+
91
+ We introduce a new GNN model named MolKGNN to address QSAR model construction for CADD. MolKGNN utilizes a newly-designed molecular convolution, in which a molecular neighborhood is compared with a molecular kernel to output a similarity score. Comprehensive benchmarking is conducted to evaluate MolKGNN and shows its superiority over existing GNN baselines.
92
+
93
+ References
94
+
95
+ [1] Vinay Prasad and Sham Mailankody. Research and development spending to bring a single cancer drug to market and revenues after approval. JAMA internal medicine, 177(11):1569-1575, 2017. 1
96
+
97
+ [2] Gregory Sliwoski, Sandeepkumar Kothiwale, Jens Meiler, and Edward W Lowe. Computational methods in drug discovery. Pharmacological reviews, 66(1):334-395, 2014. 1
98
+
99
+ [3] Kenneth Atz, Francesca Grisoni, and Gisbert Schneider. Geometric deep learning on molecular representations. Nature Machine Intelligence, 3(12):1023-1032, 2021. 1
100
+
101
+ [4] Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. AI Open, 1:57-81, 2020. 1
102
+
103
+ [5] Aosong Feng, Chenyu You, Shiqiang Wang, and Leandros Tassiulas. Kergnns: Interpretable graph neural networks with graph kernels. ArXiv Preprint: https://arxiv.org/abs/2201.00491, 2022. 1, 4, 11
104
+
105
+ [6] Kristof Schütt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert Müller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. Advances in neural information processing systems, 30, 2017. 1, 2, 4, 8, 11
106
+
107
+ [7] Keir Adams, Lagnajit Pattanaik, and Connor W Coley. Learning 3d representations of molecular chirality with invariance to bond rotations. arXiv preprint arXiv:2110.04383, 2021. 1, 4, 11
108
+
109
+ [8] Lagnajit Pattanaik, Octavian-Eugen Ganea, Ian Coley, Klavs F Jensen, William H Green, and Connor W Coley. Message passing networks for molecules with tetrahedral chirality. arXiv preprint arXiv:2012.00094, 2020. 1, 2, 3, 9
110
+
111
+ [9] Kevin Yang, Kyle Swanson, Wengong Jin, Connor Coley, Philipp Eiden, Hua Gao, Angel Guzman-Perez, Timothy Hopper, Brian Kelley, Miriam Mathea, et al. Analyzing learned molecular representations for property prediction. Journal of chemical information and modeling, 59(8):3370-3388, 2019. 2
112
+
113
+ [10] Connor W Coley, Regina Barzilay, William H Green, Tommi S Jaakkola, and Klavs F Jensen. Convolutional embedding of attributed molecular graphs for physical property prediction. Journal of chemical information and modeling, 57(8):1757-1772, 2017. 2, 8
114
+
115
+ [11] Johannes Klicpera, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. arXiv preprint arXiv:2003.03123, 2020. 2, 8, 11
116
+
117
+ [12] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) equivariant graph neural networks. In International conference on machine learning, pages 9323-9332. PMLR, 2021. 2
118
+
119
+ [13] Yi Liu, Limei Wang, Meng Liu, Yuchao Lin, Xuan Zhang, Bora Oztekin, and Shuiwang Ji. Spherical message passing for 3d molecular graphs. In International Conference on Learning Representations, 2021. 2, 4, 8, 11
120
+
121
+ [14] Zhi-Hao Lin, Sheng Yu Huang, and Yu-Chiang Frank Wang. Learning of 3d graph convolution networks for point cloud analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 2
122
+
123
+ [15] Graham L Patrick. An introduction to medicinal chemistry. Oxford university press, 2013. 3
124
+
125
+ [16] Gregory Sliwoski, Edward W Lowe Jr, Mariusz Butkiewicz, and Jens Meiler. BCL::EMAS: enantioselective molecular asymmetry descriptor for 3d-qsar. Molecules, 17(8):9971-9989, 2012. 3
126
+
127
+ [17] Jürgen Bajorath. Integration of virtual and high-throughput screening. Nature Reviews Drug Discovery, 1(11):882-894, 2002. 3
128
+
129
+ [18] Ralf Mueller, Alice L Rodriguez, Eric S Dawson, Mariusz Butkiewicz, Thuy T Nguyen, Stephen Oleszkiewicz, Annalen Bleckmann, C David Weaver, Craig W Lindsley, P Jeffrey Conn, et al. Identification of metabotropic glutamate receptor subtype 5 potentiators using virtual high-throughput screening. ACS chemical neuroscience, 1(4):288-305, 2010. 3
130
+
131
+ [19] Jonathan B Baell and Georgina A Holloway. New substructure filters for removal of pan assay interference compounds (pains) from screening libraries and for their exclusion in bioassays. Journal of medicinal chemistry, 53(7):2719-2740, 2010. 3
132
+
133
+ [20] Mariusz Butkiewicz, Yanli Wang, Stephen H Bryant, Edward W Lowe Jr, David C Weaver, and Jens Meiler. High-throughput screening assay datasets from the pubchem database. Chemical informatics (Wilmington, Del.), 3(1), 2017. 4, 7, 10
134
+
135
+ [21] Mariusz Butkiewicz, Edward W Lowe Jr, Ralf Mueller, Jeffrey L Mendenhall, Pedro L Teixeira, C David Weaver, and Jens Meiler. Benchmarking ligand-based virtual high-throughput screening with the pubchem database. Molecules, 18(1):735-756, 2013. 4, 7
136
+
137
+ [22] Claudio Gallicchio and Alessio Micheli. Fast and deep graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 3898-3905, 2020. 4, 11
138
+
139
+ [23] Michael M Mysinger and Brian K Shoichet. Rapid context-dependent ligand desolvation in molecular docking. Journal of chemical information and modeling, 50(9):1561-1573, 2010. 4, 10
140
+
141
+ [24] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. Chemical science, 9(2):513-530, 2018. 4, 11
142
+
143
+ [25] Harry L Yale. The trifluoromethyl group in medical chemistry. Journal of Medicinal Chemistry, 1(2):121-133, 1958. 4
144
+
145
+ [26] Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, et al. Pubchem in 2021: new data content and improved web interfaces. Nucleic acids research, 49(D1):D1388-D1395, 2021. 7
146
+
147
+ [27] Yanli Wang, Jewen Xiao, Tugba O Suzek, Jian Zhang, Jiyao Wang, Zhigang Zhou, Lianyi Han, Karen Karapetyan, Svetlana Dracheva, Benjamin A Shoemaker, et al. Pubchem's bioassay database. Nucleic acids research, 40(D1):D400-D412, 2012. 7
148
+
149
+ [28] Noel M O'Boyle, Michael Banck, Craig A James, Chris Morley, Tim Vandermeersch, and Geoffrey R Hutchison. Open babel: An open chemical toolbox. Journal of cheminformatics, 3(1):1-14, 2011. 7
150
+
151
+ [29] J Gasteiger, C Rudolph, and J Sadowski. Automatic generation of 3d-atomic coordinates for organic molecules. Tetrahedron Computer Methodology, 3(6):537-547, 1990. 7, 8
152
+
153
+ [30] Benjamin Brown, Oanh Vu, Alexander R Geanes, Sandeepkumar Kothiwale, Mariusz Butkiewicz, Edward W Lowe, Ralf Mueller, Richard Pape, Jeffrey Mendenhall, and Jens Meiler. Introduction to the biochemical library (bcl): An application-based open-source toolkit for integrated cheminformatics and machine learning in computer-aided drug discovery. Frontiers in pharmacology, page 341, 2022. 7
154
+
155
+ [31] Jeffrey Mendenhall and Jens Meiler. Improving quantitative structure-activity relationship models using artificial neural networks trained with dropout. Journal of computer-aided molecular design, 30(2):177-189, 2016. 7, 10
156
+
157
+ [32] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019. 7
158
+
159
+ [33] Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019. 7
160
+
161
+ [34] Greg Landrum et al. Rdkit: A software suite for cheminformatics, computational chemistry, and predictive modeling. Greg Landrum, 2013. 8
162
+
163
+ [35] Walter Gordy. Dependence of bond order and of bond energy upon bond length. The Journal of Chemical Physics, 15(5):305-310, 1947. 8
164
+
165
+ [36] RJ Gillespie. Bond angles and the spatial correlation of electrons1. Journal of the American Chemical Society, 82(23):5978-5983, 1960. 8
166
+
167
+ [37] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 9
168
+
169
+ [38] Vladimir Golkov, Alexander Becker, Daniel T Plop, Daniel Čuturilo, Neda Davoudi, Jeffrey Mendenhall, Rocco Moretti, Jens Meiler, and Daniel Cremers. Deep learning for virtual screening: Five reasons to use roc cost functions. arXiv preprint arXiv:2007.07029, 2020. 10
170
+
171
+ ## A Appendix
172
+
173
+ ### A.1 Datasets
174
+
175
+ PubChem [26] is a database supported by the National Institutes of Health (NIH) that contains biological activities for millions of drug-like molecules, often from HTS experiments. However, the raw primary screening data from PubChem have a high false-positive rate [20, 21]. We benchmark our model using nine high-quality HTS experiments from PubChem that cover all important protein classes for drug discovery [20, 21] (statistics in Table 2; each dataset was carefully curated to have lists of inactive and confirmed active molecules from secondary experimental screens).
176
+
177
+ Table 2: Statistics of the datasets used in the experiments. The datasets feature large data sizes, highly imbalanced labels, and diverse protein targets. Datasets are identified by their PubChem Assay ID (AID).
178
+
179
+ <table><tr><td>Protein Target Class</td><td>Protein Target (PubChem AID)</td><td>Total # of Graphs</td><td>#of Active Labels</td><td>Per Graph Avg. # of Nodes (Edges)</td></tr><tr><td rowspan="3">GPCR</td><td>Orexin1 Receptor (435008)</td><td>218,156</td><td>233</td><td>45.14 (94.37)</td></tr><tr><td>M1 Muscarinic Receptor Agonists (1798)</td><td>61,832</td><td>187</td><td>43.60 (91.37)</td></tr><tr><td>M1 Muscarinic Receptor Antagonists (435034)</td><td>61,755</td><td>362</td><td>43.61 (91.41)</td></tr><tr><td rowspan="3">Ion Channel</td><td>Potassium Ion Channel Kir2.1 (1843)</td><td>301,490</td><td>172</td><td>44.41 (92.81)</td></tr><tr><td>KCNQ2 Potassium Channel (2258)</td><td>302,402</td><td>213</td><td>44.44 (92.88)</td></tr><tr><td>Cav3 T-type Calcium Channels (463087)</td><td>100,874</td><td>703</td><td>43.75 (91.57)</td></tr><tr><td>Transporter</td><td>Choline Transporter (488997)</td><td>302,303</td><td>252</td><td>44.46 (92.90)</td></tr><tr><td>Kinase</td><td>Serine/Threonine Kinase 33 (2689)</td><td>319,789</td><td>172</td><td>44.85 (93.70)</td></tr><tr><td>Enzyme</td><td>Tyrosyl-DNA Phosphodiesterase (485290)</td><td>341,304</td><td>281</td><td>46.13 (96.50)</td></tr></table>
180
+
181
+ ### A.2 Experiment Details
182
+
183
+ Data Preprocessing We preprocess the input SMILES strings into Structure-Data Files (SDFs). Each dataset is specified by its PubChem BioAssay accession (AID) [27]. Preprocessing of the original data includes converting SMILES strings to 3D SDF files, generating 3D conformations, and filtering. Conversion from SMILES to SDF files is done using Open Babel [28], version 2.4.1. Conformations are generated using Corina [29], version 4.3. Molecules are further filtered for validity and duplicates with the BioChemical Library (BCL) [30].
184
+
185
+ Training Details The datasets are randomly split into 80%/10%/10% for training, validation, and testing, respectively. We then shrink the training set to contain only 10,000 inactive-labeled molecules, while keeping all active-labeled molecules. This shrinking technique was previously used by [31]. By shrinking the training data size, we can shorten the training time given the limited computational resources, while keeping most of the active signal that we are interested in. We conducted an empirical study of the shrinking effect on AID 2258 (302,402 molecules). Results are shown in Table 3; there is indeed a decrease in performance in terms of $\log\mathrm{AUC}_{[0.001,0.1]}$ . We leave benchmarking on the full datasets to a future study. To overcome the high label imbalance, we sample the training data in each batch according to the inverse frequency of the label occurrence in the training set. For example, if the active label appears at a $1\%$ rate in the training set, it has a sampling weight of $1/0.01 = 100$ , while if the inactive label appears $99\%$ of the time, it gets a sampling weight of $1/0.99 \approx 1.01$ . Active-labeled data are thus roughly 100 times more likely to be sampled than inactive-labeled data in each batch. The code is implemented using PyTorch [32] and PyG [33].
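The inverse-frequency sampling weights can be sketched as follows (a hypothetical helper; in PyTorch such weights would typically be passed to `torch.utils.data.WeightedRandomSampler`):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each sample by 1 / (frequency of its label), so a label seen
    1% of the time gets weight 100 and one seen 99% of the time ~1.01."""
    counts = Counter(labels)
    total = len(labels)
    return [total / counts[y] for y in labels]

# 1 active among 100 molecules: active weight 100, inactive weight ~1.01.
weights = inverse_frequency_weights(["active"] + ["inactive"] * 99)
```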
186
+
187
+ Hyperparameter Search Space See Table 4 for details.
188
+
189
+ Hyperparameters See Table 5 for the hyperparameters used to train MolKGNN. For the other benchmarked models except KerGNN, we use the hyperparameters from their released code.
190
+
191
+ For KerGNN, we empirically observe that the default hyperparameter setting achieves significantly lower performance on our well-curated datasets, and hence we further tune its hyperparameters as follows: batch size {64, 128}, hidden units of the linear layer {16, 32}.
192
+
193
+ ### A.3 Featurization
194
+
195
+ Different models featurize molecules differently. We use the features reported in the original paper of each model in the benchmark. Our featurization is adapted from [10]. RDKit (version 2022.3.4) [34] is used for the featurization. See Tables 6 and 7 for details.
196
+
197
+ Table 3: Comparing sampled and full training results on AID 2258 using MolKGNN over three runs.
198
+
199
+ <table><tr><td>Inactive Training Size</td><td>$\log\mathrm{AUC}_{[0.001,0.1]}$</td><td>AUC</td></tr><tr><td>10K Sample</td><td>${0.296} \pm {0.026}$</td><td>${0.820} \pm {0.021}$</td></tr><tr><td>Full</td><td>${0.384} \pm {0.003}$</td><td>${0.816} \pm {0.030}$</td></tr></table>
200
+
201
+ Table 4: Hyperparameter search space used for MolKGNN.
202
+
203
+ <table><tr><td>Hyperparameter</td><td>Search Space</td></tr><tr><td>Hidden Dimension</td><td>{32, 64}</td></tr><tr><td>Batch Size</td><td>{16, 32}</td></tr><tr><td>#Layers</td><td>{1, 2, 3, 4, 5}</td></tr><tr><td>Peak Learning Rate</td><td>{5e-1, 5e-2, 5e-3, 5e-4}</td></tr><tr><td>Dropout</td><td>{0.1, 0.2, 0.3}</td></tr></table>
204
+
205
+ Table 5: Hyperparameters used for MolKGNN.
206
+
207
+ <table><tr><td>$\mathbf{{Hyperparameter}}$</td><td>Value</td></tr><tr><td>Node Feature Dimension</td><td>28</td></tr><tr><td>Edge Feature Dimension</td><td>7</td></tr><tr><td>Hidden Dimension</td><td>32</td></tr><tr><td>Batch Size</td><td>16</td></tr><tr><td>#Layers</td><td>4</td></tr><tr><td>#of Kernels of Degree 1</td><td>10</td></tr><tr><td>#of Kernels of Degree 2</td><td>20</td></tr><tr><td>#of Kernels of Degree 3</td><td>30</td></tr><tr><td>#of Kernels of Degree 4</td><td>50</td></tr><tr><td>Warmup Steps</td><td>300</td></tr><tr><td>Peak Learning Rate</td><td>5e-3</td></tr><tr><td>End Learning Rate</td><td>1e-10</td></tr><tr><td>Weight Decay</td><td>0.001</td></tr><tr><td>Epochs</td><td>20</td></tr><tr><td>Dropout</td><td>0.2</td></tr></table>
208
+
209
+ ### A.4 2.5D vs 3D
210
+
211
+ While many previous works have attempted to develop 3D models by including distances, angles, and torsions in their designs [6, 11, 13], we demonstrate that a 2.5D model can achieve comparable results in terms of AUC. We offer a chemistry-based explanation of why a model with seemingly less information can accomplish this. Bond lengths/angles have little variation given the identities of the involved atoms and the bond types [35, 36]. Moreover, rather than determining bond lengths/angles experimentally, many programs such as Corina [29] convert SMILES to 3D SDF using standard bond lengths/angles ${}^{1}$ , which stay the same across different molecules. Hence bond lengths/angles provide little additional information for distinguishing molecules. This can also be seen from the fact that an experienced chemist can look at a molecular structure and know certain properties of the molecule without knowing the exact bond lengths/angles. Nevertheless, since our model has the potential to integrate bond lengths and angles into $\phi \left( {S,{S}^{\prime }}\right)$ , we plan to include them for comparison in future studies.
212
+
213
+ ---
214
+
215
+ ${}^{1}$ This is explicitly mentioned in: https://mn-am.com/wp-content/uploads/2021/10/corina_classic_manual.pdf
216
+
217
+ ---
218
+
219
+ Table 6: Node features ${\mathbf{X}}_{v}$ for $v$
220
+
221
+ <table><tr><td>Indices</td><td>Description</td></tr><tr><td>0-11</td><td>One-hot encoding of element type: H, C, N, O, F, Si, P, S, Cl, Br, I, other</td></tr><tr><td>12-15</td><td>One-hot encoding of node degree: 1, 2, 3, 4</td></tr><tr><td>16</td><td>Formal charge</td></tr><tr><td>17</td><td>Is in a ring</td></tr><tr><td>18</td><td>Is aromatic</td></tr><tr><td>19</td><td>Explicit valence</td></tr><tr><td>20</td><td>Atom mass</td></tr><tr><td>21</td><td>Gasteiger charge</td></tr><tr><td>22</td><td>Gasteiger H charge</td></tr><tr><td>23</td><td>Crippen contribution to logP</td></tr><tr><td>24</td><td>Crippen contribution to molar refractivity</td></tr><tr><td>25</td><td>Total polar surface area contribution</td></tr><tr><td>26</td><td>Labute approximate surface area contribution</td></tr><tr><td>27</td><td>EState index</td></tr></table>
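As an illustration, the first block of Table 6 (indices 0-11, one-hot element type with an "other" bucket) could be reproduced as follows (a hypothetical helper mirroring the table, not the paper's featurization code):

```python
ELEMENTS = ["H", "C", "N", "O", "F", "Si", "P", "S", "Cl", "Br", "I"]

def one_hot_element(symbol):
    """Indices 0-11 of the node feature vector: one-hot element type,
    with index 11 reserved for any element outside the list."""
    vec = [0] * (len(ELEMENTS) + 1)
    idx = ELEMENTS.index(symbol) if symbol in ELEMENTS else len(ELEMENTS)
    vec[idx] = 1
    return vec

carbon = one_hot_element("C")     # 1 at index 1
selenium = one_hot_element("Se")  # falls into the "other" bucket (index 11)
```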
222
+
223
+ Table 7: Edge features ${\mathbf{E}}_{vu}$ for ${e}_{vu}$
224
+
225
+ <table><tr><td>Indices</td><td>Description</td></tr><tr><td>0</td><td>Is aromatic</td></tr><tr><td>1</td><td>Is conjugated</td></tr><tr><td>2</td><td>Is in a ring</td></tr><tr><td>3-6</td><td>One-hot encoding of bond type: 1, 1.5, 2, 3</td></tr></table>
226
+
227
+ On the other hand, molecules can have different conformations as a result of single-bond rotation. The same molecule with different conformations consequently has different sets of torsions. However, pharmacological activity is usually linked to a few conformations (binding conformations) and hence to certain sets of torsions. It might therefore seem that knowing the torsions could help activity prediction. Nevertheless, knowing which conformation is the binding conformation is a challenging task, and a set of torsions derived from a wrongly predicted binding conformation is detrimental to model performance. Hence we decided to build a conformation-invariant model and exclude torsions to circumvent this problem.
228
+
229
+ ### A.5 Ability to Distinguish Chirality
230
+
231
+ We further experiment on the expressiveness of our model to determine whether it is able to distinguish chiral molecules. We use the CHIRAL1 dataset [8], which contains 102,389 enantiomer pairs for a single 1,3-dicyclohexylpropane skeletal scaffold with one chiral center. The data are labeled as R or S stereocenters, and we use accuracy to evaluate performance. For comparison, we use GCN [37] and a modified version of our model, MolKGNN-NoChi, that removes the chirality calculation module. In our experiments, GCN and MolKGNN-NoChi achieve 50% accuracy while MolKGNN achieves nearly 100%, which empirically demonstrates the proposed method's ability to distinguish chiral molecules.
232
+
233
+ ### A.6 Computation Complexity
234
+
235
+ It may seem formidable to enumerate all possible matchings described in Section 3.1. However, most nodes have only one neighbor (e.g., hydrogen, fluorine, chlorine, bromine, and iodine). Taking AID 1798 as an example, 49.03%, 6.12%, 31.08%, and 13.77% of nodes have one, two, three, and four neighbors, respectively. For nodes with four neighbors, only 12 of the 24 matchings need to be enumerated because of chirality [8].
236
+
237
+ Since the adjacency matrices of molecular graphs are sparse, most GNNs incur a time complexity of $\mathcal{O}\left( \left| \mathcal{E}\right| \right)$ . As analyzed above, the permutation is bounded by at most four neighbors (12 matchings), so finding the optimal matching has a time complexity of $\mathcal{O}\left( 1\right)$ . The molecular convolution is linear in the number of kernels $K$ and hence has a time complexity of $\mathcal{O}\left( K\right)$ . Overall, our method takes $\mathcal{O}\left( {\left| \mathcal{E}\right| K}\right)$ computation time.
238
+
239
+ Table 8: Comparison of AUC between models (higher is better). Reported are the mean values over five runs, with standard deviations.
240
+
241
+ <table><tr><td>PubChem AID</td><td>MolKGNN (ours)</td><td>SchNet</td><td>SphereNet</td><td>DimeNet++</td><td>ChiRo</td><td>KerGNN</td></tr><tr><td>435008</td><td>${0.836} \pm {0.012}$</td><td>${0.820} \pm {0.009}$</td><td>${0.794} \pm {0.026}$</td><td>${0.787} \pm {0.028}$</td><td>${0.797} \pm {0.015}$</td><td>${0.806} \pm {0.017}$</td></tr><tr><td>1798</td><td>${0.721} \pm {0.027}$</td><td>${0.707} \pm {0.007}$</td><td>${0.655} \pm {0.025}$</td><td>${0.649} \pm {0.028}$</td><td>${0.683} \pm {0.052}$</td><td>${0.663} \pm {0.041}$</td></tr><tr><td>435034</td><td>${0.816} \pm {0.028}$</td><td>${0.838} \pm {0.009}$</td><td>${0.836} \pm {0.014}$</td><td>${0.834} \pm {0.019}$</td><td>${0.822} \pm {0.017}$</td><td>${0.821} \pm {0.016}$</td></tr><tr><td>1843</td><td>${0.879} \pm {0.025}$</td><td>${0.896} \pm {0.012}$</td><td>${0.875} \pm {0.021}$</td><td>${0.857} \pm {0.011}$</td><td>${0.881} \pm {0.010}$</td><td>${0.906} \pm {0.020}$</td></tr><tr><td>2258</td><td>${0.806} \pm {0.019}$</td><td>${0.792} \pm {0.020}$</td><td>${0.801} \pm {0.042}$</td><td>${0.821} \pm {0.025}$</td><td>${0.782} \pm {0.018}$</td><td>${0.766} \pm {0.024}$</td></tr><tr><td>463087</td><td>${0.895} \pm {0.003}$</td><td>${0.910} \pm {0.005}$</td><td>${0.904} \pm {0.005}$</td><td>${0.902} \pm {0.009}$</td><td>${0.891} \pm {0.004}$</td><td>${0.859} \pm {0.009}$</td></tr><tr><td>488997</td><td>${0.866} \pm {0.018}$</td><td>${0.831} \pm {0.012}$</td><td>${0.822} \pm {0.017}$</td><td>${0.839} \pm {0.023}$</td><td>${0.817} \pm {0.019}$</td><td>${0.757} \pm {0.044}$</td></tr><tr><td>2689</td><td>${0.906} \pm {0.019}$</td><td>${0.905} \pm {0.021}$</td><td>${0.867} \pm {0.021}$</td><td>${0.832} \pm {0.016}$</td><td>${0.919} \pm {0.017}$</td><td>${0.912} \pm {0.013}$</td></tr><tr><td>485290</td><td>${0.866} \pm {0.012}$</td><td>${0.893} \pm {0.011}$</td><td>${0.879} \pm {0.021}$</td><td>${0.884} \pm {0.016}$</td><td>${0.816} \pm {0.015}$</td><td>${0.853} \pm 
{0.009}$</td></tr><tr><td>Average</td><td>0.843</td><td>0.844</td><td>0.826</td><td>0.823</td><td>0.823</td><td>0.816</td></tr><tr><td>Avg. Rank</td><td>2.889</td><td>2.111</td><td>3.778</td><td>3.889</td><td>4.000</td><td>4.222</td></tr></table>
242
+
243
+ ### A.7 AUC Result
244
+
245
+ See Table 8 for details.
246
+
247
+ ### A.8 Investigation of Interpretability
248
+
249
+ We train a simple autoencoder architecture for interpreting kernels. The encoder is the same as the one used in MolKGNN to convert node features into node embeddings via batch normalization. The decoder converts a node embedding back to the corresponding atomic number, so the encoder-decoder pair can translate the node embeddings in the kernels into atomic numbers. We examine the learned kernels, and Figure 2 (A) shows one example from dataset AID 2689 that demonstrates the interpretability of our model. Currently we only examine the first layer and the node attributes of the kernels, but our kernels offer the potential to retrieve more complicated patterns, and we leave that investigation for future work.
250
+
251
+ ### A.9 Ablation Study Details
252
+
253
+ The results in Figure 2 (B) show that removing any of the components has a negative impact on $\log {\mathrm{{AUC}}}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ . In fact, measured as the percentage of performance change, the impact is bigger for $\log {\mathrm{{AUC}}}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ than for $\mathrm{{AUC}}$ . Note that in some cases, such as the removal of ${\phi }_{\mathrm{{es}}}$ , AUC actually increases, but the removal significantly hurts the ${\operatorname{logAUC}}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ metric.
254
+
255
+ Results in Figure 2 (C) show that when the number of kernels is too small $\left( { < 5}\right)$ , performance degrades greatly. However, beyond a certain point, adding more kernels has little impact on performance.
256
+
257
+ The ablation studies are conducted using dataset AID 435008. Reported are average values over three runs, with standard deviation. The number of kernels shown in Figure 2 (C) is the number of kernels per degree, rather than the total number of kernels.
258
+
259
+ ### A.10 Evaluation Metrics Details
260
+
261
+ - Logarithmic Receiver-Operating-Characteristic Area Under the Curve with the False Positive Rate in $\left\lbrack {{0.001},{0.1}}\right\rbrack$ (log ${\mathrm{{AUC}}}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ ): Ranged logAUC [23] is used because only a small percentage of molecules predicted with high activity can be selected for experimental tests in consideration of cost in a real-world drug campaign [20]. This high decision cutoff corresponds to the left side of the Receiver-Operating-Characteristic (ROC) curve, i.e., those False Positive Rates (FPRs) with small values. Also, because the threshold cannot be predetermined, the area under the curve is used to consolidate all possible thresholds within a certain FPR range. Finally, the logarithm is used to bias towards smaller FPRs. Following prior work [31,38], we choose to use ${\operatorname{logAUC}}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ . A perfect classifier achieves a ${\operatorname{logAUC}}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ of 1, while a random classifier reaches a ${\operatorname{logAUC}}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ of around 0.0215 , as shown below:
262
+
263
+ $$
264
+ \frac{{\int }_{0.001}^{0.1}x\mathrm{\;d}{\log }_{10}x}{{\int }_{0.001}^{0.1}1\mathrm{\;d}{\log }_{10}x} = \frac{{\int }_{-3}^{-1}{10}^{u}\mathrm{\;d}u}{{\int }_{-3}^{-1}1\mathrm{\;d}u} \approx {0.0215}
265
+ $$
266
+
270
+
271
+ - Receiver-Operating-Characteristic Area Under the Curve (AUC): We include AUC since it has historically been used as a general-purpose evaluation metric for graph classification [24]. Comparison with AUC also highlights the fact that the overall performance (ranking) of methods according to AUC may not align well with that of the domain-specific evaluation metric, i.e., $\log {\mathrm{{AUC}}}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ .
272
+
273
+ Plain AUC is included here to benchmark the methods' performance for general purposes. It also serves as a comparison to $\log {\mathrm{{AUC}}}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ , highlighting that the best generally performing classifier may not be the best at a high decision threshold.
274
+
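As a concrete check of the arithmetic above, the ranged logAUC can be computed numerically. The sketch below is plain NumPy (not the authors' code): it reproduces the $\approx 0.0215$ random-classifier baseline and the value of 1 for a perfect classifier.

```python
import numpy as np

def log_auc(fpr, tpr, lo=1e-3, hi=1e-1):
    """logAUC over FPR in [lo, hi]: integrate TPR against log10(FPR),
    normalized by the log-width of the interval."""
    grid = np.logspace(np.log10(lo), np.log10(hi), 2000)
    t = np.interp(grid, fpr, tpr)          # TPR on a log-spaced FPR grid
    lg = np.log10(grid)
    area = np.sum((t[1:] + t[:-1]) * np.diff(lg)) / 2.0  # trapezoid rule
    return area / (np.log10(hi) - np.log10(lo))

# A random classifier has TPR == FPR everywhere, giving logAUC ~ 0.0215;
# a perfect classifier (TPR == 1 everywhere) gives logAUC == 1.
fpr = np.linspace(0.0, 1.0, 10001)
print(round(log_auc(fpr, fpr), 4))                 # ~ 0.0215
print(round(log_auc(fpr, np.ones_like(fpr)), 4))   # 1.0
```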
275
+ ### A.11 Baseline Details
276
+
277
+ SchNet [6] is one of the early attempts to extend convolution to molecular representation learning. The traditional convolution can only be applied to grid-like data such as images using discrete filters. This work proposes continuous-filter convolutional layers to be able to model local correlations without requiring the data to lie on a grid.
278
+
279
+ DimeNet++ [22] builds on top of DimeNet [11], which resembles belief propagation. It integrates bond length and angle information into the message passing step by using spherical Bessel functions and spherical harmonics.
280
+
281
+ SphereNet [13] proposes spherical message passing (SMP) to include atom 3D coordinates. SMP captures relative atom positions in the spherical coordinate system and hence enables chirality characterization.
282
+
283
+ ChIRo [7] designs a novel torsion encoder that is invariant to bond rotation, while still being able to learn molecular chirality. The torsion encoder leverages the fact that rotating a bond changes coupled torsions together, which yields the conformation invariance. A phase shift is added to the torsion encoder to break the chirality symmetry.
284
+
285
+ KerGNN [5] differs from the above four models, which are specifically designed for molecular representation learning. KerGNN is architecturally similar to ours in that it quantifies the similarity between a subgraph and a kernel via a graph kernel method. However, we argue that this structural similarity is not as helpful as semantic similarity in molecular representation learning tasks. This argument is verified by the experiment in Section 4.
papers/LOG/LOG 2022/LOG 2022 Conference/W2OStztdMhc/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,134 @@
1
+ § INTERPRETABLE CHIRALITY-AWARE GRAPH NEURAL NETWORK FOR QUANTITATIVE STRUCTURE ACTIVITY RELATIONSHIP MODELING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ In computer-aided drug discovery, quantitative structure activity relationship models are trained to predict biological activity from chemical structure. Despite the recent success of applying graph neural networks to this task, important chemical information such as molecular chirality is ignored. To fill this crucial gap, we propose Molecular-Kernel Graph Neural Network (MolKGNN) for molecular representation learning, which features conformation invariance, chirality-awareness, and interpretability. For MolKGNN, we first design a molecular graph convolution to capture the chemical pattern by comparing the atom's similarity with learnable molecular kernels. Furthermore, we propagate the similarity score to capture the higher-order chemical pattern. To assess the method, we conduct a comprehensive evaluation with nine well-curated datasets spanning numerous important drug targets that feature realistically high class imbalance. Meanwhile, the learned kernels identify patterns that agree with domain knowledge, confirming MolKGNN's pragmatic interpretability.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Developing new drugs is time-consuming and expensive, e.g., it took cabozantinib, an oncologic drug, 8.8 years and $1.9 billion to get on the market [1]. To assist this process, computer-aided drug discovery (CADD) has been widely used. One branch of CADD constructs Quantitative Structure Activity Relationship (QSAR) models to predict the biological activity of molecules based on their chemical structure [2].
16
+
17
+ Graph Neural Networks (GNNs) have successfully been applied in many fields. As molecules can be viewed as graphs with atoms as nodes and chemical bonds as edges, GNNs are a logical choice to construct QSAR models [3]. A typical GNN architecture for graph classification begins with an encoder extracting node representations by passing neighborhood information followed by pooling operations that integrate node representations into graph representations, which are fed into a classifier to predict graph classes [4].
18
+
19
+ Despite the promise of GNN models applied to molecular representation learning, existing GNN models either blindly follow the message passing framework without considering molecular constraints on graphs [5], fail to integrate chirality [6], or lack interpretability [7]. To address these limitations, we develop a GNN model named MolKGNN that features conformation invariance, chirality-awareness and provides a form of interpretability. Our contributions are:
20
+
21
+ * Interpretable Molecular Convolution: We design a new convolution operation to capture the chemical pattern of each atom by quantifying the similarity between the atom's neighboring subgraph and a learnable molecular kernel, which is inherently interpretable.
22
+
23
+ * Chirality Characterization: Rather than listing all permutations of neighbors for a chiral center [8], or using dihedral angles [7], the chirality calculation module in MolKGNN uses a lightweight linear algebra calculation.
24
+
25
+ * Realistic Benchmark: We perform a comprehensive evaluation using well-curated datasets spanning numerous important drug targets (that feature realistically high class imbalance) and metrics that bias predicted active molecules for actual experimental validation. Ultimately, we demonstrate the superiority of MolKGNN over other GNNs in CADD.
26
+
27
+ < g r a p h i c s >
28
+
29
+ Figure 1: (A) An overview of the proposed MolKGNN. (B) An illustration of the molecular convolution that captures three aspects of similarities. (C) An illustration of the chirality calculation.
30
+
31
+ § 2 RELATED WORK AND PRELIMINARIES
32
+
33
+ Several attempts have been made to leverage GNNs for molecular representation learning. Early models capture the 2D connectivity (i.e., molecular constitution) [9, 10]. However, molecules are not planar but 3D entities, and bond lengths/angles/dihedral angles thus need to be taken into consideration [6, 11, 12]. To account for chirality, reflection-sensitive models are designed [8, 13].
34
+
35
+ In this work, a molecule is represented as an attributed and undirected graph $G = \left( {{\mathcal{V}}^{G},{\mathcal{E}}^{G}}\right)$ where ${\mathcal{V}}^{G},{\mathcal{E}}^{G}$ are the sets of nodes (atoms) and edges (chemical bonds). Let $v \in {\mathcal{V}}^{G}$ denote the node $v$ and ${e}_{vu} \in {\mathcal{E}}^{G}$ denote an edge between $v$ and $u$ . Moreover, we represent the node attribute matrix as ${\mathbf{X}}^{G} \in {\mathbb{R}}^{\left| {\mathcal{V}}^{G}\right| \times {d}_{v}}$ and the edge attribute matrix as ${\mathbf{E}}^{G} \in {\mathbb{R}}^{\left| {\mathcal{V}}^{G}\right| \times \left| {\mathcal{V}}^{G}\right| \times {d}_{e}}$ where ${d}_{v},{d}_{e}$ are the dimensions of the node and edge features. The node coordinate matrix is represented as ${\mathbf{P}}^{G} \in {\mathbb{R}}^{\left| {\mathcal{V}}^{G}\right| \times 3}$ and ${\mathbf{P}}_{v}^{G}$ denotes the 3D coordinates of $v$ . The graph topology is described by its adjacency matrix ${\mathbf{A}}^{G} \in {\{ 0,1\} }^{\left| {\mathcal{V}}^{G}\right| \times \left| {\mathcal{V}}^{G}\right| }$ where ${\mathbf{A}}_{vu}^{G} = 1$ if ${e}_{vu} \in {\mathcal{E}}^{G}$ , and ${\mathbf{A}}_{vu}^{G} = 0$ otherwise. Note that bond types are encoded as edge features.
36
+
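To make the notation above concrete, here is a minimal NumPy sketch (illustrative sizes and random attributes, not from the paper) of building $\mathbf{A}$, $\mathbf{X}$, and $\mathbf{E}$ for a small 4-atom graph:

```python
import numpy as np

n, d_v, d_e = 4, 3, 2                      # assumed feature dimensions
edges = [(0, 1), (1, 2), (1, 3)]           # undirected bonds e_vu

A = np.zeros((n, n), dtype=int)            # adjacency matrix A^G
X = np.random.rand(n, d_v)                 # node attribute matrix X^G
E = np.zeros((n, n, d_e))                  # edge attribute tensor E^G
for v, u in edges:
    A[v, u] = A[u, v] = 1                  # undirected: A is symmetric
    E[v, u] = E[u, v] = np.random.rand(d_e)

print(A.sum(axis=1))                       # node degrees: [1 3 1 1]
```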
37
+ § 3 MOLECULAR-KERNEL GRAPH NEURAL NETWORK
38
+
39
+ In this section, we introduce the framework of MolKGNN, shown in Figure 1 (A). Next, we describe our molecular convolution involving three aspects of similarity along with being chirality-aware, and then highlight the entire model architecture.
40
+
41
+ § 3.1 MOLECULAR CONVOLUTION
42
+
43
+ In $2\mathrm{D}$ images, the convolution operation can be regarded as calculating the similarity between an image patch and the image kernel. Larger output values indicate higher visual similarity to patterns such as edges, stripes, and curves [14]. Inspired by this, we design a molecular convolution that outputs higher values when a molecular neighborhood and a kernel are more chemically similar.
44
+
45
+ However, performing convolution on irregular neighborhood subgraphs requires the learnable molecular kernels to have correspondingly different geometrical structures, which is computationally prohibitive. To handle this challenge, for each atom $v$ of degree $d$ in $G$ , we only consider its 1-hop star-like neighborhood subgraph $S = \left( {{\mathcal{V}}^{S},{\mathcal{E}}^{S}}\right)$ where ${\mathcal{V}}^{S} = \{ v\} \cup {\mathcal{N}}_{v}^{G}$ and ${\mathcal{E}}^{S} = \left\{ {{e}_{vu} \mid u \in {\mathcal{N}}_{v}^{G}}\right\}$ . To make the molecular convolution feasible, we initialize the molecular kernel to also follow star-structure and denote it as ${S}^{\prime } = \left( {{\mathcal{V}}^{{S}^{\prime }},{\mathcal{E}}^{{S}^{\prime }}}\right)$ where ${\mathcal{V}}^{{S}^{\prime }} = \left\{ {v}^{\prime }\right\} \cup {\mathcal{N}}_{{v}^{\prime }}^{{S}^{\prime }}$ with ${v}^{\prime }$ being the central node without loss of generality and ${\mathcal{E}}^{{S}^{\prime }} = \left\{ {{e}_{{v}^{\prime }{u}^{\prime }} \mid {u}^{\prime } \in {\mathcal{N}}_{{v}^{\prime }}^{{S}^{\prime }}}\right\}$ . Let the learnable feature matrix and edge feature matrix of ${S}^{\prime }$ be ${\mathbf{X}}^{{S}^{\prime }} \in {\mathbb{R}}^{\left( {d + 1}\right) \times {d}_{n}}$ and ${\mathbf{E}}^{{S}^{\prime }} \in {\mathbb{R}}^{d \times {d}_{e}}$ , respectively.
46
+
47
+ Then we define the operation of molecular convolution between the atom $v$ and the molecular kernel ${S}^{\prime }$ as quantifying the similarity $\phi$ between $v$ ’s neighborhood subgraph $S$ and the kernel ${S}^{\prime }$ : $\phi \left( {S,{S}^{\prime }}\right) = {w}_{\mathrm{{cs}}}{\phi }_{\mathrm{{cs}}}\left( {S,{S}^{\prime }}\right) + {w}_{\mathrm{{ns}}}{\phi }_{\mathrm{{ns}}}\left( {S,{S}^{\prime }}\right) + {w}_{\mathrm{{es}}}{\phi }_{\mathrm{{es}}}\left( {S,{S}^{\prime }}\right)$ . where ${\phi }_{\mathrm{{cs}}},{\phi }_{\mathrm{{ns}}},{\phi }_{\mathrm{{es}}}$ quantify the similarity from three different aspects: the central similarity, neighborhood similarity, and edge similarity. We combine them together with learnable weights ${w}_{\mathrm{{cs}}},{w}_{\mathrm{{ns}}},{w}_{\mathrm{{es}}} \in \left\lbrack {0,1}\right\rbrack$ after softmax-normalization.
48
+
49
+ Central Similarity. We first capture the chemical property of atom $v$ itself in $S$ by computing its similarity to the central node ${v}^{\prime }$ in the kernel ${S}^{\prime } : {\phi }_{\mathrm{{cs}}}\left( {S,{S}^{\prime }}\right) = \operatorname{sim}\left( {{\mathbf{X}}_{v}^{S},{\mathbf{X}}_{{v}^{\prime }}^{{S}^{\prime }}}\right)$ . where ${\mathbf{X}}_{v}^{S},{\mathbf{X}}_{{v}^{\prime }}^{{S}^{\prime }}$ are attributes of the central atom $v$ in the subgraph $S$ and the central node ${v}^{\prime }$ in the kernel ${S}^{\prime }$ . The $\operatorname{sim}\left( {\cdot , \cdot }\right)$ is the function measuring vector similarity and we use cosine similarity throughout this work.
50
+
51
+ Neighboring Node and Edge Similarity. Besides the central node, the chemical property of an atom is also impacted by its neighborhood context, which motivates us to further quantify the similarity between 1) the neighboring nodes ${\mathcal{N}}_{v}^{S}$ in $S$ and ${\mathcal{N}}_{{v}^{\prime }}^{{S}^{\prime }}$ in ${S}^{\prime }$ , and 2) the edges ${\mathcal{E}}^{S}$ and ${\mathcal{E}}^{{S}^{\prime }}$ .
52
+
53
+ Before calculating ${\phi }_{\mathrm{{ns}}},{\phi }_{\mathrm{{es}}}$ between $S$ and ${S}^{\prime }$ , we face a matching problem. For example, in Figure ??, the node ${u}_{1}$ in $S$ has more than one matching candidate, i.e., $\left\{ {{u}_{1}^{\prime },{u}_{2}^{\prime },{u}_{3}^{\prime }}\right\}$ in ${S}^{\prime }$ . Here we seek a bijective matching ${\chi }^{ * } : {\mathcal{N}}_{v}^{S} \rightarrow {\mathcal{N}}_{{v}^{\prime }}^{{S}^{\prime }}$ such that the average attribute similarity between $u \in {\mathcal{N}}_{v}^{S}$ and ${\chi }^{ * }\left( u\right) \in {\mathcal{N}}_{{v}^{\prime }}^{{S}^{\prime }}$ over all neighbors is maximized: ${\chi }^{ * } = \arg \mathop{\max }\limits_{\chi }\frac{1}{\left| {\mathcal{N}}_{v}^{S}\right| }\mathop{\sum }\limits_{{u \in {\mathcal{N}}_{v}^{S}}}\operatorname{sim}\left( {{\mathbf{X}}_{u}^{S},{\mathbf{X}}_{\chi \left( u\right) }^{{S}^{\prime }}}\right)$ . Given that exhausting all $\left| {\mathcal{N}}_{v}^{S}\right|$ ! possible matchings to find the optimal one is computationally infeasible, we significantly simplify this computation by constraining the search space according to the inherent structure of molecules: 1) node degrees in drug-like molecule graphs are usually less than 5, with most atoms having a degree of 1 and few nodes having a degree of 4 [15]; 2) for nodes of degree 4, only 12 among the total 24 possible matchings are valid after considering chirality [8]. After the node matching, the bijective edge matching is defined as: ${\chi }^{e, * } : {\mathcal{E}}^{S} \rightarrow {\mathcal{E}}^{{S}^{\prime }}$ such that the edge ${e}_{vu} \in {\mathcal{E}}^{S}$ if and only if ${e}_{{v}^{\prime }{\chi }^{ * }\left( u\right) } \in {\mathcal{E}}^{{S}^{\prime }}$ . Then, we compute ${\phi }_{\mathrm{{ns}}}$ and ${\phi }_{\mathrm{{es}}}$ as:
54
+
55
+ ${\phi }_{\mathrm{{ns}}} = \frac{1}{\left| {\mathcal{N}}_{v}^{S}\right| }\mathop{\sum }\limits_{{u \in {\mathcal{N}}_{v}^{S}}}\operatorname{sim}\left( {{\mathbf{X}}_{u}^{S},{\mathbf{X}}_{{\chi }^{ * }\left( u\right) }^{{S}^{\prime }}}\right)$ and ${\phi }_{\mathrm{{es}}} = \frac{1}{\left| {\mathcal{N}}_{v}^{S}\right| }\mathop{\sum }\limits_{{u \in {\mathcal{N}}_{v}^{S}}}\operatorname{sim}\left( {{\mathbf{E}}_{vu}^{S},{\mathbf{E}}_{{v}^{\prime }{\chi }^{e, * }\left( u\right) }^{{S}^{\prime }}}\right) .$
56
+
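The three similarity terms and the bijective matching $\chi^*$ above can be sketched with a brute-force permutation search, which is feasible since node degrees are at most 4. This is an illustrative NumPy rendition, not the authors' implementation: it uses fixed equal weights instead of the learned softmax-normalized $w$, and does not apply the chirality-based pruning of the matching space.

```python
import numpy as np
from itertools import permutations

def cos(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def molecular_convolution(x_v, X_nbr, E_nbr, x_k, X_knbr, E_knbr,
                          w=(1/3, 1/3, 1/3)):
    """phi(S, S'): central + neighbor + edge similarity between a 1-hop
    star subgraph S and a same-degree star kernel S'."""
    phi_cs = cos(x_v, x_k)
    # chi*: bijection maximizing mean neighbor-attribute similarity
    best, chi = -np.inf, None
    for perm in permutations(range(len(X_nbr))):
        s = np.mean([cos(X_nbr[i], X_knbr[j]) for i, j in enumerate(perm)])
        if s > best:
            best, chi = s, perm
    phi_ns = best
    # the edge matching chi^{e,*} follows the node matching chi*
    phi_es = np.mean([cos(E_nbr[i], E_knbr[j]) for i, j in enumerate(chi)])
    return w[0] * phi_cs + w[1] * phi_ns + w[2] * phi_es

# a subgraph compared against an identical kernel scores 1
rng = np.random.default_rng(0)
x, Xn, En = rng.random(3), rng.random((3, 3)), rng.random((3, 2))
print(round(molecular_convolution(x, Xn, En, x, Xn, En), 6))  # 1.0
```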
57
+ Chirality Characterization. Chirality is a key determinant of a molecule's biological activity [16], but mostly exists when the central atom has four unique neighboring substructures (excluding some special scenarios). Given the neighborhood subgraph of an atom $S$ forming the tetrahedron shown in Figure 1 (C) where the four unique neighboring atoms are ${\mathcal{N}}_{v}^{S} = \left\{ {{u}_{1},{u}_{2},{u}_{3},{u}_{4}}\right\}$ , we select ${u}_{1}$ without loss of generality as the anchor neighbor to define the three concurrent sides of the tetrahedron ${\mathbf{a}}^{S} = {\mathbf{P}}_{{u}_{2}}^{S} - {\mathbf{P}}_{{u}_{1}}^{S},{\mathbf{b}}^{S} = {\mathbf{P}}_{{u}_{3}}^{S} - {\mathbf{P}}_{{u}_{1}}^{S},{\mathbf{c}}^{S} = {\mathbf{P}}_{{u}_{4}}^{S} - {\mathbf{P}}_{{u}_{1}}^{S}$ and further calculate the tetrahedral volume of $S$ as ${\xi }^{S} = \frac{1}{6}\left( {{\mathbf{a}}^{S} \times {\mathbf{b}}^{S}}\right) \cdot {\mathbf{c}}^{S}$ . Similarly, we calculate ${\xi }^{{S}^{\prime }}$ for the kernel ${S}^{\prime }$ . Notice that the sign of the tetrahedron volume of the molecule ${\xi }^{S}$ defines its vertex ordering [16]. The $\phi \left( {S,{S}^{\prime }}\right)$ is then updated as $\phi \left( {S,{S}^{\prime }}\right) = \left( {\operatorname{sgn}\left( {\xi }^{S}\right) \operatorname{sgn}\left( {\xi }^{{S}^{\prime }}\right) }\right) \phi \left( {S,{S}^{\prime }}\right)$ with $\operatorname{sgn}\left( \cdot \right)$ being the sign function.
58
+
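The signed tetrahedral volume test translates directly into a few lines of NumPy. The sketch below (illustrative coordinates, not from the paper) checks that reflecting the four neighbors through a plane flips the sign, which is exactly how the two mirror-image arrangements are told apart:

```python
import numpy as np

def signed_tet_volume(P):
    """P: (4, 3) coordinates of the four neighbors; P[0] is the anchor u1.
    xi = 1/6 * (a x b) . c; the sign encodes the vertex ordering."""
    a, b, c = P[1] - P[0], P[2] - P[0], P[3] - P[0]
    return float(np.cross(a, b) @ c) / 6.0

P = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
mirror = P * np.array([1.0, 1.0, -1.0])   # reflect through the xy-plane
print(signed_tet_volume(P), signed_tet_volume(mirror))
# opposite signs: the two mirror-image arrangements (enantiomers) differ
```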
59
+ § 3.2 MODEL ARCHITECTURE
60
+
61
+ Let the set of $K$ kernels at layer $l$ be ${\mathcal{S}}^{\prime l} = {\left\{ {S}_{k}^{\prime l}\right\} }_{k = 1}^{K}$ . The proposed molecular convolution is applied with the molecular kernel ${S}_{k}^{\prime l} \in {\mathcal{S}}^{\prime }{}^{l}$ over the node representation ${\mathbf{H}}^{l - 1}$ at the previous layer $l - 1$ to obtain the node similarity matrix at layer $l$ as ${\mathbf{\Phi }}^{l} \in {\mathbb{R}}^{\left| \mathcal{V}\right| \times K}$ , where ${\mathbf{\Phi }}_{ik}^{l} = \phi \left( {{S}_{{v}_{i}}^{l - 1},{S}_{k}^{\prime }{}^{l - 1}}\right)$ defines the similarity between the neighborhood subgraph around the atom ${v}_{i}$ and the ${k}^{\text{ th }}$ kernel at layer $l - 1$ . We note that $\phi \left( {{S}_{{v}_{i}}^{l - 1},{S}_{k}^{\prime l - 1}}\right)$ is set to 0 if ${S}_{{v}_{i}}^{l - 1}$ and ${S}_{k}^{\prime l - 1}$ have different degrees so that back-propagation keeps the parameters in kernels of different degree untouched. The new node representation is ${\mathbf{H}}^{l} = \mathbf{A}{\Phi }^{l}$ . After recursively alternating between the molecular convolution and message passing for $L$ layers, the final atom representation ${\mathbf{H}}^{L}$ describes the chemical pattern up to $L$ hops away from each atom. The molecular representation $\mathbf{G}$ is obtained via global-sum pooling. Ultimately, graph classification is performed using $\widehat{\mathbf{Y}} = \sigma \left( {f\left( \mathbf{G}\right) }\right)$ with classifier $f\left( .\right)$ , e.g., a Multi-Layer Perceptron, and softmax normalization $\sigma$ . The computational complexity of MolKGNN is given in Appendix A.6.
62
+
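The per-layer update and readout described above can be sketched as follows. This is a hedged stand-in, not the authors' code: the similarity matrices $\Phi^l$ are precomputed inputs here rather than produced by the molecular convolution, and a single linear layer replaces the MLP classifier.

```python
import numpy as np

def forward(A, Phi_layers, W_cls):
    """H^l = A @ Phi^l per layer, global-sum pooling, then classify."""
    for Phi in Phi_layers:        # Phi^l: (|V|, K) kernel-similarity matrix
        H = A @ Phi               # aggregate similarity scores over neighbors
    g = H.sum(axis=0)             # global-sum pooling -> graph representation
    logits = g @ W_cls            # linear classifier stand-in for the MLP
    e = np.exp(logits - logits.max())
    return e / e.sum()            # softmax class probabilities

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)   # toy 3-atom graph
probs = forward(A, [rng.random((3, 4)) for _ in range(2)], rng.random((4, 2)))
print(round(probs.sum(), 6))  # 1.0
```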
63
+ § 4 EXPERIMENTS
64
+
65
+ § 4.1 EXPERIMENTAL SETTINGS
66
+
67
+ A Realistic Drug Discovery Scenario. High-throughput screening (HTS) enables rapid screening of thousands to millions of molecules for biological activity [17]. QSAR models are trained on HTS results to screen molecules virtually and prioritize acquisition [18]. HTS datasets are of large sizes, have high label imbalance (many more inactive molecules) and often contain false positives [19]. Moreover, an evaluation metric that biases towards molecules with the highest predicted activities is of interest as only these will be acquired or synthesized and tested.
68
+
69
+ Table 1: Results on $\log {\mathrm{{AUC}}}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ (summarized AUC - details in Appendix A.7) over five runs.
70
+
71
+ PubChem AID | MolKGNN (ours) | SchNet | SphereNet | DimeNet++ | ChiRo | KerGNN
+ 435008 | ${0.255} \pm {0.014}$ | ${0.187} \pm {0.027}$ | ${0.215} \pm {0.024}$ | ${0.203} \pm {0.047}$ | ${0.168} \pm {0.019}$ | ${0.147} \pm {0.015}$
+ 1798 | ${0.174} \pm {0.029}$ | ${0.195} \pm {0.025}$ | ${0.196} \pm {0.035}$ | ${0.208} \pm {0.035}$ | ${0.165} \pm {0.040}$ | ${0.078} \pm {0.042}$
+ 435034 | ${0.227} \pm {0.022}$ | ${0.246} \pm {0.020}$ | ${0.230} \pm {0.034}$ | ${0.235} \pm {0.044}$ | ${0.211} \pm {0.023}$ | ${0.179} \pm {0.045}$
+ 1843 | ${0.362} \pm {0.033}$ | ${0.358} \pm {0.037}$ | ${0.258} \pm {0.048}$ | ${0.284} \pm {0.034}$ | ${0.326} \pm {0.010}$ | ${0.292} \pm {0.027}$
+ 2258 | ${0.301} \pm {0.028}$ | ${0.240} \pm {0.037}$ | ${0.380} \pm {0.037}$ | ${0.340} \pm {0.032}$ | ${0.251} \pm {0.010}$ | ${0.195} \pm {0.020}$
+ 463087 | ${0.390} \pm {0.056}$ | ${0.332} \pm {0.022}$ | ${0.399} \pm {0.011}$ | ${0.389} \pm {0.026}$ | ${0.258} \pm {0.019}$ | ${0.150} \pm {0.011}$
+ 488997 | ${0.303} \pm {0.027}$ | ${0.319} \pm {0.017}$ | ${0.309} \pm {0.029}$ | ${0.315} \pm {0.011}$ | ${0.193} \pm {0.029}$ | ${0.081} \pm {0.023}$
+ 2689 | ${0.415} \pm {0.020}$ | ${0.324} \pm {0.020}$ | ${0.401} \pm {0.016}$ | ${0.367} \pm {0.049}$ | ${0.351} \pm {0.048}$ | ${0.264} \pm {0.017}$
+ 485290 | ${0.498} \pm {0.015}$ | ${0.333} \pm {0.047}$ | ${0.450} \pm {0.039}$ | ${0.463} \pm {0.040}$ | ${0.295} \pm {0.068}$ | ${0.223} \pm {0.026}$
+ Average | 0.325 | 0.282 | 0.315 | 0.312 | 0.247 | 0.179
+ Avg. Rank | 2.333 | 3.222 | 2.556 | 2.556 | 4.556 | 5.778
+ AUC Average | 0.843 | 0.844 | 0.826 | 0.823 | 0.823 | 0.816
+ AUC Avg. Rank | 2.889 | 2.111 | 3.778 | 3.889 | 4.000 | 4.222
+
116
+ < g r a p h i c s >
117
+
118
+ Figure 2: (A) Visualization of a learned kernel and three examples. (B) Ablation study result for $\phi \left( {S,{S}^{\prime }}\right)$ components. (C) Performance for different kernel numbers.
119
+
120
+ Datasets. Well-curated datasets used are from [20, 21]. Details can be found in Appendix A.1.
121
+
122
+ Baselines. SchNet [6], DimeNet++ [22], SphereNet [13], ChIRo [7] and KerGNN [5] are used. The first four are GNNs for molecular representation learning and the last is a GNN that is architecturally similar to ours. Further details on the baselines are provided in Appendix A.11.
123
+
124
+ Evaluation Metrics. Two metrics are used, detailed in Appendix A.10: Logarithmic Receiver-Operating-Characteristic Area Under the Curve with the False Positive Rate in [0.001, 0.1] (logAUC ${}_{\left\lbrack {0.001},{0.1}\right\rbrack }$ ) [23]: This is used because only a small percentage of molecules predicted with high activity can be selected for experimental tests in consideration of cost in a real-world drug campaign [20]. Receiver-Operating-Characteristic Area Under the Curve (AUC): AUC is included since it has historically been used as a general purpose evaluation metric for graph classification [24].
125
+
126
+ § 4.2 EXPERIMENTAL RESULTS
127
+
128
+ From Table 1, we can see that MolKGNN achieves superior results in recovering the active molecules at a high decision threshold. This highlights the ability of the proposed model to perform well on the application-related metric. Moreover, we find MolKGNN also performs on par with other GNNs in terms of AUC, which demonstrates its applicability beyond drug discovery in a general setting. It is worth noting that different rankings of models are observed under the two metrics. This demonstrates that a model that performs well overall as measured by AUC could still perform poorly in a specific false positive rate region. Moreover, the learned kernel shown in Figure 2 (A) reveals a pattern of a central carbon atom surrounded by three neighboring fluorine atoms and another carbon. This pattern is known as the trifluoromethyl group in medicinal chemistry and has been used in several drugs [25]. The details of interpretability can be found in Appendix A.8. We also perform an additional experiment to exhibit MolKGNN's ability to distinguish chirality in Appendix A.5.
129
+
130
+ Ablation Studies. Components of $\phi \left( {S,{S}^{\prime }}\right)$ : results are shown in Figure 2 (B). Kernel number: results are shown in Figure 2 (C). We provide a discussion of these results in Appendix A.9.
131
+
132
+ § 5 CONCLUSION
133
+
134
+ We introduce a new GNN model named MolKGNN to address QSAR model construction for CADD. MolKGNN utilizes a newly-designed molecular convolution, where a molecular neighborhood is compared with a molecular kernel to output a similarity score. Comprehensive benchmarking is conducted to evaluate MolKGNN and show its superiority over existing GNN baselines.
papers/LOG/LOG 2022/LOG 2022 Conference/W59BHjEDfz/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,122 @@
1
+ # Graph-based Reinforcement Learning meets Mixed Integer Programs: An application to 3D robot assembly discovery
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Robot assembly discovery is a challenging problem that lives at the intersection of resource allocation and motion planning. The goal is to combine a predefined set of objects to form something new while considering task execution with the robot-in-the-loop. In this work, we tackle the problem of building arbitrary, predefined target structures entirely from scratch using a set of Tetris-like building blocks and a robot. Our novel hierarchical approach aims at efficiently decomposing the overall task into three feasible levels that mutually benefit from each other. On the high level, we run a classical mixed-integer program for global optimization of block-type selection and the blocks' final poses to recreate the desired shape. Its output is then exploited as a prior to efficiently guide the exploration of an underlying reinforcement learning (RL) policy handling decisions regarding structural stability and robotic feasibility. This RL policy draws its generalization properties from a flexible graph-based neural network that is learned through Q-learning and can be refined with search. Lastly, a grasp and motion planner transforms the desired assembly commands into robot joint movements. We demonstrate our proposed method's performance on a set of competitive simulated robot assembly discovery environments and report performance and robustness gains compared to an unstructured graph-based end-to-end approach.
12
+
13
+ ## 1 Introduction
14
+
15
+ ![01963ed8-d6c4-727d-9fcf-1b21b9f4d66f_0_1008_1339_477_313_0.jpg](images/01963ed8-d6c4-727d-9fcf-1b21b9f4d66f_0_1008_1339_477_313_0.jpg)
16
+
17
+ Figure 1: Illustrating a simulated RAD environment (left) and all three components of our proposed hierarchical approach (right).
18
+
19
+ A common desire amongst many industry sectors is to increase resource efficiency. The construction industry could significantly reduce its environmental impact by reusing existing material more efficiently [1]. There is a fundamental need for combining intelligent algorithms for reasoning on how existing material can be recombined to form something new, with autonomous execution [2].
20
+
21
+ Herein, we are concerned with the problem of autonomous robotic assembly discovery (RAD), where a robotic agent should reason about abstract $3\mathrm{D}$ target shapes that need to be fulfilled given a set of available building blocks (cf. Fig. 1). Unlike other assembly problems with known instructions, in RAD, the agent has neither prior information about which blocks to use and their final poses, nor about the execution sequence. Instead, the RAD agent should discover the possible ways of combining the building blocks, find appropriate action sequences, and put them into practice. RAD can thus be structured into two difficulty levels. On the high level, a goal-defined resource allocation problem has to be solved, which is typically NP-complete for discrete resources and can be viewed as a real-world version of the Knapsack Problem [3]. The low level requires solving a motion planning problem.
22
+
23
+ One way of approaching RAD is via end-to-end approaches that directly map from the problem definition to low-level actions [4-6]. They are typically straightforward to design, and draw their generalization properties from learned graph neural network (GNN) representations. Due to their ability to learn relational encodings $\left\lbrack {7,8}\right\rbrack$ and invariant representations, they can overcome combinatorial barriers [9], and be combined with search for improved generalization and robustness [5, 6, 10]. Yet, they often require extensive training due to the huge combinatorial action space, and are typically hard to debug and interpret. On the other end of the spectrum are Task and Motion Planning (TAMP) approaches $\left\lbrack {{11},{12}}\right\rbrack$ , which naturally represent the hierarchical nature of the problem but necessitate full prior knowledge of geometry and kinematics. They are usually unsuitable for real-time reactive control, as the full joint optimization suffers from combinatorics and non-convex constraints.
24
+
25
+ We propose a novel hierarchical method for 3D RAD that addresses both resource allocation and motion planning. On the high level, we solve a model-based mixed-integer linear program (MILP) that handles block-type selection and optimizes the blocks' final poses to best resemble the desired target shape. The MILP's solution is then used as a guiding exploration signal in a graph-based Reinforcement Learning (RL) framework. We define a GNN for capturing the geometric, structural, and physical relationships between building blocks, robot, and target shape, thereby incorporating all effects that have not been modelled on the higher level. The GNN is trained through model-free Q-learning, which allows integration with tree search for improved long-term decisions [10]. To put the previous reasoning into practice, at the lowest level, we rely on simple grasp and motion planning. We present an empirical evaluation of our proposed approach in a set of competitive simulated RAD tasks. The results show superior performance of our approach against both heuristic and end-to-end GNN baselines, thereby underlining its effectiveness.
26
+
27
+ ## 2 Problem Definition
28
+
29
+ ![01963ed8-d6c4-727d-9fcf-1b21b9f4d66f_1_1154_887_328_183_0.jpg](images/01963ed8-d6c4-727d-9fcf-1b21b9f4d66f_1_1154_887_328_183_0.jpg)
30
+
31
+ Figure 2: 2D RAD environment with one placed block consisting of two primitive elements (shown in brown / blue). The grid cells are visualized through their centre points. Pink points correspond to target grid-cells that are to be filled, while non-target grid-cells (green) should remain unoccupied.
32
+
33
+ We formulate the problem of having to combine rectilinear blocks into a desired target shape (cf. Fig. 1) as a Markov Decision Process. Its state is given by the combination of four sets: the set of unplaced blocks that encodes the remaining blocks, the set of placed blocks that have already been used for the construction, and two sets containing the so-called target grid-cells and non-target grid-cells, respectively. While the target grid-cells (pink) are part of the target shape and should thus ideally all be filled, the non-target grid-cells (green) should remain unoccupied (cf. Fig. 2). We also assume that all building blocks are a combination of primitive blocks. This choice allows any more complicated block to be represented modularly through primitive elements.
34
+
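As a concrete illustration of this four-set state, here is a minimal Python sketch; the class name, fields, and update rule are our own illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class RADState:
    """Hypothetical container for the four sets that define the MDP state."""
    unplaced: set         # primitive blocks still available
    placed: set           # primitive blocks already used in the construction
    target_cells: set     # grid-cells that should ideally be filled (pink)
    nontarget_cells: set  # grid-cells that should remain unoccupied (green)

    def apply_placement(self, block_id, occupied_cells):
        """Update the sets after a successful placement action."""
        self.unplaced.discard(block_id)
        self.placed.add(block_id)
        # filled target cells are no longer open targets
        self.target_cells -= occupied_cells

s = RADState({0, 1}, set(), {(0, 0), (0, 1)}, {(1, 0)})
s.apply_placement(0, {(0, 0)})  # one target grid-cell, (0, 1), remains open
```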
35
+ For block placement, we use a discrete, time-varying action space. Every unplaced primitive block can be placed w.r.t. all available grid-cells while additionally selecting from four actions that rotate the block by $0^{\circ}$, $\pm 90^{\circ}$, or $180^{\circ}$ around the z-axis. We also add one termination action that stops the assembly process. The resulting combinatorial action space thus contains #unplaced building blocks $\times$ #grid-cells $\times 4 + 1$ actions.
36
+
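The size of this action space follows directly from the formula above; a small sketch (the function name is ours):

```python
def action_space_size(n_unplaced_blocks: int, n_grid_cells: int) -> int:
    """#unplaced blocks x #grid-cells x 4 rotations, plus 1 termination action."""
    return n_unplaced_blocks * n_grid_cells * 4 + 1

# e.g. 6 unplaced primitive blocks on a 3x3x3 grid:
action_space_size(6, 27)  # = 649 candidate actions
```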
37
+ After every placement action, the sets of placed/unplaced elements and target/non-target grid-cells are updated, and a reward is assigned. The reward is positive when the action reduces the number of unfilled target grid-cells, and negative if non-target grid-cells are being filled, thereby actively enforcing resource efficiency. The conditions for a successful placement action are that the block can be placed by the robot without moving or colliding with any other block, and that it is placed in a stable configuration. On any invalid action, the episode is terminated and a high negative reward is assigned. Otherwise, the episode is terminated upon the events of i) the agent choosing the termination action, ii) no more available building blocks, or iii) the completion of RAD.
38
+
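A toy reward function consistent with this description might look as follows; the weights and penalty magnitude are illustrative assumptions, as the paper does not state its exact values here:

```python
def step_reward(newly_filled_target: int, newly_filled_nontarget: int,
                invalid: bool, fail_penalty: float = -1.0) -> float:
    """Positive reward for covering target grid-cells, negative for covering
    non-target grid-cells, and a large penalty for any invalid action."""
    if invalid:
        return fail_penalty
    return 0.1 * newly_filled_target - 0.1 * newly_filled_nontarget
```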
39
+ ## 3 Method
40
+
41
+ We introduce the two upper levels of our proposed tri-level hybrid approach for reliable RAD (cf. Fig. 1). For the lowest level that only realizes the commanded assembly actions, we refer to the appendix.
42
+
43
+ High Level: MILP for optimal geometric target filling. We first solve a MILP targeted at optimizing the blocks' placement poses to optimally fill the desired shape in light of the problem's combinatorial complexity. To render the problem tractable, we do not consider sequencing and robotic constraints at this level. Based on the previous definitions (reward & voxelization), the MILP's objective (subject to maximization) equates to $\mathcal{O}_{\text{MILP}} = \mathbf{c}^{T}\mathbf{g}$, with vector $\mathbf{g}$ representing the grid-state and $\mathbf{c}$ containing weighting factors that indicate whether a grid-cell should be filled (1) or not (-1). As every grid-cell can be occupied by at most one block, we add the constraint $g[i] \leq 1, \forall g[i] \in \mathbf{g}$. Next, we determine how every potential action changes the grid-state. For example, placing the horizontal block from Fig. 2 in the lowest left position results in a grid state of $\mathbf{p}_{i=1, k=1}^{T} = [1, 1, 0, \ldots, 0]$, with block-type index $i$ and placement action $k$. By additionally assigning a binary decision variable $w_{i,k}$ and taking all object types into account, we can define the grid-state according to $\mathbf{g} = \sum_{\hat{i}=1}^{P} \sum_{\hat{k}=1}^{K(\hat{i})} w_{\hat{i},\hat{k}} \, \mathbf{p}_{\hat{i},\hat{k}}$, with a total of $P$ different block types and $K(i)$ admissible actions per type. While the binary decision variables prohibit any partial block placement by definition, we still have to restrict each block type to be placed at most as often as it appears in the RAD scene ($N_i$), i.e., $\sum_{\hat{k}=1}^{K(i)} w_{i,\hat{k}} \leq N_i, \forall i \in \{1, \ldots, P\}$. We solve the resulting MILP through Gurobi [13].
44
+
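Because Gurobi is a commercial solver that may not be at hand, the following stdlib-only sketch brute-forces the same objective $\mathbf{c}^T\mathbf{g}$ and occupancy constraint on a made-up 1-D grid of four cells; the placement vectors are illustrative, and the per-type count constraint $N_i$ is omitted for brevity:

```python
from itertools import product

# Toy 1-D grid with 4 cells: c = +1 for target cells, -1 for non-target.
c = [1, 1, 1, -1]
# Hypothetical occupancy vectors p_{i,k} for four candidate placements:
placements = [
    [1, 1, 0, 0],  # 2-cell block at cells 0-1
    [0, 1, 1, 0],  # 2-cell block at cells 1-2
    [0, 0, 1, 0],  # 1-cell block at cell 2
    [0, 0, 0, 1],  # 1-cell block at the non-target cell
]

def solve_toy_milp(c, placements):
    """Enumerate all binary assignments w and keep the best feasible one."""
    best_val, best_w = float("-inf"), None
    for w in product([0, 1], repeat=len(placements)):
        # grid-state g = sum_k w_k * p_k
        g = [sum(wk * p[i] for wk, p in zip(w, placements))
             for i in range(len(c))]
        if any(gi > 1 for gi in g):  # each cell occupied at most once
            continue
        val = sum(ci * gi for ci, gi in zip(c, g))
        if val > best_val:
            best_val, best_w = val, w
    return best_val, best_w

solve_toy_milp(c, placements)  # (3, (1, 0, 1, 0)): fill all targets, skip cell 3
```

For realistic grid sizes this exponential enumeration is exactly what a MILP solver such as Gurobi avoids via branch-and-bound.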
45
+ ![01963ed8-d6c4-727d-9fcf-1b21b9f4d66f_2_309_207_1179_210_0.jpg](images/01963ed8-d6c4-727d-9fcf-1b21b9f4d66f_2_309_207_1179_210_0.jpg)
46
+
47
+ Figure 3: Illustrating action selection. First, the current scene is transformed into a graph. Note: only a subset of the target (pink) and non-target (green) grid-cells is shown. White nodes depict the unplaced primitive blocks. Next follows message passing, which updates the nodes' features. The actions' Q-values are predicted from the node features of the respective unplaced primitive block and grid-cell using a feedforward neural network (NN). To incorporate the prior knowledge, we only consider actions that are part of the MILP solution (shown in red).
48
+
49
+ Medium Level: GNN for task sequencing. The high level MILP only partially resolves the problem's combinatorial aspect. It lacks i) information about the placement actions' sequencing, ii) the exact assignment of which block to use for each placement, and iii) the consideration of robotic feasibility, the blocks' initial positions, and structural stability. Thus, another level is required which is capable of efficiently incorporating the prior knowledge from the MILP to decide upon either executing one of the proposed actions or terminating the current assembly.
50
+
51
+ We propose an approach based on a combination of GNN and Q-learning [5, 6], for the following reasons. The GNN provides the required representational flexibility and invariance to problem size, while action selection based on Q-learning is desirable as i) the action space considered here is discrete but remains tractable for exploration due to the MILP's inductive bias, ii) the state-action-based formulation allows efficient incorporation of the prior knowledge by masking out all actions that are not part of the MILP solution, iii) potential multimodalities in the MILP solution are not problematic and do not erroneously bias the Q-function estimator, and iv) it allows easy and time-effective combination with search-based methods, such as Monte Carlo Tree Search (MCTS), to improve robustness and performance [10].
52
+
53
+ We now describe the action selection process (cf. Fig. 3). We refer to [6] (which essentially uses the same GNN) for additional details. We first transform the environment's current state into a graph by creating nodes for all primitive blocks and grid-cells. The nodes' features contain the respective node's 3D position, as well as two indices that indicate the type of node, i.e., placed/unplaced primitive block, target/non-target grid-cell. Graph creation is followed by three rounds of message passing using an attention mechanism [6, 9], in which we sequentially build an encoded graph. The encodings are the basis for computing Q-values for all available actions. As any unplaced primitive block can be placed w.r.t. every grid-cell, a standard feedforward NN is used that takes as input the encoded node values of i) the primitive block-to-be-placed and ii) the grid-cell, and outputs the Q-values for the four rotational-placement actions between these nodes. This process is repeated for all pairs of unplaced primitive blocks and grid-cells. The action decision uses an $\epsilon$-greedy strategy, yet only allows choosing the best action from the set of actions proposed by the MILP, plus the termination action. The GNN's weights are refined through temporal-difference learning. While this Q-learning procedure by itself already results in good policies, during test time we additionally consider action selection based on the combination of Q-learning and MCTS (DQN+MCTS).
54
+
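The MILP-masked $\epsilon$-greedy selection can be sketched as follows; the flat action indexing and function name are our simplification of the pairwise GNN scoring:

```python
import random

def masked_epsilon_greedy(q_values, milp_mask, epsilon=0.1, rng=random):
    """Epsilon-greedy selection restricted to actions proposed by the MILP.
    q_values: per-action scores from the GNN head; milp_mask: parallel list
    of booleans (the termination action is assumed to always be allowed)."""
    allowed = [a for a, ok in enumerate(milp_mask) if ok]
    if rng.random() < epsilon:
        return rng.choice(allowed)                  # explore within the mask
    return max(allowed, key=lambda a: q_values[a])  # greedy within the mask

# action 1 has the highest Q-value but is not in the MILP solution:
masked_epsilon_greedy([0.2, 0.9, 0.5], [True, False, True], epsilon=0.0)  # -> 2
```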
55
+ ## 4 Experimental Results & Conclusions
56
+
57
+ We evaluate our proposed MILP-DQN method, optionally augmented with MCTS (search budget of 5), in simulation (Fig. 1). We aim to answer two questions: 1) Does the MILP's guided exploration signal improve performance compared to end-to-end approaches? 2) How effective is the medium-level GNN compared to a heuristic approach for task sequencing?
58
+
59
+ The training is conducted with the same parameters as in [6]. In the evaluations, we describe the environment's difficulty through the grid size, i.e., Fig. 2 shows a potential target shape for a grid size of 3. The star (*) indicates that the agents are evaluated in their training conditions, while the other experiments are out-of-distribution. The results are obtained by averaging the agents' performance over 200 scenes. We report the discounted reward $R$, the fraction of runs that ended in failure $f$ (i.e., trying to execute an action that is not feasible with the robot, placing a block in an unstable configuration, or destroying the already existing structure), and the target grid-cell coverage $\bar{a}$, i.e., the fraction of initially unfilled target grid-cells that are filled by the end.
60
+
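The two scalar metrics can be computed as in this sketch; the function names and the default discount factor are assumptions:

```python
def discounted_return(rewards, gamma=0.99):
    """R = sum_t gamma^t * r_t (the default discount factor is an assumption)."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))

def target_coverage(initially_unfilled: int, finally_filled: int) -> float:
    """a-bar: fraction of initially unfilled target grid-cells filled at the end."""
    return finally_filled / initially_unfilled

discounted_return([1.0, 1.0], gamma=0.5)  # 1.0 + 0.5 = 1.5
target_coverage(10, 8)                    # 0.8
```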
61
+ ## A) Is the high level MILP needed?
62
+
63
+ Table 1: Comparing our proposed method with two learned baselines in the two-sided environment without the robot.
64
+
65
+ <table><tr><td>Grid Size</td><td>Method</td><td>$R$</td><td>$\bar{a}$</td></tr><tr><td rowspan="3">3*</td><td>DQN</td><td>0.63 (0.02)</td><td>0.71</td></tr><tr><td>DQN-REL [6]</td><td>0.67 (0.01)</td><td>0.68</td></tr><tr><td>MILP-DQN</td><td>1.22 (0.01)</td><td>0.87</td></tr><tr><td rowspan="3">4</td><td>DQN</td><td>0.71 (0.08)</td><td>0.69</td></tr><tr><td>DQN-REL [6]</td><td>0.75 (0.08)</td><td>0.66</td></tr><tr><td>MILP-DQN</td><td>1.56 (0.03)</td><td>0.87</td></tr><tr><td>5</td><td>MILP-DQN</td><td>1.92 (0.05)</td><td>0.85</td></tr></table>
66
+
67
+ We consider scenarios without the robot, which reduces the task's complexity to placing the blocks in a stable configuration while trying to optimally fill the desired shape. We compare against two baselines. The first one (DQN) does not consider the MILP's prior knowledge and can therefore place any of the available blocks at all currently unoccupied grid-cells. The second one (DQN-REL) follows [6], in which the available blocks can only be placed next to already placed blocks, thus reducing the action space. In this first set of experiments, we allow the blocks to be placed at any target grid-cell.
68
+
69
+ The results in Table 1 reveal that the MILP provides a strong inductive bias that is effective in guiding the exploration. The agents trained using our proposed MILP-DQN approach outperform the two baselines, which in turn exhibit very similar performance. Compared to the baselines, MILP-DQN agents increase the success rate and discounted reward by a factor of 2. These results confirm the task's combinatorial complexity: performing $\epsilon$-greedy exploration without an informed prior does not allow for discovering good action sequences. The results also reveal that the MILP-DQN agents generalize well to the out-of-distribution environments, as the desired target grid-cell coverage remains high at 0.87 and 0.85 (grid sizes of 4 and 5), despite the significant increase in task complexity, i.e., the average number of target grid-cells that should be filled increases from roughly 5 to 12 as the grid size grows from 3 to 5.
70
+
71
+ B) How effective is the GNN policy for robotic execution? We now consider the scenario with the robot-in-the-loop (Fig. 1) and investigate the GNN's effectiveness. For this purpose, we compare the learned GNN with a heuristic (HEUR). The agents using HEUR perform action selection as follows: based on actions proposed by the MILP, the heuristic only considers those for which the block's placement will result in a stable configuration and selects one of them at random. If there is no such action, the termination action is selected.
72
+
73
+ Table 2: Comparing our proposed method with a heuristic in the two-sided environment with the robot-in-the-loop.
74
+
75
+ <table><tr><td rowspan="2">Grid Size</td><td rowspan="2">Method</td><td colspan="3"/></tr><tr><td>$R$</td><td>$f$</td><td>$\bar{a}$</td></tr><tr><td rowspan="3">4*</td><td>HEUR</td><td>0.57</td><td>0.4</td><td>0.62</td></tr><tr><td>MILP-DQN</td><td>1.03</td><td>0.16</td><td>0.7</td></tr><tr><td>MILP-DQN-MCTS</td><td>1.24</td><td>0.05</td><td>0.75</td></tr><tr><td rowspan="3">5</td><td>HEUR</td><td>0.34</td><td>0.58</td><td>0.47</td></tr><tr><td>MILP-DQN</td><td>0.98</td><td>0.25</td><td>0.58</td></tr><tr><td>MILP-DQN-MCTS</td><td>1.38</td><td>0.08</td><td>0.65</td></tr></table>
76
+
77
+ The results are presented in Table 2. In both versions of the environment, our proposed agents (MILP-DQN & MILP-DQN-MCTS) clearly outperform the heuristic. Notably, already in the environment with fewer building blocks, the heuristic fails in 40% of the runs. Such high rates indicate that a more informed method for action sequencing is required. Both versions of our proposed approach effectively reduce the percentage of failures, with MILP-DQN decreasing the rates roughly by a factor of 2, while the addition of MCTS leads to an impressive decrease by a factor of almost 8. These results show that our learned graph-based representations are indeed capable of effectively capturing the state of the environment and of making informed decisions regarding action sequencing, which is a crucial component of RAD.
78
+
79
+ Conclusions. We have presented a novel hierarchical approach for robot assembly discovery (RAD). Our proposed approach is based on the combination of global reasoning through mixed-integer programming, which forms a powerful inductive bias for the subsequent graph-based reinforcement learning for local decision-making, together with grasp and motion planning for realizing the assembly actions. The hierarchy allows for the efficient decomposition of the original problem's huge combinatorial action space and thereby results in robust and reliable RAD policies. The proposed approach is validated in a set of simulated RAD experiments that illustrate its effectiveness. In the future, we want to investigate how the algorithm can be transferred to different domains.
80
+
81
+ ## References
82
+
83
+ [1] Elma Durmisevic. Circular economy in construction design strategies for reversible buildings. BAMB, Netherlands, 2019. 1
84
+
85
+ [2] Skylar Tibbits. Autonomous assembly: designing for a new era of collective construction. John Wiley & Sons, 2017. 1
86
+
87
+ [3] Harvey M Salkin and Cornelis A De Kluyver. The knapsack problem: a survey. Naval Research Logistics Quarterly, 1975. 1
88
+
89
+ [4] Victor Bapst, Alvaro Sanchez-Gonzalez, and Carl Doersch et al. Structured agents for physical construction. In ICML, 2019. 1
90
+
91
+ [5] Jessica B Hamrick, Victor Bapst, and Alvaro Sanchez-Gonzalez et al. Combining q-learning and search with amortized value estimates. In ICLR, 2019. 2, 3
92
+
93
+ [6] Niklas Funk, Georgia Chalvatzaki, Boris Belousov, and Jan Peters. Learn2assemble with structured representations and search for robotic architectural construction. In CoRL, 2021. 1, 2,3,4
94
+
95
+ [7] Ashish Vaswani, Noam Shazeer, Niki Parmar, and Jakob Uszkoreit et al. Attention is all you need. In NeurIPS, 2017. 2
96
+
97
+ [8] Petar Veličković, Guillem Cucurull, and Arantxa Casanova et al. Graph attention networks. arXiv:1710.10903, 2017. 2
98
+
99
+ [9] Wouter Kool, Herke van Hoof, and Max Welling. Attention, learn to solve routing problems! In ICLR, 2018. 2, 3
100
+
101
+ [10] David Silver, Julian Schrittwieser, and Karen Simonyan et al. Mastering the game of go without human knowledge. Nature, 2017. 2, 3
102
+
103
+ [11] Marc Toussaint. Logic-geometric programming: An optimization-based approach to combined task and motion planning. In IJCAI, 2015. 2
104
+
105
+ [12] Leslie Pack Kaelbling and Tomás Lozano-Pérez. Hierarchical planning in the now. In Workshops AAAI, 2010. 2
106
+
107
+ [13] Gurobi Optimization, LLC. Gurobi Optimizer, 2022. 3
108
+
109
+ ## A Appendix
110
+
111
+ ### A.1 Visualization of successful RAD sequences
112
+
113
+ To support the experimental evaluations presented in Section 4, we provide a visualization of RAD in Figs. 4 & 5. In line with the conclusions drawn in the main paper, Fig. 4 underlines that our proposed hierarchical approach is indeed capable of resolving the inherent difficulties of RAD. It depicts the successful assembly of a desired target shape using 4 blocks of 3 different types. In contrast, in Fig. 5, we visualize one exemplary failure case, where the use of the less informed heuristic agent results in bad action sequencing and ultimately in two blocks colliding with each other.
114
+
115
+ ### A.2 Additional details on the lowest level: Grasp and Motion planning (GAMP)
116
+
117
+ The lowest level converts the previous level's actions into robot joint commands and performs the final robot execution of block grasping and moving such that the block is placed in the desired pose. While it would be possible to add those decisions to the higher levels, we consider motion generation as a separate module in our hierarchical framework, as these decisions depend heavily on the actual robot manipulator. Moreover, we want to avoid increasing the action space of the previous level. Robotic block grasping and placing is achieved by first checking the feasibility of a predefined set of top-down grasping poses and subsequently checking whether the grasp results in a feasible final placement pose. If there exists a pair of feasible grasping and placing poses, we move the robot by approaching the grasping pose from the top, then move to a position slightly above the placing location, and finally approach the placement pose. All intermediate waypoints are computed based on inverse kinematics.
118
+
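The grasp-and-place procedure above can be sketched as follows; the candidate poses, feasibility checks, and lift height are all hypothetical placeholders for robot-specific components:

```python
def plan_grasp_and_place(grasp_candidates, grasp_feasible, place_feasible,
                         place_pose, lift=0.1):
    """Return the waypoint sequence [grasp, above-place, place] for the first
    feasible grasp/place pair, or None if the action is infeasible.
    Waypoints would later be turned into joint targets via inverse kinematics."""
    for grasp in grasp_candidates:
        if grasp_feasible(grasp) and place_feasible(grasp, place_pose):
            above = (place_pose[0], place_pose[1], place_pose[2] + lift)
            return [grasp, above, place_pose]
    return None

# toy example: one unreachable and one reachable top-down grasp
candidates = [(0.0, 0.0, -1.0), (0.0, 0.0, 0.2)]
waypoints = plan_grasp_and_place(candidates,
                                 grasp_feasible=lambda g: g[2] > 0,
                                 place_feasible=lambda g, p: True,
                                 place_pose=(1.0, 1.0, 0.0))
# waypoints == [(0.0, 0.0, 0.2), (1.0, 1.0, 0.1), (1.0, 1.0, 0.0)]
```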
119
+ ![01963ed8-d6c4-727d-9fcf-1b21b9f4d66f_5_302_782_1194_646_0.jpg](images/01963ed8-d6c4-727d-9fcf-1b21b9f4d66f_5_302_782_1194_646_0.jpg)
120
+
121
+ Figure 5: Illustration of an unsuccessful RAD sequence using the heuristic agent introduced in Sec. 4-B. As shown in the images, it is important to make informed decisions about the assembly sequence, as wrong sequencing can result in collisions between the block being placed and other blocks in the scene.
122
+
papers/LOG/LOG 2022/LOG 2022 Conference/W59BHjEDfz/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,129 @@
1
+ § GRAPH-BASED REINFORCEMENT LEARNING MEETS MIXED INTEGER PROGRAMS: AN APPLICATION TO 3D ROBOT ASSEMBLY DISCOVERY
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Robot assembly discovery is a challenging problem that lives at the intersection of resource allocation and motion planning. The goal is to combine a predefined set of objects to form something new while considering task execution with the robot-in-the-loop. In this work, we tackle the problem of building arbitrary, predefined target structures entirely from scratch using a set of Tetris-like building blocks and a robot. Our novel hierarchical approach aims at efficiently decomposing the overall task into three feasible levels that benefit mutually from each other. On the high level, we run a classical mixed-integer program for global optimization of block-type selection and the blocks' final poses to recreate the desired shape. Its output is then exploited as a prior to efficiently guide the exploration of an underlying reinforcement learning (RL) policy handling decisions regarding structural stability and robotic feasibility. This RL policy draws its generalization properties from a flexible graph-based neural network that is learned through Q-learning and can be refined with search. Lastly, a grasp and motion planner transforms the desired assembly commands into robot joint movements. We demonstrate our proposed method's performance on a set of competitive simulated robot assembly discovery environments and report performance and robustness gains compared to an unstructured graph-based end-to-end approach.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ < g r a p h i c s >
16
+
17
+ Figure 1: Illustrating a simulated RAD environment (left) and all three components of our proposed hierarchical approach (right).
18
+
19
+ A common desire amongst many industry sectors is to increase resource efficiency. The construction industry could significantly reduce its environmental impact by reusing existing material more efficiently [1]. There is a fundamental need for combining intelligent algorithms for reasoning on how existing material can be recombined to form something new, with autonomous execution [2].
20
+
21
+ Herein, we are concerned with the problem of autonomous robotic assembly discovery (RAD), where a robotic agent should reason about abstract $3\mathrm{D}$ target shapes that need to be fulfilled given a set of available building blocks (cf. Fig. 1). Unlike other assembly problems with known instructions, in RAD, the agent does neither have any prior information about which blocks to use and their final poses, nor about the execution sequence. Contrarily, the RAD agent should discover the possible ways of combining the building blocks, find appropriate action sequences, and put them into practice. RAD can thus be structured into two difficulty levels. On the high level, a goal-defined resource allocation problem has to be solved, which is typically NP-complete for discrete resources, and can be viewed as a real-world version of the Knapsack Problem [3]. The low level requires solving a motion planning problem.
22
+
23
+ One way of approaching RAD are end-to-end approaches that directly map from problem definition to low level actions [4-6]. They are typically straightforward to design, and draw their generalization properties from learned graph neural network (GNN) representations. Due to their ability to learn relational encodings $\left\lbrack {7,8}\right\rbrack$ and invariant representations, they can overcome combinatorial barriers [9], and be combined with search for improved generalization and robustness [5, 6, 10]. Yet, they often require extensive training due to the huge combinatorial action space, and are typically hard to debug and interpret. On the other end of the spectrum are Task and Motion Planning (TAMP) approaches $\left\lbrack {{11},{12}}\right\rbrack$ , which naturally represent the hierarchical nature of the problem and necessitate full prior knowledge of geometrics and kinematics. They are usually unsuitable for real-time reactive control, as the full joint optimization suffers from combinatorics and non-convex constraints.
24
+
25
+ We propose a novel hierarchical method for 3D RAD that addresses both, resource allocation and motion planning. On the high level, a model-based mixed-integer linear program (MILP), handling the process of block-type selection and optimizing the blocks' final poses for optimally resembling the desired target shape, is solved. The MILP's solution is then used as a guiding exploration signal in a graph-based Reinforcement Learning (RL) framework. We define a GNN for capturing the geometric, structural, and physical relationships between building blocks, robot, and target shape, thereby incorporating all effects that have not been modelled on the higher level. The GNN is trained through model-free Q-learning allowing the integration with tree search for improved long-term decisions [10]. To put the previous reasoning into practice, at the lowest level, we rely on simple grasp and motion planning. We present an empirical evaluation of our proposed approach in a set of competitive simulated RAD tasks. The results show superior performance of our approach against both empirical and end-end GNN baselines, thereby underlining its effectiveness.
26
+
27
+ § 2 PROBLEM DEFINITION
28
+
29
+ < g r a p h i c s >
30
+
31
+ Figure 2: 2D RAD environment with one placed block consisting of two primitive elements (shown in brown / blue). The grid cells are visualized through their centre points. Pink points correspond to target grid-cells that are to be filled, while non-target grid-cells (green) should remain unoccupied.
32
+
33
+ We formulate the problem of having to combine rectlinear blocks into a desired target shape (cf. Fig 1) as Markov Decision Process. Its state is given by the combination of four sets. Namely, the set of unplaced blocks that encodes the remaining blocks, the set of placed blocks that have already been used for the construction, and two sets containing the so called target grid-cells and non-target grid-cells, respectively. While the target grid-cells (pink) are part of the target shape and should thus ideally all be filled, the non-target grid-cells (green) should remain unoccupied (cf. Fig. 2). We also assume that all building blocks are a combination of primitive blocks. This choice allows to modularly represent any more complicated block through primitive elements.
34
+
35
+ For block placement, we use of a discrete, time-varying action space. Every unplaced primitive block can be placed w.r.t. all available grid-cells while additionally selecting from four actions that rotate the block by ${0}^{ \circ }, \pm {90}^{ \circ }$ , or ${180}^{ \circ }$ around the z-axis. We also add one termination action that results in stopping the assembly process. The resulting action space of combinatorial complexity thus contains #unplaced building blocks $\times$ #grid-cells $\times 4 + 1$ actions.
36
+
37
+ After every placement action, the set of placed/unplaced elements and target/non-target grid cells are updated, and a a reward is assigned. The reward is positive when the action reduced the number of target grid-cells, and negative if non-target grid-cells are being filled, therefore actively enforcing resource efficiency. The conditions for a successful placement action are that the block can be placed by the robot without moving or colliding with any other block, and that it is placed in a stable configuration. On any invalid action, the episode is terminated and a high negative reward is assigned. Otherwise, the episode is terminated upon the events of i) the agent choosing the termination action, ii) no more available building blocks, or iii) the completion of RAD.
38
+
39
+ § 3 METHOD
40
+
41
+ We introduce the two upper levels of our proposed tri-level hybrid approach for reliable RAD (cf. Fig. 1). For the lowest level that only realizes the commanded assembly actions, we refer to the appendix.
42
+
43
+ High Level: MILP for optimal geometric target filling. We first solve a MILP which is targeted at optimizing the blocks' placing poses to optimally fill the desired shape in light of the problem's combinatorial complexity. Yet, to render the problem tractable, we do not consider the sequencing and robotic constraints. Based on the previous definitions (reward & voxelization), the MILP's objective (subject to maximization) equates to ${\mathcal{O}}_{\text{ MILP }} = {\mathbf{c}}^{T}\mathbf{g}$ , with vector $\mathbf{g}$ representing the grid-state, and $c$ containing weighting factors that indicate whether a grid-cell should be filled (1) or not (-1). As every grid-cell can only be occupied at maximum by one block, we add $g\left\lbrack i\right\rbrack \leq 1,\forall g\left\lbrack i\right\rbrack \in \mathbf{g}$ . Next, we determine how every potential action changes the grid-state. I.e., placing the horizontal block from Fig. 2 in the lowest left position results in a grid state of ${\mathbf{p}}_{i = 1,k = 1}^{T} = \left\lbrack {1,1,0,\ldots ,0}\right\rbrack$ , with block type index $i$ and placement action $k$ . By additionally assigning a binary decision variable ${w}_{i,k}$ and taking all object types into account, we can define the change in the grid-state according to $\mathbf{g} = \mathop{\sum }\limits_{{\widehat{i} = 1}}^{P}\mathop{\sum }\limits_{{\widehat{k} = 1}}^{{K\left( \widehat{i}\right) }}{w}_{i = \widehat{i},k = \widehat{k}}{\mathbf{p}}_{i = \widehat{i},k = \widehat{k}}$ with a total of $P$ different block types and $K\left( i\right)$ admissible actions. While the binary decision variables prohibit any partial block placement by definition, we still have to restrict that any type of block can only be placed depending on its appearance in the RAD scene $\left( {N}_{i}\right)$ , i.e. $\mathop{\sum }\limits_{{\widehat{k} = 1}}^{{K\left( i\right) }}{w}_{i,k = \widehat{k}} \leq {N}_{i},\forall i \in P$ . We solve the resulting MILP through Gurobi [13].
44
+
45
+ < g r a p h i c s >
46
+
47
+ Figure 3: Illustrating action selection. First, the current scene is transformed into a graph. Note: Only a subset of the target (pink) and non-target (green) grid-cells is shown. White nodes depict the unplaced primitive blocks. Next follows message passing updating the nodes' features. The action's Q-values are predicted based on the nodes' features of the respective unplaced primitive block and the grid-cells using a feedforward neural network (NN). To incorporate the prior knowledge, we only consider actions part of the MILP solution (shown in red).
48
+
49
+ Medium Level: GNN for task sequencing. The high level MILP only partially resolves the problem's combinatorial aspect. It lacks i) information about the placement actions' sequencing, ii) the exact assignment of which block to use for each placement, and iii) the consideration of robotic feasibility, the blocks' initial positions, and structural stability. Thus, another level is required which is capable of efficiently incorporating the prior knowledge from the MILP to decide upon either executing one of the proposed actions or terminating the current assembly.
50
+
51
+ We propose an approach based on a combination of GNN and Q-learning [5, 6], for the following reasons. The GNN is capable of providing the required representational flexibility and invariance to problem size, while performing action selection based on Q-learning is desirable as i) the herein considered action space is discrete, but remains tractable for exploration due to the MILP's inductive bias, ii) the state-action-based formulation allows to efficiently incorporate the prior knowledge by masking out all actions that are not inside the MILP solution, iii) potential multimodalities in the MILP solution are not problematic and do not erroneously bias this Q-function estimator, and iv) it allows easy and time-effective combination with search-based methods, such as Monte Carlo Tree Search (MCTS) to improve robustness and performance [10].
52
+
53
+ We now describe the action selection process (cf. Fig. 3). We refer to [6] (which essentially uses the same GNN) for additional details. We first transform the environment's current state into a graph by creating nodes for all primitive blocks and grid-cells. The nodes' features contain the respective nodes' 3D position, as well as two indices that indicate the type of node, i.e., placed/unplaced primitive block, target/non-target grid-cell. Graph creation is followed by three rounds of message passing using an attention mechanism $\left\lbrack {6,9}\right\rbrack$ , in which we sequentially build an encoded graph. The encodings are the basis for computing Q-values for all available actions. As any unplaced primitive block can be placed w.r.t. every grid-cell, a standard feedforward NN is used that takes as input the encoded node values of i) the primitive block-to-be-placed and ii) the grid-cell, and outputs the Q-values for all four rotational-placement actions between these nodes. This process is repeated for all pairs of unplaced primitive blocks and grid-cells. The action decision is made using an $\epsilon$ -greedy strategy, yet restricted to the set of actions proposed by the MILP, plus the termination action. The network's weights are refined through temporal-difference learning. While this Q-learning procedure by itself already results in good policies, during test time we additionally consider action selection based on the combination of Q-learning and MCTS (DQN+MCTS).
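+ The masked, $\epsilon$-greedy selection over the MILP-proposed actions can be sketched as follows. This is a minimal illustration, not the authors' implementation; the action encoding, `q_values`, `milp_actions` and `q_terminate` are hypothetical names.

```python
import numpy as np

def select_action(q_values, milp_actions, q_terminate, epsilon=0.0, rng=None):
    """Pick an action among those proposed by the MILP (plus termination).

    q_values: dict mapping (block, cell, rotation) -> predicted Q-value
    milp_actions: set of (block, cell, rotation) tuples from the MILP solution
    q_terminate: predicted Q-value of the termination action
    """
    rng = rng or np.random.default_rng(0)
    # Mask: only MILP-proposed placements and termination are eligible.
    candidates = [(a, q_values[a]) for a in milp_actions if a in q_values]
    candidates.append(("terminate", q_terminate))
    if rng.random() < epsilon:                      # explore
        return candidates[rng.integers(len(candidates))][0]
    return max(candidates, key=lambda c: c[1])[0]   # exploit
```

+ Note how actions outside the MILP solution (here, any key of `q_values` not in `milp_actions`) never enter the candidate list, which is the masking the text describes.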
54
+
55
+ § 4 EXPERIMENTAL RESULTS & CONCLUSIONS
56
+
57
+ We evaluate our proposed MILP-DQN method, optionally combined with MCTS (search budget of 5), in simulation (Fig. 1). We aim to answer two questions: 1) Does the MILP's guided exploration signal improve performance compared to end-to-end approaches? 2) How effective is the medium-level GNN compared to a heuristic approach for task sequencing?
58
+
59
+ The training is conducted with the same parameters as in [6]. In the evaluations, we describe the environment's difficulty through the grid size, e.g., Fig. 2 shows a potential target shape for a grid size of 3. The star (*) indicates the agents' evaluation in their training conditions, while the other experiments are out-of-distribution. The results are obtained by averaging the agents' performance in 200 scenes. We report the discounted reward $R$ , the fraction of runs that ended in failure $f$ , i.e., by trying to execute an action that is not feasible with the robot, placing a block in an unstable configuration, or destroying the already existing structure, and the target grid-cell coverage $\bar{a}$ , i.e., the fraction of initially unfilled target grid-cells that are finally filled.
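+ For concreteness, the three reported metrics can be computed from evaluation episodes as sketched below. This is an illustrative sketch only; the episode record fields (`rewards`, `failed`, `filled`, `targets`) are hypothetical names, not the authors' code.

```python
def evaluate(episodes, gamma=0.99):
    """Aggregate the three reported metrics over evaluation episodes.

    Each episode is a dict with:
      'rewards' - per-step rewards,
      'failed'  - True if the run ended in an infeasible/unstable action,
      'filled'  - target grid-cells filled by the end of the episode,
      'targets' - target grid-cells unfilled at the start.
    """
    # Discounted return R, averaged over episodes.
    R = sum(sum(g * gamma**t for t, g in enumerate(ep["rewards"]))
            for ep in episodes) / len(episodes)
    # Fraction of runs ending in failure.
    f = sum(ep["failed"] for ep in episodes) / len(episodes)
    # Target grid-cell coverage (fraction of target cells finally filled).
    coverage = sum(ep["filled"] / ep["targets"] for ep in episodes) / len(episodes)
    return R, f, coverage
```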
60
+
61
+ A) Is the high-level MILP needed?
62
+
63
+ Table 1: Comparing our proposed method with two learned baselines in the two-sided environment wo robot.
64
+
65
+ | Grid Size | Method      | $R$         | $\bar{a}$ |
+ |-----------|-------------|-------------|-----------|
+ | 3*        | DQN         | 0.63 (0.02) | 0.71      |
+ | 3*        | DQN-REL [6] | 0.67 (0.01) | 0.68      |
+ | 3*        | MILP-DQN    | 1.22 (0.01) | 0.87      |
+ | 4         | DQN         | 0.71 (0.08) | 0.69      |
+ | 4         | DQN-REL [6] | 0.75 (0.08) | 0.66      |
+ | 4         | MILP-DQN    | 1.56 (0.03) | 0.87      |
+ | 5         | MILP-DQN    | 1.92 (0.05) | 0.85      |
92
+ We consider scenarios without the robot, which reduces the task's complexity to placing the blocks in a stable configuration while trying to optimally fill the desired shape. We compare against two baselines. The first one (DQN) does not consider the MILP's prior knowledge and can therefore place any of the available blocks at all currently unoccupied grid-cells. The second one (DQN-REL) follows [6], in which the available blocks can only be placed next to already placed blocks, thus reducing the action space. In the first step, we allow the blocks to be placed at any target grid-cell.
93
+
94
+ The results in Table 1 reveal that the MILP provides a strong inductive bias that is effective in guiding the exploration. The agents trained using our proposed MILP-DQN approach outperform the two baselines, which in turn exhibit very similar performance. Compared to the baselines, MILP-DQN agents achieve an increase in the success rate and discounted reward by a factor of 2. These results confirm the task's combinatorial complexity: performing an $\epsilon$ -greedy exploration without an informed prior does not allow for discovering good action sequences. The results also reveal that the MILP-DQN agents generalize well to the out-of-distribution environments, as the desired target grid-cell coverage remains high at 0.87 and 0.85 (grid sizes of 4 and 5), despite the significant increase in task complexity, i.e., the average number of target grid-cells that should be filled increases from roughly 5 to 12 when increasing the grid size from 3 to 5.
95
+
96
+ B) How effective is the GNN policy for robotic execution? We now consider the scenario with the robot-in-the-loop (Fig. 1) and investigate the GNN's effectiveness. For this purpose, we compare the learned GNN with a heuristic (HEUR). The agents using HEUR perform action selection as follows: among the actions proposed by the MILP, the heuristic only considers those whose block placement results in a stable configuration and selects one of them at random. If there is no such action, the termination action is selected.
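+ The HEUR baseline just described admits a very short sketch. Again, this is an illustration under assumed interfaces; `milp_actions` and the stability predicate `is_stable` are hypothetical names.

```python
import random

def heuristic_action(milp_actions, is_stable, rng=None):
    """HEUR baseline: among MILP-proposed placements, keep only those whose
    placement would be stable, and pick one uniformly at random.
    If no stable placement remains, choose the termination action."""
    rng = rng or random.Random(0)
    stable = [a for a in milp_actions if is_stable(a)]
    return rng.choice(stable) if stable else "terminate"
```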
97
+
98
+ Table 2: Comparing our proposed method with a heuristic in the two-sided environment with the robot-in-the-loop.
99
+
100
+ | Grid Size | Method        | $R$  | $f$  | $\bar{a}$ |
+ |-----------|---------------|------|------|-----------|
+ | 4*        | HEUR          | 0.57 | 0.4  | 0.62      |
+ | 4*        | MILP-DQN      | 1.03 | 0.16 | 0.7       |
+ | 4*        | MILP-DQN-MCTS | 1.24 | 0.05 | 0.75      |
+ | 5         | HEUR          | 0.34 | 0.58 | 0.47      |
+ | 5         | MILP-DQN      | 0.98 | 0.25 | 0.58      |
+ | 5         | MILP-DQN-MCTS | 1.38 | 0.08 | 0.65      |
127
+ The results are presented in Table 2. In both versions of the environment, our proposed agents (MILP-DQN & MILP-DQN-MCTS) clearly outperform the heuristic. Notably, already in the environment with fewer building blocks, using the heuristic results in $40\%$ failures. Those high rates indicate that a more informed method for action sequencing is required. Both versions of our proposed approach are capable of effectively reducing the percentage of failures, with MILP-DQN decreasing the rates roughly by a factor of 2, while the addition of MCTS leads to an impressive decrease by a factor of almost 8. Those results show that our learned graph-based representations are indeed capable of effectively capturing the state of the environment and making informed decisions regarding the action sequencing, which is a crucial component of RAD.
128
+
129
+ Conclusions. We have presented a novel hierarchical approach for robot assembly discovery (RAD). Our proposed approach is based on the combination of global reasoning through mixed-integer programming, which forms a powerful inductive bias for the subsequent graph-based reinforcement learning for local decision-making, together with grasp and motion planning for realizing the assembly actions. The hierarchy allows for the efficient decomposition of the original problem's huge combinatorial action space and thereby results in robust and reliable RAD policies. The proposed approach is validated in a set of simulated RAD experiments that illustrate its effectiveness. In the future, we want to investigate how the algorithm can be transferred to different domains.
papers/LOG/LOG 2022/LOG 2022 Conference/WtFobB28VDey/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,769 @@
1
+ # Learnable Commutative Monoids for Graph Neural Networks
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Graph neural networks (GNNs) have been shown to be highly sensitive to the choice of aggregation function. While summing over a node's neighbours can approximate any permutation-invariant function over discrete inputs, recent work has proved there are set-aggregation problems for which summing cannot generalise to unbounded inputs, proposing recurrent neural networks regularised towards permutation-invariance as a more expressive aggregator. We show that these results carry over to the graph domain: GNNs equipped with recurrent aggregators are competitive with state-of-the-art invariant aggregators, on both synthetic benchmarks and real-world problems. However, despite the benefits of recurrent aggregators, their $O\left( V\right)$ depth makes them both difficult to parallelise and harder to train on large graphs. Inspired by the observation that a well-behaved aggregator for a GNN is a commutative monoid over its latent space, we propose a framework for constructing learnable, commutative, associative binary operators. And with this, we construct an aggregator of $O\left( {\log V}\right)$ depth, yielding exponential improvements for both parallelism and dependency length while achieving performance competitive with recurrent aggregators. Based on our empirical observations, our proposed learnable commutative monoid (LCM) aggregator represents the "best of both worlds" between efficient and expressive aggregators.
12
+
13
+ ## 1 Introduction
14
+
15
+ When dealing with irregularly structured data [Bronstein et al., 2021], neural networks typically need to process data of arbitrary sizes. In such scenarios, the heart of the network is arguably its aggregation function-a function that reduces a collection of neighbour feature vectors into a single vector. Indeed, graph neural networks (GNNs) have been shown empirically to be highly sensitive to the choice of aggregator [Veličković et al., 2019, Richter and Wattenhofer, 2020], with a wide range of aggregators (e.g. sum, max and mean) and their combinations [Corso et al., 2020] in common use.
16
+
17
+ In this paper, we offer a new perspective for studying aggregators, with clear theoretical and practical implications. It can be said that the true objective of choosing an aggregator is to make it as simple as possible for the parameters of the GNNs to exploit that aggregator in a way that makes it easier to solve the downstream task. Specifically, we study this in the context of learning to align the GNN's aggregator to a desirable target aggregation function. It is already a known fact that higher alignment implies reduced sample complexity [Xu et al., 2019a], and in the context of algorithmic reasoning, it is well-known that a neural network will be better at learning to imitate an algorithm if its aggregator matches that of the algorithm it is trying to imitate [Veličković et al., 2019, Xu et al., 2020].
18
+
19
+ However, beyond the realm of learning a task with a concrete aggregator, many real-world problems offer more challenging settings, wherein the optimal aggregator to learn is not clear, but unlikely to be a trivial fixed aggregator. To formalise this notion, while preserving the useful assumption of permutation invariance, we leverage commutative monoids as a formalism for both the aggregators supported by GNNs and the (potentially unknown) target aggregators one would wish to align to. This formalism allows us to derive several relevant results, including the fact that using any fixed commutative monoid (e.g. sum or max) would compel the GNN to learn a homomorphism from it to the target commutative monoid, purely from data. We hypothesise this is often difficult to do robustly, and verify our hypothesis by demonstrating several instances (both synthetic and real-world) where fixed aggregators (including combinations of them [Corso et al., 2020]) fail to generalise.
20
+
21
+ Our perspective, inspired by the functional programming motif of folds (or catamorphisms) over arbitrary data structures, leads us to consider flexible and learnable aggregation functions, which can more easily fit a wide range of commutative monoids directly, without needing to learn such a homomorphism. The most popular such aggregator has previously been the RNN (i.e. 'a fold over a list') - used, for instance, in GraphSAGE [Hamilton et al., 2017]. The reason for RNNs' expressive power is simple: their usage of a hidden recurrent state allows them to break away from the constraints of commutative monoids and aggregate inputs more flexibly. However, while empirically powerful, the sequential structure of RNN aggregators leads to clear shortcomings in efficiency: if an RNN had learnt to aggregate $n$ neighbours under a commutative monoid operation $\oplus$ , it would do so with depth linear in $n$ , as $\left( {\left( {\left( {\left( {\ldots \left( {{\mathbf{x}}_{1} \oplus {\mathbf{x}}_{2}}\right) \oplus {\mathbf{x}}_{3}}\right) \oplus \ldots }\right) \oplus {\mathbf{x}}_{n - 1}}\right) \oplus {\mathbf{x}}_{n}}\right)$ .
22
+
23
+ But, by folding over a binary tree instead of a list (in other words, rearranging the order of operations to a balanced binary tree $\left( {\ldots \left( {\left( {{\mathbf{x}}_{1} \oplus {\mathbf{x}}_{2}}\right) \oplus \left( {{\mathbf{x}}_{3} \oplus {\mathbf{x}}_{4}}\right) }\right) \oplus \cdots \oplus \left( {{\mathbf{x}}_{n - 1} \oplus {\mathbf{x}}_{n}}\right) \ldots }\right)$ ), we derive an aggregator that hits a "sweet spot" between flexibility and performance, empirically retaining most of the expressivity of RNNs while having depth logarithmic in $n$ . We also demonstrate how such layers can be effectively constrained and regularised to respect the commutative monoid axioms (essentially creating a learnable commutative monoid), leading to further gains in robustness.
24
+
25
+ ## 2 Motivation
26
+
27
+ Before exploring GNN aggregators, we first review the structure of a GNN. For a graph $G = \left( {V, E}\right)$ whose nodes $u$ have one-hop neighbourhoods ${\mathcal{N}}_{u} = \{ v \in V \mid \left( {v, u}\right) \in E\}$ and features ${\mathbf{x}}_{u}$ , a message-passing GNN over $G$ is defined by Bronstein et al. [2021] as ${\mathbf{h}}_{u} = \phi \left( {{\mathbf{x}}_{u},{\bigoplus }_{v \in {\mathcal{N}}_{u}}\psi \left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) }\right)$ for $\psi$ the message function, $\phi$ the readout function and $\oplus$ a permutation-invariant aggregation function. This GNN ’template’ can be instantiated in many ways, with different choices of $\phi ,\psi$ and $\oplus$ yielding popular architectures such as GCNs [Kipf and Welling, 2017] and GATs [Veličković et al., 2018].
28
+
29
+ ### 2.1 To learn a complex aggregator is to learn a commutative monoid homomorphism
30
+
31
+ So we’ve seen that, in order to define a GNN, we must define a permutation-invariant aggregator $\oplus$ over its messages. But how can we characterise a permutation-invariant aggregator in general?
32
+
33
+ In abstract algebra (and in functional programming), a permutation-invariant aggregator over a set can be described as (maps into and out of) a commutative monoid. A commutative monoid $\left( {M,\oplus ,{e}_{ \oplus }}\right)$ is a set $M$ equipped with a commutative, associative binary operator $\oplus : M \times M \rightarrow M$ and an identity element ${e}_{ \oplus } \in M$ , in other words, an instance of the following Haskell typeclass, satisfying the identities below for all `(x y z :: a)`:
34
+
35
+ ---
+
+ class CommutativeMonoid a where
+   e    :: a
+   (<>) :: a -> a -> a
+
+ -- laws:
+ x <> e == x
+ x <> y == y <> x
+ x <> (y <> z) == (x <> y) <> z
+
+ ---
52
+
53
+ Intuitively, commutative monoids over a set $M$ are 'operations you can use to reduce a multiset, whose members are in $M$ , to a single value'. These include GNN aggregators, like sum-aggregation $\left( {{\mathbb{R}}^{n},+,\mathbf{0}}\right)$ and max-aggregation $\left( {{\mathbb{R}}^{n},\max , - \mathbf{\infty }}\right)$ . Indeed, Dudzik and Veličković [2022] observe that, for the aggregation function $\oplus$ of a GNN to be well-behaved (in the sense of respecting the axioms of the multiset monad), it must form a commutative monoid $\left( {S,\oplus ,{e}_{ \oplus }}\right)$ over some subspace $S$ of ${\mathbb{R}}^{n}$ .
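+ The "reduce a multiset with a monoid" intuition can be sketched in a few lines of Python (an illustrative sketch, not the paper's code; the `(op, identity)` pair representation is our own choice):

```python
from functools import reduce

# A commutative monoid here is just a pair (binary op, identity element).
sum_monoid = (lambda a, b: a + b, 0)
max_monoid = (max, float("-inf"))

def aggregate(monoid, xs):
    """Reduce a multiset (given as any iterable) with a commutative monoid.
    The identity element handles the empty multiset."""
    op, e = monoid
    return reduce(op, xs, e)
```

+ Because the operator is commutative and associative, the result is independent of the order in which `reduce` consumes `xs`, which is exactly the permutation-invariance a GNN aggregator needs.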
54
+
55
+ The vast majority of GNNs choose a fixed permutation-invariant function $\oplus$ (or fixed combinations of them [Corso et al., 2020]). While some research [Pellegrini et al., 2020, Li et al., 2020] has explored aggregation functions with learnable parameters, these functions are only very weakly parameterised, and give us limited additional expressivity.
56
+
57
+ For problems where we can anticipate the kind of aggregation function we might need, this approach works well: indeed, choosing a commutative monoid that aligns with the algorithm we want our GNN to learn can improve performance both in and out of distribution [Veličković et al., 2019]. But for problems where we may need to learn aggregations over complex internal representations, these monoids may not always be the most natural choice for the aggregation we're trying to learn. In such cases, $\psi$ and $\phi$ must take on some of the work of mapping our representations into and out of a space where $\oplus$ -aggregation makes sense.
60
+
61
+ ### 2.2 Limitations on expressivity and generalisation for constructed aggregators
62
+
63
+ Given this result, what are the implications for prior and present work?
64
+
65
+ ---
+
+ type M = (Int, Int)
+
+ instance CommutativeMonoid M where
+   e = (Infinity, Infinity)
+   (a1, a2) <> (b1, b2) = (c1, c2)
+     where c1:c2:_ =
+             sort [a1, a2, b1, b2]
+
+ ---
80
+
81
+ Formally, suppose we use a GNN equipped with a fixed commutative monoid aggregator $\left( {F,\oplus ,{e}_{ \oplus }}\right)$ , on a problem for which the 'true' aggregation we want to perform is the commutative monoid $\left( {M,*,{e}_{ * }}\right)$ over the GNN's latent space. What would it take for our GNN to perform $M$ -aggregation?
84
+
85
+ Proposition 1. Let $\left( {M,*,{e}_{ * }}\right)$ and $\left( {F,\oplus ,{e}_{ \oplus }}\right)$ be commutative monoids. Then for functions $g : M \rightarrow F$ and $h : F \rightarrow M$ , ${ * }_{x \in X}x = h\left( {{\bigoplus }_{x \in X}g\left( x\right) }\right)$ for all multisets $X$ of $M$ , if and only if $h$ is both a left inverse of $g$ and a surjective monoid homomorphism from $\langle g\left( M\right) \rangle \subseteq {F}^{1}$ to $M$ .
86
+
87
+ Now, given Proposition 1 above (proven in Appendix A), suppose we had a trained GNN, parameterised by $\phi : {\mathbb{R}}^{k} \times F \rightarrow {\mathbb{R}}^{k}$ and $\psi : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow F$ , with a fixed $F$ -aggregator. Suppose this GNN has learned to imitate the $M$ -aggregation commutative monoid - in other words, that there exist functions ${\phi }^{\prime } : {\mathbb{R}}^{k} \times M \rightarrow {\mathbb{R}}^{k},{\psi }^{\prime } : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow M, g : M \rightarrow F$ and $h : F \rightarrow M$ such that $\phi \left( {{\mathbf{x}}_{u},{\mathbf{m}}_{\mathcal{N}\left( u\right) }}\right) = {\phi }^{\prime }\left( {{\mathbf{x}}_{u}, h\left( {\mathbf{m}}_{\mathcal{N}\left( u\right) }\right) }\right) ,\psi \left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) = g\left( {{\psi }^{\prime }\left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) }\right)$ and ${ * }_{x \in X}x = h\left( {{ \oplus }_{x \in X}g\left( x\right) }\right)$ for all multisets $X$ of $M$ . Observe that this implies, for all nodes $u, v$ in graphs $G$ , the following:
88
+
89
+ $$
90
+ \phi \left( {{\mathbf{x}}_{u},{\bigoplus }_{v \in {\mathcal{N}}_{u}}\psi \left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) }\right) = {\phi }^{\prime }\left( {{\mathbf{x}}_{u},\underset{v \in {\mathcal{N}}_{u}}{ * }{\psi }^{\prime }\left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) }\right) = {\phi }^{\prime }\left( {{\mathbf{x}}_{u}, h\left( {{\bigoplus }_{v \in {\mathcal{N}}_{u}}g\left( {{\psi }^{\prime }\left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) }\right) }\right) }\right)
91
+ $$
92
+
93
+ Hence $h$ is a surjective monoid homomorphism from $\langle g\left( M\right) \rangle$ to $M$ (i.e. $M$ is a subquotient of $F$ ).
94
+
95
+ So at a high level, for a GNN with aggregator $F$ to imitate an aggregator $M$ , it must learn a function that can decompose into a surjective monoid homomorphism from a submonoid of $F$ to $M$ .
96
+
97
+ As has been seen in [Veličković et al., 2019, Sanchez-Gonzalez et al., 2020], it's clear that if our fixed commutative monoid $F$ is aligned with a target monoid $M$ for the problem we want to solve - in other words, if the homomorphism doesn't have to do much work - then we can easily learn to imitate $M$ . Indeed, if the target homomorphism is linear, and we have appropriate training set coverage, then by $\left\lbrack {\mathrm{{Xu}}\text{et al.,2020}}\right\rbrack$ it may well generalise out-of-distribution - a result that holds (to an extent) in the case of learning to imitate path-finding algorithms such as Bellman-Ford [Veličković et al., 2019].
98
+
99
+ But there are many cases where $M$ is more complex, and there is no commonly used aggregator $F$ for which we can simply apply a linear homomorphism to get from $F$ to $M$ . One such example is the problem of finding the ${2}^{\text{nd }}$ -minimum element in a set. Here, the desired monoid $M$ is as follows:
100
+
101
+ ---
+
+ secondMinimum :: [Int] -> Int
+ secondMinimum = dec . agg . map enc
+   where
+     enc x = (x, Infinity)
+     agg = reduce (<>)
+     dec (_, x2) = x2
+
+ ---
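+ A direct Python transcription of this monoid (an illustrative sketch; names `op`, `E` and `second_minimum` are ours, mirroring the Haskell `<>`, `e` and `secondMinimum`) makes it easy to check its laws and behaviour:

```python
import math

# Identity element: "no elements seen yet".
E = (math.inf, math.inf)

def op(a, b):
    """Combine two (min, 2nd-min) summaries; the two smallest of the four
    candidates summarise the merged multiset. Commutative and associative."""
    c1, c2, *_ = sorted([*a, *b])
    return (c1, c2)

def second_minimum(xs):
    m = E
    for x in xs:
        m = op(m, (x, math.inf))  # enc x = (x, Infinity)
    return m[1]                   # dec (_, x2) = x2
```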
116
+
117
+ Observe that, for this monoid, there is no commonly-used fixed aggregator $F$ (e.g. sum, max, min, mean) for which there is a ’natural’ homomorphism from $F$ to $M$ .
118
+
119
+ In principle, there exists an $F$ from which it is possible to construct a homomorphism to $M$ : by [Zaheer et al., 2017] and [Xu et al., 2019a], for any $\left( {M,*,{e}_{ * }}\right)$ with $M \subseteq {\mathbb{Q}}^{n}$ , there exists a surjective monoid homomorphism $h$ from $\left( {{\mathbb{R}}^{n}, + ,0}\right)$ to $\left( {M,*,{e}_{ * }}\right)$ . But Wagstaff et al. [2019] show that this guarantee may require an $h$ that is highly discontinuous, and therefore not only hard to learn in-distribution, but fully misaligned with the assumptions of the universal approximation theorem. Further, as $\operatorname{dom}\left( h\right) = \langle g\left( M\right) \rangle$ , we are not learning a function whose domain is a bounded set, so we have little hope of generalising out-of-distribution. Indeed, we demonstrate in Section 3.1 that all common fixed aggregators fail to learn the ${2}^{\text{nd}}$ -minimum problem, both in and out of distribution. Similarly, Cohen-Karlik et al. [2020] show that sum-aggregators as implemented in [Zaheer et al., 2017] (i.e. maps in and out of the $\left( {{\mathbb{R}}^{n},+,0}\right)$ commutative monoid) require $\Omega \left( {\log {2}^{n}}\right)$ neurons to learn the parity function over sets of size $n$ . Intuitively, the crux of their proof is that the homomorphism the aggregator would have to learn from $\left( {{\mathbb{R}}^{n},+,0}\right)$ to the parity monoid is a periodic function with unbounded domain. Similar arguments hold for all aggregation tasks involving modular counting.
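+ The parity example can be made concrete (an illustrative sketch of the contrast the text draws, not code from the paper):

```python
from functools import reduce

# Parity over a multiset of bits is itself a tiny commutative monoid:
# ({0, 1}, xor, 0). Aggregating with it directly is trivial...
def parity(bits):
    return reduce(lambda a, b: a ^ b, bits, 0)

# ...whereas recovering parity from a sum-aggregator requires composing it
# with the periodic decode n -> n mod 2 over an *unbounded* domain - exactly
# the kind of homomorphism that is hard for a network to learn robustly.
def parity_via_sum(bits):
    return sum(bits) % 2
```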
120
+
121
+ ---
122
+
123
+ ${}^{1}$ where $\langle g\left( M\right) \rangle$ denotes the submonoid of $F$ generated by $g\left( M\right)$
124
+
125
+ ---
126
+
127
+ ### 2.3 Fully learnable recurrent aggregators and their limitations
128
+
129
+ We will now take a step back from homomorphisms, and try to discover a more flexible aggregator. An emerging narrative within deep learning is that of representations as types [Olah, 2015]. If we view the construction of neural networks as the construction of differentiable, parameterised pure functional programs, many of the design patterns commonly used in deep learning correspond to higher-order functions commonly used in functional programming (FP). This paradigm has proven valuable in recent times, embodied by deep learning frameworks such as JAX [Bradbury et al., 2018].
130
+
131
+ In FP, a simple way to aggregate a multiset of elements is to represent them as a list and fold over it: ${}^{2}$
132
+
133
+ ---
+
+ fold :: (a -> b -> b) -> b -> [a] -> b
+ fold f z []     = z
+ fold f z (x:xs) = f x (fold f z xs)
+
+ ---
144
+
145
+ And in some sense, a recurrent neural network (RNN) is simply a fold over a list, parameterised by a learnable accumulator `f` and a learnable initialisation element `z`.${}^{3}$
146
+
147
+ ---
+
+ rnnCell :: Learnable (Vec R h1 -> Vec R h2 -> Vec R h2)
+ initialState :: Learnable (Vec R h2)
+
+ rnn :: Learnable ([Vec R h1] -> Vec R h2)
+ rnn = fold rnnCell initialState
+
+ ---
152
+
153
+ Hence a natural way to construct a learnable aggregator over multisets could be to use an RNN - a 'learnable fold' - and to somehow ensure it is permutation-invariant.
154
+
155
+ Indeed, this approach has been used for permutation-invariant set aggregation, with Murphy et al. [2019] enforcing permutation-invariance by design by taking the average of an RNN applied to all permutations of its input, and Cohen-Karlik et al. [2020] regularising RNNs $f$ towards permutation-invariance by adding a pairwise regularisation term ${L}_{\text{swap}}\left( {{\mathbf{x}}_{1},{\mathbf{x}}_{2}}\right) = {\left( f\left( f\left( \mathbf{s},{\mathbf{x}}_{1}\right) ,{\mathbf{x}}_{2}\right) - f\left( f\left( \mathbf{s},{\mathbf{x}}_{2}\right) ,{\mathbf{x}}_{1}\right) \right) }^{2}$ (which we motivate through the lens of commutative monoids in Appendix B).
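+ The ${L}_{\text{swap}}$ penalty is simple to compute for any accumulator (a minimal sketch; the function-valued argument `f` stands in for an RNN cell):

```python
import numpy as np

def l_swap(f, s, x1, x2):
    """Pairwise permutation-invariance penalty for a recurrent accumulator f:
    penalise the squared difference between the states reached after
    consuming (x1, x2) and (x2, x1) from the same initial state s."""
    d = f(f(s, x1), x2) - f(f(s, x2), x1)
    return float(np.sum(d ** 2))
```

+ A commutative accumulator (e.g. addition) incurs zero penalty; an order-sensitive one does not.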
156
+
157
+ Recurrent aggregators have also occasionally seen use in GNNs [Hamilton et al., 2017, Xu et al., 2018], but they are scarcely used despite their competitive performance. We assume RNNs likely remain unpopular as a GNN aggregator due to their depth. Indeed, observe that an $N$ -layer GNN equipped with a recurrent aggregator has (worst-case) depth $O\left( {VN}\right)$ . By contrast, the same GNN equipped with a fixed aggregator has (worst-case) depth $O\left( N\right)$ . And as many graphs on which we want to deploy GNNs can have upwards of 100,000 nodes [Hu et al., 2020], the same problems of efficiency and maximum dependency length observed by Vaswani et al. [2017] when using RNNs for sequence transduction also hold when using RNNs for graph message aggregation.
158
+
159
+ ### 2.4 A compromise: fully learnable commutative monoids
160
+
161
+ So, if recurrent aggregators are too deep, is there any way to get a fully learnable aggregator? We've considered the fixed-aggregator approach, where we learn maps into and out of the carrier set of a pre-determined commutative monoid. We've considered the recurrent-aggregator approach, where we represent multisets as lists and implement aggregation as a learnable fold over lists. ${}^{4}$ But another way to represent multisets in FP is as a balanced binary tree, over which aggregation is implemented as a fold parameterised by a commutative monoid. So what if we implemented aggregation as a learnable fold over a balanced binary tree? Or in other words, what if, instead of learning maps into and out of some commutative monoid, we simply learn the commutative monoid itself?
162
+
163
+ ---
164
+
165
+ ${}^{2}$ Note that $a \rightarrow b \rightarrow b$ is an equivalent way (via currying) of specifying a function $a \times b \rightarrow b$ .
166
+
167
+ ${}^{3}$ Note that an RNN can also be viewed as a map to the carrier set of the monoid of endofunctions (i.e. functions from a set to itself - in this case, from b to b) under composition: see Appendix B for details.
168
+
169
+ ${}^{4}$ Alternatively, we can see this, as in Appendix B, as learning maps into and out of the carrier set of the monoid of endofunctions.
170
+
171
+ ---
172
+
173
+ Let's make precise what exactly we mean by 'learning a commutative monoid' for use in a GNN. Recall that a commutative monoid $\left( {M,\oplus ,{e}_{ \oplus }}\right)$ is defined by its carrier set $M$ , its binary operation $\oplus$ and its identity element ${e}_{ \oplus }$ . So given some learnable commutative, associative binary operator $\oplus$ (written `binOp :: Learnable (Vec R h -> Vec R h -> Vec R h)`), and some learnable identity element ${e}_{ \oplus }$ (written `identity :: Learnable (Vec R h)`), we can define a learnable commutative monoid over some learned embedding space (in other words, a subset of ${\mathbb{R}}^{h}$ ):
174
+
175
+ ---
+
+ type LearnableCommutativeMonoid = {- a subspace of -} Vec R h
+
+ instance CommutativeMonoid LearnableCommutativeMonoid where
+   e = identity
+   (<>) = binOp
+
+ ---
184
+
185
+ Thus, our aggregation function can be specified simply, as ${\bigoplus }_{x}x$ , or
186
+
187
+ ---
+
+ aggregate :: Learnable ([LearnableCommutativeMonoid] -> LearnableCommutativeMonoid)
+ aggregate = reduce (<>)
+
+ ---
194
+
195
+ Note that, here, the carrier set is implicit - when used in a GNN, we expect the message function (i.e. the producer of the elements to be aggregated) to learn a 'return type' representation whose members are elements of this implicit carrier set, and similarly for the 'input type' of the readout function.
196
+
197
+ Now, why do we care about this at all? Indeed, if we implement reduce as a fold, we're no better off than if we just used a recurrent aggregator. But consider the computation graph (or rather, computation binary tree) of such an aggregation ${x}_{1} \oplus \left( {{x}_{2} \oplus \left( {{x}_{3} \oplus {x}_{4}}\right) }\right)$ . By Tamari’s theorem [Tamari, 1962], the associativity of $\oplus$ means that the result of evaluating this computation tree is invariant under rotations of nodes in the tree. Therefore, in order to minimise the depth of the computation, we can rewrite our reduction as a balanced binary tree: $\left( {{x}_{1} \oplus {x}_{2}}\right) \oplus \left( {{x}_{3} \oplus {x}_{4}}\right)$ (see Appendix D). And by doing so, for $V$ elements to aggregate, we obtain a network with $O\left( V\right)$ applications of $\oplus$ and $O\left( {\log V}\right)$ depth - an exponential improvement over our $O\left( V\right)$ -depth recurrent aggregators.
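+ The balanced-tree reduction is easy to realise once the operator is associative. The following is an illustrative Python sketch (names are ours; padding odd levels with the identity is one of several valid choices):

```python
def tree_reduce(op, e, xs):
    """Reduce with a commutative, associative op over a balanced binary tree:
    O(len(xs)) applications of op, but only O(log len(xs)) depth."""
    if not xs:
        return e
    level = list(xs)
    while len(level) > 1:
        if len(level) % 2:            # pad odd levels with the identity
            level.append(e)
        # Combine adjacent pairs; each pass halves the number of values.
        level = [op(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

+ Each `while` iteration corresponds to one layer of the computation tree, so seven elements are reduced in three layers rather than six sequential steps.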
+
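To make the depth saving concrete, the balanced rewriting can be sketched as follows. This is an illustrative Python sketch of ours (not the paper's implementation): for an associative `op`, the result agrees with a sequential fold, while the tree of `op` applications has only logarithmic depth.

```python
from functools import reduce

def reduce_balanced(op, xs, identity):
    """Reduce xs with binary op as a balanced binary tree of O(log n) depth."""
    if not xs:
        return identity
    while len(xs) > 1:
        # Pair up adjacent elements; a leftover odd element passes through unchanged.
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

# For an associative, commutative op (here +), this matches a left fold.
xs = list(range(1, 201))
assert reduce_balanced(lambda a, b: a + b, xs, 0) == reduce(lambda a, b: a + b, xs, 0)
```

Each pass halves the number of elements, so $V$ elements need $O(V)$ applications of `op` arranged in $O(\log V)$ sequential rounds.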
+ ### 2.5 Commutative, associative binary operators for learnable commutative monoids
+
+ So, given a commutative, associative binary operator, we can get our learnable commutative monoid with $O\left( {\log V}\right)$ depth. But how do we construct such an operator in the first place? As with permutation-invariant RNNs, we have two options: either we construct an operator that strongly enforces the axioms of commutativity and associativity by construction, or we construct some arbitrary binary operator and weakly enforce the axioms through regularisation.
+
+ Strong enforcement. While some research has been conducted into learning algebraic structures with strongly enforced axioms [Abe et al., 2021, Martires, 2021], these approaches reduce to learning maps to and from a fixed aggregator. ${}^{5}$ We observe that, while we can strongly enforce commutativity in any binary operator $f\left( {x, y}\right)$ by symmetrising it to $g\left( {x, y}\right) = \frac{f\left( {x, y}\right) + f\left( {y, x}\right) }{2}$, we have found no analogous construction for associativity that does not sacrifice expressivity.
+
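As a small illustration of ours (not from the paper), symmetrising an arbitrary non-commutative binary map in this way yields an exactly commutative one:

```python
import numpy as np

def symmetrise(f):
    """Turn any binary operator f into a commutative one: g(x, y) = (f(x, y) + f(y, x)) / 2."""
    return lambda x, y: (f(x, y) + f(y, x)) / 2

# A deliberately non-commutative toy operator.
f = lambda x, y: 2.0 * x + y
g = symmetrise(f)

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert not np.allclose(f(x, y), f(y, x))   # f is not commutative
assert np.allclose(g(x, y), g(y, x))       # g is, by construction
```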
+ Example: Binary-GRU. So given this, and given the importance of gating [Tallec and Ollivier, 2018] in neural networks applied over long time horizons, we can construct a simple strongly commutative binary aggregator by symmetrising a GRU [Cho et al., 2014]:
+
+ ---
+
+ binaryGRU :: Learnable (Vec R h -> Vec R h -> Vec R h)
+
+ binaryGRU v1 v2 = do
+
+   g <- new gruCell (InputDim h) (HiddenDim h)
+
+   return ((g v1 v2 + g v2 v1) / 2)
+
+ ---
+
+ Weak enforcement. Alternatively, just as we saw with recurrent aggregators in Section 2.3, for a learnable binary operator $\oplus : \mathbb{R}^{n} \rightarrow \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ we could weakly enforce commutativity and associativity through regularisation losses $L_{\text{comm}}\left( \mathbf{x}, \mathbf{y} \right) = \lambda_{\text{comm}} \left\| \left( \mathbf{x} \oplus \mathbf{y} \right) - \left( \mathbf{y} \oplus \mathbf{x} \right) \right\|^{2}$ and $L_{\text{assoc}}\left( \mathbf{x}, \mathbf{y}, \mathbf{z} \right) = \lambda_{\text{assoc}} \left\| \left( \mathbf{x} \oplus \left( \mathbf{y} \oplus \mathbf{z} \right) \right) - \left( \left( \mathbf{x} \oplus \mathbf{y} \right) \oplus \mathbf{z} \right) \right\|^{2}$ (for implementation details, see Appendix E).
+
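These two losses can be sketched directly from their definitions. This is our illustrative NumPy sketch, not the paper's Appendix E implementation; both penalties vanish exactly when the operator satisfies the corresponding axiom on the sampled points.

```python
import numpy as np

def comm_loss(op, x, y, lam=1.0):
    """L_comm = lam * ||(x ⊕ y) - (y ⊕ x)||^2, penalising non-commutativity."""
    d = op(x, y) - op(y, x)
    return lam * float(np.sum(d * d))

def assoc_loss(op, x, y, z, lam=1.0):
    """L_assoc = lam * ||(x ⊕ (y ⊕ z)) - ((x ⊕ y) ⊕ z)||^2, penalising non-associativity."""
    d = op(x, op(y, z)) - op(op(x, y), z)
    return lam * float(np.sum(d * d))

# For an exactly commutative, associative op (vector addition), both losses are zero.
add = lambda a, b: a + b
x, y, z = np.ones(4), 2 * np.ones(4), 3 * np.ones(4)
assert comm_loss(add, x, y) == 0.0
assert assoc_loss(add, x, y, z) == 0.0
```

In training, these penalties would be evaluated on sampled triples from the batch and added to the task loss, scaled by the regularisation parameters.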
+ Example: Binary-GRU-Assoc. Now, by applying ${L}_{\text{assoc }}$ to Binary-GRU, we obtain a strongly commutative, weakly associative binary operator. ${}^{6}$
+
+ ---
+
+ ${}^{5}$ i.e. choosing an algebraic structure (e.g. the Abelian group $\left( {{\mathbb{R}}^{n},+,\mathbf{0}}\right)$ ) and learning maps between the model's latent space and that structure
+
+ ${}^{6}$ Note that we can instantiate this operator with different values of the regularisation parameter ${\lambda }_{\text{assoc }}$ (hereafter referred to as $\lambda$ ) by which we scale the associativity loss.
+
+ ---
+
+ ## 3 Assessing the utility of learnable commutative monoids
+
+ Now, we've seen three types of aggregators: fixed aggregators, recurrent aggregators and learnable commutative monoids. In order to explore the trade-offs between them in terms of expressivity, generalisation and efficiency, we conduct a range of experiments comparing the performance of state-of-the-art fixed aggregators (such as sum-aggregation [Zaheer et al., 2017], max-aggregation [Veličković et al., 2019] and PNA [Corso et al., 2020]), recurrent aggregators (specifically GRUs [Cho et al., 2014]) and learnable commutative monoid (LCM) aggregators (using the Binary-GRU and Binary-GRU-Assoc learnable operators as described in Section 2.5) on the following synthetic and real-world problems:
+
+ ${2}^{\text{nd }}$ -minimum. We test fixed aggregators, recurrent aggregators and learnable commutative monoids on the problem of finding the second-smallest element in a set of binary-encoded integers. As observed in Section 2.2, this task is a synthetic aggregation problem with an 'unusual' commutative monoid, in that it doesn't align well with common fixed aggregators. Therefore, we expect this task to be a standard problem for which learnable aggregators would outperform any commonly-used fixed aggregator, especially out-of-distribution.
+
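To see that ${2}^{\text{nd }}$-minimum really does form a commutative monoid (an 'unusual' one), here is a small sketch of ours, not taken from the paper: the carrier is pairs of the two smallest values seen so far, with $(\infty, \infty)$ as the identity.

```python
import math

IDENTITY = (math.inf, math.inf)  # identity element: "no values seen yet"

def op(a, b):
    """Combine two (min, 2nd-min) pairs: keep the two smallest of the four values.

    This operation is commutative and associative, with IDENTITY as its unit."""
    smallest = sorted(a + b)
    return (smallest[0], smallest[1])

def second_min(xs):
    """Aggregate a multiset by embedding each element x as the pair (x, inf)."""
    acc = IDENTITY
    for x in xs:
        acc = op(acc, (x, math.inf))
    return acc[1]

assert second_min([5, 3, 8, 1]) == 3
```

The point of the experiment is that no common fixed aggregator (sum, max, mean, ...) is this monoid up to a simple homomorphism, so a learnable aggregator must discover it.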
+ PNA synthetic benchmark. We then proceed to test the in-distribution performance of our aggregators on the synthetic dataset presented in [Corso et al., 2020]. This dataset consists of aggregator-heavy, classical graph problems that are mostly aligned with the aggregators used to construct PNA. Thus, we expect PNA (and the relevant fixed aggregators) to perform strongly here, potentially even out-of-distribution. But while our learnable aggregators don't necessarily have the inductive bias to approximate these monoids well over an unbounded domain, we expect them to perform competitively at learning the relevant monoids in-distribution.
+
+ PNA real-world benchmark. Finally, we test our aggregators on the real-world dataset presented in [Corso et al., 2020], consisting of chemical (ZINC and MolHIV) and computer vision (CIFAR10 and MNIST) datasets from the GNN benchmarks of Dwivedi et al. [2020] and Hu et al. [2020]. In contrast to the algorithmic tasks in the synthetic benchmark, we expect these real-world problems to contain 'unusual' target monoids: for both molecular and computer vision problems, it is likely that our GNN will learn complex representations whose most natural monoid is not the image of a simple homomorphism from any common fixed aggregator. Therefore, we expect fully learnable aggregators (GRU and LCMs) to outperform fixed aggregators on this benchmark.
+
+ Training details for all experiments are provided in Appendix F. Notably, for all uses of learnable aggregators, we randomly shuffle each batch of sequences before feeding it to the aggregator as a form of regularisation through data augmentation.
+
+ ### 3.1 ${2}^{\text{nd }}$ -minimum
+
+ For this experiment, we compared fixed (sum, max, PNA), recurrent (GRU) and LCM (Binary-GRU) aggregators on the synthetic ${2}^{nd}$ -minimum set aggregation problem. In order to evaluate the effects of regularisation towards algebraic axioms on the performance of LCM aggregators, we also tested Binary-GRU-Assoc, sweeping over values of the regularisation parameter $\lambda$ from ${10}^{0}$ to ${10}^{-7}$ .
+
+ #### 3.1.1 Experimental details
+
+ For training data, we used 65,536 multisets of integers $\sim U\left( {0,{255}}\right)$ of size $\sim U\left( {1,{16}}\right)$ . For validation data, we used 1,024 multisets of integers $\sim U\left( {0,{255}}\right)$ of size 32 . For evaluation data, we used 1,024 multisets of integers $\sim U\left( {0,{255}}\right)$ of size $l$ , for $l \in \left\lbrack {1,{200}}\right\rbrack$ .
+
+ We used a standard multiset-aggregation architecture $f\left( \mathbf{X}\right) \mathrel{\text{:=}} \sigma \left( {\psi \left( {{\bigoplus }_{\mathbf{x} \in \mathbf{X}}\phi \left( \mathbf{x}\right) }\right) }\right)$ for $\oplus$ the aggregator being tested, and $\phi$ and $\psi$ MLPs. $f$ takes as input a vector of 8-bit binary-encoded integers (as in [Yan et al.,2020]), and returns a binary-encoded integer in ${\left\lbrack 0,1\right\rbrack }^{8}$ . The full architecture (with details on integer embedding) is outlined in Appendix C.
+
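The input representation above can be sketched as follows. This is our illustrative version: the bit ordering (MSB first) and the rounding-based decoder are our assumptions, as the paper's exact embedding details are given in its Appendix C.

```python
import numpy as np

def encode_uint8(n):
    """8-bit binary encoding of an integer in [0, 255], MSB first (ordering is our choice)."""
    return np.array([(n >> i) & 1 for i in reversed(range(8))], dtype=np.float32)

def decode_bits(bits):
    """Round soft bits in [0, 1]^8 back to an integer."""
    return int(sum(int(round(float(b))) << i for i, b in enumerate(reversed(bits))))

assert decode_bits(encode_uint8(173)) == 173

# Multisets like those used for training: sizes ~ U(1, 16), values ~ U(0, 255).
rng = np.random.default_rng(0)
multiset = rng.integers(0, 256, size=rng.integers(1, 17))
```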
+ #### 3.1.2 Results and discussion
+
+ Summary. Recall that this problem was chosen for its comparatively unusual commutative monoid, which we do not expect aligns well with fixed aggregators. Indeed, we confirm this hypothesis: we see in Figure 1 that fixed aggregators fail to learn ${2}^{\text{nd }}$ -minimum in distribution, that recurrent aggregators learn ${2}^{\text{nd }}$ -minimum near-perfectly in distribution, generalising well out of distribution, and that ${LCM}$ aggregators learn ${2}^{\text{nd }}$ -minimum near-perfectly in distribution and are competitive with recurrent aggregators out of distribution, while achieving an exponential speedup over recurrent aggregators on large sets. Furthermore, we observe that regularising towards algebraic axioms improves the performance of LCM aggregators both in and out of distribution.
+
+ ![01963ef4-56c1-7813-a737-edae493b6b9e_6_313_410_1172_391_0.jpg](images/01963ef4-56c1-7813-a737-edae493b6b9e_6_313_410_1172_391_0.jpg)
+
+ Figure 1: Generalisation performance for fixed (max, sum, PNA), recurrent (GRU) and LCM (Binary-GRU) aggregators, along with the best-performing regularised LCM aggregator (Binary-GRU-Assoc with $\lambda = {10}^{0}$ ). The shaded region is bounded above and below by the maximum and minimum values across all runs. The vertical purple line denotes the maximum set size present in training data (16); the vertical blue lines denote powers of 2 (from ${2}^{1}$ to ${2}^{7}$ ). For detailed results, see Appendix G.
+
+ In-distribution performance. Examining Figure 1, observe that only the fully-learnable aggregators - GRU, Binary-GRU and Binary-GRU-Assoc - managed to learn ${2}^{\text{nd }}$ -minimum near-perfectly in distribution, with the next best performing aggregator being PNA. ${}^{7}$
+
+ Out-of-distribution performance (without regularisation). Out of distribution, observe that all learnable aggregators generalise near-perfectly up to size 32 (twice the size of the input). Beyond this point, while the performance of the recurrent aggregator decays slowly (reaching ${0.912} \pm {0.017}$ at size 200 ), the performance of the LCM quickly drops (reaching ${0.287} \pm {0.068}$ at size 200 ). Despite this, both learnable aggregators consistently outperform the fixed aggregators out-of-distribution. Furthermore, out of the fixed aggregators, we see that the sum-aggregator's performance plateaus extremely quickly, a result we may attribute to the domain of the learned homomorphism from the sum-aggregator being an unbounded set (see Section 2.2).
+
+ Efficiency. As hypothesised in Section 2.4, we see (in Appendix G, Figure 3) that LCMs are indeed exponentially faster than RNNs for large sets: for $n = {20}$ , Binary-GRU-Assoc takes ${48.2} \pm {0.4}$ seconds per epoch, and GRU takes ${46.6} \pm {0.5}$ seconds per epoch, while for $n = {200}$ , Binary-GRU-Assoc takes ${79.4} \pm {0.5}$ seconds per epoch, and GRU takes ${397.2} \pm {1.3}$ seconds per epoch.
+
+ Regularisation towards associativity. We show the results from the best-performing regularised LCM aggregator $\left( {\lambda = {10}^{0}}\right)$ in Figure 1 and Table 2. Although the Binary-GRU performs better than all fixed aggregators in its unregularised form, observe that the regularised Binary-GRU-Assoc outperforms its unregularised sibling both in and out of distribution - and indeed, achieves generalisation performance competitive with GRU. Furthermore, observe that the sudden performance drops experienced by Binary-GRU when the size of the set reaches a power of two (i.e. when the depth of the aggregation tree increases) are noticeably dampened in the case of Binary-GRU-Assoc, suggesting that regularisation towards associativity helps prevent overfitting to a particular maximum aggregation tree height. For interest, we present the full results of the regularisation parameter sweep in Figure 4 in Appendix G.
+
+ ### 3.2 PNA synthetic benchmark
+
+ For this experiment, we trained recurrent (GRU) and LCM (Binary-GRU, Binary-GRU-Assoc) aggregators on the synthetic benchmark from [Corso et al., 2020], comparing against the fixed-aggregator baselines presented there (for GATs [Veličković et al., 2018], GCNs [Kipf and Welling, 2017], GINs [Xu et al., 2019b] and MPNNs [Gilmer et al., 2017] with sum and max aggregators).
+
+ ---
+
+ ${}^{7}$ Note that, out of the fixed aggregators, PNA was the only one to achieve near-perfect accuracy on the training dataset, with a maximum training accuracy of around 0.997.
+
+ ---
+
+ #### 3.2.1 Experimental details
+
+ In the PNA paper [Corso et al., 2020], experiments testing fixed aggregators (sum, max, PNA) are conducted on a custom GNN architecture centred around an MPNN layer with dimension 16, split into four towers each with hidden dimension 4 . As we hypothesise that the low dimensionality of these towers could harm the expressivity of learnable aggregators, we test our learnable aggregators both in MPNNs of hidden dimension 16, with four towers of hidden dimension 16, and in MPNNs of hidden dimension 128, with one tower of hidden dimension 128.
+
+ #### 3.2.2 Results and discussion
+
+ Summary. Recall that this dataset consists of aggregator-heavy classical graph problems ${}^{8}$ that are mostly aligned with the aggregators used to construct PNA. So, as expected, we see in Table 1 that PNA outperforms all other aggregators tested on the dataset in-distribution. But observe that, on these problems, our asymptotically more efficient LCMs are competitive with and sometimes beat GRUs - and indeed, on the node-based problems in the dataset, our LCMs are as strong as PNA.
+
+ In Appendix G, we observe the surprising result that LCMs are more stable than PNA out-of-distribution (OOD), and that regularising LCMs towards associativity improves OOD performance at the cost of impairing performance in distribution. We also discuss the effects of increasing dimensionality on fixed aggregator performance, through the lens of commutative monoid homomorphisms.
+
+ In-distribution performance. Observe in Table 1 that, while PNA beats all other aggregators tested, our learnable aggregators perform competitively in-distribution, with all learnable aggregators beating all single-aggregator (i.e. non-PNA) architectures. Interestingly, our Binary-GRUs perform better than the corresponding GRUs: perhaps their inductive bias towards commutativity helps us learn in-distribution.
+
+ Per-task performance. We present the per-task performance of all 128-dimensional aggregators (together with the fixed-aggregator baselines) in Table 1. Observe that, in fact, Binary-GRU-Assoc outperforms Binary-GRU in all tasks apart from the graph Laplacian.
+
+ Furthermore, while learnable aggregators do not perform as strongly as fixed aggregators on whole-graph tasks (connectedness, diameter and spectral radius), they perform as well as or better than fixed aggregators on node-based tasks (shortest path, eccentricity and graph Laplacian). This may be because the benchmark implementation for whole-graph tasks uses a sum-aggregator over the readout values: it is likely difficult to learn a homomorphism from the sum aggregator to the complex latent-space monoid learned by the LCM, and perhaps fixed aggregators provide an inductive bias towards learning representations for which it is easier to map to and from the sum-aggregation monoid.
+
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Avg score</td><td colspan="3">Node tasks</td><td colspan="3">Graph tasks</td></tr><tr><td>SSSP</td><td>Ecc</td><td>Lap feat</td><td>Conn</td><td>Diam</td><td>Spec rad</td></tr><tr><td>GCN</td><td>-2.05</td><td>-2.16</td><td>-1.89</td><td>-1.60</td><td>-1.69</td><td>-2.14</td><td>-2.79</td></tr><tr><td>GAT</td><td>-2.26</td><td>-2.34</td><td>-2.09</td><td>-1.60</td><td>-2.44</td><td>-2.40</td><td>-2.70</td></tr><tr><td>GIN</td><td>-1.99</td><td>-2.00</td><td>-1.90</td><td>-1.60</td><td>-1.61</td><td>-2.17</td><td>-2.66</td></tr><tr><td>MPNN (sum)</td><td>-2.50</td><td>-2.33</td><td>-2.26</td><td>-2.37</td><td>-1.82</td><td>-2.69</td><td>-3.52</td></tr><tr><td>MPNN (max)</td><td>-2.53</td><td>-2.36</td><td>-2.16</td><td>-2.59</td><td>-2.54</td><td>-2.67</td><td>-2.87</td></tr><tr><td>PNA-16</td><td>-3.04</td><td>-2.99</td><td>-2.81</td><td>-2.83</td><td>-2.91</td><td>-2.98</td><td>-3.71</td></tr><tr><td>PNA-128</td><td>-3.09</td><td>-2.94</td><td>-2.88</td><td>-3.82</td><td>-2.42</td><td>-3.00</td><td>-3.48</td></tr><tr><td>GRU</td><td>-2.91</td><td>-2.84</td><td>-2.71</td><td>-3.73</td><td>-2.20</td><td>-2.88</td><td>-3.11</td></tr><tr><td>Binary-GRU</td><td>-3.00</td><td>-2.85</td><td>-2.77</td><td>-3.87</td><td>-2.34</td><td>-2.88</td><td>-3.29</td></tr><tr><td>Binary-GRU-Assoc</td><td>-2.95</td><td>-2.99</td><td>-2.88</td><td>-2.92</td><td>-2.62</td><td>-2.92</td><td>-3.37</td></tr></table>
+
+ Table 1: Mean ${\log }_{10}\left( {MSE}\right)$ on the PNA test dataset
+
+ ### 3.3 PNA real-world benchmark
+
+ For this experiment, we trained recurrent (GRU) and LCM (Binary-GRU) aggregators on the real-world benchmark from Corso et al. [2020], containing two molecular graph property prediction datasets (ZINC and MolHIV) and two superpixel graph classification datasets (CIFAR10 and MNIST). Note that, due to limitations on compute resources, we were not able to perform a regularisation parameter sweep to test Binary-GRU-Assoc. The GNN architecture used here is identical to that in [Corso et al., 2020], except that, for learnable aggregators, all MPNN towers have the same dimensionality as the MPNN itself (i.e. we do not divide the towers).
+
+ ---
+
+ ${}^{8}$ three node-based algorithmic tasks (single-source shortest paths, eccentricity and computing the Laplacian of node feature vectors) and three graph-based algorithmic tasks (connectedness, diameter and spectral radius)
+
+ ---
+
+ #### 3.3.1 Results and discussion
+
+ Summary. Recall that the real-world benchmark has complex problems that do not necessarily align with common fixed aggregators. We observe in Figure 2 that, while PNA in general outperforms all other aggregators on problems involving property prediction from small molecular graphs, the more expressive GRU substantially outperforms PNA for the (more discrete) task of image classification. We notice also that the (asymptotically efficient) Binary-GRU LCM provides a good trade-off between these two aggregators, being the second-best-performing aggregator for all but two problems. Finally, we see that learnable aggregators might be particularly powerful on problems involving graphs with edge features.
+
+ <table><tr><td/><td/><td colspan="4">ZINC (MAE)</td><td colspan="4">CIFAR10 (Acc)</td><td colspan="4">MNIST (Acc)</td><td colspan="2">MolHIV (%ROC-AUC)</td></tr><tr><td/><td>Model</td><td>No edge features</td><td>std</td><td>Edge features</td><td>std</td><td>No edge features</td><td>std</td><td>Edge features</td><td>std</td><td>No edge features</td><td>std</td><td>Edge features</td><td>std</td><td>No edge features</td><td>std</td></tr><tr><td rowspan="7">Dwivedi et al, Xu et al</td><td>MLP</td><td>0.710</td><td>0.001</td><td/><td/><td>56.01</td><td>0.90</td><td/><td/><td>94.46</td><td>0.28</td><td/><td/><td/><td/></tr><tr><td>GCN</td><td>0.469</td><td>0.002</td><td/><td/><td>54.46</td><td>0.10</td><td/><td/><td>89.99</td><td>0.15</td><td/><td/><td>76.06</td><td>0.97</td></tr><tr><td>GIN</td><td>0.408</td><td>0.008</td><td/><td/><td>53.28</td><td>3.70</td><td/><td/><td>93.96</td><td>1.30</td><td/><td/><td>75.58</td><td>1.40</td></tr><tr><td>DiffPool</td><td>0.466</td><td>0.006</td><td/><td/><td>57.99</td><td>0.45</td><td/><td/><td>95.02</td><td>0.42</td><td/><td/><td/><td/></tr><tr><td>GAT</td><td>0.463</td><td>0.002</td><td/><td/><td>65.48</td><td>0.33</td><td/><td/><td>95.62</td><td>0.13</td><td/><td/><td/><td/></tr><tr><td>MoNet</td><td>0.407</td><td>0.007</td><td/><td/><td>53.42</td><td>0.43</td><td/><td/><td>90.36</td><td>0.47</td><td/><td/><td/><td/></tr><tr><td>GatedGCN</td><td>0.422</td><td>0.006</td><td>0.363</td><td>0.009</td><td>69.19</td><td>0.28</td><td>69.37</td><td>0.48</td><td>97.37</td><td>0.06</td><td>97.47</td><td>0.13</td><td/><td/></tr><tr><td rowspan="4">Corso et al</td><td>MPNN (sum)</td><td>0.381</td><td>0.005</td><td>0.288</td><td>0.002</td><td>65.39</td><td>0.47</td><td>65.61</td><td>0.30</td><td>96.72</td><td>0.17</td><td>96.90</td><td>0.15</td><td/><td/></tr><tr><td>MPNN (max)</td><td>0.468</td><td>0.002</td><td>0.328</td><td>0.008</td><td>69.70</td><td>0.55</td><td>70.86</td><td>0.27</td><td>97.37</td><td>0.11</td><td>97.82</td><td>0.08</td><td/><td/></tr><tr><td>PNA (no scalers)</td><td>0.413</td><td>0.006</td><td>0.247</td><td>0.036</td><td>70.46</td><td>0.44</td><td>70.47</td><td>0.72</td><td>97.41</td><td>0.16</td><td>97.94</td><td>0.12</td><td>78.76</td><td>1.04</td></tr><tr><td>PNA</td><td>0.320</td><td>0.032</td><td>0.188</td><td>0.004</td><td>70.21</td><td>0.15</td><td>70.35</td><td>0.63</td><td>97.19</td><td>0.08</td><td>97.69</td><td>0.22</td><td>79.05</td><td>1.32</td></tr><tr><td rowspan="2">Ours</td><td>Binary-GRU</td><td>0.340</td><td>0.003</td><td>0.175</td><td>0.003</td><td>69.61</td><td>0.18</td><td>71.86</td><td>0.26</td><td>97.79</td><td>0.20</td><td>98.11</td><td>0.07</td><td>77.37</td><td>1.11</td></tr><tr><td>GRU</td><td>0.342</td><td>0.004</td><td>0.171</td><td>0.006</td><td>72.03</td><td>1.06</td><td>74.44</td><td>0.52</td><td>98.15</td><td>0.04</td><td>98.41</td><td>0.10</td><td>76.04</td><td>1.01</td></tr></table>
+
+ Figure 2: Results of learnable aggregators on the PNA real-world dataset, in comparison with those analysed by Corso et al. [2020]. Best results in bold-face, second-best in underline.
+
+ Observe that PNA is the strongest aggregator over both the ZINC dataset without edge features and the HIV dataset - indeed, due to the continuous nature of the properties we want to estimate in these datasets, it seems likely that the 'natural' monoids for aggregation over graphs in these datasets would align well with fixed aggregators. By contrast, we observe that GRU-aggregators are the strongest when testing on image data, likely as their expressivity lets them easily learn a complex, perhaps more discrete aggregation function. And while Binary-GRU does not do quite as well as GRU here, in all but one case it outperforms PNA on this problem. Finally, observe that, if we add edge features to ZINC, GRU outperforms PNA - and comparing results on the CIFAR10 dataset with and without edge features, the average improvement for fixed aggregators when adding edge features is 0.34, whereas that for learnable aggregators is 2.33. Learnable aggregators may be particularly strong on tasks with edge features, as making full use of them tends to require learning a more complex aggregation function.
+
+ ## 4 Conclusions
+
+ In this work we have conducted a thorough study of aggregation functions within graph neural networks (GNNs), demonstrating both theoretically and empirically that many tasks of practical interest rely on a nontrivial integration of neighbourhoods (i.e. a nontrivial commutative monoid). This motivates the use of fully-learnable aggregation functions, but prior proposals based on RNNs had several shortcomings in terms of efficiency. Accordingly, we propose learnable commutative monoid (LCM) aggregators, which trade off the flexibility of RNNs against the efficiency of fixed aggregators, producing a simple, yet empirically powerful, GNN aggregator with only $O\left( {\log V}\right)$ depth.
+
+ ## References
+
+ Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. apr 2021. doi: 10.48550/arxiv.2104.13478. URL http://arxiv.org/abs/2104.13478.
+
+ Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural Execution of Graph Algorithms. 2019. URL http://arxiv.org/abs/1910.10593.
+
+ Oliver Richter and Roger Wattenhofer. Normalized attention without probability cage. arXiv preprint arXiv:2005.09561, 2020.
+
+ Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Velickovic. Principal Neighbourhood Aggregation for Graph Nets. Advances in Neural Information Processing Systems, 2020-December, apr 2020. ISSN 10495258. URL https://arxiv.org/abs/2004.05718v5.
+
+ Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What Can Neural Networks Reason About? may 2019a. URL https://arxiv.org/abs/1905.13211v4.
+
+ Keyulu Xu, Mozhi Zhang, Jingling Li, Simon S. Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks. sep 2020. URL https://arxiv.org/abs/2009.11848v5.
+
+ Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017.
+
+ Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=SJU4ayYgl.
+
+ Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJXMpikCZ.
+
+ Andrew Dudzik and Petar Veličković. Graph Neural Networks are Dynamic Programmers. pages 1-9, 2022. URL http://arxiv.org/abs/2203.15544.
+
+ Giovanni Pellegrini, Alessandro Tibo, Paolo Frasconi, Andrea Passerini, and Manfred Jaeger. Learning Aggregation Functions. pages 2892-2898, dec 2020. doi: 10.24963/ijcai.2021/398. URL https://arxiv.org/abs/2012.08482v2.
+
+ Guohao Li, Chenxin Xiong, Ali Thabet, and Bernard Ghanem. DeeperGCN: All You Need to Train Deeper GCNs. jun 2020. doi: 10.48550/arxiv.2006.07739. URL https://arxiv.org/abs/2006.07739v1.
+
+ Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter Battaglia. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, pages 8459-8468. PMLR, 2020.
+
+ Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R. Salakhutdinov, and Alexander J. Smola. Deep Sets. Advances in Neural Information Processing Systems, 30, 2017.
+
+ Edward Wagstaff, Fabian B. Fuchs, Martin Engelcke, Ingmar Posner, and Michael Osborne. On the Limitations of Representing Functions on Sets. 36th International Conference on Machine Learning, ICML 2019, 2019-June:11285-11298, jan 2019. URL https://arxiv.org/abs/1901.09006v2.
+
+ Edo Cohen-Karlik, Avichai Ben David, and Amir Globerson. Regularizing Towards Permutation Invariance in Recurrent Models. Advances in Neural Information Processing Systems, 2020-December, oct 2020. ISSN 10495258. URL https://arxiv.org/abs/2010.13055v1.
+
+ Christopher Olah. Neural Networks, Types, and Functional Programming, 2015. URL https://research.google/pubs/pub45504/.
+
+ James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
+
+ Ryan L. Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BJluy2RcFm.
+
+ Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken Ichi Kawarabayashi, and Stefanie Jegelka. Representation Learning on Graphs with Jumping Knowledge Networks. 35th International Conference on Machine Learning, ICML 2018, 12:8676-8685, jun 2018. doi: 10.48550/arxiv.1806.03536. URL https://arxiv.org/abs/1806.03536v2.
+
+ Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 33:22118-22133, 2020.
+
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 2017-December:5999-6009, 2017. ISSN 10495258.
+
+ Dov Tamari. The algebra of bracketings and their enumeration. Nieuw Arch. Wisk. (3), 10:131-146, 1962. ISSN 0028-9825.
+
+ Kenshin Abe, Takanori Maehara, and Issei Sato. Abelian Neural Networks. feb 2021. URL https://arxiv.org/abs/2102.12232v1.
+
+ Pedro Zuidberg Dos Martires. Neural Semirings. CEUR Workshop Proceedings, pages 94-103, 2021. URL http://ceur-ws.org/Vol-2986/paper7.pdf.
+
+ Corentin Tallec and Yann Ollivier. Can recurrent neural networks warp time? 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings, mar 2018. doi: 10.48550/arxiv.1804.11188. URL https://arxiv.org/abs/1804.11188v1.
+
+ Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. Proceedings of SSST 2014 - 8th Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, sep 2014. doi: 10.48550/arxiv.1409.1259. URL https://arxiv.org/abs/1409.1259v2.
+
+ Vijay Prakash Dwivedi, Chaitanya K. Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking Graph Neural Networks. mar 2020. doi: 10.48550/arxiv.2003.00982. URL https://arxiv.org/abs/2003.00982v4.
+
+ Yujun Yan, Kevin Swersky, Danai Koutra, Parthasarathy Ranganathan, and Milad Hashemi. Neural execution engines: Learning to execute subroutines. Advances in Neural Information Processing Systems, 2020-December(NeurIPS), 2020. ISSN 10495258.
+
+ Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019b. URL https://openreview.net/forum?id=ryGs6iA5Km.
+
+ Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural Message Passing for Quantum Chemistry. 34th International Conference on Machine Learning, ICML 2017, 3:2053-2070, apr 2017. doi: 10.48550/arxiv.1704.01212. URL https://arxiv.org/abs/1704.01212v2.
+
+ ## A Proof of Proposition 1
+
+ Proposition 1. Let $\left( M, *, e_{*} \right)$ and $\left( F, \oplus, e_{\oplus} \right)$ be commutative monoids. Then for functions $g : M \rightarrow F$ and $h : F \rightarrow M$, $\mathop{*}_{x \in X} x = h\left( \bigoplus_{x \in X} g(x) \right)$ for all multisets $X$ of $M$, if and only if $h$ is both a left inverse of $g$ and a surjective monoid homomorphism from $\langle g\left( M\right) \rangle \subseteq F$ ${}^{9}$ to $M$.
+
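As a concrete instance of the proposition (ours, for illustration): product-aggregation over the positive reals can be realised through the sum monoid $\left( \mathbb{R}, +, 0 \right)$, with $g = \log$ and $h = \exp$. Here $h$ is a left inverse of $g$ and a monoid homomorphism back to $\left( \mathbb{R}_{>0}, \times, 1 \right)$, so $h\left( \bigoplus_{x \in X} g(x) \right) = \prod_{x \in X} x$.

```python
import math
from functools import reduce

def g(x):
    """Embed a positive real into the sum monoid (R, +, 0)."""
    return math.log(x)

def h(y):
    """Homomorphism back to (R_{>0}, *, 1); a left inverse of g."""
    return math.exp(y)

X = [2.0, 5.0, 0.5]
lhs = reduce(lambda a, b: a * b, X, 1.0)   # *_{x in X} x
rhs = h(sum(g(x) for x in X))              # h(⊕_{x in X} g(x))
assert abs(lhs - rhs) < 1e-9
```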
+ Proof. We prove each direction in turn.
+
+ $\left( \rightarrow \right)$ Suppose ${ * }_{x \in X}x = h\left( {{\bigoplus }_{x \in X}g\left( x\right) }\right)$ .
+
+ When $X = \{ x\}$, we have $h\left( {g\left( x\right) }\right) = x$ trivially, so $h$ must be a left inverse of $g$ (and is therefore surjective).
+
+ ---
+
+ ${}^{9}$ where $\langle g\left( M\right) \rangle$ denotes the submonoid of $F$ generated by $g\left( M\right)$
+
+ ---
+
+ Now for $x, y \in \langle g\left( M\right) \rangle$ , we want to show that $h\left( {x \oplus y}\right) = h\left( x\right) * h\left( y\right)$ and that $h\left( {e}_{ \oplus }\right) = {e}_{ * }$ .
+
+ To show the former, observe that $x = {\bigoplus }_{a \in A}g\left( a\right)$ and $y = {\bigoplus }_{b \in B}g\left( b\right)$ for $A, B$ multisets of $M$ .
410
+
411
+ We now have that
412
+
413
+ $$
414
+ h\left( {x \oplus y}\right) = h\left( {\left( {{\bigoplus }_{a \in A}g\left( a\right) }\right) \oplus \left( {{\bigoplus }_{b \in B}g\left( b\right) }\right) }\right)
415
+ $$
416
+
417
+ $$
418
+ = h\left( {{\bigoplus }_{x \in A \uplus B}g\left( x\right) }\right)
419
+ $$
420
+
421
+ $$
422
+ = \underset{x \in A \uplus B}{ * }x
423
+ $$
424
+
425
+ $$
426
+ = \left( {\underset{a \in A}{ * }a}\right) * \left( {\underset{b \in B}{ * }b}\right)
427
+ $$
428
+
429
+ $$
430
+ = h\left( {{\bigoplus }_{a \in A}g\left( a\right) }\right) * \left( {h\left( {{\bigoplus }_{b \in B}g\left( b\right) }\right) }\right)
431
+ $$
432
+
433
+ $$
434
+ = h\left( x\right) * h\left( y\right)
435
+ $$
436
+
437
+ and we're done.
438
+
439
+ To show the latter, observe that $h\left( {e}_{ \oplus }\right) * h\left( f\right) = h\left( {{e}_{ \oplus } \oplus f}\right) = h\left( f\right)$ for all $f \in F$. As $h$ is surjective, we have that $h\left( F\right) = M$, so $h\left( {e}_{ \oplus }\right) * m = m * h\left( {e}_{ \oplus }\right) = m$ for all $m \in M$, and $h\left( {e}_{ \oplus }\right) = {e}_{ * }$ as desired.
440
+
441
+ $\left( \leftarrow \right)$ Suppose $h$ is a left inverse of $g$ and a surjective monoid homomorphism from $\langle g\left( M\right) \rangle$ to $M$ . Then
442
+
443
+ $$
444
+ h\left( {{\bigoplus }_{x \in X}g\left( x\right) }\right) = h\left( {{\bigoplus }_{i = 1}^{n}g\left( {x}_{i}\right) }\right)
445
+ $$
446
+
447
+ $$
448
+ = h\left( {g\left( {x}_{1}\right) \oplus {\bigoplus }_{i = 2}^{n}g\left( {x}_{i}\right) }\right)
449
+ $$
450
+
451
+ $$
452
+ = h\left( {g\left( {x}_{1}\right) }\right) * h\left( {{\bigoplus }_{i = 2}^{n}g\left( {x}_{i}\right) }\right)
453
+ $$
454
+
455
+ $$
456
+ = {x}_{1} * h\left( {{\bigoplus }_{i = 2}^{n}g\left( {x}_{i}\right) }\right)
457
+ $$
458
+
459
+ $$
460
+ = \ldots
461
+ $$
462
+
463
+ $$
464
+ = \underset{i = 1}{\overset{n}{ * }}{x}_{i}
465
+ $$
466
+
467
+ $$
468
+ = \underset{x \in X}{ * }x
469
+ $$
470
+
471
+ and we're done.
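To make Proposition 1 concrete, consider recovering the parity monoid (bits under XOR) from the sum monoid over the integers: taking $g$ to be the inclusion of $\{ 0,1\}$ into $\mathbb{Z}$ and $h\left( s\right) = s \bmod 2$, $h$ is a left inverse of $g$ and a surjective monoid homomorphism from $\langle g\left( M\right) \rangle = \mathbb{N}$ to the parity monoid. A quick Python sketch (ours, not from the paper):

```python
from functools import reduce

# Target commutative monoid M: parity bits under XOR, with identity 0.
# Fixed aggregator F: the integers under +, with identity 0.
g = lambda x: x          # inclusion of {0, 1} into the integers
h = lambda s: s % 2      # left inverse of g; monoid hom from <g(M)> to M

def aggregate_via_F(bits):
    # By Proposition 1: XOR over the multiset == h(sum of g(x))
    return h(sum(g(x) for x in bits))

def aggregate_in_M(bits):
    return reduce(lambda a, b: a ^ b, bits, 0)

for bits in ([], [1], [1, 1, 0], [1, 0, 1, 1]):
    assert aggregate_via_F(bits) == aggregate_in_M(bits)
```

Note that, viewed as a map on the reals, this $h$ is precisely the kind of periodic homomorphism that Cohen-Karlik et al. [2020] argue makes parity-style tasks hard for sum-aggregators.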
472
+
473
+ ## B Motivating the conditions for permutation-invariance in RNNs
474
+
475
+ An alternative way to motivate the regularisation loss of Cohen-Karlik et al. [2020], through the lens of monoids, is to frame the recurrent aggregator as a monoid, and identify the conditions required for this monoid to be commutative.
476
+
477
+ Keeping in mind that 'RNNs are just learnable folds', we notice that endofunctions form a monoid under composition:
478
+
479
+ ---
480
+
481
+ instance Monoid (a -> a) where
+   e    = id
+   (<>) = (.)
486
+
487
+ ---
488
+
489
+ and observing that, for instance,
490
+
491
+ ---
492
+
493
+ fold f z [x1, x2, x3]
+   = f x1 (f x2 (f x3 z))
+   = (f x1 . f x2 . f x3) z
+   = ($ z) (f x1 . f x2 . f x3)
+   = ($ z) (reduce (map f [x1, x2, x3]))
502
+
503
+ ---
504
+
505
+ we can rewrite fold as an aggregation over the composition monoid:
506
+
507
+ ---
508
+
509
+ fold :: (a -> b -> b) -> b -> [a] -> b
+ fold f z = dec . reduce . map enc
+   where
+     enc x = f x
+     dec g = g z
518
+
519
+ ---
520
+
521
+ Now, applying this to our recurrent aggregator, we have
522
+
523
+ ---
524
+
525
+ rnn :: Learnable ([Vec R h1] -> Vec R h2)
+ rnn = dec . reduce . map enc
+   where
+     enc x = rnnCell x
+     dec g = g initialState
534
+
535
+ ---
536
+
537
+ Observe that, for rnn, the carrier set of the composition (sub)monoid consists of the functions rnnCell x for inputs x to the aggregation function. So, in order to enforce that this monoid is commutative, we must simply ensure that
538
+
539
+ ---
540
+
541
+ f <> g == g <> f
+ ⇒ (rnnCell x1) . (rnnCell x2) == (rnnCell x2) . (rnnCell x1)
+ ⇒ rnnCell x1 (rnnCell x2 h) == rnnCell x2 (rnnCell x1 h)
546
+
547
+ ---
548
+
549
+ for all inputs x1, x2 and all hidden states h.
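As a quick numerical illustration (in Python, with small illustrative linear cells of our own choosing), a cell of the form h' = h + Wx satisfies this condition exactly, while a cell h' = Ah + Wx with A distinct from the identity generally does not:

```python
# 2-dimensional states; W and A are fixed illustrative matrices.
W = [[1.0, 2.0], [0.5, -1.0]]
A = [[0.0, 1.0], [1.0, 0.0]]   # a swap, deliberately != identity

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

def additive_cell(x, h):
    return vadd(h, matvec(W, x))             # h' = h + W x: cells commute

def swap_cell(x, h):
    return vadd(matvec(A, h), matvec(W, x))  # h' = A h + W x: they don't

x1, x2, h0 = [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]

def gap(cell):
    a = cell(x1, cell(x2, h0))
    b = cell(x2, cell(x1, h0))
    return max(abs(p - q) for p, q in zip(a, b))

assert gap(additive_cell) == 0.0   # order of x1, x2 is irrelevant
assert gap(swap_cell) > 0.0        # order of x1, x2 changes the result
```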
550
+
551
+ ## C Architecture used for ${2}^{\text{nd }}$ -minimum benchmark
552
+
553
+ h = 128

+ ---

+ ofMlp :: Learnable (Vec R h -> Vec R h)
+ ofMlp = do
+   dense <- new ofLinearLayer (In h) (Out h)
+   return (gelu . dense)
+
+ intEmbedding :: Learnable (Vec Bool 8 -> Vec R h)
+ intEmbedding = toLearnable $ \int -> do
+   oneVecs  <- newList (Length 8) (Of (learnableParameter (Dim h)))
+   zeroVecs <- newList (Length 8) (Of (learnableParameter (Dim h)))
+   return $ sum  -- one h-dimensional embedding per bit, summed
+     [one * i + zero * (1 - i)
+     | (i, one, zero) <- zip3 int oneVecs zeroVecs]
+
+ enc :: Learnable (Vec Bool 8 -> Vec R h)
+ enc = do
+   mlp <- new ofMlp
+   return (mlp . intEmbedding)
+
+ agg :: Learnable ([Vec R h] -> Vec R h)
+ -- Implementation-dependent
+
+ dec :: Learnable (Vec R h -> Vec R 8)
+ dec = do
+   mlp   <- new ofMlp
+   dense <- new ofLinearLayer (In h) (Out 8)
+   return (sigmoid . dense . mlp)
+
+ net :: Learnable ([Vec Bool 8] -> Vec R 8)
+ net = dec . agg . map enc

+ ---
606
+
607
+ ## D Implementing binary tree aggregation for learnable commutative monoids
608
+
609
+ More precisely, given a learnable commutative monoid operator <> and a function toBalancedTree which takes a list of elements and returns a balanced Tree whose leaves contain these elements, we aggregate in the following way:
610
+
611
+ ---

+ data Tree a = Lf a | Nd (Tree a) (Tree a)
+
+ toBalancedTree :: [a] -> Tree a
+
+ fold :: (a -> a -> a) -> Tree a -> a
+ fold f = \case
+   Nd l r -> f (fold f l) (fold f r)
+   Lf m   -> m
+
+ aggregate :: Learnable ([LearnableMonoid] -> LearnableMonoid)
+ aggregate = fold (<>) . toBalancedTree

+ ---
630
+
631
+ ## E Implementing regularisation losses for learnable commutative monoids
632
+
633
+ Observe that, for any learnable binary operator
634
+
635
+ ---
636
+
637
+ (<>) :: Learnable (Vec R h -> Vec R h -> Vec R h)
638
+
639
+ ---
640
+
641
+ aggregating over a tree of messages (of type Tree (Vec R h)), we can construct regularisation losses that penalise the operator for violating commutativity and associativity each time it is applied:
642
+
643
+ ---

+ -- Computes getLossesAtNode at every node in the tree,
+ -- returning a list of the results.
+ accumLosses :: (Tree (Vec R h) -> [R]) -> Tree (Vec R h) -> [R]
+ accumLosses getLossesAtNode = \case
+   Nd a b ->
+     getLossesAtNode (Nd a b) ++
+     (accumLosses getLossesAtNode a ++ accumLosses getLossesAtNode b)
+   Lf _ -> []
+
+ commLoss :: Tree (Vec R h) -> R
+ commLoss = mean . accumLosses getLossesAtNode
+   where getLossesAtNode = \case
+           Nd a b -> [|(aggregate a <> aggregate b) - (aggregate b <> aggregate a)|**2]
+           Lf _   -> []
+
+ assocLoss :: Tree (Vec R h) -> R
+ assocLoss = mean . accumLosses getLossesAtNode
+   where
+     loss a b c = |((a <> b) <> c) - (a <> (b <> c))|**2
+     getLossesAtNode = \case
+       Nd (Nd a b) (Nd c d) ->
+         [loss (aggregate a) (aggregate b) (aggregate c),
+          loss (aggregate b) (aggregate c) (aggregate d)]
+       Nd (Nd a b) (Lf c) ->
+         [loss (aggregate a) (aggregate b) c]
+       _ -> []
+
+ aggregateWithLoss :: Learnable ([LearnableMonoid] -> LearnableMonoid)
+ aggregateWithLoss xs = aggregate tree
+   with extraLosses = [commLoss tree, assocLoss tree]
+   where tree = toBalancedTree xs

+ ---
702
+
703
+ ## F Training details for experiments
704
+
705
+ On every experiment, for each model, we performed 3 training runs with different seeds; for each run we used a validation set to choose the highest-performing checkpoint for evaluation.
706
+
707
+ ${2}^{\text{nd }}$ -minimum. We trained each aggregator with the Adam optimiser for 1,000 epochs, with batch size 32 and learning rate ${10}^{-4}$.
708
+
709
+ PNA synthetic benchmark. We trained each aggregator for 1,000 epochs. To ensure convergence, 16-dimensional models were trained with a learning rate of ${10}^{-3}$ as in Corso et al. [2020], and 128-dimensional models were trained with a learning rate of ${10}^{-4}$ . All other hyperparameters were as in Corso et al. [2020].
710
+
711
+ PNA real-world benchmark. All hyperparameters (including training time) are as in [Corso et al., 2020].
712
+
713
+ ## G Detailed results for the ${2}^{\text{nd }}$ -minimum benchmark
714
+
715
+ We present more detailed results for the ${2}^{\text{nd }}$ -minimum benchmark below:
716
+
717
+ - Table 2 contains in-distribution and out-of-distribution results for all aggregators tested.
718
+
719
+ - Figure 3 presents network efficiency against set size for all aggregators tested.
720
+
721
+ - Figure 4 presents the full results of the regularisation parameter sweep for Binary-GRU-Assoc.
722
+
723
+ As a side note, when training the non-regularised Binary-GRU aggregators, we observed that while associativity regularisation loss increased initially, it started decreasing as the GNN's training accuracy began to plateau. This potentially hints at the model's learning trajectory: one might hypothesise that the point at which the loss decreases is the point at which the model shifts from memorisation to learning a parsimonious algorithm that generalises.
724
+
725
+ <table><tr><td rowspan="2">Type</td><td rowspan="2">Aggregator</td><td>ID accuracy</td><td colspan="2">OOD accuracy</td></tr><tr><td>$n \in \left\lbrack {1,{16}}\right\rbrack$</td><td>$n = {32}$</td><td>$n = {200}$</td></tr><tr><td>Recurrent</td><td>GRU</td><td>$\mathbf{{0.996} \pm {0.001}}$</td><td>$\mathbf{{0.998} \pm {0.001}}$</td><td>${0.912} \pm {0.017}$</td></tr><tr><td>LCM</td><td>Binary-GRU-Assoc</td><td>${0.997} \pm {0.002}$</td><td>${0.997} \pm {0.002}$</td><td>${0.822} \pm {0.064}$</td></tr><tr><td>LCM</td><td>Binary-GRU</td><td>${0.997} \pm {0.001}$</td><td>${0.992} \pm {0.005}$</td><td>${0.443} \pm {0.122}$</td></tr><tr><td>Fixed</td><td>PNA</td><td>${0.961} \pm {0.003}$</td><td>${0.794} \pm {0.012}$</td><td>${0.110} \pm {0.027}$</td></tr><tr><td>Fixed</td><td>Max</td><td>${0.901} \pm {0.007}$</td><td>${0.723} \pm {0.025}$</td><td>${0.069} \pm {0.039}$</td></tr><tr><td>Fixed</td><td>Sum</td><td>${0.845} \pm {0.010}$</td><td>${0.261} \pm {0.020}$</td><td>${0.045} \pm {0.011}$</td></tr></table>
726
+
727
+ Table 2: Accuracy (the fraction of multisets at each size for which the ${2}^{\text{nd }}$ -minimum is correctly identified) for fixed, recurrent and LCM aggregators, along with the best-performing regularised LCM aggregator (Binary-GRU-Assoc with $\lambda = {10}^{0}$ ).
728
+
729
+ ## H Detailed results and discussion for the PNA synthetic benchmark
730
+
731
+ Out-of-distribution performance. We present the out-of-distribution performance of our aggregators in Figure 5. Note that the MPNN (max) curve corresponds to the second-best aggregator tested
732
+
733
+ ![01963ef4-56c1-7813-a737-edae493b6b9e_15_311_207_1177_408_0.jpg](images/01963ef4-56c1-7813-a737-edae493b6b9e_15_311_207_1177_408_0.jpg)
734
+
735
+ Figure 3: Efficiency (mean time per epoch on a GPU, over 5 epochs) for fixed (max, sum, PNA), recurrent (GRU), LCM (Binary-GRU) and regularised LCM (Binary-GRU-Assoc) aggregators. The shaded region is bounded above and below by the maximum and minimum values across all runs.
736
+
737
+ ![01963ef4-56c1-7813-a737-edae493b6b9e_15_312_791_1173_390_0.jpg](images/01963ef4-56c1-7813-a737-edae493b6b9e_15_312_791_1173_390_0.jpg)
738
+
739
+ Figure 4: Mean generalisation performance for fixed, recurrent and LCM aggregators, sweeping across regularisation rate $\lambda$ for Binary-GRU-Assoc.
740
+
741
+ ![01963ef4-56c1-7813-a737-edae493b6b9e_15_311_1321_1171_422_0.jpg](images/01963ef4-56c1-7813-a737-edae493b6b9e_15_311_1321_1171_422_0.jpg)
742
+
743
+ Figure 5: Mean generalisation performance (multi-task ${\log }_{10}$ of the ratio between the MSE loss for the GNN and the MSE loss for the baseline) for fixed, recurrent and LCM aggregators on the PNA multi-task benchmark.
744
+
745
+ out-of-distribution in [Corso et al., 2020], after PNA - this curve stops at graphs of sizes between 45 and 50 as this is the maximum graph size on which the aggregator was tested in the paper.
746
+
747
+ Observe that all learnable aggregators generalise as well as, or better than, the max-aggregator. Notably, while the Binary-GRU-Assoc aggregator underperforms in distribution compared to Binary-GRU, it beats Binary-GRU out of distribution and performs competitively with GRU: indeed, the regularisation towards associativity has improved performance out-of-distribution at the cost of a slight decrease in performance in-distribution.
748
+
749
+ Notice also that all learnable aggregators are more stable than PNA for very large graphs - in fact, the 128-dimensional PNA explodes for graph sizes above 75.
750
+
751
+ Dimensionality and overfitting. Finally, we take a look at the effects of high dimensionality on the performance of various aggregators.
752
+
753
+ For learnable aggregators, increasing dimensionality seems to help performance. We demonstrated that, if learnable aggregators operate over a latent space with a high enough dimension, they can beat individual fixed aggregators on tasks the fixed aggregators should be aligned to, and can even be competitive with PNA. Informal testing showed that the performance of learnable aggregators drops substantially if the dimensionality of these aggregators is reduced.
754
+
755
+ By contrast, for fixed aggregators, increasing dimensionality seems to harm performance: Corso et al. [2020] found that "even when [models with fixed aggregators] are given 30% more parameters than the [model using] PNA, they are qualitatively less capable of capturing the graph structure". (And for this reason, we did not test models with fixed aggregators in the 128-dimensional setting.)
756
+
757
+ For PNA, the story is slightly more complex: while the 16-dimensional PNA performs well in-distribution (and, to some extent, out-of-distribution), this improvement in performance is small, especially when compared to PNA's standard deviation. And notably, unlike the 16-dimensional PNA, the 128-dimensional PNA explodes out-of-distribution.
758
+
759
+ So it seems that, when increasing the dimensionality of the aggregator, fixed aggregators may have more of a tendency to overfit.
760
+
761
+ One possible hypothesis for this phenomenon comes from observing that, by Section 2.2,
762
+
763
+ - in cases where the problem we're attempting to solve aligns with the fixed aggregator we want to use, we can often learn a simple homomorphism from the fixed aggregator to our latent space
764
+
765
+ - homomorphisms from fixed aggregators are expressive enough in principle to model any commutative monoid, but the required homomorphism is complex and doesn't generalise out-of-distribution
766
+
767
+ Note also that the aggregator monoid doesn't align perfectly with the combined 'multitask benchmark monoid' we'd need to learn to perform all tasks simultaneously. So, if we have the dimensionality to do so, our fixed aggregator may try to combine the existing monoids to approximate this multitask monoid in-distribution, in a way that does not generalise. In other words, it may be easier to get better performance by learning a very complex homomorphism from our fixed aggregator that works well in-distribution but struggles to extrapolate, as opposed to learning a simple homomorphism from the fixed aggregator that 'mostly works'.
768
+
769
+ Under this hypothesis, low-dimensional feature spaces provide an inductive bias towards learning simple homomorphisms that generalise out of distribution.
papers/LOG/LOG 2022/LOG 2022 Conference/WtFobB28VDey/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,354 @@
1
+ § LEARNABLE COMMUTATIVE MONOIDS FOR GRAPH NEURAL NETWORKS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Graph neural networks (GNNs) have been shown to be highly sensitive to the choice of aggregation function. While summing over a node's neighbours can approximate any permutation-invariant function over discrete inputs, recent work has proved there are set-aggregation problems for which summing cannot generalise to unbounded inputs, proposing recurrent neural networks regularised towards permutation-invariance as a more expressive aggregator. We show that these results carry over to the graph domain: GNNs equipped with recurrent aggregators are competitive with state-of-the-art invariant aggregators, on both synthetic benchmarks and real-world problems. However, despite the benefits of recurrent aggregators, their $O\left( V\right)$ depth makes them both difficult to parallelise and harder to train on large graphs. Inspired by the observation that a well-behaved aggregator for a GNN is a commutative monoid over its latent space, we propose a framework for constructing learnable, commutative, associative binary operators. And with this, we construct an aggregator of $O\left( {\log V}\right)$ depth, yielding exponential improvements for both parallelism and dependency length while achieving performance competitive with recurrent aggregators. Based on our empirical observations, our proposed learnable commutative monoid (LCM) aggregator represents the "best of both worlds" between efficient and expressive aggregators.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ When dealing with irregularly structured data [Bronstein et al., 2021], neural networks typically need to process data of arbitrary sizes. In such scenarios, the heart of the network is arguably its aggregation function - a function that reduces a collection of neighbour feature vectors into a single vector. Indeed, graph neural networks (GNNs) have been shown empirically to be highly sensitive to the choice of aggregator [Veličković et al., 2019, Richter and Wattenhofer, 2020], with a wide range of aggregators (e.g. sum, max and mean) and their combinations [Corso et al., 2020] in common use.
16
+
17
+ In this paper, we offer a new perspective for studying aggregators, with clear theoretical and practical implications. It can be said that the true objective of choosing an aggregator is to make it as simple as possible for the parameters of the GNNs to exploit that aggregator in a way that makes it easier to solve the downstream task. Specifically, we study this in the context of learning to align the GNN's aggregator to a desirable target aggregation function. It is already a known fact that higher alignment implies reduced sample complexity [Xu et al., 2019a], and in the context of algorithmic reasoning, it is well-known that a neural network will be better at learning to imitate an algorithm if its aggregator matches that of the algorithm it is trying to imitate [Veličković et al., 2019, Xu et al., 2020].
18
+
19
+ However, beyond the realm of learning a task with a concrete aggregator, many real-world problems offer more challenging settings, wherein the optimal aggregator to learn is not clear-but unlikely to be a trivial fixed aggregator. To formalise this notion, while preserving the useful assumption of permutation invariance, we leverage commutative monoids as a formalism for both the aggregators supported by GNNs and the (potentially unknown) target aggregators one would wish to align to. This formalism allows us to derive several relevant results, including the fact that using any fixed commutative monoid (e.g. sum or max) would compel the GNN to learn a homomorphism from it to the target commutative monoid, purely from data. We hypothesise this is often difficult to do robustly, and verify our hypothesis by demonstrating several instances (both synthetic and real-world) where fixed aggregators (including combinations of them [Corso et al., 2020]) fail to generalise.
20
+
21
+ Our perspective, inspired by the functional programming motif of folds (or catamorphisms) over arbitrary data structures, leads us to consider flexible and learnable aggregation functions, which can more easily fit a wide range of commutative monoids directly, without needing to learn such a homomorphism. The most popular such aggregator has previously been the RNN (i.e. 'a fold over a list') - used, for instance, in GraphSAGE [Hamilton et al., 2017]. The reason for RNNs' expressive power is simple: their usage of a hidden recurrent state allows them to break away from the constraints of commutative monoids and aggregate inputs more flexibly. However, while empirically powerful, the sequential structure of RNN aggregators leads to clear shortcomings in efficiency: if an RNN had learnt to aggregate $n$ neighbours under a commutative monoid operation $\oplus$ , it would do so with depth linear in $n$ , as $\left( {\left( {\left( {\left( {\ldots \left( {{\mathbf{x}}_{1} \oplus {\mathbf{x}}_{2}}\right) \oplus {\mathbf{x}}_{3}}\right) \oplus \ldots }\right) \oplus {\mathbf{x}}_{n - 1}}\right) \oplus {\mathbf{x}}_{n}}\right)$ .
22
+
23
+ But, by folding over a binary tree instead of a list (in other words, rearranging the order of operations to a balanced binary tree $\left( {\ldots \left( {\left( {{\mathbf{x}}_{1} \oplus {\mathbf{x}}_{2}}\right) \oplus \left( {{\mathbf{x}}_{3} \oplus {\mathbf{x}}_{4}}\right) }\right) \oplus \cdots \oplus \left( {{\mathbf{x}}_{n - 1} \oplus {\mathbf{x}}_{n}}\right) \ldots }\right)$ ), we derive an aggregator that hits a "sweet spot" between flexibility and performance, empirically retaining most of the expressivity of RNNs while having depth logarithmic in $n$ . We also demonstrate how such layers can be effectively constrained and regularised to respect the commutative monoid axioms (essentially creating a learnable commutative monoid), leading to further gains in robustness.
24
+
25
+ § 2 MOTIVATION
26
+
27
+ Before exploring GNN aggregators, we first review the structure of a GNN. For a graph $G = \left( {V,E}\right)$ whose nodes $u$ have one-hop neighbourhoods ${\mathcal{N}}_{u} = \{ v \in V \mid \left( {v,u}\right) \in E\}$ and features ${\mathbf{x}}_{u}$ , a message-passing GNN over $G$ is defined by Bronstein et al. [2021] as ${\mathbf{h}}_{u} = \phi \left( {{\mathbf{x}}_{u},{\bigoplus }_{v \in {\mathcal{N}}_{u}}\psi \left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) }\right)$ for $\psi$ the message function, $\phi$ the readout function and $\oplus$ a permutation-invariant aggregation function. This GNN ’template’ can be instantiated in many ways, with different choices of $\phi ,\psi$ and $\oplus$ yielding popular architectures such as GCNs [Kipf and Welling, 2017] and GATs [Veličković et al., 2018].
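To ground this template, a single message-passing step over scalar features, with sum-aggregation and illustrative $\psi$ and $\phi$ of our own choosing, can be sketched in Python as:

```python
def mpnn_step(x, edges, psi, phi, agg=sum):
    # x: per-node features; edges: directed (v, u) pairs meaning v -> u.
    n = len(x)
    neighbours = {u: [v for (v, w) in edges if w == u] for u in range(n)}
    return [phi(x[u], agg(psi(x[u], x[v]) for v in neighbours[u]))
            for u in range(n)]

# Example: psi forwards the sender's feature, phi adds the aggregate
# to the receiver's own feature.
x = [1.0, 2.0, 3.0]
edges = [(0, 1), (2, 1), (1, 0)]
out = mpnn_step(x, edges, psi=lambda xu, xv: xv, phi=lambda xu, m: xu + m)
assert out == [3.0, 6.0, 3.0]
```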
28
+
29
+ § 2.1 TO LEARN A COMPLEX AGGREGATOR IS TO LEARN A COMMUTATIVE MONOID HOMOMORPHISM
30
+
31
+ So we’ve seen that, in order to define a GNN, we must define a permutation-invariant aggregator $\oplus$ over its messages. But how can we characterise a permutation-invariant aggregator in general?
32
+
33
+ In abstract algebra (and in functional programming), a permutation-invariant aggregator over a set can be described as (maps into and out of) a commutative monoid. A commutative monoid $\left( {M,\oplus ,{e}_{ \oplus }}\right)$ is a set $M$ equipped with a commutative, associative binary operator $\oplus : M \times M \rightarrow M$ and an identity element ${e}_{ \oplus } \in M$ - in other words, an instance of the following Haskell typeclass, satisfying the identities below for all (x y z :: a):
34
+
35
+ class CommutativeMonoid a where
+   e    :: a
+   (<>) :: a -> a -> a

+ x <> e == x
+ x <> y == y <> x
+ x <> (y <> z) == (x <> y) <> z
48
+
49
+ Intuitively, commutative monoids over a set $M$ are ’operations you can use to reduce a multiset, whose members are in $M$ , to a single value’. These include GNN aggregators, like sum-aggregation $\left( {{\mathbb{R}}^{n},+,\mathbf{0}}\right)$ and max-aggregation $\left( {{\mathbb{R}}^{n},\max ,\mathbf{0}}\right)$ . Indeed, Dudzik and Veličković [2022] observe that, for the aggregation function $\oplus$ of a GNN to be well-behaved (in the sense of respecting the axioms of the multiset monad), it must form a commutative monoid $\left( {S,\oplus ,{e}_{ \oplus }}\right)$ over some subspace $S$ of ${\mathbb{R}}^{n}$ .
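These axioms are easy to spot-check numerically, e.g. for elementwise max over vectors (a quick Python check of our own, using $-\infty$ as the identity):

```python
from functools import reduce
from itertools import permutations

def vmax(a, b):
    return [max(p, q) for p, q in zip(a, b)]

e = [float('-inf')] * 3   # identity element for elementwise max
xs = [[1.0, 5.0, 2.0], [4.0, 0.0, 2.0], [3.0, 3.0, 3.0]]

assert vmax(xs[0], e) == xs[0]                                             # identity
assert vmax(xs[0], xs[1]) == vmax(xs[1], xs[0])                            # commutativity
assert vmax(vmax(xs[0], xs[1]), xs[2]) == vmax(xs[0], vmax(xs[1], xs[2]))  # associativity

# Hence every reduction order yields the same aggregate:
assert len({tuple(reduce(vmax, p, e)) for p in permutations(xs)}) == 1
```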
50
+
51
+ The vast majority of GNNs choose a fixed permutation-invariant function $\oplus$ (or fixed combinations of them [Corso et al., 2020]). While some research [Pellegrini et al., 2020, Li et al., 2020] has explored aggregation functions with learnable parameters, these functions are only very weakly parameterised, and give us limited additional expressivity.
52
+
53
+ For problems where we can anticipate the kind of aggregation function we might need, this approach works well: indeed, choosing a commutative monoid that aligns with the algorithm we want our GNN to learn can improve performance both in and out of distribution [Veličković et al., 2019]. But for problems where we may need to learn aggregations over complex internal representations, these monoids may not always be the most natural choice for the aggregation we're trying to learn.
54
+
55
+ As we will now make precise, this means that a GNN with a fixed aggregator can only imitate a target aggregation by learning a function that decomposes into a surjective monoid homomorphism from a submonoid of its aggregator's monoid to the target monoid.
56
+
57
+ § 2.2 LIMITATIONS ON EXPRESSIVITY AND GENERALISATION FOR CONSTRUCTED AGGREGATORS
58
+
59
+ Given this result, what are the implications for prior and present work?
60
+
61
+ type M = (Int, Int)

+ instance CommutativeMonoid M where
+   e = (Infinity, Infinity)
+   (a1, a2) <> (b1, b2) = (c1, c2)
+     where c1 : c2 : _ =
+             sort [a1, a2, b1, b2]
72
+
73
+ In such cases, $\psi$ and $\phi$ must take on some of the work of mapping our representations into and out of a space where $\oplus$ -aggregation makes sense.
74
+
75
+ Formally, suppose we use a GNN equipped with a fixed commutative monoid aggregator $\left( {F,\oplus ,{e}_{ \oplus }}\right)$ , on a problem for which the 'true' aggregation we want to perform is the commutative monoid $\left( {M,*,{e}_{ * }}\right)$ over the GNN’s latent space. What would it take for our GNN to perform $M$ -aggregation?
76
+
77
+ Proposition 1. Let $\left( {M,*,{e}_{ * }}\right)$ and $\left( {F,\oplus ,{e}_{ \oplus }}\right)$ be commutative monoids. Then, for functions $g : M \rightarrow F$ and $h : F \rightarrow M$, ${ * }_{x \in X}x = h\left( {{\bigoplus }_{x \in X}g\left( x\right) }\right)$ for all multisets $X$ of $M$ if and only if $h$ is both a left inverse of $g$ and a surjective monoid homomorphism from $\langle g\left( M\right) \rangle \subseteq {F}^{1}$ to $M$.
78
+
79
+ Now, given Proposition 1 above (proven in Appendix A), suppose we had a trained GNN, parameterised by $\phi : {\mathbb{R}}^{k} \times F \rightarrow {\mathbb{R}}^{k}$ and $\psi : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow F$ , with a fixed $F$ -aggregator. Suppose this GNN has learned to imitate the $M$ -aggregation commutative monoid - in other words, that there exist functions ${\phi }^{\prime } : {\mathbb{R}}^{k} \times M \rightarrow {\mathbb{R}}^{k},{\psi }^{\prime } : {\mathbb{R}}^{k} \times {\mathbb{R}}^{k} \rightarrow M, g : M \rightarrow F$ and $h : F \rightarrow M$ such that $\phi \left( {{\mathbf{x}}_{u},{\mathbf{m}}_{\mathcal{N}\left( u\right) }}\right) = {\phi }^{\prime }\left( {{\mathbf{x}}_{u},h\left( {\mathbf{m}}_{\mathcal{N}\left( u\right) }\right) }\right) ,\psi \left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) = g\left( {{\psi }^{\prime }\left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) }\right)$ and ${ * }_{x \in X}x = h\left( {{ \oplus }_{x \in X}g\left( x\right) }\right)$ for all multisets $X$ of $M$. Then, for all nodes $u$:
80
+
81
+ $$
82
+ \phi \left( {{\mathbf{x}}_{u},{\bigoplus }_{v \in {\mathcal{N}}_{u}}\psi \left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) }\right) = {\phi }^{\prime }\left( {{\mathbf{x}}_{u},\underset{v \in {\mathcal{N}}_{u}}{ * }{\psi }^{\prime }\left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) }\right) = {\phi }^{\prime }\left( {{\mathbf{x}}_{u},h\left( {{\bigoplus }_{v \in {\mathcal{N}}_{u}}g\left( {{\psi }^{\prime }\left( {{\mathbf{x}}_{u},{\mathbf{x}}_{v}}\right) }\right) }\right) }\right)
83
+ $$
84
+
85
+ Hence $h$ is a surjective monoid homomorphism from $\langle g\left( M\right) \rangle$ to $M$ (i.e. $M$ is a subquotient of $F$ ).
86
+
87
+ So at a high level, for a GNN with aggregator $F$ to imitate an aggregator $M$, it must learn a function that can decompose into a surjective monoid homomorphism from a submonoid of $F$ to $M$.
88
+
89
+ As has been seen in [Veličković et al., 2019, Sanchez-Gonzalez et al., 2020], it's clear that if our fixed commutative monoid $F$ is aligned with a target monoid $M$ for the problem we want to solve - in other words, if the homomorphism doesn't have to do much work - then we can easily learn to imitate $M$ . Indeed, if the target homomorphism is linear, and we have appropriate training set coverage, then by $\left\lbrack {\mathrm{{Xu}}\text{ et al.,2020 }}\right\rbrack$ it may well generalise out-of-distribution - a result that holds (to an extent) in the case of learning to imitate path-finding algorithms such as Bellman-Ford [Veličković et al., 2019].
90
+
91
+ But there are many cases where $M$ is more complex, and there is no commonly used aggregator $F$ for which we can simply apply a linear homomorphism to get from $F$ to $M$ . One such example is the problem of finding the ${2}^{\text{ nd }}$ -minimum element in a set. Here, the desired monoid $M$ is as follows:
92
+
93
+ secondMinimum :: [Int] -> Int
+ secondMinimum = dec . agg . map enc
+   where
+     enc x = (x, Infinity)
+     agg = reduce (<>)
+     dec (_, x2) = x2
104
+
105
+ Observe that, for this monoid, there is no commonly-used fixed aggregator $F$ (e.g. sum, max, min, mean) for which there is a ’natural’ homomorphism from $F$ to $M$ .
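Translated into runnable Python (with float('inf') standing in for Infinity), the enc/agg/dec pipeline above computes the 2nd-minimum:

```python
from functools import reduce

INF = float('inf')

def combine(a, b):
    # The monoid operation: keep the two smallest of the four candidates.
    c1, c2, *_ = sorted([a[0], a[1], b[0], b[1]])
    return (c1, c2)

def second_minimum(xs):
    enc = lambda x: (x, INF)
    agg = lambda ms: reduce(combine, ms, (INF, INF))
    dec = lambda m: m[1]
    return dec(agg(map(enc, xs)))

assert second_minimum([5, 3, 8, 1, 7]) == 3
assert second_minimum([2, 2, 9]) == 2
```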
106
+
107
+ In principle, there exists an $F$ from which it is possible to construct a homomorphism to $M$ : by [Zaheer et al.,2017] and [Xu et al.,2019a], for any $\left( {M,*,{e}_{ * }}\right)$ with $M \subseteq {\mathbb{Q}}^{n}$ , there exists a surjective monoid homomorphism $h$ from $\left( {{\mathbb{R}}^{n}, + ,0}\right)$ to $\left( {M,*,{e}_{ * }}\right)$ . But Wagstaff et al. [2019] show that this guarantee may require an $h$ that is highly discontinuous, and therefore not only hard to learn in-distribution, but fully misaligned with the assumptions of the universal approximation theorem. Further, as $\operatorname{dom}\left( h\right) = \langle g\left( M\right) \rangle$ , we are not learning a function whose domain is a bounded set, so we have little hope of generalising out-of-distribution. Indeed, we demonstrate in Section 3.1 that all common fixed aggregators fail to learn the ${2}^{\text{ nd }}$ -minimum problem, both in and out of distribution. Similarly, Cohen-Karlik et al. [2020] show that sum-aggregators as implemented in [Zaheer et al., 2017] (i.e. maps in and out of the $\left( {{\mathbb{R}}^{n},+,0}\right)$ commutative monoid) require $\Omega \left( {\log {2}^{n}}\right)$ neurons to learn the parity function over sets of size $n$ . Intuitively, the crux of their proof is that the homomorphism the aggregator would have to learn from $\left( {{\mathbb{R}}^{n},+,0}\right)$ to the parity monoid is a periodic function with unbounded domain. Similar arguments hold for all aggregation tasks involving modular counting.
+
+ ${}^{1}$ where $\langle g\left( M\right) \rangle$ denotes the submonoid of $F$ generated by $g\left( M\right)$
+
+ § 2.3 FULLY LEARNABLE RECURRENT AGGREGATORS AND THEIR LIMITATIONS
+
+ We will now take a step back from homomorphisms, and try to discover a more flexible aggregator. An emerging narrative within deep learning is that of representations as types [Olah, 2015]. If we view the construction of neural networks as the construction of differentiable, parameterised pure functional programs, many of the design patterns commonly used in deep learning correspond to higher-order functions commonly used in functional programming (FP). This paradigm has proven valuable in recent times, embodied by deep learning frameworks such as JAX [Bradbury et al., 2018].
+
+ In FP, a simple way to aggregate a multiset of elements is to represent them as a list and fold over it: ${}^{2}$
+
+ ```haskell
+ fold :: (a -> b -> b) -> b -> [a] -> b
+ fold f z []     = z
+ fold f z (x:xs) = f x (fold f z xs)
+ ```
+
+ And in some sense, a recurrent neural network (RNN) is simply a fold over a list, parameterised by a learnable accumulator $\mathrm{f}$ and a learnable initialisation element $\mathrm{z}{.}^{3}$
+
+ ```haskell
+ rnnCell :: Learnable (Vec R h1 -> Vec R h2 -> Vec R h2)
+ initialState :: Learnable (Vec R h2)
+ rnn :: Learnable ([Vec R h1] -> Vec R h2)
+ rnn = fold rnnCell initialState
+ ```
+
+ Hence a natural way to construct a learnable aggregator over multisets could be to use an RNN - a 'learnable fold' - and to somehow ensure it is permutation-invariant.
+
+ Indeed, this approach has been used for permutation-invariant set aggregation, with Murphy et al. [2019] enforcing permutation-invariance by design by taking the average of an RNN applied to all permutations of its input, and Cohen-Karlik et al. [2020] regularising RNNs $f$ towards permutation-invariance by adding a pairwise regularisation term ${L}_{\text{swap}}\left( {{\mathbf{x}}_{1},{\mathbf{x}}_{2}}\right) = {\left( f\left( f\left( \mathbf{s},{\mathbf{x}}_{1}\right) ,{\mathbf{x}}_{2}\right) - f\left( f\left( \mathbf{s},{\mathbf{x}}_{2}\right) ,{\mathbf{x}}_{1}\right) \right) }^{2}$ (which we motivate through the lens of commutative monoids in Appendix B).
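As an illustration of the pairwise term (our own sketch; the tiny tanh accumulator and its dimensions are hypothetical stand-ins for a learned RNN cell), $L_{\text{swap}}$ can be computed as:

```python
import numpy as np

rng = np.random.default_rng(0)
W_h = rng.normal(size=(4, 4))   # state -> state weights (hypothetical)
W_x = rng.normal(size=(4, 4))   # input -> state weights (hypothetical)

def f(state, x):
    # A toy recurrent accumulator standing in for a learnable RNN cell.
    return np.tanh(W_h @ state + W_x @ x)

def l_swap(s, x1, x2):
    # Penalise dependence on the order in which x1 and x2 are consumed.
    return float(np.sum((f(f(s, x1), x2) - f(f(s, x2), x1)) ** 2))
```

The loss is exactly zero when the two inputs are identical, and positive whenever the accumulator is order-sensitive.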
+
+ Recurrent aggregators have also occasionally seen use in GNNs [Hamilton et al., 2017, Xu et al., 2018], but they are scarcely used despite their competitive performance. We assume RNNs likely remain unpopular as a GNN aggregator due to their depth. Indeed, observe that an $N$ -layer GNN equipped with a recurrent aggregator has (worst-case) depth $O\left( {VN}\right)$ . By contrast, the same GNN equipped with a fixed aggregator has (worst-case) depth $O\left( N\right)$ . And as many graphs on which we want to deploy GNNs can have upwards of 100,000 nodes [Hu et al., 2020], the same problems of efficiency and maximum dependency length observed by Vaswani et al. [2017] when using RNNs for sequence transduction also hold when using RNNs for graph message aggregation.
+
+ § 2.4 A COMPROMISE: FULLY LEARNABLE COMMUTATIVE MONOIDS
+
+ So, if recurrent aggregators are too deep, is there any way to get a fully learnable aggregator? We've considered the fixed-aggregator approach, where we learn maps into and out of the carrier set of a pre-determined commutative monoid. We've considered the recurrent-aggregator approach, where we represent multisets as lists and implement aggregation as a learnable fold over lists. ${}^{4}$ But another way to represent multisets in FP is as a balanced binary tree, over which aggregation is implemented as a fold parameterised by a commutative monoid. So what if we implemented aggregation as a learnable fold over a balanced binary tree? Or in other words, what if, instead of learning maps into and out of some commutative monoid, we simply learn the commutative monoid itself?
+
+ ${}^{2}$ Note that $a \rightarrow b \rightarrow b$ is an equivalent way (via currying) of specifying a function $a \times b \rightarrow b$ .
+
+ ${}^{3}$ Note that an RNN can also be viewed as a map to the carrier set of the monoid of endofunctions (i.e. functions from a set to itself - in this case, from b to b) under composition: see Appendix B for details.
+
+ ${}^{4}$ Alternatively, we can see this, as in Appendix B, as learning maps into and out of the carrier set of the monoid of endofunctions.
+
+ Let's make precise what exactly we mean by 'learning a commutative monoid' for use in a GNN. Recall that a commutative monoid $\left( {M,\oplus ,{e}_{ \oplus }}\right)$ is defined by its carrier set $M$ , its binary operation $\oplus$ and its identity element ${e}_{ \oplus }$ . So given some learnable commutative, associative binary operator $\oplus$ (written binOp :: Learnable (Vec R h -> Vec R h -> Vec R h)), and some learnable identity element ${e}_{ \oplus }$ (written identity :: Learnable (Vec R h)), we can define a learnable commutative monoid over some learned embedding space (in other words, a subset of ${\mathbb{R}}^{h}$ ):
+
+ ```haskell
+ type LearnableCommutativeMonoid = {- a subspace of -} Vec R h
+
+ instance CommutativeMonoid LearnableCommutativeMonoid where
+   e    = identity
+   (<>) = binOp
+ ```
+
+ Thus, our aggregation function can be specified simply, as ${\bigoplus }_{x}x$ , or
+
+ ```haskell
+ aggregate :: Learnable ([LearnableMonoid] -> LearnableMonoid)
+ aggregate = reduce (<>)
+ ```
+
+ Note that, here, the carrier set is implicit - when used in a GNN, we expect the message function (i.e. the producer of the elements to be aggregated) to learn a 'return type' representation whose members are elements of this implicit carrier set, and similarly for the 'input type' of the readout function.
+
+ Now, why do we care about this at all? Indeed, if we implement reduce as a fold, we're no better off than if we just used a recurrent aggregator. But consider the computation graph (or rather, computation binary tree) of such an aggregation ${x}_{1} \oplus \left( {{x}_{2} \oplus \left( {{x}_{3} \oplus {x}_{4}}\right) }\right)$ . By Tamari’s theorem [Tamari, 1962], the associativity of $\oplus$ means that the result of evaluating this computation tree is invariant under rotations of nodes in the tree. Therefore, in order to minimise the depth of the computation, we can rewrite our reduction as a balanced binary tree: $\left( {{x}_{1} \oplus {x}_{2}}\right) \oplus \left( {{x}_{3} \oplus {x}_{4}}\right)$ (see Appendix D). And by doing so, for $V$ elements to aggregate, we obtain a network with $O\left( V\right)$ applications of $\oplus$ and $O\left( {\log V}\right)$ depth - an exponential improvement over our $O\left( V\right)$ -depth recurrent aggregators.
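A minimal sketch of this balanced reduction (our own illustration): pair up adjacent elements each round, so an associative $\oplus$ over $V$ elements needs only $O(\log V)$ sequential rounds rather than the $O(V)$ depth of a left-to-right fold.

```python
def tree_reduce(op, xs, identity):
    # Reduce xs with op by halving the list each round; for an
    # associative op this agrees with a fold but has O(log V) depth.
    layer = list(xs)
    if not layer:
        return identity
    while len(layer) > 1:
        nxt = [op(layer[i], layer[i + 1])
               for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:
            nxt.append(layer[-1])   # odd element carries to the next round
        layer = nxt
    return layer[0]
```

For instance, `tree_reduce(lambda a, b: a + b, [x1, x2, x3, x4], 0)` evaluates `(x1 + x2) + (x3 + x4)`, exactly the rebalanced tree described above.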
+
+ § 2.5 COMMUTATIVE, ASSOCIATIVE BINARY OPERATORS FOR LEARNABLE COMMUTATIVE MONOIDS
+
+ So, given a commutative, associative binary operator, we can get our learnable commutative monoid with $O\left( {\log V}\right)$ depth. But how do we construct such an operator in the first place? As with permutation-invariant RNNs, we have two options: either we construct an operator that strongly enforces the axioms of commutativity and associativity by construction, or we construct some arbitrary binary operator and weakly enforce the axioms through regularisation.
+
+ Strong enforcement. While some research has been conducted into learning algebraic structures with strongly enforced axioms [Abe et al., 2021, Martires, 2021], these approaches reduce to learning maps to and from a fixed aggregator. ${}^{5}$ We observe that, while we can strongly enforce commutativity in any binary operator $f\left( {x,y}\right)$ by symmetrising it to $g\left( {x,y}\right) = \frac{f\left( {x,y}\right) + f\left( {y,x}\right) }{2}$ , we found no such construction for associativity which doesn't sacrifice expressivity.
+
+ Example: Binary-GRU. So given this, and given the importance of gating [Tallec and Ollivier, 2018] in neural networks applied over long time horizons, we can construct a simple strongly commutative binary aggregator by symmetrising a GRU [Cho et al., 2014]:
+
+ ```haskell
+ binaryGRU :: Learnable (Vec R h -> Vec R h -> Vec R h)
+ binaryGRU v1 v2 = do
+   g <- new gruCell (InputDim h) (HiddenDim h)
+   return ((g v1 v2 + g v2 v1) / 2)
+ ```
+
+ Weak enforcement. Alternatively, just as we saw with recurrent aggregators in Section 2.3, for a learnable binary operator $\oplus : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n}$ we could weakly enforce commutativity and associativity through regularisation losses ${L}_{\text{ comm }}\left( {\mathbf{x},\mathbf{y}}\right) = {\lambda }_{\text{ comm }}{\left| \left( \mathbf{x} \oplus \mathbf{y}\right) - \left( \mathbf{y} \oplus \mathbf{x}\right) \right| }^{2}$ and ${L}_{\text{ assoc }}\left( {\mathbf{x},\mathbf{y},\mathbf{z}}\right) = {\lambda }_{\text{ assoc }}{\left| \left( \mathbf{x} \oplus \left( \mathbf{y} \oplus \mathbf{z}\right) \right) - \left( \left( \mathbf{x} \oplus \mathbf{y}\right) \oplus \mathbf{z}\right) \right| }^{2}$ (for implementation details, see Appendix E).
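A sketch of these penalties (our own illustration; the toy non-commutative operator and the default $\lambda$ values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3))

def op(x, y):
    # An arbitrary binary operator; the asymmetric weighting of y
    # makes it deliberately non-commutative, so l_comm is nonzero.
    return np.tanh(W @ x + 2.0 * (W @ y))

def l_comm(x, y, lam=1.0):
    # Squared violation of commutativity, scaled by lambda_comm.
    return lam * float(np.sum((op(x, y) - op(y, x)) ** 2))

def l_assoc(x, y, z, lam=1.0):
    # Squared violation of associativity, scaled by lambda_assoc.
    return lam * float(np.sum((op(x, op(y, z)) - op(op(x, y), z)) ** 2))
```

Both penalties vanish exactly when the operator satisfies the corresponding axiom on the sampled inputs.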
+
+ Example: Binary-GRU-Assoc. Now, by applying ${L}_{\text{ assoc }}$ to Binary-GRU, we obtain a strongly commutative, weakly associative binary operator. ${}^{6}$
+
+ ${}^{5}$ i.e. choosing an algebraic structure (e.g. the Abelian group $\left( {{\mathbb{R}}^{n},+,\mathbf{0}}\right)$ ) and learning maps between the model's latent space and that structure
+
+ ${}^{6}$ Note that we can instantiate this operator with different values of the regularisation parameter ${\lambda }_{\text{ assoc }}$ (hereafter referred to as $\lambda$ ) by which we scale the associativity loss.
+
+ § 3 ASSESSING THE UTILITY OF LEARNABLE COMMUTATIVE MONOIDS
+
+ Now, we've seen three types of aggregators: fixed aggregators, recurrent aggregators and learnable commutative monoids. In order to explore the trade-offs between them in terms of expressivity, generalisation and efficiency, we conduct a range of experiments comparing the performance of state-of-the-art fixed aggregators (such as sum-aggregation [Zaheer et al., 2017], max-aggregation [Veličković et al., 2019] and PNA [Corso et al., 2020]), recurrent aggregators (specifically GRUs [Cho et al., 2014]) and learnable commutative monoid (LCM) aggregators (using the Binary-GRU and Binary-GRU-Assoc learnable operators as described in Section 2.5) on the following synthetic and real-world problems:
+
+ ${2}^{\text{ nd }}$ -minimum. We test fixed aggregators, recurrent aggregators and learnable commutative monoids on the problem of finding the second-smallest element in a set of binary-encoded integers. As observed in Section 2.2, this task is a synthetic aggregation problem with an 'unusual' commutative monoid, in that it doesn't align well with common fixed aggregators. Therefore, we expect this task to be a standard problem for which learnable aggregators would outperform any commonly-used fixed aggregator, especially out-of-distribution.
+
+ PNA synthetic benchmark. We then proceed to test the in-distribution performance of our aggregators on the synthetic dataset presented in [Corso et al., 2020]. This dataset consists of aggregator-heavy, classical graph problems that are mostly aligned with the aggregators used to construct PNA. Thus, we expect PNA (and the relevant fixed aggregators) to perform strongly here, potentially even out-of-distribution. But while our learnable aggregators don't necessarily have the inductive bias to approximate these monoids well over an unbounded domain, we expect them to perform competitively at learning the relevant monoids in-distribution.
+
+ PNA real-world benchmark. Finally, we test our aggregators on the real-world dataset presented in [Corso et al., 2020], consisting of chemical (ZINC and MolHIV) and computer vision (CIFAR10 and MNIST) datasets from the GNN benchmarks of Dwivedi et al. [2020] and Hu et al. [2020]. In contrast to the algorithmic tasks in the synthetic benchmark, we expect these real-world problems to contain 'unusual' target monoids: for both molecular and computer vision problems, it is likely that our GNN will learn complex representations whose most natural monoid is not the image of a simple homomorphism from any common fixed aggregator. Therefore, we expect fully learnable aggregators (GRU and LCMs) to outperform fixed aggregators on this benchmark.
+
+ Training details for all experiments are provided in Appendix F. Notably, for all uses of learnable aggregators, we randomly shuffle each batch of sequences before feeding it to the aggregator as a form of regularisation through data augmentation.
+
+ § 3.1 ${2}^{\text{ nd }}$ -MINIMUM
+
+ For this experiment, we compared fixed (sum, max, PNA), recurrent (GRU) and LCM (Binary-GRU) aggregators on the synthetic ${2}^{nd}$ -minimum set aggregation problem. In order to evaluate the effects of regularisation towards algebraic axioms on the performance of LCM aggregators, we also tested Binary-GRU-Assoc, sweeping over values of the regularisation parameter $\lambda$ from ${10}^{0}$ to ${10}^{-7}$ .
+
+ § 3.1.1 EXPERIMENTAL DETAILS
+
+ For training data, we used 65,536 multisets of integers $\sim U\left( {0,{255}}\right)$ of size $\sim U\left( {1,{16}}\right)$ . For validation data, we used 1,024 multisets of integers $\sim U\left( {0,{255}}\right)$ of size 32 . For evaluation data, we used 1,024 multisets of integers $\sim U\left( {0,{255}}\right)$ of size $l$ , for $l \in \left\lbrack {1,{200}}\right\rbrack$ .
+
+ We used a standard multiset-aggregation architecture $f\left( \mathbf{X}\right) \mathrel{\text{ := }} \sigma \left( {\psi \left( {{\bigoplus }_{\mathbf{x} \in \mathbf{X}}\phi \left( \mathbf{x}\right) }\right) }\right)$ for $\oplus$ the aggregator being tested, and $\phi$ and $\psi$ MLPs. $f$ takes as input a vector of 8-bit binary-encoded integers (as in [Yan et al.,2020]), and returns a binary-encoded integer in ${\left\lbrack 0,1\right\rbrack }^{8}$ . The full architecture (with details on integer embedding) is outlined in Appendix C.
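A numpy sketch of this architecture (our own illustration; the random weights below are hypothetical stand-ins for the learned MLPs $\phi$ and $\psi$, and sum is used as the aggregator $\oplus$):

```python
import numpy as np

rng = np.random.default_rng(2)
W_phi = rng.normal(size=(16, 8)) * 0.1   # phi: 8-bit code -> hidden
W_psi = rng.normal(size=(8, 16)) * 0.1   # psi: hidden -> 8-bit code

def f(X):
    # X has shape (set_size, 8): one binary-encoded integer per row.
    h = np.maximum(0.0, X @ W_phi.T)              # phi, element-wise
    agg = h.sum(axis=0)                           # permutation-invariant sum
    return 1.0 / (1.0 + np.exp(-(W_psi @ agg)))   # sigma(psi(...)) in [0,1]^8
```

Because the aggregation is a sum over rows, the output is invariant to any reordering of the input multiset.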
+
+ § 3.1.2 RESULTS AND DISCUSSION
+
+ Summary. Recall that this problem was chosen for its comparatively unusual commutative monoid, which we do not expect aligns well with fixed aggregators. Indeed, we confirm this hypothesis: we see in Figure 1 that fixed aggregators fail to learn ${2}^{\text{ nd }}$ -minimum in distribution, that recurrent aggregators learn ${2}^{\text{ nd }}$ -minimum near-perfectly in distribution, generalising well out of distribution, and that ${LCM}$ aggregators learn ${2}^{\text{ nd }}$ -minimum near-perfectly in distribution and are competitive with recurrent aggregators out of distribution, while achieving an exponential speedup over recurrent aggregators on large sets. Furthermore, we observe that regularising towards algebraic axioms improves the performance of LCM aggregators both in and out of distribution.
+
+ Figure 1: Generalisation performance for fixed (max, sum, PNA), recurrent (GRU) and LCM (Binary-GRU) aggregators, along with the best-performing regularised LCM aggregator (Binary-GRU-Assoc with $\lambda = {10}^{0}$ ). The shaded region is bounded above and below by the maximum and minimum values across all runs. The vertical purple line denotes the maximum set size present in training data (16); the vertical blue lines denote powers of 2 (from ${2}^{1}$ to ${2}^{7}$ ). For detailed results, see Appendix G.
+
+ In-distribution performance. Examining Figure 1, observe that only the fully-learnable aggregators - GRU, Binary-GRU and Binary-GRU-Assoc - managed to learn ${2}^{\text{ nd }}$ -minimum near-perfectly in distribution, with the next best performing aggregator being PNA. ${}^{7}$
+
+ Out-of-distribution performance (without regularisation). Out of distribution, observe that all learnable aggregators generalise near-perfectly up to size 32 (twice the size of the input). Beyond this point, while the performance of the recurrent aggregator decays slowly (reaching ${0.912} \pm {0.017}$ at size 200 ), the performance of the LCM quickly drops (reaching ${0.287} \pm {0.068}$ at size 200 ). Despite this, both learnable aggregators consistently outperform the fixed aggregators out-of-distribution. Furthermore, out of the fixed aggregators, we see that the sum-aggregator's performance plateaus extremely quickly, a result we may attribute to the domain of the learned homomorphism from the sum-aggregator being an unbounded set (see Section 2.2).
+
+ Efficiency. As hypothesised in Section 2.4, we see (in Appendix G, Figure 3) that LCMs are indeed exponentially faster than RNNs for large sets: for $n = {20}$ , Binary-GRU-Assoc takes ${48.2} \pm {0.4}$ seconds per epoch, and GRU takes ${46.6} \pm {0.5}$ seconds per epoch, while for $n = {200}$ , Binary-GRU-Assoc takes ${79.4} \pm {0.5}$ seconds per epoch, and GRU takes ${397.2} \pm {1.3}$ seconds per epoch.
+
+ Regularisation towards associativity. We show the results from the best-performing regularised LCM aggregator $\left( {\lambda = {10}^{0}}\right)$ in Figure 1 and Table 2. Although the Binary-GRU performs better than all fixed aggregators in its unregularised form, observe that the regularised Binary-GRU-Assoc outperforms its unregularised sibling both in and out of distribution - and indeed, achieves generalisation performance competitive with GRU. Furthermore, observe that the sudden performance drops experienced by Binary-GRU when the size of the set reaches a power of two (i.e. when the depth of the aggregation tree increases) are noticeably dampened in the case of Binary-GRU-Assoc, suggesting that regularisation towards associativity helps prevent overfitting to a particular maximum aggregation tree height. For interest, we present the full results of the regularisation parameter sweep in Figure 4 in Appendix G.
+
+ § 3.2 PNA SYNTHETIC BENCHMARK
+
+ For this experiment, we trained recurrent (GRU) and LCM (Binary-GRU, Binary-GRU-Assoc) aggregators on the synthetic benchmark from [Corso et al., 2020], comparing against the fixed-aggregator baselines presented there (for GATs [Veličković et al., 2018], GCNs [Kipf and Welling, 2017], GINs [Xu et al., 2019b] and MPNNs [Gilmer et al., 2017] with sum and max aggregators).
+
+ ${}^{7}$ Note that, out of the fixed aggregators, PNA was the only one to achieve near-perfect accuracy on the training dataset, with a maximum training accuracy of around 0.997 .
+
+ § 3.2.1 EXPERIMENTAL DETAILS
+
+ In the PNA paper [Corso et al., 2020], experiments testing fixed aggregators (sum, max, PNA) are conducted on a custom GNN architecture centred around an MPNN layer with dimension 16, split into four towers each with hidden dimension 4 . As we hypothesise that the low dimensionality of these towers could harm the expressivity of learnable aggregators, we test our learnable aggregators both in MPNNs of hidden dimension 16, with four towers of hidden dimension 16, and in MPNNs of hidden dimension 128, with one tower of hidden dimension 128.
+
+ § 3.2.2 RESULTS AND DISCUSSION
+
+ Summary. Recall that this dataset consists of aggregator-heavy classical graph problems ${}^{8}$ that are mostly aligned with the aggregators used to construct PNA. So, as expected, we see in Table 1 that PNA outperforms all other aggregators tested on the dataset in-distribution. But observe that, on these problems, our asymptotically more efficient LCMs are competitive with and sometimes beat ${GRUs}$ - and indeed, on the node-based problems in the dataset, our LCMs are as strong as PNA.
+
+ In Appendix G, we observe the surprising result that LCMs are more stable than PNA out-of-distribution (OOD), and that regularising LCMs towards associativity improves OOD performance at the cost of impairing performance in distribution. We also discuss the effects of increasing dimensionality on fixed aggregator performance, through the lens of commutative monoid homomorphisms.
+
+ In-distribution performance. Observe in Table 1 that, while PNA beats all other aggregators tested, our learnable aggregators perform competitively in-distribution, with all learnable aggregators beating all single-aggregator (i.e. non-PNA) architectures. Interestingly, our Binary-GRUs perform better than the corresponding GRUs: perhaps their inductive bias towards commutativity helps us learn in-distribution.
+
+ Per-task performance. We present the per-task performance of all 128-dimensional aggregators (together with the fixed-aggregator baselines) in Table 1. Observe that, in fact, the Binary-GRU-Assoc outperforms Binary-GRU in all tasks apart from the graph Laplacian.
+
+ Furthermore, while learnable aggregators do not perform as strongly as fixed aggregators on whole-graph tasks (connectedness, diameter and spectral radius), they perform as well as or better than fixed aggregators on node-based tasks (shortest path, eccentricity and graph Laplacian). This may be because the benchmark implementation for whole-graph tasks uses a sum-aggregator over the readout values: it is likely difficult to learn a homomorphism from the sum aggregator to the complex latent-space monoid learned by the LCM, and perhaps fixed aggregators provide an inductive bias towards learning representations for which it is easier to map to and from the sum-aggregation monoid.
+
+ | Model | Avg score | SSSP | Ecc | Lap feat | Conn | Diam | Spec rad |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
+ | GCN | -2.05 | -2.16 | -1.89 | -1.60 | -1.69 | -2.14 | -2.79 |
+ | GAT | -2.26 | -2.34 | -2.09 | -1.60 | -2.44 | -2.40 | -2.70 |
+ | GIN | -1.99 | -2.00 | -1.90 | -1.60 | -1.61 | -2.17 | -2.66 |
+ | MPNN (sum) | -2.50 | -2.33 | -2.26 | -2.37 | -1.82 | -2.69 | -3.52 |
+ | MPNN (max) | -2.53 | -2.36 | -2.16 | -2.59 | -2.54 | -2.67 | -2.87 |
+ | PNA-16 | -3.04 | -2.99 | -2.81 | -2.83 | -2.91 | -2.98 | -3.71 |
+ | PNA-128 | -3.09 | -2.94 | -2.88 | -3.82 | -2.42 | -3.00 | -3.48 |
+ | GRU | -2.91 | -2.84 | -2.71 | -3.73 | -2.20 | -2.88 | -3.11 |
+ | Binary-GRU | -3.00 | -2.85 | -2.77 | -3.87 | -2.34 | -2.88 | -3.29 |
+ | Binary-GRU-Assoc | -2.95 | -2.99 | -2.88 | -2.92 | -2.62 | -2.92 | -3.37 |
+
+ Table 1: Mean ${\log }_{10}\left( {MSE}\right)$ on the PNA test dataset (SSSP, Ecc and Lap feat are node tasks; Conn, Diam and Spec rad are graph tasks; lower is better).
+
+ § 3.3 PNA REAL-WORLD BENCHMARK
+
+ For this experiment, we trained recurrent (GRU) and LCM (Binary-GRU) aggregators on the real-world benchmark from Corso et al. [2020], containing two molecular graph property prediction datasets (ZINC and MolHIV) and two superpixel graph classification datasets (CIFAR10 and MNIST). Note that, due to limitations on compute resources, we were not able to perform a regularisation parameter sweep to test Binary-GRU-Assoc. The GNN architecture used here is identical to that in [Corso et al., 2020], except that, for learnable aggregators, all MPNN towers have the same dimensionality as the MPNN itself (i.e. we do not divide the towers).
+
+ ${}^{8}$ three node-based algorithmic tasks (single-source shortest paths, eccentricity and computing the Laplacian of node feature vectors) and three graph-based algorithmic tasks (connectedness, diameter and spectral radius)
+
+ § 3.3.1 RESULTS AND DISCUSSION
+
+ Summary. Recall that the real-world benchmark has complex problems that do not necessarily align with common fixed aggregators. We observe in Figure 2 that, while PNA in general outperforms all other aggregators on problems involving property prediction from small molecular graphs, the more expressive GRU substantially outperforms PNA for the (more discrete) task of image classification. We notice also that the (asymptotically efficient) Binary-GRU LCM provides a good trade-off between these two aggregators, being the second-best-performing aggregator for all but two problems. Finally, we see that learnable aggregators might be particularly powerful on problems involving graphs with edge features.
299
+
300
+ | | Model | ZINC (MAE) | ZINC-EF (MAE) | CIFAR10 (Acc) | CIFAR10-EF (Acc) | MNIST (Acc) | MNIST-EF (Acc) | MolHIV (%ROC-AUC) |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | Dwivedi et al., Xu et al. | MLP | 0.710 ± 0.001 | — | 56.01 ± 0.90 | — | 94.46 ± 0.28 | — | — |
+ | | GCN | 0.469 ± 0.002 | — | 54.46 ± 0.10 | — | 89.99 ± 0.15 | — | 76.06 ± 0.97 |
+ | | GIN | 0.408 ± 0.008 | — | 53.28 ± 3.70 | — | 93.96 ± 1.30 | — | 75.58 ± 1.40 |
+ | | DiffPool | 0.466 ± 0.006 | — | 57.99 ± 0.45 | — | 95.02 ± 0.42 | — | — |
+ | | GAT | 0.463 ± 0.002 | — | 65.48 ± 0.33 | — | 95.62 ± 0.13 | — | — |
+ | | MoNet | 0.407 ± 0.007 | — | 53.42 ± 0.43 | — | 90.36 ± 0.47 | — | — |
+ | | GatedGCN | 0.422 ± 0.006 | 0.363 ± 0.009 | 69.19 ± 0.28 | 69.37 ± 0.48 | 97.37 ± 0.06 | 97.47 ± 0.13 | — |
+ | Corso et al. | MPNN (sum) | 0.381 ± 0.005 | 0.288 ± 0.002 | 65.39 ± 0.47 | 65.61 ± 0.30 | 96.72 ± 0.17 | 96.90 ± 0.15 | — |
+ | | MPNN (max) | 0.468 ± 0.002 | 0.328 ± 0.008 | 69.70 ± 0.55 | 70.86 ± 0.27 | 97.37 ± 0.11 | 97.82 ± 0.08 | — |
+ | | PNA (no scalers) | 0.413 ± 0.006 | 0.247 ± 0.036 | *70.46 ± 0.44* | 70.47 ± 0.72 | 97.41 ± 0.16 | 97.94 ± 0.12 | *78.76 ± 1.04* |
+ | | PNA | **0.320 ± 0.032** | 0.188 ± 0.004 | 70.21 ± 0.15 | 70.35 ± 0.63 | 97.19 ± 0.08 | 97.69 ± 0.22 | **79.05 ± 1.32** |
+ | Ours | Binary-GRU | *0.340 ± 0.003* | *0.175 ± 0.003* | 69.61 ± 0.18 | *71.86 ± 0.26* | *97.79 ± 0.20* | *98.11 ± 0.07* | 77.37 ± 1.11 |
+ | | GRU | 0.342 ± 0.004 | **0.171 ± 0.006** | **72.03 ± 1.06** | **74.44 ± 0.52** | **98.15 ± 0.04** | **98.41 ± 0.10** | 76.04 ± 1.01 |
+
+ Figure 2: Results of learnable aggregators on the PNA real-world dataset, in comparison with those analysed by Corso et al. [2020]. '-EF' denotes the variant of a dataset with edge features; '—' denotes results not reported. Best results in bold, second-best in italics.
+
+ Observe that PNA is the strongest aggregator over both the ZINC dataset without edge features and the HIV dataset - indeed, due to the continuous nature of the properties we want to estimate in these datasets, it seems likely that the 'natural' monoids for aggregation over graphs in these datasets would align well with fixed aggregators. By contrast, we observe that GRU-aggregators are the strongest when testing on image data, likely as their expressivity lets them easily learn a complex, perhaps more discrete aggregation function. And while Binary-GRU does not do quite as well as GRU here, in all but one case it outperforms PNA on this problem. Finally, observe that, if we add edge features to ZINC, GRU outperforms PNA - and comparing results on the CIFAR10 dataset with and without edge features, the average improvement for fixed aggregators when adding edge features is 0.34, whereas the average improvement for learnable aggregators is 2.33. Learnable aggregators may be particularly strong on tasks with edge features, as making full use of them tends to require the learning of a more complex aggregation function.
+
+ § 4 CONCLUSIONS
+
+ In this work we have conducted a thorough study of aggregation functions within graph neural networks (GNNs), demonstrating both theoretically and empirically that many tasks of practical interest rely on a nontrivial integration of neighbourhoods (i.e. a nontrivial commutative monoid). This motivates the use of fully-learnable aggregation functions, but prior proposals based on RNNs had several shortcomings in terms of efficiency. Accordingly, we propose learnable commutative monoid (LCM) aggregators, which trade off the flexibility of RNNs with the efficiency of fixed aggregators, producing a simple, yet empirically powerful, GNN aggregator with only $O\left( {\log V}\right)$ depth.
papers/LOG/LOG 2022/LOG 2022 Conference/YCgwkDo56q/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,292 @@
+ # Global-Local Graph Neural Networks for Node-Classification
+
+ Anonymous Author(s)
+
+ Anonymous Affiliation
+
+ Anonymous Email
+
+ ## Abstract
+
+ The task of graph node-classification is often approached using a local Graph Neural Network (GNN) that learns only local information from the node input features and their adjacency. In this paper we propose to benefit from both global and local information, in the form of learnt label- and node-features, to improve node-classification accuracy. We therefore call our method Global-Local-GNN (GLGNN). To learn proper label features, for each label we maximize the similarity between its features and the features of nodes that belong to the label, while maximizing the distance to the features of nodes that do not belong to the considered label. We then use the learnt label features to predict the node-classification map. We demonstrate our GLGNN using GCN and GAT as GNN backbones, and show that our GLGNN approach improves baseline performance on the node-classification task.
+
+ ## 1 Introduction
+
+ The field of Graph Neural Networks (GNNs) has gained large popularity in recent years [1-5] in a wide variety of fields and applications such as computer graphics and vision [5-9], Bioinformatics [10, 11], node-classification $\left\lbrack {3,{12},{13}}\right\rbrack$ and others. In the context of node-classification, most of the methods consider only nodal (i.e., local) information by performing local aggregations and $1 \times 1$ convolutions, e.g., [3, 12-14]. In this paper we propose to incorporate label (i.e., global) information to improve the training of GNNs. In particular, we propose to learn a feature vector for each label (class) in the data, which is then used to determine the final prediction map and is mutually utilized with the learnt node features. Because our method is based on learning global features that scale as the number of labels in the dataset, our method does not add significant computational overhead compared to the backbone GNNs. We show the generality of this approach by demonstrating it on GCN [3] and GAT [12] on a variety of node-classification datasets, both in semi- and fully-supervised settings. Our experiments reveal that our GLGNN approach is beneficial for all the considered datasets, and we also illustrate the learnt global features with respect to the node features for a qualitative assessment of our method. Our contributions are as follows:
+
+ - We propose to learn label features to capture global information of the input graph.
+
+ - We fuse label- and node- features to predict a node-classification map.
+
+ - We demonstrate our method qualitatively by illustrating the learnt label features in Fig. 1 and quantitatively by demonstrating the benefit of using GLGNN approach on 6 real-world datasets.
+
+ ## 2 Related Work
+
+ ### 2.1 Graph Neural Networks
+
+ Typically, graph neural networks (GNNs) are categorized into spectral [1] and spatial [3, 5, 15-17] types. While the former learns a global convolution kernel, it scales with the number of nodes in the graph $n$ and is of a higher computational complexity. To obtain local convolutions, spatial GNNs formulate a local-aggregation scheme, usually implemented using the Message-Passing Neural Network mechanism [17], where each node aggregates features (messages) from its neighbours according to some policy. In this work we follow the latter, whilst adding a global mechanism by learning label features to improve accuracy on node-classification tasks.
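As a minimal sketch of such a scheme (our own illustration; mean aggregation is just one simple policy, and real MPNNs wrap it in learnable message and update functions), one message-passing step over an adjacency matrix and node features is:

```python
import numpy as np

def mp_step(A, F):
    # Each node receives its neighbours' features and averages them.
    # A: (n, n) adjacency matrix; F: (n, c) node feature matrix.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # avoid divide-by-zero
    return (A @ F) / deg
```

For a path graph 0-1-2 with scalar features [1, 2, 3], one step gives every node the mean of its neighbours' values.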
28
+
29
+ ### 2.2 Improved training of GNNs
30
+
31
+ To improve accuracy, recent works introduce new training policies, objective functions and augmentations. A common technique when training on small datasets like Cora, Citeseer and Pubmed is to apply Dropout [18] after every GNN layer, which has become standard practice [3, 13, 14, 19]. Other methods randomly perturb the data rather than the GNN neural units. For example, DropEdge [20] and DropNode [21] randomly drop graph edges and nodes, respectively. In PairNorm [22], the authors propose a normalization layer that alleviates the over-smoothing phenomenon in GNNs [23]. Another approach is the Mixup technique, which enriches the training data and has shown success in image classification [24, 25]. Following that, GraphMix [26] proposed an interpolation-based regularization method via parameter sharing between GNNs and point-wise convolutions.
32
+
33
+ Other methods consider GNN training from an information and entropy point of view, following the success of mutual information in CNNs [27]. For example, DGI [28] learns a global graph vector and considers its correspondence with local patch vectors; however, it does not consider label features as in our work. In InfoGraph [29] the authors learn a discriminative network for graph classification tasks, and in [30] a consistency-diversity augmentation is proposed via an entropy perspective for node- and graph-classification tasks.
34
+
35
+ ## 3 Notations
36
+
37
+ We denote an undirected graph by the tuple $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ where $\mathcal{V}$ is a set of $n$ nodes and $\mathcal{E}$ is a set of $m$ edges, and by ${\mathbf{f}}^{\left( l\right) } \in {\mathbb{R}}^{n \times c}$ the feature tensor of the nodes $\mathcal{V}$ with $c$ channels at the $l$ -th layer. The adjacency matrix is defined by $\mathbf{A} \in {\mathbb{R}}^{n \times n}$ , where ${\mathbf{A}}_{ij} = 1$ if there exists an edge $\left( {i, j}\right) \in \mathcal{E}$ and 0 otherwise, and the diagonal degree matrix is denoted by $\mathbf{D}$ , where ${\mathbf{D}}_{ii}$ is the degree of the $i$ -th node.
38
+
39
+ Let us also denote the adjacency and degree matrices with added self-edges by $\widetilde{\mathbf{A}}$ and $\widetilde{\mathbf{D}}$ , respectively. Using this notation, for example, the propagation operator from GCN [3] is obtained by $\widetilde{\mathbf{P}} =$ ${\widetilde{\mathbf{D}}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}{\widetilde{\mathbf{D}}}^{-\frac{1}{2}}$ , and its architecture is given by
40
+
41
+ $$
42
+ {\mathbf{f}}^{\left( l + 1\right) } = \operatorname{ReLU}\left( {\widetilde{\mathbf{P}}{\mathbf{f}}^{\left( l\right) }{\mathbf{K}}^{\left( l\right) }}\right) , \tag{1}
43
+ $$
44
+
45
+ where ${\mathbf{K}}^{\left( l\right) }$ is a $1 \times 1$ convolution matrix.
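The propagation rule above can be sketched concretely. Below is a minimal numpy sketch (not the authors' PyTorch code) of one GCN layer, Eq. (1): self-edges are added, the adjacency is symmetrically normalized, and a channel-mixing ($1 \times 1$ convolution) matrix is applied before the ReLU.

```python
import numpy as np

def gcn_layer(A, f, K):
    """One GCN layer, Eq. (1): f^(l+1) = ReLU(P~ f^(l) K^(l)).

    A: (n, n) symmetric adjacency matrix (without self-loops),
    f: (n, c) node features, K: (c, c_out) 1x1 convolution matrix.
    A sketch of the standard GCN rule, not the authors' implementation.
    """
    n = A.shape[0]
    A_t = A + np.eye(n)                     # add self-edges: A~
    d_inv_sqrt = 1.0 / np.sqrt(A_t.sum(1))  # diagonal of D~^(-1/2)
    # P~ = D~^(-1/2) A~ D~^(-1/2), formed without explicit diag matrices
    P = A_t * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(P @ f @ K, 0.0)       # ReLU
```

For example, on a 3-node path graph with identity $\mathbf{K}$, the layer simply smooths each node's features with its neighbours'.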
46
+
47
+ We consider the node-classification task with $k$ labels. We denote the ground-truth labels by $\mathbf{y} \in {\mathbb{R}}^{n \times k}$ , and obtain the node-classification prediction by applying SoftMax to the output of the network ${\mathbf{f}}^{\text{out }}$ :
48
+
49
+ $$
50
+ \widehat{\mathbf{y}} = \operatorname{SoftMax}\left( {\mathbf{f}}^{\text{out }}\right) \in {\mathbb{R}}^{n \times k}. \tag{2}
51
+ $$
52
+
53
+ ## 4 Method
54
+
55
+ We now describe the local and global feature extraction mechanism and our objective functions.
56
+
57
+ Local features. The local information is obtained by learning node features $\mathbf{f} \in {\mathbb{R}}^{n \times d}$ using some backbone, denoted by GNN. In our experiments, we evaluate our method with GNN being either a GCN [3], as in Eq. (1), or a GAT [12]. Note that our GLGNN approach does not assume a specific GNN backbone and can thus be utilized with other GNNs.
58
+
59
+ Global features. Our global information mechanism learns label features $\mathbf{g} \in {\mathbb{R}}^{k \times d}$ . Specifically, to obtain the global features we consider the concatenation of the initial node embedding ${\mathbf{f}}^{\left( 0\right) }$ and the node features of the last GNN layer ${\mathbf{f}}^{\left( L\right) }$ , denoted by $\left\lbrack {{\mathbf{f}}^{\left( 0\right) } \oplus {\mathbf{f}}^{\left( L\right) }}\right\rbrack$ . We then perform a single $1 \times 1$ convolution, denoted by ${\mathbf{K}}_{\mathrm{g}}$ , followed by a ReLU activation, and feed the result to a global MaxPool readout function to obtain a single vector $\mathbf{s} \in {\mathbb{R}}^{d}$ . Formally:
60
+
61
+ $$
62
+ \mathbf{s} = \operatorname{MaxPool}\left( {\operatorname{ReLU}\left( {{\mathbf{K}}_{\mathrm{g}}\left\lbrack {{\mathbf{f}}^{\left( 0\right) } \oplus {\mathbf{f}}^{\left( \mathrm{L}\right) }}\right\rbrack }\right) }\right) . \tag{3}
63
+ $$
64
+
65
+ Using the global vector $\mathbf{s}$ , we utilize $k$ (the number of labels) multi-layer perceptrons (MLPs), each implemented as an inverted bottleneck [31], which in particular resembles the squeeze-and-excite mechanism from [32]. Each MLP computes the following:
66
+
67
+ $$
68
+ {\mathbf{g}}_{\mathbf{i}} = {\mathbf{K}}_{\mathbf{s}}\left( {\operatorname{ReLU}\left( {{\mathbf{K}}_{\mathrm{e}}\mathbf{s}}\right) }\right) , \tag{4}
69
+ $$
70
+
71
+ where ${\mathbf{K}}_{\mathrm{e}},{\mathbf{K}}_{\mathrm{s}}$ are expanding (from $d$ to $e \cdot d$ channels) and shrinking (from $e \cdot d$ to $d$ channels) $1 \times 1$ convolutions, respectively, and the expansion rate $e$ is a hyper-parameter, set to $e = {12}$ in our experiments. Note that this operation can be efficiently implemented using a grouped convolution to obtain $\mathbf{g} = \left\lbrack {{\mathbf{g}}_{0},\ldots ,{\mathbf{g}}_{k - 1}}\right\rbrack$ in parallel. Also, because $\mathbf{s}$ is a vector, the computational overhead is low compared to the total complexity of the backbone GNN.
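Eqs. (3)-(4) can be sketched as follows. This is a numpy sketch with explicit per-label weight matrices (the paper fuses the $k$ MLPs into one grouped convolution); the weight shapes and random initialization are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def label_features(f0, fL, Kg, mlps):
    """Global label features, Eqs. (3)-(4), as a sketch.

    f0, fL: (n, d) initial / final node features; Kg: (2d, d) mixing matrix;
    mlps: list of k pairs (Ke, Ks), Ke: (e*d, d) expanding, Ks: (d, e*d) shrinking.
    """
    h = np.concatenate([f0, fL], axis=1)      # [f^(0) ⊕ f^(L)], shape (n, 2d)
    s = np.maximum(h @ Kg, 0.0).max(axis=0)   # ReLU, then global MaxPool -> (d,)
    # One inverted-bottleneck MLP per label: g_i = Ks ReLU(Ke s)   (Eq. 4)
    return np.stack([Ks @ np.maximum(Ke @ s, 0.0) for Ke, Ks in mlps])

# Tiny usage example: n=5 nodes, d=4 channels, k=3 labels, expansion e=2.
n, d, k, e = 5, 4, 3, 2
f0, fL = rng.standard_normal((n, d)), rng.standard_normal((n, d))
Kg = rng.standard_normal((2 * d, d))
mlps = [(rng.standard_normal((e * d, d)), rng.standard_normal((d, e * d)))
        for _ in range(k)]
g = label_features(f0, fL, Kg, mlps)          # (k, d) label features
```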
72
+
73
+ Node-classification map. To obtain a node-classification prediction map, we consider the matrix-vector product of the final GNN output ${\mathbf{f}}^{\left( L\right) } \in {\mathbb{R}}^{n \times d}$ with each of the label features ${\mathbf{g}}_{i} \in {\mathbb{R}}^{d}$ in Eq. (4). More formally, for each label we obtain the following node-label correspondence vector:
74
+
75
+ $$
76
+ {\mathbf{z}}_{i} = {\mathbf{f}}^{\left( L\right) } \cdot {\mathbf{g}}_{i} \in {\mathbb{R}}^{n}. \tag{5}
77
+ $$
78
+
79
+ By concatenating the $k$ correspondence vectors and applying the SoftMax function, we obtain the node-classification map
80
+
81
+ $$
82
+ \widehat{\mathbf{y}} = \operatorname{SoftMax}\left( {{\mathbf{z}}_{0} \oplus \ldots \oplus {\mathbf{z}}_{k - 1}}\right) \in {\mathbb{R}}^{n \times k}, \tag{6}
83
+ $$
84
+
85
+ which is the final output of our GLGNN.
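Since the $k$ products of Eq. (5) stack into a single matrix product, Eqs. (5)-(6) reduce to a few lines. A minimal numpy sketch (with a standard max-subtraction for numerical stability, our addition):

```python
import numpy as np

def classification_map(fL, g):
    """Eqs. (5)-(6): correspondence scores z_i = f^(L) g_i, then SoftMax.

    fL: (n, d) final node features, g: (k, d) label features.
    Returns an (n, k) row-stochastic prediction map.
    """
    z = fL @ g.T                           # (n, k); column i equals z_i
    z = z - z.max(axis=1, keepdims=True)   # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)
```

Each node's predicted label is then the argmax of its row.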
86
+
87
+ Objective functions. To train our GLGNN we propose to minimize the following objective function:
88
+
89
+ $$
90
+ \mathcal{L} = {\mathcal{L}}_{CE} + \alpha {\mathcal{L}}_{\mathrm{{GL}}}, \tag{7}
91
+ $$
92
+
93
+ where ${\mathcal{L}}_{CE}$ denotes the cross-entropy loss between the ground-truth labels $\mathbf{y}$ and the predicted node labels $\widehat{\mathbf{y}}$ from Eq. (6), and $\alpha$ is a positive hyper-parameter. ${\mathcal{L}}_{\mathrm{{GL}}}$ denotes a global-local loss that couples the label and node features: it demands that the features of nodes belonging to a label be similar to that label's features, while requiring dis-similarity between a label's features and the features of nodes that do not belong to it, as follows
94
+
95
+ $$
96
+ {\mathcal{L}}_{\mathrm{{GL}}} = \mathop{\sum }\limits_{{l = 0}}^{{k - 1}}\left( {\mathop{\sum }\limits_{{{\mathbf{y}}_{i} = l}}{\begin{Vmatrix}{\mathbf{g}}_{l} - {\mathbf{f}}_{i}^{\left( L\right) }\end{Vmatrix}}_{2}^{2} - \mathop{\sum }\limits_{{{\mathbf{y}}_{i} \neq l}}\min \left( {{\begin{Vmatrix}{\mathbf{g}}_{l} - {\mathbf{f}}_{i}^{\left( L\right) }\end{Vmatrix}}_{2}^{2}, r}\right) }\right) , \tag{8}
97
+ $$
98
+
99
+ where $\min \left( {\cdot , \cdot }\right)$ returns the minimum of its arguments, acting as a clamp, and $r$ is a positive hyper-parameter. In our experiments we set $r = {10}$ .
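Eq. (8) can be sketched directly. The loop below is a numpy sketch of the loss: an attraction term over same-label nodes and a repulsion term, clamped at $r$, over the rest.

```python
import numpy as np

def global_local_loss(fL, g, y, r=10.0):
    """Global-local loss of Eq. (8), as a numpy sketch.

    fL: (n, d) node features, g: (k, d) label features,
    y: (n,) integer ground-truth labels, r: clamp hyper-parameter.
    """
    loss = 0.0
    for l in range(g.shape[0]):
        d2 = ((fL - g[l]) ** 2).sum(axis=1)   # squared distances to g_l
        loss += d2[y == l].sum()              # attract same-label nodes
        loss -= np.minimum(d2[y != l], r).sum()  # repel others, clamped at r
    return loss
```

For instance, when each node's feature coincides with its label's feature and the label features are far apart, the attraction terms vanish and each repulsion term saturates at $-r$.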
100
+
101
+ ## 5 Experiments
102
+
103
+ We now demonstrate GLGNN on semi- and fully-supervised node classification. Our GLGNN consists of an embedding layer (a $1 \times 1$ convolution), a series of GNN backbone layers, and the label-feature MLPs described in Sec. 4. As GNN backbones, we consider GCN [3] and GAT [12]. We elaborate on the specific architecture in Appendix A. We use the Adam [33] optimizer and perform a grid-search to choose the hyper-parameters (see Appendix B for more information). Our code is implemented using PyTorch [34] and PyTorch-Geometric [35] and trained on an Nvidia Titan RTX GPU.
104
+
105
+ We show that for all the considered tasks and datasets, our GLGNN offers a consistent improvement over the baseline methods; besides the obtained accuracy, we report the relative accuracy improvement compared to the baseline GCN and GAT methods. We also find that our GLGNN is competitive with recent state-of-the-art methods. We provide further dataset information in Appendix C.
106
+
107
+ ### 5.1 Semi-Supervised Node-Classification
108
+
109
+ We consider the Cora, Citeseer and Pubmed [36] datasets and their standard public training/validation/testing split as in [37], with 20 nodes per class for training. We follow the training and evaluation scheme of [13] and consider various GNN models: GCN, GAT, superGAT [38], APPNP [39], JKNet [40], GCNII [13], GRAND [41], PDE-GCN [42], pathGCN [43] and EGNN [14]. We also consider the improved training techniques P-reg [44], GraphMix [26] and NASA [30]. We summarize the results in Tab. 1 and illustrate the learnt label and node features in Fig. 1, revealing the clustering effect of learning label features.
110
+
111
+ Table 1: Semi-supervised node-classification accuracy (%).
112
+
113
+ <table><tr><td>Method</td><td>Cora</td><td>Citeseer</td><td>Pubmed</td></tr><tr><td>GCN</td><td>81.1</td><td>70.8</td><td>79.0</td></tr><tr><td>GAT</td><td>83.1</td><td>70.8</td><td>78.5</td></tr><tr><td>APPNP</td><td>83.3</td><td>71.8</td><td>80.1</td></tr><tr><td>JKNet</td><td>81.1</td><td>69.8</td><td>78.1</td></tr><tr><td>GCNII</td><td>85.5</td><td>73.4</td><td>80.3</td></tr><tr><td>GRAND</td><td>84.7</td><td>73.6</td><td>81.0</td></tr><tr><td>PDE-GCN</td><td>84.3</td><td>75.6</td><td>80.6</td></tr><tr><td>pathGCN</td><td>85.8</td><td>75.8</td><td>82.7</td></tr><tr><td>EGNN</td><td>85.7</td><td>-</td><td>80.1</td></tr><tr><td>superGAT</td><td>84.3</td><td>72.6</td><td>81.7</td></tr><tr><td>GraphMix</td><td>84.0</td><td>74.7</td><td>81.1</td></tr><tr><td>P-reg</td><td>83.9</td><td>74.8</td><td>80.1</td></tr><tr><td>NASA</td><td>85.1</td><td>75.5</td><td>80.2</td></tr><tr><td>GLGCN (ours)</td><td>84.2 (+3.8%)</td><td>73.3 (+3.5%)</td><td>81.5 (+3.1%)</td></tr><tr><td>GLGAT (ours)</td><td>84.5 (+1.6%)</td><td>72.6 (+2.5%)</td><td>81.2 (+3.4%)</td></tr></table>
114
+
115
+ Table 2: Fully-supervised node-classification accuracy (%) on homophilic datasets.
116
+
117
+ <table><tr><td>Method</td><td>Cora</td><td>Citeseer</td><td>Pubmed</td></tr><tr><td>Homophily</td><td>0.81</td><td>0.80</td><td>0.74</td></tr><tr><td>GCN</td><td>85.77</td><td>73.68</td><td>88.13</td></tr><tr><td>GAT</td><td>86.37</td><td>74.32</td><td>87.62</td></tr><tr><td>Geom-GCN</td><td>85.27</td><td>77.99</td><td>90.05</td></tr><tr><td>APPNP</td><td>87.87</td><td>76.53</td><td>89.40</td></tr><tr><td>JKNet (Drop)</td><td>87.46</td><td>75.96</td><td>89.45</td></tr><tr><td>GCNII</td><td>88.49</td><td>77.08</td><td>89.57</td></tr><tr><td>WRGAT</td><td>88.20</td><td>76.81</td><td>88.52</td></tr><tr><td>GCNII*</td><td>88.01</td><td>77.13</td><td>90.30</td></tr><tr><td>GGCN</td><td>87.95</td><td>77.14</td><td>89.15</td></tr><tr><td>H2GCN</td><td>87.87</td><td>77.11</td><td>89.49</td></tr><tr><td>GLGCN (ours)</td><td>88.47 (+3.1%)</td><td>77.72 (+5.4%)</td><td>88.61 (+0.05%)</td></tr><tr><td>GLGAT (ours)</td><td>88.65 (+2.6%)</td><td>77.37 (+4.1%)</td><td>88.74 (+0.1%)</td></tr></table>
118
+
119
+
120
+
121
+ ![01963f14-36cf-753a-9082-962c5bec8cba_3_929_222_550_424_0.jpg](images/01963f14-36cf-753a-9082-962c5bec8cba_3_929_222_550_424_0.jpg)
122
+
123
+ Figure 1: tSNE embedding of learnt label- and node-features of Cora.
124
+
125
+ Table 3: Fully-supervised node-classification accuracy (%) on heterophilic datasets.
126
+
127
+ <table><tr><td>Method</td><td>Corn.</td><td>Texas</td><td>Wisc.</td></tr><tr><td>Homophily</td><td>0.30</td><td>0.11</td><td>0.21</td></tr><tr><td>GCN</td><td>52.70</td><td>52.16</td><td>48.92</td></tr><tr><td>GAT</td><td>54.32</td><td>58.38</td><td>49.41</td></tr><tr><td>Geom-GCN</td><td>60.81</td><td>67.57</td><td>64.12</td></tr><tr><td>JKNet (Drop)</td><td>61.08</td><td>57.30</td><td>50.59</td></tr><tr><td>GCNII</td><td>74.86</td><td>69.46</td><td>74.12</td></tr><tr><td>GCNII*</td><td>76.49</td><td>77.84</td><td>81.57</td></tr><tr><td>GRAND</td><td>82.16</td><td>75.68</td><td>79.41</td></tr><tr><td>WRGAT</td><td>81.62</td><td>83.62</td><td>86.98</td></tr><tr><td>GGCN</td><td>85.68</td><td>84.86</td><td>86.86</td></tr><tr><td>H2GCN</td><td>82.70</td><td>84.86</td><td>87.65</td></tr><tr><td>GraphCON-GCN</td><td>84.30</td><td>85.40</td><td>87.80</td></tr><tr><td>GraphCON-GAT</td><td>83.20</td><td>82.20</td><td>85.70</td></tr><tr><td>GLGCN (ours)</td><td>74.86 (+42.0%)</td><td>70.27 (+34.7%)</td><td>65.29 (+33.4%)</td></tr><tr><td>GLGAT (ours)</td><td>75.67 (+39.3%)</td><td>70.01 (+19.9%)</td><td>65.88 (+33.3%)</td></tr></table>
128
+
129
+ ### 5.2 Fully-Supervised Node-Classification
130
+
131
+ To further validate the efficacy of our method, we perform fully-supervised node classification on 6 datasets, namely Cora, Citeseer, Pubmed, Cornell, Texas and Wisconsin, using the 10 random splits from [45] with a train/validation/test split of 48%/32%/20%, and report the average accuracy. In all experiments, we use 64 channels and perform a grid-search to determine the hyper-parameters. We compare our accuracy with methods like GCN, GAT, Geom-GCN [45], APPNP, JKNet [40], WRGAT [46], GCNII [13], DropEdge [20], H2GCN [47], GGCN [48] and GraphCON [49]. We distinguish between homophilic and heterophilic datasets, reporting results for the former in Tab. 2 and for the latter in Tab. 3, together with the homophily score of each dataset (adapted from [45]). We see an improvement across all benchmarks and dataset types compared to the baseline GCN and GAT methods, and competitive results on homophilic datasets with recent state-of-the-art methods.
132
+
133
+ ## 6 Conclusion
134
+
135
+ In this paper we propose GLGNN, a method that leverages global information for semi- and fully-supervised node classification. By learning and fusing global label features and local node features, we show that it is possible to cluster the nodes in a way that improves classification accuracy, and we demonstrate that our method outperforms the baseline models by a significant margin. Future research directions include evaluating this method on graph-classification datasets and exploring additional ways to extract and incorporate global label information.
136
+
137
+ References
138
+
139
+ [1] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013. 1
140
+
141
+ [2] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844-3852, 2016.
142
+
143
+ [3] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 1, 2, 3, 8
144
+
145
+ [4] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34 (4):18-42, 2017.
146
+
147
+ [5] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5115-5124, 2017. 1
148
+
149
+ [6] Davide Boscaini, Jonathan Masci, Emanuele Rodolà, and Michael Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. 2016.
150
+
151
+ [7] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. arXiv preprint arXiv:1801.07829, 2018.
152
+
153
+ [8] Rana Hanocka, Amir Hertz, Noa Fish, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or. Meshcnn: a network with an edge. ACM Transactions on Graphics (TOG), 38(4):90, 2019.
154
+
155
+ [9] Moshe Eliasof and Eran Treister. Diffgcn: Graph convolutional networks via differential operators and algebraic multigrid pooling. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada., 2020. 1
156
+
157
+ [10] Alexey Strokach, David Becerra, Carles Corbi-Verge, Albert Perez-Riba, and Philip M. Kim. Fast and flexible protein design using deep graph neural networks. Cell Systems, 11(4):402-411.e4, 2020. ISSN 2405-4712. doi: 10.1016/j.cels.2020.08.016. URL http://www.sciencedirect.com/science/article/pii/S2405471220303276. 1
158
+
159
+ [11] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. Nature, 596(7873):583-589, 2021. 1
160
+
161
+ [12] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJXMpikCZ.1, 2, 3, 8
162
+
163
+ [13] Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1725-1735. PMLR, 13-18 Jul 2020. URL http://proceedings.mlr.press/v119/chen20v.html.1, 2, 3, 4
164
+
165
+ [14] Kaixiong Zhou, Xiao Huang, Daochen Zha, Rui Chen, Li Li, Soo-Hyun Choi, and Xia Hu. Dirichlet energy constrained learning for deep graph neural networks. Advances in Neural Information Processing Systems, 34, 2021. 1, 2, 3
166
+
167
+ [15] Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3693-3702, 2017. 1
168
+
169
+ [16] Jonathan Masci, Davide Boscaini, Michael Bronstein, and Pierre Vandergheynst. Geodesic convolutional neural networks on riemannian manifolds. In Proceedings of the IEEE international conference on computer vision workshops, pages 37-45, 2015.
170
+
171
+ [17] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1263-1272. JMLR. org, 2017. 1
172
+
173
+ [18] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014. 2
174
+
175
+ [19] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. URL https: //openreview.net/forum?id=ryGs6iA5Km. 2
176
+
177
+ [20] Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. Dropedge: Towards deep graph convolutional networks on node classification. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=Hkx1qkrKPr.2,4
178
+
179
+ [21] Tien Huu Do, Duc Minh Nguyen, Giannis Bekoulis, Adrian Munteanu, and Nikos Deligiannis. Graph convolutional neural networks with node transition probability-based message passing and dropnode regularization. Expert Systems with Applications, 174:114711, 2021. 2
180
+
181
+ [22] Lingxiao Zhao and Leman Akoglu. Pairnorm: Tackling oversmoothing in gnns. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum? id=rkecl1rtwB. 2
182
+
183
+ [23] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. Proceedings of the AAAI Conference on Artificial Intelligence, 34:3438-3445, 04 2020. doi: 10.1609/aaai. v34i04.5747. 2
184
+
185
+ [24] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017. 2
186
+
187
+ [25] Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In International Conference on Machine Learning, pages 6438-6447. PMLR, 2019. 2
188
+
189
+ [26] Vikas Verma, Meng Qu, Kenji Kawaguchi, Alex Lamb, Yoshua Bengio, Juho Kannala, and Jian Tang. Graphmix: Improved training of gnns for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 10024-10032, 2021. 2, 3
190
+
191
+ [27] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bklr3jOcKX.2
192
+
193
+ [28] Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rklz9iAcKQ.2
194
+
195
+ [29] Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, and Jian Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. arXiv preprint arXiv:1908.01000, 2019. 2
196
+
197
+ [30] Deyu Bo, BinBin Hu, Xiao Wang, Zhiqiang Zhang, Chuan Shi, and Jun Zhou. Regularizing graph neural networks via consistency-diversity graph augmentations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 3913-3921, 2022. 2, 3
198
+
199
+ [31] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018. 2
200
+
201
+ [32] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF international conference on computer vision, pages 1314-1324, 2019. 2
202
+
203
+ [33] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 3
204
+
205
+ [34] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy,
206
+
207
+ Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché- Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019. 3
208
+
209
+ [35] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. 3
210
+
211
+ [36] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93-93, 2008. 3
212
+
213
+ [37] Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning, pages 40-48. PMLR, 2016. 3
214
+
215
+ [38] Dongkwan Kim and Alice Oh. How to find your friendly neighborhood: Graph attention design with self-supervision. In International Conference on Learning Representations, 2020. 3
216
+
217
+ [39] Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Combining neural networks with personalized pagerank for classification on graphs. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=H1gL-2A9Ym.3
218
+
219
+ [40] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5453-5462. PMLR, 10-15 Jul 2018. URL http://proceedings.mlr.press/v80/xu18c.html.3, 4
220
+
221
+ [41] Benjamin Paul Chamberlain, James Rowbottom, Maria Gorinova, Stefan Webb, Emanuele Rossi, and Michael M Bronstein. Grand: Graph neural diffusion. arXiv preprint arXiv:2106.10934, 2021. 3
222
+
223
+ [42] Moshe Eliasof, Eldad Haber, and Eran Treister. PDE-GCN: Novel architectures for graph neural networks motivated by partial differential equations. Advances in Neural Information Processing Systems, 34:3836-3849, 2021. 3
224
+
225
+ [43] Moshe Eliasof, Eldad Haber, and Eran Treister. pathgcn: Learning general graph spatial operators from paths. In International Conference on Machine Learning, pages 5878-5891. PMLR, 2022. 3
226
+
227
+ [44] Han Yang, Kaili Ma, and James Cheng. Rethinking graph regularization for graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 4573-4581, 2021. 3
228
+
229
+ [45] Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. Geom-gcn: Geometric graph convolutional networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=S1e2agrFvS.4
230
+
231
+ [46] Susheel Suresh, Vinith Budde, Jennifer Neville, Pan Li, and Jianzhu Ma. Breaking the limit of graph neural networks by improving the assortativity of graphs with local mixing patterns. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021. 4
232
+
233
+ [47] Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in Neural Information Processing Systems, 33:7793-7804, 2020. 4
234
+
235
+ [48] Yujun Yan, Milad Hashemi, Kevin Swersky, Yaoqing Yang, and Danai Koutra. Two sides of the same coin: Heterophily and oversmoothing in graph convolutional neural networks. arXiv preprint arXiv:2102.06462, 2021. 4
236
+
237
+ [49] T Konstantin Rusch, Ben Chamberlain, James Rowbottom, Siddhartha Mishra, and Michael Bronstein. Graph-coupled oscillator networks. In International Conference on Machine Learning, pages 18888-18909. PMLR, 2022. 4
238
+
239
+ [50] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249-256. JMLR Workshop and Conference Proceedings, 2010. 8
240
+
241
+ ## A Architecture
242
+
243
+ We now elaborate on the specific architecture used in our experiments in Sec. 5. Our network consists of an opening (embedding) layer (a $1 \times 1$ convolution), a sequence of GNN backbone layers (see below for the specific aggregation rules of GCN and GAT), and a series of $1 \times 1$ convolutions that learn the global label features. We use a single type of architecture, based on the GCN [3] scheme for node classification; the difference between our GLGCN and GLGAT is the GNN backbone. We specify the node feature extraction architecture in Tab. 4 and the label feature extraction architecture in Tab. 5. In what follows, we denote by ${c}_{in}$ and $k$ the numbers of input and output channels, respectively, and by $c$ the number of features in hidden layers. We initialize the embedding and label-feature layers with the Glorot [50] initialization, and ${\mathbf{K}}^{\left( l\right) }$ from Eq. (1) is initialized with an identity matrix of shape $c \times c$ . We denote the number of GNN layers by $L$ and the dropout probability by $p$ .
244
+
245
+ The GCN [3] backbone is given by:
246
+
247
+ $$
248
+ {\mathbf{f}}^{\left( l + 1\right) } = \operatorname{ReLU}\left( {\widetilde{\mathbf{P}}{\mathbf{f}}^{\left( l\right) }{\mathbf{K}}^{\left( l\right) }}\right) . \tag{9}
249
+ $$
250
+
251
+ This is the same propagation rule as Eq. (1) in the main text.
252
+
253
+ GAT. The GAT [12] backbone defines the propagation operator:
254
+
255
+ $$
256
+ {\alpha }_{ij}^{\left( l\right) } = \frac{\exp \left( {\operatorname{LeakyReLU}\left( {{\mathbf{a}}^{{\left( l\right) }^{\top }}\left\lbrack {{\widetilde{\mathbf{K}}}^{\left( l\right) }{\mathbf{f}}_{i}^{\left( l\right) } \oplus {\widetilde{\mathbf{K}}}^{\left( l\right) }{\mathbf{f}}_{j}^{\left( l\right) }}\right\rbrack }\right) }\right) }{\mathop{\sum }\limits_{{p \in {\mathcal{N}}_{i}}}\exp \left( {\operatorname{LeakyReLU}\left( {{\mathbf{a}}^{{\left( l\right) }^{\top }}\left\lbrack {{\widetilde{\mathbf{K}}}^{\left( l\right) }{\mathbf{f}}_{i}^{\left( l\right) } \oplus {\widetilde{\mathbf{K}}}^{\left( l\right) }{\mathbf{f}}_{p}^{\left( l\right) }}\right\rbrack }\right) }\right) }, \tag{10}
257
+ $$
258
+
259
+ where ${\mathbf{a}}^{\left( l\right) } \in {\mathbb{R}}^{2c}$ and ${\widetilde{\mathbf{K}}}^{\left( l\right) } \in {\mathbb{R}}^{c \times c}$ are trainable parameters, $\oplus$ denotes channel-wise concatenation, and the neighbourhood of the $i$ -th node is denoted by ${\mathcal{N}}_{i} = \{ j \mid \left( {i, j}\right) \in \mathcal{E}\}$ .
260
+
261
+ By gathering all ${\alpha }_{ij}^{\left( l\right) }$ for every edge $\left( {i, j}\right) \in \mathcal{E}$ into a propagation matrix ${\mathbf{S}}^{\left( l\right) } \in {\mathbb{R}}^{n \times n}$ , we obtain the GAT architecture:
262
+
263
+
264
+
265
+ $$
266
+ {\mathbf{f}}^{\left( l + 1\right) } = \operatorname{ReLU}\left( {{\mathbf{S}}^{\left( l\right) }{\mathbf{f}}^{\left( l\right) }{\mathbf{K}}^{\left( l\right) }}\right) . \tag{11}
267
+ $$
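The attention coefficients of Eq. (10) can be sketched for a dense adjacency matrix. This is a numpy sketch of the standard GAT rule, not the authors' code; the LeakyReLU slope of 0.2 is an assumption (the common GAT default), and every node is assumed to have at least one neighbour.

```python
import numpy as np

def gat_attention(f, A, K, a, slope=0.2):
    """Attention coefficients of Eq. (10), neighbourhoods read off a dense A.

    f: (n, c) node features, K: (c, c) shared transform K~,
    a: (2c,) attention vector. Returns the (n, n) propagation matrix S.
    """
    n = f.shape[0]
    h = f @ K.T                                   # K~ f_j for every node
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.nonzero(A[i])[0]                # N_i = {j : (i, j) in E}
        e = np.array([a @ np.concatenate([h[i], h[j]]) for j in nbrs])
        e = np.where(e > 0, e, slope * e)         # LeakyReLU
        e = np.exp(e - e.max())                   # stable exponentials
        S[i, nbrs] = e / e.sum()                  # softmax over N_i
    return S
```

Each row of the returned matrix sums to one over the node's neighbourhood, so applying it to the features realizes the attention-weighted aggregation of Eq. (11).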
268
+
269
+ Table 4: The architecture used for node features extraction.
270
+
271
+ <table><tr><td>Input size</td><td>Layer</td><td>Output size</td></tr><tr><td>$n \times {c}_{in}$</td><td>Dropout(p)</td><td>$n \times {c}_{in}$</td></tr><tr><td>$n \times {c}_{in}$</td><td>$1 \times 1$ Convolution</td><td>$n \times c$</td></tr><tr><td>$n \times c$</td><td>ReLU</td><td>$n \times c$</td></tr><tr><td>$n \times c$</td><td>$L \times$ GNN backbone</td><td>$n \times c$</td></tr></table>
272
+
273
+ Table 5: The architecture used for label features extraction. The input of this architecture is the output of Tab. 4
274
+
275
+ <table><tr><td>Input size</td><td>Layer</td><td>Output size</td></tr><tr><td>$n \times c$</td><td>MaxPool</td><td>$1 \times c$</td></tr><tr><td>$1 \times c$</td><td>$k \times 1 \times 1$ Convolutions</td><td>$k \times {12} \cdot c$</td></tr><tr><td>$k \times {12} \cdot c$</td><td>ReLU</td><td>$k \times {12} \cdot c$</td></tr><tr><td>$k \times {12} \cdot c$</td><td>$k \times 1 \times 1$ Convolutions</td><td>$k \times c$</td></tr></table>
276
+
277
+ ## B Hyper-parameters
278
+
279
+ We perform a grid-search to determine the hyper-parameter values. In Tab. 6 we specify each hyper-parameter and the range of values that we considered.
280
+
281
+ ## C Datasets
282
+
283
+ The statistics of the datasets used in our experiments are provided in Tab. 7.
284
+
285
+ Table 6: Hyper-parameters and considered range for grid-search. LR and WD denote the learning rate and weight decay of embedding and label feature extraction layers. ${\mathrm{{LR}}}_{\mathrm{{GNN}}}$ and ${\mathrm{{WD}}}_{\mathrm{{GNN}}}$ denote the learning rate and weight decay of the GNN layers. $\alpha$ is the balancing coefficient from Eq. (7).
286
+
287
+ <table><tr><td>Hyper-parameter</td><td>Values range</td></tr><tr><td>LR</td><td>[1e-1, 1e-2, 1e-3, 1e-4]</td></tr><tr><td>${\mathrm{{LR}}}_{\mathrm{{GNN}}}$</td><td>[1e-1, 1e-2, 1e-3, 1e-4]</td></tr><tr><td>WD</td><td>[1e-3,1e-4,1e-5,0]</td></tr><tr><td>${\mathrm{{WD}}}_{\mathrm{{GNN}}}$</td><td>[1e-3, 1e-4, 1e-5, 0]</td></tr><tr><td>$\alpha$</td><td>[1e+2, 1e+1,1, 1e-1,1e-2]</td></tr><tr><td>$p$</td><td>$\left\lbrack {{0.5},{0.6},{0.7}}\right\rbrack$</td></tr></table>
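The grid-search over the ranges of Tab. 6 might look as follows; `evaluate` is a hypothetical stand-in for a full training run that returns validation accuracy.

```python
import itertools

# Ranges from Tab. 6
grid = {
    "lr":     [1e-1, 1e-2, 1e-3, 1e-4],
    "lr_gnn": [1e-1, 1e-2, 1e-3, 1e-4],
    "wd":     [1e-3, 1e-4, 1e-5, 0.0],
    "wd_gnn": [1e-3, 1e-4, 1e-5, 0.0],
    "alpha":  [1e2, 1e1, 1.0, 1e-1, 1e-2],
    "p":      [0.5, 0.6, 0.7],
}

def grid_search(evaluate):
    """Return the configuration maximizing validation accuracy under `evaluate`."""
    best_cfg, best_acc = None, float("-inf")
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(grid, values))
        acc = evaluate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```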
288
+
289
+ Table 7: Datasets statistics.
290
+
291
+ <table><tr><td>Dataset</td><td>Classes</td><td>Nodes</td><td>Edges</td><td>Features</td></tr><tr><td>Cora</td><td>7</td><td>2,708</td><td>5,429</td><td>1,433</td></tr><tr><td>Citeseer</td><td>6</td><td>3,327</td><td>4,732</td><td>3,703</td></tr><tr><td>Pubmed</td><td>3</td><td>19,717</td><td>44,338</td><td>500</td></tr><tr><td>Cornell</td><td>5</td><td>183</td><td>295</td><td>1,703</td></tr><tr><td>Texas</td><td>5</td><td>183</td><td>309</td><td>1,703</td></tr><tr><td>Wisconsin</td><td>5</td><td>251</td><td>499</td><td>1,703</td></tr></table>
292
+
papers/LOG/LOG 2022/LOG 2022 Conference/YCgwkDo56q/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,274 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § GLOBAL-LOCAL GRAPH NEURAL NETWORKS FOR NODE-CLASSIFICATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ The task of graph node-classification is often approached using a local Graph Neural Network (GNN) that learns only local information from the node input features and their adjacency. In this paper we propose to benefit from both global and local information by learning label and node features to improve node-classification accuracy. We therefore call our method Global-Local-GNN (GLGNN). To learn proper label features, for each label we maximize the similarity between its features and the features of nodes that belong to the label, while maximizing its distance from the features of nodes that do not belong to the considered label. We then use the learnt label features to predict the node-classification map. We demonstrate our GLGNN using GCN and GAT as GNN backbones, and show that our GLGNN approach improves baseline performance on the node-classification task.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ The field of Graph Neural Networks (GNNs) has gained large popularity in recent years [1-5] in a wide variety of fields and applications such as computer graphics and vision [5-9], Bioinformatics [10, 11], node-classification $\left\lbrack {3,{12},{13}}\right\rbrack$ and others. In the context of node-classification, most of the methods consider only nodal (i.e., local) information by performing local aggregations and $1 \times 1$ convolutions, e.g., [3, 12-14]. In this paper we propose to incorporate label (i.e., global) information to improve the training of GNNs. In particular, we propose to learn a feature vector for each label (class) in the data, which is then used to determine the final prediction map and is mutually utilized with the learnt node features. Because our method is based on learning global features that scale as the number of labels in the dataset, it does not add significant computational overhead compared to the backbone GNNs. We show the generality of this approach by demonstrating it on GCN [3] and GAT [12] on a variety of node-classification datasets, both in semi- and fully-supervised settings. Our experiments reveal that our GLGNN approach is beneficial for all the considered datasets, and we also illustrate the learnt global features with respect to the node features for a qualitative assessment of our method. Our contributions are as follows:
16
+
17
+ * We propose to learn label features to capture global information of the input graph.
18
+
19
+ * We fuse label- and node- features to predict a node-classification map.
20
+
21
+ * We demonstrate our method qualitatively by illustrating the learnt label features in Fig. 1 and quantitatively by demonstrating the benefit of using GLGNN approach on 6 real-world datasets.
22
+
23
+ § 2 RELATED WORK
24
+
25
+ § 2.1 GRAPH NEURAL NETWORKS
26
+
27
+ Typically, graph neural networks (GNNs) are categorized into spectral [1] and spatial [3, 5, 15-17] types. While the former learns a global convolution kernel, it scales with the number of nodes $n$ in the graph and has a higher computational complexity. To obtain local convolutions, spatial GNNs formulate a local aggregation scheme, usually implemented using the Message-Passing Neural Network mechanism [17], where each node aggregates features (messages) from its neighbours according to some policy. In this work we follow the latter, whilst adding a global mechanism that learns label features to improve accuracy on node-classification tasks.
28
+
29
+ § 2.2 IMPROVED TRAINING OF GNNS
30
+
31
+ To improve accuracy, recent works introduce new training policies, objective functions and augmentations. A common trick for training on small datasets like Cora, Citeseer and Pubmed is the incorporation of Dropout [18] after every GNN layer, which has become a standard practice $\left\lbrack {3,{13},{14},{19}}\right\rbrack$ . Other methods suggest randomly altering the data rather than the GNN neural units. For example, DropEdge [20] and DropNode [21] randomly drop graph edges and nodes, respectively. In PairNorm [22], the authors propose a normalization layer that alleviates the over-smoothing phenomenon in GNNs [23]. Another approach is the Mixup technique, which enriches the learning data and has shown success in image classification [24, 25]. Following that, GraphMix [26] proposed an interpolation-based regularization method by parameter sharing of GNNs and point-wise convolutions.
32
+
33
+ Other methods consider GNN training from an information and entropy point of view, following the success of mutual information in CNNs [27]. For example, DGI [28] learns a global graph vector and considers its correspondence with local patch vectors; however, it does not consider label features as in our work. In InfoGraph [29] the authors learn a discriminative network for graph classification tasks, and in [30] a consistency-diversity augmentation is proposed via an entropy perspective for node and graph classification tasks.
34
+
35
+ § 3 NOTATIONS
36
+
37
+ We denote an undirected graph by the tuple $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ where $\mathcal{V}$ is a set of $n$ nodes and $\mathcal{E}$ is a set of $m$ edges, and by ${\mathbf{f}}^{\left( l\right) } \in {\mathbb{R}}^{n \times c}$ the feature tensor of the nodes $\mathcal{V}$ with $c$ channels at the $l$ -th layer. The adjacency matrix is defined by $\mathbf{A} \in {\mathbb{R}}^{n \times n}$ , where ${\mathbf{A}}_{ij} = 1$ if there exists an edge $\left( {i,j}\right) \in \mathcal{E}$ and 0 otherwise, and the diagonal degree matrix is denoted by $\mathbf{D}$ , where ${\mathbf{D}}_{ii}$ is the degree of the $i$ -th node.
38
+
39
+ Let us also denote the adjacency and degree matrices with added self-edges by $\widetilde{\mathbf{A}}$ and $\widetilde{\mathbf{D}}$ , respectively. Using this notation, for example, the propagation operator from GCN [3] is obtained by $\widetilde{\mathbf{P}} =$ ${\widetilde{\mathbf{D}}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}{\widetilde{\mathbf{D}}}^{-\frac{1}{2}}$ , and its architecture is given by
40
+
41
+ $$
42
+ {\mathbf{f}}^{\left( l + 1\right) } = \operatorname{ReLU}\left( {\widetilde{\mathbf{P}}{\mathbf{f}}^{\left( l\right) }{\mathbf{K}}^{\left( l\right) }}\right) , \tag{1}
43
+ $$
44
+
45
+ where ${\mathbf{K}}^{\left( l\right) }$ is a $1 \times 1$ convolution matrix.
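A minimal NumPy sketch of Eq. (1), with the self-loops and symmetric normalization made explicit; dense matrices are assumed for brevity.

```python
import numpy as np

def gcn_layer(f, A, K):
    """Eq. (1): ReLU(P~ f K), with P~ = D~^{-1/2} A~ D~^{-1/2} and self-loops added."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)                 # diagonal of D~
    P = A_tilde / np.sqrt(np.outer(d, d))   # symmetric normalization
    return np.maximum(P @ f @ K, 0.0)
```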
46
+
47
+ We consider the node-classification task with $k$ labels. We denote the ground-truth labels by $\mathbf{y} \in$ ${\mathbb{R}}^{n \times k}$ and the node-classification prediction by applying SoftMax to the output of the network ${\mathbf{f}}^{\text{ out }}$
48
+
49
+ $$
50
+ \widehat{\mathbf{y}} = \operatorname{SoftMax}\left( {\mathbf{f}}^{\text{ out }}\right) \in {\mathbb{R}}^{n \times k}. \tag{2}
51
+ $$
52
+
53
+ § 4 METHOD
54
+
55
+ We now describe the local and global feature extraction mechanism and our objective functions.
56
+
57
+ Local features. The local information is obtained by learning node features $\mathbf{f} \in {\mathbb{R}}^{n \times d}$ using some backbone denoted by GNN. In our experiments, we evaluate our method with GNN being a GCN [3] as in Eq. (1) or a GAT [12]. Note that our GLGNN approach does not assume a specific GNN backbone and thus can possibly be utilized with other GNNs.
58
+
59
+ Global features. Our global information mechanism learns label features $\mathbf{g} \in {\mathbb{R}}^{k \times d}$ . Specifically, to obtain the global features we consider the concatenation of initial nodes-embedding ${\mathbf{f}}^{\left( 0\right) }$ and the last GNN layer node features ${\mathbf{f}}^{\left( \mathrm{L}\right) }$ denoted by $\left\lbrack {{\mathbf{f}}^{\left( 0\right) } \oplus {\mathbf{f}}^{\left( L\right) }}\right\rbrack$ . We then perform a single $1 \times 1$ convolution denoted by ${\mathbf{K}}_{\mathrm{g}}$ , followed by a ReLU activation, and feed it to a global MaxPool readout function to obtain a single vector $\mathbf{s} \in {\mathbb{R}}^{d}$ . Formally:
60
+
61
+ $$
62
+ \mathbf{s} = \operatorname{MaxPool}\left( {\operatorname{ReLU}\left( {{\mathbf{K}}_{\mathrm{g}}\left\lbrack {{\mathbf{f}}^{\left( 0\right) } \oplus {\mathbf{f}}^{\left( \mathrm{L}\right) }}\right\rbrack }\right) }\right) . \tag{3}
63
+ $$
64
+
65
+ Using the global vector $\mathbf{s}$ , we utilize $k$ (the number of labels) multi-layer perceptrons (MLPs) that are implemented as an inverted bottleneck [31] and, in particular, resemble the squeeze-and-excite mechanism from [32]. Each MLP computes the following:
66
+
67
+ $$
68
+ {\mathbf{g}}_{\mathbf{i}} = {\mathbf{K}}_{\mathbf{s}}\left( {\operatorname{ReLU}\left( {{\mathbf{K}}_{\mathrm{e}}\mathbf{s}}\right) }\right) , \tag{4}
69
+ $$
70
+
71
+ where ${\mathbf{K}}_{\mathrm{e}},{\mathbf{K}}_{\mathrm{s}}$ are expanding (from $d$ to $e \times d$ ) and shrinking (from $e \times d$ to $d$ ) $1 \times 1$ convolutions, and the expansion rate $e$ is a hyper-parameter, set to $e = {12}$ in our experiments. Note that this operation can be efficiently implemented using a grouped convolution to obtain $\mathbf{g} = \left\lbrack {{\mathbf{g}}_{0},\ldots ,{\mathbf{g}}_{\mathrm{k} - 1}}\right\rbrack$ in parallel. Also, because $\mathbf{s}$ is a vector, the computational overhead is rather low compared to the total complexity of the backbone GNN.
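Eqs. (3)-(4) can be sketched together as follows, with the per-label MLP weights stacked into tensors instead of a grouped convolution; the names and the loop-based form are our assumptions.

```python
import numpy as np

def label_features(f0, fL, Kg, Ke, Ks):
    """Eq. (3): s = MaxPool(ReLU(Kg [f0 (+) fL])); Eq. (4): g_i = Ks_i ReLU(Ke_i s).
    Ke: (k, e*d, d) and Ks: (k, d, e*d) stack the per-label MLP weights."""
    z = np.maximum(np.concatenate([f0, fL], axis=1) @ Kg.T, 0.0)  # (n, d)
    s = z.max(axis=0)                                             # global MaxPool over nodes
    g = np.stack([Ks[i] @ np.maximum(Ke[i] @ s, 0.0) for i in range(Ke.shape[0])])
    return s, g                                                   # (d,), (k, d)
```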
72
+
73
+ Node-classification map. To obtain a node-classification prediction map, we consider the matrix-vector product of the final GNN output ${\mathbf{f}}^{\left( L\right) } \in {\mathbb{R}}^{n \times d}$ with each of the label features ${\mathbf{g}}_{i} \in {\mathbb{R}}^{d}$ in (4). More formally, for each label we obtain the following node-label correspondence vector:
74
+
75
+ $$
76
+ {\mathbf{z}}_{i} = {\mathbf{f}}^{\left( L\right) } \cdot {\mathbf{g}}_{i} \in {\mathbb{R}}^{n}. \tag{5}
77
+ $$
78
+
79
+ By concatenating the $k$ correspondence vectors and applying the SoftMax function, we obtain a node-classification map
80
+
81
+ $$
82
+ \widehat{\mathbf{y}} = \operatorname{SoftMax}\left( {{\mathbf{z}}_{0} \oplus \ldots \oplus {\mathbf{z}}_{\mathrm{k} - 1}}\right) \in {\mathbb{R}}^{\mathrm{n} \times \mathrm{k}}, \tag{6}
83
+ $$
84
+
85
+ which is the final output of our GLGNN.
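A sketch of Eqs. (5)-(6): the correspondence vectors are stacked column-wise and normalized row-wise with a SoftMax.

```python
import numpy as np

def softmax_rows(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def classify(fL, g):
    """Eq. (5): z_i = f^(L) g_i; Eq. (6): concatenate over labels and apply SoftMax."""
    Z = fL @ g.T                           # (n, k); column i is z_i
    return softmax_rows(Z)
```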
86
+
87
+ Objective functions. To train our GLGNN we propose to minimize the following objective function:
88
+
89
+ $$
90
+ \mathcal{L} = {\mathcal{L}}_{CE} + \alpha {\mathcal{L}}_{\mathrm{{GL}}}, \tag{7}
91
+ $$
92
+
93
+ where ${\mathcal{L}}_{CE}$ denotes the cross-entropy loss between the ground-truth labels $\mathbf{y}$ and the predicted node labels $\widehat{\mathbf{y}}$ from Eq. (6), $\alpha$ is a positive hyper-parameter, and ${\mathcal{L}}_{\mathrm{{GL}}}$ denotes a global-local loss that ties label features to node features: for each label, it demands similarity between the label features and the features of nodes that belong to that label, and dissimilarity from the features of nodes that do not, as follows
94
+
95
+ $$
96
+ {\mathcal{L}}_{\mathrm{{GL}}} = \mathop{\sum }\limits_{{l = 0}}^{{k - 1}}\left( {\mathop{\sum }\limits_{{{\mathbf{y}}_{i} = l}}{\begin{Vmatrix}{\mathbf{g}}_{l} - {\mathbf{f}}_{i}^{\left( L\right) }\end{Vmatrix}}_{2}^{2} - \mathop{\sum }\limits_{{{\mathbf{y}}_{i} \neq l}}\min \left( {{\begin{Vmatrix}{\mathbf{g}}_{l} - {\mathbf{f}}_{\mathrm{i}}^{\left( \mathrm{L}\right) }\end{Vmatrix}}_{2}^{2},\mathrm{r}}\right) }\right) , \tag{8}
97
+ $$
98
+
99
+ where $\min \left( {\cdot , \cdot }\right)$ returns the minimum of its two arguments, clamping the push term, and $r$ is a positive hyper-parameter. In our experiments we set $r = {10}$ .
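Eq. (8) reduces to a few lines of NumPy; the vectorized pairwise-distance form below is our choice of implementation.

```python
import numpy as np

def global_local_loss(fL, g, y, r=10.0):
    """Eq. (8): pull node features toward their own label's features, push them
    away from every other label's features (push term clamped at r)."""
    d2 = ((fL[:, None, :] - g[None, :, :]) ** 2).sum(-1)        # (n, k) squared distances
    same = y[:, None] == np.arange(g.shape[0])[None, :]         # node i belongs to label l
    return d2[same].sum() - np.minimum(d2[~same], r).sum()
```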
100
+
101
+ § 5 EXPERIMENTS
102
+
103
+ We now demonstrate GLGNN on semi- and fully-supervised node-classification. Our GLGNN consists of an embedding layer (a $1 \times 1$ convolution), a series of GNN backbone layers and the label features MLPs as described in Sec. 4. As GNN backbones, we consider GCN [3] and GAT [12]. We elaborate on the specific architecture in Appendix A. We use the Adam [33] optimizer, and perform a grid-search to choose the hyper-parameters (see Appendix B for more information). Our code is implemented using PyTorch [34] and PyTorch-Geometric [35], trained on an Nvidia Titan RTX GPU.
104
+
105
+ We show that for all the considered tasks and datasets, our GLGNN offers a consistent improvement over the baseline methods, and besides the obtained accuracy we report the relative accuracy improvement compared to the baseline GCN and GAT methods. Also, we find that our GLGNN is competitive with recent state-of-the-art methods. We provide further datasets information in Appendix C.
106
+
107
+ § 5.1 SEMI-SUPERVISED NODE-CLASSIFICATION
108
+
109
+ We consider the Cora, Citeseer and Pubmed [36] datasets and their standard, public training/validation/testing split as in [37], with 20 nodes per class for training. We follow the training and evaluation scheme of [13] and consider various GNN models: GCN, GAT, superGAT [38], APPNP [39], JKNet [40], GCNII [13], GRAND [41], PDE-GCN [42], pathGCN [43] and EGNN [14]. We also consider other improved training techniques: P-reg [44], GraphMix [26] and NASA [30]. We summarize the results in Tab. 1 and illustrate the learnt label and node features in Fig. 1, revealing the clustering effect of learning label features.
110
+
111
+ Table 1: Semi-supervised node-classification accuracy (%).
112
+
113
+ <table><tr><td>Method</td><td>Cora</td><td>Citeseer</td><td>Pubmed</td></tr><tr><td>GCN</td><td>81.1</td><td>70.8</td><td>79.0</td></tr><tr><td>GAT</td><td>83.1</td><td>70.8</td><td>78.5</td></tr><tr><td>APPNP</td><td>83.3</td><td>71.8</td><td>80.1</td></tr><tr><td>JKNet</td><td>81.1</td><td>69.8</td><td>78.1</td></tr><tr><td>GCNII</td><td>85.5</td><td>73.4</td><td>80.3</td></tr><tr><td>GRAND</td><td>84.7</td><td>73.6</td><td>81.0</td></tr><tr><td>PDE-GCN</td><td>84.3</td><td>75.6</td><td>80.6</td></tr><tr><td>pathGCN</td><td>85.8</td><td>75.8</td><td>82.7</td></tr><tr><td>EGNN</td><td>85.7</td><td>-</td><td>80.1</td></tr><tr><td>superGAT</td><td>84.3</td><td>72.6</td><td>81.7</td></tr><tr><td>GraphMix</td><td>84.0</td><td>74.7</td><td>81.1</td></tr><tr><td>P-reg</td><td>83.9</td><td>74.8</td><td>80.1</td></tr><tr><td>NASA</td><td>85.1</td><td>75.5</td><td>80.2</td></tr><tr><td>GLGCN (ours)</td><td>84.2 (+3.8%)</td><td>73.3 (+3.5%)</td><td>81.5 (+3.1%)</td></tr><tr><td>GLGAT (ours)</td><td>84.5 (+1.6%)</td><td>72.6 (+2.5%)</td><td>81.2 (+3.4%)</td></tr></table>
163
+
164
+ Table 2: Fully-supervised node-classification accuracy (%) on homophilic datasets.
165
+
166
+ <table><tr><td>Method</td><td>Cora (hom. 0.81)</td><td>Citeseer (hom. 0.80)</td><td>Pubmed (hom. 0.74)</td></tr><tr><td>GCN</td><td>85.77</td><td>73.68</td><td>88.13</td></tr><tr><td>GAT</td><td>86.37</td><td>74.32</td><td>87.62</td></tr><tr><td>Geom-GCN</td><td>85.27</td><td>77.99</td><td>90.05</td></tr><tr><td>APPNP</td><td>87.87</td><td>76.53</td><td>89.40</td></tr><tr><td>JKNet (Drop)</td><td>87.46</td><td>75.96</td><td>89.45</td></tr><tr><td>GCNII</td><td>88.49</td><td>77.08</td><td>89.57</td></tr><tr><td>WRGAT</td><td>88.20</td><td>76.81</td><td>88.52</td></tr><tr><td>GCNII*</td><td>88.01</td><td>77.13</td><td>90.30</td></tr><tr><td>GGCN</td><td>87.95</td><td>77.14</td><td>89.15</td></tr><tr><td>H2GCN</td><td>87.87</td><td>77.11</td><td>89.49</td></tr><tr><td>GLGCN (ours)</td><td>88.47 (+3.1%)</td><td>77.72 (+5.4%)</td><td>88.61 (+0.05%)</td></tr><tr><td>GLGAT (ours)</td><td>88.65 (+2.6%)</td><td>77.37 (+4.1%)</td><td>88.74 (+0.1%)</td></tr></table>
213
+
214
+
215
+
216
+ Figure 1: tSNE embedding of learnt label- and node-features of Cora.
217
+
218
+ Table 3: Fully-supervised node-classification accuracy (%) on heterophilic datasets.
219
+
220
+ <table><tr><td>Method</td><td>Cornell (hom. 0.30)</td><td>Texas (hom. 0.11)</td><td>Wisconsin (hom. 0.21)</td></tr><tr><td>GCN</td><td>52.70</td><td>52.16</td><td>48.92</td></tr><tr><td>GAT</td><td>54.32</td><td>58.38</td><td>49.41</td></tr><tr><td>Geom-GCN</td><td>60.81</td><td>67.57</td><td>64.12</td></tr><tr><td>JKNet (Drop)</td><td>61.08</td><td>57.30</td><td>50.59</td></tr><tr><td>GCNII</td><td>74.86</td><td>69.46</td><td>74.12</td></tr><tr><td>GCNII*</td><td>76.49</td><td>77.84</td><td>81.57</td></tr><tr><td>GRAND</td><td>82.16</td><td>75.68</td><td>79.41</td></tr><tr><td>WRGAT</td><td>81.62</td><td>83.62</td><td>86.98</td></tr><tr><td>GGCN</td><td>85.68</td><td>84.86</td><td>86.86</td></tr><tr><td>H2GCN</td><td>82.70</td><td>84.86</td><td>87.65</td></tr><tr><td>GraphCON-GCN</td><td>84.30</td><td>85.40</td><td>87.80</td></tr><tr><td>GraphCON-GAT</td><td>83.20</td><td>82.20</td><td>85.70</td></tr><tr><td>GLGCN (ours)</td><td>74.86 (+42.0%)</td><td>70.27 (+34.7%)</td><td>65.29 (+33.4%)</td></tr><tr><td>GLGAT (ours)</td><td>75.67 (+39.3%)</td><td>70.01 (+19.9%)</td><td>65.88 (+33.3%)</td></tr></table>
267
+
268
+ § 5.2 FULLY-SUPERVISED NODE-CLASSIFICATION
269
+
270
+ To further validate the efficacy of our method, we employ fully-supervised node-classification on 6 datasets, namely, Cora, Citeseer, Pubmed, Cornell, Texas and Wisconsin using the 10 random splits from [45] with train/validation/test split of ${48}\% ,{32}\% ,{20}\%$ respectively, and report their average accuracy. In all experiments, we use 64 channels and perform a grid-search to determine the hyper-parameters. We compare our accuracy with methods like GCN, GAT, Geom-GCN [45], APPNP, JKNet [40], WRGAT [46], GCNII [13], DropEdge [20], H2GCN [47], GGCN [48] and GraphCON [49]. We distinguish between homophilic and heterophilic datasets, and report the results of the former in Tab. 2, and of the latter in Tab. 3, where we also report the homophily score of each dataset (adapted from [45]). We see an improvement across all benchmarks and types of datasets compared to the baseline methods of GCN and GAT and competitive results on homophilic datasets with recent state-of-the-art methods.
271
+
272
+ § 6 CONCLUSION
273
+
274
+ In this paper we propose GLGNN, a method to leverage global information for semi- and fully-supervised node-classification. By learning and fusing global label features and local node features, we show that it is possible to cluster the nodes in a way that enables improved classification accuracy and demonstrate that our method outperforms baseline models by a significant margin. Future research directions include the evaluation of this method on graph classification datasets and exploring additional possible methods of global label information extraction and incorporation.
papers/LOG/LOG 2022/LOG 2022 Conference/YXHoPO33rk/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,225 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Learning to Reconstruct Missing Data from Spatiotemporal Graphs with Sparse Observations
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Modeling multivariate time series as temporal signals over a (possibly dynamic) graph is an effective representational framework that allows for developing models for time series analysis. Spatiotemporal graphs are often highly sparse, with time series characterized by multiple, concurrent, and even long sequences of missing data, e.g., due to the unreliable underlying sensor network. In this context, autoregressive models can be brittle and exhibit unstable learning dynamics. The objective of this paper is to tackle the problem of learning effective models to reconstruct, i.e., impute, missing data points by conditioning the reconstruction only on the available observations. In particular, we propose a novel class of attention-based architectures that, given a set of highly sparse discrete observations, learn a representation for points in time and space by exploiting a spatiotemporal propagation architecture aligned with the imputation task. Representations are learned end-to-end to reconstruct observations w.r.t. the corresponding sensor and its neighboring nodes. Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies. Empirical results on representative benchmarks show the effectiveness of the proposed method.
12
+
13
+ ## 1 Introduction
14
+
15
+ Exploiting structure - both temporal and spatial - is arguably the key ingredient for the success of modern deep learning architectures and models. This is the case with spatiotemporal graph neural networks (STGNNs) [1-3], which learn to process multivariate time series while taking into account underlying space and time dependencies by encoding structural spatiotemporal inductive biases in their architectures. However, even when spatiotemporal relationships are present, available data are almost always incomplete and irregularly sampled, both spatially and temporally. This is definitely true for data coming from real sensor networks (SNs), where missing time series observations are usually imputed with simple interpolation strategies before proceeding with the downstream task. More advanced methods deal with missing data by autoregressively replacing missing observations with predicted ones, eventually using bidirectional architectures [4, 5] to exploit both forward and backward temporal dependencies. To account also for spatial dependencies, Cini et al. [6] introduced a method, named GRIN, combining a bidirectional autoregressive architecture with message passing neural networks [7-10]. Despite being the state of the art in spatiotemporal imputation, GRIN suffers from the error propagation typical of autoregressive models. In fact, we argue that the propagation of imputed (biased) values through space and time combined with noisy observations might exacerbate error accumulation in highly sparse data and drive the hidden state of GRIN-like models to drift away.
16
+
17
+ In this paper, we aim at tackling this problem by designing an architecture based on a novel attention mechanism that takes spatiotemporal sparsity into account while learning representations and imputing missing values. Compared with the alternatives discussed so far, our method exploits a novel spatiotemporal propagation process to learn a predictive representation for each missing observation by relying only on observed values propagated through the spatiotemporal structure. This approach achieves the twofold objective of avoiding propagating biased representation - typical in the autoregressive framework - and reconstructing observations at arbitrary nodes in the sensor network. Results on several benchmark datasets show the effectiveness of the approach under several different missing data distributions.
18
+
19
+ ## 2 Problem formulation and related works
20
+
21
+ We denote by ${\mathbf{X}}_{t} \in {\mathbb{R}}^{N \times d}$ the matrix collecting the $d$ -dimensional measurements of $N$ sensors in a SN at time step $t$ , with ${\mathbf{X}}_{t : t + T}$ being the sequence of $T$ measurements collected in the time interval $\lbrack t, t + T)$ . We model functional relationships among the sensors as graph edges, represented by the weighted adjacency matrix $\mathbf{A} \in {\mathbb{R}}^{N \times N}$ , in which each nonzero entry ${a}^{i, j}$ indicates the weight of the edge going from the $i$ -th node to the $j$ -th. We assume to have available sensor-level covariates ${\mathbf{Q}}_{t} \in {\mathbb{R}}^{N \times {d}_{q}}$ that act as spatiotemporal coordinates to localize a point in time and space (e.g., date/time features and geographic location). To account for data availability, we use a binary mask ${\mathbf{m}}_{t}^{i} \in \{ 0,1\}$ which is 1 if the measurements associated with the $i$ -th sensor are valid at time step $t$ . Conversely, if ${\mathbf{m}}_{t}^{i} = 0$ , we consider the measurements ${\mathbf{x}}_{t}^{i}$ to be completely missing, with the exogenous variables ${\mathbf{q}}_{t}^{i}$ being instead available. Finally, we model the multivariate, structured time series as a discrete sequence of graphs, where each graph is a tuple ${\mathcal{G}}_{t} = \left\langle {{\mathbf{X}}_{t},{\mathbf{Q}}_{t},{\mathbf{M}}_{t},\mathbf{A}}\right\rangle$ . Denoting by ${\widetilde{\mathbf{X}}}_{t : t + T}$ the unknown corresponding complete sequence, the goal of multivariate time series imputation (MTSI) is to find an estimate ${\widehat{\mathbf{X}}}_{t : t + T}$ minimizing the reconstruction error over the missing data points. Notice that, since ${\widetilde{\mathbf{X}}}_{t : t + T}$ is not available, one should find a surrogate objective or simulate the presence of missing data, for which the reconstruction error can be computed.
22
+
23
+ Related works. Multivariate time series imputation is a core task in time series analysis, and deep learning methods are commonly used in this regard. In particular, deep autoregressive models based on recurrent neural networks (RNNs) are currently among the most widely adopted methods [4, 5, 11, 12]. Several approaches in the literature rely on generative adversarial networks [13] to generate imputed subsequences by matching the underlying data distribution [12, 14, 15]. Recently, several attention-based imputation techniques have also been proposed [16-18]. More related to our work, GRIN [6] uses a bidirectional graph RNN with a message-passing spatial decoder to impute time series based on spatiotemporal dependencies. The attention mechanism [19, 20] has been exploited in several contexts within the graph deep learning literature [21-24]. In particular, TraverseNet [25] is especially related to our work, since it relies on spatiotemporal autoregressive attention to compute messages exchanged between nodes.
24
+
25
+ ## 3 Methodology
26
+
27
+ The autoregressive approach to reconstruction consists in directly modeling distributions $p\left( {{\mathbf{x}}_{t}^{i} \mid {\mathbf{X}}_{ < t}}\right)$ and using one-step-ahead forecasting as a surrogate objective to learn how to recover missing observations. To exploit available data subsequent to the target time step, it is common to use a bidirectional architecture which also models $p\left( {{\mathbf{x}}_{t}^{i} \mid {\mathbf{X}}_{ > t}}\right) \left\lbrack {5,{26}}\right\rbrack$ . Moreover, a third component $p\left( {{\mathbf{x}}_{t}^{i} \mid \left\{ {\mathbf{x}}_{t}^{j \neq i}\right\} }\right)$ must be introduced to account for spatial information at each step. Architectures like GRIN follow exactly this scheme, with different components dedicated to modeling each of these three aspects. While effective in practice, these approaches have multiple drawbacks. Besides the computational overhead of three separate components and the compounding error of autoregressive models, they can fall short in capturing global context, as the processing of the structural information is decomposed. Furthermore, merging the information coming from the different modules is also problematic, leading to further compounding of errors. Finally, in the case of highly sparse observations, the spatial processing must be handled with special care, as propagating information through partially observed spatiotemporal graphs adds another layer of complexity.
28
+
29
+ Our approach, named Spatiotemporal Point Inference Network (SPIN), is a graph attention network for MTSI, designed to learn representations of discrete points associated with nodes of a sequence of spatiotemporal graphs. We denote as observed set ${\mathcal{X}}_{t : t + T} = \left\{ {\left\langle {{\mathbf{x}}_{\tau }^{i},{\mathbf{q}}_{\tau }^{i}}\right\rangle \mid {\mathbf{m}}_{\tau }^{i} = 1 \land \tau \in \lbrack t, t + T)}\right\}$ the set of all observations, paired with their spatiotemporal coordinates. Conversely, we name target set ${\mathcal{Y}}_{t : t + T} = \left\{ {{\mathbf{q}}_{\tau }^{i} \mid {\mathbf{m}}_{\tau }^{i} = 0 \land \tau \in \lbrack t, t + T)}\right\}$ the complement set collecting the coordinates of the discrete spatiotemporal points for which we want to reconstruct an observation. Then, for all discrete points ${\mathbf{q}}_{\tau }^{i} \in {\mathcal{Y}}_{t : t + T}$ , SPIN is trained to learn a model
30
+
31
+ $$
32
+ {f}_{\theta }\left( {{\mathbf{q}}_{\tau }^{i} \mid {\mathcal{X}}_{t : t + T},\mathbf{A}}\right) \approx \mathbb{E}\left\lbrack {p\left( {{\mathbf{x}}_{\tau }^{i} \mid {\mathbf{q}}_{\tau }^{i},{\mathcal{X}}_{t : t + T},\mathbf{A}}\right) }\right\rbrack . \tag{1}
33
+ $$
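Under the definitions above, the observed and target sets can be materialized as follows; the container choices (a list of `<x, q>` pairs and a list of coordinates) are our assumptions.

```python
import numpy as np

def split_sets(X, Q, M):
    """Partition a window into the observed set (pairs <x, q> where m = 1) and
    the target set (coordinates q where m = 0).
    X: (T, N, d) measurements, Q: (T, N, d_q) coordinates, M: (T, N) mask."""
    T, N = M.shape
    observed = [(X[t, i], Q[t, i]) for t in range(T) for i in range(N) if M[t, i] == 1]
    target = [Q[t, i] for t in range(T) for i in range(N) if M[t, i] == 0]
    return observed, target
```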
34
+
35
+ To this end, SPIN learns a parameterized propagation process where each representation, corresponding to a node and time step, is updated by aggregating information from all the available observations acquired at neighboring nodes, weighted by input-dependent attention scores. The core component of SPIN is a novel sparse spatiotemporal attention layer (Figure 1) used to propagate information at the level of single observations. Indeed, leveraging on the attention mechanism, we learn representations for each $i$ -th node at each $\tau$ -th time step by simultaneously aggregating information from (1) the observed set of $i$ -th node ${\mathcal{X}}_{t : t + T}^{i}$ ; (2) the observed set ${\mathcal{X}}_{t : t + T}^{j}$ of its neighbors $j \in \mathcal{N}\left( i\right)$ .
36
+
37
+ ![01963f03-379b-7a93-bbcc-a543869ea03b_2_316_474_1170_326_0.jpg](images/01963f03-379b-7a93-bbcc-a543869ea03b_2_316_474_1170_326_0.jpg)
38
+
39
+ Figure 1: Example of the sparse spatiotemporal attention layer updating ${\mathbf{h}}_{\tau }^{i,\left( l\right) }$ by simultaneously performing inter-node spatiotemporal cross-attention and intra-node temporal self-attention.
40
+
41
+ Let ${\mathbf{h}}_{\tau }^{i,\left( l\right) } \in {\mathbb{R}}^{{d}_{h}}$ be the learned representation for the $i$ -th node and time step $\tau$ at the $l$ -th layer. The encoding is initialized as $\operatorname{MLP}\left( {{\mathbf{x}}_{\tau }^{i},{\mathbf{q}}_{\tau }^{i}}\right)$ if observation ${\mathbf{x}}_{\tau }^{i}$ is valid, or $\operatorname{MLP}\left( {\mathbf{q}}_{\tau }^{i}\right)$ otherwise, where MLP is a multi-layer perceptron. The next steps involve computations of spatiotemporal messages, i.e., representations computed to propagate information from one discrete space-time point to another. We indicate the propagation along the temporal dimension from time step $s$ to time step $\tau$ as subscripts $s \rightarrow \tau$ . Similarly, superscripts $j \rightarrow i$ indicate messages sent from the $j$ -th node to the $i$ -th. To avoid overloading the notation, we omit the layer superscript in the following. The message ${\mathbf{r}}_{s \rightarrow \tau }^{j \rightarrow i} \in {\mathbb{R}}^{{d}_{h}}$ from the $j$ -th node at time step $s$ to the $i$ -th node at time step $\tau$ is computed with an MLP taking as input both source and target representations (Eq. 2). To account for spatial information, this mechanism is used to perform an inter-node temporal cross-attention, computing a message to ${\mathbf{h}}_{\tau }^{i}$ using encodings in ${\mathbf{h}}_{t : t + T}^{j}$ associated with a valid observation for every neighbor $j \in \mathcal{N}\left( i\right)$ (Eq. 3).
42
+
43
+ $$
44
+ {\mathbf{r}}_{s \rightarrow \tau }^{j \rightarrow i} = \operatorname{MLP}\left( {{\mathbf{h}}_{s}^{j},{\mathbf{h}}_{\tau }^{i}}\right) \tag{2}
45
+ $$
46
+
47
+ $$
48
+ {\mathcal{R}}_{\tau }^{j \rightarrow i} = \left\{ {{\mathbf{r}}_{s \rightarrow \tau }^{j \rightarrow i} \mid \left\langle {{\mathbf{x}}_{s}^{j},{\mathbf{q}}_{s}^{j}}\right\rangle \in {\mathcal{X}}_{t : t + T}}\right\} \tag{3}
49
+ $$
50
+
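A minimal sketch of Eqs. 2 and 3 follows (our own simplification: the learned MLP is replaced by a fixed toy combination so the example is runnable). A message is built from every *valid* source step $s$ of neighbor $j$ toward the target representation of node $i$ at step $\tau$:

```python
# Toy stand-in for Eq. 2: in SPIN the message is produced by a learned
# MLP over the concatenated source and target representations.
def message(h_src, h_tgt):
    return [a + 0.5 * b for a, b in zip(h_src, h_tgt)]

def message_set(h_j, valid_j, h_i_tau):
    """Eq. 3: messages r_{s->tau}^{j->i} are computed only for steps s
    at which node j holds a valid observation (valid_j[s] == 1)."""
    return [message(h_j[s], h_i_tau) for s in range(len(h_j)) if valid_j[s]]

h_j = [[1.0, 0.0], [2.0, 1.0], [0.0, 0.0]]   # neighbor encodings, T = 3
valid_j = [1, 0, 1]                           # step 1 is missing
h_i_tau = [0.5, 0.5]                          # target representation
R = message_set(h_j, valid_j, h_i_tau)        # two messages survive
```

Masking on the source side is what makes the attention "sparse": missing observations never contribute messages.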
51
+ Messages in ${\mathcal{R}}_{\tau }^{j \rightarrow i}$ are then weighted by message scores ${\alpha }_{s \rightarrow \tau }^{j \rightarrow i}$ , computed by a linear projection of the messages in ${\mathcal{R}}_{\tau }^{j \rightarrow i}$ followed by a softmax layer, and aggregated to obtain an edge-level context vector ${\mathbf{e}}_{\tau }^{j \rightarrow i}$ , encoding the observed sequence at the $j$ -th node w.r.t. the $i$ -th node and time step $\tau$ . Analogously, to account for the observed sequence of the $i$ -th node itself, we exploit an intra-node temporal self-attention mechanism to compute messages from the encodings ${\mathbf{h}}_{t : t + T}^{i}$ corresponding to valid observations, which are aggregated (weighted by message scores) to obtain a temporal context vector ${\mathbf{c}}_{\tau }^{i}$ . Then, the target representation ${\mathbf{h}}_{\tau }^{i,\left( l\right) }$ is updated with a final aggregation step (Eq. 4), and imputations for all spatiotemporal points in ${\mathcal{Y}}_{t : t + T}$ are obtained, after $L$ layers, with a nonlinear readout (Eq. 5).
52
+
53
+ $$
54
+ {\mathbf{h}}_{\tau }^{i,\left( {l + 1}\right) } = \operatorname{MLP}\left( {{\mathbf{h}}_{\tau }^{i,\left( l\right) },{\mathbf{c}}_{\tau }^{i,\left( l\right) },\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{\mathbf{e}}_{\tau }^{j \rightarrow i,\left( l\right) }}\right) \tag{4}
55
+ $$
56
+
57
+ $$
58
+ {\widehat{\mathcal{Y}}}_{t : t + T} = \left\{ {{\widehat{\mathbf{x}}}_{\tau }^{i} = \operatorname{MLP}\left( {\mathbf{h}}_{\tau }^{i,\left( L\right) }\right) \mid {\mathbf{q}}_{\tau }^{i} \in {\mathcal{Y}}_{t : t + T}}\right\} \tag{5}
59
+ $$
60
+
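The aggregation of Eqs. 4 and 5 can be sketched as follows (again our own simplification: the learned score projection, update MLP, and readout MLP are replaced by fixed toy maps, so only the softmax-weighted aggregation structure is faithful):

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of scalar scores
    m = max(scores)
    e = [math.exp(s - m) for s in scores]
    z = sum(e)
    return [v / z for v in e]

def context(messages):
    # score each message (stand-in: its sum, instead of a learned
    # linear projection), then aggregate with softmax weights
    w = softmax([sum(r) for r in messages])
    d = len(messages[0])
    return [sum(w[k] * messages[k][t] for k in range(len(messages)))
            for t in range(d)]

msgs = [[1.0, 0.0], [0.0, 1.0]]               # messages from one neighbor
e = context(msgs)                              # edge-level context vector
h_new = [hi + ei for hi, ei in zip([0.2, 0.2], e)]  # crude Eq. 4 update
x_hat = sum(h_new)                             # crude Eq. 5 scalar readout
```

In SPIN the update of Eq. 4 also receives the temporal context ${\mathbf{c}}_{\tau }^{i}$ and sums the edge-level contexts over all neighbors; here a single neighbor keeps the sketch short.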
61
+ Hierarchical attention. Roughly speaking, the proposed spatiotemporal attention mechanism can be viewed as performing attention over the spatiotemporal graph $\mathcal{S}$ , obtained by considering the product graph between the space and time dimensions. Performing graph attention on the surrogate graph $\mathcal{S}$ has time and memory complexities that scale with $O\left( {\left( {N + E}\right) {T}^{2}}\right)$ , with $N, E$ being the largest number of nodes and edges, respectively, among graphs in ${\mathcal{G}}_{t : t + T}$ . To reduce this computational burden, which undermines the application of the proposed method to large graphs and long time horizons, we propose to rewire the attention mechanism to be hierarchical [27]. We do this by adding $K$ dummy nodes that act as hubs for propagating information. Let ${\mathbf{Z}}^{i} \in {\mathbb{R}}^{K \times {d}_{z}}$ be the hub nodes’ representations for central node $i$ ; then, for each hub $k$ , (1) update ${\mathbf{z}}_{k}^{i}$ by querying $\left\{ {{\mathbf{h}}_{\tau }^{i} \mid \left\langle {{\mathbf{x}}_{\tau }^{i},{\mathbf{q}}_{\tau }^{i}}\right\rangle \in {\mathcal{X}}_{t : t + T}}\right\}$ , i.e., node encodings associated with valid observations; (2) update node
62
+
63
+ Table 1: Performance (MAE) with increasing data sparsity (average over 5 evaluation masks).
64
+
65
+ <table><tr><td rowspan="2"/><td colspan="3">METR-LA</td><td colspan="3">PEMS-BAY</td><td colspan="3">AQI</td></tr><tr><td>5 %</td><td>10 %</td><td>15 %</td><td>5 %</td><td>10 %</td><td>15 %</td><td>5 %</td><td>10 %</td><td>15 %</td></tr><tr><td>BRITS</td><td>${5.87} \pm {0.03}$</td><td>${7.26} \pm {0.06}$</td><td>${8.29} \pm {0.07}$</td><td>${4.14} \pm {0.05}$</td><td>${5.41} \pm {0.08}$</td><td>${5.84} \pm {0.04}$</td><td>${24.09} \pm {0.30}$</td><td>${31.90} \pm {0.26}$</td><td>${37.62} \pm {0.42}$</td></tr><tr><td>SAITS</td><td>${4.73} \pm {0.07}$</td><td>${6.66} \pm {0.05}$</td><td>${7.27} \pm {0.03}$</td><td>${3.88} \pm {0.09}$</td><td>${7.62} \pm {0.21}$</td><td>${8.01} \pm {0.11}$</td><td>${20.78} \pm {0.30}$</td><td>${30.16} \pm {0.39}$</td><td>${36.34} \pm {0.33}$</td></tr><tr><td>Transformer</td><td>${6.03} \pm {0.04}$</td><td>${7.19} \pm {0.05}$</td><td>${8.06} \pm {0.05}$</td><td>${3.69} \pm {0.06}$</td><td>${5.09} \pm {0.05}$</td><td>${6.02} \pm {0.04}$</td><td>${29.21} \pm {0.33}$</td><td>${33.62} \pm {0.16}$</td><td>${37.31} \pm {0.14}$</td></tr><tr><td>GRIN</td><td>${3.05} \pm {0.02}$</td><td>${4.52} \pm {0.05}$</td><td>${5.82} \pm {0.06}$</td><td>${2.26} \pm {0.03}$</td><td>${3.45} \pm {0.06}$</td><td>${4.35} \pm {0.04}$</td><td>${15.62} \pm {0.24}$</td><td>${22.08} \pm {0.39}$</td><td>${29.03} \pm {0.42}$</td></tr><tr><td>SPIN</td><td>${2.71} \pm {0.02}$</td><td>${3.32} \pm {0.02}$</td><td>${3.87} \pm {0.05}$</td><td>${1.78} \pm {0.03}$</td><td>${2.15} \pm {0.03}$</td><td>${2.41} \pm {0.02}$</td><td>${14.29} \pm {0.24}$</td><td>${18.71} \pm {0.34}$</td><td>${24.34} \pm {0.46}$</td></tr><tr><td>SPIN-H</td><td>${2.64} \pm {0.02}$</td><td>${3.17} \pm {0.02}$</td><td>${3.61} \pm {0.04}$</td><td>${1.75} \pm {0.04}$</td><td>${2.16} \pm {0.03}$</td><td>${2.48} \pm {0.02}$</td><td>${14.55} \pm {0.26}$</td><td>${19.37} \pm {0.36}$</td><td>${25.38} \pm {0.37}$</td></tr></table>
66
+
67
+ encoding ${\mathbf{h}}_{\tau }^{i}$ by querying updated ${\mathbf{Z}}^{i}$ and ${\mathbf{Z}}^{j}$ of every $j$ -th neighbor in $\mathcal{N}\left( i\right)$ . In this way, we can reduce the spatiotemporal attention complexity to $O\left( {\left( {N + E}\right) {KT}}\right)$ , with $K \ll T$ , at the cost of introducing an information bottleneck.
68
+
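The two-stage hierarchical rewiring can be sketched as follows (our own toy version: the attention in each stage is replaced by mean pooling for brevity, and all names are ours). Hubs first summarize a node's valid encodings; each encoding is then updated by reading the hubs of the node and of its neighbors:

```python
def update_hubs(h_node, valid, K):
    # Stage 1 (stand-in for attention): K hub vectors summarize the
    # valid encodings of one node. Toy version: identical mean pools.
    valid_h = [h for h, v in zip(h_node, valid) if v]
    pooled = sum(valid_h) / len(valid_h)
    return [pooled] * K

def update_encoding(h, hubs_self, hubs_neighbors):
    # Stage 2 (stand-in for attention): a node encoding queries its own
    # hubs and those of its neighbors, instead of all T encodings of
    # every neighbor -- this is what removes the T^2 factor.
    all_hubs = hubs_self + [z for hubs in hubs_neighbors for z in hubs]
    return h + sum(all_hubs) / len(all_hubs)

hubs_i = update_hubs([1.0, 3.0, 5.0], [1, 0, 1], K=2)  # mean of 1.0, 5.0
hubs_j = update_hubs([2.0, 2.0, 2.0], [1, 1, 1], K=2)
h_updated = update_encoding(0.0, hubs_i, [hubs_j])
```

Each encoding now attends over $K$ hubs per neighbor rather than $T$ encodings, which is the source of the $O\left( {\left( {N + E}\right) {KT}}\right)$ cost quoted above.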
69
+ ## 4 Empirical evaluation
70
+
71
+ In this section, we evaluate our method on three real-world datasets and compare its performance against state-of-the-art methods and standard approaches for MTSI. In the following experiments, we consider both SPIN and its hierarchical version SPIN-H. The figure of merit used is the mean absolute error (MAE), averaged across imputation windows. We consider only the out-of-sample scenario, similarly to previous works [6], in which every parametric model is trained and tested on disjoint sets. We consider three openly available datasets coming from real-world sensor networks. The first two, namely PEMS-BAY and METR-LA [2], record traffic measurements and are both widely used benchmarks in the spatiotemporal forecasting literature. We use the same setup as [6] to inject missing data with the Point missing policy, in which we randomly drop 25% of the available data. As a third dataset, we consider AQI $\left\lbrack {{28},{29}}\right\rbrack$ , which collects hourly measurements of air pollutants from 437 air quality monitoring stations in China. We also consider a smaller version of this dataset (AQI-36) with only the 36 sensors in the city of Beijing. We use the same missing data distribution used in [6, 29]. In all settings, all the valid observations masked out are used as targets for evaluation. We obtain an adjacency matrix from the pairwise distances of sensors following previous works [2, 3, 6]. We compare our methods against (1) GRIN [6], a graph-based bidirectional RNN for MTSI with state-of-the-art performance; (2) a spatiotemporal Transformer, where we alternate temporal and spatial Transformer encoder layers from [19] and replace missing values with a [MASK] token (as in [30]); (3) SAITS [16], a recent attention-based architecture; (4) BRITS [5], which leverages a bidirectional RNN. We assess how performance changes as the percentage of missing values increases.
In practice, we change the missing data distribution at test time, simulating the case in which, at each time step, every sensor has a constant probability $\bar{p}$ of going offline for a random number $S \sim \mathcal{U}\left( {{12},{36}}\right)$ of future (consecutive) time steps. Table 1 shows results for all datasets with $\bar{p} = 5\%$ , $\bar{p} = {10}\%$ , and $\bar{p} = {15}\%$ . Note that, depending on the dataset, the portion of valid observations in each of these cases amounts to $\approx {25} - {30}\%$ , $\approx 8 - {10}\%$ , and $\approx 3 - 4\%$ , respectively. SPIN models, differently from the baselines, can handle all the considered scenarios. In particular, improvements in performance w.r.t. the best performing baseline (GRIN) are more evident as the number of available observations decreases. Indeed, the sparse spatiotemporal attention mechanism of SPIN is not autoregressive and allows for an unbounded memory capacity. Note also that SPIN-H performs on par with (and in some cases better than) SPIN, making it a valid lightweight alternative. In Appendix A, we show that SPIN-based models perform on par with or better than state-of-the-art methods on standard benchmarks.
72
+
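The failure-injection procedure described above can be sketched as a simple mask generator (our reading of the protocol; function names and the seed are ours, and only the sampling scheme follows the text):

```python
import random

def failure_mask(n_sensors, n_steps, p_bar, seed=0):
    """At every step, each online sensor fails with probability p_bar
    for S ~ U(12, 36) consecutive future steps (mask value 0)."""
    rng = random.Random(seed)
    m = [[1] * n_steps for _ in range(n_sensors)]
    for i in range(n_sensors):
        t = 0
        while t < n_steps:
            if rng.random() < p_bar:
                s = rng.randint(12, 36)          # failure length
                for k in range(t, min(t + s, n_steps)):
                    m[i][k] = 0
                t += s
            else:
                t += 1
    return m

m = failure_mask(n_sensors=5, n_steps=200, p_bar=0.05)
fraction_valid = sum(v for row in m for v in row) / (5 * 200)
```

With $\bar{p}$ between 5% and 15%, long overlapping outages compound, which is why the fraction of valid observations drops far below $1 - \bar{p}$.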
73
+ ## 5 Conclusions
74
+
75
+ We introduced a graph-based attention network, named SPIN, to reconstruct missing observations in sparse spatiotemporal time series. We showed how the time and space complexities of the approach can be drastically reduced considering a novel hierarchical attention mechanism. Empirical analysis shows that the proposed method widely outperforms state-of-the-art methods for imputation in highly sparse settings.
76
+
77
+ References
78
+
79
+ [1] Youngjoo Seo, Michaël Defferrard, Pierre Vandergheynst, and Xavier Bresson. Structured sequence modeling with graph convolutional recurrent networks. In International Conference on Neural Information Processing, pages 362-373. Springer, 2018. 1
80
+
81
+ [2] Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. In International Conference on Learning Representations, 2018. 4
82
+
83
+ [3] Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, and Chengqi Zhang. Graph wavenet for deep spatial-temporal graph modeling. arXiv preprint arXiv:1906.00121, 2019. 1, 4, 9
84
+
85
+ [4] Jinsung Yoon, William R Zame, and Mihaela van der Schaar. Multi-directional recurrent neural networks: A novel method for estimating missing data. In Time Series Workshop at the 34th International Conference on Machine Learning, pages 1-5, 2017. 1, 2
86
+
87
+ [5] Wei Cao, Dong Wang, Jian Li, Hao Zhou, Yitan Li, and Lei Li. Brits: bidirectional recurrent imputation for time series. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 6776-6786, 2018. 1, 2, 4
88
+
89
+ [6] Andrea Cini, Ivan Marisca, and Cesare Alippi. Filling the g_ap_s: Multivariate time series imputation by graph neural networks. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=k0u3-S3wJ7. 1, 2, 4, 7, 9
90
+
91
+ [7] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE transactions on neural networks, 20(1):61-80, 2008. 1
92
+
93
+ [8] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34 (4):18-42, 2017.
94
+
95
+ [9] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pages 1263-1272. PMLR, 2017.
96
+
97
+ [10] Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018. 1
98
+
99
+ [11] Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. Recurrent neural networks for multivariate time series with missing values. Scientific reports, 8(1):1-12, 2018. 2
100
+
101
+ [12] Xiaoye Miao, Yangyang Wu, Jun Wang, Yunjun Gao, Xudong Mao, and Jianwei Yin. Generative semi-supervised learning for multivariate time series imputation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 8983-8991, 2021. 2, 7
102
+
103
+ [13] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27, 2014. 2
104
+
105
+ [14] Jinsung Yoon, James Jordon, and Mihaela Schaar. Gain: Missing data imputation using generative adversarial nets. In International Conference on Machine Learning, pages 5689- 5698. PMLR, 2018. 2, 7
106
+
107
+ [15] Yonghong Luo, Ying Zhang, Xiangrui Cai, and Xiaojie Yuan. E2gan: End-to-end generative adversarial network for multivariate time series imputation. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 3094-3100. International Joint Conferences on Artificial Intelligence Organization, 7 2019. 2
108
+
109
+ [16] Wenjie Du, David Côté, and Yan Liu. Saits: Self-attention-based imputation for time series. arXiv preprint arXiv:2202.08516, 2022. 2, 4
110
+
111
+ [17] Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. Csdi: Conditional score-based diffusion models for probabilistic time series imputation. Advances in Neural Information Processing Systems, 34, 2021.
112
+
113
+ [18] Satya Narayan Shukla and Benjamin Marlin. Multi-time attention networks for irregularly sampled time series. In International Conference on Learning Representations, 2020. 2
114
+
115
+ [19] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008, 2017. 2, 4
116
+
117
+ [20] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. 2
118
+
119
+ [21] Kiran K Thekumparampil, Chong Wang, Sewoong Oh, and Li-Jia Li. Attention-based graph neural network for semi-supervised learning. arXiv preprint arXiv:1803.03735, 2018. 2
120
+
121
+ [22] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.
122
+
123
+ [23] Jiani Zhang, Xingjian Shi, Junyuan Xie, Hao Ma, Irwin King, and Dit Yan Yeung. Gaan: Gated attention networks for learning on large and spatiotemporal graphs. In 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, 2018.
124
+
125
+ [24] Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjing Wang, and Yu Sun. Masked label prediction: Unified message passing model for semi-supervised classification. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1548-1554, 2021. 2
126
+
127
+ [25] Zonghan Wu, Da Zheng, Shirui Pan, Quan Gan, Guodong Long, and George Karypis. Tra-versenet: Unifying space and time in message passing for traffic forecasting. IEEE Transactions on Neural Networks and Learning Systems, 2022. 2
128
+
129
+ [26] Jinsung Yoon, William R Zame, and Mihaela van der Schaar. Estimating missing data in temporal data streams using multi-directional recurrent neural networks. IEEE Transactions on Biomedical Engineering, 66(5):1477-1490, 2018. 2
130
+
131
+ [27] Joshua Ainslie, Santiago Ontañón, Chris Alberti, Philip Pham, Anirudh Ravula, and Sumit Sanghai. Etc: Encoding long and structured data in transformers. CoRR, abs/2004.08483, 2020. URL https://arxiv.org/abs/2004.08483. 3
132
+
133
+ [28] Yu Zheng, Xiuwen Yi, Ming Li, Ruiyuan Li, Zhangqing Shan, Eric Chang, and Tianrui Li. Forecasting fine-grained air quality based on big data. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pages 2267-2276, 2015. 4
134
+
135
+ [29] Xiuwen Yi, Yu Zheng, Junbo Zhang, and Tianrui Li. St-mvl: Filling missing values in geo-sensory time series data. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16, page 2704-2710. AAAI Press, 2016. ISBN 9781577357704. 4, 9
136
+
137
+ [30] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 4
138
+
139
+ [31] Ian R White, Patrick Royston, and Angela M Wood. Multiple imputation using chained equations: issues and guidance for practice. Statistics in medicine, 30(4):377-399, 2011. 7
140
+
141
+ [32] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 8
142
+
143
+ [33] Guido Van Rossum and Fred L. Drake. Python 3 Reference Manual. CreateSpace, Scotts Valley, CA, 2009. ISBN 1441412697. 8
144
+
145
+ [34] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32:8026-8037, 2019. 8
146
+
147
+ [35] Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019. 8
148
+
149
+ [36] Andrea Cini and Ivan Marisca. Torch Spatiotemporal, 3 2022. URL https://github.com/TorchSpatiotemporal/tsl. 8
150
+
151
+ [37] neptune.ai. Neptune: Metadata store for mlops, built for research and production teams that run a lot of experiments, 2021. URL https://neptune.ai. 8
152
+
153
+ [38] David I Shuman, Sunil K Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE signal processing magazine, 30(3):83-98, 2013. 9
154
+
155
+ ## Appendix
156
+
157
+ ## A Performance on standard benchmarks
158
+
159
+ Table 1, in the main paper, shows the reconstruction error of the different methods as the number of valid observations in input sequences decreases. To assess the performance of our model in standard settings (denser observations), we test all the methods on the original datasets introduced in Section 4. For the traffic datasets, we also consider a different evaluation mask, where data are removed according to the Block missing policy [6], in which we randomly mask out 5% of the available data and, in addition, simulate failures of $S \sim \mathcal{U}\left( {{12},{48}}\right)$ consecutive steps with ${0.15}\%$ probability. For this experiment, we also consider additional baselines: (1) node-level sequence mean (MEAN); (2) neighbors mean (KNN); (3) Matrix Factorization (MF); (4) MICE [31]; (5) VAR, a vector autoregressive one-step-ahead predictor; (6) rGAIN, an adversarial approach that shares similarities with GAIN [14] and SSGAN [12]. Table 2 shows the results in terms of MAE. Whenever possible, we use results from [6].
160
+
161
+ Table 2: Performance (in terms of MAE) averaged over multiple independent runs.
162
+
163
<table><tr><td rowspan="2"/><td colspan="2">Block missing</td><td colspan="2">Point missing</td><td colspan="2">Simulated failures</td></tr><tr><td>PEMS-BAY</td><td>METR-LA</td><td>PEMS-BAY</td><td>METR-LA</td><td>AQI-36</td><td>AQI</td></tr><tr><td>Mean</td><td>${5.46} \pm {0.00}$</td><td>${7.48} \pm {0.00}$</td><td>${5.42} \pm {0.00}$</td><td>${7.56} \pm {0.00}$</td><td>${53.48} \pm {0.00}$</td><td>${39.60} \pm {0.00}$</td></tr><tr><td>KNN</td><td>${4.30} \pm {0.00}$</td><td>${7.79} \pm {0.00}$</td><td>${4.30} \pm {0.00}$</td><td>${7.88} \pm {0.00}$</td><td>${30.21} \pm {0.00}$</td><td>${34.10} \pm {0.00}$</td></tr><tr><td>MF</td><td>${3.28} \pm {0.01}$</td><td>${5.46} \pm {0.02}$</td><td>${3.29} \pm {0.01}$</td><td>${5.56} \pm {0.03}$</td><td>${30.54} \pm {0.26}$</td><td>${26.74} \pm {0.24}$</td></tr><tr><td>MICE</td><td>${2.94} \pm {0.02}$</td><td>${4.22} \pm {0.05}$</td><td>${3.09} \pm {0.02}$</td><td>${4.42} \pm {0.07}$</td><td>${30.37} \pm {0.09}$</td><td>${26.98} \pm {0.10}$</td></tr><tr><td>VAR</td><td>${2.09} \pm {0.10}$</td><td>${3.11} \pm {0.08}$</td><td>${1.30} \pm {0.00}$</td><td>${2.69} \pm {0.00}$</td><td>${15.64} \pm {0.08}$</td><td>${22.95} \pm {0.30}$</td></tr><tr><td>rGAIN</td><td>${2.18} \pm {0.01}$</td><td>${2.90} \pm {0.01}$</td><td>${1.88} \pm {0.02}$</td><td>${2.83} \pm {0.01}$</td><td>${15.37} \pm {0.26}$</td><td>${21.78} \pm {0.50}$</td></tr><tr><td>BRITS</td><td>${1.70} \pm {0.01}$</td><td>${2.34} \pm {0.01}$</td><td>${1.47} \pm {0.00}$</td><td>${2.34} \pm {0.00}$</td><td>${14.50} \pm {0.35}$</td><td>${20.21} \pm {0.22}$</td></tr><tr><td>SAITS</td><td>${1.56} \pm {0.01}$</td><td>${2.30} \pm {0.01}$</td><td>${1.40} \pm {0.03}$</td><td>${2.26} \pm {0.00}$</td><td>${18.16} \pm {0.42}$</td><td>${21.33} \pm {0.15}$</td></tr><tr><td>Transformer</td><td>${1.70} \pm {0.02}$</td><td>${3.54} \pm {0.00}$</td><td>${0.74} \pm {0.00}$</td><td>${2.16} \pm {0.00}$</td><td>${11.98} \pm {0.53}$</td><td>${18.11} \pm {0.25}$</td></tr><tr><td>GRIN</td><td>${1.14} \pm {0.01}$</td><td>${2.03} \pm {0.00}$</td><td>$\mathbf{{0.67}} \pm {0.00}$</td><td>${1.91} \pm {0.00}$</td><td>${12.08} \pm {0.47}$</td><td>${14.73} \pm {0.15}$</td></tr><tr><td>SPIN</td><td>${1.06} \pm {0.01}$</td><td>${1.97} \pm {0.01}$</td><td>${0.71} \pm {0.01}$</td><td>${1.90} \pm {0.01}$</td><td>${11.77} \pm {0.74}$</td><td>$\mathbf{{14.00}} \pm {0.13}$</td></tr><tr><td>SPIN-H</td><td>$\mathbf{{1.06}} \pm {0.01}$</td><td>${2.05} \pm {0.03}$</td><td>${0.74} \pm {0.02}$</td><td>${1.96} \pm {0.04}$</td><td>$\mathbf{{11.08}} \pm {0.06}$</td><td>${14.39} \pm {0.03}$</td></tr></table>
164
+
165
+ Both SPIN methods outperform the baselines in almost all scenarios. As expected, improvements are more evident when entire blocks of data are missing, as in AQI datasets and block missing settings. With respect to the spatiotemporal Transformer, SPIN performs better in all settings except for AQI-36, which can be attributed to the ineffectiveness of spatial attention alone in determining the dependencies among the different spatial locations.
166
+
167
+ ## B Detailed experimental setup
168
+
169
+ In this appendix, we discuss the experimental settings in detail. We use the same setup as Cini et al. ${\left\lbrack 6\right\rbrack }^{1,2}$ , and refer to [6] for details on the baselines.
170
+
171
+ For SPIN, we use the same hyperparameters for all datasets: $L = 4$ layers; hidden size ${d}_{h} = {32}$ ; 2 layers with hidden size 32 for every MLP; ReLU activation functions. Masking out tokens in the target set allows SPIN to propagate only valid information. As a downside, this results in blocking the flow of information on paths through points in the target set. This can be problematic when the input observations are extremely sparse. Nonetheless, it is reasonable to assume that, after only a few propagation steps, the available information has already been partially diffused to locations with missing observations. At this point, blocked paths can be unlocked, allowing for reaching higher-order neighborhoods. In practice, we introduce a hyperparameter $\eta = 3$ to control the number of layers with masked connections and effectively split the propagation process into two phases. It is important to notice that what is being propagated in the second phase are learned representations, not observations (unavailable for masked tokens). For SPIN-H, we use similar hyperparameters, but with 5 layers (with $\eta = 3$ ); $K = 4$ hubs per node with ${d}_{z} = {128}$ units each. These hyperparameters have been selected among a small subset of options on the validation set; we expect far better performance to be achievable with further hyperparameter tuning. Depending on the dataset, the number of parameters ranges from $\approx {55K}$ to $\approx {95K}$ for SPIN and $\approx {540K}$ to $\approx {800K}$ for SPIN-H. We use the Adam optimizer [32], a learning rate ${lr} = {0.0008}$ , and a cosine scheduler with a warm-up of 12 steps and (partial) restarts every 100 epochs. We train our models with 300 mini-batches of 8 random samples per epoch, fixing the maximum number of epochs to 300 and using early stopping on the validation set with a patience of 40 epochs.
Due to constraints on memory capacity on some of the GPUs (see the description of the hardware resources below), for SPIN-H we set the batch size to 6 and 16 in AQI and AQI-36, respectively.
172
+
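The two-phase propagation schedule controlled by $\eta$ can be sketched as a simple gating rule (our own boolean stand-in, not the actual implementation; names are ours):

```python
def connection_allowed(layer, eta, source_is_valid):
    """First eta layers: only valid observations may send messages
    (masked connections). Remaining layers: representations at masked
    (target) points may send messages too, unlocking longer paths."""
    return source_is_valid or layer >= eta

# A source at a masked point is blocked in layers 0..2 and allowed
# afterwards, for eta = 3 over L = 5 layers.
allowed = [connection_allowed(l, eta=3, source_is_valid=False)
           for l in range(5)]
```

What the unlocked layers propagate are learned representations, never raw (unavailable) observations, consistently with the description above.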
173
+ ---
174
+
175
+ ${}^{1}$ https://github.com/Graph-Machine-Learning-Group/grin
176
+
177
+ ${}^{2}$ https://github.com/TorchSpatiotemporal/tsl
178
+
179
+ ---
180
+
181
+ ![01963f03-379b-7a93-bbcc-a543869ea03b_7_307_201_1178_169_0.jpg](images/01963f03-379b-7a93-bbcc-a543869ea03b_7_307_201_1178_169_0.jpg)
182
+
183
+ Figure 2: The architecture of SPIN. At first, we encode observations ${\mathbf{X}}_{t : t + T}$ and spatiotemporal coordinates ${\mathbf{Q}}_{t : t + T}$ , obtaining initial representations ${\mathbf{H}}_{t : t + T}^{\left( 0\right) }$ . The representations are updated by a stack of $L$ sparse spatiotemporal attention blocks. Final imputations are obtained from ${\mathbf{H}}_{t : t + T}^{\left( L\right) }$ with a nonlinear readout.
184
+
185
+ To train SPIN-based models, we minimize the following loss function:
186
+
187
+ $$
188
+ \mathcal{L} = \mathop{\sum }\limits_{{l = 1}}^{L}\frac{\mathop{\sum }\limits_{{{\mathbf{q}}_{\tau }^{i} \in {\mathcal{Y}}_{t : t + T}}}\ell \left( {{\widehat{\mathbf{x}}}_{\tau }^{i,\left( l\right) },{\mathbf{x}}_{\tau }^{i}}\right) }{\left| {\mathcal{Y}}_{t : t + T}\right| }, \tag{6}
189
+ $$
190
+
191
+ where $\ell \left( {\cdot , \cdot }\right)$ is the absolute error and ${\widehat{\mathbf{x}}}_{\tau }^{i,\left( l\right) }$ is the $l$ -th layer imputation for the $i$ -th node at time step $\tau$ . Note that, to provide more supervision to the architecture, the loss is computed and backpropagated w.r.t. the representations learned at each layer, not only at the last one. The error is computed only on data not seen by the model at each forward pass. For this reason, for each mini-batch sample we randomly remove a fraction $p$ of the input data, with $p$ sampled uniformly from $\left\{ {{0.2},{0.5},{0.8}}\right\}$ , and use the removed values to compute the loss. We never use data masked for evaluation to train any model.
192
+
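Eq. 6 can be transcribed directly (toy data; the dictionary-based layout and names are ours, not the paper's tensor implementation):

```python
def spin_loss(x_hat_layers, x_true, targets):
    """Eq. 6 with absolute error: sum over layers of the mean absolute
    error over the target coordinates."""
    total = 0.0
    for x_hat in x_hat_layers:                  # sum over layers l = 1..L
        err = sum(abs(x_hat[(i, tau)] - x_true[(i, tau)])
                  for (i, tau) in targets)
        total += err / len(targets)             # mean over target points
    return total

x_true = {(0, 0): 1.0, (0, 1): 2.0}
targets = [(0, 0), (0, 1)]
x_hat_layers = [{(0, 0): 0.5, (0, 1): 2.5},    # layer-1 imputations
                {(0, 0): 1.0, (0, 1): 2.0}]    # layer-2 imputations
loss = spin_loss(x_hat_layers, x_true, targets)
```

Because every layer's readout enters the sum, intermediate layers receive a direct gradient signal rather than only the final one.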
193
+ For the spatiotemporal Transformer baseline, we use the same training strategy and a hyperparameter configuration similar to that of SPIN-H: $L = 5$ layers; 4 attention heads; hidden size and feed-forward size of 64 and 128 units, respectively. For SAITS, we use the code provided by the authors ${}^{3}$ . Hyperparameters for SAITS have been selected on the validation set with a random search by using hyperparameter ranges from the original paper.
194
+
195
+ We recall that the time and memory complexities of SPIN and SPIN-H scale with $O\left( {\left( {N + E}\right) {T}^{2}}\right)$ and $O\left( {\left( {N + E}\right) {KT}}\right)$ , respectively. For the sake of comparison, here we also report the asymptotic complexities of the spatiotemporal Transformer and GRIN. The Transformer alternates temporal attention (i.e., $O\left( {N{T}^{2}}\right)$ ) and spatial attention (i.e., $O\left( {T{N}^{2}}\right)$ ), with a resulting $O\left( {\left( {N + T}\right) {NT}}\right)$ complexity. Letting $R$ be the spatial receptive field (i.e., the number of graph convolution layers) of the inner MPGRU cell, the time complexity required to process a single direction in GRIN scales with $O\left( {TRE}\right)$ . Note also that while most of the operations in the attention-based models can be executed in parallel, GRIN would need to process the entire sequence recurrently, with a consequent performance slowdown at execution time.
196
+
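As a back-of-the-envelope illustration of these asymptotic costs, the following computes the dominant operation counts for hypothetical sizes (constants are dropped, so only relative growth is meaningful; the numbers $N$, $E$, $T$, $K$, $R$ are ours, not taken from the datasets):

```python
# Hypothetical problem sizes: N nodes, E edges, T steps, K hubs,
# R graph-convolution layers in GRIN's MPGRU cell.
N, E, T, K, R = 200, 1500, 24, 4, 2

spin   = (N + E) * T * T        # O((N+E) T^2)
spin_h = (N + E) * K * T        # O((N+E) K T)
transf = (N + T) * N * T        # O((N+T) N T)
grin   = T * R * E              # O(T R E), single direction
```

With $K \ll T$, the hierarchical variant cuts the dominant term by a factor of roughly $T/K$, which is what makes SPIN-H viable on long windows.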
197
+ All the models were developed in Python [33] using PyTorch [34], PyG [35] and Torch Spatiotemporal [36]. We use Neptune ${}^{4}$ [37] for experiment tracking. The code to reproduce the experiments of the paper is available as supplementary material. All the experiments have been run in a cluster using GPU-enabled nodes with different hardware setups. Running times of SPIN-H training on a node equipped with a 12GB NVIDIA Titan V GPU range from 4 to 14 hours (depending on the dataset). For SPIN we used a node with a 40GB NVIDIA A100 GPU, with running times ranging from 4 to 26 hours.
198
+
199
+ ---
200
+
201
+ ${}^{3}$ https://github.com/WenjieDu/SAITS
202
+
203
+ ${}^{4}$ https://neptune.ai/
204
+
205
+ ---
206
+
207
+ Table 3: Ablation study to assess the contribution of the single components in the spatiotemporal attention block. Performance averaged over multiple independent runs.
208
+
209
+ <table><tr><td rowspan="2"/><td colspan="2">METR-LA (P)</td><td colspan="2">AQI-36</td></tr><tr><td>MAE</td><td>MRE (%)</td><td>MAE</td><td>MRE (%)</td></tr><tr><td>SPIN</td><td>1.90 ± 0.01</td><td>${3.29} \pm {0.01}$</td><td>11.77 $\pm {0.74}$</td><td>16.56 ± 1.05</td></tr><tr><td>SPIN-H</td><td>${1.96} \pm {0.04}$</td><td>${3.39} \pm {0.06}$</td><td>11.08 $\pm {0.06}$</td><td>$\mathbf{{15.60}} \pm {0.09}$</td></tr><tr><td>Without cross-attention</td><td>${2.18} \pm {0.01}$</td><td>${3.78} \pm {0.01}$</td><td>${15.36} \pm {0.09}$</td><td>${21.62} \pm {0.13}$</td></tr><tr><td>Without self-attention</td><td>${2.24} \pm {0.09}$</td><td>${3.88} \pm {0.16}$</td><td>${13.63} \pm {0.23}$</td><td>${19.19} \pm {0.32}$</td></tr><tr><td>Transformer</td><td>${2.16} \pm {0.00}$</td><td>${3.74} \pm {0.01}$</td><td>${11.98} \pm {0.53}$</td><td>${16.87} \pm {0.75}$</td></tr></table>
210
+
211
+ ## C Datasets
212
+
213
+ In this appendix, we provide details on the datasets and the preprocessing used for the experiments. We use temporal windows of $T = {24}$ steps for all datasets except AQI-36, for which we set $T = {36}$ . For the traffic datasets, we split the data sequentially as ${70}\%$ for training, ${10}\%$ for validation, and ${20}\%$ for testing. For the air quality datasets, following Yi et al. [29], we consider as the test set the months of March, June, September, and December, and we use a valid observation ${\mathbf{x}}_{\tau }^{i}$ as ground truth if the value is missing at the same hour and day in the following month. For data preprocessing we use the same approach as Cini et al. [6], normalizing data across the feature dimension (graph-wise for graph-based models) to zero mean and unit variance.
214
+
215
+ In line with $\left\lbrack {3,6}\right\rbrack$ , we obtain the adjacency matrix from the node pairwise geographical distances using a thresholded Gaussian kernel [38]
216
+
217
+ $$
218
+ {a}^{i, j} = \left\{ {\begin{matrix} \exp \left( {-\frac{\operatorname{dist}{\left( i, j\right) }^{2}}{\gamma }}\right) & \operatorname{dist}\left( {i, j}\right) \leq \delta \\ 0 & \text{ otherwise } \end{matrix},}\right. \tag{7}
219
+ $$
220
+
221
+ where dist $\left( {\cdot , \cdot }\right)$ is the geographical distance operator, $\gamma$ is a shape parameter and $\delta$ is the threshold.
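Eq. (7) can be sketched directly; the toy distance matrix below and the default choices for $\gamma$ and $\delta$ are assumptions for illustration only:

```python
import numpy as np

def gaussian_kernel_adjacency(dist, gamma=None, delta=None):
    """Thresholded Gaussian kernel (Eq. 7): a_ij = exp(-dist_ij^2 / gamma)
    if dist_ij <= delta, else 0. The defaults (gamma = variance of the
    distances, delta = a distance quantile) are our own assumptions."""
    if gamma is None:
        gamma = dist.std() ** 2
    if delta is None:
        delta = np.quantile(dist, 0.1)
    adj = np.exp(-dist ** 2 / gamma)
    adj[dist > delta] = 0.0  # sparsify: drop far-away sensor pairs
    return adj

# Toy pairwise geographical distances for 4 sensors
dist = np.array([[0., 1., 4., 9.],
                 [1., 0., 2., 8.],
                 [4., 2., 0., 3.],
                 [9., 8., 3., 0.]])
A = gaussian_kernel_adjacency(dist, gamma=10.0, delta=4.0)
```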
222
+
223
+ ## D Ablation study
224
+
225
+ Table 3 shows the results of an ablation study on METR-LA (Point missing) and AQI-36. Here, we evaluate the performance in terms of mean absolute error (MAE) and mean relative error (MRE). We consider two different versions of SPIN-H in which we remove the spatiotemporal cross-attention and the temporal self-attention components, respectively. We also report the performance of SPIN, SPIN-H and the Transformer for reference. Results clearly show that both components contribute positively to imputation accuracy. We also point out that in METR-LA (P) observations are masked out uniformly at random while the mask in AQI-36 reflects the empirical distribution of missing data in the dataset.
papers/LOG/LOG 2022/LOG 2022 Conference/YXHoPO33rk/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,100 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § LEARNING TO RECONSTRUCT MISSING DATA FROM SPATIOTEMPORAL GRAPHS WITH SPARSE OBSERVATIONS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Modeling multivariate time series as temporal signals over a (possibly dynamic) graph is an effective representational framework that allows for developing models for time series analysis. Spatiotemporal graphs are often highly sparse, with time series characterized by multiple, concurrent, and even long sequences of missing data, e.g., due to the unreliable underlying sensor network. In this context, autoregressive models can be brittle and exhibit unstable learning dynamics. The objective of this paper is to tackle the problem of learning effective models to reconstruct, i.e., impute, missing data points by conditioning the reconstruction only on the available observations. In particular, we propose a novel class of attention-based architectures that, given a set of highly sparse discrete observations, learn a representation for points in time and space by exploiting a spatiotemporal propagation architecture aligned with the imputation task. Representations are learned end-to-end to reconstruct observations w.r.t. the corresponding sensor and its neighboring nodes. Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies. Empirical results on representative benchmarks show the effectiveness of the proposed method.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Exploiting structure - both temporal and spatial - is arguably the key ingredient for the success of modern deep learning architectures and models. This is the case with spatiotemporal graph neural networks (STGNNs) [1-3], which learn to process multivariate time series while taking into account underlying space and time dependencies by encoding structural spatiotemporal inductive biases in their architectures. However, even when spatiotemporal relationships are present, available data are almost always incomplete and irregularly sampled, both spatially and temporally. This is certainly true for data coming from real sensor networks (SNs), where missing time series observations are usually imputed with simple interpolation strategies before proceeding with the downstream task. More advanced methods deal with missing data by autoregressively replacing missing observations with predicted ones, possibly using bidirectional architectures [4, 5] to exploit both forward and backward temporal dependencies. To also account for spatial dependencies, Cini et al. [6] introduced a method, named GRIN, combining a bidirectional autoregressive architecture with message passing neural networks [7-10]. Despite being the state of the art in spatiotemporal imputation, GRIN suffers from the error propagation typical of autoregressive models. In fact, we argue that the propagation of imputed (biased) values through space and time, combined with noisy observations, might exacerbate error accumulation in highly sparse data and cause the hidden state of GRIN-like models to drift away.
16
+
17
+ In this paper, we aim at tackling this problem by designing an architecture based on a novel attention mechanism that takes spatiotemporal sparsity into account while learning representations and imputing missing values. Compared with the alternatives discussed so far, our method exploits a novel spatiotemporal propagation process to learn a predictive representation for each missing observation by relying only on observed values propagated through the spatiotemporal structure. This approach achieves the twofold objective of avoiding the propagation of biased representations - typical of the autoregressive framework - and reconstructing observations at arbitrary nodes in the sensor network. Results on several benchmark datasets show the effectiveness of the approach under several different missing data distributions.
18
+
19
+ § 2 PROBLEM FORMULATION AND RELATED WORKS
20
+
21
+ We denote by ${\mathbf{X}}_{t} \in {\mathbb{R}}^{N \times d}$ the matrix collecting the $d$ -dimensional measurements of $N$ sensors in a SN at time step $t$ , with ${\mathbf{X}}_{t : t + T}$ being the sequence of $T$ measurements collected in the time interval $\lbrack t,t + T)$ . We model functional relationships among the sensors as graph edges, represented by the weighted adjacency matrix $\mathbf{A} \in {\mathbb{R}}^{N \times N}$ , in which each nonzero entry ${a}^{i,j}$ indicates the weight of the edge going from the $i$ -th node to the $j$ -th. We assume to have available sensor-level covariates ${\mathbf{Q}}_{t} \in {\mathbb{R}}^{N \times {d}_{q}}$ that act as spatiotemporal coordinates to localize a point in time and space (e.g., date/time features and geographic location). To account for data availability, we use a binary mask ${\mathbf{m}}_{t}^{i} \in \{ 0,1\}$ which is 1 if the measurements associated with the $i$ -th sensor are valid at time step $t$ . Conversely, if ${\mathbf{m}}_{t}^{i} = 0$ , we consider the measurements ${\mathbf{x}}_{t}^{i}$ to be completely missing, with the exogenous variables ${\mathbf{q}}_{t}^{i}$ being instead available. Finally, we model the multivariate, structured time series as a discrete sequence of graphs, where each graph is a tuple ${\mathcal{G}}_{t} = \left\langle {{\mathbf{X}}_{t},{\mathbf{Q}}_{t},{\mathbf{M}}_{t},\mathbf{A}}\right\rangle$ . Denoting by ${\widetilde{\mathbf{X}}}_{t : t + T}$ the unknown corresponding complete sequence, the goal of multivariate time series imputation (MTSI) is to find an estimate ${\widehat{\mathbf{X}}}_{t : t + T}$ minimizing the reconstruction error over the missing data points. Notice that, since ${\widetilde{\mathbf{X}}}_{t : t + T}$ is not available, one should find a surrogate objective or simulate the presence of missing data, for which the reconstruction error can be computed.
22
+
23
+ Related works. Multivariate time series imputation is a core task in time series analysis, and deep learning methods are commonly used in this regard. In particular, deep autoregressive models based on recurrent neural networks (RNNs) are currently among the most widely adopted methods [4, 5, 11, 12]. Several approaches in the literature rely on generative adversarial networks [13] to generate imputed subsequences by matching the underlying data distribution [12, 14, 15]. Recently, several attention-based imputation techniques have also been proposed [16-18]. Most related to our work, GRIN [6] uses a bidirectional graph RNN with a message-passing spatial decoder to impute time series based on spatiotemporal dependencies. The attention mechanism [19, 20] has been exploited in several contexts within the graph deep learning literature [21-24]. In particular, TraverseNet [25] is especially related to our work, since it relies on spatiotemporal autoregressive attention to compute messages exchanged between nodes.
24
+
25
+ § 3 METHODOLOGY
26
+
27
+ The autoregressive approach to reconstruction consists in directly modeling distributions $p\left( {{\mathbf{x}}_{t}^{i} \mid {\mathbf{X}}_{ < t}}\right)$ and using one-step-ahead forecasting as a surrogate objective to learn how to recover missing observations. To exploit available data subsequent to the target time step, it is common to use a bidirectional architecture which also models $p\left( {{\mathbf{x}}_{t}^{i} \mid {\mathbf{X}}_{ > t}}\right) \left\lbrack {5,{26}}\right\rbrack$ . Moreover, a third component $p\left( {{\mathbf{x}}_{t}^{i} \mid \left\{ {\mathbf{x}}_{t}^{j \neq i}\right\} }\right)$ must be introduced to account for spatial information at each step. Architectures like GRIN follow exactly this scheme, with different components dedicated to modeling each of these three aspects. While effective in practice, these approaches have multiple drawbacks. Besides the computational overhead of having three separate components and the compounding error of autoregressive models, they can fall short in capturing global context, as the processing of the structural information is decomposed. Furthermore, merging the information coming from the different modules is also problematic, leading to further compounding of errors. Finally, in the case of highly sparse observations, spatial processing requires special care, as propagating information through partially observed spatiotemporal graphs adds another layer of complexity.
28
+
29
+ Our approach, named Spatiotemporal Point Inference Network (SPIN), is a graph attention network for MTSI, designed to learn representations of discrete points associated with nodes of a sequence of spatiotemporal graphs. We denote as observed set ${\mathcal{X}}_{t : t + T} = \left\{ {\left\langle {{\mathbf{x}}_{\tau }^{i},{\mathbf{q}}_{\tau }^{i}}\right\rangle \mid {\mathbf{m}}_{\tau }^{i} = 1 \land \tau \in \lbrack t,t + T)}\right\}$ the set of all observations, paired with their spatiotemporal coordinates. Conversely, we name target set ${\mathcal{Y}}_{t : t + T} = \left\{ {{\mathbf{q}}_{\tau }^{i} \mid {\mathbf{m}}_{\tau }^{i} = 0 \land \tau \in \lbrack t,t + T)}\right\}$ the complement set collecting the coordinates of the discrete spatiotemporal points for which we want to reconstruct an observation. Then, for all discrete points ${\mathbf{q}}_{\tau }^{i} \in {\mathcal{Y}}_{t : t + T}$ , SPIN is trained to learn a model
30
+
31
+ $$
32
+ {f}_{\theta }\left( {{\mathbf{q}}_{\tau }^{i} \mid {\mathcal{X}}_{t : t + T},\mathbf{A}}\right) \approx \mathbb{E}\left\lbrack {p\left( {{\mathbf{x}}_{\tau }^{i} \mid {\mathbf{q}}_{\tau }^{i},{\mathcal{X}}_{t : t + T},\mathbf{A}}\right) }\right\rbrack . \tag{1}
33
+ $$
34
+
35
+ To this end, SPIN learns a parameterized propagation process where each representation, corresponding to a node and time step, is updated by aggregating information from all the available observations acquired at neighboring nodes, weighted by input-dependent attention scores. The core component of SPIN is a novel sparse spatiotemporal attention layer (Figure 1) used to propagate information at the level of single observations. Indeed, leveraging on the attention mechanism, we learn representations for each $i$ -th node at each $\tau$ -th time step by simultaneously aggregating information from (1) the observed set of $i$ -th node ${\mathcal{X}}_{t : t + T}^{i}$ ; (2) the observed set ${\mathcal{X}}_{t : t + T}^{j}$ of its neighbors $j \in \mathcal{N}\left( i\right)$ .
36
+
37
38
+
39
+ Figure 1: Example of the sparse spatiotemporal attention layer acting for updating ${\mathbf{h}}_{\tau }^{i,\left( l\right) }$ , by simultaneously performing inter-node spatiotemporal cross-attention and intra-node temporal self-attention.
40
+
41
+ Let ${\mathbf{h}}_{\tau }^{i,\left( l\right) } \in {\mathbb{R}}^{{d}_{h}}$ be the learned representation for the $i$ -th node and time step $\tau$ at the $l$ -th layer. The encoding is initialized as $\operatorname{MLP}\left( {{\mathbf{x}}_{\tau }^{i},{\mathbf{q}}_{\tau }^{i}}\right)$ if observation ${\mathbf{x}}_{\tau }^{i}$ is valid, or $\operatorname{MLP}\left( {\mathbf{q}}_{\tau }^{i}\right)$ otherwise, where MLP is a multi-layer perceptron. The next steps involve computations of spatiotemporal messages, i.e., representations computed to propagate information from one discrete space-time point to another. We indicate the propagation along the temporal dimension from time step $s$ to time step $\tau$ as subscripts $s \rightarrow \tau$ . Similarly, superscripts $j \rightarrow i$ indicate messages sent from the $j$ -th node to the $i$ -th. To avoid overloading the notation, we omit the layer superscript in the following. The message ${\mathbf{r}}_{s \rightarrow \tau }^{j \rightarrow i} \in {\mathbb{R}}^{{d}_{h}}$ from the $j$ -th node at time step $s$ to the $i$ -th node at time step $\tau$ is computed with an MLP taking as input both source and target representations (Eq. 2). To account for spatial information, this mechanism is used to perform an inter-node temporal cross-attention, computing a message to ${\mathbf{h}}_{\tau }^{i}$ using encodings in ${\mathbf{h}}_{t : t + T}^{j}$ associated with a valid observation for every neighbor $j \in \mathcal{N}\left( i\right)$ (Eq. 3).
42
+
43
+ $$
44
+ {\mathbf{r}}_{s \rightarrow \tau }^{j \rightarrow i} = \operatorname{MLP}\left( {{\mathbf{h}}_{s}^{j},{\mathbf{h}}_{\tau }^{i}}\right) \tag{2}
45
+ $$
46
+
47
+ $$
48
+ {\mathcal{R}}_{\tau }^{j \rightarrow i} = \left\{ {{\mathbf{r}}_{s \rightarrow \tau }^{j \rightarrow i} \mid \left\langle {{\mathbf{x}}_{s}^{j},{\mathbf{q}}_{s}^{j}}\right\rangle \in {\mathcal{X}}_{t : t + T}}\right\} \tag{3}
49
+ $$
50
+
51
+ Messages in ${\mathcal{R}}_{\tau }^{j \rightarrow i}$ are then weighted by message scores ${\alpha }_{s \rightarrow \tau }^{j \rightarrow i}$ , computed by a linear projection of the messages in ${\mathcal{R}}_{\tau }^{j \rightarrow i}$ followed by a softmax layer, and aggregated to obtain an edge-level context vector ${e}_{\tau }^{j \rightarrow i}$ , encoding the observed sequence at each $j$ -th node w.r.t. the $i$ -th node and time step $\tau$ . Analogously, to account for the observed sequence of the $i$ -th node itself, we exploit an intra-node temporal self-attention mechanism to compute messages from the encodings ${\mathbf{h}}_{t : t + T}^{i}$ corresponding to valid observations, aggregated (weighted by message scores) to obtain a temporal context vector ${\mathbf{c}}_{\tau }^{i}$ . Then, target representation ${\mathbf{h}}_{\tau }^{i,\left( l\right) }$ is updated with a final aggregation step (Eq. 4), and imputations for all spatiotemporal points in ${\mathcal{Y}}_{t : t + T}$ are obtained - after $L$ layers - with a nonlinear readout (Eq. 5).
52
+
53
+ $$
54
+ {\mathbf{h}}_{\tau }^{i,\left( {l + 1}\right) } = \operatorname{MLP}\left( {{\mathbf{h}}_{\tau }^{i,\left( l\right) },{\mathbf{c}}_{\tau }^{i,\left( l\right) },\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{\mathbf{e}}_{\tau }^{j \rightarrow i,\left( l\right) }}\right) \tag{4}
55
+ $$
56
+
57
+ $$
58
+ {\widehat{\mathcal{Y}}}_{t : t + T} = \left\{ {{\widehat{\mathbf{x}}}_{\tau }^{i} = \operatorname{MLP}\left( {\mathbf{h}}_{\tau }^{i,\left( L\right) }\right) \mid {\mathbf{q}}_{\tau }^{i} \in {\mathcal{Y}}_{t : t + T}}\right\} \tag{5}
59
+ $$
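To make the propagation of Eqs. (2)-(5) concrete, the following single-layer sketch uses random weight matrices in place of the learned MLPs and a single attention head; it illustrates the computation flow, not the trained architecture, and the toy adjacency and sizes are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, d = 8, 3, 4                         # toy window, nodes, hidden size
h = rng.normal(size=(T, N, d))            # encodings h_tau^i
mask = rng.random((T, N)) < 0.6           # valid-observation mask
neighbors = {0: [1], 1: [0, 2], 2: [1]}   # toy adjacency

def mlp(z, W):                            # stand-in for a learned MLP
    return np.tanh(z @ W)

W_msg = rng.normal(size=(2 * d, d))       # message network (Eq. 2)
w_score = rng.normal(size=d)              # linear scoring before softmax
W_upd = rng.normal(size=(3 * d, d))       # update network (Eq. 4)

def context(h_tgt, sources):
    """Compute messages from `sources` to the target encoding (Eq. 2),
    score them with a softmax, and aggregate into a context vector."""
    msgs = np.stack([mlp(np.concatenate([h_s, h_tgt]), W_msg) for h_s in sources])
    scores = msgs @ w_score
    alpha = np.exp(scores - scores.max()); alpha /= alpha.sum()
    return (alpha[:, None] * msgs).sum(0)

h_new = np.zeros_like(h)
for tau in range(T):
    for i in range(N):
        # Intra-node temporal self-attention over node i's valid steps.
        own = [h[s, i] for s in range(T) if mask[s, i]]
        c = context(h[tau, i], own) if own else np.zeros(d)
        # Inter-node cross-attention: one context vector per neighbor (Eq. 3),
        # summed over the neighborhood as in Eq. 4.
        e = np.zeros(d)
        for j in neighbors[i]:
            obs = [h[s, j] for s in range(T) if mask[s, j]]
            if obs:
                e += context(h[tau, i], obs)
        h_new[tau, i] = mlp(np.concatenate([h[tau, i], c, e]), W_upd)  # Eq. 4
```

Note that only encodings associated with valid observations ever act as message sources, which is the sparsity-aware design choice discussed above.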
60
+
61
+ Hierarchical attention. Roughly speaking, the proposed spatiotemporal attention mechanism can be viewed as performing attention over the spatiotemporal graph $\mathcal{S}$ , obtained by considering the product graph between the space and time dimensions. Performing graph attention on the surrogate graph $\mathcal{S}$ has time and memory complexities that scale with $O\left( {\left( {N + E}\right) {T}^{2}}\right)$ , with $N, E$ being the largest number of nodes and edges, respectively, among graphs in ${\mathcal{G}}_{t : t + T}$ . To reduce this computational burden - which undermines the application of the proposed method to large graphs and long time horizons - we propose to rewire the attention mechanism to be hierarchical [27]. We do this by adding $K$ dummy nodes that act as hubs for propagating information. Let ${\mathbf{Z}}^{i} \in {\mathbb{R}}^{K \times {d}_{z}}$ be the hub nodes' representations for central node $i$ ; then, for each hub $k$ : (1) update ${\mathbf{z}}_{k}^{i}$ by querying $\left\{ {{\mathbf{h}}_{\tau }^{i} \mid \left\langle {{\mathbf{x}}_{\tau }^{i},{\mathbf{q}}_{\tau }^{i}}\right\rangle \in {\mathcal{X}}_{t : t + T}}\right\}$ , i.e., node encodings associated with valid observations; (2) update node
62
+
63
+ Table 1: Performance (MAE) with increasing data sparsity (average over 5 evaluation masks).
64
+
65
+ \begin{tabular}{lccccccccc}
+ \toprule
+  & \multicolumn{3}{c}{METR-LA} & \multicolumn{3}{c}{PEMS-BAY} & \multicolumn{3}{c}{AQI} \\
+ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10}
+  & 5\% & 10\% & 15\% & 5\% & 10\% & 15\% & 5\% & 10\% & 15\% \\
+ \midrule
+ BRITS & $5.87 \pm 0.03$ & $7.26 \pm 0.06$ & $8.29 \pm 0.07$ & $4.14 \pm 0.05$ & $5.41 \pm 0.08$ & $5.84 \pm 0.04$ & $24.09 \pm 0.30$ & $31.90 \pm 0.26$ & $37.62 \pm 0.42$ \\
+ SAITS & $4.73 \pm 0.07$ & $6.66 \pm 0.05$ & $7.27 \pm 0.03$ & $3.88 \pm 0.09$ & $7.62 \pm 0.21$ & $8.01 \pm 0.11$ & $20.78 \pm 0.30$ & $30.16 \pm 0.39$ & $36.34 \pm 0.33$ \\
+ Transformer & $6.03 \pm 0.04$ & $7.19 \pm 0.05$ & $8.06 \pm 0.05$ & $3.69 \pm 0.06$ & $5.09 \pm 0.05$ & $6.02 \pm 0.04$ & $29.21 \pm 0.33$ & $33.62 \pm 0.16$ & $37.31 \pm 0.14$ \\
+ GRIN & $3.05 \pm 0.02$ & $4.52 \pm 0.05$ & $5.82 \pm 0.06$ & $2.26 \pm 0.03$ & $3.45 \pm 0.06$ & $4.35 \pm 0.04$ & $15.62 \pm 0.24$ & $22.08 \pm 0.39$ & $29.03 \pm 0.42$ \\
+ SPIN & $2.71 \pm 0.02$ & $3.32 \pm 0.02$ & $3.87 \pm 0.05$ & $1.78 \pm 0.03$ & $2.15 \pm 0.03$ & $2.41 \pm 0.02$ & $14.29 \pm 0.24$ & $18.71 \pm 0.34$ & $24.34 \pm 0.46$ \\
+ SPIN-H & $2.64 \pm 0.02$ & $3.17 \pm 0.02$ & $3.61 \pm 0.04$ & $1.75 \pm 0.04$ & $2.16 \pm 0.03$ & $2.48 \pm 0.02$ & $14.55 \pm 0.26$ & $19.37 \pm 0.36$ & $25.38 \pm 0.37$ \\
+ \bottomrule
+ \end{tabular}
92
+ encoding ${\mathbf{h}}_{\tau }^{i}$ by querying updated ${\mathbf{Z}}^{i}$ and ${\mathbf{Z}}^{j}$ of every $j$ -th neighbor in $\mathcal{N}\left( i\right)$ . In this way, we can reduce the spatiotemporal attention complexity to $O\left( {\left( {N + E}\right) {KT}}\right)$ , with $K \ll T$ , at the cost of introducing an information bottleneck.
93
+
94
+ § 4 EMPIRICAL EVALUATION
95
+
96
+ In this section, we evaluate our method on three real-world datasets and compare its performance against state-of-the-art methods and standard approaches for MTSI. In the following experiments, we consider both SPIN and its hierarchical version SPIN-H. The figure of merit used is the mean absolute error (MAE), averaged across imputation windows. Similarly to previous works [6], we consider only the out-of-sample scenario, in which every parametric model is trained and tested on disjoint sets. We consider three openly available datasets coming from real-world SNs. The first two, namely PEMS-BAY and METR-LA [2], record traffic measurements and are both widely used benchmarks in the spatiotemporal forecasting literature. We use the same setup as [6] to inject missing data with the Point missing policy, in which we randomly drop 25% of the available data. As a third dataset, we consider AQI $\left\lbrack {{28},{29}}\right\rbrack$ , which collects hourly measurements of air pollutants from 437 air quality monitoring stations in China. We also consider a smaller version of this dataset (AQI-36) with only the 36 sensors in the city of Beijing. We use the same missing data distribution used in [6, 29]. In all settings, the valid observations masked out are used as targets for evaluation. We obtain an adjacency matrix from the pairwise distances of sensors following previous works [2, 3, 6]. We compare our methods against (1) GRIN [6], a graph-based bidirectional RNN for MTSI with state-of-the-art performance; (2) a spatiotemporal Transformer, in which we alternate temporal and spatial Transformer encoder layers from [19] and replace missing values with a [MASK] token (as in [30]); (3) SAITS [16], a recent attention-based architecture; (4) BRITS [5], which leverages a bidirectional RNN. We assess how performance changes as the percentage of missing values increases. In practice, we change the missing data distribution at test time, simulating the case in which, at each time step, every sensor has a constant probability $\bar{p}$ of going offline for a random number $S \sim \mathcal{U}\left( {{12},{36}}\right)$ of future (consecutive) time steps. Table 1 shows results for all datasets with $\bar{p} = 5\%$ , $\bar{p} = {10}\%$ , and $\bar{p} = {15}\%$ . Note that, depending on the dataset, the portion of valid observations in each of these cases amounts to $\approx {25} - {30}\%$ , $\approx 8 - {10}\%$ , and $\approx 3 - 4\%$ , respectively. SPIN models, differently from the baselines, can handle all the considered scenarios. In particular, improvements in performance w.r.t. the best performing baseline (GRIN) are more evident as the number of available observations decreases. Indeed, the sparse spatiotemporal attention mechanism of SPIN is not autoregressive and allows for unbounded memory capacity. Note also that SPIN-H performs on par with (and in some cases better than) SPIN, making it a valid lightweight alternative. In Appendix A, we show that SPIN-based models perform on par with or better than state-of-the-art methods on standard benchmarks.
97
+
98
+ § 5 CONCLUSIONS
99
+
100
+ We introduced a graph-based attention network, named SPIN, to reconstruct missing observations in sparse spatiotemporal time series. We showed how the time and space complexities of the approach can be drastically reduced considering a novel hierarchical attention mechanism. Empirical analysis shows that the proposed method widely outperforms state-of-the-art methods for imputation in highly sparse settings.
papers/LOG/LOG 2022/LOG 2022 Conference/YcnAf3cEvH3/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,115 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Graph Machine Learning for Assembly Modeling
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Assembly modeling refers to the design engineering process of composing assemblies (e.g., machines or machine components) from a common catalog of existing components. There is a natural correspondence of assemblies to graphs which can be exploited for services based on graph machine learning such as component recommendation, clustering/taxonomy creation, or anomaly detection. However, this domain imposes particular challenges such as the treatment of unknown or new components, ambiguously extracted edges, incomplete information about the design sequence, interaction with design engineers as users, to name a few. Based on our initial results on component recommendation using GATs and GCNs, we present a novel data set along with open research questions.
12
+
13
+ ## 1 Assembly Modeling
14
+
15
+ Assemblies are groups of components that make up a product (see Figure 1). In computer-aided design (CAD), assembly modeling refers to designing a new product based on existing components - think of a cabinet that consists of screws, doors, and hinges, or a bike that consists of a frame, wheels, etc. [1]. The connection type (e.g., welding or fastening using bolts) may contain geometric information or constraints that are also part of the assembly model. By its very nature, assembly modeling gives rise to a number of interesting novel applications for graph machine learning. Note that assembly modeling in this paper refers to the act of designing a new product using the same library of existing components, whereas other lines of work (e.g., [2]) emphasize the computer vision perspective of perceiving physical components - also using geometric deep learning. Our goal is to support design engineers, e.g., by suggesting next components to insert or categorizing the existing components by their usage.
16
+
17
+ Some challenges that manufacturing companies face are:
18
+
19
+ - Assemblies similar to existing ones frequently need to be designed and adjusted in accordance with customer specifications (e.g., in special mechanical engineering).
20
+
21
+ - Knowledge about proven component combinations (e.g., particular hinges and doors, screws and bolts, ...) is available to senior design engineers and may follow a desirable component management, but it is not made explicit and enforced in CAD software.
22
+
23
+ - Assembly models are produced in an arbitrary sequence which depends on individual preferences (e.g., whether to start working on the front or the back wheel of a bicycle is arbitrary); moreover, this insertion ordering is not stored in the final design by common CAD tools.
24
+
25
+ - Extracting a useful graph from CAD assembly models to begin with is not obvious. Although so-called "mates" specify a connection between components in a design to, e.g., jointly perform rotations, they are sometimes used for convenience in the CAD tool (cf. grouping elements) instead of actually denoting a physical connection or meaningful co-occurrence that could be reused.
26
+
27
+ In this extended abstract, we highlight opportunities for the graph machine learning community to work on CAD assembly modeling as a novel application along with an accompanying data set [3].
28
+
29
+ Formally, in our problem setting we assume a set of component types $\mathcal{C}$ (e.g., a particular type of screw, hinge, etc.) on which a data set of $N$ assemblies ${\left\{ {A}_{i}\right\} }_{i = 1}^{N}$ is based. An assembly ${A}_{i}$ specifies its contained components (multiple instances of the same type are possible) as nodes ${\mathcal{N}}^{{A}_{i}}$ , and information about connected components as edges ${\mathcal{E}}^{{A}_{i}} \subseteq {\mathcal{N}}^{{A}_{i}} \times {\mathcal{N}}^{{A}_{i}}$ . Each component $c \in {\mathcal{N}}^{{A}_{i}}$ is an instance of a component type, i.e., $\mathcal{T}\left( c\right) \in \mathcal{C}$ . Consequently, each assembly is represented as an undirected graph where the nodes are heterogeneous (different component types) and the edges are homogeneous (only one type of edge - expressing connectivity in a design - is allowed). Note, however, that we do not assume these graphs to be simply given; rather, we consider their extraction from CAD assembly models (beyond well-defined "mates") a vital challenge of this application scenario.
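The formalization above can be sketched with a minimal typed-graph data structure (the class, method names, and the toy assembly are our own, for illustration only):

```python
from dataclasses import dataclass, field

@dataclass
class Assembly:
    """An assembly: typed nodes and untyped, undirected edges."""
    types: dict = field(default_factory=dict)   # node id -> component type T(c)
    edges: set = field(default_factory=set)     # undirected edges as frozensets

    def add_component(self, node, ctype):
        self.types[node] = ctype

    def connect(self, u, v):
        # Edges are only allowed between components of the assembly.
        assert u in self.types and v in self.types
        self.edges.add(frozenset((u, v)))

# A toy assembly with repeated component types (cf. Figure 1).
a = Assembly()
for node, ctype in [(0, "A"), (1, "B"), (2, "B"), (3, "C"), (4, "D")]:
    a.add_component(node, ctype)
for u, v in [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]:
    a.connect(u, v)
```

Storing edges as frozensets makes the undirectedness explicit: `connect(u, v)` and `connect(v, u)` produce the same edge.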
30
+
31
+ ![01963ee2-1e0b-777f-834f-0a018c385855_1_333_204_1130_332_0.jpg](images/01963ee2-1e0b-777f-834f-0a018c385855_1_333_204_1130_332_0.jpg)
32
+
33
+ Figure 1: Assembly models (here, a jaw of a gripper) contain the structure of the included components. Instances of the same component type (here, A, B, C, D) may occur multiple times.
34
+
35
+ ## 2 Graph ML Use Cases in Assembly Modeling
36
+
37
+ Given a data collection of assemblies as described, there are several application scenarios for graph machine learning. With respect to the distinction into structural (graph structure is explicit as in molecule generation) and non-structural scenarios (graphs are implicit and derived from text or images) given in [4], the proposed applications fall into a "semi-structural" category since some edges can be extracted canonically from defined CAD mates whereas others need to be extracted from, e.g., geometric proximity or could be defined by designers out of convenience with no actual meaning.
38
+
39
+ ### 2.1 Component Recommendation
40
+
41
+ In assembly modeling, design engineers can choose from a variety of existing components, which makes selecting the right ones a cumbersome task. Past assemblies contain information both on the collection of used components and on their combination to solve a specific task. We assume that components which are used together frequently are causally related and that, therefore, components that are likely to be inserted next can be predicted using graph ML. In previous work [5], recommending the next required component types during construction based on GNNs has already been investigated, showing promising results - i.e., learning $P\left( {\mathcal{C} \mid {A}_{i}^{t}}\right)$ , where ${A}_{i}^{t}$ refers to the state of an assembly at time step $t$ . However, the presented approach does not indicate where to attach the recommended component in the partial design. This may be acceptable for small assemblies, but becomes unwieldy for large assemblies with plenty of possible extension nodes. A natural next step is to predict applicable component types for every component that is already part of the assembly - i.e., learning $P\left( {\mathcal{C} \mid c \in {A}_{i}^{t}}\right)$ to localize the component recommendation within an assembly. Due to this autoregressive nature of component recommendation, it bears similarity to graph generation, although the focus is on incremental steps with user interaction as opposed to learning an assembly distribution $P\left( A\right)$ all at once, as discussed in Section 3. Since the main goal of component recommendation is to provide a design engineer with a small set of relevant component types to reduce the cognitive burden, approaches using conformal prediction [6] could prove useful.
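As a baseline illustration of localized recommendation, the sketch below estimates which component types tend to be attached to a given anchor type from simple edge co-occurrence counts; this frequency baseline stands in for the GNN of [5], and the toy assemblies are assumptions:

```python
from collections import Counter, defaultdict

# Past assemblies as (types, edges) pairs: types maps node id -> component type.
past = [
    ({0: "door", 1: "hinge", 2: "screw"}, [(0, 1), (1, 2)]),
    ({0: "door", 1: "hinge", 2: "hinge"}, [(0, 1), (0, 2)]),
    ({0: "frame", 1: "wheel", 2: "wheel"}, [(0, 1), (0, 2)]),
]

# Empirical co-occurrence: which types are directly connected to which.
cooc = defaultdict(Counter)
for types, edges in past:
    for u, v in edges:
        cooc[types[u]][types[v]] += 1
        cooc[types[v]][types[u]] += 1

def recommend(anchor_type, k=2):
    """Top-k component types to attach next to a component of `anchor_type`,
    with their empirical conditional probabilities."""
    counts = cooc[anchor_type]
    total = sum(counts.values())
    return [(t, c / total) for t, c in counts.most_common(k)]
```

For example, `recommend("door")` suggests `hinge`, since every door in the toy data is connected to hinges.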
42
+
43
+ ### 2.2 Anomaly Detection of Mismatching Components
44
+
45
+ A second (unsupervised) use case consists of detecting anomalies in assembly models, such as an unexpected choice of particular component types (e.g., screws from a different manufacturer) or rarely used substructures that could hint at an unconventional way of solving a design task. From a business perspective, companies might limit their procurement to a set of well-known component types (better contracts with manufacturers, more reliable during the product lifecycle) within their strategic component management. Anomalous assembly models might emerge, e.g., from starting a new model based on a much earlier project with some components having become obsolete, or simply from a lack of experience/knowledge on behalf of the design engineer. From a graph ML perspective, both identifying anomalous graphs in a database and identifying anomalous graph objects (nodes, edges) need to be addressed [7], in particular to show design engineers or procurers where and how the assembly deviates.
46
+
47
+ ### 2.3 Creating a Taxonomy of Components
48
+
49
+ Third, using node embeddings ${h}_{c}$ of the components ${\left\{ c \in {A}_{i}\right\} }_{i = 1}^{N}$ , obtained with techniques such as node2vec [8], DeepWalk [9], or comp2vec [5], for visualization and clustering could also aid companies in their component management. The availability of well-curated, hierarchical taxonomies of component types depends on the level of maturity of a company and traditionally requires significant manual effort. A data-driven solution that exploits usage patterns in assembly models could better organize a company's frequently used component types. There has been an interest in making these embeddings (or latent representations) of nodes more interpretable to humans [10], which is what is needed for this task. However, the graphs retrieved from assemblies do not show homophily - an underlying assumption of many existing node embedding techniques: two components that are connected are most likely not similar but rather complementary (e.g., a door and a hinge). Here, synonymity tends to be a better notion of similarity, i.e., components are similar if they can be used in the same contexts.
50
+
51
+ ## 3 Related Work
52
+
53
+ The task of component recommendation bears some similarities to graph generation, i.e., approximating ${P}_{\text{data }}\left( G\right)$ with a parametrizable ${P}_{\text{model }}\left( {G \mid \theta }\right)$ . The graph can be generated either all at once, for example using variational autoencoders or generative adversarial networks, or incrementally by so-called autoregressive models that predict single or multiple nodes or edges step by step - conditioned on an intermediate state of the graph. Since our goal is to support design engineers by presenting suggestions instead of taking over the whole task, we go with the second approach. During CAD modeling, we want to allow changes on the partial assembly by designers. Therefore, we also need a model that can generate arbitrarily large graphs which is typically not the case for non-incremental models.
54
+
55
+ Generating graphs with matching structural characteristics to the training data along with handling only one node type (i.e., $\left| \mathcal{C}\right| = 1$ ) as done using recurrent neural networks in GRAN [11] or GraphRNN [12] is not sufficient: the relevant information for predicting next needed components lies more in the already used component types, i.e., the node features, than the structure of the graph. The focus is mainly on the type of component that should be added and only secondarily where to insert it into the existing graph. Since these approaches only evaluate the final graph structure (in particular, in terms of aggregated graph statistics such as degree distributions) without incorporating its intermediate states, canonical numbering of nodes can be performed for generating training instances, keeping their number small as no node permutations need to be considered. Common choices for GRAN or GraphRNN are breadth-first or depth-first traversals starting from the most connected node or random orderings. For assemblies, however, designers can start with any component or subgraph, followed by a generation sequence depending on the designer's preferences. Therefore, the authors in [5] create instances for every possible creation sequence of an assembly by iteratively cutting off nodes that serve as labels for the resulting partial assemblies. Unlike [13], in these approaches newly added nodes are always connected to the previous graph structure, which we want to enforce during construction.
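The instance-creation idea of considering every creation sequence that keeps the partial assembly connected can be sketched as follows (a brute-force toy version over a tiny graph; this is illustrative, not the implementation of [5]):

```python
from itertools import permutations

def connected(nodes, edges):
    """DFS connectivity check on the subgraph induced by `nodes`."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == nodes

def insertion_sequences(nodes, edges):
    """All orderings in which every prefix forms a connected partial assembly."""
    return [p for p in permutations(nodes)
            if all(connected(p[:k], edges) for k in range(1, len(p) + 1))]

# Path graph 0-1-2: (0, 2, 1) is invalid because the prefix {0, 2}
# is disconnected; 4 of the 6 orderings remain valid.
seqs = insertion_sequences([0, 1, 2], [(0, 1), (1, 2)])
print(len(seqs))  # → 4
```

Each prefix of a valid sequence yields one training instance, with the next inserted node serving as the label for the resulting partial assembly.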
56
+
57
+ Molecule graph generation refers to generating valid molecules with desired chemical properties, incorporating various types of nodes (i.e., different atoms) and even various types of connections (which is not necessary for the current representation of assemblies). While guaranteeing the validity of the generated graph (as in [14] concerning the chemical structure of the generated molecule) may be assumed to be an important aspect for assembly modeling as well, this validity check turns out to be less straightforward: the number of connection points of a component is typically either not available or misleading, since design engineers may adapt a component's geometry, e.g., by drilling holes, in order to attach additional components. Nevertheless, this application domain seems to be the most similar to assembly modeling in terms of data representation. Note, however, that here too only the final generated graphs are relevant for evaluating the generation model - as expounded above, the intermediate steps also matter for component recommendation.
58
+
59
+ ## 4 Open Questions
60
+
61
+ The domain of assembly modeling imposes particular challenges that can stimulate further research in graph machine learning, as described in detail in the following.
62
+
63
+ How to deal with evolving data sets? Over time, the component catalogs may get updated, and new catalogs and components may be incorporated into a company's component library. In particular at test time, we might be confronted with component types in assemblies that were not available during training. This setting confronts us with so-called attribute-missing graphs, where all attributes of a subset of nodes are missing, as opposed to attribute-incomplete graphs [15], whose nodes all have non-empty attribute sets and which are typically treated by value imputation techniques, either in a preprocessing step (e.g., [16] or [17]) or while processing the graph in the model (e.g., [18] or [19]). Methods based on the homophily assumption are not applicable, since the assumption of connected nodes being similar is clearly violated in the assembly modeling use cases: connected components typically serve different purposes. Initial work has been done on handling attribute-missing graphs, e.g., [15], which makes a shared-latent-space assumption on graphs, resulting in a new form of GNN called SAT; its applicability to assembly modeling needs to be investigated.
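As a minimal baseline for unseen component types at test time - far simpler than the SAT approach of [15] - one could back off to a shared fallback feature vector; the type names and feature values below are made up for illustration:

```python
# Hypothetical per-type feature vectors known at training time.
type_features = {
    "screw_m4": [1.0, 0.0, 0.2],
    "hinge_std": [0.0, 1.0, 0.4],
}

# Fallback for unseen types: the mean feature vector over known types.
dim = len(next(iter(type_features.values())))
mean = [sum(f[i] for f in type_features.values()) / len(type_features)
        for i in range(dim)]

def features_for(component_type):
    """Known types get their own vector; unknown ones get the global mean."""
    return type_features.get(component_type, mean)

print(features_for("screw_m4"))      # known type: its own features
print(features_for("bolt_new2024"))  # unseen type: global-mean fallback
```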
64
+
65
+ How to handle ambiguous edges? As already introduced in Section 1, extracting a graph structure from an assembly model is an ambiguous task, as only some mating relations may be given in the CAD system - some of them even serving other purposes than denoting meaningful connections (e.g., simply supporting the designer's workflow). Consequently, extracting a graph structure from an assembly is not straightforward and, when based on the geometric proximity of components, a computationally expensive approach. Even in a perfect world where all components are connected by mates in a meaningful way, this graph structure may be insufficient for the learning task (as mentioned in [20] and [21]), because components that are far away from each other according to the graph structure may have a relationship that is relevant for recommending next components. Graph rewiring may be a promising solution for this issue, transforming the initial graph structure by adding and removing edges to improve information processing. Moreover, due to the novelty of the application domain, it is still unclear whether a graph structure is really helpful for solving assembly modeling tasks; possibly, a set-based approach could perform equally well or even better.
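A toy version of graph rewiring (illustrative only; practical rewiring schemes are more selective about which edges to add or remove) inserts shortcut edges between all nodes at graph distance two:

```python
def rewire_two_hop(n, edges):
    """Toy rewiring: add an edge between every pair of nodes at
    graph distance exactly 2, shortening information paths."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    new_edges = set(frozenset(e) for e in edges)
    for u in range(n):
        for mid in adj[u]:
            for v in adj[mid]:
                if v != u and v not in adj[u]:
                    new_edges.add(frozenset((u, v)))
    return sorted(tuple(sorted(e)) for e in new_edges)

# Path 0-1-2-3: rewiring adds the shortcuts (0, 2) and (1, 3).
print(rewire_two_hop(4, [(0, 1), (1, 2), (2, 3)]))
# → [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
```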
66
+
67
+ How to improve intuitive sequence generation and interactive inference? Especially for component recommendation, the proposed sequence of component insertions needs to be intuitive in the eyes of the design engineers who interact with the assembly modeling tool. There is no obvious way to extract a sequence from a data set of graphs - as is done in generative graph models such as GraphRNN or GRAN. Either all possible insertion sequences (that leave the assembly connected) or a sample thereof need to be considered - as done in [5] - or the data sets need to be augmented with insertion sequences.
68
+
69
+ Finally, we encourage readers to investigate the data set [3] and identify similarities with their preferred data sets or applicability of their methods that can address the above challenges.
70
+
71
+ ## References
72
+
73
+ [1] Stephen J Schoonmaker. The CAD guidebook: A basic manual for understanding and improving computer-aided design. CRC Press, 2002. 1
74
+
75
+ [2] Guanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas J Guibas, Hao Dong, et al. Generative 3d part assembly via dynamic graph learning. Advances in Neural Information Processing Systems, 33:6315-6326, 2020. 1
76
+
77
+ [3] Carola Gajek. ECML22 GRAPE Data. 7 2022. doi: 10.6084/m9.figshare.20239767.v1. URL https://figshare.com/articles/dataset/ECML22_GRAPE_Data/20239767. 1, 4
78
+
79
+ [4] Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and
80
+
81
+ applications. AI Open, 1:57-81, 2020. ISSN 2666-6510. doi: https://doi.org/10.1016/j.aiopen.2021.01.001. 2
82
+
83
+ [5] Carola Gajek, Alexander Schiendorfer, and Wolfgang Reif. A recommendation system for cad assembly modeling based on gnns. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2022. 2, 3, 4
84
+
85
+ [6] Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3), 2008. 2
86
+
87
+ [7] Xiaoxiao Ma, Jia Wu, Shan Xue, Jian Yang, Chuan Zhou, Quan Z Sheng, Hui Xiong, and Leman Akoglu. A comprehensive survey on graph anomaly detection with deep learning. IEEE Transactions on Knowledge and Data Engineering, 2021. 3
88
+
89
+ [8] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855-864, 2016. 3
90
+
91
+ [9] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701-710, 2014. 3
92
+
93
+ [10] Ayushi Dalmia and Manish Gupta. Towards interpretation of node embeddings. In Companion Proceedings of the The Web Conference 2018, pages 945-952, 2018. 3
94
+
95
+ [11] Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, William L. Hamilton, David Duvenaud, Raquel Urtasun, and Richard Zemel. Efficient Graph Generation with Graph Recurrent Attention Networks. Curran Associates Inc., Red Hook, NY, USA, 2019. 3
96
+
97
+ [12] Jiaxuan You, Rex Ying, Xiang Ren, William Hamilton, and Jure Leskovec. GraphRNN: Generating realistic graphs with deep auto-regressive models. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5708-5717. PMLR, 10-15 Jul 2018. 3
98
+
99
+ [13] Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, and Peter Battaglia. Learning Deep Generative Models of Graphs, 2018. 3
100
+
101
+ [14] Chence Shi, Minkai Xu, Zhaocheng Zhu, Weinan Zhang, Ming Zhang, and Jian Tang. Graphaf: a flow-based autoregressive model for molecular graph generation. 2020. doi: 10.48550/ARXIV. 2001.09382. 3
102
+
103
+ [15] Xu Chen, Siheng Chen, Jiangchao Yao, Huangjie Zheng, Ya Zhang, and Ivor W. Tsang. Learning on attribute-missing graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(2):740-757, 2022. doi: 10.1109/TPAMI.2020.3032189. 4
104
+
105
+ [16] Joonyoung Yi, Juhyuk Lee, Kwang Joon Kim, Sung Ju Hwang, and Eunho Yang. Why not to use zero imputation? correcting sparsity bias in training neural networks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. 4
106
+
107
+ [17] Indro Spinelli, Simone Scardapane, and Aurelio Uncini. Missing data imputation with adversarially-trained graph convolutional networks. Neural Networks, 129:249-260, 2020. ISSN 0893-6080. doi: https://doi.org/10.1016/j.neunet.2020.06.005. 4
108
+
109
+ [18] Hibiki Taguchi, Xin Liu, and Tsuyoshi Murata. Graph convolutional networks for graphs containing missing features. Future Generation Computer Systems, 117:155-168, 2021. ISSN 0167-739X. doi: https://doi.org/10.1016/j.future.2020.11.016. 4
110
+
111
+ [19] Jiaxuan You, Xiaobai Ma, Yi Ding, Mykel J Kochenderfer, and Jure Leskovec. Handling missing data with graph representation learning. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 19075-19087. Curran Associates, Inc., 2020. 4
112
+
113
+ [20] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications, 2020. 4
114
+
115
+ [21] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. ArXiv, abs/2111.14522, 2022. 4
papers/LOG/LOG 2022/LOG 2022 Conference/YcnAf3cEvH3/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,69 @@
1
+ § GRAPH MACHINE LEARNING FOR ASSEMBLY MODELING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Assembly modeling refers to the design engineering process of composing assemblies (e.g., machines or machine components) from a common catalog of existing components. There is a natural correspondence of assemblies to graphs which can be exploited for services based on graph machine learning such as component recommendation, clustering/taxonomy creation, or anomaly detection. However, this domain imposes particular challenges such as the treatment of unknown or new components, ambiguously extracted edges, incomplete information about the design sequence, interaction with design engineers as users, to name a few. Based on our initial results on component recommendation using GATs and GCNs, we present a novel data set along with open research questions.
12
+
13
+ § 1 ASSEMBLY MODELING
14
+
15
+ Assemblies are groups of components that make up a product (see Figure 1). In computer-aided design (CAD), assembly modeling refers to designing a new product based on existing components - think of a cabinet that consists of screws, doors and hinges, or a bike that consists of a frame, wheels, etc. [1]. The connection type (e.g., welding or fastening using bolts) may contain geometric information or constraints that are also part of the assembly model. By its very nature, assembly modeling gives rise to a number of interesting novel applications for graph machine learning. Note that assembly modeling in this paper refers to the act of designing a new product using the same library of existing components, whereas other lines of work (e.g., [2]) emphasize the computer vision perspective of perceiving physical components - also using geometric deep learning. Our goal is to support design engineers, e.g., by suggesting next components to insert or categorizing the existing components by their usage.
16
+
17
+ Some challenges that manufacturing companies face are:
18
+
19
+ * Assemblies similar to existing ones frequently need to be designed and adjusted - in accordance with customer specifications (e.g., in special mechanical engineering).
20
+
21
+ * Knowledge about proven component combinations (e.g., particular hinges and doors, screws and bolts, ...) is available to senior design engineers and may follow a desirable component management, but it is not made explicit and enforced in CAD software.
22
+
23
+ * Assembly models are produced in an arbitrary sequence which depends on individual preferences (e.g., whether to start working on the front or the back wheel of a bicycle is arbitrary); moreover, this insertion ordering is not stored in the final design by common CAD tools.
24
+
25
+ * Extracting a useful graph from CAD assembly models to begin with is not obvious. Although so-called "mates" specify a connection between components in a design to, e.g., jointly perform rotations, they are sometimes used for convenience in the CAD tool (cf. grouping elements) instead of actually denoting a physical connection or meaningful co-occurrence that could be reused.
26
+
27
+ In this extended abstract, we highlight opportunities for the graph machine learning community to work on CAD assembly modeling as a novel application along with an accompanying data set [3].
28
+
29
+ Formally, in our problem setting we assume a set of component types $\mathcal{C}$ (e.g., a particular type of screw, hinge, etc.) on which a data set of $N$ assemblies ${\left\{ {A}_{i}\right\} }_{i = 1}^{N}$ is based. An assembly ${A}_{i}$ specifies its constituent components (multiple instances of the same type are possible) as nodes ${\mathcal{N}}^{{A}_{i}}$ , and information about connected components as edges ${\mathcal{E}}^{{A}_{i}} \subseteq {\mathcal{N}}^{{A}_{i}} \times {\mathcal{N}}^{{A}_{i}}$ . Each component $c \in {\mathcal{N}}^{{A}_{i}}$ is an instance of a component type - i.e., $\mathcal{T}\left( c\right) \in \mathcal{C}$ . Consequently, each assembly is represented as an undirected graph where the nodes are heterogeneous (different component types) and the edges are homogeneous (only one type of edge - expressing connectivity in a design - is allowed). Note, however, that we do not assume these graphs to be simply given; rather, we consider their extraction from CAD assembly models (beyond well-defined "mates") a vital challenge of this application scenario.
30
+
31
+ <graphics>
32
+
33
+ Figure 1: Assembly models (here, a jaw of a gripper) contain the structure of the included components. Multiple instances of the same component type (here, A, B, C, D) may occur.
34
+
35
+ § 2 GRAPH ML USE CASES IN ASSEMBLY MODELING
36
+
37
+ Given a data collection of assemblies as described, there are several application scenarios for graph machine learning. With respect to the distinction into structural (graph structure is explicit as in molecule generation) and non-structural scenarios (graphs are implicit and derived from text or images) given in [4], the proposed applications fall into a "semi-structural" category since some edges can be extracted canonically from defined CAD mates whereas others need to be extracted from, e.g., geometric proximity or could be defined by designers out of convenience with no actual meaning.
38
+
39
+ § 2.1 COMPONENT RECOMMENDATION
40
+
41
+ In assembly modeling, design engineers can choose from a variety of existing components, making selecting the right ones a cumbersome task. Past assemblies contain information both on the collection of used components and on their combination to solve a specific task. We assume that components which are used together frequently are causally related and that, therefore, components which are likely to be inserted next can be predicted using graph ML. In previous work [5], recommending next required component types during construction based on GNNs has already been investigated with promising results - i.e., learning $P\left( {\mathcal{C} \mid {A}_{i}^{t}}\right)$ where ${A}_{i}^{t}$ refers to the state of an assembly at time step $t$ . However, the presented approach does not specify where to attach the recommended component in the partial design. This may be acceptable for small assemblies, but becomes unwieldy for large assemblies with plenty of possible extension nodes. A natural next step is to predict applicable component types for every component that is already part of the assembly - i.e., learning $P\left( {\mathcal{C} \mid c \in {A}_{i}^{t}}\right)$ to localize the component recommendation within an assembly. Due to this autoregressive nature of component recommendation, it bears similarity to graph generation, although the focus is on incremental steps with user interaction as opposed to learning an assembly distribution $P\left( A\right)$ all at once, as will be discussed in Section 3. Since the main goal of component recommendation is to provide a design engineer with a small set of relevant component types to reduce the cognitive burden, approaches using conformal prediction [6] could prove useful.
42
+
43
+ § 2.2 ANOMALY DETECTION OF MISMATCHING COMPONENTS
44
+
45
+ A second (unsupervised) use case consists of detecting anomalies in assembly models, such as an unexpected choice of particular component types (e.g., screws from a different manufacturer) or rarely used substructures that could hint at an unconventional way of solving a design task. From a business perspective, companies might limit their procurement to a set of well-known component types (better contracts with manufacturers, more reliable during the product lifecycle) within their strategic component management. Anomalous assembly models might emerge, e.g., from starting a new model based on a much earlier project with some components having become obsolete, or simply from a lack of experience/knowledge on behalf of the design engineer. From a graph ML perspective, both identifying anomalous graphs in a database and identifying anomalous graph objects (nodes, edges) need to be addressed [7], in particular to show design engineers or procurers where and how an assembly is deviating.
46
+
47
+ § 2.3 CREATING A TAXONOMY OF COMPONENTS
48
+
49
+ Third, using node embeddings ${h}_{c}$ of the components ${\left\{ c \in {A}_{i}\right\} }_{i = 1}^{N}$ obtained with methods such as node2vec [8], DeepWalk [9], or comp2vec [5] for visualization and clustering could also aid companies in their component management. The availability of well-curated, hierarchical taxonomies of component types depends on the level of maturity of a company and traditionally requires significant manual effort. A data-driven solution that exploits usage patterns in assembly models could better organize a company's frequently used component types. There has been an interest in making these embeddings (or latent representations) of nodes more interpretable to humans [10], which is also required for this task. However, the graphs retrieved from assemblies do not show homophily, an underlying assumption of many existing node embedding techniques: two connected components are most likely not similar but rather complementary (e.g., a door and a hinge). Here, synonymity tends to be a better criterion, i.e., components are similar if they can be used in the same contexts.
50
+
51
+ § 3 RELATED WORK
52
+
53
+ The task of component recommendation bears some similarities to graph generation, i.e., approximating ${P}_{\text{ data }}\left( G\right)$ with a parametrizable ${P}_{\text{ model }}\left( {G \mid \theta }\right)$ . The graph can be generated either all at once, for example using variational autoencoders or generative adversarial networks, or incrementally by so-called autoregressive models that predict single or multiple nodes or edges step by step - conditioned on an intermediate state of the graph. Since our goal is to support design engineers by presenting suggestions instead of taking over the whole task, we go with the second approach. During CAD modeling, we want to allow changes on the partial assembly by designers. Therefore, we also need a model that can generate arbitrarily large graphs which is typically not the case for non-incremental models.
54
+
55
+ Generating graphs with matching structural characteristics to the training data along with handling only one node type (i.e., $\left| \mathcal{C}\right| = 1$ ) as done using recurrent neural networks in GRAN [11] or GraphRNN [12] is not sufficient: the relevant information for predicting next needed components lies more in the already used component types, i.e., the node features, than the structure of the graph. The focus is mainly on the type of component that should be added and only secondarily where to insert it into the existing graph. Since these approaches only evaluate the final graph structure (in particular, in terms of aggregated graph statistics such as degree distributions) without incorporating its intermediate states, canonical numbering of nodes can be performed for generating training instances, keeping their number small as no node permutations need to be considered. Common choices for GRAN or GraphRNN are breadth-first or depth-first traversals starting from the most connected node or random orderings. For assemblies, however, designers can start with any component or subgraph, followed by a generation sequence depending on the designer's preferences. Therefore, the authors in [5] create instances for every possible creation sequence of an assembly by iteratively cutting off nodes that serve as labels for the resulting partial assemblies. Unlike [13], in these approaches newly added nodes are always connected to the previous graph structure, which we want to enforce during construction.
56
+
57
+ Molecule graph generation refers to generating valid molecules with desired chemical properties, incorporating various types of nodes (i.e., different atoms) and even various types of connections (which is not necessary for the current representation of assemblies). While guaranteeing the validity of the generated graph (as in [14] concerning the chemical structure of the generated molecule) may be assumed to be an important aspect for assembly modeling as well, this validity check turns out to be less straightforward: the number of connection points of a component is typically either not available or misleading, since design engineers may adapt a component's geometry, e.g., by drilling holes, in order to attach additional components. Nevertheless, this application domain seems to be the most similar to assembly modeling in terms of data representation. Note, however, that here too only the final generated graphs are relevant for evaluating the generation model - as expounded above, the intermediate steps also matter for component recommendation.
58
+
59
+ § 4 OPEN QUESTIONS
60
+
61
+ The domain of assembly modeling imposes particular challenges that can stimulate further research in graph machine learning, as described in detail in the following.
62
+
63
+ How to deal with evolving data sets? Over time, the component catalogs may get updated, and new catalogs and components may be incorporated into a company's component library. In particular at test time, we might be confronted with component types in assemblies that were not available during training. This setting confronts us with so-called attribute-missing graphs, where all attributes of a subset of nodes are missing, as opposed to attribute-incomplete graphs [15], whose nodes all have non-empty attribute sets and which are typically treated by value imputation techniques, either in a preprocessing step (e.g., [16] or [17]) or while processing the graph in the model (e.g., [18] or [19]). Methods based on the homophily assumption are not applicable, since the assumption of connected nodes being similar is clearly violated in the assembly modeling use cases: connected components typically serve different purposes. Initial work has been done on handling attribute-missing graphs, e.g., [15], which makes a shared-latent-space assumption on graphs, resulting in a new form of GNN called SAT; its applicability to assembly modeling needs to be investigated.
64
+
65
+ How to handle ambiguous edges? As already introduced in Section 1, extracting a graph structure from an assembly model is an ambiguous task, as only some mating relations may be given in the CAD system - some of them even serving other purposes than denoting meaningful connections (e.g., simply supporting the designer's workflow). Consequently, extracting a graph structure from an assembly is not straightforward and, when based on the geometric proximity of components, a computationally expensive approach. Even in a perfect world where all components are connected by mates in a meaningful way, this graph structure may be insufficient for the learning task (as mentioned in [20] and [21]), because components that are far away from each other according to the graph structure may have a relationship that is relevant for recommending next components. Graph rewiring may be a promising solution for this issue, transforming the initial graph structure by adding and removing edges to improve information processing. Moreover, due to the novelty of the application domain, it is still unclear whether a graph structure is really helpful for solving assembly modeling tasks; possibly, a set-based approach could perform equally well or even better.
66
+
67
+ How to improve intuitive sequence generation and interactive inference? Especially for component recommendation, the proposed sequence of component insertions needs to be intuitive in the eyes of the design engineers who interact with the assembly modeling tool. There is no obvious way to extract a sequence from a data set of graphs - as is done in generative graph models such as GraphRNN or GRAN. Either all possible insertion sequences (that leave the assembly connected) or a sample thereof need to be considered - as done in [5] - or the data sets need to be augmented with insertion sequences.
68
+
69
+ Finally, we encourage readers to investigate the data set [3] and identify similarities with their preferred data sets or applicability of their methods that can address the above challenges.
papers/LOG/LOG 2022/LOG 2022 Conference/ZBsxA6_gp3/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,541 @@
1
+ # Graph Learning Indexer: A Contributor-Friendly Platform for Better Curation of Graph Learning Benchmarks
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Establishing common benchmarks has been a critical driving force behind the success of modern machine learning techniques. As machine learning is being applied in broader domains and tasks, there is a need to establish more, and more diverse, benchmarks to better reflect the reality of the application scenarios. For graph learning, an emerging field of machine learning, the need to establish better benchmarks is particularly urgent. Towards this goal, we introduce Graph Learning Indexer (GLI) ${}^{1}$ , a benchmark curation platform for graph learning. In comparison to existing graph learning benchmark libraries, GLI highlights two novel design objectives. First, GLI is designed to incentivize dataset contributors. In particular, we incorporate various measures to minimize the effort of contributing and maintaining a dataset, increase the usability of the contributed dataset, and encourage better credits to the different contributors of the dataset. Second, GLI is designed to curate a knowledge base, instead of a collection, of benchmark datasets. For this purpose, we draw on multiple sources of meta-information in order to better characterize the benchmark datasets.
12
+
13
+ ## 1 Introduction
14
+
15
+ The practice of establishing common benchmarks in machine learning dates back to speech recognition research programs in the 1980s [1, 2], and has become a dominant paradigm of machine learning research. In the past, the community has focused on a handful of benchmarks in each major domain of machine learning applications ${}^{2}$ , usually developed by a few institutes or research groups. However, as machine learning is becoming a general-purpose technology ${}^{3}$ , there are new demands from modern machine learning research that are not entirely met by the current common practice of benchmarking:
16
+
17
+ 1. Broad Application. Machine learning is being applied to increasingly broad domains, where the emerging field of graph learning is an example with a variety of machine learning tasks in this domain. Representative new benchmarks are needed for such new domains and tasks. Furthermore, the development of good benchmarks often require inter-disciplinary knowledge and collaborations.
18
+
19
+ 2. Trustworthiness. The collection of each individual benchmark datasets could be biased. Driving the development of machine learning technologies by a few fixed benchmark datasets may suffer from the biases in these datasets. It is therefore desirable to leverage a set of diverse benchmark datasets to expose the potential trustworthy concerns of the machine learning technologies.
20
+
21
+ 3. General Technology. Towards more general-purpose artificial intelligence, there is a strong emerging interest in developing machine learning models that can perform well on a wide range of downstream tasks [6]. In conjunction with this interest, there have been efforts constructing benchmarks with many tasks, such as SuperGLUE [4], GEM [7], and BIG-Bench [8], where BIG-Bench consists of 204 tasks by more than 400 authors across 132 institutes.
22
+
23
---

${}^{1}$ The anonymized codebase for this platform is available here: https://anonymous.4open.science/r/gli/README.md.

${}^{2}$ For example, ImageNet [3] in Computer Vision, SuperGLUE [4] in Natural Language Processing, and Open Graph Benchmark [5] in Graph Learning.

${}^{3}$ https://en.wikipedia.org/wiki/General-purpose_technology

---

These new demands, especially for emerging fields such as graph learning, require the development of a large number of diverse benchmark datasets in order to better reflect the reality of machine learning applications. This requirement poses challenges in both the creation and the curation of benchmarks.

In this paper, we introduce Graph Learning Indexer (GLI), a graph learning benchmark curation platform, to mitigate the aforementioned challenges. In particular, GLI highlights two novel design objectives that respectively address the challenges in benchmark creation and benchmark curation.

First, GLI aims to leverage contributions from the broad graph learning community to establish a wide range of benchmarks. As a result, GLI is designed to be contributor-centric: we treat benchmark contributors as our core users when designing the platform. Specifically, we incorporate various designs, such as a file-based data API, automated tests, and template files, to minimize the effort of contribution and maintenance by the benchmark contributors. We have also considered measures to incentivize research effort in benchmark contribution in general. For example, to encourage proper credit to benchmark contributors, GLI includes the chain of prior versions of each benchmark dataset in the bibliographic section of the dataset README file.

Second, with the increasing quantity and diversity of benchmark datasets, GLI aims to build a knowledge base of the datasets, instead of a simple collection of datasets. GLI includes a Benchmark Indexing System${}^{4}$ with various sources of meta information about the benchmark datasets collected by GLI. Such meta information can later be used for better curation and retrieval of the benchmarks.

The rest of this paper is organized as follows. We introduce the contributor-centric design and the Benchmark Indexing System in Section 2 and Section 3, respectively. Section 4 reviews related prior work on benchmark collections and graph learning libraries. We also sketch a future plan for GLI in Section 5. Finally, in Section 6, we conclude this paper with some open questions on benchmark design.

## 2 Contributor-Centric Design

A central goal of GLI is to incentivize the graph learning community to put more effort into contributing high-quality benchmark datasets. To achieve this goal, we treat dataset contributors as the core users of GLI and set three contributor-centric design objectives. First, GLI aims to provide a smooth user experience for contributors by minimizing the effort of submitting and maintaining datasets. Second, GLI aims to increase the impact of the hosted datasets by improving their usability. Third, GLI aims to encourage proper credit to dataset contributors through tangible measures.

### 2.1 User Experience and Quality Assurance

A key challenge in the design of GLI is to minimize the effort required of dataset contributors while assuring a high quality of the contributed datasets. Our solution is to first design a standard data management API that is both stable and extensible for graph learning datasets, and then design a GitHub-based contribution workflow with concise instructions and rich feedback that helps contributors convert their benchmark datasets into the standard API.

#### 2.1.1 Data Management API

The GLI Data Management API (Figure 1) has two key design features: the API is file-based, and there is an explicit separation of data and task.

File-based storage API. The data APIs of almost all existing graph learning libraries (such as DGL [9] and PyG [10]) are code-based, which means that each dataset is associated with an ad hoc class dedicated to representing that dataset. For example, DGL [9] defines a CoraGraphDataset class for the node classification task on the Cora dataset [11, 12]. A code-based API couples the datasets with the codebase and increases the difficulty of maintenance. In particular, a change to the graph learning library codebase may break the ad hoc dataset classes, so additional maintenance effort is required for each dataset.

---

${}^{4}$ Thus "Indexer" in the name of GLI.

---

![01963ee0-cd93-75c4-afe7-453bfc720832_2_313_194_1173_576_0.jpg](images/01963ee0-cd93-75c4-afe7-453bfc720832_2_313_194_1173_576_0.jpg)

Figure 1: The file-based GLI Data Management API with explicit separation of data and task. The GLI Data Storage part contains all necessary information to construct the graph data, organized into three levels: node, edge, and graph information. Each level may have multiple features or labels as its attributes. The GLI Task Configuration part contains the necessary information to perform a predefined task. Both parts compress big chunks of data (such as the attributes or the edge list) into the NumPy standard binary format, with indexes to these data stored in JSON files. The NumPy data files are hosted in an external storage system, while all other files are hosted in the GitHub repo of GLI. In addition, the GLI Auxiliary part contains a README document, a conversion script that converts the raw data into the GLI file format, and a urls.json file providing the URLs to the NumPy data in the external storage system.

To avoid such unnecessary maintenance burden for dataset contributors, GLI adopts a file-based data storage API, which is more stable than code-based APIs. While file-based graph storage APIs exist, such as GraphML [13], they are not dedicated to graph learning datasets and lack essential features such as storing node attributes or data splits. We therefore designed a novel file-based storage API for graph learning datasets.

Explicit separation of data and task. We recognize that there is a clear distinction between the information about the content of a dataset, i.e., the data, and the information about how to use the data to train and evaluate models, i.e., the task. For example, in graph learning benchmarks, there are often multiple tasks (e.g., node classification and link prediction) defined on the same dataset, or multiple settings for the same task (e.g., random split or fixed split). From the perspective of dataset contribution and curation, it is cumbersome to make a new version of a dataset for each new task on top of the same data. Therefore, we propose to store the data information and the task information separately in our API, and we design a task-specific API for each type of task.

This explicit separation of data and task turns out to offer a number of benefits. First, it makes the API more extensible, as the introduction of a new type of task will not affect the API for the data. Second, this separation makes automated tests more modularized (see Section 2.1.2). Third, it allows the implementation of general data loading schemes (see Section 2.2). Finally, it leads to a bottom-up approach to growing the taxonomy of graph learning tasks (see Section 3.1).

Overview of the API. Figure 1 shows the architecture of the file-based API with explicit separation of data and task${}^{5}$.

The information of the graph data is divided into three levels: node, edge, and graph. Each level can be assigned multiple attributes as features or labels, and can be further divided into multiple sub-levels to represent heterogeneous graphs. The attributes support both dense and sparse tensors to allow efficient storage and fast loading. The GLI data format has strong representative power to accommodate most graph-structured data.

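As a concrete illustration of this layout, the sketch below builds a tiny GLI-style metadata index in plain Python. The field names (`NodeFeature`, `_Edge`, the `.npz` file name, etc.) are our own illustrative assumptions rather than the exact GLI schema; the point is that a small JSON index organizes attributes by level while large tensors live in external NumPy binaries.

```python
import json

# Hypothetical sketch of a GLI-style metadata index (key names are
# illustrative assumptions, not the exact GLI format): attributes are
# organized by level, and each attribute points into an external NumPy file.
metadata = {
    "description": "Toy citation graph in a GLI-like layout.",
    "data": {
        "Node": {
            "NodeFeature": {"file": "toy__graph.npz", "key": "node_feats"},
            "NodeLabel": {"file": "toy__graph.npz", "key": "node_labels"},
        },
        "Edge": {"_Edge": {"file": "toy__graph.npz", "key": "edge_list"}},
        "Graph": {"_NodeList": {"file": "toy__graph.npz", "key": "node_list"}},
    },
}

# The index round-trips through JSON, so it can live in the Git repository
# while the referenced .npz binaries live in external storage.
parsed = json.loads(json.dumps(metadata, indent=2))
print(sorted(parsed["data"]))  # ['Edge', 'Graph', 'Node']
```

Because the index is plain JSON, it stays human-readable and diff-friendly in version control, while the heavy arrays never touch the repository.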
---

${}^{5}$ A detailed document for the API is available at https://anonymous.4open.science/r/gli/FORMAT.md.

---

![01963ee0-cd93-75c4-afe7-453bfc720832_3_315_209_1160_398_0.jpg](images/01963ee0-cd93-75c4-afe7-453bfc720832_3_315_209_1160_398_0.jpg)

Figure 2: GLI Contribution Workflow. A contributor first uses the conversion script to convert the raw data into the GLI format. The contributor then fills in the templates of README.md and urls.json. The JSON files and README.md (blue box) are uploaded to GitHub as a pull request, and the NumPy data files (green box) are uploaded to the external storage system. GLI performs automated tests on the submitted datasets, and the GLI development team further reviews the pull request before approval.

For the task, we have predefined a number of graph learning task types, such as NodeClassification, LinkPrediction, GraphClassification, etc. The information in a task configuration is of two kinds: general configuration and task-specific configuration. General configurations are commonly required by all tasks, including the features that are allowed during prediction, the train/validation/test split, etc. In contrast, the contents of task-specific configurations depend on the task type. For example, both NodeClassification and GraphClassification require specifying the number of possible classes (num_classes), while LinkPrediction provides an optional configuration of negative samples during validation and test (val_neg and test_neg).

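To make the two kinds of configuration concrete, here is a hedged sketch of what two task configurations might contain, written as plain Python dictionaries. The key names are illustrative assumptions rather than the exact GLI task schema.

```python
# Illustrative sketch (not the exact GLI schema): a NodeClassification and a
# LinkPrediction task configuration sharing the general keys while differing
# in their task-specific ones.
node_clf_task = {
    "type": "NodeClassification",
    "feature": ["Node/NodeFeature"],  # general: features allowed at prediction
    "target": "Node/NodeLabel",
    "train_set": "train_idx",         # general: pointers to stored data splits
    "val_set": "val_idx",
    "test_set": "test_idx",
    "num_classes": 7,                 # task-specific: required for classification
}

link_pred_task = {
    "type": "LinkPrediction",
    "feature": ["Node/NodeFeature"],
    "train_set": "train_edges",
    "val_neg": "val_neg_edges",       # task-specific: optional negative samples
    "test_neg": "test_neg_edges",
}

# General keys are shared across task types; only task-specific keys differ.
shared_keys = sorted(set(node_clf_task) & set(link_pred_task))
print(shared_keys)  # ['feature', 'train_set', 'type']
```

Keeping these dictionaries separate from the data files is what lets several tasks (or several splits of the same task) reuse one copy of the underlying graph.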
Overall, the file-based design improves the stability of the API, while the separation of data and task makes the API more extensible; both in turn improve the user experience for dataset contributors.

#### 2.1.2 Contribution Workflow

To accompany the data management API, we designed a GitHub-based contribution workflow (Figure 2) to ease the dataset contribution process.

Template files. To begin with, GLI provides a list of well-commented template files${}^{6}$ for all the files required by our API. The contributor only needs to fill in the blanks to convert a dataset into the GLI format.

Pull request review. After converting the dataset, the contributor submits the required files as a pull request to the GitHub repository of GLI. The GLI development team or other researchers can provide detailed and interactive feedback in the pull request.

Automated tests. In addition to the manual peer review, the pull request triggers automated tests with detailed error feedback to help contributors debug their implementation. The tests include the standard pycodestyle, pydocstyle, and pylint checks for syntax and style. We have also implemented a wide range of in-depth tests with pytest to check the correctness of the dataset format and to expose potential runtime errors through short model training runs. Contributors can also use several well-documented utility functions to check the correctness of their data format locally.

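For intuition, the kind of format check such utilities perform can be sketched in a few lines of plain Python (the logic below is illustrative, not GLI's actual implementation): verify that every split index is in range and that the splits do not overlap.

```python
def validate_task_split(num_nodes, train, val, test):
    """Check that train/val/test indices are in range and pairwise disjoint."""
    splits = {"train": set(train), "val": set(val), "test": set(test)}
    for name, idx in splits.items():
        bad = sorted(i for i in idx if not 0 <= i < num_nodes)
        if bad:
            raise ValueError(f"{name} split has out-of-range indices: {bad}")
    if (splits["train"] & splits["val"]
            or splits["train"] & splits["test"]
            or splits["val"] & splits["test"]):
        raise ValueError("train/val/test splits overlap")
    return True

print(validate_task_split(7, [0, 1, 2], [3, 4], [5, 6]))  # True
```

Catching such inconsistencies at submission time, rather than at training time, is what makes the automated tests valuable to both contributors and downstream users.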
---

${}^{6}$ See https://anonymous.4open.science/r/gli/examples/template/README.md.

---

### 2.2 Dataset Usability

```python
import gli

cora_node_dataset = gli.get_gli_dataset(dataset="cora",
                                        task="NodeClassification")
```

Demo 1: Example usage of the general data loading scheme. cora_node_dataset is an instance of dgl.data.DGLDataset, so it can be fed into a DGL dataloader seamlessly.

To increase the impact of the datasets hosted on GLI, we implemented a general data loading scheme that can be seamlessly integrated into major graph learning libraries${}^{7}$ for downstream experiments. Demo 1 demonstrates an example of the general data loading scheme. Once a contributed dataset (and the task defined on it) is merged into the GLI repository, the dataset can be retrieved by calling gli.get_gli_dataset with the dataset name and task type as arguments.

Under the hood, as shown in Demo 2, gli.get_gli_dataset calls gli.get_gli_graph and gli.get_gli_task to load the GLI Data Storage and the GLI Task Configuration shown in Figure 1, respectively. Thanks to the explicit separation of data and task, we only need to maintain one general graph loading function and a set of task loading functions, each dedicated to a task type, which is much less effort than maintaining a separate dataset class for each dataset-task combination.

```python
graph_cora = gli.get_gli_graph(dataset="cora")
task_node = gli.get_gli_task(dataset="cora",
                             task="NodeClassification")
cora_node_dataset = gli.combine_graph_and_task(graph_cora, task_node)
```

Demo 2: The inner workings of gli.get_gli_dataset.

### 2.3 Credits to Contributors

An important aspect of incentivizing dataset contributors is to ensure that they get proper credit. For this purpose, we have made a couple of design choices to help the community cite properly. The README file of each dataset contains citation information listing the BibTeX entries of the work relevant to the dataset. Specifically, the citation information is split between the dataset and the tasks, as there are often multiple tasks defined on top of a graph dataset, and the definition of a task may come from work different from the one contributing the dataset. Moreover, the citation information for the dataset is further split into three parts:

- Original Source: The first work that created the dataset.

- Current Version: The work that is directly responsible for the dataset stored in GLI.

- Previous Versions: Any intermediate versions between Original Source and Current Version. There can be multiple citations in Previous Versions.

The paper popularizing a benchmark dataset is often not the paper that originally contributed the dataset, and it is not uncommon for the former to get most of the citations while the latter gets few. This phenomenon is possibly due to two factors. First, tracking the chain of contributions to a dataset through a literature search is tedious. Second, researchers tend to get information about a dataset from the methodology papers that cite it rather than from the original paper that created it, so mistakes in citation accumulate.

By providing succinct bibliographic information relevant to the dataset in the README file, we hope to help the community better recognize the contributions of all contributors, with a particular emphasis on crediting the original source.

## 3 Benchmark Indexing System

With the growing quantity and diversity of benchmark datasets, it is important for the benchmark curation platform to help users navigate through the large collection of datasets. For this purpose,

---

${}^{7}$ Currently we have implemented data loading for DGL.

---

GLI is designed to serve as an "indexer" that builds a database consisting of various meta information about the benchmark datasets. We name this database the Benchmark Indexing System. To some extent, this is in a similar vein to the idea of Datasheets for Datasets [14]. Datasheets for Datasets focuses more on the characteristics of each individual dataset, while our design of the database also cares about the synergy among different datasets.

Ultimately, we hope to use this database to help users 1) retrieve the right benchmarks that match the context of the applications of their interest; 2) identify potential biases and trustworthiness issues in the datasets; or 3) motivate the development of new methodology based on the characteristics of tasks and datasets.

At the current stage, however, we focus on identifying different sources of meta information to be included in the database. The current implementation consists of three types of meta information, detailed in the following subsections.

### 3.1 Task Types

The task types come naturally as meta information from the implementation of the data management API in GLI. Graph data are ubiquitous but also diverse, and so are the graph learning tasks defined on top of graph data. Different graph learning tasks may have distinct natures and thus require very different methodology. Therefore, the task type is an important source of meta information for each benchmark dataset.

In GLI, the definition of task types is driven by the contributed benchmarks. When contributing a new benchmark, a contributor first checks whether the benchmark belongs to one of the existing task types in GLI. If none of the existing task types can accommodate the new benchmark, the contributor can initiate the definition of a new task type. The GLI development team and the contributors then implement the support for the new task type, including the dataset class, documentation, and automated tests.

This bottom-up approach to developing task types not only makes GLI highly extensible to new benchmark datasets, but also gradually grows a taxonomy of graph learning tasks as more benchmarks are collected. A list of currently supported task types is given in Appendix A.

### 3.2 Graph Metrics

Another type of meta information included in GLI is a set of graph metrics, such as the average degree or the average clustering coefficient. In the classical network science literature [15, 16], graph metrics have been shown to be informative about the characteristics of graph data. In a recent study, Palowitch et al. [17] empirically demonstrated that there are clear patterns in graph neural network performance associated with certain graph metrics of the benchmark datasets.

GLI integrates a function that calculates a list of graph metrics for each contributed dataset. The graph metrics integrated in this function fall into six groups.

- Basic: Is Directed, Number of Nodes, Number of Edges, Edge Density, Average Degree, Edge Reciprocity, Degree Assortativity;

- Distance: Diameter, Pseudo Diameter, Average Shortest Path Length, Global Efficiency;

- Connectivity: Relative Size of LCC, Relative Size of LSCC, Average Node Connectivity;

- Clustering: Average Clustering Coefficient, Transitivity, Degeneracy;

- Distribution: Power Law Exponent, Pareto Exponent, Gini Coefficient of Degree, Gini Coefficient of Coreness;

- Attribute: Edge Homogeneity, Feature Homogeneity, Homophily Measure, Attribute Assortativity.

The formal definitions of these graph metrics can be found in Appendix C.

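To make a few of the basic metrics concrete, the sketch below computes edge density, average degree, and edge reciprocity for a toy directed edge list in plain Python. GLI's own implementation may differ (for instance, it could rely on a graph library such as networkx), so treat this purely as an illustration of the definitions.

```python
# Toy directed graph as an edge list. The formulas follow the standard
# definitions: directed edge density = m / (n * (n - 1)) with no self-loops,
# and reciprocity = fraction of edges whose reverse edge also exists.
edges = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 1)]
num_nodes, num_edges = 4, len(edges)

edge_density = num_edges / (num_nodes * (num_nodes - 1))
avg_out_degree = num_edges / num_nodes

edge_set = set(edges)
reciprocity = sum((v, u) in edge_set for u, v in edges) / num_edges

print(round(edge_density, 4), avg_out_degree, reciprocity)  # 0.4167 1.25 0.4
```

Scalar summaries like these are cheap to compute once per dataset, which is what makes it practical to store them as meta information for every contributed benchmark.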
### 3.3 Model Performance

The third type of meta information included in GLI is the performance of various popular models on the datasets. It is common to use a model's performance across different experiment settings and datasets to understand the model's characteristics. Recently, it has been shown that one can also use the performance of different models to characterize the datasets and obtain meaningful clusters of datasets [18].

In GLI, we provide a benchmark suite that can benchmark a few popular graph learning models on the contributed benchmarks. The benchmark suite implements a separate set of training and hyperparameter tuning functions for each task type. Thanks to the general data loading scheme (introduced in Section 2.2), the benchmark code can be easily extended to new datasets with the same task type. The benchmark suite currently supports NodeClassification and GraphClassification.

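As a small illustration of how per-run results become table entries, the sketch below aggregates per-run accuracies into "mean ± standard deviation" strings and picks the best model, mirroring the presentation style of Table 1. The numbers are invented for the example; this is not output of the GLI benchmark suite.

```python
from statistics import mean, stdev

# Invented accuracies over 5 runs for two models on one dataset.
runs = {
    "GCN": [0.81, 0.80, 0.82, 0.81, 0.80],
    "MLP": [0.59, 0.57, 0.61, 0.58, 0.60],
}

# Aggregate each model's runs into (mean %, std %) and rank by mean.
summary = {m: (100 * mean(a), 100 * stdev(a)) for m, a in runs.items()}
best_model = max(summary, key=lambda m: summary[m][0])

for model, (mu, sigma) in summary.items():
    print(f"{model}: {mu:.2f} ± {sigma:.2f}")
print("best:", best_model)
```

Once every (model, dataset) cell is reduced to such a summary, the per-dataset ranking of models itself becomes a compact signature that can be stored alongside the other meta information.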
Below, we provide an example to showcase how model performance can provide useful information to characterize the datasets. Using the benchmark suite in GLI, we report the performance of several popular models on a set of node classification datasets in Table 1. This experiment is a rough replication of Lim et al. [19], extended to more datasets as enabled by GLI. The detailed experiment setup (and citations to models and datasets) can be found in Appendix D.

Readers who are familiar with the recent graph learning literature may find that, not surprisingly, the best and second best performing models on each dataset are a good indicator of how "homophilous" [20] the dataset is. The early graph neural network models, GCN, GAT, and GraphSAGE, have better performance on more homophilous datasets, such as cora, citeseer, and pubmed. LINKX performs better on most of the remaining non-homophilous datasets. A few datasets, texas, cornell, and wisconsin, have notoriously unstable performance, as shown by the large standard deviations. It also seems that the graph structure does not help much for the task on these datasets, as MLP performs best.

In general, the GLI API makes it easier to implement the benchmark suite for a wide range of models and datasets in well-controlled experiment setups, which enables the use of model performance as a way to characterize the datasets.

Table 1: Benchmark experiment results for node classification datasets. Test accuracy is reported for most datasets, while test ROC AUC is reported for binary classification datasets (genius, twitch-gamers, penn94, pokec). Standard deviations are over 5 runs. The best result on each dataset is bolded, and the second best result is underlined.

| | GCN | GAT | GraphSAGE | MoNet | MLP | LINKX | MixHop |
|---|---|---|---|---|---|---|---|
| cora | 81.03 ± 0.82 | **83.0 ± 0.62** | <u>81.46 ± 0.74</u> | 76.44 ± 1.85 | 59.1 ± 2.3 | 59.36 ± 2.41 | 79.64 ± 1.55 |
| citeseer | <u>72.28 ± 0.56</u> | 69.9 ± 1.54 | **73.38 ± 0.82** | 64.4 ± 0.62 | 54.62 ± 6.26 | 42.5 ± 7.88 | 69.64 ± 1.2 |
| pubmed | **79.44 ± 0.43** | <u>79.04 ± 0.76</u> | 78.4 ± 0.35 | 76.18 ± 0.84 | 73.7 ± 0.5 | 56.49 ± 7.92 | 76.61 ± 1.35 |
| texas | 61.08 ± 3.07 | 67.02 ± 1.21 | 66.48 ± 1.48 | 55.13 ± 7.04 | **78.92 ± 2.25** | 76.57 ± 4.87 | <u>77.84 ± 1.7</u> |
| cornell | 52.97 ± 4.09 | 48.64 ± 1.9 | 47.02 ± 3.08 | 51.89 ± 2.25 | **68.64 ± 7.78** | 65.46 ± 5.85 | <u>66.48 ± 5.43</u> |
| wisconsin | 56.46 ± 3.5 | 54.89 ± 1.96 | 52.54 ± 1.63 | 36.86 ± 3.22 | **78.82 ± 4.24** | <u>78.62 ± 1.94</u> | 76.9 ± 5.61 |
| actor | 29.36 ± 0.73 | 30.15 ± 0.56 | 29.26 ± 0.5 | 26.35 ± 1.01 | **37.11 ± 0.54** | 33.56 ± 1.84 | <u>34.77 ± 0.94</u> |
| squirrel | 32.4 ± 1.18 | 29.14 ± 1.55 | 31.64 ± 1.93 | 27.14 ± 2.34 | <u>34.87 ± 0.47</u> | **62.43 ± 1.23** | 33.37 ± 1.45 |
| chameleon | 45.92 ± 2.61 | 46.18 ± 0.93 | 48.72 ± 0.47 | 32.54 ± 1.24 | <u>49.16 ± 0.66</u> | **67.08 ± 1.69** | 48.72 ± 1.39 |
| arxiv-year | <u>49.6 ± 0.16</u> | 34.91 ± 0.56 | 43.39 ± 0.74 | 40.19 ± 0.48 | 36.49 ± 0.19 | **52.73 ± 0.34** | 40.63 ± 0.12 |
| snap-patents | **55.46 ± 0.11** | 36.34 ± 0.6 | 43.33 ± 0.27 | 43.48 ± 0.73 | 31.32 ± 0.04 | <u>53.43 ± 0.32</u> | 43.27 ± 0.03 |
| penn94 | 88.79 ± 0.6 | 66.29 ± 12.21 | 85.0 ± 0.53 | 73.92 ± 3.71 | 83.92 ± 0.32 | **93.47 ± 0.27** | <u>91.62 ± 0.11</u> |
| pokec | 71.17 ± 10.76 | 53.03 ± 0.4 | 63.02 ± 5.68 | 53.65 ± 2.17 | 64.69 ± 4.92 | **90.54 ± 0.12** | <u>86.84 ± 0.2</u> |
| genius | 84.15 ± 1.71 | 49.86 ± 28.68 | 80.31 ± 0.23 | 63.23 ± 2.39 | 84.42 ± 0.2 | **90.88 ± 0.1** | <u>90.04 ± 0.12</u> |
| twitch-gamers | 62.4 ± 0.22 | 59.57 ± 0.88 | 61.68 ± 0.3 | 58.02 ± 1.26 | 59.66 ± 0.09 | **66.21 ± 0.3** | <u>64.22 ± 0.08</u> |

## 4 Related Work

In this section, we review prior work on graph learning benchmarks, graph learning libraries, and other relevant efforts on machine learning benchmark infrastructure.

### 4.1 Graph Learning Benchmarks

Recently, there have been many infrastructural efforts to develop benchmark collections for graph learning [5, 21-24], among which the most widely used at present are perhaps Open Graph Benchmark [5] and Benchmarking Graph Neural Networks [22]. GLI differs from this prior work in two key aspects.

1. GLI is specifically optimized to better serve dataset contributors. Most existing graph learning benchmarks are designed with "dataset consumers", instead of contributors, as the core users. To the best of our knowledge, dedicated designs to optimize the contribution workflow of graph learning datasets were essentially non-existent prior to this work. For example, the contribution workflow for Open Graph Benchmark is to pack the dataset in a fixed format and email it to the maintenance team${}^{8}$. In comparison, our GitHub-based contribution workflow is more interactive and potentially more scalable.

2. GLI maintains a bottom-up, dynamic task taxonomy, while most of the existing benchmark collections have a top-down, static taxonomy of graph learning tasks. A static taxonomy may limit the types of datasets and tasks that can be contributed to the benchmark collections.

There are also a few workshops and conference tracks dedicated to research on benchmarks and datasets, such as the Workshop on Graph Learning Benchmarks${}^{9}$ and the NeurIPS Datasets and Benchmarks Track${}^{10}$. These venues are friendly to publications of benchmark contributions and have successfully solicited a number of new graph learning benchmark datasets. The development of GLI shares the same motivation as these endeavors: incentivizing more contributions to benchmarks. GLI could also serve as an infrastructural tool for these publication venues to better evaluate and curate the collected benchmarks.

### 4.2 Graph Learning Libraries

In addition, there are a few general-purpose graph learning libraries, such as PyG [10], DGL [9], and TF-GNN [25], that are relevant to this work. While the primary focus of these libraries is not benchmark datasets, they also provide graph data APIs at the dataloader level. We argue that the file-based API design in GLI is more contributor-friendly because 1) it is easier to convert data into files than to implement a dataset class; 2) the file-based API does not rely on any software dependency and is less likely to break; and 3) the GLI developers take care of maintaining the data loading code.

### 4.3 Other Relevant Benchmark Infrastructures

Outside the area of graph learning, there are various machine learning benchmark infrastructures that are more loosely relevant to this work.

One such infrastructure is Papers With Code${}^{11}$, which maintains a database of datasets across different domains of machine learning. Each dataset in this database is associated with types of machine learning tasks and a massive record of machine learning model performances, similar to our design in Section 3. However, the performances are directly taken from papers or self-reported, and the experiment setups and data versions may not be well controlled.

More generally, there are a number of dataset search engines, such as Google Dataset Search${}^{12}$, Microsoft Research Open Data${}^{13}$, and DataMed${}^{14}$. These search engines index a huge and growing number of datasets in various domains but do not contain detailed domain-specific dataset characteristics, such as the graph metrics described in Section 3.2. These datasets are also usually not machine-learning-ready, i.e., there is no data loading code that transforms them into machine learning data loaders.

---

${}^{8}$ https://ogb.stanford.edu/docs/dataset_overview/

${}^{9}$ https://graph-learning-benchmarks.github.io/

${}^{10}$ https://neurips.cc/Conferences/2021/CallForDatasetsBenchmarks

${}^{11}$ https://paperswithcode.com/

${}^{12}$ https://datasetsearch.research.google.com/

${}^{13}$ https://msropendata.com/

${}^{14}$ https://datamed.org/

---

## 5 Future Plan

In the future, there are a few directions that the GLI development team will focus on.

**User experience.**

- Helper functions for dataset conversion. We plan to implement a few helper functions that can automatically convert commonly seen raw data formats into the GLI format.

- Automatic generation of README documents. We would like to implement a function that can automatically generate the README document for a dataset based on the dataset characteristics and a few structured survey questions answered by the contributors.

**Automatic benchmarking of popular models.** We plan to implement a service that automatically benchmarks popular models on newly contributed datasets, so that the model performance can be directly incorporated into the meta information of the datasets.

**Citation tracking.** We plan to track the citations to each dataset hosted on GLI. In this way, we can alert the authors citing a dataset when critical issues or bugs are identified in the dataset.

**Dataset exploration.** We plan to implement an interface to explore and retrieve the datasets hosted on GLI, based on the database of datasets described in Section 3.

## 6 Conclusion

In this paper, we have introduced Graph Learning Indexer (GLI), a benchmark curation platform for graph learning. GLI is designed to solicit and curate benchmark datasets contributed by the community at scale. With its contributor-centric design, we hope that GLI can better assist community contributions to the development of benchmark datasets. We also hope that GLI can help improve our understanding of the taxonomy of graph learning tasks based on the rich meta information about the datasets.

+ ## References
280
+
281
+ [1] Patti J. Price, William M. Fisher, Jared Bernstein, and David S. Pallett. The darpa 1000-word resource management database for continuous speech recognition. In ICASSP-88., International Conference on Acoustics, Speech, and Signal Processing, pages 651-654 vol.1, 1988. 1
282
+
283
+ [2] William M. Fisher, George R. Doddington, and Kathleen M. Goudie-Marshall. The darpa speech recognition research database: specifications and status. In Proceedings of DARPA Speech Recognition Workshop, pages 93-99, 1986. 1
284
+
285
+ [3] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. Ieee, 2009. 1
286
+
287
+ [4] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32, 2019. 1,2
288
+
289
+ [5] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118-22133, 2020. 1, 7, 13
290
+
291
+ [6] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. 2
292
+
293
+ [7] Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, et al. The gem benchmark: Natural language generation, its evaluation and metrics. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 96-120, 2021. 2
294
+
295
+ [8] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022. 2
296
+
297
+ [9] Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. Deep graph library: A graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315, 2019. 2, 8, 15
298
+
299
+ [10] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. 2, 8
300
+
301
+ [11] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93-93, 2008. 2, 13
302
+
303
+ [12] Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning, pages 40-48. PMLR, 2016. 2,13
304
+
305
+ [13] Ulrik Brandes, Markus Eiglsperger, Jürgen Lerner, and Christian Pich. Graph markup language (graphml), 2013. 3
306
+
307
+ [14] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé Iii, and Kate Crawford. Datasheets for datasets. Communications of the ACM, 64(12):86-92, 2021. 6
308
+
309
+ [15] Mark Newman. Networks. Oxford university press, 2018. 6
310
+
311
+ [16] David Easley and Jon Kleinberg. Networks, crowds, and markets: Reasoning about a highly connected world. Cambridge university press, 2010. 6
312
+
313
+ [17] John Palowitch, Anton Tsitsulin, Brandon Mayer, and Bryan Perozzi. Graphworld: Fake graphs bring real insights for gnns. arXiv preprint arXiv:2203.00112, 2022. 6, 15
314
+
315
+ [18] Renming Liu, Semih Cantürk, Frederik Wenkel, Dylan Sandfelder, Devin Kreuzer, Anna Little, Sarah McGuire, Leslie O'Bray, Michael Perlmutter, Bastian Rieck, Matthew Hirn, Guy Wolf, and Ladislav Rampášek. Taxonomy of benchmarks in graph representation learning. arXiv:2206.07729, 2022. 7
316
+
317
+ [19] Derek Lim, Felix Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Bhalerao, and Ser Nam Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. Advances in Neural Information Processing Systems, 34:20887-20902, 2021. 7, 13, 15
318
+
319
+ [20] Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in Neural Information Processing Systems, 33:7793-7804, 2020. 7
320
+
321
+ [21] Christopher Morris, Nils M Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663, 2020. 7
322
+
323
+ [22] Vijay Prakash Dwivedi, Chaitanya K Joshi, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. arXiv preprint arXiv:2003.00982, 2020. 7, 13
324
+
325
+ [23] Meng Liu, Youzhi Luo, Limei Wang, Yaochen Xie, Hao Yuan, Shurui Gui, Haiyang Yu, Zhao Xu, Jingtun Zhang, Yi Liu, Keqiang Yan, Haoran Liu, Cong Fu, Bora M Oztekin, Xuan Zhang, and Shuiwang Ji. DIG: A turnkey library for diving into graph deep learning research. Journal of Machine Learning Research, 22(240):1-9, 2021. URL http://jmlr.org/papers/v22/ 21-0343.html.
326
+
327
+ [24] Benedek Rozemberczki, Paul Scherer, Yixuan He, George Panagopoulos, Alexander Riedel, Maria Astefanoaei, Oliver Kiss, Ferenc Beres, Guzman Lopez, Nicolas Collignon, and Rik Sarkar. PyTorch Geometric Temporal: Spatiotemporal Signal Processing with Neural Machine Learning Models. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management, page 4564-4573, 2021. 7
328
+
329
+ [25] Oleksandr Ferludin, Arno Eigenwillig, Martin Blais, Dustin Zelle, Jan Pfeifer, Alvaro Sanchez-Gonzalez, Sibon Li, Sami Abu-El-Haija, Peter Battaglia, Neslihan Bulut, et al. Tf-gnn: Graph neural networks in tensorflow. arXiv preprint arXiv:2207.03522, 2022. 8
330
+
331
+ [26] Jie Tang, Jimeng Sun, Chi Wang, and Zi Yang. Social influence analysis in large-scale networks. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 807-816, 2009. 13
332
+
333
+ [27] Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. Geom-gcn: Geometric graph convolutional networks. arXiv preprint arXiv:2002.05287, 2020. 13
334
+
335
+ [28] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. In Chemical Science, pages 513-530, 2018. 13
336
+
337
+ [29] Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In International Conference on Learning Representations (ICLR), 2020. 13
338
+
339
+ [30] Kuansan Wang, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, Yuxiao Dong, and Anshul Kanakia. Microsoft academic graph: When experts are not enough. In Quantitative Science Studies, pages 396-413, 2020. 13
340
+
341
+ [31] Benedek Rozemberczki, Carl Allen, and Rik Sarkar. Multi-scale attributed node embedding. Journal of Complex Networks, 9(2):cnab014, 2021. 13
342
+
343
+ [32] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26, 2013. 13
344
+
345
+ [33] Alex Krizhevsky. Learning multiple layers of features from tiny images. pages 32-33, 2009. URL https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.13
346
+
347
+ [34] K. Bhatia, K. Dahiya, H. Jain, P. Kar, A. Mittal, Y. Prabhu, and M. Varma. The extreme classification repository: Multi-label datasets and code, 2016. URL http://manikvarma.org/ downloads/XC/XMLRepository.html. 13
348
+
349
+ [35] Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 257-266, 2019. 13
350
+
351
+ [36] The Gene Ontology Consortium. The gene ontology resource: 20 years and still going strong. In Nucleic Acids Research, pages 330-338, 2018. 13
352
+
353
+ [37] Damian Szklarczyk, Annika L Gable, David Lyon, Alexander Junge, Stefan Wyder, Jaime Huerta-Cepas, Milan Simonovic, Nadezhda T Doncheva, John H Morris, Peer Bork, et al. String v11: protein-protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. Nucleic acids research, 47(D1):D607-D613, 2019.13
354
+
355
+ [38] WebKb Group. Cmu world wide knowledge base. URL http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-11/www/wwkb/. 13
356
+
357
+ [39] Amanda L. Traud, Peter J. Mucha, and Mason A. Porter. Social structure of facebook networks. Physica A: Statistical Mechanics and its Applications, 391(16):4165-4180, 2012. ISSN 0378-4371. doi: https://doi.org/10.1016/j.physa.2011.12.021. URL https://www.sciencedirect.com/science/article/pii/S0378437111009186. 13
358
+
359
+ [40] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD '08, page 1247-1250, New York, NY, USA, 2008. Association for Computing Machinery. ISBN 9781605581026. doi: 10.1145/1376616.1376746. URL https://doi.org/10.1145/1376616.1376746.13
360
+
361
+ [41] Xu Han, Shulin Cao, Lv Xin, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. Openke: An open toolkit for knowledge embedding. In Proceedings of EMNLP, 2018. 13
362
+
363
+ [42] Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. Advances in neural information processing systems, 26, 2013. 13
364
+
365
+ [43] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, June 2014. 13
366
+
367
+ [44] Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 1499-1509, 2015.13
368
+
369
+ [45] Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. Graphs over time: densification laws, shrinking diameters and possible explanations. In Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 177-187, 2005. 13
370
+
371
+ [46] Derek Lim and Austin R Benson. Expertise and dynamics within crowdsourced musical knowledge curation: A case study of the genius platform. In ICWSM, pages 373-384, 2021. 13
372
+
373
+ [47] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. doi: 10.1109/5.726791. 13
374
+
375
+ [48] Wenhan Xiong, Thien Hoang, and William Yang Wang. Deeppath: A reinforcement learning method for knowledge graph reasoning. arXiv preprint arXiv:1707.06690, 2017. 13
376
+
377
+ [49] Ankur Padia, Konstantinos Kalpakis, Francis Ferraro, and Tim Finin. Knowledge graph fact prediction via knowledge-enriched tensor factorization. Journal of Web Semantics, 59:100497, 2019.13
378
+
379
+ [50] Benedek Rozemberczki and Rik Sarkar. Twitch gamers: a dataset for evaluating proximity preserving and structural role-based node embeddings. arXiv preprint arXiv:2101.03091, 2021. 13
380
+
381
+ [51] George A Miller. WordNet: An electronic lexical database. MIT press, 1998. 13
382
+
383
+ [52] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. Advances in neural information processing systems, 26, 2013. 13
384
+
385
+ [53] Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pages 697-706, 2007.13
386
+
387
+ [54] Farzaneh Mahdisoltani, Joanna Biega, and Fabian Suchanek. Yago3: A knowledge base from multilingual wikipedias. In 7th biennial conference on innovative data systems research. CIDR Conference, 2014. 13
388
+
389
+ [55] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 15
390
+
391
+ [56] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017. 15
392
+
393
+ [57] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in neural information processing systems, 30, 2017. 15
394
+
395
+ [58] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5115-5124, 2017. 15
396
+
397
+ [59] Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In international conference on machine learning, pages 21-29. PMLR, 2019. 15
398
+
399
+ [60] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 15
400
+
401
+ [61] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101,2017. 15
402
+
403
+ ## A List of Task Types
404
+
405
+ Currently, GLI supports the following task types${}^{15}$:
406
+
407
+ 1. NodeClassification: Node classification task. This task aims to predict unknown node properties based on the node's own features and the rest of the graph.
408
+
409
+ 2. GraphClassification: Graph classification task. This task aims to predict unknown graph-level properties based on the graph's features.
410
+
411
+ 3. LinkPrediction: Link prediction task. This task aims to predict the existence of a link between two nodes in a graph.
412
+
413
+ 4. TimeDependentLinkPrediction: Link prediction task split by time. This task is a special case of LinkPrediction whose train-validation-test split depends on the creation time of links.
414
+
415
+ 5. KGEntityPrediction: Knowledge graph entity prediction task. This task aims to predict the tail or head node for a triplet in the graph.
416
+
417
+ 6. KGRelationPrediction: Knowledge graph relation prediction task. This task aims to predict the relation type for a triplet in the graph.
418
+
419
+ ## B Reference of Datasets
420
+
421
+ Table 2 summarizes the original source, current version and previous versions of the datasets that we have incorporated.
422
+
423
+ Table 2: Reference of datasets.
424
+
425
+ <table><tr><td>Dataset</td><td>Original</td><td>Current</td><td>Previous</td><td>Dataset</td><td>Original</td><td>Current</td><td>Previous</td></tr><tr><td>actor</td><td>[26]</td><td>[27]</td><td>/</td><td>ogbg-molpcba</td><td>[28]</td><td>[5]</td><td>[29]</td></tr><tr><td>arxiv-year</td><td>[30]</td><td>[19]</td><td>[5]</td><td>ogbl-collab</td><td>[30]</td><td>[5]</td><td>/</td></tr><tr><td>chameleon</td><td>[31]</td><td>[27]</td><td>/</td><td>ogbn-arxiv</td><td>[30]</td><td>[5]</td><td>[32]</td></tr><tr><td>cifar</td><td>[33]</td><td>[22]</td><td>/</td><td>ogbn-mag</td><td>[30]</td><td>[5]</td><td>/</td></tr><tr><td>citeseer</td><td>[11]</td><td>[12]</td><td>/</td><td>ogbn-products</td><td>[34]</td><td>[5]</td><td>[35]</td></tr><tr><td>cora</td><td>[11]</td><td>[12]</td><td>/</td><td>ogbn-proteins</td><td>[36]</td><td>[5]</td><td>[37]</td></tr><tr><td>cornell</td><td>[38]</td><td>[27]</td><td>/</td><td>penn94</td><td>[39]</td><td>[19]</td><td>/</td></tr><tr><td>FB13</td><td>[40]</td><td>[41]</td><td>[42]</td><td>pokec</td><td>[43]</td><td>[19]</td><td>/</td></tr><tr><td>FB15K</td><td>[40]</td><td>[41]</td><td>[44]</td><td>pubmed</td><td>[11]</td><td>[12]</td><td>/</td></tr><tr><td>FB15K237</td><td>[40]</td><td>[41]</td><td>[44]</td><td>snap-patents</td><td>[43]</td><td>[19]</td><td>[45]</td></tr><tr><td>genius</td><td>[46]</td><td>[19]</td><td>/</td><td>squirrel</td><td>[31]</td><td>[27]</td><td>/</td></tr><tr><td>mnist</td><td>[47]</td><td>[22]</td><td>/</td><td>texas</td><td>[38]</td><td>[27]</td><td>/</td></tr><tr><td>NELL-995</td><td>[48]</td><td>[49]</td><td>[41]</td><td>twitch-gamers</td><td>[50]</td><td>[19]</td><td>/</td></tr><tr><td>ogbg-molbace</td><td>[28]</td><td>[5]</td><td>[29]</td><td>wiki</td><td>[19]</td><td>[19]</td><td>/</td></tr><tr><td>ogbg-molclintox</td><td>[28]</td><td>[5]</td><td>[29]</td><td>wisconsin</td><td>[38]</td><td>[27]</td><td>/</td></tr><tr><td>ogbg-molfreesolv</td><td>[28]</td><td>[5]</td><td>[29]</td><td>WN11</td><td>[51]</td><td>[41]</td><td>/</td></tr><tr><td>ogbg-molhiv</td><td>[28]</td><td>[5]</td><td>[29]</td><td>WN18</td><td>[51]</td><td>[41]</td><td>[52]</td></tr><tr><td>ogbg-molmuv</td><td>[28]</td><td>[5]</td><td>[29]</td><td>WN18RR</td><td>[51]</td><td>[41]</td><td>[52]</td></tr><tr><td>ogbg-molsider</td><td>[28]</td><td>[5]</td><td>[29]</td><td>YAGO3-10</td><td>[53]</td><td>[41]</td><td>[54]</td></tr></table>
426
+
427
+ ## C Definitions of Graph Metrics
428
+
429
+ Here we introduce the formal definitions of the graph metrics mentioned in Section 3.2. Consider a graph $G = \left( {V, E}\right)$, where $V = \{ 1,2,\ldots , N\}$ is the set of $N$ nodes and $E \subseteq V \times V$ is the set of edges, and denote $M = \left| E\right|$. Assume $X \in {\mathbb{R}}^{N \times D}$ is the matrix of node features, where $D$ is the feature dimension, and $Y \in \{ 1,2,\ldots , C{\} }^{N}$ is the vector of node labels, where $C$ is the number of classes.
430
+
431
+ ---
432
+
433
+ ${}^{15}$ See details at https://anonymous.4open.science/r/gli/FORMAT.md.
434
+
435
+ ---
436
+
437
+ ### C.1 Basic
438
+
439
+ Is Directed: whether the graph is a directed graph.
440
+
441
+ Number of Nodes: the number of nodes $N$ .
442
+
443
+ Number of Edges: the number of edges $M$ .
444
+
445
+ Edge Density: the edge density is defined as $\frac{2M}{N\left( {N - 1}\right) }$ for undirected graphs and $\frac{M}{N\left( {N - 1}\right) }$ for directed graphs.
446
+
447
+ Average Degree: the average degree is defined as $\frac{2M}{N}$ for undirected graphs and $\frac{M}{N}$ for directed graphs.
448
+
449
+ Edge Reciprocity: the edge reciprocity of a directed graph is defined as $\frac{\overleftrightarrow{M}}{M}$, where $\overleftrightarrow{M}$ denotes the number of edges pointing in both directions.
450
+
451
+ Degree Assortativity: the degree assortativity is defined as the average Pearson correlation coefficient of degree between all pairs of linked nodes.
452
+
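For concreteness, the undirected variants of these basic metrics can be reproduced with networkx (a sketch on a standard example graph; networkx is an assumption here and not necessarily what GLI uses internally):

```python
import networkx as nx

G = nx.karate_club_graph()  # small undirected example graph
N, M = G.number_of_nodes(), G.number_of_edges()

edge_density = 2 * M / (N * (N - 1))  # undirected edge density
avg_degree = 2 * M / N                # undirected average degree
# Pearson correlation of degrees over linked node pairs.
assortativity = nx.degree_assortativity_coefficient(G)

print(N, M, round(edge_density, 4), round(avg_degree, 2))
```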
453
+ ### C.2 Distance
454
+
455
+ Diameter: the maximum pairwise shortest path distance in the graph.
456
+
457
+ Pseudo Diameter: an approximation of the diameter that serves as a lower bound on its exact value.
458
+
459
+ Average Shortest Path Length: the average of all pairwise shortest path distances in the graph.
460
+
461
+ Global Efficiency: the efficiency between a pair of nodes is the multiplicative inverse of their shortest path distance, and the global efficiency is the average efficiency over all pairs of nodes in the graph.
462
+
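These distance metrics can be checked on a tiny example with networkx (a sketch; networkx is an assumption, not necessarily GLI's internal tooling):

```python
import networkx as nx

# A 5-node path graph: 0 - 1 - 2 - 3 - 4.
G = nx.path_graph(5)

diameter = nx.diameter(G)                     # max pairwise distance: 4
avg_spl = nx.average_shortest_path_length(G)  # mean pairwise distance: 2.0
efficiency = nx.global_efficiency(G)          # mean inverse pairwise distance
```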
463
+ ### C.3 Connectivity
464
+
465
+ Relative Size of LCC: the relative size of the largest connected component is defined as the ratio between the size of the largest connected component and $N$ .
466
+
467
+ Relative Size of LSCC: the relative size of the largest strongly connected component is defined as the ratio between the size of the largest strongly connected component and $N$ .
468
+
469
+ Average Node Connectivity: the local node connectivity of two non-adjacent nodes $u$ and $v$ is the minimum number of nodes that must be removed in order to disconnect them, and the average node connectivity is the average local node connectivity over all pairs of non-adjacent nodes in the graph.
470
+
471
+ ### C.4 Clustering
472
+
473
+ Average Clustering Coefficient: the local clustering coefficient of node $u$ is defined as $\frac{2}{\deg \left( u\right) \left( {\deg \left( u\right) - 1}\right) }T\left( u\right)$ for undirected graphs, where $T\left( u\right)$ is the number of triangles passing through node $u$ and $\deg \left( u\right)$ is the degree of node $u$; for directed graphs, it is defined as $\frac{2}{{\deg }^{\text{tot }}\left( u\right) \left( {{\deg }^{\text{tot }}\left( u\right) - 1}\right) - 2{\deg }^{ \leftrightarrow }\left( u\right) }T\left( u\right)$, where $T\left( u\right)$ is the number of directed triangles through node $u$, ${\deg }^{\text{tot }}\left( u\right)$ is the sum of the in-degree and out-degree of node $u$, and ${\deg }^{ \leftrightarrow }\left( u\right)$ is the reciprocal degree of node $u$. The average clustering coefficient is the average local clustering coefficient over all nodes in the graph.
474
+
475
+ Transitivity: the fraction of all possible triangles present in the graph, which is defined as $3\frac{\#\text{triangles}}{\#\text{triads}}$, where a triad is a pair of edges sharing a common vertex.
476
+
477
+ Degeneracy: the least integer $k$ such that every induced subgraph of the graph contains a vertex with $k$ or fewer neighbors.
478
+
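The three clustering metrics above can be computed with networkx (a sketch under the assumption of an undirected graph; degeneracy is recovered as the maximum core number):

```python
import networkx as nx

G = nx.karate_club_graph()

avg_cc = nx.average_clustering(G)             # mean local clustering coefficient
transitivity = nx.transitivity(G)             # 3 * (#triangles) / (#triads)
degeneracy = max(nx.core_number(G).values())  # largest k with a non-empty k-core
```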
479
+ ### C.5 Distribution
480
+
481
+ Power Law Exponent: the exponent parameter of a Power-law distribution that best fits the degree-sequence distribution of the graph.
482
+
483
+ Pareto Exponent: the exponent parameter of a Pareto distribution that best fits the degree-sequence distribution of the graph.
+
+ Gini Coefficient of Degree: the Gini coefficient of the degree sequence of the graph.
484
+
485
+ Gini Coefficient of Coreness: the Gini coefficient of the coreness sequence of the graph, where the coreness of a node $u$ is the largest integer $k$ such that a $k$-core contains node $u$.
486
+
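The Gini coefficient of a degree (or coreness) sequence can be computed directly; a minimal numpy sketch:

```python
import numpy as np

def gini(sequence):
    """Gini coefficient of a non-negative sequence (0 = perfectly equal)."""
    x = np.sort(np.asarray(sequence, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    # Equivalent to the standard sorted-index formula.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# A perfectly regular degree sequence has Gini coefficient 0.
print(gini([3, 3, 3, 3]))  # 0.0
```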
487
+ ### C.6 Attribute
488
+
489
+ Edge Homogeneity [17]: the ratio of edges that connect nodes with the same node labels.
490
+
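Edge homogeneity is straightforward to compute from an edge list; a small sketch (networkx and the function name are illustrative assumptions):

```python
import networkx as nx

def edge_homogeneity(G, labels):
    """Fraction of edges whose two endpoints share the same node label."""
    same = sum(1 for u, v in G.edges() if labels[u] == labels[v])
    return same / G.number_of_edges()

G = nx.Graph([(0, 1), (1, 2)])
print(edge_homogeneity(G, {0: "a", 1: "a", 2: "b"}))  # 0.5
```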
491
+ Average Within-Class Feature Angular Similarity [17]: the within-class feature angular similarity is $1 - \text{angular\_distance}\left( {{X}_{i},{X}_{j}}\right)$ for an edge whose endpoints $i$ and $j$ have the same node label, and the average within-class feature angular similarity is the average over all such edges in the graph.
492
+
493
+ Average Between-Class Feature Angular Similarity [17]: the between-class feature angular similarity is $1 - \text{angular\_distance}\left( {{X}_{i},{X}_{j}}\right)$ for an edge whose endpoints $i$ and $j$ have different node labels, and the average between-class feature angular similarity is the average over all such edges in the graph.
494
+
495
+ Feature Angular SNR [17]: the feature angular SNR is defined as the ratio between average within-class feature angular similarity and average between-class feature angular similarity.
496
+
497
+ Homophily Measure [19]: the homophily measure is defined as
498
+
499
+ $$
500
+ \widehat{h} = \frac{1}{C - 1}\mathop{\sum }\limits_{{k = 1}}^{C}{\left\lbrack {h}_{k} - \frac{\left| {C}_{k}\right| }{N}\right\rbrack }_{ + }, \tag{1}
501
+ $$
502
+
503
+ where ${\left\lbrack a\right\rbrack }_{ + } = \max \left( {a,0}\right)$, $\left| {C}_{k}\right|$ is the number of nodes with node label $k$, and ${h}_{k}$ is the class-wise homophily metric defined below,
504
+
505
+ $$
506
+ {h}_{k} = \frac{\mathop{\sum }\limits_{{u : {Y}_{u} = k}}{d}_{u}^{\left( {Y}_{u}\right) }}{\mathop{\sum }\limits_{{u : {Y}_{u} = k}}{d}_{u}}, \tag{2}
507
+ $$
508
+
509
+ where ${d}_{u}$ is the number of neighbors of node $u$ and ${d}_{u}^{\left( {Y}_{u}\right) }$ is the number of neighbors of node $u$ that have the same class label.
510
+
511
+ Attribute Assortativity: the attribute assortativity is defined as the average Pearson correlation coefficient of the attribute (class labels) between all pairs of linked nodes.
512
+
513
+ ## D Benchmark Experiment Setup
514
+
515
+ We make GCN [55], GAT [56], GraphSAGE [57], MoNet [58], MLP, and MixHop [59] all have two layers. For LINKX [19], we set ${ML}{P}_{A}$ and ${ML}{P}_{X}$ to be one-layer networks and ${ML}{P}_{f}$ to be a two-layer network, following Lim et al. [19].
516
+
517
+ In order to make a fair comparison, we adopt the same training configuration for all experiments. We use Adam [60] as the optimizer for all models except LINKX, for which AdamW [61] is used to stay consistent with Lim et al. [19]. For all binary classification datasets (penn94, pokec, genius, and twitch-gamers), we use ROC AUC as the evaluation metric; for the other datasets, test accuracy is used.
518
+
519
+ Our implementations of GCN, GAT, GraphSAGE, and MoNet are based on DGL [9]. When implementing the models, we preserve the default settings of the DGL implementations as much as possible. For MixHop and LINKX, we adopt the implementation of Lim et al. [19]. The detailed settings for the different models are:
520
+
521
+ - GAT: number of heads in multi-head attention $= 8$; LeakyReLU negative slope $= {0.2}$; no residual connection; the dropout rate on attention weights is the same as the overall dropout rate.
522
+
523
+ - GraphSAGE: Aggregator type is GCN. No norm is applied.
524
+
525
+ - MoNet: number of kernels $= 3$; dimension of pseudo-coordinates $= 2$; aggregator type $=$ sum.
526
+
527
+ - MixHop: List of powers of adjacency matrix $= \left\lbrack {1,2,3}\right\rbrack$ . No norm is applied.
528
+
529
+ - LINKX: ${ML}{P}_{A}$ and ${ML}{P}_{X}$ are both one-layer networks and ${ML}{P}_{f}$ is a two-layer network. AdamW is used as the optimizer. No inner activation.
530
+
531
+ Hyperparameter tuning. We perform random search over the following hyperparameter ranges for every model.
532
+
533
+ - Hidden size: $\left\lbrack {{32},{64}}\right\rbrack$
534
+
535
+ - Learning rate: $\left\lbrack {{.001},{.005},{.01},{.1}}\right\rbrack$
536
+
537
+ - Dropout rate: $\left\lbrack {{.2},{.4},{.6},{.8}}\right\rbrack$
538
+
539
+ - Weight decay: $\left\lbrack {{.0001},{.001},{.01},{.1}}\right\rbrack$
540
+
541
+ We generate 100 random configurations for each model, and each configuration is run 5 times on each dataset. The maximum number of training epochs is 10,000. We apply early stopping: training is stopped if the validation accuracy does not improve for 50 consecutive epochs. When training is finished, we load the model weights with the highest validation accuracy on the dataset. Test accuracy and standard deviation are reported in Table 1.
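The random search over the ranges above can be sketched as follows (illustrative only; the variable names and seeding are assumptions, not the exact GLI tooling):

```python
import random

# Search space from the tuning ranges listed above.
SEARCH_SPACE = {
    "hidden_size": [32, 64],
    "lr": [0.001, 0.005, 0.01, 0.1],
    "dropout": [0.2, 0.4, 0.6, 0.8],
    "weight_decay": [0.0001, 0.001, 0.01, 0.1],
}

def sample_configs(num_configs, seed=0):
    """Draw independent random configurations from SEARCH_SPACE."""
    rng = random.Random(seed)
    return [
        {name: rng.choice(choices) for name, choices in SEARCH_SPACE.items()}
        for _ in range(num_configs)
    ]

configs = sample_configs(100)  # 100 random configurations per model
```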
papers/LOG/LOG 2022/LOG 2022 Conference/ZBsxA6_gp3/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,294 @@
1
+ § GRAPH LEARNING INDEXER: A CONTRIBUTOR-FRIENDLY PLATFORM FOR BETTER CURATION OF GRAPH LEARNING BENCHMARKS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Establishing common benchmarks has been a critical driving force behind the success of modern machine learning techniques. As machine learning is being applied in broader domains and tasks, there is a need to establish more, and more diverse, benchmarks to better reflect the reality of the application scenarios. For graph learning, an emerging field of machine learning, the need to establish better benchmarks is particularly urgent. Towards this goal, we introduce Graph Learning Indexer (GLI) ${}^{1}$, a benchmark curation platform for graph learning. In comparison to existing graph learning benchmark libraries, GLI highlights two novel design objectives. First, GLI is designed to incentivize dataset contributors. In particular, we incorporate various measures to minimize the effort of contributing and maintaining a dataset, increase the usability of the contributed datasets, as well as encourage proper credit to the different contributors of a dataset. Second, GLI is designed to curate a knowledge base, instead of a collection, of benchmark datasets. For this purpose, we come up with multiple sources of meta information of the benchmark datasets in order to better characterize the datasets.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ The practice of establishing common benchmarks in machine learning dates back to research programs on speech recognition in the 1980s [1, 2], and has become a dominant paradigm of machine learning research. In the past, the community has been focusing on a handful of benchmarks in each major domain of machine learning applications ${}^{2}$, usually developed by a few institutes or research groups. However, as machine learning is becoming a general-purpose technology ${}^{3}$, there are new demands from modern machine learning research that are not entirely met by the current common practice of benchmarking:
16
+
17
+ 1. Broad Application. Machine learning is being applied to increasingly broad domains; the emerging field of graph learning, with its variety of machine learning tasks, is one example. Representative new benchmarks are needed for such new domains and tasks. Furthermore, the development of good benchmarks often requires inter-disciplinary knowledge and collaborations.
18
+
19
+ 2. Trustworthiness. The collection process of each individual benchmark dataset could be biased. Driving the development of machine learning technologies with a few fixed benchmark datasets may therefore suffer from the biases in these datasets. It is desirable to leverage a set of diverse benchmark datasets to expose potential trustworthiness concerns of machine learning technologies.
20
+
21
+ 3. General Technology. Towards more general-purpose artificial intelligence, there is a strong emerging interest in developing machine learning models that can perform well on a wide range of downstream tasks [6]. In conjunction with this interest, there have been efforts constructing benchmarks with many tasks, such as SuperGLUE [4], GEM [7], and BIG-Bench [8], where BIG-Bench consists of 204 tasks by more than 400 authors across 132 institutes.
22
+
23
+ ${}^{1}$ The anonymized codebase for this platform is available here: https://anonymous.4open.science/r/gli/README.md.
24
+
25
+ ${}^{2}$ For example, ImageNet [3] in Computer Vision, SuperGLUE [4] in Natural Language Processing, and Open Graph Benchmark [5] in Graph Learning.
26
+
27
+ ${}^{3}$ https://en.wikipedia.org/wiki/General-purpose_technology
28
+
29
+ These new demands, especially in emerging fields such as graph learning, require the development of a large quantity of diverse benchmark datasets in order to better reflect the reality of machine learning applications. This requirement poses challenges in both the creation and the curation of benchmarks.
30
+
31
+ In this paper, we introduce Graph Learning Indexer (GLI), a graph learning benchmark curation platform, to mitigate the aforementioned challenges. In particular, GLI highlights two novel design objectives that respectively mitigate the challenges in benchmark creation and benchmark curation.
32
+
33
+ First, GLI aims to leverage contributions from the broad graph learning community to establish a wide range of benchmarks. As a result, GLI is designed to be contributor-centric: we treat benchmark contributors as our core users when designing the platform. Specifically, we incorporate various designs, such as a file-based data API, automated tests, and template files, to minimize the effort of contribution and maintenance by benchmark contributors. We have also considered measures to incentivize research effort on benchmark contributions in general. For example, to give better credit to benchmark contributors, GLI includes the chain of prior versions of each benchmark dataset in the bibliographic section of the dataset README file.
34
+
35
+ Second, with the increasing quantity and diversity of benchmark datasets, GLI aims to build a knowledge base of the datasets, instead of a simple collection of datasets. GLI includes a Benchmark Indexing System ${}^{4}$ with various sources of meta information about the benchmark datasets collected by GLI. Such meta information can later be used for better curation and retrieval of the benchmarks.
36
+
37
+ The rest of this paper is organized as follows. We introduce the contributor-centric design and the benchmark indexing system in Section 2 and Section 3, respectively. Section 4 reviews related prior work on benchmark collections and graph learning libraries. We also sketch future plans for GLI in Section 5. Finally, in Section 6, we conclude the paper with some open questions on benchmark design.
38
+
39
+ § 2 CONTRIBUTOR-CENTRIC DESIGN
40
+
41
+ A central goal of GLI is to incentivize the graph learning community to put more effort into contributing high-quality benchmark datasets. To achieve this goal, we treat dataset contributors as the core users of GLI and adopt three contributor-centric design objectives. First, GLI aims to provide a smooth user experience for contributors by minimizing the effort of submitting and maintaining datasets. Second, GLI aims to increase the impact of the hosted datasets by improving their usability. Third, GLI aims to give better credit to dataset contributors through tangible measures.
42
+
43
+ § 2.1 USER EXPERIENCE AND QUALITY ASSURANCE
44
+
45
+ A key challenge in the design of GLI is to minimize the effort required of dataset contributors while assuring high quality of the contributed datasets. Our solution is to first design a standard data management API that is both stable and extensible for graph learning datasets, and then design a GitHub-based contribution workflow with concise instructions and rich feedback that helps dataset contributors convert their benchmark datasets into the standard API.
46
+
47
+ § 2.1.1 DATA MANAGEMENT API
48
+
49
+ The GLI Data Management API (Figure 1) has two key design features: the API is file-based, and there is an explicit separation of data and task.
50
+
51
+ File-based storage API. The data APIs of almost all existing graph learning libraries (such as DGL [9] and PyG [10]) are code-based, meaning that each dataset is associated with an ad hoc class dedicated to representing that dataset. For example, DGL [9] defines a CoraGraphDataset class for the node classification task on the Cora dataset [11, 12]. This code-based API couples the datasets with the codebase and increases the difficulty of maintenance. In particular, a change to the graph learning library codebase may break the ad hoc dataset classes, so additional maintenance effort is required for each dataset.
52
+
53
+ ${}^{4}$ Thus "Indexer" in the name of GLI.
54
+
55
56
+
57
+ Figure 1: The file-based GLI Data Management API with explicit separation of data and task. The GLI Data Storage part contains all necessary information to construct the graph data, organized into three levels: node, edge, and graph information. Each level may have multiple features or labels as its attributes. The GLI Task Configuration part contains the necessary information to perform a predefined task. Both parts further compress big chunks of data (such as the attributes or edge list) into the NumPy standard binary format, with indexes to these data stored in JSON files. The NumPy data files are hosted in an external storage system, while all other files are hosted in the GitHub repo of GLI. In addition, the GLI Auxiliary part contains a README document, a conversion script that converts the raw data into the GLI file format, and a urls.json file providing the URLs to the NumPy data in the external storage system.
58
+
59
+ To avoid such unnecessary maintenance burden for dataset contributors, GLI adopts a file-based data storage API that is more stable than code-based APIs. While file-based graph storage APIs exist, such as GraphML [13], they are not dedicated to graph learning datasets and lack essential features such as storing node attributes or data splits. We therefore designed a novel file-based storage API for graph learning datasets.
60
+
61
+ Explicit separation of data and task. We recognize that there is a clear distinction between the information about the content of a dataset, i.e., the data, and the information about how to use the data to train and evaluate models, i.e., the task. For example, in graph learning benchmarks, there are often multiple tasks (e.g., node classification and link prediction) defined on the same dataset, or multiple settings for the same task (e.g., random split or fixed split). From the perspective of dataset contribution and curation, it is cumbersome to make a new version of a dataset for each new task on top of the same data. Therefore, we store the data information and the task information separately in our API, and we design a task-specific API for each type of task.
62
+
63
+ This explicit separation of data and task turns out to offer a number of benefits. First, it makes the API more extensible, as the introduction of a new type of task will not affect the API for the data. Second, this separation makes automated tests more modularized (see Section 2.1.2). Third, it allows the implementation of general data loading schemes (see Section 2.2). Finally, it leads to a bottom-up approach to grow the taxonomy of graph learning tasks (see Section 3.1).
64
+
65
+ Overview of the API. Figure 1 shows the architecture of the file-based API with explicit separation of data and task ${}^{5}$ .
66
+
67
+ The information of the graph data is divided into three levels: node, edge, and graph. Each level can be assigned multiple attributes as features or labels, and can be further divided into multiple sub-levels to represent heterogeneous graphs. The attributes support both dense and sparse tensors to allow efficient storage and fast loading. The GLI data format has strong representational power and accommodates most graph-structured data.
68
+
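As a concrete illustration, a file-based layout of this kind can be sketched with a small JSON index pointing into binary payload files. The key names below are hypothetical and simplified, not the exact GLI schema:

```python
import json
import os
import tempfile

# Illustrative sketch of a file-based graph storage layout in the spirit of
# GLI: large arrays live in binary payload files (NumPy .npz in GLI), while
# a small JSON index records where each node/edge/graph attribute is stored.
metadata = {
    "description": "Toy citation graph",
    "data": {
        "Node": {
            "NodeFeature": {"file": "toy__graph.npz", "key": "node_feats"},
            "NodeLabel": {"file": "toy__graph.npz", "key": "node_labels"},
        },
        "Edge": {"_Edge": {"file": "toy__graph.npz", "key": "edge_list"}},
        "Graph": {"_NodeList": {"file": "toy__graph.npz", "key": "node_list"}},
    },
}

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "metadata.json")
with open(path, "w") as f:
    json.dump(metadata, f, indent=2)

# Loading the index needs only a JSON parser -- no dataset-specific code.
with open(path) as f:
    loaded = json.load(f)
```

Because the index is plain JSON, any change to the loading library leaves the stored datasets untouched, which is the stability argument made above.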
69
+ ${}^{5}$ A detailed document for the API is available at https://anonymous.4open.science/r/gli/FORMAT.md.
70
+
71
72
+
73
+ Figure 2: GLI Contribution Workflow. A contributor first uses the conversion script to convert the raw data into the GLI format. The contributor then fills in the templates of README.md and urls.json. The JSON files and README.md (blue box) are uploaded to GitHub as a pull request, and the NumPy data files (green box) are uploaded to the external storage system. GLI performs automated tests on the submitted datasets, and the GLI development team further reviews the pull request before approval.
74
+
75
+ For the task, we have predefined a number of graph learning task types, such as NodeClassification, LinkPrediction, GraphClassification, etc. The information in the task configuration is of two kinds: general configuration and task-specific configuration. General configurations are required by all tasks and include the features that may be used during prediction, the train/validation/test split, etc. In contrast, the contents of task-specific configurations depend on the task type. For example, both NodeClassification and GraphClassification require specifying the number of possible classes (num_classes), and LinkPrediction provides an optional configuration for negative samples during validation and test (val_neg and test_neg).
76
+
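The split between general and task-specific configuration can be sketched as follows. The field names num_classes, val_neg, and test_neg mirror those mentioned above; the remaining keys and toy values are illustrative assumptions, not the exact GLI format:

```python
# Hypothetical task configuration objects illustrating the split between
# general fields (shared by all task types) and task-specific fields.
node_clf_task = {
    "type": "NodeClassification",     # general
    "feature": ["Node/NodeFeature"],  # general: features usable at prediction
    "train_set": [0, 1, 2],           # general: toy data split indices
    "val_set": [3],
    "test_set": [4],
    "num_classes": 2,                 # task-specific
}

link_pred_task = {
    "type": "LinkPrediction",
    "feature": ["Node/NodeFeature"],
    "train_set": [0, 1],
    "val_set": [2],
    "test_set": [3],
    "val_neg": [5],                   # task-specific: negative samples
    "test_neg": [6],
}

GENERAL_KEYS = {"type", "feature", "train_set", "val_set", "test_set"}

def general_config(task):
    """Extract only the fields that every task type shares."""
    return {k: v for k, v in task.items() if k in GENERAL_KEYS}
```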
77
+ Overall, the file-based design improves the stability of the API, while the separation of data and task makes the API more extensible; both in turn improve the user experience for dataset contributors.
78
+
79
+ § 2.1.2 CONTRIBUTION WORKFLOW
80
+
81
+ To accompany the data management API, we designed a GitHub-based contribution workflow (Figure 2) that eases the dataset contribution process.
82
+
83
+ Template files. To begin with, GLI provides a list of well-commented template files ${}^{6}$ for all the files required by our API. The contributor only needs to fill in the blanks to convert a dataset into the GLI format.
84
+
85
+ Pull request review. After converting the dataset, the contributor submits the required files as a pull request to the GitHub repository of GLI. The GLI development team or other researchers can then provide detailed and interactive feedback in the pull request.
86
+
87
+ Automated tests. In addition to the manual peer review, the pull request also triggers automated tests with detailed error feedback to help contributors debug their implementation. The tests include the standard pycodestyle, pydocstyle, and pylint checks for syntax and style. We have also implemented a wide range of in-depth tests with pytest to check the correctness of the dataset format and to expose potential runtime errors by running short model training. Contributors can also use several well-documented utility functions to check the correctness of their data format locally.
88
+
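A local format check of the kind described above might look like the following sketch. The specific validation rules and field names here are illustrative assumptions, not GLI's actual test suite:

```python
# Sketch of a lightweight local check a contributor could run before
# opening a pull request: verify required fields and split consistency.
def validate_task_config(task):
    """Return a list of human-readable problems found in a task config."""
    errors = []
    for key in ("type", "train_set", "val_set", "test_set"):
        if key not in task:
            errors.append(f"missing required field: {key}")
    # Task-specific rule (illustrative): NodeClassification needs num_classes.
    if task.get("type") == "NodeClassification" and "num_classes" not in task:
        errors.append("NodeClassification requires num_classes")
    train = set(task.get("train_set", []))
    test = set(task.get("test_set", []))
    if train & test:
        errors.append("train/test splits overlap")
    return errors

ok = {"type": "NodeClassification", "train_set": [0, 1], "val_set": [2],
      "test_set": [3], "num_classes": 2}
bad = {"type": "NodeClassification", "train_set": [0, 1], "val_set": [2],
       "test_set": [1]}
```

Running such checks locally surfaces the same class of problems the CI tests would report, but before the pull request is opened.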
89
+ ${}^{6}$ See https://anonymous.4open.science/r/gli/examples/template/README.md.
90
+
91
+ § 2.2 DATASET USABILITY
92
+
93
+ import gli
94
+
95
+ cora_node_dataset = gli.get_gli_dataset(dataset="cora",
96
+
97
+ task="NodeClassification")
98
+
99
100
+
101
+ Demo 1: Example usage of the general data loading scheme. cora_node_dataset is an instance of dgl.data.DGLDataset, so it fits into DGL dataloaders seamlessly.
102
+
103
+ To increase the impact of the datasets hosted on GLI, we implemented a general data loading scheme that can be seamlessly integrated into major graph learning libraries ${}^{7}$ for downstream experiments. Demo 1 demonstrates an example of the general data loading scheme. Once a contributed dataset (and the task defined on it) is merged into the GLI repository, the dataset can be retrieved by calling gli.get_gli_dataset with the dataset name and task type as arguments.
104
+
105
+ Under the hood, as shown in Demo 2, gli.get_gli_dataset calls gli.get_gli_graph and gli.get_gli_task to load the GLI Data Storage and the GLI Task Configuration shown in Figure 1, respectively. Thanks to the explicit separation of data and task, we only need to maintain one general graph loading function and a set of task loading functions, each dedicated to a task type, which is much less effort than maintaining a separate dataset class for each combination of task and dataset.
106
+
107
+ graph_cora = gli.get_gli_graph(dataset="cora")
108
+
109
+ task_node = gli.get_gli_task(dataset="cora",
110
+
111
+ task="NodeClassification")
112
+
113
+ cora_node_dataset = gli.combine_graph_and_task(graph_cora, task_node)
114
+
115
+ Demo 2: The inner workings of gli.get_gli_dataset.
116
+
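The maintenance saving behind Demo 2 can be sketched as follows: one generic graph loader is shared by all datasets, and each task type gets exactly one loader, so adding a new dataset requires no new loading code. All names below are illustrative stand-ins for the GLI internals, not the actual implementation:

```python
# One generic graph loader (shared by every dataset) plus one loader per
# *task type* (not per dataset): N datasets x M task types need only
# 1 + M loading functions instead of N x M dataset classes.
def load_graph(name):
    # In GLI this would parse the metadata JSON and the NumPy payloads.
    return {"name": name, "edges": [(0, 1), (1, 2)]}

def load_node_classification(cfg):
    return {"type": "NodeClassification", "num_classes": cfg["num_classes"]}

def load_link_prediction(cfg):
    return {"type": "LinkPrediction"}

TASK_LOADERS = {
    "NodeClassification": load_node_classification,
    "LinkPrediction": load_link_prediction,
}

def get_dataset(name, task_type, cfg):
    graph = load_graph(name)             # shared across all task types
    task = TASK_LOADERS[task_type](cfg)  # one loader per task type
    return {"graph": graph, "task": task}

ds = get_dataset("cora", "NodeClassification", {"num_classes": 7})
```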
117
+ § 2.3 CREDITS TO CONTRIBUTORS
118
+
119
+ An important aspect of incentivizing dataset contributors is ensuring that they get proper credit. For this purpose, we have made several design choices that help the community cite datasets properly. The README file of each dataset contains citation information listing the BibTeX of the work relevant to the dataset. Specifically, the citation information is split into dataset and tasks, as there are often multiple tasks defined on top of a graph dataset, and the definition of a task may come from work different from that contributing the dataset. Moreover, the citation information for the dataset is further split into three parts:
120
+
121
+ * Original Source: The first work that created the dataset.
122
+
123
+ * Current Version: The work that is directly responsible for the dataset stored in GLI.
124
+
125
+ * Previous Versions: Any intermediate versions between Original Source and Current Version. There can be multiple citations in Previous Versions.
126
+
127
+ The paper popularizing a benchmark dataset is often not the paper that originally contributed the dataset, and it is not uncommon that the former gets most of the citations while the latter gets few. This phenomenon is possibly due to two factors. First, tracking the chain of contributions to a dataset through literature search is tedious work. Second, researchers tend to get information about a dataset from the methodology papers that cite it rather than from the original paper creating it, so citation mistakes accumulate.
128
+
129
+ By providing succinct bibliographic information relevant to the dataset in the README file, we hope to help the community better recognize the contributions of all contributors, with a particular emphasis on crediting the original source.
130
+
131
+ § 3 BENCHMARK INDEXING SYSTEM
132
+
133
+ With the growing quantity and diversity of benchmark datasets, it is important for the benchmark curation platform to help users navigate through the large collection of datasets. For this purpose,
134
+
135
+ ${}^{7}$ Currently we have implemented data loading for DGL.
136
+
137
+ GLI is designed to serve as an "indexer" that builds a database of various meta information about the benchmark datasets; we name this database the Benchmark Indexing System. To some extent, this is in a similar vein to the idea of Datasheets for Datasets [14]. Datasheets for Datasets focuses more on the characteristics of each individual dataset, while our design of the database also cares about the synergy among different datasets.
138
+
139
+ Ultimately, we hope to use this database to help users 1) retrieve the benchmarks that match the context of the applications of their interest; 2) identify potential biases and trustworthiness issues present in the datasets; or 3) motivate the development of new methodology based on the characteristics of tasks and datasets.
140
+
141
+ At the current stage, however, we focus on identifying different sources of meta information to be included in the database. The current implementation consists of three types of meta information, detailed in the following subsections.
142
+
143
+ § 3.1 TASK TYPES
144
+
145
+ Task types arise naturally as meta information from the implementation of the data management API in GLI. Graph data are ubiquitous but also diverse, and so are the graph learning tasks defined on top of them. Different graph learning tasks may have distinct natures and thus require very different methodology. The task type is therefore an important source of meta information for each benchmark dataset.
146
+
147
+ In GLI, the definition of task types is driven by the contributed benchmarks. When contributing a new benchmark, a contributor first checks whether the benchmark belongs to one of the existing task types in GLI. If none of the existing task types can accommodate the new benchmark, the contributor can initiate the definition of a new task type. The GLI development team and the contributor then implement support for the new task type, including the dataset class, documentation, and automated tests.
148
+
149
+ This bottom-up approach of developing task types not only makes GLI highly extensible to new benchmark datasets, but also gradually grows a taxonomy of graph learning tasks as more benchmarks are being collected. A list of currently supported task types is given in Appendix A.
150
+
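One way to realize such a bottom-up taxonomy is a registry that new task types plug into without modifying the existing ones. This is an illustrative sketch under assumed names, not GLI's actual mechanism, and the second task type below is a hypothetical new contribution:

```python
# A minimal task-type registry: contributing a new task type means
# registering one new class; existing task types are left untouched.
TASK_TYPES = {}

def register_task_type(name):
    def wrap(cls):
        TASK_TYPES[name] = cls
        return cls
    return wrap

@register_task_type("NodeClassification")
class NodeClassificationTask:
    required = ("num_classes",)

@register_task_type("TimeDependentLinkPrediction")  # hypothetical new type
class TimeDependentLinkPredictionTask:
    required = ("time_field",)
```

As benchmarks accumulate, the registry itself becomes a record of the growing taxonomy of graph learning tasks.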
151
+ § 3.2 GRAPH METRICS
152
+
153
+ Another type of meta information included in GLI is a set of graph metrics, such as the average degree or the average clustering coefficient. In the classical network science literature [15, 16], graph metrics have been shown to be informative about the characteristics of graph data. In a recent study, Palowitch et al. [17] empirically demonstrated clear patterns in graph neural network performance associated with certain graph metrics of the benchmark datasets.
154
+
155
+ GLI integrates a function that calculates a list of graph metrics for each contributed dataset. The graph metrics integrated in this function can be categorized into six groups.
156
+
157
+ * Basic: Is Directed, Number of Nodes, Number of Edges, Edge Density, Average Degree, Edge Reciprocity, Degree Assortativity;
158
+
159
+ * Distance: Diameter, Pseudo Diameter, Average Shortest Path Length, Global Efficiency;
160
+
161
+ * Connectivity: Relative Size of LCC, Relative Size of LSCC, Average Node Connectivity;
162
+
163
+ * Clustering: Average Clustering Coefficient, Transitivity, Degeneracy;
164
+
165
+ * Distribution: Power Law Exponent, Pareto Exponent, Gini Coefficient of Degree, Gini Coefficient of Coreness;
166
+
167
+ * Attribute: Edge Homogeneity, Feature Homogeneity, Homophily Measure, Attribute Assortativity.
168
+
169
+ The formal definitions of these graph metrics can be found in Appendix C.
170
+
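To make a few of these metrics concrete, the following sketch computes the average degree, edge density, and edge homogeneity on a toy labeled undirected graph using only the standard library. The exact definitions GLI uses are those in Appendix C; this is only an illustration:

```python
from collections import defaultdict

# Toy undirected graph with node labels.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
labels = {0: "a", 1: "a", 2: "a", 3: "b"}
n = 4  # number of nodes

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

avg_degree = sum(degree.values()) / n          # 2|E| / |V|
edge_density = 2 * len(edges) / (n * (n - 1))  # undirected edge density
# Edge homogeneity: fraction of edges whose endpoints share a label.
edge_homogeneity = sum(labels[u] == labels[v] for u, v in edges) / len(edges)
```

On this toy graph the average degree is 2.0 and three of the four edges connect same-label nodes, giving an edge homogeneity of 0.75.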
171
+ § 3.3 MODEL PERFORMANCE
172
+
173
+ The third type of meta information included in GLI is the performance of various popular models on the datasets. It is common to use a model's performance under different experiment settings and datasets to understand the model's characteristics. Recently, it has been shown that one can also use the performance of different models to characterize the datasets and obtain meaningful clusters of datasets [18].
174
+
175
+ In GLI, we provide a benchmark suite that benchmarks a few popular graph learning models on the contributed benchmarks. The benchmark suite implements a separate set of training and hyperparameter tuning functions for each task type. Thanks to the general data loading scheme (introduced in Section 2.2), the benchmark code can be easily extended to new datasets with the same task type. The benchmark suite currently supports NodeClassification and GraphClassification.
176
+
177
+ Below, we provide an example to showcase how the model performance could provide useful information to characterize the datasets. Using the benchmark suite in GLI, we provide the performance of several popular models on a set of node classification datasets in Table 1. This experiment is a rough replication of Lim et al. [19], with extension to more datasets enabled by GLI. The detailed experiment setup (and citations to models and datasets) can be found in Appendix D.
178
+
179
+ Readers familiar with the recent graph learning literature may find that, not surprisingly, the best and second-best performing models on each dataset are a good indicator of how "homophilous" [20] the dataset is. The early graph neural network models, GCN, GAT, and GraphSAGE, perform better on more homophilous datasets, such as cora, citeseer, and pubmed. LINKX performs better on most of the remaining non-homophilous datasets. A few datasets, texas, cornell, and wisconsin, have notoriously unstable performance, as shown by the large standard deviations. The graph structure also does not seem to help much on these datasets, as MLP performs best on them.
180
+
181
+ In general, the GLI API makes it easier to implement the benchmark suite for a wide range of models and datasets in well-controlled experiment setups, which enables the use of model performance as a way to characterize the datasets.
182
+
183
+ Table 1: Benchmark experiment results for node classification datasets. Test accuracy is reported for most datasets, while test ROC AUC is reported for binary classification datasets (genius, twitch-gamers, penn94, pokec). Standard deviations are over 5 runs. The best result on each dataset is bolded, and the second best result is underlined.
184
+
185
+ | | GCN | GAT | GraphSAGE | MoNet | MLP | LINKX | MixHop |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
+ | cora | 81.03±0.82 | **83.0±0.62** | <u>81.46±0.74</u> | 76.44±1.85 | 59.1±2.3 | 59.36±2.41 | 79.64±1.55 |
+ | citeseer | <u>72.28±0.56</u> | 69.9±1.54 | **73.38±0.82** | 64.4±0.62 | 54.62±6.26 | 42.5±7.88 | 69.64±1.2 |
+ | pubmed | **79.44±0.43** | <u>79.04±0.76</u> | 78.4±0.35 | 76.18±0.84 | 73.7±0.5 | 56.49±7.92 | 76.61±1.35 |
+ | texas | 61.08±3.07 | 67.02±1.21 | 66.48±1.48 | 55.13±7.04 | **78.92±2.25** | 76.57±4.87 | <u>77.84±1.7</u> |
+ | cornell | 52.97±4.09 | 48.64±1.9 | 47.02±3.08 | 51.89±2.25 | **68.64±7.78** | 65.46±5.85 | <u>66.48±5.43</u> |
+ | wisconsin | 56.46±3.5 | 54.89±1.96 | 52.54±1.63 | 36.86±3.22 | **78.82±4.24** | <u>78.62±1.94</u> | 76.9±5.61 |
+ | actor | 29.36±0.73 | 30.15±0.56 | 29.26±0.5 | 26.35±1.01 | **37.11±0.54** | 33.56±1.84 | <u>34.77±0.94</u> |
+ | squirrel | 32.4±1.18 | 29.14±1.55 | 31.64±1.93 | 27.14±2.34 | <u>34.87±0.47</u> | **62.43±1.23** | 33.37±1.45 |
+ | chameleon | 45.92±2.61 | 46.18±0.93 | 48.72±0.47 | 32.54±1.24 | <u>49.16±0.66</u> | **67.08±1.69** | 48.72±1.39 |
+ | arxiv-year | <u>49.6±0.16</u> | 34.91±0.56 | 43.39±0.74 | 40.19±0.48 | 36.49±0.19 | **52.73±0.34** | 40.63±0.12 |
+ | snap-patents | **55.46±0.11** | 36.34±0.6 | 43.33±0.27 | 43.48±0.73 | 31.32±0.04 | <u>53.43±0.32</u> | 43.27±0.03 |
+ | penn94 | 88.79±0.6 | 66.29±12.21 | 85.0±0.53 | 73.92±3.71 | 83.92±0.32 | **93.47±0.27** | <u>91.62±0.11</u> |
+ | pokec | 71.17±10.76 | 53.03±0.4 | 63.02±5.68 | 53.65±2.17 | 64.69±4.92 | **90.54±0.12** | <u>86.84±0.2</u> |
+ | genius | 84.15±1.71 | 49.86±28.68 | 80.31±0.23 | 63.23±2.39 | 84.42±0.2 | **90.88±0.1** | <u>90.04±0.12</u> |
+ | twitch-gamers | 62.4±0.22 | 59.57±0.88 | 61.68±0.3 | 58.02±1.26 | 59.66±0.09 | **66.21±0.3** | <u>64.22±0.08</u> |
235
+
236
+ § 4 RELATED WORK
237
+
238
+ In this section, we review prior work on graph learning benchmarks, graph learning libraries, and other relevant effort on machine learning benchmark infrastructures.
239
+
240
+ § 4.1 GRAPH LEARNING BENCHMARKS
241
+
242
+ Recently, there have been many infrastructural efforts to develop benchmark collections for graph learning [5, 21-24], among which the most widely used at present are perhaps Open Graph Benchmark [5] and Benchmarking Graph Neural Networks [22]. GLI differs from this prior work in two key aspects.
243
+
244
+ 1. GLI is specifically optimized to better serve dataset contributors. Most existing graph learning benchmarks are designed with "dataset consumers", rather than contributors, as the core users. To the best of our knowledge, dedicated designs to optimize the contribution workflow of graph learning datasets were essentially nonexistent prior to this work. For example, the contribution workflow for Open Graph Benchmark is to pack the dataset in a fixed format and email it to the maintenance team ${}^{8}$. In comparison, our GitHub-based contribution workflow is more interactive and potentially more scalable.
245
+
246
+ 2. GLI maintains a bottom-up, dynamic task taxonomy, while most existing benchmark collections have a top-down, static taxonomy of graph learning tasks. A static taxonomy may limit the types of datasets and tasks that can be contributed to the benchmark collections.
247
+
248
+ There are also a few workshops and conference tracks dedicated to research on benchmarks and datasets, such as the Workshop on Graph Learning Benchmarks ${}^{9}$ and the NeurIPS Datasets and Benchmarks Track ${}^{10}$. These venues are friendly to publications of benchmark contributions and have successfully solicited a number of new graph learning benchmark datasets. The development of GLI shares the same motivation as these endeavors towards incentivizing more contributions of benchmarks, and GLI could serve as an infrastructural tool for these publication venues to better evaluate and curate the collected benchmarks.
249
+
250
+ § 4.2 GRAPH LEARNING LIBRARIES
251
+
252
+ In addition, there are a few general-purpose graph learning libraries, such as PyG [10], DGL [9], and TF-GNN [25], that are relevant to this work. While the primary focus of these libraries is not benchmark datasets, they also provide graph data APIs at the dataloader level. We suggest that the file-based API design in GLI is more contributor-friendly because 1) it is easier to convert data to files than to implement a dataset class; 2) the file-based API does not rely on any software dependency and is less likely to break; and 3) the GLI developers take care of maintaining the data loading code.
253
+
254
+ § 4.3 OTHER RELEVANT BENCHMARK INFRASTRUCTURES
255
+
256
+ Outside the area of graph learning, there are various machine learning benchmark infrastructures that are remotely relevant to this work.
257
+
258
+ One relevant machine learning benchmark infrastructure is Papers With Code ${}^{11}$, which has a database of datasets in different domains of machine learning. Each dataset in this database is associated with types of machine learning tasks and a massive record of machine learning model performance, similar to our design in Section 3. However, the performance numbers are taken directly from papers or self-reported, and the experiment setups and data versions may not be well controlled.
259
+
260
+ More generally, there are a number of dataset search engines, such as Google Dataset Search ${}^{12}$, Microsoft Research Open Data ${}^{13}$, and DataMed ${}^{14}$. These search engines index a huge and growing number of datasets in various domains but do not contain detailed domain-specific dataset characteristics, such as the graph metrics described in Section 3.2. These datasets are also usually not machine-learning ready, i.e., there is no data loading code that transforms them into machine learning data loaders.
261
+
262
+ ${}^{8}$ https://ogb.stanford.edu/docs/dataset_overview/
263
+
264
+ ${}^{9}$ https://graph-learning-benchmarks.github.io/
265
+
266
+ ${}^{10}$ https://neurips.cc/Conferences/2021/CallForDatasetsBenchmarks
267
+
268
+ ${}^{11}$ https://paperswithcode.com/
269
+
270
+ ${}^{12}$ https://datasetsearch.research.google.com/
271
+
272
+ ${}^{13}$ https://msropendata.com/
273
+
274
+ ${}^{14}$ https://datamed.org/
275
+
276
+ § 5 FUTURE PLAN
277
+
278
+ In the future, there are a few directions that the GLI development team will focus on.
279
+
280
+ **User Experience.**
281
+
282
+ * Helper functions for dataset conversion. We plan to implement a few helper functions that can automatically convert commonly seen raw data formats into the GLI format.
283
+
284
+ * Automatic generation of README documents. We would like to implement a function that can automatically generate the README document for a dataset based on dataset characteristics and a few structured survey questions for the contributors.
285
+
286
+ * Automatic benchmarking of popular models. We plan to implement a service that automatically benchmarks popular models on newly contributed datasets, so that the model performance can be directly incorporated into the meta information of the datasets.
287
+
288
+ * Citation tracking. We plan to track citations to each dataset hosted on GLI. In this way, we can alert the authors citing a dataset when critical issues or bugs are identified in the dataset.
289
+
290
+ * Dataset exploration. We plan to implement an interface to explore and retrieve the datasets hosted on GLI, based on the database of the datasets described in Section 3.
291
+
292
+ § 6 CONCLUSION
293
+
294
+ In this paper, we have introduced Graph Learning Indexer (GLI), a benchmark curation platform for graph learning. GLI is designed to solicit and curate a large number of benchmark datasets contributed by the community. With its contributor-centric design, we hope that GLI can better support community contributions to the development of benchmark datasets. We also hope that GLI can help improve our understanding of the taxonomy of graph learning tasks based on the rich meta information about the datasets.
papers/LOG/LOG 2022/LOG 2022 Conference/Zg8y2-v8ia/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,199 @@
1
+ Anonymous Author(s)
2
+
3
+ Anonymous Affiliation
4
+
5
+ Anonymous Email
6
+
7
+ ## Abstract
8
+
9
+ Graph Neural Networks (GNNs) are able to achieve high classification accuracy on many large real world datasets, but provide no rigorous notion of predictive uncertainty. We leverage recent advances in conformal prediction to construct prediction sets for node classification in inductive learning scenarios, and verify the efficacy of our approach across standard benchmark datasets using popular GNN models. The code is available at this link.
10
+
11
+ ## 1 Introduction
12
+
13
+ Machine learning on graph structured data has seen a boom of popularity in recent years, with applications ranging from recommendation systems to biology and physics. Graph neural networks are quickly maturing as a technology; many state of the art models are commoditised in frameworks such as Pytorch Geometric [1] and DGL [2]. Despite their overwhelming popularity and success, very little progress has been made towards quantifying the uncertainty of the predictions made by these models, a vital step towards robust real world deployments.
14
+
15
+ In related areas of machine learning such as computer vision, conformal prediction [3] has emerged as a promising candidate for uncertainty quantification [4]. Conformal prediction is a very appealing approach as it is compatible with any black box machine learning algorithm and dataset as long as the data is statistically exchangeable. The most wide-spread method, so called split-conformal, also requires trivial computational overhead when compared to model fitting. Conformal prediction uses the assumption that test and training data have exchangeable statistical properties in order to assess the uncertainty of predictions.
16
+
17
+ Graph structured data is in general not exchangeable and so the guarantees provided by conformal prediction in its naive form do not hold. Recent work by Barber et al. [5] extends conformal prediction to the non-exchangeable setting and provides theoretical guarantees on the performance of conformal prediction in this setting. We leverage insights from [5] to apply conformal prediction in the node classification setting. The key insight is that for a homophilous graph, the model calibration should be similar in a neighbourhood around any given node. We leverage this insight to localise the calibration of conformal prediction. We show that our method improves calibration of predictive uncertainty and provides tighter prediction sets when compared with a naive application of conformal prediction across several state of the art models applied to popular node classification datasets.
## 2 Conformal Prediction

Conformal prediction is a family of algorithms that generate finite-sample valid prediction intervals or sets from an arbitrary black-box machine learning model. Remarkably, the predictive model does not even need to be well specified for these guarantees to hold (although the prediction intervals or sets may not be useful in that case). In the exposition below we focus on conformal classification, as that is the object of study in this work, but note that conformal prediction can also be used for regression and other risk control procedures. We recommend the excellent tutorial by Angelopoulos and Bates [6] for an introduction to conformal prediction.
### 2.1 The Exchangeable Case

Suppose we are working on a $K$-class classification problem and we have a fitted model $\widehat{f} : \mathcal{X} \rightarrow {\left\lbrack 0,1\right\rbrack }^{K}$ that outputs the probability of each class. Given an exchangeable set of held-out calibration datapoints $\left( {{X}_{1},{Y}_{1}}\right) ,\ldots ,\left( {{X}_{n},{Y}_{n}}\right)$ (held out meaning they were not used to fit the model) and a new test point $\left( {{X}_{n + 1},{Y}_{n + 1}}\right)$, conformal prediction constructs a prediction set $\mathcal{T}\left( {X}_{n + 1}\right)$ that satisfies

$$
1 - \alpha \leq \mathbb{P}\left( {{Y}_{n + 1} \in \mathcal{T}\left( {X}_{n + 1}\right) }\right) \leq 1 - \alpha + \frac{1}{n + 1}
$$

for a user-specified error rate $\alpha \in \left\lbrack {0,1}\right\rbrack$. Conformal prediction relies on a score function $S : \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}$, which measures the calibration of the prediction at a given datapoint. We will introduce one specific score function for classification, namely the Adaptive Prediction Sets (APS, [7]) procedure, but note that there are many possible options (see [6]). Given a score function $S$, the procedure for constructing a prediction set is very simple: for each datapoint $\left( {{X}_{i},{Y}_{i}}\right)$ in the calibration set, compute the score ${s}_{i} = S\left( {{X}_{i},{Y}_{i}}\right)$. Define $\widehat{q}$ to be the $\left\lceil {\left( {n + 1}\right) \left( {1 - \alpha }\right) }\right\rceil /n$ empirical quantile of the scores ${s}_{1},\ldots ,{s}_{n}$, and finally create the prediction set $\mathcal{T}\left( {X}_{n + 1}\right) = \left\{ {y : S\left( {{X}_{n + 1}, y}\right) \leq \widehat{q}}\right\}$.
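The calibration step above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; the function names (`conformal_quantile`, `prediction_set`) and the toy score function used below are our own.

```python
import math

def conformal_quantile(scores, alpha):
    """The ceil((n + 1)(1 - alpha)) / n empirical quantile of the calibration scores."""
    n = len(scores)
    rank = math.ceil((n + 1) * (1 - alpha))  # 1-indexed rank among the sorted scores
    if rank > n:
        # too few calibration points to certify level 1 - alpha
        return float("inf")
    return sorted(scores)[rank - 1]

def prediction_set(score_fn, x, labels, q_hat):
    """T(x) = {y : S(x, y) <= q_hat}."""
    return {y for y in labels if score_fn(x, y) <= q_hat}
```

For instance, with ten calibration scores and $\alpha = 0.5$ the threshold is the $\lceil 11 \cdot 0.5 \rceil = 6$th smallest score; with a single calibration point and $\alpha = 0.1$ no finite threshold attains the level, so every label is included.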
To motivate the APS score function, suppose we have access to an oracle predictor that exactly matches the conditional distribution $\pi \left( x\right) = \mathbb{P}\left( {{Y}_{n + 1} \mid {X}_{n + 1} = x}\right)$. Then to construct a $1 - \alpha$ prediction set, we simply sort the probabilities into descending order and add labels to the set until the cumulative probability exceeds $1 - \alpha$ (with appropriate tie breaking to ensure exact coverage).

Let $\left\{ {{\pi }_{\left( 1\right) },\ldots ,{\pi }_{\left( K\right) }}\right\}$ be the order statistics of the conditional probabilities $\pi \left( x\right)$, so that ${\pi }_{\left( 1\right) } \geq {\pi }_{\left( 2\right) } \geq \cdots \geq {\pi }_{\left( K\right) }$. Prediction sets can be constructed from the oracle as

$$
\left\{ {{\pi }_{\left( 1\right) },\ldots ,{\pi }_{\left( k\right) }}\right\}, \text{ where } k = \inf \left\{ {{k}^{\prime } : \mathop{\sum }\limits_{{j = 1}}^{{k}^{\prime }}{\pi }_{\left( j\right) } \geq 1 - \alpha }\right\}.
$$

In practice the probabilities given by the classifier $\widehat{f}\left( x\right)$ will not be exactly equal to $\mathbb{P}\left( {{Y}_{n + 1} \mid {X}_{n + 1} = x}\right)$, but the oracle prediction set is used to define a conformal score as

$$
S\left( {x, y}\right) = \mathop{\sum }\limits_{{j = 1}}^{k}\widehat{f}{\left( x\right) }_{\left( j\right) },\text{ where } k \text{ is the rank of } y \text{ among the sorted probabilities } \widehat{f}{\left( x\right) }_{\left( 1\right) } \geq \cdots \geq \widehat{f}{\left( x\right) }_{\left( K\right) }. \tag{1}
$$

Intuitively, labels are added to the set until the true label is included, and the score is the cumulative probability of this set. For example, if we ran this procedure on a set of calibration data and found that we needed to use the level ${94}\%$ to get ${90}\%$ coverage, then we would use $\widehat{q} = {0.94}$ as the threshold when constructing new prediction sets. Note that to get exact coverage, ties need to be broken randomly when including the final label in the set; see Appendix A.1 for details.
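As a concrete sketch of the score in Equation (1) — our own illustrative code, not the authors' implementation — with `probs` standing in for the classifier output $\widehat{f}(x)$:

```python
def aps_score(probs, true_label):
    """APS score (Equation 1): cumulative probability mass accumulated, in
    descending order of predicted probability, up to and including the true label."""
    order = sorted(range(len(probs)), key=lambda j: -probs[j])
    total = 0.0
    for j in order:
        total += probs[j]
        if j == true_label:
            return total
    raise ValueError("true_label outside the label range")
```

A confidently correct prediction receives a small score, while a true label buried deep in the ranking pushes the score towards 1.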
### 2.2 Beyond Exchangeability

Conformal prediction in the form presented above relies on the assumption that the data points ${Z}_{i} = \left( {{X}_{i},{Y}_{i}}\right)$ are exchangeable. The exchangeable form of conformal prediction provides no guarantee if this assumption is violated; however, non-exchangeable conformal prediction was introduced in the pioneering work of Barber et al. [5].

Formally, the non-exchangeable conformal prediction procedure assumes a choice of deterministic fixed weights ${w}_{1},\ldots ,{w}_{n} \in \left\lbrack {0,1}\right\rbrack$ (normalised as detailed in [5]). As before, one computes the scores ${s}_{1},\ldots ,{s}_{n}$, but now defines the prediction set in terms of the weighted quantiles of the score distribution

$$
{\widehat{C}}_{n}\left( {X}_{n + 1}\right) = \left\{ {y \in \mathcal{Y} : S\left( {{X}_{n + 1}, y}\right) \leq {\mathrm{Q}}_{1 - \alpha }\left( {\mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i} \cdot {\delta }_{{s}_{i}} + {w}_{n + 1} \cdot {\delta }_{+\infty }}\right) }\right\} \tag{2}
$$

where ${\mathrm{Q}}_{\tau }\left( \cdot \right)$ denotes the $\tau$-quantile of a distribution. Non-exchangeable conformal prediction also comes with performance guarantees; the authors define the coverage gap
$$
\text{Coverage gap} = \left( {1 - \alpha }\right) - \mathbb{P}\left\{ {{Y}_{n + 1} \in {\widehat{C}}_{n}\left( {X}_{n + 1}\right) }\right\} \tag{3}
$$

as the loss of coverage compared to the exchangeable setting, and show that it can be bounded as follows: let $Z = \left( {\left( {{X}_{1},{Y}_{1}}\right) ,\ldots ,\left( {{X}_{n + 1},{Y}_{n + 1}}\right) }\right)$ be the full dataset and define ${Z}^{i}$ as the same dataset after swapping the test point and the ${i}^{th}$ calibration point:

$$
{Z}^{i} = \left( {\left( {{X}_{1},{Y}_{1}}\right) ,\ldots ,\left( {{X}_{i - 1},{Y}_{i - 1}}\right) ,\left( {{X}_{n + 1},{Y}_{n + 1}}\right) ,\left( {{X}_{i + 1},{Y}_{i + 1}}\right) ,\ldots ,\left( {{X}_{n},{Y}_{n}}\right) ,\left( {{X}_{i},{Y}_{i}}\right) }\right) .
$$

Then the coverage gap in Equation (3) can be bounded as (Theorem 2a, Barber et al. [5]):

$$
\text{Coverage gap} \leq \frac{\mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i} \cdot {\mathrm{d}}_{TV}\left( {Z,{Z}^{i}}\right) }{1 + \mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i}} \tag{4}
$$

where ${\mathrm{d}}_{TV}$ is the total variation distance. To make this bound small, one would like to place a large weight ${w}_{i}$ on datapoints $\left( {{X}_{i},{Y}_{i}}\right)$ that are drawn from a distribution similar to that of the test point $\left( {{X}_{n + 1},{Y}_{n + 1}}\right)$.
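The weighted quantile in Equation (2) is straightforward to compute once the weights are chosen. The sketch below is our own illustrative code (with the test point's weight $w_{n+1}$ fixed to 1 before normalisation, as one common convention); it is not taken from [5].

```python
def weighted_quantile(scores, weights, tau):
    """tau-quantile of the discrete distribution
    sum_i w_i * delta_{s_i} + 1 * delta_{+inf}, i.e. Q_tau in Equation (2)."""
    total = sum(weights) + 1.0  # the +1 is the unit point mass at +infinity
    cum = 0.0
    for s, w in sorted(zip(scores, weights)):
        cum += w
        if cum / total >= tau:
            return s
    return float("inf")  # the quantile falls on the point mass at +infinity
```

With unit weights this reduces to the usual conservative empirical quantile; when too little weighted mass sits below level $\tau$, the threshold is $+\infty$ and the prediction set contains every label.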
## 3 Conformal Prediction for Node Classification

Consider now the node classification setting: we are given a graph $G = \left( {V, E}\right)$, and for each node $i \in V$ we are given a node feature vector ${X}_{i} \in {\mathbb{R}}^{F}$ and a label ${Y}_{i} \in \mathcal{Y}$. A standard pipeline for node classification usually consists of a GNN model that produces a node embedding ${h}_{i} \in {\mathbb{R}}^{H}$ followed by a classifier $f : {\mathbb{R}}^{H} \rightarrow \mathcal{Y}$. Here the data points ${Z}_{i} = \left( {{X}_{i},{Y}_{i}}\right)$ are certainly not assumed to be exchangeable; the underlying principle of GNN models is that the adjacency matrix of $G$ provides information about the dependency between datapoints (and hence neighbourhood information of $G$ is aggregated and used for prediction). Barber et al. [5] show in particular that non-exchangeable data can be navigated when the inference algorithm is symmetric. Fitting the model on training data is trivially a symmetric function of the held-out data (as the held-out data do not enter the fit) and hence falls into this framework.

We combine non-exchangeable conformal prediction with the information given by the adjacency matrix into an algorithm for constructing prediction sets for node classification, which we call Neighbourhood Adaptive Prediction Sets (NAPS). We set the weights in Equation (2) to ${w}_{i} = 1$ if $i \in {\mathcal{N}}_{n + 1}^{k}$ (and ${w}_{i} = 0$ otherwise), where ${\mathcal{N}}_{n + 1}^{k}$ is the $k$-hop neighbourhood of node $n + 1$. We then apply non-exchangeable conformal prediction with the APS scoring function in Equation (1). The coverage gap of NAPS is bounded as

$$
\text{Coverage gap} \leq \frac{\mathop{\sum }\limits_{{i \in {\mathcal{N}}_{n + 1}^{k}}}{\mathrm{d}}_{TV}\left( {Z,{Z}^{i}}\right) }{1 + \left| {\mathcal{N}}_{n + 1}^{k}\right| } \tag{5}
$$

by simple substitution into Equation (4). This bound will be small if the $k$-hop neighbours of node $n + 1$ are distributed similarly, a property otherwise known as homophily [8]. Homophily is a key principle of many real-world networks, where linked nodes often belong to the same class and have similar features, and it is crucial for good performance in many popular GNN architectures (although recent work has considered the heterophilic case, see [9], which we discuss in the future work section). This is also related to network homogeneity, where nodes in a neighbourhood play similar roles in the network and are considered interchangeable on average.

The neighbourhood depth parameter $k$ introduces a tradeoff: expanding the neighbourhood increases the sample size for calibration, but introduces nodes that may be progressively less exchangeable with the test node. In the form presented here we recommend only applying NAPS to large homophilous networks with dense 1- or 2-hop neighbourhoods, but we discuss extensions in future work.
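The NAPS calibration step can be sketched as follows. This is our own illustrative code, not the paper's implementation; note that with unit weights on the neighbourhood, the weighted quantile of Equation (2) reduces to an empirical quantile over the neighbours' scores.

```python
import math
from collections import deque

def k_hop_neighbourhood(adj, node, k):
    """All nodes within k hops of `node` (excluding `node` itself), via BFS."""
    seen, queue, out = {node}, deque([(node, 0)]), []
    while queue:
        v, d = queue.popleft()
        if d == k:
            continue
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                out.append(u)
                queue.append((u, d + 1))
    return out

def naps_threshold(adj, test_node, k, scores, alpha):
    """Calibrate the conformal threshold on the k-hop neighbours that carry scores.

    `scores` maps each labelled node to its conformal score; neighbours without
    a score (e.g. other test nodes) are simply skipped."""
    cal = sorted(scores[v] for v in k_hop_neighbourhood(adj, test_node, k) if v in scores)
    n = len(cal)
    rank = math.ceil((n + 1) * (1 - alpha))
    return cal[rank - 1] if rank <= n else float("inf")
```

The returned threshold is then used exactly as $\widehat{q}$ in Section 2.1: a label enters the prediction set when its score falls below it.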
## 4 Experiments

We now perform experiments with popular real-world datasets and models to evaluate the performance of our procedure. Note that our method is compatible with any node classification model and any dataset. Our experiments proceed as follows: we split each graph into training, validation and test nodes, where the validation and test nodes are not available during model fitting (i.e. an inductive node split). The training and validation sets are used for model fitting, and the test set is used to evaluate the conformal prediction procedure by constructing prediction sets and evaluating the empirical coverage.

**Evaluating Conformal Prediction.** When evaluating a conformal prediction procedure, there are two forms of randomness that need to be controlled. The first is in the coverage: the coverage of a single run of a conformal prediction procedure is a random quantity, meaning that even with infinitely many validation points the coverage will not converge to a fixed value. Conformal prediction instead gives coverage on average over the randomness in the calibration set. It is therefore important to pick a large enough number of calibration points, and also to perform enough repetitions of the experiment to be sure of the results. For simplicity we follow the guidelines given in Angelopoulos and Bates [6], which suggest using at least 1000 validation points, and we repeat each experiment 100 times; with this setup, by the law of large numbers, the probability of observing significant deviations from the true coverage is extremely low, and we can therefore evaluate the performance of our method with high confidence.

**Experimental Setup.** In each experiment, we sample a batch of nodes and construct a $1 - \alpha$ probability prediction set using NAPS as described in Section 3, as well as using a naive application of APS calibrated among all the other nodes in the test set. We then report the empirical coverage, the average prediction set size, and the average size of the prediction set given that the set contains the true label, across all nodes. For each experiment we sample 1000 nodes randomly from the test set, and we perform 100 repetitions of the experiment. We only apply our method to large connected components of the test set, following the discussion in Section 3 (see Appendix A.2 for details on the datasets and the test set construction procedure). We use two popular node classification datasets, namely Reddit2 and Flickr, introduced in [10]. We apply two variants each of two popular GNN models: GraphSAGE [11] with the mean and max aggregators, and the ShaDow [12] subgraph sampling scheme with GraphSAGE and GCN [13] layers.

The results for the Reddit2 and Flickr datasets are displayed in Tables 1 and 2 respectively. Across all models on both datasets, NAPS produces well-calibrated, tight prediction sets, while the naive application of APS tends to overcover and produces wider prediction sets.
Table 1: The test accuracy, empirical coverage, average prediction set size and average prediction set size conditional on coverage for all models considered on the Reddit2 dataset with $\alpha = {0.1}$. Bold indicates the best performing method.

<table><tr><td rowspan="2">Model</td><td>Accuracy</td><td colspan="2">Coverage</td><td colspan="2">Size</td><td colspan="2">Size | Coverage</td></tr><tr><td>Top-1</td><td>APS</td><td>NAPS</td><td>APS</td><td>NAPS</td><td>APS</td><td>NAPS</td></tr><tr><td>GraphSAGE-Mean</td><td>0.914</td><td>0.928</td><td>0.897</td><td>2.23</td><td>1.77</td><td>2.37</td><td>1.93</td></tr><tr><td>GraphSAGE-Pool</td><td>0.771</td><td>0.918</td><td>0.904</td><td>3.97</td><td>3.41</td><td>4.08</td><td>3.53</td></tr><tr><td>ShaDow-SAGE</td><td>0.844</td><td>0.930</td><td>0.902</td><td>2.15</td><td>1.72</td><td>2.21</td><td>1.78</td></tr><tr><td>ShaDow-GCN</td><td>0.827</td><td>0.931</td><td>0.902</td><td>2.18</td><td>1.73</td><td>2.22</td><td>1.81</td></tr></table>

Table 2: The test accuracy, empirical coverage, average prediction set size and average prediction set size conditional on coverage for all models considered on the Flickr dataset with $\alpha = {0.1}$.

<table><tr><td rowspan="2">Model</td><td>Accuracy</td><td colspan="2">Coverage</td><td colspan="2">Size</td><td colspan="2">Size | Coverage</td></tr><tr><td>Top-1</td><td>APS</td><td>NAPS</td><td>APS</td><td>NAPS</td><td>APS</td><td>NAPS</td></tr><tr><td>GraphSAGE-Mean</td><td>0.503</td><td>0.912</td><td>0.904</td><td>4.22</td><td>3.82</td><td>4.26</td><td>3.87</td></tr><tr><td>GraphSAGE-Max</td><td>0.501</td><td>0.907</td><td>0.902</td><td>4.26</td><td>4.03</td><td>4.28</td><td>4.08</td></tr><tr><td>ShaDow-SAGE</td><td>0.500</td><td>0.910</td><td>0.904</td><td>4.24</td><td>4.02</td><td>4.25</td><td>4.09</td></tr><tr><td>ShaDow-GCN</td><td>0.496</td><td>0.913</td><td>0.905</td><td>4.25</td><td>4.05</td><td>4.26</td><td>4.01</td></tr></table>
## 5 Conclusion and Future Work

In this work we have introduced NAPS, an approach for constructing prediction sets on graph-structured data. Our method comes with theoretical guarantees on the coverage, and we have shown that it produces high-quality prediction sets when using popular GNN models on standard node classification datasets. Several natural extensions to NAPS will follow in future work: here we applied equal weights to the scores at each neighbourhood depth, but for a homophilous network one could place more weight on shallower neighbours relative to deeper ones. Our method could also be extended to heterophilic networks, in which nodes tend to be connected to dissimilar nodes. One could therefore calibrate among alternating neighbourhoods $\mathop{\bigcup }\limits_{{j = 1}}^{k}{\mathcal{N}}_{n + 1}^{2j} \smallsetminus {\mathcal{N}}_{n + 1}^{{2j} - 1}$.
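The alternating-neighbourhood idea amounts to collecting nodes at even hop distance from the test node. The sketch below is our own illustrative code for that union of shells (it is not part of the paper's method, which is stated as future work):

```python
from collections import deque

def hop_distances(adj, node):
    """BFS hop distance from `node` to every reachable node."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def even_shell_calibration_set(adj, node, k):
    """Union over j = 1..k of N^{2j} \\ N^{2j-1}: nodes at even hop distance <= 2k."""
    dist = hop_distances(adj, node)
    return {v for v, d in dist.items() if 0 < d <= 2 * k and d % 2 == 0}
```

On a heterophilic graph, nodes at even hop distance are plausibly more similar to the test node than its immediate neighbours, which motivates calibrating on this set instead.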
One restriction of our approach is that a node belonging to a small connected component may receive lower-quality prediction sets due to having a small number of calibration points. An approach for conformal prediction in hierarchical models was introduced by Dunn et al. [14], where the quantiles are calibrated in different groups before being pooled. A similar approach could be applied for nodes in small connected components, where calibration on similar neighbourhoods or components could be pooled to provide a better estimate of the conformal quantile.
## References

[1] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.

[2] Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. Deep Graph Library: A graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315, 2019.

[3] Vladimir Vovk, Alex Gammerman, and Glenn Shafer. Algorithmic Learning in a Random World. Springer-Verlag, Berlin, Heidelberg, 2005. ISBN 0387001522.

[4] Anastasios N. Angelopoulos, Stephen Bates, Jitendra Malik, and Michael I. Jordan. Uncertainty sets for image classifiers using conformal prediction. arXiv preprint arXiv:2009.14193, 2020.

[5] Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. Conformal prediction beyond exchangeability, 2022. URL https://arxiv.org/abs/2202.13415.

[6] Anastasios N. Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification, 2021. URL https://arxiv.org/abs/2107.07511.

[7] Yaniv Romano, Matteo Sesia, and Emmanuel Candès. Classification with valid and adaptive coverage. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 3581-3591. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/244edd7e85dc81602b7615cd705545f5-Paper.pdf.

[8] Miller McPherson, Lynn Smith-Lovin, and James M. Cook. Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1):415-444, 2001. doi: 10.1146/annurev.soc.27.1.415. URL https://doi.org/10.1146/annurev.soc.27.1.415.

[9] Jiong Zhu, Ryan A. Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K. Ahmed, and Danai Koutra. Graph neural networks with heterophily. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11168-11176, 2021.

[10] Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. GraphSAINT: Graph sampling based inductive learning method. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=BJe8pkHFwS.

[11] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf.

[12] Hanqing Zeng, Muhan Zhang, Yinglong Xia, Ajitesh Srivastava, Andrey Malevich, Rajgopal Kannan, Viktor Prasanna, Long Jin, and Ren Chen. Decoupling the depth and scope of graph neural networks. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=dOMtHWYONZ.

[13] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017.

[14] Robin Dunn, Larry Wasserman, and Aaditya Ramdas. Distribution-free prediction sets for two-layer hierarchical models. Journal of the American Statistical Association, 0(0):1-12, 2022. doi: 10.1080/01621459.2022.2060112. URL https://doi.org/10.1080/01621459.2022.2060112.

[15] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2014.
## A Appendix

### A.1 Adaptive Prediction Sets

Here we give a formal description of the APS [7] procedure for completeness. Denote the oracle classifier ${\pi }_{y}\left( x\right) = \mathbb{P}\left\lbrack {Y = y \mid X = x}\right\rbrack$ for all $y \in \mathcal{Y}$, and again let ${\pi }_{\left( 1\right) }\left( x\right) \geq {\pi }_{\left( 2\right) }\left( x\right) \geq \ldots \geq {\pi }_{\left( K\right) }\left( x\right)$ be the order statistics of this classifier. For any $\tau \in \left\lbrack {0,1}\right\rbrack$ define the generalised conditional quantile as

$$
L\left( {x;\pi ,\tau }\right) = \min \left\{ {k \in \{ 1,\ldots , K\} : {\pi }_{\left( 1\right) }\left( x\right) + {\pi }_{\left( 2\right) }\left( x\right) + \ldots + {\pi }_{\left( k\right) }\left( x\right) \geq \tau }\right\} . \tag{6}
$$

One can now define the set-valued function

$$
\mathcal{S}\left( {x, u;\pi ,\tau }\right) = \left\{ \begin{array}{ll} \text{ Labels of the }L\left( {x;\pi ,\tau }\right) - 1\text{ largest }{\pi }_{y}\left( x\right) , & \text{ if }u \leq V\left( {x;\pi ,\tau }\right) , \\ \text{ Labels of the }L\left( {x;\pi ,\tau }\right) \text{ largest }{\pi }_{y}\left( x\right) , & \text{ otherwise } \end{array}\right. \tag{7}
$$

where

$$
V\left( {x;\pi ,\tau }\right) = \frac{1}{{\pi }_{\left( L\left( x;\pi ,\tau \right) \right) }\left( x\right) }\left\lbrack {\mathop{\sum }\limits_{{c = 1}}^{{L\left( {x;\pi ,\tau }\right) }}{\pi }_{\left( c\right) }\left( x\right) - \tau }\right\rbrack . \tag{8}
$$

The oracle prediction set may then be defined as

$$
{C}_{\alpha }^{\text{oracle }}\left( x\right) = \mathcal{S}\left( {x, U;\pi ,1 - \alpha }\right)
$$

where $U \sim \operatorname{Uniform}\left( {0,1}\right)$ is independent of everything else. In words, ties are broken proportionally to the gap between the cumulative sum and the desired level $\tau$.
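Equations (6)-(8) translate directly into code. The sketch below is our own illustration of the oracle construction, with `probs` standing in for $\pi(x)$ and $\tau$ assumed to lie in $(0, 1]$:

```python
def randomized_aps_set(probs, tau, u):
    """Oracle APS set S(x, u; pi, tau) of Equation (7).

    probs: class probabilities pi_y(x); tau: target level (e.g. 1 - alpha);
    u: a Uniform(0, 1) draw used to randomly drop the final label."""
    order = sorted(range(len(probs)), key=lambda j: -probs[j])
    cum, L = 0.0, 0
    while cum < tau:                       # computes L(x; pi, tau) of Equation (6)
        cum += probs[order[L]]
        L += 1
    V = (cum - tau) / probs[order[L - 1]]  # Equation (8)
    return set(order[:L - 1] if u <= V else order[:L])
```

When the cumulative probability only just exceeds $\tau$, $V$ is close to 1 and the final label is dropped with high probability, which is what yields exact rather than conservative coverage.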
### A.2 Dataset Details

For the experiments above we used the Flickr and Reddit2 datasets from [10]. The Flickr dataset is constructed from images uploaded to the Flickr site, where the node features consist of the metadata for each image and the label is the image tag. The Reddit2 dataset is constructed from posts on the social media site Reddit, with posts representing nodes. The node features are bag-of-words vectors from the post, and the label is the community (or subreddit) that the post belongs to. Our train/validation/test splits use the splits given in the original papers (which are conveniently implemented in Pytorch Geometric [1]). As mentioned in the main text, we tested our method only on large connected components, which we chose as nodes with at least 50 2-hop neighbours in Flickr, and nodes with at least 1000 2-hop neighbours in Reddit2. We call this set of nodes ${\mathcal{N}}^{\text{cal }}$, and report the sizes of these sets as well as some summary statistics about each dataset in Table 3.

Table 3: Statistics for the Reddit2 and Flickr datasets.

<table><tr><td>Dataset</td><td>Nodes</td><td>Edges</td><td>#Features</td><td>#Classes</td><td>#Test Nodes</td><td>Ncal</td></tr><tr><td>Flickr</td><td>89,250</td><td>899,756</td><td>500</td><td>7</td><td>22313</td><td>5161</td></tr><tr><td>Reddit2</td><td>232,965</td><td>23,213,838</td><td>602</td><td>41</td><td>55334</td><td>22160</td></tr></table>
### A.3 Model Training Details

We used the implementations of GraphSAGE and ShaDow provided by Pytorch Geometric [1]. All models on all datasets used the same hyperparameters. Each GNN used 2 layers with hidden dimension $H = {64}$. We used the Adam optimiser [15] with default hyperparameters, learning rate $\eta = {0.1}$, and dropout probability $\delta = {0.5}$. For the GraphSAGE neighbour-sampling training we sampled 25 1-hop neighbours and 10 2-hop neighbours. We used early stopping based on accuracy on the validation set. We made no effort to optimise any of these parameters, as we are not trying to optimise for accuracy, but merely to show that our method performs well with a variety of architectures.

Each experiment took less than two hours in total on a single machine with an NVIDIA GeForce RTX 2060 SUPER GPU and an AMD Ryzen 7 3700X 8-core processor. One run of the conformal prediction procedure has trivial overhead compared with model fitting (and NAPS is in fact faster than APS, as we use fewer data points to calibrate the procedure).
papers/LOG/LOG 2022/LOG 2022 Conference/Zg8y2-v8ia/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,159 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Anonymous Author(s)
2
+
3
+ Anonymous Affiliation
4
+
5
+ Anonymous Email
6
+
7
+ § ABSTRACT
8
+
9
+ Graph Neural Networks (GNNs) are able to achieve high classification accuracy on many large real world datasets, but provide no rigorous notion of predictive uncertainty. We leverage recent advances in conformal prediction to construct prediction sets for node classification in inductive learning scenarios, and verify the efficacy of our approach across standard benchmark datasets using popular GNN models. The code is available at this link.
10
+
11
+ § 8 1 INTRODUCTION
12
+
13
+ Machine learning on graph structured data has seen a boom of popularity in recent years, with applications ranging from recommendation systems to biology and physics. Graph neural networks are quickly maturing as a technology; many state of the art models are commoditised in frameworks such as Pytorch Geometric [1] and DGL [2]. Despite their overwhelming popularity and success, very little progress has been made towards quantifying the uncertainty of the predictions made by these models, a vital step towards robust real world deployments.
14
+
15
+ In related areas of machine learning such as computer vision, conformal prediction [3] has emerged as a promising candidate for uncertainty quantification [4]. Conformal prediction is a very appealing approach as it is compatible with any black box machine learning algorithm and dataset as long as the data is statistically exchangeable. The most wide-spread method, so called split-conformal, also requires trivial computational overhead when compared to model fitting. Conformal prediction uses the assumption that test and training data have exchangeable statistical properties in order to assess the uncertainty of predictions.
16
+
17
+ Graph structured data is in general not exchangeable and so the guarantees provided by conformal prediction in its naive form do not hold. Recent work by Barber et al. [5] extends conformal prediction to the non-exchangeable setting and provides theoretical guarantees on the performance of conformal prediction in this setting. We leverage insights from [5] to apply conformal prediction in the node classification setting. The key insight is that for a homophilous graph, the model calibration should be similar in a neighbourhood around any given node. We leverage this insight to localise the calibration of conformal prediction. We show that our method improves calibration of predictive uncertainty and provides tighter prediction sets when compared with a naive application of conformal prediction across several state of the art models applied to popular node classification datasets.
18
+
19
+ § 2 CONFORMAL PREDICTION
20
+
21
+ Conformal prediction is a family of algorithms that generate finite sample valid prediction intervals or sets from an arbitrary black box machine learning model. Amazingly, the predictive model does not even need be well specified for these guarantees to hold (although the prediction intervals or sets may not be useful in this case). In the exposition below we will focus on conformal classification as that is the object of study in this work, but note that conformal prediction can be used for regression and other risk control procedures. We recommend consulting the excellent tutorial by Angelopoulos and Bates [6] for an introduction to conformal prediction.
22
+
23
+ § 2.1 THE EXCHANGEABLE CASE
24
+
25
+ Suppose we are working on a $K$ -class classification problem and we have a fitted model $\widehat{f} : \mathcal{X} \rightarrow$ ${\left\lbrack 0,1\right\rbrack }^{K}$ that outputs the probability of each class. Given an exchangeable set of held-out calibration
26
+
27
+ § DISTRIBUTION FREE PREDICTION SETS FOR NODE CLASSIFICATION
28
+
29
+ datapoints $\left( {{X}_{1},{Y}_{1}}\right) ,\ldots ,\left( {{X}_{n},{Y}_{n}}\right)$ (held out meaning they were not used to fit the model) and a new test point $\left( {{X}_{n + 1},{Y}_{n + 1}}\right)$ , conformal prediction constructs a prediction set $\mathcal{T}\left( {X}_{n + 1}\right)$ that satisfies
30
+
31
+ $$
32
+ 1 - \alpha \leq \mathbb{P}\left( {{Y}_{n + 1} \in \mathcal{T}\left( {X}_{n + 1}\right) }\right) \leq 1 - \alpha + \frac{1}{n + 1}
33
+ $$
34
+
35
+ for a user specified error rate $\alpha \in \left\lbrack {0,1}\right\rbrack$ . Conformal prediction relies on a score function $S : \mathcal{X} \times \mathcal{Y} \rightarrow$ $\mathbb{R}$ , which is a measure of the calibration of the prediction at a given datapoint. We will introduce one specific score function for classification, namely the Adaptive Prediction Sets (APS, [7]) procedure, but note that there are many possible options (see [6]). Given a score function $S$ , the procedure for constructing a prediction set is very simple; for each datapoint $\left( {{X}_{i},{Y}_{i}}\right)$ in the calibration set, compute the score ${s}_{i} = S\left( {{X}_{i},{Y}_{i}}\right)$ . Define $\widehat{q}$ to be the $\left\lceil {\left( {n + 1}\right) \left( {1 - \alpha }\right) }\right\rceil /n$ empirical quantile of the scores ${s}_{1},\ldots ,{s}_{n}$ , and finally create the prediction set $\mathcal{T}\left( {X}_{n + 1}\right) = \left\{ {y : S\left( {{X}_{n + 1},y}\right) \leq \widehat{q}}\right\}$ .
36
+
37
+ To motivate the APS score function, suppose we have access to an oracle predictor that exactly matches the conditional distribution $\pi \left( x\right) = \mathbb{P}\left( {{Y}_{n + 1} \mid {X}_{n + 1} = x}\right)$ . Then to construct a $1 - \alpha$ prediction set, we simply sort the probabilities into descending order, and add labels to the set until the cumulative probability exceeds $1 - \alpha$ (with appropriate tie breaking to ensure exact coverage).
38
+
39
+ Let $\left\{ {{\pi }_{\left( 1\right) },\ldots ,{\pi }_{\left( K\right) }}\right\}$ be the order statistics of the conditional probabilities $\pi \left( x\right)$ so that ${\pi }_{\left( 1\right) } \geq {\pi }_{\left( 2\right) } \geq$ $\cdots \geq {\pi }_{\left( K\right) }$ , and let ${y}_{\left( j\right) }$ denote the label with the $j$ -th largest probability. Prediction sets can be constructed from the oracle as
40
+
41
+ $$
42
+ \left\{ {{y}_{\left( 1\right) },\ldots ,{y}_{\left( k\right) }}\right\} \text{ , where }k = \inf \left\{ {{k}^{\prime } : \mathop{\sum }\limits_{{j = 1}}^{{k}^{\prime }}{\pi }_{\left( j\right) } \geq 1 - \alpha }\right\} \text{ . }
43
+ $$
44
+
45
+ In practice the probabilities given by the classifier $\widehat{f}\left( x\right)$ will not be exactly equal to $\mathbb{P}\left( {{Y}_{n + 1} \mid {X}_{n + 1} = }\right.$ $x)$ , but the oracle prediction set is used to define a conformal score as
46
+
47
+ $$
48
+ S\left( {x,y}\right) = \mathop{\sum }\limits_{{j = 1}}^{k}\widehat{f}{\left( x\right) }_{\left( j\right) },\text{ where }k\text{ is the rank of }y\text{ in the sorted probabilities. }\tag{1}
49
+ $$
50
+
51
+ Intuitively, labels are added to the set in decreasing order of probability until the true label is included, and the score is the cumulative probability of this set. To give an example, if we ran this procedure on a set of calibration data and found that we needed to use the level ${94}\%$ to get ${90}\%$ coverage, then we would use $\widehat{q} = {0.94}$ as the threshold when constructing new prediction sets. Note that to get exact coverage, ties need to be broken randomly when including the final label in the set; see Appendix A.1 for details.
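The split-conformal calibration described above can be sketched in a few lines of Python (an illustrative sketch with our own function names; the randomized tie-breaking needed for exact coverage is omitted):

```python
import math

def aps_score(probs, y):
    """APS score (Equation 1): cumulative probability of the labels, taken in
    decreasing order of probability, up to and including the true label y."""
    order = sorted(range(len(probs)), key=lambda j: -probs[j])
    total = 0.0
    for j in order:
        total += probs[j]
        if j == y:
            return total

def conformal_quantile(scores, alpha):
    """The ceil((n + 1) * (1 - alpha)) / n empirical quantile of the scores."""
    n = len(scores)
    rank = min(math.ceil((n + 1) * (1 - alpha)), n)
    return sorted(scores)[rank - 1]

def prediction_set(probs, q_hat):
    """All labels whose score falls below the calibrated threshold q_hat."""
    return [y for y in range(len(probs)) if aps_score(probs, y) <= q_hat]
```

Calibration then amounts to computing `aps_score` on every held-out pair, taking `conformal_quantile` of those scores, and thresholding new test points with `prediction_set`.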
52
+
53
+ § 2.2 BEYOND EXCHANGEABILITY
54
+
55
+ Conformal prediction in the form presented above relies on the assumption that the data points ${Z}_{i} = \left( {{X}_{i},{Y}_{i}}\right)$ are exchangeable. The exchangeable form of conformal prediction provides no guarantee if this assumption is violated; however, non-exchangeable conformal prediction, introduced in the pioneering work of Barber et al. [5], relaxes this requirement.
56
+
57
+ Formally, the non-exchangeable conformal prediction procedure assumes a choice of deterministic fixed weights ${w}_{1},\ldots ,{w}_{n} \in \left\lbrack {0,1}\right\rbrack$ (normalized as detailed in [5]). As before, one computes the scores ${s}_{1},\ldots ,{s}_{n}$ but now defines the prediction set in terms of the weighted quantiles of the score distribution
58
+
59
+ $$
60
+ {\widehat{C}}_{n}\left( {X}_{n + 1}\right) = \left\{ {y \in \mathcal{Y} : S\left( {{X}_{n + 1},y}\right) \leq {\mathrm{Q}}_{1 - \alpha }\left( {\mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i} \cdot {\delta }_{{s}_{i}} + {w}_{n + 1} \cdot {\delta }_{+\infty }}\right) }\right\} \tag{2}
61
+ $$
62
+
63
+ where ${\mathrm{Q}}_{\tau }\left( \cdot \right)$ denotes the $\tau$ -quantile of a distribution. Non-exchangeable conformal prediction also comes with performance guarantees; the authors define the coverage gap
64
+
65
+ $$
66
+ \text{ Coverage gap } = \left( {1 - \alpha }\right) - \mathbb{P}\left\{ {{Y}_{n + 1} \in {\widehat{C}}_{n}\left( {X}_{n + 1}\right) }\right\} \tag{3}
67
+ $$
68
+
69
+ as the loss of coverage when compared to the exchangeable setting, and show that this can be bounded as follows: let $Z = \left( {\left( {{X}_{1},{Y}_{1}}\right) ,\ldots ,\left( {{X}_{n + 1},{Y}_{n + 1}}\right) }\right)$ be the full dataset and define ${Z}^{i}$ as the same dataset after swapping the test point and the ${i}^{th}$ training point
70
+
71
+ $$
72
+ {Z}^{i} = \left( {\left( {{X}_{1},{Y}_{1}}\right) ,\ldots ,\left( {{X}_{i - 1},{Y}_{i - 1}}\right) ,\left( {{X}_{n + 1},{Y}_{n + 1}}\right) ,\left( {{X}_{i + 1},{Y}_{i + 1}}\right) ,\ldots ,\left( {{X}_{n},{Y}_{n}}\right) ,\left( {{X}_{i},{Y}_{i}}\right) }\right) .
73
+ $$
74
+
75
+ Then the coverage gap in Equation (3) can be bounded as (Theorem 2a, Barber et al. [5]):
76
+
77
+ $$
78
+ \text{ Coverage gap } \leq \frac{\mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i} \cdot {\mathrm{d}}_{TV}\left( {Z,{Z}^{i}}\right) }{1 + \mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i}} \tag{4}
79
+ $$
80
+
81
+ where ${\mathrm{d}}_{TV}$ is the total variation distance. To make this bound small one would like to place a large weight ${w}_{i}$ on datapoints $\left( {{X}_{i},{Y}_{i}}\right)$ that are drawn from a similar distribution to the test point $\left( {{X}_{n + 1},{Y}_{n + 1}}\right)$ .
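The weighted-quantile construction in Equation (2) can be sketched as follows (an illustrative sketch with our own function names; the point mass at $+\infty$ is given unit weight):

```python
def weighted_quantile(scores, weights, tau):
    """tau-quantile of the weighted score distribution in Equation (2),
    i.e. sum_i w_i * delta_{s_i} plus a unit point mass at +infinity."""
    total = sum(weights) + 1.0  # the +1 is the mass w_{n+1} at +infinity
    cum = 0.0
    for s, w in sorted(zip(scores, weights)):
        cum += w
        if cum >= tau * total:
            return s
    return float("inf")  # the quantile falls on the point mass at +infinity

def nonexchangeable_prediction_set(label_scores, cal_scores, cal_weights, alpha):
    """Labels whose score does not exceed the (1 - alpha) weighted quantile."""
    q_hat = weighted_quantile(cal_scores, cal_weights, 1.0 - alpha)
    return [y for y, s in enumerate(label_scores) if s <= q_hat]
```

When every weight is small the quantile may fall on the $+\infty$ mass, in which case the prediction set contains all labels, recovering a trivially valid set.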
82
+
83
+ § 3 CONFORMAL PREDICTION FOR NODE CLASSIFICATION
84
+
85
+ Consider now the node classification setting: we are given a graph $G = \left( {V,E}\right)$ , and for each node $i \in V$ we are given a node feature vector ${X}_{i} \in {\mathbb{R}}^{F}$ and a label ${Y}_{i} \in \mathcal{Y}$ . A standard pipeline for node classification usually consists of a GNN model that produces a node embedding ${h}_{i} \in {\mathbb{R}}^{H}$ followed by a classifier $f : {\mathbb{R}}^{H} \rightarrow \mathcal{Y}$ . Here the data points ${Z}_{i} = \left( {{X}_{i},{Y}_{i}}\right)$ are certainly not assumed to be exchangeable; the underlying principle of GNN models is that the adjacency matrix of $G$ provides information about the dependency between datapoints (and hence neighbourhood information of $G$ is aggregated and used for prediction). Barber et al. [5] show in particular that non-exchangeable data can be navigated when the inference algorithm is symmetric. Fitting the model on training data is trivially a symmetric function of the held-out data (as the held-out data do not enter) and hence falls into this framework.
86
+
87
+ We combine non-exchangeable conformal prediction with the information given by the adjacency matrix into an algorithm for constructing prediction sets for node classification, which we call Neighbourhood Adaptive Prediction Sets (NAPS). We set the weights in Equation (2) to ${w}_{i} = 1$ if $i \in {\mathcal{N}}_{n + 1}^{k}$ , where ${\mathcal{N}}_{n + 1}^{k}$ is the $k$ -hop neighbourhood of node $n + 1$ . We then apply non-exchangeable conformal prediction with the APS scoring function in Equation (1). The coverage gap of NAPS is
88
+
89
+ bounded as
90
+
91
+ $$
92
+ \text{ Coverage gap } \leq \frac{\mathop{\sum }\limits_{{i \in {\mathcal{N}}_{n + 1}^{k}}}{\mathrm{\;d}}_{TV}\left( {Z,{Z}^{i}}\right) }{1 + \left| {\mathcal{N}}_{n + 1}^{k}\right| } \tag{5}
93
+ $$
94
+
95
+ by simple substitution into Equation (4). This bound will be small if the $k$ -hop neighbours of node $n + 1$ are distributed similarly to it, a property otherwise known as homophily [8]. Homophily is a key principle of many real world networks, where linked nodes often belong to the same class and have similar features, and it is crucial for good performance in many popular GNN architectures (although recent work has considered the heterophilic case, see [9], which we will discuss in the future work section). This is also related to network homogeneity, where nodes in a neighbourhood play similar roles in the network and are considered interchangeable on average.
96
+
97
+ The neighbourhood depth parameter $k$ introduces a tradeoff: expanding the neighbourhood increases the sample size for calibration, but introduces nodes that may be progressively less exchangeable with the test node. In the form presented here we recommend only applying NAPS to large homophilous networks with dense 1- or 2-hop neighbourhoods, but we will discuss extensions in future work.
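A minimal sketch of the NAPS weighting scheme, assuming the graph is given as adjacency lists (function names are ours):

```python
from collections import deque

def k_hop_neighbourhood(adj, node, k):
    """BFS over adjacency lists: all nodes within k hops of `node`,
    excluding the node itself."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue  # do not expand beyond depth k
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return {v for v in dist if v != node}

def naps_weights(adj, test_node, k, num_nodes):
    """Unit weights w_i = 1 on the k-hop neighbourhood, zero elsewhere."""
    nbrs = k_hop_neighbourhood(adj, test_node, k)
    return [1.0 if i in nbrs else 0.0 for i in range(num_nodes)]
```

These weights are then passed to the weighted-quantile step of non-exchangeable conformal prediction with the APS score.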
98
+
99
+ § 4 EXPERIMENTS
100
+
101
+ We now perform experiments with popular real world datasets and models to evaluate the performance of our procedure. Note that our method is compatible with any node classification model and any dataset. Our experiments follow this format: we split each graph into training, validation and test nodes (where the validation and test nodes are not available during model fitting, i.e. an inductive node split). The training and validation sets are used for model fitting, and the test set is used to evaluate the conformal prediction procedure by constructing prediction sets and evaluating the empirical coverage.
102
+
103
+ Evaluating Conformal Prediction. When evaluating a conformal prediction procedure, there are two forms of randomness that need to be controlled. The first is in the coverage; the coverage of a single run of a conformal prediction procedure is a random quantity, meaning even with infinitely many validation points the coverage will not converge to a fixed value. Conformal prediction instead gives coverage on average over the randomness in the calibration set. It is therefore important to pick a large enough number of calibration points, and also perform enough repetitions of the experiment to be sure of the results. For simplicity we follow the guidelines given in Angelopoulos and Bates [6], which suggest using at least 1000 validation points, and we repeat each experiment 100 times; with this setup by the law of large numbers the probability of observing significant deviations from the true coverage is extremely low, and therefore we can evaluate the performance of our method with high confidence.
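The point that the coverage of a single run is itself random can be illustrated with a small simulation (a sketch under the simplifying assumption of uniformly distributed scores, not our experimental pipeline):

```python
import math
import random

def empirical_coverage(n_cal, n_test, alpha, rng):
    """Coverage of one split-conformal run with scores drawn uniformly at
    random: the fraction of test scores falling below q_hat."""
    cal = sorted(rng.random() for _ in range(n_cal))
    rank = min(math.ceil((n_cal + 1) * (1 - alpha)), n_cal)
    q_hat = cal[rank - 1]
    return sum(rng.random() <= q_hat for _ in range(n_test)) / n_test

rng = random.Random(0)
runs = [empirical_coverage(1000, 1000, 0.1, rng) for _ in range(100)]
# individual runs fluctuate around 1 - alpha, while the average over
# the 100 repetitions concentrates near the nominal level
```

With 1000 calibration points and 100 repetitions, the averaged coverage lands very close to the nominal 90%, while single runs visibly deviate from it.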
104
+
105
+ Experimental Setup. In each experiment, we sample a batch of nodes and construct a $1 - \alpha$ probability prediction set using NAPS as described in Section 3, as well as using a naive application of APS calibrated among all the other nodes in the test set. We then report the empirical coverage, average prediction set size and average size of the prediction set given that the set contains the true label across all nodes. For each experiment we sample 1000 nodes randomly from the nodes in the test set, and we perform 100 repetitions of the experiment. We only apply our method to large connected components from the test set following the discussion in Section 3 (see Appendix A.2 for details on the datasets and the test set construction procedure). We apply our method to two popular node classification datasets, namely Reddit2 and Flickr introduced in [10]. We apply two variants of two popular GNN models, namely GraphSAGE [11] with the mean and max aggregators, and the ShaDow [12] subgraph sampling scheme with GraphSAGE and GCN [13] layers.
106
+
107
+ The results for the Reddit2 and Flickr datasets are displayed in Tables 1 and 2 respectively. We see that, across all models on both datasets, NAPS produces well calibrated, tight prediction sets, while the naive application of APS tends to overcover and produce wider prediction sets.
108
+
109
+ Table 1: The test accuracy, empirical coverage, average prediction set size and average prediction set size conditional on coverage for all models considered on the Reddit2 dataset with $\alpha = {0.1}$ . Bold indicates the best performing method.
110
+
111
+ | Model | Top-1 Accuracy | Coverage (APS) | Coverage (NAPS) | Size (APS) | Size (NAPS) | Size \| Cov. (APS) | Size \| Cov. (NAPS) |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
+ | GraphSAGE-Mean | 0.914 | 0.928 | 0.897 | 2.23 | 1.77 | 2.37 | 1.93 |
+ | GraphSAGE-Pool | 0.771 | 0.918 | 0.904 | 3.97 | 3.41 | 4.08 | 3.53 |
+ | ShaDow-SAGE | 0.844 | 0.930 | 0.902 | 2.15 | 1.72 | 2.21 | 1.78 |
+ | ShaDow-GCN | 0.827 | 0.931 | 0.902 | 2.18 | 1.73 | 2.22 | 1.81 |
131
+
132
+ Table 2: The test accuracy, empirical coverage, average prediction set size and average prediction set size conditional on coverage for all models considered on the Flickr dataset with $\alpha = {0.1}$ .
133
+
134
+ | Model | Top-1 Accuracy | Coverage (APS) | Coverage (NAPS) | Size (APS) | Size (NAPS) | Size \| Cov. (APS) | Size \| Cov. (NAPS) |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
+ | GraphSAGE-Mean | 0.503 | 0.912 | 0.904 | 4.22 | 3.82 | 4.26 | 3.87 |
+ | GraphSAGE-Max | 0.501 | 0.907 | 0.902 | 4.26 | 4.03 | 4.28 | 4.08 |
+ | ShaDow-SAGE | 0.500 | 0.910 | 0.904 | 4.24 | 4.02 | 4.25 | 4.09 |
+ | ShaDow-GCN | 0.496 | 0.913 | 0.905 | 4.25 | 4.05 | 4.26 | 4.01 |
154
+
155
+ § 5 CONCLUSION AND FUTURE WORK
156
+
157
+ In this work we have introduced NAPS, an approach for constructing prediction sets on graph structured data. Our method comes with theoretical guarantees on the coverage and we have shown that our approach produces high quality prediction sets when using popular GNN models on standard node classification datasets. Several natural extensions to NAPS will follow in future work; here we applied equal weights to the scores at each neighbourhood depth, but for a homophilous network one could place more weight on shallower neighbours relative to deeper neighbours. Our method could also be extended to heterophilic networks; in a heterophilic network nodes tend to be connected to dissimilar nodes. One could therefore calibrate among alternating neighbourhoods $\mathop{\bigcup }\limits_{{j = 1}}^{k}{\mathcal{N}}_{n + 1}^{2j} \smallsetminus {\mathcal{N}}_{n + 1}^{{2j} - 1}$ .
158
+
159
+ One restriction of our approach is that if a node belongs to a small connected component, it may produce lower quality prediction sets due to having a small number of calibration points. An approach for conformal prediction in hierarchical models was introduced in Dunn et al. [14], where the quantiles are calibrated in different groups before being pooled. A similar approach could be applied for nodes in small connected components, where calibration on similar neighbourhoods or components could be pooled to provide a better estimate of the conformal quantile.
papers/LOG/LOG 2022/LOG 2022 Conference/ZuMgYX1irC/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,327 @@
1
+ # Combining Graph and Recurrent Networks for Efficient and Effective Segment Tagging
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Graph Neural Networks have been demonstrated to be highly effective and efficient in learning relationships between nodes locally and globally. They are also well suited to document-related tasks due to their flexibility and capacity to adapt to complex layouts. However, information extraction on documents still remains a challenge, especially when dealing with unstructured documents. The semantic tagging of the text segments (a.k.a. entity tagging) is one of the essential tasks. In this paper we present SeqGraph, a new model that combines Transformers for text feature extraction with Graph Neural Networks and recurrent layers for segment interaction, for efficient and effective segment tagging. We address some of the limitations of current architectures and Transformer-based solutions. We optimize the model architecture by combining Graph Attention layers (GAT) and Gated Recurrent Units (GRUs), and we provide an ablation study on the design choices to demonstrate the effectiveness of SeqGraph. The proposed model is extremely light (4 million parameters), reducing the number of parameters by a factor of 100 to 200 compared to its competitors, while achieving state-of-the-art results (97.23% F1 score on the CORD dataset).
12
+
13
+ ## 1 Introduction
14
+
15
+ Information Extraction (IE) has become a focus task over the last years within the Machine Learning community. There is a growing need to automate the extraction and storage of information from documents, especially from unstructured ones. The rise of Deep Learning has been extremely beneficial, leading to the release of models capable of performing with the same quality as humans [1-5]. Moreover, a myriad of businesses and use cases within industry require the automatic processing and understanding of documents and their contents to convert unstructured data into their semantically structured components. The main aim is to reduce the manual burden in day-to-day operations by implementing efficient and cost-effective automatic solutions.
16
+
17
+ Within IE, the semantic tagging of the text segments (a.k.a. entity tagging) is an essential task that allows the system to understand the different parts of the document and to focus on the most relevant information. Usually, these text segments are detected in a previous stage by an OCR engine at word level. This paper presents an innovative solution for efficient and effective segment tagging on unstructured documents.
18
+
19
+ Existing entity tagging models are huge: they contain an overwhelming number of parameters (hundreds of millions) that could be greatly reduced. Furthermore, most of them are purely or mostly based on Transformer [6] architectures [1, 3-5]. They need to define a sequence limit and, therefore, they suffer from the sequence truncation problem. In addition, this sequence limit must be chosen carefully, as the cost of full self-attention grows quadratically with it. This is because Transformers are fully connected architectures, where each segment needs to interact with all the others.
20
+
21
+ These challenges can be solved using more flexible architectures, such as the ones based on Graph Neural Networks (GNN) $\left\lbrack {2,7,8}\right\rbrack$ . GNNs have been demonstrated to be highly effective and efficient in learning relationships between nodes locally and globally. However, they do not use the sequential order of the nodes as a source of information, which is important for the considered task. To this extent, some attempts have been made to compensate for this limitation by combining them with other mechanisms, such as recurrent layers [2]. We believe these models are also overparameterized and that their selection and treatment of the features is not optimal. To overcome these limitations, we present SeqGraph, a new model that combines Transformers for text feature extraction with GNNs and recurrent layers for segment interaction, for efficient and effective segment tagging. The main contributions of this work are:
22
+
23
+ - Extremely light-weight entity tagging model (4 million parameters) capable of achieving state-of-the-art results (97.23% F1 on the entity tagging task of the CORD dataset [9]).
24
+
25
+ - Optimal selection and extraction of the node features from the text and bounding boxes of the segments.
26
+
27
+ - Optimized model architecture combining Graph Attention layers (GAT) [10] and Gated Recurrent Units (GRUs) [11].
28
+
29
+ - Ablation study on the impact of the different sources of information on the model accuracy and parameters.
30
+
31
+ ## 2 Related Work
32
+
33
+ In recent years, the increase in the demand for information extraction systems has been reflected in the number of publications and, consequently, also on entity tagging $\left\lbrack {1 - 5,7,8,{12} - {18}}\right\rbrack$ . Although there are few works that attempt to solve the problem from scratch [19], almost all the released models rely on text segments extracted using an OCR engine. Most of them are based purely or partially on Transformer architectures [6]. However, there is an emerging trend on the application of GNNs for entity tagging.
34
+
35
+ ### 2.1 Transformer-based models
36
+
37
+ Since the most basic versions, such as BERT [20] or RoBERTa [21], which only use the text and the sequential order of the segments to extract the input features, many novelties have been introduced to enhance Transformer performance on this task. New models include different sources of information, such as the image or the layout, but also new ways of extracting the features and combining them. For instance, in several works, the authors inject the layout information of each segment into their features, some of them at word level [1, 12-15], and others dividing the document into regions that share the same embeddings $\left\lbrack {3,5,{16},{17}}\right\rbrack$ . In some cases, the layout information has also been used to enhance the self-attention mechanisms, usually as a bias term [3,4,12,18,22]. The image features are usually integrated with the textual ones by concatenating or adding them $\left\lbrack {5,{14},{15},{18}}\right\rbrack$ , but some models use more sophisticated ways of combining them. For instance, in [3], [4], and [16] the authors use a multi-modal transformer architecture that incorporates a multimodal self-attention to enforce the cross-modality feature correlation.
38
+
39
+ ### 2.2 GNN-based models
40
+
41
+ Within IE tasks, GNNs try to overcome the limitations of the Transformer-based models. The Transformers are fully connected architectures, where each segment must interact with each other. The number of parameters increases rapidly with the number of segments. In addition, they need a predefined maximum sequence length, leading to truncation problems for large sequences. GNNs avoid all these problems with their flexible structure, where each segment only needs to interact with a reduced number of neighbors. However, the setup is more complex, with some critical design choices, such as the graph structure, the edge sampling strategy, or the message propagation approach. Some promising approaches have been recently released. For instance, in [7] the authors propose a GNN-based model for solving entity tagging (ET), building (EB), and linking (EL). First, they generate a k-Nearest Neighbor (kNN) graph at text segment level for solving the EB task as an edge prediction task, using features extracted from the bounding boxes and from the text and passing them through several GAT layers. Then, the entity features are computed by aggregating the output features from the GNN and processing them with a linear layer. The features are used to solve the ET task, using a Multi-Layer Perceptron (MLP) classifier, and the EL task, evaluating all the possible entity pairs with another MLP classifier. In [8], the model also incorporates visual features from the image. The text features are extracted using a BERT encoder and the visual ones using a SWIN Transformer with a Feature Pyramid Network (FPN). Both vectors are enriched with layout information by adding a layout embedding. Then, before each GAT layer of the GNN architecture, the visual features are fused into the input features using a fusion layer. In these layers, the relative layout is also included in the self-attention mechanism.
On the other hand, in [2] the image and text features are fused before feeding them into the GNN. The layout information is not embedded into the node features but used for computing the weights for the message aggregation within the Graph Convolution Network (GCN) layers [23]. The text features are extracted using a Transformer encoder at character level and the image features using a Convolutional Neural Network (CNN) architecture. Then, the character features of each segment are averaged and passed through the GNN layers. The output features are then aggregated with the previous character-level features and fed to two bidirectional LSTM layers [22] in order to extract the sequential information. Finally, they use the Viterbi algorithm to generate the final predictions.
42
+
43
+ All the above works have some drawbacks. In [7], the quality of the extracted text features is poor, and they do not leverage the sequential information, which is important for the considered task. Consequently, the results obtained are very limited. In the case of [2] and [8], the models have a huge number of parameters, in part due to the heavy image backbones that they use. In this work we aim at combining the benefits of all of them to make a light, fast, and effective entity tagging model.
44
+
45
+ ## 3 Methodology
46
+
47
+ ### 3.1 Problem definition
48
+
49
+ Given a list of text segments (usually at word level) provided by an OCR engine that extracted them from an image of a document, the goal is to tag each segment with its corresponding semantic category from a closed list. Each segment consists of the text string and the rotated bounding box. For instance, for a purchase receipt, each segment could be tagged as store address, phone, date, time, item description, item value, etc. Figure 1 shows an illustration.
50
+
51
+ ![01963f08-ed9a-7227-b3c3-0cccdb5b6e2a_2_308_1182_1177_611_0.jpg](images/01963f08-ed9a-7227-b3c3-0cccdb5b6e2a_2_308_1182_1177_611_0.jpg)
52
+
53
+ Figure 1: Illustration of document entity tagging. In the right image, the color of the bounding box denotes its category. Segments with the bounding boxes of the same color belong to the same category.
54
+
55
+ Some of the challenges present in the raw collected images of documents include: highly unstructured documents, such as purchase receipts, with multiple and complex layouts and non-natural language (abbreviations, brands, product names, punctuation, etc.); noise caused by the OCR engine (errors in the text, inaccurate bounding boxes, missing or duplicated detections); and degradation due to bad image and/or physical conditions (perspective, rotation, wrinkles, ripples).
56
+
57
+ ### 3.2 Overview
58
+
59
+ Given the above background, we propose a model based on GNNs due to these reasons:
60
+
61
+ - Graph-based representations are flexible and capable of adapting to complex layouts.
62
+
63
+ - The task can be defined as node classification, where the semantic category of a node is highly dependent on the features of its neighbor nodes and on the global context. The literature about GNNs has demonstrated they are highly effective and efficient in learning relationships between nodes locally and globally.
64
+
65
+ - The number of nodes in a document varies from a few to hundreds, which can be infeasible to process for fully connected networks or Transformer-based models. However, for this use case, the number of interactions can be limited based on the bounding box coordinates, accelerating the inference and reducing the number of required resources. GNNs are suitable for this type of highly sparse data structure.
66
+
67
+ We must note one of the weaknesses of the proposed GNN-based approach: GNNs do not consider the position of the segments in the input sequence. However, the reading order of a document carries information beyond the spatial layout. Several approaches have tried to overcome this limitation by injecting the sequential information into the node features [24], using it within the attention mechanism of the GAT layers [25], or combining the GNN with recurrent layers [2]. The first two require defining how to extract this information and combine it with the rest of the features, which can be tedious and lead to a more unstable model. In addition, they require increasing the number of GNN layers or their number of parameters to learn from this new source of information. By contrast, recurrent layers learn this information directly from the order of the segments, with a reduced number of parameters and without altering the GNN architecture.
68
+
69
+ Following this reasoning, we developed a hybrid model based on GNN and RNN for text segment tagging, as shown in Figure 2. Starting from the list of text segments coming from the OCR, the model first extracts and preprocesses the text and region features from each segment. In parallel, it generates the segment nodes and performs the edge sampling between them. Next, the node features are passed through the graph attention layers and get enriched by their neighbors. Two bidirectional Gated Recurrent Unit (GRU) layers [11] then process the features to add information about the order of the segments. Finally, we add a linear layer and a Softmax layer to obtain the output probabilities for each text segment.
70
+
71
+ ![01963f08-ed9a-7227-b3c3-0cccdb5b6e2a_3_308_1314_1183_435_0.jpg](images/01963f08-ed9a-7227-b3c3-0cccdb5b6e2a_3_308_1314_1183_435_0.jpg)
72
+
73
+ Figure 2: High Level Architecture of the proposed model.
74
+
75
+ ### 3.3 Feature extraction
76
+
77
+ We use three sources of information: the text string, the bounding box and the position in the sequence.
78
+
79
+ Diving through the literature, we can find different approaches for extracting features from the text of a segment, but we can group them into two categories: the ones that extract the features attending to its semantic meaning and the ones that extract the features attending to its composition. The first one assigns a feature vector to each text string (usually at word level) using an embedding layer and a predefined dictionary [7, 26]. The embedding layer can be pretrained on another dataset or it can be trained directly from scratch, using the training set for generating the dictionary [7]. The latter method, extracting the features attending to the text composition, means inspecting its characters and their position within the text and finding relevant relationships between them [2]. We include a deeper dive with pros and cons of each of the two approaches in the Appendix A.1.
80
+
81
+ Analyzing both approaches and the challenges of the segment tagging task for documents, we adopt the second approach, text feature extraction based on its composition, similar to how it is done in [2]. We decide to consider only ASCII characters, setting the length of the dictionary to 128. We convert all the Unicode characters using the standard Unidecode Python package. Its function unidecode() takes Unicode data and tries to represent it in ASCII characters using transliteration tables. The characters that cannot be converted are removed. The size of the embedding layer is 256. The Transformer has 3 layers with 4 heads and an internal dimension of 512.
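A minimal sketch of the character-level encoding step (for illustration we drop non-ASCII characters directly rather than transliterating them with Unidecode first, and the truncation cap `max_len` is a hypothetical parameter):

```python
VOCAB_SIZE = 128  # ASCII-only character dictionary, as described above

def encode_segment(text, max_len=64):
    """Map a text segment to a list of character indices in [0, VOCAB_SIZE).
    Characters outside ASCII are dropped here; the actual pipeline first
    transliterates them with unidecode()."""
    codes = [ord(c) for c in text if ord(c) < VOCAB_SIZE]
    return codes[:max_len]
```

The resulting index sequence is what the embedding layer (dimension 256) and the small character-level Transformer consume.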
82
+
83
+ Regarding the rotated bounding box, we select the following features:
84
+
85
+ - Left center coordinates: middle point between the top-left and the bottom-left vertices of the rotated bounding box.
86
+
87
+ - Right center coordinates: middle point between the top-right and the bottom-right vertices of the rotated bounding box.
88
+
89
+ - Bounding box rotation: angle of the bounding box in radians, between $-\pi/2$ and $\pi/2$.
90
+
91
+ Note that we discard the height of the bounding box as we observed that the model tended to overfit using this feature. We believe that the height of the segment is not a crucial feature for this task, as it might vary across segments that share the same category, and it does not contain reliable information about the distance between different text lines.
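The selected bounding box features can be computed from the four vertices of a rotated box as follows (an illustrative sketch; the vertex ordering and the exact angle convention are our assumptions):

```python
import math

def bbox_features(box):
    """Node features from a rotated bounding box given as four vertices
    (top-left, top-right, bottom-right, bottom-left), each an (x, y) pair:
    left centre, right centre, and rotation angle in (-pi/2, pi/2]."""
    tl, tr, br, bl = box
    left_centre = ((tl[0] + bl[0]) / 2, (tl[1] + bl[1]) / 2)
    right_centre = ((tr[0] + br[0]) / 2, (tr[1] + br[1]) / 2)
    dx = right_centre[0] - left_centre[0]
    dy = right_centre[1] - left_centre[1]
    angle = math.atan2(dy, dx)  # angle of the text baseline
    # fold the angle into (-pi/2, pi/2]
    if angle > math.pi / 2:
        angle -= math.pi
    elif angle <= -math.pi / 2:
        angle += math.pi
    return (*left_centre, *right_centre, angle)
```

Note that the height of the box never enters the feature tuple, matching the design choice discussed above.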
92
+
93
+ Finally, the position in the sequence is already implicit in the order of the segments and used by the recurrent layers. It could be also injected into the node features by using for instance a positional embedding, but that would require selecting a maximum position and truncating the sequences that exceed this length, which would yield a drop of accuracy. In addition, the positional embeddings do not work well with very long sequences, and we want to consider lengths of hundreds. For these reasons, this information is not injected into the node features.
94
+
95
+ After extracting the textual and positional features they are fused by increasing the dimension of the positional features using a linear layer to match the textual features one (256) and adding both. The whole feature extraction process is described in Figure 3.
96
+
97
+ ![01963f08-ed9a-7227-b3c3-0cccdb5b6e2a_4_311_1367_1180_432_0.jpg](images/01963f08-ed9a-7227-b3c3-0cccdb5b6e2a_4_311_1367_1180_432_0.jpg)
98
+
99
+ Figure 3: Feature extraction process applied to each text segment.
100
+
101
+ ### 3.4 GNN
102
+
103
+ The GNN architecture takes advantage of the fact that all the information needed for computing the message passing weights (positional and textual) is already embedded in the node features, and we select Graph Attention (GAT) layers [10] as the ones that best suit our needs. In GAT layers, the weights for the message passing are computed directly inside the layer from the input node features, similarly to the original attention layers (see Equation 1).
104
+
105
+ $$
106
+ {z}_{i}^{\left( l\right) } = {W}^{\left( l\right) }{h}_{i}^{\left( l\right) }
107
+ $$
108
+
109
+ $$
110
+ {e}_{ij}^{\left( l\right) } = \operatorname{LeakyReLU}\left( {{a}^{{\left( l\right) }^{T}}\left( {{z}_{i}^{\left( l\right) }\parallel {z}_{j}^{\left( l\right) }}\right) }\right)
111
+ $$
112
+
113
+ $$
114
+ {\alpha }_{ij}^{\left( l\right) } = \frac{\exp \left( {e}_{ij}^{\left( l\right) }\right) }{\mathop{\sum }\limits_{{k \in \mathcal{N}\left( i\right) }}\exp \left( {e}_{ik}^{\left( l\right) }\right) } \tag{1}
115
+ $$
116
+
117
+ $$
118
+ {h}_{i}^{\left( l+1\right) } = \sigma \left( {\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{\alpha }_{ij}^{\left( l\right) }{z}_{j}^{\left( l\right) }}\right)
119
+ $$
120
+
121
+ They have been widely applied in document understanding tasks [7, 8]. To avoid 0-in-degree errors (disconnected nodes) while using the GAT layers, we add a self-loop for each node, i.e., an edge that connects the node with itself.
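A dense, single-head sketch of Equation 1 with the self-loops mentioned above (NumPy for illustration only; the actual model uses a GNN library and multi-head layers):

```python
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))           # x * sigmoid(x), the SiLU activation

def gat_layer(h, adj, W, a, slope=0.2):
    """One single-head GAT layer following Equation 1.
    h: (N, F) node features; adj: (N, N) boolean adjacency;
    W: (F, Fp) projection; a: (2*Fp,) attention vector."""
    n = h.shape[0]
    adj = adj | np.eye(n, dtype=bool)       # self-loops avoid 0-in-degree nodes
    z = h @ W                               # z_i = W h_i
    fp = z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]) = LeakyReLU(a_l . z_i + a_r . z_j)
    e = (z @ a[:fp])[:, None] + (z @ a[fp:])[None, :]
    e = np.where(e > 0, e, slope * e)
    e = np.where(adj, e, -np.inf)           # attend only to neighbours
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return silu(alpha @ z)                  # h_i' = sigma(sum_j alpha_ij z_j)
```

With a self-loop-only adjacency, each node simply transforms its own features, which shows why the loops make disconnected nodes harmless.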
122
+
123
+ The proposed architecture in Figure 4 is composed of 3 GAT layers. All the layers are followed by SiLU activations [27] except for the last one. In our research, this activation worked better than ReLU and other variants. We also add residual connections in all the layers to accelerate the convergence. Inspired by [8], we introduce a global document node. We use one global node per graph level, and we connect it bidirectionally to the rest of the level nodes. Its feature embedding is initially computed by averaging all the level node embeddings. It has two purposes: Firstly, it provides some context information to the nodes, as it gathers information from the whole graph. Secondly, it acts as a regularization term for the GAT layer weights, as it is not a real neighbor node. These global nodes are only used during the message passing but discarded once the GNN stage is finished.
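The global document node could be materialized as follows (a hypothetical helper; node ids and the edge-list representation are illustrative):

```python
import numpy as np

def add_global_node(h, edges):
    """Append a global node initialized as the mean of all node features,
    connected bidirectionally to every other node."""
    g = h.mean(axis=0, keepdims=True)
    h_ext = np.vstack([h, g])
    gid = h.shape[0]                        # id of the new global node
    edges_ext = list(edges)
    for i in range(gid):
        edges_ext += [(i, gid), (gid, i)]   # bidirectional connection
    return h_ext, edges_ext
```

After the GNN stage, the row at index `gid` is simply dropped, matching the description above.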
124
+
125
+ ![01963f08-ed9a-7227-b3c3-0cccdb5b6e2a_5_595_867_607_702_0.jpg](images/01963f08-ed9a-7227-b3c3-0cccdb5b6e2a_5_595_867_607_702_0.jpg)
126
+
127
+ Figure 4: Proposed GNN architecture.
128
+
129
+ For the edge sampling we use a custom approach. We are dealing with unstructured documents with unknown variability in layouts, and we cannot assume any constraint on the distance between segments. We define a sampling function that aims at connecting each segment with the rest of the segments that are on the same line or on adjacent ones: an edge from segment A to segment B is created if the vertical distance between their centers (C) is less than the height (H) of segment A multiplied by a constant (K) (see Equation 2). In our experiments we set this constant to two.
130
+
131
+ $$
132
+ {\text{edge}}_{A - B} = \left| {{C}_{A}^{y} - {C}_{B}^{y}}\right| < {H}_{A} * K \tag{2}
133
+ $$
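Equation 2 can be read as the following sampling routine (an O(n²) sketch; `centers_y` and `heights` are assumed per-segment values):

```python
def sample_edges(centers_y, heights, k=2.0):
    """Directed edge A -> B whenever |C_A^y - C_B^y| < H_A * K (Equation 2)."""
    n = len(centers_y)
    return [(a, b)
            for a in range(n)
            for b in range(n)
            if a != b and abs(centers_y[a] - centers_y[b]) < heights[a] * k]
```

Note that the criterion depends on the height of the source segment, so the resulting graph can be asymmetric when segment heights differ.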
134
+
135
+ ### 3.5 Recurrent layers
136
+
137
+ The recurrent layers gather the information about the sequence order that the GNN layers are missing and inject it into the node features. More specifically, we use two bidirectional Gated Recurrent Units (GRUs) [11] with a hidden size of 256. We also considered using Long Short-Term Memory (LSTM) layers [22], but, as reported in the ablation study of the appendix, the accuracy obtained is similar while the GRU layers have fewer parameters and are faster. The contribution of these layers is analyzed in the experimental section.
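For reference, one GRU step [11] looks as follows (bias terms omitted for brevity; the model stacks two bidirectional layers of such units, running each sequence in both directions and concatenating the hidden states):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state h~."""
    z = sigmoid(x @ Wz + h @ Uz)            # how much of the state to update
    r = sigmoid(x @ Wr + h @ Ur)            # how much history to expose
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh)
    return (1 - z) * h + z * h_cand
```

Compared to an LSTM cell, the GRU has no separate cell state and one fewer gate, which is where the parameter savings come from.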
138
+
139
+ ### 3.6 Classification head
140
+
141
+ The classification head takes the output features for each node from the recurrent layer and transforms them into the class probabilities. It consists of one linear layer that generates the logits, followed by a Softmax layer that produces the normalized probabilities.
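A minimal sketch of this head (NumPy; `W` and `b` stand for the learned linear-layer parameters):

```python
import numpy as np

def classification_head(node_feats, W, b):
    """Linear layer producing the logits, followed by a softmax per node."""
    logits = node_feats @ W + b
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    return exp / exp.sum(axis=-1, keepdims=True)
```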
142
+
143
+ ## 4 Experiments
144
+
145
+ ### 4.1 Dataset
146
+
147
+ We select one well-known public dataset of purchase receipts, CORD [9], that contains annotations for the segment tagging task, in order to compare our model with other approaches. In addition, we include a larger and more challenging private dataset to better analyze the performance of the model. Due to space limitations, both are further described with examples in Appendix A.2.
148
+
149
+ ### 4.2 Training and Evaluation details
150
+
151
+ For all the datasets, the model is trained from scratch for 30 epochs using a batch of 4 documents on each iteration. The selected optimizer is AdamW [28] with an initial learning rate of 3e-4 and a reduction factor of 0.1 at epochs 20 and 25. For the loss function, we use Cross Entropy Loss for the FUNSD and CORD datasets and Focal Loss for the private one, in order to deal with its high class imbalance. We also test the impact of pretraining the model on the private dataset before training on CORD. In this case, the models are finetuned for 1000 steps with a batch size of 64, an initial learning rate of 1e-4, and a reduction factor of 0.1 at step 900. To reduce overfitting, we use a dropout of 0.1 for the Transformer encoder and before each GAT layer, and a dropout of 0.2 for the GRU layers and before the final linear layer. For both datasets we sort the segments of each document from top to bottom and from left to right in order to have a consistent ordering for the recurrent layers. The maximum character length for the Transformer encoder is 30; longer segments are truncated. Finally, we convert all the characters of the segments into ASCII characters as described in Section 3.3.
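The top-to-bottom, left-to-right ordering can be sketched like this (the bucketing into lines by a fixed `line_height` is our simplification; the paper does not detail the exact tie-breaking):

```python
def reading_order(centers, line_height):
    """Indices of segments sorted top-to-bottom, then left-to-right.
    centers: list of (x, y) segment coordinates, with y growing downward."""
    return sorted(range(len(centers)),
                  key=lambda i: (centers[i][1] // line_height, centers[i][0]))
```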
152
+
153
+ ### 4.3 Metrics
154
+
155
+ We select two well-known metrics for evaluating the accuracy of the model:
156
+
157
+ - F1 score micro: computes the F1 score over all the samples jointly. In this case, all the samples contribute equally to the result, regardless of their category.
158
+
159
+ - F1 score macro: computes the F1 score per class and then averages them to obtain the final score. This is a more robust metric when dealing with unbalanced datasets.
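These two metrics can be computed from per-class counts as follows (single-label case; a self-contained sketch that mirrors the usual micro/macro averaging over the labels present in the data):

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Micro- and macro-averaged F1 for single-label classification."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1      # predicted p where the truth was t
            fn[t] += 1      # missed the true label t
    def f1(t, f_p, f_n):
        return 2 * t / (2 * t + f_p + f_n) if t else 0.0
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in labels) / len(labels)
    return micro, macro
```

Because every misclassification counts once as a false positive and once as a false negative, rare classes barely move the micro score but weigh as much as frequent ones in the macro score.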
160
+
161
+ ### 4.4 Results
162
+
163
+ #### 4.4.1 CORD
164
+
165
+ First, we use the CORD public dataset to compare SeqGraph against other baseline and state-of-the-art models that perform segment tagging. In this case, as the rest of the methods report the results at entity level, we train and test SeqGraph using the annotations at entity level, grouping together the segments that belong to the same entity and using the minimum rotated rectangle as the entity bounding box. The results are reported in Table 1. For all the other methods (except PICK), even the base versions have more than 100 million parameters, while the proposed method hardly reaches 4 million. Despite this huge difference, SeqGraph, with 96.36%, outperforms almost all the base versions and most of the large versions of the other state-of-the-art methods. It stays just 1% below the best result, achieved by LayoutLMv3 [3], while having almost 100 times fewer parameters.
166
+
167
+ Table 1: Comparison with different state-of-the-art models on the tagging task of the CORD dataset at entity level. We also include the number of parameters, whether the model needs pretraining, and the modality of the input data ("T/L/I" denotes "text/layout/image").
168
+
169
+ <table><tr><td>Model</td><td>Parameters</td><td>Pretrained</td><td>Modality</td><td>CORD F1 micro</td></tr><tr><td>BERT<sub>BASE</sub> [20]</td><td>110M</td><td>yes</td><td>T</td><td>89.68</td></tr><tr><td>RoBERTa<sub>BASE</sub> [21]</td><td>125M</td><td>yes</td><td>T</td><td>93.54</td></tr><tr><td>BROS<sub>BASE</sub> [1]</td><td>110M</td><td>yes</td><td>T+L</td><td>95.73</td></tr><tr><td>LiLT<sub>BASE</sub> [29]</td><td>-</td><td>yes</td><td>T+L</td><td>96.07</td></tr><tr><td>TILT<sub>BASE</sub> [18]</td><td>230M</td><td>yes</td><td>T+L+I</td><td>95.11</td></tr><tr><td>LayoutLMv2<sub>BASE</sub> [14]</td><td>200M</td><td>yes</td><td>T+L+I</td><td>94.95</td></tr><tr><td>DocFormer<sub>BASE</sub> [4]</td><td>183M</td><td>yes</td><td>T+L+I</td><td>96.33</td></tr><tr><td>LayoutLMv3<sub>BASE</sub> [3]</td><td>133M</td><td>yes</td><td>T+L+I</td><td>96.56</td></tr><tr><td>BERT<sub>LARGE</sub> [20]</td><td>340M</td><td>yes</td><td>T</td><td>90.25</td></tr><tr><td>RoBERTa<sub>LARGE</sub> [21]</td><td>355M</td><td>yes</td><td>T</td><td>93.8</td></tr><tr><td>BROS<sub>LARGE</sub> [1]</td><td>340M</td><td>yes</td><td>T+L</td><td>97.4</td></tr><tr><td>FormNet [30]</td><td>345M</td><td>yes</td><td>T+L</td><td>97.28</td></tr><tr><td>TILT<sub>LARGE</sub> [18]</td><td>780M</td><td>yes</td><td>T+L+I</td><td>96.33</td></tr><tr><td>LayoutLMv2<sub>LARGE</sub> [14]</td><td>426M</td><td>yes</td><td>T+L+I</td><td>96.01</td></tr><tr><td>DocFormer<sub>LARGE</sub> [4]</td><td>536M</td><td>yes</td><td>T+L+I</td><td>96.99</td></tr><tr><td>GraphDoc [8]</td><td>265M</td><td>yes</td><td>T+L+I</td><td>96.93</td></tr><tr><td>LayoutLMv3<sub>LARGE</sub> [3]</td><td>368M</td><td>yes</td><td>T+L+I</td><td>97.46</td></tr><tr><td>PICK [2]</td><td>68M</td><td>no</td><td>T+L+I</td><td>95.81</td></tr><tr><td>SeqGraph (ours)</td><td>4M</td><td>no</td><td>T+L</td><td>96.36</td></tr><tr><td>SeqGraph pret (ours)</td><td>4M</td><td>yes</td><td>T+L</td><td>97.23</td></tr></table>
170
+
171
+ Note that almost all the other models are pretrained on other huge datasets before being finetuned on the CORD dataset, while SeqGraph and PICK are trained from scratch. Although the pretraining is unsupervised and on other tasks, it has a large impact on the text feature extraction, especially for datasets like CORD with limited training data, where many words that appear in the test set might not appear in the training set but could have been learnt during pretraining. In order to test this impact on our model, we try pretraining it on the private dataset, even though it contains fewer than 10 thousand single-page documents, while models are usually pretrained on millions (for instance, LayoutLMv3 uses 50 million pages). As can be seen in Table 1, this pretraining improves the results of the model, reaching 97.23% and reducing the gap between SeqGraph and LayoutLMv3 to 0.23%.
172
+
173
+ Another important point that should be taken into account is that the rest of the presented methods are purely or mostly based on Transformer architectures operating at segment level, so they need to define a sequence limit and, therefore, suffer from the sequence truncation problem. In addition, this sequence limit must be chosen carefully, as the memory and computational cost of self-attention grow quadratically with it. However, SeqGraph and PICK do not suffer from this issue, so they do not have a sequence limit. In the CORD dataset this is not a problem, as the number of segments per document is low on average, but for other datasets, such as the private receipts dataset, where receipts may contain hundreds of segments, this limitation would impact the accuracy and could cause the loss of relevant information.
174
+
175
+ We also extract the results at segment level for SeqGraph (from-scratch and pretrained versions) and PICK (see Table 2). Again, SeqGraph outperforms PICK, with a larger margin in this case (almost 2%), and with pretraining our model improves by a further 0.47%.
176
+
177
+ #### 4.4.2 Private dataset
178
+
179
+ For the next experiment, we train and evaluate the proposed model on the segment tagging task of the private dataset. The model is trained following the procedure specified in Section 4.2. We compare our model against PICK [2], as it also exclusively performs the tagging task, operates at segment level, and shares several similarities with SeqGraph, such as the character encoder for the text features and the combination of GNN and recurrent layers. The PICK model is trained and evaluated using the official repository and the default configuration. Both models were trained on a machine with one NVIDIA Tesla V100 GPU, 64 GB of RAM, and 1 Intel(R) Xeon(R) Gold 6142 CPU.
180
+
181
+ Table 2: Results on the tagging of the CORD dataset at word level.
182
+
183
+ <table><tr><td>Model</td><td>Parameters</td><td>Modality</td><td>CORD F1 micro</td></tr><tr><td>PICK[2]</td><td>68M</td><td>T+L+I</td><td>92.87</td></tr><tr><td>SeqGraph (ours)</td><td>4M</td><td>T+L</td><td>94.61</td></tr><tr><td>SeqGraph pret (ours)</td><td>4M</td><td>T+L</td><td>95.08</td></tr></table>
184
+
185
+ The micro and macro F1 scores for both methods are presented in Table 3, along with the number of parameters (in millions), the modality of the input data ("T/L/I" denotes "text/layout/image"), and the time taken for the whole training process. Note that the proposed method has 17 times fewer parameters than PICK and that, unlike PICK, it does not use the image as an input source. Despite this, SeqGraph outperforms PICK in both metrics, especially in the F1 macro, where it improves by more than 1%. These results suggest that the image does not provide relevant information beyond that extracted from the text, the layout, and the order of the segments.
186
+
187
+ Table 3: Results on the segment tagging task of the private receipts dataset. We also include the number of parameters, the modality of the input data ( "T/L/I" denotes "text/layout/image" ), and the total training time.
188
+
189
+ <table><tr><td>Model</td><td>Parameters</td><td>Modality</td><td>F1 micro</td><td>F1 macro</td><td>Training time</td></tr><tr><td>PICK[2]</td><td>68M</td><td>T+L+I</td><td>96.99</td><td>93.27</td><td>33h</td></tr><tr><td>SeqGraph (ours)</td><td>4M</td><td>T+L</td><td>97.47</td><td>94.51</td><td>1h20m</td></tr></table>
190
+
191
+ Regarding the training time, for the same number of epochs, PICK was trained in 33 hours (1 hour per epoch) while SeqGraph was trained in 1 hour and 20 minutes (less than 3 minutes per epoch). Among the causes of this overwhelming difference are the heavy image feature extraction done by PICK and the fact that the recurrent layers of PICK process the sequences at character level. An ablation study provides further discussion in Appendix A.3.
192
+
193
+ ## 5 Conclusions and Future Work
194
+
195
+ In this work we have addressed the problem of text segment tagging on unstructured documents. We believe that the existing state-of-the-art models are unnecessarily huge, with an overwhelming number of parameters. Furthermore, most of them are based on Transformer architectures, suffering from the sequence truncation problem and not taking advantage of the sparse nature of the use case. To overcome these limitations, we have proposed SeqGraph, a new model which optimizes the feature extraction stage and mixes GNNs and RNNs to efficiently and effectively solve the segment tagging problem. We have demonstrated its capabilities by testing it on the CORD dataset, where it achieves state-of-the-art results while reducing the number of parameters by a factor of 100 to 200 compared to its competitors. In the benchmark against PICK [2], we have also shown that the image features are not essential for this task and that they do not provide relevant information beyond that already extracted from the OCR text segments.
196
+
197
+ Future work will focus on improving the performance of the model by mitigating its bottlenecks. For instance, injecting positional embeddings into the node features to test whether the sequence information can be extracted by the GAT layers alone, allowing the removal of the computationally heavy recurrent layers. Another research line is extending the capabilities of the model to cover also segment grouping and entity linking tasks, evolving into an end-to-end information extraction model.
198
+
199
+ ## References
200
+
201
+ [1] Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, and Sungrae Park. Bros: A layout-aware pre-trained language model for understanding documents. ArXiv,
202
+
203
+ abs/2108.04539, 2021. 1, 2, 8, 11
204
+
205
+ [2] Wenwen Yu, Ning Lu, Xianbiao Qi, Ping Gong, and Rong Xiao. Pick: Processing key information extraction from documents using improved graph learning-convolutional networks. 2020 25th International Conference on Pattern Recognition (ICPR), pages 4363-4370, 2021. 1, 2,3,4,5,8,9
206
+
207
+ [3] Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. Layoutlmv3: Pre-training for document ai with unified text and image masking. ArXiv, abs/2204.08387, 2022. 1, 2, 7, 8
208
+
209
+ [4] Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R. Manmatha. Docformer: End-to-end transformer for document understanding. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 973-983, 2021. 2, 8
210
+
211
+ [5] Jiuxiang Gu, Jason Kuen, Vlad I. Morariu, Handong Zhao, Nikolaos Barmpalios, R. Jain, Ani Nenkova, and Tong Sun. Unidoc: Unified pretraining framework for document understanding. In NeurIPS, 2021. 1, 2
212
+
213
+ [6] Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. ArXiv, abs/1706.03762, 2017.1,2
214
+
215
+ [7] Manuel Carbonell, Pau Riba, Mauricio Villegas, Alicia Fornés, and Josep Lladós. Named entity recognition and relation extraction with graph neural networks in semi structured documents. 2020 25th International Conference on Pattern Recognition (ICPR), pages 9622-9627, 2021. 1, 2,3,5,6
216
+
217
+ [8] Zhenrong Zhang, Jiefeng Ma, Jun Du, Licheng Wang, and Jian shu Zhang. Multimodal pretraining based on graph attention network for document understanding. ArXiv, abs/2203.13530, 2022.1,2,3,6,8
218
+
219
+ [9] Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. Cord: A consolidated receipt dataset for post-ocr parsing. 2019. 2, 7, 12
220
+
221
+ [10] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio’, and Yoshua Bengio. Graph attention networks. ArXiv, abs/1710.10903, 2018. 2, 5
222
+
223
+ [11] Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. ArXiv, abs/1412.3555, 2014. 2, 4, 7, 12
224
+
225
+ [12] Lukasz Garncarek, Rafal Powalski, Tomasz Stanisławek, Bartosz Topolski, Piotr Halama, Michał P. Turski, and Filip Graliński. Lambert: Layout-aware language modeling for information extraction. In ICDAR, 2021. 2
226
+
227
+ [13] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. Layoutlm: Pre-training of text and layout for document image understanding. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020.
228
+
229
+ [14] Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei A. F. Florêncio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. ArXiv, abs/2012.14740, 2021. 2, 8
230
+
231
+ [15] Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei A. F. Florêncio, Cha Zhang, and Furu Wei. Layoutxlm: Multimodal pre-training for multilingual visually-rich document understanding. ArXiv, abs/2104.08836, 2021. 2
232
+
233
+ [16] Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad I. Morariu, Handong Zhao, R. Jain, Varun Man-junatha, and Hongfu Liu. Selfdoc: Self-supervised document representation learning. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5648-5656, 2021. 2
234
+
235
+ [17] Chenliang Li, Bin Bi, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, and Luo Si. Struc-turallm: Structural pre-training for form understanding. ArXiv, abs/2105.11210, 2021. 2
236
+
237
+ [18] Rafal Powalski, Łukasz Borchmann, Dawid Jurkiewicz, Tomasz Dwojak, Michał Pietruszka, and Gabriela Pałka. Going full-tilt boogie on document understanding with text-image-layout transformer. In ICDAR, 2021. 2, 8
238
+
239
+ [19] Geewook Kim, Teakgyu Hong, Moonbin Yim, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. Donut: Document understanding transformer without ocr. ArXiv, abs/2111.15664, 2021. 2
240
+
241
+ [20] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. ArXiv, abs/1810.04805, 2019. 2, 8
242
+
243
+ [21] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692, 2019. 2, 8
244
+
245
+ [22] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9: 1735-1780, 1997. 2, 3, 7, 12
246
+
247
+ [23] Thomas Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. ArXiv, abs/1609.02907, 2017. 3
248
+
249
+ [24] Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. ArXiv, abs/2003.00982, 2020. 4
250
+
251
+ [25] Liheng Ma, Reihaneh Rabbany, and Adriana Romero-Soriano. Graph attention networks with positional embeddings. ArXiv, abs/2105.04037, 2021. 4
252
+
253
+ [26] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5: 135-146, 2017. 5, 11
254
+
255
+ [27] Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural networks : the official journal of the International Neural Network Society, 107:3-11, 2018. 6
256
+
257
+ [28] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019. 7
258
+
259
+ [29] Jiapeng Wang, Lianwen Jin, and Kai Ding. Lilt: A simple yet effective language-independent layout transformer for structured document understanding. In ACL, 2022. 8
260
+
261
+ [30] Chen-Yu Lee, Chun-Liang Li, Timothy Dozat, Vincent Perot, Guolong Su, Nan Hua, Joshua Ainslie, Renshen Wang, Yasuhisa Fujii, and Tomas Pfister. Formnet: Structural encoding beyond sequential modeling in form document information extraction. In ACL, 2022. 8
262
+
263
+ ## A Appendix
264
+
265
+ ### A.1 Analysis of the text feature extraction methods
266
+
267
+ As described in Section 3.3, we can group the text feature extraction methods into two main categories: the ones that extract the features from the semantic meaning of the text and the ones that extract them from its composition.
268
+
269
+ Feature extraction based on the semantic meaning of the text has some important drawbacks, listed below:
270
+
271
+ - The words that are not in the dictionary will get assigned a useless embedding, so the model will not perform well on unseen data or on noisy data (for instance due to parsing errors in the OCR module).
272
+
273
+ - The size of the dictionary must be huge to include as many words as possible, and so must the size of the dataset used to generate it. This size problem worsens when working with multilingual data.
274
+
275
+ - It is very prone to overfitting, especially for the words that are less frequent.
276
+
277
+ - It has problems dealing with wide ranges of numbers. For instance, when working with prices, the model cannot extract general rules for them and needs to treat each number as an independent word, while it is not feasible to include all possible numbers in the dictionary.
278
+
279
+ Some of these weak points can be partially mitigated by decomposing the text into known character n-grams and extracting the features from them [1, 26]. This variant can improve the performance on noisy and unseen data and reduce the overfitting. Nevertheless, this strategy can further increase the size of the dictionary, and the models remain quite sensitive to noise and unseen data.
280
+
281
+ On the other hand, extracting the features attending to the text composition has the following advantages:
282
+
283
+ - The size of the dictionary is drastically reduced. For instance, when considering only the ASCII characters, the length is only 128.
284
+
285
+ - The models are more robust to unseen or noisy data: even if some characters are missing or different, the model can still find relationships among the rest.
286
+
287
+ - The models tend to analyze the words at a lower level, without attending too much to the semantic meaning and finding more general rules, which reduces overfitting.
288
+
289
+ - The numbers range problem is eliminated, as the model can find general rules to group all the segments of the same type under the same meaning. For instance, it could learn that when a digit is followed by a dot and then by other digits, the segment is a price, without having to analyze all the possible numbers.
290
+
291
+ - These advantages also reduce the amount of data required for pretraining the embedding layer.
292
+
293
+ ### A.2 Datasets
294
+
295
+ #### A.2.1 CORD
296
+
297
+ Consolidated Receipt Dataset [9] is composed of 1000 Indonesian purchase receipts which contain images and box/text annotations for OCR, and multi-level semantic labels for semantic parsing and relation extraction tasks. In the ground truth, each segment is associated with the category field and the group_id field for joining the segments at entity level. It contains 30 different categories. The samples are split into 800 for train, 100 for dev (validation), and 100 for test.
298
+
299
+ #### A.2.2 Private dataset
300
+
301
+ For effectively evaluating the capabilities of the model, we propose a challenging internal dataset composed of 8814 purchase receipt images from 5 countries: Germany, Italy, France, Mexico, and Brazil. Receipts vary widely in height, density, and image quality. They may contain perspective artifacts, 3D rotations, and all kinds of wrinkles. Each receipt has all its text segments annotated. The available annotated information for each text segment is the rotated bounding box, the text, the entity category, and the product ID (in case the segment belongs to a product cluster). There are 21 different entity categories, some at receipt level (purchase_date, purchase_time, total_value, ...) and others at purchase item level (item_description, item_code, item_value, ...).
302
+
303
+ The dataset also contains the receipt region annotation. We have cropped the images, filtering out the segments that fall outside the receipt and shifting the coordinates of the remaining segments to the cropped pixel space. Finally, we split the dataset into training, validation, and test sets using a 70/10/20 ratio. In Figure 5 we present some examples, overlaying the ground-truth labels, where boxes with the same color belong to the same category. Note that this dataset is more challenging than CORD in the number of samples, the languages, and the high class imbalance (especially for 'other', i.e., text not belonging to the targeted classes), and that the number of segments can vary from a few to hundreds from one receipt to another. Furthermore, the layouts may vary highly intra- and inter-retailer, and there is a large number of them (hundreds per country). Finally, the quality of the receipts, in terms of paper and printing defects and image capture, is worse than in CORD, which injects more noise and variability into the input data.
304
+
305
+ ### A.3 Ablation study
306
+
307
+ We analyze some design choices and their impact on the model accuracy and on the number of parameters. All these experiments are performed using the private dataset and the results are gathered in Table 4.
308
+
309
+ First, we study the differences between using GRU [11] and LSTM [22] as the recurrent layers. The experiment shows that the results slightly improve when using GRU layers, while reducing the number of parameters of the model by 0.6 million.
310
+
311
+ Next, we want to analyze the contribution of each source of input information. We start with the sequential information, which is gathered by the recurrent layers. Thus, we remove the recurrent layers, connecting the output of the GNN directly to the final linear layer (SeqGraph w/o RNN in
312
+
313
+ ![01963f08-ed9a-7227-b3c3-0cccdb5b6e2a_12_309_202_1176_928_0.jpg](images/01963f08-ed9a-7227-b3c3-0cccdb5b6e2a_12_309_202_1176_928_0.jpg)
314
+
315
+ Figure 5: Examples from the private dataset of receipts, including ground-truth labels. The color of a bounding box denotes its category. On each image, multiple text segments can belong to the same category.
316
+
317
+ Table 4: Ablation study on the proposed method. SeqGraph is the baseline model; SeqGraph LSTM replaces the GRU layers with LSTM ones; SeqGraph w/o RNN removes the recurrent layers; SeqGraph w/o RNN extended removes the recurrent layers and adds more GAT layers to compensate for the drop in parameters; SeqGraph w/o layout removes the layout features from the node features; SeqGraph w/o RNN & layout removes both the recurrent layers and the layout features; and SeqGraph w/o text removes the text feature extraction module.
318
+
319
+ <table><tr><td>Model</td><td>Parameters</td><td>Modality</td><td>F1 micro</td><td>F1 macro</td></tr><tr><td>SeqGraph</td><td>4M</td><td>T+L+S</td><td>97.47</td><td>94.51</td></tr><tr><td>SeqGraph LSTM</td><td>4.6M</td><td>T+L+S</td><td>97.40</td><td>94.39</td></tr><tr><td>SeqGraph w/o RNN</td><td>2.3M</td><td>T+L</td><td>96.33</td><td>91.56</td></tr><tr><td>SeqGraph w/o RNN extended</td><td>6.5M</td><td>T+L</td><td>96.84</td><td>92.81</td></tr><tr><td>SeqGraph w/o layout</td><td>4M</td><td>T+S</td><td>97.22</td><td>94.18</td></tr><tr><td>SeqGraph w/o RNN & layout</td><td>2.3M</td><td>T</td><td>93.98</td><td>87.99</td></tr><tr><td>SeqGraph w/o text</td><td>2.4M</td><td>L+S</td><td>95.13</td><td>88.22</td></tr></table>
320
+
321
+ Table 4). As can be observed, there is a drop of 1% in F1 score micro and of almost 3% in F1 score macro. However, note that there is also an important drop in the number of parameters, almost 50%. In order to compensate for this drop, the number of GAT layers is increased from 3 to 5, and the number of heads of each layer from 4 to 8. With this variant (SeqGraph w/o RNN extended) the drop is halved, but it remains important for the F1 score macro. Therefore, we can conclude that the sequential information is relevant for this task and cannot be fully replaced by the layout features.
322
+
323
+ The next source of information considered is the layout, embedded in the coordinates of the segment bounding boxes. We remove this information from the node features, keeping only the text features (SeqGraph w/o layout). Surprisingly, the drop in performance is almost null: 0.25 in the micro and 0.33 in the macro metric. Nevertheless, note that the layout features are also employed during the edge sampling step for finding the neighbors of each node, so we believe that this information, embedded in the graph structure together with the sequence information, mitigates the suppression of the features from the nodes.
324
+
325
+ We also try removing both the sequential and the layout information (SeqGraph w/o RNN & layout). In this case, the drop in performance is huge, 6.6% in the macro metric. This demonstrates that, although each of these sources contains some exclusive relevant information, most of it is shared by both.
326
+
327
+ Finally, we test a version of the model where we remove the text information from the node features (SeqGraph w/o text). As expected, the F1 score macro decreases sharply (by more than 6%), demonstrating that the text features are the most important ones, but that they need to be complemented with sequential and/or layout features.
papers/LOG/LOG 2022/LOG 2022 Conference/ZuMgYX1irC/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,284 @@
1
+ § COMBINING GRAPH AND RECURRENT NETWORKS FOR EFFICIENT AND EFFECTIVE SEGMENT TAGGING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Graph Neural Networks have been demonstrated to be highly effective and efficient in learning relationships between nodes, both locally and globally. They are also well suited to document-related tasks due to their flexibility and capacity to adapt to complex layouts. However, information extraction from documents remains a challenge, especially when dealing with unstructured documents. The semantic tagging of the text segments (a.k.a. entity tagging) is one of the essential tasks. In this paper we present SeqGraph, a new model that combines Transformers for text feature extraction with Graph Neural Networks and recurrent layers for segment interaction, for efficient and effective segment tagging. We address some of the limitations of current architectures and Transformer-based solutions. We optimize the model architecture by combining Graph Attention (GAT) layers and Gated Recurrent Units (GRUs), and we provide an ablation study on the design choices to demonstrate the effectiveness of SeqGraph. The proposed model is extremely light (4 million parameters), reducing the number of parameters by 100 to 200 times compared to its competitors, while achieving state-of-the-art results (97.23% F1 score on the CORD dataset).
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Information Extraction (IE) has become a focus task within the Machine Learning community over the last years. There is a growing need to automate the extraction and storage of information from documents, especially from unstructured ones. The rise of Deep Learning has been extremely beneficial, leading to the release of models capable of performing with the same quality as humans [1-5]. Moreover, a myriad of businesses and use cases within industry require the automatic processing and understanding of documents and their contents to convert unstructured data into its semantically structured components. The main aim is to reduce the manual burden in day-to-day operations by implementing efficient and cost-effective automatic solutions.
16
+
17
+ Within IE, the semantic tagging of the text segments (a.k.a. entity tagging) is an essential task that allows the system to understand the different parts of the document and to focus on the most relevant information. Usually, these text segments are detected in a previous stage by an OCR engine at word level. This paper presents an innovative solution for efficient and effective segment tagging on unstructured documents.
18
+
19
+ Existing entity tagging models are huge: they contain an overwhelming number of parameters (hundreds of millions) that could be greatly reduced. Furthermore, most of them are purely or mostly based on Transformer [6] architectures [1, 3-5]. They need to define a sequence limit and, therefore, they suffer from the sequence truncation problem. In addition, this sequence limit must be chosen carefully, as the cost of the model grows rapidly with it. This is due to the fact that Transformers are fully connected architectures, where each segment needs to interact with all the others.
20
+
21
+ These challenges can be solved using more flexible architectures, such as those based on Graph Neural Networks (GNNs) [2, 7, 8]. GNNs have been demonstrated to be highly effective and efficient in learning relationships between nodes, both locally and globally. However, they do not use the sequential order of the nodes as a source of information, which is important for the considered task. To this end, some attempts have been made to compensate for this limitation by combining them with other mechanisms, such as recurrent layers [2]. We believe they are also overparameterized and that the selection and treatment of the features is not optimal. To overcome these limitations, we present SeqGraph, a new model that combines Transformers for text feature extraction with GNNs and recurrent layers for segment interaction, for efficient and effective segment tagging. The main contributions of this work are:
22
+
23
+ * An extremely lightweight entity tagging model (4 million parameters) capable of achieving state-of-the-art results (97.23% F1 on the entity tagging task of the CORD dataset [9]).
24
+
25
+ * Optimal selection and extraction of the node features from the text and bounding boxes of the segments.
26
+
27
+ * Optimized model architecture combining Graph Attention layers (GAT) [10] and Gated Recurrent Units (GRUs) [11].
28
+
29
+ * Ablation study on the impact of the different sources of information on the model accuracy and parameters.
30
+
31
+ § 2 RELATED WORK
32
+
33
+ In recent years, the increase in the demand for information extraction systems has been reflected in the number of publications and, consequently, also in those on entity tagging [1-5, 7, 8, 12-18]. Although a few works attempt to solve the problem from scratch [19], almost all the released models rely on text segments extracted using an OCR engine. Most of them are based purely or partially on Transformer architectures [6]. However, there is an emerging trend in the application of GNNs to entity tagging.
34
+
35
+ § 2.1 TRANSFORMER-BASED MODELS
36
+
37
+ Since the most basic versions, such as BERT [20] or RoBERTa [21], which only use the text and the sequential order of the segments to extract the input features, many novelties have been introduced to enhance Transformer performance on this task. New models include different sources of information, such as the image or the layout, but also new ways of extracting the features and combining them. For instance, in several works the authors inject the layout information of each segment into its features, some of them at word level [1, 12-15], and others dividing the document into regions that share the same embeddings [3, 5, 16, 17]. In some cases, the layout information has also been used to enhance the self-attention mechanisms, usually as a bias term [3, 4, 12, 18, 22]. The image features are usually integrated with the textual ones by concatenating or adding them [5, 14, 15, 18], but some models combine them in more sophisticated ways. For instance, in [3], [4], and [16] the authors use a multi-modal Transformer architecture that incorporates multimodal self-attention to enforce cross-modality feature correlation.
38
+
39
+ § 2.2 GNN-BASED MODELS
40
+
41
+ Within IE tasks, GNNs try to overcome the limitations of the Transformer-based models. Transformers are fully connected architectures, where each segment must interact with every other one, and the computational cost increases rapidly with the number of segments. In addition, they need a predefined maximum sequence length, leading to truncation problems for long sequences. GNNs avoid all these problems with their flexible structure, where each segment only needs to interact with a reduced number of neighbors. However, the setup is more complex, with some critical design choices, such as the graph structure, the edge sampling strategy, or the message propagation approach. Some promising approaches have been recently released. For instance, in [7] the authors propose a GNN-based model for solving entity tagging (ET), building (EB), and linking (EL). First, they generate a k-Nearest Neighbor (kNN) graph at text segment level for solving the EB task as an edge prediction task, using features extracted from the bounding boxes and from the text and passing them through several GAT layers. Then, the entity features are computed by aggregating the output features from the GNN and processing them with a linear layer. These features are used to solve the ET task, using a Multi-Layer Perceptron (MLP) classifier, and the EL task, evaluating all the possible entity pairs with another MLP classifier. In [8], the model also incorporates visual features from the image. The text features are extracted using a BERT encoder and the visual ones using a SWIN Transformer with a Feature Pyramid Network (FPN). Both vectors are enriched with layout information by adding a layout embedding. Then, before each GAT layer of the GNN architecture, the visual features are fused into the input features using a fusion layer. In these layers, the relative layout is also included in the self-attention mechanism.
On the other hand, in [2] the image and text features are fused before feeding them into the GNN. The layout information is not embedded into the node features but used for computing the weights for the message aggregation within the Graph Convolution Network (GCN) layers [23]. The text features are extracted using a Transformer encoder at character level and the image features using a Convolutional Neural Network (CNN) architecture. Then, the character features of each segment are averaged and passed through the GNN layers. The output features are then aggregated with the previous features at character level and fed to two bidirectional LSTM layers [22] in order to extract the sequential information. Finally, they use the Viterbi algorithm to generate the final predictions.
42
+
43
+ All the above works have some drawbacks. In [7], the quality of the extracted text features is poor, and the model does not leverage the sequential information, which is important for the considered task. Consequently, the results obtained are very limited. In the case of [2] and [8], the models have a huge number of parameters, partly due to the heavy image backbones that they use. In this work we aim to combine the benefits of all of them to build a light, fast, and effective entity tagging model.
44
+
45
+ § 3 METHODOLOGY
46
+
47
+ § 3.1 PROBLEM DEFINITION
48
+
49
+ Given a list of text segments (usually at word level) provided by an OCR engine that extracted them from an image of a document, the goal is to tag each segment with its corresponding semantic category from a closed list. Each segment consists of the text string and the rotated bounding box. For instance, for a purchase receipt, each segment could be tagged as store address, phone, date, time, item description, item value, etc. Figure 1 shows an illustration.
50
+
51
+ [graphics]
52
+
53
+ Figure 1: Illustration of document entity tagging. In the right image, the color of the bounding box denotes its category. Segments with the bounding boxes of the same color belong to the same category.
54
+
55
+ Some of the challenges present in the raw collected images of documents include highly unstructured documents, such as purchase receipts, with multiple and complex layouts and non-natural language (abbreviations, brands, product names, punctuation, etc.). Additional challenges come from the noise caused by the OCR engine (errors in the text, inaccurate bounding boxes, missing or duplicated detections, etc.) and from bad image and/or physical conditions (perspective, rotation, wrinkles, ripples, etc.).
56
+
57
+ § 3.2 OVERVIEW
58
+
59
+ Given the above background, we propose a model based on GNNs due to these reasons:
60
+
61
+ * Graph-based representations are flexible and capable of adapting to complex layouts.
62
+
63
+ * The task can be defined as node classification, where the semantic category of a node is highly dependent on the features of its neighbor nodes and on the global context. The literature about GNNs has demonstrated they are highly effective and efficient in learning relationships between nodes locally and globally.
64
+
65
+ * The number of nodes in the document varies from a few to hundreds of them, which can be unfeasible to process for fully connected networks or Transformer-based models. However, for this use case, the number of interactions can be limited based on the bounding box coordinates, accelerating the inference and reducing the number of required resources. GNNs are suitable for this type of highly sparse data structure.
66
+
67
+ We must note one of the weaknesses of the proposed GNN-based approach: GNNs do not consider the position of the segments in the input sequence. However, it is important to keep an order to read a document beyond text layout prediction. Several approaches have tried to overcome this limitation by injecting the sequential information into the node features [24], using it within the attention mechanism of the GAT layers [25], or combining the GNN with recurrent layers [2]. The first two require defining how to extract this information and combine it with the rest of the features, which can be tedious and lead to a more unstable model. In addition, they require increasing the number of GNN layers or their number of parameters to learn from this new source of information. On the contrary, with recurrent layers this information is learnt directly from the order of the segments with a reduced number of parameters and without altering the GNN architecture.
68
+
69
+ Following this reasoning, we developed a hybrid model based on GNN and RNN for text segment tagging, as shown in Figure 2. Starting from the list of text segments coming from the OCR, the model first extracts and preprocesses the text and region features from each segment. In parallel, it generates the segment nodes and performs the edge sampling between them. Next, the node features are passed through the graph attention layers and get enriched by their neighbors. Two bidirectional Gated Recurrent Unit (GRU) layers [11] process the features to add information about the order of the segments. Finally, we add a linear layer and a Softmax layer to obtain the output probabilities for each text segment.
70
+
71
+ [graphics]
72
+
73
+ Figure 2: High Level Architecture of the proposed model.
74
+
75
+ § 3.3 FEATURE EXTRACTION
76
+
77
+ We use three sources of information: the text string, the bounding box and the position in the sequence.
78
+
79
+ Diving through the literature, we can find different approaches for extracting features from the text of a segment, but we can group them into two categories: the ones that extract the features attending to its semantic meaning and the ones that extract the features attending to its composition. The first one assigns a feature vector to each text string (usually at word level) using an embedding layer and a predefined dictionary [7, 26]. The embedding layer can be pretrained on another dataset or it can be trained directly from scratch, using the training set for generating the dictionary [7]. The latter method, extracting the features attending to the text composition, means inspecting its characters and their position within the text and finding relevant relationships between them [2]. We include a deeper dive with pros and cons of each of the two approaches in the Appendix A.1.
80
+
81
+ Analyzing both approaches and the challenges of the segment tagging task for documents, we adopt the second approach, text feature extraction based on its composition, similar to how it is done in [2]. We decide to consider only ASCII characters, setting the length of the dictionary to 128. We convert all the Unicode characters using the standard Unidecode Python package. Its function unidecode() takes Unicode data and tries to represent it in ASCII characters using transliteration tables. The characters that cannot be converted are removed. The size of the embedding layer is 256. The Transformer has 3 layers with 4 heads and an internal dimension of 512.
82
+
83
+ Regarding the rotated bounding box, we select the following features:
84
+
85
+ * Left center coordinates: middle point between the top-left and the bottom-left vertices of the rotated bounding box.
86
+
87
+ * Right center coordinates: middle point between the top-right and the bottom-right vertices of the rotated bounding box.
88
+
89
+ * Bounding box rotation: angle of the bounding box in radians, between $-\pi/2$ and $\pi/2$.
90
+
91
+ Note that we discard the height of the bounding box as we observed that the model tended to overfit using this feature. We believe that the height of the segment is not a crucial feature for this task, as it might vary across segments that share the same category, and it does not contain reliable information about the distance between different text lines.
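A minimal sketch of these layout features, assuming the rotated box is given as four (x, y) vertices in (top-left, top-right, bottom-right, bottom-left) order; the exact vertex convention and the angle computation are our assumptions:

```python
import math

def bbox_features(box):
    """Return (left-center x, left-center y, right-center x, right-center y,
    rotation) for a rotated bounding box; rotation is wrapped to [-pi/2, pi/2].
    The box height is deliberately NOT returned (see the note above)."""
    (tlx, tly), (trx, try_), (brx, bry), (blx, bly) = box
    left_cx, left_cy = (tlx + blx) / 2, (tly + bly) / 2     # top-left/bottom-left midpoint
    right_cx, right_cy = (trx + brx) / 2, (try_ + bry) / 2  # top-right/bottom-right midpoint
    angle = math.atan2(right_cy - left_cy, right_cx - left_cx)
    if angle > math.pi / 2:     # wrap into [-pi/2, pi/2]
        angle -= math.pi
    elif angle < -math.pi / 2:
        angle += math.pi
    return (left_cx, left_cy, right_cx, right_cy, angle)

# An axis-aligned 10x2 box: rotation is 0.
print(bbox_features([(0, 0), (10, 0), (10, 2), (0, 2)]))  # → (0.0, 1.0, 10.0, 1.0, 0.0)
```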
92
+
93
+ Finally, the position in the sequence is already implicit in the order of the segments and used by the recurrent layers. It could be also injected into the node features by using for instance a positional embedding, but that would require selecting a maximum position and truncating the sequences that exceed this length, which would yield a drop of accuracy. In addition, the positional embeddings do not work well with very long sequences, and we want to consider lengths of hundreds. For these reasons, this information is not injected into the node features.
94
+
95
+ After extracting the textual and positional features, they are fused by projecting the positional features with a linear layer to the dimension of the textual ones (256) and adding both. The whole feature extraction process is described in Figure 3.
96
+
97
+ [graphics]
98
+
99
+ Figure 3: Feature extraction process applied to each text segment.
100
+
101
+ § 3.4 GNN
102
+
103
+ The GNN architecture takes advantage of the fact that all the information needed for computing the message passing weights (positional and textual information) is already embedded in the node features. We select Graph Attention (GAT) layers [10] as the ones that best suit our needs: in the GAT layers, the weights for the message passing are computed directly inside the layer using the input node features, in a similar way as in the original attention layers (see Equation 1).
104
+
105
+ $$
106
+ {z}_{i}^{\left( l\right) } = {W}^{\left( l\right) }{h}_{i}^{\left( l\right) }
107
+ $$
108
+
109
+ $$
110
+ {e}_{ij}^{\left( l\right) } = \operatorname{LeakyReLU}\left( {{a}^{{\left( l\right) }^{T}}\left( {{z}_{i}^{\left( l\right) }\parallel {z}_{j}^{\left( l\right) }}\right) }\right)
111
+ $$
112
+
113
+ $$
114
+ {\alpha }_{ij}^{\left( l\right) } = \frac{\exp \left( {e}_{ij}^{\left( l\right) }\right) }{\mathop{\sum }\limits_{{k \in \mathcal{N}\left( i\right) }}\exp \left( {e}_{ik}^{\left( l\right) }\right) } \tag{1}
115
+ $$
116
+
117
+ $$
118
+ {h}_{i}^{\left( l+1\right) } = \sigma \left( {\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{\alpha }_{ij}^{\left( l\right) }{z}_{j}^{\left( l\right) }}\right)
119
+ $$
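Equation 1 can be sketched for a single attention head over a dense adjacency matrix (a NumPy illustration, not the actual implementation; production GNN libraries such as DGL or PyTorch Geometric use sparse, multi-head message passing; the SiLU output activation follows Section 3.4):

```python
import numpy as np

def silu(x):
    """SiLU activation, x * sigmoid(x) (Section 3.4)."""
    return x / (1.0 + np.exp(-x))

def gat_layer(h, adj, W, a, slope=0.2):
    """One single-head GAT layer (Equation 1).
    h: (N, F) node features; adj: (N, N) adjacency with self-loops;
    W: (F, F') projection; a: (2*F',) attention vector."""
    z = h @ W                                   # z_i = W h_i
    f = z.shape[1]
    src = z @ a[:f]                             # a^T contribution of z_i
    dst = z @ a[f:]                             # a^T contribution of z_j
    e = src[:, None] + dst[None, :]             # a^T (z_i || z_j)
    e = np.where(e > 0, e, slope * e)           # LeakyReLU
    e = np.where(adj > 0, e, -np.inf)           # attend only to neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)   # softmax over N(i)
    return silu(alpha @ z)                      # h_i^{(l+1)} = sigma(sum alpha_ij z_j)
```

The self-loops in `adj` guarantee that every row of the attention matrix has at least one finite entry, which is exactly the 0-in-degree issue discussed below.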
120
+
121
+ They have been widely applied in document understanding tasks [7, 8]. To avoid 0-in-degree errors (disconnected nodes) while using the GAT layers, we add a self-loop for each node, i.e., an edge that connects the node with itself.
122
+
123
+ The proposed architecture in Figure 4 is composed of 3 GAT layers. All the layers are followed by SiLU activations [27] except for the last one. In our research, this activation worked better than ReLU and other variants. We also add residual connections in all the layers to accelerate the convergence. Inspired by [8], we introduce a global document node. We use one global node per graph level, and we connect it bidirectionally to the rest of the level nodes. Its feature embedding is initially computed by averaging all the level node embeddings. It has two purposes: Firstly, it provides some context information to the nodes, as it gathers information from the whole graph. Secondly, it acts as a regularization term for the GAT layer weights, as it is not a real neighbor node. These global nodes are only used during the message passing but discarded once the GNN stage is finished.
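The global document node can be sketched as follows (a hypothetical edge-list/feature-list representation for illustration; the paper does not specify the data structures):

```python
def add_global_node(feats, edges):
    """Append a virtual global node whose initial embedding is the average of
    all node embeddings, and connect it bidirectionally to every node."""
    n = len(feats)
    global_feat = [sum(col) / n for col in zip(*feats)]
    new_edges = edges + [(i, n) for i in range(n)] + [(n, i) for i in range(n)]
    return feats + [global_feat], new_edges

feats, edges = add_global_node([[0.0, 2.0], [2.0, 4.0]], [(0, 1)])
print(feats[2])  # → [1.0, 3.0]
```

After the GNN stage, node `n` (the global one) is simply dropped before the recurrent layers.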
124
+
125
+ [graphics]
126
+
127
+ Figure 4: Proposed GNN architecture.
128
+
129
+ For the edge sampling we use a custom approach. We are dealing with unstructured documents with an unknown variability in layouts, and we cannot assume any constraint on the distance between segments. We define a sampling function that aims at connecting each segment with the rest of the segments that are on the same line or on adjacent ones: an edge from segment A to segment B is created if the vertical distance between their centers (C) is less than the height (H) of segment A multiplied by a constant (K) (see Equation 2). In our experiments we set this constant to two.
130
+
131
+ $$
132
+ {\text{ edge }}_{A - B} = \left| {{C}_{A}^{y} - {C}_{B}^{y}}\right| < {H}_{A} * K \tag{2}
133
+ $$
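Equation 2 translates into a simple sampling function. The (y-center, height) box representation below is an assumption for illustration (the height comes from the box geometry, even though it is discarded as a node feature), and the self-loops of Section 3.4 are appended at the end:

```python
def sample_edges(boxes, k=2.0):
    """Connect segment A to segment B when the vertical distance between their
    centers is below K times the height of A (Equation 2; K=2 in the
    experiments). Each box is (y_center, height); edges are directed."""
    edges = []
    for a, (cy_a, h_a) in enumerate(boxes):
        for b, (cy_b, _) in enumerate(boxes):
            if a != b and abs(cy_a - cy_b) < h_a * k:
                edges.append((a, b))
    edges += [(i, i) for i in range(len(boxes))]  # self-loops (Section 3.4)
    return edges

# Segments on the same or adjacent lines get connected; distant ones do not.
print(sample_edges([(0.0, 10.0), (5.0, 10.0), (100.0, 10.0)]))
# → [(0, 1), (1, 0), (0, 0), (1, 1), (2, 2)]
```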
134
+
135
+ § 3.5 RECURRENT LAYERS
136
+
137
+ The recurrent layers gather the information about the sequence order that the GNN layers are missing and inject it into the node features. More specifically, we use two bidirectional Gated Recurrent Units (GRUs) [11] with a hidden size of 256. We also considered using Long Short-Term Memory (LSTM) layers [22], but as reported in the ablation study of the appendix, the accuracy obtained is similar while the GRU layers have fewer parameters and are faster. The contribution of these layers is analyzed in the experimental section.
138
+
139
+ § 3.6 CLASSIFICATION HEAD
140
+
141
+ The classification head takes the output features for each node from the recurrent layer and transforms them into the class probabilities. It consists of one linear layer that generates the logits, followed by a Softmax layer that produces the normalized probabilities.
142
+
143
+ § 4 EXPERIMENTS
144
+
145
+ § 4.1 DATASET
146
+
147
+ We select one well-known public dataset of purchase receipts, CORD [9], that contains annotations for the segment tagging task in order to compare our model with other approaches. In addition, we include a larger and challenging private dataset to better analyze the performance of the model. Due to space limitation, they are further described with examples in Appendix A.2.
148
+
149
+ § 4.2 TRAINING AND EVALUATION DETAILS
150
+
151
+ For all the datasets, the model is trained from scratch for 30 epochs using a batch of 4 documents on each iteration. The selected optimizer is AdamW [28] with an initial learning rate of 3e-4 and a reduction factor of 0.1 at epochs 20 and 25. For the loss function, we use Cross Entropy Loss for the FUNSD and CORD datasets and Focal Loss for the private one in order to deal with the high class imbalance. We also test the impact of pretraining the model on the private dataset before training on CORD. In this case, the models are finetuned for 1000 steps with a batch size of 64, an initial learning rate of 1e-4, and a reduction factor of 0.1 at step 900. To reduce the overfitting, we use a dropout of 0.1 for the Transformer encoder and before each GAT layer, and a dropout of 0.2 for the GRU layers and before the final linear layer. For both datasets we sort the segments of each document from top to bottom and from left to right in order to have a consistent ordering for the recurrent layers. The maximum character length for the Transformer encoder is 30; longer segments are truncated. Finally, we convert all the characters of the segments into ASCII characters as described in Section 3.3.
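The step schedule for the from-scratch training can be written explicitly (a sketch under the assumption that each reduction applies from its milestone epoch onward):

```python
def learning_rate(epoch, base_lr=3e-4, milestones=(20, 25), factor=0.1):
    """Learning rate at a given epoch: the base rate is multiplied by
    `factor` at each milestone epoch (20 and 25 in our setup)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= factor
    return lr

print(learning_rate(0), learning_rate(20), learning_rate(25))
```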
152
+
153
+ § 4.3 METRICS
154
+
155
+ We select two well-known metrics for evaluating the accuracy of the model:
156
+
157
+ * F1 score micro: computes the F1 score using all the samples. In this case, all the samples contribute equally to the result, without considering their category.
158
+
159
+ * F1 score macro: computes the F1 score per class and then averages them to obtain the final score. This is a more robust metric when dealing with unbalanced datasets.
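Both metrics can be computed from per-segment predictions as follows (a self-contained sketch of the standard definitions; note that for single-label classification the micro F1 reduces to plain accuracy):

```python
from collections import defaultdict

def f1_scores(y_true, y_pred):
    """Micro- and macro-averaged F1 over per-segment predictions."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but p was wrong
            fn[t] += 1  # true class t was missed
    labels = set(y_true) | set(y_pred)

    def f1(tp_, fp_, fn_):
        return 2 * tp_ / (2 * tp_ + fp_ + fn_) if tp_ else 0.0

    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in labels) / len(labels)
    return micro, macro
```

The macro variant averages the per-class scores, so rare categories weigh as much as frequent ones, which is why it is the more robust metric for the imbalanced private dataset.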
160
+
161
+ § 4.4 RESULTS
162
+
163
+ § 4.4.1 CORD
164
+
165
+ First, we use the CORD public dataset to compare SeqGraph against other baseline and state-of-the-art models that perform segment tagging. In this case, as the rest of the methods report the results at entity level, we train and test SeqGraph using the annotations at entity level, grouping together the segments that belong to the same entity and using the minimum rotated rectangle as the entity bounding box. The results are reported in Table 1. For all the rest of the methods (except PICK), even their base versions have more than 100 million parameters, while the proposed method barely reaches 4 million. Despite this huge difference, SeqGraph outperforms almost all the base versions and most of the large versions of the other state-of-the-art methods with 96.36%. It stays just 1% below the best result, achieved by LayoutLMv3 [3], while having almost 100 times fewer parameters.
166
+
167
+ Table 1: Comparison with different state-of-the-art models on the tagging task of the CORD dataset at entity level. We also include the number of parameters, whether the model needs pretraining, and the modality of the input data ("T/L/I" denotes "text/layout/image").
168
+
169
+ <table><tr><td>Model</td><td>Parameters</td><td>Pretrained</td><td>Modality</td><td>CORD F1 micro</td></tr><tr><td>BERT_BASE [20]</td><td>110M</td><td>yes</td><td>T</td><td>89.68</td></tr><tr><td>RoBERTa_BASE [21]</td><td>125M</td><td>yes</td><td>T</td><td>93.54</td></tr><tr><td>BROS_BASE [1]</td><td>110M</td><td>yes</td><td>T+L</td><td>95.73</td></tr><tr><td>LiLT_BASE [29]</td><td>-</td><td>yes</td><td>T+L</td><td>96.07</td></tr><tr><td>TILT_BASE [18]</td><td>230M</td><td>yes</td><td>T+L+I</td><td>95.11</td></tr><tr><td>LayoutLMv2_BASE [14]</td><td>200M</td><td>yes</td><td>T+L+I</td><td>94.95</td></tr><tr><td>DocFormer_BASE [4]</td><td>183M</td><td>yes</td><td>T+L+I</td><td>96.33</td></tr><tr><td>LayoutLMv3_BASE [3]</td><td>133M</td><td>yes</td><td>T+L+I</td><td>96.56</td></tr><tr><td>BERT_LARGE [20]</td><td>340M</td><td>yes</td><td>T</td><td>90.25</td></tr><tr><td>RoBERTa_LARGE [21]</td><td>355M</td><td>yes</td><td>T</td><td>93.8</td></tr><tr><td>BROS_LARGE [1]</td><td>340M</td><td>yes</td><td>T+L</td><td>97.4</td></tr><tr><td>FormNet [30]</td><td>345M</td><td>yes</td><td>T+L</td><td>97.28</td></tr><tr><td>TILT_LARGE [18]</td><td>780M</td><td>yes</td><td>T+L+I</td><td>96.33</td></tr><tr><td>LayoutLMv2_LARGE [14]</td><td>426M</td><td>yes</td><td>T+L+I</td><td>96.01</td></tr><tr><td>DocFormer_LARGE [4]</td><td>536M</td><td>yes</td><td>T+L+I</td><td>96.99</td></tr><tr><td>GraphDoc [8]</td><td>265M</td><td>yes</td><td>T+L+I</td><td>96.93</td></tr><tr><td>LayoutLMv3_LARGE [3]</td><td>368M</td><td>yes</td><td>T+L+I</td><td>97.46</td></tr><tr><td>PICK [2]</td><td>68M</td><td>no</td><td>T+L+I</td><td>95.81</td></tr><tr><td>SeqGraph (ours)</td><td>4M</td><td>no</td><td>T+L</td><td>96.36</td></tr><tr><td>SeqGraph pret (ours)</td><td>4M</td><td>yes</td><td>T+L</td><td>97.23</td></tr></table>
234
+
235
+ Note that almost all the other models are pretrained on other huge datasets before being finetuned on the CORD dataset, while SeqGraph and PICK are trained from scratch. Although the pretraining is unsupervised and on other tasks, it has a large impact on the text feature extraction, especially for datasets like CORD with limited training information, where many words that appear in the test set might not appear in the training set but could have been learnt during the pretraining. In order to test this impact on our model, we try pretraining it on the private dataset, even though it has fewer than 10 thousand single-page documents, while the models are usually pretrained using millions (for instance, LayoutLMv3 uses 50 million pages). As can be seen in Table 1, this pretraining improves the results of the model, reaching 97.23% and reducing the gap between SeqGraph and LayoutLMv3 to 0.23%.
236
+
237
+ Another important point that should be taken into account is that the rest of the presented methods are purely or mostly based on Transformer architectures operating at segment level, so they need to define a sequence limit and, therefore, they suffer from the sequence truncation problem. In addition, this sequence limit must be chosen carefully, as the cost of the model grows rapidly with it. However, SeqGraph and PICK do not suffer from this issue, so they do not have a sequence limit. In the CORD dataset this is not a problem, as the number of segments per document is low on average, but for other datasets, such as the private receipts dataset, where the receipts may contain hundreds of segments, this limitation would impact the accuracy and could cause the loss of relevant information.
238
+
239
+ We also extract the results at segment level for SeqGraph (from-scratch and pretrained versions) and PICK (see Table 2). Again, SeqGraph outperforms PICK, with a larger difference in this case (almost 2%), and with the pretraining our model improves by 0.47%.
240
+
241
+ § 4.4.2 PRIVATE DATASET
242
+
243
+ For the next experiment, we train and evaluate the proposed model on the segment tagging task of the private dataset. The model is trained following the procedure specified in Section 4.2. We compare our model against PICK [2], as it also performs exclusively the tagging task, it operates at segment level, and it has several similarities with SeqGraph, such as the character encoder for the text features or the combination of GNN and recurrent layers. The PICK model is trained and evaluated using the official repository and the default configuration. Both models were trained on a machine with one NVIDIA Tesla V100 GPU, 64 GB of RAM, and 1 Intel(R) Xeon(R) Gold 6142 CPU.
244
+
245
+ Table 2: Results on the tagging of the CORD dataset at word level.
246
+
247
+ <table><tr><td>Model</td><td>Parameters</td><td>Modality</td><td>CORD F1 micro</td></tr><tr><td>PICK [2]</td><td>68M</td><td>T+L+I</td><td>92.87</td></tr><tr><td>SeqGraph (ours)</td><td>4M</td><td>T+L</td><td>94.61</td></tr><tr><td>SeqGraph pret (ours)</td><td>4M</td><td>T+L</td><td>95.08</td></tr></table>
261
+
262
+ The micro and macro F1 scores for both methods are presented in Table 3, along with the number of parameters (in millions), the modality of the input data ("T/L/I" denotes "text/layout/image"), and the time taken for the whole training process. Note that the proposed method has 17 times fewer parameters than PICK and that, unlike PICK, it does not use the image as an input source. Despite this, it can be observed that SeqGraph outperforms PICK in both metrics, especially in the F1 macro, where it improves by more than 1%. These results demonstrate that the image does not provide relevant information beyond that extracted from the text, the layout, and the order of the segments.
263
+
264
+ Table 3: Results on the segment tagging task of the private receipts dataset. We also include the number of parameters, the modality of the input data ("T/L/I" denotes "text/layout/image"), and the total training time.
265
+
266
+ | Model | Parameters | Modality | F1 micro | F1 macro | Training time |
+ | --- | --- | --- | --- | --- | --- |
+ | PICK [2] | 68M | T+L+I | 96.99 | 93.27 | 33h |
+ | SeqGraph (ours) | 4M | T+L | 97.47 | 94.51 | 1h20m |
277
+
278
+ Regarding the training time, for the same number of epochs, PICK was trained in 33 hours (1 hour per epoch) while SeqGraph was trained in 1 hour and 20 minutes (less than 3 minutes per epoch). Causes of this large difference include the heavy image feature extraction performed by PICK and the fact that the recurrent layers of PICK process the sequences at character level. An ablation study in Appendix A.3 provides further discussion.
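The gap between micro and macro F1 reported above comes down to how per-class errors are aggregated. A minimal pure-Python sketch with hypothetical tag labels (not the paper's data) illustrates the difference:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Per-class F1 plus micro (global counts) and macro (unweighted mean) F1."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1

    def f1(t, f_p, f_n):
        denom = 2 * t + f_p + f_n
        return 2 * t / denom if denom else 0.0

    per_class = {c: f1(tp[c], fp[c], fn[c]) for c in classes}
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    macro = sum(per_class.values()) / len(classes)
    return micro, macro

# A rare class with errors drags macro F1 down more than micro F1.
y_true = ["total"] * 8 + ["tax", "tax"]
y_pred = ["total"] * 8 + ["total", "tax"]
micro, macro = f1_scores(y_true, y_pred)
```

Micro F1 pools counts over all predictions, so frequent classes dominate; macro F1 averages per-class scores, so errors on a rare class weigh heavily, which is why the macro gap between the two models is the more telling one.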
279
+
280
+ § 5 CONCLUSIONS AND FUTURE WORK
281
+
282
+ In this work we have addressed the problem of text segment tagging on unstructured documents. We believe that the existing state-of-the-art models are unnecessarily large, with an overwhelming number of parameters. Furthermore, most of them are based on Transformer architectures, suffering from the sequence truncation problem and not taking advantage of the sparse nature of the use case. To overcome these limitations, we have proposed SeqGraph, a new model which optimizes the feature extraction stage and combines GNNs and RNNs to solve the segment tagging problem efficiently and effectively. We have demonstrated its capabilities on the CORD dataset, where it achieves state-of-the-art results while reducing the number of parameters by a factor of 100 to 200 compared to its competitors. In the benchmark against PICK [2], we have also shown that image features are not essential for this task and do not provide relevant information beyond what is already extracted from the OCR text segments.
283
+
284
+ Future work will focus on improving the performance of the model by mitigating its bottlenecks. For instance, positional embeddings could be injected into the node features to test whether the GAT layers can capture the sequence information, which would allow removing the computationally heavy recurrent layers. Another research line is extending the capabilities of the model to also cover segment grouping and entity linking tasks, evolving into an end-to-end information extraction model.
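The positional-embedding idea above could look as follows; this is a speculative illustration using Transformer-style sinusoids, where the embedding dimension and the feature values are hypothetical:

```python
import math

def positional_embedding(pos, dim):
    """Transformer-style sinusoidal embedding for a segment's reading-order index."""
    pe = []
    for i in range(0, dim, 2):
        angle = pos / (10000 ** (i / dim))
        pe.append(math.sin(angle))
        pe.append(math.cos(angle))
    return pe[:dim]

def inject_position(node_features, dim=8):
    """Concatenate each node's positional embedding to its feature vector."""
    return [feats + positional_embedding(i, dim)
            for i, feats in enumerate(node_features)]

# Toy segment feature vectors, in reading order
nodes = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
enriched = inject_position(nodes, dim=4)
```

The hope, as stated above, is that position-aware node features would let the graph layers recover the sequence information that the recurrent layers currently provide.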
papers/LOG/LOG 2022/LOG 2022 Conference/_nlbNbawXDi/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,249 @@
 
1
+ # Towards Training GNNs using Explanation Directed Message Passing
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ With the increasing use of Graph Neural Networks (GNNs) in critical real-world applications, several post hoc explanation methods have been proposed to understand their predictions. However, there has been no work in generating explanations on the fly during model training and utilizing them to improve the expressive power of the underlying GNN models. In this work, we introduce a novel explanation-directed neural message passing framework for GNNs, EXPASS (EXplainable message PASSing), which aggregates only embeddings from nodes and edges identified as important by a GNN explanation method. EXPASS can be used with any existing GNN architecture and subgraph-optimizing explainer to learn accurate graph embeddings. We theoretically show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy and that the embedding difference between the vanilla message passing and EXPASS framework can be upper bounded by the difference of their respective model weights. Our empirical results show that graph embeddings learned using EXPASS improve the predictive performance and alleviate the oversmoothing problems of GNNs, opening up new frontiers in graph machine learning to develop explanation-based training frameworks.
12
+
13
+ ## 1 Introduction
14
+
15
+ Graph Neural Networks (GNNs) are increasingly used as powerful tools for representing graph-structured data, such as social, information, chemical, and biological networks [1, 2]. With the deployment of GNN models in critical applications (e.g., financial systems and crime forecasting [3, 4]), it becomes essential to ensure that the relevant stakeholders understand and trust their decisions. To this end, several approaches [5-13] have been proposed in recent literature to generate post-hoc explanations for predictions of GNN models.
16
+
17
+ In contrast to other modalities like images and text, generating instance-level explanations for graphs is non-trivial. In particular, it is more challenging since individual node embeddings in GNNs aggregate information using the entire graph structure, and, therefore, explanations can be on different levels (i.e., node attributes, nodes, and edges). While several categories of GNN explanation methods have been proposed (gradient-based [5, 10, 14], perturbation-based [8, 9, 11, 13, 15], and surrogate-based [7, 12]), their utility is limited to generating post hoc node- and edge-level explanations for a given pre-trained GNN model. Thus, the capability of GNN explainers to improve the predictive performance of a GNN model is poorly understood, as there is very little work on systematically analyzing the effect of state-of-the-art GNN explanation methods on model performance [16].
18
+
19
+ To address this, recent works have explored the joint optimization of machine learning models and explanation methods to improve the reliability of explanations [17, 18]. Zhou et al. [18] proposed DropEdge as a technique to drop random edges (similar to generating random edge explanations) during training to reduce overfitting in GNNs. More recently, Spinelli et al. [17] used meta-learning frameworks to generate GNN explanations and showed an improvement in the performance of specific GNN explanation methods. While these works make an initial attempt at jointly optimizing explainers and predictive models, they are neither generalizable nor exhaustive: they fail to show improvement in the downstream GNN performance [17] and degree of explainability [18] across diverse GNN architectures and explainers. Further, there is little to no work on theoretically analyzing the effect of GNN explanations on the neural message passing framework in GNNs or on important GNN properties like oversmoothing [19].
20
+
21
+ Present work. In this work, we introduce a novel explanation-directed neural message passing framework, EXPASS, which can be used with any GNN model and subgraph-optimizing explainer to learn accurate graph representations. In particular, EXPASS utilizes GNN explanations to steer the underlying GNN model to learn graph embeddings using only important nodes and edges. EXPASS aims to define local neighborhoods for neural message passing, i.e., identify the most important edges and nodes, using explanation weights, in the $k$ -hop local neighborhood of every node in the graph. Formally, we augment existing message passing architectures to allow information flow along important edges while blocking information along irrelevant edges.
22
+
23
+ We present an extensive theoretical and empirical analysis to show the effectiveness of EXPASS on the predictive, explainability, and oversmoothing performance of GNNs. Our theoretical results show that the embedding difference between vanilla message passing and EXPASS frameworks is upper-bounded by the difference between their model weights. Further, we show that embeddings learned using EXPASS relieve the oversmoothing problem in GNNs as they reduce information propagation by slowing the layer-wise loss of Dirichlet energy (Section 4.2). For our empirical analysis, we integrate EXPASS into state-of-the-art GNN models and evaluate their predictive, oversmoothing, and explainability performance on real-world graph datasets (Section 5). Our results show that, on average, across five GNN models, EXPASS improves the degree of explainability of the underlying GNNs by 39.68%. Our ablation studies show that for an increasing number of GNN layers, EXPASS achieves 34.4% better oversmoothing performance than its vanilla counterpart. Finally, our results demonstrate the effectiveness of using explanations during training, paving the way for new frontiers in GraphXAI research to develop explanation-based training algorithms.
24
+
25
+ ## 2 Related works
26
+
27
+ Graph Neural Networks. Graph Neural Networks (GNNs) are complex non-linear functions that transform input graph structures into a lower dimensional embedding space. The main goal of GNNs is to learn embeddings that reflect the underlying input graph structure, i.e., neighboring nodes in the graph are mapped to neighboring points in the embedding space. Prior works have proposed several GNN models using spectral and non-spectral approaches. Spectral models [20-24] leverage the Fourier transform and the graph Laplacian to define convolution operations for GNN models. In contrast, non-spectral approaches [25-29] define the convolution operation by leveraging the local neighborhood of individual nodes in the graph. Most modern non-spectral models are message passing frameworks [30, 31], where nodes update their embeddings by aggregating information from $k$ -hop neighboring nodes.
28
+
29
+ Post hoc Explanations. With the increasing development of complex high-performing GNN models [25-29], it becomes critical to understand their decisions. Prior works have focused on developing several post hoc explanation methods to explain the decisions of GNN models [5, 7, 9, 11-13, 32]. More specifically, these explanation methods can be broadly categorized into i) gradient-based methods [5] that leverage the gradients of the GNN model to generate explanations; ii) perturbation-based methods [9, 11, 13] that generate explanations by calculating the change in GNN predictions upon perturbations of the input graph structure (nodes, edges, or subgraphs); and iii) surrogate-based methods [7, 12] that fit a simple interpretable model to approximate the predictive behavior of the given GNN model. Finally, recent works have introduced frameworks to theoretically and empirically analyze the behavior of state-of-the-art GNN explanation methods with respect to several desirable properties [16, 33].
30
+
31
+ ## 3 Preliminaries
32
+
33
+ Notations. Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ denote an undirected graph comprising a set of nodes $\mathcal{V}$ and a set of edges $\mathcal{E}$ . Let $\mathbf{X} = \left\{ {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{N}}\right\}$ denote the set of node feature vectors for all nodes in $\mathcal{V}$ , where ${\mathbf{x}}_{v} \in {\mathbb{R}}^{d}$ captures the attribute values of a node $v$ and $N = \left| \mathcal{V}\right|$ denotes the number of nodes in the graph. Let $\mathbf{A} \in {\mathbb{R}}^{N \times N}$ be the graph adjacency matrix, where element ${\mathbf{A}}_{uv} = 1$ if there exists an edge $e \in \mathcal{E}$ between nodes $u$ and $v$ and ${\mathbf{A}}_{uv} = 0$ otherwise. We use ${\mathcal{N}}_{u}$ to denote the set of immediate neighbors of node $u$ , i.e., ${\mathcal{N}}_{u} = \left\{ {v \in \mathcal{V} \mid {A}_{uv} = 1}\right\}$ . Finally, the function $\deg : \mathcal{V} \mapsto {\mathbb{Z}}_{ > 0}$ is defined as $\deg \left( v\right) = \left| {\mathcal{N}}_{v}\right|$ and outputs the degree of a node $v \in \mathcal{V}$ .
34
+
35
+ Graph Neural Networks (GNNs). Formally, GNNs can be formulated as message passing networks [30] specified by three key operators MSG, AGG, and UPD. These operators are recursively applied on a given graph $\mathcal{G}$ for an $L$ -layer GNN model, defining how neural messages are shared, aggregated, and updated between nodes to learn the final node representations in the ${L}^{\text{th }}$ layer of the GNN. Commonly, a message between a pair of nodes $\left( u, v\right)$ in layer $l$ is characterized as a function of their hidden representations ${\mathbf{h}}_{u}^{\left( l - 1\right) }$ and ${\mathbf{h}}_{v}^{\left( l - 1\right) }$ from the previous layer: ${\mathbf{m}}_{uv}^{\left( l\right) } = \operatorname{MSG}\left( {{\mathbf{h}}_{u}^{\left( l - 1\right) },{\mathbf{h}}_{v}^{\left( l - 1\right) }}\right)$ . The AGG operator retrieves the messages from the neighborhood of node $u$ and aggregates them as: ${\mathbf{m}}_{u}^{\left( l\right) } = \operatorname{AGG}\left( {{\mathbf{m}}_{uv}^{\left( l\right) } \mid v \in {\mathcal{N}}_{u}}\right)$ . Next, the UPD operator takes the aggregated message ${\mathbf{m}}_{u}^{\left( l\right) }$ at layer $l$ and combines it with ${\mathbf{h}}_{u}^{\left( l - 1\right) }$ to produce node $u$ ’s representation for layer $l$ as ${\mathbf{h}}_{u}^{\left( l\right) } = \operatorname{UPD}\left( {{\mathbf{m}}_{u}^{\left( l\right) },{\mathbf{h}}_{u}^{\left( l - 1\right) }}\right)$ . Lastly, the final node representation for node $u$ is given as ${\mathbf{z}}_{u} = {\mathbf{h}}_{u}^{\left( L\right) }$ .
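The MSG/AGG/UPD decomposition can be made concrete with a dependency-free sketch; the toy operators below (elementwise-mean message, sum aggregation, tanh update) are illustrative choices, not any specific architecture:

```python
import math

def msg(h_u, h_v):
    # MSG: toy message, elementwise mean of the two endpoint embeddings
    return [(a + b) / 2 for a, b in zip(h_u, h_v)]

def agg(messages):
    # AGG: permutation-invariant sum over the neighborhood
    return [sum(vals) for vals in zip(*messages)]

def upd(m_u, h_u):
    # UPD: combine aggregated message with the previous embedding
    return [math.tanh(a + b) for a, b in zip(m_u, h_u)]

def gnn_layer(h, neighbors):
    """One message-passing layer over embeddings h = {node: vector}."""
    out = {}
    for u, h_u in h.items():
        messages = [msg(h_u, h[v]) for v in neighbors[u]]
        m_u = agg(messages) if messages else [0.0] * len(h_u)
        out[u] = upd(m_u, h_u)
    return out

# Toy 3-node path graph with 2-dimensional embeddings
h0 = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
h1 = gnn_layer(h0, neighbors)
```

Stacking $L$ such layers gives each node a receptive field of its $L$-hop neighborhood, matching the recursive application described above.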
36
+
37
+ Graph Explanations. In contrast to other modalities like images and text, an explanation method for graphs can generate multi-level explanations. For instance, in a graph classification task, the explanations for a given graph prediction can be with respect to node attributes ${\mathbf{M}}_{\mathrm{x}} \in {\mathbb{R}}^{d}$ , nodes ${\mathbf{M}}_{n} \in {\mathbb{R}}^{N}$ , or edges ${\mathbf{M}}_{e} \in {\mathbb{R}}^{N \times N}$ . Note that these explanation masks are continuous but can be discretized using specific thresholding strategies [33].
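One common thresholding strategy, keeping the top-K fraction of a continuous mask, can be sketched as follows (the scores are hypothetical):

```python
def topk_mask(scores, k_frac=0.4):
    """Discretize a continuous explanation mask by keeping the top-K fraction.
    Ties at the threshold are all kept."""
    k = max(1, int(k_frac * len(scores)))
    threshold = sorted(scores, reverse=True)[k - 1]
    return [1 if s >= threshold else 0 for s in scores]

# Continuous importance scores for 5 edges; keep the top 40%
mask = topk_mask([0.9, 0.1, 0.8, 0.3, 0.2], k_frac=0.4)
```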
38
+
39
+ Oversmoothing. Cai et al. [34] and Zhou et al. [35] defined bounds for analyzing oversmoothing in a GNN using Dirichlet energy. For a graph $\mathcal{G}$ with adjacency matrix $\mathbf{A}$ and degree matrix $\mathbf{D}$ , we define $\widetilde{\mathbf{A}} = \mathbf{A} + {\mathbf{I}}_{N}$ and $\widetilde{\mathbf{D}} = \mathbf{D} + {\mathbf{I}}_{N}$ as the adjacency and degree matrices, respectively, of the graph $\mathcal{G}$ with self-loops. We also define the augmented normalized Laplacian of $\mathcal{G}$ as $\widetilde{\Delta } = {\mathbf{I}}_{N} - {\widetilde{\mathbf{D}}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}{\widetilde{\mathbf{D}}}^{-\frac{1}{2}}$ , and $\mathbf{P} = {\mathbf{I}}_{N} - \widetilde{\Delta }$ .
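The augmented matrices and Laplacian above can be computed directly; a small pure-Python sketch for an unweighted toy graph:

```python
def augmented_laplacian(A):
    """Compute the augmented normalized Laplacian D_tilde = I - D~^{-1/2} A~ D~^{-1/2}
    (with A~ = A + I, D~ = D + I) and P = I - D_tilde, using plain lists."""
    n = len(A)
    A_t = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    d = [sum(A[i]) + 1 for i in range(n)]           # diagonal of D~
    inv_sqrt = [1.0 / d[i] ** 0.5 for i in range(n)]
    P = [[inv_sqrt[i] * A_t[i][j] * inv_sqrt[j] for j in range(n)] for i in range(n)]
    delta = [[(1 if i == j else 0) - P[i][j] for j in range(n)] for i in range(n)]
    return delta, P

# 3-node path graph: 0 - 1 - 2
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
delta, P = augmented_laplacian(A)
```

Both matrices are symmetric by construction; $\mathbf{P}$ is the propagation matrix that the oversmoothing analysis refers to.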
40
+
41
+ ## 4 Our Framework: EXPASS
42
+
43
+ Here, we describe EXPASS, our proposed explainable message passing framework that aims to learn accurate and interpretable graph embeddings. In particular, EXPASS incorporates explanations into the message passing framework of GNN models by only aggregating embeddings from key nodes and edges as identified using an explanation method.
44
+
45
+ Problem formulation (Explanation Directed Message Passing). Given a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}, X}\right)$ , EXPASS aims to generate a $d$ -dimensional embedding ${\mathbf{z}}_{u} \in {\mathbb{R}}^{d}$ for each node $u \in \mathcal{V}$ using an explanation-directed message passing framework that filters out the noise from unimportant edges and improves the expressive power of GNNs.
46
+
47
+ ### 4.1 Explanation Directed Message Passing
48
+
49
+ The central idea of EXPASS is to propose a novel method for improving the neural message passing scheme of GNN models by utilizing explanations during model training and aggregating important neural messages along edges in graph neighborhoods. Next, we describe the existing message passing scheme in GNNs and our explainable counterpart.
50
+
51
+ Message Passing. As described in Section 3, each GNN layer can be described using the MSG, AGG, and UPD operators. For each node $u \in \mathcal{V}$ , the ${\left( l + 1\right) }^{th}$ layer embedding ${\mathbf{h}}_{u}^{\left( l + 1\right) }$ is computed using a GNN operating on the node's neighboring attributes. Formally, the GNN layer can be formulated as:
52
+
53
+ $$
+ {\mathbf{h}}_{u}^{\left( l + 1\right) } = \phi \left( {{\mathbf{h}}_{u}^{\left( l\right) },{\bigoplus }_{v \in {\mathcal{N}}_{u}}\psi \left( {{\mathbf{h}}_{u}^{\left( l\right) },{\mathbf{h}}_{v}^{\left( l\right) }}\right) }\right)
+ $$
54
+
55
+ where ${\mathbf{h}}_{u}^{\left( l + 1\right) }$ represents the updated embedding of node $u$ , $\psi$ is the MSG operator, $\oplus$ is the AGG operator (e.g., summation), $\phi$ is an UPD function (e.g., any non-linear activation function), and ${\mathbf{h}}_{u}^{\left( l\right) }$ represents the embedding of node $u$ from the previous layer. We obtain an embedding ${\mathbf{z}}_{u}$ for node $u$ by stacking $L$ GNN layers. Finally, the node embeddings $\mathbf{Z} \in {\mathbb{R}}^{N \times d}$ are then passed to a READOUT function to obtain an embedding for the graph.
56
+
57
+ ![01963ef9-b541-7dc4-b2e7-081b3259e7b1_3_351_205_1086_720_0.jpg](images/01963ef9-b541-7dc4-b2e7-081b3259e7b1_3_351_205_1086_720_0.jpg)
58
+
59
+ Figure 1: Overview of EXPASS : a) EXPASS investigates the problem of injecting explanations into the message passing framework to increase the expressive power and performance of GNNs. b) Shown is the general message passing scheme where, for node $u$ , messages are aggregated from nodes ${v}_{i} \in {\mathcal{N}}_{u}$ in the 1-hop neighborhood of $u$ . c) EXPASS injects explanations into the message passing framework by masking out messages from neighboring nodes ${v}_{i} \in {\mathcal{N}}_{u}$ with explanation scores ${s}_{u{v}_{i}} \approx 0$ when $u$ is correctly classified.
60
+
61
+ EXPASS. Here, we describe our proposed explainable message passing scheme that incorporates explanations into the message passing step of individual GNN layers on the fly during the training process. Given an explanation method, which generates an importance score ${s}_{uv} \in {\mathbf{M}}_{u}^{e}$ for every edge ${e}_{uv} \in \mathcal{E}$ , we can weight the edge contribution in the neighborhood ${\mathcal{N}}_{u}$ of node $u$ as:
64
+
65
+ $$
66
+ {\mathbf{h}}_{u}^{\prime \left( l + 1\right) } = \phi \left( {{\mathbf{h}}_{u}^{\left( l\right) },{\bigoplus }_{v \in {\mathcal{N}}_{u}}{s}_{uv}\,\psi \left( {{\mathbf{h}}_{u}^{\left( l\right) },{\mathbf{h}}_{v}^{\left( l\right) }}\right) }\right)
67
+ $$
68
+
69
+ Note that EXPASS is agnostic to explanation types and can also incorporate explanations at the node-attribute and node level. For instance, the importance score for an individual node can be computed by averaging the outgoing scores ${s}_{uv}$ for all $v \in {\mathcal{N}}_{u}$ . Subsequently, we can replace the ${s}_{uv}$ score with the average score ${s}_{u}$ to weight edges in the EXPASS layers, and for node attributes, we can multiply the node attribute explanation ${\mathbf{M}}_{u}^{a}$ with the original node attribute vector.
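The explanation-weighted aggregation of the EXPASS layer can be sketched as follows, with toy MSG/AGG/UPD operators and hypothetical edge scores (a minimal sketch, not the authors' implementation):

```python
import math

def expass_layer(h, neighbors, scores):
    """One EXPASS-style layer: messages are scaled by the edge explanation
    score s_uv before aggregation, so s_uv = 0 blocks the message entirely."""
    out = {}
    for u, h_u in h.items():
        agg = [0.0] * len(h_u)
        for v in neighbors[u]:
            s = scores.get((u, v), 0.0)
            msg = [(a + b) / 2 for a, b in zip(h_u, h[v])]   # toy MSG
            agg = [m + s * x for m, x in zip(agg, msg)]      # weighted AGG
        out[u] = [math.tanh(a + b) for a, b in zip(agg, h_u)]  # toy UPD
    return out

h0 = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [4.0, 4.0]}
nbrs = {0: [1, 2], 1: [0], 2: [0]}
# The explainer deems edge (0, 2) irrelevant: its message is masked out.
scores = {(0, 1): 1.0, (0, 2): 0.0, (1, 0): 1.0, (2, 0): 1.0}
h1 = expass_layer(h0, nbrs, scores)
```

With $s_{02} = 0$, node 0 ends up with exactly the embedding it would get if node 2 were removed from its neighborhood, which is the masking behavior described above.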
70
+
71
+ To enable explainable message passing and only retain the important embeddings for node $u$ , EXPASS removes the knowledge of irrelevant nodes and edges from the local neighborhood ${\mathcal{N}}_{u}$ of node $u$ using its explanations. For instance, if node $v$ is considered important to node $u$ , EXPASS transforms the aggregated messages of node $u$ using the importance score ${s}_{uv}$ . Note that since the explanations of node $u$ include important nodes and edges in the $L$ -hop neighborhood of node $u$ , even though node $u$ is only locally modified, the change will spread through all the nodes in every GNN layer. Furthermore, to avoid spurious correlations, we ensure that explanations are only generated for correctly classified nodes and graphs. Explanation weights infuse information from higher-order neighborhoods into each layer of the GNN model, specifically from neighbors up to $L$ hops away, because the explanation weights within each layer are computed using the full $L$ -layer GNN model. To illustrate this, we next show the weight computations for a GNN explanation method.
72
+
73
+ Without loss of generality, let us consider GNNExplainer as our explanation method, whose mask for the selected graph is formulated as ${\mathcal{G}}_{\text{mask }} = \left( {{\mathbf{X}}^{\prime },{\mathbf{A}}^{\prime }}\right) = \left( {\mathbf{X} \odot \sigma \left( {\mathbf{M}}^{\mathrm{x}}\right) ,\mathbf{A} \odot \sigma \left( {\mathbf{M}}^{\mathrm{e}}\right) }\right)$ , where $W = \left\lbrack {{\mathbf{M}}^{\mathrm{x}},{\mathbf{M}}^{\mathrm{e}}}\right\rbrack$ are the explainer’s parameters, $\sigma$ is the sigmoid function, and $\odot$ denotes element-wise multiplication. Here, ${s}_{uv}$ represents the element in row $v$ and column $u$ of ${\mathbf{M}}^{\mathrm{e}}$ . Gradient descent-based optimization is used to find the optimal values for the masks by minimizing the objective ${L}_{e} = - \mathop{\sum }\limits_{{c = 1}}^{C}\mathbb{1}\left\lbrack {y = c}\right\rbrack \log {f}_{\theta }\left( {Y = y \mid {\mathcal{G}}_{\text{mask }}}\right)$ , where ${f}_{\theta }$ is the $L$ -layer GNN model and $C$ is the total number of classes. This shows that an $L$ -hop neighborhood is used to compute ${s}_{uv}$ .
74
+
75
+ ### 4.2 Theoretical Analysis
76
+
77
+ Here, we provide a detailed theoretical analysis of our proposed EXPASS framework. In particular, we (i) provide a theoretical upper bound on the embedding difference obtained from a vanilla message passing and EXPASS framework and (ii) show that graph embeddings learned using EXPASS relieves the oversmoothing problem in GNNs by reducing information propagation.
78
+
79
+ Theorem 1 (Differences between EXPASS and Vanilla Message Passing). Given a non-linear activation function $\sigma$ that is Lipschitz continuous, the difference between the node embeddings of a vanilla message passing and an EXPASS framework can be bounded by the difference in their individual weights, i.e.,
80
+
81
+ $$
82
{\begin{Vmatrix}{\mathbf{h}}_{u}^{\left( l\right) } - {\mathbf{h}}_{u}^{\prime \left( l\right) }\end{Vmatrix}}_{2} \leq {\begin{Vmatrix}{\mathbf{W}}_{a}^{\left( l\right) } - {\mathbf{W}}_{a}^{\prime \left( l\right) }\end{Vmatrix}}_{2}{\begin{Vmatrix}{\mathbf{h}}_{u}^{\left( l - 1\right) }\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{W}}_{n}^{\left( l\right) } - {\mathbf{W}}_{n}^{\prime \left( l\right) }\end{Vmatrix}}_{2}\mathop{\sum }\limits_{{v \in {\mathcal{N}}_{u},\,{s}_{v} = 1}}{\begin{Vmatrix}{\mathbf{h}}_{v}^{\left( l - 1\right) }\end{Vmatrix}}_{2}, \tag{1}
83
+ $$
84
+
85
+ where ${\mathbf{W}}_{a}^{\left( l\right) }$ and ${\mathbf{W}}_{a}^{\prime \left( l\right) }$ are the weights for node $u$ in layer $l$ of the vanilla message passing and EXPASS frameworks, and ${\mathbf{W}}_{n}^{\left( l\right) }$ and ${\mathbf{W}}_{n}^{\prime \left( l\right) }$ are their respective weight matrices for the neighbors of node $u$ at layer $l$ .
86
+
87
+ Proof Sketch. In Theorem 1, we prove that the ${\ell }_{2}$ -norm of the differences between the embeddings of vanilla message passing and EXPASS framework at layer $l$ is upper bounded by the difference between their weights and the embeddings of node $u$ and its subgraph. See Appendix A for more details.
88
+
89
+ Definition 1 (Dirichlet Energy for a Node Embedding Matrix [35]). Given a node embedding matrix ${\mathbf{H}}^{\left( l\right) } = {\left\lbrack {\mathbf{h}}_{1}^{\left( l\right) },\ldots ,{\mathbf{h}}_{n}^{\left( l\right) }\right\rbrack }^{T}$ learned from the GNN model at the ${l}^{\text{th }}$ layer, the Dirichlet Energy $E\left( {\mathbf{H}}^{\left( l\right) }\right)$ is defined as:
90
+
91
+ $$
92
+ E\left( {\mathbf{H}}^{\left( l\right) }\right) = \operatorname{tr}\left( {{\mathbf{H}}^{{\left( l\right) }^{T}}\widetilde{\Delta }{\mathbf{H}}^{\left( l\right) }}\right) = \frac{1}{2}\mathop{\sum }\limits_{{i, j \in \mathcal{V}}}{a}_{ij}{\begin{Vmatrix}\frac{{\mathbf{H}}_{i}^{\left( l\right) }}{\sqrt{1 + {\deg }_{i}}} - \frac{{\mathbf{H}}_{j}^{\left( l\right) }}{\sqrt{1 + {\deg }_{j}}}\end{Vmatrix}}_{2}^{2} \tag{2}
93
+ $$
94
+
95
+ where ${a}_{ij}$ are elements in the adjacency matrix $\widetilde{\mathbf{A}}$ and ${\deg }_{i},{\deg }_{j}$ is the degree of node $i$ and $j$ , respectively.
96
+
97
+ Cai et al. [34] extensively show that higher Dirichlet energies correspond to lower oversmoothing. Furthermore, they show that the removal of edges or, similarly, the reduction of edge weights on graphs helps alleviate oversmoothing.
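Definition 1 can be evaluated directly on toy embeddings; the sketch below (hypothetical values) shows that identical, oversmoothed embeddings drive the Dirichlet energy to zero:

```python
def dirichlet_energy(H, A):
    """Dirichlet energy of Eq. (2): degree-normalized squared embedding
    differences summed over edges of the self-loop-augmented graph."""
    n = len(A)
    deg = [sum(A[i]) for i in range(n)]
    energy = 0.0
    for i in range(n):
        for j in range(n):
            a_ij = A[i][j] + (1 if i == j else 0)   # element of A~ = A + I
            if a_ij == 0:
                continue
            diff = sum((hi / (1 + deg[i]) ** 0.5 - hj / (1 + deg[j]) ** 0.5) ** 2
                       for hi, hj in zip(H[i], H[j]))
            energy += 0.5 * a_ij * diff
    return energy

A = [[0, 1], [1, 0]]                     # single edge between two nodes
H_diverse = [[1.0, 0.0], [0.0, 1.0]]     # distinct embeddings
H_smooth  = [[0.5, 0.5], [0.5, 0.5]]     # oversmoothed: identical embeddings
e_div = dirichlet_energy(H_diverse, A)
e_smooth = dirichlet_energy(H_smooth, A)
```

Self-loop terms contribute zero because a node's normalized embedding is compared with itself, so only cross-edges drive the energy.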
98
+
99
+ Proposition (EXPASS relieves Oversmoothing). EXPASS alleviates oversmoothing by slowing the layer-wise loss of Dirichlet energy.
100
+
101
+ The complete proof is provided in Appendix A.
102
+
103
+ ## 5 Experiments
104
+
105
+ Next, we present experimental results for our EXPASS framework. More specifically, we address the following questions: Q1) Does EXPASS enable GNNs to learn more accurate embeddings and improve their degree of explainability? Q2) How does EXPASS affect the oversmoothing and predictive performance of GNNs with an increasing number of layers? Q3) Does EXPASS depend on the quality of explanations for improving the predictive and oversmoothing performance of GNNs?
106
+
107
+ ### 5.1 Datasets and Experimental setup
108
+
109
+ We first describe the datasets used to study the utility of our proposed EXPASS framework and then outline the experimental setup.
110
+
111
+ Datasets. We use real-world molecular chemistry datasets to evaluate the effectiveness of EXPASS w.r.t. the performance of the underlying GNN model and to understand the trade-off between explainability and accuracy on a graph classification task. We consider four benchmark datasets: Mutag [36], Alkane-Carbonyl [37], DD [38], and Proteins [39]. See Appendix B.1 for a detailed overview of the datasets.
112
+
113
+ GNN Architectures and Explainers. To investigate the flexibility of EXPASS, we incorporate it into five different GNN models: GCN [40], GraphConv [41], LEConv [42], GraphSAGE [28], and GIN [27]. We use GNNExplainer [13] as our baseline GNN explanation method to generate edge-level explanations for most of our experiments. In addition, we use Integrated Gradients [43], a node-level explanation method, to demonstrate EXPASS's sensitivity to the choice of explainer.
114
+
115
+ Implementation details. We consider DropEdge [44] as our baseline method for comparing the oversmoothing performance of EXPASS, as DropEdge randomly removes edges from the input graph at each training epoch, acting as a message passing reducer. Across all experiments, we keep the top-K ($K = 40\%$) node features/edges when generating explanations for all explanation methods. All other hyperparameters of the explanation and baseline methods were set following the authors' guidelines. For all our experiments (unless mentioned otherwise), we use the baseline architectures with three GNN layers followed by ReLU layers and set the hidden dimensionality to 32. Finally, we use a single linear layer to transform the graph embeddings into their respective classes. See Appendix B.2 for more details.
116
+
117
+ Performance metrics for GNN Explainers. To measure the reliability of GNN explanation methods, we use the graph explanation faithfulness metric [16]: $\operatorname{GEF}\left( {{\widehat{y}}_{u},{\widehat{y}}_{{u}^{\prime }}}\right) = 1 - {e}^{-\mathrm{{KL}}\left( {{\widehat{y}}_{u}\parallel {\widehat{y}}_{{u}^{\prime }}}\right) }$ , where ${\widehat{y}}_{u}$ is the probability vector predicted using the whole subgraph and ${\widehat{y}}_{{u}^{\prime }}$ is the probability vector predicted using the masked subgraph, which we generate by keeping only the top-K features identified by an explanation. The Kullback-Leibler (KL) divergence (denoted by the "$\parallel$" operator) quantifies the distance between the two probability distributions. Note that GEF measures the unfaithfulness of an explanation, so higher values indicate a higher degree of unfaithfulness.
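The GEF metric reduces to a few lines; the predicted class distributions below are hypothetical:

```python
import math

def gef(p_full, p_masked, eps=1e-12):
    """Graph Explanation Faithfulness: GEF = 1 - exp(-KL(p_full || p_masked)).
    0 means the masked subgraph reproduces the original prediction exactly."""
    kl = sum(p * math.log((p + eps) / (q + eps)) for p, q in zip(p_full, p_masked))
    return 1.0 - math.exp(-kl)

# Hypothetical predicted class distributions for one graph
p_full  = [0.9, 0.1]
p_same  = [0.9, 0.1]   # faithful explanation: prediction unchanged
p_drift = [0.6, 0.4]   # unfaithful: masking changes the prediction
```

Since KL divergence is nonnegative, GEF lies in $[0, 1)$, with 0 for a perfectly faithful explanation.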
118
+
119
+ Performance metrics for Oversmoothing. Zhou et al. [18] introduced the Group Distance Ratio (GDR) metric to quantify oversmoothing in GNNs. It measures the ratio between the average pairwise representation distance of graphs belonging to different (inter) groups and that of graphs belonging to the same (intra) group. Formally, one would prefer to reduce the intra-group distance between class representations and increase the inter-group distance to relieve the oversmoothing issue. Hence, lower GDR values denote higher oversmoothing in GNNs.
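A minimal sketch of the GDR computation on hypothetical embeddings, using Euclidean pairwise distances (details in [18] may differ):

```python
def group_distance_ratio(embeddings, labels):
    """Group Distance Ratio: mean pairwise inter-group distance divided by
    mean pairwise intra-group distance (higher = less oversmoothing)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    inter, intra = [], []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            d = dist(embeddings[i], embeddings[j])
            (intra if labels[i] == labels[j] else inter).append(d)
    return (sum(inter) / len(inter)) / (sum(intra) / len(intra))

labels = [0, 0, 1, 1]
# Two tight, well-separated classes give a high GDR ...
emb_sep = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
r_sep = group_distance_ratio(emb_sep, labels)
# ... while near-identical embeddings across classes give a low one.
emb_smooth = [[1.0, 1.0], [1.1, 1.0], [1.2, 1.0], [1.3, 1.0]]
r_smooth = group_distance_ratio(emb_smooth, labels)
```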
120
+
121
+ Burn-in period. We define the burn-in period as a number $n$ of epochs at the start of training in which no explanations are used; it is necessary to avoid feeding spurious explanations to the model. The length of the burn-in period, i.e., the number of epochs, was treated as a hyperparameter and tuned during the model fine-tuning phase. At the end of the burn-in period, a predefined percentage of correctly predicted graphs per batch is randomly sampled and their explanations are used in model training. This percentage was also treated as a hyperparameter and was set to 0.4 for all our experiments.
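The burn-in logic can be sketched as follows; the function and parameter names are hypothetical, not from the authors' code:

```python
import random

def select_for_explanation(correct_ids, frac=0.4, epoch=0, burn_in=10, seed=0):
    """After the burn-in period, randomly sample a fraction of the correctly
    predicted graphs in the batch whose explanations will be used."""
    if epoch < burn_in:
        return []                      # no explanations during burn-in
    rng = random.Random(seed)
    k = int(frac * len(correct_ids))
    return rng.sample(correct_ids, k)

# During burn-in nothing is selected; afterwards 40% of the batch is.
picked = select_for_explanation(list(range(10)), epoch=12, burn_in=10)
```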
122
+
123
+ ### 5.2 Results
124
+
125
+ Q1) EXPASS improves the predictive performance and explainability of GNNs. To measure the predictive performance and degree of explainability of GNNs trained using EXPASS, we compute their average predictive performance (using AUROC and F1-score) and fidelity (using Graph Explanation Faithfulness) using different GNN models and datasets. Across four datasets and five GNN architectures, we find that EXPASS-augmented GNNs learn graph embeddings that are more accurate (higher AUROC and F1-score) and result in more faithful explanations (lower Graph Explanation Faithfulness score) than their vanilla counterparts. On average, EXPASS improves the AUROC and F1-score by 1.51% and 1.05%, respectively. In particular, we observe that EXPASS improves the predictive behavior of high-performing models like GIN (+2.06% in AUROC and +2.50% in F1-score) but shows little to no improvement in the case of LeConv, which utilizes a node-scoring mechanism through the similarity between a node and its neighbors' embeddings. Finally, we find that EXPASS-augmented GNNs significantly improve the explainability of a GNN and achieve a 39.68% better faithfulness score as compared to vanilla GNNs (Table 1).
126
+
127
+ Table 1: Results of EXPASS for five GNNs and four graph datasets. Shown is average performance across three independent runs. Arrows $\left( { \uparrow , \downarrow }\right)$ indicate the direction of better performance. EXPASS improves the predictive power (AUROC and F1-score) and degree of explainability (Graph Explanation Faithfulness) of original GNNs across multiple datasets (shaded area). Values corresponding to best performance are bolded.
128
+
129
+ <table><tr><td>Dataset</td><td>Method</td><td>AUROC (↑)</td><td>F1-score (↑)</td><td>GEF (↓)</td></tr><tr><td rowspan="10">ALKANE-Carbonyl</td><td>GCN</td><td>0.97 ± 0.01</td><td>0.95 ± 0.01</td><td>0.33 ± 0.02</td></tr><tr><td>EXPASS-GCN</td><td>0.98 ± 0.00</td><td>0.96 ± 0.01</td><td>0.23 ± 0.02</td></tr><tr><td>GraphConv</td><td>0.97 ± 0.01</td><td>0.94 ± 0.00</td><td>0.38 ± 0.05</td></tr><tr><td>EXPASS-GraphConv</td><td>0.98 ± 0.00</td><td><b>0.97 ± 0.00</b></td><td>0.22 ± 0.03</td></tr><tr><td>LeConv</td><td>0.98 ± 0.01</td><td>0.96 ± 0.00</td><td>0.37 ± 0.03</td></tr><tr><td>EXPASS-LeConv</td><td>0.98 ± 0.00</td><td>0.96 ± 0.01</td><td>0.24 ± 0.03</td></tr><tr><td>GraphSAGE</td><td>0.98 ± 0.00</td><td>0.96 ± 0.00</td><td>0.40 ± 0.12</td></tr><tr><td>EXPASS-GraphSAGE</td><td><b>0.99 ± 0.00</b></td><td>0.97 ± 0.01</td><td>0.18 ± 0.06</td></tr><tr><td>GIN</td><td>0.96 ± 0.01</td><td>0.94 ± 0.02</td><td>0.35 ± 0.06</td></tr><tr><td>EXPASS-GIN</td><td>0.98 ± 0.01</td><td><b>0.96 ± 0.02</b></td><td>0.11 ± 0.04</td></tr><tr><td rowspan="10">DD</td><td>GCN</td><td>0.73 ± 0.02</td><td>0.70 ± 0.02</td><td>0.49 ± 0.04</td></tr><tr><td>EXPASS-GCN</td><td>0.74 ± 0.01</td><td>0.70 ± 0.02</td><td><b>0.30 ± 0.09</b></td></tr><tr><td>GraphConv</td><td>0.75 ± 0.03</td><td>0.73 ± 0.03</td><td>0.25 ± 0.10</td></tr><tr><td>EXPASS-GraphConv</td><td>0.77 ± 0.03</td><td>0.73 ± 0.03</td><td><b>0.19 ± 0.04</b></td></tr><tr><td>LeConv</td><td>0.76 ± 0.03</td><td>0.74 ± 0.02</td><td>0.17 ± 0.03</td></tr><tr><td>EXPASS-LeConv</td><td>0.77 ± 0.03</td><td>0.73 ± 0.04</td><td>0.31 ± 0.10</td></tr><tr><td>GraphSAGE</td><td>0.74 ± 0.02</td><td>0.70 ± 0.02</td><td>0.21 ± 0.04</td></tr><tr><td>EXPASS-GraphSAGE</td><td>0.76 ± 0.03</td><td><b>0.71 ± 0.02</b></td><td>0.20 ± 0.03</td></tr><tr><td>GIN</td><td>0.74 ± 0.01</td><td>0.70 ± 0.01</td><td>0.37 ± 0.03</td></tr><tr><td>EXPASS-GIN</td><td>0.76 ± 0.01</td><td>0.74 ± 0.01</td><td>0.35 ± 0.05</td></tr><tr><td rowspan="10">MUTAG</td><td>GCN</td><td>0.71 ± 0.11</td><td>0.87 ± 0.01</td><td>0.09 ± 0.03</td></tr><tr><td>EXPASS-GCN</td><td>0.77 ± 0.02</td><td><b>0.89 ± 0.00</b></td><td>0.04 ± 0.01</td></tr><tr><td>GraphConv</td><td>0.91 ± 0.02</td><td>0.94 ± 0.02</td><td>0.66 ± 0.03</td></tr><tr><td>EXPASS-GraphConv</td><td>0.93 ± 0.01</td><td>0.94 ± 0.01</td><td>0.24 ± 0.03</td></tr><tr><td>LeConv</td><td>0.92 ± 0.03</td><td>0.94 ± 0.02</td><td>0.65 ± 0.05</td></tr><tr><td>EXPASS-LeConv</td><td>0.92 ± 0.03</td><td><b>0.96 ± 0.01</b></td><td><b>0.30 ± 0.06</b></td></tr><tr><td>GraphSAGE</td><td>0.76 ± 0.02</td><td>0.86 ± 0.03</td><td>0.24 ± 0.08</td></tr><tr><td>EXPASS-GraphSAGE</td><td>0.76 ± 0.02</td><td>0.87 ± 0.03</td><td><b>0.11 ± 0.03</b></td></tr><tr><td>GIN</td><td>0.92 ± 0.02</td><td>0.93 ± 0.01</td><td>0.61 ± 0.05</td></tr><tr><td>EXPASS-GIN</td><td>0.94 ± 0.02</td><td>0.95 ± 0.01</td><td>0.32 ± 0.04</td></tr><tr><td rowspan="10">Proteins</td><td>GCN</td><td>0.73 ± 0.05</td><td>0.68 ± 0.04</td><td>0.19 ± 0.02</td></tr><tr><td>EXPASS-GCN</td><td>0.74 ± 0.03</td><td><b>0.69 ± 0.03</b></td><td>0.08 ± 0.02</td></tr><tr><td>GraphConv</td><td>0.75 ± 0.03</td><td>0.70 ± 0.03</td><td>0.49 ± 0.06</td></tr><tr><td>EXPASS-GraphConv</td><td>0.75 ± 0.03</td><td>0.70 ± 0.04</td><td>0.10 ± 0.03</td></tr><tr><td>LeConv</td><td>0.77 ± 0.03</td><td>0.72 ± 0.04</td><td>0.51 ± 0.01</td></tr><tr><td>EXPASS-LeConv</td><td>0.76 ± 0.02</td><td>0.71 ± 0.03</td><td>0.15 ± 0.07</td></tr><tr><td>GraphSAGE</td><td>0.73 ± 0.04</td><td>0.69 ± 0.04</td><td>0.17 ± 0.07</td></tr><tr><td>EXPASS-GraphSAGE</td><td>0.73 ± 0.04</td><td>0.69 ± 0.04</td><td>0.06 ± 0.01</td></tr><tr><td>GIN</td><td>0.77 ± 0.04</td><td>0.73 ± 0.05</td><td>0.20 ± 0.07</td></tr><tr><td>EXPASS-GIN</td><td>0.78 ± 0.03</td><td>0.73 ± 0.04</td><td><b>0.19 ± 0.01</b></td></tr></table>
130
+
131
+ Q2) EXPASS relieves oversmoothing in GNNs. We compare the oversmoothing (measured using the Group Distance Ratio metric [18]) and predictive performance of GNNs trained using EXPASS against their vanilla counterparts. Oversmoothing in GNNs refers to node representations converging to similar vectors as the number of layers increases. We therefore analyze the oversmoothing of GNNs for an increasing number of layers and find that, on average, across two architectures, EXPASS improves the group distance ratio by 34.4% (Figure 2). Further, our results indicate an inherent trade-off between the oversmoothing and predictive performance of GNNs, as shown in Figures 4-5.
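The Group Distance Ratio can be sketched as the ratio of mean inter-class to mean intra-class pairwise embedding distances. Below is a minimal pure-Python sketch of this idea from [18]; the function and variable names are illustrative, not taken from the paper's code.

```python
def group_distance_ratio(emb, labels):
    """Ratio of mean inter-class to mean intra-class pairwise distances.

    emb: dict node -> embedding (list of floats); labels: dict node -> class.
    Higher values indicate more separated classes, i.e., less oversmoothing.
    """
    def dist(a, b):
        # Euclidean distance between two embeddings
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    intra, inter = [], []
    nodes = list(emb)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            (intra if labels[u] == labels[v] else inter).append(dist(emb[u], emb[v]))
    return (sum(inter) / len(inter)) / (sum(intra) / len(intra))
```

For two well-separated classes the ratio is large; as embeddings collapse to similar vectors it approaches 1.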
132
+
133
+ ![01963ef9-b541-7dc4-b2e7-081b3259e7b1_7_313_207_1169_379_0.jpg](images/01963ef9-b541-7dc4-b2e7-081b3259e7b1_7_313_207_1169_379_0.jpg)
134
+
135
+ Figure 2: The effects of the number of GNN layers on the oversmoothing performance of EXPASS (orange) and Vanilla (green) GCN (left column) and GIN (right column) models trained on Alkane-Carbonyl dataset. Across models with increasing number of layers, EXPASS achieves higher GDR performance without sacrificing the predictive performance of the GCN model. See Figs. 4-5 for predictive performance results.
136
+
137
+ Q3) Ablation studies. We conduct ablations on several components of EXPASS with respect to its oversmoothing and predictive performance. First, we investigate the oversmoothing and predictive performance of GNNs for different topK explanations (i.e., the topK edges identified by a GNN explanation) chosen in the message passing. Results show that EXPASS alleviates oversmoothing by using only the topK edges to learn graph embeddings, explicitly filtering out the noise from unimportant edges. In particular, we observe that the GDR values decrease (denoting higher oversmoothing) as the fraction of topK edges used increases (Figure 3). More specifically, we find that the GDR value at topK=0.1 is 11.92% higher than that of vanilla message passing (i.e., using all edges in the graph).
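The topK filtering described above can be sketched as a simple ranking step over explanation scores (an illustrative helper; the names are ours, not from the EXPASS implementation):

```python
def topk_edges(edge_scores, k_frac):
    """Keep only the top-k fraction of edges, ranked by explanation score.

    edge_scores: dict mapping (u, v) edges to importance scores.
    k_frac: fraction of edges to retain, e.g. 0.1 keeps the top 10%.
    """
    ranked = sorted(edge_scores.items(), key=lambda kv: kv[1], reverse=True)
    n_keep = max(1, round(k_frac * len(ranked)))
    return {edge for edge, _ in ranked[:n_keep]}
```

With k_frac = 1.0 every edge is kept, which is why EXPASS and vanilla message passing coincide at topK = 1.0 in Figure 3.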
138
+
139
+ Second, we compare the predictive and oversmoothing performance of EXPASS and DropEdge, showing that message passing using optimized explanation-directed information outperforms random edge removal. We find that EXPASS outperforms DropEdge across both oversmoothing and accuracy metrics. In particular, on average, across different topK values, EXPASS improves the oversmoothing, AUROC, and F1-score performance of vanilla message passing by 71.16%, 9.53%, and 12.63%, respectively (Figure 3).
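For contrast, DropEdge [44] removes edges uniformly at random rather than by importance. A minimal sketch follows; the seeded generator is our addition for reproducibility:

```python
import random

def drop_edge(edges, p, seed=0):
    """DropEdge-style random removal: keep each edge with probability 1 - p."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= p]
```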
140
+
141
+ ![01963ef9-b541-7dc4-b2e7-081b3259e7b1_7_312_1283_1172_337_0.jpg](images/01963ef9-b541-7dc4-b2e7-081b3259e7b1_7_312_1283_1172_337_0.jpg)
142
+
143
+ Figure 3: The effects of choosing only the topK fraction of important edges on the (a) oversmoothing, (b) AUROC, and (c) F1-score performance of the GCN model trained on the Alkane-Carbonyl dataset. Over a wide range of topK values $\left( {{0.1} < \text{topK} < {1.0}}\right)$ , EXPASS outperforms DropEdge [44] on all three metrics. Note that their performance converges at topK $= 1.0$ , as that denotes using all the edges in the graph.
144
+
145
+ Last, we investigate the effect of the choice of the baseline explanation method on the performance of EXPASS with respect to the vanilla message passing framework. More specifically, we evaluate the predictive and explainability performance of EXPASS-augmented GNNs when trained using node explanations generated using Integrated Gradients (IG) [43]. Similar to the results of EXPASS with GNNExplainer as the baseline explanation method (Table 1), we find that EXPASS trained using IG explanations also improves the AUROC (+2.80%), F1-score (+1.11%), and GEF (+23.67%) of the vanilla GNN model. Our results show that the choice of explainer can affect EXPASS performance, depending on the dataset. For instance, IG is a node-masking explainer that is not considered a strong explanation method, and its effects are variable across datasets [33]. We recommend using graph-specific explainers that optimize for fidelity and sparsity on the edges
146
+
147
+ Table 2: Results of EXPASS for GCN using the node explanations from Integrated Gradients [43] for message passing for various datasets. Shown is average performance across three independent runs. Arrows $\left( { \uparrow , \downarrow }\right)$ indicate the direction of better performance. EXPASS improves the predictive power (AUROC and F1-score) and degree of explainability (Graph Explanation Faithfulness) of original GNNs across multiple datasets (shaded area).
148
+
149
+ <table><tr><td>Dataset</td><td>Method</td><td>AUROC (↑)</td><td>F1-score (↑)</td><td>GEF (↓)</td></tr><tr><td rowspan="2">DD</td><td>GCN</td><td>0.73 ± 0.02</td><td>0.70 ± 0.02</td><td>0.25 ± 0.03</td></tr><tr><td>EXPASS-GCN</td><td>0.75 ± 0.01</td><td>0.71 ± 0.03</td><td>0.23 ± 0.04</td></tr><tr><td rowspan="2">ALKANE</td><td>GCN</td><td>0.97 ± 0.01</td><td>0.95 ± 0.01</td><td><b>0.09 ± 0.01</b></td></tr><tr><td>EXPASS-GCN</td><td>0.97 ± 0.01</td><td>0.95 ± 0.01</td><td>0.10 ± 0.01</td></tr><tr><td rowspan="2">Mutag</td><td>GCN</td><td>0.71 ± 0.11</td><td>0.87 ± 0.01</td><td>0.09 ± 0.02</td></tr><tr><td>EXPASS-GCN</td><td>0.77 ± 0.02</td><td><b>0.88 ± 0.01</b></td><td>0.04 ± 0.02</td></tr><tr><td rowspan="2">Proteins</td><td>GCN</td><td>0.73 ± 0.04</td><td><b>0.68 ± 0.04</b></td><td>0.05 ± 0.01</td></tr><tr><td>EXPASS-GCN</td><td>0.73 ± 0.04</td><td>0.67 ± 0.05</td><td>0.04 ± 0.01</td></tr></table>
150
+
151
+ of the input graph, which are best suited to increase the performance of the network. Further, our results show that EXPASS is a model- and explainer-agnostic framework that can improve the downstream task and explainability performance across different GNN architectures using diverse GNN explainers.
152
+
153
+ ## 6 Conclusion
154
+
155
+ In this work, we propose the problem of learning graph embeddings using explanation-directed message passing in GNNs. To this end, we introduce EXPASS , a novel message passing framework that can be used with any existing GNN model and subgraph-optimizing explainer to learn accurate embeddings by aggregating only embeddings from nodes and edges identified as important by a GNN explainer. We perform an extensive theoretical analysis to show that EXPASS relieves the oversmoothing problem in GNNs, and the embedding difference between the vanilla message passing framework and EXPASS can be upper bounded by the difference of their respective layer weights. Our empirical results on benchmark datasets show that EXPASS improves the explainability of the underlying GNN model without sacrificing its predictive performance. Our proposed method and findings open up exciting new avenues to generate graph embeddings by jointly training models and explanation methods. We anticipate that EXPASS could open new frontiers in graph machine learning for developing explanation-based training frameworks.
156
+
157
+ ## References
158
+
159
+ [1] Marinka Zitnik, Monica Agrawal, and Jure Leskovec. Modeling polypharmacy side effects with graph convolutional networks. In Bioinformatics, 2018. 1
160
+
161
+ [2] Kexin Huang, Cao Xiao, Lucas M Glass, Marinka Zitnik, and Jimeng Sun. Skipgnn: predicting molecular interactions with skip-graph networks. In Scientific Reports, 2020. 1
162
+
163
+ [3] Guangyin Jin, Qi Wang, Cunchao Zhu, Yanghe Feng, Jincai Huang, and Jiangping Zhou. Addressing crime situation forecasting task with temporal graph convolutional neural network approach. In ICMTMA, 2020. 1
164
+
165
+ [4] Chirag Agarwal, Himabindu Lakkaraju, and Marinka Zitnik. Towards a unified framework for fair and stable graph representation learning. In UAI. PMLR, 2021. 1, 13
166
+
167
+ [5] Federico Baldassarre and Hossein Azizpour. Explainability techniques for graph convolutional networks. In ICML Workshop on Learning and Reasoning with Graph-Structured Data, 2019. 1,2
168
+
169
+ [6] Lukas Faber, Amin K Moghaddam, and Roger Wattenhofer. Contrastive graph neural network explanation. In ICML Workshop on Graph Representation Learning and Beyond, 2020.
170
+
171
+ [7] Qiang Huang, Makoto Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, and Yi Chang. Graphlime: Local interpretable model explanations for graph neural networks. arXiv, 2020. 1, 2
172
+
173
+ [8] Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, and Fabrizio Silvestri. Cf-gnnexplainer: Counterfactual explanations for graph neural networks. arXiv, 2021. 1
174
+
175
+ [9] Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. Parameterized explainer for graph neural network. In NeurIPS, 2020. 1, 2
176
+
177
+ [10] Phillip E Pope, Soheil Kolouri, Mohammad Rostami, Charles E Martin, and Heiko Hoffmann. Explainability methods for graph convolutional neural networks. In CVPR, 2019. 1
178
+
179
+ [11] Michael Sejr Schlichtkrull, Nicola De Cao, and Ivan Titov. Interpreting graph neural networks for nlp with differentiable edge masking. In ICLR, 2021. 1, 2
180
+
181
+ [12] Minh N Vu and My T Thai. Pgm-explainer: Probabilistic graphical model explanations for graph neural networks. In NeurIPS, 2020. 1, 2
182
+
183
+ [13] Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. Gnnexplainer: Generating explanations for graph neural networks. In NeurIPS, 2019. 1, 2, 6
184
+
185
+ [14] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR, 2014. 1
186
+
187
+ [15] Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, and Shuiwang Ji. On explainability of graph neural networks via subgraph explorations. In ICML, 2021. 1
188
+
189
+ [16] Chirag Agarwal, Owen Queen, Himabindu Lakkaraju, and Marinka Zitnik. Evaluating explainability for graph neural networks. arXiv, 2022. 1, 2, 6
190
+
191
+ [17] Indro Spinelli, Simone Scardapane, and Aurelio Uncini. A meta-learning approach for training explainable graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2022. 1, 2
192
+
193
+ [18] Kaixiong Zhou, Xiao Huang, Yuening Li, Daochen Zha, Rui Chen, and Xia Hu. Towards deeper graph neural networks with differentiable group normalization. NeurIPS, 2020. 1, 2, 6, 7
194
+
195
+ [19] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. arXiv, 2019. 2, 13
196
+
197
+ [20] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv, 2013. 2
198
+
199
+ [21] Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv, 2015.
200
+
201
+ [22] Filippo Maria Bianchi, Daniele Grattarola, and Cesare Alippi. Spectral clustering with graph neural networks for graph pooling. In ICML, 2020.
202
+
203
+ [23] Kimberly Stachenfeld, Jonathan Godwin, and Peter Battaglia. Graph networks with spectral message passing. arXiv, 2020.
204
+
205
+ [24] Muhammet Balcilar, Renton Guillaume, Pierre Héroux, Benoit Gaüzère, Sébastien Adam, and Paul Honeine. Analyzing the expressive power of graph neural networks in a spectral perspective. In ICLR, 2021. 2
206
+
207
+ [25] Rianne van den Berg, Thomas N Kipf, and Max Welling. Graph convolutional matrix completion. arXiv, 2017. 2
208
+
209
+ [26] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In ICML, 2018.
210
+
211
+ [27] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR, 2019. 6
212
+
213
+ [28] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NeurIPS, 2017. 6
214
+
215
+ [29] Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. Heterogeneous graph transformer. In WWW, 2020. 2
216
+
217
+ [30] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. In IEEE Transactions on Neural Networks and Learning Systems, 2020. 2, 3
218
+
219
+ [31] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In ICML, 2017. 2
220
+
221
222
+
223
+ [32] Zhen Han, Peng Chen, Yunpu Ma, and Volker Tresp. Explainable subgraph reasoning for forecasting on temporal knowledge graphs. In ICLR, 2020. 2
224
+
225
+ [33] Chirag Agarwal, Marinka Zitnik, and Himabindu Lakkaraju. Probing gnn explainers: A rigorous theoretical and empirical analysis of gnn explanation methods. In AISTATS, 2022. 2, 3, 8
226
+
227
+ [34] Chen Cai and Yusu Wang. A note on over-smoothing for graph neural networks. ArXiv, 2020. 3, 5, 13
228
+
229
+ [35] Kaixiong Zhou, Xiao Huang, Daochen Zha, Rui Chen, Li Li, Soo-Hyun Choi, and Xia Hu. Dirichlet energy constrained learning for deep graph neural networks. In NeurIPS, 2021. 3, 5, 13
230
+
231
+ [36] Jeroen Kazius, Ross McGuire, and Roberta Bursi. Derivation and validation of toxicophores for mutagenicity prediction. In Journal of medicinal chemistry. ACS Publications, 2005. 6, 14
232
+
233
+ [37] Benjamin Sanchez-Lengeling, Jennifer Wei, Brian Lee, Emily Reif, Peter Wang, Wesley Qian, Kevin McCloskey, Lucy Colwell, and Alexander Wiltschko. Evaluating attribution for graph neural networks. In NeurIPS, 2020. 6, 14
234
+
235
+ [38] Nino Shervashidze, SVN Vishwanathan, Tobias Petri, Kurt Mehlhorn, and Karsten Borgwardt. Efficient graphlet kernels for large graph comparison. In AISTATS, 2009. 6, 14
236
+
237
+ [39] Karsten M. Borgwardt, Cheng Soon Ong, Stefan Schönauer, S. V. N. Vishwanathan, Alex J. Smola, and Hans-Peter Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21, 06 2005. 6, 14
238
+
239
+ [40] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017. 6
240
+
241
+ [41] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In AAAI, 2019. 6
242
+
243
+ [42] Ekagra Ranjan, Soumya Sanyal, and Partha Talukdar. Asap: Adaptive structure aware pooling for learning hierarchical graph representations. In AAAI, 2020. 6
244
+
245
+ [43] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In ICML, 2017. 6, 8, 9
246
+
247
+ [44] Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. Dropedge: Towards deep graph convolutional networks on node classification. In ICLR, 2020. 6, 8
248
+
249
+ [45] Paul D. Dobson and Andrew J. Doig. Distinguishing enzyme structures from non-enzymes without alignments. Journal of Molecular Biology, 330(4), 2003. 14
papers/LOG/LOG 2022/LOG 2022 Conference/_nlbNbawXDi/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,307 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § TOWARDS TRAINING GNNS USING EXPLANATION DIRECTED MESSAGE PASSING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ With the increasing use of Graph Neural Networks (GNNs) in critical real-world applications, several post hoc explanation methods have been proposed to understand their predictions. However, there has been no work on generating explanations on the fly during model training and utilizing them to improve the expressive power of the underlying GNN models. In this work, we introduce a novel explanation-directed neural message passing framework for GNNs, EXPASS (EXplainable message PASSing), which aggregates only embeddings from nodes and edges identified as important by a GNN explanation method. EXPASS can be used with any existing GNN architecture and subgraph-optimizing explainer to learn accurate graph embeddings. We theoretically show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy and that the embedding difference between the vanilla message passing and EXPASS framework can be upper bounded by the difference of their respective model weights. Our empirical results show that graph embeddings learned using EXPASS improve the predictive performance and alleviate the oversmoothing problems of GNNs, opening up new frontiers in graph machine learning to develop explanation-based training frameworks.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Graph Neural Networks (GNNs) are increasingly used as powerful tools for representing graph-structured data, such as social, information, chemical, and biological networks [1, 2]. With the deployment of GNN models in critical applications (e.g., financial systems and crime forecasting [3, 4]), it becomes essential to ensure that the relevant stakeholders understand and trust their decisions. To this end, several approaches [5-13] have been proposed in recent literature to generate post-hoc explanations for predictions of GNN models.
16
+
17
+ In contrast to other modalities like images and text, generating instance-level explanations for graphs is non-trivial. It is more challenging because individual node embeddings in GNNs aggregate information from the entire graph structure, and explanations can therefore be given at different levels (i.e., node attributes, nodes, and edges). While several categories of GNN explanation methods have been proposed, including gradient-based [5, 10, 14], perturbation-based [8, 9, 11, 13, 15], and surrogate-based [7, 12] methods, their utility is limited to generating post hoc node- and edge-level explanations for a given pre-trained GNN model. Thus, the capability of GNN explainers to improve the predictive performance of a GNN model remains poorly understood, as there is very little work systematically analyzing the reliability of state-of-the-art GNN explanation methods and their effect on model performance [16].
18
+
19
+ To address this, recent works have explored the joint optimization of machine learning models and explanation methods to improve the reliability of explanations [17, 18]. Zhou et al. [18] proposed DropEdge, a technique that drops random edges (similar to generating random edge explanations) during training to reduce overfitting in GNNs. More recently, Spinelli et al. [17] used meta-learning frameworks to generate GNN explanations and showed an improvement in the performance of specific GNN explanation methods. While these works make an initial attempt at jointly optimizing explainers and predictive models, they are neither generalizable nor exhaustive: they fail to show improvement in downstream GNN performance [17] and degree of explainability [18] across diverse GNN architectures and explainers. Further, there is little to no work on theoretically analyzing the effect of GNN explanations on the neural message passing framework in GNNs or on important GNN properties like oversmoothing [19].
20
+
21
+ Present work. In this work, we introduce a novel explanation-directed neural message passing framework, EXPASS, which can be used with any GNN model and subgraph-optimizing explainer to learn accurate graph representations. In particular, EXPASS utilizes GNN explanations to steer the underlying GNN model to learn graph embeddings using only important nodes and edges. EXPASS aims to define local neighborhoods for neural message passing, i.e., to identify the most important edges and nodes in the $k$ -hop local neighborhood of every node in the graph using explanation weights. Formally, we augment existing message passing architectures to allow information flow along important edges while blocking information along irrelevant edges.
22
+
23
+ We present an extensive theoretical and empirical analysis to show the effectiveness of EXPASS on the predictive, explainability, and oversmoothing performance of GNNs. Our theoretical results show that the embedding difference between vanilla message passing and EXPASS frameworks is upper-bounded by the difference between their model weights. Further, we show that embeddings learned using EXPASS relieve the oversmoothing problem in GNNs as they reduce information propagation by slowing the layer-wise loss of Dirichlet energy (Section 4.2). For our empirical analysis, we integrate EXPASS into state-of-the-art GNN models and evaluate their predictive, oversmoothing, and explainability performance on real-world graph datasets (Section 5). Our results show that, on average, across five GNN models, EXPASS improves the degree of explainability of the underlying GNNs by 39.68%. Our ablation studies show that for an increasing number of GNN layers, EXPASS achieves 34.4% better oversmoothing performance than its vanilla counterpart. Finally, our results demonstrate the effectiveness of using explanations during training, paving the way for new frontiers in GraphXAI research to develop explanation-based training algorithms.
24
+
25
+ § 2 RELATED WORKS
26
+
27
+ Graph Neural Networks. Graph Neural Networks (GNNs) are complex non-linear functions that transform input graph structures into a lower dimensional embedding space. The main goal of GNNs is to learn embeddings that reflect the underlying input graph structure, i.e., neighboring nodes in the graph are mapped to neighboring points in the embedding space. Prior works have proposed several GNN models using spectral and non-spectral approaches. Spectral models [20-24] leverage the Fourier transform and graph Laplacian to define convolution operations for GNN models. In contrast, non-spectral approaches [25-29] define the convolution operation by leveraging the local neighborhood of individual nodes in the graph. Most modern non-spectral models are message passing frameworks [30, 31], where nodes update their embeddings by aggregating information from $k$ -hop neighboring nodes.
28
+
29
+ Post hoc Explanations. With the increasing development of complex high-performing GNN models [25-29], it becomes critical to understand their decisions. Prior works have focused on developing several post hoc explanation methods to explain the decisions of GNN models [5, 7, 9, 11-13, 32]. More specifically, these explanation methods can be broadly categorized into i) gradient-based methods [5] that leverage the gradients of the GNN model to generate explanations; ii) perturbation-based methods [9, 11, 13] that aim to generate explanations by calculating the change in GNN predictions upon perturbations of the input graph structure (nodes, edges, or subgraphs); and iii) surrogate-based methods [7, 12] that fit a simple interpretable model to approximate the predictive behavior of the given GNN model. Finally, recent works have introduced frameworks to theoretically and empirically analyze the behavior of state-of-the-art GNN explanation methods with respect to several desirable properties [16, 33].
30
+
31
+ § 3 PRELIMINARIES
32
+
33
+ Notations. Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ denote an undirected graph comprising a set of nodes $\mathcal{V}$ and a set of edges $\mathcal{E}$ . Let $\mathbf{X} = \left\{ {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{N}}\right\}$ denote the set of node feature vectors for all nodes in $\mathcal{V}$ , where ${\mathbf{x}}_{v} \in {\mathbb{R}}^{d}$ captures the attribute values of a node $v$ and $N = \left| \mathcal{V}\right|$ denotes the number of nodes in the graph. Let $\mathbf{A} \in {\mathbb{R}}^{N \times N}$ be the graph adjacency matrix, where element ${\mathbf{A}}_{uv} = 1$ if there exists an edge $e \in \mathcal{E}$ between nodes $u$ and $v$ and ${\mathbf{A}}_{uv} = 0$ otherwise. We use ${\mathcal{N}}_{u}$ to denote the set of immediate neighbors of node $u$ , i.e., ${\mathcal{N}}_{u} = \left\{ {v \in \mathcal{V} \mid {\mathbf{A}}_{uv} = 1}\right\}$ . Finally, the function $\deg : \mathcal{V} \mapsto {\mathbb{Z}}_{ > 0}$ is defined as $\deg \left( v\right) = \left| {\mathcal{N}}_{v}\right|$ and outputs the degree of a node $v \in \mathcal{V}$ .
34
+
35
+ Graph Neural Networks (GNNs). Formally, GNNs can be formulated as message passing networks [30] specified by three key operators MSG, AGG, and UPD. These operators are recursively applied on a given graph $\mathcal{G}$ for an $L$ -layer GNN model, defining how neural messages are shared, aggregated, and updated between nodes to learn the final node representations in the ${L}^{\text{th}}$ layer of the GNN. Commonly, a message between a pair of nodes $\left( {u, v}\right)$ in layer $l$ is characterized as a function of their hidden representations ${\mathbf{h}}_{u}^{\left( l - 1\right) }$ and ${\mathbf{h}}_{v}^{\left( l - 1\right) }$ from the previous layer: ${\mathbf{m}}_{uv}^{\left( l\right) } = \operatorname{MSG}\left( {{\mathbf{h}}_{u}^{\left( l - 1\right) },{\mathbf{h}}_{v}^{\left( l - 1\right) }}\right)$ . The AGG operator retrieves the messages from the neighborhood of node $u$ and aggregates them as: ${\mathbf{m}}_{u}^{\left( l\right) } = \operatorname{AGG}\left( {{\mathbf{m}}_{uv}^{\left( l\right) } \mid v \in {\mathcal{N}}_{u}}\right)$ . Next, the UPD operator takes the aggregated message ${\mathbf{m}}_{u}^{\left( l\right) }$ at layer $l$ and combines it with ${\mathbf{h}}_{u}^{\left( l - 1\right) }$ to produce node $u$ 's representation for layer $l$ as ${\mathbf{h}}_{u}^{\left( l\right) } = \operatorname{UPD}\left( {{\mathbf{m}}_{u}^{\left( l\right) },{\mathbf{h}}_{u}^{\left( l - 1\right) }}\right)$ . Lastly, the final node representation for node $u$ is given as ${\mathbf{z}}_{u} = {\mathbf{h}}_{u}^{\left( L\right) }$ .
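The three operators can be illustrated with a toy, parameter-free layer. This is a pure-Python sketch; the concrete MSG/AGG/UPD choices here (identity message, sum aggregation, averaging update) are our illustrative assumptions, not a specific published architecture.

```python
def gnn_layer(h, neighbors):
    """One message-passing step: MSG returns the neighbor's embedding,
    AGG sums the messages, UPD averages the old embedding with the aggregate.

    h: dict node -> embedding (list of floats)
    neighbors: dict node -> list of neighboring nodes
    """
    h_new = {}
    for u in h:
        agg = [0.0] * len(h[u])
        for v in neighbors.get(u, []):
            for i, x in enumerate(h[v]):   # MSG: m_uv = h_v
                agg[i] += x                # AGG: elementwise summation
        # UPD: combine previous embedding with the aggregated message
        h_new[u] = [(a + b) / 2.0 for a, b in zip(h[u], agg)]
    return h_new
```

Stacking $L$ such layers yields the final representations $\mathbf{z}_u = \mathbf{h}_u^{(L)}$.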
36
+
37
+ Graph Explanations. In contrast to other modalities like images and text, an explanation method for graphs can formally generate multi-level explanations. For instance, in a graph classification task, the explanations for a given graph prediction can be with respect to node attributes ${\mathbf{M}}_{\mathrm{x}} \in {\mathbb{R}}^{d}$ , nodes ${\mathbf{M}}_{n} \in {\mathbb{R}}^{N}$ , or edges ${\mathbf{M}}_{e} \in {\mathbb{R}}^{N \times N}$ . Note that these explanation masks are continuous but can be discretized using specific thresholding strategies [33].
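One simple discretization strategy is hard thresholding; the 0.5 cutoff below is an illustrative default, not a value prescribed by [33]:

```python
def discretize_mask(mask, threshold=0.5):
    """Binarize a continuous explanation mask: 1 if score >= threshold, else 0."""
    return {k: int(v >= threshold) for k, v in mask.items()}
```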
38
+
39
+ Oversmoothing. Cai et al. [34] and Zhou et al. [35] defined bounds for analyzing oversmoothing in a GNN using the Dirichlet energy. For a graph $\mathcal{G}$ with adjacency matrix $\mathbf{A}$ and degree matrix $\mathbf{D}$ , we define $\widetilde{\mathbf{A}} = \mathbf{A} + {\mathbf{I}}_{N}$ and $\widetilde{\mathbf{D}} = \mathbf{D} + {\mathbf{I}}_{N}$ as the adjacency and degree matrices, respectively, of the graph $\mathcal{G}$ with self-loops. We also define the augmented normalized Laplacian of $\mathcal{G}$ as $\widetilde{\Delta } = {\mathbf{I}}_{N} - {\widetilde{\mathbf{D}}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}{\widetilde{\mathbf{D}}}^{-\frac{1}{2}}$ , and $\mathbf{P} = {\mathbf{I}}_{N} - \widetilde{\Delta }$ .
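The quantities above can be computed directly for a small dense adjacency matrix. This is a pure-Python sketch; a real implementation would use sparse matrices.

```python
def augmented_laplacian(A):
    """Augmented normalized Laplacian: I - D~^{-1/2} A~ D~^{-1/2},
    where A~ = A + I and D~ is the degree matrix of A~.

    A: adjacency matrix as a list of lists of 0/1 entries.
    """
    n = len(A)
    # A~ = A + I (add self-loops)
    A_t = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    # D~^{-1/2} as a vector of inverse square-root degrees
    d_inv_sqrt = [1.0 / sum(row) ** 0.5 for row in A_t]
    return [[(1.0 if i == j else 0.0) - d_inv_sqrt[i] * A_t[i][j] * d_inv_sqrt[j]
             for j in range(n)] for i in range(n)]
```

The propagation matrix $\mathbf{P}$ is then recovered as the identity minus this Laplacian.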
40
+
41
+ § 4 OUR FRAMEWORK: EXPASS
42
+
43
+ Here, we describe EXPASS, our proposed explainable message passing framework that aims to learn accurate and interpretable graph embeddings. In particular, EXPASS incorporates explanations into the message passing framework of GNN models by only aggregating embeddings from key nodes and edges as identified using an explanation method.
44
+
45
+ Problem formulation (Explanation Directed Message Passing). Given a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},X}\right)$ , EXPASS aims to generate a $d$ -dimensional embedding ${\mathbf{z}}_{u} \in {\mathbb{R}}^{d}$ for each node $u \in \mathcal{V}$ using an explanation-directed message passing framework that filters out the noise from unimportant edges and improves the expressive power of GNNs.
46
+
47
+ § 4.1 EXPLANATION DIRECTED MESSAGE PASSING
48
+
49
+ The central idea of EXPASS is to propose a novel method for improving the neural message passing scheme of GNN models by utilizing explanations during model training and aggregating important neural messages along edges in graph neighborhoods. Next, we describe the existing message passing scheme in GNNs and our explainable counterpart.
50
+
51
+ Message Passing. As described in Section 3, each GNN layer can be described using the MSG, AGG, and UPD operators. For each node $u \in \mathcal{V}$ , the ${\left( l + 1\right) }^{th}$ layer embedding ${\mathbf{h}}_{u}^{\left( l + 1\right) }$ is computed using a GNN operating on the node's neighboring attributes. Formally, the GNN layer can be formulated as:
52
+
53
+ $$
+ {\mathbf{h}}_{u}^{\left( l + 1\right) } = \phi \left( {{\mathbf{h}}_{u}^{\left( l\right) },{\bigoplus }_{v \in {\mathcal{N}}_{u}}\psi \left( {{\mathbf{h}}_{u}^{\left( l\right) },{\mathbf{h}}_{v}^{\left( l\right) }}\right) }\right)
+ $$
54
+
55
+ where ${\mathbf{h}}_{u}^{\left( l + 1\right) }$ represents the updated embedding of node $u$ , $\psi$ is the MSG operator, $\oplus$ is the AGG operator (e.g., summation), $\phi$ is an UPD function (e.g., any non-linear activation function), and ${\mathbf{h}}_{u}^{\left( l\right) }$ represents the embedding of node $u$ from the previous layer. We obtain an embedding ${\mathbf{z}}_{u}$ for node $u$ by stacking $L$ GNN layers. Finally, the node embeddings $\mathbf{Z} \in {\mathbb{R}}^{N \times d}$ are then passed to a READOUT function to obtain an embedding for the graph.
Figure 1: Overview of EXPASS: a) EXPASS investigates the problem of injecting explanations into the message passing framework to increase the expressive power and performance of GNNs. b) Shown is the general message passing scheme where, for node $u$, messages are aggregated from nodes ${v}_{i} \in {\mathcal{N}}_{u}$ in the 1-hop neighborhood of $u$. c) EXPASS injects explanations into the message passing framework by masking out messages from neighboring nodes ${v}_{i} \in {\mathcal{N}}_{u}$ with explanation scores ${s}_{u{v}_{i}} \approx 0$ when $u$ is correctly classified.
EXPASS. Here, we describe our proposed explainable message passing scheme that incorporates explanations into the message passing step of individual GNN layers on the fly during training. Given an explanation method that generates an importance score ${s}_{uv} \in {\mathbf{M}}_{u}^{e}$ for every edge ${e}_{uv} \in \mathcal{E}$, we can weight the edge contribution in the neighborhood ${\mathcal{N}}_{u}$ of node $u$ as:

$$
{\mathbf{h}}_{u}^{\prime }{}^{\left( l + 1\right) } = \phi \left( {{\mathbf{h}}_{u}^{\left( l\right) },{\bigoplus }_{v \in {\mathcal{N}}_{u}}\overline{{s}_{uv}}\psi \left( {{\mathbf{h}}_{u}^{\left( l\right) },{\mathbf{h}}_{v}^{\left( l\right) }}\right) }\right)
$$
Note that EXPASS is agnostic to explanation types and can also incorporate node-level and node-attribute explanations. For instance, the importance score for an individual node can be computed by averaging the outgoing scores ${s}_{uv}$ for all $v \in {\mathcal{N}}_{u}$. Subsequently, we can replace ${s}_{uv}$ with the average score ${s}_{u}$ to weight edges in the EXPASS layers, and for node attributes, we can multiply the node-attribute explanation ${\mathbf{M}}_{u}^{a}$ with the original node attribute vector.
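A toy numpy sketch of the explanation-weighted aggregation and the node-level averaging just described (illustrative only; `S` holds hypothetical edge scores $s_{uv}$, and the sum-aggregation/ReLU-update choices are ours):

```python
import numpy as np

def expass_layer(H, A, S):
    """EXPASS-style layer: each message h_v is scaled by the edge
    importance score s_uv before sum-aggregation and a ReLU update."""
    H_new = np.zeros_like(H)
    for u in range(A.shape[0]):
        agg = sum(S[u, v] * H[v] for v in np.nonzero(A[u])[0])
        H_new[u] = np.maximum(H[u] + agg, 0.0)
    return H_new

def node_scores(S, A):
    """Node-level importance: average of a node's outgoing edge scores."""
    deg = np.maximum(A.sum(axis=1), 1)
    return (S * A).sum(axis=1) / deg

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
H = np.array([[1.0, -1.0], [2.0, 0.0], [0.0, 3.0]])
S = np.array([[0.0, 1.0, 0.0],   # edge (0,1) important, edge (0,2) masked out
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
H1 = expass_layer(H, A, S)
```

With the score of edge $(0,2)$ near zero, node 2 contributes nothing to node 0's update, which is exactly the masking behavior sketched in Figure 1c.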
To enable explainable message passing and retain only the important embeddings for node $u$, EXPASS removes the knowledge of irrelevant nodes and edges from the local neighborhood ${\mathcal{N}}_{u}$ of node $u$ using its explanations. For instance, if node $v$ is considered important to node $u$, EXPASS transforms the aggregated messages of node $u$ using the importance score ${s}_{uv}$. Note that since the explanations of node $u$ include important nodes and edges in the $L$-hop neighborhood of node $u$, even though node $u$ is only locally modified, the change spreads through all the nodes in every GNN layer. Furthermore, to avoid spurious correlations, we ensure that explanations are generated only for correctly classified nodes and graphs. Explanation weights infuse information from higher-order neighborhoods into each layer of the GNN model, specifically from up to $L$-hop neighbors, because the explanation weights within each layer are computed using the $L$-layer GNN model. To illustrate this, we next show the weight computations for a GNN explanation method.

Without loss of generality, let us consider GNNExplainer as our explanation method, whose mask for the selected graph is formulated as ${\mathcal{G}}_{\text{ mask }} = \left( {{\mathbf{X}}^{\prime },{\mathbf{A}}^{\prime }}\right) = \left( {\mathbf{X} \odot \sigma \left( {\mathbf{M}}^{\mathrm{x}}\right) ,\mathbf{A} \odot \sigma \left( {\mathbf{M}}^{\mathrm{e}}\right) }\right)$, where $W = \left\lbrack {{\mathbf{M}}^{\mathrm{x}},{\mathbf{M}}^{\mathrm{e}}}\right\rbrack$ are the explainer's parameters, $\sigma$ is the sigmoid function, and $\odot$ denotes element-wise multiplication. Here, ${s}_{uv}$ is the element in row $v$ and column $u$ of ${\mathbf{M}}^{\mathrm{e}}$. Gradient descent-based optimization is used to find the optimal mask values minimizing the objective ${L}_{e} = - \mathop{\sum }\limits_{{c = 1}}^{C}1\left\lbrack {y = c}\right\rbrack \log {f}_{\theta }\left( {Y = y \mid {\mathcal{G}}_{\text{ mask }}}\right)$, where ${f}_{\theta }$ is the $L$-layer GNN model and $C$ is the total number of classes. This shows that an $L$-hop neighborhood is used to compute ${s}_{uv}$.
§ 4.2 THEORETICAL ANALYSIS

Here, we provide a detailed theoretical analysis of our proposed EXPASS framework. In particular, we (i) provide a theoretical upper bound on the difference between the embeddings obtained from vanilla message passing and from the EXPASS framework and (ii) show that graph embeddings learned using EXPASS relieve the oversmoothing problem in GNNs by reducing information propagation.

Theorem 1 (Differences between EXPASS and Vanilla Message Passing). Given a non-linear activation function $\sigma$ that is Lipschitz continuous, the difference between the node embeddings of the vanilla message passing and EXPASS frameworks can be bounded by the difference in their individual weights, i.e.,

$$
{\begin{Vmatrix}{\mathbf{h}}_{u}^{\left( l\right) } - {\mathbf{h}}^{\prime }{}_{u}^{\left( l\right) }\end{Vmatrix}}_{2} \leq {\begin{Vmatrix}{\mathbf{W}}_{a}^{\left( l\right) } - {\mathbf{W}}^{\prime }{}_{a}^{\left( l\right) }\end{Vmatrix}}_{2}{\begin{Vmatrix}{\mathbf{h}}_{u}^{\left( l - 1\right) }\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{W}}_{n}^{\left( l\right) } - {\mathbf{W}}^{\prime }{}_{n}^{\left( l\right) }\end{Vmatrix}}_{2}\mathop{\sum }\limits_{\substack{{v \in {\mathcal{N}}_{u} \cap {s}_{v} = 1} }}{\begin{Vmatrix}{\mathbf{h}}_{v}^{\left( l - 1\right) }\end{Vmatrix}}_{2}, \tag{1}
$$

where ${\mathbf{W}}_{a}^{\left( l\right) }$ and ${\mathbf{W}}_{a}^{\prime \left( l\right) }$ are the weights for node $u$ in layer $l$ of the vanilla message passing and EXPASS frameworks, and ${\mathbf{W}}_{n}^{\left( l\right) }$ and ${\mathbf{W}}^{\prime }{}_{n}^{\left( l\right) }$ are their respective weight matrices for the neighbors of node $u$ at layer $l$.

Proof Sketch. In Theorem 1, we prove that the ${\ell }_{2}$-norm of the difference between the embeddings of the vanilla message passing and EXPASS frameworks at layer $l$ is upper bounded by the difference between their weights and the embeddings of node $u$ and its subgraph. See Appendix A for more details.
Definition 1 (Dirichlet Energy for a Node Embedding Matrix [35]). Given a node embedding matrix ${\mathbf{H}}^{\left( l\right) } = {\left\lbrack {\mathbf{h}}_{1}^{\left( l\right) },\ldots ,{\mathbf{h}}_{n}^{\left( l\right) }\right\rbrack }^{T}$ learned from the GNN model at the ${l}^{\text{ th }}$ layer, the Dirichlet Energy $E\left( {\mathbf{H}}^{\left( l\right) }\right)$ is defined as:

$$
E\left( {\mathbf{H}}^{\left( l\right) }\right) = \operatorname{tr}\left( {{\mathbf{H}}^{{\left( l\right) }^{T}}\widetilde{\Delta }{\mathbf{H}}^{\left( l\right) }}\right) = \frac{1}{2}\mathop{\sum }\limits_{{i,j \in \mathcal{V}}}{a}_{ij}{\begin{Vmatrix}\frac{{\mathbf{H}}_{i}^{\left( l\right) }}{\sqrt{1 + {\deg }_{i}}} - \frac{{\mathbf{H}}_{j}^{\left( l\right) }}{\sqrt{1 + {\deg }_{j}}}\end{Vmatrix}}_{2}^{2} \tag{2}
$$

where ${a}_{ij}$ are elements of the adjacency matrix $\widetilde{\mathbf{A}}$ and ${\deg }_{i}$ and ${\deg }_{j}$ are the degrees of nodes $i$ and $j$, respectively.
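Definition 1 can be computed directly; the following numpy sketch (our illustration, assuming a dense symmetric adjacency) makes the degree normalization explicit:

```python
import numpy as np

def dirichlet_energy(H, A):
    """E(H) = 1/2 * sum_{i,j} a_ij || H_i/sqrt(1+deg_i) - H_j/sqrt(1+deg_j) ||_2^2."""
    deg = A.sum(axis=1)
    Hn = H / np.sqrt(1.0 + deg)[:, None]       # degree-normalised rows
    diff = Hn[:, None, :] - Hn[None, :, :]     # all pairwise differences
    return 0.5 * np.sum(A * np.sum(diff ** 2, axis=-1))

A = np.array([[0.0, 1.0], [1.0, 0.0]])
E_smooth = dirichlet_energy(np.array([[1.0, 1.0], [1.0, 1.0]]), A)  # identical rows
E_sharp = dirichlet_energy(np.array([[1.0, 0.0], [0.0, 1.0]]), A)   # distinct rows
```

Identical embeddings yield zero energy, matching the intuition that low Dirichlet energy corresponds to oversmoothed representations.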
Cai et al. [34] show extensively that higher Dirichlet energies correspond to lower oversmoothing. Furthermore, they show that removing edges or, similarly, reducing edge weights on graphs helps alleviate oversmoothing.

Proposition (EXPASS relieves Oversmoothing). EXPASS alleviates oversmoothing by slowing the layer-wise loss of Dirichlet energy.

The complete proof is provided in Appendix A.
§ 5 EXPERIMENTS

Next, we present experimental results for our EXPASS framework. More specifically, we address the following questions: Q1) Does EXPASS enable GNNs to learn more accurate embeddings and improve their degree of explainability? Q2) How does EXPASS affect the oversmoothing and predictive performance of GNNs with an increasing number of layers? Q3) Does EXPASS depend on the quality of explanations for improving the predictive and oversmoothing performance of GNNs?
§ 5.1 DATASETS AND EXPERIMENTAL SETUP

We first describe the datasets used to study the utility of our proposed EXPASS framework and then outline the experimental setup.

Datasets. We use real-world molecular chemistry datasets to evaluate the effectiveness of EXPASS w.r.t. the performance of the underlying GNN model and to understand the trade-off between explainability and accuracy for a graph classification task. We consider four benchmark datasets: Mutag [36], Alkane-Carbonyl [37], DD [38], and Proteins [39]. See Appendix B.1 for a detailed overview of the datasets.

GNN Architectures and Explainers. To investigate the flexibility of EXPASS, we incorporate it into five different GNN models: GCN [40], GraphConv [41], LEConv [42], GraphSAGE [28], and GIN [27]. We use GNNExplainer [13] as our baseline GNN explanation method to generate edge-level explanations for most of our experiments. In addition, we use Integrated Gradients [43], a node-level explanation method, to demonstrate EXPASS's sensitivity to the choice of explainer.

Implementation details. We consider DropEdge [44] as our baseline method for comparing the oversmoothing performance of EXPASS, as DropEdge randomly removes edges from the input graph at each training epoch, acting as a message passing reducer. Across all experiments, we use the topK (k=40%) node features/edges to generate explanations for all explanation methods. All other hyperparameters of the explanation and baseline methods were set following the authors' guidelines. For all our experiments (unless mentioned otherwise), we use the baseline architectures with three GNN layers followed by ReLU layers and set the hidden dimensionality to 32. Finally, we use a single linear layer to map the graph embeddings to their respective classes. See Appendix B.2 for more details.
Performance metrics for GNN Explainers. To measure the reliability of GNN explanation methods, we use the graph explanation faithfulness metric [16]: $\operatorname{GEF}\left( {{\widehat{y}}_{u},{\widehat{y}}_{{u}^{\prime }}}\right) = 1 - {\exp }^{-\mathrm{{KL}}\left( {{\widehat{y}}_{u}\parallel {\widehat{y}}_{{u}^{\prime }}}\right) }$, where ${\widehat{y}}_{u}$ is the probability vector predicted using the whole subgraph and ${\widehat{y}}_{{u}^{\prime }}$ is the probability vector predicted using the masked subgraph, which we generate by using only the topK features identified by an explanation. The Kullback-Leibler (KL) divergence (denoted by the "$\parallel$" operator) quantifies the distance between the two probability distributions. Note that GEF measures the unfaithfulness of an explanation, so higher values indicate a higher degree of unfaithfulness.
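The GEF metric amounts to a few lines of numpy (our sketch; the `eps` smoothing term is our addition to avoid a log of zero):

```python
import numpy as np

def gef(p_full, p_masked, eps=1e-12):
    """Graph explanation unfaithfulness: 1 - exp(-KL(p_full || p_masked)).
    A value near 0 means the masked subgraph reproduces the prediction."""
    p = np.asarray(p_full, dtype=float) + eps
    q = np.asarray(p_masked, dtype=float) + eps
    return 1.0 - np.exp(-np.sum(p * np.log(p / q)))
```

For example, when the masked subgraph yields the same prediction distribution, the KL term vanishes and GEF is 0; the further the masked prediction drifts, the closer GEF gets to 1.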
Performance metrics for Oversmoothing. Zhou et al. [18] introduced the Group Distance Ratio (GDR) metric to quantify oversmoothing in GNNs. It measures the ratio between the average pairwise representation distance of graphs belonging to different groups (inter) and that of graphs in the same group (intra). To relieve oversmoothing, one would like to reduce the intra-group distances and increase the inter-group distances. Hence, lower GDR values denote higher oversmoothing in GNNs.
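A minimal sketch of this ratio (our simplified reading of the metric: mean inter-group over mean intra-group pairwise distance):

```python
import numpy as np

def group_distance_ratio(Z, labels):
    """Mean inter-group / mean intra-group pairwise embedding distance.
    Lower values indicate stronger oversmoothing."""
    Z, labels = np.asarray(Z, float), np.asarray(labels)
    d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(Z), dtype=bool)
    return d[~same].mean() / d[same & off_diag].mean()

Z = [[0, 0], [0, 1], [10, 0], [10, 1]]
r = group_distance_ratio(Z, [0, 0, 1, 1])   # well-separated groups -> large ratio
```

As embeddings collapse toward a single point, both means shrink together and the ratio drops toward 1, signalling oversmoothing.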
Burn-in period. We define the burn-in period as a number $n$ of training epochs during which no explanations are used. The burn-in period is necessary to avoid feeding spurious explanations to the model. Its length, i.e. the number of epochs, was treated as a hyperparameter and tuned during the model fine-tuning phase. After the burn-in period, a predefined percentage of correctly predicted graphs per batch is randomly sampled and their explanations are used in model training. This percentage was also treated as a hyperparameter and was set to 0.4 for all our experiments.
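The burn-in logic can be summarised as a training-loop skeleton (purely illustrative; `model_step` and `explain` are hypothetical callables, not part of the paper's code):

```python
import random

def train_with_burn_in(model_step, explain, batches, epochs, burn_in, frac=0.4):
    """Skip explanations for the first `burn_in` epochs; afterwards, attach
    explanation weights for a random fraction of correctly predicted graphs."""
    for epoch in range(epochs):
        for batch in batches:
            if epoch < burn_in:
                model_step(batch, weights=None)       # vanilla message passing
            else:
                correct = [g for g in batch if g["pred"] == g["label"]]
                chosen = random.sample(correct, int(frac * len(correct)))
                model_step(batch, weights={id(g): explain(g) for g in chosen})

# Tiny smoke run with stub callables.
used_explanations = []
def model_step(batch, weights):
    used_explanations.append(weights is not None)
batches = [[{"pred": 0, "label": 0}, {"pred": 1, "label": 0}]]
train_with_burn_in(model_step, lambda g: 1.0, batches, epochs=3, burn_in=2)
```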
§ 5.2 RESULTS

Q1) EXPASS improves the predictive performance and explainability of GNNs. To measure the predictive performance and degree of explainability of GNNs trained using EXPASS, we compute their average predictive performance (using AUROC and F1-score) and fidelity (using Graph Explanation Faithfulness) for different GNN models and datasets. Across four datasets and five GNN architectures, we find that EXPASS-augmented GNNs learn graph embeddings that are more accurate (higher AUROC and F1-score) and result in more faithful explanations (lower Graph Explanation Faithfulness score) than their vanilla counterparts. On average, EXPASS improves the AUROC and F1-score by 1.51% and 1.05%, respectively. In particular, we observe that EXPASS improves the predictive behavior of high-performing models like GIN (+2.06% in AUROC and +2.50% in F1-score) but shows little to no improvement for LeConv, which utilizes a node-scoring mechanism based on the similarity between a node's embedding and those of its neighbors. Finally, we find that EXPASS-augmented GNNs significantly improve the explainability of a GNN and achieve a 39.68% better faithfulness score than vanilla GNNs (Table 1).
Table 1: Results of EXPASS for five GNNs and four graph datasets. Shown is the average performance across three independent runs. Arrows (↑, ↓) indicate the direction of better performance. EXPASS improves the predictive power (AUROC and F1-score) and degree of explainability (Graph Explanation Faithfulness) of the original GNNs across multiple datasets. Values corresponding to the best performance are bolded.

| Dataset | Method | AUROC (↑) | F1-score (↑) | GEF (↓) |
| --- | --- | --- | --- | --- |
| Alkane-Carbonyl | GCN | 0.97 ± 0.01 | 0.95 ± 0.01 | 0.33 ± 0.02 |
| | EXPASS-GCN | **0.98 ± 0.00** | **0.96 ± 0.01** | **0.23 ± 0.02** |
| | GraphConv | 0.97 ± 0.01 | 0.94 ± 0.00 | 0.38 ± 0.05 |
| | EXPASS-GraphConv | **0.98 ± 0.00** | **0.97 ± 0.00** | **0.22 ± 0.03** |
| | LeConv | 0.98 ± 0.01 | 0.96 ± 0.00 | 0.37 ± 0.03 |
| | EXPASS-LeConv | 0.98 ± 0.00 | 0.96 ± 0.01 | **0.24 ± 0.03** |
| | GraphSAGE | 0.98 ± 0.00 | 0.96 ± 0.00 | 0.40 ± 0.12 |
| | EXPASS-GraphSAGE | **0.99 ± 0.00** | **0.97 ± 0.01** | **0.18 ± 0.06** |
| | GIN | 0.96 ± 0.01 | 0.94 ± 0.02 | 0.35 ± 0.06 |
| | EXPASS-GIN | **0.98 ± 0.01** | **0.96 ± 0.02** | **0.11 ± 0.04** |
| DD | GCN | 0.73 ± 0.02 | 0.70 ± 0.02 | 0.49 ± 0.04 |
| | EXPASS-GCN | **0.74 ± 0.01** | 0.70 ± 0.02 | **0.30 ± 0.09** |
| | GraphConv | 0.75 ± 0.03 | 0.73 ± 0.03 | 0.25 ± 0.10 |
| | EXPASS-GraphConv | **0.77 ± 0.03** | 0.73 ± 0.03 | **0.19 ± 0.04** |
| | LeConv | 0.76 ± 0.03 | **0.74 ± 0.02** | **0.17 ± 0.03** |
| | EXPASS-LeConv | **0.77 ± 0.03** | 0.73 ± 0.04 | 0.31 ± 0.10 |
| | GraphSAGE | 0.74 ± 0.02 | 0.70 ± 0.02 | 0.21 ± 0.04 |
| | EXPASS-GraphSAGE | **0.76 ± 0.03** | **0.71 ± 0.02** | **0.20 ± 0.03** |
| | GIN | 0.74 ± 0.01 | 0.70 ± 0.01 | 0.37 ± 0.03 |
| | EXPASS-GIN | **0.76 ± 0.01** | **0.74 ± 0.01** | **0.35 ± 0.05** |
| MUTAG | GCN | 0.71 ± 0.11 | 0.87 ± 0.01 | 0.09 ± 0.03 |
| | EXPASS-GCN | **0.77 ± 0.02** | **0.89 ± 0.00** | **0.04 ± 0.01** |
| | GraphConv | 0.91 ± 0.02 | 0.94 ± 0.02 | 0.66 ± 0.03 |
| | EXPASS-GraphConv | **0.93 ± 0.01** | 0.94 ± 0.01 | **0.24 ± 0.03** |
| | LeConv | 0.92 ± 0.03 | 0.94 ± 0.02 | 0.65 ± 0.05 |
| | EXPASS-LeConv | 0.92 ± 0.03 | **0.96 ± 0.01** | **0.30 ± 0.06** |
| | GraphSAGE | 0.76 ± 0.02 | 0.86 ± 0.03 | 0.24 ± 0.08 |
| | EXPASS-GraphSAGE | 0.76 ± 0.02 | **0.87 ± 0.03** | **0.11 ± 0.03** |
| | GIN | 0.92 ± 0.02 | 0.93 ± 0.01 | 0.61 ± 0.05 |
| | EXPASS-GIN | **0.94 ± 0.02** | **0.95 ± 0.01** | **0.32 ± 0.04** |
| Proteins | GCN | 0.73 ± 0.05 | 0.68 ± 0.04 | 0.19 ± 0.02 |
| | EXPASS-GCN | **0.74 ± 0.03** | **0.69 ± 0.03** | **0.08 ± 0.02** |
| | GraphConv | 0.75 ± 0.03 | 0.70 ± 0.03 | 0.49 ± 0.06 |
| | EXPASS-GraphConv | 0.75 ± 0.03 | 0.70 ± 0.04 | **0.10 ± 0.03** |
| | LeConv | **0.77 ± 0.03** | **0.72 ± 0.04** | 0.51 ± 0.01 |
| | EXPASS-LeConv | 0.76 ± 0.02 | 0.71 ± 0.03 | **0.15 ± 0.07** |
| | GraphSAGE | 0.73 ± 0.04 | 0.69 ± 0.04 | 0.17 ± 0.07 |
| | EXPASS-GraphSAGE | 0.73 ± 0.04 | 0.69 ± 0.04 | **0.06 ± 0.01** |
| | GIN | 0.77 ± 0.04 | 0.73 ± 0.05 | 0.20 ± 0.07 |
| | EXPASS-GIN | **0.78 ± 0.03** | 0.73 ± 0.04 | **0.19 ± 0.01** |
Q2) EXPASS relieves oversmoothing in GNNs. We compare the oversmoothing (using the Group Distance Ratio metric [18]) and predictive performance of GNNs trained using EXPASS with their vanilla counterparts. The oversmoothing problem in GNNs means that node representations converge to similar vectors as the number of layers increases. Therefore, we analyze the oversmoothing of the GNNs for an increasing number of layers and find that, on average, across two architectures, EXPASS improves the group distance ratio by 34.4% (Figure 2). Further, our results indicate an inherent trade-off between the oversmoothing and predictive performance of GNNs, as shown in Figures 4-5.

Figure 2: The effect of the number of GNN layers on the oversmoothing performance of EXPASS (orange) and vanilla (green) GCN (left column) and GIN (right column) models trained on the Alkane-Carbonyl dataset. Across models with an increasing number of layers, EXPASS achieves higher GDR without sacrificing the predictive performance of the GCN model. See Figs. 4-5 for predictive performance results.
Q3) Ablation studies. We conduct ablations on several components of EXPASS with respect to its oversmoothing and predictive performance. First, we investigate the oversmoothing and predictive performance of GNNs for different topK explanations (i.e., the topK edges identified by a GNN explanation) used in message passing. The results show that EXPASS alleviates oversmoothing by using only the topK edges to learn graph embeddings, explicitly filtering out the noise from unimportant edges. In particular, we observe that the GDR values decrease (denoting higher oversmoothing) as a larger fraction of topK edges is used (Figure 3). More specifically, we find that the GDR value at topK=0.1 is 11.92% higher than for vanilla message passing (i.e., using all edges in the graph).
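The topK edge selection used in this ablation can be sketched as follows (our numpy illustration; assumes a 0/1 adjacency and a dense score matrix, and uses a simple quantile threshold):

```python
import numpy as np

def topk_edge_mask(S, A, k=0.4):
    """Keep only the top-k fraction of existing edges, ranked by score."""
    scores = S[A > 0]
    thresh = np.quantile(scores, 1.0 - k)
    return ((S >= thresh) & (A > 0)).astype(float)

A = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]], dtype=float)
S = np.array([[0.0, 0.1, 0.2],
              [0.0, 0.0, 0.3],
              [0.4, 0.0, 0.0]])
M = topk_edge_mask(S, A, k=0.5)   # retains the two highest-scoring edges
```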
Second, we compare the oversmoothing and predictive performance of EXPASS and DropEdge. Here, we show that message passing using optimized explanation-directed information outperforms random edge removal: EXPASS outperforms DropEdge on both the oversmoothing and accuracy metrics. In particular, on average across different topK values, EXPASS improves the oversmoothing, AUROC, and F1-score performance of vanilla message passing by 71.16%, 9.53%, and 12.63%, respectively (Figure 3).
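For reference, the DropEdge baseline essentially amounts to the following (our dense-matrix sketch; the original operates on sparse edge lists and resamples every epoch):

```python
import numpy as np

def drop_edge(A, p, rng=None):
    """Remove each undirected edge independently with probability p."""
    rng = rng if rng is not None else np.random.default_rng(0)
    keep = (rng.random(A.shape) >= p) & (np.triu(A, 1) > 0)
    return (keep | keep.T).astype(float)   # re-symmetrise the kept edges

A = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
A_all = drop_edge(A, p=0.0)    # p=0 keeps every edge
A_none = drop_edge(A, p=1.0)   # p=1 removes every edge
```

The contrast with EXPASS is that DropEdge draws this mask at random, whereas EXPASS derives it from optimized explanation scores.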
Figure 3: The effect of choosing only the topK percent of important edges on the (a) oversmoothing, (b) AUROC, and (c) F1-score performance of a GCN model trained on the Alkane-Carbonyl dataset. Over a wide range of topK values $\left( {{0.1} < \text{ topK } < {1.0}}\right)$, EXPASS outperforms DropEdge [44] on all three metrics. Note that their performance converges for topK = 1.0, as that denotes using all the edges in the graph.
Last, we investigate the effect of the choice of the baseline explanation method on the performance of EXPASS with respect to the vanilla message passing framework. More specifically, we evaluate the predictive and explainability performance of EXPASS-augmented GNNs when trained using node explanations generated with Integrated Gradients (IG) [43]. Similar to the results of EXPASS with GNNExplainer as the baseline explanation method (Table 1), we find that EXPASS trained using IG explanations also improves the AUROC (+2.80%), F1-score (+1.11%), and GEF (+23.67%) of the vanilla GNN model. Our results show that the choice of explainer can make a difference in EXPASS's performance, depending on the dataset. For instance, IG is a node-masking explainer that is not considered a strong explanation method, and its effects are variable across datasets [33]. We recommend using graph-specific explainers that optimize for fidelity and sparsity on the edges
Table 2: Results of EXPASS for GCN using node explanations from Integrated Gradients [43] for message passing on various datasets. Shown is the average performance across three independent runs. Arrows (↑, ↓) indicate the direction of better performance. EXPASS improves the predictive power (AUROC and F1-score) and degree of explainability (Graph Explanation Faithfulness) of the original GNNs across multiple datasets.

| Dataset | Method | AUROC (↑) | F1-score (↑) | GEF (↓) |
| --- | --- | --- | --- | --- |
| DD | GCN | 0.73 ± 0.02 | 0.70 ± 0.02 | 0.25 ± 0.03 |
| | EXPASS-GCN | **0.75 ± 0.01** | **0.71 ± 0.03** | **0.23 ± 0.04** |
| Alkane | GCN | 0.97 ± 0.01 | 0.95 ± 0.01 | **0.09 ± 0.01** |
| | EXPASS-GCN | 0.97 ± 0.01 | 0.95 ± 0.01 | 0.10 ± 0.01 |
| Mutag | GCN | 0.71 ± 0.11 | 0.87 ± 0.01 | 0.09 ± 0.02 |
| | EXPASS-GCN | **0.77 ± 0.02** | **0.88 ± 0.01** | **0.04 ± 0.02** |
| Proteins | GCN | 0.73 ± 0.04 | **0.68 ± 0.04** | 0.05 ± 0.01 |
| | EXPASS-GCN | 0.73 ± 0.04 | 0.67 ± 0.05 | **0.04 ± 0.01** |
of the input graph, which are the best fit for improving the performance of the network. Further, our results show that EXPASS is a model- and explainer-agnostic framework that can improve the downstream-task and explainability performance of different GNN architectures using diverse GNN explainers.
§ 6 CONCLUSION

In this work, we propose the problem of learning graph embeddings using explanation-directed message passing in GNNs. To this end, we introduce EXPASS, a novel message passing framework that can be used with any existing GNN model and subgraph-optimizing explainer to learn accurate embeddings by aggregating only the embeddings of nodes and edges identified as important by a GNN explainer. We perform an extensive theoretical analysis to show that EXPASS relieves the oversmoothing problem in GNNs and that the embedding difference between the vanilla message passing framework and EXPASS can be upper bounded by the difference of their respective layer weights. Our empirical results on benchmark datasets show that EXPASS improves the explainability of the underlying GNN model without sacrificing its predictive performance. Our proposed method and findings open up exciting new avenues for generating graph embeddings by jointly training models and explanation methods. We anticipate that EXPASS could open new frontiers in graph machine learning for developing explanation-based training frameworks.
papers/LOG/LOG 2022/LOG 2022 Conference/vEbUaN9Z2V8/Initial_manuscript_md/Initial_manuscript.md ADDED
# Leave Graphs Alone: Addressing Over-Squashing without Rewiring

Anonymous Author(s)

Anonymous Affiliation

Anonymous Email

## Abstract

Recent works have investigated the role of graph bottlenecks in preventing long-range information propagation in message-passing graph neural networks, causing the so-called 'over-squashing' phenomenon. As a remedy, graph rewiring mechanisms have been proposed as preprocessing steps. Graph Echo State Networks (GESNs) are a reservoir computing model for graphs, where node embeddings are recursively computed by an untrained message-passing function. In this paper, we show that GESNs can achieve a significantly better accuracy on six heterophilic node classification tasks without altering the graph connectivity, thus suggesting a different route for addressing the over-squashing problem.
## 1 Challenges in Node Classification

Relations between entities, such as paper citations or links between web pages, can be best represented by graphs. Since the introduction of pioneering models such as the Neural Network for Graphs [1] and the Graph Neural Network [2], a plethora of neural models have been proposed to solve graph-, edge-, and node-level tasks [3-5], most of them sharing an architecture structured in layers that perform local aggregations of node features, e.g. graph convolution networks (GCNs) [6-8].

However, as the development of deep learning on graphs progressed, several challenges preventing the computation of effective node representations have emerged. Li et al. [9] first presented over-smoothing as an issue by analysing the accuracy decay as the number of layers increases in deep graph convolutional networks on semi-supervised node classification tasks. Oono and Suzuki [10] showed that repeated applications of a GCN layer cause the node representations to asymptotically converge to a low-frequency subspace of the graph spectrum. Furthermore, by acting as low-pass filters, GCN representations are biased in favour of tasks whose graphs present a high degree of homophily, that is, nodes in the same neighbourhood share the same class [11]. In general, the inability to extract meaningful features in deeper layers for tasks that require discovering long-range relationships between nodes is called under-reaching. Alon and Yahav [12] maintain that one of its causes is over-squashing: the problem of encoding an exponentially growing receptive field [1] in a fixed-size node embedding dimension. Topping et al. [13] have provided theoretical insights into this issue by identifying over-squashing with the exponential decrease in sensitivity of node representations to the input features on distant nodes as the number of layers increases. For example, a GCN model [8] computes the representation ${\mathbf{h}}_{v}^{\left( \ell \right) } \in {\mathbb{R}}^{H}$ of node $v$ in layer $\ell$ as the aggregation of previous-layer features in neighbouring nodes ${v}^{\prime } \in \mathcal{N}\left( v\right)$, i.e.
$$
{\mathbf{h}}_{v}^{\left( \ell \right) } = \operatorname{relu}\left( {\mathop{\sum }\limits_{{{v}^{\prime } \in \mathcal{N}\left( v\right) }}{\widehat{\mathbf{A}}}_{v,{v}^{\prime }}{\mathbf{W}}^{\left( \ell \right) }{\mathbf{h}}_{{v}^{\prime }}^{\left( \ell - 1\right) }}\right) , \tag{1}
$$

with $\widehat{\mathbf{A}}$ as the normalized graph adjacency matrix and input node features ${\mathbf{x}}_{v} \in {\mathbb{R}}^{X}$ in layer $\ell = 1$. The sensitivity of ${\mathbf{h}}_{v}^{\left( \ell \right) }$ to the input ${\mathbf{x}}_{{v}^{\prime }}$, assuming that there exists an $\ell$-path between nodes $v$ and ${v}^{\prime }$, is upper bounded by

$$
{\begin{Vmatrix}\frac{\partial {\mathbf{h}}_{v}^{\left( \ell \right) }}{\partial {\mathbf{x}}_{{v}^{\prime }}}\end{Vmatrix}} \leq \left( {\mathop{\prod }\limits_{{k = 1}}^{\ell }\begin{Vmatrix}{\mathbf{W}}^{\left( k\right) }\end{Vmatrix}}\right) {\left( {\widehat{\mathbf{A}}}^{\ell }\right) }_{v,{v}^{\prime }}. \tag{2}
$$
Submitted to the First Learning on Graphs Conference (LoG 2022). Do not distribute.

Topping et al. [13] have further investigated the connection of over-squashing - as measured by the Jacobian of node representations in (2) - with the graph topology via the term ${\left( {\widehat{\mathbf{A}}}^{\ell }\right) }_{v,{v}^{\prime }}$, and have identified negative local graph curvature as the cause of 'bottlenecks' in message propagation. In order to remove these bottlenecks, they have proposed rewiring the input graph, i.e. altering the original set of edges as a preprocessing step, via Stochastic Discrete Ricci Flow (SDRF). This method works by iteratively adding an edge to support the most negatively-curved edge while removing the most positively-curved one according to the balanced Forman curvature [13], until convergence or a maximum number of iterations is reached. This rewiring approach can be contrasted with e.g. Graph Diffusion Convolution (DIGL) [14], which aims to address the problem of noisy edges in the input graph by altering the connectivity according to a generalized graph diffusion process, such as personalized PageRank (PPR). Since DIGL has a smoothing effect on the graph adjacency - by promoting connectivity between nodes that are a short diffusion distance apart - it may be more suitable for tasks that present a high degree of homophily [13], i.e. graphs with a high ratio of intra-class edges [11].
In our opinion, equation (2) instead suggests a different method of addressing the exponentially vanishing sensitivity in deeper layers, by acting on the layers' Lipschitz constants $\begin{Vmatrix}{\mathbf{W}}^{\left( l\right) }\end{Vmatrix}$. In the next section, we present a model for computing node embeddings in which the Lipschitz constants can be explicitly chosen as part of the hyper-parameter selection. This will enable an experimental comparison between the two approaches in section 3.
## 2 Reservoir Computing for Graphs

Reservoir computing [15-17] is a paradigm for the efficient design of recurrent neural networks (RNNs). Input data is encoded by a randomly initialized reservoir, while only the readout layer for downstream task predictions requires training. Reservoir computing models, in particular Echo State Networks (ESNs) [18], have been studied in order to obtain insights into the architectural bias of RNNs [19, 20].

Graph Echo State Networks (GESNs) have been introduced by Gallicchio and Micheli [21], extending the reservoir computing paradigm to graph-structured data. GESNs have already demonstrated their effectiveness in graph-level classification tasks [22], and more recently in node-level classification tasks [23], in particular when the underlying graphs present low homophily. Node embeddings are recursively computed by the non-linear dynamical system
$$
{\mathbf{h}}_{v}^{\left( k\right) } = \tanh \left( {{\mathbf{W}}_{\text{in }}{\mathbf{x}}_{v} + \mathop{\sum }\limits_{{{v}^{\prime } \in \mathcal{N}\left( v\right) }}\widehat{\mathbf{W}}{\mathbf{h}}_{{v}^{\prime }}^{\left( k - 1\right) }}\right) ,\;{\mathbf{h}}_{v}^{\left( 0\right) } = \mathbf{0}, \tag{3}
$$

where ${\mathbf{W}}_{\text{in }} \in {\mathbb{R}}^{H \times X}$ and $\widehat{\mathbf{W}} \in {\mathbb{R}}^{H \times H}$ are the input-to-reservoir and the recurrent weights, respectively, for a reservoir with $H$ units (input bias is omitted). Equation (3) is iterated over $k$ until the system state converges to the fixed point ${\mathbf{h}}_{v}^{\left( \infty \right) }$, which is used as the embedding. For node classification tasks, a linear readout ${\mathbf{y}}_{v} = {\mathbf{W}}_{\text{out }}{\mathbf{h}}_{v}^{\left( \infty \right) } + {\mathbf{b}}_{\text{out }}$ is applied to the node embeddings, where the weights ${\mathbf{W}}_{\text{out }} \in {\mathbb{R}}^{C \times H},{\mathbf{b}}_{\text{out }} \in {\mathbb{R}}^{C}$ are trained by ridge regression on one-hot encodings of the target classes ${y}_{v}$.
44
+
45
+ The existence of a fixed point is guaranteed by the Graph Embedding Stability (GES) property [22], which also guarantees independence from the system’s initial state ${\mathbf{h}}_{v}^{\left( 0\right) }$ . A sufficient condition for the GES property is requiring the transition function defined in (3) to be contractive, i.e. to have Lipschitz constant $\parallel \widehat{\mathbf{W}}\parallel \parallel \mathbf{A}\parallel < 1$ . In standard reservoir computing practice, however, the recurrent weights are initialized according to a necessary condition [24] for the GES property, which is $\rho \left( \widehat{\mathbf{W}}\right) < 1/\alpha$ , where $\rho \left( \cdot \right)$ denotes the spectral radius of a matrix, i.e. its largest absolute eigenvalue, and $\alpha = \rho \left( \mathbf{A}\right)$ is the graph spectral radius. This condition provides the best estimate of the system bifurcation point, i.e. the threshold beyond which (3) becomes asymptotically unstable [24]. Reservoir weights are randomly initialized from a uniform distribution in $\left\lbrack {-1,1}\right\rbrack$ , and then rescaled to the desired input scaling and reservoir spectral radius, without requiring any training.
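A sketch of this initialization, with names and signature of our own choosing: both weight matrices are drawn uniformly in $[-1,1]$, then $\mathbf{W}_{\text{in}}$ is rescaled by the input scaling and $\widehat{\mathbf{W}}$ to a prescribed spectral radius (under the necessary condition above, a stable choice corresponds to a radius below $1/\alpha$).

```python
import numpy as np

def init_reservoir(n_units, n_features, rho, input_scaling, seed=0):
    # Untrained reservoir weights, uniform in [-1, 1]:
    # W_in is rescaled by the input scaling, W_hat to spectral radius rho.
    rng = np.random.default_rng(seed)
    W_in = input_scaling * rng.uniform(-1.0, 1.0, (n_units, n_features))
    W_hat = rng.uniform(-1.0, 1.0, (n_units, n_units))
    W_hat *= rho / np.max(np.abs(np.linalg.eigvals(W_hat)))
    return W_in, W_hat
```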
46
+
47
+ A recent work by Tortorella and Micheli [23] has shown that in tasks where the graph structure is relevant to the prediction, the best node embeddings are computed well beyond the stability threshold; in this case, the number of iterations of (3) is fixed to a constant $K$ . The $K$ iterations of (3) can be interpreted as equivalent to $K$ graph convolution layers with weights shared among layers and input skip connections. Since the spectral radius is a lower bound for the spectral norm [25], i.e. $\rho \left( \widehat{\mathbf{W}}\right) \leq \parallel \widehat{\mathbf{W}}\parallel$ , increasing $\rho \left( \widehat{\mathbf{W}}\right)$ allows us to increase the Lipschitz constant of (3). This in turn should allow us to counteract the exponentially vanishing sensitivity in (2) caused by topological bottlenecks with the factor $\parallel \widehat{\mathbf{W}}{\parallel }^{K}$ , which grows with the number of iterations (unfolded recursive layers) if $\parallel \widehat{\mathbf{W}}\parallel > 1$ .
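A quick numeric check of this relation (an illustration of ours, not from the paper): for a random matrix, the spectral radius never exceeds the spectral norm, and rescaling the matrix so that $\rho(\widehat{\mathbf{W}}) > 1$ also pushes $\|\widehat{\mathbf{W}}\|$ above 1, making the factor $\|\widehat{\mathbf{W}}\|^{K}$ grow with $K$.

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.uniform(-1.0, 1.0, (64, 64))
radius = np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius rho(W)
norm = np.linalg.norm(W, 2)                    # spectral norm ||W|| (largest singular value)
print(radius <= norm)                          # the lower-bound relation of [25]
W *= 2.0 / radius                              # rescale so that rho(W) = 2 > 1
print(np.linalg.norm(W, 2) ** 10)              # ||W||^K for K = 10 exceeds 1 since ||W|| > 1
```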
48
+
49
+ Table 1: Average test accuracy with ${95}\%$ confidence intervals (best results in bold). Except for GESN, the other results are reported from [13].
50
+
51
+ <table><tr><td/><td>Cornell</td><td>Texas</td><td>Wisconsin</td><td>Chameleon</td><td>Squirrel</td><td>Actor</td></tr><tr><td>None</td><td>${52.69}_{\pm {0.21}}$</td><td>${61.19}_{\pm {0.49}}$</td><td>${54.60}_{\pm {0.86}}$</td><td>${41.80}_{\pm {0.41}}$</td><td>${39.83}_{\pm {0.14}}$</td><td>${28.70}_{\pm {0.09}}$</td></tr><tr><td>Undirected</td><td>${53.20}_{\pm {0.53}}$</td><td>${63.38}_{\pm {0.87}}$</td><td>${51.37}_{\pm {1.15}}$</td><td>${42.63}_{\pm {0.30}}$</td><td>${40.77}_{\pm {0.16}}$</td><td>${28.10}_{\pm {0.11}}$</td></tr><tr><td>+FA</td><td>${58.29}_{\pm {0.49}}$</td><td>${64.82}_{\pm {0.29}}$</td><td>${55.48}_{\pm {0.62}}$</td><td>${42.33}_{\pm {0.17}}$</td><td>${40.74}_{\pm {0.13}}$</td><td>${28.68}_{\pm {0.16}}$</td></tr><tr><td>DIGL (PPR)</td><td>${58.26}_{\pm {0.50}}$</td><td>${62.03}_{\pm {0.43}}$</td><td>${49.53}_{\pm {0.27}}$</td><td>${42.02}_{\pm {0.13}}$</td><td>${34.38}_{\pm {0.11}}$</td><td>${30.79}_{\pm {0.10}}$</td></tr><tr><td>DIGL + Undir.</td><td>${59.54}_{\pm {0.64}}$</td><td>${63.54}_{\pm {0.38}}$</td><td>${52.23}_{\pm {0.54}}$</td><td>${42.68}_{\pm {0.12}}$</td><td>${33.36}_{\pm {0.21}}$</td><td>${29.71}_{\pm {0.11}}$</td></tr><tr><td>SDRF</td><td>${54.60}_{\pm {0.39}}$</td><td>${64.46}_{\pm {0.38}}$</td><td>${55.51}_{\pm {0.27}}$</td><td>${43.75}_{\pm {0.31}}$</td><td>${40.97}_{\pm {0.14}}$</td><td>${29.70}_{\pm {0.13}}$</td></tr><tr><td>SDRF + Undir.</td><td>${57.54}_{\pm {0.34}}$</td><td>${70.35}_{\pm {0.60}}$</td><td>${61.55}_{\pm {0.86}}$</td><td>${44.46}_{\pm {0.17}}$</td><td>${41.47}_{\pm {0.21}}$</td><td>${29.85}_{\pm {0.07}}$</td></tr><tr><td>GESN</td><td>${\mathbf{{69.75}}}_{\pm {1.11}}$</td><td>${\mathbf{{73.96}}}_{\pm {1.45}}$</td><td>${\mathbf{{77.76}}}_{\pm {1.68}}$</td><td>${\mathbf{{50.19}}}_{\pm {0.65}}$</td><td>${\mathbf{{42.70}}}_{\pm {0.29}}$</td><td>${\mathbf{{35.07}}}_{\pm {0.24}}$</td></tr></table>
52
+
53
+ ## 3 Experiments and Discussion
54
+
55
+ In this section, we compare the accuracy of GESNs on six low-homophily node classification tasks against different rewiring mechanisms applied in conjunction with fully-trained GCNs. In our experiments we follow the same setting and training/validation/test splits of [13, 14], reporting the average accuracy with ${95}\%$ confidence intervals on 1000 test bootstraps. As in [23], the hyper-parameters selected on the validation split for GESN are the reservoir radius ${\rho }^{1}$ in the range $\left\lbrack {{0.1},{35}}\right\rbrack$ , the input scaling in the range $\left\lbrack {\frac{1}{320},1}\right\rbrack$ , the number of units $H$ in the range $\left\lbrack {{2}^{4},{2}^{12}}\right\rbrack$ , and the readout regularization; the number of iterations is fixed at $K = {100}$ . In Table 1 we compare the accuracy of GESN against the +FA rewiring method by Alon and Yahav [12], the diffusion-based rewiring method DIGL (with PPR) by Gasteiger et al. [14], and the curvature-based graph rewiring method by Topping et al. [13] (for details on these models' hyper-parameters, we refer to [13], from which the experimental results are taken). We observe that GESNs beat the other models by a significant margin on all six tasks. Indeed, DIGL and SDRF offer improvements over the baseline GCN of a few accuracy points on average, usually also requiring the graph to be made undirected. In contrast, GESN improves up to ${16}\%$ over the best rewiring methods, and by 4-6 points on average. Notice also that rewiring algorithms, in particular SDRF, can be extremely costly and need careful tuning in model selection, in contrast to the efficiency of the reservoir computing approach, which dispenses with both the preprocessing of input graphs and the training of the node embedding function. Indeed, just the preprocessing step of SDRF can require computations ranging from the order of minutes to hours, while a complete model can be obtained with GESN in a few seconds' time on the same GPU.
56
+
57
+ As a further insight, in Figure 1 we present the t-SNE plots of node embeddings of the Cora graph computed at different iterations of (3) with reservoir radius set at ${\rho \alpha } = 6$ . In GESNs, the iterations of the recursive transition function can be interpreted as equivalent to layers in deep message-passing graph networks where weights are shared among layers, in analogy with the unrolling of RNNs for sequences. We observe that instead of the collapse of node representations that has been shown in Li et al. [9] and subsequent works on the over-smoothing issue, node embeddings become more and more separable as the number of iterations increases. This observation, in conjunction with the accuracy results of Table 1 and of [23], suggests that the contractivity of the message-passing function is the critical factor in addressing the degradation of accuracy in deep graph neural networks. Indeed, tuning the layer contractivity was implicitly done by Chen et al. [26] via a regularization term that favors larger pairwise distances of node representations as a means to address the over-smoothing problem.
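The growing separability can also be probed numerically without t-SNE, e.g. by tracking the mean pairwise distance between node embeddings across iterations. The sketch below uses a random graph, random features, and an effective radius ${\rho\alpha} = 6$ of our own choosing, so it only illustrates the mechanism, not the Cora result.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, f, h = 20, 8, 64
A = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
A = A + A.T                                  # random undirected graph
X = rng.normal(size=(n, f))
W_in = 0.1 * rng.uniform(-1, 1, (h, f))
W = rng.uniform(-1, 1, (h, h))
alpha = np.max(np.abs(np.linalg.eigvals(A)))
W *= 6.0 / (alpha * np.max(np.abs(np.linalg.eigvals(W))))  # rho * alpha = 6, beyond stability

Hk = np.zeros((n, h))
for k in range(1, 51):
    Hk = np.tanh(X @ W_in.T + A @ Hk @ W.T)  # iterate equation (3)
    if k in (1, 10, 50):
        d = np.mean([np.linalg.norm(Hk[i] - Hk[j])
                     for i, j in combinations(range(n), 2)])
        print(f"k={k:2d}  mean pairwise distance {d:.3f}")
```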
58
+
59
+ ---
60
+
61
+ ${}^{1}$ Indirectly controlling how large the Lipschitz constant of (3) should be.
62
+
63
+ ---
64
+
65
+ ![01963ed8-000b-7ea5-bf7d-718dd5208a58_3_312_223_1167_505_0.jpg](images/01963ed8-000b-7ea5-bf7d-718dd5208a58_3_312_223_1167_505_0.jpg)
66
+
67
+ Figure 1: Node embeddings for the Cora graph at different iterations $k\left( {{\rho \alpha } = 6,{4096}\text{units}}\right)$ . Colors in the t-SNE plots represent different node classes, qualitatively showing how well separable the node representations are.
68
+
69
+ ## 4 Conclusion
70
+
71
+ Motivated by the analysis of over-squashing via sensitivity to input features advanced by Topping et al. [13], we have proposed a different route to address this issue affecting the capability of deep graph neural networks to learn effective node representations. Instead of altering the input graph connectivity - as rewiring methods such as SDRF and DIGL propose - we have shown that a model able to select the suitable Lipschitz constant for its graph convolution can achieve a significantly better accuracy on six node classification tasks with low homophily, even computing the node embeddings in a completely unsupervised and untrained fashion. Future work will involve investigating how the change in Lipschitz constant affects the organization of the node embedding space, and assessing the merit of transferring those results to fully-trained graph convolution models via a regularization term or via constraints on the layers' weights.
72
+
73
+ ## References
74
+
75
+ [1] Alessio Micheli. Neural network for graphs: A contextual constructive approach. IEEE Transactions on Neural Networks, 20(3):498-511, 2009. ISSN 1045-9227. 1
76
+
77
+ [2] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009. 1
78
+
79
+ [3] Davide Bacciu, Federico Errica, Alessio Micheli, and Marco Podda. A gentle introduction to deep learning for graphs. Neural Networks, 129:203-221, 2020. 1
80
+
81
+ [4] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4-24, 2021.
82
+
83
+ [5] Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinícius Flores Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Çaglar Gülçehre, H. Francis Song, Andrew J. Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey R. Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matthew M. Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. Relational inductive biases, deep learning, and graph networks. CoRR, abs/1806.01261, 2018. URL http://arxiv.org/abs/1806.01261.1
84
+
85
+ [6] David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett,
86
+
87
+ editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. 1
88
+
89
+ [7] James Atwood and Don Towsley. Diffusion-convolutional neural networks. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016.
90
+
91
+ [8] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, 2017. 1
92
+
93
+ [9] Qimai Li, Zhichao Han, and Xiao-ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32 (1),2018. 1,3
94
+
95
+ [10] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In 8th International Conference on Learning Representations, 2020. 1
96
+
97
+ [11] Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. In Advances in Neural Information Processing Systems, volume 33, pages 7793-7804, 2020. 1, 2, 6
98
+
99
+ [12] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In 9th International Conference on Learning Representations, 2021. 1, 3
100
+
101
+ [13] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. In 10th International Conference on Learning Representations, 2022. 1, 2, 3, 4
102
+
103
+ [14] Johannes Gasteiger, Stefan Weißenberger, and Stephan Günnemann. Diffusion improves graph learning. In Advances in Neural Information Processing Systems, volume 32, pages 13298- 13310, 2019. 2, 3
104
+
105
+ [15] Kohei Nakajima and Ingo Fischer, editors. Reservoir Computing: Theory, Physical Implementations, and Applications. Natural Computing Series. Springer, Singapore, 2021. ISBN 978-981-13-1686-9.2
106
+
107
+ [16] Mantas Lukoševičius and Herbert Jaeger. Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127-149, 2009. ISSN 15740137.
108
+
109
+ [17] David Verstraeten, Benjamin Schrauwen, Michiel d'Haene, and Dirk Stroobandt. An experimental unification of reservoir computing methods. Neural networks, 20(3):391-403, 2007. 2
110
+
111
+ [18] Herbert Jaeger and Harald Haas. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, 304(5667):78-80, 2004. 2
112
+
113
+ [19] Barbara Hammer and Peter Tiño. Recurrent neural networks with small weights implement definite memory machines. Neural Computation, 15(8):1897-1929, 2003. 2
114
+
115
+ [20] Claudio Gallicchio and Alessio Micheli. Architectural and markovian factors of echo state networks. Neural Networks, 24(5):440-456, 2011. 2
116
+
117
+ [21] Claudio Gallicchio and Alessio Micheli. Graph echo state networks. In The 2010 International Joint Conference on Neural Networks, pages 3967-3974, 2010. 2
118
+
119
+ [22] Claudio Gallicchio and Alessio Micheli. Fast and deep graph neural networks. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):3898-3905, 2020. 2
120
+
121
+ [23] Domenico Tortorella and Alessio Micheli. Beyond homophily with graph echo state networks. In 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), 2022. 2, 3, 6
122
+
123
+ [24] Domenico Tortorella, Claudio Gallicchio, and Alessio Micheli. Spectral bounds for graph echo state network stability. In The 2022 International Joint Conference on Neural Networks, 2022. 2
124
+
125
+ [25] Moshe Goldberg and Gideon Zwas. On matrices having equal spectral radius and spectral norm. Linear Algebra and its Applications, 8(5):427-434, 1974. 2
126
+
127
+ [26] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):3438-3445, 2020. 3
128
+
129
+ ## A Comparison with node classification models
130
+
131
+ For the sake of completeness, in Table 2 we report accuracy of GESN and other node classification models on nine graphs with different degrees of homophily, following the experimental setting of Zhu et al. [11]. The results show that GESN is effective on tasks with high homophily as well as on tasks with low homophily, thanks to the ability to tune the Lipschitz constant of (3).
132
+
133
+ Table 2: Node classification accuracy on low and high homophily graphs following the experimental setting of Zhu et al. [11]. Average accuracy and standard deviation for GESN is reported from [23], while other models are reported from [11]. Results within one standard deviation of the best accuracy are highlighted.
134
+
135
+ <table><tr><td/><td>Texas</td><td>Wisconsin</td><td>Actor</td><td>Squirrel</td><td>Chameleon</td><td>Cornell</td><td>Citeseer</td><td>Pubmed</td><td>Cora</td></tr><tr><td>GraphSAGE</td><td>${82.4}_{\pm {6.1}}$</td><td>${81.2}_{\pm {5.6}}$</td><td>${34.2}_{\pm {1.0}}$</td><td>${41.6}_{\pm {0.7}}$</td><td>${58.7}_{\pm {1.7}}$</td><td>${75.9}_{\pm {5.0}}$</td><td>${76.0}_{\pm {1.3}}$</td><td>${88.5}_{\pm {0.5}}$</td><td>${86.9}_{\pm {1.0}}$</td></tr><tr><td>GAT</td><td>${58.4}_{\pm {4.5}}$</td><td>${55.3}_{\pm {8.7}}$</td><td>${26.3}_{\pm {1.7}}$</td><td>${30.6}_{\pm {2.1}}$</td><td>${54.7} \pm {1.9}$</td><td>${58.9}_{\pm {3.3}}$</td><td>${75.5}_{\pm {1.7}}$</td><td>${84.7}_{\pm {0.4}}$</td><td>${82.7} \pm {1.8}$</td></tr><tr><td>GCN</td><td>${59.5}_{\pm {5.3}}$</td><td>${59.8}_{\pm {7.0}}$</td><td>${30.3}_{\pm {0.8}}$</td><td>${36.9}_{\pm {1.3}}$</td><td>${59.8}_{\pm {2.6}}$</td><td>${57.0}_{\pm {4.7}}$</td><td>${76.7}_{\pm {1.6}}$</td><td>${87.4}_{\pm {0.7}}$</td><td>${87.3}_{\pm {1.3}}$</td></tr><tr><td>GCN+JK</td><td>${66.5}_{\pm {6.6}}$</td><td>${74.3}_{\pm {6.4}}$</td><td>${34.2}_{\pm {0.9}}$</td><td>${40.5}_{\pm {1.6}}$</td><td>${63.4}_{\pm {2.0}}$</td><td>${64.6}_{\pm {8.7}}$</td><td>${74.5}_{\pm {1.8}}$</td><td>${88.4}_{\pm {0.5}}$</td><td>${85.8}_{\pm {0.9}}$</td></tr><tr><td>GCN+Cheby</td><td>${77.3}_{\pm {4.1}}$</td><td>${79.4}_{\pm {4.5}}$</td><td>${34.1}_{\pm {1.1}}$</td><td>${43.9}_{\pm {1.6}}$</td><td>${55.2}_{\pm {2.8}}$</td><td>${74.3}_{\pm {7.5}}$</td><td>${75.8}_{\pm {1.5}}$</td><td>${88.7}_{\pm {0.6}}$</td><td>${86.8}_{\pm {1.0}}$</td></tr><tr><td>MixHop</td><td>${77.8}_{\pm {7.7}}$</td><td>${75.9}_{\pm {4.9}}$</td><td>${32.2}_{\pm {2.3}}$</td><td>${43.8}_{\pm {1.5}}$</td><td>${60.5} \pm {2.5}$</td><td>${73.5}_{\pm {6.3}}$</td><td>${76.3}_{\pm {1.3}}$</td><td>${85.3}_{\pm {0.6}}$</td><td>${87.6}_{\pm {0.9}}$</td></tr><tr><td>H2GCN</td><td>${84.9}_{\pm {6.8}}$</td><td>${86.7}_{\pm {4.7}}$</td><td>${35.9}_{\pm {1.0}}$</td><td>${36.4}_{\pm 
{1.9}}$</td><td>${57.1}_{\pm {1.6}}$</td><td>${82.2}_{\pm {4.8}}$</td><td>${77.1}_{\pm {1.6}}$</td><td>${89.4}_{\pm {0.3}}$</td><td>${86.9}_{\pm {1.4}}$</td></tr><tr><td>MLP</td><td>${81.9}_{\pm {4.8}}$</td><td>${85.3}_{\pm {3.6}}$</td><td>${35.8}_{\pm {1.0}}$</td><td>${29.7}_{\pm {1.8}}$</td><td>${46.4}_{\pm {2.5}}$</td><td>${81.1}_{\pm {6.4}}$</td><td>${72.4}_{\pm {2.2}}$</td><td>${86.7}_{\pm {0.4}}$</td><td>${74.8}_{\pm {2.2}}$</td></tr><tr><td>GESN</td><td>${84.3}_{\pm {4.4}}$</td><td>${83.3}_{\pm {3.8}}$</td><td>${34.5}_{\pm {0.8}}$</td><td>${71.2}_{\pm {1.5}}$</td><td>${76.2}_{\pm {1.2}}$</td><td>${81.1}_{\pm {6.0}}$</td><td>${74.5}_{\pm {2.1}}$</td><td>${89.2}_{\pm {0.3}}$</td><td>${86.0}_{\pm {1.0}}$</td></tr></table>
136
+
137
+ Table 3: Statistics for the tasks in Table 2.
138
+
139
+ <table><tr><td>Task</td><td>Homophily</td><td>Nodes</td><td>Edges</td><td>Radius $\alpha$</td><td>Features</td><td>Classes</td></tr><tr><td>Texas</td><td>0.11</td><td>183</td><td>295</td><td>2.56</td><td>1,703</td><td>5</td></tr><tr><td>Wisconsin</td><td>0.21</td><td>251</td><td>466</td><td>2.88</td><td>1,703</td><td>5</td></tr><tr><td>Actor</td><td>0.22</td><td>7,600</td><td>26,752</td><td>9.99</td><td>932</td><td>5</td></tr><tr><td>Squirrel</td><td>0.22</td><td>5,201</td><td>198,493</td><td>138.60</td><td>2,089</td><td>5</td></tr><tr><td>Chameleon</td><td>0.23</td><td>2,277</td><td>31,421</td><td>61.90</td><td>2,089</td><td>5</td></tr><tr><td>Cornell</td><td>0.30</td><td>183</td><td>280</td><td>2.68</td><td>1,703</td><td>5</td></tr><tr><td>Citeseer</td><td>0.74</td><td>3,327</td><td>9,104</td><td>13.74</td><td>3,703</td><td>6</td></tr><tr><td>Pubmed</td><td>0.80</td><td>19,717</td><td>88,648</td><td>23.24</td><td>500</td><td>3</td></tr><tr><td>Cora</td><td>0.81</td><td>2,708</td><td>10,556</td><td>14.39</td><td>1,433</td><td>7</td></tr></table>
140
+
141
+ ## B Role of reservoir radius
142
+
143
+ In Figure 2, we show the impact of reservoir radius $\rho$ and input scaling factor on average test accuracy for the tasks in Appendix A, reaffirming the analysis of Tortorella and Micheli [23]. Chameleon and Squirrel (two tasks with low homophily) require an extremely large reservoir radius, while essentially ignoring the input features due to the extremely small input scaling factor. This suggests that having a large Lipschitz constant is beneficial for the extraction of relevant topological features from the graph. The other four low homophily tasks (Actor, Cornell, Texas, Wisconsin) seem to rely more on the information of node input labels than on graph connectivity, requiring reservoir radii within the stability threshold. Finally, the three high homophily tasks (Cora, Citeseer, Pubmed) achieve the best accuracy with a combination of moderately high spectral radius and input scaling relatively close to 1. Overall, what we have observed shows that GESN can be flexible enough to accommodate the two opposite task requirements thanks to the explicit tuning of both input scaling and reservoir radius in the model selection phase.
144
+
145
+ ![01963ed8-000b-7ea5-bf7d-718dd5208a58_6_360_243_1070_1755_0.jpg](images/01963ed8-000b-7ea5-bf7d-718dd5208a58_6_360_243_1070_1755_0.jpg)
146
+
147
+ Figure 2: Impact of input scaling and reservoir radius on test accuracy (4096 units).
148
+
papers/LOG/LOG 2022/LOG 2022 Conference/vEbUaN9Z2V8/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,95 @@
 
 
1
+ § LEAVE GRAPHS ALONE: ADDRESSING OVER-SQUASHING WITHOUT REWIRING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Recent works have investigated the role of graph bottlenecks in preventing long-range information propagation in message-passing graph neural networks, causing the so-called 'over-squashing' phenomenon. As a remedy, graph rewiring mechanisms have been proposed as preprocessing steps. Graph Echo State Networks (GESNs) are a reservoir computing model for graphs, where node embeddings are recursively computed by an untrained message-passing function. In this paper, we show that GESNs can achieve a significantly better accuracy on six heterophilic node classification tasks without altering the graph connectivity, thus suggesting a different route for addressing the over-squashing problem.
12
+
13
+ § 1 CHALLENGES IN NODE CLASSIFICATION
14
+
15
+ Relations between entities, such as paper citations or links between web pages, can be best represented by graphs. Since the introduction of pioneering models such as Neural Network for Graphs [1] and Graph Neural Network [2], a plethora of neural models have been proposed to solve graph-, edge-, and node-level tasks [3-5], most of them sharing an architecture structured in layers that perform local aggregations of node features, e.g. graph convolution networks (GCNs) [6-8].
16
+
17
+ However, as the development of deep learning on graphs progressed, several challenges preventing the computation of effective node representations have emerged. Li et al. [9] first presented over-smoothing as an issue by analysing the accuracy decay as the number of layers increases in deep graph convolutional networks on semi-supervised node classification tasks. Oono and Suzuki [10] showed that repeated applications of a GCN layer cause the node representations to asymptotically converge to a low-frequency subspace of the graph spectrum. Furthermore, by acting as a low-pass filter, GCN representations are biased in favour of tasks whose graphs present a high degree of homophily, that is, nodes in the same neighbourhood share the same class [11]. In general, the inability to extract meaningful features in deeper layers for tasks that require discovering long-range relationships between nodes is called under-reaching. Alon and Yahav [12] maintain that one of its causes is over-squashing: the problem of encoding an exponentially growing receptive field [1] in a fixed-size node embedding dimension. Topping et al. [13] have provided theoretical insights into this issue by identifying over-squashing with the exponential decrease in sensitivity of node representations to the input features on distant nodes as the number of layers increases. For example, a GCN model [8] computes the representation ${\mathbf{h}}_{v}^{\left( \ell \right) } \in {\mathbb{R}}^{H}$ of node $v$ in layer $\ell$ as the aggregation of previous-layer features in neighbouring nodes ${v}^{\prime } \in \mathcal{N}\left( v\right)$ , i.e.
18
+
19
+ $$
20
+ {\mathbf{h}}_{v}^{\left( \ell \right) } = \operatorname{relu}\left( {\mathop{\sum }\limits_{{{v}^{\prime } \in \mathcal{N}\left( v\right) }}{\widehat{\mathbf{A}}}_{v,{v}^{\prime }}{\mathbf{W}}^{\left( \ell \right) }{\mathbf{h}}_{{v}^{\prime }}^{\left( \ell - 1\right) }}\right) , \tag{1}
21
+ $$
22
+
23
+ with $\widehat{\mathbf{A}}$ as the normalized graph adjacency matrix and input node features ${\mathbf{x}}_{v} \in {\mathbb{R}}^{X}$ in layer $\ell = 1$ . The sensitivity of ${\mathbf{h}}_{v}^{\left( \ell \right) }$ to the input ${\mathbf{x}}_{{v}^{\prime }}$ , assuming that there exists an $\ell$ -path between nodes $v$ and ${v}^{\prime }$ , is upper bounded by
+
+ $$
+ \begin{Vmatrix}\frac{\partial {\mathbf{h}}_{v}^{\left( \ell \right) }}{\partial {\mathbf{x}}_{{v}^{\prime }}}\end{Vmatrix} \leq \left( {\mathop{\prod }\limits_{{k = 1}}^{\ell }\begin{Vmatrix}{\mathbf{W}}^{\left( k\right) }\end{Vmatrix}}\right) {\left( {\widehat{\mathbf{A}}}^{\ell }\right) }_{v,{v}^{\prime }}. \tag{2}
+ $$
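For reference, a GCN layer as in equation (1) can be sketched in a few lines of numpy; the symmetric normalization with added self-loops is the usual convention of [8], and all names here are our own.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2} with self-loops,
    # the common choice for the normalized adjacency A-hat in GCN.
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_hat, H_prev, W):
    # One layer of equation (1): aggregate previous-layer features weighted
    # by A-hat, apply the layer weights, then the relu non-linearity.
    return np.maximum(A_hat @ H_prev @ W.T, 0.0)
```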
26
+
27
+ Submitted to the First Learning on Graphs Conference (LoG 2022). Do not distribute.
28
+
29
+ Topping et al. [13] have further investigated the connection of over-squashing - as measured by the Jacobian of node representations in (2) - with the graph topology via the term ${\left( {\widehat{\mathbf{A}}}^{\ell }\right) }_{v,{v}^{\prime }}$ , and have identified in negative local graph curvature the cause of 'bottlenecks' in message propagation. In order to remove these bottlenecks, they have proposed rewiring the input graph, i.e. altering the original set of edges as a preprocessing step, via Stochastic Discrete Ricci Flow (SDRF). This method works by iteratively adding an edge to support the most negatively-curved edge while removing the most positively-curved one according to the balanced Forman curvature [13], until convergence or a maximum number of iterations is reached. This rewiring approach can be contrasted with, e.g., Graph Diffusion Convolution (DIGL) [14], which aims to address the problem of noisy edges in the input graph by altering the connectivity according to a generalized graph diffusion process, such as personalized PageRank (PPR). Since DIGL has a smoothing effect on the graph adjacency - by promoting connectivity between nodes that are within a short diffusion distance - it may be more suitable for tasks that present a high degree of homophily [13], i.e. graphs with a high ratio of intra-class edges [11].
30
+
31
+ In our opinion, equation (2) instead suggests a different method of addressing the exponentially vanishing sensitivity in deeper layers, by acting on the layers’ Lipschitz constants $\begin{Vmatrix}{\mathbf{W}}^{\left( l\right) }\end{Vmatrix}$ . In the next section, we present a model for computing node embeddings in which Lipschitz constants can be explicitly chosen as part of the hyper-parameter selection. This will enable an experimental comparison between the two approaches in Section 3.
32
+
33
+ § 2 RESERVOIR COMPUTING FOR GRAPHS
34
+
35
+ Reservoir computing [15-17] is a paradigm for the efficient design of recurrent neural networks (RNNs). Input data is encoded by a randomly initialized reservoir, while only the readout layer for downstream task predictions requires training. Reservoir computing models, in particular Echo State Networks (ESNs) [18], have been studied in order to obtain insights into the architectural bias of RNNs [19, 20].
36
+
37
+ Graph Echo State Networks (GESNs) have been introduced by Gallicchio and Micheli [21], extending the reservoir computing paradigm to graph-structured data. GESNs have already demonstrated their effectiveness in graph-level classification tasks [22], and more recently in node-level classification tasks [23], in particular when the underlying graphs present low homophily. Node embeddings are recursively computed by the non-linear dynamical system
38
+
39
+ $$
40
+ {\mathbf{h}}_{v}^{\left( k\right) } = \tanh \left( {{\mathbf{W}}_{\text{ in }}{\mathbf{x}}_{v} + \mathop{\sum }\limits_{{{v}^{\prime } \in \mathcal{N}\left( v\right) }}\widehat{\mathbf{W}}{\mathbf{h}}_{{v}^{\prime }}^{\left( k - 1\right) }}\right) ,\;{\mathbf{h}}_{v}^{\left( 0\right) } = \mathbf{0}, \tag{3}
41
+ $$
42
+
43
+ where ${\mathbf{W}}_{\text{ in }} \in {\mathbb{R}}^{H \times X}$ and $\widehat{\mathbf{W}} \in {\mathbb{R}}^{H \times H}$ are the input-to-reservoir and the recurrent weights, respectively, for a reservoir with $H$ units (input bias is omitted). Equation (3) is iterated over $k$ until the system state converges to fixed point ${\mathbf{h}}_{v}^{\left( \infty \right) }$ , which is used as the embedding. For node classification tasks, a linear readout is applied to node embeddings ${\mathbf{y}}_{v} = {\mathbf{W}}_{\text{ out }}{\mathbf{h}}_{v}^{\left( \infty \right) } + {\mathbf{b}}_{\text{ out }}$ , where the weights ${\mathbf{W}}_{\text{ out }} \in {\mathbb{R}}^{C \times H},{\mathbf{b}}_{\text{ out }} \in {\mathbb{R}}^{C}$ are trained by ridge regression on one-hot encodings of target classes ${y}_{v}$ .
44
+
45
+ The existence of a fixed point is guaranteed by the Graph Embedding Stability (GES) property [22], which also guarantees independence from the system’s initial state ${\mathbf{h}}_{v}^{\left( 0\right) }$ . A sufficient condition for the GES property is requiring the transition function defined in (3) to be contractive, i.e. to have Lipschitz constant $\parallel \widehat{\mathbf{W}}\parallel \parallel \mathbf{A}\parallel < 1$ . In standard reservoir computing practice, however, the recurrent weights are initialized according to a necessary condition [24] for the GES property, which is $\rho \left( \widehat{\mathbf{W}}\right) < 1/\alpha$ , where $\rho \left( \cdot \right)$ denotes the spectral radius of a matrix, i.e. its largest absolute eigenvalue, and $\alpha = \rho \left( \mathbf{A}\right)$ is the graph spectral radius. This condition provides the best estimate of the system bifurcation point, i.e. the threshold beyond which (3) becomes asymptotically unstable [24]. Reservoir weights are randomly initialized from a uniform distribution in $\left\lbrack {-1,1}\right\rbrack$ , and then rescaled to the desired input scaling and reservoir spectral radius, without requiring any training.
46
+
47
+ A recent work by Tortorella and Micheli [23] has shown that in tasks where the graph structure is relevant to the prediction, the best node embeddings are computed well beyond the stability threshold; in this case, the number of iterations of (3) is fixed to a constant $K$ . The $K$ iterations of (3) can be interpreted as equivalent to $K$ graph convolution layers with weights shared among layers and input skip connections. Since the spectral radius is a lower bound for the spectral norm [25], i.e. $\rho \left( \widehat{\mathbf{W}}\right) \leq \parallel \widehat{\mathbf{W}}\parallel$ , increasing $\rho \left( \widehat{\mathbf{W}}\right)$ allows us to increase the Lipschitz constant of (3). This in turn should allow us to counteract the exponentially vanishing sensitivity in (2) caused by topological bottlenecks with the factor $\parallel \widehat{\mathbf{W}}{\parallel }^{K}$ , which grows with the number of iterations (unfolded recursive layers) if $\parallel \widehat{\mathbf{W}}\parallel > 1$ .
48
+
49
+ Table 1: Average test accuracy with ${95}\%$ confidence intervals (best results in bold). Except for GESN, the other results are reported from [13].
50
+
51
+ Cornell Texas Wisconsin Chameleon Squirrel Actor
+
+ None ${52.69}_{\pm {0.21}}$ ${61.19}_{\pm {0.49}}$ ${54.60}_{\pm {0.86}}$ ${41.80}_{\pm {0.41}}$ ${39.83}_{\pm {0.14}}$ ${28.70}_{\pm {0.09}}$
+
+ Undirected ${53.20}_{\pm {0.53}}$ ${63.38}_{\pm {0.87}}$ ${51.37}_{\pm {1.15}}$ ${42.63}_{\pm {0.30}}$ ${40.77}_{\pm {0.16}}$ ${28.10}_{\pm {0.11}}$
+
+ +FA ${58.29}_{\pm {0.49}}$ ${64.82}_{\pm {0.29}}$ ${55.48}_{\pm {0.62}}$ ${42.33}_{\pm {0.17}}$ ${40.74}_{\pm {0.13}}$ ${28.68}_{\pm {0.16}}$
+
+ DIGL (PPR) ${58.26}_{\pm {0.50}}$ ${62.03}_{\pm {0.43}}$ ${49.53}_{\pm {0.27}}$ ${42.02}_{\pm {0.13}}$ ${34.38}_{\pm {0.11}}$ ${30.79}_{\pm {0.10}}$
+
+ DIGL + Undir. ${59.54}_{\pm {0.64}}$ ${63.54}_{\pm {0.38}}$ ${52.23}_{\pm {0.54}}$ ${42.68}_{\pm {0.12}}$ ${33.36}_{\pm {0.21}}$ ${29.71}_{\pm {0.11}}$
+
+ SDRF ${54.60}_{\pm {0.39}}$ ${64.46}_{\pm {0.38}}$ ${55.51}_{\pm {0.27}}$ ${43.75}_{\pm {0.31}}$ ${40.97}_{\pm {0.14}}$ ${29.70}_{\pm {0.13}}$
+
+ SDRF + Undir. ${57.54}_{\pm {0.34}}$ ${70.35}_{\pm {0.60}}$ ${61.55}_{\pm {0.86}}$ ${44.46}_{\pm {0.17}}$ ${41.47}_{\pm {0.21}}$ ${29.85}_{\pm {0.07}}$
+
+ GESN ${\mathbf{{69.75}}}_{\pm {1.11}}$ ${\mathbf{{73.96}}}_{\pm {1.45}}$ ${\mathbf{{77.76}}}_{\pm {1.68}}$ ${\mathbf{{50.19}}}_{\pm {0.65}}$ ${\mathbf{{42.70}}}_{\pm {0.29}}$ ${\mathbf{{35.07}}}_{\pm {0.24}}$
80
+
81
+ § 3 EXPERIMENTS AND DISCUSSION
82
+
83
+ In this section, we compare the accuracy of GESNs on six low-homophily node classification tasks against different rewiring mechanisms applied in conjunction with fully-trained GCNs. In our experiments we follow the same setting and training/validation/test splits of [13, 14], reporting the average accuracy with ${95}\%$ confidence intervals on 1000 test bootstraps. As in [23], the hyper-parameters selected on the validation split for GESN are the reservoir radius ${\rho }^{1}$ in the range $\left\lbrack {{0.1},{35}}\right\rbrack$ , the input scaling in the range $\left\lbrack {\frac{1}{320},1}\right\rbrack$ , the number of units $H$ in the range $\left\lbrack {{2}^{4},{2}^{12}}\right\rbrack$ , and the readout regularization; the number of iterations is fixed at $K = {100}$ . In Table 1 we compare the accuracy of GESN against the +FA rewiring method by Alon and Yahav [12], the diffusion-based rewiring method DIGL (with PPR) by Gasteiger et al. [14], and the curvature-based graph rewiring method SDRF by Topping et al. [13] (for details on these models' hyper-parameters, we refer to [13], from which the experimental results are taken). We observe that GESN beats the other models by a significant margin on all six tasks. Indeed, DIGL and SDRF offer improvements over the baseline GCN of a few accuracy points on average, usually also requiring that the graph be made undirected. In contrast, GESN improves up to ${16}\%$ over the best rewiring methods, and by 4-6 points on average. Notice also that rewiring algorithms, in particular SDRF, can be extremely costly and need careful tuning in model selection, in contrast to the efficiency of the reservoir computing approach, which forgoes both the preprocessing of input graphs and the training of the node embedding function. Indeed, just the preprocessing step of SDRF can require computations ranging from the order of minutes to hours, while a complete GESN model can be obtained in a few seconds' time on the same GPU.
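+ For context on the efficiency claim: the only trained component of a GESN is the linear readout, which can be fit in closed form by ridge regression on the (untrained) node embeddings. A minimal sketch, with an illustrative helper name and regularization value:

```python
import numpy as np

def train_readout(X, Y, reg=1e-3):
    # Closed-form ridge regression: W = (X^T X + reg * I)^{-1} X^T Y.
    # X: (n, d) node embeddings, Y: (n, c) one-hot targets. The readout is
    # the only trained part; `reg` stands in for the readout regularization
    # selected in model selection (value here is illustrative).
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ Y)
```

+ Solving one regularized linear system replaces the iterative gradient-based training a fully-trained GCN would need.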
84
+
85
+ As a further insight, in Figure 1 we present the t-SNE plots of node embeddings of the Cora graph computed at different iterations of (3) with reservoir radius set at ${\rho \alpha } = 6$ . In GESNs, the iterations of the recursive transition function can be interpreted as equivalent to layers in deep message-passing graph networks where weights are shared among layers, in analogy with the unrolling of RNNs for sequences. We observe that, instead of the collapse of node representations that has been shown in Li et al. [9] and subsequent works on the over-smoothing issue, node embeddings become more and more separable as the number of iterations increases. This observation, in conjunction with the accuracy results of Table 1 and of [23], suggests that the contractivity of the message-passing function is the critical factor in addressing the degradation of accuracy in deep graph neural networks. Indeed, tuning the layer contractivity was implicitly done by Chen et al. [26] via a regularization term that favors larger pairwise distances of node representations as a means to address the over-smoothing problem.
86
+
87
+ ${}^{1}$ Indirectly controlling how large the Lipschitz constant of (3) should be.
88
+
89
+
90
+
91
+ Figure 1: Node embeddings for the Cora graph at different iterations $k\left( {{\rho \alpha } = 6,{4096}\text{ units }}\right)$ . Colors in the t-SNE plots represent different node classes, qualitatively showing how well separable the node representations are.
92
+
93
+ § 4 CONCLUSION
94
+
95
+ Motivated by the analysis of over-squashing via sensitivity to input features advanced by Topping et al. [13], we have proposed a different route to address this issue, which affects the capability of deep graph neural networks to learn effective node representations. Instead of altering the input graph connectivity, as rewiring methods such as SDRF and DIGL propose, we have shown that a model able to select a suitable Lipschitz constant for its graph convolution can achieve significantly better accuracy on six node classification tasks with low homophily, even computing the node embeddings in a completely unsupervised and untrained fashion. Future work will involve investigating how the change in Lipschitz constant affects the organization of the node embedding space, and assessing the merit of transferring those results to fully-trained graph convolution models via a regularization term or via constraints on the layers' weights.
papers/LOG/LOG 2022/LOG 2022 Conference/wY_IYhh6pqj/Initial_manuscript_md/Initial_manuscript.md ADDED
The diff for this file is too large to render. See raw diff
 
papers/LOG/LOG 2022/LOG 2022 Conference/wY_IYhh6pqj/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,325 @@
 
1
+ § WEISFEILER AND LEMAN GO RELATIONAL
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Knowledge graphs, modeling multi-relational data, improve numerous applications such as question answering or graph logical reasoning. Many graph neural networks for such data emerged recently, often outperforming shallow architectures. However, the design of such multi-relational graph neural networks is ad-hoc, driven mainly by intuition and empirical insights. Up to now, their expressivity, their relation to each other, and their (practical) learning performance are poorly understood. Here, we initiate the study of deriving a more principled understanding of multi-relational graph neural networks. Namely, we investigate the limitations in the expressive power of the well-known Relational GCN and Compositional GCN architectures and shed some light on their practical learning undertaking. By aligning both architectures with a suitable version of the Weisfeiler-Leman test, we establish under which conditions both models have the same expressive power in distinguishing non-isomorphic (multi-relational) graphs or nodes with different structural roles. Further, by leveraging recent progress in designing expressive graph neural networks, we introduce the $k$ -RN architecture that provably overcomes the expressiveness limitations of the above two architectures. Empirically, we confirm our theoretical findings in a node classification setting over small and large multi-relational graphs.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Recently, GNNs [1, 2] emerged as the most prominent graph representation learning architecture. Notable instances of this architecture include, e.g., Duvenaud et al. [3], Hamilton et al. [4], and Veličković et al. [5], which can be subsumed under the message-passing framework introduced in Gilmer et al. [1]. In parallel, approaches based on spectral information were introduced in, e.g., Defferrard et al. [6], Bruna et al. [7], Kipf and Welling [8], and Monti et al. [9]—all of which descend from early work in Scarselli et al. [2], Baskin et al. [10], Kireev [11], Micheli and Sestito [12], Merkwirth and Lengauer [13], Micheli [14] and Sperduti and Starita [15].
16
+
17
+ By now, we have a deep understanding of the expressive power of GNNs [16]. To start with, connections between GNNs and Weisfeiler-Leman type algorithms have been shown. Specifically, Morris et al. [17] and Xu et al. [18] showed that the 1-WL limits the expressive power of any possible GNN architecture in terms of distinguishing non-isomorphic graphs. In turn, these results have been generalized to the $k$ -WL, see, e.g., Morris et al. [17], Azizian and Lelarge [19], Geerts et al. [20], Geerts [21], Maron et al. [22], Morris et al. [23, 24], and connected to permutation-equivariant function approximation over graphs, see, e.g., Chen et al. [25], Geerts and Reutter [26], Maehara and NT [27]. Barceló et al. [28] further established an equivalence between the expressiveness of GNNs with readout functions and ${\mathrm{C}}^{2}$ , the 2-variable fragment of first-order logic with counting quantifiers.
18
+
19
+ Most previous works focus on graphs that admit labels on nodes but not edges. However, knowledge or multi-relational graphs, which admit labels on both nodes and edges, play a crucial role in numerous applications, such as complex question answering in NLP [29] or visual question answering [30] in the intersection of NLP and vision. To extract the rich information encoded in the graph's multi-relational structure and its annotations, the knowledge graph community has proposed a large set of relational GNN architectures, e.g., [31-33], tailored toward knowledge or multi-relational graphs, targeting tasks such as node and link prediction [31, 33, 34]. Notably, Schlichtkrull et al. [31] proposed the first architecture, namely R-GCN, able to handle multi-relational data. Further, Vashishth et al. [32] proposed an alternative GNN architecture, CompGCN, using fewer parameters, and reported improved empirical performance. In the knowledge graph reasoning area, R-GCN and CompGCN, being strong baselines, spun off numerous improved GNNs for node classification and transductive link prediction tasks [35-37]. They also inspired architectures for more complex reasoning tasks such as inductive link prediction [34, 38-40] and query answering [41-43].
20
+
21
+ Although these approaches show meaningful empirical performance, their limitations in extracting relevant structural information, their learning performance, and their relation to each other are not understood well. For example, there is no understanding of these approaches' inherent limitations in distinguishing between knowledge graphs with different structural features, explicitly considering the unique properties of multi-relational graphs. Hence, a thorough theoretical investigation of multi-relational GNNs' expressive power and learning performance is yet to be established to become meaningful, vital components in today's knowledge graph reasoning pipeline.
22
+
23
+ Present work. Here, we initiate the study on deriving a principled understanding of the capabilities of GNNs for knowledge or multi-relational graphs. More concretely:
24
+
25
+ * We investigate the expressive power of two well-known GNNs for multi-relation data, Relational ${GCNs}$ (R-GCN) [31] and Compositional GCNs (CompGCN) [32]. We quantify their limitations by relating them to a suitable version of the established Weisfeiler-Leman graph isomorphism test [44]. In particular, we show under which conditions the above two architectures possess the same expressive power in distinguishing non-isomorphic, multi-relational graphs or nodes with different structural features.
26
+
27
+ * To overcome both architectures’ expressiveness limitations, we introduce the $k$ -RN architecture, which provably overcomes their limitations and show that increasing $k$ always leads to strictly more expressive architectures.
28
+
29
+ * Empirically, we confirm our theoretical findings on established small- and large-scale multi-relational node classification benchmarks.
30
+
31
+ See Appendix A.1 for an expanded discussion of related work.
32
+
33
+ § 2 PRELIMINARIES
34
+
35
+ As usual, let $\left\lbrack n\right\rbrack = \{ 1,\ldots ,n\} \subset \mathbb{N}$ for $n \geq 1$ , and let $\{ \{ \ldots \} \}$ denote a multiset.
36
+
37
+ An (undirected) graph $G$ is a pair $\left( {V\left( G\right) ,E\left( G\right) }\right)$ with a finite set of vertices $V\left( G\right)$ and a set of edges $E\left( G\right) \subseteq \{ \{ u,v\} \subseteq V \mid u \neq v\}$ . For notational convenience, we usually denote an edge $\{ u,v\}$ in $E\left( G\right)$ by $\left( {u,v}\right)$ or $\left( {v,u}\right)$ . We assume the usual definition of the adjacency matrix $\mathbf{A}$ of $G$ . A colored or labeled graph $G$ is a triple $\left( {V\left( G\right) ,E\left( G\right) ,\ell }\right)$ with a coloring or label function $\ell : V\left( G\right) \rightarrow \mathbb{N}$ . Then $\ell \left( w\right)$ is the color or label of $w$ , for $w$ in $V\left( G\right)$ . The neighborhood of $v$ in $V\left( G\right)$ is denoted by $N\left( v\right) = \{ u \in V\left( G\right) \mid \left( {v,u}\right) \in E\left( G\right) \} .$
38
+
39
+ An (undirected) multi-relational graph $G$ is a tuple $\left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) }\right)$ with a finite set of vertices $V\left( G\right)$ and relations ${R}_{i} \subseteq \{ \{ u,v\} \subseteq V\left( G\right) \mid u \neq v\}$ for $i$ in $\left\lbrack r\right\rbrack$ . The neighborhood of $v$ in $V\left( G\right)$ with respect to the relation ${R}_{i}$ is denoted by ${N}_{i}\left( v\right) = \left\{ {u \in V\left( G\right) \mid \left( {v,u}\right) \in {R}_{i}}\right\}$ . We define colored (or labeled) multi-relational graphs in the expected way.
40
+
41
+ Two graphs $G$ and $H$ are isomorphic $\left( {G \simeq H}\right)$ if there exists a bijection $\varphi : V\left( G\right) \rightarrow V\left( H\right)$ preserving the adjacency relation, i.e., $\left( {u,v}\right)$ in $E\left( G\right)$ if and only if $\left( {\varphi \left( u\right) ,\varphi \left( v\right) }\right)$ in $E\left( H\right)$ . We then call $\varphi$ an isomorphism from $G$ to $H$ . If the graphs have vertex labels, the isomorphism is additionally required to match these labels. In the case of multi-relational graphs $G$ and $H$ , the bijection $\varphi : V\left( G\right) \rightarrow V\left( H\right)$ needs to preserve all relations, i.e., $\left( {u,v}\right)$ is in ${R}_{i}\left( G\right)$ if and only if $\left( {\varphi \left( u\right) ,\varphi \left( v\right) }\right)$ is in ${R}_{i}\left( H\right)$ for each $i$ in $\left\lbrack r\right\rbrack$ . For labeled multi-relational graphs, the bijection needs to preserve the labels.
42
+
43
+ We define the atomic type atp: $V{\left( G\right) }^{k} \rightarrow \mathbb{N}$ such that $\operatorname{atp}\left( \mathbf{v}\right) = \operatorname{atp}\left( \mathbf{w}\right)$ for $\mathbf{v}$ and $\mathbf{w}$ in $V{\left( G\right) }^{k}$ if and only if the mapping $\varphi : V\left( G\right) \rightarrow V\left( G\right)$ where ${v}_{i} \mapsto {w}_{i}$ induces a partial isomorphism, i.e., ${v}_{i} = {v}_{j} \Leftrightarrow {w}_{i} = {w}_{j}$ and $\left( {{v}_{i},{v}_{j}}\right)$ in $E\left( G\right) \Leftrightarrow \left( {\varphi \left( {v}_{i}\right) ,\varphi \left( {v}_{j}\right) }\right)$ in $E\left( G\right) .$
44
+
45
+ The Weisfeiler-Leman Algorithm. The 1-dimensional Weisfeiler-Leman algorithm (1-WL), or color refinement, is a simple heuristic for the graph isomorphism problem, originally proposed by Weisfeiler and Leman [45]. ${}^{1}$ Intuitively, the algorithm determines if two graphs are non-isomorphic by iteratively coloring or labeling vertices. Given an initial coloring or labeling of the vertices of both graphs, e.g., their degree or application-specific information, in each iteration, two vertices with the same label get different labels if the number of identically labeled neighbors is not equal. If, after some iteration, the number of vertices annotated with a specific label is different in both graphs, the algorithm terminates and a stable coloring, inducing a vertex partition, is obtained. We can then conclude that the two graphs are not isomorphic. It is easy to see that the algorithm cannot distinguish all non-isomorphic graphs [47]. Nonetheless, it is a powerful heuristic that can successfully test isomorphism for a broad class of graphs [48-50].
46
+
47
+ Formally, let $G = \left( {V\left( G\right) ,E\left( G\right) ,\ell }\right)$ be a labeled graph. In each iteration, $t > 0$ , the 1-WL computes a vertex coloring ${C}^{\left( t\right) } : V\left( G\right) \rightarrow \mathbb{N}$ , which depends on the coloring of the neighbors. That is, in iteration $t > 0$ , we set
48
+
49
+ $$
50
+ {C}^{\left( t\right) }\left( v\right) \mathrel{\text{ := }} \operatorname{RELABEL}\left( \left( {C}^{\left( t - 1\right) }\left( v\right) ,\left\{ \left\{ {C}^{\left( t - 1\right) }\left( u\right) \mid u \in N\left( v\right) \right\} \right\} \right) \right) ,
51
+ $$
52
+
53
+ where RELABEL injectively maps the above pair to a unique natural number, which has not been used in previous iterations. In iteration 0, the coloring ${C}^{\left( 0\right) } \mathrel{\text{ := }} \ell$ . To test if two graphs $G$ and $H$ are non-isomorphic, we run the above algorithm in "parallel" on both graphs. If the two graphs have a different number of vertices colored $c$ in $\mathbb{N}$ at some iteration, the 1-WL distinguishes the graphs as non-isomorphic. Moreover, if the number of colors between two iterations, $t$ and $\left( {t + 1}\right)$ , does not change, i.e., the cardinalities of the images of ${C}^{\left( t\right) }$ and ${C}^{\left( t + 1\right) }$ are equal, or, equivalently,
54
+
55
+ $$
56
+ {C}^{\left( t\right) }\left( v\right) = {C}^{\left( t\right) }\left( w\right) \Leftrightarrow {C}^{\left( t + 1\right) }\left( v\right) = {C}^{\left( t + 1\right) }\left( w\right) ,
57
+ $$
58
+
59
+ for all vertices $v$ and $w$ in $V\left( G\right)$ , the algorithm terminates. For such $t$ , we define the stable coloring ${C}^{\infty }\left( v\right) = {C}^{\left( t\right) }\left( v\right)$ for $v$ in $V\left( G\right)$ . The stable coloring is reached after at most $\max \{ \left| {V\left( G\right) }\right| ,\left| {V\left( H\right) }\right| \}$ iterations [51].
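+ The refinement loop defined above can be sketched directly; a minimal implementation on a single graph (the dictionary-based graph representation is an illustrative choice, not from the paper):

```python
def one_wl(adj, labels, max_iter=None):
    # 1-WL color refinement on a single graph.
    # adj: dict node -> list of neighbors; labels: dict node -> initial color.
    colors = dict(labels)
    for _ in range(max_iter or len(adj)):
        # Pair each vertex's color with the multiset of neighbor colors ...
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        # ... and injectively relabel the pairs (the RELABEL map).
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if len(set(new.values())) == len(set(colors.values())):
            return new  # number of colors unchanged: stable coloring
        colors = new
    return colors
```

+ On a path with three uniformly labeled vertices, for instance, one round separates the middle vertex (two neighbors) from the endpoints (one neighbor each), and the next round confirms stability.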
60
+
61
+ Due to the shortcomings of the 1-WL in distinguishing non-isomorphic graphs, several researchers, e.g., [52, 53], devised a more powerful generalization of the former, today known as the $k$ -dimensional Weisfeiler-Leman algorithm ( $k$ -WL); see Appendix A.2 for details.
62
+
63
+ Graph Neural Networks. Intuitively, GNNs learn a vectorial representation, i.e., a d-dimensional vector, representing each vertex in a graph by aggregating information from neighboring vertices. Formally, let $G = \left( {V\left( G\right) ,E\left( G\right) ,\ell }\right)$ be a labeled graph with initial vertex features ${\left( {\mathbf{h}}_{v}^{\left( 0\right) }\right) }_{v \in V\left( G\right) }$ in ${\mathbb{R}}^{d}$ that are consistent with $\ell$ , that is, ${\mathbf{h}}_{u}^{\left( 0\right) } = {\mathbf{h}}_{v}^{\left( 0\right) }$ if and only if $\ell \left( u\right) = \ell \left( v\right)$ , e.g., a one-hot encoding of the labelling $\ell$ . Alternatively, ${\left( {\mathbf{h}}_{v}^{\left( 0\right) }\right) }_{v \in V\left( G\right) }$ can be arbitrary vertex features annotating the vertices of $G$ .
64
+
65
+ A GNN architecture consists of a stack of neural network layers, i.e., a composition of permutation-invariant or -equivariant parameterized functions. Similarly to 1-WL, each layer aggregates local neighborhood information, i.e., the neighbors' features, around each vertex and then passes this aggregated information on to the next layer.
66
+
67
+ GNNs are often realized as follows [17]. In each layer, $t > 0$ , we compute vertex features
68
+
69
+ $$
70
+ {\mathbf{h}}_{v}^{\left( t\right) } \mathrel{\text{ := }} \sigma \left( {{\mathbf{h}}_{v}^{\left( t - 1\right) }{\mathbf{W}}_{0}^{\left( t\right) } + \mathop{\sum }\limits_{{w \in N\left( v\right) }}{\mathbf{h}}_{w}^{\left( t - 1\right) }{\mathbf{W}}_{1}^{\left( t\right) }}\right) \in {\mathbb{R}}^{e}, \tag{1}
71
+ $$
72
+
73
+ for $v$ in $V\left( G\right)$ , where ${\mathbf{W}}_{0}^{\left( t\right) }$ and ${\mathbf{W}}_{1}^{\left( t\right) }$ are parameter matrices from ${\mathbb{R}}^{d \times e}$ and $\sigma$ denotes an entry-wise non-linear function, e.g., a sigmoid or a ReLU function. ${}^{2}$ Following Gilmer et al. [1] and Scarselli et al. [2], in each layer, $t > 0$ , we can generalize the above by computing a vertex feature
74
+
75
+ $$
76
+ {\mathbf{h}}_{v}^{\left( t\right) } \mathrel{\text{ := }} {\operatorname{UPD}}^{\left( t\right) }\left( {{\mathbf{h}}_{v}^{\left( t - 1\right) },{\operatorname{AGG}}^{\left( t\right) }\left( \left\{ \left\{ {{\mathbf{h}}_{w}^{\left( t - 1\right) } \mid w \in N\left( v\right) }\right\} \right\} \right) }\right) ,
77
+ $$
78
+
79
+ ${}^{1}$ Strictly speaking, 1-WL and color refinement are two different algorithms. That is, 1-WL considers neighbors and non-neighbors to update the coloring, resulting in a slightly higher expressive power when distinguishing vertices in a given graph; see Grohe [46] for details. For brevity, we consider both algorithms to be equivalent.
80
+
81
+ ${}^{2}$ For clarity of presentation, we omit biases.
82
+
83
+ where ${\mathrm{{UPD}}}^{\left( t\right) }$ and ${\mathrm{{AGG}}}^{\left( t\right) }$ may be differentiable parameterized functions, e.g., neural networks. ${}^{3}$ In the case of graph-level tasks, e.g., graph classification, one uses
84
+
85
+ $$
86
+ {\mathbf{h}}_{G} \mathrel{\text{ := }} \operatorname{READOUT}\left( \left\{ \left\{ {{\mathbf{h}}_{v}^{\left( T\right) } \mid v \in V\left( G\right) }\right\} \right\} \right) ,
87
+ $$
88
+
89
+ to compute a single vectorial representation based on learned vertex features after iteration $T$ . Again, READOUT may be a differentiable parameterized function. To adapt the parameters of the above three functions, they are optimized end-to-end, usually through a variant of stochastic gradient descent, e.g., [54], together with the parameters of a neural network used for classification or regression.
90
+
91
+ Graph neural networks for multi-relational graphs. In the following, we describe GNN layers for multi-relational graphs, namely R-GCN [31] and CompGCN [32]. Initial features are computed in the same way as in the previous subsection.
92
+
93
+ R-GCN. Let $G$ be a labeled multi-relational graph. In essence, R-GCN generalizes Equation (1) by using an additional sum iterating over the different relations. That is, we compute a vertex feature
94
+
95
+ $$
96
+ {\mathbf{h}}_{v,\text{ R-GCN }}^{\left( t\right) } \mathrel{\text{ := }} \sigma \left( {{\mathbf{h}}_{v,\text{ R-GCN }}^{\left( t - 1\right) }{\mathbf{W}}_{0}^{\left( t\right) } + \mathop{\sum }\limits_{{i \in \left\lbrack r\right\rbrack }}\mathop{\sum }\limits_{{w \in {N}_{i}\left( v\right) }}{\mathbf{h}}_{w,\text{ R-GCN }}^{\left( t - 1\right) }{\mathbf{W}}_{i}^{\left( t\right) }}\right) \in {\mathbb{R}}^{e}, \tag{2}
97
+ $$
98
+
99
+ for $v$ in $V\left( G\right)$ , where ${\mathbf{W}}_{0}^{\left( t\right) }$ and ${\mathbf{W}}_{i}^{\left( t\right) }$ for $i$ in $\left\lbrack r\right\rbrack$ are parameter matrices from ${\mathbb{R}}^{d \times e}$ , and $\sigma$ denotes an entry-wise non-linear function. We note here that the original R-GCN layer defined in [31] uses a mean operation instead of a sum in the innermost sum of Equation (2). We investigate the empirical differences between these two layer variants in Section 5.
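+ A minimal NumPy sketch of the layer in Equation (2), using sum aggregation and one dense adjacency matrix per relation (function and argument names are illustrative):

```python
import numpy as np

def rgcn_layer(H, A_rels, W0, W_rels, act=lambda Z: np.maximum(Z, 0.0)):
    # One R-GCN layer as in Equation (2), with sum aggregation:
    #   h_v <- sigma(h_v W0 + sum_i sum_{w in N_i(v)} h_w W_i).
    # H: (n, d) vertex features; A_rels: one (n, n) adjacency per relation;
    # W0, W_rels[i]: (d, e) parameter matrices; act plays the role of sigma.
    Z = H @ W0
    for A_i, W_i in zip(A_rels, W_rels):
        Z = Z + A_i @ H @ W_i
    return act(Z)
```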
100
+
101
+ CompGCN. Let $G$ be a labeled multi-relational graph. A CompGCN layer generalizes Equation (1) by encoding relational information as edge features. That is, we compute a vertex feature
102
+
103
+ $$
104
+ {\mathbf{h}}_{v,\text{ CompGCN }}^{\left( t\right) } \mathrel{\text{ := }} \sigma \left( {{\mathbf{h}}_{v,\text{ CompGCN }}^{\left( t - 1\right) }{\mathbf{W}}_{0}^{\left( t\right) } + \mathop{\sum }\limits_{{i \in \left\lbrack r\right\rbrack }}\mathop{\sum }\limits_{{w \in {N}_{i}\left( v\right) }}\phi \left( {{\mathbf{h}}_{w,\text{ CompGCN }}^{\left( t - 1\right) },{\mathbf{z}}_{i}^{\left( t\right) }}\right) {\mathbf{W}}_{1}^{\left( t\right) }}\right) \in {\mathbb{R}}^{e}, \tag{3}
105
+ $$
106
+
107
+ for $v$ in $V\left( G\right)$ , where ${\mathbf{W}}_{0}^{\left( t\right) }$ and ${\mathbf{W}}_{1}^{\left( t\right) }$ are parameter matrices from ${\mathbb{R}}^{d \times e}$ and ${\mathbb{R}}^{c \times e}$ , respectively, and ${\mathbf{z}}_{i}^{\left( t\right) }$ in ${\mathbb{R}}^{b}$ is the learned edge feature for the $i$ -th relation at layer $t$ . Further, $\phi : {\mathbb{R}}^{d} \times {\mathbb{R}}^{b} \rightarrow {\mathbb{R}}^{c}$ is a composition map, mapping two vectors onto a single vector in a non-parametric way, e.g., summation, point-wise multiplication, or concatenation. We note here that the original CompGCN layer defined in [32] uses an additional sum to differentiate between in-going and out-going edges and self loops, see Appendix E for details.
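+ A matching sketch of Equation (3) with point-wise multiplication as the composition map $\phi \left( {\mathbf{h}}_{w},{\mathbf{z}}_{i}\right) = {\mathbf{h}}_{w} \odot {\mathbf{z}}_{i}$ , one of the composition functions studied in [32]; for simplicity it uses a single shared ${\mathbf{W}}_{1}$ and omits the in/out/self-loop distinction of the original layer:

```python
import numpy as np

def compgcn_layer(H, A_rels, W0, W1, z_rels, act=lambda Z: np.maximum(Z, 0.0)):
    # One CompGCN layer as in Equation (3), with point-wise multiplication as
    # the composition map: phi(h_w, z_i) = h_w * z_i.
    # H: (n, d) vertex features; z_rels[i]: (d,) learned relation feature;
    # W0, W1: (d, e) parameter matrices shared across relations.
    Z = H @ W0
    for A_i, z_i in zip(A_rels, z_rels):
        Z = Z + A_i @ (H * z_i) @ W1  # phi applied to every neighbor feature
    return act(Z)
```

+ Note that, unlike R-GCN, the per-relation parameters here are only the vectors `z_rels`, which is where the parameter saving comes from.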
108
+
109
+ § 3 RELATIONAL WEISFEILER-LEMAN ALGORITHM
110
+
111
+ In the following, to study the limitations in expressivity of the above two GNN layers, R-GCN and CompGCN, we define the multi-relational 1-WL (1-RWL). Let $G = \left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) ,\ell }\right)$ be a labeled, multi-relational graph. Then the 1-RWL computes a vertex coloring ${C}_{\mathrm{R}}^{\left( t\right) } : V\left( G\right) \rightarrow \mathbb{N}$ for $t > 0$ by interpreting the different relations as edge types, i.e.,
112
+
113
+ $$
114
+ {C}_{\mathrm{R}}^{\left( t\right) }\left( v\right) \mathrel{\text{ := }} \operatorname{RELABEL}\left( \left( {C}_{\mathrm{R}}^{\left( t - 1\right) }\left( v\right) ,\left\{ \left\{ \left( {C}_{\mathrm{R}}^{\left( t - 1\right) }\left( u\right) ,i\right) \mid i \in \left\lbrack r\right\rbrack ,u \in {N}_{i}\left( v\right) \right\} \right\} \right) \right) , \tag{4}
115
+ $$
116
+
117
+ for $v$ in $V\left( G\right)$ . In iteration 0, the coloring ${C}_{\mathrm{R}}^{\left( 0\right) } \mathrel{\text{ := }} \ell$ . In particular, two vertices $v$ and $w$ of the same color in iteration $\left( {t - 1}\right)$ get different colors in iteration $t$ if there is a relation ${R}_{i}$ such that the number of neighbors in ${N}_{i}\left( v\right)$ and ${N}_{i}\left( w\right)$ colored with a certain color is different. We define the stable coloring ${C}_{\mathrm{R}}^{\infty }$ in the expected way, analogously to the 1-WL.
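+ The 1-RWL differs from plain 1-WL only in that each neighbor color in the multiset is paired with the index of the relation through which the neighbor is reached. A minimal sketch (dictionary-based representation is an illustrative choice):

```python
def one_rwl(adjs, labels, max_iter=None):
    # Multi-relational 1-WL (Equation (4)).
    # adjs: one dict node -> list of neighbors per relation;
    # labels: dict node -> initial color for every node.
    nodes = set(labels)
    colors = dict(labels)
    for _ in range(max_iter or len(nodes)):
        # Multiset of (neighbor color, relation index) pairs per vertex.
        sig = {v: (colors[v],
                   tuple(sorted((colors[u], i)
                                for i, a in enumerate(adjs)
                                for u in a.get(v, []))))
               for v in nodes}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in nodes}
        if len(set(new.values())) == len(set(colors.values())):
            return new  # stable coloring reached
        colors = new
    return colors
```

+ For example, on a three-vertex path whose two edges belong to different relations, the two endpoints get different colors even though plain 1-WL would leave them indistinguishable.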
118
+
119
+ Relationship between 1-RWL, R-GCN, and CompGCN. Morris et al. [17] and Xu et al. [18] established the exact relationship between the expressive power of 1-WL and GNNs. In particular, 1-WL upper bounds the capacity of any GNN architecture for distinguishing nodes in graphs. In turn, over every graph $G$ there is a GNN architecture with the same expressive power as 1-WL for distinguishing nodes in $G$ . In this section, we show that the same relationship can be established between the multi-relational 1-WL, on the one hand, and the R-GCN and CompGCN architectures, on the other. Let $G = \left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) ,\ell }\right)$ be a labeled, multi-relational graph, and let
120
+
121
+ $$
122
+ {\mathbf{W}}_{\text{ R-GCN }}^{\left( t\right) } = {\left( {\mathbf{W}}_{0}^{\left( {t}^{\prime }\right) },{\mathbf{W}}_{i}^{\left( {t}^{\prime }\right) }\right) }_{{t}^{\prime } \leq t,i \in \left\lbrack r\right\rbrack }
123
+ $$
124
+
125
+ ${}^{3}$ Strictly speaking, Gilmer et al. [1] consider a slightly more general setting in which vertex features are computed by ${\mathbf{h}}_{v}^{\left( t + 1\right) } \mathrel{\text{ := }} {\operatorname{UPD}}^{\left( t + 1\right) }\left( {\mathbf{h}}_{v}^{\left( t\right) },{\operatorname{AGG}}^{\left( t + 1\right) }\left( \left\{ \left\{ \left( {\mathbf{h}}_{v}^{\left( t\right) },{\mathbf{h}}_{w}^{\left( t\right) },\ell \left( {v,w}\right) \right) \mid w \in N\left( v\right) \right\} \right\} \right) \right)$ .
126
+
127
+ denote the sequence of R-GCN parameters given by Equation (2) up to iteration $t$ . Analogously, we denote by
128
+
129
+ $$
130
+ {\mathbf{W}}_{\text{ CompGCN }}^{\left( t\right) } = {\left( {\mathbf{W}}_{0}^{\left( {t}^{\prime }\right) },{\mathbf{W}}_{1}^{\left( {t}^{\prime }\right) },{\mathbf{z}}_{i}^{\left( {t}^{\prime }\right) }\right) }_{{t}^{\prime } \leq t,i \in \left\lbrack r\right\rbrack }
131
+ $$
132
+
133
+ the sequence of CompGCN parameters given by Equation (3) up to iteration $t$ . We first show that the multi-relational 1-WL upper bounds the expressivity of both the R-GCN and CompGCN layers in terms of their capacity to distinguish nodes in labeled multi-relational graphs.
134
+
135
+ Theorem 1. Let $G = \left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) ,\ell }\right)$ be a labeled, multi-relational graph. Then for all $t \geq 0$ the following holds:
136
+
137
+ * For all choices of initial vertex features consistent with $\ell$ , sequences ${\mathbf{W}}_{R - {GCN}}^{\left( t\right) }$ of $R - {GCN}$ parameters, and nodes $v$ and $w$ in $V\left( G\right)$ ,
138
+
139
+ $$
140
+ {C}_{R}^{\left( t\right) }\left( v\right) = {C}_{R}^{\left( t\right) }\left( w\right) \Rightarrow {\mathbf{h}}_{v,R - {GCN}}^{\left( t\right) } = {\mathbf{h}}_{w,R - {GCN}}^{\left( t\right) }.
141
+ $$
142
+
143
+ * For all choices of initial vertex features consistent with $\ell$ , sequences ${\mathbf{W}}_{\text{ CompGCN }}^{\left( t\right) }$ of CompGCN parameters, composition functions $\phi$ , and nodes $v$ and $w$ in $V\left( G\right)$ ,
144
+
145
+ $$
146
+ {C}_{R}^{\left( t\right) }\left( v\right) = {C}_{R}^{\left( t\right) }\left( w\right) \Rightarrow {\mathbf{h}}_{v,\text{ CompGCN }}^{\left( t\right) } = {\mathbf{h}}_{w,\text{ CompGCN }}^{\left( t\right) }.
147
+ $$
148
+
149
+ Noticeably, the converse also holds. That is, there is a sequence of parameter matrices ${\mathbf{W}}_{\mathrm{R} - \mathrm{{GCN}}}^{\left( t\right) }$ such that R-GCN has the same expressive power in terms of distinguishing nodes in graphs as the coloring ${C}_{\mathrm{R}}^{\left( t\right) }$ . This equivalence holds provided the initial labels are encoded by linearly independent vertex features, e.g., using one-hot encodings. The result also holds for CompGCN as long as the composition map $\phi$ can express vector scaling, e.g., $\phi$ is point-wise multiplication or circular-correlation, two of the composition functions studied and implemented in the paper that introduced the CompGCN architecture [32].
150
+
151
+ Theorem 2. Let $G = \left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) ,\ell }\right)$ be a labeled, multi-relational graph. Then for all $t \geq 0$ the following holds:
152
+
153
+ * There are initial vertex features and a sequence ${\mathbf{W}}_{R - {GCN}}^{\left( t\right) }$ of parameters such that for all $v$ and $w$ in $V\left( G\right)$ ,
154
+
155
+ $$
156
+ {C}_{R}^{\left( t\right) }\left( v\right) = {C}_{R}^{\left( t\right) }\left( w\right) \Leftrightarrow {\mathbf{h}}_{v,R - {GCN}}^{\left( t\right) } = {\mathbf{h}}_{w,R - {GCN}}^{\left( t\right) }.
157
+ $$
158
+
159
+ * There are initial vertex features, a sequence ${\mathbf{W}}_{\text{ CompGCN }}^{\left( t\right) }$ of parameters and a composition function $\phi$ such that for all $v$ and $w$ in $V\left( G\right)$ ,
160
+
161
+ $$
162
+ {C}_{R}^{\left( t\right) }\left( v\right) = {C}_{R}^{\left( t\right) }\left( w\right) \Leftrightarrow {\mathbf{h}}_{v,\text{ CompGCN }}^{\left( t\right) } = {\mathbf{h}}_{w,\text{ CompGCN }}^{\left( t\right) }.
163
+ $$
164
+
165
+ On the choice of the composition function for CompGCN architectures. As Theorem 2 shows, the expressive power of the 1-RWL is matched by that of CompGCN architectures if we allow the latter to implement vector scaling in their composition functions. However, not all composition maps that have been considered in connection with CompGCN architectures admit such a possibility. Think, for instance, of natural composition maps such as point-wise summation or vector concatenation. Interestingly, we can show that CompGCN architectures equipped with these composition maps are provably weaker in terms of expressive power than the ones studied in the proof of Theorem 2, as they correspond to a weaker variant of the 1-WL that we define next.
166
+
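To make the gap concrete, here is a small numeric sketch (toy vectors of our own choosing, not from the paper): node $v$ sees neighbor features $a$ via relation 1 and $b$ via relation 2, while node $w$ sees the same neighbors with the relations swapped. Summation-based composition produces identical aggregated messages for $v$ and $w$, whereas point-wise multiplication (vector scaling) keeps them apart:

```python
import numpy as np

# hypothetical neighbor features and relation embeddings
a, b = np.array([1.0, 2.0]), np.array([3.0, 5.0])
z1, z2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# node v sees a via relation 1 and b via relation 2;
# node w sees a via relation 2 and b via relation 1
msg_v_add = (a + z1) + (b + z2)
msg_w_add = (a + z2) + (b + z1)

msg_v_mult = a * z1 + b * z2
msg_w_mult = a * z2 + b * z1

print(np.allclose(msg_v_add, msg_w_add))    # True: summation cannot separate v and w
print(np.allclose(msg_v_mult, msg_w_mult))  # False: multiplication can
```

With summation, each relation embedding contributes the same total $\sum_i \mathbf{z}_i$ regardless of which neighbor it is attached to, which is exactly the information the weak variant of the 1-WL below discards.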
167
+ Let $G = \left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) ,\ell }\right)$ be a labeled, multi-relational graph. The weak multi-relational 1-WL computes a vertex coloring ${C}_{\mathrm{{WR}}}^{\left( t\right) } : V\left( G\right) \rightarrow \mathbb{N}$ for $t > 0$ as follows:
168
+
169
+ $$
170
+ {C}_{\mathrm{{WR}}}^{\left( t\right) }\left( v\right) \mathrel{\text{ := }} \operatorname{RELABEL}\left( \left( {{C}_{\mathrm{{WR}}}^{\left( t - 1\right) }\left( v\right) ,\left\{ \left\{ {{C}_{\mathrm{{WR}}}^{\left( t - 1\right) }\left( u\right) \mid i \in \left\lbrack r\right\rbrack ,u \in {N}_{i}\left( v\right) }\right\} \right\} ,\left| {{N}_{1}\left( v\right) }\right| ,\ldots ,\left| {{N}_{r}\left( v\right) }\right| }\right) \right)
171
+ $$
172
+
173
+ for $v$ in $V\left( G\right)$ . In iteration 0, the coloring ${C}_{\mathrm{{WR}}}^{\left( 0\right) } \mathrel{\text{ := }} \ell$ . During aggregation, the weak variant does not take information about the relations into account. The only information relative to the different relations is the number of neighbors associated with each of them. We define the stable coloring ${C}_{\mathrm{{WR}}}^{\infty }$ analogously to the 1-WL. As it turns out, this variant is less powerful than the original one.
174
+
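The two colorings can be contrasted with a minimal sketch (a hypothetical four-node graph with two relations; `RELABEL` is realized by canonicalizing signatures to fresh integers). A single refinement step already separates $v$ from $w$ under the full coloring but not under the weak one; whether the stable weak coloring also merges them depends on the rest of the graph:

```python
# neighbors[v] = list of (relation index i, neighbor u); colors[v] = label of v
def relabel(signatures):
    canon = {}
    return {v: canon.setdefault(sig, len(canon)) for v, sig in signatures.items()}

def rwl_step(colors, neighbors):
    # multi-relational 1-WL: neighbor colors are tagged with their relation
    return relabel({v: (colors[v],
                        tuple(sorted((i, colors[u]) for i, u in nbrs)))
                    for v, nbrs in neighbors.items()})

def weak_rwl_step(colors, neighbors, r):
    # weak variant: untagged neighbor colors plus the per-relation degrees
    return relabel({v: (colors[v],
                        tuple(sorted(colors[u] for _, u in nbrs)),
                        tuple(sum(1 for i, _ in nbrs if i == j)
                              for j in range(1, r + 1)))
                    for v, nbrs in neighbors.items()})

colors = {'v': 0, 'w': 0, 'a': 1, 'b': 2}
neighbors = {'v': [(1, 'a'), (2, 'b')], 'w': [(2, 'a'), (1, 'b')],
             'a': [(1, 'v'), (2, 'w')], 'b': [(2, 'v'), (1, 'w')]}

c_r, c_wr = rwl_step(colors, neighbors), weak_rwl_step(colors, neighbors, r=2)
print(c_r['v'] != c_r['w'], c_wr['v'] == c_wr['w'])  # True True
```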
175
+ Proposition 3. There exist a labeled, multi-relational graph $G = \left( {V\left( G\right) ,{R}_{1}\left( G\right) ,{R}_{2}\left( G\right) ,\ell }\right)$ and two nodes $v$ and $w$ in $V\left( G\right)$ , such that ${C}_{R}^{\left( 1\right) }\left( v\right) \neq {C}_{R}^{\left( 1\right) }\left( w\right)$ but ${C}_{WR}^{\infty }\left( v\right) = {C}_{WR}^{\infty }\left( w\right)$ .
176
+
177
+ As shown next, the expressive power of CompGCN architectures that use point-wise summation or vector concatenation is captured by this weaker form of 1-RWL.
178
+
179
+ Theorem 4. Let $G = \left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) ,\ell }\right)$ be a labeled, multi-relational graph. Then for all $t \geq 0$ the following holds:
180
+
181
+ * For all choices of initial vertex features consistent with $\ell$ , sequences ${\mathbf{W}}_{\text{ CompGCN }}^{\left( t\right) }$ of CompGCN parameters, and nodes $v$ and $w$ in $V\left( G\right)$ ,
182
+
183
+ $$
184
+ {C}_{WR}^{\left( t\right) }\left( v\right) = {C}_{WR}^{\left( t\right) }\left( w\right) \Rightarrow {\mathbf{h}}_{v,\text{ CompGCN }}^{\left( t\right) } = {\mathbf{h}}_{w,\text{ CompGCN }}^{\left( t\right) },
185
+ $$
186
+
187
+ for either point-wise summation or concatenation as the composition map.
188
+
189
+ * There exist initial vertex features and a sequence ${\mathbf{W}}_{\text{ CompGCN }}^{\left( t\right) }$ of CompGCN parameters, such that for all nodes $v$ and $w$ in $V\left( G\right)$ ,
190
+
191
+ $$
192
+ {C}_{WR}^{\left( t\right) }\left( v\right) = {C}_{WR}^{\left( t\right) }\left( w\right) \Leftrightarrow {\mathbf{h}}_{v,\text{ CompGCN }}^{\left( t\right) } = {\mathbf{h}}_{w,\text{ CompGCN }}^{\left( t\right) },
193
+ $$
194
+
195
+ for either point-wise summation or concatenation as the composition map.
196
+
197
+ Together with Proposition 3 and Theorem 2, this result shows that CompGCN architectures based on vector summation or concatenation are provably weaker, in terms of their capacity to distinguish nodes in graphs, than those that use vector scaling.
198
+
199
+ We have shown that R-GCN and CompGCN with point-wise multiplication have the same expressive power in terms of distinguishing non-isomorphic multi-relational graphs or distinguishing nodes in a multi-relational graph. As it turns out, these two architectures actually define the same functions. A similar result holds between CompGCN with vector summation/subtraction and concatenation. See Appendix B.2 for details.
200
+
201
+ § 4 LIMITATIONS AND MORE EXPRESSIVE ARCHITECTURES
202
+
203
+ Theorem 1 shows that both R-GCN and CompGCN have severe limitations in distinguishing structurally different multi-relational graphs. Indeed, the following result shows that there exist pairs of non-isomorphic multi-relational graphs that neither R-GCN nor CompGCN can distinguish.
204
+
205
+ Proposition 5. For all $r \geq 1$ , there exists a pair of non-isomorphic graphs $G =$ $\left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) ,\ell }\right)$ and $H = \left( {V\left( H\right) ,{R}_{1}\left( H\right) ,\ldots ,{R}_{r}\left( H\right) ,\ell }\right)$ that cannot be distinguished by R-GCN or CompGCN.
206
+
207
+ We note here that the two graphs $G$ and $H$ from the above proposition can also be used to show that neither R-GCN nor CompGCN will be able to compute different features for nodes in $G$ and $H$ , making them indistinguishable. Hence, to overcome the limitations of CompGCN and R-GCN, we introduce local $k$ -order relational networks ( $k$ -RNs), leveraging recent progress in overcoming GNNs' inherent limitations in expressive power $\left\lbrack {{16},{17},{23},{24}}\right\rbrack$ . To do so, we first extend the local $k$ -dimensional Weisfeiler-Leman algorithm [23], see Appendix A.2, to multi-relational graphs.
208
+
209
+ Multi-relational local $k$ -WL. Given a multi-relational graph $G = \left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) ,\ell }\right)$ , we define the multi-relational atomic type ${\operatorname{atp}}_{r} : V{\left( G\right) }^{k} \rightarrow \mathbb{N}$ such that ${\operatorname{atp}}_{r}\left( \mathbf{v}\right) = {\operatorname{atp}}_{r}\left( \mathbf{w}\right)$ for $\mathbf{v}$ and $\mathbf{w}$ in $V{\left( G\right) }^{k}$ if and only if the mapping $\varphi : V\left( G\right) \rightarrow V\left( G\right)$ where ${v}_{j} \mapsto {w}_{j}$ for $j$ in $\left\lbrack k\right\rbrack$ induces a partial isomorphism, preserving the relations, i.e., we have ${v}_{p} = {v}_{q} \Leftrightarrow {w}_{p} = {w}_{q}$ and $\left( {{v}_{p},{v}_{q}}\right) \in {R}_{i}\left( G\right) \Leftrightarrow \left( {\varphi \left( {v}_{p}\right) ,\varphi \left( {v}_{q}\right) }\right) \in {R}_{i}\left( G\right)$ for $i$ in $\left\lbrack r\right\rbrack$ . The multi-relational local $k$ -WL ( $k$ -RLWL) computes ${C}_{k,r}^{\left( t\right) } : V{\left( G\right) }^{k} \rightarrow \mathbb{N}$ for $t \geq 0$ , where ${C}_{k,r}^{\left( 0\right) }\left( \mathbf{v}\right) \mathrel{\text{ := }} {\operatorname{atp}}_{r}\left( \mathbf{v}\right)$ , and refines a coloring ${C}_{k,r}^{\left( t\right) }$ (obtained after $t$ iterations of the $k$ -RLWL) via the aggregation function
210
+
211
+ $$
+ {M}_{r}^{\left( t\right) }\left( \mathbf{v}\right) \mathrel{\text{ := }} \left( \left\{ \left\{ \left( {{C}_{k,r}^{\left( t\right) }\left( {{\theta }_{1}\left( {\mathbf{v},w}\right) }\right) ,i}\right) \mid w \in {N}_{i}\left( {v}_{1}\right) \text{ and }i \in \left\lbrack r\right\rbrack \right\} \right\} ,\ldots ,\left\{ \left\{ \left( {{C}_{k,r}^{\left( t\right) }\left( {{\theta }_{k}\left( {\mathbf{v},w}\right) }\right) ,i}\right) \mid w \in {N}_{i}\left( {v}_{k}\right) \text{ and }i \in \left\lbrack r\right\rbrack \right\} \right\} \right) \tag{5}
+ $$
218
+
219
+ where ${\theta }_{j}\left( {\mathbf{v},w}\right) \mathrel{\text{ := }} \left( {{v}_{1},\ldots ,{v}_{j - 1},w,{v}_{j + 1},\ldots ,{v}_{k}}\right)$ . That is, ${\theta }_{j}\left( {\mathbf{v},w}\right)$ replaces the $j$ -th component of the tuple $\mathbf{v}$ with the vertex $w$ . Like the local $k$ -WL, the algorithm considers only the local neighbors, i.e., ${v}_{j}$ and $w$ must be adjacent, for each relation in each iteration, and it additionally distinguishes between the different relations. The coloring functions for the iterations of the $k$ -RLWL are then defined by
222
+
223
+ $$
224
+ {C}_{k,r}^{\left( t + 1\right) }\left( \mathbf{v}\right) \mathrel{\text{ := }} \left( {{C}_{k,r}^{\left( t\right) }\left( \mathbf{v}\right) ,{M}_{r}^{\left( t\right) }\left( \mathbf{v}\right) }\right) .
225
+ $$
226
+
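The refinement above can be sketched in a few lines for $k = 2$ (a hypothetical two-node, single-relation graph; the initial colors stand in for the multi-relational atomic types, and `RELABEL` canonicalizes signatures to integers):

```python
def theta(vtup, j, w):
    # theta_j(v, w): replace the j-th component (1-indexed) of the tuple with w
    return vtup[:j - 1] + (w,) + vtup[j:]

def k_rlwl_step(colors, neighbors, k):
    # colors: dict over all k-tuples; neighbors[u] = list of (relation i, w)
    sigs = {}
    for vtup in colors:
        per_position = []
        for j in range(1, k + 1):
            # local aggregation: w must be an i-neighbor of the j-th component
            ms = sorted((colors[theta(vtup, j, w)], i)
                        for i, w in neighbors[vtup[j - 1]])
            per_position.append(tuple(ms))
        sigs[vtup] = (colors[vtup], tuple(per_position))
    canon = {}
    return {t: canon.setdefault(s, len(canon)) for t, s in sigs.items()}

# two nodes joined by relation 1 in both directions
neighbors = {'a': [(1, 'b')], 'b': [(1, 'a')]}
# initial colors stand in for the multi-relational atomic types
colors = {('a', 'a'): 0, ('b', 'b'): 0, ('a', 'b'): 1, ('b', 'a'): 1}
refined = k_rlwl_step(colors, neighbors, k=2)
# diagonal tuples stay separated from off-diagonal ones after refinement
```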
227
+ In the following, we derive a neural architecture, the $k$ -RN, that has the same expressive power as the $k$ -RLWL in terms of distinguishing non-isomorphic multi-relational graphs.
228
+
229
+ The $k$ -RN architecture. Given a labeled, multi-relational graph $G$ , for each $k$ -tuple $\mathbf{v}$ in $V{\left( G\right) }^{k}$ , a $k$ -RN architecture computes an initial feature ${\mathbf{h}}_{\mathbf{v}}^{\left( 0\right) }$ consistent with its multi-relational atomic type, e.g., a one-hot encoding of ${\operatorname{atp}}_{r}\left( \mathbf{v}\right)$ . In each layer $t > 0$ , a $k$ -RN computes a $k$ -tuple feature
230
+
231
+ $$
+ {\mathbf{h}}_{\mathbf{v},k}^{\left( t\right) } \mathrel{\text{ := }} {\operatorname{UPD}}^{\left( t\right) }\left( {\mathbf{h}}_{\mathbf{v},k}^{\left( t - 1\right) },{\operatorname{AGG}}^{\left( t\right) }\left( \left\{ \left\{ \phi \left( {{\mathbf{h}}_{{\theta }_{1}\left( {\mathbf{v},w}\right) ,k}^{\left( t - 1\right) },{\mathbf{z}}_{i}^{\left( t\right) }}\right) \mid w \in {N}_{i}\left( {v}_{1}\right) \text{ and }i \in \left\lbrack r\right\rbrack \right\} \right\} ,\ldots ,\left\{ \left\{ \phi \left( {{\mathbf{h}}_{{\theta }_{k}\left( {\mathbf{v},w}\right) ,k}^{\left( t - 1\right) },{\mathbf{z}}_{i}^{\left( t\right) }}\right) \mid w \in {N}_{i}\left( {v}_{k}\right) \text{ and }i \in \left\lbrack r\right\rbrack \right\} \right\} \right) \right) \in {\mathbb{R}}^{e}, \tag{6}
+ $$
238
+
239
+ where the functions ${\mathrm{{UPD}}}^{\left( t\right) }$ and ${\mathrm{{AGG}}}^{\left( t\right) }$ for $t > 0$ may be differentiable parameterized functions, e.g., neural networks. Similarly to Equation (3), ${\mathbf{z}}_{i}^{\left( t\right) }$ in ${\mathbb{R}}^{b}$ is the learned edge feature for the $i$ th relation at layer $t$ and $\phi : {\mathbb{R}}^{d} \times {\mathbb{R}}^{b} \rightarrow {\mathbb{R}}^{c}$ is a composition map. In the case of graph-level tasks, e.g., graph classification, one uses
240
+
241
+ $$
242
+ {\mathbf{h}}_{G} \mathrel{\text{ := }} \operatorname{READOUT}\left( \left\{ \left\{ {{\mathbf{h}}_{\mathbf{v},k}^{\left( T\right) } \mid \mathbf{v} \in V{\left( G\right) }^{k}}\right\} \right\} \right) \in {\mathbb{R}}^{e}, \tag{7}
243
+ $$
244
+
245
+ to compute a single vectorial representation based on the learned $k$ -tuple features after iteration $T$ . The following result shows that the $k$ -RLWL upper-bounds the expressive power of any $k$ -RN in terms of distinguishing non-isomorphic graphs.
246
+
247
+ Proposition 6. Let $G = \left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) ,\ell }\right)$ be a labeled, multi-relational graph. Then for all $t \geq 0, r > 0, k \geq 1$ , all choices of ${\mathrm{{UPD}}}^{\left( t\right) }$ and ${\mathrm{{AGG}}}^{\left( t\right) }$ , and all $\mathbf{v}$ and $\mathbf{w}$ in $V{\left( G\right) }^{k}$ ,
248
+
249
+ $$
250
+ {C}_{k,r}^{\left( t\right) }\left( \mathbf{v}\right) = {C}_{k,r}^{\left( t\right) }\left( \mathbf{w}\right) \Rightarrow {\mathbf{h}}_{\mathbf{v},k}^{\left( t\right) } = {\mathbf{h}}_{\mathbf{w},k}^{\left( t\right) }.
251
+ $$
252
+
253
+ Moreover, we can also show the converse, resulting in the following proposition.
254
+
255
+ Proposition 7. Let $G = \left( {V\left( G\right) ,{R}_{1}\left( G\right) ,\ldots ,{R}_{r}\left( G\right) ,\ell }\right)$ be a labeled, multi-relational graph. Then for all $t \geq 0$ and $k \geq 1$ , there exist ${\mathrm{{UPD}}}^{\left( t\right) }$ and ${\mathrm{{AGG}}}^{\left( t\right) }$ such that for all $\mathbf{v}$ and $\mathbf{w}$ in $V{\left( G\right) }^{k}$ ,
256
+
257
+ $$
258
+ {C}_{k,r}^{\left( t\right) }\left( \mathbf{v}\right) = {C}_{k,r}^{\left( t\right) }\left( \mathbf{w}\right) \Leftrightarrow {\mathbf{h}}_{\mathbf{v},k}^{\left( t\right) } = {\mathbf{h}}_{\mathbf{w},k}^{\left( t\right) }.
259
+ $$
260
+
261
+ The following result implies that increasing $k$ leads to a strict boost in the expressive power of the $k$ -RLWL and $k$ -RN architectures in terms of distinguishing non-isomorphic multi-relational graphs.
262
+
263
+ Proposition 8. For $k \geq 2$ and $r \geq 1$ , there exists a pair of non-isomorphic multi-relational graphs ${G}_{r} = \left( {V\left( {G}_{r}\right) ,{R}_{1}\left( {G}_{r}\right) ,\ldots ,{R}_{r}\left( {G}_{r}\right) ,\ell }\right)$ and ${H}_{r} = \left( {V\left( {H}_{r}\right) ,{R}_{1}\left( {H}_{r}\right) ,\ldots ,{R}_{r}\left( {H}_{r}\right) ,\ell }\right)$ such that:
264
+
265
+ * For all choices of ${\mathrm{{UPD}}}^{\left( t\right) },{\mathrm{{AGG}}}^{\left( t\right) }$ , for $t > 0$ , and READOUT, the $k$ -RN architecture will not distinguish the graphs ${G}_{r}$ and ${H}_{r}$ .
266
+
267
+ * There exist ${\mathrm{{UPD}}}^{\left( t\right) },{\mathrm{{AGG}}}^{\left( t\right) }$ , for $t > 0$ , and READOUT such that the $\left( {k + 1}\right)$ -RN will distinguish them.
268
+
269
+ Moreover, the following result shows that for $k = 2$ the $k$ -RN architecture is strictly more expressive than CompGCN and R-GCN in distinguishing non-isomorphic graphs.
270
+
271
+ Corollary 9. There exists a 2-RN architecture that is strictly more expressive than the CompGCN and the R-GCN architecture in terms of distinguishing non-isomorphic graphs.
272
+
273
+ See Appendix C for discussion on scalability and node-level prediction with a $k$ -RN architecture.
274
+
275
276
+
277
+ Figure 1: Node classification performance of CompGCN and R-GCN on smaller (AIFB) and larger (AM) graphs. Initial vertex feature dimensions higher than 4 do not improve the accuracy.
278
+
279
+ § 5 EXPERIMENTAL STUDY
280
+
281
+ Here, we investigate to what extent the above theoretical results hold for real-world data distributions. Specifically, we aim to answer the following questions.
282
+
283
+ Q1 Does the theoretical equivalence of R-GCN and CompGCN hold in practice?
284
+
285
+ Q2 Does the performance depend on the dimension of node features?
286
+
287
+ Q3 Does CompGCN benefit from normalization and learnable edge weights?
288
+
289
+ Q4 Does the theoretical difference in composition functions of CompGCN hold in practice?
290
+
291
+ Datasets. To answer Q1 to Q4, we investigate R-GCN and CompGCN's empirical performance on the small-scale AIFB (6000 nodes) and the large-scale AM (1.6 million nodes) [55] vertex classification benchmark datasets; see Appendix F for dataset statistics.
292
+
293
+ Featurization. Most relational GNNs for vertex- and link-level tasks assume that the initial vertex states come from a learnable vertex embedding matrix [56, 57]. However, this vertex feature initialization or featurization method makes the model inherently transductive, i.e., the model must be re-trained when adding new vertices. Moreover, such an initialization strategy is incompatible with our Weisfeiler-Leman-based theoretical results since a learnable vertex embedding matrix will result in most initial node features being pair-wise different. Here, however, being faithful to the Weisfeiler-Leman formulation, we initialize all vertex features with the same $d$ -dimensional vector ${}^{4}$ , namely, a standard basis vector of ${\mathbb{R}}^{d}$ , e.g., $\left( {1,0,\ldots ,0}\right)$ in ${\mathbb{R}}^{d}$ . Relation-specific weight matrices in the case of R-GCN and edge features in the case of CompGCN are still learnable. We stress here that such a featurization strategy endows GNNs with inductive properties. Since we are using the same vertex feature initialization, we can perform inference on previously unseen vertices or graphs.
294
+
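A minimal sketch of this featurization (NumPy, our illustration rather than the paper's code): every vertex receives the same standard basis vector, so unseen vertices at inference time are featurized identically.

```python
import numpy as np

def init_vertex_features(num_nodes: int, d: int) -> np.ndarray:
    # all vertices share the basis vector (1, 0, ..., 0) of R^d,
    # matching the uniform initial coloring of the Weisfeiler-Leman algorithm
    h = np.zeros((num_nodes, d))
    h[:, 0] = 1.0
    return h

h = init_vertex_features(5, 4)
print(h.shape)               # (5, 4)
print(np.allclose(h, h[0]))  # True: all rows identical, so inference on new vertices is possible
```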
295
+ Implementation. We use the R-GCN and CompGCN implementations provided by the PyG framework [59]. The source code of all methods and evaluation procedures is available at https://www.github.com/ABC/XXX. ${}^{5}$ For the smaller AIFB dataset, both models use two GNN layers. For the larger AM dataset, R-GCN saturates with three layers. Following the theory, we do not use any basis decomposition of relation weights in R-GCN. We list other hyperparameters in Appendix F. We report averaged results of five independent runs using different random seeds. We conducted all experiments in full-batch mode on a single GPU using a Tesla V100 32 GB or RTX 8000.
296
+
297
+ Discussion. Probing R-GCN with different aggregations and CompGCN on the smaller AIFB (Figure 1a) and larger AM (Figure 1b) datasets, we largely confirm the theoretical hypothesis of their expressiveness equivalence (Q1) and observe similar performance of both GNNs. The higher variance on AIFB is due to the small test set size (36 nodes), i.e., one misclassified vertex drops accuracy by $\approx 3\%$ .
298
+
299
+ To test if increasing the input vertex feature dimensions leads to more expressive GNN architectures (Q2), we vary the initial vertex feature dimension in $\{ 2,4,8,\ldots ,{64},{128}\}$ on the smaller AIFB dataset (Figure 1a) and do not observe any significant differences starting from $d = 4$ and above. Having
300
+
301
+ ${}^{4}$ We also probed a vector initialized with the Glorot and Bengio [58] strategy, showing similar results.
302
+
303
+ ${}^{5}$ Hidden for the anonymous review.
304
+
305
306
+
307
+ Figure 2: CompGCN ablations. Directionality (-dir) and normalization (-norm) are the most crucial components, i.e., their removal leads to significant performance drops.
308
+
309
310
+
311
+ Figure 3: CompGCN with different composition functions. No significant differences.
312
+
313
+ identified that, we report the best results of compared models on the larger AM graph with the vertex feature dimension $d$ in $\{ 4,8\}$ .
314
+
315
+ Following the theory where the sum aggregator is most expressive, we investigate this finding on the smaller AIFB dataset for both GNNs. R-GCN with mean aggregation shows slightly better results on the larger AM dataset, which we attribute to the unstable optimization process of the sum aggregator where nodes might have thousands of neighbors, leading to large losses and noisy gradients. We hypothesize that stabilizing the training process on larger graphs might improve performance.
316
+
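The expressivity gap between sum and mean aggregation has a one-line illustration (toy features of our own choosing): mean cannot separate a node with two identical neighbors from a node with one such neighbor, while sum can.

```python
import numpy as np

a = np.array([1.0, 2.0])   # shared neighbor feature
two_neighbors = [a, a]     # node with neighbor multiset {{a, a}}
one_neighbor = [a]         # node with neighbor multiset {{a}}

print(np.allclose(np.mean(two_neighbors, axis=0),
                  np.mean(one_neighbor, axis=0)))  # True: mean is blind to multiplicity
print(np.allclose(np.sum(two_neighbors, axis=0),
                  np.sum(one_neighbor, axis=0)))   # False: sum preserves it
```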
317
+ Furthermore, we perform an ablation study (Figure 2) of the main CompGCN components (Q3), i.e., direction-based weighting (over direct, inverse, and self-loop edges), the relation projection update in each layer, and message normalization in the GCN style ${\mathrm{D}}^{-\frac{1}{2}}\mathrm{A}{\mathrm{D}}^{-\frac{1}{2}}$ ; see also Appendices D and E.
318
+
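The GCN-style normalization referenced above rescales each relation's adjacency matrix symmetrically; a dense NumPy sketch (real implementations operate on sparse edge indices):

```python
import numpy as np

def sym_normalize(adj: np.ndarray) -> np.ndarray:
    # computes D^{-1/2} A D^{-1/2}, guarding against isolated nodes
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
A_hat = sym_normalize(A)
# the edge (0, 1) is scaled by 1/(sqrt(deg(0)) * sqrt(deg(1))) = 1/sqrt(2)
```

Downweighting messages by degree in this way is one plausible reason normalization stabilizes training on high-degree graphs such as AM.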
319
+ The crucial components for the smaller and larger graphs are (1) three-way direction-based message passing and (2) normalization. Replacing message passing over three directions (and three weight matrices) with one weight matrix over a single adjacency matrix leads to a significant drop in performance. Removing normalization increases variance on the larger graph. Finally, removing both directionality and normalization leads to a significant degradation in predictive performance.
320
+
321
+ Studying composition functions (Figure 3), we do not find significant differences among the non-parametric mult, add, and rotate functions (Q4); see Appendix E. The performance of an MLP over a concatenation of node and edge features falls within the confidence intervals of the other compositions and does not exhibit a significant accuracy boost.
322
+
323
+ § 6 CONCLUSION
324
+
325
+ Here, we investigated the expressive power of two popular GNN architectures for knowledge or multi-relational graphs, namely CompGCN and R-GCN. By deriving a variant of the 1-WL, we quantified their limits in distinguishing vertices in multi-relational graphs. Further, we investigated under which conditions, i.e., for which choices of the composition function, CompGCN reaches the same expressive power as R-GCN. To overcome the limitations of the two architectures, we derived the provably more powerful $k$ -RN architecture. By increasing $k$ , the $k$ -RN architecture becomes strictly more expressive. Empirically, we verified that our theoretical results largely translate into practice. Using CompGCN and R-GCN in a vertex classification setting over small and large multi-relational graphs shows that both architectures provide a similar performance level. We believe that our paper is a first step toward a principled design of GNNs for knowledge or multi-relational graphs.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/B1xPG55qZS/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,193 @@
1
+ # Stain Standardization Capsule: A pre-processing module for histopathological image analysis
2
+
3
+ Yushan Zheng ${}^{1,2}$ , Zhiguo Jiang ${}^{2,1}$ , Haopeng Zhang ${}^{2,1}$ , Jun Shi ${}^{3}$ , and Fengying Xie ${}^{2,1}$
4
+
5
+ ${}^{1}$ Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China \{yszheng, jiangzg\}@buaa.edu.cn
6
+
7
+ ${}^{2}$ Image Processing Center, School of Astronautics, Beihang University, Beijing, 102206, China
8
+
9
+ ${}^{3}$ School of Software, Hefei University of Technology, Hefei 230601, China
10
+
11
+ Abstract. Color consistency is crucial to developing robust deep learning methods for histopathological image analysis. With the increasing application of digital histopathological images, deep learning methods are likely to be developed based on data from multiple medical centers. This makes it a challenging task to normalize the color variance of histopathological images from different medical centers. In this paper, we propose a novel color standardization module named stain standardization capsule (SSC) based on the paradigm of the capsule network and the corresponding dynamic routing algorithm. The proposed module can learn and generate uniform stain separation outputs for histopathological images with various color appearances without reference to manually selected template images. The SSC module is light and can be trained end-to-end with the application-driven CNN model. The proposed method was validated on two public datasets and compared with the state-of-the-art methods. The experimental results demonstrate that the SSC module is effective in color normalization for histopathological images and achieves the best performance among the compared methods.
12
+
13
+ Keywords: Stain standardization - Histopathological image analysis - Digital pathology - Capsule network.
14
+
15
+ ## 1 Introduction
16
+
17
+ Based on the widespread application of digital pathology (DP) in cancer research and clinical diagnosis, an increasing number of methods for histopathological image analysis (HIA) have been proposed. In practice, the color appearance of digital whole slide images (WSIs) varies due to the diversity in section fabrication and digitization, which makes it a challenging task to establish robust analysis frameworks for digital histopathological images from different medical centers. Generally, stain standardization (or normalization) is the main approach to solving the problem of stain color variances.
18
+
19
+ The early studies utilized color style transformation methods from natural scene image processing and concentrated on matching the color of one histopathological image patch to another $\left\lbrack {7,5}\right\rbrack$ . With the development of whole slide imaging techniques, the requirement of computer-aided diagnosis has changed from histopathological image patches to WSIs. Accordingly, stain transformation algorithms for WSIs were developed $\left\lbrack {1,{15},{16}}\right\rbrack$ . The color transform parameters are estimated or optimized with abundant pixels sampled from the entire WSI. The robustness of stain normalization has been greatly improved compared to the patch-based transformation methods $\left\lbrack {7,5,3}\right\rbrack$ . However, the dependence on abundant pixels and whole slide images meanwhile narrows the scope of application.
20
+
21
+ Recently, data-driven deep-learning methods, especially convolutional neural networks (CNNs), have become the major basis of emerging HIA research. Correspondingly, the requirement of data standardization has been extended to adapting to multiple stain domains from datasets provided by different medical centers. One popular scheme to address dataset-wise color variance is color domain transfer, where algorithms based on generative adversarial networks (GANs) are widely studied $\left\lbrack {{14},{10}}\right\rbrack$ . Instead of estimating transform parameters between image pairs or WSI pairs, these methods establish a GAN structure to learn the data adaptation principle between the training dataset and the application (testing) dataset. The performance of stain standardization has proven very promising. Nevertheless, the present GAN-based methods require knowledge of the full data distribution of the application dataset, and the transform model must be trained in pairs if there are more than two medical centers providing the data. Another scheme to address the problem is color augmentation. Tellez et al. [11] proposed a stain augmentation strategy based on CD theory to simulate different staining situations, which has proven effective in improving the generalization ability of CNN models to stain variance. However, the stain information is extracted using fixed model parameters estimated under the ideal dyeing case. When facing samples in non-ideal situations, the augmented samples would be out of the distribution of real cases. Another study [12] constructed a U-Net model to learn a uniform color style from images with random color biases. The trained network is powerful in the color normalization of unseen histological images. However, the network contains millions of parameters, which makes it less efficient in computation.
22
+
23
+ Facing the current issues in multiple stain domain standardization, we propose a novel stain standardization module for CNN-based histopathological image analysis, named stain standardization capsule (SSC). The basic theory behind SSC is stain separation in optical density space [8]. The structure of the module is adapted from the Capsule Network [9], and the stain standardization is realized by referring to the dynamic routing (DR) operations in the capsule network (as shown in Fig. 1). The contributions of this paper and its novelty relative to existing methods can be summarized as follows.
24
+
25
+ 1) We bring the insight of dynamic routing into histopathological image standardization. Beyond optimizing the normalization parameters for a specific image (or WSI) $\left\lbrack {5,1,{13},{16}}\right\rbrack$ or estimating the color transfer model from plenty of samples of the application dataset $\left\lbrack {{14},{10}}\right\rbrack$ , the proposed SSC module automatically summarizes a set of candidate ways to perform stain separation based on training data that involves various color appearances. In the application stage, stain standardization is achieved by optimizing the forward route within the pre-trained candidates via the designed sparsity routing process. This prevents the standardization results from exhibiting serious artifacts and even failures.
26
+
27
+ ![01963a56-e38a-7797-afd4-a73b0cb378f3_2_437_333_933_436_0.jpg](images/01963a56-e38a-7797-afd4-a73b0cb378f3_2_437_333_933_436_0.jpg)
28
+
29
+ Fig. 1. Structure of the proposed SSC module, where the input RGB-format images are first converted to the optical density space, then projected into $M$ groups of $S$ stain channels via linear transformations, and finally assembled to obtain the stain separation results via the designed sparsity routing algorithm.
30
+
31
+ 2) The SSC module is much lighter (containing only tens of parameters) than CNN-based methods $\left\lbrack {{12},{14},{10}}\right\rbrack$ and can be trained end-to-end with specific HIA tasks. Furthermore, the module does not need manually selected template images, which makes the SSC module easy to use in both the development and deployment of HIA applications.
32
+
33
+ 3) The proposed method is evaluated on two public datasets and compared with the state-of-the-art methods. The experimental results demonstrate its effectiveness and advantages in developing HIA applications.
34
+
35
+ ## 2 Method
36
+
37
+ The proposed SSC module achieves stain standardization by generating uniform stain separation tensors for images with various color appearances. A set of stain separation candidates is first constructed, and the separation result is obtained as a weighted sum of these candidates.
38
+
39
+ ### 2.1 Stain separation candidates
40
+
41
+ Color deconvolution (CD) [8] is a popular stain separation method for digital slides in which the staining dyes obey the Beer-Lambert law. CD is utilized as the basic theory of popular stain standardization methods $\left\lbrack {5,{13},{16}}\right\rbrack$ . Referring to $\left\lbrack 8\right\rbrack$ , independent stain components can be extracted through linear transformations in the optical density (OD) space. Hence, we constructed a CNN structure with linear projection operations to learn possible stain extraction principles in OD space based on all the training images. Then, we assigned the stain extraction layers into $M$ groups with the same structure, generating $M$ stain separation candidates. The detail of the SSC structure is illustrated in Fig. 1.
42
+
43
+ Letting $\mathbf{o} \in {\mathbb{R}}^{m \times n \times 3}$ denote the optical density of an image in size of $m \times n$ pixels ${}^{4}$ , the grouped linear projections can be represented as
44
+
45
+ $$
46
+ {\mathbf{u}}_{i} = \operatorname{Conv}\left( {\mathbf{o},{\mathbf{W}}_{i}^{\left( 1\right) }}\right) \in {\mathbb{R}}^{m \times n \times N},
47
+ $$
48
+
49
+ $$
50
+ {\widehat{\mathbf{u}}}_{i} = \operatorname{Conv}\left( {{\mathbf{u}}_{i},{\mathbf{W}}_{i}^{\left( 2\right) }}\right) \in {\mathbb{R}}^{m \times n \times S}, i = 1,2,\ldots , M,
51
+ $$
52
+
53
+ where Conv represents a convolution operation followed by a leaky-ReLU activation, ${\mathbf{W}}_{i}^{\left( 1\right) }$ and ${\mathbf{W}}_{i}^{\left( 2\right) }$ are the convolutional weights, and $M, N$ and $S$ denote the number of groups, the number of channels in the first convolution, and the number of stains involved in the images, respectively.
54
+
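As a non-authoritative sketch of the two equations above, the OD conversion of footnote 4 and the grouped linear projections can be written in NumPy, with the 1×1 convolutions expressed as per-pixel matrix products (bias terms are omitted, and `i_max`, `eps` and the weight shapes are illustrative assumptions):

```python
import numpy as np

def to_optical_density(img_rgb, i_max=255.0, eps=1e-6):
    """Convert an RGB image (H x W x 3, uint8) to optical density (footnote 4)."""
    img = img_rgb.astype(np.float64)
    return -np.log((img + eps) / i_max)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def stain_candidates(od, W1, W2):
    """Grouped 1x1 projections producing M stain separation candidates.

    od : (H, W, 3) optical density tensor
    W1 : (M, 3, N) first-layer weights, W2 : (M, N, S) second-layer weights
    returns a list of M candidate tensors of shape (H, W, S)
    """
    cands = []
    for w1, w2 in zip(W1, W2):
        u = leaky_relu(od @ w1)        # (H, W, N), Conv with W_i^(1)
        u_hat = leaky_relu(u @ w2)     # (H, W, S), Conv with W_i^(2)
        cands.append(u_hat)
    return cands
```

Since all convolutions are 1×1, each candidate is a pixel-wise linear map of the OD values, matching the CD view of stain separation.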
55
+ ### 2.2 Sparsity routing
56
+
57
+ Capsule Network is a new paradigm of artificial neural networks proposed by Hinton et al. [9], in which the inputs of the neurons are defined as sets of vectors, rather than the scalars used in traditional neural networks. The set of vectors is assembled by a weighted sum operation and then activated, and the weights of the input vectors for the ensemble are decided by the dynamic routing (DR) algorithm.
58
+
59
+ Motivated by the insight of the capsule network, we propose assembling the stain candidates $\left\{ {{\widehat{\mathbf{u}}}_{i} \mid i = 1,\ldots , M}\right\}$ through DR. The aim of the routing is to find the most appropriate stain separation result for each specific image from the $M$ candidates in the forward pass of the network.
60
+
61
+ Generally, a good stain separation should exclusively assign the value of a pixel to one stain channel, i.e. the separated result is desired to be pixel-wise sparse $\left\lbrack {5,1,{16}}\right\rbrack$ . Therefore, we designed a novel Sparsity Routing (SR) algorithm by modifying the agreement scoring in DR. The pseudo-code of SR is given in Algorithm 1. The score of the pixel-wise sparsity is calculated based on the sparseness measure defined in [4]:
62
+
63
+ $$
64
+ {\eta }_{p}\left( \mathbf{x}\right) = \frac{1}{mn}\mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}\frac{\sqrt{S} - \mathop{\sum }\limits_{k}\left| {{x}_{ijk} + \epsilon }\right| /\sqrt{\mathop{\sum }\limits_{k}{\left( {x}_{ijk} + \epsilon \right) }^{2}}}{\sqrt{S} - 1},
65
+ $$
66
+
67
+ where $\mathbf{x} \in {\mathbb{R}}^{m \times n \times S}$ denotes the tensor to score. To avoid all the image data being assigned to a single stain channel, a channel-wise sparseness is additionally
68
+
69
+ ---
70
+
71
+ ${}^{4}\mathbf{o} = - \log \left( \left( {\mathbf{I} + \epsilon }\right) /{I}_{max}\right)$ , where $\mathbf{I}$ represents an RGB-format image, ${I}_{max}$ is the upper intensity of the digitization and $\epsilon$ is a small scalar to protect the log operation.
72
+
73
+ ---
74
+
75
+ defined:
76
+
77
+ $$
78
+ {\eta }_{c}\left( \mathbf{x}\right) = \frac{1}{S}\mathop{\sum }\limits_{k}\frac{\sqrt{mn} - \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}\left| {{x}_{ijk} + \epsilon }\right| /\sqrt{\mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}{\left( {x}_{ijk} + \epsilon \right) }^{2}}}{\sqrt{mn} - 1}.
79
+ $$
80
+
81
+ Then, the sparsity score is formulated as $\eta \left( \mathbf{x}\right) = {\eta }_{p}\left( \mathbf{x}\right) + {\eta }_{c}\left( \mathbf{x}\right)$ and referred to as SparseScore(x) in Algorithm 1. After SR, the output of SSC is calculated as
82
+
83
+ $$
84
+ \mathbf{s} = \mathop{\sum }\limits_{{i = 1}}^{M}{c}_{i} \cdot {\widehat{\mathbf{u}}}_{i}.
85
+ $$
86
+
87
+ The SR process allows SSC to generate refined stain separation results by tuning the weights $\left\{ {c}_{i}\right\}$, which then allows the following CNN to concentrate on the structural variances of tissue images.
88
+
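The sparsity score $\eta(\mathbf{x}) = \eta_p(\mathbf{x}) + \eta_c(\mathbf{x})$ above can be sketched in NumPy as follows (a minimal illustration, not the authors' code; the `.mean()` calls implement the $1/mn$ and $1/S$ averaging, and `eps` plays the role of $\epsilon$):

```python
import numpy as np

def sparse_score(x, eps=1e-6):
    """Hoyer-style sparsity score eta(x) = eta_p(x) + eta_c(x) for x of shape (m, n, S)."""
    m, n, S = x.shape
    xe = x + eps
    # pixel-wise sparseness: L1/L2 ratio over the stain axis at each pixel
    l1_p = np.abs(xe).sum(axis=2)
    l2_p = np.sqrt((xe ** 2).sum(axis=2))
    eta_p = ((np.sqrt(S) - l1_p / l2_p) / (np.sqrt(S) - 1)).mean()
    # channel-wise sparseness: L1/L2 ratio over all pixels of each stain channel
    l1_c = np.abs(xe).sum(axis=(0, 1))
    l2_c = np.sqrt((xe ** 2).sum(axis=(0, 1)))
    eta_c = ((np.sqrt(m * n) - l1_c / l2_c) / (np.sqrt(m * n) - 1)).mean()
    return eta_p + eta_c
```

A tensor whose pixels each load a single stain channel scores close to 1, while a uniform tensor scores close to 0, which is exactly the preference SR uses to weight the candidates.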
89
+ ---
90
+
91
+ Data: $\left\{ {{\widehat{\mathbf{u}}}_{i} \mid i = 1,\ldots , M}\right\} \leftarrow$ The grouped outputs of the candidate layer;
92
+
93
+ $R \leftarrow$ The number of routings;
94
+
95
+ SparseRouting $\left( {\left\{ {\widehat{\mathbf{u}}}_{i}\right\} , R}\right)$ :
96
+
97
+ for all the group $i$ in the candidate layer: ${b}_{i} \leftarrow 0,{c}_{i} \leftarrow 1/M$ ;
98
+
99
+ for $r = 1$ to $R$ do
100
+
101
+ $\widehat{\mathbf{s}} \leftarrow \mathop{\sum }\limits_{i}{c}_{i} \cdot {\widehat{\mathbf{u}}}_{i};$
102
+
103
+ for all the group $i$ in the candidate layer: ${b}_{i} \leftarrow {b}_{i} +$ SparseScore $\left( {{\widehat{\mathbf{u}}}_{i} + \widehat{\mathbf{s}}}\right)$ ;
104
+
105
+ for all the group $i$ in the candidate layer: ${c}_{i} \leftarrow \exp \left( {b}_{i}\right) /\mathop{\sum }\limits_{i}\exp \left( {b}_{i}\right)$ ;
106
+
107
+ end
108
+
109
+ return ${\left\{ {c}_{i}\right\} }_{1}^{M}$ ;
110
+
111
+ Algorithm 1: The algorithm of sparsity routing.
112
+
113
+ ---
114
+
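Algorithm 1 transcribes almost directly into NumPy; in this illustrative sketch `score_fn` stands in for the SparseScore defined in Section 2.2, and the final weighted sum gives the SSC output $\mathbf{s}$:

```python
import numpy as np

def sparsity_routing(u_hats, R, score_fn):
    """Sparsity routing over M candidate tensors; returns the coupling
    weights {c_i} and the assembled output s = sum_i c_i * u_hat_i."""
    M = len(u_hats)
    b = np.zeros(M)                      # routing logits, initialized to 0
    c = np.full(M, 1.0 / M)              # uniform coupling weights, 1/M
    for _ in range(R):
        s_hat = sum(ci * u for ci, u in zip(c, u_hats))
        for i, u in enumerate(u_hats):   # agreement measured by sparsity
            b[i] += score_fn(u + s_hat)
        c = np.exp(b) / np.exp(b).sum()  # softmax over the logits
    s = sum(ci * u for ci, u in zip(c, u_hats))
    return c, s
```

Candidates whose separation agrees with the current ensemble in a sparser way accumulate larger logits and therefore dominate the final weighted sum.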
115
+ ### 2.3 Training and Application of SSC
116
+
117
+ The SSC module is essentially a convolutional neural network. Therefore, it can be directly attached to an application-driven CNN and trained end-to-end along with the target of that CNN. The assembled stain separation result $\mathbf{s}$ is the output of the SSC module and meanwhile the input of the following CNN. To ensure $\mathbf{s}$ preserves the structural information of the histopathological image, a reconstruction layer is appended to the end of SSC and a mean square error (MSE) loss is computed between the original image and the reconstructed result. The MSE loss is merged into the loss of the following CNN in the training stage. Note that SR is performed only in the forward pass and the scalars ${c}_{i}$ are treated as constants in the backward pass [9].
118
+
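A minimal sketch of the merged objective described above (the relative weight `lam` is an assumption; the paper does not state how the MSE term is weighted against the task loss):

```python
import numpy as np

def ssc_total_loss(task_loss, image, reconstruction, lam=1.0):
    """Merge the downstream CNN's task loss with the MSE reconstruction loss
    of the SSC reconstruction layer. lam is a hypothetical balancing weight."""
    mse = float(((image - reconstruction) ** 2).mean())
    return task_loss + lam * mse
```

In an autodiff framework the same sum would be backpropagated through both the task head and the reconstruction layer, while the routing weights $c_i$ stay fixed.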
119
+ ## 3 Experiment
120
+
121
+ ### 3.1 Experimental settings
122
+
123
+ The proposed SSC module was validated on the Camelyon16${}^{5}$ and ACDC-LungHP${}^{6}$ datasets $\left\lbrack {2,6}\right\rbrack$ via histopathological image classification tasks. Regions with cancer in the WSIs were annotated by pathologists. Image patches of size ${224} \times {224}$ were randomly sampled from the WSIs. Patches containing more than ${75}\%$ cancerous pixels according to the annotation were labeled as positive, and patches containing no cancerous pixels were labeled as negative. The other patches were not used in the experiments.
124
+
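The sampling rule above can be expressed as a small hypothetical helper, assuming the pathologist annotation is available as a binary mask aligned with each patch:

```python
import numpy as np

def label_patch(mask_patch, pos_thresh=0.75):
    """Apply the labeling rule to a 224x224 binary annotation mask:
    1 (positive) if more than pos_thresh of the pixels are cancerous,
    0 (negative) if none are, and None (discarded) otherwise."""
    frac = float((mask_patch > 0).mean())
    if frac > pos_thresh:
        return 1
    if frac == 0.0:
        return 0
    return None
```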
125
+ The DenseNet-121 CNN structure with a softmax output was employed for classification. Sensitivity, specificity, accuracy and the area under the ROC curve (AUC) were used as evaluation metrics. ${20}\%$ of the samples in the training set were reserved for validation and the remainder was used to train the model. The hyper-parameters $M, N, R$ in SSC were tuned on the training set and determined according to the classification error on the validation samples. Specifically, (M, N, R) was determined as (5, 3, 3) for Camelyon16 and (4, 3, 4) for ACDC-LungHP. $S$ was set to 2 because the images are all from H&E-stained histology.
126
+
127
+ ### 3.2 Results and discussion
128
+
129
+ The classification performance on the Camelyon16 testing set is presented in Table 1, where three state-of-the-art methods $\left\lbrack {{11},{12},{16}}\right\rbrack$ are compared ${}^{7}$ . Table 1 also provides a summary of the dependences and properties of each compared method. Overall, our SSC module is the most effective in improving the classification performance. Data standardization appeared to be less effective on the ACDC-LungHP dataset than on the Camelyon16 dataset, since the color consistency of the former dataset is relatively better than that of the latter.
130
+
131
+ Stain augmentation [11] utilized prior knowledge of slide staining to augment the color allocation of training images. Therefore, the classification network using Stain augmentation [11] achieved better classification metrics than that using a common Color augmentation method (including random illumination, saturation, hue and contrast transfers in the experiment). However, the method can generate images with unreasonable color styles. These samples act as noise in the CNN training and reduce the classification accuracy when the color distribution is originally consistent (see the results on the ACDC-LungHP dataset). ACD [16] and CNN-norm [12] achieved competitive results. Nevertheless, ACD requires individually estimating standardization parameters for each specific testing image and relies on the context information of the corresponding WSI. CNN-norm learned a general principle for images in different color styles with millions of model parameters $\left( { > {10}^{7}}\right)$ . The computational cost of CNN-norm is comparable to or even greater than that of the following HIA application.
132
+
133
+ ---
134
+
135
+ ${}^{5}$ https://camelyon16.grand-challenge.org/
136
+
137
+ ${}^{6}$ https://acdc-lunghp.grand-challenge.org/. Since the annotations of the testing part of the dataset are not yet accessible, only the 150 training WSIs of the dataset were used in this paper.
138
+
139
+ ${}^{7}$ The compared methods were introduced in brief in Section 1.
140
+
141
+ ---
142
+
143
+ Table 1. Standardization performance for histopathological image patch classification, where the model properties, including the number of model parameters $\left( {n}_{\text{param }}\right)$ , whether the method relies on manually selected templates (T.) and whether the model parameters for testing images need to be estimated (E.), are compared.
144
+
145
+ <table><tr><td rowspan="2">Methods</td><td colspan="4">Camelyon16</td><td colspan="4">ACDC-LungHP</td><td colspan="2">Dependence</td></tr><tr><td>Sen</td><td>Spe.</td><td>Acc</td><td>AUC</td><td>Sen</td><td>Spe.</td><td>Acc</td><td>AUC</td><td>${n}_{param}$</td><td>T./E.</td></tr><tr><td>Origin</td><td>0.851</td><td>0.969</td><td>0.910</td><td>0.957</td><td>0.822</td><td>0.779</td><td>0.801</td><td>0.882</td><td>-</td><td>-</td></tr><tr><td>Color Aug</td><td>0.868</td><td>0.950</td><td>0.909</td><td>0.958</td><td>0.836</td><td>0.760</td><td>0.798</td><td>0.881</td><td>None</td><td>No/No</td></tr><tr><td>Stain Aug[11]</td><td>0.875</td><td>0.946</td><td>0.911</td><td>0.967</td><td>0.819</td><td>0.778</td><td>0.799</td><td>0.882</td><td>None</td><td>No/No</td></tr><tr><td>ACD[16]</td><td>0.892</td><td>0.944</td><td>0.918</td><td>0.968</td><td>0.836</td><td>0.776</td><td>0.805</td><td>0.886</td><td>$< {10}^{1}$</td><td>Yes/Yes</td></tr><tr><td>CNN-norm[12]</td><td>0.875</td><td>0.970</td><td>0.922</td><td>0.971</td><td>0.821</td><td>0.788</td><td>0.804</td><td>0.886</td><td>$> {10}^{7}$</td><td>Yes/No</td></tr><tr><td>SSC (Ours)</td><td>0.894</td><td>0.966</td><td>0.930</td><td>0.975</td><td>0.840</td><td>0.778</td><td>0.805</td><td>0.887</td><td>$< {10}^{2}$</td><td>No/No</td></tr></table>
146
+
147
+ ![01963a56-e38a-7797-afd4-a73b0cb378f3_6_391_797_1024_310_0.jpg](images/01963a56-e38a-7797-afd4-a73b0cb378f3_6_391_797_1024_310_0.jpg)
148
+
149
+ Fig. 2. Joint display of the original images and the reconstructed images.
150
+
151
+ In comparison, our SSC module involves only tens of model parameters, does not rely on contextual information outside the scope of the testing image, has no additional parameter estimation process in the prediction stage, and can be trained in an end-to-end fashion. These properties make the SSC module more efficient and convenient than the present methods in both the training and deployment of HIA applications. Figure 2 illustrates original images and the corresponding reconstructed images from the Camelyon16 dataset. Without any template images, SSC appears to have learned a "mean" stain style in the reconstruction layer for images with diverse color appearances. It indicates a uniform representation in the SSC output layer, which allows the following CNN to concentrate on structural discrimination in histopathological images and thus improves the performance of the HIA application.
152
+
153
+ ## 4 Conclusion
154
+
155
+ In this paper, we proposed a novel stain standardization module named the stain standardization capsule for histopathological image analysis, based on the optical properties of tissue section staining and the insight of dynamic routing from the capsule network. The proposed module is implemented in the domain of convolutional neural networks and therefore can be directly attached to CNN-based HIA applications. The proposed method was evaluated on the task of histopathological image classification on two public datasets. The results have demonstrated the effectiveness and robustness of the proposed method.
156
+
157
+ ## Acknowledgment
158
+
159
+ This work was supported by the National Natural Science Foundation of China (No. 61901018, 61771031 and 61906058), China Postdoctoral Science Foundation (No. 2019M650446) and Motic-BUAA Image Technology Research Center.
160
+
161
+ ## References
162
+
163
+ 1. Bejnordi, B.E., Litjens, G., Timofeeva, N., Otte-Höller, I., Homeyer, A., Karssemeijer, N., Laak, J.A.V.D.: Stain specific standardization of whole-slide histopathological images. IEEE Transactions on Medical Imaging 35(2), 404-415 (2016)
164
+
165
+ 2. Bejnordi, B.E., Veta, M., Van Diest, P.J., Van Ginneken, B., Karssemeijer, N., Litjens, G.J.S., Der Laak, J.A.W.M.V., Hermsen, M., Manson, Q.F., Balkenhol, M., et al.: Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 318(22), 2199-2210 (2017)
166
+
167
+ 3. Hidalgo-Gavira, N., Mateos, J., Vega, M., Molina, R., Katsaggelos, A.K.: Fully automated blind color deconvolution of histopathological images. In: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). pp. 183-191. Springer (2018)
168
+
169
+ 4. Hoyer, P.O.: Non-negative matrix factorization with sparseness constraints. Journal of Machine Learning Research 5(11), 1457-1469 (2004)
170
+
171
+ 5. Khan, A.M., Rajpoot, N.M., Treanor, D., Magee, D.R.: A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution. IEEE Transactions on Biomedical Engineering 61(6), 1729-1738 (2014)
172
+
173
+ 6. Li, Z., Hu, Z., Xu, J., Tan, T., Chen, H., Duan, Z., Liu, P., Tang, J., Cai, G., Ouyang, Q., et al.: Computer-aided diagnosis of lung carcinoma using deep learning-a pilot study. arXiv preprint arXiv:1803.05471 (2018)
174
+
175
+ 7. Macenko, M., Niethammer, M., Marron, J.S., Borland, D., Woosley, J.T., Guan, X., Schmitt, C., Thomas, N.E.: Colour normalisation in digital histopathology images. In: IEEE International Symposium on Biomedical Imaging (ISBI). pp. 1107-1110 (2009)
176
+
177
+ 8. Ruifrok, A.C., Johnston, D.A.: Quantification of histochemical staining by color deconvolution. Analytical and Quantitative Cytology and Histology 23(4), 291-299 (2001)
178
+
179
+ 9. Sabour, S., Frosst, N., Hinton, G.E.: Dynamic routing between capsules. In: Advances in Neural Information Processing Systems (NeurIPS). pp. 3856-3866 (2017)
180
+
181
+ 10. Shaban, M.T., Baur, C., Navab, N., Albarqouni, S.: Staingan: Stain style transfer for digital histological images. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). pp. 953-956. IEEE (2019)
182
+
183
+ 11. Tellez, D., Balkenhol, M., Otte-Höller, I., van de Loo, R., Vogels, R., Bult, P., Wauters, C., Vreuls, W., Mol, S., Karssemeijer, N., et al.: Whole-slide mitosis detection in H&E breast histology using PHH3 as a reference to train distilled stain-invariant convolutional networks. IEEE Transactions on Medical Imaging 37(9), 2126-2136 (2018)
184
+
185
+ 12. Tellez, D., Litjens, G., Bandi, P., Bulten, W., Bokhorst, J.M., Ciompi, F., van der Laak, J.: Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. arXiv preprint arXiv:1902.06543 (2019)
186
+
187
+ 13. Vahadane, A., Peng, T., Sethi, A., Albarqouni, S., Wang, L., Baust, M., Steiger, K., Schlitter, A.M., Esposito, I., Navab, N.: Structure-preserving color normalization and sparse stain separation for histological images. IEEE Transactions on Medical Imaging 35(8), 1962-1971 (2016)
188
+
189
+ 14. Zanjani, F.G., Zinger, S., Bejnordi, B.E., van der Laak, J.A., de With, P.H.: Stain normalization of histopathology images using generative adversarial networks. In: IEEE International Symposium on Biomedical Imaging (ISBI). pp. 573-577. IEEE (2018)
190
+
191
+ 15. Zheng, Y., Jiang, Z., Zhang, H., Xie, F., Ma, Y., Shi, H., Zhao, Y.: Histopathological whole slide image analysis using context-based CBIR. IEEE Transactions on Medical Imaging 37(7), 1641-1652 (2018)
192
+
193
+ 16. Zheng, Y., Jiang, Z., Zhang, H., Xie, F., Shi, J., Xue, C.: Adaptive color deconvolution for histological wsi normalization. Computer Methods and Programs in Biomedicine 170, 107-120 (2019)
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/B1xPG55qZS/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,172 @@
1
+ § STAIN STANDARDIZATION CAPSULE: A PRE-PROCESSING MODULE FOR HISTOPATHOLOGICAL IMAGE ANALYSIS
2
+
3
+ Yushan Zheng${}^{1,2}$, Zhiguo Jiang${}^{2,1}$, Haopeng Zhang${}^{2,1}$, Jun Shi${}^{3}$, and Fengying Xie${}^{2,1}$
4
+
5
+ ${}^{1}$ Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China {yszheng, jiangzg}@buaa.edu.cn
6
+
7
+ ${}^{2}$ Image Processing Center, School of Astronautics, Beihang University, Beijing, 102206, China
8
+
9
+ ${}^{3}$ School of Software, Hefei University of Technology, Hefei 230601, China
10
+
11
+ Abstract. Color consistency is crucial to developing robust deep learning methods for histopathological image analysis. With the increasing application of digital histopathological images, deep learning methods are likely to be developed based on data from multiple medical centers. This makes it a challenging task to normalize the color variance of histopathological images from different medical centers. In this paper, we propose a novel color standardization module named the stain standardization capsule (SSC), based on the paradigm of the capsule network and the corresponding dynamic routing algorithm. The proposed module can learn and generate uniform stain separation outputs for histopathological images with various color appearances without reference to manually selected template images. The SSC module is light and can be trained end-to-end with the application-driven CNN model. The proposed method was validated on two public datasets and compared with the state-of-the-art methods. The experimental results demonstrate that the SSC module is effective in color normalization for histopathological images and achieves the best performance among the compared methods.
12
+
13
+ Keywords: Stain standardization - Histopathological image analysis - Digital pathology - Capsule network.
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Based on the widespread application of digital pathology (DP) in cancer research and clinical diagnosis, an increasing number of methods for histopathological image analysis (HIA) have been proposed. In practice, the color appearance of digital whole slide images (WSIs) varies due to the diversity in section fabrication and digitization, which makes it a challenging task to establish robust analysis frameworks for digital histopathological images from different medical centers. Generally, stain standardization (or normalization) is the main approach to solving the problem of stain color variance.
18
+
19
+ The early studies utilized color style transformation methods from natural scene image processing and concentrated on matching the color of one histopathological image patch to another $\left\lbrack {7,5}\right\rbrack$ . With the development of whole slide imaging techniques, the requirement of computer-aided diagnosis has shifted from histopathological image patches to WSIs. Simultaneously, stain transformation algorithms for WSIs were developed $\left\lbrack {1,{15},{16}}\right\rbrack$ . The color transform parameters were estimated or optimized with abundant pixels sampled from the entire WSI. The robustness of stain normalization has been greatly improved compared to the patch-based transformation methods $\left\lbrack {7,5,3}\right\rbrack$ . However, the dependence on abundant pixels and whole slide images has meanwhile narrowed the scope of application.
20
+
21
+ Recently, data-driven deep-learning methods, especially convolutional neural networks (CNNs), have become the major foundation of emerging HIA research. Correspondingly, the requirement of data standardization is further promoted to adapt to multiple stain domains from datasets provided by different medical centers. One popular scheme to address dataset-wise color variance is color domain transfer, where algorithms based on generative adversarial networks (GANs) are widely studied $\left\lbrack {{14},{10}}\right\rbrack$ . Instead of estimating transform parameters between image pairs or WSI pairs, these methods establish a GAN structure to learn the data adaption principle between the training dataset and the application (testing) dataset. The performance of stain standardization has proven very promising. Nevertheless, the present GAN-based methods require knowing the full data distribution of the application dataset, and the transform model has to be trained in pairs if more than two medical centers provide the data. Another scheme to solve the problem is color augmentation. Tellez et al. [11] proposed a stain augmentation strategy based on CD theory to simulate different staining situations, which has proven effective in improving the generalization ability of the CNN model against stain variance. However, the stain information is extracted using fixed model parameters estimated under an ideal dyeing case. When facing samples in non-ideal situations, the augmented samples would be out of the distribution of real cases. Another study [12] constructed a U-Net model to learn a uniform color style from images with random color biases. The trained network is powerful in the color normalization of unseen histological images. However, the network contains millions of parameters, which makes it less efficient in computation.
22
+
23
+ Facing the current issues in multiple stain domain standardization, we propose a novel stain standardization module for CNN-based histopathological image analysis, named the stain standardization capsule (SSC). The basic theory of SSC is stain separation in the optical density space [8]. The structure of the module is modified from the Capsule Network [9] and the stain standardization is realized referring to the dynamic routing (DR) operations in the capsule network (as shown in Fig. 1). The contributions of this paper and its novelty with respect to existing methods can be summarized as follows.
24
+
25
+ 1) We bring the insight of dynamic routing into histopathological image standardization. Beyond optimizing the normalization parameters for a specific image (or WSI) $\left\lbrack {5,1,{13},{16}}\right\rbrack$ or estimating the color transfer model depending on plenty of samples from the application dataset $\left\lbrack {{14},{10}}\right\rbrack$ , the proposed SSC module automatically summarizes a set of candidate stain separation schemes based on training data that involve various color appearances. In the application stage, stain standardization is achieved by optimizing the forward route within the pre-trained candidates via the designed sparsity routing process. It prevents the standardization results from serious artifacts and even failures.
26
+
27
+
28
+
29
+ Fig. 1. Structure of the proposed SSC module, where the input RGB-format images are first converted to the optical density space, then projected into $M$ groups of $S$ stain channels via linear transformations, and finally assembled to obtain the stain separation results via the designed sparsity routing algorithm.
30
+
31
+ 2) The SSC module is much lighter (containing only tens of parameters) than CNN-based methods $\left\lbrack {{12},{14},{10}}\right\rbrack$ and can be trained end-to-end with specific HIA tasks. Furthermore, the module does not need manually selected template images, which determines the SSC module is easy-to-use in both the development and deployment of HIA applications.
32
+
33
+ 3) The proposed method is evaluated on two public datasets and compared with the sate-of-the-art methods. The experimental results have demonstrated the effectiveness and advantages in developing HIA applications.
34
+
35
+ § 2 METHOD
36
+
37
+ The proposed SSC module achieves stain standardization by generating uniform stain separation tensors for images with various color appearances. A set of stain separation candidates is first constructed, and the separation result is obtained as a weighted sum of these candidates.
38
+
39
+ § 2.1 STAIN SEPARATION CANDIDATES
40
+
41
+ Color deconvolution (CD) [8] is a popular stain separation method for digital slides whose staining dyes obey the Beer-Lambert law. CD is utilized as the basic theory of popular stain standardization methods $\left\lbrack {5,{13},{16}}\right\rbrack$ . Referring to $\left\lbrack 8\right\rbrack$ , independent stain components can be extracted through linear transformations in the optical density (OD) space. Hence, we constructed a CNN structure with linear projection operations to learn possible stain extraction principles in OD space from all the training images. Then, we assigned the stain extraction layers into $M$ groups with the same structure, generating $M$ stain separation candidates. The detail of the SSC structure is illustrated in Fig. 1.
42
+
43
+ Letting $\mathbf{o} \in {\mathbb{R}}^{m \times n \times 3}$ denote the optical density of an image in size of $m \times n$ pixels ${}^{4}$ , the grouped linear projections can be represented as
44
+
45
+ $$
46
+ {\mathbf{u}}_{i} = \operatorname{Conv}\left( {\mathbf{o},{\mathbf{W}}_{i}^{\left( 1\right) }}\right) \in {\mathbb{R}}^{m \times n \times N},
47
+ $$
48
+
49
+ $$
50
+ {\widehat{\mathbf{u}}}_{i} = \operatorname{Conv}\left( {{\mathbf{u}}_{i},{\mathbf{W}}_{i}^{\left( 2\right) }}\right) \in {\mathbb{R}}^{m \times n \times S},i = 1,2,\ldots ,M,
51
+ $$
52
+
53
+ where Conv represents a convolution operation followed by a leaky-ReLU activation, ${\mathbf{W}}_{i}^{\left( 1\right) }$ and ${\mathbf{W}}_{i}^{\left( 2\right) }$ are the convolutional weights, and $M,N$ and $S$ denote the number of groups, the number of channels in the first convolution and the number of stains involved in the images, respectively.
54
+
55
+ § 2.2 SPARSITY ROUTING
56
+
57
+ Capsule Network is a new paradigm of artificial neural networks proposed by Hinton et al. [9], in which the inputs of the neurons are defined as sets of vectors, rather than the scalars used in traditional neural networks. The set of vectors is assembled by a weighted sum operation and then activated, and the weights of the input vectors for the ensemble are decided by the dynamic routing (DR) algorithm.
58
+
59
+ Motivated by the insight of the capsule network, we propose assembling the stain candidates $\left\{ {{\widehat{\mathbf{u}}}_{i} \mid i = 1,\ldots ,M}\right\}$ through DR. The aim of the routing is to find the most appropriate stain separation result for each specific image from the $M$ candidates in the forward pass of the network.
60
+
61
+ Generally, a good stain separation should exclusively assign the value of a pixel to one stain channel, i.e. the separated result is desired to be pixel-wise sparse $\left\lbrack {5,1,{16}}\right\rbrack$ . Therefore, we designed a novel Sparsity Routing (SR) algorithm by modifying the agreement scoring in DR. The pseudo-code of SR is given in Algorithm 1. The score of the pixel-wise sparsity is calculated based on the sparseness measure defined in [4]:
62
+
63
+ $$
64
+ {\eta }_{p}\left( \mathbf{x}\right) = \frac{1}{mn}\mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}\frac{\sqrt{S} - \mathop{\sum }\limits_{k}\left| {{x}_{ijk} + \epsilon }\right| /\sqrt{\mathop{\sum }\limits_{k}{\left( {x}_{ijk} + \epsilon \right) }^{2}}}{\sqrt{S} - 1},
65
+ $$
66
+
67
+ where $\mathbf{x} \in {\mathbb{R}}^{m \times n \times S}$ denotes the tensor to score. To avoid all the image data being assigned to a single stain channel, a channel-wise sparseness is additionally
68
+
69
+ ${}^{4}\mathbf{o} = - \log \left( \left( {\mathbf{I} + \epsilon }\right) /{I}_{max}\right)$ , where $\mathbf{I}$ represents an RGB-format image, ${I}_{max}$ is the upper intensity of the digitization and $\epsilon$ is a small scalar to protect the log operation.
70
+
71
+ defined:
72
+
73
+ $$
74
+ {\eta }_{c}\left( \mathbf{x}\right) = \frac{1}{S}\mathop{\sum }\limits_{k}\frac{\sqrt{mn} - \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}\left| {{x}_{ijk} + \epsilon }\right| /\sqrt{\mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}{\left( {x}_{ijk} + \epsilon \right) }^{2}}}{\sqrt{mn} - 1}.
75
+ $$
76
+
77
+ Then, the sparsity score is formulated as $\eta \left( \mathbf{x}\right) = {\eta }_{p}\left( \mathbf{x}\right) + {\eta }_{c}\left( \mathbf{x}\right)$ and referred to as SparseScore(x) in Algorithm 1. After SR, the output of SSC is calculated as
78
+
79
+ $$
80
+ \mathbf{s} = \mathop{\sum }\limits_{{i = 1}}^{M}{c}_{i} \cdot {\widehat{\mathbf{u}}}_{i}.
81
+ $$
82
+
83
+ The SR process allows SSC to generate refined stain separation results by tuning the weights $\left\{ {c}_{i}\right\}$, which then allows the following CNN to concentrate on the structural variances of tissue images.
84
+
85
+ Data: $\left\{ {{\widehat{\mathbf{u}}}_{i} \mid i = 1,\ldots ,M}\right\} \leftarrow$ The grouped outputs of the candidate layer;
86
+
87
+ $R \leftarrow$ The number of routings;
88
+
89
+ SparseRouting $\left( {\left\{ {\widehat{\mathbf{u}}}_{i}\right\} ,R}\right)$ :
90
+
91
+ for all the group $i$ in the candidate layer: ${b}_{i} \leftarrow 0,{c}_{i} \leftarrow 1/M$ ;
92
+
93
+ for $r = 1$ to $R$ do
94
+
95
+ $\widehat{\mathbf{s}} \leftarrow \mathop{\sum }\limits_{i}{c}_{i} \cdot {\widehat{\mathbf{u}}}_{i};$
96
+
97
+ for all the group $i$ in the candidate layer: ${b}_{i} \leftarrow {b}_{i} +$ SparseScore $\left( {{\widehat{\mathbf{u}}}_{i} + \widehat{\mathbf{s}}}\right)$ ;
98
+
99
+ for all the group $i$ in the candidate layer: ${c}_{i} \leftarrow \exp \left( {b}_{i}\right) /\mathop{\sum }\limits_{i}\exp \left( {b}_{i}\right)$ ;
100
+
101
+ end
102
+
103
+ return ${\left\{ {c}_{i}\right\} }_{1}^{M}$ ;
104
+
105
+ Algorithm 1: The algorithm of sparsity routing.
106
+
107
+ § 2.3 TRAINING AND APPLICATION OF SSC
108
+
109
+ The SSC module is essentially a convolutional neural network. Therefore, it can be directly attached to an application-driven CNN and trained end-to-end along with the target of that CNN. The assembled stain separation result $\mathbf{s}$ is the output of the SSC module and meanwhile the input of the following CNN. To ensure $\mathbf{s}$ preserves the structural information of the histopathological image, a reconstruction layer is appended to the end of SSC and a mean square error (MSE) loss is computed between the original image and the reconstructed result. The MSE loss is merged into the loss of the following CNN in the training stage. Note that SR is performed only in the forward pass and the scalars ${c}_{i}$ are treated as constants in the backward pass [9].
110
+
111
+ § 3 EXPERIMENT
112
+
113
+ § 3.1 EXPERIMENTAL SETTINGS
114
+
115
+ The proposed SSC module was validated on the Camelyon16${}^{5}$ and ACDC-LungHP${}^{6}$ datasets $\left\lbrack {2,6}\right\rbrack$ via histopathological image classification tasks. Regions with cancer in the WSIs were annotated by pathologists. Image patches of size ${224} \times {224}$ were randomly sampled from the WSIs. Patches containing more than ${75}\%$ cancerous pixels according to the annotation were labeled as positive, and patches containing no cancerous pixels were labeled as negative. The other patches were not used in the experiments.
116
+
117
+ The DenseNet-121 CNN structure with a softmax output was employed for classification. Sensitivity, specificity, accuracy and the area under the ROC curve (AUC) were used as evaluation metrics. ${20}\%$ of the samples in the training set were reserved for validation and the remainder was used to train the model. The hyper-parameters $M,N,R$ in SSC were tuned on the training set and determined according to the classification error on the validation samples. Specifically, (M, N, R) was determined as (5, 3, 3) for Camelyon16 and (4, 3, 4) for ACDC-LungHP. $S$ was set to 2 because the images are all from H&E-stained histology.
118
+
119
+ § 3.2 RESULTS AND DISCUSSION
120
+
121
+ The classification performance on the Camelyon16 testing set is presented in Table 1, where three state-of-the-art methods $\left\lbrack {{11},{12},{16}}\right\rbrack$ are compared ${}^{7}$ . Table 1 also summarizes the dependencies and properties of each compared method. Overall, our SSC module is the most effective in improving the classification performance. Data standardization appeared to be less effective on the ACDC-LungHP dataset than on Camelyon16, since the color consistency of the former dataset is relatively better than that of the latter.
122
+
123
+ Stain augmentation [11] utilizes prior knowledge of slide staining to augment the color allocation of training images. Therefore, the classification network using stain augmentation [11] achieved better classification metrics than the one using a common color augmentation method (random illumination, saturation, hue and contrast transforms in our experiment). However, the method can generate images with unreasonable color styles. Such samples act as noise in CNN training and reduce classification accuracy when the color distribution is already consistent (see the results on the ACDC-LungHP dataset). ACD [16] and CNN-norm [12] achieved competitive results. Nevertheless, ACD requires estimating standardization parameters individually for each testing image and relies on the context information of the corresponding WSI. CNN-norm learns a general principle for images in different color styles with millions of model parameters $\left( { > {10}^{7}}\right)$ . The computational cost of CNN-norm is comparable to, or even larger than, that of the following HIA application.
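A generic color-augmentation baseline of the kind compared above can be sketched as random photometric jitter. The function below covers only the illumination and contrast components (hue/saturation jitter would need an HSV round trip and is omitted); the `strength` parameter is an assumption for illustration.

```python
import numpy as np

def color_augment(img, rng, strength=0.1):
    """Simplified color augmentation: a random global illumination shift
    plus per-channel contrast scaling. img is float in [0, 1], H x W x 3."""
    shift = rng.uniform(-strength, strength)              # illumination
    scale = rng.uniform(1 - strength, 1 + strength, 3)    # per-channel contrast
    return np.clip(img * scale + shift, 0.0, 1.0)
```

Because the jitter is drawn blindly, such a baseline can produce color styles that never occur in real staining, which is exactly the weakness stain-aware augmentation [11] addresses.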
124
+
125
+ ${}^{5}$ https://camelyon16.grand-challenge.org/
126
+
127
+ ${}^{6}$ https://acdc-lunghp.grand-challenge.org/. Since the annotations of the testing part of the dataset are not yet accessible, only the 150 training WSIs were used in this paper.
128
+
129
+ ${}^{7}$ The compared methods have been briefly introduced in Section 1.
130
+
131
+ Table 1. Standardization performance for histopathological image patch classification. The compared model properties include the number of model parameters $\left( {n}_{\text{ param }}\right)$ , whether the method relies on manually selected templates (T.) and whether the model parameters need to be estimated for each testing image (E.).
132
+
133
+ Methods | Camelyon16 (Sen / Spe / Acc / AUC) | ACDC-LungHP (Sen / Spe / Acc / AUC) | ${n}_{param}$ | T./E.
+ Origin | 0.851 / 0.969 / 0.910 / 0.957 | 0.822 / 0.779 / 0.801 / 0.882 | - | -
+ Color Aug | 0.868 / 0.950 / 0.909 / 0.958 | 0.836 / 0.760 / 0.798 / 0.881 | None | No/No
+ Stain Aug [11] | 0.875 / 0.946 / 0.911 / 0.967 | 0.819 / 0.778 / 0.799 / 0.882 | None | No/No
+ ACD [16] | 0.892 / 0.944 / 0.918 / 0.968 | 0.836 / 0.776 / 0.805 / 0.886 | $< {10}^{1}$ | Yes/Yes
+ CNN-norm [12] | 0.875 / 0.970 / 0.922 / 0.971 | 0.821 / 0.788 / 0.804 / 0.886 | $> {10}^{7}$ | Yes/No
+ SSC (Ours) | 0.894 / 0.966 / 0.930 / 0.975 | 0.840 / 0.778 / 0.805 / 0.887 | $< {10}^{2}$ | No/No
159
+
160
+ < g r a p h i c s >
161
+
162
+ Fig. 2. Joint display of the original images and the reconstructed images.
163
+
164
+ In comparison, our SSC module involves only tens of model parameters, does not rely on contextual information outside the scope of the testing image, requires no additional parameter estimation in the prediction stage, and can be trained in an end-to-end fashion. These properties make the SSC module more efficient and convenient than existing methods in both the training and the deployment of HIA applications. Figure 2 illustrates original images and the corresponding reconstructed images from the Camelyon16 dataset. Without any template images, SSC appears to have learned a "mean" stain style in the reconstruction layer for images of diverse color appearance. This indicates a uniform representation at the SSC output layer, which allows the following CNN to concentrate on structural discrimination in histopathological images and thus improves the performance of the HIA application.
165
+
166
+ § 4 CONCLUSION
167
+
168
+ In this paper, we proposed a novel stain standardization module, named the stain standardization capsule, for histopathological image analysis, based on the optical properties of tissue section staining and the insight of dynamic routing from capsule networks. The proposed module is implemented in the domain of convolutional neural networks and can therefore be directly attached to CNN-based HIA applications. The proposed method was evaluated on the task of histopathological image classification on two public datasets. The results demonstrate the effectiveness and robustness of the proposed method.
169
+
170
+ § ACKNOWLEDGMENT
171
+
172
+ This work was supported by the National Natural Science Foundation of China (No. 61901018, 61771031 and 61906058), China Postdoctoral Science Foundation (No. 2019M650446) and Motic-BUAA Image Technology Research Center.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BJxZ3ZH1-S/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,163 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Texture-based classification of confocal laser endomicroscopy images for Barrett's esophagus surveillance
2
+
3
+ Giacomo Nardi ${}^{1}$ , Marzieh Kohandani Tafreshi ${}^{1}$ , Jessie Mahé ${}^{1}$ , Nicholas Ayache ${}^{2}$ , and François Lacombe ${}^{1}$
4
+
5
+ ${}^{1}$ Mauna Kea Technologies, 9 Rue d’Enghien, 75010 Paris, France \{giacomo, marzieh, jessie, francois\}@maunakeatech.com ${}^{2}$ INRIA Sophia Antipolis - Méditerranée, 2004 route des lucioles - BP 93, 06902
6
+
7
+ Sophia Antipolis Cedex France
8
+
9
+ Nicholas.Ayache@inria.fr
10
+
11
+ Abstract. Barrett's esophagus is a complication of gastroesophageal reflux disease that causes a transformation of the esophageal epithelium carrying a high risk of progressing into adenocarcinoma. The surveillance of changes in the esophageal mucosa is essential to estimate cancer progression. Confocal laser endomicroscopy is a novel imaging technique allowing physicians to perform in-vivo and real-time histological analysis in order to decrease the number of biopsies needed for diagnosis. This paper uses the notion of local density function to extract characteristic morphologies of tissues. This allows us to define a novel classification method for Barrett's images based on fractal textures. The method performs particularly well on pre-cancer stages with an overall accuracy of 89.2%.
12
+
13
+ Keywords: Barrett's esophagus - confocal laser endomicroscopy - texture analysis - fractal local density - image classification.
14
+
15
+ ## 1 Introduction
16
+
17
+ ### 1.1 Medical Context
18
+
19
+ Barrett's esophagus designates a transformation (metaplasia) of the tissue lining the inside of the esophagus into intestinal or gastric-type tissue [3]. The main cause of Barrett's esophagus is gastroesophageal reflux, which induces such a modification in order to make the tissue more resistant to acid exposure. Intestinal and gastric metaplasia are highly associated with further transition into esophageal adenocarcinoma, and therefore demand a periodic follow-up.
20
+
21
+ The surveillance of Barrett's esophagus evolution is traditionally obtained through microscopic analysis of several biopsies taken during endoscopies. Such a process prevents a complete examination of the tissue and is highly invasive for the patient. Moreover, biopsy results are not immediately available. To address these drawbacks, a new and promising technology is Cellvizio, developed by Mauna Kea Technologies, Paris. This endomicroscopy system enables practitioners to perform in-vivo confocal microscopy (optical biopsy) and instantaneously provides microscopic images in a minimally invasive manner.
22
+
23
+ This technology is beneficial on several levels: it allows an easier follow-up in case of cancer, and it also enables a closer analysis at earlier stages. In fact, when there is no visible sign of metaplasia or cancer, only a few biopsies are taken by physicians, and the pre-cancer stages often go undetected.
24
+
25
+ Classification of microscopic Barrett's esophagus images is then a crucial challenge to facilitate physician's decision-making in both pre-cancer and cancer stages.
26
+
27
+ ### 1.2 Previous works
28
+
29
+ The standard classification for Barrett's esophagus is made through four classes that correspond to the main stages of the disease (see Fig. 1): Squamous Epithelium (SE), Intestinal Metaplasia (IM), Gastric Metaplasia (GM), Dysplasia or Cancer (DC).
30
+
31
+ These stages clearly show the transformation of the healthy epithelium (SE), recognizable by its tile-like appearance, into the cancerous tissue (DC) characterized by a disorganization at the cellular layer.
32
+
33
+ Intestinal and gastric metaplasia are respectively characterized by the appearance of columnar mucosa and goblet cells (IM) and of gastric pits (GM).
34
+
35
+ ![01963a55-864a-75bf-9d25-a404ffa5c8bc_1_391_1220_1042_268_0.jpg](images/01963a55-864a-75bf-9d25-a404ffa5c8bc_1_391_1220_1042_268_0.jpg)
36
+
37
+ Fig.1. From left to right : squamous epithelium (SE), intestinal metaplasia (IM), gastric metaplasia (GM), dysplasia or cancer (DC)
38
+
39
+ In [12], a binary tree classifier is defined to distinguish IM, GM, and DC on the basis of LBP-textures and Level-Sets geometric information of confocal images (overall accuracy of 96%). In [5], the proposed classification model improves images via an ad-hoc filter to extract several features (GLCM, LBP, fractal textures, wavelet features) achieving an overall accuracy of ${90.4}\%$ for classifying the IM class. In [4], a fractal-based filter is used to improve images before extracting features (LBP, GLCM, fractal features) with an overall accuracy of 96% over the four previous classes. Similar techniques are used in $\left\lbrack {6,7}\right\rbrack$ . A deep learning model using a convolutional neural network is proposed in [9] to distinguish IM, GM, and DC (overall accuracy of 80.7% on a small dataset).
40
+
41
+ ### 1.3 Contributions
42
+
43
+ The goal of this work is to define a suitable texture-based algorithm for Barrett’s images acquired by the Cellvizio system. This device has a high frame rate (around ten images per second), but the quality of images is quite low (SNR low, frequent local changes of illumination, non-rigid distortions due to the contact of the probe with the tissue).
44
+
45
+ Our first contribution consists in using the notion of local density function for image preprocessing. This allows us to detect cellular structures in case of changes of illumination so that a better segmentation is possible.
46
+
47
+ The second contribution consists in defining a novel texture based on the fractal properties for the level sets of density functions. To our knowledge this is the first paper using these features for endomicroscopic images. An accuracy of ${89.2}\%$ is obtained over all the pre-cancer stages.
48
+
49
+ Finally, in comparison to the previously cited studies on Barrett's image classification, our result is obtained on a large dataset, which demonstrates the robustness of texture-based classification for pre-cancer stages. Moreover, these studies use different datasets, with images acquired with a different technology. For this reason, we cannot consider their results as a reference, and a comparison with alternative methods is given instead.
50
+
51
+ ## 2 Data and methods
52
+
53
+ ### 2.1 Dataset
54
+
55
+ The dataset consists of 1694 images, collected from 31 patients throughout several clinical trials : 362 images of SE (12 patients), 560 images of IM (9 patients), and 772 images of GM (11 patients).
56
+
57
+ All images have been acquired by the Cellvizio system and are available on the website www.cellvizio.net. The acquisition is made by a confocal mini-probe GastroFlex UHD (around 30000 optical fibers, depth of observation of ${60\mu }\mathrm{m}$ , field of view of ${240\mu }\mathrm{m}$ , and a resolution of ${1\mu }\mathrm{m}$ ) with a frame rate of around ten images per second.
58
+
59
+ Cellvizio is a fiber-bundle endoscopy system producing circular images [2]. In the following, in order to keep the maximum information of each image and avoid border effects, the largest square inscribed in the circular bundle is considered.
60
+
61
+ ### 2.2 Fractal features
62
+
63
+ In order to describe the different topologies appearing in the images some level-sets-based features are proposed in [12]. However, in our images, due to low SNR and frequent changes of illumination, intensity level-sets do not enable image segmentation.
64
+
65
+ To overcome this problem, the local density function (LDF) is used to define textures [11]. This takes into account the local growth of the signal instead of its intensity. For each pixel $x$ the local density function is defined as:
66
+
67
+ $$
68
+ \operatorname{LDF}\left( x\right) = \mathop{\lim }\limits_{{\mathrm{r} \rightarrow 0}}\frac{\log \mu \left( {\mathrm{B}\left( {x,\mathrm{r}}\right) }\right) }{\log \mathrm{r}}
69
+ $$
70
+
71
+ where $B\left( {x, r}\right)$ denotes the circular neighborhood of $x$ of radius $r$ and $\mu \left( {B\left( {x, r}\right) }\right)$ denotes the sum of the intensity values over $B\left( {x, r}\right)$ . We compute the density by linearly fitting $\log \mu \left( {B\left( {x, r}\right) }\right)$ against $\log r$ for radius values ranging from 3 to 13 pixels.
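The per-pixel slope fit can be sketched with an integral image, which makes the neighborhood sums cheap. This is an illustrative reimplementation, not the authors' code: a square box stands in for the circular $B(x, r)$, and the chosen radii subsample the 3-13 pixel range.

```python
import numpy as np

def box_sum(img, r):
    """Integral-image sum of intensities over the (2r+1)x(2r+1) box
    centred on each pixel, with edge padding at the borders."""
    h, w = img.shape
    pad = np.pad(img, r + 1, mode="edge")
    c = pad.cumsum(axis=0).cumsum(axis=1)
    k = 2 * r + 1
    # inclusion-exclusion on the integral image
    return c[k:k + h, k:k + w] - c[:h, k:k + w] - c[k:k + h, :w] + c[:h, :w]

def local_density_map(img, radii=(3, 5, 7, 9, 11, 13)):
    """Estimate the LDF at each pixel as the least-squares slope of
    log mu(B(x, r)) against log r (square box approximating the disc)."""
    img = np.asarray(img, float) + 1e-9              # avoid log(0)
    log_r = np.log(np.asarray(radii, float))
    log_mu = np.stack([np.log(box_sum(img, r)) for r in radii])
    lr = log_r - log_r.mean()
    return (lr[:, None, None] * (log_mu - log_mu.mean(axis=0))).sum(axis=0) / (lr ** 2).sum()
```

On a region of uniform intensity the mass grows roughly like $r^2$, so the estimated density approaches 2, while brighter-than-surroundings structures drop below it, which is what makes the map robust to illumination changes.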
72
+
73
+ In [13], an LDF-based feature is proposed and proved to be invariant to local changes of illumination. This descriptor is used in [8] to classify endoscopic images of colonic polyps. In Fig. 2 some examples of segmentation via intensity values and via LDF are shown. We can observe that LDF-based segmentation provides more detail and is invariant to variations of illumination.
74
+
75
+ ![01963a55-864a-75bf-9d25-a404ffa5c8bc_3_389_981_1044_533_0.jpg](images/01963a55-864a-75bf-9d25-a404ffa5c8bc_3_389_981_1044_533_0.jpg)
76
+
77
+ Fig. 2. From left to right : original image, local density function, intensity level set (in red), density level set (in red) for SE (top) and IM (bottom)
78
+
79
+ In the following, we consider the level sets of local density functions corresponding to values ranging from 1.4 to 2.7 (these values were set empirically after computation of all LDF maps). For each level set we compute several fractal features: the ratio of area to perimeter, the fractal dimension of the level lines (computed via the box-counting method), and the lacunarity describing how level sets fill the space (computed via gliding-box counting [1]). This finally leads to a 42-dimensional vector of fractal features.
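Of the three per-level-set features, the box-counting dimension is the most standard; a minimal sketch (grid sizes are illustrative, and the trimming to an even grid is a simplifying choice):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Box-counting estimate of the fractal dimension of a binary level
    set: count the boxes of side s containing at least one foreground
    pixel, then fit log(count) against log(1/s)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        m = mask[: h - h % s, : w - w % s]   # trim so the grid fits evenly
        boxes = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    x = np.log(1.0 / np.asarray(sizes, float))
    y = np.log(np.asarray(counts, float))
    return np.polyfit(x, y, 1)[0]            # slope = estimated dimension
```

A filled region gives a dimension near 2 and a thin line near 1; the level lines of tissue morphologies fall in between, which is what makes the feature discriminative.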
80
+
81
+ ### 2.3 LBP-texture features
82
+
83
+ A very common type of texture is the Local Binary Pattern (LBP) [10]. For each pixel a binary pattern is defined by comparing its intensity with the intensities of the pixels on its circular neighborhood. We use in particular rotation-invariant uniform patterns (a uniform pattern has at most two 0-to-1 or 1-to-0 transitions). A multi-scale analysis is possible by choosing different radii for the circular neighborhoods.
84
+
85
+ In the following, we consider radii $= \left\lbrack {3,7,{11},{15},{19}}\right\rbrack$ with 16 neighbors on each circle; the choice of different radii enables a multi-scale analysis. Once the LBP image has been computed, some statistical indicators of its histogram are considered: mean, variance, entropy, skewness and kurtosis. This finally leads to a vector of 25 LBP features.
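The five indicators per scale can be sketched as below. This is one plausible reading of "statistical indicators of its histogram": entropy is taken from the normalized histogram of the LBP codes, while the moment statistics are taken over the code values themselves; the number of bins is an assumption.

```python
import numpy as np

def histogram_statistics(values, bins=10):
    """Mean, variance, entropy, skewness and kurtosis for an LBP code
    image (flattened to 1-D); entropy uses the normalized histogram."""
    v = np.asarray(values, float).ravel()
    hist, _ = np.histogram(v, bins=bins)
    p = hist / hist.sum()
    mean = v.mean()
    var = v.var()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    std = np.sqrt(var) + 1e-12               # guard against zero variance
    skew = np.mean(((v - mean) / std) ** 3)
    kurt = np.mean(((v - mean) / std) ** 4)
    return mean, var, entropy, skew, kurt
```

Applied to the LBP image at each of the 5 radii, this yields the 25-dimensional LBP feature vector.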
86
+
87
+ ## 3 Results
88
+
89
+ In this section we present the results of our model and its comparison with other texture-based methods. We consider the classification of the pre-cancer stages (SE, IM, GM) and we discuss the difficulties to generalize our model to the four-classes problem (SE, IM, GM, DC).
90
+
91
+ ### 3.1 Classification of pre-cancer stages
92
+
93
+ Experiments are carried out on a dataset of 1694 images of SE, IM, and GM. We consider 25 LBP-features and the 42-dimensional vector of fractal profiles.
94
+
95
+ We remind the reader that, since the Cellvizio system has a high frame rate, videos from the same patient may contain highly correlated images. For this reason we use the Leave-One-Patient-Out Cross-Validation (LOPO-CV) strategy to validate our model. We also point out that this kind of validation is the most coherent with how the algorithm would be applied during medical procedures.
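The LOPO-CV split generator reduces to grouping image indices by patient, so that correlated frames from one video never straddle the train/test boundary; a minimal sketch:

```python
def leave_one_patient_out(patient_ids):
    """Leave-one-patient-out splits: for each patient, all of that
    patient's images form the test fold and every other image is used
    for training. Yields (train_indices, test_indices) pairs."""
    patients = sorted(set(patient_ids))
    for held_out in patients:
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        yield train, test
```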
96
+
97
+ Moreover, we also evaluate the model via a split into training and test sets with no patient in common (17 patients for the training set and 14 for the test set).
98
+
99
+ Classification is performed via both a linear SVM and a Random Forest (RF) classifier. All parameters (the margin-of-error C for the SVM and the number of trees for the Random Forest) are tuned by LOPO-CV on the training sets.
100
+
101
+ We finally obtain an overall accuracy of 88.5% via the SVM classifier and of ${89.2}\%$ via the Random Forest classifier by using the LOPO-CV strategy. The confusion matrices are shown in Tab. 1.
102
+
103
+ As pointed out in Section 1.3, the published papers on Barrett's classification cannot be used as a reference because of the different technology used to acquire the images. We compare our model (LBP+FRACT) with the following methods: LBP textures alone as defined in Section 2.3 (LBP), fractal textures alone as defined in Section 2.2 (FRACT), and the Smart Atlas method [2].
104
+
105
+ Table 1. Confusion Matrix using LOPO-CV for pre-cancerous stages.
106
+
107
+ <table><tr><td/><td colspan="3">SVM</td><td colspan="3">Random Forest</td></tr><tr><td>GROUND TRUTH</td><td>SE</td><td>$\mathbf{{IM}}$</td><td>GM</td><td>SE</td><td>$\mathbf{{IM}}$</td><td>GM</td></tr><tr><td>SE</td><td>328</td><td>17</td><td>17</td><td>315</td><td>28</td><td>19</td></tr><tr><td>$\mathbf{{IM}}$</td><td>15</td><td>511</td><td>34</td><td>19</td><td>509</td><td>32</td></tr><tr><td>GM</td><td>41</td><td>71</td><td>660</td><td>50</td><td>35</td><td>687</td></tr></table>
108
+
109
+ The Smart Atlas method performs image classification combining a content-based image retrieval (CBIR) [2] with a k-nearest neighbors (k-NN) voting scheme.
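The k-NN voting step of such a CBIR classifier can be sketched as below; Euclidean distance between feature signatures is a placeholder here, not the retrieval metric actually used by Smart Atlas.

```python
from collections import Counter
import numpy as np

def knn_vote(query, database, labels, k=5):
    """Retrieve the k database entries closest to the query signature
    and return the majority label among them."""
    d = np.linalg.norm(np.asarray(database, float) - np.asarray(query, float), axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(np.asarray(labels)[nearest]).most_common(1)[0][0]
```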
110
+
111
+ Tab. 2 summarizes the comparison results, showing that our algorithm outperforms the other methods. We also point out the high accuracy of FRACT, confirming that fractal textures based on local densities characterize tissue morphologies well.
112
+
113
+ Table 2. Accuracy comparison for the pre-cancer stages via LOPO-CV and splitting
114
+
115
+ <table><tr><td/><td colspan="2">ACCURACY (LOPO-CV)</td><td colspan="2">ACCURACY (SPLIT)</td></tr><tr><td>METHOD</td><td>SVM</td><td>$\mathbf{{RF}}$</td><td>SVM</td><td>$\mathbf{{RF}}$</td></tr><tr><td>Smart Atlas</td><td colspan="2">65.5%</td><td colspan="2">65.7%</td></tr><tr><td>LBP</td><td>69.1%</td><td>68.7%</td><td>80.1%</td><td>67%</td></tr><tr><td>FRACT</td><td>84.9%</td><td>83.2%</td><td>90%</td><td>80.4%</td></tr><tr><td>LBP+FRACT</td><td>$\mathbf{{88.5}\% }$</td><td>89.2%</td><td>89%</td><td>$\mathbf{{86.9}\% }$</td></tr></table>
116
+
117
+ Fig. 3 shows some examples of correct classifications for SE, IM and GM classes highlighting the variability of tissue morphologies within each class.
118
+
119
+ ### 3.2 Discussion
120
+
121
+ The previously defined model is based on texture-features which describe the cellular organization. This enables our classifier to well distinguish pre-cancer stages that have well defined tissue architectures.
122
+
123
+ The case of cancerous tissues is more complex. In fact, as explained above, the cancer stage is characterized by a highly disorganized cellular architecture, so that the presented characterization via morphology segmentation is less well adapted.
124
+
125
+ ![01963a55-864a-75bf-9d25-a404ffa5c8bc_6_428_324_948_721_0.jpg](images/01963a55-864a-75bf-9d25-a404ffa5c8bc_6_428_324_948_721_0.jpg)
126
+
127
+ Fig. 3. Examples of correct classifications for SE (top), IM (middle) and GM (bottom)
128
+
129
+ The next step consists in defining new features characterizing the different grades of cancer. This is a challenging problem because of the architectural variability of cancerous tissues. Moreover, no criterion yet exists to distinguish the different grades of cancer. A finer analysis on a larger dataset is therefore needed in order to define suitable textures based on a collection of morphologies within the cancer class.
130
+
131
+ ## 4 Conclusion
132
+
133
+ This paper introduces a method to extract characteristic structural features from microendoscopic images with low SNR and unstable intensity. This provides a more robust description of the cellular architecture of tissues compared to intensity-based segmentation.
134
+
135
+ We used this method to define a classifier of pre-cancer stages for Barrett's esophagus. Previous results show that LBP and fractal features well characterize SE, IM, and GM stages. A classification result shows an overall accuracy of 89.2% on a large dataset.
136
+
137
+ ## References
138
+
139
+ 1. Allain, C., Cloitre, M.: Characterizing the lacunarity of random and deterministic fractal sets. Phys. Rev. A 44(6), 3552-3558 (1991)
140
+
141
+ 2. André, B., Vercauteren, T., Buchner, A., Wallace, M., Ayache, N.: A smart atlas for endomicroscopy using automated video retrieval. Medical Image Analysis 15(4), 460-476 (August 2011)
142
+
143
+ 3. Coleman, H.G., Bhat, S.K., Murray, L.J., McManus, D.T., O'Neill, O.M., Gavin, A.T., Johnston, B.T.: Symptoms and endoscopic features at Barrett's esophagus diagnosis: implications for neoplastic progression risk. Am. J. Gastroenterol. 109(4), 527-534 (2014)
144
+
145
+ 4. Ghatwary, N., Ahmed, A., Grisan, E., Jalab, H., Bidaut, L., Ye, X.: In-vivo Barrett's esophagus digital pathology stage classification through feature enhancement of confocal laser endomicroscopy. Journal of Medical Imaging 6(1), 014502 (2019)
146
+
147
+ 5. Ghatwary, N., Ahmed, A., Ye, X., Jalab, H.: Automatic grade classification of Barretts Esophagus through feature enhancement. Proc. SPIE 10134, 1013433 (2017)
148
+
149
+ 6. Grisan, E., et al.: 239 computer aided diagnosis of Barrett's esophagus using confocal laser endomicroscopy: preliminary data. Gastrointest. Endoscopy 75(4), AB126 (2012)
150
+
151
+ 7. Grisan, E., Veronese, E., Diamantis, G.: Computer aided diagnosis of Barrett's esophagus using confocal laser endomicroscopy: preliminary data. Dig. Liver Dis. 44, S147-S148 (2012)
152
+
153
+ 8. Häfner, M., Tamaki, T., Tanaka, S., Uhl, A., Wimmer, G., Yoshida, S.: Local fractal dimension based approaches for colonic polyp classification. Medical Image Analysis 26(1), 92-107 (2015)
154
+
155
+ 9. Hong, J., Park, B.Y., Park, H.: Convolutional neural network classifier for distinguishing Barrett's esophagus and neoplasia endomicroscopy images. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS pp. 2892-2895 (2017)
156
+
157
+ 10. Pietikäinen, M., Hadid, A., Zhao, G., Ahonen, T.: Computer Vision Using Local Binary Patterns (2012)
158
+
159
+ 11. Varma, M., Garg, R.: Locally invariant fractal features for statistical texture classification. Proceedings of the IEEE International Conference on Computer Vision (2007)
160
+
161
+ 12. Veronese, E., Grisan, E., Diamantis, G., Battaglia, G., Crosta, C., Trovato, C.: Hybrid patch-based and image-wide classification of confocal laser endomicroscopy images in Barrett's esophagus surveillance. Proceedings - International Symposium on Biomedical Imaging pp. 362-365 (2013)
162
+
163
+ 13. Xu, Y., Hui, J., Fermüller, C.: Viewpoint invariant texture description using fractal analysis. International Journal of Computer Vision 83(1), 85-100 (2009)
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BJxZ3ZH1-S/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,170 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § TEXTURE-BASED CLASSIFICATION OF CONFOCAL LASER ENDOMICROSCOPY IMAGES FOR BARRETT'S ESOPHAGUS SURVEILLANCE
2
+
3
+ Giacomo Nardi ${}^{1}$ , Marzieh Kohandani Tafreshi ${}^{1}$ , Jessie Mahé ${}^{1}$ , Nicholas Ayache ${}^{2}$ , and François Lacombe ${}^{1}$
4
+
5
+ ${}^{1}$ Mauna Kea Technologies,9 Rue d’Enghien,75010 Paris, France {giacomo, marzieh, jessie, francois}@maunakeatech.com 2 INRIA Sophia Antipolis - Méditerranée 2004 route des lucioles - BP 93 06902
6
+
7
+ Sophia Antipolis Cedex France
8
+
9
+ Nicholas.Ayache@inria.fr
10
+
11
+ Abstract. Barrett's esophagus is a complication of gastroesophageal reflux disease that causes a transformation of the esophageal epithelium carrying a high risk of progressing into adenocarcinoma. The surveillance of changes in the esophageal mucosa is essential to estimate cancer progression. Confocal laser endomicroscopy is a novel imaging technique allowing physicians to perform in-vivo and real-time histological analysis in order to decrease the number of biopsies needed for diagnosis. This paper uses the notion of local density function to extract characteristic morphologies of tissues. This allows us to define a novel classification method for Barrett's images based on fractal textures. The method performs particularly well on pre-cancer stages with an overall accuracy of 89.2%.
12
+
13
+ Keywords: Barrett's esophagus - confocal laser endomicroscopy - texture analysis - fractal local density - image classification.
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ § 1.1 MEDICAL CONTEXT
18
+
19
+ Barrett's esophagus designates a transformation (metaplasia) of the tissue lining the inside of the esophagus into intestinal or gastric-type tissue [3]. The main cause of Barrett's esophagus is gastroesophageal reflux, which induces such a modification in order to make the tissue more resistant to acid exposure. Intestinal and gastric metaplasia are highly associated with further transition into esophageal adenocarcinoma, and therefore demand a periodic follow-up.
20
+
21
+ The surveillance of Barrett's esophagus evolution is traditionally obtained through microscopic analysis of several biopsies taken during endoscopies. Such a process prevents a complete examination of the tissue and is highly invasive for the patient. Moreover, biopsy results are not immediately available. To address these drawbacks, a new and promising technology is Cellvizio, developed by Mauna Kea Technologies, Paris. This endomicroscopy system enables practitioners to perform in-vivo confocal microscopy (optical biopsy) and instantaneously provides microscopic images in a minimally invasive manner.
22
+
23
+ This technology is beneficial on several levels: it allows an easier follow-up in case of cancer, and it also enables a closer analysis at earlier stages. In fact, when there is no visible sign of metaplasia or cancer, only a few biopsies are taken by physicians, and the pre-cancer stages often go undetected.
24
+
25
+ Classification of microscopic Barrett's esophagus images is then a crucial challenge to facilitate physician's decision-making in both pre-cancer and cancer stages.
26
+
27
+ § 1.2 PREVIOUS WORKS
28
+
29
+ The standard classification for Barrett's esophagus is made through four classes that correspond to the main stages of the disease (see Fig. 1): Squamous Epithelium (SE), Intestinal Metaplasia (IM), Gastric Metaplasia (GM), Dysplasia or Cancer (DC).
30
+
31
+ These stages clearly show the transformation of the healthy epithelium (SE), recognizable by its tile-like appearance, into the cancerous tissue (DC) characterized by a disorganization at the cellular layer.
32
+
33
+ Intestinal and gastric metaplasia are respectively characterized by the appearance of columnar mucosa and goblet cells (IM) and of gastric pits (GM).
34
+
35
+ < g r a p h i c s >
36
+
37
+ Fig.1. From left to right : squamous epithelium (SE), intestinal metaplasia (IM), gastric metaplasia (GM), dysplasia or cancer (DC)
38
+
39
+ In [12], a binary tree classifier is defined to distinguish IM, GM, and DC on the basis of LBP-textures and Level-Sets geometric information of confocal images (overall accuracy of 96%). In [5], the proposed classification model improves images via an ad-hoc filter to extract several features (GLCM, LBP, fractal textures, wavelet features) achieving an overall accuracy of ${90.4}\%$ for classifying the IM class. In [4], a fractal-based filter is used to improve images before extracting features (LBP, GLCM, fractal features) with an overall accuracy of 96% over the four previous classes. Similar techniques are used in $\left\lbrack {6,7}\right\rbrack$ . A deep learning model using a convolutional neural network is proposed in [9] to distinguish IM, GM, and DC (overall accuracy of 80.7% on a small dataset).
40
+
41
+ § 1.3 CONTRIBUTIONS
42
+
43
+ The goal of this work is to define a suitable texture-based algorithm for Barrett’s images acquired by the Cellvizio system. This device has a high frame rate (around ten images per second), but the quality of images is quite low (SNR low, frequent local changes of illumination, non-rigid distortions due to the contact of the probe with the tissue).
44
+
45
+ Our first contribution consists in using the notion of local density function for image preprocessing. This allows us to detect cellular structures in case of changes of illumination so that a better segmentation is possible.
46
+
47
+ The second contribution consists in defining a novel texture based on the fractal properties for the level sets of density functions. To our knowledge this is the first paper using these features for endomicroscopic images. An accuracy of ${89.2}\%$ is obtained over all the pre-cancer stages.
48
+
49
+ Finally, in comparison to the previously cited studies on Barrett's image classification, our result is obtained on a large dataset, which demonstrates the robustness of texture-based classification for pre-cancer stages. Moreover, these studies use different datasets, with images acquired with a different technology. For this reason, we cannot consider their results as a reference, and a comparison with alternative methods is given instead.
50
+
51
+ § 2 DATA AND METHODS
52
+
53
+ § 2.1 DATASET
54
+
55
+ The dataset consists of 1694 images, collected from 31 patients throughout several clinical trials : 362 images of SE (12 patients), 560 images of IM (9 patients), and 772 images of GM (11 patients).
56
+
57
+ All images have been acquired by the Cellvizio system and are available on the website www.cellvizio.net. The acquisition is made by a confocal mini-probe GastroFlex UHD (around 30000 optical fibers, depth of observation of ${60\mu }\mathrm{m}$ , field of view of ${240\mu }\mathrm{m}$ , and a resolution of ${1\mu }\mathrm{m}$ ) with a frame rate of around ten images per second.
58
+
59
+ Cellvizio is a fiber-bundle endoscopy system producing circular images [2]. In the following, in order to keep the maximum information of each image and avoid border effects, the largest square inscribed in the circular bundle is considered.
60
+
61
+ § 2.2 FRACTAL FEATURES
62
+
63
+ In order to describe the different topologies appearing in the images, some level-set-based features are proposed in [12]. However, in our images, due to the low SNR and frequent changes of illumination, intensity level sets do not enable reliable image segmentation.
64
+
65
+ To overcome this problem, the local density function (LDF) is used to define textures [11]. This takes into account the local growth of the signal instead of its intensity. For each pixel $x$ the local density function is defined as:
66
+
67
+ $$
68
+ \operatorname{LDF}\left( x\right) = \lim_{r \rightarrow 0}\frac{\log \mu \left( {B\left( {x,r}\right) }\right) }{\log r}
69
+ $$
70
+
71
+ where $B\left( {x,r}\right)$ denotes the circular neighborhood of $x$ of radius $r$ and $\mu \left( {B\left( {x,r}\right) }\right)$ denotes the sum of the intensity values within $B\left( {x,r}\right)$ . We compute the density by a linear fit of $\log \mu \left( {B\left( {x,r}\right) }\right)$ against $\log r$ for radii ranging from 3 to 13 pixels.
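As a concrete illustration, the per-pixel linear-fit computation of the LDF described above can be sketched as follows (a minimal NumPy sketch for a grayscale image; function and variable names are ours, not the authors'):

```python
# Minimal sketch of the local density function (LDF): the slope of
# log mu(B(x,r)) against log r, fitted by least squares over radii 3..13,
# as described above. Names and implementation details are illustrative.
import numpy as np

def local_density_function(img, radii=range(3, 14)):
    h, w = img.shape
    img = img.astype(float) + 1e-6                    # avoid log(0)
    log_r = np.log(np.array(list(radii), dtype=float))
    log_mu = []
    for r in radii:
        yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
        disk = yy**2 + xx**2 <= r**2                  # circular neighborhood B(x, r)
        padded = np.pad(img, r, mode="edge")
        mu = np.zeros_like(img)                       # mu(B(x, r)): sum of intensities
        for dy, dx in zip(*np.nonzero(disk)):
            mu += padded[dy:dy + h, dx:dx + w]
        log_mu.append(np.log(mu))
    log_mu = np.stack(log_mu)                         # shape (n_radii, h, w)
    lr = log_r - log_r.mean()                         # per-pixel least-squares slope
    return (lr[:, None, None] * (log_mu - log_mu.mean(0))).sum(0) / (lr ** 2).sum()
```

For a constant image the mass in $B(x,r)$ grows like the disk area, so the fitted slope is close to 2 everywhere; structured regions deviate from this value, which is what makes the LDF useful for segmentation.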
72
+
73
+ In [13], an LDF-based feature is proposed and is shown to be invariant to local changes of illumination. This descriptor is used in [8] to classify endoscopic images of colonic polyps. Fig. 2 shows some examples of segmentation via intensity values and via LDF. We can observe that LDF-based segmentation provides more detail and is invariant to variations of illumination.
74
+
75
76
+
77
+ Fig. 2. From left to right: original image, local density function, intensity level set (in red), and density level set (in red), for SE (top) and IM (bottom)
78
+
79
+ In the following, we consider the level sets of the local density function corresponding to values ranging from 1.4 to 2.7 (these values were set empirically after computing all LDF maps). For each level set we compute several fractal features: the ratio of area to perimeter, the fractal dimension of the level lines (computed via the box-counting method), and the lacunarity describing how the level sets fill space (computed via gliding-box counting [1]). This finally leads to a 42-dimensional vector of fractal features.
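For instance, the box-counting estimate of the fractal dimension of a binary level set can be sketched as follows (our illustrative implementation, not the authors' code; the box sizes are arbitrary choices):

```python
# Illustrative box-counting fractal dimension of a binary level set:
# N(s) boxes of side s intersect the set, and the dimension is the slope
# of log N(s) against log(1/s). Box sizes here are arbitrary assumptions.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]        # grid must tile exactly
        n_boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3)).sum()
        counts.append(max(int(n_boxes), 1))           # guard against log(0)
    slope, _ = np.polyfit(-np.log(sizes), np.log(counts), 1)
    return slope
```

A filled region yields a dimension close to 2 and a thin line close to 1, so the feature discriminates between space-filling and filamentary level-set geometries.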
80
+
81
+ § 2.3 LBP-TEXTURE FEATURES
82
+
83
+ A very common type of texture descriptor is the Local Binary Pattern (LBP) [10]. For each pixel a binary pattern is defined by comparing its intensity with the intensities of pixels on a circular neighborhood. We use in particular rotation-invariant uniform patterns (a uniform pattern has at most two 0-to-1 or 1-to-0 transitions). A multi-scale analysis is possible by choosing different radii for the circular neighborhoods.
84
+
85
+ In the following, we consider radii $\left\lbrack {3,7,{11},{15},{19}}\right\rbrack$ and 16 neighbors for each circle; the choice of several radii enables a multi-scale analysis. Once the LBP image has been computed, some statistical indicators of its histogram are considered: mean, variance, entropy, skewness, and kurtosis. This finally leads to a vector of 25 LBP features.
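A simplified sketch of the rotation-invariant uniform LBP coding and the histogram statistics (using 8 integer-offset neighbors at radius 1 instead of the 16 interpolated neighbors per radius used in the paper; all names are ours):

```python
# Simplified sketch of rotation-invariant uniform LBP (riu2) and the five
# histogram statistics used as features. P = 8 nearest neighbors is an
# assumption for brevity; the paper uses 16 interpolated neighbors.
import numpy as np

def lbp_riu2(img):
    """Rotation-invariant uniform LBP codes for the interior pixels."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]         # circular neighbor order
    c = img[1:-1, 1:-1]
    bits = np.stack([img[1 + dy:img.shape[0] - 1 + dy,
                         1 + dx:img.shape[1] - 1 + dx] >= c
                     for dy, dx in offs]).astype(int)
    # number of 0<->1 transitions around the circular pattern
    transitions = np.abs(np.diff(np.vstack([bits, bits[:1]]), axis=0)).sum(0)
    ones = bits.sum(0)
    # uniform patterns (<= 2 transitions) keep their bit count, rest -> P + 1
    return np.where(transitions <= 2, ones, 9)

def lbp_stats(img):
    """Mean, variance, entropy, skewness, kurtosis of the LBP code histogram."""
    codes = lbp_riu2(img).ravel()
    hist = np.bincount(codes, minlength=10).astype(float)
    p = hist / hist.sum()
    mean, var = codes.mean(), codes.var()
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    skew = ((codes - mean) ** 3).mean() / (var ** 1.5 + 1e-12)
    kurt = ((codes - mean) ** 4).mean() / (var ** 2 + 1e-12)
    return np.array([mean, var, entropy, skew, kurt])
```

Repeating the five statistics for each of the five radii yields the 25-dimensional LBP feature vector described above.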
86
+
87
+ § 3 RESULTS
88
+
89
+ In this section we present the results of our model and compare it with other texture-based methods. We consider the classification of the pre-cancer stages (SE, IM, GM) and discuss the difficulty of generalizing our model to the four-class problem (SE, IM, GM, DC).
90
+
91
+ § 3.1 CLASSIFICATION OF PRE-CANCER STAGES
92
+
93
+ Experiments are carried out on a dataset of 1694 images of SE, IM, and GM. We consider 25 LBP-features and the 42-dimensional vector of fractal profiles.
94
+
95
+ We recall that, since the Cellvizio system has a high frame rate, videos from the same patient may contain highly correlated images. For this reason we use the Leave-One-Patient-Out Cross-Validation (LOPO-CV) strategy to validate our model. We also point out that this kind of validation is the most consistent with how the algorithm would be applied during medical procedures.
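The patient-level splitting can be sketched as follows (a minimal illustrative version; each fold holds out every image of one patient, so correlated frames from the same video never span the train/test boundary):

```python
# Minimal sketch of Leave-One-Patient-Out cross-validation (LOPO-CV):
# one fold per distinct patient, with all of that patient's images held out.
def lopo_splits(patient_ids):
    """Yield (train_indices, test_indices), one fold per distinct patient."""
    for patient in sorted(set(patient_ids)):
        test = [i for i, p in enumerate(patient_ids) if p == patient]
        train = [i for i, p in enumerate(patient_ids) if p != patient]
        yield train, test
```

scikit-learn users can obtain the same folds with `sklearn.model_selection.LeaveOneGroupOut`, passing the patient identifiers as `groups`.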
96
+
97
+ Moreover, we also evaluate the model via a split into training and test sets with no common patient (17 patients for the training set and 14 for the test set).
98
+
99
+ Classification is performed with both a linear SVM and a Random Forest (RF) classifier. All parameters (the margin-of-error parameter C for the SVM and the number of trees for the Random Forest) are tuned by LOPO-CV on the training sets.
100
+
101
+ We finally obtain an overall accuracy of 88.5% with the SVM classifier and of 89.2% with the Random Forest classifier using the LOPO-CV strategy. The confusion matrices are shown in Tab. 1.
102
+
103
+ As pointed out in Section 1.3, the published papers on Barrett's classification cannot be used as a reference because of the different technology used to acquire the images. We compare our model (LBP+FRACT) with the following methods: LBP textures alone as defined in Section 2.3 (LBP), fractal textures alone as defined in Section 2.2 (FRACT), and the Smart Atlas method [2].
104
+
105
+ Table 1. Confusion Matrix using LOPO-CV for pre-cancerous stages.
106
+
107
+ | GROUND TRUTH | SVM: SE | SVM: IM | SVM: GM | RF: SE | RF: IM | RF: GM |
+ |---|---|---|---|---|---|---|
+ | SE | 328 | 17 | 17 | 315 | 28 | 19 |
+ | IM | 15 | 511 | 34 | 19 | 509 | 32 |
+ | GM | 41 | 71 | 660 | 50 | 35 | 687 |
124
+
125
+ The Smart Atlas method performs image classification by combining content-based image retrieval (CBIR) [2] with a k-nearest-neighbors (k-NN) voting scheme.
126
+
127
+ Tab. 2 summarizes the comparison, showing that our algorithm outperforms the other methods. We also point out the high accuracy of FRACT, confirming that fractal textures based on local densities characterize the tissue morphologies well.
128
+
129
+ Table 2. Accuracy comparison for the pre-cancer stages via LOPO-CV and splitting
130
+
131
+ | METHOD | SVM (LOPO-CV) | RF (LOPO-CV) | SVM (SPLIT) | RF (SPLIT) |
+ |---|---|---|---|---|
+ | Smart Atlas | 65.5% | 65.5% | 65.7% | 65.7% |
+ | LBP | 69.1% | 68.7% | 80.1% | 67% |
+ | FRACT | 84.9% | 83.2% | 90% | 80.4% |
+ | LBP+FRACT | **88.5%** | 89.2% | 89% | **86.9%** |
151
+
152
+ Fig. 3 shows some examples of correct classifications for SE, IM and GM classes highlighting the variability of tissue morphologies within each class.
153
+
154
+ § 3.2 DISCUSSION
155
+
156
+ The previously defined model is based on texture features that describe the cellular organization. This enables our classifier to distinguish well between pre-cancer stages, which have well-defined tissue architectures.
157
+
158
+ The case of cancerous tissues is more complex. In fact, as explained above, the cancer stage is characterized by a highly disorganized cellular architecture, so the presented characterization based on morphology segmentation is less suitable.
159
+
160
161
+
162
+ Fig. 3. Examples of correct classifications for SE (top), IM (middle) and GM (bottom)
163
+
164
+ The next step consists in defining new features characterizing the different grades of cancer. This is a challenging problem because of the architectural variability of cancerous tissues. Moreover, no criterion yet exists to distinguish the different grades of cancer. A finer analysis on a larger dataset is therefore needed in order to define suitable textures based on a collection of morphologies within the cancer class.
165
+
166
+ § 4 CONCLUSION
167
+
168
+ This paper introduces a method to extract characteristic structural features from microendoscopic images with low SNR and unstable intensity. This provides a more robust description of the cellular architecture of tissues compared to intensity-based segmentation.
169
+
170
+ We used this method to define a classifier of pre-cancer stages for Barrett's esophagus. The results above show that LBP and fractal features characterize the SE, IM, and GM stages well, with an overall accuracy of 89.2% on a large dataset.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/Bkgwe3GnZB/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,175 @@
1
+ # Fast and stable color normalization of whole slide histopathology images using deep texture and color moment matching
2
+
3
+ Kyohei Sano ${}^{1}$ , Daisuke Komura ${}^{1}$ , Shumpei Ishikawa ${}^{1}$
4
+
5
+ ${}^{1}$ Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, 1130033, Tokyo, Japan.
6
+
7
+ Abstract. Whole Slide Images (WSIs) are prone to color variations due to differences in fixation and staining conditions of tissue samples, as well as the scanning process. Such variations can adversely affect image analysis. In this paper, we propose a novel, fast, and stable color normalization algorithm for WSIs called CONTEMM (COlor Normalization using deep TExture and color Moment Matching). CONTEMM estimates a color transformation matrix from pairs of reference and source patches with similar tissue components in the respective WSIs, selected using deep texture representations. The transformation matrix is estimated quickly by fitting the second moment about white.
8
+
9
+ The performance of the CONTEMM algorithm was evaluated using histopathology images from different slide scanners and from TCGA (The Cancer Genome Atlas) datasets. CONTEMM was shown to outperform the other methods (Reinhard, Vahadane, and Macenko) in terms of variation (stability), accuracy, and computation time.
10
+
11
+ ## 1 Introduction
12
+
13
+ In histopathology, tissue sections are stained with multiple contrasting dyes (e.g., the most widely used hematoxylin and eosin (HE) stain) to highlight different tissue structures and cellular features, and pathologists diagnose diseases under the microscope. However, histopathological images contain undesired variability of HE stain appearance due to differences in fixation, staining procedures, and scanners. Color normalization methods, which reduce the color variation of source images using a reference image, are often effective in improving the performance of histopathological image analysis.
14
+
15
+ Although various color normalization methods have been developed so far, most focus on normalizing patches sampled from WSIs, and few are optimized for gigapixel-sized WSIs. For example, Reinhard et al. [1] proposed a patch normalization method that matches the color distribution of source patches to a reference patch in L*a*b* color space. This algorithm is quite fast, but it assumes that the source and reference patches are composed of similar tissue, which does not generally hold across a WSI. When the method is applied to all patches of a WSI, the performance lacks stability due to differing tissue compositions between source and reference patches.
16
+
17
+ Macenko et al. [2] and Vahadane et al. [3] proposed color normalization methods based on stain deconvolution, which estimates the hematoxylin and eosin stain vectors from the color distribution. This estimation is more robust to tissue composition differences between source and reference patches, but it requires a longer computation time than Reinhard's method. Thus, analyzing thousands of WSIs, which has recently become common, is almost infeasible without high computational power. Also, these color normalization methods fail when the tissue compositions of the reference and source patches differ significantly. In addition, Macenko's method is also a patch-based normalization method and suffers from the same problem as Reinhard's method.
18
+
19
+ To tackle these problems, we propose CONTEMM (COlor Normalization using deep TExture and color Moment Matching), a novel color normalization algorithm for WSIs that is significantly more stable and faster. Instead of using whole images, CONTEMM selects appropriate pairs of image patches with similar tissue components in the reference and source WSIs based on deep texture representations (DTRs). CONTEMM estimates a global transformation matrix between the pairs, and normalization of the source WSI is performed using this matrix. One notable feature of CONTEMM is that it achieves high-speed color normalization by a simple linear transformation that fits the second moment about white, which matches the stain vectors without performing stain deconvolution.
20
+
21
+ ## 2 Method
22
+
23
+ ### 2.1 Overview of the CONTEMM
24
+
25
+ CONTEMM searches for the most similar regions in the source and reference WSIs and calculates the transformation matrix between them. This transformation matrix is then applied to the source images for color normalization. Figure 1 shows an overview of CONTEMM, which consists of the following three steps.
26
+
27
+ - STEP I: Operation in reference WSI
28
+
29
+ a. $\mathrm{N}$ patches are randomly sampled from a reference WSI.
30
+
31
+ b. Deep texture representations are extracted from all $\mathrm{N}$ patches using deep convolutional network. (2.1)
32
+
33
+ - STEP II: Operation in source WSI
34
+
35
+ a. n patches are randomly sampled from a source WSI.
36
+
37
+ b. Deep texture representations are extracted from all $\mathrm{n}$ patches using deep convolutional network. (2.1)
38
+
39
+ c. Pair up the $\mathrm{N}$ and $\mathrm{n}$ patches based on texture features to form a predetermined number (m) of the most similar-looking pairs.
40
+
41
+ d. A transformation matrix is calculated using the pairs. (2.2)
42
+
43
+ - STEP III: Operation in source images for color normalization.
44
+
45
+ a. The transformation matrix is applied to source images. (2.2)
46
+
47
+ ![01963a4e-5e37-710c-8d3d-c46fa9bc8138_2_353_476_876_600_0.jpg](images/01963a4e-5e37-710c-8d3d-c46fa9bc8138_2_353_476_876_600_0.jpg)
48
+
49
+ Fig. 1. Overview of the CONTEMM color normalization method. (a) Schematic of CONTEMM. (b) Transformation in RGB space
50
+
51
+ A quantitative metric of image similarity is necessary to search for the most similar regions in the source and reference WSIs. We use second-order statistics of deep features, called deep texture representations (DTRs). DTRs extracted from a pre-trained convolutional neural network (CNN) are often used to calculate perceptual similarity in natural images due to their robustness to image distortion. DTRs also produce order-less image representations suitable for searching similar histopathology images, as shown previously [4]. Here, the output of the ${9}^{\text{th }}$ convolution layer ("block4_conv2") of VGG16 [5], which is often used for perceptual similarity in natural images, is used to compute the Gram matrix ${G}^{l} \in {\mathcal{R}}^{{N}_{l} \times {N}_{l}}$ by bilinear pooling using the following equation:
52
+
53
+ $$
54
+ {G}_{ij}^{l} = \mathop{\sum }\limits_{k}{F}_{ik}^{l}{F}_{jk}^{l}.
55
+ $$
56
+
57
+ Here ${F}_{ik}^{l}$ is the vectorized activation of the ${i}^{th}$ filter at position $k$ in layer $l$. To reduce the time required to search for the most similar regions, the dimensionality is reduced by compact bilinear pooling (CBP) [6] from 262144 to 1024 dimensions, and the cosine similarity of the CBP outputs is used as the similarity measure, as in [4].
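The bilinear-pooling step can be sketched numerically as follows (the CNN feature extraction itself is assumed to be external; array shapes and names are illustrative, and the CBP dimensionality reduction is omitted):

```python
# Sketch of the deep texture representation step described above: a Gram
# matrix G^l_ij = sum_k F^l_ik F^l_jk over a feature map, compared across
# images by cosine similarity. The VGG16 extractor is assumed external.
import numpy as np

def gram_matrix(feature_map):
    """feature_map: (N_l, H, W) activations -> (N_l, N_l) Gram matrix."""
    F = feature_map.reshape(feature_map.shape[0], -1)   # vectorize positions k
    return F @ F.T

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Because the spatial index $k$ is summed out, the Gram matrix is an order-less representation: shuffling spatial positions leaves it unchanged, which is what makes it suitable for texture-based retrieval.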
58
+
59
+ ### 2.2 Transformation Matrix
60
+
61
+ In CONTEMM, the RGB vectors of the source image are rotated and scaled about white so as to fit the second moment about white to that of the reference patches (Fig. 1b). Let ${I}_{r},{I}_{s} \in {R}^{h \times l}$ be the matrices of RGB intensities of the reference and source patches chosen by the similarity search, respectively, where $h = 3$ for the RGB channels and $l$ is the total number of pixels of the $m$ patches, and let $W \in {R}^{h \times l}$ be the matrix all of whose entries are 255, representing white in 8-bit RGB. Let ${X}_{r},{X}_{s} \in {R}^{h \times l}$ be the matrices of RGB intensities with the origin located at white. Then ${X}_{r}$ and ${X}_{s}$ can be written as follows,
62
+
63
+ $$
64
+ {X}_{r} = W - {I}_{r},{X}_{s} = W - {I}_{s}
65
+ $$
66
+
67
+ Let ${C}_{r},{C}_{s} \in {R}^{h \times h}$ be the second-moment matrices about white of the reference and source patches, respectively.
68
+
69
+ $$
70
+ {C}_{r} = {X}_{r}{X}_{r}^{T},\;{C}_{s} = {X}_{s}{X}_{s}^{T}
71
+ $$
72
+
73
+ and the eigenvalue decomposition of ${C}_{r}$ and ${C}_{s}$ are given as
74
+
75
+ $$
76
+ {C}_{r} = {P}_{r}{\Lambda }_{r}{P}_{r}^{-1},{C}_{s} = {P}_{s}{\Lambda }_{s}{P}_{s}^{-1}
77
+ $$
78
+
79
+ Let ${\Theta }_{r}$ and ${\Theta }_{s}$ be the diagonal matrices such that ${\Lambda }_{r}$ and ${\Lambda }_{s}$ factorize as follows,
80
+
81
+ $$
82
+ {\Lambda }_{r} = {\Theta }_{r}{\Theta }_{r},\;{\Lambda }_{s} = {\Theta }_{s}{\Theta }_{s}
83
+ $$
84
+
85
+ The transformation matrix ${M}_{s \rightarrow r}$ can then be written as
86
+
87
+ $$
88
+ {M}_{s \rightarrow r} = {P}_{r}{\Theta }_{r}{\Theta }_{s}^{-1}{P}_{s}^{-1}
89
+ $$
90
+
91
+ Let ${I}_{s}^{\prime },{I}_{s \rightarrow \mathrm{r}}^{\prime } \in {R}^{h \times k}$ be the matrices of RGB intensities of a source image before and after color normalization, respectively, where $k$ is the number of pixels of the source image. Then,
92
+
93
+ $$
94
+ {I}_{s \rightarrow \mathrm{r}}^{\prime } = W - {M}_{s \rightarrow r}\left( {W - {I}_{s}^{\prime }}\right)
95
+ $$
96
+
97
+ Fitting the second moment about the mean, i.e., matching variance and covariance, is a major color transfer approach [1][7]. However, such methods do not work well for white regions. Color deconvolution is one of the most widely used methods for stain normalization [3], and fitting the second moment about white can match the stain vectors of the reference and source images (Fig. 1b).
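The derivation above can be sketched numerically as follows (synthetic data; note that `eigh` pairs the eigenvectors of ${C}_{r}$ and ${C}_{s}$ by ascending eigenvalue, which we assume is the intended correspondence):

```python
# Numerical sketch of the white-centered second-moment matching derived
# above. Variable names follow the equations; pairing eigenvectors by
# ascending eigenvalue is our assumption for this illustration.
import numpy as np

def contemm_matrix(I_r, I_s):
    """I_r, I_s: (3, l) RGB intensities of the reference / source patch pairs."""
    X_r, X_s = 255.0 - I_r, 255.0 - I_s               # origin moved to white
    C_r, C_s = X_r @ X_r.T, X_s @ X_s.T               # second moments about white
    lam_r, P_r = np.linalg.eigh(C_r)                  # C = P Lambda P^{-1}
    lam_s, P_s = np.linalg.eigh(C_s)
    Theta_r = np.diag(np.sqrt(lam_r))                 # Lambda = Theta Theta
    Theta_s_inv = np.diag(1.0 / np.sqrt(lam_s))
    return P_r @ Theta_r @ Theta_s_inv @ P_s.T        # P orthogonal: P^{-1} = P^T

def contemm_normalize(I_src, M):
    """Apply I' = W - M (W - I) and clip back to the valid 8-bit range."""
    return np.clip(255.0 - M @ (255.0 - I_src), 0.0, 255.0)
```

A quick sanity check: when reference and source patches coincide, the two moment matrices are equal and the transformation reduces to the identity.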
98
+
99
+ ## 3 Results and Discussion
100
+
101
+ ### 3.1 Hyperparameter selection
102
+
103
+ Generally, there is one reference WSI and multiple (possibly thousands of) source WSIs for each task. Since step 1 is performed only once, N can be large without much influence on the total computation time. In contrast, n cannot be large because step 2 is performed for every source slide. The ratio m/n should not be too large, because a large ratio increases the chance of selecting artifact regions as similar pairs, especially when WSIs contain many artifacts. The patch size should not be large either, because reading large patches takes a long time. Thus, we set $\mathrm{N} = {1000},\mathrm{n} = {40},\mathrm{\;m} = {30}$ in Experiments 1 and 3, and $\mathrm{N} = {1000},\mathrm{n} = {40},\mathrm{\;m} = {15}$ in Experiment 2, and all patch sizes were set to ${256} \times {256}$ pixels. We decreased m in Experiment 2 because some WSIs contain many artifacts such as pen marks.
104
+
105
+ ### 3.2 Experiment 1: Quantitative evaluation (PSNR)
106
+
107
+ First, the accuracy and stability of CONTEMM were evaluated. Each of three WSIs of stomach adenocarcinoma was scanned using two different slide scanners (Hamamatsu Photonics NanoZoomer S60 (Hamamatsu) and 3D HISTECH Pannoramic MIDI II (3DX)), and the performance was evaluated in a pixel-wise manner.
108
+
109
+ CONTEMM was compared with three other normalization methods: Reinhard [1], Macenko [2], and Vahadane [3]. Vahadane et al. proposed two color normalization methods, which we call "Vahadane (random)" and "Vahadane (WSI)". In Vahadane (random), source patches are normalized to one specific reference patch. In Vahadane (WSI), the WSI is split by a grid and reference patches are sampled at the grid points. In this experiment, each WSI was divided into a 5x5 grid.
110
+
111
+ In Reinhard, Macenko, and Vahadane (random), a reference patch was randomly sampled for each source patch, excluding white background patches. Patches with a median RGB value greater than 220 were regarded as white background.
112
+
113
+ The mean and variance of the Peak Signal-to-Noise Ratio (PSNR) improvement were used to assess the accuracy and stability of each color normalization method, respectively. 70 patches were randomly sampled from the same positions in the source and reference WSIs, and the same 70 patches were used for evaluation with all color normalization methods. Since there was a non-negligible difference in sharpness between images from the two scanners, a Gaussian filter was applied before color normalization in all methods to reduce the effect of sharpness on PSNR. The size of the Gaussian filter was optimized to match the sharpness, which was estimated with a Laplacian kernel [8].
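The evaluation metric can be sketched as follows (standard 8-bit PSNR definition; reading the "improvement" as PSNR after normalization minus PSNR of the raw source, both measured against the registered reference, is our interpretation):

```python
# Sketch of the PSNR-improvement metric: standard 8-bit PSNR of a patch
# against its registered counterpart, compared before vs after normalization.
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def psnr_improvement(source, normalized, reference):
    """Positive when normalization brings the source closer to the reference."""
    return psnr(normalized, reference) - psnr(source, reference)
```

The mean of this quantity over the 70 patch pairs measures accuracy, and its variance measures stability, as used in Table 1.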
114
+
115
+ Registration between the two WSIs was performed with the imreg_dft package [9]. As each of the three WSIs from one scanner was color-normalized to its counterpart from the other scanner and vice versa, six transformations were obtained in total.
116
+
117
+ As shown in Table 1, CONTEMM significantly outperforms Macenko, Vahadane (random), and Reinhard in terms of both accuracy and stability. CONTEMM also significantly outperforms Vahadane (WSI) in terms of stability with comparable accuracy.
118
+
119
+ We also investigated the failure patterns in this experiment. As shown in Figure 2 and Figure 3, two types of failure were observed. First, Reinhard, Macenko, Vahadane (random), and Vahadane (WSI) failed when the background of the source image was white (Fig. 2a, Fig. 3). Second, Reinhard, Macenko, and Vahadane (random) did not work when the appearance of the reference patch was significantly different from that of the source patch (Fig. 2b).
+
+ Fig. 3. The appearance of color normalization of thumbnails. The source WSI was scanned by the Hamamatsu scanner and the reference WSI by the 3D HISTECH scanner.
120
+
121
+ Table 1. The improvement of PSNR: Mean $\pm$ Standard Deviation. H: Hamamatsu photonics NanoZoomer S60. 3DX: 3D HISTECH Pannoramic MIDI II. The number in a square bracket corresponds to the slide number. Dunnet contrasts tests and F-tests were used to compare the mean and the variance of PSNRs of CONTEMM and those of the other methods, respectively. Bonferroni correction was applied for F-tests.
122
+
123
+ <table><tr><td>Source -> Reference</td><td>CONTEMM (Ours)</td><td>Vahadane (WSI)</td><td>Macenko</td><td>Vahadane (random)</td><td>Reinhard</td></tr><tr><td>3DX->H[1]</td><td>-0.082 ± <b>0.718</b></td><td>0.682 ± 1.043</td><td>-0.266 ± 1.784</td><td>-0.984 ± 2.187</td><td>-0.964 ± 2.462</td></tr><tr><td>3DX->H[2]</td><td>2.379 ± 1.302</td><td>3.235 ± 1.511</td><td>3.259 ± 3.297</td><td>1.424 ± 3.532</td><td>-0.491 ± 4.905</td></tr><tr><td>3DX->H[3]</td><td>2.293 ± <b>0.808</b></td><td>-0.586 ± 1.597</td><td>2.695 ± 3.053</td><td>1.214 ± 2.884</td><td>1.111 ± 3.160</td></tr><tr><td>H->3DX[1]</td><td>-0.199 ± 1.302</td><td>0.495 ± 1.327</td><td>-1.918 ± 2.249</td><td>-2.092 ± 2.458</td><td>-2.741 ± 2.399</td></tr><tr><td>H->3DX[2]</td><td>0.445 ± 0.927</td><td>3.597 ± 1.948</td><td>-1.378 ± 2.590</td><td>-1.593 ± 3.397</td><td>-2.930 ± 3.967</td></tr><tr><td>H->3DX[3]</td><td>0.340 ± 0.756</td><td>-4.425 ± 1.454</td><td>-0.697 ± 1.457</td><td>-0.010 ± 2.499</td><td>-0.587 ± 2.321</td></tr><tr><td>Mean</td><td>0.863 ± 1.461</td><td>0.499 ± 3.059</td><td>0.282 ± 3.183</td><td>-0.340 ± 3.165</td><td>-1.100 ± 3.619</td></tr><tr><td>P value (mean)</td><td>N/A</td><td>p > 0.05</td><td>p < 0.05</td><td>p < 0.001</td><td>p < 0.001</td></tr><tr><td>P value (variance)</td><td>N/A</td><td>p < 0.001</td><td>p < 0.001</td><td>p < 0.001</td><td>p < 0.001</td></tr></table>
124
+
125
+ ![01963a4e-5e37-710c-8d3d-c46fa9bc8138_5_344_1199_968_313_0.jpg](images/01963a4e-5e37-710c-8d3d-c46fa9bc8138_5_344_1199_968_313_0.jpg)
126
+
127
+ Fig. 2. Examples of the failure. (a) Image with large white regions. (b) Source and reference patches quite different from each other. (A): source patch, (B): reference patch for (C)-(E), (C): Vahadane(WSI), (D): Macenko, (E): Reinhard, (F): Vahadane(random), (G): CONTEMM, (H): Ground Truth. (C)-(E): Source patch was color-normalized using reference patch (B). Note that (B) is not used as reference patch in (F), (G).
128
+
129
+ ![01963a4e-5e37-710c-8d3d-c46fa9bc8138_5_359_1712_948_185_0.jpg](images/01963a4e-5e37-710c-8d3d-c46fa9bc8138_5_359_1712_948_185_0.jpg)
130
+
131
+ ### 3.3 Experiment 2: Quantitative evaluation (NMI)
132
+
133
+ Next, we evaluated the consistency of normalization using 100 randomly selected WSIs of kidney renal clear cell carcinomas from The Cancer Genome Atlas (TCGA) datasets[10]. WSI-level normalization was performed using CONTEMM and Va-hadane (WSI) and the normalized median intensity (NMI) within uniform tumor region selected by pathologists were compared. Lower standard deviation of NMI (NMI SD) indicates that the normalization is more consistent [11]. As shown in Table 2, CONTEMM showed better NMI than Vahadane (WSI) and the original WSIs.
134
+
135
+ Table 2. Standard deviation of the normalized median intensity.
136
+
137
+ <table><tr><td/><td>CONTEMM</td><td>Vahadane (WSI)</td><td>Original WSIs</td></tr><tr><td>$\mathbf{{NMI}}$ SD</td><td>0.0677</td><td>0.0685</td><td>0.0749</td></tr></table>
138
+
139
+ ### 3.4 Experiment 3: Computation time evaluation
140
+
141
+ Finally, the computation time of each color normalization step was measured using a source slide. One Tesla V100 GPU and a dual Intel 20-core Xeon E-2698v4 2.20 GHz system were used for computation. Table 3 shows that our algorithm is significantly fast: it takes only 96 seconds for step 1, 3.16 seconds for step 2, and 0.0034 seconds for step 3. A fast color normalization method is especially important nowadays, as the number of WSIs being analyzed has grown dramatically. For example, more than 10,000 WSIs are analyzed in a recent study [12]: it would take around 1,500 hours to standardize 10,000 WSIs using Vahadane (WSI), whereas it would take only about 9 hours using CONTEMM, which is feasibly short.
142
+
143
+ Table 3. Computation time of color normalization: "WSI (sec)" is the computation time required for estimating the color transformation between two WSIs. "patch (sec)" is the computation time required for color-normalizing one patch. Macenko, Vahadane (random), and Reinhard do not have WSI-level color normalization.
144
+
145
+ <table><tr><td/><td>CONTEMM</td><td>Vahadane (WSI)</td><td>Reinhard</td><td>Macenko</td><td>Vahadane (random)</td></tr><tr><td>WSI (sec)</td><td>3.16</td><td>537.9</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>patch (sec)</td><td>0.0034</td><td>0.013</td><td>0.0075</td><td>7.04</td><td>8.05</td></tr></table>
146
+
147
+ ## 4 Conclusion
148
+
149
+ In this paper, we proposed CONTEMM, a fast and stable color normalization method for WSIs in histopathology. CONTEMM estimates a global transformation matrix based on reference and source patch pairs selected based on deep texture representations. Experimental results showed that CONTEMM outperforms the existing patch-based methods in terms of stability and accuracy. Additionally, CONTEMM outperforms a WSI-based method, Vahadane (WSI), in terms of stability and computation time while keeping comparable accuracy. Notably, compared to the WSI-based method, CONTEMM speeds up the color normalization process of WSIs by several orders of magnitude, which makes it feasible to normalize thousands of WSIs in realistic time. CONTEMM would be a powerful tool for histopathology image analysis in the big data era, where rapid color normalization is essential.
150
+
151
+ ## References
152
+
153
+ 1. Reinhard, E., Ashikhmin, M., Gooch, B., Shirley, P.: Color transfer between images. IEEE Comput. Graph. Appl. (2001). https://doi.org/10.1109/38.946629.
154
+
155
+ 2. Macenko, M., Niethammer, M., Marron, J.S., Borland, D., Woosley, J.T., Guan, X., Schmitt, C., Thomas, N.E.: A method for normalizing histology slides for quantitative analysis. In: Proceedings - 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI 2009 (2009). https://doi.org/10.1109/ISBI.2009.5193250.
156
+
157
+ 3. Vahadane, A., Peng, T., Sethi, A., Albarqouni, S., Wang, L., Baust, M., Steiger, K., Schlitter, A.M., Esposito, I., Navab, N.: Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images. IEEE Trans. Med. Imaging. 35, 1962-1971 (2016). https://doi.org/10.1109/TMI.2016.2529665.
158
+
159
+ 4. Komura, D., Fukuta, K., Tominaga, K., Kawabe, A., Koda, H., Suzuki, R., Konishi, H., Umezaki, T., Harada, T., Ishikawa, S.: Luigi: Large-scale histopathological image retrieval system using deep texture representations. bioRxiv. 345785 (2018). https://doi.org/10.1101/345785.
160
+
161
+ 5. Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition. 1-14 (2014).
162
+
163
+ 6. Gao, Y., Beijbom, O., Zhang, N., Darrell, T.: Compact Bilinear Pooling.
164
+
165
+ 7. Xiao, X., Ma, L.: Color transfer in correlated color space. Presented at the (2009). https://doi.org/10.1145/1128923.1128974.
166
+
167
+ 8. Pech-Pacheco, J.L., Cristobal, G., Chamorro-Martinez, J., Fernandez-Valdivia, J.: Diatom autofocusing in brightfield microscopy: a comparative study. Proc. 15th Int. Conf. Pattern Recognition. ICPR-2000. 3, 314-317 (2000). https://doi.org/10.1109/ICPR.2000.903548.
168
+
169
+ 9. Matejak: imreg_dft, https://github.com/matejak/imreg_dft.git.
170
+
171
+ 10. The Cancer Genome Atlas Research Network: The Cancer Genome Atlas Pan-Cancer analysis project. Nat. Genet. (2013). https://doi.org/10.1038/ng.2764.
172
+
173
+ 11. Zheng, Y., Jiang, Z., Zhang, H., Xie, F., Shi, J., Xue, C.: Adaptive color deconvolution for histological WSI normalization. Comput. Methods Programs Biomed. (2019). https://doi.org/10.1016/j.cmpb.2019.01.008.
174
+
175
+ 12. Campanella, G., Silva, V.W.K., Fuchs, T.J.: Terabyte-scale Deep Multiple Instance Learning for Classification and Localization in Pathology. (2018).
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/Bkgwe3GnZB/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,197 @@
1
+ § FAST AND STABLE COLOR NORMALIZATION OF WHOLE SLIDE HISTOPATHOLOGY IMAGES USING DEEP TEXTURE AND COLOR MOMENT MATCHING
2
+
3
+ Kyohei Sano ${}^{1}$ , Daisuke Komura ${}^{1}$ , Shumpei Ishikawa ${}^{1}$
4
+
5
+ ${}^{1}$ Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, 1130033, Tokyo, Japan.
6
+
7
+ Abstract. Whole Slide Images (WSIs) are prone to color variations due to differences in fixation and staining conditions of tissue samples, as well as the scanning process. Such variations can adversely affect image analysis. In this paper, we propose a novel, fast, and stable color normalization algorithm for WSIs called CONTEMM (COlor Normalization using deep TExture and color Moment Matching). CONTEMM estimates a color transformation matrix from pairs of reference and source patches with similar tissue components in the respective WSIs, selected using deep texture representations. The transformation matrix is estimated quickly by fitting the second moment about white.
8
+
9
+ Performance of CONTEMM algorithm was evaluated using histopathology images from different slide scanners and TCGA (The Cancer Genome Atlas) datasets. CONTEMM was shown to outperform the other methods; Reinhard, Va-hadane, and Macenko, in terms of variation (stability), accuracy, and computation time.
§ 1 INTRODUCTION

In histopathology, tissue sections are stained with multiple contrasting dyes (e.g., the most widely used hematoxylin and eosin (HE) stain) to highlight different tissue structures and cellular features, and pathologists diagnose diseases under the microscope. However, histopathological images contain undesired variability of HE stain appearance due to differences in fixation, staining procedures, and scanners. Color normalization methods, which reduce the color variation of source images using a reference image, are often effective in improving the performance of histopathological image analysis.

Although various color normalization methods have been developed, most focus on normalizing patches sampled from WSIs, and few are optimized for gigapixel-sized WSIs. For example, Reinhard et al. [1] proposed a patch normalization method that matches the color distribution of source patches to a reference patch in L*a*b* color space. This algorithm is quite fast, but assumes that the source and reference patches are composed of similar tissue, which does not generally hold in WSIs. When the method is applied to all patches of a WSI, performance lacks stability because tissue composition differs between source and reference patches.

Macenko et al. [2] and Vahadane et al. [3] proposed color normalization methods based on stain deconvolution, which estimates hematoxylin and eosin vectors from the color distribution. This estimation is more robust to tissue composition differences between source and reference patches, but requires longer computation time than Reinhard's method. Thus, analyzing thousands of WSIs, which is common today, is almost infeasible without high computational power. These color normalization methods also fail when the tissue compositions of the reference and source patches differ significantly. In addition, Macenko's method is a patch-based normalization method and suffers from the same problem as Reinhard's.

To tackle these problems, we propose CONTEMM (COlor Normalization using deep TExture and color Moment Matching), a significantly stable and fast color normalization algorithm for WSIs. Instead of using the whole images, CONTEMM selects appropriate pairs of image patches with similar tissue components in the reference and source WSIs based on deep texture representations (DTRs). CONTEMM estimates a global transformation matrix between the pairs, and normalization of the source WSI is performed using this matrix. One notable feature of CONTEMM is that it achieves high-speed color normalization by a simple linear transformation that fits the second moment about white, which matches the stain vectors without performing stain deconvolution.
§ 2 METHOD

§ 2.1 OVERVIEW OF CONTEMM

CONTEMM searches for the most similar regions in the source and reference WSIs and calculates the transformation matrix between them. This transformation matrix is then applied to the source images for color normalization. Figure 1 shows an overview of CONTEMM, which consists of the following three steps.

* STEP I: Operation in the reference WSI

a. N patches are randomly sampled from a reference WSI.

b. Deep texture representations are extracted from all N patches using a deep convolutional network. (2.1)

* STEP II: Operation in the source WSI

a. n patches are randomly sampled from a source WSI.

b. Deep texture representations are extracted from all n patches using a deep convolutional network. (2.1)

c. The N and n patches are paired based on texture features to form a predetermined number (m) of the most similar-looking pairs.

d. A transformation matrix is calculated using the pairs. (2.2)

* STEP III: Operation in the source images for color normalization.

a. The transformation matrix is applied to the source images. (2.2)

Fig. 1. Overview of the CONTEMM color normalization method. (a) Schematic of CONTEMM. (b) Transformation in RGB space.
A quantitative metric of image similarity is necessary to search for the most similar regions in the source and reference WSIs. We use second-order statistics of deep features called deep texture representations (DTRs). Deep texture representations extracted from a pretrained convolutional neural network (CNN) are often used to calculate perceptual similarity in general images due to their robustness to image distortion. DTRs further produce order-less image representations suitable for searching similar histopathology images, as shown previously [4]. Here, the output of the ${9}^{\text{th}}$ convolution layer ("block4_conv2") of VGG16 [5], which is often used for perceptual similarity in general images, is used to compute the Gram matrix ${G}^{l} \in {\mathcal{R}}^{{N}_{l} \times {N}_{l}}$ by bilinear pooling:

$$
{G}_{ij}^{l} = \mathop{\sum }\limits_{k}{F}_{ik}^{l}{F}_{jk}^{l}.
$$

Here ${F}_{ik}^{l}$ is the vectorized activation of the ${i}^{th}$ filter at position $k$ in layer $l$. To reduce the time required to search for the most similar regions, the dimensionality is reduced by compact bilinear pooling (CBP) [6] from 262144 to 1024 dimensions, and the cosine similarity of the CBP output is used as the similarity measure, as in [4].
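The descriptor computation above can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' code: the channel/position counts are shrunk from VGG16's actual "block4_conv2" shape, and a plain random projection stands in for compact bilinear pooling.

```python
import numpy as np

def gram_matrix(F):
    # Bilinear pooling: G_ij = sum_k F_ik * F_jk for a feature map
    # F of shape (channels, positions).
    return F @ F.T

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
C, P = 64, 196          # shrunk from VGG16's 512 channels for illustration
F_ref = rng.standard_normal((C, P))
F_near = F_ref + 0.1 * rng.standard_normal((C, P))   # texturally similar patch
F_far = rng.standard_normal((C, P))                  # unrelated patch

# Random projection as a simple stand-in for compact bilinear pooling:
# reduce the flattened C*C Gram matrix to a short descriptor.
R = rng.standard_normal((C * C, 256)) / np.sqrt(256)
d = {name: gram_matrix(F).ravel() @ R
     for name, F in [("ref", F_ref), ("near", F_near), ("far", F_far)]}

# The similar patch should score higher under cosine similarity.
assert cosine(d["ref"], d["near"]) > cosine(d["ref"], d["far"])
```

In CONTEMM this similarity score is what ranks the N reference patches against the n source patches when forming the m most similar pairs.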
§ 2.2 TRANSFORMATION MATRIX

In CONTEMM, the RGB vectors of the source image are rotated and scaled about white so that the second moment about white matches that of the reference patches (Fig. 1b). Let ${I}_{r},{I}_{s} \in {R}^{h \times l}$ be the matrices of RGB intensities of the reference and source patches chosen by the similarity search, where $h = 3$ for the RGB channels and $l$ is the total number of pixels in the $m$ patches, and let $W \in {R}^{h \times l}$ be the matrix whose entries are all 255, representing white in 8-bit RGB. Let ${X}_{r},{X}_{s} \in {R}^{h \times l}$ be the matrices of RGB intensities with the origin at white:

$$
{X}_{r} = W - {I}_{r},\;{X}_{s} = W - {I}_{s}
$$

Let ${C}_{r},{C}_{s} \in {R}^{h \times h}$ be the second moments about white of the reference and source patches, respectively:

$$
{C}_{r} = {X}_{r}{X}_{r}^{T},\;{C}_{s} = {X}_{s}{X}_{s}^{T}
$$

The eigenvalue decompositions of ${C}_{r}$ and ${C}_{s}$ are

$$
{C}_{r} = {P}_{r}{\Lambda }_{r}{P}_{r}^{-1},\;{C}_{s} = {P}_{s}{\Lambda }_{s}{P}_{s}^{-1}
$$

Let ${\Theta }_{r}$ and ${\Theta }_{s}$ be the diagonal matrices such that ${\Lambda }_{r}$ and ${\Lambda }_{s}$ factorize as

$$
{\Lambda }_{r} = {\Theta }_{r}{\Theta }_{r},\;{\Lambda }_{s} = {\Theta }_{s}{\Theta }_{s}
$$

The transformation matrix ${M}_{s \rightarrow r}$ can then be written as

$$
{M}_{s \rightarrow r} = {P}_{r}{\Theta }_{r}{\Theta }_{s}^{-1}{P}_{s}^{-1}
$$

Let ${I}_{s}^{\prime },{I}_{s \rightarrow r}^{\prime } \in {R}^{h \times k}$ be the matrices of RGB intensities of a source image before and after color normalization, where $k$ is the number of pixels of the source image. Then

$$
{I}_{s \rightarrow r}^{\prime } = W - {M}_{s \rightarrow r}\left( {W - {I}_{s}^{\prime }}\right)
$$

Fitting the second moments about the mean (variance and covariance) is a common color transfer approach [1][7], but such methods do not handle white well. Color deconvolution is one of the most widely used approaches to stain normalization [3], and fitting the second moment about white matches the stain vectors of the reference and source images (Fig. 1b).
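The derivation above can be implemented directly with an eigendecomposition. The following is a minimal NumPy sketch (toy data, not the authors' implementation); note that since ${C}_{r}$ and ${C}_{s}$ are symmetric, ${P}^{-1} = {P}^{T}$ and $M{C}_{s}{M}^{T} = {C}_{r}$ holds exactly by construction.

```python
import numpy as np

def contemm_transform(I_r, I_s, white=255.0):
    # Estimate the 3x3 matrix M = P_r Theta_r Theta_s^{-1} P_s^{-1}
    # that matches the source's second moment about white to the reference's.
    X_r, X_s = white - I_r, white - I_s          # origin moved to white
    C_r, C_s = X_r @ X_r.T, X_s @ X_s.T          # second moments about white
    lam_r, P_r = np.linalg.eigh(C_r)             # symmetric: P^{-1} = P^T
    lam_s, P_s = np.linalg.eigh(C_s)
    Theta_r = np.diag(np.sqrt(np.clip(lam_r, 0.0, None)))
    Theta_s_inv = np.diag(1.0 / np.sqrt(np.clip(lam_s, 1e-12, None)))
    return P_r @ Theta_r @ Theta_s_inv @ P_s.T

def normalize(I, M, white=255.0):
    # I'_{s->r} = W - M (W - I'_s)
    return white - M @ (white - I)

rng = np.random.default_rng(1)
I_r = 255.0 - np.abs(rng.standard_normal((3, 2000))) * 60   # toy reference pixels
I_s = 255.0 - np.abs(rng.standard_normal((3, 2000))) * 90   # toy source pixels
M = contemm_transform(I_r, I_s)

# After normalization the source's second moment about white equals
# the reference's: M C_s M^T = C_r.
X_n = 255.0 - normalize(I_s, M)
assert np.allclose(X_n @ X_n.T, (255.0 - I_r) @ (255.0 - I_r).T)
```

Because only a single 3x3 eigenproblem is solved per slide, applying M to every pixel is a cheap linear map, which is the source of CONTEMM's speed.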
§ 3 RESULTS AND DISCUSSION

§ 3.1 HYPERPARAMETER SELECTION

Generally, there is one reference WSI and multiple (possibly thousands of) source WSIs per task. Since step I is performed only once, N can be large without much effect on total computation time. In contrast, n cannot be large, because step II is performed for every source slide. The ratio m/n should not be too large, because a large ratio increases the chance of selecting artifact regions as similar pairs, especially when WSIs contain many artifacts. The patch size should not be large, because reading large patches takes a long time. We therefore set N = 1000, n = 40, m = 30 in Experiments 1 and 3, and N = 1000, n = 40, m = 15 in Experiment 2, with all patches of size $256 \times 256$ pixels. We decreased m in Experiment 2 because some WSIs contain many artifacts such as pen marks.
§ 3.2 EXPERIMENT 1: QUANTITATIVE EVALUATION (PSNR)

First, the accuracy and stability of CONTEMM were evaluated. Each of three WSIs of stomach adenocarcinoma was scanned using two different slide scanners (Hamamatsu Photonics NanoZoomer S60 (Hamamatsu) and 3D HISTECH Pannoramic MIDI II (3DX)), and performance was evaluated in a pixel-wise manner.

CONTEMM was compared with three other normalization methods: Reinhard [1], Macenko [2], and Vahadane [3]. Vahadane et al. proposed two color normalization methods, which we call "Vahadane (random)" and "Vahadane (WSI)". In Vahadane (random), each source patch is normalized to one specific reference patch. In Vahadane (WSI), the WSI is split by a grid and reference patches are sampled at the grid points. In this experiment, a WSI was divided into a 5x5 grid.

In Reinhard, Macenko, and Vahadane (random), a reference patch was randomly sampled for each source patch, excluding white background patches. Patches with a median RGB value greater than 220 were regarded as white background.

The mean and variance of the Peak Signal-to-Noise Ratio (PSNR) improvement were used to assess the accuracy and stability of each color normalization method, respectively. 70 patches were randomly sampled from the same positions in the source and reference WSIs, and the same 70 patches were used to evaluate all color normalization methods. Since there was a non-negligible difference in sharpness between images from the two scanners, a Gaussian filter was applied before color normalization in all methods to reduce the effect of sharpness on PSNR. The size of the Gaussian filter was optimized to match the sharpness, which was estimated with a Laplacian kernel [8].
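The PSNR-improvement metric can be sketched as follows (an illustrative helper on toy arrays, not the evaluation code used in the paper):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    # Peak Signal-to-Noise Ratio between two images of equal shape.
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_improvement(source, normalized, reference):
    # Gain in PSNR against the registered reference patch after normalization.
    return psnr(normalized, reference) - psnr(source, reference)

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (64, 64, 3))
src = np.clip(ref + 40, 0, 255)    # strong color cast
norm = np.clip(ref + 5, 0, 255)    # cast mostly removed by "normalization"
assert psnr_improvement(src, norm, ref) > 0
```

Averaging this improvement over the 70 evaluation patches gives the accuracy figure; its variance across patches gives the stability figure reported in Table 1.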
Registration between the two WSIs was performed with the imreg_dft package [9]. Each of the three WSIs from one scanner was color-normalized to its counterpart from the other scanner and vice versa, so six transformations were obtained in total.

As shown in Table 1, CONTEMM significantly outperforms Macenko, Vahadane (random), and Reinhard in terms of both accuracy and stability. CONTEMM also significantly outperforms Vahadane (WSI) in terms of stability, with comparable accuracy.

We also investigated the failure patterns in this experiment. In Figures 2 and 3, two types of failure were observed: first, Reinhard, Macenko, Vahadane (random), and Vahadane (WSI) failed when the background of the source image was white (Fig. 2a, Fig. 3). Second, Reinhard, Macenko, and Vahadane (random) did not work when the appearance of the reference patch differed significantly from the source patch (Fig. 2b).

Fig. 3. The appearance of color normalization of thumbnails. The source WSI was scanned by the Hamamatsu scanner and the reference WSI by 3D HISTECH.
Table 1. The improvement of PSNR (mean ± standard deviation). H: Hamamatsu Photonics NanoZoomer S60. 3DX: 3D HISTECH Pannoramic MIDI II. The number in square brackets is the slide number. Dunnett contrast tests and F-tests were used to compare the means and variances of the PSNRs of CONTEMM against the other methods, respectively. Bonferroni correction was applied to the F-tests.

| Source -> Reference | CONTEMM (Ours) | Vahadane (WSI) | Macenko | Vahadane (random) | Reinhard |
| --- | --- | --- | --- | --- | --- |
| 3DX -> H[1] | -0.082 ± 0.718 | 0.682 ± 1.043 | -0.266 ± 1.784 | -0.984 ± 2.187 | -0.964 ± 2.462 |
| 3DX -> H[2] | 2.379 ± 1.302 | 3.235 ± 1.511 | 3.259 ± 3.297 | 1.424 ± 3.532 | -0.491 ± 4.905 |
| 3DX -> H[3] | 2.293 ± 0.808 | -0.586 ± 1.597 | 2.695 ± 3.053 | 1.214 ± 2.884 | 1.111 ± 3.160 |
| H -> 3DX[1] | -0.199 ± 1.302 | 0.495 ± 1.327 | -1.918 ± 2.249 | -2.092 ± 2.458 | -2.741 ± 2.399 |
| H -> 3DX[2] | 0.445 ± 0.927 | 3.597 ± 1.948 | -1.378 ± 2.590 | -1.593 ± 3.397 | -2.930 ± 3.967 |
| H -> 3DX[3] | 0.340 ± 0.756 | -4.425 ± 1.454 | -0.697 ± 1.457 | -0.010 ± 2.499 | -0.587 ± 2.321 |
| Mean | 0.863 ± 1.461 | 0.499 ± 3.059 | 0.282 ± 3.183 | -0.340 ± 3.165 | -1.100 ± 3.619 |
| P value (mean) | N/A | p > 0.05 | p < 0.05 | p < 0.001 | p < 0.001 |
| P value (variance) | N/A | p < 0.001 | p < 0.001 | p < 0.001 | p < 0.001 |

Fig. 2. Examples of failure. (a) Image with large white regions. (b) Source and reference patches quite different from each other. (A): source patch, (B): reference patch for (C)-(E), (C): Vahadane (WSI), (D): Macenko, (E): Reinhard, (F): Vahadane (random), (G): CONTEMM, (H): ground truth. In (C)-(E), the source patch was color-normalized using reference patch (B). Note that (B) is not used as the reference patch in (F) and (G).
§ 3.3 EXPERIMENT 2: QUANTITATIVE EVALUATION (NMI)

Next, we evaluated the consistency of normalization using 100 randomly selected WSIs of kidney renal clear cell carcinoma from The Cancer Genome Atlas (TCGA) datasets [10]. WSI-level normalization was performed using CONTEMM and Vahadane (WSI), and the normalized median intensity (NMI) within uniform tumor regions selected by pathologists was compared. A lower standard deviation of NMI (NMI SD) indicates more consistent normalization [11]. As shown in Table 2, CONTEMM showed a lower NMI SD than Vahadane (WSI) and the original WSIs.
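The consistency metric can be sketched as below. Note the exact NMI definition is an assumption here: one common formulation (median of the per-pixel mean RGB value, normalized by its 95th percentile) is used, and the toy regions are synthetic.

```python
import numpy as np

def nmi(region_rgb):
    # Normalized median intensity of a tissue region: median of the
    # per-pixel mean RGB value, normalized by its 95th percentile
    # (one common definition, assumed here).
    u = np.asarray(region_rgb, float).mean(axis=-1)
    return float(np.median(u) / np.percentile(u, 95))

def nmi_sd(regions):
    # Consistency across slides: std of NMI (lower is better).
    return float(np.std([nmi(r) for r in regions]))

rng = np.random.default_rng(0)
base = rng.uniform(80, 200, (500, 3))
consistent = [base + s for s in (0, 1, -1)]     # nearly identical staining
varied = [base + s for s in (-40, 0, 40)]       # strong brightness variation
assert nmi_sd(consistent) < nmi_sd(varied)
```

A well-normalized cohort should behave like `consistent` above: the NMI of uniform tumor regions barely varies from slide to slide.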
Table 2. Standard deviation of the normalized median intensity.

| | CONTEMM | Vahadane (WSI) | Original WSIs |
| --- | --- | --- | --- |
| NMI SD | 0.0677 | 0.0685 | 0.0749 |
§ 3.4 EXPERIMENT 3: COMPUTATION TIME EVALUATION

Finally, the computation time of each color normalization step was measured using one source slide. One Tesla V100 GPU and dual Intel 20-core Xeon E-2698v4 2.20 GHz CPUs were used for computation. Table 3 shows that our algorithm is significantly fast: it takes only 96 seconds for step I, 3.16 seconds for step II, and 0.0034 seconds for step III. A fast color normalization method is especially important now that the number of WSIs being analyzed has grown dramatically. For example, more than 10,000 WSIs are analyzed in a recent study [12]: standardizing 10,000 WSIs would take around 1,500 hours using Vahadane (WSI), but only about 9 hours using CONTEMM, which is feasibly short.

Table 3. Computation time of color normalization. "WSI (sec)" is the time required to estimate the color transformation between two WSIs. "patch (sec)" is the time required to color-normalize one patch. Macenko, Vahadane (random), and Reinhard have no WSI-level color normalization.

| | CONTEMM | Vahadane (WSI) | Reinhard | Macenko | Vahadane (random) |
| --- | --- | --- | --- | --- | --- |
| WSI (sec) | 3.16 | 537.9 | N/A | N/A | N/A |
| patch (sec) | 0.0034 | 0.013 | 0.0075 | 7.04 | 8.05 |
§ 4 CONCLUSION

In this paper, we proposed CONTEMM, a fast and stable color normalization method for WSIs in histopathology. CONTEMM estimates a global transformation matrix based on reference and source patch pairs selected using deep texture representations. Experimental results showed that CONTEMM outperforms existing patch-based methods in terms of stability and accuracy. Additionally, CONTEMM outperforms a WSI-based method, Vahadane (WSI), in terms of stability and computation time while keeping comparable accuracy. Notably, compared to the WSI-based method, CONTEMM speeds up the color normalization of WSIs by orders of magnitude, making it feasible to normalize thousands of WSIs in realistic time. CONTEMM would be a powerful tool for histopathology image analysis in the big data era, where rapid color normalization is essential.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BkxeSkO7ZB/Initial_manuscript_md/Initial_manuscript.md ADDED
# Self-Supervised Similarity Learning for Digital Pathology

Jacob Gildenblat ${}^{1,2}$ and Eldad Klaiman ${}^{3}$

${}^{1}$ SagivTech Ltd.

${}^{2}$ DeePathology.ai

${}^{3}$ Pathology and Tissue Analytics, Pharma Research and Early Development, Roche Innovation Center Munich

eldad.klaiman@roche.com

Abstract. Using features extracted from networks pretrained on ImageNet is a common practice in applications of deep learning for digital pathology. However, it has the downside of missing domain-specific image information. In digital pathology, supervised training data is expensive and difficult to collect. We propose a self-supervised method for feature extraction by similarity learning on whole slide images (WSIs) that is simple to implement and allows creation of robust and compact image descriptors. We train a siamese network, exploiting image spatial continuity and assuming that spatially adjacent tiles in the image are more similar to each other than distant tiles. Our network outputs feature vectors of length 128, which allows dramatically lower memory storage and faster processing than networks pretrained on ImageNet. We apply the method to digital pathology WSIs from the Camelyon16 training set and assess and compare our method by measuring image retrieval of tumor tiles and the descriptor pair distance ratio for distant/near tiles in the Camelyon16 test set. We show that our method yields better retrieval results than existing ImageNet-based and generic self-supervised feature extraction methods. To the best of our knowledge, this is also the first published self-supervised learning method tailored for digital pathology.

Keywords: Deep Learning $\cdot$ Self-Supervised $\cdot$ Similarity Learning $\cdot$ Digital Pathology.
## 1 Introduction

There are many applications of deep learning in digital pathology [5]. Some examples are tissue segmentation [8], whole slide image (WSI) disease localization and classification $\left\lbrack {9,2}\right\rbrack$, cell detection [13], and virtual staining [7].

WSIs are typically large and in full resolution may contain 1 billion pixels or more. It is therefore common practice to divide a WSI into tiles and analyse the individual tiles in order to sidestep the memory bottleneck. Often, convolutional neural networks (CNNs) pretrained on ImageNet are used to extract features from these tiles. For example, in [2] features are extracted using the ResNet-50 [4] network trained on ImageNet, and a semi-supervised classification network is then trained on these features using multiple instance learning (MIL). This is done because these features can function as rich image descriptors. In some cases, training networks from scratch for histopathological images is not feasible due to the weak nature of the labeling or the limited amount of annotated data.

The use of networks pretrained on ImageNet is common mostly because of the availability of these pretrained networks. ImageNet-pretrained networks are typically trained with supervised learning on a large and variable annotated dataset of natural images with 1000 categories. There is a lack of similarly available annotated datasets that capture the natural and practical variability in histopathology images. For example, even existing large datasets like Camelyon consist of only one type of staining (hematoxylin and eosin), one type of cancer (breast cancer), and only two classes (tumor/non-tumor). Histopathology image texture and object shapes may vary highly across cancer types, tissue staining types, and tissue types. Additionally, histopathology images contain many different texture and object types with different domain-specific meanings (e.g. stroma, tumor infiltrating lymphocytes, blood vessels, fat, healthy tissue, necrosis, etc.).

In many domains, the shortage of annotated datasets has been addressed by unsupervised and self-supervised methods [6], which have been shown to hold potential for training networks that can serve as useful feature extractors. Self-supervised learning is a subset of unsupervised learning in which CNNs are explicitly trained with automatically generated labels [6].

In [14] the authors propose learning to colorize images and then using the learned network as a feature extractor. In [3] the authors propose to rotate images and learn to predict the rotation angle, which serves as a synthetic label to train the network. A recent method that obtains state-of-the-art results on standard (non-pathology) feature extraction benchmarks is described in [12]; it tries to discriminate between images in the dataset by predicting the index of the input image.

Generating annotated medical data such as pathological images is especially time consuming and expensive, so we would like to use self-supervised approaches whenever possible. In the case of digital pathology, an intrinsic property of the tissue image is its continuity: two tissue areas that are near each other are more likely to be similar than two distant tiles. A straightforward way to self-supervise image labeling for digital pathology would then be to label pairs of images as similar or non-similar based on their spatial proximity. This enables generating automatically labeled data for very large and diverse datasets.

Datasets with similarity labels can be used to train networks with similarity learning, in which an algorithm is trained by examples to measure how similar two objects are. Important applications of such methods include image search and retrieval, visual tracking, and face verification. A common family of network architectures for similarity learning is siamese networks [1]. They consist of two or more identical sub-networks sharing weights, trained on pairs or larger sets of images to rank semantic similarity and distinguish between similar and dissimilar inputs. Training siamese networks requires a way to generate pairs of images with a label indicating whether they are similar.

We note a noticeable lack of self-supervised methods that exploit the domain-specific characteristics of gigapixel histopathology WSIs. A common way to generate image descriptors given the shortage of labeled datasets is to use pretrained ImageNet networks, but these descriptors lack domain-specific information from the target dataset, e.g. digital pathology. We propose a novel self-supervised learning method that leverages the intrinsic spatial continuity of histopathological tissue images to train a model that generates domain-specific image descriptors. We apply our method on the Camelyon16 dataset to perform an image retrieval task for tumor areas, and we validate our approach by comparing descriptor distances of similar and dissimilar expert-labeled images. We compare our results to a state-of-the-art self-supervised method.
## 2 Proposed Approach

We observe that WSIs have an inherent spatial continuity: spatially adjacent tiles are typically more similar to each other than distant tiles. To generate our training dataset, we define a maximum distance between pairs of tiles to be labeled as similar and a minimum distance between tiles to be labeled as non-similar, based on the intrinsic sizes of histopathological regions in the dataset. For each tile in the dataset, similar and non-similar tiles are sampled based on the predefined distance thresholds, creating a dataset of automatically labeled pairs. This sampling strategy makes it easy to create a very large and diverse set of pairs sampled from histopathology WSIs without any manual annotation.
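The sampling strategy above can be sketched as follows. This is a toy sketch with illustrative helper names; the pixel thresholds are the ones given in the Datasets section, and tiles are represented only by their (x, y) pixel coordinates on one slide.

```python
import math
import random

SIM_MAX = 1792      # tiles closer than this (pixels) are labeled similar
DISSIM_MIN = 9408   # tiles farther than this are labeled non-similar

def pair_label(c1, c2):
    # 0 = similar, 1 = non-similar, None = ambiguous band in between.
    d = math.dist(c1, c2)
    if d <= SIM_MAX:
        return 0
    if d >= DISSIM_MIN:
        return 1
    return None

def sample_pairs(tile_coords, n_per_tile=4, seed=0):
    # For each tile, sample up to n_per_tile similar and non-similar partners.
    rng = random.Random(seed)
    pairs = []
    for c in tile_coords:
        near = [o for o in tile_coords if o != c and pair_label(c, o) == 0]
        far = [o for o in tile_coords if pair_label(c, o) == 1]
        for o in rng.sample(near, min(n_per_tile, len(near))):
            pairs.append((c, o, 0))
        for o in rng.sample(far, min(n_per_tile, len(far))):
            pairs.append((c, o, 1))
    return pairs

# Toy grid of 224-pixel tiles from one slide.
coords = [(x * 224, y * 224) for x in range(60) for y in range(2)]
pairs = sample_pairs(coords)
assert any(lbl == 0 for _, _, lbl in pairs)   # similar pairs exist
assert any(lbl == 1 for _, _, lbl in pairs)   # non-similar pairs exist
```

Because the labels come purely from geometry, this scales to arbitrarily many slides with no annotation cost.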
Using this dataset we train a siamese network for image similarity, leveraging this spatial continuity. The network consists of two identical sub-networks sharing weights, trained on pairs of similar and dissimilar inputs. As the training loss for the siamese network we use a contrastive loss on pairs of images, Eq. (1):

$$
{L}_{\text{contrastive }} = \left( {1 - y}\right) {L}_{2}\left( {{f}_{1} - {f}_{2}}\right) + y \times \max \left( {0, m - {L}_{2}\left( {{f}_{1} - {f}_{2}}\right) }\right) , \tag{1}
$$

where ${f}_{1},{f}_{2}$ are the outputs of the two identical sub-networks and $y$ is the ground-truth label for the image pair: 0 if they are similar, 1 if they are not.
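Eq. (1) can be implemented directly; in this sketch $L_2$ is taken to be the Euclidean norm of the descriptor difference:

```python
import numpy as np

def contrastive_loss(f1, f2, y, margin=1.0):
    # Eq. (1): y = 0 pulls similar pairs together; y = 1 pushes
    # dissimilar pairs at least `margin` apart in descriptor space.
    d = np.linalg.norm(f1 - f2, axis=-1)
    return np.where(y == 0, d, np.maximum(0.0, margin - d))

a = np.array([0.0, 0.0])
b = np.array([0.1, 0.0])
c = np.array([3.0, 0.0])
assert contrastive_loss(a, a, 0) == 0.0    # identical similar pair: no loss
assert contrastive_loss(a, c, 1) == 0.0    # dissimilar and beyond margin
assert contrastive_loss(a, b, 1) > 0.0     # dissimilar but too close: penalized
```

In training, this loss is applied to the 128-dimensional outputs of the two weight-sharing branches for each automatically labeled tile pair.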
The Camelyon16 test dataset includes manual expert annotations of tumor areas. To evaluate how well the network captures histopathological features in the image descriptors, we use these ground-truth annotations to form pairs of similar and non-similar tiles by pairing tumor-labeled tiles with tumor and non-tumor tiles, respectively. We calculate the L2 distance between image descriptors for each pair in the test dataset. We define the global Average Descriptor Distance Ratio (ADDR) as the ratio of the average descriptor distance of non-similar pairs to the average descriptor distance of similar pairs over all pairs in the test dataset.
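The ADDR metric can be computed as below (an illustrative sketch on synthetic descriptors; pair members as row-aligned arrays, with labels 0 = similar and 1 = non-similar):

```python
import numpy as np

def global_addr(desc_a, desc_b, labels):
    # Average Descriptor Distance Ratio: mean L2 distance over non-similar
    # pairs divided by mean L2 distance over similar pairs (higher is better).
    d = np.linalg.norm(np.asarray(desc_a) - np.asarray(desc_b), axis=1)
    labels = np.asarray(labels)
    return float(d[labels == 1].mean() / d[labels == 0].mean())

rng = np.random.default_rng(0)
f = rng.standard_normal((100, 128))
near = f + 0.05 * rng.standard_normal((100, 128))   # similar partners
far = rng.standard_normal((100, 128))               # dissimilar partners
a = np.vstack([f, f])
b = np.vstack([near, far])
labels = np.array([0] * 100 + [1] * 100)
assert global_addr(a, b, labels) > 1.0
```

A ratio above 1 means dissimilar pairs sit farther apart in descriptor space than similar pairs, which is exactly what the learned descriptors should achieve.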
As an additional evaluation metric we measure the ability of the learned network to perform a pathology image retrieval task on the Camelyon16 test set. Every tile extracted from the Camelyon16 testing set is marked as "tumor" if it lies entirely inside the expert tumor annotation area. A nearest neighbor search on feature vectors is performed for each tile, constraining the search to tiles from other slides in order to more robustly assess descriptor generalization across different images. We report the percentage of correct nearest neighbor tumor tile retrievals.
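This retrieval evaluation can be sketched as a leave-one-slide-out nearest neighbor search (an illustrative helper on clustered toy features, not the authors' code):

```python
import numpy as np

def tumor_retrieval_rate(features, slide_ids, is_tumor):
    # For each tumor tile, find its nearest neighbor among tiles from
    # *other* slides and count how often that neighbor is also tumor.
    feats = np.asarray(features, dtype=float)
    slide_ids = np.asarray(slide_ids)
    is_tumor = np.asarray(is_tumor, dtype=bool)
    hits, total = 0, 0
    for i in np.flatnonzero(is_tumor):
        mask = slide_ids != slide_ids[i]          # exclude the query's slide
        d = np.linalg.norm(feats[mask] - feats[i], axis=1)
        hits += bool(is_tumor[mask][np.argmin(d)])
        total += 1
    return hits / total

# Toy example: tumor features cluster around +5, normal around -5.
rng = np.random.default_rng(0)
tumor = rng.normal(5, 0.5, (20, 8))
normal = rng.normal(-5, 0.5, (20, 8))
feats = np.vstack([tumor, normal])
slides = np.array([i % 4 for i in range(40)])
labels = np.array([True] * 20 + [False] * 20)
assert tumor_retrieval_rate(feats, slides, labels) == 1.0
```

Excluding the query tile's own slide prevents the metric from rewarding descriptors that merely memorize slide-specific staining.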
We compare the image descriptors generated by our self-supervised method to a ResNet-50 pretrained on ImageNet as well as to a state-of-the-art self-supervised learning method called Non-Parametric Instance Discrimination (NPID) [12]. NPID tries to discriminate between all images in the dataset by using the index of the input image as a synthetic label to train the network.
## 3 Experiments

In this section we describe the datasets and experiments and give more detailed information about the implementations and the results.

### 3.1 Datasets

All experiments were done on tiles extracted from the Camelyon16 training dataset at x10 resolution. The Camelyon16 training dataset contains 270 breast lymph node hematoxylin and eosin (H&E) stained tissue WSIs. We validate and assess our method on the Camelyon16 testing set, which contains 130 H&E stained WSIs.

Our training and testing datasets were created by extracting non-overlapping tiles of size $224 \times 224$. We defined a maximum distance of 1792 pixels (2 mm) between two tiles for them to be considered similar, and a minimum distance of 9408 pixels (10 mm) between two tiles for them to be considered non-similar. We chose these thresholds based on the histopathological definition of a macro-metastasis, which is typically larger than 2 mm [11]. Sampling 32 pairs of near tiles and 32 pairs of distant tiles per tile in the dataset yielded 70 million pairs, of which half are labeled similar and half non-similar. A sample of similar and non-similar pairs from the training dataset can be seen in Fig. 1.

For testing our method we generated a dataset from the Camelyon16 test dataset by sampling 8 near tiles and 8 distant tiles per tile, using the expert ground truth as described in the proposed approach section. This resulted in 1,385,288 pairs of similar tiles and 1,385,288 pairs of non-similar tiles.

![01963a4e-de6f-7557-81af-d361560a76d6_4_434_328_936_465_0.jpg](images/01963a4e-de6f-7557-81af-d361560a76d6_4_434_328_936_465_0.jpg)

Fig. 1. Visualization of sampled tile pairs. (A) Pairs of tiles close to each other, labeled as similar images. (B) Pairs of tiles far from each other, labeled as dissimilar images.
### 3.2 Implementation Details

We trained a siamese network consisting of two branches of a modified ResNet-50 [4] in which the last layer (which normally outputs 1,000 features) is replaced with a fully connected layer of size 128. We trained the siamese network with the Adam optimizer using the default parameters in PyTorch (learning rate 0.001, betas 0.9 and 0.999) and a batch size of 256. For data augmentation we used random horizontal and vertical flips, random rotation up to 20 degrees, and color jitter with a value of 0.075 for brightness, saturation, and hue. The network was trained for 24 hours on 8 V100 GPUs on the Roche Penzberg High Performance Compute Cluster using a PyTorch DataParallel implementation.

In experiments using the ImageNet-pretrained ResNet-50, we extract features from the second-to-last layer, with a length of 2048.

Training of the NPID network was also performed on 8 V100 GPUs for 24 hours, and loss convergence was observed.

When applying the networks to the test set, we normalize each image by matching the standard deviation and mean of each channel in the LAB color space of the source tile to those of a preselected target tile. This provides a simple form of stain normalization for the WSIs.
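This per-channel moment matching can be sketched as below. The RGB-to-LAB conversion (e.g. via skimage.color.rgb2lab) is omitted, so the channels-last arrays are assumed to be already in the target color space:

```python
import numpy as np

def match_channel_stats(src, ref):
    # Shift and scale each channel of `src` so its mean and std match `ref`.
    # The paper applies this in LAB space; the color-space conversion is
    # omitted here for brevity.
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[-1]):
        s, r = src[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return out

rng = np.random.default_rng(0)
src = rng.normal(120, 30, (32, 32, 3))   # toy source tile channels
ref = rng.normal(150, 10, (32, 32, 3))   # toy preselected target tile
matched = match_channel_stats(src, ref)
assert np.allclose(matched.mean(axis=(0, 1)), ref.mean(axis=(0, 1)))
assert np.allclose(matched.std(axis=(0, 1)), ref.std(axis=(0, 1)), atol=1e-3)
```

This is the same first-and-second-moment matching idea used by Reinhard-style stain normalization, applied here only as a light preprocessing step before feature extraction.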
+ ### 3.3 Results and Comparison
82
+
83
+ In the experiment measuring L2 distance between descriptors of distant and near tiles we report the global ADDR for ImageNet pretrained ResNet-50, the NPID method and our proposed approach. Results can be seen in Table 1.
84
+
85
+ Table 1. L2 distance ratio between descriptors of distant and near tiles.
86
+
87
+ <table><tr><td>Method</td><td>Global ADDR</td></tr><tr><td>ResNet-50 pretrained on ImageNet</td><td>1.38</td></tr><tr><td>Non-Parametric Instance Discrimination</td><td>1.28</td></tr><tr><td>Ours</td><td>1.5</td></tr></table>
88
+
89
+ The results of this experiment indicate that our method outperforms the benchmark methods in the task of separating similar and non-similar tiles in the descriptor space.
90
+
91
+ In the tumor tile retrieval experiment, 3809 tumor tiles were extracted from the test dataset based on the expert ground truth annotation, as described in the proposed approach section. The target class (tumor) comprises only $3\%$ of all tiles searched in the test set. We report the percentage of correctly retrieved tiles in Table 2.
92
+
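The nearest-neighbor search restricted to other slides can be sketched as a brute-force NumPy search (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def retrieve_nn_other_slide(query_idx, feats, slide_ids):
    """Return the index of the nearest tile in descriptor space (L2)
    among tiles from slides other than the query tile's slide."""
    candidates = np.flatnonzero(slide_ids != slide_ids[query_idx])
    dists = np.linalg.norm(feats[candidates] - feats[query_idx], axis=1)
    return int(candidates[np.argmin(dists)])

# Toy descriptors: tile 1 is closest overall but lies on the query's own
# slide, so the constrained search returns tile 2 from another slide.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
slide_ids = np.array([0, 0, 1])
nn = retrieve_nn_other_slide(0, feats, slide_ids)
```

Excluding the query's own slide, as in the paper's protocol, probes whether the descriptors generalize across images rather than merely matching tiles from the same WSI.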
93
+ Table 2. Results for tumor tile retrieval.
94
+
95
+ <table><tr><td>Method</td><td>Ratio of retrieved tumor tiles</td></tr><tr><td>ResNet-50 pretrained on ImageNet</td><td>26%</td></tr><tr><td>Non-Parametric Instance Discrimination</td><td>21%</td></tr><tr><td>Ours</td><td>34%</td></tr></table>
96
+
97
+ It can be seen from the results of the retrieval task that our method substantially outperforms both the ImageNet pretrained network and the NPID method. Additionally, examples of nearest neighbor retrievals from our network in this experiment can be seen in Fig. 2.
98
+
99
+ ## 4 Conclusion and Discussion
100
+
101
+ We present a novel self-supervised approach for training CNNs for the purpose of generating visually meaningful image descriptors. In particular, we show that using this method for images in the digital pathology domain yields substantially better image retrieval results than other methods on the Camelyon16 dataset. We evaluate and compare the performance of our method with other benchmark methods in a retrieval task and a descriptor distance metric on the Camelyon16 test set. Our method has the potential to enable better feature extraction algorithms for digital pathology datasets where labels for supervised training are hard to obtain. We believe that this work can be a first step towards the adoption of self-supervised methods for image descriptor generation in digital pathology instead of using features from networks pretrained on ImageNet.
102
+
103
+ A disadvantage of the spatial similarity sampling strategy is that in some cases pairs of images are not accurately labeled. For example, at transitions between different functional histological areas in the image there are, by definition, spatially proximal tiles that are visually different. In other cases two distant tiles can be visually similar because they are part of distant areas with the same histopathological function. This effect creates an inherent labeling noise in the dataset. A reasonable assumption in many WSIs is that region borders typically occupy less area in the image than the regions themselves, so mislabeled pairs are substantially less frequent than correctly labeled ones. Additionally, due to the statistical characteristics of their training process, deep learning methods have been shown to be predominantly robust to label noise even in extreme cases [10].
104
+
105
+ ![01963a4e-de6f-7557-81af-d361560a76d6_6_542_330_723_705_0.jpg](images/01963a4e-de6f-7557-81af-d361560a76d6_6_542_330_723_705_0.jpg)
106
+
107
+ Fig. 2. Example results for 5 tumor query tiles (A, B, C, D, E) in the image retrieval task and the 5 closest retrieved tiles from slides other than the query slide (A1-A5, B1-B5, C1-C5, D1-D5, E1-E5), ranked by distance from low to high, using tile descriptors generated by our method. It is interesting to note that even though some retrieved tiles look very different from the query tile (e.g. C3 and C), all of the retrieved tiles except A4 have been verified by an expert pathologist to contain tumor cells (i.e. correct class retrieval).
108
+
109
+ Future work will include verification strategies for sampled pairs, and new sampling strategies for self-supervised similarity learning as well as hyper-parameter tuning and exploration of the proximity thresholds in the dataset generation process.
110
+
111
+ ## Acknowledgements
112
+
113
+ The authors would like to thank Amal Lahiani for her invaluable insights and constructive review.
114
+
115
+ ## References
116
+
117
+ 1. Bromley, J., Guyon, I., LeCun, Y., Säckinger, E., Shah, R.: Signature verification using a "siamese" time delay neural network. In: Advances in neural information processing systems. pp. 737-744 (1994)
118
+
119
+ 2. Campanella, G., Silva, V.W.K., Fuchs, T.J.: Terabyte-scale deep multiple instance learning for classification and localization in pathology. arXiv preprint arXiv:1805.06983 (2018)
120
+
121
+ 3. Gidaris, S., Singh, P., Komodakis, N.: Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728 (2018)
122
+
123
+ 4. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770-778 (2016)
124
+
125
+ 5. Janowczyk, A., Madabhushi, A.: Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases. Journal of pathology informatics 7 (2016)
126
+
127
+ 6. Jing, L., Tian, Y.: Self-supervised visual feature learning with deep neural networks: A survey. arXiv preprint arXiv:1902.06162 (2019)
128
+
129
+ 7. Lahiani, A., Gildenblat, J., Klaman, I., Albarqouni, S., Navab, N., Klaiman, E.: Virtualization of tissue staining in digital pathology using an unsupervised deep learning approach. arXiv preprint arXiv:1810.06415 (2018)
130
+
131
+ 8. Lahiani, A., Gildenblat, J., Klaman, I., Navab, N., Klaiman, E.: Generalizing multistain immunohistochemistry tissue segmentation using one-shot color deconvolution deep neural networks. IET Image Processing (2019)
132
+
133
+ 9. Lee, B., Paeng, K.: A robust and effective approach towards accurate metastasis detection and pn-stage classification in breast cancer. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 841-850. Springer (2018)
134
+
135
+ 10. Rolnick, D., Veit, A., Belongie, S., Shavit, N.: Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694 (2017)
136
+
137
+ 11. Wang, D., Khosla, A., Gargeya, R., Irshad, H., Beck, A.H.: Deep learning for identifying metastatic breast cancer. arXiv preprint arXiv:1606.05718 (2016)
138
+
139
+ 12. Wu, Z., Xiong, Y., Yu, S.X., Lin, D.: Unsupervised feature learning via nonparametric instance discrimination. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3733-3742 (2018)
140
+
141
+ 13. Xue, Y., Ray, N.: Cell detection in microscopy images with deep convolutional neural network and compressed sensing. arXiv preprint arXiv:1708.03307 (2017)
142
+
143
+ 14. Zhang, R., Isola, P., Efros, A.A.: Colorful image colorization. In: European conference on computer vision. pp. 649-666. Springer (2016)
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BkxeSkO7ZB/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,139 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § SELF-SUPERVISED SIMILARITY LEARNING FOR DIGITAL PATHOLOGY
2
+
3
+ Jacob Gildenblat ${}^{1,2}$ and Eldad Klaiman ${}^{3}$
4
+
5
+ ${}^{1}$ SagivTech Ltd.
6
+
7
+ ${}^{2}$ DeePathology.ai
8
+
9
+ ${}^{3}$ Pathology and Tissue Analytics, Pharma Research and Early Development, Roche
10
+
11
+ Innovation Center Munich
12
+
13
+ eldad.klaiman@roche.com
14
+
15
+ Abstract. Using features extracted from networks pretrained on ImageNet is a common practice in applications of deep learning for digital pathology. However, it presents the downside of missing domain specific image information. In digital pathology, supervised training data is expensive and difficult to collect. We propose a self-supervised method for feature extraction by similarity learning on whole slide images (WSI) that is simple to implement and allows creation of robust and compact image descriptors. We train a siamese network, exploiting image spatial continuity and assuming spatially adjacent tiles in the image are more similar to each other than distant tiles. Our network outputs feature vectors of length 128, which allows dramatically lower memory storage and faster processing than networks pretrained on ImageNet. We apply the method on digital pathology WSIs from the Camelyon16 train set and assess and compare our method by measuring image retrieval of tumor tiles and descriptor pair distance ratio for distant/near tiles in the Camelyon16 test set. We show that our method yields better retrieval task results than existing ImageNet based and generic self-supervised feature extraction methods. To the best of our knowledge, this is also the first published method for self-supervised learning tailored for digital pathology.
16
+
17
+ Keywords: Deep Learning $\cdot$ Self-Supervised $\cdot$ Similarity Learning $\cdot$ Digital Pathology.
18
+
19
+ § 1 INTRODUCTION
20
+
21
+ There are many applications of Deep Learning for digital pathology [5]. Some examples of such applications are tissue segmentation [8], whole slide images (WSI) disease localization and classification $\left\lbrack {9,2}\right\rbrack$ , cell detection [13] and virtual staining [7].
22
+
23
+ WSIs are typically large and at full resolution may contain 1 billion pixels or more. It is therefore common practice to divide the WSI into tiles and analyse the individual tiles in order to sidestep the memory bottleneck. Often convolutional neural networks (CNN) pretrained on ImageNet are used to extract features from these tiles. For example, in [2] features are extracted using the ResNet-50 [4] network trained on ImageNet, and then a semi-supervised classification network is trained on these features using multiple instance learning (MIL). This is done because these features can function as rich image descriptors. In some cases, training networks from scratch for histopathological images is not feasible due to the weak nature of the labeling or the limited amount of annotated data.
24
+
25
+ The use of networks pretrained on ImageNet is common mostly because of the availability of these pretrained networks. ImageNet pretrained networks are typically trained with supervised learning on a large and variable annotated dataset of natural images with 1000 categories. There is a lack of similar available annotated datasets that capture the natural and practical variability in histopathology images. For example, even existing large datasets like Camelyon consist of only one type of staining (Hematoxylin and Eosin), one type of cancer (Breast Cancer) and only two classes (Tumor/Non-Tumor). Histopathology image texture and object shapes may vary greatly in images from different cancer types, different tissue staining types and different tissue types. Additionally, histopathology images contain many different texture and object types with different domain specific meanings (e.g. stroma, tumor infiltrating lymphocytes, blood vessels, fat, healthy tissue, necrosis, etc.).
26
+
27
+ In many domains, the shortage in annotated datasets has been addressed by unsupervised and self-supervised methods [6], which have been shown to hold potential for training networks that can serve as useful feature extractors. Self-supervised learning is a subset of unsupervised learning methods in which CNNs are explicitly trained with automatically generated labels [6].
28
+
29
+ In [14] the authors propose learning to colorize images, and then use the learned network as a feature extractor. In [3] the authors propose to rotate images and then learn to predict the rotation angle that serves as a synthetic label to train the network. A recent method that obtains state of the art results on standard (non-pathology) feature extraction benchmarks is described in [12]. The described method tries to discriminate between images in the dataset by predicting the index of the input image.
30
+
31
+ Generation of medical annotated data such as pathological images is especially time consuming and expensive, therefore we would like to use self-supervised approaches whenever possible. In the case of digital pathology, an intrinsic property of the tissue image is its continuity. This means that two tissue areas that are near each other are more likely to be similar than two distant tiles. A straightforward way for self-supervised image labeling for digital pathology images would then be labeling pairs of similar and non-similar images based on their spatial proximity. This would enable generating automatically labeled data for very large and diverse datasets.
32
+
33
+ Datasets with similarity labels can be used to train networks using a method called similarity learning in which an algorithm is trained by examples to measure how similar two objects are. Some important applications of these types of methods include image search and retrieval, visual tracking, and face verification. A common family of network architectures for similarity learning is siamese networks [1]. They consist of two or more identical sub networks sharing weights and trained on pairs or larger sets of images in order to rank semantic similarity and distinguish between similar and dissimilar inputs. Training siamese networks requires a way to generate pairs of images with a label indicating if they are similar or not.
34
+
35
+ We note that there is a noticeable lack of self-supervised methods that exploit domain specific characteristics existing in gigapixel histopathology WSIs. A common method to generate image descriptors given the shortage of labeled datasets is the use of pretrained ImageNet networks. However, these descriptors lack domain specific information from the target dataset, e.g. digital pathology. We propose a novel self-supervised learning method which leverages the intrinsic spatial continuity of histopathological tissue images to train a model that generates domain specific image descriptors. We apply our method on the Camelyon16 dataset in order to perform an image retrieval task for tumor areas, and we validate our approach by comparing descriptor distances of similar and dissimilar expert labeled images. We compare our results to a state of the art self-supervised method.
36
+
37
+ § 2 PROPOSED APPROACH
38
+
39
+ We observe that WSIs have an inherent spatial continuity. Spatially adjacent tiles are typically more similar to each other than distant tiles in the image. In order to generate our training dataset we define a maximum distance between pairs of tiles to be labeled as similar, and a minimum distance between tiles to be labeled as non-similar based on the intrinsic sizes of histo-pathological regions in the dataset. For each tile in the dataset other similar or non-similar tiles are sampled based on the predefined distance thresholds creating a dataset of automatically labeled pairs. This sampling strategy allows easily creating a very large and diverse set of pairs sampled from histopathology WSIs without the need for any manual annotation.
40
+
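The pair-generation step above can be sketched in NumPy as follows (a simplified illustration; the thresholds shown are the ones reported later in Section 3.1, the counts per tile are reduced, and the O(N²) distance matrix is for clarity only, not how a full WSI would be processed):

```python
import numpy as np

NEAR_PX, FAR_PX = 1792, 9408  # similar / non-similar distance thresholds

def sample_pairs(coords, rng, n_near=2, n_far=2):
    """coords: (N, 2) array of tile positions within one WSI, in pixels.
    Returns (i, j, label) triples; label 0 = similar, 1 = non-similar."""
    n = len(coords)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    pairs = []
    for i in range(n):
        near = np.flatnonzero((dist[i] <= NEAR_PX) & (np.arange(n) != i))
        far = np.flatnonzero(dist[i] >= FAR_PX)
        if near.size:
            for j in rng.choice(near, size=min(n_near, near.size), replace=False):
                pairs.append((i, int(j), 0))
        if far.size:
            for j in rng.choice(far, size=min(n_far, far.size), replace=False):
                pairs.append((i, int(j), 1))
    return pairs
```

Tiles in the intermediate band between the two thresholds are deliberately never paired, which reduces ambiguous labels.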
41
+ Using this dataset we train a siamese network for image similarity, leveraging this spatial continuity. The network used consists of two identical sub networks sharing weights and trained on pairs of similar and dissimilar inputs. As a training loss for the siamese network we use a contrastive loss on pairs of images, given in Eq. (1).
42
+
43
+ $$
44
+ {L}_{\text{ contrastive }} = \left( {1 - y}\right) {L}_{2}\left( {{f}_{1} - {f}_{2}}\right) + y \times \max \left( {0,m - {L}_{2}\left( {{f}_{1} - {f}_{2}}\right) }\right) , \tag{1}
45
+ $$
46
+
47
+ where ${f}_{1},{f}_{2}$ are the outputs of two identical sub networks. $y$ is the ground truth label for the image pair: 0 if they are similar, 1 if they are not similar.
48
+
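Equation (1) translates directly into code. A NumPy sketch (the paper trains with PyTorch, but the arithmetic is identical; the margin value here is illustrative):

```python
import numpy as np

def contrastive_loss(f1, f2, y, margin=1.0):
    """Eq. (1): y = 0 for similar pairs, 1 for non-similar pairs.
    f1, f2 are (batch, dim) descriptor arrays; margin is the
    hyper-parameter m (1.0 here is an illustrative choice)."""
    d = np.linalg.norm(f1 - f2, axis=1)                 # L2(f1 - f2) per pair
    per_pair = (1 - y) * d + y * np.maximum(0.0, margin - d)
    return per_pair.mean()
```

Similar pairs are pulled together (their distance is the loss), while dissimilar pairs are pushed apart only until they are a margin m away, after which they contribute nothing.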
49
+ The Camelyon16 test dataset includes manual expert annotations for tumor areas. In order to evaluate the performance of the network in capturing the histopathological features in the image descriptors, we use these ground truth annotations to form pairs of similar and non-similar tiles by pairing tumor labeled tiles with tumor and non-tumor tiles respectively. We calculate the L2 distance between image descriptors for each pair in the test dataset. We define the global Average Descriptor Distance Ratio (ADDR) as the ratio of the average descriptor distance of non-similar pairs and the average descriptor distance of similar pairs for all pairs in the test dataset.
50
+
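The global ADDR defined above can be computed as follows (NumPy sketch; function and argument names are illustrative):

```python
import numpy as np

def global_addr(desc_a, desc_b, labels):
    """labels: 0 = similar pair, 1 = non-similar pair, one pair per row.
    Returns the mean descriptor distance of non-similar pairs divided by
    the mean descriptor distance of similar pairs; values above 1 mean
    the descriptors separate the two groups."""
    d = np.linalg.norm(desc_a - desc_b, axis=1)
    return d[labels == 1].mean() / d[labels == 0].mean()
```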
51
+ As an additional evaluation metric we measure the ability of the learned network to perform a pathology image retrieval task on the Camelyon16 test set. Every tile extracted from the Camelyon16 testing set is marked as "tumor" if it lies entirely inside the expert tumor annotation area. A nearest neighbor search on feature vectors is performed on each tile, constraining the search to tiles from other slides in order to more robustly assess descriptor generalization across different images. We report the percentage of correct nearest neighbor tumor tile retrieval.
52
+
53
+ We compare our self-supervised method generated image descriptors to a ResNet-50 pretrained on ImageNet as well as to a state of the art self-supervised learning method called Non-Parametric Instance Discrimination (NPID) [12]. NPID tries to discriminate between all the images in the datasets by using the index of the input image as a synthetic label to train the network.
54
+
55
+ § 3 EXPERIMENTS
56
+
57
+ In this section we describe the datasets and experiments and give more detailed information about the implementations and the results.
58
+
59
+ § 3.1 DATASETS
60
+
61
+ All experiments were done on tiles extracted from the Camelyon16 training dataset at x10 resolution. The Camelyon16 training dataset contains 270 breast lymph node Hematoxylin and Eosin (H&E) stained tissue WSIs. We validate and assess our method on the Camelyon16 testing set, which contains 130 H&E stained WSIs.
62
+
63
+ Our training and testing datasets were created by extracting non-overlapping tiles of size 224 × 224. We defined a maximum distance of 1792 pixels (2 mm) between two tiles for them to be considered similar, and a minimum distance of 9408 pixels (10 mm) between two tiles for them to be considered non-similar. We chose these thresholds based on the histopathological definition of a macro-metastasis, which is typically larger than 2 mm [11]. Sampling 32 pairs of near tiles and 32 pairs of distant tiles per tile in the dataset yielded 70 million pairs, of which half are labeled similar and half non-similar. A sample of similar and non-similar pairs from the training dataset can be seen in Fig. 1.
64
+
65
+ For the testing of our method we generated a dataset from the Camelyon16 test dataset by sampling 8 near tiles and 8 distant tiles per tile, using the expert ground truth as described in the proposed approach section. This resulted in 1,385,288 pairs of similar tiles and 1,385,288 pairs of non-similar tiles.
66
+
67
+ [figure]
68
+
69
+ Fig. 1. Visualization of sampled tile pairs. (A) - Pairs of tiles close to each other, labeled as similar images. (B) - Pairs of tiles far from each other, labeled as non-similar images.
70
+
71
+ § 3.2 IMPLEMENTATION DETAILS
72
+
73
+ We trained a siamese network consisting of two branches of a modified ResNet-50 [4], with the last layer (which normally outputs 1,000 features) replaced by a fully connected layer of size 128. For training our siamese network we use the Adam optimizer with the default parameters in PyTorch (learning rate of 0.001, betas of 0.9 and 0.999) and a batch size of 256. For data augmentation, we used random horizontal and vertical flips, random rotation of up to 20 degrees, and color jitter augmentation with a value of 0.075 for brightness, saturation and hue. The network was trained for 24 hours using 8 V100 GPUs on the Roche Penzberg High Performance Compute Cluster using a PyTorch DataParallel implementation.
74
+
75
+ In experiments using the ImageNet pretrained ResNet-50 we extract features from the second-to-last layer, which have a length of 2048.
76
+
77
+ Training of the NPID network was also performed using 8 V100 GPUs for 24 hours and loss convergence was observed.
78
+
79
+ In the application of the networks on the test set, we normalize the image by matching the standard deviation and the mean of each channel in the LAB color space of the source tile with a preselected target tile. This provides a sort of simple stain normalization for the WSIs.
80
+
81
+ § 3.3 RESULTS AND COMPARISON
82
+
83
+ In the experiment measuring L2 distance between descriptors of distant and near tiles we report the global ADDR for ImageNet pretrained ResNet-50, the NPID method and our proposed approach. Results can be seen in Table 1.
84
+
85
+ Table 1. L2 distance ratio between descriptors of distant and near tiles.
86
+
87
+ Method                                  Global ADDR
88
+ ResNet-50 pretrained on ImageNet        1.38
89
+ Non-Parametric Instance Discrimination  1.28
90
+ Ours                                    1.5
101
+
102
+ The results of this experiment indicate that our method outperforms the benchmark methods in the task of separating similar and non-similar tiles in the descriptor space.
103
+
104
+ In the tumor tile retrieval experiment, 3809 tumor tiles were extracted from the test dataset based on the expert ground truth annotation, as described in the proposed approach section. The target class (tumor) comprises only $3\%$ of all tiles searched in the test set. We report the percentage of correctly retrieved tiles in Table 2.
105
+
106
+ Table 2. Results for tumor tile retrieval.
107
+
108
+ Method                                  Ratio of retrieved tumor tiles
109
+ ResNet-50 pretrained on ImageNet        26%
110
+ Non-Parametric Instance Discrimination  21%
111
+ Ours                                    34%
122
+
123
+ It can be seen from the results of the retrieval task that our method substantially outperforms both the ImageNet pretrained network and the NPID method. Additionally, examples of nearest neighbor retrievals from our network in this experiment can be seen in Fig. 2.
124
+
125
+ § 4 CONCLUSION AND DISCUSSION
126
+
127
+ We present a novel self-supervised approach for training CNNs for the purpose of generating visually meaningful image descriptors. In particular, we show that using this method for images in the digital pathology domain yields substantially better image retrieval results than other methods on the Camelyon16 dataset. We evaluate and compare the performance of our method with other benchmark methods in a retrieval task and a descriptor distance metric on the Camelyon16 test set. Our method has the potential to enable better feature extraction algorithms for digital pathology datasets where labels for supervised training are hard to obtain. We believe that this work can be a first step towards the adoption of self-supervised methods for image descriptor generation in digital pathology instead of using features from networks pretrained on ImageNet.
128
+
129
+ A disadvantage of the spatial similarity sampling strategy is that in some cases pairs of images are not accurately labeled. For example, at transitions between different functional histological areas in the image there are, by definition, spatially proximal tiles that are visually different. In other cases two distant tiles can be visually similar because they are part of distant areas with the same histopathological function. This effect creates an inherent labeling noise in the dataset. A reasonable assumption in many WSIs is that region borders typically occupy less area in the image than the regions themselves, so mislabeled pairs are substantially less frequent than correctly labeled ones. Additionally, due to the statistical characteristics of their training process, deep learning methods have been shown to be predominantly robust to label noise even in extreme cases [10].
130
+
131
+ [figure]
132
+
133
+ Fig. 2. Example results for 5 tumor query tiles (A, B, C, D, E) in the image retrieval task and the 5 closest retrieved tiles from slides other than the query slide (A1-A5, B1-B5, C1-C5, D1-D5, E1-E5), ranked by distance from low to high, using tile descriptors generated by our method. It is interesting to note that even though some retrieved tiles look very different from the query tile (e.g. C3 and C), all of the retrieved tiles except A4 have been verified by an expert pathologist to contain tumor cells (i.e. correct class retrieval).
134
+
135
+ Future work will include verification strategies for sampled pairs, and new sampling strategies for self-supervised similarity learning as well as hyper-parameter tuning and exploration of the proximity thresholds in the dataset generation process.
136
+
137
+ § ACKNOWLEDGEMENTS
138
+
139
+ The authors would like to thank Amal Lahiani for her invaluable insights and constructive review.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BkxqiK5h0V/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,149 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Cancer Metastasis Detection Through Multiple Spatial Context Network
2
+
3
+ Wutong Zhang ${}^{1}$ , Chuang Zhu ${}^{1}$ , Jun Liu ${}^{1}$ , Ying Wang ${}^{2}$ , and Mulan Jin ${}^{2}$
4
+
5
+ ${}^{1}$ Center for Data Science, Beijing University of Posts and Telecommunications, Beijing, China
6
+
7
+ zhangwutong, czhu, liujun@bupt.edu.cn
8
+
9
+ ${}^{2}$ Capital Medical University, Beijing, China
10
+
11
+ wangying_ng_blk@126.com, kinmokuran@163.com
12
+
13
+ Abstract. Breast cancer is one of the leading causes of death by cancer in women, and it often requires accurate detection of metastasis in lymph nodes through Whole-slide Images (WSIs). At present, there are many CNN-based algorithms for cancer metastasis detection, which are generally patch-level models aiming to increase the sensitivity, speed, and consistency of metastasis detection. However, most of these algorithms train on each patch as an independent individual, which leads to the neglect of much important spatial context information in the WSI. In this paper, we propose a multiple spatial context network (MSC-Net) which considers the spatial correlations between neighboring patches by fusing the spatial information probability maps obtained from the two novel networks we propose, the self-surround spatial context stacked network (SSC-Net) and the center-surround spatial context shared network (CSC-Net). SSC-Net performs a deep mining of continuous information between patches, while CSC-Net strengthens the influence of the neighborhood information on the central patch. Furthermore, to save memory overhead and reduce computational complexity, we propose a framework which can quickly scan the WSI through a patch feature sharing mechanism. We demonstrate evaluations on the Camelyon16 dataset and compare with state-of-the-art methods. Our method provides a superior result.
14
+
15
+ Keywords: Deep Learning - Spatial Context Relation - Cancer Metastasis Detection.
16
+
17
+ ## 1 Introduction
18
+
19
+ Worldwide, there were about 2.1 million newly diagnosed female breast cancer cases in 2018, accounting for almost 1 in 4 cancer cases among women [2]. More than 90% of women diagnosed with breast cancer at an early stage survive their disease for at least 5 years. Early cancer diagnosis and treatment therefore play a crucial role in improving patients' survival rates. Specifically, during the diagnosis procedure, specialists evaluate both overall and local tissue organization via Whole-slide Images (WSIs). However, manually detecting tumor cells within extremely large WSIs can be tedious and time-consuming. Furthermore, it has been shown that there is limited inter-observer consensus in interpreting breast biopsy specimens [8]. Because of this, the development of automatic detection and diagnosis tools is challenging but also essential for the field, and developing algorithms to detect cancer metastasis in lymph node images using computer-assisted detection has become a research hotspot.
20
+
21
+ With the advent of convolutional neural networks (CNNs) and their excellent performance in natural image classification [5,9], there is a growing trend to adopt CNNs in computer-assisted detection of lymph node metastasis in WSIs [4,10]. Usually, because of the extremely large size of WSIs, most studies first extract small patches (e.g. 256 × 256 pixels) from WSIs and train a deep CNN to classify these small patches into normal or tumor regions [10,13]. However, these algorithms train on each patch independently, which leads to a serious problem: the loss of spatial context information and dependency in the WSI. Therefore, at inference time, the predictions over neighboring patches may be inconsistent, and the patch-level probability map may contain isolated outliers. Indeed, according to diagnostic experience, when a patch is in a tumor region, its neighboring patches also have a high probability of being labeled as tumor, since they are co-located in neighboring regions [6].
22
+
23
+ In order to capture spatial neighborhood information, Bin Kong [6] proposed Spatio-Net, which uses 2D-LSTM layers. However, the 2D-LSTM may cause heavy computational overhead and make the training process extremely slow. Yi Li [7] proposed a neural conditional random field (NCRF) deep learning framework. However, the spatial dependencies in NCRF, which is a post-processing method, are often suboptimal because of the complex configurations of patches.
24
+
25
+ In this work, we first propose SSC-Net, which can capture continuous spatial information more comprehensively from the internal structure of a single patch. Then, we develop CSC-Net to mine discrete spatial information with fixed directions around one center patch, which is complementary to SSC-Net. Finally, we fuse the spatial information probability maps obtained from the above two networks and use sliding windows to obtain the whole-WSI prediction results. In addition, to alleviate the memory consumption problem when sliding windows over the whole WSI, we propose a fast scanning framework based on asynchronous sample pre-fetching and neighborhood feature sharing.
26
+
27
+ ## 2 Methodology
28
+
29
+ ### 2.1 Overview
30
+
31
+ The framework is divided into two parts, as shown in Fig. 1. The first part is feature extraction using a CNN. It is worth noting that, unlike previous deep neural network methods that treat each small image patch independently, the proposed framework considers each image patch together with its neighbors. The second part uses two different components, SSC-Net and CSC-Net, to obtain continuous and discrete spatial-context-dependent information separately, which can effectively improve the patch-level accuracy and the generalization ability of the model. Then, the output of the above components is integrated for final classification.
32
+
33
+ ![01963a54-9e99-72e3-836e-4b47dac300af_2_454_402_919_394_0.jpg](images/01963a54-9e99-72e3-836e-4b47dac300af_2_454_402_919_394_0.jpg)
34
+
35
+ Fig. 1. The overall scheme of our proposed framework. First, the WSIs are divided into many small patches. Second, each patch and its eight neighbors are fed into the CNN feature extractor. The transform layer then sends the features to the two components in the multi-spatial layer. After spatial feature extraction, we fuse the results of the two components into a probability map, which is further processed to locate the metastases.
36
+
37
+ During the training phase, we load a grid of patches, e.g. $3 \times 3$, and only the predicted probability of the center patch is retained, for ease of implementation. In the testing phase, we perform inference over patches in a sliding window across the slide, generating a tumor probability heatmap; however, this incurs heavy computational overhead. Therefore, we propose a fast scanning framework to optimize the conventional sliding-window procedure.
38
+
39
+ ### 2.2 Feature Extractor With CNN
40
+
41
+ Unlike hand-crafted features [14, 15], a CNN feature extractor preserves the input's neighborhood relations and spatial locality in its latent higher-level feature representations. Therefore, using a CNN as the feature extractor not only retains the important spatial relevance of the image, but also greatly reduces the feature dimension, which makes it easier to capture spatial context information. In our framework, we employ two ResNet architectures [5], ResNet-18 and ResNet-34, which have proven powerful in image classification, to extract comprehensive feature representations of pathological images. After the transform layer, we obtain a grid of patch features.
42
+
43
+ ![01963a54-9e99-72e3-836e-4b47dac300af_3_481_347_841_217_0.jpg](images/01963a54-9e99-72e3-836e-4b47dac300af_3_481_347_841_217_0.jpg)
44
+
45
+ Fig. 2. The structure of SSC-Net and CSC-Net. (a) SSC-Net: the blue nodes are the neighborhood patches and the orange node is the center patch; the LSTM follows the arrows from the center patch, traverses all the neighborhood patches, and finally returns to the center patch. (b) CSC-Net: for the center patch, multi-directional parallel LSTMs mine discrete fixed-direction neighborhood information along the arrows.
46
+
47
+ ### 2.3 Self-Surround Spatial Context Stacked Network
48
+
49
+ To capture the spatial connectivity information in the grid of patch features more comprehensively, we design a separate closed-loop LSTM for each individual patch feature, as represented by the circle in Fig. 2(a). Through such a closed-loop LSTM structure, continuous spatial context information around each patch is captured and the continuous dependence between neighboring patches is preserved. After aggregating the information from the $N$ spatial neighbors, we obtain a new feature map, which is fed into the next identical stacked layer to capture spatial context information at a higher semantic level. Referring to the standard LSTM [12], the SSC-Net update can be written as:
50
+
51
+ $$
52
+ {O}_{t + 1}^{d},\left( {{m}_{t + 1}^{d},{c}_{t + 1}^{d}}\right) = \operatorname{LSTM}\left( {{I}_{t}^{d},\left( {{m}_{t}^{d},{c}_{t}^{d}}\right) }\right) \tag{1}
53
+ $$
54
+
55
+ where ${I}_{t}^{d}$ is the current input, ${O}_{t + 1}^{d}$ is the output, $\left( {{m}_{t}^{d},{c}_{t}^{d}}\right)$ are the long- and short-term memory states, and $t$ controls the order of the input blocks of the LSTM. For example, if the current center position is $(i, j)$, the sequence of $t$ traces a loop that starts at the center, visits each surrounding neighbor in turn, and finally returns to the center; $d$ is the index of the stacked layer.
56
+
57
+ This model captures the surrounding context of each patch through the closed-loop LSTM connection, and the stacking of identical layers tightens the relationship between patches. In this way, we obtain a feature that incorporates neighborhood information linked in a fixed logical order.
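The closed-loop visiting order that the index $t$ follows can be made concrete with a short sketch. The clockwise, top-left-first ordering below is an assumption for illustration; the text only states that the loop leaves the center, visits every neighbor, and returns to the center.

```python
def closed_loop_order(i, j):
    """Visiting order for the SSC-Net closed-loop LSTM around center (i, j).

    Starts at the center, walks through the eight neighbors clockwise
    (top-left first, an assumed ordering), and returns to the center.
    """
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise offsets
    return [(i, j)] + [(i + di, j + dj) for di, dj in ring] + [(i, j)]
```

For a center at (i, j) this yields 10 positions: the center, its 8 neighbors, and the center again, matching the circular arrows in Fig. 2(a).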
58
+
59
+ ### 2.4 Center-Surround Spatial Context Shared Network
60
+
61
+ While SSC-Net performs deep mining of the continuous logical sequence of spatial context, CSC-Net obtains discrete spatial context information along eight fixed directions and strengthens the learning of the neighborhood information of the central patch, as in Fig. 2(b). A traditional 2D-LSTM [3] can only take into account the neighborhood information to the left of and above the center patch, which is incomplete. We therefore adopt a novel network based on multi-directional parallel LSTMs, which processes the full spatial context of each patch in the WSI through eight sweeps over all patches by eight different LSTMs. The formulation is:
62
+
63
+ $$
64
+ {O}_{t + 1}^{1},\left( {{m}_{t + 1}^{1},{c}_{t + 1}^{1}}\right) = \operatorname{LSTM}\left( {{I}_{t}^{1},\left( {{m}_{t}^{1},{c}_{t}^{1}}\right) }\right)
65
+ $$
66
+
67
+ $$
68
+ {O}_{t + 1}^{2},\left( {{m}_{t + 1}^{2},{c}_{t + 1}^{2}}\right) = \operatorname{LSTM}\left( {{I}_{t}^{2},\left( {{m}_{t}^{2},{c}_{t}^{2}}\right) }\right)
69
+ $$
70
+
71
+ (2)
72
+
73
+ $\vdots$
74
+
75
+ $$
76
+ {O}_{t + 1}^{N},\left( {{m}_{t + 1}^{N},{c}_{t + 1}^{N}}\right) = \operatorname{LSTM}\left( {{I}_{t}^{N},\left( {{m}_{t}^{N},{c}_{t}^{N}}\right) }\right)
77
+ $$
78
+
79
+ where ${I}_{t}^{d}$ is the current input, ${O}_{t + 1}^{d}$ is the output, $\left( {{m}_{t}^{d},{c}_{t}^{d}}\right)$ are the long- and short-term memory states, and $t$ controls the order of the input blocks of the LSTM; $N$ is the number of adjacent patches ($N = 8$ in our case).
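Equation (2) amounts to N independent LSTMs, each sweeping its own directional input sequence with its own state. A minimal sketch, using a toy single-gate recurrence as a stand-in for the full gated LSTM (the real model uses vector-valued states):

```python
import math

def lstm_step(x, m, c, w=0.5):
    """Toy single-gate recurrence standing in for a full LSTM step."""
    c = math.tanh(w * x + 0.3 * m) + 0.5 * c  # candidate + carried cell state
    m = math.tanh(c)                          # hidden state
    return m, c

def directional_sweep(seqs):
    """Eq. (2) sketch: one independent LSTM per fixed direction.

    `seqs` maps direction d -> list of inputs I_t^d; each direction keeps
    its own (m, c) state across t, so the eight sweeps never interact.
    Returns the final hidden state per direction.
    """
    outputs = {}
    for d, seq in seqs.items():
        m = c = 0.0
        for x in seq:
            m, c = lstm_step(x, m, c)
        outputs[d] = m
    return outputs
```

Because the states are per-direction, the sweeps can run in parallel, which is the "multi-directional parallel LSTMs" property used by CSC-Net.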
80
+
81
+ ### 2.5 Multiple Spatial Context Information Integration Network
82
+
83
+ After passing through the above two components in parallel, the grid of patch features containing spatial context information enters a fully connected layer for classification, producing two grids of patch classification results. The result of SSC-Net emphasizes the continuous spatial dependence between each patch and its surroundings, while the result of CSC-Net further emphasizes the influence of discrete fixed directions on the spatial structure of the center patch. The two components thus provide complementary spatial context information. Therefore, we combine the spatial probability maps obtained from the two networks to produce the final prediction.
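The combination step can be sketched as an element-wise fusion of the two patch-level probability maps. Equal weighting is an assumption for illustration; the text states that the maps are combined but does not specify the fusion rule.

```python
def fuse_probability_maps(p_ssc, p_csc, alpha=0.5):
    """Fuse SSC-Net and CSC-Net patch probability maps.

    Element-wise convex combination of two equally shaped 2D maps
    (lists of rows). alpha = 0.5 (equal weighting) is an assumption,
    not a value stated in the paper.
    """
    assert len(p_ssc) == len(p_csc)
    return [[alpha * a + (1 - alpha) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(p_ssc, p_csc)]
```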
84
+
85
+ ### 2.6 A Fast WSI Scanning Framework
86
+
87
+ Asynchronous Sample Prefetching. During the training phase, a heavy I/O bottleneck exists: the GPU is often idle while waiting for batched training data to be fetched. To resolve this, we adopt an asynchronous sample prefetching mechanism in which multiple CPU producer processes prepare the training samples while one consumer process feeds the GPU. This strategy keeps the GPU busy and yields at least a 10× acceleration in the training stage.
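The producer-consumer idea can be sketched with a bounded queue. This minimal version uses threads rather than the multiple CPU processes described above, and `load_batch` is a hypothetical stand-in for the patch-loading routine:

```python
import queue
import threading

def prefetch(load_batch, n_batches, n_workers=4, depth=8):
    """Minimal asynchronous prefetcher (a sketch of the mechanism).

    Several producer threads call `load_batch(i)` (stand-in for CPU-side
    patch loading) and push results into a bounded queue; the consumer,
    i.e. the training loop, pops batches without waiting on I/O.
    """
    q = queue.Queue(maxsize=depth)
    indices = iter(range(n_batches))
    lock = threading.Lock()

    def producer():
        while True:
            with lock:                 # hand out batch indices safely
                i = next(indices, None)
            if i is None:
                break
            q.put(load_batch(i))       # blocks when the queue is full

    workers = [threading.Thread(target=producer) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for _ in range(n_batches):
        yield q.get()                  # consumer side: batches arrive pre-loaded
    for w in workers:
        w.join()
```

Batches may arrive out of order since the producers run concurrently; in training this is harmless (it only shuffles the sample order).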
88
+
89
+ Neighborhood Feature Sharing. In the testing phase, we perform inference over patches in a sliding window across the slide to generate a tumor probability heatmap, which incurs heavy computational overhead. We therefore adopt a feature sharing method to avoid repetitive computation and improve scanning efficiency, as shown in Fig. 3. The merit of the neighborhood feature sharing architecture is that it speeds up inference by sharing computations in the overlapping regions (blue patches).
90
+
91
+ ![01963a54-9e99-72e3-836e-4b47dac300af_5_489_342_846_359_0.jpg](images/01963a54-9e99-72e3-836e-4b47dac300af_5_489_342_846_359_0.jpg)
92
+
93
+ Fig. 3. A fast WSI scanning framework. When predicting the center block, the feature maps of the eight neighborhoods must be computed simultaneously. When predicting the next adjacent center block (to the right or below), only the feature maps of the newly read patches (gray blocks) need to be computed, while the blocks already computed in the previous step (blue blocks) are reused without recalculation.
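The sharing scheme in Fig. 3 reduces to memoizing each patch feature so that overlapping 3×3 windows reuse it. A sketch, where `extract(i, j)` is a hypothetical stand-in for the CNN forward pass on one patch:

```python
def scan_with_shared_features(grid_h, grid_w, extract):
    """Slide a 3x3 window over a patch grid with neighborhood feature sharing.

    `extract(i, j)` computes a patch's CNN feature. Each feature is computed
    once and cached, so overlapping windows reuse it instead of recomputing,
    as in Fig. 3. Returns, per window center, the list of its 9 features.
    """
    cache = {}

    def feat(i, j):
        if (i, j) not in cache:
            cache[(i, j)] = extract(i, j)  # computed exactly once per patch
        return cache[(i, j)]

    windows = {}
    for i in range(1, grid_h - 1):
        for j in range(1, grid_w - 1):
            windows[(i, j)] = [feat(i + di, j + dj)
                               for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    return windows
```

On a 4 × 4 grid, the naive scheme would run the extractor 4 windows × 9 patches = 36 times; with sharing it runs exactly once per patch, 16 times.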
94
+
95
+ ## 3 Experiments
96
+
97
+ In this section, extensive experiments were conducted on the CAMELYON16 [1] dataset to evaluate the proposed model for cancer metastasis detection in WSIs. The dataset includes 160 normal and 110 tumor WSIs for training, and 81 normal and 49 tumor WSIs for testing. We conducted all experiments at 40× magnification. First, we employ the simple Otsu algorithm [11] to determine an adaptive threshold and filter out most of the white background. Then, we randomly sampled 200,000 768 × 768 patches from the non-tumor, non-background regions of the tumor slides and the non-background regions of the normal slides as negative samples. To probe the efficacy of our method, we first evaluate the model under different configurations. We tried different CNN feature extractors, and experiments show that a ResNet-18 network suffices to extract appropriate features while saving memory. Our baseline uses the ResNet-18 network directly. We also compared our method with several state-of-the-art methods using accuracy as the evaluation metric.
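The background filtering step applies Otsu's method [11] to pick the threshold between tissue and white background. A plain re-implementation for illustration (libraries such as OpenCV and scikit-image ship production versions):

```python
def otsu_threshold(gray, levels=256):
    """Otsu's method [11] on a flat list of integer gray levels.

    Returns the threshold that maximizes the between-class variance;
    regions whose intensity exceeds it are treated as white background.
    """
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = w_b = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]                            # background pixel count
        if w_b in (0, total):
            continue                              # one class empty: skip
        sum_b += t * hist[t]
        m_b = sum_b / w_b                         # background mean
        m_f = (sum_all - sum_b) / (total - w_b)   # foreground mean
        var = w_b * (total - w_b) * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a slide thumbnail with dark tissue and bright background, the returned threshold falls between the two intensity modes, so thresholding the patch mean intensity discards most background patches.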
98
+
99
+ As shown in Table 1, on the full dataset all of the models proposed in this paper (SSC-Net, CSC-Net and MSC-Net) achieve higher accuracy. As expected, MSC-Net attains the highest accuracy, 6.72% higher than the baseline, while maintaining a high FROC. It is also worth noting that our model performs better than the other models when only a small amount of data is available, owing to the incorporation of neighborhood information. For most deep models, increasing model complexity can cause serious over-fitting on small datasets, so that their performance with little data is worse than that of simpler models.
100
+
101
+ Fig. 4 shows the training curves on 10% of the dataset. As analyzed above, our model still exhibits smooth training curves with a small amount of data, in contrast to the fluctuation of the baseline. Our model therefore generalizes well when data are scarce, due to the use of stacked LSTMs, and the training efficiency of the model is greatly improved.
102
+
103
+ Table 1. Quantitative comparisons
104
+
105
+ <table><tr><td>Model</td><td>ACC (10% Data)</td><td>ACC (100% Data)</td><td>Ave. FROC</td><td>STD</td></tr><tr><td>Baseline</td><td>88.52%</td><td>92.42%</td><td>0.4301</td><td>0.026</td></tr><tr><td>ResNet34</td><td>88.62%</td><td>92.76%</td><td>0.5241</td><td>0.023</td></tr><tr><td>ResNet50</td><td>88.68%</td><td>93.57%</td><td>0.5249</td><td>0.019</td></tr><tr><td>NCRF</td><td>91.15%</td><td>92.96%</td><td>0.8138</td><td>0.010</td></tr><tr><td>SSC-Net</td><td>92.07%</td><td>92.59%</td><td>0.7825</td><td>0.011</td></tr><tr><td>CSC-Net</td><td>94.13%</td><td>97.54%</td><td>0.7526</td><td>0.012</td></tr><tr><td>MSC-Net</td><td>95.24%</td><td>98.43%</td><td>0.8078</td><td>0.010</td></tr></table>
106
+
107
+ ![01963a54-9e99-72e3-836e-4b47dac300af_6_419_723_454_306_0.jpg](images/01963a54-9e99-72e3-836e-4b47dac300af_6_419_723_454_306_0.jpg)
108
+
109
+ Fig. 4. Accuracy on ${10}\%$ of dataset
110
+
111
+ ![01963a54-9e99-72e3-836e-4b47dac300af_6_898_723_467_306_0.jpg](images/01963a54-9e99-72e3-836e-4b47dac300af_6_898_723_467_306_0.jpg)
112
+
113
+ Fig. 5. Accuracy on full dataset
114
+
115
+ ## 4 Conclusion
116
+
117
+ In this paper, we propose a novel multiple spatial context network, composed of SSC-Net and CSC-Net, which improves the detection of metastases in WSIs by integrating neighborhood and context features. Both SSC-Net and CSC-Net are based on the LSTM. A standard LSTM can memorize context information over long periods in sequential data; in images, this temporal dependency learning is transferred to the spatial domain, which is significant for obtaining continuous spatial dependencies. SSC-Net and CSC-Net therefore generalize the standard LSTM by providing recurrent connections along all spatial dependencies present in the data. Moreover, we propose a fast scanning framework based on asynchronous sample prefetching and neighborhood feature sharing to alleviate the memory consumption of sliding a window over the whole WSI. We demonstrate that the proposed method achieves superior performance compared to other state-of-the-art methods on the Camelyon 2016 Grand Challenge dataset, even surpassing human performance. Furthermore, the proposed fast WSI scanning framework matches the speed requirements of clinical practice, processing a whole-slide image within a very short time. We expect our multiple spatial context network to be useful for boosting performance in a variety of medical image analysis challenges.
118
+
119
+ ## References
120
+
121
+ 1. Bejnordi, B.E., Veta, M., Van Diest, P.J., Van Ginneken, B., Karssemeijer, N., Litjens, G., Van Der Laak, J.A., Hermsen, M., Manson, Q.F., Balkenhol, M., et al.: Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 318(22), 2199-2210 (2017)
122
+
123
+ 2. Bray, F., Ferlay, J., Soerjomataram, I., Siegel, R.L., Torre, L.A., Jemal, A.: Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians 68(6), 394-424 (2018)
124
+
125
+ 3. Byeon, W., Liwicki, M., Breuel, T.M.: Texture classification using 2d lstm networks. In: 2014 22nd international conference on pattern recognition. pp. 1144- 1149. IEEE (2014)
126
+
127
+ 4. Chen, R., Jing, Y., Jackson, H.: Identifying metastases in sentinel lymph nodes with deep convolutional neural networks. arXiv preprint arXiv:1608.01658 (2016)
128
+
129
+ 5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770-778 (2016)
130
+
131
+ 6. Kong, B., Wang, X., Li, Z., Song, Q., Zhang, S.: Cancer metastasis detection via spatially structured deep network. In: International Conference on Information Processing in Medical Imaging. pp. 236-248. Springer (2017)
132
+
133
+ 7. Li, Y., Ping, W.: Cancer metastasis detection with neural conditional random field. arXiv preprint arXiv:1806.07064 (2018)
134
+
135
+ 8. Liang, X., Shen, X., Xiang, D., Feng, J., Lin, L., Yan, S.: Semantic object parsing with local-global long short-term memory. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3185-3193 (2016)
136
+
137
+ 9. Lin, H., Chen, H., Dou, Q., Wang, L., Qin, J., Heng, P.A.: Scannet: A fast and dense scanning framework for metastastic breast cancer detection from whole-slide image. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 539-546. IEEE (2018)
138
+
139
+ 10. Liu, Y., Gadepalli, K., Norouzi, M., Dahl, G.E., Kohlberger, T., Boyko, A., Venu-gopalan, S., Timofeev, A., Nelson, P.Q., Corrado, G.S., et al.: Detecting cancer metastases on gigapixel pathology images. arXiv preprint arXiv:1703.02442 (2017)
140
+
141
+ 11. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics 9(1), 62-66 (1979)
142
+
143
+ 12. Peng, Z., Zhang, R., Liang, X., Liu, X., Lin, L.: Geometric scene parsing with hierarchical lstm. arXiv preprint arXiv:1604.01931 (2016)
144
+
145
+ 13. Wang, D., Khosla, A., Gargeya, R., Irshad, H., Beck, A.H.: Deep learning for identifying metastatic breast cancer. arXiv preprint arXiv:1606.05718 (2016)
146
+
147
+ 14. Zhang, X., Liu, W., Dundar, M., Badve, S., Zhang, S.: Towards large-scale histopathological image analysis: Hashing-based image retrieval. IEEE Transactions on Medical Imaging 34(2), 496-506 (2015)
148
+
149
+ 15. Zhang, X., Xing, F., Su, H., Yang, L., Zhang, S.: High-throughput histopathological image analysis via robust cell segmentation and hashing. Medical Image Analysis 26(1), 306-315 (2015)
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/BkxqiK5h0V/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,142 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § CANCER METASTASIS DETECTION THROUGH MULTIPLE SPATIAL CONTEXT NETWORK
2
+
3
+ Wutong Zhang ${}^{1}$ , Chuang Zhu ${}^{1}$ , Jun Liu ${}^{1}$ , Ying Wang ${}^{2}$ , and Mulan Jin ${}^{2}$
4
+
5
+ ${}^{1}$ Center for Data Science, Beijing University of Posts and Telecommunications, Beijing, China
6
+
7
+ zhangwutong, czhu, liujun@bupt.edu.cn
8
+
9
+ ${}^{2}$ Capital Medical University, Beijing, China
10
+
11
+ wangying_ng_blk@126.com, kinmokuran@163.com
12
+
13
+ Abstract. Breast cancer is one of the leading causes of death by cancer in women, and it often requires accurate detection of metastasis in lymph nodes through Whole-slide Images (WSIs). At present, there are many algorithms of cancer metastasis detection based on CNN, which are generally patch-level models, aiming for increasing the sensitivity, speed, and consistency of metastasis detection. However, most of these algorithms use patch as an independent individual to train, which leads to the neglect of much important spatial context information in WSI. In this paper, we propose a multiple spatial context network (MSC-Net) which considers the spatial correlations between neighboring patches through fusing the spatial information probability maps obtained from the two novel networks we propose, the self-surround spatial context stacked network (SSC-Net) and the center-surround spatial context shared network (CSC-Net). The SSC-Net is a deep mining of continuous information between patches, while CSC-Net strengthens the influence of the neighborhood information to the central patch. Furthermore, for saving memory overhead and reducing computational complexity, we propose a framework which can quickly scan the WSI through the mechanism of the patch feature sharing. We demonstrate evaluations on the camelyon16 dataset and compare with the state-of-the-art trackers. Our method provides a superior result.
14
+
15
+ Keywords: Deep Learning - Spatial Context Relation - Cancer Metastasis Detection.
16
+
17
+ § 1 INTRODUCTION
18
+
19
+ Worldwide, there were about 2.1 million newly diagnosed female breast cancer cases in 2018, accounting for almost 1 in 4 cancer cases among women [2]. More than 90% of women diagnosed with breast cancer at an early stage survive their disease for at least 5 years; early cancer diagnosis and treatment therefore play a crucial role in improving patients' survival rate. Specifically, during the diagnosis procedure, specialists evaluate both overall and local tissue organization via Whole-slide Images (WSIs). However, manually detecting tumor cells within extremely large WSIs is tedious and time-consuming, and it has been shown that there is limited inter-observer consensus in interpreting breast biopsy specimens [8]. The development of automatic detection and diagnosis tools is therefore challenging but essential, and developing algorithms for computer-assisted detection of cancer metastases in lymph node images is a research hotspot.
20
+
21
+ In recent years, with the advent of convolutional neural networks (CNNs) and their excellent performance in natural image classification [5, 9], there has been a growing trend to adopt CNNs for computer-assisted detection of lymph node metastasis in WSIs [4, 10]. Usually, because of the extremely large size of WSIs, most studies first extract small patches (e.g. 256 × 256 pixels) from WSIs and train a deep CNN to classify these patches into normal or tumor regions [10, 13]. However, these algorithms treat each patch independently, which leads to a serious problem: the loss of spatial context information and dependencies within the WSI. Therefore, at inference time, the predictions over neighboring patches may be inconsistent, and the patch-level probability map may contain isolated outliers. In fact, according to diagnostic experience, when a patch lies in a tumor region, its neighboring patches also have a high probability of being labeled as tumor, since they are co-located in neighboring regions [6].
22
+
23
+ To capture spatial neighborhood information, Kong et al. [6] proposed Spatio-Net, which uses 2D-LSTM layers; however, the 2D-LSTM incurs heavy computational overhead and makes the training process extremely slow. Li and Ping [7] proposed a neural conditional random field (NCRF) deep learning framework, but since the NCRF models spatial dependencies as a post-processing step, the result is often suboptimal for complex patch configurations.
24
+
25
+ In this work, we first propose SSC-Net, which captures continuous spatial information comprehensively over the internal structure of a single patch neighborhood. We then develop CSC-Net to mine discrete spatial information along fixed directions around a center patch, which is complementary to SSC-Net. Finally, we fuse the spatial probability maps produced by the two networks and use a sliding window to obtain the prediction for the whole WSI. In addition, to alleviate the memory consumption of sliding a window over the whole WSI, we propose a fast scanning framework based on asynchronous sample prefetching and neighborhood feature sharing.
26
+
27
+ § 2 METHODOLOGY
28
+
29
+ § 2.1 OVERVIEW
30
+
31
+ The framework is divided into two parts, as shown in Fig. 1. The first part is feature extraction with a CNN. It is worth noting that, unlike previous deep neural network methods that treat each small image patch independently, the proposed framework considers each image patch together with its neighbors. The second part uses two different components, SSC-Net and CSC-Net, to capture continuous and discrete spatial context information separately, which effectively improves patch-level accuracy and the generalization ability of the model. The outputs of the two components are then integrated for the final classification.
32
+
33
+ < g r a p h i c s >
34
+
35
+ Fig. 1. The overall scheme of our proposed framework. First, the WSIs are divided into many small patches. Second, each patch and its eight neighbors are fed into the CNN feature extractor. The transform layer then sends the features to the two components in the multi-spatial layer. After spatial feature extraction, we fuse the results of the two components into a probability map, which is further processed to locate the metastases.
36
+
37
+ During the training phase, we load a grid of patches, e.g. $3 \times 3$, and only the predicted probability of the center patch is retained, for ease of implementation. In the testing phase, we perform inference over patches in a sliding window across the slide, generating a tumor probability heatmap; however, this incurs heavy computational overhead. Therefore, we propose a fast scanning framework to optimize the conventional sliding-window procedure.
38
+
39
+ § 2.2 FEATURE EXTRACTOR WITH CNN
40
+
41
+ Unlike hand-crafted features [14, 15], a CNN feature extractor preserves the input's neighborhood relations and spatial locality in its latent higher-level feature representations. Therefore, using a CNN as the feature extractor not only retains the important spatial relevance of the image, but also greatly reduces the feature dimension, which makes it easier to capture spatial context information. In our framework, we employ two ResNet architectures [5], ResNet-18 and ResNet-34, which have proven powerful in image classification, to extract comprehensive feature representations of pathological images. After the transform layer, we obtain a grid of patch features.
42
+
43
+ < g r a p h i c s >
44
+
45
+ Fig. 2. The structure of SSC-Net and CSC-Net. (a) SSC-Net: the blue nodes are the neighborhood patches and the orange node is the center patch; the LSTM follows the arrows from the center patch, traverses all the neighborhood patches, and finally returns to the center patch. (b) CSC-Net: for the center patch, multi-directional parallel LSTMs mine discrete fixed-direction neighborhood information along the arrows.
46
+
47
+ § 2.3 SELF-SURROUND SPATIAL CONTEXT STACKED NETWORK
48
+
49
+ To capture the spatial connectivity information in the grid of patch features more comprehensively, we design a separate closed-loop LSTM for each individual patch feature, as represented by the circle in Fig. 2(a). Through such a closed-loop LSTM structure, continuous spatial context information around each patch is captured and the continuous dependence between neighboring patches is preserved. After aggregating the information from the $N$ spatial neighbors, we obtain a new feature map, which is fed into the next identical stacked layer to capture spatial context information at a higher semantic level. Referring to the standard LSTM [12], the SSC-Net update can be written as:
50
+
51
+ $$
52
+ {O}_{t + 1}^{d},\left( {{m}_{t + 1}^{d},{c}_{t + 1}^{d}}\right) = \operatorname{LSTM}\left( {{I}_{t}^{d},\left( {{m}_{t}^{d},{c}_{t}^{d}}\right) }\right) \tag{1}
53
+ $$
54
+
55
+ where ${I}_{t}^{d}$ is the current input, ${O}_{t + 1}^{d}$ is the output, $\left( {{m}_{t}^{d},{c}_{t}^{d}}\right)$ are the long- and short-term memory states, and $t$ controls the order of the input blocks of the LSTM. For example, if the current center position is $(i, j)$, the sequence of $t$ traces a loop that starts at the center, visits each surrounding neighbor in turn, and finally returns to the center; $d$ is the index of the stacked layer.
56
+
57
+ This model captures the surrounding context of each patch through the closed-loop LSTM connection, and the stacking of identical layers tightens the relationship between patches. In this way, we obtain a feature that incorporates neighborhood information linked in a fixed logical order.
58
+
59
+ § 2.4 CENTER-SURROUND SPATIAL CONTEXT SHARED NETWORK
60
+
61
+ While SSC-Net performs deep mining of the continuous logical sequence of spatial context, CSC-Net obtains discrete spatial context information along eight fixed directions and strengthens the learning of the neighborhood information of the central patch, as in Fig. 2(b). A traditional 2D-LSTM [3] can only take into account the neighborhood information to the left of and above the center patch, which is incomplete. We therefore adopt a novel network based on multi-directional parallel LSTMs, which processes the full spatial context of each patch in the WSI through eight sweeps over all patches by eight different LSTMs. The formulation is:
62
+
63
+ $$
64
+ {O}_{t + 1}^{1},\left( {{m}_{t + 1}^{1},{c}_{t + 1}^{1}}\right) = \operatorname{LSTM}\left( {{I}_{t}^{1},\left( {{m}_{t}^{1},{c}_{t}^{1}}\right) }\right)
65
+ $$
66
+
67
+ $$
68
+ {O}_{t + 1}^{2},\left( {{m}_{t + 1}^{2},{c}_{t + 1}^{2}}\right) = \operatorname{LSTM}\left( {{I}_{t}^{2},\left( {{m}_{t}^{2},{c}_{t}^{2}}\right) }\right)
69
+ $$
70
+
71
+ (2)
72
+
73
+ $\vdots$
74
+
75
+ $$
76
+ {O}_{t + 1}^{N},\left( {{m}_{t + 1}^{N},{c}_{t + 1}^{N}}\right) = \operatorname{LSTM}\left( {{I}_{t}^{N},\left( {{m}_{t}^{N},{c}_{t}^{N}}\right) }\right)
77
+ $$
78
+
79
+ where ${I}_{t}^{d}$ is the current input, ${O}_{t + 1}^{d}$ is the output, $\left( {{m}_{t}^{d},{c}_{t}^{d}}\right)$ are the long- and short-term memory states, and $t$ controls the order of the input blocks of the LSTM; $N$ is the number of adjacent patches ($N = 8$ in our case).
80
+
81
+ § 2.5 MULTIPLE SPATIAL CONTEXT INFORMATION INTEGRATION NETWORK
82
+
83
+ After passing through the above two components in parallel, the grid of patch features containing spatial context information enters a fully connected layer for classification, producing two grids of patch classification results. The result of SSC-Net emphasizes the continuous spatial dependence between each patch and its surroundings, while the result of CSC-Net further emphasizes the influence of discrete fixed directions on the spatial structure of the center patch. The two components thus provide complementary spatial context information. Therefore, we combine the spatial probability maps obtained from the two networks to produce the final prediction.
84
+
85
+ § 2.6 A FAST WSI SCANNING FRAMEWORK
86
+
87
+ Asynchronous Sample Prefetching. During the training phase, a heavy I/O bottleneck exists: the GPU is often idle while waiting for batched training data to be fetched. To resolve this, we adopt an asynchronous sample prefetching mechanism in which multiple CPU producer processes prepare the training samples while one consumer process feeds the GPU. This strategy keeps the GPU busy and yields at least a 10× acceleration in the training stage.
88
+
89
+ Neighborhood Feature Sharing. In the testing phase, we perform inference over patches in a sliding window across the slide to generate a tumor probability heatmap, which incurs heavy computational overhead. We therefore adopt a feature sharing method to avoid repetitive computation and improve scanning efficiency, as shown in Fig. 3. The merit of the neighborhood feature sharing architecture is that it speeds up inference by sharing computations in the overlapping regions (blue patches).
90
+
91
+ < g r a p h i c s >
92
+
93
+ Fig. 3. A fast WSI scanning framework. When predicting the center block, the feature maps of the eight neighborhoods must be computed simultaneously. When predicting the next adjacent center block (to the right or below), only the feature maps of the newly read patches (gray blocks) need to be computed, while the blocks already computed in the previous step (blue blocks) are reused without recalculation.
94
+
95
+ § 3 EXPERIMENTS
96
+
97
+ In this section, extensive experiments were conducted on the CAMELYON16 [1] dataset to evaluate the proposed model for cancer metastasis detection in WSIs. The dataset includes 160 normal and 110 tumor WSIs for training, and 81 normal and 49 tumor WSIs for testing. We conducted all experiments at 40× magnification. First, we employ the simple Otsu algorithm [11] to determine an adaptive threshold and filter out most of the white background. Then, we randomly sampled 200,000 768 × 768 patches from the non-tumor, non-background regions of the tumor slides and the non-background regions of the normal slides as negative samples. To probe the efficacy of our method, we first evaluate the model under different configurations. We tried different CNN feature extractors, and experiments show that a ResNet-18 network suffices to extract appropriate features while saving memory. Our baseline uses the ResNet-18 network directly. We also compared our method with several state-of-the-art methods using accuracy as the evaluation metric.
98
+
99
+ As shown in Table 1, on the full dataset all of the models proposed in this paper, SSC-Net, CSC-Net and Multi-Net (MSC-Net in Table 1), achieve higher accuracy. As expected, Multi-Net has the highest accuracy, which is ${6.72}\%$ higher than the baseline, while still guaranteeing a high FROC. At the same time, it is worth noting that our model works better on small datasets than other models because of its combination of domain information. For most deep models, increasing model complexity may cause serious over-fitting on small datasets, so that their performance on small datasets is not as good as that of simpler models.
100
+
101
+ Fig. 4 shows the curves of the training process on ${10}\%$ of the dataset. As analyzed above, our model still has smooth training curves with a small amount of data, in contrast to the fluctuation of the baseline. Our model therefore has a naturally strong generalization ability when the amount of data is small, due to the use of stacked LSTMs, and the training efficiency of the model can be greatly improved.
102
+
103
+ Table 1. Quantitative comparisons
+
+ | Model    | ACC (10% Data) | ACC (100% Data) | Ave. FROC | STD   |
+ |----------|----------------|-----------------|-----------|-------|
+ | Baseline | 88.52%         | 92.42%          | 0.4301    | 0.026 |
+ | ResNet34 | 88.62%         | 92.76%          | 0.5241    | 0.023 |
+ | ResNet50 | 88.68%         | 93.57%          | 0.5249    | 0.019 |
+ | NCRF     | 91.15%         | 92.96%          | 0.8138    | 0.010 |
+ | SSC-Net  | 92.07%         | 92.59%          | 0.7825    | 0.011 |
+ | CSC-Net  | 94.13%         | 97.54%          | 0.7526    | 0.012 |
+ | MSC-Net  | 95.24%         | 98.43%          | 0.8078    | 0.010 |
131
+
132
+ < g r a p h i c s >
133
+
134
+ Fig. 4. Accuracy on ${10}\%$ of dataset
135
+
136
+ < g r a p h i c s >
137
+
138
+ Fig. 5. Accuracy on full dataset
139
+
140
+ § 4 CONCLUSION
141
+
142
+ In this paper, we propose a novel multiple spatial context network, composed of SSC-Net and CSC-Net, which improves the detection of metastases in WSIs by integrating neighborhood and background features. SSC-Net and CSC-Net are based on the LSTM. A standard LSTM can easily memorize context information over long periods of time in sequence data; in images, this temporal dependency learning is transferred to the spatial domain, which is significant for obtaining continuous spatial dependencies. Therefore, SSC-Net and CSC-Net generalize the standard LSTM by providing recurrent connections along all spatial dependencies present in the data. Moreover, we propose a fast scanning framework with asynchronous sample pre-fetching and neighborhood feature sharing to alleviate the memory consumption problem when sliding windows over the whole WSI. We demonstrate that the proposed method achieved superior performance compared to other state-of-the-art methods on the Camelyon 2016 Grand Challenge dataset and even surpassed human performance. Furthermore, the proposed fast WSI scanning framework matches the speed requirements of clinical practice, as it can process a whole-slide image within a very short time. We expect that our multiple spatial context network will be useful for boosting performance in a variety of medical image analysis challenges.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/H1eTF7BqZr/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,169 @@
1
+ # U-net Ensemble Model for Segmentation in Histopathology Images
2
+
3
+ Yilong Li, Xingru Huang, Yaqi Wang,
4
+
5
+ Zhaoyang Xu, Yibao Sun, and Qianni Zhang
6
+
7
+ Graduate School of Computer Science, Queen Mary University of London, London, UK, yilong.li@qmul.ac.uk
8
+
9
+ Abstract. In this work, a multi-scale U-net[1] fusion model is proposed for automatic cancer detection and classification in whole-slide lung histopathology[2]. The model integrates two types of U-net structure, trained on different image scales and subsets, aiming to address the challenges posed by the significant variation in data presentation. Since lung histopathology images come in various sub-categories and appearances, the performance of an individually trained network is usually limited. We train a variety of networks using multiple re-scaled images and different subsets of images, and finally ensemble the outputs of the various networks. Smoothing and noise elimination are conducted using convolutional Conditional Random Fields (CRFs)[3]. The proposed model is validated on the Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology (ACDC@LungHP) challenge at ISBI 2019. Our method achieves a dice coefficient of 0.7968, which ranked third on the leaderboard.
10
+
11
+ Keywords: Model ensemble, Tumor, Segmentation, Convolutional CRFs
12
+
13
+ ## 1 Introduction
14
+
15
+ Digital pathology has been gradually introduced into clinical practice. Digital pathology scanners can provide whole slide images (WSI) with very high resolution, but for pathologists, the time and energy required for manual analysis of the WSI of every single case are unbearable. Automatic computational analysis algorithms based on machine learning provide a way to reduce pathologists' workload. This project focuses on the detection and segmentation of lung tumour tissues.[4] Currently, most types of lung cancer are analysed mainly by pathologists' naked-eye examination of the slice image of pulmonary lobes to determine the location, size and pattern of the tumor. This kind of diagnosis plays a vital role in the prognosis of lung cancer and the determination of the therapeutic regimen. However, because of the massive number of pathological slices and the time spent on each one, the diagnostic process is usually laborious. Therefore, segmenting lung tumors through automatic analysis of pathological slices can help pathologists save significant time and effort. Deep learning methods based on convolutional neural networks are currently the most popular technology for tumor segmentation. However, network-based training requires a huge dataset to fit and validate the parameters of each convolutional layer in order to achieve accurate tumor segmentation. Therefore, the primary purpose of this study is to use limited datasets, through selection and optimisation of single networks or combinations of multiple networks, to achieve efficient and more accurate segmentation of tumors in pathological image slices of lung cancer.
16
+
17
+ In this work, we design a specific process for tumor tissue segmentation. The dataset is trained to generate the probability map of the mask of the WSI image. Training on such varying data will mislead a single model and, as a result, produce unsatisfactory predictions. Hence, we believe that the employment of multiple specifically trained models is necessary. Empirically, a well-trained model can identify the location of the tumor if a magnification of at least ${10}\mathrm{X}$ is used. Thus, the dataset is classified into three sub-datasets by the k-means algorithm,[5] and each sub-dataset is trained with the modified U-net network. In this way, all of the data are used in training, and different features are reflected in different models. This method can analyse the features of the WSI image more specifically, so that the differences between tumor tissues and between tumor sources can be studied by the models. Then, by ensembling all the models trained on the whole dataset or on the three sub-datasets, the performance of the ensemble model is much better than that of any single model we trained. Finally, image smoothing and noise elimination are conducted after training using convolutional Conditional Random Fields (CRFs), and the output images are aligned with the original images. The output of the convolutional CRFs is the final output of the experiment.
18
+
19
+ The design of the overall model structure is illustrated in Fig.1.
20
+
21
+ ![01963a50-daa4-7929-a27a-f2e4902cf17a_1_370_1279_965_506_0.jpg](images/01963a50-daa4-7929-a27a-f2e4902cf17a_1_370_1279_965_506_0.jpg)
22
+
23
+ Fig. 1. The overall framework of tumor tissue segmentation.
24
+
25
+ ## 2 Methods
26
+
27
+ In this work, there are three crucial processes in tumor segmentation. The first is data preprocessing: the dataset is classified into three sub-datasets by the k-means clustering algorithm. Then, based on the different resolutions of the dataset and the different sub-datasets, six models of two types are designed and trained. Fusing these models and merging several types of masks yields more intuitively accurate segmentation masks.[6] Finally, convolutional CRFs are used to eliminate the noise and smooth the boundary of the segmentation mask.
28
+
29
+ In Fig.1, there are six models in total. Models 1-3 are trained with the whole dataset, but at three resolutions of 576, 1152 and 2048 pixels; models 4-6 are trained with the sub-datasets. The dataset is divided into three sub-datasets by the k-means algorithm, so models 4-6 train on sub-datasets 1, 2 and 3 respectively.
30
+
31
+ ### 2.1 Model Training
32
+
33
+ The overall objective of the proposed network is to make the segmentation of the tumor tissue from normal tissue in the WSI image. Due to the purpose, a modified U-net is used to focus the model more on the overview.
34
+
35
+ In this network, the energy function is computed by a pixel-wise soft-max[7] over the final feature map combined with the cross entropy loss function.[8]
36
+
37
+ The soft-max is defined as:
38
+
39
+ $$
40
+ {p}_{k}\left( x\right) = \exp \left( {{a}_{k}\left( x\right) }\right) /\left( {\mathop{\sum }\limits_{{{k}^{\prime } = 1}}^{K}\exp \left( {{a}_{{k}^{\prime }}\left( x\right) }\right) }\right) \tag{1}
41
+ $$
42
+
43
+ where ${p}_{k}\left( x\right)$ denotes the probability value in feature channel $k$ at the pixel position $x \in \Omega$ with $\Omega \subset {\mathbb{Z}}^{2}$. $K$ is the number of classes and ${p}_{k}\left( x\right)$ is the approximated maximum-function, i.e. ${p}_{k}\left( x\right) \approx 1$ for the $k$ that has the maximum activation ${a}_{k}\left( x\right)$ and ${p}_{k}\left( x\right) \approx 0$ for all other $k$ .
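Eq. (1) applied channel-wise can be written in a few lines of numpy. A sketch follows; the shift by the per-pixel maximum is a standard numerical-stability detail, not part of the paper.

```python
# Pixel-wise soft-max of Eq. (1): per-class activations a_k(x) of shape
# (K, H, W) become probabilities p_k(x) that sum to 1 at each pixel.
import numpy as np

def pixel_softmax(a):
    a = a - a.max(axis=0, keepdims=True)   # stability shift (does not change result)
    e = np.exp(a)
    return e / e.sum(axis=0, keepdims=True)

a = np.array([[[2.0]], [[0.0]]])           # K=2 classes, a single pixel
p = pixel_softmax(a)
print(p.sum(axis=0))                       # each pixel sums to 1
```

The channel with the largest activation gets a probability close to 1, matching the approximated maximum-function described above.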
44
+
45
+ The probability values in each epoch are influenced by the loss function, and the effect of this influence is reflected in the next epoch.
46
+
47
+ The cross entropy then penalizes at each position the deviation of $\ell \left( x\right)$ using
48
+
49
+ $$
50
+ E = \mathop{\sum }\limits_{{x \in \Omega }}w\left( x\right) \log \left( {{p}_{\ell \left( x\right) }\left( x\right) }\right) \tag{2}
51
+ $$
52
+
53
+ where $\ell : \Omega \rightarrow \{ 1,\ldots , K\}$ is the true label of each pixel and $w : \Omega \rightarrow \mathbb{R}$ is a weight map that we introduced to give some pixels more importance in the training.
54
+
55
+ The separation border is computed using morphological operations. The weight map is then computed as
56
+
57
+ $$
58
+ w\left( x\right) = {w}_{c}\left( x\right) + {w}_{0} \cdot \exp \left( {-\frac{{\left( {d}_{1}\left( x\right) + {d}_{2}\left( x\right) \right) }^{2}}{2{\sigma }^{2}}}\right) \tag{3}
59
+ $$
60
+
61
+ where ${w}_{c} : \Omega \rightarrow \mathbb{R}$ is the weight map to balance the class frequencies, ${d}_{1} : \Omega \rightarrow \mathbb{R}$ denotes the distance to the border of the nearest cell and ${d}_{2} : \Omega \rightarrow \mathbb{R}$ the distance to the border of the second nearest cell. In our experiments we set ${w}_{0} = {10}$ and $\sigma \approx 5$ pixels.
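Eq. (3) is straightforward to vectorise. In this sketch the toy distance maps and the uniform class-balance term are illustrative, while $w_0 = 10$ and $\sigma = 5$ follow the text.

```python
# Weight map of Eq. (3): pixels near two cell borders (small d1 + d2)
# receive extra weight on top of the class-balance term w_c.
import numpy as np

def weight_map(w_c, d1, d2, w_0=10.0, sigma=5.0):
    return w_c + w_0 * np.exp(-((d1 + d2) ** 2) / (2 * sigma ** 2))

w_c = np.full((4, 4), 1.0)            # uniform class-balance term (toy)
d1 = np.zeros((4, 4))                 # directly on a separation border
d2 = np.zeros((4, 4))
print(weight_map(w_c, d1, d2)[0, 0])  # 11.0 on the border
```

Far from any border the exponential term vanishes and the weight falls back to $w_c$.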
62
+
63
+ The output of the network for each training epoch in this work is the probability matrix of the segmentation mask, so the matrix $\mathbf{P}$ is
64
+
65
+ $$
66
+ \mathbf{P} = {\left\lbrack {p}_{k}\left( x\right) \right\rbrack }_{m, n} \tag{4}
67
+ $$
68
+
69
+ where $x$ is the coordinate of each pixel in the image and $m, n$ are the dimensions of the image.
70
+
71
+ ### 2.2 Model Ensemble
72
+
73
+ As shown in Fig.1, there are six models of two types in total. Models 1-3 are trained on the whole dataset, which corresponds to the coarse-level models. Models 4-6, however, are nearly the opposite of the first three. One would therefore consider fusing the outputs of the two levels appropriately, so that they complement each other and jointly exploit their advantages.
74
+
75
+ For the fine level of networks, three sub-datasets are roughly identified based on the k-means algorithm. The algorithm calculates the mean of the color value[?] in the tumor region of each image based on the true mask.
76
+
77
+ Because the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in different clusters (the between-cluster sum of squares), which follows from the law of total variance. Based on the three sub-datasets generated by the k-means algorithm, three models are trained (models 4-6 in Fig.1). These six models perform well in different situations, so model ensembling is necessary in this work. The model ensemble function
78
+
79
+ is defined as:
80
+
81
+ $$
82
+ \mathbf{F} = \mathop{\sum }\limits_{{i = 1}}^{n}{W}_{i} * {\mathbf{P}}_{i} \tag{5}
83
+ $$
84
+
85
+ Where $\mathbf{F}$ is the output matrix of the model ensemble, $n$ represents the number of models, ${W}_{i}$ represents the weight of model $i$ , and ${\mathbf{P}}_{i}$ (from formula 4) represents the prediction probability map of model $i$ .
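Eq. (5) amounts to a weighted sum over stacked probability maps. In this sketch the weights are illustrative and are normalised so the fused map stays in $[0, 1]$, a common choice that the paper does not state explicitly.

```python
# Weighted model ensemble of Eq. (5): F = sum_i W_i * P_i over the
# per-model probability maps P_i.
import numpy as np

def ensemble(prob_maps, weights):
    prob_maps = np.stack(prob_maps)          # (n, H, W)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # assumption: normalise weights
    return np.tensordot(weights, prob_maps, axes=1)

p1 = np.full((2, 2), 0.8)                    # model A's probability map
p2 = np.full((2, 2), 0.4)                    # model B's probability map
F = ensemble([p1, p2], weights=[3, 1])
print(F[0, 0])                               # ≈ 0.7
```

Per-model weights let a stronger model dominate the fused prediction while weaker models still contribute.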
86
+
87
+ ### 2.3 Convolutional CRFs for Semantic Segmentation
88
+
89
+ With the model ensemble, the generated masks are clearer, but some noise remains. Because the mask boundaries are mainly generated from 512*512 patches, the predicted boundaries are very sharp. Due to this defect, the boundaries generated by the coarse-level networks lack precise details, and the fine-level networks are often attracted by outlier cells, leading to noise.
90
+
91
+ Also, the isolated tumor cells and fine details in boundaries are often not considered in human manual labeling. Thus, a post-processing step using convolutional CRF is introduced here to further improve the matching of the generated results to the manually annotated masks.
92
+
93
+ All parameters of the convolutional CRFs can easily be optimised using back-propagation. Semantic image segmentation, which aims to produce a categorical label for each pixel in an image, is a very important task for visual perception. At the same time, CRFs can also be used to extract the features of the tumor boundary, to optimise the mask generated by the model ensemble, and to refine the mask boundary so that it is closer to the hand-drawn mask.
94
+
95
+ ## 3 Experiments
96
+
97
+ ### 3.1 Dataset and Data Preprocessing
98
+
99
+ Dataset. The dataset comes from the ACDC@LungHP challenge at the ISBI 2019 conference. 200 whole-slide images are adopted, 150 of which are used as the training set. It is noted that the sources of the training set are similar, but the images present significant differences for several reasons. For example, there are obvious differences in the staining procedures, and some images even present uneven coloring.
100
+
101
+ Data Preprocessing. In this work, the dataset is separated into three sub-datasets in order to train the models for the model ensemble.
102
+
103
+ Using the k-means algorithm, the dataset is classified by the color value of the tumor region; the three sub-datasets are presented in Fig. 2.
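The split can be sketched with plain-numpy Lloyd iterations on one scalar per slide: the mean colour value inside the annotated tumour region. The slide values below are synthetic, and the deterministic quantile initialisation is an implementation choice of this sketch, not the paper's.

```python
# 1-D k-means (k = 3) over per-slide mean tumour-region colour values.
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    # Deterministic quantile initialisation instead of random seeding.
    centers = np.quantile(values, np.linspace(0.25, 0.75, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, np.sort(centers)

# Synthetic mean tumour-region intensities: three well-separated colour groups.
values = np.array([50., 55., 52., 120., 125., 118., 200., 210., 205.])
labels, centers = kmeans_1d(values)
print(len(set(labels.tolist())))   # 3 clusters recovered
```

Each recovered cluster then defines one sub-dataset, on which one of models 4-6 is trained.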
104
+
105
+ ![01963a50-daa4-7929-a27a-f2e4902cf17a_4_364_1047_978_205_0.jpg](images/01963a50-daa4-7929-a27a-f2e4902cf17a_4_364_1047_978_205_0.jpg)
106
+
107
+ Fig. 2. Examples of the three identified subsets. (a) Subset with the lowest color value in the tumor region according to the k-means algorithm: pink tissue around the nuclei in the tumor region. (b) Subset with a medium color value in the tumor region: light purple tissue around the nuclei in the tumor region. (c) Subset with the highest color value in the tumor region: deep purple tumor tissue due to the existence of necrosis or some staining mistakes.
108
+
109
+ ### 3.2 Training details
110
+
111
+ Model Ensemble. In order to fulfil the algorithm, six models are generated. Models 1-3 are trained on all WSI images in the dataset. It turns out that these three networks show superior ability in noise elimination and pinpoint the location of tumor regions more precisely than the deeper networks on certain images. However, this type of network cannot produce detailed masks on its own, so the fine level of networks is combined for its complementary expertise in the definition of details.
112
+
113
+ The model 4-6 are trained with three sub-datasets classified by the k-means algorithm. Thus a deeper U-net is used for detailed classification. At the same time, the level 0 images of ${512} * {512}$ are adopted to guarantee the details and a certain field of vision. It is observed that these three networks have completely different effects on the same picture after training.
114
+
115
+ The difference for model 1-3 is the resolution of the training image, so the model ensemble for model 1-3 is not necessary. In that way, we first try the model ensemble of 4-6 which properly stands for the whole dataset in total.
116
+
117
+ The masks generated by the six models focus more on the tumor tissue; they have a much lower probability value on non-tumor tissue regions, which makes the segmentation more precise.
118
+
119
+ Naturally, one would consider fusing the output of the two levels appropriately, so that they complement each other and jointly explore the advantages, i.e. improving the accuracy while at the same time, have noise eliminated.
120
+
121
+ Convolutional CRFs. At this stage, the generated masks are clearer, but some noise remains. Because the mask boundaries are mainly generated from 512*512 patches, the predicted boundaries are very sharp. Due to this defect, the boundaries generated by the coarse-level networks lack precise details, and the fine-level networks are often attracted by outlier cells, leading to noise. Also, isolated tumor cells and fine details in boundaries are often not considered in human manual labeling.
122
+
123
+ Thus, a post-processing step using convolutional CRFs is applied to further improve the matching of the generated results to the manually annotated masks. The resulting mask is bounded by the actual mask edge. The majority of the noise is removed, and smoother edges can be obtained, which are more coherent with those on human-annotated masks. The result of the model ensemble and the comparison of the mask before and after using convolutional CRFs are shown in Fig.3.
124
+
125
+ ### 3.3 Results
126
+
127
+ In the challenge of ACDC@LungHP, dice coefficient[?] is used as the evaluation metric.
128
+
129
+ $$
130
+ {DSC} = \frac{2\left| {X \cap Y}\right| }{\left| X\right| + \left| Y\right| } \tag{6}
131
+ $$
132
+
133
+ The model we propose achieved a dice coefficient of 0.7968, ranked third on the leaderboard. Despite the good evaluation, the accuracy is still affected by wrong classification of subsets and the different nature of manual and generated masks. The dataset of this challenge can also be used for classifying the main lung cancer sub-types. This means that the structures and morphology of the different sub-types of tumor tissues vary widely and are complicated. Therefore, each network for a sub-type is likely to be biased towards its sub-category of the tumor and insensitive to other sub-categories. Thus, if the preliminary classification is inaccurate, the performance of the resulting specifically trained model will be affected. Such errors are inevitably propagated to the fused model and reflected in the final output.
134
+
135
+ ![01963a50-daa4-7929-a27a-f2e4902cf17a_6_446_319_810_455_0.jpg](images/01963a50-daa4-7929-a27a-f2e4902cf17a_6_446_319_810_455_0.jpg)
136
+
137
+ Fig. 3. Model ensemble and convolutional CRFs results (the image is test data from ACDC@LungHP). (a) Result of the single U-net with 512-pixel resolution images (model 1), (b) result of the single U-net with 1152-pixel resolution images (model 2), (c) result of the single U-net with 2048-pixel resolution images (model 3), (d) result of the combined masks from the three fine-level networks, models 4-6 from Fig. 1, (e) result of the combined masks of fine- and coarse-level networks, models 1-3 combined with models 4-6 from Fig. 1, (f) the generated mask before using convolutional CRFs, (g) the generated mask using convolutional CRFs with a low threshold, (h) the generated mask using convolutional CRFs with a high threshold, (i) the original WSI provided as reference.
138
+
139
+ Furthermore, the dice coefficient score can easily be affected by the selection of the threshold. Table 1 shows the evaluation results of different runs using different setups of our model. In the table, "single U-net" stands for models 1-3 in Fig.1, and "sub-dataset U-net" stands for models 4-6 in Fig.1, the networks for the sub-datasets. The resulting mask is bounded by the actual mask edge. The majority of the noise is removed, and smoother edges can be obtained, which are more coherent with those on human-annotated masks.
140
+
141
+ <table><tr><td>Method</td><td>Dice Coefficient (Highest)</td></tr><tr><td>Single U-net (512)</td><td>0.687</td></tr><tr><td>Sub-dataset1 U-net</td><td>0.706</td></tr><tr><td>Fusion</td><td>0.769</td></tr><tr><td>Fusion + ConvolutionalCRF</td><td>0.797</td></tr></table>
142
+
143
+ Table 1. Raw results based on multiple methods
144
+
145
+ In Table 1, the best result of a single U-net was from model 1, which used the whole dataset at 512 pixels' resolution. The best result of a single sub-dataset U-net was from model 4, which used the first part of the dataset. The best ensemble result came from combining all six models. The best result overall was the ensemble of all six models followed by convolutional CRFs, and this result ranked third on the leaderboard of the ACDC@LungHP challenge at ISBI 2019.
146
+
147
+ ## 4 Conclusions
148
+
149
+ We propose an automatic cancer detection and classification method based on U-net and an ensemble scheme of multiple networks leading to a merged mask for segmentation. It is shown that using such a fusion model, more accurate segmentation can be acquired compared to relying on a single network. The result shows that when dealing with complex data sets, multiple networks fusion demonstrates an evident advantage. Convolutional CRFs for noise reduction and tumor border smoothing further enhance the boundary accuracy. Moreover, as reflected by the success of the combined multi-network model, our specifically trained networks for different sub tissue types provide a good foundation for stage two challenges on the lung cancer subtype classification.
150
+
151
+ For future work, the performance of the multi-network fusion model will be further improved by adding a self-learning classifier, if cell type labels can be introduced. At the same time, a better classification model will be developed for preliminary sub-type classification.
152
+
153
+ ## References
154
+
155
+ 1. O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in MICCAI, 2015.
156
+
157
+ 2. F. Whitwell, "The histopathology of lung cancer in liverpool: A survey of bronchial biopsy histology," British Journal of Cancer, vol. 15, pp. 429-439, 1961.
158
+
159
+ 3. M. T. T. Teichmann and R. Cipolla, "Convolutional crfs for semantic segmentation," CoRR, vol. abs/1805.04777, 2018.
160
+
161
+ 4. G. J. S. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. van der Laak, B. van Ginneken, and C. I. Sánchez, "A survey on deep learning in medical image analysis," Medical image analysis, vol. 42, pp. 60-88, 2017.
162
+
163
+ 5. T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, "An efficient k-means clustering algorithm: Analysis and implementation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, pp. 881-892, 2002.
164
+
165
+ 6. K. Kamnitsas, W. Bai, E. Ferrante, S. G. McDonagh, M. Sinclair, N. Pawlowski, M. Rajchl, M. C. H. Lee, B. Kainz, D. Rueckert, and B. Glocker, "Ensembles of multiple models and architectures for robust brain tumour segmentation," in Brain-Les@MICCAI, 2017.
166
+
167
+ 7. E. Jang, S. Gu, and B. Poole, "Categorical reparameterization with gumbel-softmax," CoRR, vol. abs/1611.01144, 2017.
168
+
169
+ 8. B. D. Brabandere, D. Neven, and L. V. Gool, "Semantic instance segmentation with a discriminative loss function," CoRR, vol. abs/1708.02551, 2017.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/H1eTF7BqZr/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,167 @@
1
+ § U-NET ENSEMBLE MODEL FOR SEGMENTATION IN HISTOPATHOLOGY IMAGES
2
+
3
+ Yilong Li, Xingru Huang, Yaqi Wang,
4
+
5
+ Zhaoyang Xu, Yibao Sun, and Qianni Zhang
6
+
7
+ Graduate School of Computer Science, Queen Mary University of London, London, UK, yilong.li@qmul.ac.uk
8
+
9
+ Abstract. In this work, a multi-scale U-net[1] fusion model is proposed for automatic cancer detection and classification in whole-slide lung histopathology[2]. The model integrates two types of U-net structure, trained on different image scales and subsets, aiming to address the challenges posed by the significant variation in data presentation. Since lung histopathology images come in various sub-categories and appearances, the performance of an individually trained network is usually limited. We train a variety of networks using multiple re-scaled images and different subsets of images, and finally ensemble the outputs of the various networks. Smoothing and noise elimination are conducted using convolutional Conditional Random Fields (CRFs)[3]. The proposed model is validated on the Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology (ACDC@LungHP) challenge at ISBI 2019. Our method achieves a dice coefficient of 0.7968, which ranked third on the leaderboard.
10
+
11
+ Keywords: Model ensemble, Tumor, Segmentation, Convolutional CRFs
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Digital pathology has been gradually introduced into clinical practice. Digital pathology scanners can provide whole slide images (WSI) with very high resolution, but for pathologists, the time and energy required for manual analysis of the WSI of every single case are unbearable. Automatic computational analysis algorithms based on machine learning provide a way to reduce pathologists' workload. This project focuses on the detection and segmentation of lung tumour tissues.[4] Currently, most types of lung cancer are analysed mainly by pathologists' naked-eye examination of the slice image of pulmonary lobes to determine the location, size and pattern of the tumor. This kind of diagnosis plays a vital role in the prognosis of lung cancer and the determination of the therapeutic regimen. However, because of the massive number of pathological slices and the time spent on each one, the diagnostic process is usually laborious. Therefore, segmenting lung tumors through automatic analysis of pathological slices can help pathologists save significant time and effort. Deep learning methods based on convolutional neural networks are currently the most popular technology for tumor segmentation. However, network-based training requires a huge dataset to fit and validate the parameters of each convolutional layer in order to achieve accurate tumor segmentation. Therefore, the primary purpose of this study is to use limited datasets, through selection and optimisation of single networks or combinations of multiple networks, to achieve efficient and more accurate segmentation of tumors in pathological image slices of lung cancer.
16
+
17
+ In this work, we design a specific process for tumor tissue segmentation. The dataset is trained to generate the probability map of the mask of the WSI image. Training on such varying data will mislead a single model and, as a result, produce unsatisfactory predictions. Hence, we believe that the employment of multiple specifically trained models is necessary. Empirically, a well-trained model can identify the location of the tumor if a magnification of at least ${10}\mathrm{X}$ is used. Thus, the dataset is classified into three sub-datasets by the k-means algorithm,[5] and each sub-dataset is trained with the modified U-net network. In this way, all of the data are used in training, and different features are reflected in different models. This method can analyse the features of the WSI image more specifically, so that the differences between tumor tissues and between tumor sources can be studied by the models. Then, by ensembling all the models trained on the whole dataset or on the three sub-datasets, the performance of the ensemble model is much better than that of any single model we trained. Finally, image smoothing and noise elimination are conducted after training using convolutional Conditional Random Fields (CRFs), and the output images are aligned with the original images. The output of the convolutional CRFs is the final output of the experiment.
18
+
19
+ The design of the overall model structure is illustrated in Fig.1.
20
+
21
+ < g r a p h i c s >
22
+
23
+ Fig. 1. The overall framework of tumor tissue segmentation.
24
+
25
+ § 2 METHODS
26
+
27
+ In this work, there are three crucial processes in tumor segmentation. The first is data preprocessing: the dataset is classified into three sub-datasets by the k-means clustering algorithm. Then, based on the different resolutions of the dataset and the different sub-datasets, six models of two types are designed and trained. Fusing these models and merging several types of masks yields more intuitively accurate segmentation masks.[6] Finally, convolutional CRFs are used to eliminate the noise and smooth the boundary of the segmentation mask.
28
+
29
+ In Fig.1, there are six models in total. Models 1-3 are trained with the whole dataset, but at three resolutions of 576, 1152 and 2048 pixels; models 4-6 are trained with the sub-datasets. The dataset is divided into three sub-datasets by the k-means algorithm, so models 4-6 train on sub-datasets 1, 2 and 3 respectively.
30
+
31
+ § 2.1 MODEL TRAINING
32
+
33
+ The overall objective of the proposed network is to make the segmentation of the tumor tissue from normal tissue in the WSI image. Due to the purpose, a modified U-net is used to focus the model more on the overview.
34
+
35
+ In this network, the energy function is computed by a pixel-wise soft-max[7] over the final feature map combined with the cross entropy loss function.[8]
36
+
37
+ The soft-max is defined as:
38
+
39
+ $$
40
+ {p}_{k}\left( x\right) = \exp \left( {{a}_{k}\left( x\right) }\right) /\left( {\mathop{\sum }\limits_{{{k}^{\prime } = 1}}^{K}\exp \left( {{a}_{{k}^{\prime }}\left( x\right) }\right) }\right) \tag{1}
41
+ $$
42
+
43
+ where ${p}_{k}\left( x\right)$ denotes the probability value in feature channel $k$ at the pixel position $x \in \Omega$ with $\Omega \subset {\mathbb{Z}}^{2}$. $K$ is the number of classes and ${p}_{k}\left( x\right)$ is the approximated maximum-function, i.e. ${p}_{k}\left( x\right) \approx 1$ for the $k$ that has the maximum activation ${a}_{k}\left( x\right)$ and ${p}_{k}\left( x\right) \approx 0$ for all other $k$ .
44
+
45
+ The probability values in each epoch are influenced by the loss function, and the result of this influence is reflected in the next epoch.
46
+
47
+ The cross entropy then penalizes, at each position, the deviation of ${p}_{\ell \left( x\right) }\left( x\right)$ from 1 using
48
+
49
+ $$
50
+ E = \mathop{\sum }\limits_{{x \in \Omega }}w\left( x\right) \log \left( {{p}_{\ell \left( x\right) }\left( x\right) }\right) \tag{2}
51
+ $$
52
+
53
+ where $\ell : \Omega \rightarrow \{ 1,\ldots ,k\}$ is the true label of each pixel and $w : \Omega \rightarrow \mathbb{R}$ is a weight map that we introduced to give some pixels more importance in the training.
54
+
55
+ The separation border is computed using morphological operations. The weight map is then computed as
56
+
57
+ $$
58
+ w\left( x\right) = {w}_{c}\left( x\right) + {w}_{0} \cdot \exp \left( {-\frac{{\left( {d}_{1}\left( x\right) + {d}_{2}\left( x\right) \right) }^{2}}{2{\sigma }^{2}}}\right) \tag{3}
59
+ $$
60
+
61
+ where ${w}_{c} : \Omega \rightarrow \mathbb{R}$ is the weight map to balance the class frequencies, ${d}_{1} : \Omega \rightarrow \mathbb{R}$ denotes the distance to the border of the nearest cell and ${d}_{2} : \Omega \rightarrow \mathbb{R}$ the distance to the border of the second nearest cell. In our experiments we set ${w}_{0} = {10}$ and $\sigma \approx 5$ pixels.
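Eqs. (2) and (3) can be sketched as follows, assuming precomputed distance maps `d1`, `d2` and class-balance weights `w_c` (all names and toy values here are hypothetical; the loss is negated so that minimising it maximises the log-probabilities, whereas the paper writes Eq. (2) without the sign):

```python
import numpy as np

def weight_map(w_c, d1, d2, w0=10.0, sigma=5.0):
    """Eq. (3): border-emphasising weight map from class-balance weights w_c and
    distances d1, d2 to the two nearest cell borders (precomputed arrays)."""
    return w_c + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))

def weighted_cross_entropy(p, labels, w):
    """Eq. (2): weighted cross entropy; p is (K, H, W), labels is (H, W) of ints.
    Negated relative to the paper so lower is better."""
    h, w_idx = np.indices(labels.shape)
    p_true = p[labels, h, w_idx]        # p_{l(x)}(x) at every pixel
    return -np.sum(w * np.log(p_true))

# Toy example: 2 classes, 2x2 image, constant distance maps.
p = np.array([[[0.9, 0.2], [0.8, 0.1]],
              [[0.1, 0.8], [0.2, 0.9]]])
labels = np.array([[0, 1], [0, 1]])
w = weight_map(np.ones_like(labels, float), np.ones((2, 2)), np.ones((2, 2)))
loss = weighted_cross_entropy(p, labels, w)
```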
62
+
63
+ The output of the network for each training epoch in this work is the probability matrix of the segmentation mask, so the matrix $\mathbf{P}$ is
64
+
65
+ $$
66
+ \mathbf{P} = {\left\lbrack {p}_{k}\left( x\right) \right\rbrack }_{m,n} \tag{4}
67
+ $$
68
+
69
+ where $x$ is the coordinate of each pixel in the image and $m \times n$ is the size of the image.
70
+
71
+ § 2.2 MODEL ENSEMBLE
72
+
73
+ As shown in Fig. 1, there are six models of two types in total. Models 1-3 are trained on the whole dataset, which gives them the character of coarse level models. Models 4-6, however, are nearly the opposite of the first three models. One would therefore consider fusing the outputs of the two levels appropriately, so that they complement each other and jointly exploit their advantages.
74
+
75
+ For the fine level of networks, three sub-datasets are roughly identified based on the k-means algorithm. The algorithm calculates the mean of the color value[?] in the tumor region of each image based on the true mask.
76
+
77
+ K-means minimizes the within-cluster sum of squares; because the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in different clusters (the between-cluster sum of squares), which follows from the law of total variance. Based on the three datasets generated by the k-means algorithm, three models are trained (models 4-6 in Fig. 1). These six models perform well in different situations, so model ensemble is necessary in this work. The model ensemble function
78
+
79
+ is defined as:
80
+
81
+ $$
82
+ \mathbf{F} = \mathop{\sum }\limits_{{i = 1}}^{n}{W}_{i} * {\mathbf{P}}_{i} \tag{5}
83
+ $$
84
+
85
+ where $\mathbf{F}$ is the output matrix of the model ensemble, $n$ represents the number of models, ${W}_{i}$ represents the weight of model $i$ , and ${\mathbf{P}}_{i}$ (from formula 4) represents the prediction probability map of model $i$ .
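A minimal sketch of the ensemble of Eq. (5), assuming the weights are normalised so that the fused output remains a probability map (the paper does not state how the $W_i$ are chosen; the toy outputs below are hypothetical):

```python
import numpy as np

def ensemble(prob_maps, weights):
    """Eq. (5): weighted sum F = sum_i W_i * P_i of per-model probability maps."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # normalise so F stays a probability map
    return np.tensordot(weights, np.stack(prob_maps), axes=1)

# Three hypothetical model outputs on a 2x2 image.
p1 = np.array([[0.9, 0.1], [0.8, 0.2]])
p2 = np.array([[0.7, 0.3], [0.6, 0.4]])
p3 = np.array([[0.8, 0.2], [0.9, 0.1]])
fused = ensemble([p1, p2, p3], weights=[1.0, 1.0, 1.0])
```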
86
+
87
+ § 2.3 CONVOLUTIONAL CRFS FOR SEMANTIC SEGMENTATION
88
+
89
+ With the model ensemble, the generated masks are clearer, but some noise remains. Because the mask boundaries are mainly generated from 512*512 patches, the predicted boundaries are very sharp. Due to this defect, the boundaries generated by the coarse level networks lack precise details, and the fine level networks are often attracted by outlier cells, leading to noise.
90
+
91
+ Also, the isolated tumor cells and fine details in boundaries are often not considered in human manual labeling. Thus, a post-processing step using convolutional CRF is introduced here to further improve the matching of the generated results to the manually annotated masks.
92
+
93
+ All parameters of the convolutional CRFs can easily be optimised using back-propagation. Semantic image segmentation, which aims to produce a categorical label for each pixel in an image, is a very important task for visual perception. At the same time, CRFs can also be used to extract the features of the tumor boundary, to optimise the mask generated by the model ensemble, and to optimise the mask boundary to make it closer to the hand-drawn mask.
94
+
95
+ § 3 EXPERIMENTS
96
+
97
+ § 3.1 DATASET AND DATA PREPROCESSING
98
+
99
+ Dataset. The dataset comes from the ACDC@LungHP challenge at the ISBI 2019 conference. 200 whole-slide images are adopted, 150 of which are used as the training set. It is noted that although the sources of the training set are similar, the images present significant differences for several reasons. For example, there are obvious differences in staining, and some images even present uneven coloring.
100
+
101
+ Data Preprocessing. In this work, the dataset is separated into three sub-datasets in order to train the models for the model ensemble.
102
+
103
+ Using the k-means algorithm, the dataset is classified by the color value of the tumor region; the three sub-datasets are presented in Fig. 2.
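The sub-dataset split can be sketched with a scalar Lloyd's iteration over the per-image mean color values (a hypothetical stand-in for the authors' k-means step; the quantile initialisation and the toy values are assumptions made here for determinism):

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """Lloyd's algorithm on scalar features, with deterministic quantile init."""
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute the centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Hypothetical per-image mean color values of the tumor region for 9 WSIs.
means = np.array([30., 32., 35., 118., 120., 125., 198., 200., 205.])
labels, centers = kmeans_1d(means, k=3)
# Each image is assigned to one of the three sub-datasets by its label.
```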
104
+
105
106
+
107
+ Fig. 2. Examples of the three identified subsets. (a) Datasets with the lowest color value in the tumor region by the k-means algorithm: pink tissue around the nuclei in the tumor region. (b) Datasets with medium color value in the tumor region: light purple tissue around the nuclei in the tumor region. (c) Datasets with the highest color value in the tumor region: deep purple tumor tissue in the tumor region due to the existence of necrosis or some staining mistakes.
108
+
109
+ § 3.2 TRAINING DETAILS
110
+
111
+ Model Ensemble. In order to realise the algorithm, six models must be generated. Models 1-3 are trained on all WSI images in the dataset. It turns out that these three networks show superior ability in noise elimination and pinpoint the location of tumor regions more precisely than the deeper networks in certain images. However, this type of network cannot produce detailed masks on its own, so the fine level networks are brought in for their complementary expertise in the definition of details.
112
+
113
+ Models 4-6 are trained on the three sub-datasets classified by the k-means algorithm, so a deeper U-net is used for detailed classification. At the same time, level 0 images of ${512} * {512}$ pixels are adopted to guarantee the details and a certain field of vision. It is observed that these three networks have completely different effects on the same picture after training.
114
+
115
+ The only difference among models 1-3 is the resolution of the training images, so a model ensemble of models 1-3 alone is not necessary. We therefore first try the model ensemble of models 4-6, which together properly represent the whole dataset.
116
+
117
+ The masks generated by the six models focus more on the tumor tissue; they assign a much lower probability to non-tumor tissue regions, which makes the segmentation more precise.
118
+
119
+ Naturally, one would consider fusing the outputs of the two levels appropriately, so that they complement each other and jointly exploit their advantages, i.e. improving the accuracy while at the same time eliminating noise.
120
+
121
+ Convolutional CRFs. At this stage, the generated masks are clearer, but some noise remains. Because the mask boundaries are mainly generated from 512*512 patches, the predicted boundaries are very sharp. Due to this defect, the boundaries generated by the coarse level networks lack precise details, and the fine level networks are often attracted by outlier cells, leading to noise. Also, isolated tumor cells and fine boundary details are often not considered in human manual labeling.
122
+
123
+ Thus, a post-processing step using convolutional CRFs is applied to further improve the matching of the generated results to the manually annotated masks. The resulting mask is bounded by the actual mask edge. The majority of the noise is removed, and smoother edges, more coherent with those on human-annotated masks, can be obtained. The result of the model ensemble and a comparison of the mask before and after using convolutional CRFs are shown in Fig. 3.
124
+
125
+ § 3.3 RESULTS
126
+
127
+ In the ACDC@LungHP challenge, the dice coefficient[?] is used as the evaluation metric.
128
+
129
+ $$
130
+ {DSC} = \frac{2\left| {X \cap Y}\right| }{\left| X\right| + \left| Y\right| } \tag{6}
131
+ $$
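A minimal sketch of the dice coefficient of Eq. (6) for binary masks (toy masks, not challenge data):

```python
import numpy as np

def dice(pred, truth):
    """Eq. (6): DSC = 2|X intersect Y| / (|X| + |Y|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(pred, truth)   # 2 * 2 / (3 + 3), i.e. 2/3
```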
132
+
133
+ The model we propose achieved a dice coefficient of 0.7968, ranking third on the leaderboard. Despite the good evaluation, the accuracy is still affected by wrong classification of subsets and by the different nature of manual and generated masks. The dataset of this challenge can also be used for classifying the main lung cancer sub-types, which means that the structures and morphology of the different sub-types of tumor tissue vary widely and are complicated. Therefore, each network for a sub-type is likely to be biased towards its sub-category of the tumor and insensitive to other sub-categories. Thus, if the preliminary classification is inaccurate, the performance of the resulting specifically trained model will be affected. Such errors are inevitably propagated to the fused model and reflected in the final output.
134
+
135
136
+
137
+ Fig. 3. Model ensemble and convolutional CRF results (the image is test data from ACDC@LungHP). (a) Result of the single U-net with 512-pixel resolution images (model 1). (b) Result of the single U-net with 1152-pixel resolution images (model 2). (c) Result of the single U-net with 2048-pixel resolution images (model 3). (d) Result of the combined masks from the three fine level networks (models 4-6 in Fig. 1). (e) Result of the combined masks of the fine and coarse level networks (models 1-3 combined with models 4-6 in Fig. 1). (f) The generated mask before using convolutional CRFs. (g) The generated mask using convolutional CRFs with a low threshold. (h) The generated mask using convolutional CRFs with a high threshold. (i) The original WSI provided as reference.
138
+
139
+ Furthermore, the dice coefficient score can easily be affected by the selection of the threshold. Table 1 shows the evaluation results of different runs, using different setups of our model. In the table, single U-net stands for models 1-3 in Fig. 1, and sub-dataset U-net stands for models 4-6 in Fig. 1, the networks for the sub-datasets.
140
+
141
+ | Method | Dice Coefficient (Highest) |
+ | --- | --- |
+ | Single U-net (512) | 0.687 |
+ | Sub-dataset 1 U-net | 0.706 |
+ | Fusion | 0.769 |
+ | Fusion + Convolutional CRF | 0.797 |
+
+ Table 1. Raw results based on multiple methods
160
+
161
+ In Table 1, the best result of a single U-net was from model 1, which used the whole dataset at 512-pixel resolution. The best result of a single sub-dataset U-net was from model 4, which used the first part of the dataset. The best model ensemble result came from the ensemble of all six models. The best result overall combined the ensemble of all six models with convolutional CRFs, and this result ranked third on the leaderboard of the ACDC@LungHP challenge at ISBI 2019.
162
+
163
+ § 4 CONCLUSIONS
164
+
165
+ We propose an automatic cancer detection and classification method based on U-net and an ensemble scheme of multiple networks leading to a merged mask for segmentation. It is shown that using such a fusion model, more accurate segmentation can be acquired compared to relying on a single network. The result shows that when dealing with complex data sets, multiple networks fusion demonstrates an evident advantage. Convolutional CRFs for noise reduction and tumor border smoothing further enhance the boundary accuracy. Moreover, as reflected by the success of the combined multi-network model, our specifically trained networks for different sub tissue types provide a good foundation for stage two challenges on the lung cancer subtype classification.
166
+
167
+ For future work, the performance of the multi-network fusion model will be further improved by adding a self-learning classifier, if cell type labels can be introduced. At the same time, a better classification model will be developed for preliminary sub-type classification.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/H1l9RiIVWr/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,183 @@
1
+ # 3D Image Reconstruction from Multi-focus Microscopic Images *
2
+
3
+ Takahiro Yamaguchi ${}^{1}$ , Hajime Nagahara ${}^{2}$ , Ken’ichi Morooka ${}^{1}$ , Yuta Nakashima ${}^{2}$ , Yuki Uranishi ${}^{2}$ , Shoko Miyauchi ${}^{1}$ , and Ryo Kurazume ${}^{1}$
4
+
5
+ ${}^{1}$ Graduate School of Information Science and Electrical Engineering Kyushu University, Fukuoka 819-0395, Japan morooka@ait.kyushu-u.ac.jp ${}^{2}$ Institute for Datability Science, Osaka University, Osaka 565-0871, Japan nagahara@ids.osaka-u.ac.jp
6
+
7
+ Abstract. This paper presents a method for reconstructing a 3D image from multi-focus microscopic images captured with different focuses. We model multi-focus imaging by a microscope and produce the $3\mathrm{D}$ image of a target object based on the model. The 3D image reconstruction is done by minimizing the difference between the observed images and the simulated images generated by the imaging model. Simulation and experimental results show that the proposed method can generate the $3\mathrm{D}$ image of a transparent object efficiently and reliably.
8
+
9
+ Keywords: 3D imaging $\cdot$ Microscopy $\cdot$ Multi-focus images $\cdot$ Transparent object.
10
+
11
+ ## 1 Introduction
12
+
13
+ Cell observation by optical microscopy is widely used in biology, medicine, and related fields. For example, cytodiagnosis and iPS cell culture are based on cell observation. A regular microscope acquires a 2D image of a target cell that has a 3D structure. However, only some slices of the 3D cell structure can be observed as focused images because the depth of field of a general microscope is narrow. Under such circumstances, various applications can be expected for technology that measures 3D cell structure.
14
+
15
+ A simple way to get the 3D structure using a microscope is to stack multiple slice images with different focuses. Various methodologies[1] have been proposed to measure the multi-focus images. However, simply stacked multi-focus images include many unclear regions because of contributions from in front of and behind the focal plane. To solve this problem, a confocal microscope is often used. In a confocal microscope, a pinhole in front of a light detector cuts off light that is out of focus while allowing only the fluorescence light from the in-focus spot to enter the detector. Thus, a confocal microscope is a useful and powerful tool to obtain clear images of only in-focus regions.
16
+
17
+ ---
18
+
19
+ * Supported by JST CREST Grant Number JPMJCR1786 and JSPS KAKENHI Grant Number JP19H04139, Japan.
20
+
21
+ ---
22
+
23
+ Another approach for imaging 3D object structure is computed tomography (CT)[2]. In CT scanning, X-rays are irradiated onto a target object, and the X-rays that pass through the object are detected by a detector located on the opposite side of the object. Here, it is assumed that the X-rays are absorbed and attenuated by the object during irradiation. Under this assumption, many intensities are measured by rotating a pair of X-ray source and detector around the object. When the target object is represented by a set of voxels, the measured intensities are used to estimate the attenuation coefficient of each voxel.
24
+
25
+ Similar to X-ray CT, Optical Projection Tomography (OPT)[3] has been proposed for microscopic 3D imaging. Using regular light, lens optics, and a silicon image sensor, OPT estimates the light attenuation of each voxel. However, since the imaging systems of X-ray CT and OPT require rotation mechanisms, their methodology is not directly applicable to a regular microscope with no rotation mechanism.
26
+
27
+ In this paper, we propose a method for reconstructing a $3\mathrm{D}$ image from multi-focus microscopic images obtained with different focuses. We model an imaging system for acquiring the multi-focus microscopic images. In the imaging system, the microscopic images are produced by the light emitted from the light source. When the light passes through the transparent object, it is attenuated depending on the transmittance of the object material. This means that each pixel in a microscopic image is related to the attenuated light. Considering this, we reconstruct the 3D image of the object by minimizing the difference between the observed images and the simulated images generated by the imaging model.
28
+
29
+ Similar to our method, there are two existing approaches for $3\mathrm{D}$ imaging using multi-focus images. The first is the reconstruction of a $3\mathrm{D}$ image that contains the appearance of inner slices of transparent objects[4]; the 3D image is generated by simply piling discrete slices acquired by CCD cameras. The second approach is to reconstruct a $3\mathrm{D}$ image of a target object's luminescence from multi-focus images obtained by a fluorescence microscope[5]. Unlike these two approaches, we aim to reconstruct the 3D image as a set of voxels having transmittance values from multi-focus images obtained by a general bright field microscope.
30
+
31
+ ## 2 3D image reconstruction from multi-focus images
32
+
33
+ ### 2.1 Imaging model for multi-focus microscopic images
34
+
35
+ Fig. 1 shows how the pixel intensity is observed by a microscope. We denote the intensity value of an arbitrary pixel(x, y)in the $s$ -th image of the sequence of multi-focus microscopic images as ${I}_{\mathbf{\alpha }}\left( {x, y, s}\right)$ . We assume that a target space including one or more cells is represented by a set of voxels. The 3D image is estimated as the transmittances of the voxels ${\alpha }_{i}\left( {i = 0,1,\cdots ,{N}_{v} - 1}\right)$ as shown in Fig. 1. An incident light is emitted from a light source under the stage.
36
+
37
+ The incident light is discretized as a set of ${N}_{r}$ discrete rays. The intensity ${l}_{j}$ of the $j$ -th $\left( {j = 0,1,\cdots ,{N}_{r} - 1}\right)$ ray is attenuated every time the ray passes through each voxel. The attenuation is affected by the transmittance of the voxel and the length of the ray through the voxel. Hence, we model the relationship between ${l}_{j}$ and the attenuated ray ${l}_{j}^{\prime }$ by
38
+
39
+ $$
40
+ {l}_{j}^{\prime } = {l}_{j} \times \mathop{\prod }\limits_{i}{\alpha }_{i}^{{d}_{ji}} \tag{1}
41
+ $$
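Eq. (1) for a single ray can be sketched as follows (hypothetical transmittances and path lengths; the log-space form of the same relation is checked against the product form):

```python
import numpy as np

def attenuate(l_j, alphas, d_j):
    """Eq. (1): l'_j = l_j * prod_i alpha_i ** d_ji for a single ray j."""
    return l_j * np.prod(alphas ** d_j)

alphas = np.array([0.9, 0.8, 0.95])   # voxel transmittances (hypothetical)
d_j    = np.array([1.0, 0.5, 0.0])    # path lengths; the ray misses the third voxel
l_out  = attenuate(1.0, alphas, d_j)

# The same relation in log space: log l' = log l + sum_i d_ji * log alpha_i.
log_form = np.log(1.0) + np.sum(d_j * np.log(alphas))
```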
42
+
43
+ ![01963a52-46f9-711a-9c3c-49f735e7c8a3_2_592_335_628_458_0.jpg](images/01963a52-46f9-711a-9c3c-49f735e7c8a3_2_592_335_628_458_0.jpg)
44
+
45
+ Fig. 1. Imaging system model
46
+
47
+ where ${d}_{ji}$ is the length of the $j$ -th ray if the ray passes through the $i$ -th voxel. Otherwise, ${d}_{ji} = 0$ .
48
+
49
+ By taking the log of both sides of Eq. (1) and expanding, Eq. (1) is rewritten as
50
+
51
+ $$
52
+ \log {l}_{j}^{\prime } = \log {l}_{j} + \mathop{\sum }\limits_{i}{d}_{ji}\log {\alpha }_{i}. \tag{2}
53
+ $$
54
+
55
+ By collecting all relationships between the incident rays ${l}_{j}$ and attenuated rays ${l}_{j}^{\prime }$ using Eq. (2), we obtain the following formulation:
56
+
57
+ $$
58
+ {\mathbf{L}}^{\prime } = \mathbf{{DA}} + \mathbf{L}, \tag{3}
59
+ $$
60
+
61
+ where
62
+
63
+ $$
64
+ \mathbf{L} = \left\lbrack \begin{matrix} \log {l}_{0} \\ \vdots \\ \log {l}_{{N}_{r} - 1} \end{matrix}\right\rbrack ,{\mathbf{L}}^{\prime } = \left\lbrack \begin{matrix} \log {l}_{0}^{\prime } \\ \vdots \\ \log {l}_{{N}_{r} - 1}^{\prime } \end{matrix}\right\rbrack ,\mathbf{A} = \left\lbrack \begin{matrix} \log {\alpha }_{0} \\ \vdots \\ \log {\alpha }_{{N}_{v} - 1} \end{matrix}\right\rbrack ,
65
+ $$
66
+
67
+ $$
68
+ \mathbf{D} = \left\lbrack \begin{matrix} {d}_{00} & \cdots & {d}_{0\left( {{N}_{v} - 1}\right) } \\ \vdots & \ddots & \vdots \\ {d}_{\left( {{N}_{r} - 1}\right) 0} & \cdots & {d}_{\left( {{N}_{r} - 1}\right) \left( {{N}_{v} - 1}\right) } \end{matrix}\right\rbrack .
69
+ $$
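The matrix form of Eq. (3) can be checked numerically under a hypothetical 2-ray, 3-voxel configuration (all values below are toy assumptions):

```python
import numpy as np

# Hypothetical 2-ray, 3-voxel configuration: D holds the path lengths d_ji.
alphas = np.array([0.9, 0.8, 0.95])   # voxel transmittances
D = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 1.0]])
L = np.log(np.array([1.0, 1.0]))      # log incident intensities
A = np.log(alphas)                    # log transmittances
L_prime = D @ A + L                   # Eq. (3): L' = DA + L
l_prime = np.exp(L_prime)             # attenuated intensities, one per ray
```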
70
+
71
+ Here, we assume that the apertures of the light source and objective lens are large enough relative to the target cell. Under this assumption, an arbitrary pixel ${I}_{\mathbf{\alpha }}\left( {x, y, s}\right)$ can be expressed similarly by shifting the stage along the(x, y, s)coordinates, so $\mathbf{D}$ and $\mathbf{L}$ are regarded as functions of the three parameters $x, y$ , and $s$ . We treat $\mathbf{D}$ as a function of(x, y, s)whose components are shifted by(x, y, s). Hence, Eq. (3) is rewritten as
72
+
73
+ $$
74
+ {\mathbf{L}}^{\prime }\left( {x, y, s}\right) = \mathbf{D}\left( {x, y, s}\right) \mathbf{A} + \mathbf{L}. \tag{4}
75
+ $$
76
+
77
+ Finally, ${I}_{\mathbf{\alpha }}\left( {x, y, s}\right)$ is calculated by the total amount of the attenuated rays ${l}_{j}^{\prime }$ :
78
+
79
+ $$
80
+ {I}_{\mathbf{\alpha }}\left( {x, y, s}\right) = \mathop{\sum }\limits_{j}{l}_{j}^{\prime }\left( {x, y, s}\right) . \tag{5}
81
+ $$
82
+
83
+ ### 2.2 Estimation of the voxel transmittance
84
+
85
+ Using the model as mentioned in Sec 2.1., we simulate the observed multi-focus images. When the estimated transmittances of the target voxels are close to the real ones, the intensity value of the simulated multi-focus images ${I}_{\mathbf{\alpha }}\left( {x, y, s}\right)$ should be the same as the intensity value of the observed images $I\left( {x, y, s}\right)$ by microscopy. Considering this, the 3D image is reconstructed by minimizing an objective function $F$ :
86
+
87
+ $$
88
+ F\left( \mathbf{\alpha }\right) = E\left( \mathbf{\alpha }\right) + {wTV}\left( \mathbf{\alpha }\right) , \tag{6}
89
+ $$
90
+
91
+ where $\mathbf{\alpha }$ is a vector composed of all the transmittances $\mathbf{\alpha } = \left( {{\alpha }_{0},{\alpha }_{1},\cdots ,{\alpha }_{{N}_{v} - 1}}\right)$ . The parameter $w$ is a weighted coefficient as a regularization parameter. The conjugate gradient method is applied to find optimal transmittances which minimize $F\left( \mathbf{\alpha }\right)$ . The function $E\left( \mathbf{\alpha }\right)$ in Eq. (6) represents the difference between the intensity value $I\left( {x, y, s}\right)$ in the observed image and ${I}_{\mathbf{\alpha }}\left( {x, y, s}\right)$ in the simulated image by Eq. (5). The function $E$ is defined as
92
+
93
+ $$
94
+ E\left( \mathbf{\alpha }\right) = \mathop{\sum }\limits_{s}\mathop{\sum }\limits_{x}\mathop{\sum }\limits_{y}{\left( {I}_{\mathbf{\alpha }}\left( x, y, s\right) - I\left( x, y, s\right) \right) }^{2}. \tag{7}
95
+ $$
96
+
97
+ On the contrary, ${TV}\left( \mathbf{\alpha }\right)$ is a regularization function based on a total variation (TV) norm to reconstruct the 3D image smoothly. Practically, the value of ${TV}\left( \mathbf{\alpha }\right)$ is calculated as the total transmittance difference between each target voxel and its six neighbor voxels:
98
+
99
+ $$
100
+ {TV}\left( \mathbf{\alpha }\right) = \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{{k \in {\Phi }_{i}}}{\left( {\alpha }_{i} - {\alpha }_{k}\right) }^{2}, \tag{8}
101
+ $$
102
+
103
+ where ${\Phi }_{i}$ is the set of the six neighbors of the target $i$ -th voxel.
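Eqs. (6)-(8) can be sketched as follows (toy arrays, not the authors' implementation; the TV term here counts each neighbour pair once, which differs from the six-neighbour sum only by a constant factor of two):

```python
import numpy as np

def tv(alpha):
    """Eq. (8): squared transmittance differences over the axis-aligned
    neighbours of a 3-D volume (each neighbour pair counted once)."""
    return sum(np.sum(np.diff(alpha, axis=ax) ** 2) for ax in range(3))

def objective(I_sim, I_obs, alpha, w=0.125):
    """Eq. (6): F = E + w * TV, with E the squared image difference of Eq. (7)."""
    return np.sum((I_sim - I_obs) ** 2) + w * tv(alpha)

alpha = np.full((3, 3, 3), 0.9)     # perfectly smooth volume, so TV = 0
I_obs = np.zeros((3, 3))
I_sim = np.full((3, 3), 0.1)
F = objective(I_sim, I_obs, alpha)  # E = 9 * 0.1**2 = 0.09
```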
104
+
105
+ ### 2.3 Efficient search of optimal transmittances
106
+
107
+ From Eq. (6), our proposed method finds the optimum transmittances by iteratively updating them. To find the optimum efficiently and robustly, we introduce the following two techniques.
108
+
109
+ ![01963a52-46f9-711a-9c3c-49f735e7c8a3_4_448_335_975_182_0.jpg](images/01963a52-46f9-711a-9c3c-49f735e7c8a3_4_448_335_975_182_0.jpg)
110
+
111
+ Fig. 2. (a)An artificial 3D cell model; (b) multi-focus images of the cell model.
112
+
113
+ ![01963a52-46f9-711a-9c3c-49f735e7c8a3_4_468_585_871_175_0.jpg](images/01963a52-46f9-711a-9c3c-49f735e7c8a3_4_468_585_871_175_0.jpg)
114
+
115
+ Fig. 3. 3D image of the cell model estimated by our method.
116
+
117
+ Initialization from input images: The initial values of the transmittances are important for finding the optimum robustly with the conjugate gradient method. Given a sequence of ${N}_{s}$ multi-focus images, we determine the initial transmittances based on the intensity values $I\left( {x, y, s}\right)$ of the original image sequence.
118
+
119
+ Let us consider the case that all rays intersect at the $i$ -th voxel when the intensity $I\left( {x, y, s}\right)$ is calculated. In this case, since all rays pass through the $i$ -th voxel, the transmittance ${\alpha }_{i}$ of the $i$ -th voxel strongly influences the calculation of $I\left( {x, y, s}\right)$ compared with other voxels. Moreover, in our imaging system model, each ray passes through at least ${N}_{s}$ voxels. Therefore, the optimal transmittance value of the $i$ -th voxel is approximately the ${N}_{s}$ -th root of $I\left( {x, y, s}\right)$ . Considering this, the initial transmittance value ${\alpha }_{i}^{\left( 0\right) }$ of the $i$ -th voxel is calculated by
120
+
121
+ $$
122
+ {\alpha }_{i}^{\left( 0\right) } = \sqrt[{N}_{s}]{I\left( {x, y, s}\right) }. \tag{9}
123
+ $$
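Eq. (9) is a one-liner in numpy (hypothetical intensities, assumed normalised to (0, 1]):

```python
import numpy as np

def init_transmittance(I, n_s):
    """Eq. (9): alpha_i^(0) = I(x, y, s) ** (1 / N_s), the N_s-th root of the
    observed intensity (intensities assumed normalised to (0, 1])."""
    return np.power(I, 1.0 / n_s)

I = np.array([[0.81, 1.0], [0.25, 0.49]])   # hypothetical normalised intensities
alpha0 = init_transmittance(I, n_s=2)       # element-wise square root when N_s = 2
```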
124
+
125
+ Coarse-to-fine search: From Eqs. (1)-(5), the computational burden of our method depends on the number of rays. When a small number of rays is used, the estimation of the transmittances can be sped up. However, the light is then discretized roughly, so the use of such rays results in low accuracy of the estimated transmittances. On the other hand, when many rays are used, the estimation is time-consuming, but reliable transmittances can be obtained.
126
+
127
+ Considering the trade-off between the efficiency and accuracy of estimating the transmittances, we introduce a coarse-to-fine strategy. Firstly, in the coarse step, the transmittances are roughly estimated by using a small number of the rays (in our case, ${N}_{r} = {25}$ ). The obtained transmittances in the coarse step are used as the initial values of the transmittances in the following fine step. In the fine step, we find the optimal values of the transmittances by using many rays (in our case, ${N}_{r} = {533}$ ).
128
+
129
+ ## 3 Experimental results
130
+
131
+ To evaluate the performance of the proposed method, we performed a simulation using synthetic cell images and experiments using real cell images. In both, the parameter $w$ in Eq. (6) is set to $w = {0.125}$ .
132
+
133
+ ### 3.1 Simulation using synthetic cell images
134
+
135
+ In the simulation, we generate two virtual 3D cell models; Fig. 2(a) shows one of them. It consists of a nucleus (red in Fig. 2(a)), a cytoplasm (light blue), and a cell membrane (blue). From real cell images, it is observed that the transmittance values of the nucleus tend to be lower than those of the cytoplasm and the membrane. Based on this observation, the transmittance values of the three components are set to 0.80 (nucleus), 0.95 (cytoplasm), and 0.98 (membrane), respectively.
136
+
137
+ The imaging system model (Section 2.1) is applied to generate the synthetic images of the virtual cells. Here, the number ${N}_{r}$ of rays used in the $3\mathrm{D}$ image reconstruction is set to 533 so that, for each voxel shown in Fig. 1, at least one ray passes through the voxel when the maximum blur occurs in the model. Finally, we obtain 11 multi-focus images of ${50} \times {50}$ [pixel] (Fig. 2(b)).
138
+
139
+ We verify the initialization from input images and the coarse-to-fine search (Section 2.3). In the verification, 3D images are reconstructed by the proposed method, which is compared with two baselines. The first sets all initial transmittance values to 0.8, a value chosen through preliminary experiments. The second uses a fixed number of rays, either 25 or 533, to reconstruct the $3\mathrm{D}$ image.
140
+
141
+ To measure the accuracy of the reconstructed 3D image, we use the root mean square error (RMSE) between the reconstructed 3D image and the ground truth values. Table 1 shows the RMSE and computational time averaged over the two virtual 3D cell models for each method.
142
+
143
+ Regarding the initialization, Table 1 shows that initialization from the input images improves the accuracy of the reconstructed 3D image compared with setting all initial transmittance values to 0.8. Moreover, its computational time is shorter than that of the constant initialization using the same number of rays. From these results, initialization from the input images is useful for obtaining reliable transmittances.
144
+
145
+ Regarding the coarse-to-fine search, Table 1 shows that the proposed method using the coarse-to-fine search improves the accuracy of the reconstructed 3D image compared with the methods using only 25 or only 533 rays. Moreover, its computational time is shorter than that of the methods using only 533 rays.
146
+
147
+ The computational time of the $3\mathrm{D}$ image reconstruction increases with the number of rays used. In the coarse-to-fine search, the first, coarse step searches for the optimal transmittances roughly using a small number of rays; the second, fine step finds the optimal values using many rays. Therefore, the coarse-to-fine search reduces the total number of rays used in the reconstruction. Moreover, the coarse step finds transmittance values close to the optimal ones while avoiding local minima. Owing to these properties, the proposed method using the coarse-to-fine search can find the optimal transmittances efficiently and stably.
148
+
149
+ Table 1. Ablation study for initialization and coarse-to-fine methods.
150
+
151
+ <table><tr><td>Initial values</td><td>${N}_{r}$</td><td>RMSE $\left\lbrack {\times {10}^{-2}}\right\rbrack$</td><td>Time[sec]</td></tr><tr><td>Initialization from input images</td><td>coarse-to-fine (25 to 533)</td><td>0.869</td><td>336</td></tr><tr><td>Initialization from input images</td><td>25</td><td>0.875</td><td>58</td></tr><tr><td>Initialization from input images</td><td>533</td><td>0.873</td><td>1,553</td></tr><tr><td>Constant value $\left( {{\alpha }_{i}^{\left( 0\right) } = {0.80}}\right)$</td><td>coarse-to-fine (25 to 533)</td><td>0.878</td><td>373</td></tr><tr><td>Constant value $\left( {{\alpha }_{i}^{\left( 0\right) } = {0.80}}\right)$</td><td>25</td><td>0.882</td><td>76</td></tr><tr><td>Constant value $\left( {{\alpha }_{i}^{\left( 0\right) } = {0.80}}\right)$</td><td>533</td><td>0.906</td><td>2,209</td></tr></table>
152
+
153
+ Thus, the proposed method using initialization from the input images and the coarse-to-fine search achieves the best accuracy of 3D image reconstruction while drastically reducing the computational time compared with the methods using only 533 rays. Fig. 3 shows the 3D image of the virtual cell estimated by the proposed method with initialization from the input images and the coarse-to-fine search.
154
+
155
+ ### 3.2 Experiment using real cell images
156
+
157
+ In the experiment, the proposed method is applied to multi-focus images of real cells to reconstruct their 3D images. Fig. 4 (a) and (b) show the multi-focus image sequences of a normal cell and a cancer cell. Each cell image is ${62} \times {62}$ [pixel] with a spatial resolution of ${0.92\mu }\mathrm{m}$ /pixel. To apply the proposed method, the color cell images are converted into grayscale images.
158
+
159
+ Fig. 5 (a) and (b) show the 3D images of the normal and cancer cells reconstructed from Fig. 4, respectively. It takes about 1,650 [sec] on average to reconstruct a $3\mathrm{D}$ image. The average computational time in the experiments is longer than that in the simulation for the following reason. In the simulation, we assume that the cytoplasm is homogeneous with no other components; in other words, all the voxels in the artificial cell model have almost the same transmittance values. On the contrary, a real cell contains other components such as mitochondria, so its voxels take various transmittance values. Owing to this complex structure, reconstructing the 3D image of real cells is time-consuming. One of our future works is to speed up the estimation of the transmittances of real cells with complex structures.
160
+
161
+ ![01963a52-46f9-711a-9c3c-49f735e7c8a3_7_396_332_1065_164_0.jpg](images/01963a52-46f9-711a-9c3c-49f735e7c8a3_7_396_332_1065_164_0.jpg)
162
+
163
+ Fig. 4. Multi-focus images of real cells: (a) a normal cell; (b) a cancer cell.
164
+
165
+ ![01963a52-46f9-711a-9c3c-49f735e7c8a3_7_489_564_831_224_0.jpg](images/01963a52-46f9-711a-9c3c-49f735e7c8a3_7_489_564_831_224_0.jpg)
166
+
167
+ Fig. 5. Reconstructed 3D images of (a) the normal and (b) the cancer cells.
168
+
169
+ ## 4 Conclusion
170
+
171
+ We proposed a method for reconstructing the $3\mathrm{D}$ image of a transparent object from multi-focus microscopic images. To achieve this, we model a microscopic imaging system that acquires multi-focus images with different focuses. The optimal transmittances are determined by minimizing the difference between the intensities of the observed images and the images simulated by our model. The simulation using virtual cells confirms that the proposed method reconstructs the optimal 3D image efficiently and stably. In addition, 3D image reconstruction from real cell images is achieved with the proposed methods.
172
+
173
+ ## References
174
+
175
+ 1. R. Attota. Through-focus or volumetric type of optical imaging methods: a review. Journal of Biomedical Optics, 23:23-23-10, 2018.
176
+
177
+ 2. D. J. Brenner and E. J. Hall. Computed tomography - an increasing source of radiation exposure. New England Journal of Medicine, 357(22):2277-2284, 2007. PMID: 18046031.
178
+
179
+ 3. E. Figueiras, A. M. Soto, D. Jesus, M. Lehti, J. Koivisto, J. E. Parraga, J. Silva-Correia, J. M. Oliveira, R. L. Reis, M. Kellomäki, and J. Hyttinen. Optical projection tomography as a tool for 3d imaging of hydrogels. Biomed. Opt. Express, 5(10):3443-3449, Oct 2014.
180
+
181
+ 4. K. Tanaka, Y. Mukaigawa, H. Kubo, Y. Matsushita, and Y. Yagi. Recovering inner slices of layered translucent objects by multi-frequency illumination. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4):746-757, April 2017.
182
+
183
+ 5. S. Yoo, P. Ruiz, X. Huang, K. He, N. Ferrier, M. Hereld, A. Selewa, M. Daddysman, N. Scherer, O. Cossairt, and A. Katsaggelos. 3d image reconstruction from multi-focus microscope: Axial super-resolution and multiple-frame processing. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/H1l9RiIVWr/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,189 @@
1
+ § 3D IMAGE RECONSTRUCTION FROM MULTI-FOCUS MICROSCOPIC IMAGES *
2
+
3
+ Takahiro Yamaguchi ${}^{1}$ , Hajime Nagahara ${}^{2}$ , Ken’ichi Morooka ${}^{1}$ , Yuta Nakashima ${}^{2}$ , Yuki Uranishi ${}^{2}$ , Shoko Miyauchi ${}^{1}$ , and Ryo Kurazume ${}^{1}$
4
+
5
+ ${}^{1}$ Graduate School of Information Science and Electrical Engineering Kyushu University, Fukuoka 819-0395, Japan morooka@ait.kyushu-u.ac.jp ${}^{2}$ Institute for Datability Science, Osaka University, Osaka 565-0871, Japan nagahara@ids.osaka-u.ac.jp
6
+
7
+ Abstract. This paper presents a method for reconstructing a 3D image from multi-focus microscopic images captured with different focuses. We model multi-focus imaging by a microscope and produce the $3\mathrm{D}$ image of a target object based on the model. The 3D image reconstruction is done by minimizing the difference between the observed images and the simulated images generated by the imaging model. Simulation and experimental results show that the proposed method can generate the $3\mathrm{D}$ image of a transparent object efficiently and reliably.
8
+
9
+ Keywords: 3D imaging $\cdot$ Microscopy $\cdot$ Multi-focus images $\cdot$ Transparent object.
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ Cell observation by optical microscopy is widely used in fields such as biology and medicine; for example, cytodiagnosis and iPS cell culture are based on cell observation. A regular microscope acquires a 2D image of a target cell, which has a 3D structure. However, only some slices of the 3D cell structure can be observed in focus because the depth of field of a general microscope is narrow. Under such circumstances, various applications can be expected for technology that measures 3D cell structure.
14
+
15
+ A simple way to obtain the 3D structure with a microscope is to stack multiple slice images with different focuses. Various methodologies[1] have been proposed to measure the multi-focus images. However, simply stacked multi-focus images include many unclear regions because of light from in front of and behind the focal plane. To solve this problem, a confocal microscope is often used. In a confocal microscope, a pinhole in front of a light detector cuts off light that is out of focus while allowing only the fluorescence from the in-focus spot to enter the detector. Thus, a confocal microscope is a useful and powerful tool for obtaining clear images of only in-focus regions.
16
+
17
+ * Supported by JST CREST Grant Number JPMJCR1786 and JSPS KAKENHI Grant Number JP19H04139, Japan.
18
+
19
+ Another approach for imaging 3D object structure is computed tomography (CT)[2]. In CT scanning, X-rays irradiate a target object, and the X-rays passing through the object are measured by a detector located on the opposite side. Here, it is assumed that the X-rays are absorbed and attenuated by the object during irradiation. Under this assumption, many X-ray intensities are measured by rotating a pair of X-ray source and detector around the object. When the target object is represented as a set of voxels, the measured intensities are used to estimate the attenuation coefficient of each voxel.
20
+
21
+ Similar to X-ray CT, Optical Projection Tomography (OPT)[3] has been proposed for microscopic 3D imaging. Using regular light, lens optics, and a silicon image sensor, OPT estimates the light attenuation of each voxel. However, since the imaging systems of X-ray CT and OPT require rotation mechanisms, their methodology is not directly applicable to a regular microscope with no rotation mechanism.
22
+
23
+ In this paper, we propose a method for reconstructing a $3\mathrm{D}$ image from multi-focus microscopic images obtained with different focuses. We model an imaging system that acquires the multi-focus microscopic images. In the imaging system, the microscopic images are produced by the light emitted from its light source. When the light passes through the transparent object, it is attenuated depending on the transmittance of the object material, so each pixel in the microscopic image is related to these attenuated rays. Considering this, we reconstruct the 3D image of the object by minimizing the difference between the observed images and the simulated images generated by the imaging model.
24
+
25
+ Similar to our method, there are two approaches for $3\mathrm{D}$ imaging using multi-focus images. The first is the reconstruction of a $3\mathrm{D}$ image that contains the appearance of inner slices of transparent objects[4]; the 3D image is generated by simply piling the discrete slices acquired by CCD cameras. The second approach is to reconstruct a $3\mathrm{D}$ image of a target object’s luminescence from multi-focus images obtained by a fluorescence microscope[5]. Unlike these two approaches, we aim to reconstruct the 3D image as a set of voxels having transmittance values from multi-focus images obtained by a general bright-field microscope.
26
+
27
+ § 2 3D IMAGE RECONSTRUCTION FROM MULTI-FOCUS IMAGES
28
+
29
+ § 2.1 IMAGING MODEL FOR MULTI-FOCUS MICROSCOPIC IMAGES
30
+
31
+ Fig. 1 shows how a pixel intensity is observed by a microscope. We denote the intensity value of an arbitrary pixel (x, y) in the $s$ -th image of the multi-focus microscopic image sequence as ${I}_{\mathbf{\alpha }}\left( {x,y,s}\right)$ . We assume that a target space including one or more cells is represented by a set of voxels. The 3D image is estimated as the transmittances of the voxels ${\alpha }_{i}\left( {i = 0,1,\cdots ,{N}_{v} - 1}\right)$ , as shown in Fig. 1. An incident light is emitted from a light source under a stage.
32
+
33
+ The incident light is discretized as a set of ${N}_{r}$ rays. The intensity ${l}_{j}$ of the $j$ -th $\left( {j = 0,1,\cdots ,{N}_{r} - 1}\right)$ ray is attenuated every time the ray passes through a voxel. The attenuation depends on the transmittance of the voxel and the length of the ray path through the voxel. Hence, we model the relationship between ${l}_{j}$ and the attenuated ray ${l}_{j}^{\prime }$ by
34
+
35
+ $$
36
+ {l}_{j}^{\prime } = {l}_{j} \times \mathop{\prod }\limits_{i}{\alpha }_{i}^{{d}_{ji}} \tag{1}
37
+ $$
38
+
39
+ < g r a p h i c s >
40
+
41
+ Fig. 1. Imaging system model
42
+
43
+ where ${d}_{ji}$ is the length of the $j$ -th ray if the ray passes through the $i$ -th voxel. Otherwise, ${d}_{ji} = 0$ .
44
+
45
+ By taking the log of both sides of Eq. (1) and expanding it, Eq. (1) is rewritten as
46
+
47
+ $$
48
+ \log {l}_{j}^{\prime } = \log {l}_{j} + \mathop{\sum }\limits_{i}{d}_{ji}\log {\alpha }_{i}. \tag{2}
49
+ $$
50
+
51
+ By collecting all relationships between the incident ray and attenuated ray, ${l}_{j}$ and ${l}_{j}^{\prime }$ using Eq. (2), we obtain the following formulation:
52
+
53
+ $$
54
+ {\mathbf{L}}^{\prime } = \mathbf{{DA}} + \mathbf{L}, \tag{3}
55
+ $$
56
+
57
+ where
58
+
59
+ $$
60
+ \mathbf{L} = \left\lbrack \begin{matrix} \log {l}_{0} \\ \vdots \\ \log {l}_{{N}_{r} - 1} \end{matrix}\right\rbrack ,{\mathbf{L}}^{\prime } = \left\lbrack \begin{matrix} \log {l}_{0}^{\prime } \\ \vdots \\ \log {l}_{{N}_{r} - 1}^{\prime } \end{matrix}\right\rbrack ,\mathbf{A} = \left\lbrack \begin{matrix} \log {\alpha }_{0} \\ \vdots \\ \log {\alpha }_{{N}_{v} - 1} \end{matrix}\right\rbrack ,
61
+ $$
62
+
63
+ $$
64
+ \mathbf{D} = \left\lbrack \begin{matrix} {d}_{00} & \cdots & {d}_{0\left( {{N}_{v} - 1}\right) } \\ \vdots & \ddots & \vdots \\ {d}_{\left( {{N}_{r} - 1}\right) 0} & \cdots & {d}_{\left( {{N}_{r} - 1}\right) \left( {{N}_{v} - 1}\right) } \end{matrix}\right\rbrack .
65
+ $$
66
+
67
+ Here, we assume that the apertures of the light source and objective lens are large enough relative to the target cell. Under this assumption, an arbitrary pixel ${I}_{\mathbf{\alpha }}\left( {x,y,s}\right)$ can be expressed similarly by shifting the stage along the (x, y, s) coordinates, so $\mathbf{D}$ is regarded as a function of the three parameters $x, y$ , and $s$ : the components of the matrix $\mathbf{D}$ are shifted by (x, y, s). Hence, Eq. (3) is rewritten as
68
+
69
+ $$
70
+ {\mathbf{L}}^{\prime }\left( {x,y,s}\right) = \mathbf{D}\left( {x,y,s}\right) \mathbf{A} + \mathbf{L}. \tag{4}
71
+ $$
72
+
73
+ Finally, ${I}_{\mathbf{\alpha }}\left( {x,y,s}\right)$ is calculated by the total amount of the attenuated rays ${l}_{j}^{\prime }$ :
74
+
75
+ $$
76
+ {I}_{\mathbf{\alpha }}\left( {x,y,s}\right) = \mathop{\sum }\limits_{j}{l}_{j}^{\prime }\left( {x,y,s}\right) . \tag{5}
77
+ $$
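As a concrete illustration, Eqs. (1)-(5) can be sketched in a few lines of Python. The ray intensities, path-length matrix, and transmittances below are toy stand-ins for illustration, not values from the paper.

```python
import math

def simulate_pixel(l0, D, alpha):
    """Sketch of Eqs. (1)-(5): attenuate each ray through the voxel grid
    and sum the attenuated rays into one pixel intensity.

    l0    : list of incident intensities l_j, one per ray
    D     : D[j][i] is the path length d_ji of ray j through voxel i (0 if missed)
    alpha : list of voxel transmittances alpha_i in (0, 1]
    """
    total = 0.0
    for j, lj in enumerate(l0):
        # Eq. (2): log l'_j = log l_j + sum_i d_ji * log(alpha_i)
        log_lp = math.log(lj) + sum(d * math.log(a) for d, a in zip(D[j], alpha))
        # Eq. (5): the pixel intensity is the sum of the attenuated rays
        total += math.exp(log_lp)
    return total
```

With all transmittances equal to 1, the pixel intensity reduces to the sum of the incident rays, which serves as a quick sanity check for an implementation.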
78
+
79
+ § 2.2 ESTIMATION OF THE VOXEL TRANSMITTANCE
80
+
81
+ Using the model described in Section 2.1, we simulate the observed multi-focus images. When the estimated transmittances of the target voxels are close to the real ones, the intensity values ${I}_{\mathbf{\alpha }}\left( {x,y,s}\right)$ of the simulated multi-focus images should equal the intensity values $I\left( {x,y,s}\right)$ of the images observed by the microscope. Considering this, the 3D image is reconstructed by minimizing an objective function $F$ :
82
+
83
+ $$
84
+ F\left( \mathbf{\alpha }\right) = E\left( \mathbf{\alpha }\right) + {wTV}\left( \mathbf{\alpha }\right) , \tag{6}
85
+ $$
86
+
87
+ where $\mathbf{\alpha }$ is a vector composed of all the transmittances $\mathbf{\alpha } = \left( {{\alpha }_{0},{\alpha }_{1},\cdots ,{\alpha }_{{N}_{v} - 1}}\right)$ . The parameter $w$ is a weighted coefficient as a regularization parameter. The conjugate gradient method is applied to find optimal transmittances which minimize $F\left( \mathbf{\alpha }\right)$ . The function $E\left( \mathbf{\alpha }\right)$ in Eq. (6) represents the difference between the intensity value $I\left( {x,y,s}\right)$ in the observed image and ${I}_{\mathbf{\alpha }}\left( {x,y,s}\right)$ in the simulated image by Eq. (5). The function $E$ is defined as
88
+
89
+ $$
90
+ E\left( \mathbf{\alpha }\right) = \mathop{\sum }\limits_{s}\mathop{\sum }\limits_{x}\mathop{\sum }\limits_{y}{\left( {I}_{\mathbf{\alpha }}\left( x,y,s\right) - I\left( x,y,s\right) \right) }^{2}. \tag{7}
91
+ $$
92
+
93
+ On the other hand, ${TV}\left( \mathbf{\alpha }\right)$ is a regularization function based on a total variation (TV) norm that makes the reconstructed 3D image smooth. Practically, the value of ${TV}\left( \mathbf{\alpha }\right)$ is calculated as the total squared transmittance difference between each target voxel and its six neighboring voxels:
94
+
95
+ $$
96
+ {TV}\left( \mathbf{\alpha }\right) = \mathop{\sum }\limits_{{k \in {\Phi }_{i}}}{\left( {\alpha }_{i} - {\alpha }_{k}\right) }^{2}, \tag{8}
97
+ $$
98
+
99
+ where ${\Phi }_{i}$ is the set of the six neighbors of the target $i$ -th voxel.
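The objective of Eqs. (6)-(8) can be sketched as follows. The `simulate` callable stands in for the forward model of Section 2.1, and the dictionary-based data structures are illustrative assumptions, not the authors' implementation.

```python
def objective(alpha, simulate, observed, neighbors, w=0.125):
    """Sketch of Eq. (6): data term E (Eq. 7) plus TV regularizer (Eq. 8).

    alpha     : dict voxel -> transmittance
    simulate  : callable(alpha) -> dict (x, y, s) -> simulated intensity I_alpha
    observed  : dict (x, y, s) -> observed intensity I
    neighbors : dict voxel -> iterable of its six-neighbor voxels
    """
    sim = simulate(alpha)
    # Eq. (7): squared intensity differences over all pixels of all slices
    E = sum((sim[k] - observed[k]) ** 2 for k in observed)
    # Eq. (8): squared transmittance differences over neighboring voxel pairs
    TV = sum((alpha[i] - alpha[k]) ** 2
             for i, nbrs in neighbors.items() for k in nbrs)
    return E + w * TV
```

In the paper this objective is minimized with the conjugate gradient method; the sketch only evaluates $F(\mathbf{\alpha})$ for a given transmittance assignment.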
100
+
101
+ § 2.3 EFFICIENT SEARCH OF OPTIMAL TRANSMITTANCES
102
+
103
+ From Eq. (6), our proposed method finds the optimal transmittances by iteratively updating them. To find the optimum efficiently and robustly, we introduce the following two techniques.
104
+
105
+ < g r a p h i c s >
106
+
107
+ Fig. 2. (a)An artificial 3D cell model; (b) multi-focus images of the cell model.
108
+
109
+ < g r a p h i c s >
110
+
111
+ Fig. 3. 3D image of the cell model estimated by our method.
112
+
113
+ Initialization from input images: The initial values of the transmittances are important to find the optimum transmittances robustly by the conjugate gradient method. Given a sequence of ${N}_{s}$ multi-focus images, we determine the initial transmittances based on the intensity value $I\left( {x,y,s}\right)$ of the original image sequence.
114
+
115
+ Let us consider the case where all rays intersect at the $i$ -th voxel when the intensity $I\left( {x,y,s}\right)$ is calculated. In this case, since all rays pass through the $i$ -th voxel, the transmittance ${\alpha }_{i}$ of the $i$ -th voxel strongly influences the calculation of $I\left( {x,y,s}\right)$ compared with other voxels. Moreover, in our imaging system model, each ray passes through at least ${N}_{s}$ voxels. Therefore, the optimal transmittance of the $i$ -th voxel is approximately the ${N}_{s}$ -th root of $I\left( {x,y,s}\right)$ . Considering these, the initial transmittance value ${\alpha }_{i}^{\left( 0\right) }$ of the $i$ -th voxel is calculated by
116
+
117
+ $$
118
+ {\alpha }_{i}^{\left( 0\right) } = \sqrt[{N}_{s}]{I\left( {x,y,s}\right) }. \tag{9}
119
+ $$
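Assuming intensities are normalized to (0, 1], Eq. (9) is a one-liner:

```python
def init_transmittance(I_xys, N_s):
    # Eq. (9): the initial transmittance alpha_i^(0) is the N_s-th root of
    # the observed intensity I(x, y, s), assumed normalized to (0, 1].
    return I_xys ** (1.0 / N_s)
```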
120
+
121
+ Coarse-to-fine search: From Eqs. (1)-(5), the computational burden of our method depends on the number of rays. When a small number of rays is used, the estimation of the transmittances is fast, but the light is discretized roughly, which lowers the accuracy of the estimated transmittances. On the other hand, when many rays are used, the estimation is time-consuming but yields reliable transmittances.
122
+
123
+ Considering the trade-off between the efficiency and accuracy of estimating the transmittances, we introduce a coarse-to-fine strategy. Firstly, in the coarse step, the transmittances are roughly estimated by using a small number of the rays (in our case, ${N}_{r} = {25}$ ). The obtained transmittances in the coarse step are used as the initial values of the transmittances in the following fine step. In the fine step, we find the optimal values of the transmittances by using many rays (in our case, ${N}_{r} = {533}$ ).
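The two-stage search can be sketched generically. Here `run_stage` stands in for one conjugate-gradient minimization of Eq. (6) at a given ray count; it is an assumed interface for illustration, not the authors' code.

```python
def coarse_to_fine(run_stage, alpha0, ray_counts=(25, 533)):
    """Sketch of the coarse-to-fine search: optimize with few rays first,
    then warm-start the fine stage with the coarse result.

    run_stage : callable(alpha_init, n_rays) -> optimized alpha
    alpha0    : initial transmittances (e.g. from Eq. (9))
    """
    alpha = alpha0
    for n_rays in ray_counts:
        # each stage starts from the previous stage's solution
        alpha = run_stage(alpha, n_rays)
    return alpha
```

The key design choice is that the coarse result is reused as the initial point of the fine stage, which is why the total work stays far below a 533-ray-only run.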
124
+
125
+ § 3 EXPERIMENTAL RESULTS
126
+
127
+ To evaluate the performance of the proposed method, we made a simulation using synthetic cell images and experiments using real cell images. In the simulation and the experiment, the parameter $w$ in Eq.(6) is set to $w = {0.125}$ .
128
+
129
+ § 3.1 SIMULATION USING SYNTHETIC CELL IMAGES
130
+
131
+ In the simulation, we generate two virtual 3D cell models; Fig. 2(a) shows one of them. It consists of a nucleus (red in Fig. 2(a)), a cytoplasm (light blue), and a cell membrane (blue). From real cell images, it is observed that the transmittance values of the nucleus tend to be lower than those of the cytoplasm and the membrane. Based on this observation, the transmittance values of the three components are set to 0.80 (nucleus), 0.95 (cytoplasm), and 0.98 (membrane), respectively.
132
+
133
+ The imaging system model (Section 2.1) is applied to generate the synthetic images of the virtual cells. Here, the number ${N}_{r}$ of rays used in the $3\mathrm{D}$ image reconstruction is set to 533 so that, for each voxel shown in Fig. 1, at least one ray passes through the voxel when the maximum blur occurs in the model. Finally, we obtain 11 multi-focus images of ${50} \times {50}$ [pixel] (Fig. 2(b)).
134
+
135
+ We verify the initialization from input images and the coarse-to-fine search (Section 2.3) by reconstructing the 3D images with the proposed methods and comparing them against two baselines: a method in which all initial transmittances are set to 0.8 (a value chosen through preliminary experiments), and methods that use a fixed number of 25 or 533 rays to reconstruct the $3\mathrm{D}$ image.
136
+
137
+ To measure the accuracy of the reconstructed 3D images, we use the root mean square error (RMSE) between the reconstructed 3D image and the ground-truth values. Table 1 shows the RMSE and computational time averaged over the two virtual 3D cell models for each method.
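For reference, the RMSE used here is the standard definition over all voxel transmittances:

```python
import math

def rmse(estimated, ground_truth):
    # Root mean square error between reconstructed and true transmittances.
    assert len(estimated) == len(ground_truth)
    return math.sqrt(sum((e - g) ** 2 for e, g in zip(estimated, ground_truth))
                     / len(estimated))
```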
138
+
139
+ Regarding the initialization, Table 1 shows that the initialization from input images improves the accuracy of the reconstructed 3D image compared with setting all initial transmittances to 0.8. Moreover, its computational time is shorter than that of the constant initialization with the same number of rays. These results show that the initialization from input images is useful for obtaining reliable transmittances.
140
+
141
+ Regarding the coarse-to-fine search, Table 1 shows that the proposed method using the coarse-to-fine search improves the accuracy of the reconstructed 3D image compared with the methods using only 25 or 533 rays. Moreover, its computational time is shorter than that of the methods using only 533 rays.
142
+
143
+ The computational time of the $3\mathrm{D}$ image reconstruction increases with the number of rays used. In the coarse-to-fine search, the first, coarse step searches for the optimal transmittances roughly using a small number of rays; the second, fine step refines the transmittances using many rays. The coarse-to-fine search therefore reduces the total number of rays used in the reconstruction. Moreover, the coarse search finds transmittance values close to the optimal ones while avoiding local minima. Owing to these properties, the proposed method with the coarse-to-fine search finds the optimal transmittances efficiently and stably.
144
+
145
+ Table 1. Ablation study for initialization and coarse-to-fine methods.
146
+
147
+ Initial values | ${N}_{r}$ | RMSE $\left\lbrack {\times {10}^{-2}}\right\rbrack$ | Time [sec]
+ Initialization from input images | coarse-to-fine (25 to 533) | 0.869 | 336
+ Initialization from input images | 25 | 0.875 | 58
+ Initialization from input images | 533 | 0.873 | 1,553
+ Constant value $\left( {{\alpha }_{i}^{\left( 0\right) } = {0.80}}\right)$ | coarse-to-fine (25 to 533) | 0.878 | 373
+ Constant value $\left( {{\alpha }_{i}^{\left( 0\right) } = {0.80}}\right)$ | 25 | 0.882 | 76
+ Constant value $\left( {{\alpha }_{i}^{\left( 0\right) } = {0.80}}\right)$ | 533 | 0.906 | 2,209
171
+ Thus, the proposed method using the initialization from input images and the coarse-to-fine search achieves the best 3D reconstruction accuracy while drastically reducing the computational time compared with the methods using only 533 rays. Fig. 3 shows the 3D image of the virtual cell estimated by the proposed method using the initialization from input images and the coarse-to-fine search.
172
+
173
+ § 3.2 EXPERIMENT USING REAL CELL IMAGES
174
+
175
+ In the experiment, the proposed method is applied to multi-focus images of real cells to reconstruct their 3D images. Fig. 4 (a) and (b) show the multi-focus image sequences of a normal cell and a cancer cell. Each cell image is ${62} \times {62}$ [pixel] with a spatial resolution of ${0.92\mu }\mathrm{m}$ /pixel. To apply the proposed method, the color cell images are converted into grayscale images.
176
+
177
+ Fig. 5 (a) and (b) show the 3D images of the normal and cancer cells reconstructed from Fig. 4, respectively. It takes about 1,650 [sec] on average to reconstruct a $3\mathrm{D}$ image. The average computational time in the experiments is longer than that in the simulation for the following reason. In the simulation, we assume that the cytoplasm is homogeneous with no other components; in other words, all the voxels in the artificial cell model have almost the same transmittance values. On the contrary, a real cell contains other components such as mitochondria, so its voxels take various transmittance values. Owing to this complex structure, reconstructing the 3D image of real cells is time-consuming. One of our future works is to speed up the estimation of the transmittances of real cells with complex structures.
178
+
179
+ < g r a p h i c s >
180
+
181
+ Fig. 4. Multi-focus images of real cells: (a) a normal cell; (b) a cancer cell.
182
+
183
+ < g r a p h i c s >
184
+
185
+ Fig. 5. Reconstructed 3D images of (a) the normal and (b) the cancer cells.
186
+
187
+ § 4 CONCLUSION
188
+
189
+ We proposed a method for reconstructing the $3\mathrm{D}$ image of a transparent object from multi-focus microscopic images. To achieve this, we model a microscopic imaging system that acquires multi-focus images with different focuses. The optimal transmittances are determined by minimizing the difference between the intensities of the observed images and the images simulated by our model. The simulation using virtual cells confirms that the proposed method reconstructs the optimal 3D image efficiently and stably. In addition, 3D image reconstruction from real cell images is achieved with the proposed methods.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/HJxydABnbS/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,139 @@
1
+ # NuClick: From Clicks in the Nuclei to Nuclear Boundaries
2
+
3
+ Mostafa Jahanifar ${}^{\star 2}$ , Navid Alemi Koohbanani ${}^{\star 1,3}$ , and Nasir Rajpoot ${}^{1,3}$
4
+
5
+ ${}^{1}$ Department of Computer Science, University of Warwick, Coventry ${}^{2}$ Department of Research & Development, NRP Co., Tehran, Iran ${}^{3}$ Alan Turing Institute, London, UK
6
+
7
+ Abstract. The best performing nuclear segmentation methods are based on deep learning algorithms that require a large amount of annotated data. However, collecting annotations for nuclear segmentation is a very labor-intensive and time-consuming task, so a tool that can facilitate and speed up this procedure is of great interest. Here we propose a simple yet efficient framework based on convolutional neural networks, named NuClick, which can precisely segment nuclear boundaries by accepting a single point position (or click) inside each nucleus. Based on the clicked positions, inclusion and exclusion maps are generated, which comprise 2D Gaussian distributions centered on those positions. These maps serve as guiding signals for the network, as they are concatenated to the input image. The inclusion map focuses on the desired nucleus, while the exclusion map indicates neighboring nuclei and improves segmentation in scenes with nuclear clutter. NuClick not only facilitates collecting more annotations from unseen data but also leads to superior segmentation output for deep models. It is also worth mentioning that an instance segmentation model trained on NuClick-generated labels ranked ${1}^{\text{st }}$ in the LYON19 challenge.
8
+
9
+ Keywords: Interactive annotation $\cdot$ nuclei segmentation $\cdot$ instance segmentation $\cdot$ computational pathology
10
+
11
+ ## 1 Introduction
12
+
13
+ Appearance and shape characteristics of nuclei in histology images are important markers for the diagnosis of cancer and predicting patient outcome [1]. To quantify these features, one should first determine the boundaries of the nuclei, which requires lots of time and effort to achieve manually. To this end, automatic segmentation methods play an important role in facilitating this task.
14
+
15
+ Since the emergence of deep learning (DL) methods and their superior performance over classical (feature-based) methods, the need for annotated data has increased significantly. The data-dependent nature of DL methods still imposes a huge burden on humans for providing annotated data. Despite the labor-intensive nature of annotating nuclei within histology images, several datasets have been provided for training deep networks [2,3,4]. The question here is: how we
16
+
17
+ ---
18
+
19
+ * These authors contributed equally to this work.
20
+
21
+ ---
22
+
23
+ ![01963a54-2978-7af9-9dc8-0f0a1f6bb908_1_391_330_1027_219_0.jpg](images/01963a54-2978-7af9-9dc8-0f0a1f6bb908_1_391_330_1027_219_0.jpg)
24
+
25
+ Fig. 1. Example outputs of NuClick: the annotator clicks inside a nucleus and the mask is generated by the NuClick model.
26
+
27
+ can use available annotated datasets to ease extending this knowledge and reduce the human effort when creating a new dataset for another cancer/tissue type? In the computer vision domain, several methods have been employed to speed up the procedure of collecting annotations for natural images by accepting a few points from the annotator [5]. One of the most efficient models is DEXTR [5], which takes the extreme points (the left-most, right-most, top-most, and bottom-most pixels) of an object as input to extract the mask of the desired object.
28
+
29
+ All these approaches require the user to click on several points on the boundary of an object or to draw a bounding box. For nuclear segmentation, providing several points on the boundaries of nuclei is still a high burden, since the annotator must first find the boundary of a nucleus at high magnification and then select several points on it. Moreover, nuclei are small objects, and their number may exceed 400 in a patch of size ${500} \times {500}$ pixels (for example, when there is a dense cluster of lymphocytes), which makes this task even more arduous. To the best of our knowledge, there is no similar DL-based approach for interactive nuclei segmentation in histology images. Some works like [6] used the marker-controlled watershed algorithm to segment nuclei from marked points, which fails on complex histology images.
30
+
31
+ Here, we propose a simple yet effective method for collecting nuclear annotations by asking the user to provide only one point inside each nucleus (examples are depicted in Fig. 1). Clicking one point inside an object is not a demanding task and can be done at low resolution by a non-expert. In summary, our contributions in this work are two-fold: 1) proposing a DL framework that adds two channels comprising guiding signals for the selected nucleus and its neighboring nuclei; 2) showing that the outputs of this framework are useful in practice and for training deep networks.
32
+
33
+ ## 2 Methodology
34
+
35
+ In the current work, we train the NuClick model on different labeled nuclei datasets. For each dataset, patches are extracted from larger images based on the centroids of the annotated nuclei, and two guiding channels are created to serve alongside the RGB patches as the network input. The network's parameters are then optimized with a weighted hybrid loss function. During the prediction phase, our framework accepts an image and its marked nuclei (clicked positions) from the user as inputs and generates the instance segmentation
36
+
37
+ ![01963a54-2978-7af9-9dc8-0f0a1f6bb908_2_395_328_1014_391_0.jpg](images/01963a54-2978-7af9-9dc8-0f0a1f6bb908_2_395_328_1014_391_0.jpg)
38
+
39
+ Fig. 2. NuClick network architecture, comprising convolutional, residual, and multi-scale blocks. Level transitions use MaxPooling and TransposedConv layers.
40
+
41
+ of the clicked nuclei in the output. In the remainder of this paper, we describe each step in detail.
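A minimal sketch of building the guiding channels from click positions follows. The Gaussian spread `sigma` and the max-combination of overlapping Gaussians are assumptions for illustration; the paper does not specify these details.

```python
import math

def guiding_maps(clicks, target_idx, height, width, sigma=2.0):
    """Build inclusion/exclusion guiding channels from click positions.

    Each click contributes a 2D Gaussian centered on it: the clicked
    (target) nucleus goes to the inclusion map, all other clicks to the
    exclusion map. Overlapping Gaussians are combined with max().
    """
    inc = [[0.0] * width for _ in range(height)]
    exc = [[0.0] * width for _ in range(height)]
    for idx, (cy, cx) in enumerate(clicks):
        dest = inc if idx == target_idx else exc
        for y in range(height):
            for x in range(width):
                g = math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
                dest[y][x] = max(dest[y][x], g)
    return inc, exc
```

The two maps are then concatenated to the RGB patch, giving the network a five-channel input.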
42
+
43
+ ### 2.1 Model architecture and loss function
44
+
45
+ We have utilized an encoder-decoder architecture inspired by U-Net [7], which reduces the size of the feature maps in the encoding path while increasing the number of channels. The decoding path reverses this effect through several levels and turns those small, enriched feature maps into a single-channel dense prediction. However, unlike the "traditional" U-Net, the NuClick architecture incorporates residual and multi-scale convolutional blocks [8] instead of plain convolutional layers at each level of the encoding and decoding paths. An overview of the proposed NuClick architecture is depicted in Fig. 2. Using residual blocks enables us to train the network with higher learning rates without worrying about the vanishing-gradient effect [9]. Furthermore, multi-scale convolutional blocks allow the network to better capture image structures of different sizes and extract more relevant feature maps, boosting the network performance [8].
46
+
47
+ For training the network, we propose a hybrid weighted loss function based on a soft variant of the Dice similarity coefficient and a weighted binary cross-entropy (1). The Dice part of the loss counters the class imbalance problem during training, as most pixels belong to the background, while the weighted binary cross-entropy penalizes the network when it wrongly segments neighbouring nuclei. Our proposed hybrid loss is as follows:
48
+
49
+ $$
50
+ \mathcal{L} = 1 - \frac{\mathop{\sum }\limits_{i}{p}_{i}{g}_{i} + \varepsilon }{\mathop{\sum }\limits_{i}{p}_{i} + \mathop{\sum }\limits_{i}{g}_{i} + \varepsilon } - \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i}\left( {{g}_{i}\log {p}_{i} + \left( {1 - {g}_{i}}\right) \log \left( {1 - {p}_{i}}\right) }\right) \tag{1}
51
+ $$
52
+
53
+ where $\varepsilon$ is a small number, $n$ is the number of pixels in the image spatial domain, and ${p}_{i}$ , ${g}_{i}$ , and ${w}_{i}$ are the values of the prediction map, the ground-truth mask, and the weight map at pixel $i$ , respectively.
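Under these definitions, Eq. (1) can be sketched directly in NumPy (a minimal illustration, not the authors' implementation; the clipping constant that keeps the logarithms finite is our own addition):

```python
import numpy as np

def hybrid_loss(p, g, w, eps=1e-6):
    """Hybrid loss of Eq. (1): (1 - soft Dice) plus pixel-weighted BCE.

    p, g, w: flat arrays of predictions, ground-truth labels, and weights.
    """
    p = np.clip(p, eps, 1.0 - eps)  # keep log() finite (our addition)
    soft_dice = (np.sum(p * g) + eps) / (np.sum(p) + np.sum(g) + eps)
    wbce = -np.mean(w * (g * np.log(p) + (1 - g) * np.log(1 - p)))
    return (1.0 - soft_dice) + wbce
```

A correct prediction keeps both terms small, while a wrong prediction inflates the weighted cross-entropy term, and more so on pixels with large $w_i$.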
54
+
55
+ ![01963a54-2978-7af9-9dc8-0f0a1f6bb908_3_391_328_1024_219_0.jpg](images/01963a54-2978-7af9-9dc8-0f0a1f6bb908_3_391_328_1024_219_0.jpg)
56
+
57
+ Fig. 3. Guiding signal maps: (a)-(c) show inputs to the NuClick network which are image patch, inclusion map, and exclusion map, respectively, (d) depicts the desired network output (ground truth), and (e) illustrates pixel-wise weight map used in the loss function.
58
+
59
+ The pixel-wise weight map is generated from the ground-truth, where regions of the neighboring nuclei are given 10 times more weight than the desired nucleus (the marked object). To better understand these maps, a simple image patch with the desired nucleus clicked (marked) in it, alongside its related ground-truth and weight map, is illustrated in Fig. 3. We incorporated this weighting scheme into the loss function to put more emphasis on the neighboring nuclei and avoid false segmentation of touching objects. In the alternative scenario, if we set the weight of the desired nucleus higher than that of the other nuclei, the network may be falsely biased toward over-segmentation that spills into neighboring nuclei.
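This weighting scheme can be illustrated as follows (a sketch under our own conventions: the function name and the assumption that 0 labels the background in an instance-labeled mask are ours):

```python
import numpy as np

def make_weight_map(instance_mask, target_id, neighbor_weight=10.0):
    """Pixel weights for the loss: background and the clicked (target)
    nucleus get weight 1, neighbouring nuclei get 10x more weight."""
    w = np.ones(instance_mask.shape, dtype=float)
    neighbors = (instance_mask > 0) & (instance_mask != target_id)
    w[neighbors] = neighbor_weight
    return w
```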
60
+
61
+ ### 2.2 Guiding signal maps
62
+
63
+ As a guiding signal and to incorporate prior knowledge, an extra channel is concatenated with the image at the network input, containing a 2D Gaussian distribution centered on the selected point (similar to [10], where the nucleus centroid was modeled as a Gaussian). We call this guiding signal the inclusion map, as it refers to the nucleus we want to be included in the segmentation output. Adding the inclusion map helps the network achieve a desirable segmentation of the selected nucleus as long as it is isolated. Based on our early experiments, although the inclusion map guides the model to segment the selected nucleus, in the presence of a nuclei cluster the segmentation output might contain neighbouring nuclei too. To avoid this phenomenon and to exclude the neighboring nuclei from the output prediction map, we introduce the exclusion map as the fifth input channel, which can contain multiple 2D Gaussian distributions centered on the clicked positions of the neighboring nuclei (if the annotator provides them). For clarity, we display the inclusion and exclusion maps in Fig. 3(b)-(c) for a sample patch. They can also be seen at the input of the network in Fig. 2, where they are concatenated with the RGB image patch.
64
+
65
+ Thus, the inclusion channel always provides a guiding signal for segmenting the desired nucleus, and if other nuclei in the vicinity of the patch are selected, the exclusion map also provides a signal; otherwise it is an all-zero channel. Please note that during the training phase the inclusion and exclusion maps are generated on-the-fly, based on the augmented (changed) ground-truth mask, while during the test phase they are constructed directly from the user's clicked positions.
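Constructing these two channels from click coordinates can be sketched as below (our own helper; the Gaussian width `sigma` is an assumption, since the paper does not state it):

```python
import numpy as np

def gaussian_map(shape, points, sigma=2.0):
    """Sum of 2D Gaussians centred on the given (row, col) click points;
    an empty point list yields an all-zero channel."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    m = np.zeros(shape, dtype=float)
    for r, c in points:
        m += np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2.0 * sigma ** 2))
    return np.clip(m, 0.0, 1.0)

# 4th input channel: the clicked nucleus; 5th: its neighbours (may be empty)
inclusion = gaussian_map((128, 128), [(64, 64)])
exclusion = gaussian_map((128, 128), [(30, 40), (90, 100)])
```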
66
+
67
+ ### 2.3 Training Procedure
68
+
69
+ To optimize the network weights, an Adam optimizer with an initial learning rate of 0.003 was used. NuClick has been trained for 300 epochs with a batch size of 128 on all datasets. At each iteration, the centroid positions of the desired nuclei are randomly jittered, and subsequently, the inclusion and exclusion maps are created on-the-fly. This makes the network more robust to variations in the input positions provided by the annotator.
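The jittering step might look like the following sketch (our own helper; the 5-pixel magnitude is borrowed from the test-time simulation described later and is an assumption for training):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def jitter_points(points, max_shift=5):
    """Randomly shift each (row, col) click by up to max_shift pixels
    per axis, mimicking imprecise user clicks during training."""
    pts = np.asarray(points)
    return pts + rng.integers(-max_shift, max_shift + 1, size=pts.shape)
```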
70
+
71
+ ### 2.4 Testing Procedure
72
+
73
+ During test time, for each input image, the user clicks on the nuclei for annotation, or the centroids are loaded from a file. Afterward, for each available coordinate, a patch of size ${128} \times {128}$ is extracted from the image, and the inclusion and exclusion maps are created as described in the previous section. NuClick predicts a nucleus segmentation map for each click (patch). That prediction map is then converted to a binary map by thresholding, and objects with areas smaller than 10 pixels are removed (based on the size of the smallest object in the dataset). The optimal threshold value, $T = {0.4}$ , is selected by testing a set of candidate values and evaluating the resulting binary maps. Moreover, to remove all objects except the desired nucleus from the binary map, the morphological reconstruction operator is used, which requires a marker and a mask. The marker has all its pixels equal to 0 except for a single pixel at the centroid location, which is set to 1. The binary map plays the role of the mask in the morphological reconstruction. Having all patches predicted and processed, we fill an empty canvas at the origin coordinates of each patch with the processed nuclei masks to generate the final instance segmentation map of the input image.
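The thresholding and marker-based cleanup above can be sketched as follows (a minimal illustration: binary morphological reconstruction from a single-pixel marker amounts to keeping the connected component that contains the click, which we implement here with `scipy.ndimage.label`):

```python
import numpy as np
from scipy import ndimage

def postprocess(pred, centroid, thresh=0.4, min_area=10):
    """Binarize a NuClick prediction and keep only the clicked object."""
    binary = pred > thresh
    # remove objects smaller than min_area pixels
    lbl, _ = ndimage.label(binary)
    sizes = np.bincount(lbl.ravel())
    small = sizes < min_area
    small[0] = False                      # never touch the background label
    binary[small[lbl]] = False
    # reconstruction from a one-pixel marker at the centroid:
    # keep only the connected component containing the click
    lbl, _ = ndimage.label(binary)
    target = lbl[tuple(centroid)]
    return lbl == target if target else np.zeros_like(binary)
```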
74
+
75
+ ## 3 Experiments and Results
76
+
77
+ ### 3.1 Dataset
78
+
79
+ We have utilized two publicly available datasets in this work. First, the Kumar dataset [2] contains 30 images of size ${1000} \times {1000}$ which have been extracted from WSIs in The Cancer Genome Atlas (TCGA). This dataset covers seven tissue types with a total of 21,623 segmented nuclei instances. From this dataset, 16 images are set aside for training. The second dataset is the CPM17 dataset [3], which consists of 32 images of sizes between ${500} \times {500}$ and ${700} \times {700}$ and a total of 7,570 nuclei instances. Similarly, 16 images in CPM17 are used to extract patches as the training set.
80
+
81
+ ### 3.2 Experimental results
82
+
83
+ To show the generalizability of NuClick to an unseen dataset, Table 1 presents quantitative results of NuClick when trained on one dataset (first column) and tested on another (second column). Points for testing on the unseen dataset were taken from the centroids of objects in the GT; however, they were randomly jittered by 5 pixels to simulate manual annotation. We have used six evaluation metrics: Aggregated Jaccard Index (AJI), the general Dice similarity coefficient, object-wise Dice $\left( {\mathrm{{Dice}}}_{\mathrm{{Obj}}}\right)$ , Segmentation Quality (SQ), Detection Quality (DQ), and Panoptic Quality (PQ = SQ × DQ). The AJI metric [2] measures the quality of instance-wise predictions, DQ is equivalent to the F1-score and quantifies only the quality of detection, SQ reflects the average intersection over union (IoU) of the detected objects, Dice evaluates the similarity of the overall nuclei segmentation against the GT, and Dice ${}_{obj}$ measures the Dice coefficient for each individually segmented nucleus. Comprehensive information about these metrics can be found in [11].
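To make the relation PQ = SQ × DQ concrete, here is a minimal sketch (our own helper, not from the paper) that computes DQ, SQ, and PQ from the IoUs of matched prediction/ground-truth pairs (pairs are matched at IoU > 0.5, as in the panoptic quality definition):

```python
def detection_segmentation_quality(match_ious, n_pred, n_gt):
    """DQ is the detection F1-score, SQ the mean IoU over true-positive
    matches, and PQ their product.

    match_ious: IoU of each matched (prediction, ground-truth) pair.
    n_pred, n_gt: total numbers of predicted and ground-truth instances.
    """
    tp = len(match_ious)
    fp, fn = n_pred - tp, n_gt - tp
    dq = tp / (tp + 0.5 * fp + 0.5 * fn) if (tp + fp + fn) else 0.0
    sq = sum(match_ious) / tp if tp else 0.0
    return dq, sq, dq * sq
```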
84
+
85
+ We also compare NuClick to two other approaches: U-Net, a deep learning-based (supervised) model, and the watershed, an unsupervised method. For a fair comparison, in the case of U-Net the nuclei detection map (a Gaussian centered on each nucleus centroid) is concatenated to the RGB channels in both the training and testing phases. Moreover, the watershed is applied to the U-Net prediction to obtain instance-wise outputs. In the unsupervised framework, the marker-controlled watershed algorithm is applied to the gradient map of the image using the centroid points as markers.
86
+
87
+ As reported in Table 1, NuClick shows worse performance when it is trained on CPM and then tested on Kumar, which is due to some hard cases (cancerous colon) in the Kumar dataset. Overall, NuClick's performance according to all metrics is much better than the other two baselines, which demonstrates its high generalization capability. In an ideal situation, DQ for NuClick should be equal to 1, as it represents the detection quality of the method and we have already provided the model with GT centroid detections. Nonetheless, DQ for NuClick is less than (yet very close to) 1. The reason is that NuClick does not consider some input points as valid nuclei, or the predicted map is eliminated during the thresholding and post-processing procedures. However, both the detection and segmentation quality of NuClick are much higher than those of the other methods reported in Table 1, which is also evident in the PQ metric. The quality of NuClick-generated annotations is also evident from Fig. 1, which illustrates output masks for different clicked points in five image patches of different organs.
88
+
89
+ <table><tr><td>Model</td><td>Train</td><td>Test</td><td>AJI</td><td>${\text{Dice}}_{\text{obj }}$</td><td>Dice</td><td>SQ</td><td>DQ</td><td>$\mathbf{{PQ}}$</td></tr><tr><td rowspan="2">NuClick</td><td>CPM</td><td>Kumar</td><td>0.7940</td><td>0.7937</td><td>0.8886</td><td>0.8001</td><td>0.9819</td><td>0.7856</td></tr><tr><td>Kumar</td><td>CPM</td><td>0.8278</td><td>0.8278</td><td>0.9088</td><td>0.8361</td><td>0.9981</td><td>0.8180</td></tr><tr><td rowspan="2">U-net+WS</td><td>CPM</td><td>Kumar</td><td>0.7544</td><td>0.7601</td><td>0.8648</td><td>0.7823</td><td>0.9796</td><td>0.7663</td></tr><tr><td>Kumar</td><td>CPM</td><td>0.7812</td><td>0.7844</td><td>0.8903</td><td>0.8074</td><td>0.9945</td><td>0.8029</td></tr><tr><td rowspan="2">Watershed</td><td>-</td><td>Kumar</td><td>0.1892</td><td>0.1660</td><td>0.4023</td><td>0.6936</td><td>0.3965</td><td>0.2805</td></tr><tr><td>-</td><td>CPM</td><td>0.1501</td><td>0.1327</td><td>0.3467</td><td>0.7078</td><td>0.4243</td><td>0.3046</td></tr></table>
90
+
91
+ Table 1. Generalization of NuClick across CPM [3] and Kumar [2] datasets in comparison with other methods.
92
+
93
+ Moreover, to validate the quality of annotations generated by NuClick, another experiment was designed: we first train NuClick on CPM (Kumar) data, and then use the trained NuClick to generate labels for the Kumar (CPM) dataset. Afterwards, we train U-Net [7], FCN8 [12], and Segnet [13] models on NuClick's annotations for the Kumar and CPM datasets. The performances of these models are compared against those of the same models trained on GT annotations. Table 2 reports the results of this analysis. In this table, the title of each main column is the name of the dataset that we apply our models to. The sub-columns GT and NuClick ${}_{\mathrm{{CPM}}/\text{ Kumar }}$ indicate whether GT annotations or NuClick-generated instances were used for training each model. Note that GT annotations are always used for model evaluation.
94
+
95
+ In Table 2, for all networks, we observe relatively similar results for models trained on GT and NuClick annotations. For instance, when testing on the Kumar dataset, the Dice and PQ values of the FCN8 model trained on NuClick ${}_{\mathrm{{CPM}}}$ 's annotations are 0.01 and 0.003 (insignificantly) higher than those of the model trained on GT annotations, respectively. This might be due to the greater uniformity of the NuClick-generated annotations, which eliminates the negative effect of the inter-annotator variations present in GT annotations. This example, and the negligible differences in metric values between the two scenarios in all cases, show that the labels provided by NuClick are good enough to train deep networks. Note that all hyper-parameters and the order of feeding patches during training are the same for all experiments.
96
+
97
+ <table><tr><td rowspan="2"/><td colspan="4">Kumar</td><td colspan="4">CPM</td></tr><tr><td colspan="2">GT</td><td colspan="2">NuClick ${}_{\text{CPM }}$</td><td colspan="2">GT</td><td colspan="2">${\mathrm{{NuClick}}}_{\text{Kumar }}$</td></tr><tr><td>Models</td><td>Dice</td><td>$\mathbf{{PQ}}$</td><td>Dice</td><td>$\mathbf{{PQ}}$</td><td>Dice</td><td>PQ</td><td>Dice</td><td>$\mathbf{{PQ}}$</td></tr><tr><td>U-net</td><td>0.8243</td><td>0.5047</td><td>0.8196</td><td>0.5012</td><td>0.8535</td><td>0.5878</td><td>0.8458</td><td>0.5798</td></tr><tr><td>Segnet</td><td>0.8465</td><td>0.5238</td><td>0.8368</td><td>0.5178</td><td>0.8716</td><td>0.6268</td><td>0.8775</td><td>0.6281</td></tr><tr><td>FCN8</td><td>0.7952</td><td>0.4484</td><td>0.8064</td><td>0.4512</td><td>0.8426</td><td>0.5998</td><td>0.8294</td><td>0.5904</td></tr></table>
98
+
99
+ Table 2. Comparative experiments on the CPM [3] and Kumar [2] test sets with models trained using GT and NuClick's predicted masks. The NuClick subscript indicates the dataset used for its training.
100
+
101
+ ### 3.3 NuClick in Practice
102
+
103
+ LYON19 Challenge LYON19 is a scientific challenge on lymphocyte detection in immunohistochemistry (IHC) sample images. The challenge organizers released a dataset comprising 441 images of IHC stained specimens of breast, colon, and prostate. The most challenging aspect of this task is that the organizers did not release ground truth detection labels and instead asked the participants to use their own data to develop a method. To develop a well-performing supervised method, particularly deep learning based models, annotated data is required. Therefore, NuClick was used to generate labeled data. We transformed the centroid detection problem into a nuclei instance segmentation task: from each image in the released dataset, we randomly sampled a ${256} \times {256}$ patch to collect a training subset of 441 patches. Then, a non-expert user reviewed all the patches and clicked on the positive lymphocytes based on his/her judgment, which took less than 3 hours in total. The image patches and their corresponding clicked positions were then fed into the NuClick framework to construct the instance segmentation maps. After constructing a synthesized ground truth for each image, we developed instance segmentation models for the LYON19 task. Centroids extracted from the output instances of our model ranked ${1}^{\text{st }}$ on the LYON19 challenge leaderboard, achieving an F1-score of 0.7951. This state-of-the-art result proves the fidelity of the NuClick-generated masks once again and shows that NuClick can be used reliably for generating datasets for such tasks.
104
+
105
+ PanNuke Another use case of NuClick is in extending the dataset of our previous work, PanNuke [14], which demonstrates a pipeline for creating large-scale classification and segmentation labels for nuclei. Here, we used NuClick to generate accurate nuclear segmentation masks, which was imperative when labeling thousands of nuclei.
106
+
107
+ ## 4 Conclusion
108
+
109
+ We have proposed a simple and practical method for collecting nuclear annotations in histology images. We showed that one click from the user is enough to segment a nucleus, making it effortless and quick to collect a large number of annotations. Moreover, we have shown that the labels generated by NuClick are of sufficiently high quality to be used for training deep networks.
110
+
111
+ ## References
112
+
113
+ 1. Cheng Lu, David Romo-Bucheli, Xiangxue Wang, Andrew Janowczyk, Shridar Ganesan, Hannah Gilmore, David Rimm, and Anant Madabhushi. Nuclear shape and orientation features from h&e images predict survival in early-stage estrogen receptor-positive breast cancers. Laboratory Investigation, 98(11):1438, 2018.
114
+
115
+ 2. Neeraj Kumar, Ruchika Verma, Sanuj Sharma, Surabhi Bhargava, Abhishek Vahadane, and Amit Sethi. A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE transactions on medical imaging, 36(7):1550-1560, 2017.
116
+
117
+ 3. Quoc Dang Vu, Simon Graham, Tahsin Kurc, Minh Nguyen Nhat To, Muhammad Shaban, Talha Qaiser, Navid Alemi Koohbanani, Syed Ali Khurram, Jayashree Kalpathy-Cramer, Tianhao Zhao, et al. Methods for segmentation and classification of digital microscopy tissue images. Frontiers in bioengineering and biotechnology, 7, 2019.
118
+
119
+ 4. Navid Alemi Koohbanani, Mostafa Jahanifar, Ali Gooya, and Nasir Rajpoot. Nuclear instance segmentation using a proposal-free spatially aware deep learning framework. arXiv preprint arXiv:1908.10356, 2019.
120
+
121
+ 5. Kevis-Kokitsi Maninis, Sergi Caelles, Jordi Pont-Tuset, and Luc Van Gool. Deep extreme cut: From extreme points to object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 616-625, 2018.
122
+
123
+ 6. Xiaodong Yang, Houqiang Li, and Xiaobo Zhou. Nuclei segmentation using marker-controlled watershed, tracking using mean-shift, and kalman filter in time-lapse microscopy. IEEE Transactions on Circuits and Systems I: Regular Papers, 53(11):2405-2414, 2006.
124
+
125
+ 7. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234-241. Springer, 2015.
126
+
127
+ 8. Mostafa Jahanifar, Neda Zamani Tajeddin, Navid Alemi Koohbanani, Ali Gooya, and Nasir Rajpoot. Segmentation of skin lesions and their attributes using multi-scale convolutional neural networks and domain specific augmentations. arXiv preprint arXiv:1809.10243, 2018.
128
+
129
+ 9. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
130
+
131
+ 10. Navid Alemi Koohababni, Mostafa Jahanifar, Ali Gooya, and Nasir Rajpoot. Nuclei detection using mixture density networks. In International Workshop on Machine Learning in Medical Imaging, pages 241-248. Springer, 2018.
132
+
133
+ 11. Simon Graham, Quoc Dang Vu, Shan E Ahmed Raza, Ayesha Azam, Yee Wah Tsang, Jin Tae Kwak, and Nasir Rajpoot. Hover-net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images. Medical Image Analysis, page 101563, 2019.
134
+
135
+ 12. Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015.
136
+
137
+ 13. Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence, 39(12):2481-2495, 2017.
138
+
139
+ 14. Jevgenij Gamper, Navid Alemi Koohbanani, Ksenija Benet, Ali Khuram, and Nasir Rajpoot. Pannuke: An open pan-cancer histology dataset for nuclei instance segmentation and classification. In European Congress on Digital Pathology, pages 11-19. Springer, 2019.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/HJxydABnbS/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,146 @@
1
+ § NUCLICK: FROM CLICKS IN THE NUCLEI TO NUCLEAR BOUNDARIES
2
+
3
+ Mostafa Jahanifar ${}^{\star 2}$ , Navid Alemi Koohbanani ${}^{\star 1,3}$ , and Nasir Rajpoot ${}^{1,3}$
4
+
5
+ ${}^{1}$ Department of Computer Science, University of Warwick, Coventry ${}^{2}$ Department of Research & Development, NRP Co., Tehran, Iran ${}^{3}$ Alan Turing Institute, London, UK
6
+
7
+ Abstract. The best performing nuclear segmentation methods are based on deep learning algorithms that require a large amount of annotated data. However, collecting annotations for nuclear segmentation is a very labor-intensive and time-consuming task. Thereby, providing a tool that can facilitate and speed up this procedure is of great interest. Here we propose a simple yet efficient framework based on convolutional neural networks, named NuClick, which can precisely segment nuclei boundaries by accepting a single point position (or click) inside each nucleus. Based on the clicked positions, inclusion and exclusion maps are generated, comprising 2D Gaussian distributions centered on those positions. These maps serve as guiding signals for the network as they are concatenated to the input image. The inclusion map focuses on the desired nucleus while the exclusion map indicates neighboring nuclei and improves segmentation results in scenes with nuclei clutter. NuClick not only facilitates collecting more annotations from unseen data but also leads to superior segmentation output for deep models. It is also worth mentioning that an instance segmentation model trained on NuClick-generated labels was able to rank ${1}^{\text{ st }}$ in the LYON19 challenge.
8
+
9
+ Keywords: Interactive annotating $\cdot$ nuclei segmentation $\cdot$ instance segmentation $\cdot$ computational pathology
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ Appearance and shape characteristics of nuclei in histology images are important markers for the diagnosis of cancer and predicting patient outcome [1]. To quantify these features, one should first determine the boundaries of the nuclei, which requires lots of time and effort to achieve manually. To this end, automatic segmentation methods play an important role in facilitating this task.
14
+
15
+ Since the emergence of deep learning (DL) methods and their superior performance over classical (feature-based) methods, the need for annotated data has increased significantly. The data-dependent nature of DL methods still imposes a huge burden on humans for providing annotated data. Despite the labor-intensive nature of annotating nuclei within histology images, several datasets have been provided for training deep networks [2,3,4]. The question here is: how we
16
+
17
+ * These authors contributed equally to this work.
18
+
19
+ < g r a p h i c s >
20
+
21
+ Fig. 1. Example outputs of NuClick: the annotator clicks inside the nucleus and the mask is generated by the NuClick model.
22
+
23
+ can use the available annotated datasets to ease extending this knowledge and to reduce the human effort when creating a new dataset for another cancer/tissue type? In the computer vision domain, several methods have been employed to speed up the procedure of collecting annotations for natural images by accepting a few points from the annotator [5]. One of the most efficient models is DEXTR [5], which takes the extreme points (the left-most, right-most, top-most, and bottom-most pixels) of an object as input to extract the mask of the desired object.
24
+
25
+ All these approaches require the user to click on several points on the boundary of an object or draw a bounding box. For nuclear segmentation, providing several points on the boundaries of nuclei is still a high burden, since the annotator must first find the boundary of a nucleus at high magnification and then select several points on it. Moreover, nuclei are small objects, and their number may exceed 400 in a patch of size ${500} \times {500}$ pixels (for example, when there is a dense cluster of lymphocytes), which makes this task more arduous. To the best of our knowledge, there is no similar approach based on DL models for interactive nuclei segmentation in histology images. Some works like [6] used the marker-controlled watershed algorithm to segment nuclei from marked points, which fails in complex histology images.
26
+
27
+ Here, we propose a simple yet effective method for collecting nuclear annotations by asking the user to provide only one point inside each nucleus (examples are depicted in Fig. 1). Clicking one point inside an object is not a demanding task and can be done at low resolution by a non-expert. In summary, our contributions in this work are two-fold: 1) proposing a DL framework that adds two channels comprising guiding signals for the selected nucleus and its neighboring nuclei; 2) showing that the outputs from this framework can be useful in practice and for training deep networks.
28
+
29
+ § 2 METHODOLOGY
30
+
31
+ In the current work, we train the NuClick model on different labeled nuclei datasets. For each dataset, patches are extracted from larger images based on the centroids of the annotated nuclei, and two guiding channels are then created to serve alongside the RGB patches as the network input. The network's parameters are then optimized with a weighted hybrid loss function. During the prediction phase, our framework accepts an image and its marked nuclei (clicked positions) from the user as inputs and generates the instance segmenta-
32
+
33
+ < g r a p h i c s >
34
+
35
+ Fig. 2. NuClick network architecture, comprising convolutional, residual, and multi-scale blocks. Level transitions are performed using MaxPooling and TransposedConv layers.
36
+
37
+ tion of the clicked nuclei in the output. In the rest of this paper, we describe each step in detail.
38
+
39
+ § 2.1 MODEL ARCHITECTURE AND LOSS FUNCTION
40
+
41
+ We have utilized an encoder-decoder architecture, inspired by U-Net [7], which reduces the size of feature maps in the encoding path while increasing the number of channels. The decoding path reverses this effect through several levels and turns those small, enriched feature maps into a single-channel dense prediction. However, unlike the "traditional" U-Net, the NuClick architecture incorporates residual and multi-scale convolutional blocks [8] instead of plain convolutional layers at each level of the encoding and decoding paths. An overview of the proposed NuClick architecture is depicted in Fig. 2. Using residual blocks enables us to train the network with higher learning rates without worrying about the vanishing gradient effect [9]. Furthermore, multi-scale convolutional blocks allow the network to better capture the essence of image structures of different sizes and extract more relevant feature maps, hence boosting performance [8].
42
+
43
+ For training the network, we propose a hybrid weighted loss function based on a soft variant of the Dice similarity coefficient and a weighted binary cross-entropy (1). The Dice part of the loss counters the class imbalance problem during training, as most pixels belong to the background, while the weighted binary cross-entropy penalizes the network when it wrongly segments neighbouring nuclei. Our proposed hybrid loss is as follows:
44
+
45
+ $$
46
+ \mathcal{L} = 1 - \frac{\mathop{\sum }\limits_{i}{p}_{i}{g}_{i} + \varepsilon }{\mathop{\sum }\limits_{i}{p}_{i} + \mathop{\sum }\limits_{i}{g}_{i} + \varepsilon } - \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i}\left( {{g}_{i}\log {p}_{i} + \left( {1 - {g}_{i}}\right) \log \left( {1 - {p}_{i}}\right) }\right) \tag{1}
47
+ $$
48
+
49
+ where $\varepsilon$ is a small number, $n$ is the number of pixels in the image spatial domain, and ${p}_{i}$ , ${g}_{i}$ , and ${w}_{i}$ are the values of the prediction map, the ground-truth mask, and the weight map at pixel $i$ , respectively.
50
+
51
+ < g r a p h i c s >
52
+
53
+ Fig. 3. Guiding signal maps: (a)-(c) show inputs to the NuClick network which are image patch, inclusion map, and exclusion map, respectively, (d) depicts the desired network output (ground truth), and (e) illustrates pixel-wise weight map used in the loss function.
54
+
55
+ The pixel-wise weight map is generated from the ground-truth, where regions of the neighboring nuclei are given 10 times more weight than the desired nucleus (the marked object). To better understand these maps, a simple image patch with the desired nucleus clicked (marked) in it, alongside its related ground-truth and weight map, is illustrated in Fig. 3. We incorporated this weighting scheme into the loss function to put more emphasis on the neighboring nuclei and avoid false segmentation of touching objects. In the alternative scenario, if we set the weight of the desired nucleus higher than that of the other nuclei, the network may be falsely biased toward over-segmentation that spills into neighboring nuclei.
56
+
57
+ § 2.2 GUIDING SIGNAL MAPS
58
+
59
+ As a guiding signal and to incorporate prior knowledge, an extra channel is concatenated with the image at the network input, containing a 2D Gaussian distribution centered on the selected point (similar to [10], where the nucleus centroid was modeled as a Gaussian). We call this guiding signal the inclusion map, as it refers to the nucleus we want to be included in the segmentation output. Adding the inclusion map helps the network achieve a desirable segmentation of the selected nucleus as long as it is isolated. Based on our early experiments, although the inclusion map guides the model to segment the selected nucleus, in the presence of a nuclei cluster the segmentation output might contain neighbouring nuclei too. To avoid this phenomenon and to exclude the neighboring nuclei from the output prediction map, we introduce the exclusion map as the fifth input channel, which can contain multiple 2D Gaussian distributions centered on the clicked positions of the neighboring nuclei (if the annotator provides them). For clarity, we display the inclusion and exclusion maps in Fig. 3(b)-(c) for a sample patch. They can also be seen at the input of the network in Fig. 2, where they are concatenated with the RGB image patch.
60
+
61
+ Thus, the inclusion channel always provides a guiding signal for segmenting the desired nucleus, and if other nuclei in the vicinity of the patch are selected, the exclusion map also provides a signal; otherwise it is an all-zero channel. Please note that during the training phase the inclusion and exclusion maps are generated on-the-fly, based on the augmented (changed) ground-truth mask, while during the test phase they are constructed directly from the user's clicked positions.
62
+
63
+ § 2.3 TRAINING PROCEDURE
64
+
65
+ To optimize the network weights, an Adam optimizer with an initial learning rate of 0.003 was used. NuClick has been trained for 300 epochs with a batch size of 128 on all datasets. At each iteration, the centroid positions of the desired nuclei are randomly jittered, and subsequently, the inclusion and exclusion maps are created on-the-fly. This makes the network more robust to variations in the input positions provided by the annotator.
66
+
67
+ § 2.4 TESTING PROCEDURE
68
+
69
+ During test time, for each input image, the user clicks on the nuclei for annotation, or the centroids are loaded from a file. Afterward, for each available coordinate, a patch of size ${128} \times {128}$ is extracted from the image, and the inclusion and exclusion maps are created as described in the previous section. NuClick predicts a nucleus segmentation map for each click (patch). That prediction map is then converted to a binary map by thresholding, and objects with areas smaller than 10 pixels are removed (based on the size of the smallest object in the dataset). The optimal threshold value, $T = {0.4}$ , is selected by testing a set of candidate values and evaluating the resulting binary maps. Moreover, to remove all objects except the desired nucleus from the binary map, the morphological reconstruction operator is used, which requires a marker and a mask. The marker has all its pixels equal to 0 except for a single pixel at the centroid location, which is set to 1. The binary map plays the role of the mask in the morphological reconstruction. Having all patches predicted and processed, we fill an empty canvas at the origin coordinates of each patch with the processed nuclei masks to generate the final instance segmentation map of the input image.
70
+
71
+ § 3 EXPERIMENTS AND RESULTS
72
+
73
+ § 3.1 DATASET
74
+
75
+ We have utilized two publicly available datasets in this work. First, the Kumar dataset [2] contains 30 images of size ${1000} \times {1000}$ that have been extracted from WSIs in The Cancer Genome Atlas (TCGA). This dataset covers seven tissue types and contains a total of 21623 segmented nuclei instances. From this dataset, 16 images are set aside for training. The second dataset is the CPM17 dataset [3], which consists of 32 images of sizes ranging from ${500} \times {500}$ to ${700} \times {700}$ and a total of 7570 nuclei instances. Similarly, 16 images in CPM17 are used to extract patches as the training set.
76
+
77
+ § 3.2 EXPERIMENTAL RESULTS
78
+
79
+ To show the generalizability of NuClick to an unseen dataset, Table 1 reports quantitative results of NuClick when trained on one dataset (first column) and tested on another (second column). Points for testing on the unseen dataset were taken from the centroids of objects in the GT; however, they were randomly jittered by 5 pixels to simulate manual annotation. We have used six evaluation metrics: Aggregated Jaccard Index (AJI), overall Dice similarity coefficient, object-wise Dice $\left( {\mathrm{{Dice}}}_{\mathrm{{Obj}}}\right)$ , Segmentation Quality (SQ), Detection Quality (DQ), and Panoptic Quality (PQ=SQ×DQ). The AJI metric [2] measures the quality of instance-wise predictions, DQ is equivalent to the F1-score and quantifies only the quality of detection, SQ reflects the average intersection over union (IOU) of the detected objects, Dice evaluates the similarity of the overall nuclei segmentation against the GT, and Dice ${}_{obj}$ measures the Dice coefficient for each individual segmented nucleus. Comprehensive information about these metrics can be found in [11].
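As a concrete illustration of how DQ, SQ and PQ relate, here is a sketch following the standard panoptic quality definition, with hypothetical inputs (a prediction and a GT object count as matched when their IoU exceeds 0.5):

```python
# Sketch of the panoptic quality decomposition PQ = DQ x SQ.
# `matched_ious` holds the IoU of each matched prediction/GT pair (IoU > 0.5).
def panoptic_quality(matched_ious, n_pred, n_gt):
    """Return (DQ, SQ, PQ) from the IoUs of matched prediction/GT pairs."""
    tp = len(matched_ious)                      # matched pairs
    fp = n_pred - tp                            # unmatched predictions
    fn = n_gt - tp                              # unmatched GT objects
    dq = tp / (tp + 0.5 * fp + 0.5 * fn)        # detection quality = F1-score
    sq = sum(matched_ious) / tp if tp else 0.0  # mean IoU over matched objects
    return dq, sq, dq * sq                      # PQ = DQ x SQ
```

For example, two matches with IoUs 0.8 and 0.6 out of three predictions and three GT objects give DQ = 2/3 and SQ = 0.7.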
80
+
81
+ We also compare NuClick to two other approaches: U-Net, which is a deep-learning-based (supervised) model, and the watershed, which is an unsupervised method. For a fair comparison, in the case of U-Net the nuclei detection map (a Gaussian centered on each nucleus centroid) is concatenated to the RGB channels in both the training and testing phases. Moreover, the watershed is applied to the U-Net prediction to obtain instance-wise outputs. In the unsupervised framework, the marker-controlled watershed algorithm is applied to the gradient map of the image using the centroid points as the markers.
82
+
83
+ As reported in Table 1, NuClick shows worse performance when it is trained on CPM and then tested on Kumar, which is due to some hard cases (cancerous colon) in the Kumar dataset. Overall, NuClick's performance, according to all metrics, is much better than that of the other two baselines, which demonstrates the high generalization capability of NuClick. In an ideal situation, DQ for NuClick should equal 1, as it represents the detection quality of the method and we have already provided the model with GT centroid detections. Nonetheless, DQ for NuClick is less than (yet very close to) 1. The reason is that NuClick does not consider some input points as valid nuclei, or the predicted map is eliminated during the thresholding and post-processing procedures. However, both the detection and segmentation quality of NuClick are much higher than those of the other methods reported in Table 1, which is also evident in the PQ metric. The quality of NuClick-generated annotations is also apparent from Fig. 1, which illustrates the output masks for different clicked points in five image patches from different organs.
84
+
85
+ <table><tr><td><b>Model</b></td><td><b>Train</b></td><td><b>Test</b></td><td><b>AJI</b></td><td><b>Dice<sub>obj</sub></b></td><td><b>Dice</b></td><td><b>SQ</b></td><td><b>DQ</b></td><td><b>PQ</b></td></tr><tr><td rowspan="2">NuClick</td><td>CPM</td><td>Kumar</td><td>0.7940</td><td>0.7937</td><td>0.8886</td><td>0.8001</td><td>0.9819</td><td>0.7856</td></tr><tr><td>Kumar</td><td>CPM</td><td>0.8278</td><td>0.8278</td><td>0.9088</td><td>0.8361</td><td>0.9981</td><td>0.8180</td></tr><tr><td rowspan="2">U-net+WS</td><td>CPM</td><td>Kumar</td><td>0.7544</td><td>0.7601</td><td>0.8648</td><td>0.7823</td><td>0.9796</td><td>0.7663</td></tr><tr><td>Kumar</td><td>CPM</td><td>0.7812</td><td>0.7844</td><td>0.8903</td><td>0.8074</td><td>0.9945</td><td>0.8029</td></tr><tr><td rowspan="2">Watershed</td><td>-</td><td>Kumar</td><td>0.1892</td><td>0.1660</td><td>0.4023</td><td>0.6936</td><td>0.3965</td><td>0.2805</td></tr><tr><td>-</td><td>CPM</td><td>0.1501</td><td>0.1327</td><td>0.3467</td><td>0.7078</td><td>0.4243</td><td>0.3046</td></tr></table>
109
+ Table 1. Generalization of NuClick across CPM [3] and Kumar [2] datasets in comparison with other methods.
110
+
111
+ Moreover, to validate the quality of the annotations generated by NuClick, another experiment was designed: we first train NuClick on the CPM (Kumar) data and then use the trained NuClick to generate labels for the Kumar (CPM) dataset. Afterwards, we train U-Net [7], FCN8 [12], and Segnet [13] models on NuClick's annotations for the Kumar and CPM datasets. The performance of these models is compared against that of the same models trained on GT annotations. Table 2 reports the results of this analysis. In this table, the title of each main column is the name of the dataset the models are applied to. The GT and NuClick ${}_{\mathrm{{CPM}}/\text{ Kumar }}$ sub-columns indicate whether GT annotations or NuClick-generated instances were used for training each model. Note that GT annotations are always used for model evaluation.
112
+
113
+ In Table 2, for all networks, we observe relatively similar results for models trained on GT and on NuClick annotations. For instance, when testing on the Kumar dataset, the Dice and PQ values of the FCN8 model trained on NuClick ${}_{\mathrm{{CPM}}}$ 's annotations are 0.01 and 0.003 (insignificantly) higher, respectively, than those of the model trained on GT annotations. This might be due to the greater uniformity of the NuClick-generated annotations, which eliminates the negative effect of the inter-annotator variation present in the GT annotations. This example, together with the negligible differences in metric values between the two scenarios in all cases, shows that the labels provided by NuClick are good enough to train deep networks. Note that all hyper-parameters and the order of feeding patches during training are the same for all experiments.
114
+
115
+ <table><tr><td/><td colspan="4"><b>Kumar</b></td><td colspan="4"><b>CPM</b></td></tr><tr><td/><td colspan="2"><b>GT</b></td><td colspan="2"><b>NuClick<sub>CPM</sub></b></td><td colspan="2"><b>GT</b></td><td colspan="2"><b>NuClick<sub>Kumar</sub></b></td></tr><tr><td><b>Models</b></td><td><b>Dice</b></td><td><b>PQ</b></td><td><b>Dice</b></td><td><b>PQ</b></td><td><b>Dice</b></td><td><b>PQ</b></td><td><b>Dice</b></td><td><b>PQ</b></td></tr><tr><td>U-net</td><td>0.8243</td><td>0.5047</td><td>0.8196</td><td>0.5012</td><td>0.8535</td><td>0.5878</td><td>0.8458</td><td>0.5798</td></tr><tr><td>Segnet</td><td>0.8465</td><td>0.5238</td><td>0.8368</td><td>0.5178</td><td>0.8716</td><td>0.6268</td><td>0.8775</td><td>0.6281</td></tr><tr><td>FCN8</td><td>0.7952</td><td>0.4484</td><td>0.8064</td><td>0.4512</td><td>0.8426</td><td>0.5998</td><td>0.8294</td><td>0.5904</td></tr></table>
136
+ Table 2. Comparative experiments on the CPM [3] and Kumar [2] test sets with models trained using GT and NuClick's predicted masks. The NuClick subscript indicates the dataset used for its training.
137
+
138
+ § 3.3 NUCLICK IN PRACTICE
139
+
140
+ LYON19 Challenge LYON19 is a scientific challenge on lymphocyte detection in immunohistochemistry (IHC) sample images. The challenge organizers released a dataset comprising 441 images of IHC-stained specimens of breast, colon, and prostate. The most challenging aspect of this task is that the organizers did not release ground-truth detection labels for the dataset and instead asked the participants to use their own data to develop a method. To develop a well-performing supervised method, particularly a deep-learning-based model, annotated data is required. Therefore, NuClick was used to generate labeled data. We transformed the centroid detection problem into a nuclei instance segmentation task: from each image in the released dataset, we randomly sampled a ${256} \times {256}$ patch to collect a training subset of 441 patches. Then, a non-expert user reviewed all the patches and clicked on the positive lymphocytes based on his/her judgment, which took no more than 3 hours in total. The image patches and their corresponding clicked positions were then fed into the NuClick framework to construct the instance segmentation maps. After constructing a synthesized ground truth for each image, we developed instance segmentation models for the LYON19 task. The centroids extracted from the output instances of our model ranked ${1}^{\text{ st }}$ on the LYON19 challenge leaderboard, achieving an F1-score of 0.7951. This state-of-the-art result proves the fidelity of the NuClick-generated masks once again and shows that NuClick can be used reliably to generate datasets for such tasks.
141
+
142
+ PanNuke Another use case of NuClick is extending the dataset from our previous work, PanNuke [14], which demonstrates a pipeline for creating large-scale nuclei classification and segmentation labels. Here, we used NuClick to generate accurate nuclear segmentation masks, which was imperative when labeling thousands of nuclei.
143
+
144
+ § 4 CONCLUSION
145
+
146
+ We have proposed a simple and practical method for collecting nuclear annotations in histology images. We showed that one click from the user is enough to segment a nucleus, which makes it effortless and quick to collect a large number of annotations. Moreover, we have shown that the labels generated by NuClick are of high enough quality to be used for training deep networks.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/HklExX79-S/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,179 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Graph-based Classification of Intestinal Glands in Colorectal Cancer Tissue Images
2
+
3
+ Linda Studer ${}^{\star 1,2}$ , Shushan Toneyan ${}^{\star 3}$ , Inti Zlobec ${}^{3}$ , Heather Dawson ${}^{3}$ , and Andreas Fischer ${}^{1,2}$
4
+
5
+ ${}^{1}$ DIVA Research Group, University of Fribourg
6
+
7
+ firstname.lastname@unifr.ch
8
+
9
+ ${}^{2}$ Institute of Complex Systems (iCoSys)
10
+
11
+ University of Applied Sciences and Arts Western Switzerland
12
+
13
+ firstname.lastname@hefr.ch
14
+
15
+ ${}^{3}$ Institute of Pathology, University of Bern,
16
+
17
+ firstname.lastname@pathology.unibe.ch
18
+
19
+ Abstract. Pathologists study tissue morphology in order to correctly diagnose diseases such as colorectal cancer. This task can be very time consuming, and automated systems can greatly improve the precision and speed with which a diagnosis is established. Explainable algorithms and results are key to successful implementation of these methods into routine diagnostics in the medical field. In this paper, we propose a graph-based approach for intestinal gland classification. It leverages the high representational power of graphs for describing geometrical and topological properties of the glands. A novel, publicly available image and graph dataset is introduced based on cell segmentation of healthy and dysplastic H&E stained intestinal glands from pT1 colorectal cancer. The graphs are compared using an approximate graph edit distance and are classified using the k-nearest neighbours algorithm. With this method, we achieve a classification accuracy of 83.3%.
20
+
21
+ Keywords: Intestinal Gland Classification $\cdot$ Colorectal Cancer $\cdot$ Digital Pathology $\cdot$ Graph Matching $\cdot$ Graph Edit Distance.
22
+
23
+ ## 1 Introduction
24
+
25
+ In diagnosis and treatment of colorectal cancer the observations of pathologists are crucial for the characterisation of the stage of the disease and subsequent predictions of its progression [18]. The precise morphological characteristics of the deformation depend on the type of cancer and the stage of cancer progression [4]. In order to expedite diagnostics and reduce errors and variability between experts, computer-aided diagnosis (CAD) can offer great support in the diagnostic process.
26
+
27
+ In the initial stages of cancer, such as pT1, carcinomas can frequently be observed originating from polyps [4]. In such cases it is possible to observe normal tissue, dysplasia and carcinoma on one slide. In normal tissue, the glands have parallel and flat lumina lined with a single layer of cells in the upper parts of the gland. The cells have a circular or oval shape and a homogeneously coloured nucleus, which is pushed to the outer side of the gland by the mucus in the cytoplasm. In dysplastic glands, the regular and ordered configuration is disrupted, and their shape varies greatly, especially when a mix of different dysplasia types (such as low- and high-grade) is present.
28
+
29
+ ---
30
+
31
+ * These authors contributed equally to this work.
32
+
33
+ ---
34
+
35
+ ![01963a4d-6595-7da2-8e19-bac18dde21c3_1_392_357_1030_215_0.jpg](images/01963a4d-6595-7da2-8e19-bac18dde21c3_1_392_357_1030_215_0.jpg)
36
+
37
+ Fig. 1: Examples of normal (a) and dysplastic (b) colon mucosa stained with H&E. The yellow arrowheads point to selected glands. Normal glands usually have a regular round or oval shape while dysplastic glands are more irregularly shaped.
38
+
39
+ Histological images show the microanatomy of a tissue sample. They contain many different cell types, cell compartments and tissues which makes them very complex to analyse. In their diagnosis, pathologists consider morphological changes in tissue, spatial relationship between cell (sub-)types, density of certain cells, and more. Graph-based methods, which are able to capture geometrical and topological properties of the glands, offer a very natural approach to attempt an automated analysis of such data [15].
40
+
41
+ Graphs have been used for a variety of tasks in digital pathology, such as classification and exploratory analysis [15] as well as segmentation [17] and content-based image retrieval (CBIR) [14]. There is also a great variety of types of graphs in use, such as O'Callaghan neighbourhood graphs, attributed relational graphs (ARGs) and cell graphs [15]. Cell graphs in particular have been successfully used to support cancer diagnosis [3,9].
42
+
43
+ In this paper, we propose a gland classification method based on labelled cell graphs and graph edit distance (GED) [6,7], which transforms one cell graph into another using deletion, insertion, and substitution of individual cells. In contrast to other graph matching methods [5], such as spectral methods [11] or graph kernels [8], GED has the advantage that it is applicable to any type of labelled graphs. Furthermore, it provides an explicit mapping of cells from one gland to cells of the other gland, which may help human experts comprehend why the algorithm predicts high or low gland similarity (see for example Figure 3). For experimental evaluation, we have created a graph dataset that contains cell graphs of healthy and dysplastic H&E stained intestinal glands from pT1 colorectal cancer. The dataset, which has been made publicly available, is used to evaluate the classification performance of our GED-based approach using a k-nearest neighbour (k-NN) classifier. The performance is compared to results reported in the literature.
44
+
45
+ ![01963a4d-6595-7da2-8e19-bac18dde21c3_2_383_357_1038_365_0.jpg](images/01963a4d-6595-7da2-8e19-bac18dde21c3_2_383_357_1038_365_0.jpg)
46
+
47
+ Fig. 2: Examples of the cell segmentation (left) and graph-representation (right) of a normal and a dysplastic gland. Each detected cell (circled in red) is represented as a node in the graph (in orange). The nodes are connected with edges (in green) based on the physical distance between them.
48
+
49
+ ## 2 Graph-Based Gland Classification
50
+
51
+ This section introduces the novel, publicly available gland classification dataset and provides more details about the proposed method for graph-based gland classification.
52
+
53
+ ### 2.1 pT1 Gland Graph (GG-pT1) Dataset
54
+
55
+ The images used to create the graph dataset are taken from H&E stained whole slide images (WSIs) of tissue samples from pT1 [4] cancer patients. The glands are cropped from images that show normal tissue, dysplasia and carcinoma on one slide. The crops are classified by an expert pathologist and then used to build the graphs. In total there are 520 graphs from 20 different patients. One WSI per patient was selected based on image quality, and 26 well-defined glands (13 dysplastic and 13 normal) were manually annotated per slide.
56
+
57
+ The cells of each gland are segmented using QuPath [1]. The same parameters are used for all the images. 33 features are exported from QuPath for each cell, which are used to label the nodes. Available features based on the cell are the eosin stain (mean, standard deviation (SD), min, max), circularity, eccentricity, perimeter, area and diameter (min, max). Features based on the nucleus are circularity, eosin stain (mean, std, min, max, range, sum), hematoxylin stain (mean, std, min, max, range, sum), diameter (min, max), area, perimeter and eccentricity. Further features are the eosin stain (mean, std, min, max) of the cytoplasm and the nucleus/cell area ratio.
58
+
59
+ Figure 2 shows example images and graphs from the dataset, which is publicly available ${}^{1}$ . It includes all images, annotation masks, and graph features, as well as the reference, validation and test splits used in this paper.
60
+
61
+ ---
62
+
63
+ https://github.com/LindaSt/pT1-Gland-Graph-Dataset
64
+
65
+ ---
66
+
67
+ ![01963a4d-6595-7da2-8e19-bac18dde21c3_3_427_330_942_330_0.jpg](images/01963a4d-6595-7da2-8e19-bac18dde21c3_3_427_330_942_330_0.jpg)
68
+
69
+ Fig. 3: GED transformations between two normal glands. The black arrows indicate node label substitution, the nodes circled in black mark deleted/inserted nodes.
70
+
71
+ ### 2.2 Graph-Based Representation
72
+
73
+ The formal mathematical definition of a graph $G$ is given as a tuple of $\left( {V, E,\alpha ,\beta }\right)$ , where $V$ is the finite set of nodes (or vertices), $E$ is the set of edges, $\alpha$ is the node labelling function and $\beta$ is the edge labelling function.
74
+
75
+ We use so-called cell graphs [9], in which each cell is represented by a node with different attributed features $\alpha \left( v\right) = \left( {{x}_{1},\ldots ,{x}_{n}}\right)$ with $n \leq {33}$ (see section 2.1 for the complete feature list). Figure 2 shows examples of cell graphs of glands. Because the different features all have a different range, they are normalised using the z-normalisation which adjusts each feature value $x$ such that $\widehat{x} = \frac{x - \mu }{\sigma }$ .
76
+
77
+ For each node, we insert edges to its two spatially closest neighbour nodes. No edge features are used. Node features are selected using the sequential forward selection method [10]. This process starts with no features and iteratively adds the best feature until there is no further improvement in the classification accuracy. We also establish a baseline based on the unlabelled graph.
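The construction just described can be sketched as follows. This is an illustrative implementation, not the authors' code (function and variable names are ours): it z-normalises the node features and connects each cell to its two spatially closest neighbours, storing edges as undirected pairs.

```python
# Sketch of cell-graph construction: z-normalise node labels, then add an
# edge from every cell to each of its k=2 spatially nearest neighbours.
import numpy as np

def build_cell_graph(coords, feats, k=2):
    """Return z-normalised node labels and undirected k-nearest-neighbour edges."""
    coords = np.asarray(coords, dtype=float)
    feats = np.asarray(feats, dtype=float)
    feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)   # z-normalisation
    # pairwise Euclidean distances between cell centroids
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)              # a cell is not its own neighbour
    edges = set()
    for i in range(len(coords)):
        for j in np.argsort(dist[i])[:k]:       # two spatially closest neighbours
            edges.add((min(i, int(j)), max(i, int(j))))
    return feats, sorted(edges)
```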
78
+
79
+ ### 2.3 Graph Edit Distance (GED)
80
+
81
+ GED is an error-tolerant measurement of similarity between two graphs [7]. It provides a model for transforming a source graph into a target graph instead of searching for an exact match between graphs or their sub-graphs. Figure 3 shows an example of such a transformation.
82
+
83
+ GED is defined as the minimum cost of transforming one graph into the other. There are three types of edit operations, performed on nodes as well as edges: insertion, deletion and label substitution. For each of these operations a cost function needs to be specified. We consider the Euclidean cost model, which uses a fixed cost for deletion/insertion and the Euclidean distance for substituting node labels. Since we do not use edge labels in our cell graphs, the cost function for edge label substitution does not need to be defined.
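The Euclidean cost model can be written down directly. In this sketch, `TAU` is a placeholder for the fixed insertion/deletion cost, which in the paper is tuned by grid search on the validation set:

```python
# Sketch of the Euclidean cost model; TAU stands in for the insertion/deletion
# cost tuned on the validation set (its value here is only illustrative).
import math

TAU = 1.0

def node_substitution_cost(label_a, label_b):
    """Euclidean distance between two node label vectors."""
    return math.dist(label_a, label_b)

def node_deletion_cost(label, tau=TAU):
    return tau          # fixed cost, independent of the deleted node's label

def node_insertion_cost(label, tau=TAU):
    return tau          # fixed cost, independent of the inserted node's label
```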
84
+
85
+ The computational complexity of the exact GED calculation increases exponentially as a function of the number of nodes in the graphs. However, heuristic methods are available that can compute an approximate solution. We use an improved version of the bipartite graph-matching method (BP2) [6], which runs in quadratic time and calculates an upper bound of GED.
86
+
87
+ Table 1: Classification accuracies achieved by the baseline and after forward search selection of the node features. The mean accuracy along with the standard deviation of a 4-fold cross-validation is reported.
88
+
89
+ <table><tr><td/><td>NODE FEATURES</td><td>Accuracy</td></tr><tr><td>BASELINE</td><td>NONE</td><td>${71.7} \pm {2.8}\%$</td></tr><tr><td>Optimized Graph</td><td>CYTOPLASM: EOSIN MIN NUCLEUS: HEMATOXYLIN MEAN, MIN, MAX</td><td>${83.3} \pm {1.7}\%$</td></tr></table>
90
+
91
+ ### 2.4 K-Nearest Neighbour Classification
92
+
93
+ The classification of the glands is performed using the k-NN classifier, which assigns to the object being classified the most frequent label among its $k$ most similar reference objects [2]. In our case, we use the three closest $\left( {k = 3}\right)$ gland graphs in the reference set, in terms of the GED, to classify a new gland graph.
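Given the pre-computed GEDs from a query graph to every reference graph, the decision rule is only a few lines. This is an illustrative sketch with hypothetical labels:

```python
# Sketch of k-NN classification over pre-computed (approximate) GED values:
# take the k closest reference graphs and return their majority label.
from collections import Counter

def knn_classify(dists_to_refs, ref_labels, k=3):
    """Majority label among the k reference graphs closest in GED."""
    nearest = sorted(range(len(dists_to_refs)),
                     key=dists_to_refs.__getitem__)[:k]
    return Counter(ref_labels[i] for i in nearest).most_common(1)[0][0]
```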
94
+
95
+ ## 3 Experimental Evaluation
96
+
97
+ Our goal is to classify intestinal glands in the novel GG-pT1 dataset as either normal or dysplastic by using graph-based representations. Graphs are compared to a reference dataset using the GED and then classified using the k-NN algorithm. We also investigate the impact of node feature selection.
98
+
99
+ ### 3.1 Setup
100
+
101
+ We split the dataset into four parts and evaluate the performance with a 4-fold cross-validation. Two parts are used as the reference set and one each for the validation and test set (details here ${}^{2}$ ). The reference set is used for the classification. For each input graph, the GED is computed to all graphs in the reference set, and k-NN is then used to classify the graph based on these distances. The validation set is used to optimise the insertion/deletion costs for nodes and edges using a grid search over 25 parameter settings (for the specific parameters see here ${}^{2}$ ) and to optimise the node features using forward search.
102
+
103
+ ### 3.2 Results
104
+
105
+ Table 1 gives an overview of the results and the selected node features. The achieved accuracy is ${83.3}\%$ . The forward search selected four attributes: three are based on the nucleus hematoxylin stain and one on the cytoplasm eosin stain. Adding these four features to the nodes increased the performance by 11.6 percentage points compared to the unlabelled baseline.
106
+
107
+ ---
108
+
109
+ ${}^{2}$ https://bit.ly/2xDuRcV
110
+
111
+ ---
112
+
113
+ ![01963a4d-6595-7da2-8e19-bac18dde21c3_5_385_357_1037_231_0.jpg](images/01963a4d-6595-7da2-8e19-bac18dde21c3_5_385_357_1037_231_0.jpg)
114
+
115
+ Fig. 4: Examples of dysplastic glands misclassified as normal (a) and normal glands misclassified as dysplastic (b).
116
+
117
+ ### 3.3 Discussion
118
+
119
+ We achieve slightly better results on our dataset than the only other published results using graph-based methods on a closely related (but not publicly available) colorectal cancer image dataset [13]. Ozdemir et al. report results achieved by hybrid models that use different combinations of structural and statistical features. One variant uses GED embedding coupled with a Support Vector Machine (SVM) to classify glands as normal, low-grade, or high-grade dysplastic and achieves an overall accuracy of 81.72%. However, they use a different graph representation: their graphs are not based on cells; instead, they identify circular nucleus and non-nucleus objects as nodes and label them with additional features based on the expansion order in a breadth-first search.
120
+
121
+ The precision is higher among the normal glands (see Figure 4 for examples). Looking at the misclassified dysplastic glands, most of them show features that are very distinctive of dysplastic glands, such as chromatin structure and nuclear polarity, which are currently not available as node labels. Adding these features could thus improve the classification accuracy. Many of them also have a more rounded shape, which is more similar to the shape of normal glands. Some of the graphs also show issues with the cell segmentation, which reduces the representational power of the graph. A few of them are very low-grade dysplasia and thus have features very similar to normal glands. Improving the cell segmentation method and adding more key features could overcome these misclassification errors. Extending the dataset should also improve the performance. For an accurate classification, it is very important that the reference set contains a strong representation of the different slicing planes and varieties in appearance.
122
+
123
+ Figure 3 shows an example of the matching of two graphs. We can see which individual cells from each graph are matched by label substitution and therefore have similar local features. Some cells are not matched and are inserted/deleted during the transformation because they are too different from any of the cells in the other graph. This illustration of the matching has the potential to help humans better understand the result of the automatic classification and thus to help with explainability.
124
+
125
+ To create this dataset, the gland selection was performed manually. For this system to be useful in routine diagnostics, the gland detection and segmentation process should be automatised. This is another focus for future work.
126
+
127
+ ## 4 Conclusion
128
+
129
+ In this preliminary study we achieve an accuracy of 83.3% for intestinal gland classification (normal or dysplastic) using a graph-representation with distance-based edges and four node features coupled with a GED and k-NN classification. This result is comparable to state-of-the-art results for graph-based gland classification reported on a private dataset [13].
130
+
131
+ There are a number of possibilities to further improve the classification results presented here. On the graph extraction level, improving the cell segmentation helps to create more precise graph representations. There is also a vast range of different graph types that can be explored, including ones that cover more tissue types and areas, such as the lumen of the glands; many of these have already been successfully used to analyse histopathological data [15]. Exploring different features for the nodes and edges can also help to improve the performance. There is a wide range of possibilities here, and the GED is well suited for this task, as it is able to handle any type of labelled graph. Using different classifiers, such as SVMs [13], or even combining different types of graph representations and classifiers into an ensemble may also lead to a higher accuracy. Another option, and one of the newest techniques for graph classification, is geometric deep learning, which is based on graph neural networks [12].
132
+
133
+ Future work also includes more experimental evaluations, such as on the publicly available Gland Segmentation Challenge Contest (GlaS) [16] dataset, which is an image dataset of intestinal glands from colorectal cancer tissue. So far we have not been able to obtain a useful cell segmentation on this dataset with our methods, as the images are of much lower resolution than in our dataset.
134
+
135
+ Furthermore, we plan to conduct a study with expert pathologists to evaluate which kinds of graphs and graph matching methods are most intuitive and understandable, in order to improve the explainability of our method. We also want to establish an expert pathologist baseline to analyse the inter-observer variability and include the experts' knowledge in the feature selection process.
136
+
137
+ ## Acknowledgment
138
+
139
+ The work presented in this paper has been partially supported by the Rising Tide foundation with the grant number CCR-18-130.
140
+
141
+ ## References
142
+
143
+ 1. Bankhead, P., Loughrey, M.B., Fernández, J.A., Dombrowski, Y., McArt, D.G., Dunne, P.D., McQuaid, S., Gray, R.T., Murray, L.J., Coleman, H.G., et al.:
144
+
145
+ Qupath: Open source software for digital pathology image analysis. Scientific reports 7(1), 16878 (2017)
146
+
147
+ 2. Beyer, K., Goldstein, J., Ramakrishnan, R., Shaft, U.: When is nearest neighbor meaningful? In: International conference on database theory. pp. 217-235. Springer (1999)
148
+
149
+ 3. Bilgin, C., Demir, C., Nagi, C., Yener, B.: Cell-graph mining for breast tissue modeling and classification. In: 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. pp. 5311-5314. IEEE (2007)
150
+
151
+ 4. Bosman, F.T., Carneiro, F., Hruban, R.H., Theise, N.D., et al.: WHO classification of tumours of the digestive system. No. Ed. 4, World Health Organization (2010)
152
+
153
+ 5. Conte, D., Foggia, P., Sansone, C., Vento, M.: Thirty years of graph matching in pattern recognition. International journal of pattern recognition and artificial intelligence $\mathbf{{18}}\left( {03}\right) ,{265} - {298}\left( {2004}\right)$
154
+
155
+ 6. Fischer, A., Riesen, K., Bunke, H.: Improved quadratic time approximation of graph edit distance by combining hausdorff matching and greedy assignment. Pattern Recognition Letters 87, 55-62 (2017)
156
+
157
+ 7. Gao, X., Xiao, B., Tao, D., Li, X.: A survey of graph edit distance. Pattern Analysis and applications $\mathbf{{13}}\left( 1\right) ,{113} - {129}\left( {2010}\right)$
158
+
159
+ 8. Gärtner, T., Flach, P., Wrobel, S.: On graph kernels: Hardness results and efficient alternatives. In: Learning theory and kernel machines, pp. 129-143. Springer (2003)
160
+
161
+ 9. Gunduz, C., Yener, B., Gultekin, S.H.: The cell graphs of cancer. Bioinformatics 20(suppl_1), i145-i151 (2004)
162
+
163
+ 10. Guyon, I., Elisseeff, A.: An introduction to variable and feature selection. Journal of machine learning research3(Mar),1157-1182 (2003)
164
+
165
+ 11. Leordeanu, M., Hebert, M.: A spectral technique for correspondence problems using pairwise constraints. In: Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1. vol. 2, pp. 1482-1489. IEEE (2005)
166
+
167
+ 12. Monti, F., Boscaini, D., Masci, J., Rodola, E., Svoboda, J., Bronstein, M.M.: Geometric deep learning on graphs and manifolds using mixture model cnns. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5115-5124 (2017)
168
+
169
+ 13. Ozdemir, E., Gunduz-Demir, C.: A hybrid classification model for digital pathology using structural and statistical pattern recognition. IEEE Transactions on Medical Imaging $\mathbf{{32}}\left( 2\right) ,{474} - {483}\left( {2012}\right)$
170
+
171
+ 14. Sharma, H., Alekseychuk, A., Leskovsky, P., Hellwich, O., Anand, R., Zerbe, N., Hufnagl, P.: Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics. Diagnostic pathology $\mathbf{7}\left( 1\right) ,{134}\left( {2012}\right)$
172
+
173
+ 15. Sharma, H., Zerbe, N., Lohmann, S., Kayser, K., Hellwich, O., Hufnagl, P.: A review of graph-based methods for image analysis in digital histopathology. Diagnostic pathology (2015)
174
+
175
+ 16. Sirinukunwattana, K., Pluim, J.P., Chen, H., Qi, X., Heng, P.A., Guo, Y.B., Wang, L.Y., Matuszewski, B.J., Bruni, E., Sanchez, U., et al.: Gland segmentation in colon histology images: The glas challenge contest. Medical image analysis 35, 489-502 (2017)
176
+
177
+ 17. Ta, V.T., Lézoray, O., Elmoataz, A., Schüpp, S.: Graph-based tools for microscopic cellular image segmentation. Pattern Recognition 42(6), 1113-1125 (2009)
178
+
179
+ 18. Zhang, Z., Chen, P., McGough, M., Xing, F., Wang, C., Bui, M., Xie, Y., Sapkota, M., Cui, L., Dhillon, J., et al.: Pathologist-level interpretable whole-slide cancer diagnosis with deep learning. Nature Machine Intelligence $\mathbf{1}\left( 5\right) ,{236}\left( {2019}\right)$
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/HklExX79-S/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,137 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § GRAPH-BASED CLASSIFICATION OF INTESTINAL GLANDS IN COLORECTAL CANCER TISSUE IMAGES
2
+
3
+ Linda Studer ${}^{\star 1,2}$ , Shushan Toneyan ${}^{\star 3}$ , Inti Zlobec ${}^{3}$ , Heather Dawson ${}^{3}$ , and Andreas Fischer ${}^{1,2}$
4
+
5
+ ${}^{1}$ DIVA Research Group, University of Fribourg
6
+
7
+ firstname.lastname@unifr.ch
8
+
9
+ ${}^{2}$ Institute of Complex Systems (iCoSys)
10
+
11
+ University of Applied Sciences and Arts Western Switzerland
12
+
13
+ firstname.lastname@hefr.ch
14
+
15
+ ${}^{3}$ Institute of Pathology, University of Bern,
16
+
17
+ firstname.lastname@pathology.unibe.ch
18
+
19
+ Abstract. Pathologists study tissue morphology in order to correctly diagnose diseases such as colorectal cancer. This task can be very time consuming, and automated systems can greatly improve the precision and speed with which a diagnosis is established. Explainable algorithms and results are key to successful implementation of these methods into routine diagnostics in the medical field. In this paper, we propose a graph-based approach for intestinal gland classification. It leverages the high representational power of graphs for describing geometrical and topological properties of the glands. A novel, publicly available image and graph dataset is introduced based on cell segmentation of healthy and dysplastic H&E stained intestinal glands from pT1 colorectal cancer. The graphs are compared using an approximate graph edit distance and are classified using the k-nearest neighbours algorithm. With this method, we achieve a classification accuracy of 83.3%.
20
+
21
+ Keywords: Intestinal Gland Classification $\cdot$ Colorectal Cancer $\cdot$ Digital Pathology $\cdot$ Graph Matching $\cdot$ Graph Edit Distance.
22
+
23
+ § 1 INTRODUCTION
24
+
25
+ In the diagnosis and treatment of colorectal cancer, the observations of pathologists are crucial for characterising the stage of the disease and predicting its progression [18]. The precise morphological characteristics of the deformation depend on the type of cancer and the stage of cancer progression [4]. In order to expedite diagnostics and reduce errors and variability between experts, computer-aided diagnosis (CAD) can offer great support in the diagnostic process.
26
+
27
+ In the initial stages of cancer, such as pT1, carcinomas can frequently be observed originating from polyps [4]. In such cases it is possible to observe normal tissue, dysplasia and carcinoma on one slide. In normal tissue the glands have parallel and flat lumina lined with a single layer of cells in the upper parts of the gland. The cells have a circular or oval shape and a homogeneously coloured nucleus that is pushed to the outer side of the gland by the mucus in the cytoplasm. In dysplastic glands the regular and ordered configuration is disrupted, and their shape varies greatly, especially when a mix of different dysplasia types (such as low- and high-grade) is present.
28
+
29
+ * These authors contributed equally to this work.
30
+
31
+ < g r a p h i c s >
32
+
33
+ Fig. 1: Examples of normal (a) and dysplastic (b) colon mucosa stained with H&E. The yellow arrowheads point to selected glands. Normal glands usually have a regular round or oval shape while dysplastic glands are more irregularly shaped.
34
+
35
+ Histological images show the microanatomy of a tissue sample. They contain many different cell types, cell compartments and tissues which makes them very complex to analyse. In their diagnosis, pathologists consider morphological changes in tissue, spatial relationship between cell (sub-)types, density of certain cells, and more. Graph-based methods, which are able to capture geometrical and topological properties of the glands, offer a very natural approach to attempt an automated analysis of such data [15].
36
+
37
+ Graphs have been used for a variety of tasks in digital pathology, such as classification and exploratory analysis [15] as well as segmentation [17] and content-based image retrieval (CBIR) [14]. There are also a great variety of types of graphs that are being used, such as O'Callaghan neighbourhood graphs, attributed relational graphs (ARG) and cell graphs [15]. Especially cell graphs have been successfully used to support cancer diagnosis [3,9].
38
+
39
+ In this paper, we propose a gland classification method based on labelled cell graphs and graph edit distance (GED) [6,7], which transforms one cell graph into another using deletion, insertion, and substitution of individual cells. In contrast to other graph matching methods [5], such as spectral methods [11] or graph kernels [8], GED has the advantage that it is applicable to any type of labelled graphs. Furthermore, it provides an explicit mapping of cells from one gland to cells of the other gland, which may help human experts comprehend why the algorithm predicts high or low gland similarity (see for example Figure 3). For experimental evaluation, we have created a graph dataset that contains cell graphs of healthy and dysplastic H&E stained intestinal glands from pT1 colorectal cancer. The dataset, which has been made publicly available, is used to evaluate the classification performance of our GED-based approach using a k-nearest neighbour (k-NN) classifier. The performance is compared to results reported in the literature.
40
+
41
+ < g r a p h i c s >
42
+
43
+ Fig. 2: Examples of the cell segmentation (left) and graph-representation (right) of a normal and a dysplastic gland. Each detected cell (circled in red) is represented as a node in the graph (in orange). The nodes are connected with edges (in green) based on the physical distance between them.
44
+
45
+ § 2 GRAPH-BASED GLAND CLASSIFICATION
46
+
47
+ This section introduces the novel, publicly available gland classification dataset and provides more details about the proposed method for graph-based gland classification.
48
+
49
+ § 2.1 PT1 GLAND GRAPH (GG-PT1) DATASET
50
+
51
+ The images used to create the graph dataset are from H&E stained whole slide images (WSIs) of tissue samples taken from pT1 [4] cancer patients. The glands are cropped from images that contain normal tissue, dysplasia and carcinoma on one slide. The crops are classified by an expert pathologist and then used to build the graphs. In total there are 520 graphs from 20 different patients. One WSI per patient was selected based on image quality, and 26 well-defined glands (13 dysplastic and 13 normal) were manually annotated per patient.
52
+
53
+ The cells of each gland are segmented using QuPath [1]. The same parameters are used for all the images. 33 features are exported from QuPath for each cell, which are used to label the nodes. Available features based on the cell are the eosin stain (mean, standard deviation (SD), min, max), circularity, eccentricity, perimeter, area and diameter (min, max). Features based on the nucleus are circularity, eosin stain (mean, std, min, max, range, sum), hematoxylin stain (mean, std, min, max, range, sum), diameter (min, max), area, perimeter and eccentricity. Further features are the eosin stain (mean, std, min, max) of the cytoplasm and the nucleus/cell area ratio.
54
+
55
+ Figure 2 shows example images and graphs from the dataset, which is publicly available ${}^{1}$. It includes all images, annotation masks, and graph features, as well as the reference, validation and test splits used in this paper.
56
+
57
+ ${}^{1}$ https://github.com/LindaSt/pT1-Gland-Graph-Dataset
58
+
59
+ < g r a p h i c s >
60
+
61
+ Fig. 3: GED transformations between two normal glands. The black arrows indicate node label substitution, the nodes circled in black mark deleted/inserted nodes.
62
+
63
+ § 2.2 GRAPH-BASED REPRESENTATION
64
+
65
+ The formal mathematical definition of a graph $G$ is given as a tuple of $\left( {V,E,\alpha ,\beta }\right)$ , where $V$ is the finite set of nodes (or vertices), $E$ is the set of edges, $\alpha$ is the node labelling function and $\beta$ is the edge labelling function.
66
+
67
+ We use so-called cell graphs [9], in which each cell is represented by a node with different attributed features $\alpha \left( v\right) = \left( {{x}_{1},\ldots ,{x}_{n}}\right)$ with $n \leq {33}$ (see section 2.1 for the complete feature list). Figure 2 shows examples of cell graphs of glands. Because the different features all have a different range, they are normalised using the z-normalisation which adjusts each feature value $x$ such that $\widehat{x} = \frac{x - \mu }{\sigma }$ .
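As a concrete illustration, the per-feature z-normalisation can be sketched as follows (a pure-Python sketch; the column layout and function name are illustrative, not taken from the dataset's code):

```python
import math

def z_normalise(columns):
    """Z-normalise each feature column: x_hat = (x - mu) / sigma.

    `columns` maps a feature name to the list of its raw values across
    all cells (an assumed layout for illustration).
    """
    normalised = {}
    for name, values in columns.items():
        mu = sum(values) / len(values)
        sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
        if sigma == 0:
            sigma = 1.0  # guard against constant features
        normalised[name] = [(v - mu) / sigma for v in values]
    return normalised
```

After this step each feature has zero mean and unit variance, so no single QuPath feature dominates the Euclidean substitution cost.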
68
+
69
+ For each node, we insert an edge to its two spatially closest neighbour nodes. No edge features are used. Node features are selected using the sequential forward selection method [10]. This process starts with no features and iteratively adds the best feature until there is no further improvement in the classification accuracy. We also establish a baseline based on the unlabelled graph.
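The edge construction described above can be sketched as follows (a hypothetical helper, not the authors' implementation):

```python
import math

def two_nearest_edges(coords):
    """Undirected edges connecting each node to its two spatially
    closest neighbours. `coords` is a list of (x, y) cell centroids.
    """
    edges = set()
    for i, p in enumerate(coords):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(coords) if j != i
        )
        for _, j in dists[:2]:
            # store each edge once, with the smaller index first
            edges.add((min(i, j), max(i, j)))
    return edges
```

Note that the resulting graph can have more than two edges per node, since a node may also be among the two nearest neighbours of several other nodes.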
70
+
71
+ § 2.3 GRAPH EDIT DISTANCE (GED)
72
+
73
+ GED is an error-tolerant measurement of similarity between two graphs [7]. It provides a model for transforming a source graph into a target graph instead of searching for an exact match between graphs or their sub-graphs. Figure 3 shows an example of such a transformation.
74
+
75
+ GED is defined as the distance between two graphs in the case when the cost of transforming one into the other is minimal. There are three types of edit operations that are performed on edges as well as labels to transform a graph: insertion, deletion and label substitution. For each of these operations a cost function needs to be specified. We consider the Euclidean cost model that uses a fixed cost for deletion/insertion and the Euclidean distance for substituting node labels. Since we do not use edge labels in our cell graphs, the cost function for edge label substitution does not need to be defined.
76
+
77
+ The computational complexity of the exact GED calculation increases exponentially as a function of the number of nodes in the graphs. However, heuristic methods are available that can compute an approximate solution. We use an improved version of the bipartite graph-matching method (BP2) [6], which runs in quadratic time and calculates an upper bound of GED.
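To make the cost model concrete, the following sketch finds the optimal node assignment under the Euclidean cost model by brute force. BP/BP2 obtain such an assignment far more efficiently (in quadratic time) and additionally account for local edge structure, which this toy version omits; `tau` stands for the tuned insertion/deletion cost:

```python
import itertools
import math

def node_assignment_cost(labels1, labels2, tau=1.0):
    """Minimal node-operation cost between two labelled graphs under the
    Euclidean cost model: each node of G1 is either substituted (Euclidean
    distance between label vectors) or deleted (cost `tau`); unmatched G2
    nodes are inserted (cost `tau`). Brute-force enumeration, for toy
    graphs only.
    """
    n, m = len(labels1), len(labels2)
    best = math.inf
    for k in range(min(n, m) + 1):
        for subset1 in itertools.combinations(range(n), k):
            for subset2 in itertools.permutations(range(m), k):
                cost = (n - k) * tau + (m - k) * tau  # deletions + insertions
                for i, j in zip(subset1, subset2):
                    cost += math.dist(labels1[i], labels2[j])
                best = min(best, cost)
    return best
```

With a large `tau`, substitutions dominate; with a small `tau`, it becomes cheaper to delete and re-insert dissimilar cells, which is exactly the trade-off tuned on the validation set.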
78
+
79
+ Table 1: Classification accuracies achieved by the baseline and after forward search selection of the node features. The mean accuracy along with the standard deviation of a 4-fold cross-validation is reported.
80
+
81
+ | | Node features | Accuracy |
+ | --- | --- | --- |
+ | Baseline | none | ${71.7} \pm {2.8}\%$ |
+ | Optimized graph | cytoplasm: eosin min; nucleus: hematoxylin mean, min, max | ${83.3} \pm {1.7}\%$ |
93
+ § 2.4 K-NEAREST NEIGHBOUR CLASSIFICATION
94
+
95
+ The classification of the glands is performed using the k-NN classifier, which assigns the most frequent label out of the $k$ most similar objects to the object to be classified [2]. In our case we use the three closest $\left( {k = 3}\right)$ gland graphs in the reference set, in terms of the GED, to classify a new gland graph.
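The classification step can be sketched as follows, assuming the GEDs from the query graph to the reference set have already been computed:

```python
from collections import Counter

def knn_classify(distances, ref_labels, k=3):
    """Classify a query graph from its GED to every reference graph:
    take the k nearest references (k = 3 in the paper) and return the
    majority label. `distances[i]` pairs with `ref_labels[i]`.
    """
    nearest = sorted(range(len(distances)), key=distances.__getitem__)[:k]
    votes = Counter(ref_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```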
96
+
97
+ § 3 EXPERIMENTAL EVALUATION
98
+
99
+ Our goal is to classify intestinal glands in the novel GG-pT1 dataset as either normal or dysplastic by using graph-based representations. Graphs are compared to a reference dataset using the GED and then classified using the k-NN algorithm. We also investigate the impact of node feature selection.
100
+
101
+ § 3.1 SETUP
102
+
103
+ We split the dataset into four parts and evaluate the performance with a 4-fold cross-validation. Two parts are used as the reference set and one each for the validation and test set (details here ${}^{2}$). The reference set is used for the classification. For each input graph, the GED is computed to all graphs in the reference set and k-NN is then used to classify the graph based on these distances. The validation set is used to optimise the insertion/deletion cost for nodes and edges using a grid search over 25 parameters (for the specific parameters see here ${}^{2}$) and to optimise the node features using forward search.
104
+
105
+ § 3.2 RESULTS
106
+
107
+ Table 1 gives an overview of the results and the selected node features. The achieved accuracy is ${83.3}\%$. The forward search selected four attributes: three are based on the nucleus hematoxylin stain and one on the cytoplasm eosin stain. Adding these four features to the nodes increased the performance by 11.6 percentage points compared to the unlabelled baseline.
108
+
109
+ ${}^{2}$ https://bit.ly/2xDuRcV
110
+
111
+ < g r a p h i c s >
112
+
113
+ Fig. 4: Examples of dysplastic glands misclassified as normal (a) and normal glands misclassified as dysplastic (b).
114
+
115
+ § 3.3 DISCUSSION
116
+
117
+ We achieve slightly better results on our dataset than the only other published results using graph-based methods on a closely related (but not publicly available) colorectal cancer image dataset [13]. Ozdemir et al. report results achieved by hybrid models that use different combinations of structural and statistical features. One variant uses GED embedding coupled with a Support Vector Machine (SVM) to classify glands as normal, low- or high-grade dysplastic and achieves an overall accuracy of 81.72%. However, they use a different graph representation: their graphs are not based on cells; instead, they identify nodes as circular nucleus and non-nucleus objects and label them with additional features based on the expansion order in a breadth-first search.
118
+
119
+ The precision is higher among the normal glands (see Figure 4 for examples). Looking at the misclassified dysplastic glands, most of them show features that are very distinct for dysplastic glands such as structural chromatin and nuclear polarity, which are currently not available as node labels. Adding these features could thus improve the classification accuracy. Many of them also have a more round shape, which is more similar to the shape of normal glands. Some of the graphs also show some issues with cell segmentation, which reduces the representational power of the graph. A few of them are very low-grade dysplasia and thus have very similar features to normal glands. Improving the cell segmentation method and adding more key features could overcome these misclassification errors. Extending the dataset should also improve the performance. For an accurate classification, it is very important that the reference set contains a strong representation of the different slicing planes and varieties in appearance.
120
+
121
+ Figure 3 shows an example of the matching of two graphs. We can see which individual cells from each graph are matched by label substitution and therefore have similar local features. Some cells are not matched and are inserted/deleted during the transformation because they are too different from any of the cells in the other graph. This illustration of the matching has the potential to help humans better understand the result of the automatic classification and thus to help with explainability.
122
+
123
+ To create this dataset, the gland selection was performed manually. For this system to be useful in routine diagnostics, the gland detection and segmentation process should be automatised. This is another focus for future work.
124
+
125
+ § 4 CONCLUSION
126
+
127
+ In this preliminary study we achieve an accuracy of 83.3% for intestinal gland classification (normal or dysplastic) using a graph-representation with distance-based edges and four node features coupled with a GED and k-NN classification. This result is comparable to state-of-the-art results for graph-based gland classification reported on a private dataset [13].
128
+
129
+ There are a number of possibilities to further improve the classification results presented here. On the graph extraction level, improving the cell segmentation helps to create more precise graph representations. There is also a vast range of graph types to explore that include more tissue types and areas, such as the lumen of the glands, many of which have already been successfully used to analyse histopathological data [15]. Exploring different features for the nodes and edges can also help to improve the performance. There is a wide range of possibilities here, and the GED is well suited for this task, as it is able to handle any type of labelled graph. Using different classifiers such as SVMs [13], or even combining different types of graph representations and classifiers into an ensemble, may also lead to a higher accuracy. Another option, and one of the newest techniques for graph classification, is geometric deep learning, which is based on graph neural networks [12].
130
+
131
+ Future work also includes more experimental evaluations, such as on the publicly available Gland Segmentation Challenge Contest (GlaS) [16] dataset, which is an image dataset of intestinal glands from colorectal cancer tissue. So far we have not been able to obtain a useful cell segmentation on this dataset with our methods, as the images are of much lower resolution than in our dataset.
132
+
133
+ Furthermore, we plan to conduct a study with expert pathologists to evaluate what kinds of graphs and graph matching methods are most intuitive and understandable in order to improve the explainability of our method. We also want to establish an expert pathologist baseline to analyse the inter-observer variability and include the experts knowledge into the feature selection process.
134
+
135
+ § ACKNOWLEDGMENT
136
+
137
+ The work presented in this paper has been partially supported by the Rising Tide foundation with the grant number CCR-18-130.
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/Hkx63bWjZr/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,121 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Quantitative assessment of colorectal cancer via conditional generative adversarial networks
2
+
3
+ Quoc Dang Vu ${}^{1}$, Kyungeun Kim ${}^{2}$, Jin Tae Kwak ${}^{1, \boxtimes }$
+
+ ${}^{1}$ Department of Computer Science, Sejong University, Seoul, Korea 05006, jkwak@sejong.ac.kr
4
+
5
+ ${}^{2}$ Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Korea 03181
6
+
7
+ Abstract. Grading for cancer, based upon the degree of cancer differentiation, plays a major role in describing the characteristics and behavior of the cancer and determining treatment plan for patients. The grade is determined by a subjective and qualitative assessment of tissues under microscope, which suffers from high inter- and intra-observer variability among pathologists. Digital pathology offers an alternative means to automate the procedure as well as to improve the accuracy and robustness of cancer grading. However, most of such methods tend to mimic or reproduce cancer grade determined by human experts. Herein, we propose a quantitative means of assessing and characterizing cancer via conditional generative adversarial networks. The proposed method is evaluated using tissue microarrays (TMA) of colorectal cancer. The results suggest that the proposed method holds a potential for quantifying cancer characteristics and improving cancer pathology.
8
+
9
+ Keywords: Colorectal cancer, Tumor grading, differentiation, GAN.
10
+
11
+ ## 1 Introduction
12
+
13
+ In pathology, grading is a means of evaluating tumors based upon their appearance. A grade is given depending on how different the tumor is from normal/benign tissue (i.e., differentiation). It is utilized to determine a patient's prognosis and develop a treatment plan. However, pathologists manually assess the tissue under a microscope and determine its grade, i.e., it is a subjective and qualitative process, which limits throughput and raises questions about reproducibility [1]. Moreover, it is well known that the current grading system is sub-optimal, especially for prognosis [2]. Therefore, an objective and quantitative method for assessing tumors beyond the current tumor grading scheme holds great potential for improving cancer pathology and patient management.
14
+
15
+ With the advent of digital pathology, numerous computerized tools, including deep learning, have been proposed to aid pathologists and improve current pathology practice [3]. A majority of such tools have been (successfully) applied to discriminative tasks, including cell/tissue classification and segmentation, where the ground truth labels are provided by pathologists. In other words, these tools, by and large, sought to mimic pathologists and/or automate the histopathologic analysis. Although they can facilitate rapid and robust decision-making and ease the burden on pathologists, the limitation of the current histopathologic analysis remains the same. For instance, such tools cannot tell the difference between tumors within the same grade. To further improve the current grading system and digital pathology tools, it is highly desirable to develop a method that is capable of learning the tissue or tumor characteristics and quantitatively measuring the similarity/dissimilarity to the normal/benign tissue (differentiation) without explicit guidance of tumor grades, i.e., in an unsupervised fashion.
16
+
17
+ A generative adversarial network (GAN) [4] is a type of deep learning approach that can generate or produce realistic outputs (here, tissue images). Recently, a conditional GAN (cGAN) [5], where the output is conditioned on an input, has gained much attention. For example, a cGAN was used to generate synthetic tissue images [6]. It was also used to conduct virtual H&E staining [7] as well as H&E-to-immunofluorescent stain translation [8]. The strength of a GAN/cGAN is its superior learning capability in an unsupervised manner; hence, the technique could be better suited for learning the latent characteristics of tissues or tumors.
18
+
19
+ In this manuscript, we propose a cGAN-based method to learn and quantify the characteristics of the tissue that are relevant to tumor differentiation. We construct a cGAN model (BenignGAN) using benign tissue images only. BenignGAN is utilized to generate tumor images of differing degrees of differentiation. The less similar the tumor is to benign tissue (poorly-differentiated), the harder it is for BenignGAN to generate a realistic tumor image. The difference between the original and generated tumor images is quantitatively measured and compared to tumor differentiation. We evaluate the proposed method using tissue microarrays (TMA) of colorectal cancer. Our main contributions are summarized as follows: 1) we propose an alternative means of learning and quantifying tumor characteristics; 2) we build BenignGAN to learn the characteristics of the tissue of origin (here, benign); 3) employing BenignGAN, the proposed method analyzes tumors in an unsupervised manner, and thus is not restricted to the current grading system.
20
+
21
+ ![01963a57-6453-7886-a4c3-6f0e487b33fe_1_344_1409_959_446_0.jpg](images/01963a57-6453-7886-a4c3-6f0e487b33fe_1_344_1409_959_446_0.jpg)
22
+
23
+ Fig. 1. Overview of the proposed method. Benign tissue images are converted to edge maps and used to train a cGAN (BenignGAN). Given the edge map of tumor images of differing grades, BenignGAN is utilized to reconstruct the tumor images. The similarity between the reconstructed and original tumor images is measured and compared to tumor grades provided by pathologists.
24
+
25
+ ## 2 Methods
26
+
27
+ The overview of the proposed method is illustrated in Fig. 1. Details of the method is described in the following sections.
28
+
29
+ ### 2.1 BenignGAN: Conditional Generative Adversarial Network
30
+
31
+ A conditional generative adversarial network (cGAN) [5] consists of a generator and a discriminator. Given an input image, a generator G learns how to transform the input image to an output image. Following [9], we adopt U-Net [10] architecture to build the generator G. The role of a discriminator D is to distinguish the output images generated by the generator G from the original images. As described in PatchGAN [9], the discriminator $\mathrm{D}$ is solely composed of convolutional layers. It outputs a patch, not a scalar. Each pixel in the patch has a value ranging from 0 to 1 , representing how believable the corresponding section of the unknown image is.
32
+
33
+ The overall objective function can be represented as:
34
+
35
+ $$
36
+ \mathrm{Loss} = \arg \mathop{\min }\limits_{G}\mathop{\max }\limits_{D}{L}_{cGAN}\left( {G, D}\right) + \lambda {L}_{L1}\left( G\right) \tag{1}
37
+ $$
38
+
39
+ $$
40
+ {\mathrm{L}}_{\mathrm{{cGAN}}}\left( {G, D}\right) = {\mathbb{E}}_{x, y}\left\lbrack {\log \left( {\mathrm{D}\left( {x, y}\right) }\right) }\right\rbrack + {\mathbb{E}}_{x, z}\left\lbrack {\log \left( {1 - D\left( {x, G\left( {x, z}\right) }\right) }\right) }\right\rbrack \tag{2}
41
+ $$
42
+
43
+ $$
44
+ {L}_{L1}\left( G\right) = {\mathbb{E}}_{x, y, z}\left\lbrack {\parallel y - G\left( {x, z}\right) {\parallel }_{1}}\right\rbrack \tag{3}
45
+ $$
46
+
47
+ where ${\mathrm{L}}_{\mathrm{{cGAN}}}\left( {G, D}\right)$ is the conditional adversarial loss and ${L}_{L1}\left( G\right)$ is the L1 norm loss between the original image and the output image of the generator $G$. $x$, $y$ and $z$ denote the input image, output image and random noise vector, respectively.
48
+
49
+ Given an input image $x$ , the generator $G$ reconstructs the original RGB image $y$ . The random noise vector $z$ is introduced in the form of dropout to prevent the generator $G$ from directly mapping the input image $x$ to the output image $y$ . L1 norm loss is known to be helpful in generating less blurry output images.
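A numeric sketch of the objective above, under assumptions stated in the comments (the common non-saturating generator loss and a mean-based L1 term are implementation choices, not spelled out in the paper):

```python
import math

def cgan_losses(d_real, d_fake, y, g_out, lam=100.0):
    """Sketch of Eqs. (1)-(3). `d_real` and `d_fake` are the PatchGAN
    outputs D(x, y) and D(x, G(x, z)) flattened to lists of per-patch
    probabilities; `y` and `g_out` are the original and generated images,
    also flattened. Returns (discriminator loss, generator loss).
    """
    mean = lambda xs: sum(xs) / len(xs)
    # discriminator maximizes log D(x,y) + log(1 - D(x,G(x,z)))
    d_loss = -mean([math.log(p) for p in d_real]) \
             - mean([math.log(1.0 - p) for p in d_fake])
    # generator minimizes -log D(x,G(x,z)) + lambda * L1(y, G(x,z))
    l1 = mean([abs(a - b) for a, b in zip(y, g_out)])
    g_loss = -mean([math.log(p) for p in d_fake]) + lam * l1
    return d_loss, g_loss
```

When the discriminator is maximally confused (all patch outputs at 0.5) and the reconstruction is perfect, the discriminator loss is 2 ln 2 and the generator loss is ln 2.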
50
+
51
+ ### 2.2 Preprocessing
52
+
53
+ A cGAN generates an output image conditioned on an input image. Since a neural network tends to focus on the surface statistics of the input [11], training the cGAN directly on the RGB images may cause the generator $G$ to only memorize the direct mapping between the input and output and thus fail to learn the fundamental characteristics of the input, i.e., benign tissue. Thus, we limit the amount of information that the generator $G$ receives. Given the limited information, the cGAN tries to reconstruct the original RGB image. To reduce the amount of information, we apply the Sobel operator to an input image and compute the gradient magnitude, called an edge map.
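The edge-map computation can be sketched as follows (a minimal pure-Python Sobel filter with replicated borders; the paper does not specify the preprocessing beyond the operator itself):

```python
import math

KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal Sobel kernel
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical Sobel kernel

def sobel_edge_map(gray):
    """Gradient-magnitude edge map of a 2-D grayscale image given as a
    list of rows. Border pixels are handled by edge replication.
    """
    h, w = len(gray), len(gray[0])
    pix = lambda i, j: gray[min(max(i, 0), h - 1)][min(max(j, 0), w - 1)]
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            gx = gy = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    v = pix(i + di, j + dj)
                    gx += KX[di + 1][dj + 1] * v
                    gy += KY[di + 1][dj + 1] * v
            out[i][j] = math.sqrt(gx * gx + gy * gy)
    return out
```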
54
+
55
+ ### 2.3 Similarity Metrics
56
+
57
+ We utilize mutual information (MI), structural similarity index (SSIM) and Pearson correlation coefficient (CC) for measuring the similarity between the reconstructed images and their originals. For each pair of reconstructed and original images, we compute the three metrics for their RGB and gray scale images. For an RGB image, the metrics are separately calculated for each channel and then averaged across channels. Gray scale images are converted from the reconstructed and original RGB images and used to compute the three metrics.
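As an example, the Pearson correlation coefficient and the per-channel averaging can be sketched as follows (the channel layout is illustrative; MI and SSIM would be computed and averaged analogously per channel):

```python
import math

def pearson_cc(a, b):
    """Pearson correlation coefficient between two flattened images."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    denom = math.sqrt(sum((x - ma) ** 2 for x in a)
                      * sum((y - mb) ** 2 for y in b))
    return cov / denom

def channel_averaged(metric, img1, img2):
    """Apply a per-channel metric to each of R, G, B and average the
    three values, as done for the RGB images. Images are dicts mapping a
    channel name to a flat list of pixel values (an assumed layout).
    """
    return sum(metric(img1[c], img2[c]) for c in "RGB") / 3.0
```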
58
+
59
+ ### 2.4 Training and Implementation
60
+
61
+ We implemented the proposed method using Python and PyTorch. The proposed method is trained for a total of 150 epochs using the Adam optimizer with beta1 = 0.5 and beta2 = 0.999. $\lambda$ is set to 100 to weight the L1 loss. The learning rate of both the generator and discriminator is set to 1.0e-4 and reduced to 1.0e-5 at the 50th epoch. In order to enhance the robustness of the generator, during training we perform a random horizontal and vertical flip, random scaling, random rotation and random shearing of the input image. We also add Gaussian noise and perform minor blurring using a median or Gaussian filter.
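The learning-rate schedule amounts to a simple step function (the exact epoch boundary convention is an assumption):

```python
def learning_rate(epoch):
    """Step schedule from the training setup: 1.0e-4 for both the
    generator and the discriminator, dropping to 1.0e-5 at the 50th
    epoch (150 epochs in total)."""
    return 1.0e-4 if epoch < 50 else 1.0e-5
```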
62
+
63
+ ## 3 Experiments
64
+
65
+ ### 3.1 Dataset
66
+
67
+ One whole slide image (WSI) and two colorectal tissue microarrays (TMAs) were employed to evaluate the proposed method. Tissue samples in the WSI and TMAs were stained with Hematoxylin and Eosin (H&E) and digitized at x40 optical magnification. An experienced pathologist identified and delineated benign and tumor regions. Tumor regions were further categorized into 3 grades: well-differentiated (WD), moderately-differentiated (MD) and poorly-differentiated (PD). From the first TMA group, we extracted 212 benign (BN) image patches of size ${1024} \times {1024}$ and used them as the training set. 339 tumor image patches and 80 benign image patches of size ${2048} \times {2048}$ were obtained from the second TMA and the WSI, respectively, forming the evaluation set. In short, the evaluation set is composed of 80 BN, 28 WD, 246 MD and 65 PD image patches. The patches were mainly focused on the glandular structure, and patches containing >20% luminal and/or un-annotated regions were excluded.
68
+
69
+ ### 3.2 Qualitative Results
70
+
71
+ To qualitatively evaluate the effectiveness of the proposed method, its results are presented in Fig. 2. The results demonstrate that BenignGAN is capable of reconstructing the benign tissue image from the corresponding edge map, capturing the underlying characteristics of the benign tissue. The presence and location of glands, basement membrane and nuclei were well observed and retained. The appearance of glands was also reasonably depicted. However, the reconstructed images had a tendency to become blurry, slightly losing the fine details of the tissue. As for the tumors, the presence of glands and the density of nuclei tended to influence the quality of the reconstruction. As the density of nuclei increases, the capability of BenignGAN to reconstruct the original images degrades. Absence of glands in the original image (e.g., PD) resulted in poorer reconstruction.
72
+
73
+ ![01963a57-6453-7886-a4c3-6f0e487b33fe_4_345_677_967_700_0.jpg](images/01963a57-6453-7886-a4c3-6f0e487b33fe_4_345_677_967_700_0.jpg)
74
+
75
+ Fig. 2. Representative reconstructed and original tumor images. The original tumor images (first row), edge maps (second row) and reconstructed tumor images (third row) are shown for the benign tissue and well-differentiated (WD), moderately-differentiated (MD) and poorly-differentiated (PD) tumor images, respectively.
76
+
77
+ Table 1. Results of the comparison between the reconstructed and original tissue images on the evaluation set. The mean $\pm$ standard deviation is shown for the three evaluation metrics, computed for benign tissue and the three tumor grades.
78
+
79
+ <table><tr><td/><td/><td>CC</td><td>$\mathbf{{MI}}$</td><td>SSIM</td></tr><tr><td rowspan="4">RGB</td><td>Benign</td><td>${0.8308} \pm {0.0400}$</td><td>${0.6918} \pm {0.0754}$</td><td>${0.5130} \pm {0.0385}$</td></tr><tr><td>WD</td><td>${0.6544} \pm {0.0802}$</td><td>${0.4605} \pm {0.0848}$</td><td>${0.4138} \pm {0.0423}$</td></tr><tr><td>MD</td><td>${0.6119} \pm {0.0760}$</td><td>${0.4369} \pm {0.0768}$</td><td>${0.3928} \pm {0.0400}$</td></tr><tr><td>PD</td><td>${0.5378} \pm {0.0898}$</td><td>${0.3626} \pm {0.0912}$</td><td>${0.3520} \pm {0.0415}$</td></tr><tr><td rowspan="4">Grayscale</td><td>Benign</td><td>${0.8432} \pm {0.0393}$</td><td>${0.7470} \pm {0.0806}$</td><td>${0.5276} \pm {0.0402}$</td></tr><tr><td>WD</td><td>${0.6834} \pm {0.0835}$</td><td>${0.4886} \pm {0.0886}$</td><td>${0.4191} \pm {0.0442}$</td></tr><tr><td>MD</td><td>${0.6414} \pm {0.0809}$</td><td>${0.4657} \pm {0.0798}$</td><td>${0.3970} \pm {0.0419}$</td></tr><tr><td>PD</td><td>${0.5604} \pm {0.0956}$</td><td>${0.3922} \pm {0.0950}$</td><td>${0.3542} \pm {0.0430}$</td></tr></table>
80
+
81
+ ![01963a57-6453-7886-a4c3-6f0e487b33fe_5_413_410_815_883_0.jpg](images/01963a57-6453-7886-a4c3-6f0e487b33fe_5_413_410_815_883_0.jpg)
82
+
83
+ Fig. 3. Boxplots for similarity measurements. Correlation coefficient (CC), mutual information (MI) and structural similarity index (SSIM) are measured between the reconstructed and original tumor images, including well-differentiated (WD), moderately-differentiated (MD) and poorly-differentiated (PD) tumors. Red points correspond to outliers, defined by the cases outside the range $\left\lbrack {\mathrm{Q}1 - {1.5} \times \mathrm{{IQR}},\mathrm{Q}3 + {1.5} \times \mathrm{{IQR}}}\right\rbrack$ where $\mathrm{Q}1,\mathrm{Q}3$ and $\mathrm{{IQR}}$ denote the first quartile, third quartile and interquartile range, respectively.
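The outlier rule in the caption (Tukey's $1.5 \times \mathrm{IQR}$ fences) can be written directly; below is a minimal NumPy sketch, with toy similarity scores rather than data from the paper:

```python
import numpy as np

def iqr_outliers(values):
    """Flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (Tukey's rule)."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return v[(v < lo) | (v > hi)]

scores = [0.51, 0.53, 0.50, 0.52, 0.49, 0.20]  # toy similarity scores
print(iqr_outliers(scores))  # only the 0.20 case falls below the lower fence
```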
84
+
85
+ ### 3.3 Experiments and Quantitative Results
86
+
87
+ The performance of the proposed method was quantitatively assessed using the three evaluation metrics, i.e., CC, MI and SSIM, between the reconstructed and original tumor images. Within the training set, 3-fold cross validation was performed to evaluate BenignGAN's capability of reconstructing benign tissue samples. Comparing the original and reconstructed RGB images, we achieved $0.7984 \pm 0.0570$ for CC, $0.6402 \pm 0.1026$ for MI and $0.4955 \pm 0.0491$ for SSIM. Using grayscale images (converted from RGB images), we obtained $0.8127 \pm 0.0568$ for CC, $0.6920 \pm 0.1104$ for MI and $0.5050 \pm 0.0514$ for SSIM. Subsequently, we trained BenignGAN on the entire training set and tested it on the evaluation set. The results are shown in Fig. 3 and Table 1. The results on BN patches in the evaluation set were similar to those of the 3-fold cross validation (Table 1), which confirms the ability of BenignGAN to reconstruct benign tissue images. Moreover, investigation of the results on the tumor image patches revealed that the similarity measurements between the original and reconstructed images are related to the degree of tumor differentiation (Fig. 3 and Table 1). The worse the tumor grade, the less similar the tumor is to the benign tissue. A similar trend was observed for all three evaluation metrics. ANOVA was further conducted on each of the three evaluation metrics to evaluate the significance of the difference in the similarity measures between the original and reconstructed images with regard to tumor grade. A statistically significant difference (p-value $< 10^{-5}$) was found for all three evaluation metrics using both RGB and grayscale images, suggesting that the difference between the reconstructed and original tumor images, with respect to the benign tissue, could serve as a means of analyzing tumors. In addition, no significant difference between color and grayscale images was observed, indicating that the observed trend is not simply due to the color difference between tumors.
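The one-way ANOVA underlying this significance test can be sketched as follows; the F statistic compares between-grade to within-grade variance. The grade-wise score lists below are toy numbers, not results from the paper.

```python
import numpy as np

def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(g.size for g in groups)          # total sample count
    k = len(groups)                          # number of groups (grades)
    grand = np.concatenate(groups).mean()
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy SSIM-like scores per grade (hypothetical numbers)
wd = [0.42, 0.41, 0.43, 0.40]
md = [0.39, 0.40, 0.38, 0.41]
pd = [0.35, 0.36, 0.34, 0.35]
print(one_way_anova_F(wd, md, pd))
```

The p-value would then come from the F distribution with $(k-1, n-k)$ degrees of freedom, e.g. via `scipy.stats.f_oneway`.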
88
+
89
+ The difference between MD and PD tumors was larger than the difference between WD and MD tumors. This may be ascribable to the presence of glands. Glands are present in many WD and MD tumors but absent in PD tumors. Reconstructing PD tumors would have been a bigger challenge for BenignGAN, since it was trained using benign tissues only, which in general contain plenty of glands. Although there was a downward trend in similarity, the three similarity measures overlapped between different tumor grades. This may be due to the intrinsic similarity between tumors. However, the specific meanings or biological causes of our observation cannot be identified without further histopathologic and/or biological studies. Moreover, since this study is conducted upon image patches, the sampling of image patches could have an effect on the results. A large-scale study should follow to further confirm our findings.
90
+
91
+ ## 4 Conclusion
92
+
93
+ Herein, we presented a method of utilizing a cGAN to quantify the tissue characteristics relevant to tumor differentiation. The experimental results demonstrated that a cGAN is capable of learning the latent representation of the benign tissue and that, when applied to tumor images, its reconstruction ability varies with the tumor grade, suggesting that it could be utilized to quantitatively analyze and measure the degree of tumor differentiation. The proposed method is generic, and thus could be applied to different types of tissues and tumors. Providing an alternative means of analyzing tissues/tumors, we believe that this approach could aid in improving and reshaping current cancer pathology in both the clinic and research. Future work will entail comparing the proposed method to patient outcomes.
94
+
95
+ ## References
96
+
97
+ 1. Gurcan, M.N., Boucheron, L.E., Can, A., Madabhushi, A., Rajpoot, N.M., Yener, B.: Histopathological image analysis: a review. IEEE Rev Biomed Eng 2, 147-171 (2009)
100
+
101
+ 2. McKenney, J.K., Wei, W., Hawley, S., Auman, H., Newcomb, L.F., Boyer, H.D., Fazli, L., Simko, J., Hurtado-Coll, A., Troyer, D.A., Tretiakova, M.S., Vakar-Lopez, F., Carroll, P.R., Cooperberg, M.R., Gleave, M.E., Lance, R.S., Lin, D.W., Nelson, P.S., Thompson, I.M., True, L.D., Feng, Z., Brooks, J.D.: Histologic Grading of Prostatic Adenocarcinoma Can Be Further Optimized: Analysis of the Relative Prognostic Strength of Individual Architectural Patterns in 1275 Patients From the Canary Retrospective Cohort. Am J Surg Pathol 40, 1439-1456 (2016)
102
+
103
+ 3. Janowczyk, A., Madabhushi, A.: Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases. J Pathol Inform 7, 29 (2016)
104
+
105
+ 4. Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, pp. 2672-2680. MIT Press, Montreal, Canada (2014)
106
+
107
+ 5. Mirza, M., Osindero, S.: Conditional Generative Adversarial Nets (2014)
108
+
109
+ 6. Senaras, C., Sahiner, B., Tozbikian, G., Lozanski, G., Gurcan, M.N.: Creating synthetic digital slides using conditional generative adversarial networks: application to Ki67 staining. SPIE (2018)
110
+
111
+ 7. Bayramoglu, N., Kaakinen, M., Eklund, L., Heikkilä, J.: Towards Virtual H&E Staining of Hyperspectral Lung Histology Images Using Conditional Generative Adversarial Networks. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), pp. 64-71 (2017)
112
+
113
+ 8. Burlingame, E.A., Margolin, A.A., Gray, J.W., Chang, Y.H.: SHIFT: speedy histopathological-to-immunofluorescent translation of whole slide images using conditional generative adversarial networks. Proc SPIE Int Soc Opt Eng 10581, (2018)
114
+
115
+ 9. Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-Image Translation with Conditional Adversarial Networks (2016)
116
+
117
+ 10. Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, pp. 234-241. Springer International Publishing, Cham (2015)
118
+
119
+ 11. Jo, J., Bengio, Y.: Measuring the tendency of CNNs to Learn Surface Statistical Regularities (2017)
120
+
121
+ 12. Vahadane, A., Peng, T., Sethi, A., Albarqouni, S., Wang, L., Baust, M., Steiger, K., Schlitter, A.M., Esposito, I., Navab, N.: Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images. IEEE Transactions on Medical Imaging 35, 1962-1971 (2016)
papers/MICCAI/MICCAI 2019/MICCAI 2019 Workshop/MICCAI 2019 Workshop COMPAY/Hkx63bWjZr/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,121 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § QUANTITATIVE ASSESSMENT OF COLORECTAL CANCER VIA CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS
2
+
3
+ Quoc Dang Vu ${}^{1}$ , Kyungeun Kim ${}^{2}$ , Jin Tae Kwak ${}^{1, \boxtimes }$ ${}^{1}$ Department of Computer Science, Sejong University, Seoul, Korea 05006 jkwak@sejong.ac.kr
4
+
5
+ ${}^{2}$ Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Korea 03181
6
+
7
+ Abstract. Grading of cancer, based upon the degree of cancer differentiation, plays a major role in describing the characteristics and behavior of the cancer and determining the treatment plan for patients. The grade is determined by a subjective and qualitative assessment of tissues under a microscope, which suffers from high inter- and intra-observer variability among pathologists. Digital pathology offers an alternative means to automate the procedure as well as to improve the accuracy and robustness of cancer grading. However, most such methods tend to mimic or reproduce cancer grades determined by human experts. Herein, we propose a quantitative means of assessing and characterizing cancer via conditional generative adversarial networks. The proposed method is evaluated using tissue microarrays (TMA) of colorectal cancer. The results suggest that the proposed method holds potential for quantifying cancer characteristics and improving cancer pathology.
8
+
9
+ Keywords: Colorectal cancer, Tumor grading, differentiation, GAN.
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ In pathology, grading is a means of evaluating tumors based upon their appearance. A grade is assigned depending on how different the tumors are from normal/benign tissues (i.e., differentiation). It is utilized to determine a patient's prognosis and develop a treatment plan. However, pathologists manually assess the tissue under a microscope to determine its grade, i.e., it is a subjective and qualitative process, which limits throughput and raises questions about reproducibility [1]. Moreover, it is well known that the current grading system is sub-optimal, especially for prognosis [2]. Therefore, an objective and quantitative method for assessing tumors beyond the current tumor grading scheme holds great potential for improving cancer pathology and patient management.
14
+
15
+ With the advent of digital pathology, numerous computerized tools, including deep learning, have been proposed to aid pathologists and improve current pathology practice [3]. A majority of such tools have been (successfully) applied to discriminative tasks, including cell/tissue classification and segmentation, where the ground truth labels are provided by pathologists. In other words, these tools, by and large, sought to mimic pathologists and/or automate the histopathologic analysis. Although they could facilitate rapid and robust decision-making and ease the burden on pathologists, the limitations of the current histopathologic analysis remain the same. For instance, such tools cannot tell the difference between tumors within the same grade. To further improve the current grading system and digital pathology tools, it is highly desirable to develop a method that is capable of learning the tissue or tumor characteristics and quantitatively measuring the similarity/dissimilarity to the normal/benign tissue (differentiation) without explicit guidance from tumor grades, i.e., in an unsupervised fashion.
16
+
17
+ A generative adversarial network (GAN) [4] is a type of deep learning approach that can generate or produce realistic outputs (here, tissue images). Recently, a conditional GAN (cGAN) [5], where the output is conditioned on an input, has gained much attention. For example, a cGAN was used to generate synthetic tissue images [6]. It was also used to conduct virtual H&E staining [7] as well as H&E-to-immunofluorescent stain translation [8]. The strength of a GAN/cGAN is its superior learning capability in an unsupervised manner; hence, the technique could be better suited for learning the latent characteristics of tissues or tumors.
18
+
19
+ In this manuscript, we propose a cGAN-based method to learn and quantify the characteristics of the tissue that are relevant to tumor differentiation. We construct a cGAN model (BenignGAN) using benign tissue images only. BenignGAN is then utilized to generate tumor images of differing degrees of differentiation. The less similar a tumor is to the benign tissue (poorly-differentiated), the harder it is for BenignGAN to generate a realistic tumor image. The difference between the original and generated tumor images is quantitatively measured and compared to tumor differentiation. We evaluate the proposed method using tissue microarrays (TMA) of colorectal cancer. Our main contributions are summarized as follows: 1) We propose an alternative means of learning and quantifying tumor characteristics; 2) We build BenignGAN to learn the characteristics of the tissue of origin (here, benign); 3) Employing BenignGAN, the proposed method analyzes tumors in an unsupervised manner, and thus is not restricted to the current grading system.
20
+
21
+ [graphics]
22
+
23
+ Fig. 1. Overview of the proposed method. Benign tissue images are converted to edge maps and used to train a cGAN (BenignGAN). Given the edge map of tumor images of differing grades, BenignGAN is utilized to reconstruct the tumor images. The similarity between the reconstructed and original tumor images is measured and compared to tumor grades provided by pathologists.
24
+
25
+ § 2 METHODS
26
+
27
+ The overview of the proposed method is illustrated in Fig. 1. Details of the method are described in the following sections.
28
+
29
+ § 2.1 BENIGNGAN: CONDITIONAL GENERATIVE ADVERSARIAL NETWORK
30
+
31
+ A conditional generative adversarial network (cGAN) [5] consists of a generator and a discriminator. Given an input image, a generator G learns how to transform the input image to an output image. Following [9], we adopt U-Net [10] architecture to build the generator G. The role of a discriminator D is to distinguish the output images generated by the generator G from the original images. As described in PatchGAN [9], the discriminator $\mathrm{D}$ is solely composed of convolutional layers. It outputs a patch, not a scalar. Each pixel in the patch has a value ranging from 0 to 1, representing how believable the corresponding section of the unknown image is.
32
+
33
+ The overall objective function can be represented as:
34
+
35
+ $$
36
+ \text{Loss} = \arg \mathop{\min}\limits_{G} \mathop{\max}\limits_{D} L_{cGAN}\left( G, D \right) + \lambda L_{L1}\left( G \right) \tag{1}
37
+ $$
38
+
39
+ $$
40
+ L_{cGAN}\left( G, D \right) = \mathbb{E}_{x,y}\left[ \log D\left( x, y \right) \right] + \mathbb{E}_{x,z}\left[ \log \left( 1 - D\left( x, G\left( x, z \right) \right) \right) \right] \tag{2}
41
+ $$
42
+
43
+ $$
44
+ L_{L1}\left( G \right) = \mathbb{E}_{x,y,z}\left[ \left\| y - G\left( x, z \right) \right\|_{1} \right] \tag{3}
45
+ $$
46
+
47
+ where $L_{cGAN}\left( G, D \right)$ is the conditional adversarial loss and $L_{L1}\left( G \right)$ is the L1 norm loss between the original image and the output image of the generator $G$. $x$, $y$ and $z$ denote the input image, output image and random noise vector, respectively.
48
+
49
+ Given an input image $x$ , the generator $G$ reconstructs the original RGB image $y$ . The random noise vector $z$ is introduced in the form of dropout to prevent the generator $G$ from directly mapping the input image $x$ to the output image $y$ . L1 norm loss is known to be helpful in generating less blurry output images.
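As a concrete illustration of Eqs. (1)-(3), the following NumPy sketch computes the two loss terms from discriminator scores and image arrays. The non-saturating form of the generator's adversarial term and the stabilizing epsilon are our assumptions for the sketch, not details stated in the manuscript.

```python
import numpy as np

EPS = 1e-8  # numerical stability inside the logs (assumption)

def cgan_generator_loss(d_fake, y, y_hat, lam=100.0):
    """Generator objective: adversarial term plus lambda-weighted L1 (Eq. 3).

    Uses the non-saturating variant -log D(x, G(x,z)) of the adversarial term.
    """
    adv = -np.mean(np.log(d_fake + EPS))
    l1 = np.mean(np.abs(y - y_hat))  # Eq. (3)
    return adv + lam * l1

def discriminator_loss(d_real, d_fake):
    """Eq. (2): D maximizes log D(x,y) + log(1 - D(x, G(x,z)))."""
    return -np.mean(np.log(d_real + EPS)) - np.mean(np.log(1.0 - d_fake + EPS))

# Toy example: an undecided discriminator (0.5 everywhere), output 0.1 off target
y = np.zeros((2, 2))
print(cgan_generator_loss(np.full(4, 0.5), y, y + 0.1))
```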
50
+
51
+ § 2.2 PREPROCESSING
52
+
53
+ A cGAN generates an output image conditioned on an input image. Since a neural network tends to focus on the surface statistics of the input [11], training the cGAN directly on the RGB images may cause the generator $G$ to only memorize the direct mapping between the input and output and thus fail to learn the fundamental characteristics of the input, i.e., benign tissue. Thus, we limit the amount of information that the generator $G$ receives. Given the limited information, the cGAN tries to reconstruct the original RGB image. To reduce the amount of information, we apply the Sobel operator to an input image and compute the gradient magnitude, called an edge map.
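The edge-map computation can be sketched in NumPy as below; the 3x3 Sobel kernels are standard, while the zero padding at the borders is our assumption (the manuscript does not specify border handling).

```python
import numpy as np

def sobel_edge_map(gray):
    """Gradient magnitude of a 2D grayscale image via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    img = np.asarray(gray, dtype=float)
    h, w = img.shape
    padded = np.pad(img, 1)  # zero-pad the borders (assumption)
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):           # correlate with both kernels
        for j in range(3):
            win = padded[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)      # per-pixel gradient magnitude
```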
54
+
55
+ § 2.3 SIMILARITY METRICS
56
+
57
+ We utilize mutual information (MI), structural similarity index (SSIM) and Pearson correlation coefficient (CC) for measuring the similarity between the reconstructed images and their originals. For each pair of reconstructed and original images, we compute the three metrics on their RGB and grayscale versions. For an RGB image, the metrics are separately calculated for each channel and then averaged across channels. Grayscale images are converted from the reconstructed and original RGB images and used to compute the three metrics.
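Minimal NumPy sketches of the three metrics are given below. The histogram-based MI estimate (32 bins) and the single-window, global form of SSIM are simplifications we assume for illustration; standard SSIM averages over local windows, as in `skimage.metrics.structural_similarity`.

```python
import numpy as np

def pearson_cc(a, b):
    """Pearson correlation coefficient between two images."""
    return float(np.corrcoef(np.ravel(a), np.ravel(b))[0, 1])

def mutual_information(a, b, bins=32):
    """MI estimated from a joint histogram of pixel intensities."""
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0  # only sum over non-empty histogram cells
    return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())

def global_ssim(a, b, c1=1e-4, c2=9e-4):
    """Single-window SSIM (standard SSIM averages this over local windows)."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / \
           ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2))
```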
58
+
59
+ § 2.4 TRAINING AND IMPLEMENTATION
60
+
61
+ We implemented the proposed method using Python and PyTorch. The proposed method is trained for a total of 150 epochs using the Adam optimizer with $\beta_1 = 0.5$ and $\beta_2 = 0.999$. $\lambda$ is set to 100 to weight the L1 loss. The learning rate of both the generator and discriminator is set to $1.0 \times 10^{-4}$ and reduced to $1.0 \times 10^{-5}$ at the 50th epoch. In order to enhance the robustness of the generator, during training, we perform random horizontal and vertical flips, random scaling, random rotation and random shearing of the input image. We also add Gaussian noise and perform minor blurring using a median or Gaussian filter.
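Part of this augmentation pipeline can be sketched as below (flips, 90-degree rotations and Gaussian noise only; scaling, shearing and blurring are omitted, and the noise scale is a hypothetical choice):

```python
import numpy as np

def augment(img, rng):
    """Randomly flip, rotate by a multiple of 90 degrees and add mild noise."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)          # horizontal flip
    if rng.random() < 0.5:
        img = np.flip(img, axis=0)          # vertical flip
    img = np.rot90(img, k=int(rng.integers(4)))
    img = img + rng.normal(0.0, 0.01, size=img.shape)  # Gaussian noise (scale assumed)
    return img

rng = np.random.default_rng(0)
print(augment(np.zeros((8, 8)), rng).shape)  # → (8, 8)
```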
62
+
63
+ § 3 EXPERIMENTS
64
+
65
+ § 3.1 DATASET
66
+
67
+ One whole slide image (WSI) and two colorectal tissue microarrays (TMAs) were employed to evaluate the proposed method. Tissue samples in the WSI and TMAs were stained with Hematoxylin and Eosin (H&E) and digitized at x40 optical magnification. An experienced pathologist identified and delineated benign and tumor regions. Tumor regions were further categorized into 3 grades - well-differentiated (WD), moderately-differentiated (MD) and poorly-differentiated (PD). From the first TMA group, we extracted 212 benign (BN) image patches of size $1024 \times 1024$ and used them as the training set. 339 tumor image patches and 80 benign image patches of size $2048 \times 2048$ were obtained from the second TMA and the WSI, respectively, forming the evaluation set. In short, the evaluation set is composed of 80 BN, 28 WD, 246 MD and 65 PD image patches. The patches were mainly focused on glandular structure, and patches containing >20% luminal and/or un-annotated regions were excluded.
68
+
69
+ § 3.2 QUALITATIVE RESULTS
70
+
71
+ To qualitatively evaluate the effectiveness of the proposed method, its results are presented in Fig. 2. The results demonstrate that BenignGAN is capable of reconstructing a benign tissue image from the corresponding edge map, capturing the underlying characteristics of the benign tissue. The presence and location of glands, basement membrane and nuclei were well observed and retained. The appearance of glands was also reasonably depicted. However, the reconstructed images tended to be blurry, slightly losing the fine details of the tissue. As for the tumors, the presence of glands and the density of nuclei tended to influence the quality of the reconstruction. As the density of nuclei increases, the capability of BenignGAN to reconstruct the original images degrades. The absence of glands in the original image (e.g., PD) resulted in poorer reconstruction.
72
+
73
+ [graphics]
74
+
75
+ Fig. 2. Representative reconstructed and original tumor images. The original tumor images (first row), edge maps (second row) and reconstructed tumor images (third row) are shown for the benign tissue and well-differentiated (WD), moderately-differentiated (MD) and poorly-differentiated (PD) tumor images, respectively.
76
+
77
+ Table 1. Results of the comparison between the reconstructed and original tissue images on the evaluation set. The mean $\pm$ standard deviation is shown for the three evaluation metrics that are computed for benign tissue and the three tumor grades.
78
+
79
+ \begin{tabular}{llccc}
+ \hline
+  &  & CC & MI & SSIM \\
+ \hline
+ \multirow{4}{*}{RGB} & Benign & $0.8308 \pm 0.0400$ & $0.6918 \pm 0.0754$ & $0.5130 \pm 0.0385$ \\
+  & WD & $0.6544 \pm 0.0802$ & $0.4605 \pm 0.0848$ & $0.4138 \pm 0.0423$ \\
+  & MD & $0.6119 \pm 0.0760$ & $0.4369 \pm 0.0768$ & $0.3928 \pm 0.0400$ \\
+  & PD & $0.5378 \pm 0.0898$ & $0.3626 \pm 0.0912$ & $0.3520 \pm 0.0415$ \\
+ \hline
+ \multirow{4}{*}{Grayscale} & Benign & $0.8432 \pm 0.0393$ & $0.7470 \pm 0.0806$ & $0.5276 \pm 0.0402$ \\
+  & WD & $0.6834 \pm 0.0835$ & $0.4886 \pm 0.0886$ & $0.4191 \pm 0.0442$ \\
+  & MD & $0.6414 \pm 0.0809$ & $0.4657 \pm 0.0798$ & $0.3970 \pm 0.0419$ \\
+  & PD & $0.5604 \pm 0.0956$ & $0.3922 \pm 0.0950$ & $0.3542 \pm 0.0430$ \\
+ \hline
+ \end{tabular}
108
+
109
+ [graphics]
110
+
111
+ Fig. 3. Boxplots for similarity measurements. Correlation coefficient (CC), mutual information (MI) and structural similarity index (SSIM) are measured between the reconstructed and original tumor images, including well-differentiated (WD), moderately-differentiated (MD) and poorly-differentiated (PD) tumors. Red points correspond to outliers, defined by the cases outside the range $\left\lbrack {\mathrm{Q}1 - {1.5} \times \mathrm{{IQR}},\mathrm{Q}3 + {1.5} \times \mathrm{{IQR}}}\right\rbrack$ where $\mathrm{Q}1,\mathrm{Q}3$ and $\mathrm{{IQR}}$ denote the first quartile, third quartile and interquartile range, respectively.
112
+
113
+ § 3.3 EXPERIMENTS AND QUANTITATIVE RESULTS
114
+
115
+ The performance of the proposed method was quantitatively assessed using the three evaluation metrics, i.e., CC, MI and SSIM, between the reconstructed and original tumor images. Within the training set, 3-fold cross validation was performed to evaluate BenignGAN's capability of reconstructing benign tissue samples. Comparing the original and reconstructed RGB images, we achieved $0.7984 \pm 0.0570$ for CC, $0.6402 \pm 0.1026$ for MI and $0.4955 \pm 0.0491$ for SSIM. Using grayscale images (converted from RGB images), we obtained $0.8127 \pm 0.0568$ for CC, $0.6920 \pm 0.1104$ for MI and $0.5050 \pm 0.0514$ for SSIM. Subsequently, we trained BenignGAN on the entire training set and tested it on the evaluation set. The results are shown in Fig. 3 and Table 1. The results on BN patches in the evaluation set were similar to those of the 3-fold cross validation (Table 1), which confirms the ability of BenignGAN to reconstruct benign tissue images. Moreover, investigation of the results on the tumor image patches revealed that the similarity measurements between the original and reconstructed images are related to the degree of tumor differentiation (Fig. 3 and Table 1). The worse the tumor grade, the less similar the tumor is to the benign tissue. A similar trend was observed for all three evaluation metrics. ANOVA was further conducted on each of the three evaluation metrics to evaluate the significance of the difference in the similarity measures between the original and reconstructed images with regard to tumor grade. A statistically significant difference (p-value $< 10^{-5}$) was found for all three evaluation metrics using both RGB and grayscale images, suggesting that the difference between the reconstructed and original tumor images, with respect to the benign tissue, could serve as a means of analyzing tumors. In addition, no significant difference between color and grayscale images was observed, indicating that the observed trend is not simply due to the color difference between tumors.
116
+
117
+ The difference between MD and PD tumors was larger than the difference between WD and MD tumors. This may be ascribable to the presence of glands. Glands are present in many WD and MD tumors but absent in PD tumors. Reconstructing PD tumors would have been a bigger challenge for BenignGAN, since it was trained using benign tissues only, which in general contain plenty of glands. Although there was a downward trend in similarity, the three similarity measures overlapped between different tumor grades. This may be due to the intrinsic similarity between tumors. However, the specific meanings or biological causes of our observation cannot be identified without further histopathologic and/or biological studies. Moreover, since this study is conducted upon image patches, the sampling of image patches could have an effect on the results. A large-scale study should follow to further confirm our findings.
118
+
119
+ § 4 CONCLUSION
120
+
121
+ Herein, we presented a method of utilizing a cGAN to quantify the tissue characteristics relevant to tumor differentiation. The experimental results demonstrated that a cGAN is capable of learning the latent representation of the benign tissue and that, when applied to tumor images, its reconstruction ability varies with the tumor grade, suggesting that it could be utilized to quantitatively analyze and measure the degree of tumor differentiation. The proposed method is generic, and thus could be applied to different types of tissues and tumors. Providing an alternative means of analyzing tissues/tumors, we believe that this approach could aid in improving and reshaping current cancer pathology in both the clinic and research. Future work will entail comparing the proposed method to patient outcomes.