Daoze committed on
Commit 3111c91 · verified · 1 Parent(s): 37e1c69

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. papers/LOG/LOG 2022/LOG 2022 Conference/HRmby7yVVuF/Initial_manuscript_md/Initial_manuscript.md +701 -0
  2. papers/LOG/LOG 2022/LOG 2022 Conference/HRmby7yVVuF/Initial_manuscript_tex/Initial_manuscript.tex +303 -0
  3. papers/LOG/LOG 2022/LOG 2022 Conference/IKevTLt3rT/Initial_manuscript_md/Initial_manuscript.md +651 -0
  4. papers/LOG/LOG 2022/LOG 2022 Conference/IKevTLt3rT/Initial_manuscript_tex/Initial_manuscript.tex +361 -0
  5. papers/LOG/LOG 2022/LOG 2022 Conference/IP-TISJqfq/Initial_manuscript_md/Initial_manuscript.md +307 -0
  6. papers/LOG/LOG 2022/LOG 2022 Conference/IP-TISJqfq/Initial_manuscript_tex/Initial_manuscript.tex +248 -0
  7. papers/LOG/LOG 2022/LOG 2022 Conference/IXvfIex0mX6f/Initial_manuscript_md/Initial_manuscript.md +742 -0
  8. papers/LOG/LOG 2022/LOG 2022 Conference/IXvfIex0mX6f/Initial_manuscript_tex/Initial_manuscript.tex +262 -0
  9. papers/LOG/LOG 2022/LOG 2022 Conference/KQNsbAmJEug/Initial_manuscript_md/Initial_manuscript.md +175 -0
  10. papers/LOG/LOG 2022/LOG 2022 Conference/KQNsbAmJEug/Initial_manuscript_tex/Initial_manuscript.tex +127 -0
  11. papers/LOG/LOG 2022/LOG 2022 Conference/KUGwmnSdPV3/Initial_manuscript_md/Initial_manuscript.md +0 -0
  12. papers/LOG/LOG 2022/LOG 2022 Conference/KUGwmnSdPV3/Initial_manuscript_tex/Initial_manuscript.tex +288 -0
  13. papers/LOG/LOG 2022/LOG 2022 Conference/KgjU7K0yEgt/Initial_manuscript_md/Initial_manuscript.md +111 -0
  14. papers/LOG/LOG 2022/LOG 2022 Conference/KgjU7K0yEgt/Initial_manuscript_tex/Initial_manuscript.tex +165 -0
  15. papers/LOG/LOG 2022/LOG 2022 Conference/O7msz8Ou7o/Initial_manuscript_md/Initial_manuscript.md +403 -0
  16. papers/LOG/LOG 2022/LOG 2022 Conference/O7msz8Ou7o/Initial_manuscript_tex/Initial_manuscript.tex +135 -0
  17. papers/LOG/LOG 2022/LOG 2022 Conference/PTz0aXJp7A/Initial_manuscript_md/Initial_manuscript.md +380 -0
  18. papers/LOG/LOG 2022/LOG 2022 Conference/PTz0aXJp7A/Initial_manuscript_tex/Initial_manuscript.tex +179 -0
  19. papers/LOG/LOG 2022/LOG 2022 Conference/QBGYYu3l3dG/Initial_manuscript_md/Initial_manuscript.md +397 -0
  20. papers/LOG/LOG 2022/LOG 2022 Conference/QBGYYu3l3dG/Initial_manuscript_tex/Initial_manuscript.tex +257 -0
  21. papers/LOG/LOG 2022/LOG 2022 Conference/QDN0jSXuvtX/Initial_manuscript_md/Initial_manuscript.md +410 -0
  22. papers/LOG/LOG 2022/LOG 2022 Conference/QDN0jSXuvtX/Initial_manuscript_tex/Initial_manuscript.tex +306 -0
  23. papers/LOG/LOG 2022/LOG 2022 Conference/R8v95EwI7NL/Initial_manuscript_md/Initial_manuscript.md +142 -0
  24. papers/LOG/LOG 2022/LOG 2022 Conference/R8v95EwI7NL/Initial_manuscript_tex/Initial_manuscript.tex +104 -0
  25. papers/LOG/LOG 2022/LOG 2022 Conference/Ri2dzVt_a1h/Initial_manuscript_md/Initial_manuscript.md +354 -0
  26. papers/LOG/LOG 2022/LOG 2022 Conference/Ri2dzVt_a1h/Initial_manuscript_tex/Initial_manuscript.tex +185 -0
  27. papers/LOG/LOG 2022/LOG 2022 Conference/RqN8W3R76J/Initial_manuscript_md/Initial_manuscript.md +491 -0
  28. papers/LOG/LOG 2022/LOG 2022 Conference/RqN8W3R76J/Initial_manuscript_tex/Initial_manuscript.tex +368 -0
  29. papers/LOG/LOG 2022/LOG 2022 Conference/Sq9Orta9l5i/Initial_manuscript_md/Initial_manuscript.md +495 -0
  30. papers/LOG/LOG 2022/LOG 2022 Conference/Sq9Orta9l5i/Initial_manuscript_tex/Initial_manuscript.tex +135 -0
  31. papers/LOG/LOG 2022/LOG 2022 Conference/UiBiLRXR0G/Initial_manuscript_md/Initial_manuscript.md +330 -0
  32. papers/LOG/LOG 2022/LOG 2022 Conference/UiBiLRXR0G/Initial_manuscript_tex/Initial_manuscript.tex +324 -0
  33. papers/LOG/LOG 2022/LOG 2022 Conference/hM5UIWqZ7d/Initial_manuscript_md/Initial_manuscript.md +255 -0
  34. papers/LOG/LOG 2022/LOG 2022 Conference/hZ3b8CskgC/Initial_manuscript_md/Initial_manuscript.md +263 -0
  35. papers/LOG/LOG 2022/LOG 2022 Conference/hZ3b8CskgC/Initial_manuscript_tex/Initial_manuscript.tex +126 -0
  36. papers/LOG/LOG 2022/LOG 2022 Conference/kQsniwmGgF5/Initial_manuscript_md/Initial_manuscript.md +431 -0
  37. papers/LOG/LOG 2022/LOG 2022 Conference/kQsniwmGgF5/Initial_manuscript_tex/Initial_manuscript.tex +308 -0
  38. papers/LOG/LOG 2022/LOG 2022 Conference/kXe4Y0c4VqT/Initial_manuscript_md/Initial_manuscript.md +427 -0
  39. papers/LOG/LOG 2022/LOG 2022 Conference/kXe4Y0c4VqT/Initial_manuscript_tex/Initial_manuscript.tex +141 -0
  40. papers/LOG/LOG 2022/LOG 2022 Conference/kv4xUo5Pu6/Initial_manuscript_md/Initial_manuscript.md +479 -0
  41. papers/LOG/LOG 2022/LOG 2022 Conference/kv4xUo5Pu6/Initial_manuscript_tex/Initial_manuscript.tex +317 -0
  42. papers/LOG/LOG 2022/LOG 2022 Conference/kvwWjYQtmw/Initial_manuscript_md/Initial_manuscript.md +539 -0
  43. papers/LOG/LOG 2022/LOG 2022 Conference/kvwWjYQtmw/Initial_manuscript_tex/Initial_manuscript.tex +285 -0
  44. papers/LOG/LOG 2022/LOG 2022 Conference/lwx5gi4MIh/Initial_manuscript_md/Initial_manuscript.md +445 -0
  45. papers/LOG/LOG 2022/LOG 2022 Conference/lwx5gi4MIh/Initial_manuscript_tex/Initial_manuscript.tex +270 -0
  46. papers/LOG/LOG 2022/LOG 2022 Conference/m3aVA7ykn67/Initial_manuscript_md/Initial_manuscript.md +383 -0
  47. papers/LOG/LOG 2022/LOG 2022 Conference/m3aVA7ykn67/Initial_manuscript_tex/Initial_manuscript.tex +332 -0
  48. papers/LOG/LOG 2022/LOG 2022 Conference/mWzWvMxuFg1/Initial_manuscript_md/Initial_manuscript.md +0 -0
  49. papers/LOG/LOG 2022/LOG 2022 Conference/mWzWvMxuFg1/Initial_manuscript_tex/Initial_manuscript.tex +304 -0
  50. papers/LOG/LOG 2022/LOG 2022 Conference/n5tvDCQGloq/Initial_manuscript_md/Initial_manuscript.md +522 -0
papers/LOG/LOG 2022/LOG 2022 Conference/HRmby7yVVuF/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,701 @@
1
+ # Label-Wise Graph Convolutional Network for Heterophilic Graphs
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Graph Neural Networks (GNNs) have achieved remarkable performance in modeling graphs for various applications. However, most existing GNNs assume the graphs exhibit strong homophily in node labels, i.e., nodes with similar labels are connected in the graphs. They fail to generalize to heterophilic graphs where linked nodes may have dissimilar labels and attributes. Therefore, in this paper, we investigate a novel framework that performs well on graphs with either homophily or heterophily. More specifically, we propose a label-wise message passing mechanism to avoid the negative effects caused by aggregating dissimilar node representations and preserve the heterophilic contexts for representation learning. We further propose a bi-level optimization method to automatically select the model for graphs with homophily/heterophily. Theoretical analysis and extensive experiments demonstrate the effectiveness of our proposed framework for node classification on both homophilic and heterophilic graphs.
12
+
13
+ ## 1 Introduction
14
+
15
+ Graph-structured data is pervasive in the real world, e.g., in knowledge graphs, traffic networks, and social networks. Therefore, it is important to model graphs for downstream tasks such as traffic prediction [34], recommendation systems [12], and drug generation [3]. To capture the topology information in graph-structured data, Graph Neural Networks (GNNs) [30] adopt a message-passing mechanism which learns a node's representation by iteratively aggregating the representations of its neighbors. This enriches the node features and preserves the node attributes and local topology for various downstream tasks.
16
+
17
+ Despite the great success of GNNs in modeling graphs, there are concerns about their behavior on heterophilic graphs, where edges often link nodes that are dissimilar in attributes or labels. Specifically, existing works [37, 7] find that GNNs can fail to generalize to graphs with heterophily due to their implicit/explicit homophily assumption. For example, Graph Convolutional Networks (GCNs) are even outperformed by an MLP that ignores the graph structure on heterophilic website datasets [37]. However, a recent work [21] argues that the homophily assumption is not a necessity for GNNs. They show that GCN can work well on dense heterophilic graphs whose neighborhood patterns of different classes are distinguishable. But their analysis and conclusion are limited to heterophilic graphs under strict conditions, and fail to show the relation between heterophily levels and the performance of GNNs. Thus, in Sec. 3, we conduct a thorough theoretical and empirical analysis on GCN to investigate the impact of heterophily levels, which covers all the aforementioned observations. As Theorem 1 and Fig. 1 show, we find that the performance of GCN first decreases and then increases as the heterophily level increases. Moreover, the aggregation in GCN can even lead to non-discriminative representations under certain conditions.
18
+
19
+ Though heterophilic graphs challenge existing GNNs, we observe that the heterophilic neighborhood context itself provides useful information. Generally, two nodes of the same class tend to have similar heterophilic neighborhood contexts, while two nodes of different classes are more likely to have different heterophilic neighborhood contexts, which is verified in Appendix F. Thus, a heterophilic context-preserving mechanism can lead to more discriminative representations. One promising way to preserve the heterophilic context is to conduct label-wise aggregation, i.e., to separately aggregate the neighbors in each class. In this way, we can summarize the heterophilic neighbors belonging to each class into an embedding to preserve the local context information for representation learning. As shown in the example in Fig. 2, with label-wise aggregation, node ${v}_{A}$ will be represented as $\left\lbrack {{1.0},{5.5},{2.0}\text{, non-existence}}\right\rbrack$ , in the order of ${v}_{A}$ 's own attribute and its blue, green, and orange neighbors, respectively. Compared with ${v}_{B}$ , ${v}_{A}$ 's representations of the central node and the neighborhood context differ significantly, while with the aggregation in GCN the obtained representations of the two nodes are rather similar. In other words, label-wise aggregation yields more discriminative features on heterophilic graphs, which is also verified by our analysis in Theorem 2. Though promising, there is no existing work exploring label-wise message passing to address the challenge of heterophilic graphs.
20
+
21
+ Therefore, in this paper, we investigate a novel label-wise aggregation scheme for graph convolution to facilitate node classification on heterophilic graphs. In essence, we are faced with two challenges: (i) label-wise aggregation needs the label of each node, while for node classification we are only given a small set of labeled nodes. How can we adopt label-wise graph convolution on sparsely labeled heterophilic graphs to facilitate node classification? (ii) In practice, the homophily levels of the given graphs can vary and are often unknown. For homophilic graphs, the label-wise graph convolution might not work as well as previous GNNs embedded with the homophily assumption. How can we ensure the performance on both heterophilic and homophilic graphs? In an attempt to address these challenges, we propose a novel framework, Label-Wise GCN (LW-GCN). LW-GCN adopts a pseudo label predictor to predict pseudo labels and designs a novel label-wise message passing to preserve the heterophilic contexts with pseudo labels. To handle both heterophilic and homophilic graphs, apart from the label-wise message passing GNN, LW-GCN also utilizes a GNN for homophilic graphs and adopts bi-level optimization on the validation data to automatically select the better model for the given graph. The main contributions are:
22
+
23
+ - We theoretically show the impact of heterophily levels on GCN and demonstrate the potential limitations of GCN in learning on heterophilic graphs;
24
+
25
+ - We design a label-wise graph convolution to preserve local context in heterophilic graphs, whose effectiveness is supported by our theoretical and empirical analysis;
26
+
27
+ - We propose a novel framework LW-GCN, which deploys a pseudo label predictor and an automatic model selection module to achieve label-wise aggregation on sparsely labeled graphs and ensure the performance on both heterophilic and homophilic graphs; and
28
+
29
+ - Extensive experiments on real-world graphs with heterophily and homophily are conducted to demonstrate the effectiveness of LW-GCN.
30
+
31
+ ## 2 Related Work
32
+
33
+ Graph neural networks (GNNs) have shown great success in various applications such as social networks [12,8], financial transaction networks [28,10] and traffic networks [34, 35]. Based on the definition of the graph convolution, GNNs can be categorized into two categories, i.e., spectral-based $\left\lbrack {4,9,{17},{19}}\right\rbrack$ and spatial-based $\left\lbrack {{27},{32},1}\right\rbrack$ . Spectral-based GNN models are defined according to spectral graph theory. Bruna et al. [4] first generalize the convolution operation to graph-structured data from the spectral domain. GCN [17] simplifies the graph convolution by a first-order approximation. Spatial-based graph convolution aggregates the information of neighboring nodes [22,12,5]. For instance, GAT [27] applies a spatial graph convolution that incorporates the attention mechanism to facilitate information aggregation. Graph Isomorphism Network (GIN) [31] is proposed to learn more powerful representations of graph structures. Recently, to learn better node representations, deep graph neural networks $\left\lbrack {6,{18},{20}}\right\rbrack$ and self-supervised learning methods $\left\lbrack {{26},{16},{38},{24},{33}}\right\rbrack$ have been investigated.
34
+
35
+ However, the aforementioned methods are generally designed based on the homophily assumption of the graph. A low homophily level in some real-world graphs can largely degrade their performance [37]. Some initial efforts $\left\lbrack {{23},2,{14},{37},{36},7,{13}}\right\rbrack$ have been taken to address the problem of heterophilic graphs. For example, H2GCN [37] investigates three key designs for GNNs on heterophilic graphs. SimP-GCN [14] adopts a node similarity preserving mechanism to handle graphs with heterophily. FAGCN [2] adaptively aggregates low-frequency and high-frequency signals from neighbors to learn representations for graphs with heterophily. GPR-GNN [7] proposes a generalized PageRank GNN architecture that can learn positive/negative weights for the representations after different steps of propagation to mitigate the graph heterophily issue. CPGNN [36] adopts a compatibility matrix to model the heterophily in the graph. Recently, BM-GCN [13] proposes to utilize pseudo labels in the convolutional operation. Specifically, the pseudo labels are used to obtain a block similarity matrix to re-weight the edges in heterophilic graphs. Then, node pairs belonging to different label combinations can have different information exchange. Our LW-GCN is inherently different from these methods: (i) we propose a novel label-wise graph convolution to better capture the neighbors' information in heterophilic graphs; and (ii) automatic model selection is deployed to achieve state-of-the-art performance on both homophilic and heterophilic graphs.
36
+
37
+ ## 3 Preliminaries
38
+
39
+ In this section, we first present the notations and definitions, followed by an introduction of the GCN design. We then conduct a theoretical analysis to investigate the impact of heterophily on GCN.
40
+
41
+ ### 3.1 Notations and Definition
42
+
43
+ Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ be an attributed graph, where $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{N}}\right\}$ is the set of $N$ nodes, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges, and $\mathbf{X} = \left\{ {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{N}}\right\}$ is the set of node attributes. $\mathbf{A} \in {\mathbb{R}}^{N \times N}$ represents the adjacency matrix of the graph $\mathcal{G}$ , where ${\mathbf{A}}_{ij} = 1$ indicates an edge between nodes ${v}_{i}$ and ${v}_{j}$ ; otherwise, ${\mathbf{A}}_{ij} = 0$ . In the node classification task, each node belongs to one of $C$ classes. We use ${y}_{i}$ to denote the label of node ${v}_{i}$ . Graphs can be split into homophilic and heterophilic graphs based on how likely edges are to link nodes of the same class. The homophily level is measured by the homophily ratio.
+
+ Definition 1 (Homophily Ratio) The homophily ratio $h$ is the fraction of edges in a graph that connect nodes of the same class, calculated as $h = \frac{\left| \left\{ \left( {v}_{i},{v}_{j}\right) \in \mathcal{E} : {y}_{i} = {y}_{j}\right\} \right| }{\left| \mathcal{E}\right| }$ .
44
+
45
+ When the homophily ratio is small, most of the edges link nodes from different classes, which indicates a heterophilic graph. In homophilic graphs, connected nodes are more likely to belong to the same class, which leads to a homophily ratio close to 1.
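For concreteness, a minimal sketch of computing the homophily ratio of Definition 1; the `edges` and `labels` arrays below are hypothetical inputs, not part of the paper's datasets.

```python
import numpy as np

def homophily_ratio(edges: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of edges whose two endpoints share the same class label (Definition 1)."""
    src, dst = edges[:, 0], edges[:, 1]
    return float(np.mean(labels[src] == labels[dst]))

# Toy example: 4 nodes with labels [0, 0, 1, 1] and 3 edges.
edges = np.array([[0, 1], [1, 2], [2, 3]])
labels = np.array([0, 0, 1, 1])
print(homophily_ratio(edges, labels))  # 2 of 3 edges are intra-class -> ~0.667
```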
46
+
47
+ ### 3.2 How Does Heterophily Affect GCN?
48
+
49
+ GCN [17] is one of the most widely used graph neural networks. The operation in each layer of GCN can be written as:
52
+
53
+ $$
54
+ {\mathbf{H}}^{\left( k + 1\right) } = \sigma \left( {\widetilde{\mathbf{A}}{\mathbf{H}}^{\left( k\right) }{\mathbf{W}}^{\left( k\right) }}\right) , \tag{1}
55
+ $$
56
+
57
+ where ${\mathbf{H}}^{\left( k\right) }$ is the node representation matrix output by the $k$ -th layer and $\widetilde{\mathbf{A}}$ is the normalized adjacency matrix. Generally, the symmetric normalization ${\mathbf{D}}^{-\frac{1}{2}}\mathbf{A}{\mathbf{D}}^{-\frac{1}{2}}$ or the row normalization ${\mathbf{D}}^{-1}\mathbf{A}$ is used as $\widetilde{\mathbf{A}}$ , where $\mathbf{D}$ is a diagonal matrix with ${\mathbf{D}}_{ii} = \mathop{\sum }\limits_{j}{\mathbf{A}}_{ij}$ . The adjacency matrix can be augmented with self-loops. $\sigma$ is an activation function such as ReLU. A single GCN layer can be split into two steps. First, the layer averages the neighbor features with $\mathbf{Z} = \widetilde{\mathbf{A}}\mathbf{X}$ . Then, a non-linear transformation $\sigma \left( \mathbf{{ZW}}\right)$ is applied to obtain intermediate features or final predictions. The step of averaging the neighbor features benefits node classification when the neighbors have similar features. However, for heterophilic graphs, mixing neighbors that possess different features may result in poor representations for node classification. This is justified by the following theorem, which thoroughly analyzes the impact of the heterophily level on the linear separability of the representations after one step of aggregation in GCN.
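To make the two-step view concrete, the following sketch (an illustration under simplifying assumptions, e.g., a dense adjacency matrix and random weights, not a reference GCN implementation) performs the averaging step $\mathbf{Z} = \mathbf{D}^{-1}\mathbf{A}\mathbf{X}$ followed by the non-linear transformation $\sigma(\mathbf{Z}\mathbf{W})$.

```python
import torch

def gcn_layer(A: torch.Tensor, X: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """One GCN layer, split into the two steps described above.

    A: [N, N] dense adjacency matrix (self-loops may be added beforehand).
    X: [N, F] node feature matrix.
    W: [F, F_out] weight matrix.
    """
    deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)  # D_ii = sum_j A_ij
    Z = (A @ X) / deg                                # step 1: neighbor averaging, Z = D^{-1} A X
    return torch.relu(Z @ W)                         # step 2: non-linear transformation sigma(Z W)

# Toy usage with a random symmetric adjacency matrix.
N, F_in, F_out = 5, 8, 4
A = (torch.rand(N, N) < 0.4).float()
A = ((A + A.T) > 0).float()
X, W = torch.randn(N, F_in), torch.randn(F_in, F_out)
print(gcn_layer(A, X, W).shape)  # torch.Size([5, 4])
```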
58
+
59
+ Assumptions. We first discuss the assumptions on the heterophilic graphs: (i) Following previous works [37], the graph $\mathcal{G}$ is considered a $d$ -regular graph, i.e., each node has $d$ neighbors. For each node $v$ , its neighbors' features and class labels $\left\{ {{y}_{u} : u \in \mathcal{N}\left( v\right) }\right\}$ are conditionally independent given ${y}_{v}$ , with $P\left( {{y}_{u} = {y}_{v} \mid {y}_{v}}\right) = h$ and $P\left( {{y}_{u} = y \mid {y}_{v}}\right) = \frac{1 - h}{C - 1},\forall y \neq {y}_{v}$ , and the dimensions of the node features are independent of each other; (ii) For nodes in different classes, their heterophilic neighbors' features follow different distributions. Specifically, let ${\mathcal{N}}_{k}\left( v\right)$ denote node $v$ 's neighbors of class $k$ . For two nodes $v$ and $s$ in classes $i$ and $j\left( {i \neq j}\right)$ , the features of their heterophilic neighbors ${\mathcal{N}}_{k}\left( v\right)$ and ${\mathcal{N}}_{k}\left( s\right)$ in class $k \in \{ 1,\ldots , C\}$ follow two different normal distributions $N\left( {{\mathbf{\mu }}_{ik},{\mathbf{\sigma }}_{ik}}\right)$ and $N\left( {{\mathbf{\mu }}_{jk},{\mathbf{\sigma }}_{jk}}\right)$ , where ${\mathbf{\mu }}_{ik}$ and ${\mathbf{\mu }}_{jk}$ represent the means and ${\mathbf{\sigma }}_{ik}$ and ${\mathbf{\sigma }}_{jk}$ denote the standard deviations. Intuitively, though nodes in ${\mathcal{N}}_{k}\left( v\right)$ and ${\mathcal{N}}_{k}\left( s\right)$ belong to the same class $k$ , they are connected to nodes of different classes because of their different properties. For example, in a molecule, atoms of the same class can exhibit different features when they are linked to different atoms. Therefore, this assumption is reasonable, and it is also verified by the empirical analysis on large real-world heterophilic graphs in Appendix F. Let ${\mathbf{\sigma }}_{i} = \sqrt{\frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}\left( {{\mathbf{\mu }}_{ik} - {\overline{\mathbf{\mu }}}_{i}}\right) \odot \left( {{\mathbf{\mu }}_{ik} - {\overline{\mathbf{\mu }}}_{i}}\right) }$ , where ${\overline{\mathbf{\mu }}}_{i} = \frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}{\mathbf{\mu }}_{ik}$ and $\odot$ represents the element-wise product. We then have the following theorem.
60
+
61
+ Theorem 1 For an attributed graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ that follows the above assumptions in Sec. 3.2, if $\left| {{\mathbf{\mu }}_{ii} - {\mathbf{\mu }}_{jj}}\right| > \left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right|$ and ${\mathbf{\sigma }}_{i} > {\mathbf{\sigma }}_{ii},\forall k \in \{ 1,\ldots C\}$ , then as the homophily ratio $h$ decreases, the discriminability of the representations obtained by the averaging process in the GCN layer, i.e., $\mathbf{Z} = {\mathbf{D}}^{-1}\mathbf{A}\mathbf{X}$ , will first decrease until $h = \frac{1}{C}$ and then increase. When $h = \frac{1}{C}$ and $d < \frac{{\sigma }_{i}^{2}}{{\left| {\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}\right| }^{2}}$ , the representations after the averaging process will be nearly non-discriminative.
62
+
63
+ The detailed proof can be found in Appendix C. The conditions in this theorem generally hold. Since the intra-class distance is often much smaller than the inter-class distance, $\left| {{\mathbf{\mu }}_{ii} - {\mathbf{\mu }}_{jj}}\right| > \left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right|$ is generally met in real-world graphs. As for ${\mathbf{\sigma }}_{i}$ , it computes the standard deviation of the mean neighbor features across different classes. As a result, ${\mathbf{\sigma }}_{i}$ is usually much larger than ${\mathbf{\sigma }}_{ii}$ and $\left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right|$ . Therefore, Theorem 1 generally holds for real-world graphs. We can observe from Theorem 1 that (i) heterophily levels in a certain range will largely degrade the performance of GCN; and (ii) GCN will be more negatively affected on heterophilic graphs with lower node degrees. Though our analysis is based on GCN, it can be easily extended to GNNs that average neighbor representations in the aggregation (e.g., GraphSage [12], APPNP [18], and SGC [29]). We leave the extension of the analysis to more complex message-passing mechanisms as future work.
64
+
65
+ To empirically verify the above theoretical analysis, we synthesize graphs with different homophily ratios and node degrees by deleting/adding edges in the Crocodile graph. The detailed graph generation process can be found in Appendix E.1. The results of GCN and GAT [27] on graphs with various node degrees are shown in Fig. 1. We observe that (i) as the homophily ratio decreases, the performance of GCN keeps decreasing until $h$ is around ${0.2}\left( {h \approx \frac{1}{C}}\right)$ , after which the performance starts to increase; and (ii) when $h$ is around $\frac{1}{C}$ , the performance can be very poor and even much worse than MLP on graphs with low node degree. These observations are consistent with our Theorem 1. This trend has also been reported in [21]. However, their theoretical analysis focuses on proving the effectiveness of GCN on heterophilic graphs with discriminative neighborhoods, which can only explain the observation when $h < \frac{1}{C}$ . By contrast, our theoretical analysis can explain the whole trend of GCN performance w.r.t. the homophily ratio. This empirical analysis further demonstrates the general limitations of current GNN models in learning on graphs with heterophily.
66
+
67
+ ![01963ede-b51e-7ad0-8efc-12886ed83ec2_3_1083_819_401_280_0.jpg](images/01963ede-b51e-7ad0-8efc-12886ed83ec2_3_1083_819_401_280_0.jpg)
68
+
69
+ Figure 1: Impact of heterophily levels on GCN and GAT.
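For intuition about the setup behind Theorem 1 and this experiment, the sketch below samples one node's neighbor labels and features under the Sec. 3.2 generative assumptions ($d$-regular neighborhoods, neighbor labels equal to the centre label with probability $h$, and class/context-dependent Gaussian features). All names and constants are illustrative, and this is not the graph-synthesis procedure of Appendix E.1.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighborhood(y_v, h, d, C, mu, sigma):
    """Sample neighbor labels/features of one node under the Sec. 3.2 assumptions.

    y_v:   class of the centre node.
    h:     homophily ratio; a neighbor keeps label y_v with probability h.
    d:     node degree (d-regular assumption).
    C:     number of classes.
    mu:    [C, C, F] array; mu[i, k] is the feature mean of class-k neighbors of class-i nodes.
    sigma: standard deviation shared by all neighbor-feature Gaussians.
    """
    probs = np.full(C, (1.0 - h) / (C - 1))
    probs[y_v] = h                                   # P(y_u = y_v | y_v) = h
    neighbor_labels = rng.choice(C, size=d, p=probs)
    feats = np.stack([rng.normal(mu[y_v, k], sigma) for k in neighbor_labels])
    return neighbor_labels, feats

C, F, d = 3, 4, 10
mu = rng.normal(size=(C, C, F))                      # class/context-dependent means
labels, feats = sample_neighborhood(y_v=0, h=1.0 / C, d=d, C=C, mu=mu, sigma=0.5)
print(labels)
print(feats.mean(axis=0))                            # plain averaging mixes the per-class means
```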
70
+
71
+ ### 3.3 Problem Definition
72
+
73
+ Based on the analysis above, we can infer that current GNNs are effective on graphs with high homophily, while they are challenged by graphs with heterophily. In the real world, we are usually given graphs with various homophily levels. In addition, the graphs are often sparsely labeled, and due to the lack of labels, the homophily ratio of the given graph is generally unknown. Thus, we aim to develop a framework that works for semi-supervised node classification on graphs with any homophily level. The problem is defined as:
74
+
75
+ Problem 1 Given an attributed graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ with a set of labels ${\mathcal{Y}}_{L}$ for the node set ${\mathcal{V}}_{L} \subset \mathcal{V}$ , where the homophily ratio $h$ of $\mathcal{G}$ is unknown, we aim to learn a GNN which accurately predicts the labels of the unlabeled nodes, i.e., $f\left( {\mathcal{G},{\mathcal{Y}}_{L}}\right) \rightarrow {\widehat{\mathcal{Y}}}_{U}$ , where $f$ is the function we aim to learn and ${\widehat{\mathcal{Y}}}_{U}$ is the set of predicted labels for the unlabeled nodes.
76
+
77
+ ## 4 Methodology
78
+
79
+ As the analysis in Sec. 3 shows, the aggregation process in GCN mixes neighbors with various labels/distributions in heterophilic graphs, resulting in non-discriminative representations of the local context. Motivated by this, we propose to adopt label-wise aggregation in graph convolution, i.e., neighbors in the same class are separately aggregated, to preserve the heterophilic context. Next, we give the details of the label-wise aggregation along with the theoretical analysis that verifies its capability of obtaining distinguishable representations for heterophilic contexts. Then, we present how to apply label-wise graph convolution on sparsely labeled graphs and how to ensure performance on both heterophilic and homophilic graphs.
80
+
81
+ ![01963ede-b51e-7ad0-8efc-12886ed83ec2_4_323_208_1138_355_0.jpg](images/01963ede-b51e-7ad0-8efc-12886ed83ec2_4_323_208_1138_355_0.jpg)
82
+
83
+ Figure 2: The illustration of label-wise aggregation and the overall framework of our LW-GCN.
84
+
85
+ ### 4.1 Label-Wise Graph Convolution
86
+
87
+ In heterophilic graphs, we observe that the heterophilic neighbor context itself provides useful information. Let ${\mathcal{N}}_{k}\left( v\right)$ denote node $v$ ’s neighbors of label class $k$ . As shown in Appendix F, for two nodes $u$ and $v$ of the same class, i.e., ${y}_{u} = {y}_{v}$ , the features of nodes in ${\mathcal{N}}_{k}\left( u\right)$ are likely to be similar to that of nodes in ${\mathcal{N}}_{k}\left( v\right)$ ; while for nodes $u$ and $s$ with ${y}_{u} \neq {y}_{s}$ , the features of nodes in ${\mathcal{N}}_{k}\left( u\right)$ are likely to be different from that in ${\mathcal{N}}_{k}\left( s\right)$ . Therefore, for each node $v \in \mathcal{V}$ , we propose to summarize the information of ${\mathcal{N}}_{k}\left( v\right)$ by label-wise aggregation to capture useful heterophilic context. Let ${\mathbf{a}}_{v, k}$ be the aggregated representation of neighbors in class $k$ , the process of obtaining representation for heterophilic context with label-wise aggregation can be formally written as:
88
+
89
+ $$
90
+ {\mathbf{a}}_{v, k} = \mathop{\sum }\limits_{{u \in {\mathcal{N}}_{k}\left( v\right) }}\frac{1}{\left| {\mathcal{N}}_{k}\left( v\right) \right| }{\mathbf{x}}_{u},\;{\mathbf{h}}_{v}^{c} = \operatorname{CONCAT}\left( {{\mathbf{a}}_{v,1},\ldots ,{\mathbf{a}}_{v, C}}\right) , \tag{2}
91
+ $$
92
+
93
+ where $C$ is the number of classes and ${\mathbf{h}}_{v}^{c}$ denotes the representation of the neighborhood context. As shown in Eq. (2), concatenation is applied so that the heterophilic context is preserved in the context representation. When node $v$ has no neighbor belonging to class $k$ , a zero embedding is assigned for class $k$ . We can then augment the representation of the centered node with the context representation, following the general design of GNNs. Specifically, we concatenate the context representation ${\mathbf{h}}_{v}^{c}$ and the centered node representation ${\mathbf{x}}_{v}$ , followed by a non-linear transformation:
94
+
95
+ $$
96
+ {\mathbf{h}}_{v} = \sigma \left( {\mathbf{W} \cdot \operatorname{CONCAT}\left( {{\mathbf{x}}_{v},{\mathbf{h}}_{v}^{c}}\right) }\right) , \tag{3}
97
+ $$
98
+
99
+ where $\mathbf{W}$ denotes the learnable parameters in the label-wise graph convolution and $\sigma$ denotes the activation function such as ReLU.
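A minimal sketch of one layer of label-wise graph convolution, i.e., Eq. (2) followed by Eq. (3); dense tensors and an explicit adjacency-list loop are used for clarity, the variable names are ours, and classes with no neighbors receive the zero embedding mentioned above.

```python
import torch

def label_wise_layer(adj, X, labels, W, C):
    """One label-wise graph convolution layer, Eq. (2) followed by Eq. (3).

    adj:    adjacency list; adj[v] is the list of v's neighbors.
    X:      [N, F] node features.
    labels: [N] (pseudo) labels used to group neighbors.
    W:      [(C + 1) * F, F_out] weights applied to CONCAT(x_v, a_{v,1}, ..., a_{v,C}).
    """
    N, F = X.shape
    context = torch.zeros(N, C, F)                    # a_{v,k}; stays zero if v has no class-k neighbor
    for v in range(N):
        for k in range(C):
            idx = [u for u in adj[v] if labels[u].item() == k]
            if idx:
                context[v, k] = X[idx].mean(dim=0)    # Eq. (2): mean over class-k neighbors
    h = torch.cat([X, context.reshape(N, C * F)], dim=1)  # CONCAT(x_v, h_v^c)
    return torch.relu(h @ W)                               # Eq. (3)

# Toy usage: 4 nodes, 2 classes.
X = torch.randn(4, 5)
adj = [[1, 2], [0, 3], [0], [1]]
labels = torch.tensor([0, 1, 1, 0])
W = torch.randn((2 + 1) * 5, 8)
print(label_wise_layer(adj, X, labels, W, C=2).shape)  # torch.Size([4, 8])
```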
100
+
101
+ In this section, we further prove the superiority of label-wise graph convolution in learning discriminative representations for heterophilic context by the following theorem.
102
+
103
+ Theorem 2 We consider an attributed graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ that follows the aforementioned assumptions in Sec. 3.2. If $\left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right| > \sqrt{\frac{C}{d}}{\mathbf{\sigma }}_{ik},\forall k \in \{ 1,\ldots , C\}$ , the heterophilic context representation ${\mathbf{h}}_{v}^{c}$ obtained by the label-wise aggregation in Eq. (2) will keep its discriminability regardless of the value of the homophily ratio $h$ .
104
+
105
+ The detailed proof is presented in Appendix D. The difference between the groups of neighbors is naturally larger than the intra-group variance. Since $\sqrt{\frac{C}{d}}$ is usually small (e.g. around 1.8 in the Texas graph), the condition $\left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right| > \sqrt{\frac{C}{d}}{\mathbf{\sigma }}_{ik}$ is generally satisfied in real-world scenarios. We also adopt the label-wise graph convolution on the synthetic graphs with different homophily ratios to empirically show its effectiveness. The results can be found in Appendix E.2.
106
+
107
+ ### 4.2 LW-GCN: A Unified Framework for Graphs with Homophily or Heterophily
108
+
109
+ Though the analysis in Sec. 4.1 proves the effectiveness of label-wise graph convolution in processing graphs with heterophily, there are still two major challenges for semi-supervised node classification on graphs with arbitrary heterophily levels: (i) how to conduct label-wise graph convolution on heterophilic graphs with a small number of labeled nodes; and (ii) how to make it work for both heterophilic and homophilic graphs. To address these challenges, we propose a novel framework LW-GCN, which is illustrated in Fig. 2. LW-GCN is composed of an MLP-based pseudo label predictor ${f}_{P}$ , a GNN ${f}_{C}$ using label-wise graph convolution, a GNN ${f}_{G}$ for homophilic graphs, and an automatic model selection module. The predictor ${f}_{P}$ takes the node attributes as input to give pseudo labels. ${f}_{C}$ utilizes the estimated pseudo labels from ${f}_{P}$ to conduct label-wise graph convolution on $\mathcal{G}$ for node classification. To ensure the performance on graphs with any homophily level, LW-GCN also trains ${f}_{G}$ , i.e., a GNN for homophilic graphs, and can automatically select the model for graphs with unknown homophily ratio. For the model selection module, a bi-level optimization on the validation set is applied to learn the weights for model selection. Next, we give the details of each component.
110
+
111
+ #### 4.2.1 Pseudo Label Prediction.
112
+
113
+ In label-wise graph convolution, neighbors in different classes are separately aggregated to update node representations. However, only a small number of nodes are provided with labels. Thus, a pseudo label predictor ${f}_{P}$ is deployed to estimate labels for label-wise aggregation. Specifically, an MLP is utilized to obtain the pseudo label of node $v$ as ${\widehat{\mathbf{y}}}_{v}^{P} = \operatorname{MLP}\left( {\mathbf{x}}_{v}\right)$ , where ${\mathbf{x}}_{v}$ is the attribute vector of node $v$ . Note that we use an MLP as the predictor because the message passing of GNNs may lead to poor predictions on heterophilic graphs. The loss function for training ${f}_{P}$ is:
114
+
115
+ $$
116
+ \mathop{\min }\limits_{{\theta }_{P}}{\mathcal{L}}_{P} = \frac{1}{\left| {\mathcal{V}}_{\text{train }}\right| }\mathop{\sum }\limits_{{v \in {\mathcal{V}}_{\text{train }}}}l\left( {{\widehat{y}}_{v}^{P},{y}_{v}}\right) , \tag{4}
117
+ $$
118
+
119
+ where ${\mathcal{V}}_{\text{train }}$ is the set of labeled nodes in the training set, ${y}_{v}$ denotes the true label of node $v$ , ${\theta }_{P}$ represents the parameters of the predictor ${f}_{P}$ , and $l\left( \cdot \right)$ is the cross entropy loss.
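A hedged sketch of the pseudo label predictor ${f}_{P}$: an MLP trained with cross-entropy on the labeled training nodes only, as in Eq. (4). The hidden size, optimizer, and number of epochs below are illustrative choices rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Toy data: 100 nodes, 16-dim attributes, 3 classes, 20 labeled training nodes.
N, F, C = 100, 16, 3
X = torch.randn(N, F)
y = torch.randint(0, C, (N,))
train_idx = torch.arange(20)

f_P = nn.Sequential(nn.Linear(F, 64), nn.ReLU(), nn.Linear(64, C))  # MLP predictor
opt = torch.optim.Adam(f_P.parameters(), lr=1e-2, weight_decay=5e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):                      # minimise L_P over the labeled training nodes (Eq. (4))
    opt.zero_grad()
    loss = loss_fn(f_P(X[train_idx]), y[train_idx])
    loss.backward()
    opt.step()

pseudo_labels = f_P(X).argmax(dim=1)          # pseudo label for every node, later used in Eq. (5)
print(pseudo_labels[:10])
```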
120
+
121
+ #### 4.2.2 Architecture of LW-GCN for Heterophilic Graphs.
122
+
123
+ With ${f}_{P}$ , we can get pseudo labels ${\widehat{\mathcal{Y}}}_{U}^{P}$ for the unlabeled nodes ${\mathcal{V}}_{U} = \mathcal{V} \smallsetminus {\mathcal{V}}_{L}$ . Combining them with the provided ${\mathcal{Y}}_{L}$ , we have the labels ${\mathcal{Y}}^{P} = {\widehat{\mathcal{Y}}}_{U}^{P} \cup {\mathcal{Y}}_{L}$ necessary for label-wise aggregation in Eq. (2). Then, node representations can be updated with the heterophilic context by Eq. (3). Multiple layers of label-wise graph convolution can be applied to incorporate more hops of neighbors in representation learning. The process of one layer of label-wise graph convolution with pseudo labels can be rewritten as:
124
+
125
+ $$
126
+ {\mathbf{a}}_{v, k}^{\left( l\right) } = \mathop{\sum }\limits_{{u \in {\mathcal{N}}_{k}^{P}\left( v\right) }}\frac{1}{\left| {\mathcal{N}}_{k}^{P}\left( v\right) \right| }{\mathbf{h}}_{u}^{\left( l\right) },\;{\mathbf{h}}_{v}^{\left( l + 1\right) } = \sigma \left( {{\mathbf{W}}^{\left( l\right) } \cdot \operatorname{CONCAT}\left( {{\mathbf{h}}_{v}^{\left( l\right) },{\mathbf{a}}_{v,1}^{\left( l\right) },\ldots ,{\mathbf{a}}_{v, C}^{\left( l\right) }}\right) }\right) , \tag{5}
127
+ $$
128
+
129
+ where ${\mathcal{N}}_{k}^{P}\left( v\right) = \left\{ {u : \left( {v, u}\right) \in \mathcal{E} \land {\widehat{y}}_{u}^{P} = k}\right\}$ stands for node $v$ 's neighbors with estimated label $k$ , and ${\mathbf{h}}_{v}^{\left( l\right) }$ is the representation of node $v$ after the $l$ -th layer of label-wise graph convolution, with ${\mathbf{h}}_{v}^{\left( 0\right) } = {\mathbf{x}}_{v}$ . In heterophilic graphs, different hops of neighbors may exhibit different distributions, which can provide useful information for node classification. Therefore, the final node prediction is obtained by combining the intermediate representations of the model with $K$ layers:
130
+
131
+ $$
132
+ {\widehat{\mathbf{y}}}_{v}^{C} = \operatorname{softmax}\left( {{\mathbf{W}}_{C} \cdot \operatorname{COMBINE}\left( {{\mathbf{h}}_{v}^{\left( 1\right) },\ldots ,{\mathbf{h}}_{v}^{\left( K\right) }}\right) }\right) , \tag{6}
133
+ $$
134
+
135
+ where ${\mathbf{W}}_{C}$ is a learnable weight matrix and ${\widehat{\mathbf{y}}}_{v}^{C}$ is the predicted label probability vector of node $v$ . Various operations such as max-pooling and concatenation [32] can be applied as the COMBINE function.
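The stacking in Eq. (5) and a concatenation-based COMBINE of Eq. (6) can be sketched as follows; dense tensors and toy shapes are used, `label_wise_agg` repeats the per-class mean aggregation from the earlier sketch, and all names are illustrative.

```python
import torch

def label_wise_agg(adj, H, labels, C):
    """Per-class mean aggregation of Eq. (5); returns an [N, C, F] context tensor."""
    N, F = H.shape
    ctx = torch.zeros(N, C, F)
    for v in range(N):
        for k in range(C):
            idx = [u for u in adj[v] if labels[u].item() == k]
            if idx:
                ctx[v, k] = H[idx].mean(dim=0)
    return ctx

def lwgcn_forward(adj, X, pseudo_labels, Ws, W_C, C):
    """K layers of label-wise convolution (Eq. (5)) combined by concatenation (Eq. (6))."""
    H, layer_outputs = X, []
    for W in Ws:                                           # one weight matrix per layer
        ctx = label_wise_agg(adj, H, pseudo_labels, C)
        H = torch.relu(torch.cat([H, ctx.reshape(H.shape[0], -1)], dim=1) @ W)
        layer_outputs.append(H)
    logits = torch.cat(layer_outputs, dim=1) @ W_C         # COMBINE = concatenation
    return torch.softmax(logits, dim=1)

# Toy usage: 4 nodes, 2 classes, K = 2 layers of hidden size 8.
X = torch.randn(4, 5)
adj = [[1, 2], [0, 3], [0], [1]]
pseudo = torch.tensor([0, 1, 1, 0])
Ws = [torch.randn((2 + 1) * 5, 8), torch.randn((2 + 1) * 8, 8)]
W_C = torch.randn(2 * 8, 2)
print(lwgcn_forward(adj, X, pseudo, Ws, W_C, C=2))         # [4, 2] class probabilities
```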
136
+
137
+ #### 4.2.3 Automatic Model Selection
138
+
139
+ In heterophilic graphs, the homophily ratio is very small and can even be around 0.2 [23]. With a reasonable pseudo label predictor, label-wise aggregation with pseudo labels mixes much less noise than the general GNN aggregation. In contrast, for homophilic graphs such as citation networks, the homophily ratios are close to 1. In this situation, directly aggregating all the neighbors may introduce less noise into the representations than label-wise aggregation, since the pseudo labels contain noise. Therefore, it is necessary to determine whether to apply the label-wise graph convolution or a state-of-the-art GNN for homophilic graphs. One straightforward way is to select the model based on the homophily ratio. However, graphs are generally sparsely labeled, which makes it difficult to estimate the real homophily ratio. To address this problem, we propose to utilize the validation set to automatically select the model.
140
+
141
+ In the model selection module, we combine predictions of the label-wise aggregation model for heterophilic graphs and traditional GNN models for homophilic graphs. Predictions from the GNN ${f}_{G}$ for homophilic graphs are given by ${\widehat{\mathcal{Y}}}^{G} = \operatorname{GNN}\left( {\mathbf{A},\mathbf{X}}\right)$ , where the GNN is flexible to various models for homophilic graphs. Here, we select GCNII [6], which achieves state-of-the-art results on homophilic graphs. The model selection can be achieved by assigning a higher weight to the corresponding model prediction. The combined prediction is given as:
144
+
145
+ $$
146
+ {\widehat{\mathbf{y}}}_{v} = \frac{\exp \left( {\phi }_{1}\right) }{\mathop{\sum }\limits_{{i = 1}}^{2}\exp \left( {\phi }_{i}\right) }{\widehat{\mathbf{y}}}_{v}^{C} + \frac{\exp \left( {\phi }_{2}\right) }{\mathop{\sum }\limits_{{i = 1}}^{2}\exp \left( {\phi }_{i}\right) }{\widehat{\mathbf{y}}}_{v}^{G}, \tag{7}
147
+ $$
148
+
149
+ where ${\widehat{\mathbf{y}}}_{v}^{G} \in {\widehat{\mathcal{Y}}}^{G}$ is the prediction for node $v$ from ${f}_{G}$ . ${\phi }_{1}$ and ${\phi }_{2}$ are the learnable weights that control the contributions of the two models to the final prediction. ${\phi }_{1}$ and ${\phi }_{2}$ can be obtained by finding the values that lead to good performance on the validation set. More specifically, this goal can be formulated as the following bi-level optimization problem:
150
+
151
+ $$
152
+ \mathop{\min }\limits_{{{\phi }_{1},{\phi }_{2}}}{\mathcal{L}}_{\text{val }}\left( {{\theta }_{C}^{ * }\left( {{\phi }_{1},{\phi }_{2}}\right) ,{\theta }_{G}^{ * }\left( {{\phi }_{1},{\phi }_{2}}\right) ,{\phi }_{1},{\phi }_{2}}\right) \;\text{ s.t. }\;{\theta }_{C}^{ * },{\theta }_{G}^{ * } = \arg \mathop{\min }\limits_{{{\theta }_{C},{\theta }_{G}}}{\mathcal{L}}_{\text{train }}\left( {{\theta }_{C},{\theta }_{G},{\phi }_{1},{\phi }_{2}}\right) \tag{8}
153
+ $$
154
+
155
156
+
157
+ where ${\mathcal{L}}_{\text{val }}$ and ${\mathcal{L}}_{\text{train }}$ are the average cross entropy losses of the combined predictions $\left\{ {{\widehat{y}}_{v} : v \in {\mathcal{V}}_{\text{val }}}\right\}$ and $\left\{ {{\widehat{y}}_{v} : v \in {\mathcal{V}}_{\text{train }}}\right\}$ on the validation set and the training set, respectively.
158
+
159
+ ### 4.3 An Optimization Algorithm of LW-GCN
160
+
161
+ Computing the gradients for ${\phi }_{1}$ and ${\phi }_{2}$ is expensive in both computation and memory. To alleviate this issue, we use an alternating optimization scheme to iteratively update the model parameters and the model selection weights.
162
+
163
+ Updating Lower Level ${\theta }_{C}$ and ${\theta }_{G}$ . Instead of calculating ${\theta }_{C}^{ * }$ and ${\theta }_{G}^{ * }$ in every outer iteration, we fix ${\phi }_{1}$ and ${\phi }_{2}$ and update the model parameters ${\theta }_{G}$ and ${\theta }_{C}$ for $T$ steps by:
164
+
165
+ $$
166
+ {\theta }_{C}^{t + 1} = {\theta }_{C}^{t} - {\alpha }_{C}{\nabla }_{{\theta }_{C}}{\mathcal{L}}_{\text{train }}\left( {{\theta }_{C}^{t},{\theta }_{G}^{t},{\phi }_{1},{\phi }_{2}}\right) ,\;{\theta }_{G}^{t + 1} = {\theta }_{G}^{t} - {\alpha }_{G}{\nabla }_{{\theta }_{G}}{\mathcal{L}}_{\text{train }}\left( {{\theta }_{C}^{t},{\theta }_{G}^{t},{\phi }_{1},{\phi }_{2}}\right) , \tag{9}
167
+ $$
168
+
169
+ where ${\theta }_{C}^{t}$ and ${\theta }_{G}^{t}$ are the model parameters after $t$ update steps, and ${\alpha }_{C}$ and ${\alpha }_{G}$ are the learning rates for ${\theta }_{C}$ and ${\theta }_{G}$ .
170
+
171
+ Updating Upper Level ${\phi }_{1}$ and ${\phi }_{2}$ . Here, we use the updated model parameters ${\theta }_{C}^{T}$ and ${\theta }_{G}^{T}$ to approximate ${\theta }_{C}^{ * }$ and ${\theta }_{G}^{ * }$ . Moreover, to further speed up the optimization, we apply first-order approximation [11] to compute the gradients of ${\phi }_{1}$ and ${\phi }_{2}$ :
172
+
173
+ $$
174
+ {\phi }_{1}^{k + 1} = {\phi }_{1}^{k} - {\alpha }_{\phi }{\nabla }_{{\phi }_{1}}{\mathcal{L}}_{\text{val }}\left( {{\bar{\theta }}_{C}^{T},{\bar{\theta }}_{G}^{T},{\phi }_{1}^{k},{\phi }_{2}^{k}}\right) ,\;{\phi }_{2}^{k + 1} = {\phi }_{2}^{k} - {\alpha }_{\phi }{\nabla }_{{\phi }_{2}}{\mathcal{L}}_{\text{val }}\left( {{\bar{\theta }}_{C}^{T},{\bar{\theta }}_{G}^{T},{\phi }_{1}^{k},{\phi }_{2}^{k}}\right) , \tag{10}
175
+ $$
176
+
177
+ where ${\bar{\theta }}_{C}^{T}$ and ${\bar{\theta }}_{G}^{T}$ indicate that the gradient is stopped, i.e., not back-propagated through the inner updates. ${\alpha }_{\phi }$ is the learning rate for ${\phi }_{1}$ and ${\phi }_{2}$ .
178
+
179
+ More details of the training algorithm are in Appendix A.
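A schematic sketch of the alternating schedule in Eqs. (9)-(10), with simple linear models standing in for ${f}_{C}$ and ${f}_{G}$ and the combined prediction of Eq. (7) used in both losses; it only illustrates the update order ($T$ lower-level steps on ${\theta }_{C},{\theta }_{G}$, then a first-order step on ${\phi }_{1},{\phi }_{2}$ with the validation loss), not the full training recipe of Appendix A.

```python
import torch
import torch.nn as nn

# Toy stand-ins: linear "models" over 16-dim features, 3 classes, disjoint train/val splits.
N, F, C = 60, 16, 3
X, y = torch.randn(N, F), torch.randint(0, C, (N,))
train_idx, val_idx = torch.arange(0, 30), torch.arange(30, 60)

f_C, f_G = nn.Linear(F, C), nn.Linear(F, C)          # stand-ins for the two GNNs
phi = nn.Parameter(torch.zeros(2))                    # phi_1, phi_2: model-selection weights
opt_theta = torch.optim.Adam(list(f_C.parameters()) + list(f_G.parameters()), lr=1e-2)
opt_phi = torch.optim.Adam([phi], lr=1e-2)
nll = nn.NLLLoss()

def combined_loss(idx):
    """Cross-entropy of the softmax(phi)-weighted combination of Eq. (7)."""
    w = torch.softmax(phi, dim=0)
    y_hat = w[0] * torch.softmax(f_C(X[idx]), dim=1) + w[1] * torch.softmax(f_G(X[idx]), dim=1)
    return nll(torch.log(y_hat + 1e-9), y[idx])

T = 5
for outer in range(50):
    for t in range(T):                                # Eq. (9): T lower-level steps on theta_C, theta_G
        opt_theta.zero_grad()
        combined_loss(train_idx).backward()
        opt_theta.step()
    opt_phi.zero_grad()                               # Eq. (10): first-order step on phi_1, phi_2
    combined_loss(val_idx).backward()                 # validation loss, current thetas held fixed
    opt_phi.step()

print(torch.softmax(phi, dim=0))                      # learned model-selection weights
```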
180
+
181
+ ## 5 Experiments
182
+
183
+ In this section, we conduct experiments to demonstrate the effectiveness of LW-GCN. In particular, we aim to answer the following research questions:
184
+
185
+ - RQ1 Is our LW-GCN effective in node classification on both homophilic and heterophilic graphs?
186
+
187
+ - RQ2 Can label-wise aggregation learn representations that well capture information for prediction?
+
+ - RQ3 How do the quality of pseudo labels and the automatic model selection affect LW-GCN?
188
+
189
+ ### 5.1 Experimental Settings
190
+
191
+ Datasets. To evaluate the performance of our proposed LW-GCN, we conduct experiments on three homophilic graphs and five heterophilic graphs. For homophilic graphs, we choose the widely used benchmark datasets Cora, Citeseer, and Pubmed [17]. The dataset splits of the homophilic graphs are the same as in the cited paper. As for heterophilic graphs, we use two webpage datasets, Texas and Wisconsin [23], and three subgraphs of Wikipedia, i.e., Squirrel, Chameleon, and Crocodile [25]. Following [37], 10 dataset splits are used for each heterophilic graph for evaluation. The statistics of the datasets are presented in Table 3 in the Appendix.
192
+
193
+ Compared Methods. We compare LW-GCN with representative and state-of-the-art GNNs, including GCN [17], MixHop [1], SuperGAT [16], and GCNII [6]. We also compare with the following state-of-the-art models that are designed for heterophilic graphs: FAGCN [2], SimP-GCN [14], H2GCN [37], GPR-GNN [7], and BM-GCN [13]. In addition, an MLP is evaluated on the datasets for reference. The details of these compared methods can be found in Appendix B.2.
194
+
195
+ Table 1: Node classification results (Accuracy $\left( \% \right) \pm$ Std.) on homophilic/heterophilic graphs.
196
+
197
+ <table><tr><td>Dataset</td><td>Wisconsin</td><td>Texas</td><td>Chameleon</td><td>Squirrel</td><td>Crocodile</td><td>Cora</td><td>Citeseer</td><td>Pubmed</td></tr><tr><td>Ave. Degree</td><td>2.05</td><td>1.69</td><td>15.85</td><td>41.74</td><td>30.96</td><td>4.01</td><td>2.84</td><td>4.50</td></tr><tr><td>Homo. Ratio</td><td>0.20</td><td>0.11</td><td>0.24</td><td>0.22</td><td>0.25</td><td>0.81</td><td>0.74</td><td>0.8</td></tr><tr><td>MLP</td><td>${83.5} \pm {4.9}$</td><td>${78.1} \pm {6.0}$</td><td>${48.0} \pm {1.5}$</td><td>${32.3} \pm {1.8}$</td><td>${65.8} \pm {0.7}$</td><td>${58.6} \pm {0.5}$</td><td>${60.3} \pm {0.4}$</td><td>${72.7} \pm {0.4}$</td></tr><tr><td>GCN</td><td>${53.1} \pm {5.8}$</td><td>${57.6} \pm {5.9}$</td><td>${63.5} \pm {2.5}$</td><td>${46.7} \pm {1.5}$</td><td>${66.7} \pm {1.0}$</td><td>${81.6} \pm {0.7}$</td><td>${71.3} \pm {0.3}$</td><td>${78.4} \pm {1.1}$</td></tr><tr><td>MixHop</td><td>${70.2} \pm {4.8}$</td><td>${60.6} \pm {7.7}$</td><td>${61.2} \pm {2.2}$</td><td>${44.1} \pm {1.1}$</td><td>${67.6} \pm {1.3}$</td><td>${80.6} \pm {0.2}$</td><td>${68.7} \pm {0.3}$</td><td>${78.9} \pm {0.5}$</td></tr><tr><td>SuperGAT</td><td>${53.7} \pm {5.7}$</td><td>${58.6} \pm {7.7}$</td><td>${59.4} \pm {2.5}$</td><td>${38.9} \pm {1.5}$</td><td>${62.6} \pm {0.9}$</td><td>${82.7} \pm {0.4}$</td><td>${72.2} \pm {0.8}$</td><td>${78.4} \pm {0.5}$</td></tr><tr><td>GCNII</td><td>${82.1} \pm {3.9}$</td><td>${68.6} \pm {9.8}$</td><td>${63.5} \pm {2.5}$</td><td>${49.4} \pm {1.7}$</td><td>${69.0} \pm {0.7}$</td><td>${84.2} \pm {0.5}$</td><td>${72.0} \pm {0.8}$</td><td>${80.2} \pm {0.2}$</td></tr><tr><td>FAGCN</td><td>${83.3} \pm {3.7}$</td><td>${79.5} \pm {4.8}$</td><td>${63.9} \pm {2.2}$</td><td>${43.3} \pm {2.5}$</td><td>${67.1} \pm {0.9}$</td><td>${83.1} \pm {0.6}$</td><td>${71.7} \pm {0.6}$</td><td>${78.8} \pm {0.3}$</td></tr><tr><td>SimP-GCN</td><td>${85.5} \pm {4.7}$</td><td>${80.5} \pm {5.9}$</td><td>${63.7} \pm {2.3}$</td><td>${42.8} \pm {1.4}$</td><td>${63.7} \pm {2.3}$</td><td>${82.8} \pm {0.1}$</td><td>${71.8} \pm {0.8}$</td><td>${80.3} \pm {0.2}$</td></tr><tr><td>H2GCN</td><td>${84.7} \pm {3.9}$</td><td>${83.7} \pm {6.0}$</td><td>${54.2} \pm {2.3}$</td><td>${36.0} \pm {1.1}$</td><td>${66.7} \pm {0.5}$</td><td>${81.6} \pm {0.4}$</td><td>${71.0} \pm {0.5}$</td><td>${79.5} \pm {0.2}$</td></tr><tr><td>GPRGNN</td><td>${78.2} \pm {4.4}$</td><td>${77.0} \pm {6.4}$</td><td>${70.6} \pm {2.1}$</td><td>${50.8} \pm {1.4}$</td><td>${65.6} \pm {0.9}$</td><td>${83.8} \pm {0.6}$</td><td>${71.1} \pm {0.9}$</td><td>${79.9} \pm {0.1}$</td></tr><tr><td>BM-GCN</td><td>${77.6} \pm {5.9}$</td><td>${81.9} \pm {5.4}$</td><td>${69.4} \pm {1.7}$</td><td>${53.1} \pm {1.8}$</td><td>${64.3} \pm {1.1}$</td><td>${81.5} \pm {0.5}$</td><td>${68.9} \pm {1.0}$</td><td>${77.9} \pm {0.4}$</td></tr><tr><td>LW-GCN</td><td>${86.9} \pm {2.2}$</td><td>${86.2} \pm {5.8}$</td><td>${74.4} \pm {1.4}$</td><td>${62.6} \pm {1.6}$</td><td>${78.1} \pm {0.6}$</td><td>${84.3} \pm {0.3}$</td><td>${72.3} \pm {0.4}$</td><td>${80.4} \pm {0.3}$</td></tr><tr><td>Weight for ${f}_{C}$</td><td>0.981</td><td>0.960</td><td>0.986</td><td>0.987</td><td>0.999</td><td>0.001</td><td>0.006</td><td>0.005</td></tr></table>
198
+
199
+ Settings of LW-GCN. For the label predictor ${f}_{P}$ , we adopt an MLP with one hidden layer. The dimension of the hidden layer in the MLP is set to 64. As for ${f}_{C}$ , we adopt two layers of label-wise message passing on all the datasets. More discussion about the impact of the depth on LW-GCN is given in Appendix H. The other hyperparameters such as the hidden dimension and weight decay are tuned based on the validation set. See Appendix B.1 for more details.
200
+
201
+ ### 5.2 Node Classification Performance
202
+
203
+ To answer RQ1, we conduct experiments on both heterophilic graphs and homophilic graphs in comparison with state-of-the-art GNN models. The average accuracy and standard deviations on homophilic/heterophilic graphs are reported in Table 1. The model selection weight for the label-wise aggregation GNN ${f}_{C}$ is shown along with the results of LW-GCN. Note that this model selection weight ranges from 0 to 1. When the weight is close to 1, the label-wise aggregation model is selected. When the weight for ${f}_{C}$ is close to 0, the GNN ${f}_{G}$ for homophilic graphs is selected.
204
+
205
+ Performance on Heterophilic Graphs. We conduct experiments on 10 dataset splits for each heterophilic graph. From the results on heterophilic graphs, we have the following observations:
206
+
207
+ - MLP outperforms GCN and other GNNs for homophilic graphs by a large margin on Texas and Wisconsin; while GCN can achieve relatively good performance on dense heterophilic graphs such as Chameleon. This empirical result is consistent with our analysis in Theorem 1 that the heterophily will especially degrade the performance of GCN on graphs with low degrees.
208
+
209
+ - Though GCN and other GNNs designed for homophilic graphs can give relatively good performance on dense heterophilic graphs, our LW-GCN brings significant improvements by adopting label-wise aggregation. In addition, LW-GCN outperforms the baselines on heterophilic graphs with low node degrees. This demonstrates the superiority of label-wise aggregation in preserving the heterophilic context.
210
+
211
+ - The model selection weight for ${f}_{C}$ is close to 1 for heterophilic graphs, which verifies that the proposed LW-GCN can correctly select the label-wise aggregation GNN ${f}_{C}$ for heterophilic graphs.
212
+
213
+ - Compared with SimP-GCN, which also aims to preserve node features, our LW-GCN performs significantly better on heterophilic graphs. This is because SimP-GCN only focuses on the similarity of central node attributes. In contrast, our label-wise aggregation can preserve both the central node features and the heterophilic local context for node classification. LW-GCN also outperforms the other GNNs that adopt message-passing mechanisms designed for heterophilic graphs by a large margin. This further demonstrates the effectiveness of label-wise aggregation.
214
+
215
+ Performance on Homophilic Graphs. The average results and standard deviations of 5 runs on homophilic graphs, i.e., Cora, Citeseer, and Pubmed, are also reported in Table 1. From the table, we observe that existing GNNs for heterophilic graphs generally perform worse than state-of-the-art GNNs on homophilic graphs such as GCNII. In contrast, LW-GCN achieves comparable results with the best model on homophilic graphs. This is because LW-GCN combines the GNN using label-wise message passing and a state-of-the-art GNN for homophilic graphs, and it can automatically select the right model for the given homophilic graph.
216
+
217
+ Table 2: Ablation Study
218
+
219
+ <table><tr><td>Dataset</td><td>MLP</td><td>GCN</td><td>GCNII</td><td>LW-GCN \\P</td><td>LW-GCN \\G</td><td>LW-GCN ${}_{GCN}$</td><td>LW-GCN</td></tr><tr><td>Cora</td><td>${58.7} \pm {0.5}$</td><td>${81.6} \pm {0.7}$</td><td>${84.2} \pm {0.5}$</td><td>${84.2} \pm {0.3}$</td><td>${75.3} \pm {0.4}$</td><td>${81.9} \pm {0.2}$</td><td>${84.3} \pm {0.3}$</td></tr><tr><td>Citeseer</td><td>${60.3} \pm {0.4}$</td><td>${71.3} \pm {0.3}$</td><td>${72.0} \pm {0.8}$</td><td>${72.3} \pm {0.5}$</td><td>${65.1} \pm {0.5}$</td><td>${71.6} \pm {0.3}$</td><td>${72.3} \pm {0.4}$</td></tr><tr><td>Texas</td><td>${78.1} \pm {6.0}$</td><td>${57.6} \pm {5.9}$</td><td>${68.6} \pm {9.8}$</td><td>${82.4} \pm {5.2}$</td><td>${85.9} \pm {5.6}$</td><td>${85.4} \pm {6.3}$</td><td>${86.2} \pm {5.8}$</td></tr><tr><td>Crocodile</td><td>${65.8} \pm {0.7}$</td><td>${66.7} \pm {1.0}$</td><td>${69.0} \pm {0.7}$</td><td>${78.0} \pm {0.8}$</td><td>${77.9} \pm {0.5}$</td><td>${77.8} \pm {0.8}$</td><td>${78.1} \pm {0.6}$</td></tr></table>
220
+
221
+ ### 5.3 Analysis of Node Representations
222
+
223
+ To answer RQ2, we compare the representation similarity of intra-class node pairs and inter-class node pairs on a sparse heterophilic graph in Fig. 3. For both GCN and LW-GCN, the representations learned by the last layer are used for analysis. We observe that the learned representations of GCN are very similar for both intra-class pairs and inter-class pairs. This verifies that simply aggregating the neighbors makes the node representations less discriminative. With label-wise aggregation, the similarity scores of intra-class pairs are significantly higher than those of inter-class node pairs. This demonstrates that the representations learned by label-wise message passing can well preserve the target nodes' features and their contextual information.
224
+
225
+ ![01963ede-b51e-7ad0-8efc-12886ed83ec2_8_1009_545_476_242_0.jpg](images/01963ede-b51e-7ad0-8efc-12886ed83ec2_8_1009_545_476_242_0.jpg)
226
+
227
+ Figure 3: Representation similarity distributions on the Texas graph.
228
+
229
+ ### 5.4 Ablation Study
230
+
231
+ To answer RQ3, we conduct ablation studies to understand the contribution of each component of LW-GCN. To investigate how the quality of pseudo labels affects LW-GCN, we train a variant LW-GCN\\P by replacing the MLP-based label predictor with a GCN model. To show the importance of the automatic model selection, we train a variant LW-GCN\\G which removes the GNN for homophilic graphs and only uses the label-wise aggregation GNN. Finally, we replace the GCNII backbone of ${f}_{G}$ with GCN, denoted as LW-GCN${}_{GCN}$, to show that LW-GCN is flexible in adopting various GNNs for ${f}_{G}$ . Experiments are conducted on both homophilic and heterophilic graphs. The results are shown in Table 2, and ablation studies on the remaining datasets are shown in Appendix G. We can observe that:
232
+
233
+ - On homophilic graphs, LW-GCN\\P shows comparable results with LW-GCN, because GCNII will be selected given a homophilic graph. On the heterophilic graph Texas, the performance of LW-GCN\\P is significantly worse than that of LW-GCN. This is because GNNs can produce poor pseudo labels on heterophilic graphs, which degrades the label-wise message passing.
234
+
235
+ - LW-GCN\\G performs much better than MLP. This shows that label-wise graph convolution can capture structure information. However, LW-GCN\\G performs worse than GCNII and LW-GCN on homophilic graphs, which indicates the necessity of combining a GNN for homophilic graphs.
236
+
237
+ - LW-GCN${}_{GCN}$ achieves comparable results with GCN on homophilic graphs. On heterophilic graphs, LW-GCN${}_{GCN}$ performs similarly to LW-GCN. This shows the flexibility of LW-GCN in adopting traditional GNN models designed for homophilic graphs.
238
+
239
+ ## 6 Conclusion and Future Work
240
+
241
+ In this paper, we analyze the impact of heterophily levels on the GCN model and demonstrate its limitations. We develop a novel label-wise graph convolution to learn representations that preserve the node features and their heterophilic neighbors' information. An automatic model selection module is applied to ensure the performance of the proposed framework on graphs with any homophily ratio. Theoretical and empirical analysis demonstrates the effectiveness of the label-wise aggregation. Extensive experiments show that our proposed LW-GCN achieves state-of-the-art results on both homophilic and heterophilic graphs. Several interesting directions need further investigation. First, since better pseudo labels will benefit the label-wise message passing, it is promising to incorporate the predictions of LW-GCN into the label-wise message passing. Second, in some applications such as link prediction, labels are not available. Therefore, we will investigate how to generate useful pseudo labels for label-wise aggregation in applications where no labeled nodes are provided.
242
+
243
+ ## References
244
+
245
+ [1] Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In international conference on machine learning, pages 21-29. PMLR, 2019. 2, 13
246
+
247
+ [2] Deyu Bo, Xiao Wang, Chuan Shi, and Huawei Shen. Beyond low-frequency information in graph convolutional networks. arXiv preprint arXiv:2101.00797, 2021. 2, 7, 13
248
+
249
+ [3] Pietro Bongini, Monica Bianchini, and Franco Scarselli. Molecular generative graph neural networks for drug discovery. Neurocomputing, 450:242-252, 2021. 1
250
+
251
+ [4] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. ICLR, 2014. 2
252
+
253
+ [5] Jie Chen, Tengfei Ma, and Cao Xiao. Fastgcn: fast learning with graph convolutional networks via importance sampling. ICLR, 2018. 2
254
+
255
+ [6] Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In ICML, pages 1725-1735. PMLR, 2020. 2, 6, 7, 12, 13
256
+
257
+ [7] Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. Adaptive universal generalized pagerank graph neural network. arXiv preprint arXiv:2006.07988, 2020. 1, 2, 7, 13
258
+
259
+ [8] Enyan Dai and Suhang Wang. Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information. In WSDM, pages 680-688, 2021. 2
260
+
261
+ [9] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NeurIPS, pages 3844-3852, 2016. 2
262
+
263
+ [10] Yingtong Dou, Zhiwei Liu, Li Sun, Yutong Deng, Hao Peng, and Philip S Yu. Enhancing graph neural network-based fraud detectors against camouflaged fraudsters. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 315-324, 2020. 2
264
+
265
+ [11] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135. PMLR, 2017. 7
266
+
267
+ [12] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NeurIPS, pages 1024-1034, 2017. 1, 2, 4
268
+
269
+ [13] Dongxiao He, Chundong Liang, Huixin Liu, Mingxiang Wen, Pengfei Jiao, and Zhiyong Feng. Block modeling-guided graph convolutional neural networks. AAAI, 2022. 2, 3, 7, 13
270
+
271
+ [14] Wei Jin, Tyler Derr, Yiqi Wang, Yao Ma, Zitao Liu, and Jiliang Tang. Node similarity preserving graph convolutional networks. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 148-156, 2021. 2, 7, 13
272
+
273
+ [15] Dongkwan Kim and Alice Oh. How to find your friendly neighborhood: Graph attention design with self-supervision. In International Conference on Learning Representations, 2020. 13
274
+
275
+ [16] Dongkwan Kim and Alice Oh. How to find your friendly neighborhood: Graph attention design with self-supervision. In International Conference on Learning Representations, 2021. 2, 7
276
+
277
+ [17] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 2, 3, 7, 13
278
+
279
+ [18] Johannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate: Graph neural networks meet personalized pagerank. arXiv preprint arXiv:1810.05997, 2018. 2, 4
280
+
281
+ [19] Ron Levie, Federico Monti, Xavier Bresson, and Michael M Bronstein. Cayleynets: Graph convolutional neural networks with complex rational spectral filters. IEEE Transactions on Signal Processing, 67(1):97-109, 2018. 2
282
+
283
+ [20] Guohao Li, Matthias Muller, Ali Thabet, and Bernard Ghanem. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9267-9276, 2019. 2
284
+
285
+ [21] Yao Ma, Xiaorui Liu, Neil Shah, and Jiliang Tang. Is homophily a necessity for graph neural networks? arXiv preprint arXiv:2106.06134, 2021. 1, 4
286
+
287
+ [22] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In ICML, pages 2014-2023, 2016. 2
288
+
289
+ [23] Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. Geom-gcn: Geometric graph convolutional networks. arXiv preprint arXiv:2002.05287, 2020. 2, 6, 7
290
+
291
+ [24] Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, and Jie Tang. Gcc: Graph contrastive coding for graph neural network pre-training. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1150-1160, 2020. 2
292
+
293
+ [25] Benedek Rozemberczki, Carl Allen, and Rik Sarkar. Multi-scale attributed node embedding. Journal of Complex Networks, 9(2):cnab014, 2021. 7
294
+
295
+ [26] Ke Sun, Zhanxing Zhu, and Zhouchen Lin. Multi-stage self-supervised learning for graph convolutional networks. AAAI, 2020. 2
296
+
297
+ [27] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. ICLR, 2018. 2, 4, 13
298
+
299
+ [28] Daixin Wang, Jianbin Lin, Peng Cui, Quanhui Jia, Zhen Wang, Yanming Fang, Quan Yu, Jun Zhou, Shuang Yang, and Yuan Qi. A semi-supervised graph attentive network for financial fraud detection. ICDM, 2019. 2
300
+
301
+ [29] Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In International conference on machine learning, pages 6861-6871. PMLR, 2019. 4
302
+
303
+ [30] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems, 2020. 1
304
+
305
+ [31] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018. 2
306
+
307
+ [32] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In International Conference on Machine Learning, pages 5453-5462. PMLR, 2018. 2, 6
308
+
309
+ [33] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations. Advances in Neural Information Processing Systems, 33, 2020. 2
310
+
311
+ [34] Bing Yu, Haoteng Yin, and Zhanxing Zhu. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. arXiv preprint arXiv:1709.04875, 2017. 1, 2
312
+
313
+ [35] Tianxiang Zhao, Xianfeng Tang, Xiang Zhang, and Suhang Wang. Semi-supervised graph-to-graph translation. In CIKM, pages 1863-1872, 2020. 2
314
+
315
+ [36] Jiong Zhu, Ryan A Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K Ahmed, and Danai Koutra. Graph neural networks with heterophily. arXiv preprint arXiv:2009.13566, pages 11168-11176, 2020. 2, 3
316
+
317
+ [37] Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. arXiv preprint arXiv:2006.11468, 2020. 1, 2, 3, 7, 13
318
+
319
+ [38] Qikui Zhu, Bo Du, and Pingkun Yan. Self-supervised training of graph convolutional networks. arXiv preprint arXiv:2006.02380, 2020. 2
320
+
321
+ Algorithm 1 Training Algorithm of LW-GCN
322
+
323
+ ---
324
+
325
+ Input: $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}, X}\right) ,{\mathcal{Y}}_{L}, p,{\alpha }_{C},{\alpha }_{G},{\alpha }_{\phi }$ and $T$
326
+
327
+ Output: ${f}_{P},{f}_{C},{f}_{G},{\phi }_{1}$ and ${\phi }_{2}$
328
+
329
+ Train ${f}_{P}$ by optimizing Eq.(4) w.r.t ${\theta }_{P}$
330
+
331
+ Obtain pseudo labels ${\widehat{\mathcal{Y}}}^{P}$ with ${f}_{P}$
332
+
333
+ repeat
334
+
335
+ Get combined predictions of ${f}_{C}$ and ${f}_{G}$ on ${\mathcal{V}}_{\text{val }}$
336
+
337
+ Calculate the upper level loss ${\mathcal{L}}_{\text{val }}$
338
+
339
+ Update ${\phi }_{1}$ and ${\phi }_{2}$ according to Eq.(10)
340
+
341
+ for $t = 1$ to $T$ do
342
+
343
+ Obtain the lower level loss ${\mathcal{L}}_{\text{train }}$
344
+
345
+ Update ${\theta }_{C}$ and ${\theta }_{G}$ by Eq.(9)
346
+
347
+ end for
348
+
349
+ until convergence
350
+
351
+ ---
352
+
353
+ Table 3: The statistics of datasets.
354
+
355
+ <table><tr><td>Dataset</td><td>Nodes</td><td>Edges</td><td>Classes</td><td>Hom. Ratio</td></tr><tr><td>Wisconsin</td><td>251</td><td>515</td><td>5</td><td>0.20</td></tr><tr><td>Texas</td><td>183</td><td>309</td><td>5</td><td>0.11</td></tr><tr><td>Chameleon</td><td>2,277</td><td>36,101</td><td>5</td><td>0.24</td></tr><tr><td>Squirrel</td><td>5,201</td><td>217,073</td><td>5</td><td>0.22</td></tr><tr><td>Crocodile</td><td>11,631</td><td>360,040</td><td>5</td><td>0.25</td></tr><tr><td>Cora</td><td>2,708</td><td>5,429</td><td>6</td><td>0.81</td></tr><tr><td>Citeseer</td><td>3,327</td><td>4,732</td><td>7</td><td>0.74</td></tr><tr><td>Pubmed</td><td>19,717</td><td>44,338</td><td>3</td><td>0.8</td></tr></table>
356
+
357
+ ## A Training Algorithm of LW-GCN
358
+
359
+ The training algorithm of LW-GCN is shown in Algorithm 1. In lines 1 and 2, we first train ${f}_{P}$ to obtain the pseudo labels required for label-wise message passing. From lines 4 to 6, we get the combined predictions from ${f}_{C}$ and ${f}_{G}$ and update the model selection weights with Eq.(10). From lines 7 to 10, we update the model parameters ${\theta }_{C}$ and ${\theta }_{G}$ by minimizing ${\mathcal{L}}_{\text{train }}$ with Eq.(9). The updates of the model selection weights and the model parameters are conducted iteratively until convergence.
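+
+ To make the alternating bi-level updates concrete, here is a minimal PyTorch-style sketch of the loop in Algorithm 1. It is an illustrative sketch only: `f_C` and `f_G` are simple stand-ins for the label-wise GNN and the GNN for homophilic graphs, `phi` holds the two model selection weights, and the toy tensors replace real graph inputs and the exact combination rule of Eq.(9)/Eq.(10).
+
+ ```python
+ # Sketch of the alternating (bi-level) updates in Algorithm 1 (illustrative only).
+ import torch
+ import torch.nn.functional as F
+
+ torch.manual_seed(0)
+ d_in, d_hid, n_cls, n_nodes = 16, 32, 5, 100
+ X = torch.randn(n_nodes, d_in)                      # toy node features
+ y = torch.randint(0, n_cls, (n_nodes,))             # toy labels
+ train_idx, val_idx = torch.arange(0, 60), torch.arange(60, 80)
+
+ # stand-ins for the two branches f_C (label-wise) and f_G (homophilic GNN)
+ f_C = torch.nn.Sequential(torch.nn.Linear(d_in, d_hid), torch.nn.ReLU(), torch.nn.Linear(d_hid, n_cls))
+ f_G = torch.nn.Sequential(torch.nn.Linear(d_in, d_hid), torch.nn.ReLU(), torch.nn.Linear(d_hid, n_cls))
+ phi = torch.nn.Parameter(torch.zeros(2))            # model selection weights (phi_1, phi_2)
+
+ opt_theta = torch.optim.Adam(list(f_C.parameters()) + list(f_G.parameters()), lr=0.01)
+ opt_phi = torch.optim.Adam([phi], lr=0.01)
+ T = 1                                               # inner steps per outer step
+
+ for epoch in range(200):
+     # upper level: update phi_1, phi_2 with the validation loss
+     w = torch.softmax(phi, dim=0)
+     loss_val = F.cross_entropy(w[0] * f_C(X[val_idx]) + w[1] * f_G(X[val_idx]), y[val_idx])
+     opt_phi.zero_grad(); loss_val.backward(); opt_phi.step()
+
+     # lower level: update theta_C and theta_G with the training loss
+     for _ in range(T):
+         w = torch.softmax(phi.detach(), dim=0)
+         loss_tr = F.cross_entropy(w[0] * f_C(X[train_idx]) + w[1] * f_G(X[train_idx]), y[train_idx])
+         opt_theta.zero_grad(); loss_tr.backward(); opt_theta.step()
+ ```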
360
+
361
+ ## B Additional Details of Experimental Settings
362
+
363
+ ### B.1 Implementation Details of LW-GCN
364
+
365
+ For experiments on each heterophilic graph, we report the results on the 10 public dataset splits. For homophilic graphs, we run each experiment 5 times on the provided public dataset split. The hidden dimension of ${f}_{P}$ is fixed as 64 for all graphs. For ${f}_{C}$ on Texas and Wisconsin, a linear layer is first applied to transform the features, followed by the label-wise graph convolutional layer. For the other graphs, the label-wise graph convolutional layer is directly applied to the node features. The hidden layer dimension and weight decay rate are tuned on the validation set by grid search. Specifically, we vary the hidden dimension and weight decay in $\{ {32},{64},{128},{256}\}$ and $\{ {0.05},{0.005},{0.0005},{0.00005}\}$ , respectively. For ${f}_{G}$ , which deploys GCNII [6] as the backbone, the hyperparameter settings are the same as in the cited paper. During the training phase, the learning rate is set to 0.01 for all model parameters and model selection weights. The inner iteration step $T$ is set to 1. Our machine uses an Intel i7-9700k CPU with 64GB RAM. An Nvidia 2080Ti GPU is used to run all the experiments.
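+
+ As an illustration of the tuning procedure described above, the following is a hypothetical grid-search sketch; `train_and_evaluate` is a placeholder standing in for one full training run of LW-GCN that returns validation accuracy.
+
+ ```python
+ # Hypothetical sketch of the grid search over hidden dimension and weight decay.
+ from itertools import product
+
+ hidden_dims = [32, 64, 128, 256]
+ weight_decays = [0.05, 0.005, 0.0005, 0.00005]
+
+ def train_and_evaluate(hidden_dim, weight_decay):
+     """Placeholder: train LW-GCN once and return validation accuracy."""
+     return 0.0  # replace with a real training run
+
+ best = max(product(hidden_dims, weight_decays), key=lambda cfg: train_and_evaluate(*cfg))
+ print("best (hidden_dim, weight_decay):", best)
+ ```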
366
+
367
+ ### B.2 Implementation Details of Compared Methods
368
+
369
+ We adopt a two-layer MLP model on the datasets as a baseline to show the effects of the graph structure and local context of the graphs. The hidden dimension is set the same as our LW-GCN. Apart
370
+
371
+ from MLP, we compare LW-GCN with the following representative and state-of-the-art GNNs that are originally designed for graphs with homophily:
372
+
373
+ - GCN [17]: This is a popular spectral-based Graph Convolutional Network, which aggregates the information of the neighbors and the center node by averaging their representations. We apply the official code in https://github.com/tkipf/pygcn.
374
+
375
+ - MixHop [1]: It adopts a graph convolutional layer with powers of the adjacency matrix. The official code in https://github.com/samihaija/mixhop is implemented for comparison.
376
+
377
+ - SuperGAT [15]: This is a GAT model augmented with self-supervision. In SuperGAT, apart from the classification loss on the provided labels, a self-supervised learning task is deployed to further guide the learning of attention for better information propagation based on GAT [27]. The official code from the authors in https://github.com/dongkwan-kim/SuperGAT is used.
378
+
379
+ - GCNII [6]: Based on GCN, residual connections and identity mapping are applied in GCNII to build a deep GNN for better performance. The experiments are run with the official implementation in https://github.com/chennnM/GCNII.
380
+
381
+ We also compare LW-GCN with the following baseline GNN models for heterophilic graphs:
382
+
383
+ - FAGCN [2]: FAGCN adaptively aggregates low-frequency and high-frequency signals from neighbors to improve the performance on heterophilic graphs. The implementation from the authors in https://github.com/bdy9527/FAGCN is applied in our experiments.
384
+
385
+ - SimP-GCN [14]: A feature similarity preserving aggregation is applied to facilitate the representation learning on graphs with homophily and heterophily. We utilize the official code in https://github.com/ChandlerBang/SimP-GCN.
386
+
387
+ - H2GCN [37]: H2GCN investigates the limitations of GCN on graphs with heterophily and accordingly adopts three key designs for node classification on heterophilic graphs. We conduct experiments with the official code from the authors in https://github.com/GemsLab/H2GCN.
388
+
389
+ - GPR-GNN [7]: This method introduces a new Generalized PageRank (GPR) GNN to adaptively learn the GPR weights that combine the aggregated representations in different orders. The learned GPR weights can be either positive or negative, which allows GPR-GNN to handle both heterophilic and homophilic graphs. We adopt the official code from the authors in https://github.com/jianhao2016/GPRGNN.
390
+
391
+ - BM-GCN [13]: This is one of the most recent methods designed for graphs with heterophily, which achieves state-of-the-art results on heterophilic graphs. Block modeling is incorporated into GCN to aggregate information from homophilic and heterophilic neighbors discriminatively. More specifically, the link between two nodes is re-weighted based on the soft labels of the two nodes and the block-similarity matrix. The training and evaluation process is based on the official code in https://github.com/hedongxiao-tju/BM-GCN.
392
+
393
+ The model architecture and hyperparameters of the baselines are set according to the experimental settings provided by the authors for reproduction. For datasets without provided reproduction details, the hyperparameters of the baselines are tuned based on validation set performance to make a fair comparison.
394
+
395
+ ## C Proof of Theorem 1
396
+
397
+ Proof 1 In this proof, we focus on nodes in class $i$ and class $j$ , where $i \neq j$ . Since the dimensions of the node features are independent of each other, without loss of generality, we consider one dimension of the feature and aggregated representation for node $v$ , which is denoted as ${x}_{v}$ and ${z}_{v}$ . For node $v$ in class $i$ , the aggregated representation ${z}_{v}$ in the GCN layer is rewritten as:
398
+
399
+ $$
400
+ {z}_{v} = \mathop{\sum }\limits_{{u \in \mathcal{N}\left( v\right) }}\frac{1}{\left| \mathcal{N}\left( v\right) \right| }{x}_{u}. \tag{11}
401
+ $$
402
+
403
+ With assumptions in Sec. 3.2, the expectation of aggregated representations of nodes in class $i$ can be written as:
404
+
405
+ $$
406
+ \mathbb{E}\left( {{z}_{v} \mid {y}_{v} = i}\right) = h \cdot {\mu }_{ii} + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1, k \neq i}}^{C}{\mu }_{ik}, \tag{12}
407
+ $$
408
+
409
+ Similarly, we can get the expectation of aggregated nodes representations in class $j$ , i.e., $\mathbb{E}\left( {{z}_{v} \mid {y}_{v} = j}\right)$ .
410
+
411
+ Then, the difference between $\mathbb{E}\left( {{z}_{v} \mid {y}_{v} = i}\right)$ and $\mathbb{E}\left( {{z}_{v} \mid {y}_{v} = j}\right)$ is
412
+
413
+ $$
414
+ {\Delta }_{i, j} = \left| {\mathbb{E}\left( {{z}_{v} \mid {y}_{v} = i}\right) - \mathbb{E}\left( {{z}_{v} \mid {y}_{v} = j}\right) }\right|
415
+ $$
416
+
417
+ $$
418
+ = \left| {h \cdot \left( {{\mu }_{ii} - {\mu }_{jj}}\right) + \frac{1 - h}{C - 1}\left( {{\mu }_{ij} - {\mu }_{ji}}\right) + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1, k \neq i, j}}^{C}\left( {{\mu }_{ik} - {\mu }_{jk}}\right) }\right| \tag{13}
419
+ $$
420
+
421
+ $$
422
+ = \left| {\frac{{hC} - 1}{C - 1}\left( {{\mu }_{ii} - {\mu }_{jj}}\right) + \frac{1 - h}{C - 1}\left( {\mathop{\sum }\limits_{{k = 1}}^{C}\left( {{\mu }_{ik} - {\mu }_{jk}}\right) }\right) }\right|
423
+ $$
424
+
425
+ We first consider the situation of $h \geq \frac{1}{C}$ . When $h \geq \frac{1}{C}$ , we can infer the upper bound of ${\Delta }_{i, j}$ as:
426
+
427
+ $$
428
+ {\Delta }_{i, j} \leq \frac{{hC} - 1}{C - 1}\left| {{\mu }_{ii} - {\mu }_{jj}}\right| + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| \tag{14}
429
+ $$
430
+
431
+ $$
432
+ = \frac{hC}{C - 1}\left( {\left| {{\mu }_{ii} - {\mu }_{jj}}\right| - \frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| }\right) + \frac{1}{C - 1}\left( {\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| - \left| {{\mu }_{ii} - {\mu }_{jj}}\right| }\right) ,
433
+ $$
434
+
435
+ And the lower bound of ${\Delta }_{i, j}$ is:
436
+
437
+ $$
438
+ {\Delta }_{i, j} \geq \frac{{hC} - 1}{C - 1}\left| {{\mu }_{ii} - {\mu }_{jj}}\right| - \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| \tag{15}
439
+ $$
440
+
441
+ $$
442
+ = \frac{hC}{C - 1}\left( {\left| {{\mu }_{ii} - {\mu }_{jj}}\right| + \frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| }\right) - \frac{1}{C - 1}\left( {\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| + \left| {{\mu }_{ii} - {\mu }_{jj}}\right| }\right) ,
443
+ $$
444
+
445
+ Thus, when $\left| {{\mu }_{ii} - {\mu }_{jj}}\right| > \left| {{\mu }_{ik} - {\mu }_{jk}}\right| ,\forall k \in \{ 1,\ldots C\}$ and $h \geq \frac{1}{C}$ , both the upper bound and lower bound of ${\Delta }_{i, j}$ will decrease with the decrease of $h$ .
446
+
447
+ Next, we will show that a lower $h$ under the condition of $h \geq \frac{1}{C}$ will lead to higher variance of the aggregated representations. According to Eq.(11), the variance of $\left\{ {{z}_{v} : {y}_{v} = i}\right\}$ can be written as:
448
+
449
+ $$
450
+ \operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = i}\right) = \operatorname{Var}\left( {\mathop{\sum }\limits_{{u \in \mathcal{N}\left( v\right) }}\frac{1}{\left| \mathcal{N}\left( v\right) \right| }{x}_{u} \mid {y}_{v} = i}\right)
451
+ $$
452
+
453
+ According to assumption (i) in Sec. 3.2, the neighbor features are conditionally independent of each other given the label of the center node. And for each neighbor node $u \in \mathcal{N}\left( v\right)$ , we have $P\left( {{y}_{u} = {y}_{v} \mid {y}_{v}}\right) =$ $h, P\left( {{y}_{u} = y \mid {y}_{v}}\right) = \frac{1 - h}{C - 1},\forall y \neq {y}_{v}$ . Therefore, for a neighbor node $u \in \mathcal{N}\left( v\right)$ of node $v$ whose label is $i$ , its features follow a mixture distribution:
454
+
455
+ $$
456
+ P\left( {{x}_{u} \mid {y}_{v} = i}\right)
457
+ $$
458
+
459
+ $$
460
+ = \mathop{\sum }\limits_{{k = 1}}^{C}P\left( {{y}_{u} = k \mid {y}_{v} = i}\right) P\left( {{x}_{u} \mid {y}_{u} = k}\right) \tag{16}
461
+ $$
462
+
463
+ $$
464
+ = h \cdot N\left( {{\mu }_{ii},{\sigma }_{ii}}\right) + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1, k \neq i}}N\left( {{\mu }_{ik},{\sigma }_{ik}}\right)
465
+ $$
466
+
467
+ Using the law of total variance for a mixture distribution, the variance of the aggregated representation of node $v$ in class $i$ can be derived as
468
+
469
+ $$
470
+ \operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = i}\right) = \frac{1}{d}\operatorname{Var}\left( {{x}_{u} \mid {y}_{v} = i}\right)
471
+ $$
472
+
473
+ $$
474
+ = \frac{1}{d}\left( {\mathbb{E}\left\lbrack {\operatorname{Var}\left( {{x}_{u} \mid {y}_{u},{y}_{v} = i}\right) }\right\rbrack + \operatorname{Var}\left\lbrack {\mathbb{E}\left( {{x}_{u} \mid {y}_{u},{y}_{v} = i}\right) }\right\rbrack }\right)
475
+ $$
476
+
477
+ $$
478
+ = \frac{1}{d}\left( {h{\sigma }_{ii}^{2} + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1, k \neq i}}^{C}{\sigma }_{ik}^{2} + h{\mu }_{ii}^{2} + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1, k \neq i}}^{C}{\mu }_{ik}^{2} - {\left( h{\mu }_{ii} + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1, k \neq i}}^{C}{\mu }_{ik}\right) }^{2}}\right)
479
+ $$
480
+
481
+ (17)
482
+
483
+ Let ${\bar{\mu }}_{i} = \frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}{\mu }_{ik}$ and ${\sigma }_{i}^{2} = \frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}{\left( {\mu }_{ik} - {\bar{\mu }}_{i}\right) }^{2}$ . Then Eq. (17) can be rewritten as the following equation:
484
+
485
+
486
+
487
+ $$
488
+ \operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = i}\right)
489
+ $$
490
+
491
+ $$
492
+ = \frac{1}{d}\left( {\frac{{hC} - 1}{C - 1}{\sigma }_{ii}^{2} + \frac{C - {hC}}{C - 1}\left( {\frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}{\sigma }_{ik}^{2} + {\sigma }_{i}^{2}}\right) }\right. \tag{18}
493
+ $$
494
+
495
+ $$
496
+ + \frac{{hC} - 1}{C - 1}{\mu }_{ii}^{2} + \frac{C - {hC}}{C - 1}{\bar{\mu }}_{i}^{2} - {\left( h{\mu }_{ii} + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1, k \neq i}}^{C}{\mu }_{ik}\right) }^{2})
497
+ $$
498
+
499
+ As $h \geq \frac{1}{C}$ , we can set $p = \frac{{hC} - 1}{C - 1},0 \leq p \leq 1$ and $\frac{C - {hC}}{C - 1} = 1 - p$ . For the last three terms of Eq.(18), we have:
500
+
501
+ $$
502
+ \frac{{hC} - 1}{C - 1}{\mu }_{ii}^{2} + \frac{C - {hC}}{C - 1}{\bar{\mu }}_{i}^{2} - {\left( h{\mu }_{ii} + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1, k \neq i}}^{C}{\mu }_{ik}\right) }^{2} \tag{19}
503
+ $$
504
+
505
+ $$
506
+ = p{\mu }_{ii}^{2} + \left( {1 - p}\right) {\bar{\mu }}_{i}^{2} - {\left( p{\mu }_{ii} + \left( 1 - p\right) {\bar{\mu }}_{i}\right) }^{2}
507
+ $$
508
+
509
+ $$
510
+ = p\left( {1 - p}\right) {\left( {\mu }_{ii} - {\bar{\mu }}_{i}\right) }^{2} \geq 0
511
+ $$
512
+
513
+ Combining Eq.(18) and Eq.(19), we are able to get the lower bound of the variance as:
514
+
515
+ $$
516
+ \operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = i}\right)
517
+ $$
518
+
519
+ $$
520
+ \geq \frac{{hC} - 1}{d\left( {C - 1}\right) }{\sigma }_{ii}^{2} + \frac{C - {hC}}{d\left( {C - 1}\right) }\left( {\frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}{\sigma }_{ik}^{2} + {\sigma }_{i}^{2}}\right) \tag{20}
521
+ $$
522
+
523
+ $$
524
+ = \frac{hC}{d\left( {C - 1}\right) }\left( {{\sigma }_{ii}^{2} - {\sigma }_{i}^{2} - \frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}{\sigma }_{ik}^{2}}\right) + \frac{1}{d\left( {C - 1}\right) }\left( {C{\sigma }_{i}^{2} + \mathop{\sum }\limits_{{k = 1}}^{C}{\sigma }_{ik}^{2} - {\sigma }_{ii}^{2}}\right)
525
+ $$
526
+
527
+ When ${\sigma }_{i} > {\sigma }_{ii}$ , we know that with the decrease of $h$ , the lower bound of $\operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = i}\right)$ will increase. Similarly, $\operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = j}\right)$ will also increase with a lower $h$ . Combining this with the fact that $\left| {\mathbb{E}\left( {{z}_{v} \mid {y}_{v} = i}\right) - \mathbb{E}\left( {{z}_{v} \mid {y}_{v} = j}\right) }\right|$ will decrease with the decrease of $h$ , we can conclude that when $h \geq \frac{1}{C}$ , a graph with lower $h$ will lead to less discriminative aggregated representations.
528
+
529
+ We then prove that when $h < \frac{1}{C}$ , the decrease of $h$ will increase the discriminability of the aggregated representations obtained by averaging. Specifically, with Eq. (13), we can infer that when $h < \frac{1}{C}$ the upper bound of ${\Delta }_{i, j}$ will be:
530
+
531
+ $$
532
+ {\Delta }_{i, j} \leq \frac{1 - {hC}}{C - 1}\left| {{\mu }_{ii} - {\mu }_{jj}}\right| + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| \tag{21}
533
+ $$
534
+
535
+ $$
536
+ = \frac{-{hC}}{C - 1}\left( {\left| {{\mu }_{ii} - {\mu }_{jj}}\right| + \frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| }\right) + \frac{1}{C - 1}\left( {\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| + \left| {{\mu }_{ii} - {\mu }_{jj}}\right| }\right) ,
537
+ $$
538
+
539
+ And the lower bound of ${\Delta }_{i, j}$ is:
540
+
541
+ $$
542
+ {\Delta }_{i, j} \geq \frac{1 - {hC}}{C - 1}\left| {{\mu }_{ii} - {\mu }_{jj}}\right| - \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| \tag{22}
543
+ $$
544
+
545
+ $$
546
+ = \frac{-{hC}}{C - 1}\left( {\left| {{\mu }_{ii} - {\mu }_{jj}}\right| - \frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| }\right) - \frac{1}{C - 1}\left( {\mathop{\sum }\limits_{{k = 1}}^{C}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| - \left| {{\mu }_{ii} - {\mu }_{jj}}\right| }\right) ,
547
+ $$
548
+
549
+ Thus, when $h < \frac{1}{C}$ and $\left| {{\mu }_{ii} - {\mu }_{jj}}\right| > \left| {{\mu }_{ik} - {\mu }_{jk}}\right| ,\forall k \in \{ 1,\ldots C\}$ , both the upper bound and lower bound of ${\Delta }_{i, j}$ will increase with the decrease of $h$ .
550
+
551
+ For the variance of the aggregated representations when $h < \frac{1}{C}$ , we can infer the following upper bound with Eq.(18):
552
+
553
+ $$
554
+ \operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = i}\right)
555
+ $$
556
+
557
+ $$
558
+ \leq \frac{{hC} - 1}{d\left( {C - 1}\right) }{\sigma }_{ii}^{2} + \frac{C - {hC}}{d\left( {C - 1}\right) }\left( {\frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}{\sigma }_{ik}^{2} + {\sigma }_{i}^{2}}\right) \tag{23}
559
+ $$
560
+
561
+ $$
562
+ = \frac{hC}{d\left( {C - 1}\right) }\left( {{\sigma }_{ii}^{2} - {\sigma }_{i}^{2} - \frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}{\sigma }_{ik}^{2}}\right) + \frac{1}{d\left( {C - 1}\right) }\left( {C{\sigma }_{i}^{2} + \mathop{\sum }\limits_{{k = 1}}^{C}{\sigma }_{ik}^{2} - {\sigma }_{ii}^{2}}\right)
563
+ $$
564
+
565
+ According to the assumption that ${\sigma }_{i} > {\sigma }_{ii}$ , we know that with the decrease of $h$ under the condition of $h < \frac{1}{C}$ , the upper bound of $\operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = i}\right)$ will decrease. We can draw the same conclusion for $\operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = j}\right)$ . Combining this with the trend that when $h < \frac{1}{C}$ , $\left| {\mathbb{E}\left( {{z}_{v} \mid {y}_{v} = i}\right) - \mathbb{E}\left( {{z}_{v} \mid {y}_{v} = j}\right) }\right|$ will increase with the decrease of $h$ , we can conclude that when $h < \frac{1}{C}$ , a graph with lower $h$ will have more discriminative aggregated representations.
566
+
567
+ When $h = \frac{1}{C}$ , we can get
568
+
569
+
570
+
571
+ $$
572
+ {\Delta }_{i, j} = \frac{1}{C}\left| {\mathop{\sum }\limits_{{k = 1}}^{C}\left( {{\mu }_{ik} - {\mu }_{jk}}\right) }\right| , \tag{24}
573
+ $$
574
+
575
+ $$
576
+ \operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = i}\right) \geq \frac{1}{d}\left( {\frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}{\sigma }_{ik}^{2} + {\sigma }_{i}^{2}}\right) , \tag{25}
577
+ $$
578
+
579
+ If ${\sigma }_{i} > \sqrt{d}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| ,\forall k \in \{ 1,\ldots , C\}$ , we can get $\operatorname{Var}\left( {{z}_{v} \mid {y}_{v} = i}\right) > {\Delta }_{i, j}^{2}$ . So when $h = \frac{1}{C}$ and ${\sigma }_{i} > \sqrt{d}\left| {{\mu }_{ik} - {\mu }_{jk}}\right| ,\forall k \in \{ 1,\ldots , C\}$ , the representations after the averaging process will be non-discriminative.
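+
+ As a numerical sanity check on the mixture-mean and mixture-variance expressions above (Eq.(12) and Eq.(17)), the following sketch compares the closed forms against a Monte Carlo simulation of the averaging step for a single feature dimension; all parameter values below are arbitrary choices for illustration.
+
+ ```python
+ # Monte Carlo check of Eq.(12) and Eq.(17) for one feature dimension.
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ C, d, h, i = 3, 10, 0.4, 0              # classes, degree, homophily ratio, center class
+ mu = rng.normal(size=(C, C))            # mu[i, k]: mean of class-k neighbors of class-i nodes
+ sigma = np.abs(rng.normal(size=(C, C))) + 0.5
+
+ # closed forms
+ p_k = np.full(C, (1 - h) / (C - 1)); p_k[i] = h                             # P(y_u = k | y_v = i)
+ mean_theory = p_k @ mu[i]                                                   # Eq.(12)
+ var_theory = (p_k @ (sigma[i] ** 2 + mu[i] ** 2) - mean_theory ** 2) / d    # Eq.(17)
+
+ # simulation of z_v: mean over d neighbor features
+ n_trials = 200_000
+ ks = rng.choice(C, size=(n_trials, d), p=p_k)             # sampled neighbor classes
+ x = rng.normal(mu[i][ks], sigma[i][ks])                   # sampled neighbor features
+ z = x.mean(axis=1)
+ print(mean_theory, z.mean())   # should agree closely
+ print(var_theory, z.var())     # should agree closely
+ ```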
580
+
581
+ ## D Proof of Theorem 2
582
+
583
+ Proof 2 In this proof, we again consider a center node $v$ in class $i$ and focus on one dimension of the node feature and aggregated representation. Specifically, for each dimension, the label-wise aggregation can be written as:
584
+
585
+ $$
586
+ {a}_{v, k} = \mathop{\sum }\limits_{{u \in {\mathcal{N}}_{k}\left( v\right) }}\frac{1}{\left| {\mathcal{N}}_{k}\left( v\right) \right| }{x}_{u}, \tag{26}
587
+ $$
588
+
589
+ where ${a}_{v, k}$ denotes the aggregated feature of neighbors in class $k$ . Since $u \in {\mathcal{N}}_{k}\left( v\right)$ , we know that node $u$ 's features ${x}_{u}$ follow the distribution ${x}_{u} \sim N\left( {{\mu }_{ik},{\sigma }_{ik}}\right)$ . The mean of ${a}_{v, k}$ in Eq. (26) is given as:
590
+
591
+ $$
592
+ \mathbb{E}\left( {{a}_{v, k} \mid {y}_{v} = i}\right) = {\mu }_{ik}. \tag{27}
593
+ $$
594
+
595
+ The absolute difference between $\mathbb{E}\left( {{a}_{v, k} \mid {y}_{v} = i}\right)$ and $\mathbb{E}\left( {{a}_{v, k} \mid {y}_{v} = j}\right)$ will be:
596
+
597
+ $$
598
+ {\Delta }_{i, j}^{k} = \left| {\mathbb{E}\left( {{a}_{v, k} \mid {y}_{v} = i}\right) - \mathbb{E}\left( {{a}_{v, k} \mid {y}_{v} = j}\right) }\right| = \left| {{\mu }_{ik} - {\mu }_{jk}}\right| . \tag{28}
599
+ $$
600
+
601
+ Given the assumption that the features are conditionally independent given the label of the center node, the variance of ${a}_{v, k}$ can be written as:
602
+
603
+ $$
604
+ \operatorname{Var}\left( {{a}_{v, k} \mid {y}_{v} = i}\right) = \left\{ \begin{array}{ll} \frac{1}{dh}{\sigma }_{ik}^{2} & \text{ if }k = i; \\ \frac{C - 1}{d\left( {1 - h}\right) }{\sigma }_{ik}^{2} & \text{ else,} \end{array}\right. \tag{29}
605
+ $$
606
+
607
+ In label-wise aggregation, we generally concatenate $\left\{ {{a}_{v, k} : k \in \{ 1,\ldots C\} }\right\}$ for further classification. Therefore, the lower bound of the discriminability can be given by the representation of the class that is most discriminative, which can be formally written as:
608
+
609
+ $$
610
+ {k}^{ * } = \arg \mathop{\max }\limits_{k}\frac{{\left( {\Delta }_{i, j}^{k}\right) }^{2}}{\operatorname{Var}\left( {{a}_{v, k} \mid {y}_{v} = i}\right) } \tag{30}
611
+ $$
612
+
613
+ When $h \geq \frac{1}{C}$ , we can get:
614
+
615
+ $$
616
+ \frac{{\left( {\Delta }_{i, j}^{{k}^{ * }}\right) }^{2}}{\operatorname{Var}\left( {{a}_{v,{k}^{ * }} \mid {y}_{v} = i}\right) } \geq \frac{{dh}{\left| {\mu }_{ii} - {\mu }_{ji}\right| }^{2}}{{\sigma }_{ii}^{2}} \geq \frac{d{\left| {\mu }_{ii} - {\mu }_{ji}\right| }^{2}}{C{\sigma }_{ii}^{2}} \tag{31}
617
+ $$
618
+
619
+ As for $h \leq \frac{1}{C}$ , letting $k \neq i$ , we can infer that:
620
+
621
+ $$
622
+ \frac{{\left( {\Delta }_{i, j}^{{k}^{ * }}\right) }^{2}}{\operatorname{Var}\left( {{a}_{v,{k}^{ * }} \mid {y}_{v} = i}\right) } \geq \frac{d\left( {1 - h}\right) {\left| {\mu }_{ik} - {\mu }_{jk}\right| }^{2}}{\left( {C - 1}\right) {\sigma }_{ik}^{2}} \geq \frac{d{\left| {\mu }_{ik} - {\mu }_{jk}\right| }^{2}}{C{\sigma }_{ik}^{2}} \tag{32}
623
+ $$
624
+
625
+ Therefore, if the condition that $\left| {{\mu }_{ik} - {\mu }_{jk}}\right| > \sqrt{\frac{C}{d}}{\sigma }_{ik},\forall k \in \{ 1,\ldots , C\}$ is met, we can infer from Eq.(31) and Eq.(32) that $\frac{{\left( {\Delta }_{i, j}^{{k}^{ * }}\right) }^{2}}{\operatorname{Var}\left( {{a}_{v,{k}^{ * }} \mid {y}_{v} = i}\right) } > 1$ regardless of the value of the homophily ratio $h$ . This shows that label-wise aggregation can preserve the context and ensure high discriminability regardless of the homophily ratio.
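+
+ To make the label-wise aggregation analyzed above (Eq.(26)) concrete, below is a minimal numpy sketch that, for every node, averages the features of its neighbors separately per class and concatenates the per-class results; class slots without neighbors are left as zeros. It is an illustrative sketch of the aggregation operation only, not the full LW-GCN layer, and the variable names are ours.
+
+ ```python
+ # Minimal sketch of label-wise neighbor aggregation (Eq.(26)).
+ import numpy as np
+
+ def label_wise_aggregate(adj, X, labels, n_classes):
+     """adj: (N, N) 0/1 adjacency, X: (N, F) features, labels: (N,) class ids."""
+     N, F = X.shape
+     out = np.zeros((N, n_classes * F))
+     for v in range(N):
+         neighbors = np.nonzero(adj[v])[0]
+         for k in range(n_classes):
+             nk = neighbors[labels[neighbors] == k]               # neighbors of v in class k
+             if len(nk) > 0:
+                 out[v, k * F:(k + 1) * F] = X[nk].mean(axis=0)   # a_{v,k}
+     return out
+
+ # toy usage
+ rng = np.random.default_rng(0)
+ N, F, C = 6, 4, 3
+ adj = (rng.random((N, N)) < 0.5).astype(int)
+ np.fill_diagonal(adj, 0)
+ X = rng.normal(size=(N, F))
+ labels = rng.integers(0, C, size=N)
+ print(label_wise_aggregate(adj, X, labels, C).shape)             # (6, 12)
+ ```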
626
+
627
+ ## E Additional Details and Experiments on Generated Graphs
628
+
629
+ Algorithm 2 Algorithm of Generating Graphs
630
+
631
+ ---
632
+
633
+ Input: $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right) ,{\mathcal{Y}}_{L}$ , target homophily ratio $h$ , and target node degree $d$
634
+
635
+ Output: ${\mathcal{G}}^{\prime } = \left( {\mathcal{V},{\mathcal{E}}^{\prime },\mathbf{X}}\right)$
636
+
637
+ Split the edges $\mathcal{E}$ into heterophilic edges ${\mathcal{E}}_{n}$ and homophilic edges ${\mathcal{E}}_{s}$ .
638
+
639
+ if $\left| {\mathcal{E}}_{s}\right| \geq {hd}\left| \mathcal{V}\right|$ then
640
+
641
+ Sample ${hd}\left| \mathcal{V}\right|$ edges from ${\mathcal{E}}_{s}$ to get ${\mathcal{E}}_{s}^{\prime }$
642
+
643
+ else
644
+
645
+ Obtain ${hd}\left| \mathcal{V}\right| - \left| {\mathcal{E}}_{s}\right|$ homophilic edges by randomly linking nodes in the same class
646
+
647
+ Combine ${\mathcal{E}}_{s}$ with added homophilic edges to obtain ${\mathcal{E}}_{s}^{\prime }$
648
+
649
+ end if
650
+
651
+ Randomly sample $d\left( {1 - h}\right) \left| \mathcal{V}\right|$ edges from ${\mathcal{E}}_{n}$ as ${\mathcal{E}}_{n}^{\prime }$
652
+
653
+ Get ${\mathcal{E}}^{\prime }$ with ${\mathcal{E}}^{\prime } = {\mathcal{E}}_{n}^{\prime } \cup {\mathcal{E}}_{s}^{\prime }$
654
+
655
+ ---
656
+
657
+ ### E.1 Process of Graph Generation
658
+
659
+ To verify the conclusion in Theorem 1, we generate graphs with different homophily ratios and average degrees based on the large-scale crocodile graph. Specifically, the average node degree of the generated graphs is varied over $\{ 5,{10},{20}\}$ . For each node degree, we sample heterophilic edges, i.e., edges linking nodes in different classes, and homophilic edges, i.e., edges linking nodes in the same class, from the original crocodile graph in different ratios to obtain realistic graphs with different heterophily levels. The homophily ratios of the generated graphs range from 0 to 0.9 with a step of 0.1. Since crocodile itself is a heterophilic graph that does not contain many homophilic edges, there may not be enough homophilic edges to obtain a graph with high homophily and node degree. In this situation, we randomly link nodes in the same class to get the required number of homophilic edges for graph generation. The train/validation/test splits of the generated graphs are the same as those of the original crocodile graph. The algorithm of the graph generation process can be found in Algorithm 2.
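+
+ The following is a minimal Python sketch of Algorithm 2 under simplifying assumptions (edges are treated as an undirected edge list and node labels as a list indexed by node id); function and variable names are illustrative, not taken from any released code.
+
+ ```python
+ # Illustrative sketch of Algorithm 2: sample a graph with target
+ # homophily ratio h and average degree d from an existing edge list.
+ import random
+
+ def generate_graph(edges, labels, h, d, seed=0):
+     """edges: list of (u, v) pairs; labels: list of node labels indexed by node id."""
+     random.seed(seed)
+     n_nodes = len(labels)
+     homo = [e for e in edges if labels[e[0]] == labels[e[1]]]
+     hetero = [e for e in edges if labels[e[0]] != labels[e[1]]]
+
+     n_homo = int(h * d * n_nodes)          # target number of homophilic edges
+     n_hetero = int((1 - h) * d * n_nodes)  # target number of heterophilic edges
+
+     if len(homo) >= n_homo:
+         homo_sel = random.sample(homo, n_homo)
+     else:
+         # not enough homophilic edges: add random same-class links
+         homo_sel = list(homo)
+         by_class = {}
+         for node, c in enumerate(labels):
+             by_class.setdefault(c, []).append(node)
+         eligible = [c for c, nodes in by_class.items() if len(nodes) >= 2]
+         while len(homo_sel) < n_homo:
+             c = random.choice(eligible)
+             homo_sel.append(tuple(random.sample(by_class[c], 2)))
+
+     hetero_sel = random.sample(hetero, min(n_hetero, len(hetero)))
+     return homo_sel + hetero_sel
+
+ # toy usage
+ labels = [0, 0, 1, 1, 2, 2]
+ edges = [(0, 2), (1, 3), (2, 4), (3, 5), (0, 1), (4, 5)]
+ print(len(generate_graph(edges, labels, h=0.5, d=1)))   # number of sampled/added edges
+ ```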
660
+
661
+ ![01963ede-b51e-7ad0-8efc-12886ed83ec2_16_333_1608_1130_286_0.jpg](images/01963ede-b51e-7ad0-8efc-12886ed83ec2_16_333_1608_1130_286_0.jpg)
662
+
663
+ Figure 4: Comparisons between GCN, GAT and our LW-GCN on generated graphs. Note that the model selection module is not adopted in LW-GCN in these experiments.
664
+
665
+ ### E.2 More Experiments on Generated Graphs
666
+
667
+ To verify our theoretical analysis that label-wise aggregation can lead to distinguishable representations regardless of the heterophily level under mild conditions, we also compare LW-GCN with GCN and GAT on the generated graphs with different homophily ratios and average node degrees. The label-wise aggregation is conducted with the pseudo labels and the provided ground-truth labels as described in Sec. 4.2.2. Since we only focus on the label-wise graph convolution in these experiments, the model selection module is removed here. The other settings are the same as those described in Appendix B.1. The average results over 10 splits are shown in Fig. 4. From this figure, we can observe that the performance of LW-GCN is much better than that of GCN and GAT when the heterophily level is high. For example, when $h \approx {0.2}$ , both GCN and GAT can hardly outperform MLP. By contrast, the accuracy of LW-GCN outperforms GCN and GAT by around ${10}\%$ . This demonstrates the effectiveness of adopting label-wise aggregation in graph convolution. In addition, we find that only adopting the model with label-wise graph convolution gives slightly worse performance than GCN/GAT when the homophily ratio is very high. This implies the necessity of deploying a model selection module.
668
+
669
+ ## F Analysis on Heterophilic Graphs
670
+
671
+ In this section, we conduct an empirical analysis to verify Assumption 2 in Sec. 3.2. Specifically, we aim to show that (i) for nodes in the same class, the features of their neighbors in the same class are similar; and (ii) for nodes in different classes, the features of their neighbors in the same class follow different distributions. Let ${\mathcal{X}}_{ik} = \left\{ {{x}_{u} : {y}_{u} = k,{y}_{v} = i, u \in \mathcal{N}\left( v\right) , v \in \mathcal{V}}\right\}$ be the set of neighbors which belong to class $k$ and are linked by a center node in class $i$ . For neighbors in class $k$ , we analyze the average similarity scores between ${\mathcal{X}}_{ik}$ and ${\mathcal{X}}_{jk}$ to investigate whether neighbors in class $k$ that are linked by center nodes in different classes follow different distributions. The results on Crocodile, Chameleon, and Squirrel for representative neighbor classes are presented in Fig. 5, where the (i, j)-th element in the similarity matrix denotes the average node feature cosine similarity between ${\mathcal{X}}_{ik}$ and ${\mathcal{X}}_{jk}$ . From this figure, we can observe that:
672
+
673
+ - For ${\mathcal{X}}_{ik},\forall i \in \{ 1,\ldots , C\}$ , its intra-group similarity score is very high. This shows that the heterophilic neighbors' features are similar when the center nodes are in the same class.
674
+
675
+ - The similarity scores between ${\mathcal{X}}_{ik}$ and ${\mathcal{X}}_{jk}$ are very small when $i \neq j$ . This indicates that for nodes in different classes, their heterophilic neighbors belonging to the same class still differ a lot.
676
+
677
+ With the above observations, Assumption 2 is justified.
678
+
679
+ ![01963ede-b51e-7ad0-8efc-12886ed83ec2_17_318_1239_1157_431_0.jpg](images/01963ede-b51e-7ad0-8efc-12886ed83ec2_17_318_1239_1157_431_0.jpg)
680
+
681
+ Figure 5: Similarity matrices of neighbors linked with centered nodes in different classes on Crocodile, Squirrel, and Chameleon.
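+
+ A minimal sketch of the similarity computation behind Fig. 5 is given below: for a fixed neighbor class $k$ , it gathers the neighbor features ${\mathcal{X}}_{ik}$ grouped by the center-node class $i$ and reports the average pairwise cosine similarity between every pair of groups. Function and variable names are illustrative assumptions, not the exact analysis script.
+
+ ```python
+ # Sketch of the Appendix F analysis: average cosine similarity between
+ # neighbor-feature groups X_{ik} and X_{jk} for a fixed neighbor class k.
+ import numpy as np
+
+ def neighbor_group_similarity(adj, X, labels, k, n_classes):
+     """Return S with S[i, j] = mean cosine similarity between X_{ik} and X_{jk}."""
+     Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)   # row-normalize features
+     groups = []
+     for i in range(n_classes):
+         centers = np.nonzero(labels == i)[0]
+         feats = []
+         for v in centers:
+             nbrs = np.nonzero(adj[v])[0]
+             feats.append(Xn[nbrs[labels[nbrs] == k]])             # class-k neighbors of v
+         groups.append(np.concatenate(feats) if feats else np.zeros((0, X.shape[1])))
+     S = np.zeros((n_classes, n_classes))
+     for i in range(n_classes):
+         for j in range(n_classes):
+             if len(groups[i]) and len(groups[j]):
+                 S[i, j] = (groups[i] @ groups[j].T).mean()        # average pairwise cosine sim
+     return S
+
+ # toy usage
+ rng = np.random.default_rng(0)
+ N, F, C = 8, 5, 3
+ adj = (rng.random((N, N)) < 0.4).astype(int); np.fill_diagonal(adj, 0)
+ X = rng.normal(size=(N, F)); labels = rng.integers(0, C, size=N)
+ print(neighbor_group_similarity(adj, X, labels, k=0, n_classes=C))
+ ```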
682
+
683
+ ## G Additional Ablation Studies
684
+
685
+ Table 4 gives additional ablation studies on Pubmed, Chameleon, and Squirrel. The observations are similar to those in Table 2.
686
+
687
+ ## H Impacts of Label-Wise Aggregation Layers
688
+
689
+ In this section, we explore the sensitivity of LW-GCN to the depth of ${f}_{C}$ , i.e., the number of layers of label-wise message passing. Since LW-GCN will not select ${f}_{C}$ for homophilic graphs, we only conduct the sensitivity analysis on heterophilic graphs. We vary the depth of ${f}_{C}$ over $\{ 2,3,\ldots ,6\}$ . The other experimental settings are the same as those described in Sec. B.1. The results on Chameleon and Squirrel are shown in Fig. 6. From the figure, we find that our LW-GCN is insensitive to the number of layers, while the performance of GCN drops with increasing depth. This is because the aggregation in LW-GCN is performed label-wise to capture the context information. Embeddings of nodes in different classes will not be smoothed to similar values even after many iterations.
690
+
691
+ Table 4: Ablation Study
692
+
693
+ <table><tr><td>Dataset</td><td>MLP</td><td>GCN</td><td>GCNII</td><td>LW-GCN \\P</td><td>LW-GCN \\G</td><td>LW-GCN ${}_{GCN}$</td><td>LW-GCN</td></tr><tr><td>Pubmed</td><td>${72.7} \pm {0.4}$</td><td>${78.4} \pm {1.1}$</td><td>${80.2} \pm {0.2}$</td><td>${77.6} \pm {0.7}$</td><td>${72.4} \pm {0.6}$</td><td>${79.2} \pm {0.8}$</td><td>${80.3} \pm {0.3}$</td></tr><tr><td>Chameleon</td><td>${48.0} \pm {1.5}$</td><td>${63.5} \pm {2.5}$</td><td>${63.5} \pm {2.5}$</td><td>${74.7} \pm {1.4}$</td><td>${74.2} \pm {1.8}$</td><td>${74.3} \pm {2.3}$</td><td>${74.4} \pm {1.2}$</td></tr><tr><td>Squirrel</td><td>${32.3} \pm {1.8}$</td><td>${46.7} \pm {1.5}$</td><td>${49.4} \pm {1.7}$</td><td>${62.3} \pm {2.3}$</td><td>${62.3} \pm {1.3}$</td><td>${61.9} \pm {1.4}$</td><td>${62.6} \pm {1.6}$</td></tr></table>
694
+
695
+ ![01963ede-b51e-7ad0-8efc-12886ed83ec2_18_332_673_1132_281_0.jpg](images/01963ede-b51e-7ad0-8efc-12886ed83ec2_18_332_673_1132_281_0.jpg)
696
+
697
+ Figure 6: Classification accuracy with different model depth.
698
+
699
+ ## I Limitations of Our Work
700
+
701
+ In this paper, we conduct thorough theoretical and empirical analyses to show the impact of heterophily levels on GCN. We demonstrate that the GCN model can be largely affected by heterophily and give poor prediction results. To alleviate the issue brought by heterophily, we develop a novel label-wise graph convolutional network that preserves the heterophilic context to facilitate node classification. However, there are some limitations of our work. First, our theoretical analysis of the impact of the heterophily level mainly focuses on the GCN model. We hope to extend this analysis to more complex message-passing mechanisms such as GAT in the future. Second, node labels are required for LW-GCN to obtain pseudo labels for label-wise graph convolution. However, in some tasks such as link prediction, labels are not available. Therefore, we will investigate how to obtain useful pseudo labels for applications that do not provide node labels.
papers/LOG/LOG 2022/LOG 2022 Conference/HRmby7yVVuF/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,303 @@
1
+ § LABEL-WISE GRAPH CONVOLUTIONAL NETWORK FOR HETEROPHILIC GRAPHS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Graph Neural Networks (GNNs) have achieved remarkable performance in modeling graphs for various applications. However, most existing GNNs assume the graphs exhibit strong homophily in node labels, i.e., nodes with similar labels are connected in the graphs. They fail to generalize to heterophilic graphs where linked nodes may have dissimilar labels and attributes. Therefore, in this paper, we investigate a novel framework that performs well on graphs with either homophily or heterophily. More specifically, we propose a label-wise message passing mechanism to avoid the negative effects caused by aggregating dissimilar node representations and preserve the heterophilic contexts for representation learning. We further propose a bi-level optimization method to automatically select the model for graphs with homophily/heterophily. Theoretical analysis and extensive experiments demonstrate the effectiveness of our proposed framework for node classification on both homophilic and heterophilic graphs.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Graph-structured data is pervasive in the real world, such as knowledge graphs, traffic networks, and social networks. Therefore, it is important to model graphs for downstream tasks such as traffic prediction [34], recommender systems [12] and drug generation [3]. To capture the topology information in graph-structured data, Graph Neural Networks (GNNs) [30] adopt a message-passing mechanism which learns a node's representation by iteratively aggregating the representations of its neighbors. This can enrich the node features and preserve the node attributes and local topology for various downstream tasks.
16
+
17
+ Despite the great success of GNNs in modeling graphs, there is concern about processing heterophilic graphs where edges often link nodes dissimilar in attributes or labels. Specifically, existing works [37, 7] find that GNNs could fail to generalize to graphs with heterophily due to their implicit/explicit homophily assumption. For example, the Graph Convolutional Network (GCN) is even outperformed by an MLP that ignores the graph structure on heterophilic website datasets [37]. However, a recent work [21] argues that the homophily assumption is not a necessity for GNNs. They show that GCN can work well on dense heterophilic graphs whose neighborhood patterns of different classes are distinguishable. But their analysis and conclusions are limited to heterophilic graphs under strict conditions, and fail to show the relation between heterophily levels and the performance of GNNs. Thus, in Sec. 3, we conduct a thorough theoretical and empirical analysis of GCN to investigate the impact of heterophily levels, which covers all the aforementioned observations. As Theorem 1 and Fig. 1 show, we find that the performance of GCN will first decrease and then increase as the heterophily level increases. And the aggregation in GCN could even lead to non-discriminative representations under certain conditions.
18
+
19
+ Though heterophilic graphs challenge existing GNNs, we observe that the heterophilic neighborhood context itself provides useful information. Generally, two nodes of the same class tend to have similar heterophilic neighborhood contexts, while two nodes of different classes are more likely to have different heterophilic neighborhood contexts, which is verified in Appendix F. Thus, a heterophilic context-preserving mechanism can lead to more discriminative representations. One promising way to preserve the heterophilic context is to conduct label-wise aggregation, i.e., separately aggregate neighbors in each class. In this way, we can summarize the heterophilic neighbors belonging to each class into an embedding to preserve the local context information for representation learning. As shown in the example in Fig. 2, for node ${v}_{A}$ , with label-wise aggregation, ${v}_{A}$ will be represented as $\left\lbrack {{1.0},{5.5},{2.0}\text{ , non-existence }}\right\rbrack$ , in the order of ${v}_{A}$ ’s attribute, blue, green, and orange neighbors, respectively. Compared with ${v}_{B}$ , ${v}_{A}$ ’s representations of the central node and neighborhood context differ significantly. In contrast, with the aggregation in GCN, the obtained representations of the two nodes are rather similar. In other words, we obtain more discriminative features on heterophilic graphs with label-wise aggregation, which is also verified by our analysis in Theorem 2. Though promising, no existing work explores label-wise message passing to address the challenge of heterophilic graphs.
20
+
21
+ Therefore, in this paper, we investigate a novel label-wise aggregation for graph convolution to facilitate node classification on heterophilic graphs. In essence, we are faced with two challenges: (i) label-wise aggregation needs the label of each node, while for node classification we are only given a small set of labeled nodes. How do we adopt label-wise graph convolution on sparsely labeled heterophilic graphs to facilitate node classification? (ii) In practice, the homophily levels of the given graphs can vary and are often unknown. For homophilic graphs, the label-wise graph convolution might not work as well as previous GNNs built on the homophily assumption. How do we ensure the performance on both heterophilic and homophilic graphs? To address these challenges, we propose a novel framework Label-Wise GCN (LW-GCN). LW-GCN adopts a pseudo label predictor to predict pseudo labels and designs a novel label-wise message passing to preserve the heterophilic contexts with pseudo labels. To handle both heterophilic and homophilic graphs, apart from the label-wise message passing GNN, LW-GCN also utilizes a GNN for homophilic graphs, and adopts bi-level optimization on the validation data to automatically select the better model for the given graph. The main contributions are:
22
+
23
+ * We theoretically show the impact of heterophily levels on GCN and demonstrate the potential limitations of GCN in learning on heterophilic graphs;
24
+
25
+ * We design a label-wise graph convolution to preserve the local context in heterophilic graphs, whose effectiveness is supported by our theoretical and empirical analysis;
26
+
27
+ * We propose a novel framework LW-GCN, which deploys a pseudo label predictor and an automatic model selection module to achieve label-wise aggregation on sparsely labeled graphs and ensure the performance on both heterophilic and homophilic graphs; and
28
+
29
+ * Extensive experiments on real-world graphs with heterophily and homophily are conducted to demonstrate the effectiveness of LW-GCN.
30
+
31
+ § 2 RELATED WORK
32
+
33
+ Graph neural networks (GNNs) have shown great success in various applications such as social networks [12,8], financial transaction networks [28,10] and traffic networks [34, 35]. Based on the definition of the graph convolution, GNNs can be divided into two categories, i.e., spectral-based $\left\lbrack {4,9,{17},{19}}\right\rbrack$ and spatial-based $\left\lbrack {{27},{32},1}\right\rbrack$ . Spectral-based GNN models are defined according to spectral graph theory. Bruna et al. [4] first generalize the convolution operation to graph-structured data from the spectral domain. GCN [17] simplifies the graph convolution by a first-order approximation. Spatial-based graph convolution aggregates the information of the neighbor nodes [22,12,5]. For instance, a spatial graph convolution that incorporates the attention mechanism is applied in GAT [27] to facilitate the information aggregation. The Graph Isomorphism Network (GIN) [31] is proposed to learn more powerful representations of graph structures. Recently, to learn better node representations, deep graph neural networks $\left\lbrack {6,{18},{20}}\right\rbrack$ and self-supervised learning methods $\left\lbrack {{26},{16},{38},{24},{33}}\right\rbrack$ have been investigated.
34
+
35
+ However, the aforementioned methods are generally designed based on the homophily assumption of the graph. A low homophily level in some real-world graphs can largely degrade their performance [37]. Some initial efforts $\left\lbrack {{23},2,{14},{37},{36},7,{13}}\right\rbrack$ have been made to address the problem of heterophilic graphs. For example, H2GCN [37] investigates three key designs for GNNs on heterophilic graphs. SimP-GCN [14] adopts a node similarity preserving mechanism to handle graphs with heterophily. FAGCN [2] adaptively aggregates low-frequency and high-frequency signals from neighbors to learn representations for graphs with heterophily. GPR-GNN [7] proposes a generalized PageRank GNN architecture that can learn positive/negative weights for the representations after different steps of propagation to mitigate the graph heterophily issue. CPGNN [36] adopts a compatibility matrix to model the heterophily in the graph. Recently, BM-GCN [13] proposes to utilize pseudo labels in the convolutional operation. Specifically, the pseudo labels are used to obtain a block similarity matrix to re-weight the edges in heterophilic graphs. Then, node pairs belonging to different label combinations can have different information exchange. Our LW-GCN is inherently different from these methods: (i) we propose a novel label-wise graph convolution to better capture the neighbors' information in heterophilic graphs; and (ii) automatic model selection is deployed to achieve state-of-the-art performance on both homophilic and heterophilic graphs.
36
+
37
+ § 3 PRELIMINARIES
38
+
39
+ In this section, we first present the notations and definitions, followed by an introduction of the GCN design. We then conduct a theoretical analysis to investigate the impact of heterophily on GCN.
40
+
41
+ § 3.1 NOTATIONS AND DEFINITION
42
+
43
+ Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ be an attributed graph, where $\mathcal{V} = \left\{ {{v}_{1},\ldots ,{v}_{N}}\right\}$ is the set of $N$ nodes, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges, and $\mathbf{X} = \left\{ {{\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{N}}\right\}$ is the set of node attributes. $\mathbf{A} \in {\mathbb{R}}^{N \times N}$ represents the adjacency matrix of the graph $\mathcal{G}$ , where ${\mathbf{A}}_{ij} = 1$ indicates an edge between nodes ${v}_{i}$ and ${v}_{j}$ ; otherwise, ${\mathbf{A}}_{ij} = 0$ . In the node classification task, each node belongs to one of $C$ classes. We use ${y}_{i}$ to denote the label of node ${v}_{i}$ . Graphs can be split into homophilic and heterophilic graphs based on how likely edges link nodes in the same class. The homophily level is measured by the homophily ratio. Definition 1 (Homophily Ratio) It is the fraction of edges in a graph that connect nodes of the same class. The homophily ratio $h$ is calculated as $h = \frac{\left| \left\{ \left( {v}_{i},{v}_{j}\right) \in \mathcal{E} : {y}_{i} = {y}_{j}\right\} \right| }{\left| \mathcal{E}\right| }$ .
44
+
45
+ When the homophily ratio is small, most of the edges will link nodes from different classes, which indicates a heterophilic graph. In homophilic graphs, connected nodes are more likely to belong to the same class, which will lead to a homophily ratio close to 1 .
46
+
47
+ § 3.2 HOW DOES THE HETEROPHILY AFFECT THE GCN?
48
+
49
+ GCN [17] is one of the most widely used graph neural networks. The operation in each layer of GCN
50
+
51
+ can be written as:
52
+
53
+ $$
54
+ {\mathbf{H}}^{\left( k + 1\right) } = \sigma \left( {\widetilde{\mathbf{A}}{\mathbf{H}}^{\left( k\right) }{\mathbf{W}}^{\left( k\right) }}\right) , \tag{1}
55
+ $$
56
+
57
+ where ${\mathbf{H}}^{\left( k\right) }$ is the node representation matrix output by the $k$ -th layer and $\widetilde{\mathbf{A}}$ is the normalized adjacency matrix. Generally, the symmetrically normalized form ${\mathbf{D}}^{-\frac{1}{2}}\mathbf{A}{\mathbf{D}}^{-\frac{1}{2}}$ or ${\mathbf{D}}^{-1}\mathbf{A}$ is used as $\widetilde{\mathbf{A}}$ , where $\mathbf{D}$ is a diagonal matrix with ${\mathbf{D}}_{ii} = \mathop{\sum }\limits_{j}{\mathbf{A}}_{ij}$ . The adjacency matrix can be augmented with a self-loop. $\sigma$ is an activation function such as ReLU. A single GCN layer can be split into two steps. First, the GCN layer averages the neighbor features with $\mathbf{Z} = \widetilde{\mathbf{A}}\mathbf{X}$ . Then, a non-linear transformation $\sigma \left( \mathbf{{ZW}}\right)$ is applied to obtain intermediate features or final predictions. The step of averaging the neighbor features can benefit node classification when the neighbors have similar features. However, for heterophilic graphs, mixing neighbors that possess different features may result in poor representations for node classification. This is justified by the following theorem, which thoroughly analyzes the impact of the heterophily level on the linear separability of the representations after one aggregation step in GCN.
58
+
59
+ Assumptions. We first discuss the assumptions on the heterophilic graphs: (i) Following previous works [37], the graph $\mathcal{G}$ is considered a $d$ -regular graph, i.e., each node has $d$ neighbors; for each node $v$ , its neighbors’ features and class labels $\left\{ {{y}_{u} : u \in \mathcal{N}\left( v\right) }\right\}$ are conditionally independent given ${y}_{v}$ , and $P\left( {{y}_{u} = {y}_{v} \mid {y}_{v}}\right) = h,P\left( {{y}_{u} = y \mid {y}_{v}}\right) = \frac{1 - h}{C - 1},\forall y \neq {y}_{v}$ . The dimensions of the node features are independent of each other; (ii) For nodes in different classes, their heterophilic neighbors’ features follow different distributions. Specifically, let ${\mathcal{N}}_{k}\left( v\right)$ denote node $v$ ’s neighbors of class $k$ . For two nodes $v$ and $s$ in classes $i$ and $j\left( {i \neq j}\right)$ , the features of their heterophilic neighbors ${\mathcal{N}}_{k}\left( v\right)$ and ${\mathcal{N}}_{k}\left( s\right)$ in class $k \in \{ 1,\ldots ,C\}$ follow two different normal distributions $N\left( {{\mathbf{\mu }}_{ik},{\mathbf{\sigma }}_{ik}}\right)$ and $N\left( {{\mathbf{\mu }}_{jk},{\mathbf{\sigma }}_{jk}}\right)$ , where ${\mathbf{\mu }}_{ik}$ and ${\mathbf{\mu }}_{jk}$ represent the means, and ${\mathbf{\sigma }}_{ik}$ and ${\mathbf{\sigma }}_{jk}$ denote the standard deviations. Intuitively, though nodes in ${\mathcal{N}}_{k}\left( v\right)$ and ${\mathcal{N}}_{k}\left( s\right)$ belong to the same class $k$ , they are connected to nodes of different classes because of their different properties. For example, in a molecule, atoms of the same type can exhibit different features when they are linked to different atoms. Therefore, this assumption is reasonable, and it is also verified by the empirical analysis on large real-world heterophilic graphs in Appendix F. Let ${\mathbf{\sigma }}_{i} = \sqrt{\frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}\left( {{\mathbf{\mu }}_{ik} - {\overline{\mathbf{\mu }}}_{i}}\right) \odot \left( {{\mathbf{\mu }}_{ik} - {\overline{\mathbf{\mu }}}_{i}}\right) }$ , where ${\overline{\mathbf{\mu }}}_{i} = \frac{1}{C}\mathop{\sum }\limits_{{k = 1}}^{C}{\mathbf{\mu }}_{ik}$ and $\odot$ represents the element-wise product. We then have the following theorem.
60
+
61
+ Theorem 1 For an attributed graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ that follows the above assumptions in Sec. 3.2, if $\left| {{\mathbf{\mu }}_{ii} - {\mathbf{\mu }}_{jj}}\right| > \left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right|$ and ${\mathbf{\sigma }}_{i} > {\mathbf{\sigma }}_{ii},\forall k \in \{ 1,\ldots C\}$ , then as the homophily ratio $h$ decreases, the discriminability of the representations obtained by the averaging process in the GCN layer, i.e., $\mathbf{Z} = {\mathbf{D}}^{-1}\mathbf{A}\mathbf{X}$ , will first decrease until $h = \frac{1}{C}$ and then increase. When $h = \frac{1}{C}$ and $d < \frac{{\sigma }_{i}^{2}}{{\left| {\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}\right| }^{2}}$ , the representations after the averaging process will be nearly non-discriminative.
62
+
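+ As an informal sanity check of this statement (not a substitute for the proof in Appendix C), note that under the assumptions of Sec. 3.2 the expected aggregated representation of a node $v$ with ${y}_{v} = i$ is
+
+ $$
+ \mathbb{E}\left\lbrack {\mathbf{z}}_{v} \mid {y}_{v} = i\right\rbrack = h{\mathbf{\mu }}_{ii} + \frac{1 - h}{C - 1}\mathop{\sum }\limits_{{k \neq i}}{\mathbf{\mu }}_{ik},
+ $$
+
+ which reduces to ${\overline{\mathbf{\mu }}}_{i}$ when $h = \frac{1}{C}$ . The gap between the expected representations of classes $i$ and $j$ then shrinks to $\frac{1}{C}\mathop{\sum }\limits_{k}\left( {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right)$ , which is small whenever $\left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right|$ is small, while the variance of ${\mathbf{z}}_{v}$ only shrinks at rate $1/d$ ; this is exactly the regime in which Theorem 1 predicts nearly non-discriminative representations.
+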
63
+ The detailed proof can be found in Appendix C. The conditions in this theorem generally hold. Since the intra-class distance is often much smaller than the inter-class distance, $\left| {{\mathbf{\mu }}_{ii} - {\mathbf{\mu }}_{jj}}\right| > \left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right|$ is generally met in real-world graphs. As for ${\mathbf{\sigma }}_{i}$ , it measures the standard deviation of the mean neighbor features across classes; as a result, ${\mathbf{\sigma }}_{i}$ is usually much larger than ${\mathbf{\sigma }}_{ii}$ and $\left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right|$ . Therefore, Theorem 1 generally holds for real-world graphs. We can observe from Theorem 1 that (i) heterophily levels in a certain range will largely degrade the performance of GCN; and (ii) GCN will be more negatively affected by heterophilic graphs with lower node degrees. Though our analysis is based on GCN, it can easily be extended to GNNs that average neighbor representations in the aggregation (e.g., GraphSage [12], APPNP [18], and SGC [29]). We leave the extension of this analysis to more complex message-passing mechanisms as future work.
64
+
65
+ To empirically verify the above theoretical analysis, we synthesize graphs with different homophily ratios and node degrees by deleting/adding edges in the Crocodile graph. The detailed graph generation process can be found in Appendix E.1. The results of GCN and GAT [27] on graphs with various node degrees are shown in Fig. 1. We can observe that (i) as the homophily ratio decreases, the performance of GCN keeps decreasing until $h$ is around ${0.2}$ ($h \approx \frac{1}{C}$), after which it starts to increase; (ii) when $h$ is around $\frac{1}{C}$ , the performance can be very poor, and even much worse than MLP on graphs with low node degree. These observations are consistent with Theorem 1. This trend has also been reported in [21]. However, their theoretical analysis focuses on proving the effectiveness of GCN on heterophilic graphs with discriminative neighborhoods, which can only explain the observation when $h < \frac{1}{C}$ . By contrast, our theoretical analysis can explain the whole trend of GCN performance w.r.t. the homophily ratio. This empirical analysis further demonstrates the general limitations of current GNN models in learning on graphs with heterophily.
66
+
67
+ <graphics>
68
+
69
+ Figure 1: Impact of the heterophily level on GCN and GAT.
70
+
71
+ § 3.3 PROBLEM DEFINITION
72
+
73
+ Based on the analysis above, we can infer that current GNNs are effective on graphs with high homophily, while they are challenged by graphs with heterophily. In the real world, we are usually given graphs with various homophily levels. In addition, the graphs are often sparsely labeled, and due to the lack of labels, the homophily ratio of a given graph is generally unknown. Thus, we aim to develop a framework that works for semi-supervised node classification on graphs with any homophily level. The problem is defined as:
74
+
75
+ Problem 1 Given an attributed graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ with a set of labels ${\mathcal{Y}}_{L}$ for node set ${\mathcal{V}}_{L} \subset \mathcal{V}$ , the homophily ratio $h$ of $\mathcal{G}$ is unknown, we aim to learn a GNN which accurately predicts the labels of the unlabeled nodes, i.e., $f\left( {\mathcal{G},{\mathcal{Y}}_{L}}\right) \rightarrow {\widehat{\mathcal{Y}}}_{U}$ , where $f$ is the function we aim to learn and ${\widehat{\mathcal{Y}}}_{U}$ is the set of predicted labels for unlabeled nodes.
76
+
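+ As a point of reference for Problem 1, the homophily ratio of a labeled graph can be estimated in a few lines; the sketch below uses the standard edge-homophily definition (the fraction of edges whose endpoints share a label) on a toy edge list, assuming this matches the definition of $h$ used in Sec. 3.1.
+
+ ```python
+ import numpy as np
+
+ edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])   # toy undirected edge list
+ labels = np.array([0, 0, 1, 1])                      # toy node labels
+
+ h = np.mean(labels[edges[:, 0]] == labels[edges[:, 1]])
+ print(f"homophily ratio h = {h:.2f}")                # 0.50 for this toy graph
+ ```
+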
77
+ § 4 METHODOLOGY
78
+
79
+ As the analysis in Sec. 3 shows, the aggregation process in GCN mixes neighbors with various labels/distributions in heterophilic graphs, resulting in non-discriminative representations of the local context. Based on this motivation, we propose to adopt label-wise aggregation in graph convolution, i.e., neighbors in the same class are aggregated separately, to preserve the heterophilic context. Next, we give the details of the label-wise aggregation along with a theoretical analysis that verifies its capability of obtaining distinguishable representations of the heterophilic context. Then, we present how to apply label-wise graph convolution on sparsely labeled graphs and how to ensure performance on both heterophilic and homophilic graphs.
80
+
81
+ <graphics>
82
+
83
+ Figure 2: Illustration of label-wise aggregation and the overall framework of our LW-GCN.
84
+
85
+ § 4.1 LABEL-WISE GRAPH CONVOLUTION
86
+
87
+ In heterophilic graphs, we observe that the heterophilic neighbor context itself provides useful information. Let ${\mathcal{N}}_{k}\left( v\right)$ denote node $v$ ’s neighbors of label class $k$ . As shown in Appendix F, for two nodes $u$ and $v$ of the same class, i.e., ${y}_{u} = {y}_{v}$ , the features of nodes in ${\mathcal{N}}_{k}\left( u\right)$ are likely to be similar to those of nodes in ${\mathcal{N}}_{k}\left( v\right)$ ; while for nodes $u$ and $s$ with ${y}_{u} \neq {y}_{s}$ , the features of nodes in ${\mathcal{N}}_{k}\left( u\right)$ are likely to be different from those in ${\mathcal{N}}_{k}\left( s\right)$ . Therefore, for each node $v \in \mathcal{V}$ , we propose to summarize the information of ${\mathcal{N}}_{k}\left( v\right)$ by label-wise aggregation to capture the useful heterophilic context. Let ${\mathbf{a}}_{v,k}$ be the aggregated representation of the neighbors in class $k$ ; the process of obtaining the heterophilic context representation with label-wise aggregation can be formally written as:
88
+
89
+ $$
90
+ {\mathbf{a}}_{v,k} = \mathop{\sum }\limits_{{u \in {\mathcal{N}}_{k}\left( v\right) }}\frac{1}{\left| {\mathcal{N}}_{k}\left( v\right) \right| }{\mathbf{x}}_{u},\;{\mathbf{h}}_{v}^{c} = \operatorname{CONCAT}\left( {{\mathbf{a}}_{v,1},\ldots ,{\mathbf{a}}_{v,C}}\right) , \tag{2}
91
+ $$
92
+
93
+ where $C$ is the number of classes and ${\mathbf{h}}_{v}^{c}$ denotes the representation of the neighborhood context. As shown in Eq. (2), concatenation is applied to obtain the context representation so as to preserve the heterophilic context. When no neighbor of $v$ belongs to class $k$ , a zero embedding is assigned for class $k$ . We can then augment the representation of the center node with the context representation, following the general design of GNNs. Specifically, we concatenate the context representation ${\mathbf{h}}_{v}^{c}$ and the center node representation ${\mathbf{x}}_{v}$ , followed by a non-linear transformation:
94
+
95
+ $$
96
+ {\mathbf{h}}_{v} = \sigma \left( {\mathbf{W} \cdot \operatorname{CONCAT}\left( {{\mathbf{x}}_{v},{\mathbf{h}}_{v}^{c}}\right) }\right) , \tag{3}
97
+ $$
98
+
99
+ where $\mathbf{W}$ denotes the learnable parameters in the label-wise graph convolution and $\sigma$ denotes the activation function such as ReLU.
100
+
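+ A minimal NumPy sketch of Eqs. (2)-(3) is given below; it assumes every neighbor's label is known (Sec. 4.2 replaces this with pseudo labels), and the toy graph, dimensions and random weights are illustrative placeholders.
+
+ ```python
+ import numpy as np
+
+ num_classes = 3
+ X = np.random.randn(5, 8)                        # node features
+ W = np.random.randn(8 + 8 * num_classes, 16)     # weights applied to CONCAT(x_v, h_v^c)
+ labels = np.array([0, 1, 2, 1, 0])               # neighbor labels used for grouping
+ neighbours = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1], 4: [2]}
+
+ def label_wise_conv(v):
+     # Eq. (2): average neighbors separately per class; zero vector if a class is absent.
+     context = []
+     for k in range(num_classes):
+         nk = [u for u in neighbours[v] if labels[u] == k]
+         context.append(X[nk].mean(axis=0) if nk else np.zeros(X.shape[1]))
+     h_c = np.concatenate(context)                # CONCAT(a_{v,1}, ..., a_{v,C})
+     # Eq. (3): concatenate the center node's own features and apply a non-linearity.
+     return np.maximum(np.concatenate([X[v], h_c]) @ W, 0.0)
+
+ H = np.stack([label_wise_conv(v) for v in range(5)])
+ print(H.shape)                                   # (5, 16)
+ ```
+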
101
+ In this section, we further prove the superiority of label-wise graph convolution in learning discriminative representations of the heterophilic context via the following theorem.
102
+
103
+ Theorem 2 We consider an attributed graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ that follows the aforementioned assumptions in Sec. 3.2. If $\left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right| > \sqrt{\frac{C}{d}}{\mathbf{\sigma }}_{ik},\forall k \in \{ 1,\ldots ,C\}$ , the heterophilic context representation ${\mathbf{h}}_{v}^{c}$ obtained by the label-wise aggregation in Eq. (2) will keep its discriminability regardless of the value of the homophily ratio $h$ .
104
+
105
+ The detailed proof is presented in Appendix D. The difference between the groups of neighbors is naturally larger than the intra-group variance. Since $\sqrt{\frac{C}{d}}$ is usually small (e.g. around 1.8 in the Texas graph), the condition $\left| {{\mathbf{\mu }}_{ik} - {\mathbf{\mu }}_{jk}}\right| > \sqrt{\frac{C}{d}}{\mathbf{\sigma }}_{ik}$ is generally satisfied in real-world scenarios. We also adopt the label-wise graph convolution on the synthetic graphs with different homophily ratios to empirically show its effectiveness. The results can be found in Appendix E.2.
106
+
107
+ § 4.2 LW-GCN: A UNIFIED FRAMEWORK FOR GRAPHS WITH HOMOPHILY OR HETEROPHILY
108
+
109
+ Though the analysis in Sec. 4.1 proves the effectiveness of label-wise graph convolution in processing graphs with heterophily, there are still two major challenges for semi-supervised node classification on graphs with any heterophily level: (i) how to conduct label-wise graph convolution on heterophilic graphs with a small number of labeled nodes; and (ii) how to make it work for both heterophilic and homophilic graphs. To address these challenges, we propose a novel framework, LW-GCN, which is illustrated in Fig. 2. LW-GCN is composed of an MLP-based pseudo label predictor ${f}_{P}$ , a GNN ${f}_{C}$ using label-wise graph convolution, a GNN ${f}_{G}$ for homophilic graphs, and an automatic model selection module. The predictor ${f}_{P}$ takes the node attributes as input to produce pseudo labels. ${f}_{C}$ utilizes the estimated pseudo labels from ${f}_{P}$ to conduct label-wise graph convolution on $\mathcal{G}$ for node classification. To ensure performance on graphs with any homophily level, LW-GCN also trains ${f}_{G}$ , i.e., a GNN for homophilic graphs, and can automatically select the model for graphs with unknown homophily ratio. For the model selection module, a bi-level optimization on the validation set is applied to learn the weights for model selection. Next, we give the details of each component.
110
+
111
+ § 4.2.1 PSEUDO LABEL PREDICTION.
112
+
113
+ In label-wise graph convolution, neighbors in different classes are aggregated separately to update the node representations. However, only a small number of nodes are provided with labels. Thus, a pseudo label predictor ${f}_{P}$ is deployed to estimate labels for label-wise aggregation. Specifically, an MLP is utilized to obtain the pseudo label of node $v$ as ${\widehat{\mathbf{y}}}_{v}^{P} = \operatorname{MLP}\left( {\mathbf{x}}_{v}\right)$ , where ${\mathbf{x}}_{v}$ denotes the attributes of node $v$ . Note that we use an MLP as the predictor because the message passing of GNNs may lead to poor predictions on heterophilic graphs. The loss function for training ${f}_{P}$ is:
114
+
115
+ $$
116
+ \mathop{\min }\limits_{{\theta }_{P}}{\mathcal{L}}_{P} = \frac{1}{\left| {\mathcal{V}}_{\text{ train }}\right| }\mathop{\sum }\limits_{{v \in {\mathcal{V}}_{\text{ train }}}}l\left( {{\widehat{y}}_{v}^{P},{y}_{v}}\right) , \tag{4}
117
+ $$
118
+
119
+ where ${\mathcal{V}}_{\text{ train }}$ is the set of labeled nodes in the training set, ${y}_{v}$ denotes the true label of node $v$ , ${\theta }_{P}$ represents the parameters of the predictor ${f}_{P}$ , and $l\left( \cdot \right)$ is the cross-entropy loss.
120
+
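+ A compact PyTorch sketch of the pseudo label predictor ${f}_{P}$ trained with Eq. (4) is shown below; the input dimension and the toy data are placeholders (the single hidden layer of size 64 follows the setting in Sec. 5.1).
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ f_P = torch.nn.Sequential(                 # plain MLP: no message passing involved
+     torch.nn.Linear(1433, 64), torch.nn.ReLU(), torch.nn.Linear(64, 7))
+ opt = torch.optim.Adam(f_P.parameters(), lr=1e-2)
+
+ X = torch.randn(100, 1433)                 # dummy node features
+ y = torch.randint(0, 7, (100,))            # dummy labels
+ train_mask = torch.zeros(100, dtype=torch.bool)
+ train_mask[:20] = True                     # only a few nodes are labeled
+
+ for _ in range(200):                       # Eq. (4): cross entropy on labeled nodes only
+     loss = F.cross_entropy(f_P(X[train_mask]), y[train_mask])
+     opt.zero_grad()
+     loss.backward()
+     opt.step()
+
+ pseudo_labels = f_P(X).argmax(dim=1)       # hard pseudo labels for label-wise aggregation
+ ```
+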
121
+ § 4.2.2 ARCHITECTURE OF LW-GCN FOR HETEROPHILIC GRAPHS.
122
+
123
+ With ${f}_{P}$ , we can get pseudo labels ${\widehat{\mathcal{Y}}}_{U}^{P}$ for the unlabeled nodes ${\mathcal{V}}_{U} = \mathcal{V} \smallsetminus {\mathcal{V}}_{L}$ . Combining them with the provided ${\mathcal{Y}}_{L}$ , we have the labels ${\mathcal{Y}}^{P} = {\widehat{\mathcal{Y}}}_{U}^{P} \cup {\mathcal{Y}}_{L}$ necessary for label-wise aggregation in Eq. (2). Then, the node representations can be updated with the heterophilic context by Eq. (3). Multiple layers of label-wise graph convolution can be applied to incorporate more hops of neighbors in representation learning. The process of one layer of label-wise graph convolution with pseudo labels can be rewritten as:
124
+
125
+ $$
126
+ {\mathbf{a}}_{v,k}^{\left( l\right) } = \mathop{\sum }\limits_{{u \in {\mathcal{N}}_{k}^{P}\left( v\right) }}\frac{1}{\left| {\mathcal{N}}_{k}^{P}\left( v\right) \right| }{\mathbf{h}}_{u}^{\left( l\right) },\;{\mathbf{h}}_{v}^{\left( l + 1\right) } = \sigma \left( {{\mathbf{W}}^{\left( l\right) } \cdot \operatorname{CONCAT}\left( {{\mathbf{h}}_{v}^{\left( l\right) },{\mathbf{a}}_{v,1}^{\left( l\right) },\ldots ,{\mathbf{a}}_{v,C}^{\left( l\right) }}\right) }\right) , \tag{5}
127
+ $$
128
+
129
+ where ${\mathcal{N}}_{k}^{P}\left( v\right) = \left\{ {u : \left( {v,u}\right) \in \mathcal{E} \land {\widehat{y}}_{u}^{P} = k}\right\}$ stands for node $v$ ’s neighbors with estimated label $k$ , and ${\mathbf{h}}_{v}^{\left( l\right) }$ is the representation of node $v$ at the $l$-th layer of label-wise graph convolution, with ${\mathbf{h}}_{v}^{\left( 0\right) } = {\mathbf{x}}_{v}$ . In heterophilic graphs, different hops of neighbors may exhibit different distributions, which can provide useful information for node classification. Therefore, the final prediction is obtained by combining the intermediate representations of the $K$-layer model:
130
+
131
+ $$
132
+ {\widehat{\mathbf{y}}}_{v}^{C} = \operatorname{softmax}\left( {{\mathbf{W}}_{C} \cdot \operatorname{COMBINE}\left( {{\mathbf{h}}_{v}^{\left( 1\right) },\ldots ,{\mathbf{h}}_{v}^{\left( K\right) }}\right) }\right) , \tag{6}
133
+ $$
134
+
135
+ where ${\mathbf{W}}_{C}$ is a learnable weight matrix and ${\widehat{\mathbf{y}}}_{v}^{C}$ denotes the predicted label probabilities of node $v$ . Various operations such as max-pooling and concatenation [32] can be applied as the COMBINE function.
136
+
137
+ § 4.2.3 AUTOMATIC MODEL SELECTION
138
+
139
+ In heterophilic graphs, the homophily ratio is very small and can even be around 0.2 [23]. With a reasonable pseudo label predictor, label-wise aggregation with pseudo labels will mix much less noise than the standard GNN aggregation. In contrast, for homophilic graphs such as citation networks, the homophily ratios are close to 1. In this situation, directly aggregating all the neighbors may introduce less noise into the representations than aggregating label-wisely, as the pseudo labels contain noise. Therefore, it is necessary to determine whether to apply the label-wise graph convolution or a state-of-the-art GNN for homophilic graphs. One straightforward way is to select the model based on the homophily ratio. However, graphs are generally sparsely labeled, which makes it difficult to estimate the real homophily ratio. To address this problem, we propose to utilize the validation set to automatically select the model.
140
+
141
+ In the model selection module, we combine predictions of the label-wise aggregation model for heterophilic graphs and traditional GNN models for homophilic graphs. Predictions from the GNN ${f}_{G}$ for homophilic graphs are given by ${\widehat{\mathcal{Y}}}^{G} = \operatorname{GNN}\left( {\mathbf{A},\mathbf{X}}\right)$ , where the GNN is flexible to various models for homophilic graphs. Here, we select GCNII [6] which achieves state-of-the-art results
142
+
143
+ on homophilic graphs. The model selection can be achieved by assigning higher weight to the corresponding model prediction. The combined prediction is given as:
144
+
145
+ $$
146
+ {\widehat{\mathbf{y}}}_{v} = \frac{\exp \left( {\phi }_{1}\right) }{\mathop{\sum }\limits_{{i = 1}}^{2}\exp \left( {\phi }_{i}\right) }{\widehat{\mathbf{y}}}_{v}^{C} + \frac{\exp \left( {\phi }_{2}\right) }{\mathop{\sum }\limits_{{i = 1}}^{2}\exp \left( {\phi }_{i}\right) }{\widehat{\mathbf{y}}}_{v}^{G}, \tag{7}
147
+ $$
148
+
149
+ where ${\widehat{\mathbf{y}}}_{v}^{G} \in {\widehat{\mathcal{Y}}}^{G}$ is the prediction of node $v$ from ${f}_{G}$ , and ${\phi }_{1}$ and ${\phi }_{2}$ are the learnable weights that control the contributions of the two models to the final prediction. ${\phi }_{1}$ and ${\phi }_{2}$ can be obtained by finding the values that lead to good performance on the validation set. More specifically, this goal can be formulated as the following bi-level optimization problem:
150
+
151
+ $$
152
+ \mathop{\min }\limits_{{{\phi }_{1},{\phi }_{2}}}{\mathcal{L}}_{\text{ val }}\left( {{\theta }_{C}^{ * }\left( {{\phi }_{1},{\phi }_{2}}\right) ,{\theta }_{G}^{ * }\left( {{\phi }_{1},{\phi }_{2}}\right) ,{\phi }_{1},{\phi }_{2}}\right) \;\text{ s.t. }\;{\theta }_{C}^{ * },{\theta }_{G}^{ * } = \arg \mathop{\min }\limits_{{{\theta }_{C},{\theta }_{G}}}{\mathcal{L}}_{\text{ train }}\left( {{\theta }_{C},{\theta }_{G},{\phi }_{1},{\phi }_{2}}\right) \tag{8}
153
+ $$
154
+
155
+
156
+
157
+ where ${\mathcal{L}}_{\text{ val }}$ and ${\mathcal{L}}_{\text{ train }}$ are the average cross-entropy losses of the combined predictions $\left\{ {{\widehat{y}}_{v} : v \in {\mathcal{V}}_{\text{ val }}}\right\}$ and $\left\{ {{\widehat{y}}_{v} : v \in {\mathcal{V}}_{\text{ train }}}\right\}$ on the validation set and the training set, respectively.
158
+
159
+ § 4.3 AN OPTIMIZATION ALGORITHM OF LW-GCN
160
+
161
+ Computing the gradients for ${\phi }_{1}$ and ${\phi }_{2}$ in Eq. (8) exactly is expensive in both computation and memory. To alleviate this issue, we use an alternating optimization scheme to iteratively update the model parameters and the model selection weights.
162
+
163
+ Updating Lower Level ${\theta }_{C}$ and ${\theta }_{G}$ . Instead of fully calculating ${\theta }_{C}^{ * }$ and ${\theta }_{G}^{ * }$ in each outer iteration, we fix ${\phi }_{1}$ and ${\phi }_{2}$ and update the model parameters ${\theta }_{G}$ and ${\theta }_{C}$ for $T$ steps by:
164
+
165
+ $$
166
+ {\theta }_{C}^{t + 1} = {\theta }_{C}^{t} - {\alpha }_{C}{\nabla }_{{\theta }_{C}}{\mathcal{L}}_{\text{ train }}\left( {{\theta }_{C}^{t},{\theta }_{G}^{t},{\phi }_{1},{\phi }_{2}}\right) ,\;{\theta }_{G}^{t + 1} = {\theta }_{G}^{t} - {\alpha }_{G}{\nabla }_{{\theta }_{G}}{\mathcal{L}}_{\text{ train }}\left( {{\theta }_{C}^{t},{\theta }_{G}^{t},{\phi }_{1},{\phi }_{2}}\right) , \tag{9}
167
+ $$
168
+
169
+ where ${\theta }_{C}^{t}$ and ${\theta }_{G}^{t}$ are the model parameters after $t$ update steps, and ${\alpha }_{C}$ and ${\alpha }_{G}$ are the learning rates for ${\theta }_{C}$ and ${\theta }_{G}$ .
170
+
171
+ Updating Upper Level ${\phi }_{1}$ and ${\phi }_{2}$ . Here, we use the updated model parameters ${\theta }_{C}^{T}$ and ${\theta }_{G}^{T}$ to approximate ${\theta }_{C}^{ * }$ and ${\theta }_{G}^{ * }$ . Moreover, to further speed up the optimization, we apply first-order approximation [11] to compute the gradients of ${\phi }_{1}$ and ${\phi }_{2}$ :
172
+
173
+ $$
174
+ {\phi }_{1}^{k + 1} = {\phi }_{1}^{k} - {\alpha }_{\phi }{\nabla }_{{\phi }_{1}}{\mathcal{L}}_{\text{ val }}\left( {{\bar{\theta }}_{C}^{T},{\bar{\theta }}_{G}^{T},{\phi }_{1}^{k},{\phi }_{2}^{k}}\right) ,\;{\phi }_{2}^{k + 1} = {\phi }_{2}^{k} - {\alpha }_{\phi }{\nabla }_{{\phi }_{2}}{\mathcal{L}}_{\text{ val }}\left( {{\bar{\theta }}_{C}^{T},{\bar{\theta }}_{G}^{T},{\phi }_{1}^{k},{\phi }_{2}^{k}}\right) , \tag{10}
175
+ $$
176
+
177
+ where ${\bar{\theta }}_{C}^{T}$ and ${\bar{\theta }}_{G}^{T}$ indicate that the gradient is stopped, i.e., the model parameters are treated as constants in this update, and ${\alpha }_{\phi }$ is the learning rate for ${\phi }_{1}$ and ${\phi }_{2}$ .
178
+
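+ A compact PyTorch sketch of this alternating scheme (Eqs. (7), (9) and (10)) is shown below; the linear stand-ins for ${f}_{C}$ and ${f}_{G}$ , the dimensions, learning rates and toy data are illustrative placeholders rather than the authors' implementation.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ f_C = torch.nn.Linear(1433, 7)                   # stand-in for the label-wise GNN
+ f_G = torch.nn.Linear(1433, 7)                   # stand-in for the homophilic GNN
+ phi = torch.nn.Parameter(torch.zeros(2))         # model-selection weights (phi_1, phi_2)
+ opt_theta = torch.optim.Adam(list(f_C.parameters()) + list(f_G.parameters()), lr=1e-2)
+ opt_phi = torch.optim.Adam([phi], lr=1e-2)
+
+ def combined_prediction(x):
+     w = torch.softmax(phi, dim=0)                # Eq. (7): softmax over (phi_1, phi_2)
+     return w[0] * f_C(x) + w[1] * f_G(x)
+
+ X = torch.randn(100, 1433)                       # dummy features
+ y = torch.randint(0, 7, (100,))                  # dummy labels
+ train_idx, val_idx = torch.arange(0, 60), torch.arange(60, 100)
+
+ for outer_step in range(50):
+     for t in range(5):                           # Eq. (9): T inner updates of theta with phi fixed
+         loss_train = F.cross_entropy(combined_prediction(X[train_idx]), y[train_idx])
+         opt_theta.zero_grad()
+         loss_train.backward()
+         opt_theta.step()
+     # Eq. (10): first-order update of phi on the validation loss; theta^T is treated as a
+     # constant here, i.e. only the direct gradient w.r.t. phi is applied.
+     loss_val = F.cross_entropy(combined_prediction(X[val_idx]), y[val_idx])
+     opt_phi.zero_grad()
+     loss_val.backward()
+     opt_phi.step()
+ ```
+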
179
+ More details of the training algorithm are in Appendix A.
180
+
181
+ § 5 EXPERIMENTS
182
+
183
+ In this section, we conduct experiments to demonstrate the effectiveness of LW-GCN. In particular, we aim to answer the following research questions:
184
+
185
+ * RQ1 Is our LW-GCN effective for node classification on both homophilic and heterophilic graphs?
186
+
187
+ * RQ2 Can label-wise aggregation learn representations that well capture information for prediction?
+
+ * RQ3 How do the quality of pseudo labels and the automatic model selection affect LW-GCN?
188
+
189
+ § 5.1 EXPERIMENTAL SETTINGS
190
+
191
+ Datasets. To evaluate the performance of our proposed LW-GCN, we conduct experiments on three homophilic graphs and five heterophilic graphs. For the homophilic graphs, we choose the widely used benchmark datasets Cora, Citeseer, and Pubmed [17]. The dataset splits for the homophilic graphs are the same as in the cited paper. As for the heterophilic graphs, we use two webpage datasets, Texas and Wisconsin [23], and three Wikipedia subgraphs, i.e., Squirrel, Chameleon, and Crocodile [25]. Following [37], 10 dataset splits are used for each heterophilic graph for evaluation. The statistics of the datasets are presented in Table 3 in the Appendix.
192
+
193
+ Compared Methods. We compare LW-GCN with representative and state-of-the-art GNNs, which include GCN [17], MixHop [16], and GCNII [6]. We also compare with the following state-of-the-art models designed for heterophilic graphs: FAGCN [2], SimP-GCN [14], H2GCN [37], GPR-GNN [7], and BM-GCN [13]. In addition, an MLP is evaluated on the datasets for reference. The details of these compared methods can be found in Appendix B.2.
194
+
195
+ Table 1: Node classification results (Accuracy $\left( \% \right) \pm$ Std.) on homophilic/heterophilic graphs.
196
+
197
+ | Dataset | Wisconsin | Texas | Chameleon | Squirrel | Crocodile | Cora | Citeseer | Pubmed |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | Ave. Degree | 2.05 | 1.69 | 15.85 | 41.74 | 30.96 | 4.01 | 2.84 | 4.50 |
+ | Homo. Ratio | 0.20 | 0.11 | 0.24 | 0.22 | 0.25 | 0.81 | 0.74 | 0.8 |
+ | MLP | 83.5 ± 4.9 | 78.1 ± 6.0 | 48.0 ± 1.5 | 32.3 ± 1.8 | 65.8 ± 0.7 | 58.6 ± 0.5 | 60.3 ± 0.4 | 72.7 ± 0.4 |
+ | GCN | 53.1 ± 5.8 | 57.6 ± 5.9 | 63.5 ± 2.5 | 46.7 ± 1.5 | 66.7 ± 1.0 | 81.6 ± 0.7 | 71.3 ± 0.3 | 78.4 ± 1.1 |
+ | MixHop | 70.2 ± 4.8 | 60.6 ± 7.7 | 61.2 ± 2.2 | 44.1 ± 1.1 | 67.6 ± 1.3 | 80.6 ± 0.2 | 68.7 ± 0.3 | 78.9 ± 0.5 |
+ | SuperGAT | 53.7 ± 5.7 | 58.6 ± 7.7 | 59.4 ± 2.5 | 38.9 ± 1.5 | 62.6 ± 0.9 | 82.7 ± 0.4 | 72.2 ± 0.8 | 78.4 ± 0.5 |
+ | GCNII | 82.1 ± 3.9 | 68.6 ± 9.8 | 63.5 ± 2.5 | 49.4 ± 1.7 | 69.0 ± 0.7 | 84.2 ± 0.5 | 72.0 ± 0.8 | 80.2 ± 0.2 |
+ | FAGCN | 83.3 ± 3.7 | 79.5 ± 4.8 | 63.9 ± 2.2 | 43.3 ± 2.5 | 67.1 ± 0.9 | 83.1 ± 0.6 | 71.7 ± 0.6 | 78.8 ± 0.3 |
+ | SimP-GCN | 85.5 ± 4.7 | 80.5 ± 5.9 | 63.7 ± 2.3 | 42.8 ± 1.4 | 63.7 ± 2.3 | 82.8 ± 0.1 | 71.8 ± 0.8 | 80.3 ± 0.2 |
+ | H2GCN | 84.7 ± 3.9 | 83.7 ± 6.0 | 54.2 ± 2.3 | 36.0 ± 1.1 | 66.7 ± 0.5 | 81.6 ± 0.4 | 71.0 ± 0.5 | 79.5 ± 0.2 |
+ | GPR-GNN | 78.2 ± 4.4 | 77.0 ± 6.4 | 70.6 ± 2.1 | 50.8 ± 1.4 | 65.6 ± 0.9 | 83.8 ± 0.6 | 71.1 ± 0.9 | 79.9 ± 0.1 |
+ | BM-GCN | 77.6 ± 5.9 | 81.9 ± 5.4 | 69.4 ± 1.7 | 53.1 ± 1.8 | 64.3 ± 1.1 | 81.5 ± 0.5 | 68.9 ± 1.0 | 77.9 ± 0.4 |
+ | LW-GCN | 86.9 ± 2.2 | 86.2 ± 5.8 | 74.4 ± 1.4 | 62.6 ± 1.6 | 78.1 ± 0.6 | 84.3 ± 0.3 | 72.3 ± 0.4 | 80.4 ± 0.3 |
+ | Weight for $f_C$ | 0.981 | 0.960 | 0.986 | 0.987 | 0.999 | 0.001 | 0.006 | 0.005 |
+
245
+ Settings of LW-GCN. For the label predictor ${f}_{P}$ , we adopt an MLP with one hidden layer, whose dimension is set to 64. As for ${f}_{C}$ , we adopt two layers of label-wise message passing on all the datasets. More discussion about the impact of the depth of LW-GCN is given in Appendix H. The other hyperparameters, such as the hidden dimension and weight decay, are tuned based on the validation set. See Appendix B.1 for more details.
246
+
247
+ § 5.2 NODE CLASSIFICATION PERFORMANCE
248
+
249
+ To answer RQ1, we conduct experiments on both heterophilic and homophilic graphs in comparison with state-of-the-art GNN models. The average accuracy and standard deviations on homophilic/heterophilic graphs are reported in Table 1. The model selection weight for the label-wise aggregation GNN ${f}_{C}$ is shown along with the results of LW-GCN. Note that this model selection weight ranges from 0 to 1. When the weight is close to 1, the label-wise aggregation model is selected; when the weight for ${f}_{C}$ is close to 0, the GNN ${f}_{G}$ for homophilic graphs is selected.
250
+
251
+ Performance on Heterophilic Graphs. We conduct experiments on 10 dataset splits for each heterophilic graph. From the results on heterophilic graphs, we make the following observations:
252
+
253
+ * MLP outperforms GCN and other GNNs designed for homophilic graphs by a large margin on Texas and Wisconsin, while GCN can achieve relatively good performance on dense heterophilic graphs such as Chameleon. This empirical result is consistent with our analysis in Theorem 1 that heterophily especially degrades the performance of GCN on graphs with low degrees.
254
+
255
+ * Though GCN and other GNNs designed for homophilic graphs can give relatively good performance on dense heterophilic graphs, our LW-GCN brings significant improvements by adopting label-wise aggregation. In addition, LW-GCN outperforms the baselines on heterophilic graphs with low node degrees. This demonstrates the superiority of label-wise aggregation in preserving the heterophilic context.
256
+
257
+ * The model selection weight for ${f}_{C}$ is close to 1 for heterophilic graphs, which verifies that the proposed LW-GCN can correctly select the label-wise aggregation GNN ${f}_{C}$ for heterophilic graphs.
258
+
259
+ * Compared with SimP-GCN, which also aims to preserve node features, our LW-GCN performs significantly better on heterophilic graphs. This is because SimP-GCN only focuses on the similarity of central node attributes. In contrast, our label-wise aggregation can preserve both the central node features and the heterophilic local context for node classification. LW-GCN also outperforms the other GNNs that adopt message-passing mechanisms designed for heterophilic graphs by a large margin. This further demonstrates the effectiveness of label-wise aggregation.
260
+
261
+ Performance on Homophilic Graphs. The average results and standard deviations of 5 runs on the homophilic graphs, i.e., Cora, Citeseer, and Pubmed, are also reported in Table 1. From the table, we can observe that existing GNNs for heterophilic graphs generally perform worse on homophilic graphs than state-of-the-art GNNs such as GCNII. In contrast, LW-GCN achieves results comparable to the best model on homophilic graphs. This is because LW-GCN combines the GNN using label-wise message passing with a state-of-the-art GNN for homophilic graphs, and it can automatically select the right model for a given homophilic graph.
262
+
263
+ Table 2: Ablation Study
264
+
265
+ | Dataset | MLP | GCN | GCNII | LW-GCN\P | LW-GCN\G | LW-GCN$_{GCN}$ | LW-GCN |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
+ | Cora | 58.7 ± 0.5 | 81.6 ± 0.7 | 84.2 ± 0.5 | 84.2 ± 0.3 | 75.3 ± 0.4 | 81.9 ± 0.2 | 84.3 ± 0.3 |
+ | Citeseer | 60.3 ± 0.4 | 71.3 ± 0.3 | 72.0 ± 0.8 | 72.3 ± 0.5 | 65.1 ± 0.5 | 71.6 ± 0.3 | 72.3 ± 0.4 |
+ | Texas | 78.1 ± 6.0 | 57.6 ± 5.9 | 68.6 ± 9.8 | 82.4 ± 5.2 | 85.9 ± 5.6 | 85.4 ± 6.3 | 86.2 ± 5.8 |
+ | Crocodile | 65.8 ± 0.7 | 66.7 ± 1.0 | 69.0 ± 0.7 | 78.0 ± 0.8 | 77.9 ± 0.5 | 77.8 ± 0.8 | 78.1 ± 0.6 |
+
283
+ § 5.3 ANALYSIS OF NODE REPRESENTATIONS
284
+
285
+ To answer RQ2, we compare the representation similarity of intra-class node pairs and inter-class node pairs on a sparse heterophilic graph in Fig. 3. For both GCN and LW-GCN, the representations learned by the last layer are used for the analysis. We can observe that the representations learned by GCN are very similar for both intra-class and inter-class pairs. This verifies that simply aggregating the neighbors makes the node representations less discriminative. With label-wise aggregation, the similarity scores of intra-class pairs are significantly higher than those of inter-class pairs. This demonstrates that the representations learned by label-wise message passing can well preserve the target nodes' features and their contextual information.
286
+
287
+ <graphics>
288
+
289
+ Figure 3: Representation similarity distributions on the Texas graph.
290
+
291
+ § 5.4 ABLATION STUDY
292
+
293
+ To answer RQ3, we conduct ablation studies to understand the contribution of each component of LW-GCN. To investigate how the quality of pseudo labels affects LW-GCN, we train a variant LW-GCN\P by replacing the MLP-based label predictor with a GCN model. To show the importance of the automatic model selection, we train a variant LW-GCN\G, which removes the GNN for homophilic graphs and only uses the label-wise aggregation GNN. Finally, we replace the GCNII backbone of ${f}_{G}$ with GCN, denoted as LW-GCN$_{GCN}$, to show that LW-GCN is flexible in adopting various GNNs for ${f}_{G}$ . Experiments are conducted on both homophilic and heterophilic graphs. The results are shown in Table 2, and ablation studies on the remaining datasets are shown in Appendix G. We can observe that:
294
+
295
+ * On homophilic graphs, LW-GCN\P shows results comparable to LW-GCN, because GCNII will be selected given a homophilic graph. On the heterophilic graph Texas, the performance of LW-GCN\P is significantly worse than that of LW-GCN. This is because GNNs can produce poor pseudo labels on heterophilic graphs, which degrades the label-wise message passing.
296
+
297
+ * LW-GCN\G performs much better than MLP. This shows that label-wise graph convolution can capture structural information. However, LW-GCN\G performs worse than GCNII and LW-GCN on homophilic graphs, which indicates the necessity of combining it with a GNN for homophilic graphs.
298
+
299
+ * LW-GCN$_{GCN}$ achieves results comparable to GCN on homophilic graphs. On heterophilic graphs, LW-GCN$_{GCN}$ performs similarly to LW-GCN. This shows the flexibility of LW-GCN in adopting traditional GNN models designed for homophilic graphs.
300
+
301
+ § 6 CONCLUSION AND FUTURE WORK
302
+
303
+ In this paper, we analyze the impact of the heterophily level on the GCN model and demonstrate its limitations. We develop a novel label-wise graph convolution to learn representations that preserve the node features and their heterophilic neighbors' information. An automatic model selection module is applied to ensure the performance of the proposed framework on graphs with any homophily ratio. Theoretical and empirical analysis demonstrates the effectiveness of the label-wise aggregation. Extensive experiments show that our proposed LW-GCN achieves state-of-the-art results on both homophilic and heterophilic graphs. There are several interesting directions that need further investigation. First, since better pseudo labels will benefit the label-wise message passing, it is promising to incorporate the predictions of LW-GCN into the label-wise message passing. Second, in some applications such as link prediction, labels are not available. Therefore, we will investigate how to generate useful pseudo labels for label-wise aggregation in applications where no labeled nodes are provided.
papers/LOG/LOG 2022/LOG 2022 Conference/IKevTLt3rT/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,651 @@
1
+ # Expander Graph Propagation
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Deploying graph neural networks (GNNs) on whole-graph classification or regression tasks is known to be challenging: it often requires computing node features that are mindful of both local interactions in their neighbourhood and the global context of the graph structure. GNN architectures that navigate this space need to avoid pathological behaviours, such as bottlenecks and oversquashing, while ideally having linear time and space complexity requirements. In this work, we propose an elegant approach based on propagating information over expander graphs. We provide an efficient method for constructing expander graphs of a given size, and use this insight to propose the EGP model. We show that EGP is able to address all of the above concerns, while requiring minimal effort to set up, and provide evidence of its empirical utility on relevant datasets and baselines in the Open Graph Benchmark. Importantly, using expander graphs as a template for message passing necessarily gives rise to negative curvature. While this appears to be counterintuitive in light of recent related work on oversquashing, we theoretically demonstrate that negatively curved edges are likely to be required to obtain scalable message passing without bottlenecks. To the best of our knowledge, this is a previously unstudied result in the context of graph representation learning, and we believe our analysis paves the way to a novel class of scalable methods to counter oversquashing in GNNs.
12
+
13
+ ## 1 Introduction
14
+
15
+ Graph neural networks (GNNs) are a flexible class of models for learning representations over graph-structured data [1]. Their versatility [2-4] and generality [5, 6] has made them a very attractive approach, leading to considerable application in areas as diverse as virtual drug screening [7], traffic prediction [8], combinatorial chip design [9] and pure mathematics [10, 11].
16
+
17
+ Most GNNs rely on repeatedly propagating information between neighbouring nodes in the graph. This is commonly expressed in the message passing [4] paradigm: nodes send vector-based messages to each other along the edges of the graph, and nodes update their representations by aggregating all the messages sent to them, in a permutation-invariant manner. Under many industrially-relevant tasks (which require identifying node-level properties, often with homophily assumptions), this formalism is very well aligned, often allowing for highly scalable model variants [12-14].
18
+
19
+ However, in many areas of scientific interest, purely local interactions are likely to be insufficient. Among the principal tasks over graphs, graph classification is perhaps most ripe with such situations: to meaningfully attach a label to a graph, in many cases it is insufficient to treat graphs as "bags of nodes". For example, when classifying a molecule for its potency as a candidate drug [7], the label is driven by complex substructure interactions in the molecule [15], rather than a naïve sum of atom-level effects. Accordingly, GNNs deployed in this regime need to update node features in a manner that is mindful of the global properties of the graph.
20
+
21
+ It quickly became apparent (as early as [2]) that it is often inadequate to merely stack more message passing layers over the input graph. In fact, for many standard graph classification tasks, such approaches may be weaker than discarding the graph structure altogether [16, 17]. Now, it is well-understood that stacking many local layers leaves GNNs vulnerable to pathological behaviours such as oversquashing [18], wherein nodes close to bottlenecks in the graph would need to store quantities of information that are exponentially increasing with model depth.
22
+
23
+ ![01963eec-385f-7953-8dd5-eff9f1bab00e_1_318_203_1145_357_0.jpg](images/01963eec-385f-7953-8dd5-eff9f1bab00e_1_318_203_1145_357_0.jpg)
24
+
25
+ Figure 1: Left: The Cayley graph of $\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{3}}\right)$ , constructed using our method. It has $\left| V\right| = {24}$ nodes and it is 4-regular (implying $\left| E\right| = 2\left| V\right|$ ), hence it is sparse. Despite its sparsity, it is highly interconnected: any node is reachable from any other node by no more than 4 hops. Hence, it can serve as a strong "template" for globally propagating node features with a GNN. Right: The Cayley graph of $\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{5}}\right)$ , constructed in an analogous way (with $\left| V\right| = {120}$ nodes). A 2-hop neighbourhood of one node (in red) is highlighted, demonstrating its tree-like local structure.
26
+
27
+ Within this space, we are interested in proposing a method that satisfies four desirable criteria: (C1) it is capable of propagating information globally in the graph; (C2) it is resistant to the oversquashing effect and does not introduce bottlenecks; (C3) its time and space complexity remain subquadratic (tighter than $O\left( {\left| V\right| }^{2}\right)$ for sparse graphs); and (C4) it requires no dedicated preprocessing of the input. Satisfying all four of these criteria simultaneously is challenging, and we will survey many of the popular approaches in the next section-demonstrating ways in which they fail to meet some of them.
28
+
29
+ In this paper, we identify expander graphs as very attractive objects in this regard. Specifically, they offer a family of graph structures that are fundamentally sparse $\left( {\left| E\right| = O\left( \left| V\right| \right) }\right)$ , while having low diameter: thus, any two nodes in an expander graph may reach each other in a short number of hops, eliminating bottlenecks and oversquashing (see Figure 1). Further, we will demonstrate an efficient way to construct a family of expander graphs (leveraging known theoretical results on the special linear group, $\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right)$ ). Once an expander graph of appropriate size is constructed, we can perform a certain number of GNN propagation steps over its structure to globally distribute the nodes' features. Accordingly, we name our method expander graph propagation (EGP).
30
+
31
+ Another important contribution of our work concerns extending the implications of prior art on oversquashing via curvature analysis [19]. According to [19], edges that are negatively curved are causing the oversquashing effect-yet, counterintuitively, the edges of the expander graphs we construct will always be negatively curved! We prove, however, that our expanders can never be sufficiently negatively curved to trigger the conditions necessary for the results in [19] to be applicable, and show that the existence of negatively curved edges might in fact be required in order to have sparse communication without bottlenecks.
32
+
33
+ ## 2 Related work
34
+
35
+ We begin with a survey of the many prior approaches to handling global context in graph representation learning, evaluating them carefully against our four desirable criteria (C1-C4; cf. Table 1). This list is by no means exhaustive, but should be indicative of the most important directions.
36
+
37
+ Stacking more layers. As already highlighted, one way to achieve global information propagation is to have a deeper GNN. In this case, we are capable of satisfying (C1) and (C4)-no dedicated preprocessing is needed. However, depending on the graph’s diameter, we may need up to $O\left( \left| V\right| \right)$ layers to cover the graph, leading to quadratic complexity (violating (C3)) and introducing a vulnerability to bottlenecks (C2), as theoretically and empirically demonstrated in [18].
38
+
39
+ Master nodes. An attractive approach to introducing global context is to introduce a master node to the graph, and connect it to all of the graph's nodes. This can be done either explicitly [4] or implicitly, by storing a "global" vector [20]. It trivially reduces the graph's diameter to 2, introduces
40
+
41
+ Table 1: A summary of principal approaches to handling global context in graph representation learning (Section 2). "(✓)" indicates that a criterion may be satisfied, depending on the method's tradeoffs. Our proposal, the expander graph propagation (EGP) method, satisfies all four criteria.
42
+
43
+ <table><tr><td>Approach</td><td>(C1) (global prop.)</td><td>(C2) (no bottlenecks)</td><td>(C3) (subquadratic)</td><td>(C4) (no dedicated preproc.)</td></tr><tr><td>GNNs</td><td>✘</td><td>✘</td><td>✓</td><td>✓</td></tr><tr><td>Sufficiently deep GNNs</td><td>✓</td><td>✘</td><td>✘</td><td>✓</td></tr><tr><td>Master node [4, 20]</td><td>✓</td><td>✘</td><td>✓</td><td>✓</td></tr><tr><td>Fully connected [18, 21-25]</td><td>✓</td><td>✓</td><td>✘</td><td>✓</td></tr><tr><td>Feature aug. [26-31]</td><td>✓</td><td>(✓)</td><td>(✓)</td><td>✘</td></tr><tr><td>Graph rewiring [19, 32]</td><td>✓</td><td>✓</td><td>✓</td><td>✘</td></tr><tr><td>Hierarchical MP [33-38]</td><td>✓</td><td>✓</td><td>(✓)</td><td>✘</td></tr><tr><td>$\mathbf{{EGP}}$ (ours)</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr></table>
44
+
45
+ $O\left( 1\right)$ new nodes and $O\left( \left| V\right| \right)$ new edges, and requires no dedicated preprocessing, hence it satisfies (C1, C3, C4). However, these benefits come at the expense of introducing a bottleneck in the master node: it has a very challenging task (especially when graphs get larger) to continually incorporate information over a very large neighbourhood in a useful way. Hence it fails to satisfy (C2).
46
+
47
+ Fully connected graphs. The converse approach is to make every node a master node: in this case, we make all pairs of nodes connected by an edge; this was initially proposed as a powerful method to alleviate oversquashing by [18]. This strategy proved highly popular in the recent surge of Graph Transformers [22, 23, 25], and is common for GNNs used in physical simulation [21] or reasoning [24] tasks. The graph's diameter is reduced to 1, no bottlenecks remain, and the approach does not require any dedicated preprocessing. Hence (C1, C2, C4) are trivially satisfied. The main downside of this approach is the introduction of $O\left( {\left| V\right| }^{2}\right)$ edges, which means (C3) can never be satisfied, and this approach will hence be prohibitive even for modestly-sized graphs.
48
+
49
+ Feature augmentation. An alternative approach is to provide additional features to the GNN which directly identify the structural role each node plays in the graph structure [26]. If done properly (i.e., if the computed features are directly relevant to the target task), this can drastically improve expressive power. Hence, in theory, it is possible to satisfy (C1) while not violating (C2, C3). However, computing appropriate features requires either specific domain knowledge, or an appropriate pretraining procedure [27-31] to be applied, in order to obtain such embeddings. Hence all of these gains come at the expense of failing to satisfy (C4).
50
+
51
+ Graph rewiring. Another promising line of research involves modifying the edges of the original graph in order to alleviate bottlenecks. Popular examples of this approach are diffusion [32], which adds edges through the application of kernels such as the personalised PageRank, and stochastic discrete Ricci flows [19], which surgically modify a small quantity of edges to alleviate the oversquashing effect on the nodes with negative Ricci curvature. If realised carefully, such approaches will not deviate too far from the original graph, while provably alleviating oversquashing; hence it is possible to satisfy (C1, C2, C3). However, this comes at the cost of having to examine the input graph structure, with methods that do not necessarily scale easily with the number of nodes. As such, dedicated preprocessing is needed, failing to satisfy (C4).
52
+
53
+ Hierarchical message passing. Lastly, going beyond modifying the edges, it is also possible to introduce additional nodes in the graph, each of them responsible for a particular substructure in the graph${}^{1}$. If done carefully, it has the potential to drastically reduce the graph’s diameter while not introducing bottlenecked nodes (hence, allowing us to satisfy (C1, C2)). However, in prior work, a cost has to be paid for this, usually in the need for dedicated preprocessing. Prior proposals for hierarchical GNNs that remain scalable require a dedicated pre-processing step [33-35], sometimes coupled with domain knowledge [35], thus failing to satisfy (C4). In addition, such methods may require adding prohibitively large numbers of substructures [36, 37] or expensive pre-computation, e.g. computing the graph Laplacian eigenvectors [38]. This might make even (C3) hard to satisfy.
54
+
55
+ ---
56
+
57
+ ${}^{1}$ The master node approach discussed before is a special case of this, wherein a single node is responsible for a "substructure" spanning the entire graph.
58
+
59
+ ---
60
+
61
+ Before proceeding to present EGP-specific material, we remark that our work is not the first to study expander graph-related topics in the context of GNNs. Specifically, the ExpanderGNN [39] leverages expander graphs over neural network weights to sparsify the update step in GNNs, and the Cheeger constant has been previously used to quantify oversquashing in [19]. With respect to our contributions, neither of these works discusses expander graphs in the context of the computational graph for a GNN, nor attempts to propagate messages over such a structure. Further, neither of these proposals successfully satisfies all four of our desired criteria (C1-C4).
62
+
63
+ ## 3 Theoretical background
64
+
65
+ We now dedicate our attention to the key theoretical results over expander graphs, which will allow EGP to have favourable properties and be efficiently precomputable.
66
+
67
+ Definition 1. For a finite connected graph $G = \left( {V\left( G\right) , E\left( G\right) }\right)$ , we consider functions $f : V\left( G\right) \rightarrow \mathbb{R}$ . The Laplacian ${Lf} : V\left( G\right) \rightarrow \mathbb{R}$ of such a function is defined to be
68
+
69
+ $$
70
+ {Lf}\left( v\right) = \deg \left( v\right) f\left( v\right) - \mathop{\sum }\limits_{{{vw} \in E\left( G\right) }}f\left( w\right) ,
71
+ $$
72
+
73
+ where $\deg \left( v\right)$ is the degree of the vertex $v$ .
74
+
75
+ The mapping $L : {\mathbb{R}}^{V\left( G\right) } \rightarrow {\mathbb{R}}^{V\left( G\right) }$ sending a function $f$ to its Laplacian ${Lf}$ is a linear transformation. It is not hard to show [40] that $L$ is symmetric with respect to the standard basis for ${\mathbb{R}}^{V\left( G\right) }$ and positive semi-definite and hence has non-negative real eigenvalues
76
+
77
+ $$
78
+ 0 = {\lambda }_{0}\left( G\right) < {\lambda }_{1}\left( G\right) \leq {\lambda }_{2}\left( G\right) \leq \ldots
79
+ $$
80
+
81
+ The smallest eigenvalue is 0 and its associated eigenspace consists of the constant functions (assuming $G$ is connected). The smallest positive eigenvalue, ${\lambda }_{1}\left( G\right)$ , is central to the definition of expander graphs, as the next definition shows.
82
+
83
+ Definition 2. A collection $\left\{ {G}_{i}\right\}$ of finite connected graphs is an expander family if there is a constant $c > 0$ such that for all ${G}_{i}$ in the collection, ${\lambda }_{1}\left( {G}_{i}\right) \geq c$ .
84
+
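+ For a small graph, ${\lambda }_{1}\left( G\right)$ can be computed directly from the Laplacian of Definition 1; the short NumPy sketch below uses an arbitrary toy graph, purely for illustration.
+
+ ```python
+ import numpy as np
+
+ # Adjacency matrix of a small connected toy graph (a 4-cycle).
+ A = np.array([[0, 1, 0, 1],
+               [1, 0, 1, 0],
+               [0, 1, 0, 1],
+               [1, 0, 1, 0]], dtype=float)
+
+ L = np.diag(A.sum(axis=1)) - A             # Laplacian: deg(v) f(v) minus sum over neighbours
+ eigvals = np.sort(np.linalg.eigvalsh(L))   # real, non-negative eigenvalues
+ lambda_1 = eigvals[1]                      # smallest positive eigenvalue (graph is connected)
+ print(lambda_1)                            # 2.0 for the 4-cycle
+ ```
+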
85
+ Expander families [41-43] have many remarkable and useful properties, particularly when there is a uniform upper bound on the degree of the vertices of ${G}_{i}$ .
86
+
87
+ Definition 3. Let $G$ be a finite graph. For $A \subset V\left( G\right)$ , its boundary $\partial A$ is the collection of edges with one endpoint in $A$ and one endpoint not in $A$ . The Cheeger constant $h\left( G\right)$ is defined to be
88
+
89
+ $$
90
+ h\left( G\right) = \min \left\{ {\frac{\left| \partial A\right| }{\left| A\right| } : A \subset V\left( G\right) ,0 < \left| A\right| \leq \left| {V\left( G\right) }\right| /2}\right\} .
91
+ $$
92
+
93
+ Thus, having a small Cheeger constant is equivalent to the graph having a 'bottleneck', in the sense that there is a collection of edges $\partial A$ that, when removed, disconnects the vertices into two sets ( $A$ and its complement, $V\left( G\right) \smallsetminus A$ ), with the property that the sizes of $A$ and its complement are significantly larger than the size of $\partial A$ .
94
+
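+ On small graphs, the Cheeger constant of Definition 3 can be computed by brute force over vertex subsets (exponential in $\left| {V\left( G\right) }\right|$ , so the sketch below is for illustration only).
+
+ ```python
+ import itertools
+ import numpy as np
+
+ A = np.array([[0, 1, 0, 1],               # the same toy 4-cycle as above
+               [1, 0, 1, 0],
+               [0, 1, 0, 1],
+               [1, 0, 1, 0]])
+
+ def cheeger_constant(A):
+     nodes = range(len(A))
+     best = float("inf")
+     for size in range(1, len(A) // 2 + 1):
+         for S in itertools.combinations(nodes, size):
+             S = set(S)
+             boundary = sum(A[u][v] for u in S for v in nodes if v not in S)
+             best = min(best, boundary / len(S))
+     return best
+
+ print(cheeger_constant(A))                # 1.0: cutting 2 edges splits the cycle in half
+ ```
+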
95
+ Expander families can be reinterpreted using Cheeger constants, as follows (see, e.g., [44-47]):
96
+
97
+ Theorem 4. Let $\left\{ {G}_{i}\right\}$ be a collection of finite connected graphs with a uniform upper bound on their vertex degrees. Then the following are equivalent:
98
+
99
+ 1. $\left\{ {G}_{i}\right\}$ is an expander family;
100
+
101
+ 2. there is a constant $\epsilon > 0$ such that for all graphs in the collection, $h\left( {G}_{i}\right) \geq \epsilon$ .
102
+
103
+ Hence, expander graphs have Cheeger constants bounded away from zero and are provably bottleneck-free. The following result is one of the many useful properties of expander families, and it concerns their diameter. It was proved by Mohar [48, Theorem 2.3]. See also [45].
104
+
105
+ Theorem 5. The diameter $\operatorname{diam}\left( G\right)$ of a graph $G$ satisfies
106
+
107
+ $$
108
+ \operatorname{diam}\left( G\right) \leq 2\left\lceil {\frac{\Delta \left( G\right) + {\lambda }_{1}\left( G\right) }{4{\lambda }_{1}\left( G\right) }\log \left( {\left| {V\left( G\right) }\right| - 1}\right) }\right\rceil ,
109
+ $$
110
+
111
+ where $\Delta \left( G\right)$ is the maximal degree of any vertex of $G$ . Hence, if $\left\{ {G}_{i}\right\}$ is an expander family of finite graphs with a uniform upper bound on their vertex degrees, then there is a constant $k > 0$ such that for all graphs in the family,
112
+
113
+ $$
114
+ \operatorname{diam}\left( {G}_{i}\right) \leq k\log \left| {V\left( {G}_{i}\right) }\right| .
115
+ $$
116
+
117
+ Therefore, if we want to globally propagate information over an expander graph which has $\left| V\right|$ nodes, we only need $O\left( {\log \left| V\right| }\right)$ propagation steps to do so-yielding subquadratic complexity.
118
+
119
+ We have now successfully shown that expander graphs are bottleneck-free, and have favourable propagation qualities. What is missing is an efficient method of constructing an expander graph of (roughly) $\left| V\right|$ nodes. To demonstrate such a method, we leverage known results from group theory.
120
+
121
+ Definition 6. A group $\left( {\Gamma , \circ }\right)$ is a set $\Gamma$ equipped with a composition operation $\circ : \Gamma \times \Gamma \rightarrow \Gamma$ (written concisely by omitting $\circ$ , i.e. $g \circ h = {gh}$ , for $g, h \in \Gamma$ ), satisfying the following axioms:
122
+
123
+ - (Associativity) $\left( {gh}\right) l = g\left( {hl}\right)$ , for $g, h, l \in \Gamma$ .
124
+
125
+ - (Identity) There exists a unique $e \in \Gamma$ satisfying ${eg} = {ge} = g$ for all $g \in \Gamma$ .
126
+
127
+ - (Inverse) For every $g \in \Gamma$ there exists a unique ${g}^{-1} \in \Gamma$ such that $g{g}^{-1} = {g}^{-1}g = e$ .
128
+
129
+ A group is hence a natural construct for reasoning about transformations that leave an object invariant (unchanged). Further, we define a relevant notion of a group's generating set:
130
+
131
+ Definition 7. Let $\Gamma$ be a group. A subset $S \subseteq \Gamma$ is a generating set for $\Gamma$ if it can be used to "generate" all of $\Gamma$ via composition. Concretely, any element $g \in \Gamma$ can be expressed by composing elements in the generating set, or their inverses; that is, we can express $g = {s}_{1}^{\pm 1}{s}_{2}^{\pm 1}{s}_{3}^{\pm 1}\cdots {s}_{n - 1}^{\pm 1}{s}_{n}^{\pm 1}$ for ${s}_{i} \in S$ .
132
+
133
+ Now we are ready to define a Cayley graph of a group w.r.t. its generating set.
134
+
135
+ Definition 8. Let $\Gamma$ be a group with a finite generating set $S$ . Then the associated Cayley graph $\operatorname{Cay}\left( {\Gamma ;S}\right)$ has vertex set $\Gamma$ and it has an edge $g \rightarrow {gs}$ for each $g \in \Gamma$ and each $s \in S$ . We say that $s$ is the label on this edge. This is a potentially non-simple graph, as it may have edges with both endpoints on the same vertex and it may have multiple edges between a pair of vertices. In particular, when $s$ has order 2, then we view the edge $g \rightarrow {gs}$ and the edge $g \rightarrow g{s}^{2} = g$ as being distinct edges.
136
+
137
+ Note that the degree of each vertex of a Cayley graph $\operatorname{Cay}\left( {\Gamma ;S}\right)$ is $2\left| S\right|$ . This is because each vertex $g$ is joined by edges to ${gs}$ and $g{s}^{-1}$ for each $s \in S$ . Thus, we shall be particularly interested in the case where there is a uniform upper bound on $\left| S\right|$ . The specific group we use for EGP is as follows.
138
+
139
+ For each positive integer $n$ , the special linear group $\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right)$ denotes the group of $2 \times 2$ matrices with entries that are integers modulo $n$ and with determinant 1 . One of its generating sets is:
140
+
141
+ $$
142
+ {S}_{n} = \left\{ {\left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) ,\left( \begin{array}{ll} 1 & 0 \\ 1 & 1 \end{array}\right) }\right\} .
143
+ $$
144
+
145
+ Central to our constructions is the following important result.
146
+
147
+ Theorem 9. The family of Cayley graphs $\operatorname{Cay}\left( {\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ forms an expander family.
148
+
149
+ The proof uses a result of Selberg [49], who showed that the smallest positive eigenvalue of the Laplacian of certain hyperbolic surfaces is at least $3/{16}$ . One can use this to produce a lower bound on the first eigenvalue of the Laplacian on $\operatorname{Cay}\left( {\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ . Full proofs are given in [42,43].
150
+
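+ Constructing $\operatorname{Cay}\left( {\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ explicitly only requires $2 \times 2$ matrix multiplication modulo $n$ and a breadth-first enumeration from the identity. The sketch below is our own illustration, not released code from this work; for $n = 3$ it recovers the 24-node, 4-regular graph of Figure 1.
+
+ ```python
+ from collections import deque
+
+ def matmul_mod(a, b, n):
+     """Multiply two 2x2 matrices stored as tuples (a11, a12, a21, a22), entries mod n."""
+     a11, a12, a21, a22 = a
+     b11, b12, b21, b22 = b
+     return ((a11 * b11 + a12 * b21) % n, (a11 * b12 + a12 * b22) % n,
+             (a21 * b11 + a22 * b21) % n, (a21 * b12 + a22 * b22) % n)
+
+ def cayley_graph_sl2(n):
+     """Build Cay(SL(2, Z_n); S_n) by BFS from the identity, multiplying on the right
+     by the two generators and their inverses (so every vertex has degree 4)."""
+     gens = [(1, 1, 0, 1), (1, 0, 1, 1),            # S_n
+             (1, n - 1, 0, 1), (1, 0, n - 1, 1)]    # inverses of the generators
+     identity = (1, 0, 0, 1)
+     nodes, edges, queue = {identity}, set(), deque([identity])
+     while queue:
+         g = queue.popleft()
+         for s in gens:
+             h = matmul_mod(g, s, n)
+             edges.add((g, h))
+             if h not in nodes:
+                 nodes.add(h)
+                 queue.append(h)
+     return nodes, edges
+
+ def eccentricity(nodes, edges):
+     """BFS distances from one vertex; by vertex-transitivity this equals the diameter."""
+     adj = {v: [] for v in nodes}
+     for g, h in edges:
+         adj[g].append(h)
+         adj[h].append(g)
+     start = next(iter(nodes))
+     dist, queue = {start: 0}, deque([start])
+     while queue:
+         v = queue.popleft()
+         for w in adj[v]:
+             if w not in dist:
+                 dist[w] = dist[v] + 1
+                 queue.append(w)
+     return max(dist.values())
+
+ nodes, edges = cayley_graph_sl2(3)
+ print(len(nodes), eccentricity(nodes, edges))   # 24 nodes; Figure 1 states every node is within 4 hops
+ ```
+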
151
+ Lastly, it is useful to state a known result: the number of nodes of $\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ is:
152
+
153
+ $$
154
+ \left| {V\left( {\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right) }\right) }\right| = {n}^{3}\mathop{\prod }\limits_{{\text{prime }p \mid n}}\left( {1 - \frac{1}{{p}^{2}}}\right) , \tag{10}
155
+ $$
156
+
157
+ hence, it is of the order of $O\left( {n}^{3}\right)$ . We now study the local properties of Cayley graphs in detail.
158
+
159
+ ## 4 Local structure of the Cayley graphs, and the utility of negative curvature
160
+
161
+ Recent work [19] has suggested that the local structure of the graph $G$ underlying a GNN may play an important role in the way that information propagates around $G$ . In particular, various notions of 'Ricci curvature' such as Forman curvature [50], Ollivier curvature [51, 52] and balanced Forman curvature [19] have been examined. These are all local quantities, in the sense that they depend on the structure of the graph within a small neighbourhood of each edge. In this section, we will therefore examine the local structure of the Cayley graphs ${G}_{n} = \operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ .
162
+
163
+ The various notions of curvature given above are defined for each edge $e$ of the graph $G$ . Since, as defined by [19], the balanced Forman curvature of an edge depends only on local structures (i.e. triangles
164
+
165
+ and squares) around that edge, it can be determined by only observing the immediate 2-hop neighbourhood of that edge. Formally, for an edge $e$ of a graph $G$ , let ${N}_{2}\left( e\right)$ be the induced subgraph with vertices that are at most two hops away from at least one endpoint of $e$ . Then the curvature of $e$ only depends on the isomorphism type of ${N}_{2}\left( e\right)$ . More specifically, if $e$ and ${e}^{\prime }$ are edges in possibly distinct graphs, and there is a graph isomorphism between ${N}_{2}\left( e\right)$ and ${N}_{2}\left( {e}^{\prime }\right)$ that sends $e$ to ${e}^{\prime }$ , then this guarantees that the curvatures of $e$ and ${e}^{\prime }$ are equal.
166
+
167
+ This situation arises prominently in the Cayley graphs that we are considering, as follows.
168
+
169
+ Proposition 11. Let $s$ be one of
170
+
171
+ $$
172
+ \left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) ,\;\left( \begin{array}{ll} 1 & 0 \\ 1 & 1 \end{array}\right) .
173
+ $$
174
+
175
+ Let $n,{n}^{\prime } > {18}$ and let $e$ and ${e}^{\prime }$ be s-labelled edges in ${G}_{n}$ and ${G}_{{n}^{\prime }}$ . Then there is a graph isomorphism between ${N}_{2}\left( e\right)$ and ${N}_{2}\left( {e}^{\prime }\right)$ taking e to ${e}^{\prime }$ .
176
+
177
+ We prove Proposition 11 in Appendix A. This immediately allows us to characterise the balanced Forman curvature and Ollivier curvature for all of the Cayley graphs we generate:
178
+
179
+ Proposition 12. The balanced Forman curvatures $\operatorname{Ric}\left( n\right)$ and the Ollivier curvatures $\kappa \left( n\right)$ of all edges of the Cayley graphs ${G}_{n}$ are given by:
180
+
181
+ $$
182
+ \operatorname{Ric}\left( n\right) = \left\{ \begin{array}{ll} 0 & \text{ if } n = 2 \\ - 1/4 & \text{ if } n = 3 \\ - 1/2 & \text{ if } n = 4 \\ - 1 & \text{ if } n \geq 5, \end{array}\right. \qquad \kappa \left( n\right) = \left\{ \begin{array}{ll} 0 & \text{ if } n = 2 \\ - 1/8 & \text{ if } n = 3 \\ - 1/4 & \text{ if } n = 4 \\ - 3/8 & \text{ if } n = 5 \\ - 1/2 & \text{ if } n \geq 6. \end{array}\right.
183
+ $$
184
+
185
+ Proof. Proposition 11 implies that the balanced Forman and Ollivier curvatures take the same values for every $n > {18}$ , so it suffices to compute them for $2 \leq n \leq {19}$ . These values can all be computed empirically, and are as given above.
186
+
187
+ Prior work [19] suggests it is preferable for GNNs to operate on graphs with positive Ricci curvature, whereas our graphs ${G}_{n}\left( {n > 2}\right)$ all have negative Ricci curvature. However, we contend that negative Ricci curvature is not in itself an impediment to efficient propagation around a GNN. Indeed, it was shown in [19, Theorem 4] that poor propagation arises when the balanced Forman curvature is close to $-2$ : specifically, when it is at most $- 2 + \delta$ for some $\delta > 0$ that is required to satisfy certain inequalities. Since, by Proposition 12, every edge of our graphs has balanced Forman curvature at least $-1$ , this condition could only apply with $\delta \geq 1$ ; but $\delta = 1$ can never satisfy the hypotheses of [19, Theorem 4].
188
+
189
+ Furthermore, positive Ricci curvature may have downsides when used for GNNs. One significant downside to non-negative Ricci curvature can be derived using the main result of [53], which says that the three properties of expansion, sparsity and non-negative Ollivier curvature are incompatible, in the following sense.
190
+
191
+ Theorem 13. For any $\epsilon > 0,\delta > 0$ and $\Delta > 0$ , there are only finitely many graphs with maximum vertex degree $\Delta$ , Cheeger constant at least $\delta$ and Ollivier curvature at least $- \epsilon$ .
192
+
193
+ We prove Theorem 13 in Appendix B. Furthermore, quoting directly from [53]:
194
+
195
+ "The high-level message is that on large sparse graphs, non-negative curvature (in an even weak sense) induces extremely poor spectral expansion. This stands in stark contrast with the traditional idea - quantified by a broad variety of functional inequalities over the past decade - that non-negative curvature is associated with good mixing behavior."
196
+
197
+ In our view, it is highly desirable that the graphs used for GNNs have high Cheeger constants, in the sense of globally lacking bottlenecks. Having bounded vertex degree is certainly useful too, since it implies that the graphs will be sparse, and the nodes will not have to handle ever-increasing neighbourhoods for message passing as graphs grow larger in size ${}^{2}$ . As we have just shown, using the results from [53], non-negative Ollivier curvature is incompatible with these properties when the graph is sufficiently large.
198
+
199
+ The negative curvature of each edge in ${G}_{n}$ implies that these graphs are locally ‘tree-like’. In Appendix C, we make this statement precise by showing that ${G}_{n}$ is ‘tree-like’ up to scale $c\log \left( n\right)$ about each node, for $c \simeq \left( {1/2}\right) {\left( \log \left( \left( 1 + \sqrt{5}\right) /2\right) \right) }^{-1}$ (see Figure 1 (Right) for a schematic view).
200
+
201
+ ---
202
+
203
+ ${}^{2}$ This property would not hold for GNNs with master nodes, as the master node has $O\left( \left| V\right| \right)$ neighbours.
204
+
205
+ ---
206
+
207
+ This tree-like structure might seem, at first, to be counter-productive for good propagation across the graphs ${G}_{n}$ . Indeed, GNNs based on trees have been shown to have provably poor performance [18]. The reason for this seems to be two-fold. On the one hand, trees have small Cheeger constants. Indeed, any tree $G$ on $n$ vertices has Cheeger constant at most $1/\lfloor n/2\rfloor$ , since we may find an edge that, when removed, decomposes the graph into subgraphs with $\lceil n/2\rceil$ and $\lfloor n/2\rfloor$ vertices. As discussed in Section 3 and in [19], when a graph has a small Cheeger constant, its performance when used as a template for a GNN is likely to be poor. Secondly, GNNs based on trees are susceptible to oversquashing. For a $k$ -regular infinite tree, there are $k{\left( k - 1\right) }^{r - 1}$ vertices at distance $r$ from a given vertex. Hence, if information is to be propagated at least distance $r$ from a given vertex, then an amount of information that grows exponentially with $r$ seemingly needs to be stored.
208
+
209
+ However, neither of these issues is problematic for a GNN based on the Cayley graph ${G}_{n}$ . By Theorem 9, their Cheeger constants are bounded away from 0. Secondly, although they are tree-like locally, this is only true up to scale $O\left( {\log n}\right)$ . In fact, the $r$ -neighbourhood of any vertex is the whole graph ${G}_{n}$ as soon as $r > C\log n$ , for some constant $C$ , by Theorem 5. Being tree-like up to distance $O\left( {\log n}\right)$ does not lead to a requirement to store too much information as the message propagates. This is because $k{\left( k - 1\right) }^{r - 1}$ remains polynomially bounded in $n$ when $r \leq O\left( {\log n}\right)$ .
210
+
211
+ Beyond this scale, there exist many additional connections, which lead to many possible paths joining any pair of vertices. Each of these paths is a potential route for transferring information from one vertex to another. This information-transfer view also highlights another respect in which expanders fare very favourably: the mixing time of their corresponding Markov chain. We state several known facts about the favourable mixing times of expanders in Appendix D, to further supplement our claims on their efficient communication properties.
212
+
213
+ ## 5 Expander graph propagation
214
+
215
+ Let an input to a graph neural network be a node feature matrix $\mathbf{X} \in {\mathbb{R}}^{\left| V\right| \times k}$ , and an adjacency matrix $\mathbf{A} \in {\mathbb{R}}^{\left| V\right| \times \left| V\right| }$ . This setup is such that the feature vector of node $u,{\mathbf{x}}_{u} \in {\mathbb{R}}^{k}$ , can be recovered by taking an appropriate row from $\mathbf{X}$ . Note that the adjacency information can also be fed in an edge-list manner, which is desirable from a scalability perspective. Further, each edge in the graph may be endowed with additional features rather than a single real scalar. None of the above modifications would change the essence of our findings; we use a matrix formalism here purely for simplicity.
216
+
217
+ There exist many ways in which the computed Cayley graph $\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ can be leveraged for message propagation, and exploring these variations could be very useful for future work. Here, we opt for a simple construction: we interleave standard GNN layers over the given input structure with GNN layers run over the relevant Cayley graph. If we let ${\mathbf{A}}^{\operatorname{Cay}\left( n\right) }$ be an adjacency matrix derived from $\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ , this implies:
218
+
219
+ $$
220
+ \mathbf{H} = \operatorname{GNN}\left( {\operatorname{GNN}\left( {\mathbf{X},\mathbf{A}}\right) ,{\mathbf{A}}^{\operatorname{Cay}\left( n\right) }}\right) \tag{14}
221
+ $$
222
+
223
+ Here, GNN refers to any preferred GNN layer, such as the graph isomorphism network [54, GIN]:
224
+
225
+ $$
226
+ {\mathbf{h}}_{u} = \phi \left( {\left( {1 + \epsilon }\right) {\mathbf{x}}_{u} + \mathop{\sum }\limits_{{v \in {\mathcal{N}}_{u}}}{\mathbf{x}}_{v}}\right) \tag{15}
227
+ $$
228
+
229
+ where ${\mathcal{N}}_{u}$ is the neighbourhood of node $u$ , i.e. in our setup, the set of all nodes $v$ such that ${a}_{vu} \neq 0$ . $\epsilon \in \mathbb{R}$ is a learnable scalar, and $\phi : {\mathbb{R}}^{k} \rightarrow {\mathbb{R}}^{{k}^{\prime }}$ is a two-layer MLP.
230
+
231
+ This procedure is iterated for a certain number of steps, after which the computed node embeddings in $\mathbf{H}$ can be used for any downstream task of interest, such as node classification, link prediction or graph classification. Note that, unlike [18], who apply their custom layer only at the tail of the architecture, we apply the expander graph immediately after each layer over the input graph. We find that, if the input graph given by $\mathbf{A}$ contains bottlenecks, applying the GNN over ${\mathbf{A}}^{\operatorname{Cay}\left( n\right) }$ only at the end may result in oversquashing occurring before any expander graph propagation can take place.
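+ To make this interleaving concrete, the sketch below gives a minimal, dense-matrix illustration of Equations 14 and 15 in plain PyTorch. It is our own illustration rather than the implementation used in our experiments, and all class and variable names are ours; ${\mathbf{A}}^{\operatorname{Cay}\left( n\right) }$ is assumed to already be truncated to $\left| V\right| \times \left| V\right|$ , as described next.
+
```python
# Sketch: interleaved propagation over the input graph and the Cayley graph
# (Equation 14), with a GIN-style update (Equation 15). Illustrative only.
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable epsilon from Equation 15
        self.mlp = nn.Sequential(                # two-layer MLP phi
            nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # (1 + eps) * x_u + sum of neighbour features; adj is a dense {0, 1} matrix.
        return self.mlp((1.0 + self.eps) * x + adj @ x)

class EGP(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, num_layers: int):
        super().__init__()
        dims = [in_dim] + [hid_dim] * num_layers
        self.layers = nn.ModuleList(
            [GINLayer(dims[t], dims[t + 1]) for t in range(num_layers)])

    def forward(self, x, adj, adj_cay):
        # Odd steps use the input adjacency; even steps use the Cayley graph template.
        h = x
        for t, layer in enumerate(self.layers, start=1):
            h = layer(h, adj if t % 2 == 1 else adj_cay)
        return h
```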
232
+
233
+ The setup so far assumed that the number of nodes in our input graph lines up with the number of nodes in the Cayley graph, that is, ${\mathbf{A}}^{\operatorname{Cay}\left( n\right) } \in {\mathbb{R}}^{\left| V\right| \times \left| V\right| }$ . However, there is no guarantee that we can find an appropriate $n$ such that $\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ would have exactly $\left| V\right|$ nodes. What we can do in practice, as an approximation, is choose the smallest $n$ such that the number of nodes of $\operatorname{Cay}\left( {\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ is $\geq \left| V\right|$ , and then consider ${\mathbf{A}}_{1 : \left| V\right| ,1 : \left| V\right| }^{\operatorname{Cay}\left( n\right) }$ , i.e. only the subgraph containing the first $\left| V\right|$ nodes in the Cayley graph.
234
+
235
+ There is a slight misalignment to our theory in this specific slicing choice: if the $\left| V\right|$ vertices in this subgraph are chosen completely arbitrarily, we risk disconnecting the graph. However, in all our experiments we construct the Cayley graph in a breadth-first manner, starting from the identity element as "node zero". Hence, the node at index $i$ is always guaranteed to be reachable from the nodes at lower indices $\left( {j < i}\right)$ , and the graph cannot be disconnected under this construction. More interesting strategies for this step can also be considered in the future, in order to optimise the communication properties of this subgraph.
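+ A minimal sketch of this breadth-first construction is given below (again our own illustration, not the exact code used in our experiments). Because node indices are assigned in the order in which the breadth-first search from the identity discovers them, truncating to the first $\left| V\right|$ indices always yields a connected subgraph.
+
```python
# Sketch: BFS construction of Cay(SL(2, Z_n); S_n) from the identity ("node zero"),
# returning the adjacency matrix of the first `num_nodes` discovered nodes.
from collections import deque
import numpy as np

def cayley_adjacency(n: int, num_nodes: int) -> np.ndarray:
    # The two generators in S_n, plus their inverses (entries taken modulo n).
    gens = [((1, 1), (0, 1)), ((1, 0), (1, 1)),
            ((1, (-1) % n), (0, 1)), ((1, 0), ((-1) % n, 1))]

    def matmul(a, b):  # 2x2 matrix product modulo n
        return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % n
                           for j in range(2)) for i in range(2))

    identity = ((1, 0), (0, 1))
    index = {identity: 0}          # BFS order of discovery defines the node index
    queue = deque([identity])
    edges = []
    while queue:
        g = queue.popleft()
        for s in gens:
            h = matmul(g, s)
            if h not in index:
                index[h] = len(index)
                queue.append(h)
            edges.append((index[g], index[h]))

    adj = np.zeros((num_nodes, num_nodes), dtype=np.float32)
    for u, v in edges:
        if u < num_nodes and v < num_nodes and u != v:
            adj[u, v] = adj[v, u] = 1.0
    return adj
```
+
+ For example, `cayley_adjacency(3, 24)` reproduces the 24-node, 4-regular graph shown in Figure 1 (Left), and slicing a larger Cayley graph down to its first $\left| V\right|$ rows and columns corresponds exactly to ${\mathbf{A}}_{1 : \left| V\right| ,1 : \left| V\right| }^{\operatorname{Cay}\left( n\right) }$ above.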
236
+
237
+ Note that we do not need to perform matching of the nodes in the original graph to the nodes of the Cayley graph. This is because, much like the fully connected graph used by [18], we interpret the Cayley graph mainly as a template for global information propagation, in order to relieve bottlenecks in a scalable way.
238
+
239
+ Algorithm 1 summarises the steps of our proposed EGP model. As direct corollaries of results we proved or demonstrated, we note that EGP satisfies all four of our desirable criteria: (C1) by Theorem 5 (so long as logarithmically many layers are applied), (C2) by Theorem 4 (high Cheeger constant implies no bottlenecks), (C3) by the fact our Cayley graphs are 4-regular and hence sparse, and (C4) by the fact we can generate a Cayley graph of appropriate size without detailed analysis of the input: we may precompute a "bank" of Cayley graphs of various sizes to use in an ad-hoc manner.
240
+
241
+ Algorithm 1: Expander graph propagation (EGP) forward pass
242
+
243
+ ---
244
+
245
+ Inputs: Node features $\mathbf{X} \in {\mathbb{R}}^{\left| V\right| \times k}$ , Adjacency matrix $\mathbf{A} \in {\mathbb{R}}^{\left| V\right| \times \left| V\right| }$
246
+
247
+ Output: Node embeddings $\mathbf{H}$
248
+
249
+ // Choose the smallest Cayley graph from our family that has number of nodes equal to, or greater than, $\left| V\right|$
250
+
251
+ $n \leftarrow \min \left\{ {m \in \mathbb{N} : \left| {V\left( {\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{m}}\right) ;{S}_{m}}\right) }\right) }\right| \geq \left| V\right| }\right\} ;\;$ // We can use Equation 10 to determine $n$
252
+
253
+ $$
254
+ {G}^{\operatorname{Cay}\left( n\right) } \leftarrow \operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)
255
+ $$
256
+
257
+ $$
258
+ {\mathbf{A}}_{uv}^{\operatorname{Cay}\left( n\right) } \leftarrow \left\{ \begin{array}{ll} 1 & \left( {u, v}\right) \in E\left( {G}^{\operatorname{Cay}\left( n\right) }\right) \\ 0 & \text{ otherwise } \end{array}\right.
259
+ $$
260
+
261
+ // Populate adjacency matrix of the Cayley graph
262
+
263
+ ${\mathbf{H}}^{\left( 0\right) } \leftarrow \mathbf{X}$ ; // Initialise GNN inputs
264
+
265
+ for $t \in \{ 1,\ldots , T\}$ do
266
+
267
+ if $t{\;\operatorname{mod}\;2} = 1$ then
268
+
269
+ ${\mathbf{H}}^{\left( t\right) } \leftarrow {\operatorname{GNN}}^{\left( t\right) }\left( {{\mathbf{H}}^{\left( t - 1\right) },\mathbf{A}}\right) ;$ // GNN layer over input graph; e.g. Equation 15
270
+
271
+ end
272
+
273
+ else
274
+
275
+ ${\mathbf{H}}^{\left( t\right) } \leftarrow {\operatorname{GNN}}^{\operatorname{Cay}}\left( {{\mathbf{H}}^{\left( t - 1\right) },{\mathbf{A}}_{1 : \left| V\right| ,1 : \left| V\right| }^{\operatorname{Cay}\left( n\right) }}\right) ;\;$ // GNN layer over Cayley graph; e.g. Equation 15
276
+
277
+ end
278
+
279
+ end
280
+
281
+ return ${\mathbf{H}}^{\left( T\right) }$ ; // Return final embeddings for downstream use
282
+
283
+ ---
284
+
285
+ ## 6 Empirical evaluation
286
+
287
+ Our work provides mainly a theoretical contribution: demonstrating a simple, theoretically-grounded approach to relieving bottlenecks and oversquashing in GNNs without requiring quadratic complexity or dedicated preprocessing. Further, we prove several additional results which deepen our understanding of curvature-based analysis of GNNs, showing how our expanders can be favourable in spite of their negatively-curved edges.
288
+
289
+ We now provide several direct comparative experiments in order to ascertain that our EGP addition can directly help existing graph classification baselines, even without further hyperparameter tuning.
290
+
291
+ Datasets To show this, we leverage the established Open Graph Benchmark collection of tasks [55, OGB]. Specifically, we provide results on all of its graph classification datasets: ogbg-molhiv, ogbg-molpcba, ogbg-ppa and ogbg-code2. The first two are among the largest molecule property prediction datasets in the MoleculeNet benchmark [56]. The third dataset is concerned with classifying
292
+
293
+ Table 2: Statistics of the four graph classification datasets studied in our evaluation.
294
+
295
+ <table><tr><td>Name</td><td>Number of graphs</td><td>Avg. nodes/graph</td><td>Avg. edges/graph</td><td>$\mathbf{{Metric}}$</td></tr><tr><td>ogbg-molhiv</td><td>41,127</td><td>25.5</td><td>27.5</td><td>ROC-AUC</td></tr><tr><td>ogbg-molpcba</td><td>437,929</td><td>26.0</td><td>28.1</td><td>Avg. precision</td></tr><tr><td>ogbg-ppa</td><td>158,100</td><td>243.4</td><td>2,266.1</td><td>Accuracy</td></tr><tr><td>ogbg-code2</td><td>452,741</td><td>125.2</td><td>124.2</td><td>${\mathrm{F}}_{1}$ score</td></tr></table>
296
+
297
+ Table 3: Comparative evaluation performance on the four datasets studied. Our baseline model is a GIN [54], using exactly the same implementation as in [55].
298
+
299
+ <table><tr><td>Model</td><td>ogbg-molhiv</td><td>ogbg-molpcba</td><td>ogbg-ppa</td><td>ogbg-code2</td></tr><tr><td>GIN</td><td>${0.7558} \pm {0.0140}$</td><td>${0.2266} \pm {0.0028}$</td><td>${0.6892} \pm {0.0100}$</td><td>${0.1495} \pm {0.0023}$</td></tr><tr><td>GIN + EGP</td><td>$\mathbf{{0.7934}} \pm {0.0035}$</td><td>$\mathbf{{0.2329}} \pm {0.0019}$</td><td>$\mathbf{{0.7027}} \pm {0.0159}$</td><td>${0.1497} \pm {0.0015}$</td></tr></table>
300
+
301
+ species into their taxa, from their protein-protein association networks [57, 58] given as input. The fourth dataset is a code summarisation task: it requires predicting the tokens in the name of a Python method, given the abstract syntax tree (AST) of its implementation.
302
+
303
+ We provide a summary of important dataset statistics in Table 2; please see [55] for detailed information on the data. These datasets are designed to span a wide variety of domains (virtual drug screening, molecular activity prediction, protein-protein interactions, code summarisation) and sizes (from small molecules to very large syntax trees—the largest graph in ogbg-code2 has 36,123 nodes).
304
+
305
+ Models In all four datasets, we want to directly evaluate the empirical gain of introducing an EGP layer, and to completely rule out any effects from parameter count or similar architectural decisions.
306
+
307
+ To enable this, we take inspiration from the experimental setup of [18]. Our baseline model is the GIN [54], with hyperparameters as given by [55]. We use the official publicly available model implementation from the OGB authors [55], and modify all even layers of the architecture to operate over the appropriately-sampled Cayley graph.
308
+
309
+ Note that our construction leaves both the parameter count and the latent dimension of the model unchanged; hence, any observed gains cannot be attributed to optimising these quantities.
310
+
311
+ Results The results of our evaluation are presented in Table 3. It can be observed that, in all four cases, propagating information over the Cayley graph yields improvements in mean performance. These improvements are most apparent on ogbg-molhiv, where our approach significantly outperforms even the "virtual node" version of GIN, which uses $\sim {1.8} \times$ more parameters and achieves ${0.7707} \pm {0.0149}$ AUC [55]. We believe that these results provide encouraging empirical evidence that propagating information over Cayley graphs is an elegant idea for alleviating bottlenecks.
312
+
313
+ ## 7 Conclusion
314
+
315
+ In this paper, we have presented expander graph propagation (EGP), a novel and elegant approach to alleviating bottlenecks in graph representation learning, which provably supports global communication while not requiring quadratic complexity or dedicated preprocessing of the input.
316
+
317
+ To this end, we offered a detailed theoretical overview of Cayley graphs of special linear groups, $\operatorname{Cay}\left( {\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ . We cite proofs that these graphs have highly favourable properties for information propagation in graph neural networks: they are sparse and 4-regular, they have logarithmic diameter, and they can be efficiently precomputed by a simple procedure that does not rely on the input structure. We show that, in spite of having negatively curved edges, our findings do not violate any prior results on understanding oversquashing via curvature. Even under a simple intervention, interleaving EGP layers in between standard GNN layers, we have been able to obtain significant performance gains without changing the parameter count or latent space dimensionality.
318
+
319
+ We hope that our work serves as a foundation for future research on deploying Cayley graphs, or other expander families, within the context of GNNs.
320
+
321
+ References
322
+
323
+ [1] William L Hamilton. Graph representation learning. Synthesis Lectures on Artifical Intelligence and Machine Learning, 14(3):1-159, 2020. 1
324
+
325
+ [2] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 1
326
+
327
+ [3] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
328
+
329
+ [4] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pages 1263-1272. PMLR, 2017. 1, 2, 3
330
+
331
+ [5] Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021. 1
332
+
333
+ [6] Petar Veličković. Message passing all the way up. arXiv preprint arXiv:2202.11097, 2022. 1
334
+
335
+ [7] Jonathan M Stokes, Kevin Yang, Kyle Swanson, Wengong Jin, Andres Cubillos-Ruiz, Nina M Donghia, Craig R MacNair, Shawn French, Lindsey A Carfrae, Zohar Bloom-Ackermann, et al. A deep learning approach to antibiotic discovery. Cell, 180(4):688-702, 2020. 1
336
+
337
+ [8] Austin Derrow-Pinion, Jennifer She, David Wong, Oliver Lange, Todd Hester, Luis Perez, Marc Nunkesser, Seongjae Lee, Xueying Guo, Brett Wiltshire, et al. Eta prediction with graph neural networks in google maps. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 3767-3776, 2021. 1
338
+
339
+ [9] Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, Joe Wenjie Jiang, Ebrahim Songhori, Shen Wang, Young-Joon Lee, Eric Johnson, Omkar Pathak, Azade Nazi, et al. A graph placement methodology for fast chip design. Nature, 594(7862):207-212, 2021. 1
340
+
341
+ [10] Charles Blundell, Lars Buesing, Alex Davies, Petar Veličković, and Geordie Williamson. Towards combinatorial invariance for kazhdan-lusztig polynomials. arXiv preprint arXiv:2111.15161, 2021. 1
342
+
343
+ [11] Alex Davies, Petar Veličković, Lars Buesing, Sam Blackwell, Daniel Zheng, Nenad Tomašev, Richard Tanburn, Peter Battaglia, Charles Blundell, András Juhász, et al. Advancing mathematics by guiding human intuition with ai. Nature, 600(7887):70-74, 2021. 1
344
+
345
+ [12] Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, and Austin R Benson. Combining label propagation and simple models out-performs graph neural networks. arXiv preprint arXiv:2010.13993, 2020. 1
346
+
347
+ [13] Shyam A Tailor, Felix Opolka, Pietro Lio, and Nicholas Donald Lane. Do we need anisotropic graph neural networks? In International Conference on Learning Representations, 2021.
348
+
349
+ [14] Felix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Simplifying graph convolutional networks. In International conference on machine learning, pages 6861-6871. PMLR, 2019. 1
350
+
351
+ [15] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. Advances in neural information processing systems, 28, 2015. 1
352
+
353
+ [16] Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli. A fair comparison of graph neural networks for graph classification. arXiv preprint arXiv:1912.09893, 2019. 1
354
+
355
+ [17] Enxhell Luzhnica, Ben Day, and Pietro Liò. On graph classification networks, datasets and baselines. arXiv preprint arXiv:1905.04682, 2019. 1
356
+
357
+ [18] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. arXiv preprint arXiv:2006.05205, 2020. 2, 3, 7, 8, 9
358
+
359
+ [19] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. arXiv preprint arXiv:2111.14522, 2021. 2, 3, 4, 5, 6, 7
360
+
361
+ [20] Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018. 2, 3
362
+
363
+ [21] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. Advances in neural information processing systems, 29, 2016. 3
364
+
365
+ [22] Devin Kreuzer, Dominique Beaini, Will Hamilton, Vincent Létourneau, and Prudencio Tossou. Rethinking graph transformers with spectral attention. Advances in Neural Information Processing Systems, 34, 2021. 3
366
+
367
+ [23] Grégoire Mialon, Dexiong Chen, Margot Selosse, and Julien Mairal. Graphit: Encoding graph structure in transformers. arXiv preprint arXiv:2106.05667, 2021. 3
368
+
369
+ [24] Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. Advances in neural information processing systems, 30, 2017. 3
370
+
371
+ [25] Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? Advances in Neural Information Processing Systems, 34, 2021. 3
372
+
373
+ [26] Giorgos Bouritsas, Fabrizio Frasca, Stefanos P Zafeiriou, and Michael Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. 3
374
+
375
+ [27] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855-864, 2016. 3
376
+
377
+ [28] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701-710, 2014.
378
+
379
+ [29] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067-1077, 2015.
380
+
381
+ [30] Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L Dyer, Remi Munos, Petar Veličković, and Michal Valko. Large-scale representation learning on graphs via bootstrapping. In International Conference on Learning Representations, 2021.
382
+
383
+ [31] Petar Veličković, William Fedus, William L Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. arXiv preprint arXiv:1809.10341, 2018. 3
384
+
385
+ [32] Johannes Gasteiger, Stefan Weißenberger, and Stephan Günnemann. Diffusion improves graph learning. arXiv preprint arXiv:1911.05485, 2019. 3
386
+
387
+ [33] Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Lio, Guido F Montufar, and Michael Bronstein. Weisfeiler and lehman go cellular: Cw networks. Advances in Neural Information Processing Systems, 34:2625-2640, 2021. 3
388
+
389
+ [34] Cristian Bodnar, Fabrizio Frasca, Yuguang Wang, Nina Otter, Guido F Montufar, Pietro Lio, and Michael Bronstein. Weisfeiler and lehman go topological: Message passing simplicial networks. In International Conference on Machine Learning, pages 1026-1037. PMLR, 2021.
390
+
391
+ [35] Matthias Fey, Jan-Gin Yuen, and Frank Weichert. Hierarchical inter-message passing for learning on molecular graphs. arXiv preprint arXiv:2006.12179, 2020. 3
392
+
393
+ [36] Christopher Morris, Gaurav Rattan, and Petra Mutzel. Weisfeiler and leman go sparse: Towards scalable higher-order graph embeddings. Advances in Neural Information Processing Systems, 33:21824-21840, 2020. 3
394
+
395
+ [37] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 4602-4609, 2019. 3
396
+
397
+ [38] Kimberly Stachenfeld, Jonathan Godwin, and Peter Battaglia. Graph networks with spectral message passing. arXiv preprint arXiv:2101.00079, 2020. 3
398
+
399
+ [39] Johannes F Lutzeyer, Changmin Wu, and Michalis Vazirgiannis. Sparsifying the update step in graph neural networks. arXiv preprint arXiv:2109.00909, 2021. 4
400
+
401
+ [40] Fan R. K. Chung. Spectral graph theory, volume 92 of CBMS Regional Conference Series in Mathematics. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1997. ISBN 0-8218-0315-8. 4, 14
402
+
403
+ [41] Alexander Lubotzky. Discrete groups, expanding graphs and invariant measures, volume 125 of Progress in Mathematics. Birkhäuser Verlag, Basel, 1994. ISBN 3-7643-5075-X. doi: 10.1007/978-3-0346-0332-4. URL https://doi.org/10.1007/978-3-0346-0332-4.With an appendix by Jonathan D. Rogawski. 4
404
+
405
+ [42] Giuliana Davidoff, Peter Sarnak, and Alain Valette. Elementary number theory, group theory, and Ramanujan graphs, volume 55 of London Mathematical Society Student Texts. Cambridge University Press, Cambridge, 2003. ISBN 0-521-82426-5; 0-521-53143-8. doi: 10.1017/ CBO9780511615825. URL https://doi.org/10.1017/CBO9780511615825.5
406
+
407
+ [43] Emmanuel Kowalski. An introduction to expander graphs, volume 26 of Cours Spécialisés [Specialized Courses]. Société Mathématique de France, Paris, 2019. ISBN 978-2-85629-898-5. 4, 5
408
+
409
+ [44] N. Alon. Eigenvalues and expanders. volume 6, pages 83-96. 1986. doi: 10.1007/BF02579166. URL https://doi.org/10.1007/BF02579166.Theory of computing (Singer Island, Fla., 1984). 4
410
+
411
+ [45] N. Alon and V. D. Milman. ${\lambda }_{1}$ , isoperimetric inequalities for graphs, and superconcentrators. J. Combin. Theory Ser. B, 38(1):73-88, 1985. ISSN 0095-8956. doi: 10.1016/0095-8956(85) 90092-9. URL https://doi.org/10.1016/0095-8956(85)90092-9.4
412
+
413
+ [46] Jozef Dodziuk. Difference equations, isoperimetric inequality and transience of certain random walks. Trans. Amer. Math. Soc., 284(2):787-794, 1984. ISSN 0002-9947. doi: 10.2307/1999107. URL https://doi.org/10.2307/1999107.
414
+
415
+ [47] R. Michael Tanner. Explicit concentrators from generalized $N$ -gons. SIAM J. Algebraic Discrete Methods, 5(3):287-293, 1984. ISSN 0196-5212. doi: 10.1137/0605030. URL https://doi.org/10.1137/0605030.4
416
+
417
+ [48] Bojan Mohar. Eigenvalues, diameter, and mean distance in graphs. Graphs Combin., 7(1): 53-64, 1991. ISSN 0911-0119. doi: 10.1007/BF01789463. URL https://doi.org/10.1007/BF01789463.4
418
+
419
+ [49] Atle Selberg. On the estimation of Fourier coefficients of modular forms. In Proc. Sympos. Pure Math., Vol. VIII, pages 1-15. Amer. Math. Soc., Providence, R.I., 1965. 5
420
+
421
+ [50] Robin Forman. Bochner's method for cell complexes and combinatorial Ricci curvature. Discrete & Computational Geometry, 29(3):323-374, 2003. 5
422
+
423
+ [51] Yann Ollivier. Ricci curvature of metric spaces. Comptes Rendus Mathematique, 345(11): 643-646, 2007. 5
424
+
425
+ [52] Yann Ollivier. Ricci curvature of markov chains on metric spaces. Journal of Functional Analysis, 256(3):810-864, 2009. 5
426
+
427
+ [53] Justin Salez. Sparse expanders have negative curvature. arXiv preprint arXiv:2101.08242, 2021. 6, 14
428
+
429
+ [54] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018. 7, 9
430
+
431
+ [55] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118-22133, 2020. 8, 9
432
+
433
+ [56] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. Chemical science, 9(2):513-530, 2018. 8
434
+
435
+ [57] Damian Szklarczyk, Annika L Gable, David Lyon, Alexander Junge, Stefan Wyder, Jaime Huerta-Cepas, Milan Simonovic, Nadezhda T Doncheva, John H Morris, Peer Bork, et al. String v11: protein-protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. Nucleic acids research, 47(D1):D607-D613, 2019.9
436
+
437
+ [58] Marinka Zitnik, Rok Sosič, Marcus W Feldman, and Jure Leskovec. Evolution of resilience in protein interactomes across the tree of life. Proceedings of the National Academy of Sciences, 116(10):4426-4433, 2019. 9
438
+
439
+ [59] Grigorii A Margulis. Explicit constructions of graphs without short cycles and low density codes. Combinatorica, 2(1):71-78, 1982. 13
440
+
441
+ ## A Proof of Proposition 11
442
+
443
+ Let $s$ be one of
444
+
445
+ $$
446
+ \left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) ,\;\left( \begin{array}{ll} 1 & 0 \\ 1 & 1 \end{array}\right) .
447
+ $$
448
+
449
+ Let $n,{n}^{\prime } > {18}$ and let $e$ and ${e}^{\prime }$ be s-labelled edges in ${G}_{n}$ and ${G}_{{n}^{\prime }}$ . Then there is a graph isomorphism between ${N}_{2}\left( e\right)$ and ${N}_{2}\left( {e}^{\prime }\right)$ taking e to ${e}^{\prime }$ .
450
+
451
+ Proof. Note first that, by the homogeneity of the Cayley graphs ${G}_{n}$ and ${G}_{{n}^{\prime }}$ , we may assume that $e$ and ${e}^{\prime }$ emanate from the identity vertex of each graph.
452
+
453
+ Let ${G}_{\infty }$ be the Cayley graph of $\operatorname{SL}\left( {2,\mathbb{Z}}\right)$ with respect to the generators
454
+
455
+ $$
456
+ {S}_{\infty } = \left\{ {\left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) ,\left( \begin{array}{ll} 1 & 0 \\ 1 & 1 \end{array}\right) }\right\} .
457
+ $$
458
+
459
+ Let ${e}_{\infty }$ be the $s$ -labelled edge emanating from the identity vertex of ${G}_{\infty }$ . The quotient homomorphism
460
+
461
+ $$
462
+ \mathrm{{SL}}\left( {2,\mathbb{Z}}\right) \rightarrow \mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right)
463
+ $$
464
+
465
+ induces a graph homomorphism ${G}_{\infty } \rightarrow {G}_{n}$ sending ${e}_{\infty }$ to $e$ . We will show that it restricts to a graph isomorphism
466
+
467
+ $$
468
+ {N}_{2}\left( {e}_{\infty }\right) \rightarrow {N}_{2}\left( e\right) \text{.}
469
+ $$
470
+
471
+ As there is a similar graph isomorphism ${N}_{2}\left( {e}_{\infty }\right) \rightarrow {N}_{2}\left( {e}^{\prime }\right)$ , the proposition will follow.
472
+
473
+ Note that two elements of $\mathrm{{SL}}\left( {2,\mathbb{Z}}\right)$ map to the same element of $\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right)$ if and only if they differ by multiplication by an element of the kernel ${K}_{n}$ . This is
474
+
475
+ $$
476
+ {K}_{n} = \left\{ {\left( \begin{array}{ll} a & b \\ c & d \end{array}\right) \in \mathrm{{SL}}\left( {2,\mathbb{Z}}\right) : a \equiv d \equiv 1{\;\operatorname{mod}\;n}\text{ and }b \equiv c \equiv 0{\;\operatorname{mod}\;n}}\right\} .
477
+ $$
478
+
479
+ The graph homomorphism sends edges to edges, and so it is distance non-increasing. Hence it certainly sends ${N}_{2}\left( {e}_{\infty }\right)$ to ${N}_{2}\left( e\right)$ . It is also clearly surjective, because any element of ${N}_{2}\left( e\right)$ is reached from an endpoint of $e$ by a path of length at most 2, and there is a corresponding path in ${N}_{2}\left( {e}_{\infty }\right)$ .
480
+
481
+ We just need to show that this is an injection. If not, then two distinct vertices ${g}_{1}$ and ${g}_{2}$ in ${N}_{2}\left( {e}_{\infty }\right)$ map to the same vertex in ${N}_{2}\left( e\right)$ . Note then that as elements of $\mathrm{{SL}}\left( {2,\mathbb{Z}}\right) ,{g}_{1} = {g}_{2}k$ for some $k \in {K}_{n}$ . There are paths with length at most 3 joining the identity 1 to ${g}_{1}$ and ${g}_{2}$ respectively. Hence, the distance in ${G}_{\infty }$ between ${g}_{1}$ and ${g}_{2}$ is at most 6 . Therefore, the distance between 1 and ${g}_{1}^{-1}{g}_{2}$ is at most 6 . This element ${g}_{1}^{-1}{g}_{2}$ lies in ${K}_{n}$ . We will show that when $n > {18}$ , the only element of ${K}_{n}$ that has distance at most 6 from the identity is the identity itself. This will imply that ${g}_{1}^{-1}{g}_{2} = 1$ and hence ${g}_{1} = {g}_{2}$ . But this contradicts the assumption that ${g}_{1}$ and ${g}_{2}$ are distinct vertices. Our argument follows that of [59].
482
+
483
+ The operator norm $\parallel A\parallel$ of a matrix $A \in \mathrm{{SL}}\left( {2,\mathbb{Z}}\right)$ is
484
+
485
+ $$
486
+ \parallel A\parallel = \sup \left\{ {\left| {A\left( v\right) }\right| : v \in {\mathbb{R}}^{2},\left| v\right| = 1}\right\} .
487
+ $$
488
+
489
+ This is submultiplicative: $\parallel {AB}\parallel \leq \parallel A\parallel \parallel B\parallel$ for matrices $A$ and $B$ . It can be calculated as the square root of the largest eigenvalue of ${A}^{t}A$ . In our case, the operator norms satisfy
490
+
491
+ $$
492
+ \begin{Vmatrix}\left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) \end{Vmatrix} = \begin{Vmatrix}\left( \begin{array}{ll} 1 & 0 \\ 1 & 1 \end{array}\right) \end{Vmatrix} = \frac{1 + \sqrt{5}}{2}.
493
+ $$
494
+
495
+ Consider an element
496
+
497
+ $$
498
+ K = \left( \begin{array}{ll} a & b \\ c & d \end{array}\right)
499
+ $$
500
+
501
+ of ${K}_{n}$ that is not the identity. Since $a \equiv d \equiv 1$ modulo $n$ and $b \equiv c \equiv 0$ modulo $n$ , we deduce that at least one of $\left| a\right| ,\left| b\right| ,\left| c\right|$ and $\left| d\right|$ is at least $n - 1$ . Therefore, this matrix acts on one of the vectors ${\left( 1,0\right) }^{t}$ or ${\left( 0,1\right) }^{t}$ by scaling its length by at least $n - 1$ . Therefore, $\parallel K\parallel \geq n - 1$ . Suppose now that $K$ has distance at most 6 from the identity. Then $K$ can be written as a word in the generators of $\mathrm{{SL}}\left( {2,\mathbb{Z}}\right)$ with length at most 6 . Therefore, we obtain the inequality
502
+
503
+ $$
504
+ \parallel K\parallel \leq {\left( \frac{1 + \sqrt{5}}{2}\right) }^{6} < {17.95}.
505
+ $$
506
+
507
+ Hence, $n < {18.95}$ and therefore, as $n$ is integral, $n \leq {18}$ .
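+ The two numerical facts used in this argument are easy to verify directly; the following short check (ours, purely as a sanity check) does so with NumPy.
+
```python
# Sanity check: the operator norm of a generator is the golden ratio (1 + sqrt(5))/2,
# and its 6th power is below 17.95, as used in the proof above.
import numpy as np

generator = np.array([[1.0, 1.0], [0.0, 1.0]])
golden_ratio = (1.0 + np.sqrt(5.0)) / 2.0

operator_norm = np.linalg.norm(generator, ord=2)  # largest singular value
print(operator_norm, golden_ratio)                # both approximately 1.6180
print(golden_ratio ** 6)                          # approximately 17.944 < 17.95
```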
508
+
509
+ ## B Proof of Theorem 13
510
+
511
+ For any $\epsilon > 0,\delta > 0$ and $\Delta > 0$ , there are only finitely many graphs with maximum vertex degree $\Delta$ , Cheeger constant at least $\delta$ and Ollivier curvature at least $- \epsilon$ .
512
+
513
+ Proof. This is a consequence of the main result of Salez [53, Theorem 3]. This states that if ${G}_{n} =$ $\left( {{V}_{n},{E}_{n}}\right)$ is a sequence of graphs with the following properties:
514
+
515
+ $$
516
+ \mathop{\sup }\limits_{{n \geq 1}}\left\{ {\frac{1}{\left| {V}_{n}\right| }\mathop{\sum }\limits_{{v \in {V}_{n}}}\deg \left( v\right) \log \deg \left( v\right) }\right\} < \infty
517
+ $$
518
+
519
+ $$
520
+ \forall \epsilon > 0,\;\frac{1}{\left| {E}_{n}\right| }\left| \left\{ {e \in {E}_{n} : \kappa \left( e\right) < - \epsilon }\right\} \right| \rightarrow 0\text{ as }n \rightarrow \infty ,
521
+ $$
522
+
523
+ then
524
+
525
+ $$
526
+ \forall \rho < 1,\;\mathop{\liminf }\limits_{{n \rightarrow \infty }}\left\{ {\frac{1}{\left| {V}_{n}\right| }\left| \left\{ {i : {\mu }_{i}\left( {G}_{n}\right) \geq \rho }\right\} \right| }\right\} > 0.
527
+ $$
528
+
529
+ Here, $\kappa \left( e\right)$ is the Ollivier curvature of an edge $e$ and
530
+
531
+ $$
532
+ 1 = {\mu }_{0}\left( G\right) \geq {\mu }_{1}\left( G\right) \geq \cdots \geq 0
533
+ $$
534
+
535
+ are the eigenvalues of the lazy random walk operator. To prove the theorem, we suppose that, on the contrary, there are infinitely many distinct graphs ${G}_{n} = \left( {{V}_{n},{E}_{n}}\right)$ with maximum vertex degree $\Delta$ , Cheeger constant at least $\delta$ and Ollivier curvature at least $- \epsilon$ . Then
536
+
537
+ $$
538
+ \mathop{\sum }\limits_{{v \in {V}_{n}}}\deg \left( v\right) \log \deg \left( v\right) \leq \left| {V}_{n}\right| \Delta \log \Delta
539
+ $$
540
+
541
+ and so the first condition is satisfied. The second condition is trivially satisfied because the Ollivier curvature of every edge of each graph is at least $- \epsilon$ . Thus, we deduce that the conclusion of Salez's theorem holds. Setting $\rho = 1 - \left( {{\delta }^{2}/4{\Delta }^{2}}\right)$ , we deduce that a definite proportion of the eigenvalues of the random walk operator are at least $1 - \left( {{\delta }^{2}/4{\Delta }^{2}}\right)$ . In particular, ${\mu }_{1}\left( {G}_{n}\right) \geq 1 - \left( {{\delta }^{2}/4{\Delta }^{2}}\right)$ . Denote the eigenvalues of the normalised
542
+
543
+ Laplacian by
544
+
545
+ $$
546
+ 0 = {\lambda }_{0}^{\prime }\left( {G}_{n}\right) \leq {\lambda }_{1}^{\prime }\left( {G}_{n}\right) \leq \ldots
547
+ $$
548
+
549
+ These are related to the eigenvalues of the lazy random walk operator by ${\lambda }_{i}^{\prime }\left( {G}_{n}\right) = 2 - 2{\mu }_{i}\left( {G}_{n}\right)$ . Hence, ${\lambda }_{1}^{\prime }\left( {G}_{n}\right) \leq {\delta }^{2}/\left( {2{\Delta }^{2}}\right)$ . There is a variation of Cheeger’s inequality that relates ${\lambda }_{1}^{\prime }$ to the conductance of the graph. To define this, one considers subsets $A$ of the vertex set, and defines their volume to be $\operatorname{vol}\left( A\right) = \mathop{\sum }\limits_{{v \in A}}\deg \left( v\right)$ . The conductance $\phi \left( G\right)$ of a graph $G$ is
550
+
551
+ $$
552
+ \phi \left( G\right) = \min \left\{ {\frac{\left| \partial A\right| }{\operatorname{vol}\left( A\right) } : A \subset V\left( G\right) ,0 < \operatorname{vol}\left( A\right) \leq \operatorname{vol}\left( {V\left( G\right) }\right) /2}\right\} .
553
+ $$
554
+
555
+ Then, by Chung [40, Theorem 2.2],
556
+
557
+ $$
558
+ \phi \left( G\right) \leq \sqrt{2{\lambda }_{1}^{\prime }\left( G\right) }
559
+ $$
560
+
561
+ Hence, in our case,
562
+
563
+ $$
564
+ \phi \left( {G}_{n}\right) \leq \delta /\Delta
565
+ $$
566
+
567
+ Consider any subset ${A}_{n}$ of the vertex set that realises $\phi \left( {G}_{n}\right)$ . Thus $0 < \operatorname{vol}\left( {A}_{n}\right) \leq \operatorname{vol}\left( {V}_{n}\right) /2$ and $\left| {\partial {A}_{n}}\right| /\operatorname{vol}\left( {A}_{n}\right) = \phi \left( {G}_{n}\right) \leq \delta /\Delta$ . If ${A}_{n}$ is at most half the vertices of ${G}_{n}$ , then this implies that the
568
+
569
+ Cheeger constant $h\left( {G}_{n}\right) \leq \delta$ . On the other hand, if ${A}_{n}$ is more than half the vertices of ${G}_{n}$ , we consider its complement ${A}_{n}^{c}$ . Its cardinality $\left| {A}_{n}^{c}\right|$ satisfies
570
+
571
+ $$
572
+ \left| {A}_{n}^{c}\right| \geq \operatorname{vol}\left( {A}_{n}^{c}\right) /\Delta
573
+ $$
574
+
575
+ Hence,
576
+
577
+ $$
578
+ h\left( {G}_{n}\right) \leq \frac{\left| \partial {A}_{n}^{c}\right| }{\left| {A}_{n}^{c}\right| } \leq \frac{\left| {\partial {A}_{n}}\right| \Delta }{\operatorname{vol}\left( {A}_{n}^{c}\right) } \leq \frac{\left| {\partial {A}_{n}}\right| \Delta }{\operatorname{vol}\left( {A}_{n}\right) } = \phi \left( {G}_{n}\right) \Delta \leq \delta .
579
+ $$
580
+
581
+ In either case, we deduce that the Cheeger constant of ${G}_{n}$ is at most $\delta$ , contradicting one of our hypotheses. Hence, there must have been only finitely many graphs satisfying the conditions of the theorem.
582
+
583
+ ## C Cayley graph at infinity is quasi-isometric to a tree
584
+
585
+ As all vertices of ${G}_{n}$ look the same, we focus attention on ${N}_{r}\left( 1\right)$ , the $r$ -neighbourhood of the identity vertex. The proof of Proposition 11 immediately gives the following.
586
+
587
+ Proposition 16. Let $r$ be a positive integer satisfying
588
+
589
+ $$
590
+ r \leq \frac{1}{2}{\left( \log \left( \frac{1 + \sqrt{5}}{2}\right) \right) }^{-1}\log \left( {n - 1}\right) .
591
+ $$
592
+
593
+ Then there is a graph isomorphism between the $r$ -neighbourhood of the identity vertex in ${G}_{n}$ and the $r$ -neighbourhood of the identity vertex in ${G}_{\infty }$ . This isomorphism takes the identity vertex to the identity vertex.
594
+
595
+ Proof. As shown above, there is a graph homomorphism from ${N}_{r}\left( 1\right)$ in ${G}_{\infty }$ to ${N}_{r}\left( 1\right)$ in ${G}_{n}$ that is a surjection. If it fails to be an injection, then there is a non-trivial element $K$ in the kernel ${K}_{n}$ of $\mathrm{{SL}}\left( {2,\mathbb{Z}}\right) \rightarrow \mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right)$ satisfying
596
+
597
+ $$
598
+ \parallel K\parallel \leq {\left( \frac{1 + \sqrt{5}}{2}\right) }^{2r}.
599
+ $$
600
+
601
+ But any non-trivial element $K$ in ${K}_{n}$ satisfies
602
+
603
+ $$
604
+ \parallel K\parallel \geq n - 1
605
+ $$
606
+
607
+ Rearranging gives the required inequality.
608
+
609
+ This raises the question of the local structure of ${G}_{\infty }$ . The answer is well-known: it is ’tree-like’. Specifically, it is quasi-isometric to a tree. The formal definition of quasi-isometry is as follows.
610
+
611
+ Definition 17. A quasi-isometry between two metric spaces $\left( {{X}_{1},{d}_{1}}\right)$ and $\left( {{X}_{2},{d}_{2}}\right)$ is a function $f : {X}_{1} \rightarrow {X}_{2}$ that satisfies the following two conditions:
612
+
613
+ 1. there are constants $c, C > 0$ such that, for every $x,{x}^{\prime } \in {X}_{1}$
614
+
615
+ $$
616
+ c{d}_{1}\left( {x,{x}^{\prime }}\right) - c \leq {d}_{2}\left( {f\left( x\right) , f\left( {x}^{\prime }\right) }\right) \leq C{d}_{1}\left( {x,{x}^{\prime }}\right) + C,
617
+ $$
618
+
619
+ 2. there is a constant $K \geq 0$ such that for every $y \in {X}_{2}$ , there is an $x \in {X}_{1}$ with ${d}_{2}\left( {f\left( x\right) , y}\right) \leq K$ .
+
+ If there is such a quasi-isometry, we say that $\left( {{X}_{1},{d}_{1}}\right)$ and $\left( {{X}_{2},{d}_{2}}\right)$ are quasi-isometric.
620
+
621
+ This forms an equivalence relation on metric spaces. When two metric spaces are quasi-isometric, they are viewed as being 'essentially the same' at large scales.
622
+
623
+ When $S$ and ${S}^{\prime }$ are finite generating sets for a group $\Gamma$ , the graphs $\operatorname{Cay}\left( {\Gamma ;S}\right)$ and $\operatorname{Cay}\left( {\Gamma ;{S}^{\prime }}\right)$ are quasi-isometric. Hence, the quasi-isometry type of a finitely generated group is well-defined, and this is the central object of study in geometric group theory.
624
+
625
+ The group $\mathrm{{SL}}\left( {2,\mathbb{Z}}\right)$ has a finite-index subgroup $F$ that is a free group. If ${S}^{\prime }$ denotes a free generating set for $F$ , then $\operatorname{Cay}\left( {F;{S}^{\prime }}\right)$ is a tree. As passing to a finite-index subgroup preserves the quasi-isometry class, we deduce that the Cayley graph ${G}_{\infty } = \operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,\mathbb{Z}}\right) ;{S}_{\infty }}\right)$ is indeed quasi-isometric to a tree, as claimed above.
626
+
627
+ ## D Mixing time properties of expander graphs
628
+
629
+ Expanders are well known to have small mixing time, in the following sense.
630
+
631
+ Let $G$ be a graph. We will consider probability distributions $\pi$ on $V\left( G\right)$ . The lazy random walk operator $M$ acts on probability distributions as follows. We think of $\pi \left( v\right)$ as being the probability of the random walk being at vertex $v$ . If the current location of the walk is at $v$ , then at the next step of the walk, either we stay put with probability $1/2$ or we move to one of its neighbours with equal probability. Then ${M\pi }$ is the new probability distribution.
632
+
633
+ In the case when $G$ is $k$ -regular, this takes a particularly simple form. The operator $M$ is represented by the matrix $\left( {1/2}\right) I + \left( {1/{2k}}\right) A$ , where $A$ is the adjacency matrix. In that case, provided $G$ is connected, any initial distribution $\pi$ converges under powers of $M$ to the uniform distribution.
634
+
635
+ This is true for any reasonable notion of convergence, but we will use the $\parallel \cdot {\parallel }_{1}$ norm, where for two probability distributions $\pi$ and ${\pi }^{\prime }$ ,
636
+
637
+ $$
638
+ {\begin{Vmatrix}\pi - {\pi }^{\prime }\end{Vmatrix}}_{1} = \mathop{\sum }\limits_{{v \in V\left( G\right) }}\left| {\pi \left( v\right) - {\pi }^{\prime }\left( v\right) }\right| .
639
+ $$
640
+
641
+ Definition 18. The mixing time for a regular graph $G$ is the minimum value of $\ell$ such that for any starting probability distribution $\pi$ on the vertex set of $G$ ,
642
+
643
+ $$
644
+ {\begin{Vmatrix}{M}^{\ell }\pi - u\end{Vmatrix}}_{1} \leq \frac{1}{4}
645
+ $$
646
+
647
+ Here, $u$ is the uniform probability distribution on the vertex set, and $M$ is the lazy random walk operator.
648
+
649
+ Expanders have small mixing times in the following very strong sense.
650
+
651
+ Theorem 19. For any $k > 0$ and $\delta > 0$ , there is a constant $c > 0$ with the following property. If $G$ is a connected $k$ -regular graph on $n$ vertices with Cheeger constant at least $\delta > 0$ , then the mixing time for $G$ is at most $c\log \left( n\right)$ .
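+ Theorem 19 can also be observed empirically. The sketch below (our own illustration, not part of the paper) computes the mixing time of Definition 18 directly, by iterating the lazy random walk operator from every delta distribution; by convexity of the $\parallel \cdot {\parallel }_{1}$ norm, this bounds the worst case over all starting distributions. On random 4-regular graphs, which are expanders with high probability, the measured mixing time grows roughly logarithmically with the number of vertices.
+
```python
# Sketch: mixing time of the lazy random walk M = I/2 + A/(2k) on a k-regular graph,
# following Definition 18. Illustrative only; helper names are ours.
import numpy as np
import networkx as nx

def mixing_time(graph: nx.Graph, k: int, max_steps: int = 10_000) -> int:
    num_nodes = graph.number_of_nodes()
    adjacency = nx.to_numpy_array(graph)
    walk = 0.5 * np.eye(num_nodes) + adjacency / (2.0 * k)  # lazy random walk operator
    uniform = np.full(num_nodes, 1.0 / num_nodes)

    dist = np.eye(num_nodes)  # one delta distribution per column (worst-case starts)
    for step in range(1, max_steps + 1):
        dist = walk @ dist
        l1_gap = np.abs(dist - uniform[:, None]).sum(axis=0).max()
        if l1_gap <= 0.25:
            return step
    raise RuntimeError("did not mix within max_steps")

if __name__ == "__main__":
    for n in (128, 1024):
        g = nx.random_regular_graph(4, n, seed=0)   # 4-regular, like our Cayley graphs
        assert nx.is_connected(g)
        print(n, mixing_time(g, k=4))
```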
papers/LOG/LOG 2022/LOG 2022 Conference/IKevTLt3rT/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,361 @@
1
+ § EXPANDER GRAPH PROPAGATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Deploying graph neural networks (GNNs) on whole-graph classification or regression tasks is known to be challenging: it often requires computing node features that are mindful of both local interactions in their neighbourhood and the global context of the graph structure. GNN architectures that navigate this space need to avoid pathological behaviours, such as bottlenecks and oversquashing, while ideally having linear time and space complexity requirements. In this work, we propose an elegant approach based on propagating information over expander graphs. We provide an efficient method for constructing expander graphs of a given size, and use this insight to propose the EGP model. We show that EGP is able to address all of the above concerns, while requiring minimal effort to set up, and provide evidence of its empirical utility on relevant datasets and baselines in the Open Graph Benchmark. Importantly, using expander graphs as a template for message passing necessarily gives rise to negative curvature. While this appears to be counterintuitive in light of recent related work on oversquashing, we theoretically demonstrate that negatively curved edges are likely to be required to obtain scalable message passing without bottlenecks. To the best of our knowledge, this is a previously unstudied result in the context of graph representation learning, and we believe our analysis paves the way to a novel class of scalable methods to counter oversquashing in GNNs.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Graph neural networks (GNNs) are a flexible class of models for learning representations over graph-structured data [1]. Their versatility [2-4] and generality [5, 6] has made them a very attractive approach, leading to considerable application in areas as diverse as virtual drug screening [7], traffic prediction [8], combinatorial chip design [9] and pure mathematics [10, 11].
16
+
17
+ Most GNNs rely on repeatedly propagating information between neighbouring nodes in the graph. This is commonly expressed in the message passing [4] paradigm: nodes send vector-based messages to each other along the edges of the graph, and nodes update their representations by aggregating all the messages sent to them, in a permutation-invariant manner. Under many industrially-relevant tasks (which require identifying node-level properties, often with homophily assumptions), this formalism is very well aligned, often allowing for highly scalable model variants [12-14].
18
+
19
+ However, in many areas of scientific interest, purely local interactions are likely to be insufficient. Among the principal tasks over graphs, graph classification is perhaps most ripe with such situations: to meaningfully attach a label to a graph, in many cases it is insufficient to treat graphs as "bags of nodes". For example, when classifying a molecule for its potency as a candidate drug [7], the label is driven by complex substructure interactions in the molecule [15], rather than a naïve sum of atom-level effects. Accordingly, GNNs deployed in this regime need to update node features in a manner that is mindful of the global properties of the graph.
20
+
21
+ It quickly became apparent (as early as [2]) that it is often inadequate to merely stack more message passing layers over the input graph. In fact, for many standard graph classification tasks, such approaches may be weaker than discarding the graph structure altogether [16, 17]. Now, it is well-understood that stacking many local layers leaves GNNs vulnerable to pathological behaviours such as oversquashing [18], wherein nodes close to bottlenecks in the graph would need to store quantities of information that are exponentially increasing with model depth.
22
+
23
24
+
25
+ Figure 1: Left: The Cayley graph of $\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{3}}\right)$ , constructed using our method. It has $\left| V\right| = {24}$ nodes and it is 4-regular (implying $\left| E\right| = 2\left| V\right|$ ), hence it is sparse. Despite its sparsity, it is highly interconnected: any node is reachable from any other node by no more than 4 hops. Hence, it can serve as a strong "template" for globally propagating node features with a GNN. Right: The Cayley graph of $\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{5}}\right)$ , constructed in an analogous way (with $\left| V\right| = {120}$ nodes). A 2-hop neighbourhood of one node (in red) is highlighted, demonstrating its tree-like local structure.
26
+
27
+ Within this space, we are interested in proposing a method that satisfies four desirable criteria: (C1) it is capable of propagating information globally in the graph; (C2) it is resistant to the oversquashing effect and does not introduce bottlenecks; (C3) its time and space complexity remain subquadratic (tighter than $O\left( {\left| V\right| }^{2}\right)$ for sparse graphs); and (C4) it requires no dedicated preprocessing of the input. Satisfying all four of these criteria simultaneously is challenging, and we will survey many of the popular approaches in the next section-demonstrating ways in which they fail to meet some of them.
28
+
29
+ In this paper, we identify expander graphs as very attractive objects in this regard. Specifically, they offer a family of graph structures that are fundamentally sparse $\left( {\left| E\right| = O\left( \left| V\right| \right) }\right)$ , while having low diameter: thus, any two nodes in an expander graph may reach each other in a short number of hops, eliminating bottlenecks and oversquashing (see Figure 1). Further, we will demonstrate an efficient way to construct a family of expander graphs (leveraging known theoretical results on the special linear group, $\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right)$ ). Once an expander graph of appropriate size is constructed, we can perform a certain number of GNN propagation steps over its structure to globally distribute the nodes' features. Accordingly, we name our method expander graph propagation (EGP).
30
+
31
+ Another important contribution of our work concerns extending the implications of prior art on oversquashing via curvature analysis [19]. According to [19], edges that are negatively curved are causing the oversquashing effect-yet, counterintuitively, the edges of the expander graphs we construct will always be negatively curved! We prove, however, that our expanders can never be sufficiently negatively curved to trigger the conditions necessary for the results in [19] to be applicable, and show that the existence of negatively curved edges might in fact be required in order to have sparse communication without bottlenecks.
32
+
33
+ § 2 RELATED WORK
34
+
35
+ We begin with a survey of the many prior approaches to handling global context in graph representation learning, evaluating them carefully against our four desirable criteria (C1-C4; cf. Table 1). This list is by no means exhaustive, but should be indicative of the most important directions.
36
+
37
+ Stacking more layers. As already highlighted, one way to achieve global information propagation is to have a deeper GNN. In this case, we are capable of satisfying (C1) and (C4)-no dedicated preprocessing is needed. However, depending on the graph’s diameter, we may need up to $O\left( \left| V\right| \right)$ layers to cover the graph, leading to quadratic complexity (violating (C3)) and introducing a vulnerability to bottlenecks (C2), as theoretically and empirically demonstrated in [18].
38
+
39
+ Master nodes. An attractive approach to introducing global context is to introduce a master node to the graph, and connect it to all of the graph's nodes. This can be done either explicitly [4] or implicitly, by storing a "global" vector [20]. It trivially reduces the graph's diameter to 2, introduces
40
+
41
+ Table 1: A summary of principal approaches to handling global context in graph representation learning (Section 2). "(✓)" indicates that a criterion may be satisfied, depending on the method's tradeoffs. Our proposal, the expander graph propagation (EGP) method, satisfies all four criteria.
42
+
43
+ | Approach | (C1) (global prop.) | (C2) (no bottlenecks) | (C3) (subquadratic) | (C4) (no dedicated preproc.) |
+ | --- | --- | --- | --- | --- |
+ | GNNs | ✘ | ✘ | ✓ | ✓ |
+ | Sufficiently deep GNNs | ✓ | ✘ | ✘ | ✓ |
+ | Master node [4, 20] | ✓ | ✘ | ✓ | ✓ |
+ | Fully connected [18, 21-25] | ✓ | ✓ | ✘ | ✓ |
+ | Feature aug. [26-31] | ✓ | (✓) | (✓) | ✘ |
+ | Graph rewiring [19, 32] | ✓ | ✓ | ✓ | ✘ |
+ | Hierarchical MP [33-38] | ✓ | ✓ | (✓) | ✘ |
+ | $\mathbf{EGP}$ (ours) | ✓ | ✓ | ✓ | ✓ |
72
+
73
+ $O\left( 1\right)$ new nodes and $O\left( \left| V\right| \right)$ new edges, and requires no dedicated preprocessing, hence it satisfies (C1, C3, C4). However, these benefits come at the expense of introducing a bottleneck in the master node: it has a very challenging task (especially when graphs get larger) to continually incorporate information over a very large neighbourhood in a useful way. Hence it fails to satisfy (C2).
74
+
75
+ Fully connected graphs. The converse approach is to make every node a master node: in this case, we make all pairs of nodes connected by an edge-this was initially proposed as a powerful method to alleviate oversquashing by [18]. This strategy proved highly popular in the recent surge of Graph Transformers $\left\lbrack {{22},{23},{25}}\right\rbrack$ , and is common for GNNs used in physical simulation [21] or reasoning [24] tasks. The graph's diameter is reduced to 1, no bottlenecks remain, and the approach does not require any dedicated preprocessing. Hence (C1, C2, C4) are trivially satisfied. The main downside of this approach is the introduction of $O\left( {\left| V\right| }^{2}\right)$ edges, which means (C3) can never be satisfied-and this approach will hence be prohibitive even for modestly-sized graphs.
76
+
77
+ Feature augmentation. An alternative approach is to provide additional features to the GNN which directly identify the structural role each node plays in the graph structure [26]. If done properly (i.e., if the computed features are directly relevant to the target task), this can drastically improve expressive power. Hence, in theory, it is possible to satisfy (C1) while not violating (C2, C3). However, computing appropriate features requires either specific domain knowledge, or an appropriate pretraining procedure [27-31] to be applied, in order to obtain such embeddings. Hence all of these gains come at the expense of failing to satisfy (C4).
78
+
79
+ Graph rewiring. Another promising line of research involves modifying the edges of the original graph, in order to alleviate bottlenecks. Popular examples of this approach include diffusion [32], which adds edges through the application of kernels such as personalised PageRank, and stochastic discrete Ricci flows [19], which surgically modify a small number of edges to alleviate the oversquashing effect around edges with negative Ricci curvature. If realised carefully, such approaches will not deviate too far from the original graph, while provably alleviating oversquashing; hence it is possible to satisfy (C1, C2, C3). However, this comes at the cost of having to examine the input graph structure, with methods that do not necessarily scale easily with the number of nodes. As such, dedicated preprocessing is needed, failing to satisfy (C4).
80
+
81
+ Hierarchical message passing. Lastly, going beyond modifying the edges, it is also possible to introduce additional nodes in the graph-each of them responsible for a particular substructure in the graph ${}^{1}$ . If done carefully, it has the potential to drastically reduce the graph's diameter while not introducing bottlenecked nodes (hence, allowing us to satisfy (C1, C2)). However, in prior work, a cost has to be paid for this, usually in the need for dedicated preprocessing. Prior proposals for hierarchical GNNs that remain scalable require a dedicated pre-processing step [33-35], sometimes coupled with domain knowledge [35]-thus failing to satisfy (C4). In addition, such methods may require adding prohibitively large numbers of substructures [36, 37] or expensive pre-computation, e.g. computing the graph Laplacian eigenvectors [38]. This might make even (C3) hard to satisfy.
82
+
83
+ ${}^{1}$ The master node approach discussed before is a special case of this, wherein a single node is responsible for a "substructure" spanning the entire graph.
84
+
85
+ Before proceeding to present EGP-specific material, we remark that our work is not the first to study expander graph-related topics in the context of GNNs. Specifically, the ExpanderGNN [39] leverages expander graphs over neural network weights to sparsify the update step in GNNs, and the Cheeger constant has been previously used to quantify oversquashing in [19]. With respect to our contributions, neither of these cases discuss expander graphs in the context of the computational graph for a GNN, nor attempt to propagate messages over such a structure. Further, neither of these proposals successfully satisfies all four of our desired criteria (C1-C4).
86
+
87
+ § 3 THEORETICAL BACKGROUND
88
+
89
+ We now dedicate our attention to the key theoretical results over expander graphs, which will allow EGP to have favourable properties and be efficiently precomputable.
90
+
91
+ Definition 1. For a finite connected graph $G = \left( {V\left( G\right) ,E\left( G\right) }\right)$ , we consider functions $f : V\left( G\right) \rightarrow \mathbb{R}$ . The Laplacian ${Lf} : V\left( G\right) \rightarrow \mathbb{R}$ of such a function is defined to be
92
+
93
+ $$
94
+ {Lf}\left( v\right) = \deg \left( v\right) f\left( v\right) - \mathop{\sum }\limits_{{{vw} \in E\left( G\right) }}f\left( w\right) ,
95
+ $$
96
+
97
+ where $\deg \left( v\right)$ is the degree of the vertex $v$ .
98
+
99
+ The mapping $L : {\mathbb{R}}^{V\left( G\right) } \rightarrow {\mathbb{R}}^{V\left( G\right) }$ sending a function $f$ to its Laplacian ${Lf}$ is a linear transformation. It is not hard to show [40] that $L$ is symmetric with respect to the standard basis for ${\mathbb{R}}^{V\left( G\right) }$ and positive semi-definite and hence has non-negative real eigenvalues
100
+
101
+ $$
102
+ 0 = {\lambda }_{0}\left( G\right) < {\lambda }_{1}\left( G\right) \leq {\lambda }_{2}\left( G\right) \leq \ldots
103
+ $$
104
+
105
+ The smallest eigenvalue is 0 and its associated eigenspace consists of the constant functions (assuming $G$ is connected). The smallest positive eigenvalue, ${\lambda }_{1}\left( G\right)$ , is central to the definition of expander graphs, as the next definition shows.
106
+
107
+ Definition 2. A collection $\left\{ {G}_{i}\right\}$ of finite connected graphs is an expander family if there is a constant $c > 0$ such that for all ${G}_{i}$ in the collection, ${\lambda }_{1}\left( {G}_{i}\right) \geq c$ .
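As a concrete illustration of Definitions 1 and 2, the spectral gap $\lambda_1(G)$ can be computed directly for any small graph. The following is a minimal NumPy sketch of ours (the example graph is hypothetical), not code from the original work:

```python
import numpy as np

def spectral_gap(adj: np.ndarray) -> float:
    """Smallest positive Laplacian eigenvalue lambda_1 of a connected graph (Definition 1)."""
    laplacian = np.diag(adj.sum(axis=1)) - adj   # L = D - A
    eigvals = np.linalg.eigvalsh(laplacian)      # symmetric, so real eigenvalues
    return float(np.sort(eigvals)[1])            # eigvals[0] is ~0 when G is connected

# Hypothetical example: the 4-cycle 0-1-2-3-0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(spectral_gap(A))  # 2.0
```

A family of graphs is then an expander family precisely when this quantity stays bounded away from zero across the whole family.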
108
+
109
+ Expander families [41-43] have many remarkable and useful properties, particularly when there is a uniform upper bound on the degree of the vertices of ${G}_{i}$ .
110
+
111
+ Definition 3. Let $G$ be a finite graph. For $A \subset V\left( G\right)$ , its boundary $\partial A$ is the collection of edges with one endpoint in $A$ and one endpoint not in $A$ . The Cheeger constant $h\left( G\right)$ is defined to be
112
+
113
+ $$
114
+ h\left( G\right) = \min \left\{ {\frac{\left| \partial A\right| }{\left| A\right| } : A \subset V\left( G\right) ,0 < \left| A\right| \leq \left| {V\left( G\right) }\right| /2}\right\} .
115
+ $$
116
+
117
+ Thus, having a small Cheeger constant is equivalent to the graph having a 'bottleneck', in the sense that there is a collection of edges $\partial A$ that, when removed, disconnects the vertices into two sets ( $A$ and its complement, $V\left( G\right) \smallsetminus A$ ), with the property that the sizes of $A$ and its complement are significantly larger than the size of $\partial A$ .
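To make Definition 3 concrete, the Cheeger constant of a very small graph can be computed by brute force over all vertex subsets; the sketch below is ours, purely illustrative, and exponential in $|V(G)|$:

```python
import itertools
import numpy as np

def cheeger_constant(adj: np.ndarray) -> float:
    """Brute-force h(G) from Definition 3; only feasible for very small graphs."""
    n = adj.shape[0]
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in itertools.combinations(range(n), size):
            a = set(subset)
            boundary = sum(adj[u, v] for u in a for v in range(n) if v not in a)
            best = min(best, boundary / len(a))
    return best

# Hypothetical example: two triangles joined by a single bridge edge (a clear bottleneck)
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
print(cheeger_constant(A))  # 1/3: cutting the bridge separates the two triangles
```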
118
+
119
+ Expander families can be reinterpreted using Cheeger constants, as follows (see, e.g., [44-47]):
120
+
121
+ Theorem 4. Let $\left\{ {G}_{i}\right\}$ be a collection of finite connected graphs with a uniform upper bound on their vertex degrees. Then the following are equivalent:
122
+
123
+ 1. $\left\{ {G}_{i}\right\}$ is an expander family;
124
+
125
+ 2. there is a constant $\epsilon > 0$ such that for all graphs in the collection, $h\left( {G}_{i}\right) \geq \epsilon$ .
126
+
127
+ Hence, expander graphs have Cheeger constants bounded away from zero, and are therefore provably bottleneck-free. The following result is one of the many useful properties of expander families, and it concerns their diameter. It was proved by Mohar [48, Theorem 2.3]. See also [45].
128
+
129
+ Theorem 5. The diameter $\operatorname{diam}\left( G\right)$ of a graph $G$ satisfies
130
+
131
+ $$
132
+ \operatorname{diam}\left( G\right) \leq 2\left\lceil {\frac{\Delta \left( G\right) + {\lambda }_{1}\left( G\right) }{4{\lambda }_{1}\left( G\right) }\log \left( {\left| {V\left( G\right) }\right| - 1}\right) }\right\rceil ,
133
+ $$
134
+
135
+ where $\Delta \left( G\right)$ is the maximal degree of any vertex of $G$ . Hence, if $\left\{ {G}_{i}\right\}$ is an expander family of finite graphs with a uniform upper bound on their vertex degrees, then there is a constant $k > 0$ such that for all graphs in the family,
136
+
137
+ $$
138
+ \operatorname{diam}\left( {G}_{i}\right) \leq k\log \left| {V\left( {G}_{i}\right) }\right| .
139
+ $$
140
+
141
+ Therefore, if we want to globally propagate information over an expander graph which has $\left| V\right|$ nodes, we only need $O\left( {\log \left| V\right| }\right)$ propagation steps to do so-yielding subquadratic complexity.
142
+
143
+ We have now successfully shown that expander graphs are bottleneck-free, and have favourable propagation qualities. What is missing is an efficient method of constructing an expander graph of (roughly) $\left| V\right|$ nodes. To demonstrate such a method, we leverage known results from group theory.
144
+
145
+ Definition 6. A group $\left( {\Gamma , \circ }\right)$ is a set $\Gamma$ equipped with a composition operation $\circ : \Gamma \times \Gamma \rightarrow \Gamma$ (written concisely by omitting $\circ$ , i.e. $g \circ h = {gh}$ , for $g,h \in \Gamma$ ), satisfying the following axioms:
146
+
147
+ * (Associativity) $\left( {gh}\right) l = g\left( {hl}\right)$ , for $g,h,l \in \Gamma$ .
148
+
149
+ * (Identity) There exists a unique $e \in \Gamma$ satisfying ${eg} = {ge} = g$ for all $g \in \Gamma$ .
150
+
151
+ * (Inverse) For every $g \in \Gamma$ there exists a unique ${g}^{-1} \in \Gamma$ such that $g{g}^{-1} = {g}^{-1}g = e$ .
152
+
153
+ A group is hence a natural construct for reasoning about transformations that leave an object invariant (unchanged). Further, we define a relevant notion of a group's generating set:
154
+
155
+ Definition 7. Let $\Gamma$ be a group. A subset $S \subseteq \Gamma$ is a generating set for $\Gamma$ if it can be used to "generate" all of $\Gamma$ via composition. Concretely, any element $g \in \Gamma$ can be expressed by composing elements in the generating set, or their inverses; that is, we can express $g = {s}_{1}^{\pm 1}{s}_{2}^{\pm 1}{s}_{3}^{\pm 1}\cdots {s}_{n - 1}^{\pm 1}{s}_{n}^{\pm 1}$ for ${s}_{i} \in S$ .
156
+
157
+ Now we are ready to define a Cayley graph of a group w.r.t. its generating set.
158
+
159
+ Definition 8. Let $\Gamma$ be a group with a finite generating set $S$ . Then the associated Cayley graph $\operatorname{Cay}\left( {\Gamma ;S}\right)$ has vertex set $\Gamma$ and it has an edge $g \rightarrow {gs}$ for each $g \in \Gamma$ and each $s \in S$ . We say that $s$ is the label on this edge. This is a potentially non-simple graph, as it may have edges with both endpoints on the same vertex and it may have multiple edges between a pair of vertices. In particular, when $s$ has order 2, then we view the edge $g \rightarrow {gs}$ and the edge $g \rightarrow g{s}^{2} = g$ as being distinct edges.
160
+
161
+ Note that the degree of each vertex of a Cayley graph $\operatorname{Cay}\left( {\Gamma ;S}\right)$ is $2\left| S\right|$ . This is because each vertex $g$ is joined by edges to ${gs}$ and $g{s}^{-1}$ for each $s \in S$ . Thus, we shall be particularly interested in the case where there is a uniform upper bound on $\left| S\right|$ . The specific group we use for EGP is as follows.
162
+
163
+ For each positive integer $n$ , the special linear group $\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right)$ denotes the group of $2 \times 2$ matrices with entries that are integers modulo $n$ and with determinant 1 . One of its generating sets is:
164
+
165
+ $$
166
+ {S}_{n} = \left\{ {\left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) ,\left( \begin{array}{ll} 1 & 0 \\ 1 & 1 \end{array}\right) }\right\} .
167
+ $$
168
+
169
+ Central to our constructions is the following important result.
170
+
171
+ Theorem 9. The family of Cayley graphs $\operatorname{Cay}\left( {\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ forms an expander family.
172
+
173
+ The proof uses a result of Selberg [49], who showed that the smallest positive eigenvalue of the Laplacian of certain hyperbolic surfaces is at least $3/{16}$ . One can use this to produce a lower bound on the first eigenvalue of the Laplacian on $\operatorname{Cay}\left( {\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ . Full proofs are given in [42,43].
174
+
175
+ Lastly, it is useful to state a known result: the number of nodes of $\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ is:
176
+
177
+ $$
178
+ \left| {V\left( {\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right) }\right) }\right| = {n}^{3}\mathop{\prod }\limits_{{\text{ prime }p \mid n}}\left( {1 - \frac{1}{{p}^{2}}}\right) , \tag{10}
179
+ $$
180
+
181
+ hence, it is of the order of $O\left( {n}^{3}\right)$ . We now study the local properties of Cayley graphs in detail.
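Before doing so, we note that Equation 10 already suffices to select a member of the family for a given input size. The sketch below (ours; the function names are illustrative) evaluates the node count exactly and returns the smallest $n$ whose Cayley graph has at least a desired number of nodes, as needed later in Algorithm 1:

```python
def cayley_graph_size(n: int) -> int:
    """|V(Cay(SL(2, Z_n); S_n))| via Equation 10: n^3 * prod_{prime p | n} (1 - 1/p^2)."""
    size = n ** 3
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            size = size // (p * p) * (p * p - 1)   # multiply by (1 - 1/p^2) exactly
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                                      # remaining prime factor of n
        size = size // (m * m) * (m * m - 1)
    return size

def smallest_n(num_nodes: int) -> int:
    """Smallest n whose Cayley graph has at least `num_nodes` vertices."""
    n = 1
    while cayley_graph_size(n) < num_nodes:
        n += 1
    return n

print(cayley_graph_size(3), cayley_graph_size(5))  # 24, 120 as in Figure 1
print(smallest_n(100))                             # 5, since 120 >= 100
```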
182
+
183
+ § 4 LOCAL STRUCTURE OF THE CAYLEY GRAPHS, AND THE UTILITY OF NEGATIVE CURVATURE
184
+
185
+ Recent work [19] has suggested that the local structure of the graph $G$ underlying a GNN may play an important role in the way that information propagates around $G$ . In particular, various notions of 'Ricci curvature' such as Forman curvature [50], Ollivier curvature [51, 52] and balanced Forman curvature [19] have been examined. These are all local quantities, in the sense that they depend on the structure of the graph within a small neighbourhood of each edge. In this section, we will therefore examine the local structure of the Cayley graphs ${G}_{n} = \operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ .
186
+
187
+ The various notions of curvature given above are defined for each edge $e$ of the graph $G$ . Since, as defined by [19], the balanced Forman curvature of an edge depends only on local structures (i.e. triangles and squares) around that edge, it can be determined by observing only the immediate 2-hop neighbourhood of that edge. Formally, for an edge $e$ of a graph $G$ , let ${N}_{2}\left( e\right)$ be the induced subgraph with vertices that are at most two hops away from at least one endpoint of $e$ . Then the curvature of $e$ only depends on the isomorphism type of ${N}_{2}\left( e\right)$ . More specifically, if $e$ and ${e}^{\prime }$ are edges in possibly distinct graphs, and there is a graph isomorphism between ${N}_{2}\left( e\right)$ and ${N}_{2}\left( {e}^{\prime }\right)$ that sends $e$ to ${e}^{\prime }$ , then this guarantees that the curvatures of $e$ and ${e}^{\prime }$ are equal.
190
+
191
+ This situation arises prominently in the Cayley graphs that we are considering, as follows.
192
+
193
+ Proposition 11. Let $s$ be one of
194
+
195
+ $$
196
+ \left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) ,\;\left( \begin{array}{ll} 1 & 0 \\ 1 & 1 \end{array}\right) .
197
+ $$
198
+
199
+ Let $n,{n}^{\prime } > {18}$ and let $e$ and ${e}^{\prime }$ be s-labelled edges in ${G}_{n}$ and ${G}_{{n}^{\prime }}$ . Then there is a graph isomorphism between ${N}_{2}\left( e\right)$ and ${N}_{2}\left( {e}^{\prime }\right)$ taking e to ${e}^{\prime }$ .
200
+
201
+ We prove Proposition 11 in Appendix A. This immediately allows us to characterise the balanced Forman curvature and Ollivier curvature for all of the Cayley graphs we generate:
202
+
203
+ Proposition 12. The balanced Forman curvatures $\operatorname{Ric}\left( n\right)$ , and the Ollivier curvatures $\kappa \left( n\right)$ of all edges of Cayley graphs ${G}_{n}$ are given by:
204
+
205
+ $$
206
+ \operatorname{Ric}\left( n\right) = \left\{ \begin{array}{ll} 0 & \text{ if }n = 2 \\ - 1/4 & \text{ if }n = 3 \\ - 1/2 & \text{ if }n = 4 \\ - 1 & \text{ if }n \geq 5, \end{array}\right. \qquad \kappa \left( n\right) = \left\{ \begin{array}{ll} 0 & \text{ if }n = 2 \\ - 1/8 & \text{ if }n = 3 \\ - 1/4 & \text{ if }n = 4 \\ - 3/8 & \text{ if }n = 5 \\ - 1/2 & \text{ if }n \geq 6. \end{array}\right.
207
+ $$
208
+
209
+ Proof. Proposition 11 implies that the balanced Forman and Ollivier curvatures are all equal for $n > {18}$ . Their values for $2 \leq n \leq {19}$ can all be empirically computed, and are given as above.
210
+
211
+ Prior work [19] suggests it is preferable for GNNs to operate on graphs with positive Ricci curvature, whereas our graphs ${G}_{n}\left( {n > 2}\right)$ all have negative Ricci curvature. However, we contend that negative Ricci curvature is not in itself an impediment to efficient propagation around a GNN. Indeed, it was shown in [19, Theorem 4] that poor propagation arises when the balanced Forman curvature is close to -2, specifically if it is at most $- 2 + \delta$ for some $\delta > 0$ . Here, $\delta$ is required to satisfy certain inequalities. Since the curvature of our edges is exactly $-1 = -2 + 1$ , the relevant value is $\delta = 1$ , and $\delta = 1$ can never satisfy these inequalities, so the hypotheses of [19, Theorem 4] are never met for our graphs.
212
+
213
+ Furthermore, positive Ricci curvature may have downsides when used for GNNs. One significant downside to non-negative Ricci curvature can be derived using the main result of [53], which says that the three properties of expansion, sparsity and non-negative Ollivier curvature are incompatible, in the following sense.
214
+
215
+ Theorem 13. For any $\delta > 0$ and $\Delta > 0$ , there is some $\epsilon > 0$ such that there are only finitely many graphs with maximum vertex degree $\Delta$ , Cheeger constant at least $\delta$ and Ollivier curvature at least $- \epsilon$ .
216
+
217
+ We prove Theorem 13 in Appendix B. Furthermore, quoting directly from [53]:
218
+
219
+ "The high-level message is that on large sparse graphs, non-negative curvature (in an even weak sense) induces extremely poor spectral expansion. This stands in stark contrast with the traditional idea - quantified by a broad variety of functional inequalities over the past decade - that non-negative curvature is associated with good mixing behavior."
220
+
221
+ In our view, it is highly desirable that the graphs used for GNNs have high Cheeger constants, in the sense of globally lacking bottlenecks. Having bounded vertex degree is certainly useful too, since it implies that the graphs will be sparse, and the nodes will not have to handle ever-increasing neighbourhoods for message passing as graphs grow larger in size ${}^{2}$ . As we have just shown, using the results from [53], non-negative Ollivier curvature is incompatible with these properties when the graph is sufficiently large.
222
+
223
+ The negative curvature of each edge in ${G}_{n}$ implies that they are locally ‘tree-like’. In Appendix C, we make this statement precise by showing that ${G}_{n}$ is ’tree-like’ up to scale $c\log \left( n\right)$ about each node, for $c \simeq \left( {1/2}\right) {\left( \log \left( \left( 1 + \sqrt{5}\right) /2\right) \right) }^{-1}$ (see Figure 1 (Right) for a schematic view).
224
+
225
+ ${}^{2}$ This property would not hold for GNNs with master nodes, as the master node has $O\left( \left| V\right| \right)$ neighbours.
226
+
227
+ This tree-like structure might seem, at first, to be counter-productive for good propagation across the graphs ${G}_{n}$ . Indeed, GNNs based on trees have been shown to have provably poor performance [18]. The reason for this seems to be two-fold. On the one hand, trees can have very small Cheeger constants. Indeed, a tree $G$ on $n$ vertices (a path, say) can have Cheeger constant as small as $1/\lfloor n/2\rfloor$ , since we may find an edge that, when removed, decomposes the graph into subgraphs with $\lceil n/2\rceil$ and $\lfloor n/2\rfloor$ vertices. As discussed in Section 3 and in [19], when a graph has small Cheeger constant, its performance when used as a template for a GNN is likely to become poor. Secondly, GNNs based on trees are susceptible to oversquashing. For a $k$ -regular infinite tree, there are $k{\left( k - 1\right) }^{r - 1}$ vertices at distance $r$ from a given vertex. Hence, if information is to be propagated at least distance $r$ from a given vertex, then seemingly an exponential amount of information is required to be stored.
228
+
229
+ However, neither of these issues are problematic for a GNN based on the Cayley graph ${G}_{n}$ . By Theorem 9, their Cheeger constants are bounded away from 0 . Secondly, although they are tree-like locally, this is only true up to scale $O\left( {\log n}\right)$ . In fact, the $r$ -neighbourhood of any vertex is the whole graph ${G}_{n}$ as soon as $r > C\log n$ , for some constant $C$ , by Theorem 5 . Being tree-like up to distance $O\left( {\log n}\right)$ does not lead to a requirement to store too much information as the message propagates. This is because $k{\left( k - 1\right) }^{r - 1}$ is linear in $n$ when $r \leq O\left( {\log n}\right)$ .
230
+
231
+ Beyond this scale, there exist many additional connections, which lead to many possible paths joining any pair of vertices. Each of these paths can be a potential route of transfer of information from one vertex to another. The perspective of information transfer also gives rise to another perspective in which expanders fare very favourably: the mixing time of their corresponding Markov chain. We state several known facts about the favourable mixing times of expanders in Appendix D, to further supplement our claims on their efficient communication properties.
232
+
233
+ § 5 EXPANDER GRAPH PROPAGATION
234
+
235
+ Let an input to a graph neural network be a node feature matrix $\mathbf{X} \in {\mathbb{R}}^{\left| V\right| \times k}$ , and an adjacency matrix $\mathbf{A} \in {\mathbb{R}}^{\left| V\right| \times \left| V\right| }$ . This setup is such that the feature vector of node $u,{\mathbf{x}}_{u} \in {\mathbb{R}}^{k}$ , can be recovered by taking an appropriate row from $\mathbf{X}$ . Note that the adjacency information can also be fed in an edge-list manner, which is desirable from a scalability perspective. Further, each edge in the graph may be endowed with additional features rather than a single real scalar. None of the above modifications would change the essence of our findings; we use a matrix formalism here purely for simplicity.
236
+
237
+ There exist many ways in which the computed Cayley graph $\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ can be leveraged for message propagation, and exploring these variations could be very useful for future work. Here, we opt for a simple construction: interleave running a standard GNN over the given input structure, followed by running another GNN layer over the relevant Cayley graph. If we let ${\mathbf{A}}^{\operatorname{Cay}\left( n\right) }$ be an adjacency matrix derived from $\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ , this implies:
238
+
239
+ $$
240
+ \mathbf{H} = \operatorname{GNN}\left( {\operatorname{GNN}\left( {\mathbf{X},\mathbf{A}}\right) ,{\mathbf{A}}^{\operatorname{Cay}\left( n\right) }}\right) \tag{14}
241
+ $$
242
+
243
+ Here, GNN refers to any preferred GNN layer, such as the graph isomorphism network [54, GIN]:
244
+
245
+ $$
246
+ {\mathbf{h}}_{u} = \phi \left( {\left( {1 + \epsilon }\right) {\mathbf{x}}_{u} + \mathop{\sum }\limits_{{v \in {\mathcal{N}}_{u}}}{\mathbf{x}}_{v}}\right) \tag{15}
247
+ $$
248
+
249
+ where ${\mathcal{N}}_{u}$ is the neighbourhood of node $u$ , i.e. in our setup, the set of all nodes $v$ such that ${a}_{vu} \neq 0$ . $\epsilon \in \mathbb{R}$ is a learnable scalar, and $\phi : {\mathbb{R}}^{k} \rightarrow {\mathbb{R}}^{{k}^{\prime }}$ is a two-layer MLP.
250
+
251
+ This procedure is iterated for a certain number of steps, after which the computed node embeddings in $\mathbf{H}$ can be used for any downstream task of interest-such as node classification, link prediction or graph classification. Note that, unlike [18], who apply their custom layer only at the tail of the architecture, we apply the expander graph immediately after each layer over the input graph. We find that if the input graph given by $\mathbf{A}$ contains bottlenecks, applying the GNN over ${\mathbf{A}}^{\operatorname{Cay}\left( n\right) }$ only at the end may result in oversquashing occurring before any expander graph propagation can take place.
252
+
253
+ The setup so far assumed the number of nodes in our input graph to line up with the Cayley graph, that is, ${\mathbf{A}}^{\operatorname{Cay}\left( n\right) } \in {\mathbb{R}}^{\left| V\right| \times \left| V\right| }$ . However, there is no guarantee that we can find an appropriate $n$ such that $\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ would have $\left| V\right|$ nodes. What we can do in practice, as an approximation, is choose the smallest $n$ such that the number of nodes of $\operatorname{Cay}\left( {\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ is $\geq \left| V\right|$ , then consider ${\mathbf{A}}_{1 : \left| V\right| ,1 : \left| V\right| }^{\operatorname{Cay}\left( n\right) }$ i.e. only the subgraph containing the first $\left| V\right|$ nodes in the Cayley graph.
254
+
255
+ There is a slight misalignment with our theory in this specific slicing choice: if the $\left| V\right|$ vertices in this subgraph are chosen completely arbitrarily, we risk disconnecting the graph. However, in all our experiments we construct the Cayley graph in a breadth-first manner, starting from the identity element as "node zero". Hence, the node at index $i$ is always guaranteed to be reachable from the nodes at lower indices $\left( {j < i}\right)$ , and the graph cannot be disconnected under this construction. More interesting strategies for this step can also be considered in the future, in order to optimise the communication properties of this subgraph.
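A minimal sketch of this breadth-first construction is given below. It is our illustration rather than the authors' released code: group elements are stored as $2 \times 2$ tuples of residues modulo $n$, the four generators are $S_n$ together with their inverses, and node indices follow the BFS order from the identity, so any prefix of the ordering induces a connected subgraph.

```python
from collections import deque

def cayley_graph_edges(n: int):
    """Breadth-first construction of Cay(SL(2, Z_n); S_n), indexing nodes from the identity."""
    def matmul(a, b):
        return ((a[0][0]*b[0][0] + a[0][1]*b[1][0]) % n, (a[0][0]*b[0][1] + a[0][1]*b[1][1]) % n), \
               ((a[1][0]*b[0][0] + a[1][1]*b[1][0]) % n, (a[1][0]*b[0][1] + a[1][1]*b[1][1]) % n)

    # Generators S_n and their inverses (so every node has degree 4)
    gens = [((1, 1), (0, 1)), ((1, 0), (1, 1)),
            ((1, n - 1), (0, 1)), ((1, 0), (n - 1, 1))]

    identity = ((1, 0), (0, 1))
    index = {identity: 0}          # BFS order: "node zero" is the identity
    queue = deque([identity])
    edges = []
    while queue:
        g = queue.popleft()
        for s in gens:
            h = matmul(g, s)
            if h not in index:
                index[h] = len(index)
                queue.append(h)
            edges.append((index[g], index[h]))
    return len(index), edges

num_nodes, edges = cayley_graph_edges(3)
print(num_nodes, len(edges))  # 24 nodes; each node appears 4 times as a source
```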
256
+
257
+ Note that we do not need to perform matching of the nodes in the original graph to the nodes of the Cayley graph. This is because, much like the fully connected graph used by [18], we interpret the Cayley graph mainly as a template for global information propagation, in order to relieve bottlenecks in a scalable way.
258
+
259
+ Algorithm 1 summarises the steps of our proposed EGP model. As direct corollaries of results we proved or demonstrated, we note that EGP satisfies all four of our desirable criteria: (C1) by Theorem 5 (so long as logarithmically many layers are applied), (C2) by Theorem 4 (high Cheeger constant implies no bottlenecks), (C3) by the fact our Cayley graphs are 4-regular and hence sparse, and (C4) by the fact we can generate a Cayley graph of appropriate size without detailed analysis of the input-we may precompute a "bank" of Cayley graphs of various sizes to use in an ad-hoc manner.
260
+
261
+ Algorithm 1: Expander graph propagation (EGP) forward pass
262
+
263
+ Inputs :Node features $\mathbf{X} \in {\mathbb{R}}^{\left| V\right| \times k}$ , Adjacency matrix $\mathbf{A} \in {\mathbb{R}}^{\left| V\right| \times \left| V\right| }$
264
+
265
+ Output:Node embeddings $\mathbf{H}$
266
+
267
+ // Choose the smallest Cayley graph from our family that has number of nodes equal to, or greater than, $\left| V\right|$
268
+
269
+ $n \leftarrow \min \left\{ {m \in \mathbb{N} : \left| {V\left( {\operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{m}}\right) ;{S}_{m}}\right) }\right) }\right| \geq \left| V\right| }\right\} ;\;$ // We can use Equation 10 to determine $n$
270
+
271
+ $$
272
+ {G}^{\operatorname{Cay}\left( n\right) } \leftarrow \operatorname{Cay}\left( {\mathrm{{SL}}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)
273
+ $$
274
+
275
+ $$
276
+ {\mathbf{A}}_{uv}^{\operatorname{Cay}\left( n\right) } \leftarrow \left\{ \begin{array}{ll} 1 & \left( {u,v}\right) \in E\left( {G}^{\operatorname{Cay}\left( n\right) }\right) \\ 0 & \text{ otherwise } \end{array}\right.
277
+ $$
278
+
279
+ // Populate adjacency matrix of the Cayley graph
280
+
281
+ ${\mathbf{H}}^{\left( 0\right) } \leftarrow \mathbf{X}$ ; // Initialise GNN inputs
282
+
283
+ for $t \in \{ 1,\ldots ,T\}$ do
284
+
285
+ if $t{\;\operatorname{mod}\;2} = 1$ then
286
+
287
+ ${\mathbf{H}}^{\left( t\right) } \leftarrow {\operatorname{GNN}}^{\left( t\right) }\left( {{\mathbf{H}}^{\left( t - 1\right) },\mathbf{A}}\right) ;$ // GNN layer over input graph; e.g. Equation 15
288
+
289
+ end
290
+
291
+ else
292
+
293
+ ${\mathbf{H}}^{\left( t\right) } \leftarrow {\operatorname{GNN}}^{\operatorname{Cay}}\left( {{\mathbf{H}}^{\left( t - 1\right) },{\mathbf{A}}_{1 : \left| V\right| ,1 : \left| V\right| }^{\operatorname{Cay}\left( n\right) }}\right)$ ; // GNN layer over Cayley graph; e.g. Equation 15
294
+
295
+ end
296
+
297
+ end
298
+
299
+ return ${\mathbf{H}}^{\left( T\right) }$ ; // Return final embeddings for downstream use
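For illustration, the sketch below is a minimal NumPy rendering of this forward pass, using the GIN update of Equation 15 with randomly initialised two-layer MLPs; the depth, initialisation and function names are our illustrative assumptions, not the reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gin_layer(H, A, W1, b1, W2, b2, eps=0.0):
    """GIN update (Equation 15): phi((1 + eps) * x_u + sum_{v in N(u)} x_v), phi a 2-layer MLP."""
    agg = (1.0 + eps) * H + A @ H
    return np.maximum(agg @ W1 + b1, 0.0) @ W2 + b2

def egp_forward(X, A, A_cay, num_layers=4):
    """Algorithm 1: alternate GNN layers over the input graph A and the sliced Cayley graph."""
    V, k = X.shape
    A_cay = A_cay[:V, :V]                      # keep only the first |V| Cayley nodes
    H = X
    for t in range(1, num_layers + 1):
        W1, b1 = rng.normal(size=(k, k)) / np.sqrt(k), np.zeros(k)
        W2, b2 = rng.normal(size=(k, k)) / np.sqrt(k), np.zeros(k)
        template = A if t % 2 == 1 else A_cay  # odd layers: input graph, even layers: Cayley graph
        H = gin_layer(H, template, W1, b1, W2, b2)
    return H
```

An adjacency matrix for `A_cay` can be populated from the edge list sketched in the previous section.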
300
+
301
+ § 6 EMPIRICAL EVALUATION
302
+
303
+ Our work provides mainly a theoretical contribution: demonstrating a simple, theoretically-grounded approach to relieving bottlenecks and oversquashing in GNNs without requiring quadratic complexity or dedicated preprocessing. Further, we prove several additional results which deepen our understanding of curvature-based analysis of GNNs, showing how our expanders can be favourable in spite of their negatively-curved edges.
304
+
305
+ We now provide several direct comparative experiments in order to ascertain that our EGP addition can directly help existing graph classification baselines, even without further hyperparameter tuning.
306
+
307
+ Datasets To show this, we leverage the established Open Graph Benchmark collection of tasks [55, OGB]. Specifically, we provide results on all of its graph classification datasets: ogbg-molhiv, ogbg-molpcba, ogbg-ppa and ogbg-code2. The first two are among the largest molecule property prediction datasets in the MoleculeNet benchmark [56]. The third dataset is concerned with classifying
308
+
309
+ Table 2: Statistics of the four graph classification datasets studied in our evaluation.
310
+
311
+ | Name | Number of graphs | Avg. nodes/graph | Avg. edges/graph | Metric |
+ | --- | --- | --- | --- | --- |
+ | ogbg-molhiv | 41,127 | 25.5 | 27.5 | ROC-AUC |
+ | ogbg-molpcba | 437,929 | 26.0 | 28.1 | Avg. precision |
+ | ogbg-ppa | 158,100 | 243.4 | 2,266.1 | Accuracy |
+ | ogbg-code2 | 452,741 | 125.2 | 124.2 | ${\mathrm{F}}_{1}$ score |
328
+
329
+ Table 3: Comparative evaluation performance on the four datasets studied. Our baseline model is a GIN [54], using exactly the same implementation as in [55].
330
+
331
+ | Model | ogbg-molhiv | ogbg-molpcba | ogbg-ppa | ogbg-code2 |
+ | --- | --- | --- | --- | --- |
+ | GIN | ${0.7558} \pm {0.0140}$ | ${0.2266} \pm {0.0028}$ | ${0.6892} \pm {0.0100}$ | ${0.1495} \pm {0.0023}$ |
+ | GIN + EGP | $\mathbf{0.7934} \pm {0.0035}$ | $\mathbf{0.2329} \pm {0.0019}$ | $\mathbf{0.7027} \pm {0.0159}$ | ${0.1497} \pm {0.0015}$ |
342
+
343
+ species into their taxa, from their protein-protein association networks [57, 58] given as input. The fourth dataset is a code summarisation task: it requires predicting the tokens in the name of a Python method, given the abstract syntax tree (AST) of its implementation.
344
+
345
+ We provide a summary of important dataset statistics in Table 2; please see [55] for detailed information on the data. These datasets are designed to span a wide variety of domains (virtual drug screening, molecular activity prediction, protein-protein interactions, code summarisation) and sizes (from small molecules to very large syntax trees—the largest graph in ogbg-code2 has 36, 123 nodes).
346
+
347
+ Models In all four datasets, we want to directly evaluate the empirical gain of introducing an EGP layer and completely rule out any effects from parameter count, or similar architectural decisions.
348
+
349
+ To enable this, we take inspiration from the experimental setup of [18]. Our baseline model is the GIN [54], with hyperparameters as given by [55]. We use the official publicly available model implementation from the OGB authors [55], and modify all even layers of the architecture to operate over the appropriately-sampled Cayley graph.
350
+
351
+ Note that our construction leaves both the parameter count and latent dimension of the model unchanged, hence any benefits coming from optimising those have been diminished.
352
+
353
+ Results The results of our evaluation are presented in Table 3. It can be observed that, in all four cases, propagating information over the Cayley graph yields improvements in mean performance-these improvements are most apparent on ogbg-molhiv, where our approach significantly outperforms even the "virtual node" version of GIN, which uses $\sim {1.8} \times$ more parameters and achieves ${0.7707} \pm {0.0149}$ AUC [55]. We believe that these results provide encouraging empirical evidence that propagating information over Cayley graphs is an elegant idea for alleviating bottlenecks.
354
+
355
+ § 7 CONCLUSION
356
+
357
+ In this paper, we have presented expander graph propagation (EGP), a novel and elegant approach to alleviating bottlenecks in graph representation learning, which provably supports global communication while not requiring quadratic complexity or dedicated preprocessing of the input.
358
+
359
+ To this end, we offered a detailed theoretical overview of Cayley graphs of special linear groups, $\operatorname{Cay}\left( {\operatorname{SL}\left( {2,{\mathbb{Z}}_{n}}\right) ;{S}_{n}}\right)$ . We cite proofs that these graphs have highly favourable properties for information propagation in graph neural networks: they are sparse and 4-regular, they have logarithmic diameter, and they can be efficiently precomputed by a simple procedure that does not rely on the input structure. We show that, in spite of having negatively curved edges, our findings do not violate any prior results on understanding oversquashing via curvature. Even under a simple intervention-interleaving EGP layers in between standard GNN layers-we have been able to obtain significant performance gains without changing the parameter count or latent space dimensionality.
360
+
361
+ We hope that our work serves as a foundation for further work on deploying Cayley graphs-or other expander families-within the context of GNNs.
papers/LOG/LOG 2022/LOG 2022 Conference/IP-TISJqfq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,307 @@
 
1
+ # Distributed representations of graphs for drug pair scoring
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ In this paper we study the practicality and usefulness of incorporating distributed representations of graphs into models within the context of drug pair scoring. We argue that the real world growth and update cycles of drug pair scoring datasets subvert the limitations of transductive learning associated with distributed representations. Furthermore, we argue that the vocabulary of discrete substructure patterns induced over drug sets is not dramatically large due to the limited set of atom types and constraints on bonding patterns enforced by chemistry. Under this pretext, we explore the effectiveness of distributed representations of the molecular graphs of drugs in drug pair scoring tasks such as drug synergy, polypharmacy, and drug-drug interaction prediction. To achieve this, we present a methodology for learning and incorporating distributed representations of graphs within a unified framework for drug pair scoring. Subsequently, we augment a number of recent and state-of-the-art models to utilise our embeddings. We empirically show that the incorporation of these embeddings improves downstream performance of almost every model across different drug pair scoring tasks, even those the original model was not designed for. We publicly release all of our drug embeddings for the DrugCombDB, DrugComb, DrugbankDDI, and TwoSides datasets.
12
+
13
+ ## 1 Introduction
14
+
15
+ Recent advancements in graph representation learning (GRL) - particularly in message passing based graph neural networks - have enabled new ways of modelling natural phenomena and tackling learning tasks on graph structured data. One of the areas which now sees application of graph neural networks is drug pair scoring [1]. Drug pair scoring refers to the prediction tasks that answer questions about the consequences of administering a pair of drugs at the same time, such as drug synergy prediction, polypharmacy prediction, and predicting drug-drug interaction types, which are of great interest in the treatment of diseases. One of the primary challenges in elucidating and discovering the effects of drug combinations is the dramatically growing combinatorial space of drug pairs. Furthermore, reliance on human trials (in polypharmacy) and proneness to human error [2] make manual/experimental discovery of useful drug combinations difficult, even before considering the prohibitive financial and labour costs that restrict it to small sets of drugs. Such conditions make in silico modelling of drug combinations an attractive solution.
16
+
17
+ A key component to modelling drug pairs is finding useful representations of the drugs to input into the drug pair scoring models. Traditional supervised machine learning methods for drug pair scoring rely on carefully crafted descriptors such as MDL descriptor keysets [3] and fingerprinting techniques such as Morgan fingerprinting [4]. More recently, graph neural network layers and permutation invariant pooling operators have enabled inputting the molecular graphs of drugs directly to learn task oriented representations in an end-to-end manner. Interestingly, to the best of our knowledge, graph kernel techniques, and specifically distributed representations of graphs, were not considered at all for inclusion in drug pair scoring pipelines. We can only speculate about the reasons for this, such as publication biases, or limitations such as the inability to use node feature vectors and the transductive nature of these methods, which have made them less appropriate for settings with rich/continuous node features and dynamic graphs $\left\lbrack {5,6}\right\rbrack$ .
18
+
19
+ However, we will argue in Section 3.2 that the transductive learning of distributed representations is hardly a limitation in the context of drug pair scoring tasks. This is primarily because we are learning representations of drugs, whose number in the real world grows over a timescale of many years and only with immense investment $\left\lbrack {7,8}\right\rbrack$ . Furthermore, as the set of atom types and bonding patterns of drugs are strictly constrained by the rules of chemistry, the number of generic substructure patterns that may be induced over the molecular graphs of a drug set is much smaller than the theoretically possible set of combinations. Additionally, as the self supervised learning objective is agnostic to the downstream task, the drug embeddings may be transferred trivially, making distributed representations an attractive modelling proposition for representation learning of structural patterns for drug pair scoring.
20
+
21
+ Under this pretext our research questions are: "How can we learn and then incorporate the distributed representations of the drugs into drug pair scoring pipelines?" and "Are distributed representations of graphs useful in drug pair scoring tasks?". To answer these questions we describe a methodology for learning distributed representations of graphs and their inclusion within a unified framework applicable to all drug pair scoring tasks in Section 3. Subsequently, we create a simple MLP model based solely on the distributed representations of the drugs and show that this performs considerably better than random, suggesting the usefulness of discrete substructure affinities of the drugs in drug pair scoring. Building upon this, we augment a number of recent and state-of-the-art models for drug pair scoring tasks to utilise our drug embeddings. Empirical results show that the incorporation of the distributed representations improves the performance of almost every model across synergy, polypharmacy, and drug interaction prediction tasks in Section 5. To the best of our knowledge this is the first application and study of distributed representations of molecular drug graphs for drug pair scoring tasks. To help further research and inclusion of these distributed representations we publicly release all of the drug representations as learned and utilised in this study.
22
+
23
+ To summarise, our contributions are as follows:
24
+
25
+ - We show that learning distributed representations of graphs as a source of additional features is reasonable within drug pair scoring pipelines.
26
+
27
+ - We present a generic methodology for learning various distributed representations of the molecular graphs of the drugs and incorporating these into machine learning pipelines for drug pair scoring.
28
+
29
+ - We augment state-of-the-art models for drug synergy, polypharmacy, and drug interaction prediction and improve their performance through the use of distributed drug representations across tasks; even tasks they were not originally designed for.
30
+
31
+ - We publicly release all of the drug embeddings for DrugCombDB [2], DrugComb [9, 10], DrugbankDDI [11], and TwoSides [12] datasets as utilised in this study with the accompanying code for generating more.
32
+
33
+ ## 2 Background and related work
34
+
35
+ In drug pair scoring tasks we are concerned with learning a function which predicts scores for pairs of drugs in a biological or chemical context. Naturally, within the domain of deep learning, this learned function takes on the form of a neural network. Drug pair scoring has three main applications and questions which models are designed to answer [1]:
36
+
37
+ - Inferring drug synergy: Do drugs $i$ and $j$ have a synergistic effect on treatment of disease $k$ ?
38
+
39
+ - Inferring polypharmacy side effects: Does the simultaneous use of drugs $i$ and $j$ have a propensity for causing side effect $k$ ?
40
+
41
+ - Inferring drug-drug interaction types: Do drugs $i$ and $j$ have a $k$ type interaction?
42
+
43
+ ### 2.1 Unified framework for drug pair scoring
44
+
45
+ The machine learning tasks born out of the questions above can be generalised and formalised with a unified view of drug pair scoring described in Rozemberczki et al. [1]. We briefly reiterate this framework below to build upon in our proposed work in the next section.
46
+
47
+ Assume there is a set of $n$ drugs $\mathcal{D} = \left\{ {{d}_{1},{d}_{2},\ldots ,{d}_{n}}\right\}$ for which we know the chemical structure of molecules and a set of classes $\mathcal{C} = \left\{ {{c}_{1},{c}_{2},\ldots ,{c}_{p}}\right\}$ that provides information on the contexts under which a drug pair can be administered.
48
+
49
+ A drug feature set is the set of tuples $\left( {{\mathbf{x}}^{d},{\mathcal{G}}^{d},{\mathbf{X}}_{N}^{d},{\mathbf{X}}_{E}^{d}}\right) \in {\mathcal{X}}_{\mathcal{D}},\forall d \in \mathcal{D}$ , where ${\mathbf{x}}^{d}$ is the molecular feature vector, ${\mathcal{G}}^{d}$ is the molecular graph of the drug, ${\mathbf{X}}_{N}^{d}$ is the node/atom feature matrix and ${\mathbf{X}}_{E}^{d}$ the edge/bond feature matrix. In this setup, drugs can be attributed with 4 types of information: (i) Molecular features which give high-level information about the molecules such as measures of charge. (ii) The molecular graph in which nodes are atoms and edges describe bonding patterns. (iii) Node features in the molecular graph can give us information such as the type of atom or whether it is in a ring. (iv) Edge features which can provide context such as the type of bond that exists between atoms in the molecule.
50
+
51
+ A context feature set is the set of context feature vectors ${\mathbf{x}}^{c} \in {\mathcal{X}}_{\mathcal{C}},\forall c \in \mathcal{C}$ associated with the context classes $\mathcal{C}$ . This set allows for making context specific predictions that take into account the similarity of the contexts. For example, in a synergy prediction scenario the context features can describe the gene expressions in a targeted cancer cell.
52
+
53
+ The labeled drug-pair and context triple set is a set of tuples $\left( {d,{d}^{\prime }, c,{y}^{d,{d}^{\prime }, c}}\right) \in \mathcal{Y}$ where $d,{d}^{\prime } \in \mathcal{D}$ , $c \in \mathcal{C}$ and ${y}^{d,{d}^{\prime }, c} \in \{ 0,1\}$ . This set of observations associates a drug pair within a specific biological or chemical context with a binary target. This target could specify whether a pair of drugs is synergistic in terminating a cancer cell type or have a certain drug-drug interaction type. Naturally, it is also common to have continuous targets ${y}^{d,{d}^{\prime }, c} \in \mathbb{R}$ . The machine learning practitioner is tasked with constructing predictive models $f\left( \cdot \right)$ such that ${\widehat{y}}^{d,{d}^{\prime }, c} = f\left( {d,{d}^{\prime }, c}\right)$ for these drug-pair context observations.
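Purely as an illustration of how these objects fit together (the class and field names below are hypothetical, not taken from the paper or any released code), the data of this framework could be organised as follows:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class DrugFeatures:
    mol_features: np.ndarray    # x^d: molecule-level descriptor vector
    edge_index: np.ndarray      # G^d: molecular graph as a 2 x num_bonds array of atom indices
    node_features: np.ndarray   # X_N^d: one row of atom features per node
    edge_features: np.ndarray   # X_E^d: one row of bond features per edge

@dataclass
class PairScoringDataset:
    drugs: Dict[str, DrugFeatures]               # drug feature set, keyed by drug identifier
    contexts: Dict[str, np.ndarray]              # context feature vectors x^c
    triples: List[Tuple[str, str, str, float]]   # labelled (d, d', c, y^{d,d',c}) observations
```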
54
+
55
+ ### 2.2 Representations for drugs
56
+
57
+ A major source of research interest is the study and development of drug feature vectors and representations as they form inputs into various drug learning tasks. In our case these form integral parts of the molecular feature vector ${\mathbf{x}}^{d}$ in the drug feature set (see section 2.1) often arising from the molecular graph of the drugs.
58
+
59
+ Two dimensional representations and diagrams of the structure of molecules are often used as a convenient representation for their 3-dimensional structures and electrostatic properties that give rise to their biological activities. Whilst this abstraction is useful for communication in person, technical limitations drove the development of linear string based representations including SMILES [13] and InChI [14] which are present across many popular chemical information systems today. Language models have been applied onto such molecular strings to learn embeddings such as in Bombarelli et al. [15] which utilises the SMILES strings within a VAE framework to sample low dimensional continuous vector representations of the drugs. The success of this inspired similar work such as DeepSMILES [16] and SELFIES [17].
60
+
61
+ Two dimensional graph structures have been used before to generate discrete bag-of-words type feature vectors of molecules based on the presence of a specified vocabulary of descriptive substructures as in Morgan's work in 1965 [18]. Subsequent years saw efforts in finding different descriptive properties within the molecule structures or optimising existing sets of descriptive substructures such as in Durant et al. [3] which optimised the set of substructure based 2D descriptors from MDL keysets for drug discovery pipelines. The use of molecular fingerprints such as Morgan/Circular fingerprints [4] continues this branch of constructing descriptors and kernels for molecules. Concurrent efforts recently focus on end-to-end neural models involving graph neural network operators [1, 19]. Here graph neural networks operate over the molecular graph of the drug such that atoms are treated as nodes and bonds are the edges. Node level representations are updated through a series of message passing layers as in Equation 1 as described in Gilmer et al. [20] and Battaglia et al. [21].
62
+
63
+ $$
64
+ {\mathbf{h}}_{i}^{l} = \phi \left( {{\mathbf{h}}_{i}^{l - 1},{\bigoplus }_{j \in {\mathcal{N}}_{i}}\psi \left( {{\mathbf{h}}_{i}^{l - 1},{\mathbf{h}}_{j}^{l - 1}}\right) }\right) \tag{1}
65
+ $$
66
+
67
+ Here ${\mathbf{h}}_{i}^{l}$ is the $l$ th layer representation of the features associated with node $i$ (in our context these would be atom features arising from message passing using ${\mathbf{X}}_{N}^{d}$ and ${\mathbf{X}}_{E}^{d}$ ). ${\mathbf{h}}_{i}^{l}$ is the output of the local permutation invariant function composed of the node $i$ ’s previous feature representation ${\mathbf{h}}_{i}^{l - 1}$ and its neighbours $j \in {\mathcal{N}}_{i}$ with $\psi \left( {{\mathbf{h}}_{i}^{l - 1},{\mathbf{h}}_{j}^{l - 1}}\right)$ being the message computed via function $\psi$ and $\bigoplus$ is some permutation invariant aggregation for the messages such as a sum, product, or average. $\phi$ and $\psi$ are typically neural networks. Subsequently, the node level representations are aggregated via permutation invariant pooling operations to form graph-level drug representations. For example, the EPGCN-DS model [22] utilises GCN layers [23] to produce higher level node representations of the atoms in the molecular graphs. The drug representations are then computed via a mean aggregation of the node representations. Such operators have become prevalent in recent proposals of drug pair scoring models with primary distinction being the form of $\psi$ in the message passing layers $\left\lbrack {1,{22},{24},{25}}\right\rbrack$ .
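A minimal NumPy sketch of one such layer, with sum aggregation, single linear maps for $\psi$ and $\phi$, and a mean readout to a drug-level vector, is shown below; it is our illustration of the generic scheme in Equation 1, not any specific published model:

```python
import numpy as np

def message_passing_layer(H, edge_index, W_msg, W_upd):
    """One round of Equation 1: sum aggregation, with psi and phi as single linear maps."""
    src, dst = edge_index                             # bonds as (source, target) atom indices
    messages = np.concatenate([H[dst], H[src]], axis=1) @ W_msg   # psi(h_i, h_j) per bond
    agg = np.zeros((H.shape[0], messages.shape[1]))
    np.add.at(agg, dst, messages)                     # sum of incoming messages per atom
    return np.concatenate([H, agg], axis=1) @ W_upd   # phi(h_i, aggregated messages)

def drug_embedding(H):
    """Permutation-invariant readout: mean over atom representations."""
    return H.mean(axis=0)
```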
68
+
69
+ Our proposed system lies somewhere in between and in parallel to these efforts. We learn low dimensional continuous distributed representations (described in Section 3.1) of the drugs within the drug pair scoring dataset. These form additional drug features that can be utilised in augmented versions of existing drug pair scoring models. To the best of our knowledge this is the first application of distributed representations of drugs within drug pair scoring.
70
+
71
+ ### 2.3 Neural models for drug pair scoring
72
+
73
+ All recent neural models for drug pair scoring can be described with an encoder-decoder framework typically involving 3 parametric functions: (i) a drug encoder, (ii) an encoder for contextual features, and (iii) a decoder which infers the target value. We describe each component below, followed by how some state-of-the-art models can be instantiated out of this framework. A more thorough treatment of this can be found in Rozemberczki et al. [1].
74
+
75
+ The drug encoder is the parametric function ${f}_{{\theta }_{D}}\left( \cdot \right)$ in Equation 2 that takes the drug feature set as input and produces a vector representation of the drug $d$ called ${\mathbf{h}}^{d}$ . ${f}_{{\theta }_{D}}\left( \cdot \right)$ maps the molecular features of the drug into a low dimensional vector space; this can incorporate various neural operators such as feed forward multi-layer perceptron layers as in DeepSynergy [26] and MatchMaker [27] or graph neural network layers as in DeepDDS [24] and DeepDrug [25]. Differences in the architecture of the encoder, such as the flavour of message passing network, are typically the main differentiator between existing methods.
76
+
77
+ $$
78
+ {\mathbf{h}}^{d} = {f}_{{\theta }_{D}}\left( {{\mathbf{x}}^{d},{\mathcal{G}}^{d},{\mathbf{X}}_{N}^{d},{\mathbf{X}}_{E}^{d}}\right) ,\forall d \in \mathcal{D} \tag{2}
79
+ $$
80
+
81
+ The context encoder ${f}_{{\theta }_{C}}\left( \cdot \right)$ in Equation 3 is a neural network that outputs a low dimensional representation of the contextual feature set ${\mathbf{x}}^{c}$ . This component does not feature in all of the models we will discuss but plays a prominent part in DeepSynergy [26], MatchMaker [27], and DeepDDS [24].
82
+
83
+ $$
84
+ {\mathbf{h}}^{c} = {f}_{{\theta }_{\mathcal{C}}}\left( {\mathbf{x}}^{c}\right) ,\forall c \in \mathcal{C} \tag{3}
85
+ $$
86
+
87
+ Finally the decoder or head of the model ${f}_{{\theta }_{H}}\left( \cdot \right)$ in Equation 4 combines the outputs of the drug and context encoders $\left( {{\mathbf{h}}^{d},{\mathbf{h}}^{{d}^{\prime }},{\mathbf{h}}^{c}}\right)$ and outputs the predicted probability for a positive label for the drug-pair context triple ${\widehat{y}}^{d,{d}^{\prime }, c}$ .
88
+
89
+ $$
90
+ {\widehat{y}}^{d,{d}^{\prime }, c} = {f}_{{\theta }_{H}}\left( {{\mathbf{h}}^{d},{\mathbf{h}}^{{d}^{\prime }},{\mathbf{h}}^{c}}\right) ,\forall d,{d}^{\prime } \in \mathcal{D},\forall c \in \mathcal{C} \tag{4}
91
+ $$
92
+
93
+ Training the models in the framework described involves minimising the binary cross entropy for the binary targets or mean absolute error for regression targets with respect to the ${\theta }_{D},{\theta }_{C}$ , and ${\theta }_{H}$ parameters using gradient descent algorithms.
94
+
95
+ $$
96
+ \mathcal{L} = \mathop{\sum }\limits_{{\left( {d,{d}^{\prime }, c,{y}^{d,{d}^{\prime }, c}}\right) \in \mathcal{Y}}}l\left( {{\widehat{y}}^{d,{d}^{\prime }, c},{y}^{d,{d}^{\prime }, c}}\right) \tag{5}
97
+ $$
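Putting Equations 2 to 5 together, a minimal sketch of the encoder-decoder pipeline could look as follows; this is our illustration with plain MLP encoders (in the spirit of DeepSynergy) and hypothetical parameter handling, not the code of any of the cited models:

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pair_score(x_d, x_dp, x_c, params):
    """Equations 2-4: encode both drugs and the context, then decode a probability."""
    h_d  = mlp(x_d,  *params["drug"])      # Equation 2, drug d
    h_dp = mlp(x_dp, *params["drug"])      # Equation 2, drug d' (shared encoder weights)
    h_c  = mlp(x_c,  *params["context"])   # Equation 3
    z = mlp(np.concatenate([h_d, h_dp, h_c]), *params["head"])  # Equation 4
    return sigmoid(z)

def bce_loss(y_hat, y):
    """Per-triple term of Equation 5 for binary targets."""
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
```

Graph neural network based models differ mainly in replacing the drug MLP with message passing layers followed by a pooling readout.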
98
+
99
+ ## 3 Study and Methods
100
+
101
+ ### 3.1 Distributed representations of graphs
102
+
103
+ We adopt the framework of Scherer and Liò [28] for describing distributed representations of graphs based on the R-Convolutional framework for graph kernels [29]. Given a set of $n$ molecular graphs for the drugs in the dataset $\mathbb{G} = \left\{ {{\mathcal{G}}_{1},{\mathcal{G}}_{2},\ldots ,{\mathcal{G}}_{n}}\right\}$ one can induce discrete substructure patterns such as shortest paths, rooted subgraphs, graphlets, etc. using side effects of algorithms such as Floyd-Warshall [30-32] or the Weisfeiler-Lehman graph isomorphism test [33]. This can be used to produce pattern frequency vectors $X = \left\{ {{x}_{1},{x}_{2},\ldots ,{x}_{n}}\right\}$ describing the occurrence frequency of substructure patterns for every graph over a shared vocabulary $\mathbb{V}$ . $\mathbb{V}$ is the set of unique substructure patterns induced over all graphs ${\mathcal{G}}_{i} \in \mathbb{G}$ .
104
+
105
+ Classically one may directly use these pattern frequency vectors within standard machine learning algorithms or construct kernels to perform some task. This has been the approach taken by many state of the art graph kernels in classification tasks [29, 34]. Unfortunately, as the number, complexity, and size of graphs in $\mathbb{G}$ increases, so does the number of induced substructure patterns - often dramatically $\left\lbrack {{28},{29},{34}}\right\rbrack$ . This, in turn, causes the pattern frequency vectors in $X$ to be extremely sparse and high dimensional, both of which are detrimental to the performance of estimators. Furthermore, the high specificity of the patterns and the sparsity cause a phenomenon known as diagonal dominance across kernel matrices, wherein each graph becomes more similar to itself and dissimilar from others, degrading machine learning performance.
106
+
107
+ To address this issue it is possible to learn, in a self supervised manner, dense and low dimensional distributed representations of graphs that are inductively biased to be similar when the graphs contain similar substructure patterns and dissimilar when they do not. To achieve this we need to construct a corpus $\mathcal{R}$ that details the target-context relationship between a graph and its induced substructure patterns. In the simplest form, for graph level representation learning, we can specify $\mathcal{R}$ as the set of tuples $\left( {{\mathcal{G}}_{i},{p}_{j}}\right) \in \mathcal{R}$ such that ${p}_{j} \in \mathbb{V}$ and ${p}_{j} \in {\mathcal{G}}_{i}$ .
108
+
109
+ The corpus can then be used to learn embeddings via a method that incorporates Harris' distributional hypothesis [35] to learn the distributed representations. Methods such as Skipgram, CBOW, PV-DM, PV-DBOW, and GloVe are some examples of neural embedding methods that utilise this inductive bias [36-38]. In our study we implement Skipgram with negative sampling, which optimises the following objective function.
110
+
111
+ $$
112
+ \mathcal{L} = \mathop{\sum }\limits_{{{\mathcal{G}}_{i} \in \mathbb{G}}}\mathop{\sum }\limits_{{p \in \mathbb{V}}}\left| \left\{ {\left( {{\mathcal{G}}_{i}, p}\right) \in \mathcal{R}}\right\} \right| \left( {\log \sigma \left( {{\Phi }_{i} \cdot {\mathcal{S}}_{p}}\right) }\right) + q \cdot {\mathbb{E}}_{{p}_{N} \sim {P}_{R}}\left\lbrack {\log \sigma \left( {-{\Phi }_{i} \cdot {\mathcal{S}}_{{p}_{N}}}\right) }\right\rbrack \tag{6}
113
+ $$
114
+
115
+ Here $\Phi \in {\mathbb{R}}^{\left| \mathbb{G}\right| \times d}$ is the $d$ -dimensional matrix of graph embeddings we desire for the set of drug graphs $\mathbb{G}$ , and ${\mathbf{\Phi }}_{i}$ is the embedding of ${\mathcal{G}}_{i} \in \mathbb{G}$ . In a similar vein, $\mathcal{S} \in {\mathbb{R}}^{\left| \mathbb{V}\right| \times d}$ holds the $d$ -dimensional embeddings of the substructure patterns in the vocabulary $\mathbb{V}$ , such that ${\mathcal{S}}_{p}$ is the vector embedding corresponding to the substructure pattern $p$ . Whilst these substructure embeddings are also tuned during the optimisation of Equation 6, they are ultimately not used in our case, as we are interested in the drug embeddings. $q$ is the number of negative samples, with ${p}_{N}$ being a sampled context pattern drawn according to the empirical unigram distribution ${P}_{R}$ defined below.
116
+
117
+ $$
118
+ {P}_{R}\left( p\right) = \frac{\left| \left\{ \left( {\mathcal{G}}_{i}, p\right) \in \mathcal{R} \mid {\mathcal{G}}_{i} \in \mathbb{G}\right\} \right| }{\left| \mathcal{R}\right| }
119
+ $$
120
+
121
+ The optimisation of the above objective creates the desired distributed representations in $\mathbf{\Phi }$ , in this case graph-level drug embeddings. These may be used as additional drug features in the drug feature set, as we show in Section 3.3. The distributed representations benefit from having lower dimensionality than the pattern frequency vectors, in other words $\left| \mathbb{V}\right| \gg d$ , from being non-sparse, and from being inductively biased via the distributional hypothesis. A more thorough treatment of the distributional hypothesis and in-depth reading on the neural embedding methods in this family can be found in $\left\lbrack {{35},{36},{39}}\right\rbrack$ .
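The following is a minimal sketch of optimising an objective of this form with Skipgram-style negative sampling in PyTorch; it assumes an integer-encoded corpus of (graph index, pattern index) pairs, and the hyperparameter defaults are illustrative rather than the exact settings used in the study.

```python
# Sketch: Skipgram with negative sampling over a (graph, pattern) corpus,
# in the spirit of Equation 6. Corpus entries are (graph_index, pattern_index)
# pairs; hyperparameter defaults are illustrative assumptions.
import torch
import torch.nn.functional as F

def train_skipgram(corpus, num_graphs, vocab_size, dim=64, epochs=1000, q=5, lr=0.01):
    Phi = torch.randn(num_graphs, dim, requires_grad=True)   # graph embeddings (Phi)
    S = torch.randn(vocab_size, dim, requires_grad=True)     # substructure embeddings (S)
    graph_idx = torch.tensor([g for g, _ in corpus])
    pattern_idx = torch.tensor([p for _, p in corpus])
    # Empirical unigram distribution P_R over patterns, used to draw negatives.
    unigram = torch.bincount(pattern_idx, minlength=vocab_size).float()
    unigram = unigram / unigram.sum()
    optimiser = torch.optim.Adam([Phi, S], lr=lr)
    for _ in range(epochs):
        positive = (Phi[graph_idx] * S[pattern_idx]).sum(dim=-1)
        negative_idx = torch.multinomial(unigram, len(corpus) * q, replacement=True)
        negative = (Phi[graph_idx.repeat_interleave(q)] * S[negative_idx]).sum(dim=-1)
        loss = -(F.logsigmoid(positive).sum() + F.logsigmoid(-negative).sum())
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    return Phi.detach()  # the drug embeddings used downstream

# e.g. Phi = train_skipgram([(0, 2), (0, 5), (1, 2)], num_graphs=2, vocab_size=6, epochs=10)
```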
122
+
123
+ Various instances of models for learning distributed representations of graphs following this description have been proposed, such as Graph2Vec [40], DGK-WL/SP/GK [29], and AWE [41]. These differ primarily in the type of substructure pattern induced over $\mathbb{G}$ . They have shown strong performance in graph classification tasks, often performing on par with modern graph neural networks despite using significantly fewer features and parameters. However, limitations such as the dependency on a fixed vocabulary, the inability to inductively infer representations for new substructure patterns and new graphs (at least in their standard definitions), and the difficulty of scaling to large graphs with many millions of nodes have led to less attention on these methods. We speculate this is why recent deep drug pair scoring models have completely ignored distributed representations of graphs as part of the pipeline.
124
+
125
+ Table 1: Table of dataset details containing information on the application domain, and summary statistics on the number of drugs, context types, and drug pair context triples. Additional columns highlight the number of unique substructure patterns found across the molecular graphs of the drugs in the dataset based on the substructure patterns induced. $\left| \mathcal{D}\right|$ is the number of unique drugs, $\left| \mathcal{C}\right|$ the number of unique contexts, and $\left| \mathcal{Y}\right|$ the number of labeled drug-drug context triples. The remaining columns indicate the number of unique substructure patterns found in the drugs with respect to the corresponding substructure pattern extracted: WL $\left( {k = 2}\right)$ counts the distinct rooted subgraphs up to depth 2, WL $\left( {k = 3}\right)$ those up to depth 3, and Shortest paths the distinct shortest paths.
126
+
127
+ <table><tr><td>Dataset</td><td>Task</td><td>$\left| \mathcal{D}\right|$</td><td>$\left| \mathcal{C}\right|$</td><td>$\left| \mathcal{Y}\right|$</td><td>WL $\left( {k = 2}\right)$</td><td>WL $\left( {k = 3}\right)$</td><td>Shortest paths</td></tr><tr><td>DrugCombDB [2]</td><td>Synergy</td><td>2956</td><td>112</td><td>191,391</td><td>70</td><td>1591</td><td>1310</td></tr><tr><td>DrugComb [9, 10]</td><td>Synergy</td><td>4146</td><td>288</td><td>659,333</td><td>70</td><td>1651</td><td>1432</td></tr><tr><td>DrugbankDDI [11]</td><td>Interaction</td><td>1706</td><td>86</td><td>383,496</td><td>74</td><td>1287</td><td>2710</td></tr><tr><td>TwoSides [12]</td><td>Polypharmacy</td><td>644</td><td>10</td><td>499,582</td><td>64</td><td>934</td><td>8070</td></tr></table>
128
+
129
+ ### 3.2 Arguing for the use of distributed representations of drugs in drug pair scoring pipelines
130
+
131
+ Here we show that the use of distributed representations of graphs to construct additional drug features is sensible in drug pair scoring tasks. As discussed in Section 2.1, a drug pair scoring model is tasked with learning the function $f\left( {d,{d}^{\prime }, c}\right) = {y}^{d,{d}^{\prime }, c}$ from the labelled drug-pair context triples in $\mathcal{Y}$ . Looking at the statistics of drug pair scoring datasets in Table 1, we can see that the number of drugs and contexts is far lower than the number of triple observations. The huge and complex combinatorial space of drug-pair contexts (without even considering dosage effects), as well as the time and cost associated with experimentally testing more triples, is a motivating factor for machine learning models. In practice, when such databases are updated it is through the addition of more labelled drug-pair context observations for better coverage [42]. The number of drugs considered rarely increases, as drugs require many years of development, clinical trials, massive investment, and regulatory processes before they enter studies in the application domains of drug pair scoring [7, 8].
132
+
133
+ Therefore we can argue that learning distributed representations of the molecular graphs of the drugs in drug pair scoring tasks is sensible. Importantly, the number of discrete substructure patterns grows with the number of unique drugs, not with the number of drug-pair-context observations within the dataset. Hence, as long as the number of drugs stays the same, trained drug embeddings can be carried over to any model being trained over the drug-pair context triples with minimal augmentation, as we show in Section 3.3. As further motivation, the number of discrete substructure patterns in the considered set of drugs is driven by the unique atom types and the substructure patterns arising out of the bonded atoms. The set of unique atom types is theoretically limited to the periodic table, and drugs use only a limited subset of it. Furthermore, molecular graphs tend to be considerably smaller than social-network-scale graphs and less random due to chemical bonding rules; hence the resulting substructure patterns are fewer and more informative, making them suitable descriptors in these settings $\left\lbrack {{29},{34},{43}}\right\rbrack$ .
134
+
135
+ ### 3.3 Incorporating distributed representations of graphs into existing drug pair scoring pipelines
136
+
137
+ Through retrieval of the SMILES strings, we generated the molecular graphs for each of the drugs $\mathbb{G} = \left\{ {{\mathcal{G}}_{d} \mid d \in \mathcal{D}}\right\}$ using TorchDrug [44] and RDKit [45]. Given this set of graphs we considered two discrete substructure patterns to induce over the graphs. For the first substructure pattern we considered rooted subgraphs up to depth $k = 3$ . These may be induced as a side effect of the Weisfeiler-Lehman graph isomorphism test [33, 43]. The second substructure pattern we considered was the set of all shortest paths of the molecular graph, which may be induced using the Floyd-Warshall algorithm [30-32]. Both choices were made based on their completeness and the deterministic nature of their inducing algorithms, for which fast implementations also exist [28, 46].
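For illustration, the sketch below induces rooted-subgraph patterns via Weisfeiler-Lehman relabelling up to depth k on a toy labelled graph; it follows the usual WL subtree kernel recipe and is only an approximation of whichever exact relabelling scheme the pipeline uses.

```python
# Sketch: rooted-subgraph patterns via Weisfeiler-Lehman relabelling up to
# depth k, in the style of the WL subtree kernel. The exact relabelling/hashing
# scheme used in practice may differ in details.
import networkx as nx

def wl_patterns(graph: nx.Graph, k: int = 3) -> list:
    labels = {v: str(graph.nodes[v]["label"]) for v in graph}
    patterns = list(labels.values())              # depth-0 patterns: atom types
    for _ in range(k):
        relabelled = {}
        for v in graph:
            neighbourhood = ",".join(sorted(labels[u] for u in graph.neighbors(v)))
            relabelled[v] = labels[v] + "(" + neighbourhood + ")"
        labels = relabelled
        patterns.extend(labels.values())          # rooted subgraphs of growing depth
    return patterns

# Toy molecular graph: C-O-C.
g = nx.Graph()
g.add_nodes_from([(0, {"label": "C"}), (1, {"label": "O"}), (2, {"label": "C"})])
g.add_edges_from([(0, 1), (1, 2)])
print(wl_patterns(g, k=2))
```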
138
+
139
+ In either case, the set of unique substructure patterns found across all molecular graphs of the drugs in $\mathcal{D}$ gives us the molecular substructure vocabulary $\mathbb{V}$ . We construct a target-context corpus of the drugs ${\mathcal{R}}_{\mathcal{D}} = \left\{ {\left( {{\mathcal{G}}_{d},{p}_{j}}\right) \mid {\mathcal{G}}_{d} \in \mathbb{G},{p}_{j} \in {\mathcal{G}}_{d},{p}_{j} \in \mathbb{V}}\right\}$ . We use a Skipgram model with negative sampling to learn the desired drug embeddings, optimising the objective function in Equation 6.
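A small sketch of this corpus construction is given below; it assumes per-drug pattern lists such as those produced by the pattern-induction sketches above, and integer-encodes everything so the corpus can feed a Skipgram trainer like the one sketched earlier.

```python
# Sketch: assemble the target-context corpus R_D and integer-encode it.
# per_drug_patterns[d] is the list of substructure patterns induced for drug d.
def build_corpus(per_drug_patterns):
    vocabulary = sorted({p for patterns in per_drug_patterns for p in patterns})
    index = {p: i for i, p in enumerate(vocabulary)}
    corpus = [(drug_id, index[p])
              for drug_id, patterns in enumerate(per_drug_patterns)
              for p in patterns]          # repeated pairs retain occurrence counts
    return corpus, vocabulary

corpus, vocabulary = build_corpus([["C", "O", "C(O)"], ["C", "C", "C(C)"]])
```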
140
+
141
+ ![01963ed1-a97f-77ec-ba2e-980c0c3b5910_6_405_194_1051_889_0.jpg](images/01963ed1-a97f-77ec-ba2e-980c0c3b5910_6_405_194_1051_889_0.jpg)
142
+
143
+ Figure 1: A summary of the proposed pipeline for learning and utilising distributed representations of drugs for drug pair scoring. The pipeline consists of two main stages: the learning of the distributed representations and the augmentation of existing models to utilise the new drug embeddings $\mathbf{\Phi }$ which become part of the drug feature set described in Section 2.1. As the learning of the distributed representations is separate from the drug pair scoring task we may transfer the embeddings into the drug feature set of any existing drug pair scoring model without retraining.
144
+
145
+ After training and obtaining the distributed representations of drugs $\Phi$ we add the embeddings to the drug feature set $\left( {{\mathbf{x}}^{d},{\mathbf{\Phi }}^{d},{\mathcal{G}}^{d},{\mathbf{X}}_{N}^{d},{\mathbf{X}}_{E}^{d}}\right) \in {\mathcal{X}}_{\mathcal{D}},\forall d \in \mathcal{D}$ . The remaining task is to develop downstream models which utilise the distributed representations. As the self supervised learning of the distributed representations is separate from the learning for the drug pair scoring task, we may transfer the embeddings into any of the existing drug pair scoring models. A diagram of this workflow can be seen in Figure 1.
146
+
147
+ In order to validate the usefulness of the distributed representations we chose to extend existing drug pair scoring models from different application domains. As a sanity check of whether the distributed representations carry any useful signal, we also implemented a simple MLP with three hidden layers based on DeepSynergy, called DROnly, which only utilises the learned embeddings. We took seminal models representing the state of the art and recent models containing graph neural networks that operate over the molecular graphs of the drugs. Each augmented model we propose takes the original name of the model and is suffixed with "DR" and the substructure pattern induced over the graphs (WL or SP for rooted subgraphs and shortest paths respectively). In most cases we simply concatenate the distributed representation of the first and second drug (drugs $\alpha$ and $\beta$ in Figure 1) to the corresponding molecular feature vectors being used in the model, as sketched below. In the case of EPGCN-DS-DR and DeepDrug-DR the left and right drug embeddings are concatenated to the outputs of the graph neural network drug encoders and fed into the decoder. All of the code for these models is available in our supplementary materials.
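As a rough sketch of this augmentation (not the exact wiring of any particular model), the learned embedding of each drug in a pair is simply concatenated onto its existing feature vector before it enters the encoder or decoder; the shapes below are illustrative.

```python
# Sketch: augmenting precomputed drug features with the learned embeddings Phi.
# `features` and `Phi` are both indexed by drug id; shapes are illustrative.
import torch

features = torch.randn(4146, 256)   # e.g. Morgan fingerprints for |D| drugs
Phi = torch.randn(4146, 64)         # distributed representations of the drugs

augmented = torch.cat([features, Phi], dim=-1)      # (|D|, 256 + 64)

def pair_input(d: int, d_prime: int) -> torch.Tensor:
    """Concatenated input for one drug pair, as fed to an MLP-style encoder."""
    return torch.cat([augmented[d], augmented[d_prime]], dim=-1)
```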
148
+
149
+ Table 2: Table of results with information about the original drug pair scoring models such as year of publication and their original application domains. We report the average AUROC on the hold out test set with standard deviations from 5 seeded random splits. Bolded numbers indicate best performing model for each dataset.
150
+
151
+ <table><tr><td>Model</td><td>Year</td><td>Orig. application</td><td>DrugCombDB</td><td>DrugComb</td><td>DrugbankDDI</td><td>TwoSides</td></tr><tr><td>DeepSynergy [26]</td><td>2018</td><td>Synergy</td><td>0.796 ± 0.010</td><td>0.739 ± 0.005</td><td>0.987 ± 0.001</td><td>0.933 ± 0.001</td></tr><tr><td>EPGCN-DS [22]</td><td>2020</td><td>Interaction</td><td>0.703 ± 0.006</td><td>0.623 ± 0.002</td><td>0.724 ± 0.002</td><td>0.809 ± 0.006</td></tr><tr><td>DeepDrug [25]</td><td>2020</td><td>Interaction</td><td>0.743 ± 0.001</td><td>0.648 ± 0.001</td><td>0.862 ± 0.002</td><td>0.926 ± 0.001</td></tr><tr><td>DeepDDS [24]</td><td>2021</td><td>Synergy</td><td>0.791 ± 0.005</td><td>0.697 ± 0.002</td><td>0.988 ± 0.001</td><td>0.944 ± 0.001</td></tr><tr><td>MatchMaker [27]</td><td>2021</td><td>Synergy</td><td>0.788 ± 0.002</td><td>0.720 ± 0.003</td><td>0.991 ± 0.001</td><td>0.928 ± 0.001</td></tr><tr><td>DROnly (WL k=3)</td><td>Proposed</td><td>Not applicable</td><td>0.763 ± 0.002</td><td>0.651 ± 0.002</td><td>0.809 ± 0.005</td><td>0.917 ± 0.002</td></tr><tr><td>DROnly (SP)</td><td>Proposed</td><td>Not applicable</td><td>0.711 ± 0.004</td><td>0.621 ± 0.002</td><td>0.710 ± 0.005</td><td>0.823 ± 0.005</td></tr><tr><td>DeepSynergy-DR (WL k=3)</td><td>Proposed</td><td>Not applicable</td><td><b>0.814 ± 0.004</b></td><td>0.738 ± 0.001</td><td>0.988 ± 0.000</td><td>0.934 ± 0.002</td></tr><tr><td>DeepSynergy-DR (SP)</td><td>Proposed</td><td>Not applicable</td><td>0.813 ± 0.003</td><td><b>0.740 ± 0.004</b></td><td>0.988 ± 0.001</td><td>0.935 ± 0.000</td></tr><tr><td>EPGCN-DS-DR (WL k=3)</td><td>Proposed</td><td>Not applicable</td><td>0.711 ± 0.002</td><td>0.627 ± 0.001</td><td>0.741 ± 0.004</td><td>0.822 ± 0.006</td></tr><tr><td>EPGCN-DS-DR (SP)</td><td>Proposed</td><td>Not applicable</td><td>0.704 ± 0.001</td><td>0.622 ± 0.001</td><td>0.730 ± 0.003</td><td>0.808 ± 0.002</td></tr><tr><td>DeepDrug-DR (WL k=3)</td><td>Proposed</td><td>Not applicable</td><td>0.743 ± 0.001</td><td>0.648 ± 0.001</td><td>0.863 ± 0.000</td><td>0.926 ± 0.001</td></tr><tr><td>DeepDrug-DR (SP)</td><td>Proposed</td><td>Not applicable</td><td>0.743 ± 0.000</td><td>0.648 ± 0.001</td><td>0.863 ± 0.001</td><td>0.926 ± 0.000</td></tr><tr><td>DeepDDS-DR (WL k=3)</td><td>Proposed</td><td>Not applicable</td><td>0.799 ± 0.004</td><td>0.700 ± 0.002</td><td>0.989 ± 0.000</td><td>0.944 ± 0.001</td></tr><tr><td>DeepDDS-DR (SP)</td><td>Proposed</td><td>Not applicable</td><td>0.790 ± 0.003</td><td>0.696 ± 0.001</td><td>0.988 ± 0.001</td><td>0.943 ± 0.001</td></tr><tr><td>MatchMaker-DR (WL k=3)</td><td>Proposed</td><td>Not applicable</td><td>0.783 ± 0.004</td><td>0.714 ± 0.003</td><td><b>0.992 ± 0.000</b></td><td>0.930 ± 0.001</td></tr><tr><td>MatchMaker-DR (SP)</td><td>Proposed</td><td>Not applicable</td><td>0.784 ± 0.002</td><td>0.714 ± 0.004</td><td>0.991 ± 0.001</td><td>0.928 ± 0.002</td></tr></table>
152
+
153
+ ## 4 Experimental setup
154
+
155
+ We empirically validate the usefulness of the distributed drug representations in downstream drug pair scoring tasks. We consider 4 datasets from the domains of drug synergy prediction, polypharmacy prediction, and drug-drug interaction prediction to evaluate our augmented models, as outlined in Table 1. Five seeded random 0.5/0.5 train and test set splits were made, and the average AUROC over the hold-out test set is reported with its standard deviation in Table 2.
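A sketch of this evaluation protocol is given below; `fit_and_predict` is a hypothetical placeholder standing in for training any of the pair scoring models, and the seed values are illustrative.

```python
# Sketch of the evaluation protocol: five seeded 0.5/0.5 splits of the labelled
# triples and the mean/std of the test AUROC. `fit_and_predict` is a placeholder.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def evaluate(triples, labels, fit_and_predict, seeds=(0, 1, 2, 3, 4)):
    aurocs = []
    for seed in seeds:
        X_tr, X_te, y_tr, y_te = train_test_split(triples, labels, test_size=0.5,
                                                  random_state=seed)
        y_hat = fit_and_predict(X_tr, y_tr, X_te)
        aurocs.append(roc_auc_score(y_te, y_hat))
    return float(np.mean(aurocs)), float(np.std(aurocs))
```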
156
+
157
+ For the distributed representations of the graphs we set the desired dimensionality at $d = {64}$ and the Skipgram model was trained for 1000 epochs. These hyperparameter values were chosen arbitrarily to simplify the following comparative analysis; however, we explore their effects on downstream performance in an ablation study in Appendix A.
158
+
159
+ To obtain the non-DR drug-level features as used in DeepSynergy and MatchMaker we retrieved the canonical SMILES strings [13] for each of the drugs in the labeled drug-pair context triples. 256-dimensional Morgan fingerprints [4] with a radius of 2 were computed for each drug. Molecular graphs for entry into models with GNNs were generated using TorchDrug (and the underlying RDKit utilities) from the SMILES strings for each drug.
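For reference, the fingerprint computation can be sketched with the standard RDKit API as below; the example SMILES string is only illustrative.

```python
# Sketch: 256-bit Morgan fingerprints of radius 2 from canonical SMILES, via RDKit.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def morgan_fingerprint(smiles: str, n_bits: int = 256, radius: int = 2) -> np.ndarray:
    mol = Chem.MolFromSmiles(smiles)
    bit_vector = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(bit_vector)

# e.g. morgan_fingerprint("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
```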
160
+
161
+ We utilised the default hyperparameters for each of the drug pair scoring models as in [47], which are summarised in Table 3 of Appendix B. Augmenting the models changes the input shapes of the drug encoders or the final decoder according to the chosen dimensionality of the distributed representations, but does not affect any other original model hyperparameters.
162
+
163
+ Optimisation hyperparameters for training the models were all kept the same. All drug pair scoring models were trained using the Adam optimiser [48] for 250 epochs with a batch size of 8192 observations, an initial learning rate of ${10}^{-2}$ , ${\beta }_{1} = 0.9$ , ${\beta }_{2} = 0.99$ , $\epsilon = {10}^{-7}$ , and a weight decay of ${10}^{-5}$ . A dropout rate of 0.5 was applied for regularisation.
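Written out for a generic PyTorch model, these settings look as follows; only the numerical values come from the text above, the model itself is a placeholder.

```python
# Sketch: the quoted optimisation settings applied to a placeholder model.
import torch

model = torch.nn.Sequential(torch.nn.Linear(320, 32), torch.nn.ReLU(),
                            torch.nn.Dropout(p=0.5), torch.nn.Linear(32, 1))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2, betas=(0.9, 0.99),
                             eps=1e-7, weight_decay=1e-5)
# Trained for 250 epochs with a batch size of 8192 labelled triples.
```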
164
+
165
+ In addition to these details, we make all of our code, containing all implementations and evaluation scripts, available in the supplementary materials for reproducibility.
166
+
167
+ ## 5 Results and discussion
168
+
169
+ Looking at our main results in Table 2 we can make three main observations. First, looking at the original methods we can see that methods using precomputed drug features and contextual features instead of graph neural networks, such as DeepSynergy and MatchMaker, perform better across drug pair scoring tasks. Combined with the fact that they train and evaluate much faster than methods using graph neural networks, it is generally advisable to use these models in the first instance, validating the results in [47]. DeepDDS is the best performing model utilising a graph neural network; it is worth noting that it utilises contextual features, like DeepSynergy and MatchMaker and unlike EPGCN-DS and DeepDrug. Secondly, looking at the DROnly model that serves as the sanity check for our embeddings, we can see that it is significantly better than a random model. This indicates the usefulness of the structural affinities and distributional inductive biases within the drug representations for the drug pair scoring tasks. Thirdly, we can see that incorporating the distributed representations generally increases the performance of the models. In particular, we observe that the best performances for 3 out of 4 tasks are achieved by models incorporating our embeddings, with the final one being a tie (within rounding error at 3 decimal places) between DeepDDS and its DR-incorporating equivalent DeepDDS-DR (WL k=3) on TwoSides.
170
+
171
+ The horizontal analysis of the drug pair scoring models highlights that the significantly more expensive graph neural network based models generally perform worse than simpler models employing precomputed drug and context features in MLPs. This is in spite of the graph neural network modules also having access to additional atom features on the molecular graphs as computed in TorchDrug, such as the one-hot embedding of the atomic chiral tag, whether the atom participates in a ring, whether it is aromatic, and the number of radical electrons on the atom. Hence, despite the wealth of additional information inside the provided molecular graph, we surmise the primary bottleneck for the drug level representations arises from the comparatively simple permutation invariant operators used to pool the node representations, such as the global mean operator used in EPGCN-DS. There is an inevitable and large amount of information loss in attempting to summarise a variable number of higher level smooth node representations coming out of GNNs into a single vector of fixed size, without any trainable parameters. We may partially attribute the additional performance boosts brought in by the distributed representations to the more refined procedure for constructing the graph level representations, despite the input molecular graph only detailing the atom types and no additional node features. We can also attribute the performance boosts to the usefulness of substructure affinities for the drug pair scoring tasks, as indicated by the DROnly performances across the tasks.
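To illustrate the kind of parameter-free readout being discussed, the sketch below averages per-atom embeddings into one vector per molecule, regardless of how many atoms each molecule has; it is a generic global mean pool, not any specific model's encoder.

```python
# Sketch: parameter-free global mean pooling of node (atom) embeddings.
# graph_index[i] says which molecule node i belongs to in a batched tensor.
import torch

def global_mean_pool(node_h: torch.Tensor, graph_index: torch.Tensor, num_graphs: int) -> torch.Tensor:
    pooled = torch.zeros(num_graphs, node_h.size(1))
    pooled.index_add_(0, graph_index, node_h)
    counts = torch.bincount(graph_index, minlength=num_graphs).clamp(min=1).unsqueeze(1)
    return pooled / counts
```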
172
+
173
+ The learning of the distributed representations comes with two hyperparameters which may affect downstream performance when incorporated into the drug pair scoring models. These user-specified hyperparameters are: (i) the dimensionality of the drug embeddings and (ii) the number of epochs for which the Skipgram model is trained. We give full details of an ablation study on how varying these hyperparameters affects downstream performance, with the experimental setup, in Appendix A. To summarise the main points, as the desired dimensionality is varied the downstream performance initially rises and then falls, as expected due to the information bottleneck at very small dimensions and the curse of dimensionality at higher dimensions. For varying the training epochs we find a slight but statistically significant positive correlation with performance as the number of epochs increases in two out of four datasets. However, in both cases there is little variation ($\pm {0.02}$ ROC AUC in both ablation studies over the ranges studied) in the final performance of the downstream models given the hyperparameter choices, except at the extreme ends of the studied ranges. This indicates the stable nature of the output embeddings and their usefulness in downstream tasks. As such we generally recommend low dimensional embeddings, on par with any other drug features being utilised, and a high number of training epochs to obtain good performance. As mentioned previously, more details can be found in Appendix A.
174
+
175
+ ## 6 Conclusion
176
+
177
+ We have answered the two research questions posed in the introduction. We presented a methodology for learning and incorporating distributed representations of graphs into machine learning pipelines for drug pair scoring, answering the first question on how we may integrate distributed representations. We assessed the usefulness of the distributed representations of drugs in two parts. In the first part we showed that a model using only the learned drug embeddings performs significantly better than random, suggesting the usefulness of the substructure pattern affinities between drugs in drug pair scoring. Subsequently, for the second part, we augmented recent and state-of-the-art models from synergy, polypharmacy, and drug interaction type prediction to utilise our distributed representations. Horizontal evaluation of these models shows that the incorporation of the distributed representations improves performance across different tasks and datasets.
178
+
179
+ References
180
+
181
+ [1] Benedek Rozemberczki, Stephen Bonner, Andriy Nikolov, Michaël Ughetto, Sebastian Nilsson, and Eliseo Papa. A unified view of relational deep learning for drug pair scoring. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 5564-5571. International Joint Conferences on Artificial Intelligence Organization, 2022. 1, 2, 3,4
182
+
183
+ [2] Hui Liu, Wenhao Zhang, Bo Zou, Jinxian Wang, Yuanyuan Deng, and Lei Deng. DrugCombDB: a comprehensive database of drug combinations toward the discovery of combinatorial therapy. Nucleic Acids Research, 48(D1):D871-D881, 10 2019. ISSN 0305-1048. doi: 10.1093/nar/ gkz1007. URL https://doi.org/10.1093/nar/gkz1007.1, 2, 6
184
+
185
+ [3] Joseph L. Durant, Burton A. Leland, Douglas R. Henry, and James G. Nourse. Reoptimization of mdl keys for use in drug discovery. Journal of Chemical Information and Computer Sciences, 42(6):1273-1280, Nov 2002. ISSN 0095-2338. doi: 10.1021/ci010132r. URL https://doi.org/10.1021/ci010132r.1,3
186
+
187
+ [4] Alice Capecchi, Daniel Probst, and Jean-Louis Reymond. One molecular fingerprint to rule them all: drugs, biomolecules, and the metabolome. Journal of Cheminformatics, 12(1):43, Jun 2020. ISSN 1758-2946. doi: 10.1186/s13321-020-00445-4. URL https://doi.org/10.1186/s13321-020-00445-4.1,3,8
188
+
189
+ [5] William L. Hamilton. Graph representation learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 14(3):1-159. 1
190
+
191
+ [6] Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021. 1
192
+
193
+ [7] Olivier J. Wouters, Martin McKee, and Jeroen Luyten. Estimated Research and Development Investment Needed to Bring a New Medicine to Market, 2009-2018. JAMA, 323(9):844-853, 03 2020. ISSN 0098-7484. doi: 10.1001/jama.2020.1166. URL https://doi.org/10.1001/ jama.2020.1166.2,6
194
+
195
+ [8] Thomas Gaudelet, Ben Day, Arian R Jamasb, Jyothish Soman, Cristian Regep, Gertrude Liu, Jeremy BR Hayter, Richard Vickers, Charles Roberts, Jian Tang, et al. Utilizing graph machine learning within drug discovery and development. Briefings in bioinformatics, 22(6):bbab159, 2021.2,6
196
+
197
+ [9] Bulat Zagidullin, Jehad Aldahdooh, Shuyu Zheng, Wenyu Wang, Yinyin Wang, Joseph Saad, Alina Malyutina, Mohieddin Jafari, Ziaurrehman Tanoli, Alberto Pessia, and Jing Tang. DrugComb: an integrative cancer drug combination data portal. Nucleic Acids Research, 47(W1):W43-W51, 05 2019. ISSN 0305-1048. doi: 10.1093/nar/gkz337. URL https://doi.org/10.1093/nar/gkz337.2,6
198
+
199
+ [10] Shuyu Zheng, Jehad Aldahdooh, Tolou Shadbahr, Yinyin Wang, Dalal Aldahdooh, Jie Bao, Wenyu Wang, and Jing Tang. DrugComb update: a more comprehensive drug sensitivity data repository and analysis portal. Nucleic Acids Research, 49(W1):W174-W184, 06 2021. ISSN 0305-1048. doi: 10.1093/nar/gkab438. URL https://doi.org/10.1093/nar/gkab438.2, 6
200
+
201
+ [11] Jae Yong Ryu, Hyun Uk Kim, and Sang Yup Lee. Deep learning improves prediction of drug–drug and drug–food interactions. Proceedings of the National Academy of Sciences, 115(18):E4304-E4311, 2018. doi: 10.1073/pnas.1803294115. URL https://www.pnas.org/doi/abs/10.1073/pnas.1803294115. 2, 6
202
+
203
+ [12] Nicholas P. Tatonetti, Patrick P. Ye, Roxana Daneshjou, and Russ B. Altman. Data-driven prediction of drug effects and interactions. Science Translational Medicine, 4(125):125ra31- 125ra31, 2012. doi: 10.1126/scitranslmed.3003377. URL https://www.science.org/doi/ abs/10.1126/scitranslmed.3003377.2,6
204
+
205
+ [13] David Weininger. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. J. Chem. Inf. Comput. Sci., 28(1):31-36, feb 1988. ISSN 0095-2338. doi: 10.1021/ci00057a005. URL https://doi.org/10.1021/ci00057a005.3, 8
206
+
207
+ [14] Stephen R. Heller, Alan McNaught, Igor Pletnev, Stephen Stein, and Dmitrii Tchekhovskoi. Inchi, the iupac international chemical identifier. Journal of Cheminformatics, 7(1):23, May
208
+
209
+ 2015. ISSN 1758-2946. doi: 10.1186/s13321-015-0068-4. URL https://doi.org/10.1186/s13321-015-0068-4.3
210
+
211
+ [15] Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science, 4(2):268-276, Feb 2018. ISSN 2374- 7943. doi: 10.1021/acscentsci.7b00572. URL https://doi.org/10.1021/acscentsci.7b00572.3
212
+
213
+ [16] Noel M. O'Boyle and Andrew Dalke. Deepsmiles: An adaptation of smiles for use in machine-learning of chemical structures. ChemRxiv, 2018. 3
214
+
215
+ [17] Mario Krenn, Florian Häse, AkshatKumar Nigam, Pascal Friederich, and Alan Aspuru-Guzik. Self-referencing embedded strings (SELFIES): A 100% robust molecular string representation. Machine Learning: Science and Technology, 1(4):045024, oct 2020. doi: 10.1088/2632-2153/ aba947. URL https://doi.org/10.1088/2632-2153/aba947.3
216
+
217
+ [18] H. L. Morgan. The generation of a unique machine description for chemical structures-a technique developed at chemical abstracts service. Journal of Chemical Documentation, 5(2): 107-113, May 1965. ISSN 0021-9576. doi: 10.1021/c160017a018. URL https://doi.org/ 10.1021/c160017a018.3
218
+
219
+ [19] Rocío Mercado, Tobias Rastemo, Edvard Lindelöf, Günter Klambauer, Ola Engkvist, Hongming Chen, and Esben Jannik Bjerrum. Graph networks for molecular design. Machine Learning: Science and Technology, 2(2):025023, mar 2021. doi: 10.1088/2632-2153/abcf91. URL https://doi.org/10.1088/2632-2153/abcf91.3
220
+
221
+ [20] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, page 1263-1272. JMLR.org, 2017. 3
222
+
223
+ [21] Peter Battaglia, Jessica Blake Chandler Hamrick, Victor Bapst, Alvaro Sanchez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andy Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Jayne Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pas-canu. Relational inductive biases, deep learning, and graph networks. arXiv, 2018. URL https://arxiv.org/pdf/1806.01261.pdf.3
224
+
225
+ [22] Mengying Sun, Fei Wang, Olivier Elemento, and Jiayu Zhou. Structure-based drug-drug interaction detection via expressive graph convolutional networks and deep sets (student abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 34(10):13927-13928, Apr. 2020. doi: 10.1609/aaai.v34i10.7236. URL https://ojs.aaai.org/index.php/AAAI/ article/view/7236. 4,8
226
+
227
+ [23] Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR, 2017. 4
228
+
229
+ [24] Jinxian Wang, Xuejun Liu, Siyuan Shen, Lei Deng, and Hui Liu. Deepdds: deep graph neural network with attention mechanism to predict synergistic drug combinations. bioRxiv, 2021. doi: 10.1101/2021.04.06.438723. URL https://www.biorxiv.org/content/early/2021/ 07/06/2021.04.06.438723.4,8
230
+
231
+ [25] Xusheng Cao, Rui Fan, and Wanwen Zeng. Deepdrug: A general graph-based deep learning framework for drug relation prediction. bioRxiv, 2020. doi: 10.1101/2020.11.09.375626. URL https://www.biorxiv.org/content/early/2020/11/10/2020.11.09.375626.4,8
232
+
233
+ [26] Kristina Preuer, Richard P I Lewis, Sepp Hochreiter, Andreas Bender, Krishna C Bulusu, and Günter Klambauer. DeepSynergy: predicting anti-cancer drug synergy with Deep Learning. Bioinformatics, 34(9):1538-1546, 12 2017. ISSN 1367-4803. doi: 10.1093/bioinformatics/ btx806. URL https://doi.org/10.1093/bioinformatics/btx806.4, 8
234
+
235
+ [27] Halil Ibrahim Kuru, Oznur Tastan, and A. Ercument Cicek. Matchmaker: A deep learning framework for drug synergy prediction. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 19(4):2334-2344, 2022. doi: 10.1109/TCBB.2021.3086702. 4, 8
236
+
237
+ [28] Paul Scherer and Pietro Liò. Learning distributed representations of graphs with geo2dr. Graph Representation Learning and Beyond Workshop (ICML'20), 2020. 4, 5, 6
238
+
239
+ [29] Pinar Yanardag and S.V.N. Vishwanathan. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD'15, pages 1365-1374, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3664-2. doi: 10.1145/ 2783258.2783417. URL http://doi.acm.org/10.1145/2783258.2783417.4, 5, 6
240
+
241
+ [30] Robert W. Floyd. Algorithm 97: Shortest path. Commun. ACM, 5(6):345, jun 1962. ISSN 0001- 0782. doi: 10.1145/367766.368168. URL https://doi.org/10.1145/367766.368168.5, 6
242
+
243
+ [31] Stephen Warshall. A theorem on boolean matrices. J. ACM, 9(1):11-12, jan 1962. ISSN 0004- 5411. doi: 10.1145/321105.321107. URL https://doi.org/10.1145/321105.321107.
244
+
245
+ [32] P. Z. Ingerman. Algorithm 141: Path matrix. Commun. ACM, 5(11):556, nov 1962. ISSN 0001- 0782. doi: 10.1145/368996.369016. URL https://doi.org/10.1145/368996.369016.5, 6
246
+
247
+ [33] B. Weisfeiler and A. A. Lehman. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, 9(9):12-16, 1968. 5, 6
248
+
249
+ [34] S. V. N. Vishwanathan, Nicol N. Schraudolph, Risi Kondor, and Karsten M. Borgwardt. Graph kernels. Journal of Machine Learning Research, 11:1201-1242, 2010. ISSN 1532-4435. 5, 6
250
+
251
+ [35] Zellig S. Harris. Distributional structure. WORD, 10(2-3):146-162, 1954. doi: 10.1080/ 00437956.1954.11659520.5
252
+
253
+ [36] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Workshop Track Proceedings, 2013. 5
254
+
255
+ [37] Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML'14, pages II-1188-II-1196. JMLR.org, 2014. URL http: //dl.acm.org/citation.cfm?id=3044805.3045025.
256
+
257
+ [38] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, 2014. URL http://www.aclweb.org/anthology/D14-1162.5
258
+
259
+ [39] Yoav Goldberg and Omer Levy. word2vec explained: deriving mikolov et al.'s negative-sampling word-embedding method. CoRR, abs/1402.3722, 2014. URL http://arxiv.org/ abs/1402.3722.5
260
+
261
+ [40] Annamalai Narayanan, Mahinthan Chandramohan, Rajasekar Venkatesan, Lihui Chen, Yang Liu, and Shantanu Jaiswal. graph2vec: Learning distributed representations of graphs. CoRR, abs/1707.05005, 2017. 5
262
+
263
+ [41] Sergey Ivanov and Evgeny Burnaev. Anonymous walk embeddings. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2191-2200, Stock-holmsmässan, Stockholm Sweden, 2018. PMLR. URL http://proceedings.mlr.press/ v80/ivanov18a.html. 5
264
+
265
+ [42] Paul Bertin, Jarrid Rector-Brooks, Deepak Sharma, Thomas Gaudelet, Andrew Anighoro, Torsten Gross, Francisco Martinez-Pena, Eileen L Tang, Cristian Regep, Jeremy Hayter, et al. Recover: sequential model optimization platform for combination drug repurposing identifies novel synergistic compounds in vitro. arXiv preprint arXiv:2202.04202, 2022. 6
266
+
267
+ [43] Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M. Borgwardt. Weisfeiler-lehman graph kernels. J. Mach. Learn. Res., 12:2539-2561, 2011. ISSN 1532-4435. 6
268
+
269
+ [44] Zhaocheng Zhu, Chence Shi, Zuobai Zhang, Shengchao Liu, Minghao Xu, Xinyu Yuan, Yang-tian Zhang, Junkun Chen, Huiyu Cai, Jiarui Lu, Chang Ma, Runcheng Liu, Louis-Pascal Xhonneux, Meng Qu, and Jian Tang. Torchdrug: A powerful and flexible machine learning platform for drug discovery. arXiv preprint arXiv:2202.08320, 2022. 6
270
+
271
+ [45] Greg Landrum. Rdkit: Open-source cheminformatics software. 2016. URL https://github.com/rdkit/rdkit/releases/tag/Release_2022_03_5.6
272
+
273
+ [46] Daniel A. Schult. Exploring network structure, dynamics, and function using networkx. In In Proceedings of the 7th Python in Science Conference (SciPy), pages 11-15, 2008. 6
274
+
275
+ [47] Benedek Rozemberczki, Charles Tapley Hoyt, Anna Gogleva, Piotr Grabowski, Klas Karis, Andrej Lamov, Andriy Nikolov, Sebastian Nilsson, Michael Ughetto, Yu Wang, Tyler Derr, and Benjamin M. Gyori. Chemicalx: A deep learning library for drug pair scoring. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '22, page 3819-3828, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450393850. doi: 10.1145/3534678.3539023. URL https://doi.org/10.1145/ 3534678.3539023.8
276
+
277
+ [48] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 12 2014. 8
278
+
279
+ ![01963ed1-a97f-77ec-ba2e-980c0c3b5910_13_311_199_1173_259_0.jpg](images/01963ed1-a97f-77ec-ba2e-980c0c3b5910_13_311_199_1173_259_0.jpg)
280
+
281
+ Figure 2: Test ROC AUC performance of DeepSynergy-DR (SP and WL k=3) across the drug pair scoring datasets. Performance is recorded with respect to the embedding dimensionality chosen in the learning of the distributed representations of graphs.
282
+
283
+ ![01963ed1-a97f-77ec-ba2e-980c0c3b5910_13_309_606_1173_262_0.jpg](images/01963ed1-a97f-77ec-ba2e-980c0c3b5910_13_309_606_1173_262_0.jpg)
284
+
285
+ Figure 3: Test ROC AUC performance of DeepSynergy-DR (SP and WL k=3) across the drug pair scoring datasets. Performance is recorded with respect to the number of training epochs chosen in the learning of the distributed representations of graphs.
286
+
287
+ ## A Ablation study over the two hyperparameters in learning distributed representations
288
+
289
290
+
291
+ The introduction of the distributed representations comes with two hyperparameters which may affect their downstream performance when incorporated into the drug pair scoring models. These user-specified hyperparameters are: (i) the dimensionality of the drug embeddings and (ii) the number of epochs for which the Skipgram model is trained. We study the effect of the embedding dimensionality on downstream performance by setting the number of training epochs to 1000 and varying the dimensionality from 8 to 1024 following powers of 2. For our downstream drug pair scoring model we use DeepSynergy-DR whilst keeping the same hyperparameter settings as in our comparative analysis described in Section 4. Similarly, for studying the effect of training epochs we set the dimensionality of the embeddings at 64 and observe the downstream performance of the drug pair scoring model (with its own training epochs set at 250 as before) across a range of values (from 200 to 2000, in steps of 200). In both cases, we perform 5 repeated runs to obtain empirical confidence intervals in the plots shown in Figures 2 and 3.
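The sweep can be sketched as below; `run_downstream` is a hypothetical placeholder that trains DeepSynergy-DR for one setting and returns its test ROC AUC, and the grids mirror the ranges described above.

```python
# Sketch of the two ablation sweeps; run_downstream is a placeholder callable.
def ablation(run_downstream, seeds=range(5)):
    dims = [2 ** p for p in range(3, 11)]                  # 8, 16, ..., 1024
    epoch_grid = range(200, 2001, 200)                     # 200, 400, ..., 2000
    dim_results = {d: [run_downstream(dim=d, skipgram_epochs=1000, seed=s)
                       for s in seeds] for d in dims}
    epoch_results = {e: [run_downstream(dim=64, skipgram_epochs=e, seed=s)
                         for s in seeds] for e in epoch_grid}
    return dim_results, epoch_results
```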
292
+
293
+ ### A.1 Dimensionality of distributed representations
294
+
295
+ The plots in Figure 2 summarise the effects of changing the dimensionality of the drug embeddings on downstream drug pair scoring performance with the DeepSynergy-DR model. Across the datasets as well as the substructure patterns we observe there is little change in the downstream performance as the dimensionality increases from 8 to 1024. Furthermore, the resulting downstream performance is robust against these changes, with small confidence regions in the plots. Both observations suggest that the Skipgram model is effective in producing consistent drug Gram matrices and capturing salient distributional context information within the embeddings. A Pearson correlation coefficient of 0.331 (p-value: 0.0097) and 0.584 (p-value: ${9.873} \times {10}^{-7}$ ) across substructure patterns on DrugCombDB and TwoSides respectively indicates a statistically significant upward correlation between performance and dimensionality. Conversely, we find a downward Pearson correlation coefficient of -0.573 (p-value: ${1.692} \times {10}^{-6}$ ) on DrugComb. There is no statistically significant trend (at the $p \leq {0.05}$ level) on DrugbankDDI. Despite the observed upward trends in performance on DrugCombDB and TwoSides we do not recommend a high embedding dimensionality, as we expect an inevitable decrease in performance due to the curse of dimensionality. Hence, we suggest a more moderate choice, on par with the dimensionality of the other features in the drug feature set, as performance is generally stable across the range of dimensionalities. The next ablation study examines how this varies with the number of training epochs.
296
+
297
+ Table 3: A breakdown of the hyperparameters in each of the drug pair scoring models. Note that these are the same for each of the augmented versions with distributed representations that we propose.
298
+
299
+ <table><tr><td>Model</td><td>Hyperparameter</td><td>Values</td></tr><tr><td rowspan="3">DeepSynergy</td><td>Drug encoder channels</td><td>128</td></tr><tr><td>Context encoder channels</td><td>128</td></tr><tr><td>Hidden layer channels</td><td>(32, 32, 32)</td></tr><tr><td rowspan="2">EPGCN-DS</td><td>Drug encoder channels</td><td>128</td></tr><tr><td>Hidden layer channels</td><td>(32, 32)</td></tr><tr><td rowspan="2">DeepDrug</td><td>Drug encoder channels</td><td>(32,32,32,32)</td></tr><tr><td>Hidden layer channels</td><td>64</td></tr><tr><td rowspan="2">DeepDDS</td><td>Context encoder channels</td><td>(512, 256, 128)</td></tr><tr><td>Hidden layer channels</td><td>(512, 128)</td></tr><tr><td rowspan="2">MatchMaker</td><td>Drug encoder channels</td><td>(32, 32)</td></tr><tr><td>Hidden layer channels</td><td>(64, 32)</td></tr></table>
300
+
301
+ ### A.2 Number of training epochs for distributed representations
302
+
303
+ The plots in Figure 3 summarise the effects of changing the number of epochs used in training the Skipgram model for a fixed embedding dimensionality of 64. The plots report the downstream test ROC AUC performance achieved with the DeepSynergy-DR model. As before, we see that across datasets and induced substructure patterns the downstream performance is not affected strongly, except when the number of training epochs is exceptionally low, for obvious optimisation reasons. The small confidence bands indicate small variability between different runs. A Pearson correlation coefficient of 0.197 (p-value: 0.049) for DrugbankDDI and 0.473 (p-value: ${6.597} \times {10}^{-7}$ ) for TwoSides across substructure patterns indicates a slight but statistically significant upward trend in performance as the number of training epochs increases. DrugCombDB and DrugComb do not show any statistically significant correlations with regard to training epochs, but are generally stable regardless. Hence we may generally suggest that more rigorous training regimes for learning the distributed representations are favourable in drug pair scoring tasks.
304
+
305
+ ## B Hyperparameters for the drug pair scoring models
306
+
307
+ Table 3 summarises the architectural hyperparameters of the drug pair scoring models utilised in this study. Note that these hyperparameters are the same for the DR-augmented versions of these models, as the augmentation only affects the input sizes of the drug encoders (or of the decoders in the case of EPGCN-DS-DR and DeepDrug-DR).
papers/LOG/LOG 2022/LOG 2022 Conference/IP-TISJqfq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,248 @@
1
+ § DISTRIBUTED REPRESENTATIONS OF GRAPHS FOR DRUG PAIR SCORING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ In this paper we study the practicality and usefulness of incorporating distributed representations of graphs into models within the context of drug pair scoring. We argue that the real world growth and update cycles of drug pair scoring datasets subvert the limitations of transductive learning associated with distributed representations. Furthermore, we argue that the vocabulary of discrete substructure patterns induced over drug sets is not dramatically large due to the limited set of atom types and constraints on bonding patterns enforced by chemistry. Under this pretext, we explore the effectiveness of distributed representations of the molecular graphs of drugs in drug pair scoring tasks such as drug synergy, polypharmacy, and drug-drug interaction prediction. To achieve this, we present a methodology for learning and incorporating distributed representations of graphs within a unified framework for drug pair scoring. Subsequently, we augment a number of recent and state-of-the-art models to utilise our embeddings. We empirically show that the incorporation of these embeddings improves downstream performance of almost every model across different drug pair scoring tasks, even those the original model was not designed for. We publicly release all of our drug embeddings for the DrugCombDB, DrugComb, DrugbankDDI, and TwoSides datasets.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Recent advancements in graph representation learning (GRL) - particularly in message passing based graph neural networks - have enabled new ways of modelling natural phenomena and tackling learning tasks on graph structured data. One of the areas which now sees application of graph neural networks is drug pair scoring [1]. Drug pair scoring refers to the prediction tasks that answer questions about the consequences of administering a pair of drugs at the same time, such as drug synergy prediction, polypharmacy prediction, and predicting drug-drug interaction types, which are of great interest in the treatment of diseases. One of the primary challenges in elucidating and discovering the effects of drug combinations is the dramatically growing combinatorial space of drug pairs. Furthermore, reliance on human trials (in polypharmacy) and proneness to human error [2] make manual/experimental discovery of useful drug combinations difficult, even before considering the prohibitive financial and labour costs that restrict such discovery to small sets of drugs. Such conditions make in silico modelling of drug combinations an attractive solution.
16
+
17
+ A key component to modelling drug pairs is finding useful representations of the drugs to input into the drug pair scoring models. Traditional supervised machine learning methods for drug pair scoring rely on carefully crafted descriptors such as MDL descriptor keysets [3] and fingerprinting techniques such as Morgan fingerprinting [4]. More recently, graph neural network layers and permutation invariant pooling operators have enabled inputting the molecular graphs of drugs directly to learn task oriented representations in an end-to-end manner. Interestingly, graph kernel techniques, and specifically distributed representations of graphs, were not considered at all for inclusion in drug pair scoring pipelines to the best of our knowledge. We can only speculate about the reasons for this, such as publication biases, or their limitation of not using node feature vectors and their transductive nature, which have made these approaches less appropriate for settings with rich/continuous node features and dynamic graphs $\left\lbrack {5,6}\right\rbrack$ .
18
+
19
+ However, we will argue that the transductive learning of distributed representations is hardly a limitation in the context of drug pair scoring tasks in Section 3.2. This is primarily because we are learning the representations of the drugs, whose number in the real world rises on a timescale of many years and only with immense investment $\left\lbrack {7,8}\right\rbrack$ . Furthermore, as the set of atom types and bonding patterns of drugs is strictly constrained by the rules of chemistry, the number of generic substructure patterns that may be induced over the molecular graphs of a drug set is much smaller than the theoretically possible set of combinations. Additionally, as the self supervised learning objective is agnostic to the downstream task, the drug embeddings may be transferred trivially, making distributed representations an attractive modelling proposition for representation learning of structural patterns for drug pair scoring.
20
+
21
+ Under this pretext our research questions are: "How can we learn and then incorporate the distributed representations of the drugs into drug pair scoring pipelines?" and "Are distributed representations of graphs useful in drug pair scoring tasks?". To answer these questions we describe a methodology for learning distributed representations of graphs and their inclusion within a unified framework applicable to all drug pair scoring tasks in Section 3. Subsequently, we create a simple MLP model based solely on the distributed representations of the drugs and show that this performs considerably better than random, suggesting the usefulness of discrete substructure affinities of the drugs in drug pair scoring. Building upon this, we augment a number of recent and state-of-the-art models for drug pair scoring tasks to utilise our drug embeddings. Empirical results in Section 5 show that the incorporation of the distributed representations improves the performance of almost every model across synergy, polypharmacy, and drug interaction prediction tasks. To the best of our knowledge this is the first application and study of distributed representations of molecular drug graphs for drug pair scoring tasks. To help further research and inclusion of these distributed representations we publicly release all of the drug representations as learned and utilised in this study.
22
+
23
+ To summarise, our contributions are as follows:
24
+
25
+ * We show that learning distributed representations of graphs as a source of additional features is reasonable within drug pair scoring pipelines.
26
+
27
+ * We present a generic methodology for learning various distributed representations of the molecular graphs of the drugs and incorporating these into machine learning pipelines for drug pair scoring.
28
+
29
+ * We augment state-of-the-art models for drug synergy, polypharmacy, and drug interaction prediction and improve their performance through the use of distributed drug representations across tasks; even tasks they were not originally designed for.
30
+
31
+ * We publicly release all of the drug embeddings for DrugCombDB [2], DrugComb [9, 10], DrugbankDDI [11], and TwoSides [12] datasets as utilised in this study with the accompanying code for generating more.
32
+
33
+ § 2 BACKGROUND AND RELATED WORK
34
+
35
+ In drug pair scoring tasks we are concerned with learning a function which predicts scores for pairs of drugs in a biological or chemical context. Naturally, within the domain of deep learning this learned function takes on the form of a neural network. Drug pair scoring has three main applications, each with a question which models are designed to answer [1]:
36
+
37
+ * Inferring drug synergy: Do drugs $i$ and $j$ have a synergistic effect on treatment of disease $k$ ?
38
+
39
+ * Inferring polypharmacy side effects: Does the simultaneous use of drugs $i$ and $j$ have a propensity for causing side effect $k$ ?
40
+
41
+ * Inferring drug-drug interaction types: Do drugs $i$ and $j$ have a $k$ type interaction?
42
+
43
+ § 2.1 UNIFIED FRAMEWORK FOR DRUG PAIR SCORING
44
+
45
+ The machine learning tasks born out of the questions above can be generalised and formalised with a unified view of drug pair scoring described in Rozemberczki et al. [1]. We briefly reiterate this framework below to build upon in our proposed work in the next section.
46
+
47
+ Assume there is a set of $n$ drugs $\mathcal{D} = \left\{ {{d}_{1},{d}_{2},\ldots ,{d}_{n}}\right\}$ for which we know the chemical structure of molecules and a set of classes $\mathcal{C} = \left\{ {{c}_{1},{c}_{2},\ldots ,{c}_{p}}\right\}$ that provides information on the contexts under which a drug pair can be administered.
48
+
49
+ A drug feature set is the set of tuples $\left( {{\mathbf{x}}^{d},{\mathcal{G}}^{d},{\mathbf{X}}_{N}^{d},{\mathbf{X}}_{E}^{d}}\right) \in {\mathcal{X}}_{\mathcal{D}},\forall d \in \mathcal{D}$ , where ${\mathbf{x}}^{d}$ is the molecular feature vector, ${\mathcal{G}}^{d}$ is the molecular graph of the drug, ${\mathbf{X}}_{N}^{d}$ is the node/atom feature matrix and ${\mathbf{X}}_{E}^{d}$ the edge/bond feature matrix. In this setup, drugs can be attributed with 4 types of information: (i) Molecular features which give high-level information about the molecules such as measures of charge. (ii) The molecular graph in which nodes are atoms and edges describe bonding patterns. (iii) Node features in the molecular graph can give us information such as the type of atom or whether it is in a ring. (iv) Edge features which can provide context such as the type of bond that exists between atoms in the molecule.
50
+
51
+ A context feature set is the set of context feature vectors ${\mathbf{x}}^{c} \in {\mathcal{X}}_{\mathcal{C}},\forall c \in \mathcal{C}$ associated with the context classes $\mathcal{C}$ . This set allows for making context-specific predictions that take into account the similarity of the contexts. For example, in a synergy prediction scenario the context features can describe the gene expressions in a targeted cancer cell.
52
+
53
+ The labeled drug-pair and context triple set is a set of tuples $\left( {d,{d}^{\prime },c,{y}^{d,{d}^{\prime },c}}\right) \in \mathcal{Y}$ where $d,{d}^{\prime } \in \mathcal{D}$ , $c \in \mathcal{C}$ and ${y}^{d,{d}^{\prime },c} \in \{ 0,1\}$ . This set of observations associates a drug pair within a specific biological or chemical context with a binary target. This target could specify whether a pair of drugs is synergistic in terminating a cancer cell type or have a certain drug-drug interaction type. Naturally, it is also common to have continuous targets ${y}^{d,{d}^{\prime },c} \in \mathbb{R}$ . The machine learning practitioner is tasked with constructing predictive models $f\left( \cdot \right)$ such that ${\widehat{y}}^{d,{d}^{\prime },c} = f\left( {d,{d}^{\prime },c}\right)$ for these drug-pair context observations.
54
+
55
+ § 2.2 REPRESENTATIONS FOR DRUGS
56
+
57
+ A major source of research interest is the study and development of drug feature vectors and representations as they form inputs into various drug learning tasks. In our case these form integral parts of the molecular feature vector ${\mathbf{x}}^{d}$ in the drug feature set (see section 2.1) often arising from the molecular graph of the drugs.
58
+
59
+ Two dimensional representations and diagrams of the structure of molecules are often used as a convenient representation for their 3-dimensional structures and electrostatic properties that give rise to their biological activities. Whilst this abstraction is useful for communication in person, technical limitations drove the development of linear string based representations including SMILES [13] and InChI [14] which are present across many popular chemical information systems today. Language models have been applied onto such molecular strings to learn embeddings such as in Bombarelli et al. [15] which utilises the SMILES strings within a VAE framework to sample low dimensional continuous vector representations of the drugs. The success of this inspired similar work such as DeepSMILES [16] and SELFIES [17].
60
+
61
+ Two dimensional graph structures have been used before to generate discrete bag-of-words type feature vectors of molecules based on the presence of a specified vocabulary of descriptive substructures as in Morgan's work in 1965 [18]. Subsequent years saw efforts in finding different descriptive properties within the molecule structures or optimising existing sets of descriptive substructures such as in Durant et al. [3] which optimised the set of substructure based 2D descriptors from MDL keysets for drug discovery pipelines. The use of molecular fingerprints such as Morgan/Circular fingerprints [4] continues this branch of constructing descriptors and kernels for molecules. Concurrent efforts recently focus on end-to-end neural models involving graph neural network operators [1, 19]. Here graph neural networks operate over the molecular graph of the drug such that atoms are treated as nodes and bonds are the edges. Node level representations are updated through a series of message passing layers as in Equation 1 as described in Gilmer et al. [20] and Battaglia et al. [21].
62
+
63
+ $$
64
+ {\mathbf{h}}_{i}^{l} = \phi \left( {{\mathbf{h}}_{i}^{l - 1},{\bigoplus }_{j \in {\mathcal{N}}_{i}}\psi \left( {{\mathbf{h}}_{i}^{l - 1},{\mathbf{h}}_{j}^{l - 1}}\right) }\right) \tag{1}
65
+ $$
66
+
67
+ Here ${\mathbf{h}}_{i}^{l}$ is the $l$ th layer representation of the features associated with node $i$ (in our context these would be atom features arising from message passing using ${\mathbf{X}}_{N}^{d}$ and ${\mathbf{X}}_{E}^{d}$ ). ${\mathbf{h}}_{i}^{l}$ is the output of the local permutation invariant function composed of node $i$ ’s previous feature representation ${\mathbf{h}}_{i}^{l - 1}$ and its neighbours $j \in {\mathcal{N}}_{i}$ , with $\psi \left( {{\mathbf{h}}_{i}^{l - 1},{\mathbf{h}}_{j}^{l - 1}}\right)$ being the message computed via the function $\psi$ and $\bigoplus$ being some permutation invariant aggregation of the messages such as a sum, product, or average. $\phi$ and $\psi$ are typically neural networks. Subsequently, the node level representations are aggregated via permutation invariant pooling operations to form graph-level drug representations. For example, the EPGCN-DS model [22] utilises GCN layers [23] to produce higher level node representations of the atoms in the molecular graphs. The drug representations are then computed via a mean aggregation of the node representations. Such operators have become prevalent in recent proposals of drug pair scoring models, with the primary distinction being the form of $\psi$ in the message passing layers [1, 22, 24, 25].
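To make Equation 1 concrete, the following is a minimal PyTorch sketch (an illustrative assumption on our part, not the implementation used by any of the cited models) of one message passing layer with a sum aggregator for $\bigoplus$ , small MLPs for $\psi$ and $\phi$ , and a mean readout of the kind used in EPGCN-DS-style encoders. A dense adjacency matrix is assumed for brevity.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One layer of Equation 1: psi builds messages from pairs of node states,
    phi updates each node from its previous state and the aggregated messages."""
    def __init__(self, dim):
        super().__init__()
        self.psi = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.phi = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, adj):                       # h: (n, dim), adj: (n, n) in {0, 1}
        n, dim = h.shape
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, dim),
                           h.unsqueeze(0).expand(n, n, dim)], dim=-1)
        msgs = self.psi(pairs) * adj.unsqueeze(-1)   # mask out non-edges
        agg = msgs.sum(dim=1)                        # permutation invariant sum over neighbours
        return self.phi(torch.cat([h, agg], dim=-1))

def readout(h):
    """Mean pooling of node states into a single graph-level drug representation."""
    return h.mean(dim=0)

# toy usage with random atom features and a symmetric adjacency matrix
h = torch.rand(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.T) > 0).float()
drug_repr = readout(MessagePassingLayer(16)(h, adj))
```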
68
+
69
+ Our proposed system lies somewhere in between and in parallel to these efforts. We learn low dimensional continuous distributed representations (described in Section 3.1) of the drugs within the drug pair scoring dataset. These form additional drug features that can be utilised in augmented versions of existing drug pair scoring models. To the best of our knowledge this is the first application of distributed representations of drugs within drug pair scoring.
70
+
71
+ § 2.3 NEURAL MODELS FOR DRUG PAIR SCORING
72
+
73
+ All recent neural models for drug pair scoring can be described with an encoder-decoder framework typically involving 3 parametric functions: (i) a drug encoder, (ii) an encoder for contextual features, and (iii) a decoder which infers the target value. We describe each component below, followed by how some state-of-the-art models can be instantiated out of this framework. A more thorough treatment of this can be found in Rozemberczki et al. [1].
74
+
75
+ The drug encoder is the parametric function ${f}_{{\theta }_{D}}\left( \cdot \right)$ in Equation 2 that takes the drug feature set as input and produces a vector representation of the drug $d$ called ${\mathbf{h}}^{d}$ . ${f}_{{\theta }_{D}}\left( \cdot \right)$ maps the molecular features of the drug into a low dimensional vector space; it can incorporate various neural operators such as feed forward multi-layer perceptron layers as in DeepSynergy [26] and MatchMaker [27] or graph neural network layers as in DeepDDS [24] and DeepDrug [25]. Differences in the architecture of the encoder, such as the flavour of message passing network, are typically the main differentiator between existing methods.
76
+
77
+ $$
78
+ {\mathbf{h}}^{d} = {f}_{{\theta }_{D}}\left( {{\mathbf{x}}^{d},{\mathcal{G}}^{d},{\mathbf{X}}_{N}^{d},{\mathbf{X}}_{E}^{d}}\right) ,\forall d \in \mathcal{D} \tag{2}
79
+ $$
80
+
81
+ The context encoder ${f}_{{\theta }_{C}}\left( \cdot \right)$ in Equation 3 is a neural network that outputs a low dimensional representation of the contextual feature set ${\mathbf{x}}^{c}$ . This component does not feature in all of the models we will discuss but plays a prominent part in DeepSynergy [26], MatchMaker [27], and DeepDDS [24].
82
+
83
+ $$
84
+ {\mathbf{h}}^{c} = {f}_{{\theta }_{\mathcal{C}}}\left( {\mathbf{x}}^{c}\right) ,\forall c \in \mathcal{C} \tag{3}
85
+ $$
86
+
87
+ Finally the decoder or head of the model ${f}_{{\theta }_{H}}\left( \cdot \right)$ in Equation 4 combines the outputs of the drug and context encoders $\left( {{\mathbf{h}}^{d},{\mathbf{h}}^{{d}^{\prime }},{\mathbf{h}}^{c}}\right)$ and outputs the predicted probability for a positive label for the drug-pair context triple ${\widehat{y}}^{d,{d}^{\prime },c}$ .
88
+
89
+ $$
90
+ {\widehat{y}}^{d,{d}^{\prime },c} = {f}_{{\theta }_{H}}\left( {{\mathbf{h}}^{d},{\mathbf{h}}^{{d}^{\prime }},{\mathbf{h}}^{c}}\right) ,\forall d,{d}^{\prime } \in \mathcal{D},\forall c \in \mathcal{C} \tag{4}
91
+ $$
92
+
93
+ Training the models in the framework described involves minimising the binary cross entropy for the binary targets or mean absolute error for regression targets with respect to the ${\theta }_{D},{\theta }_{C}$ , and ${\theta }_{H}$ parameters using gradient descent algorithms.
94
+
95
+ $$
96
+ \mathcal{L} = \mathop{\sum }\limits_{{\left( {d,{d}^{\prime },c,{y}^{d,{d}^{\prime },c}}\right) \in \mathcal{Y}}}l\left( {{\widehat{y}}^{d,{d}^{\prime },c},{y}^{d,{d}^{\prime },c}}\right) \tag{5}
97
+ $$
98
+
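As an illustration of Equations 2-5, the sketch below is a hypothetical, simplified skeleton (not any of the cited architectures): MLP encoders over a precomputed drug feature vector ${\mathbf{x}}^{d}$ and a context feature vector ${\mathbf{x}}^{c}$ , a decoder head producing the probability of a positive label, and a binary cross-entropy training step.

```python
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    """Hypothetical encoder-decoder skeleton of Equations 2-4 with MLP encoders."""
    def __init__(self, drug_dim=256, ctx_dim=64, hidden=128):
        super().__init__()
        self.drug_enc = nn.Sequential(nn.Linear(drug_dim, hidden), nn.ReLU())
        self.ctx_enc = nn.Sequential(nn.Linear(ctx_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x_d, x_d_prime, x_c):
        h = torch.cat([self.drug_enc(x_d), self.drug_enc(x_d_prime),
                       self.ctx_enc(x_c)], dim=-1)
        return self.head(h)

# one training step for Equation 5 with binary targets (binary cross-entropy)
model, loss_fn = PairScorer(), nn.BCELoss()
x_d, x_dp, x_c = torch.rand(32, 256), torch.rand(32, 256), torch.rand(32, 64)
y = torch.randint(0, 2, (32, 1)).float()
loss_fn(model(x_d, x_dp, x_c), y).backward()
```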
99
+ § 3 STUDY AND METHODS
100
+
101
+ § 3.1 DISTRIBUTED REPRESENTATIONS OF GRAPHS
102
+
103
+ We adopt the framework of Scherer and Liò [28] for describing distributed representations of graphs based on the R-Convolutional framework for graph kernels [29]. Given a set of $n$ molecular graphs for the drugs in the dataset $\mathbb{G} = \left\{ {{\mathcal{G}}_{1},{\mathcal{G}}_{2},\ldots ,{\mathcal{G}}_{n}}\right\}$ one can induce discrete substructure patterns such as shortest paths, rooted subgraphs, graphlets, etc. as side effects of algorithms such as Floyd-Warshall [30-32] or the Weisfeiler-Lehman graph isomorphism test [33]. This can be used to produce pattern frequency vectors $X = \left\{ {{x}_{1},{x}_{2},\ldots ,{x}_{n}}\right\}$ describing the occurrence frequency of substructure patterns for every graph over a shared vocabulary $\mathbb{V}$ . $\mathbb{V}$ is the set of unique substructure patterns induced over all graphs ${\mathcal{G}}_{i} \in \mathbb{G}$ .
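This induction step can be made concrete with a short, self-contained sketch (illustrative only; the node labels and adjacency lists are assumed inputs, not the exact implementation used here): rooted-subtree patterns up to depth $k$ are obtained by iteratively relabelling each node with a hash of its own label and the multiset of its neighbours' labels.

```python
from collections import Counter

def wl_patterns(node_labels, adj, k=2):
    """Induce Weisfeiler-Lehman rooted-subtree patterns up to depth k and return
    the pattern frequency Counter (a bag-of-patterns) for one graph."""
    labels = list(node_labels)                 # e.g. atom symbols at depth 0
    patterns = Counter(labels)
    for _ in range(k):
        labels = [
            f"{labels[v]}|{'.'.join(sorted(labels[u] for u in adj[v]))}"
            for v in range(len(labels))
        ]
        patterns.update(labels)
    return patterns

# toy molecular graph: propane C-C-C
freq = wl_patterns(["C", "C", "C"], {0: [1], 1: [0, 2], 2: [1]})
```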
104
+
105
+ Classically, one may directly use these pattern frequency vectors within standard machine learning algorithms or construct kernels to perform some task. This has been the approach taken by many state-of-the-art graph kernels in classification tasks [29, 34]. Unfortunately, as the number, complexity, and size of graphs in $\mathbb{G}$ increases, so does the number of induced substructure patterns - often dramatically [28, 29, 34]. This, in turn, causes the pattern frequency vectors of $X$ to be extremely sparse and high dimensional, both of which are detrimental to the performance of estimators. Furthermore, the high specificity of the patterns and the sparsity cause a phenomenon known as diagonal dominance across kernel matrices, wherein each graph becomes more similar to itself and dissimilar from others, degrading machine learning performance.
106
+
107
+ To address this issue it is possible to learn, in a self-supervised manner, dense and low dimensional distributed representations of graphs that are inductively biased to be similar when the graphs contain similar substructure patterns and dissimilar when they do not. To achieve this we need to construct a corpus $\mathcal{R}$ that details the target-context relationship between a graph and its induced substructure patterns. In the simplest form for graph level representation learning we can specify $\mathcal{R}$ as the set of tuples $\left( {{\mathcal{G}}_{i},{p}_{j}}\right) \in \mathcal{R}$ whenever ${p}_{j} \in \mathbb{V}$ and ${p}_{j} \in {\mathcal{G}}_{i}$ .
108
+
109
+ The corpus can then be used to learn embeddings via a method that incorporates Harris' distributive hypothesis [35] to learn the distributed representations. Methods such as Skipgram, CBOW, PV-DM, PV-DBOW, and GloVe are some examples of neural embedding methods that utilise this inductive bias [36-38]. In our study we implement Skipgram with negative sampling, which optimises the following objective function.
110
+
111
+ $$
112
+ \mathcal{L} = \mathop{\sum }\limits_{{{\mathcal{G}}_{i} \in \mathbb{G}}}\mathop{\sum }\limits_{{p \in \mathbb{V}}}\left| \left\{ {\left( {{\mathcal{G}}_{i},p}\right) \in \mathcal{R}}\right\} \right| \log \sigma \left( {{\Phi }_{i} \cdot {\mathcal{S}}_{p}}\right) + q \cdot {\mathbb{E}}_{{p}_{N} \sim {P}_{\mathcal{R}}}\left\lbrack {\log \sigma \left( {-{\Phi }_{i} \cdot {\mathcal{S}}_{{p}_{N}}}\right) }\right\rbrack \tag{6}
113
+ $$
114
+
115
+ Here $\Phi \in {\mathbb{R}}^{\left| \mathbb{G}\right| \times d}$ is the $d$ -dimensional matrix of graph embeddings we desire for the set of drug graphs $\mathbb{G}$ , and ${\mathbf{\Phi }}_{i}$ is the embedding for ${\mathcal{G}}_{i} \in \mathbb{G}$ . In a similar vein, $\mathcal{S} \in {\mathbb{R}}^{\left| \mathbb{V}\right| \times d}$ contains the $d$ -dimensional embeddings of the substructure patterns in the vocabulary $\mathbb{V}$ , such that ${\mathcal{S}}_{p}$ is the vector embedding corresponding to the substructure pattern $p$ . Whilst these substructure embeddings are also tuned during the optimisation of Equation 6, they are ultimately not used in our case, as we are interested in the drug embeddings. $q$ is the number of negative samples, with ${p}_{N}$ being the sampled context pattern drawn according to the empirical unigram distribution.
116
+
117
+ $$
118
+ {P}_{\mathcal{R}}\left( p\right) = \frac{\left| \left\{ \left( {\mathcal{G}}_{i},p\right) \in \mathcal{R} \mid {\mathcal{G}}_{i} \in \mathbb{G}\right\} \right| }{\left| \mathcal{R}\right| }
119
+ $$
120
+
121
+ The optimisation of the above objective creates the desired distributed representations in $\mathbf{\Phi }$ , in this case graph-level drug embeddings. These may be used as additional drug features in the drug feature set as we show in Section 3.3. The distributed representations benefit from having lower dimensionality than the pattern frequency vectors, in other words $\left| \mathbb{V}\right| \gg d$ , being non-sparse, and being inductively biased via the distributive hypothesis. A more thorough treatment of the distributive hypothesis and in-depth reading of the neural embedding methods in this family can be found in [35, 36, 39].
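As an illustrative sketch (not the exact training code), the objective in Equation 6 can be implemented with two embedding tables and negative patterns drawn from the empirical unigram distribution:

```python
import torch
import torch.nn as nn

class SkipgramNS(nn.Module):
    """Sketch of Skipgram with negative sampling over (graph, pattern) corpus pairs.
    Phi stores the graph (drug) embeddings we keep; S the pattern embeddings."""
    def __init__(self, n_graphs, n_patterns, dim=64):
        super().__init__()
        self.Phi = nn.Embedding(n_graphs, dim)
        self.S = nn.Embedding(n_patterns, dim)

    def forward(self, g_idx, p_idx, neg_idx):
        # g_idx, p_idx: (batch,) corpus pairs; neg_idx: (batch, q) negative patterns
        phi = self.Phi(g_idx)
        pos = torch.log(torch.sigmoid((phi * self.S(p_idx)).sum(-1)))
        neg = torch.log(torch.sigmoid(-(phi.unsqueeze(1) * self.S(neg_idx)).sum(-1))).sum(-1)
        return -(pos + neg).mean()   # minimise the negative of the objective

# negatives would be drawn from the empirical unigram distribution P_R, e.g.
# neg_idx = torch.multinomial(pattern_counts_float, q, replacement=True) per pair
model = SkipgramNS(n_graphs=100, n_patterns=1500)
loss = model(torch.tensor([0, 1]), torch.tensor([3, 7]), torch.randint(0, 1500, (2, 5)))
```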
122
+
123
+ Various instances of models for learning distributed representations of graphs following our description have been made, such as Graph2Vec [40], DGK-WL/SP/GK [29], and AWE [41]. These differ primarily in the type of substructure pattern induced over $\mathbb{G}$ . They have shown strong performance in graph classification tasks, often performing on par with modern graph neural networks despite using significantly fewer features and parameters. However, limitations such as the dependency on a set vocabulary and the inability to inductively infer representations for new subgraph patterns and new graphs (at least in their standard definitions), coupled with difficulty in scaling to large graphs with many millions of nodes, have led to less attention on these methods. We speculate this has led to developments of deep drug pair scoring models completely ignoring distributed representations of graphs as part of the pipeline.
124
+
125
+ Table 1: Table of dataset details containing information on the application domain, and summary statistics on the number of drugs, context types, and drug pair context triples. Additional columns highlight the number of unique substructure patterns found across the molecular graphs of the drugs in the dataset based on the substructure patterns induced. $\left| \mathcal{D}\right|$ is the number of unique drugs, $\left| \mathcal{C}\right|$ the number of unique contexts, and $\left| \mathcal{Y}\right|$ the number of labeled drug-drug context triples. The remaining columns indicate the number of unique substructure patterns found in the drugs with respect to the corresponding substructure patterns extracted: WL $\left( {k = 2}\right)$ for discrete rooted subtrees up to depth 2, WL $\left( {k = 3}\right)$ for rooted subgraphs up to depth 3, and the discrete shortest paths.
126
+
127
+ | Dataset | Task | $\vert \mathcal{D} \vert$ | $\vert \mathcal{C} \vert$ | $\vert \mathcal{Y} \vert$ | WL $(k=2)$ | WL $(k=3)$ | Shortest paths |
+ |---|---|---|---|---|---|---|---|
+ | DrugCombDB [2] | Synergy | 2956 | 112 | 191,391 | 70 | 1591 | 1310 |
+ | DrugComb [9, 10] | Synergy | 4146 | 288 | 659,333 | 70 | 1651 | 1432 |
+ | DrugbankDDI [11] | Interaction | 1706 | 86 | 383,496 | 74 | 1287 | 2710 |
+ | TwoSides [12] | Polypharmacy | 644 | 10 | 499,582 | 64 | 934 | 8070 |
144
+
145
+ § 3.2 ARGUING FOR THE USE OF DISTRIBUTED REPRESENTATIONS OF DRUGS IN DRUG PAIR SCORING PIPELINES
146
+
147
+ Here we show that the use of distributed representations of graphs to construct additional drug features is sensible in drug pair scoring tasks. As discussed in Section 2.1, a drug pair scoring model is tasked with learning the function $f\left( {d,{d}^{\prime },c}\right) = {y}^{d,{d}^{\prime },c}$ from the labelled drug-pair context triples in $\mathcal{Y}$ . Looking at the statistics of drug pair scoring datasets in Table 1, we can see that the number of drugs and contexts is far lower than the number of triple observations. The huge and complex combinatorial space of drug-pair contexts (without even considering dosage effects), as well as the time and cost associated with experimentally testing more triples, is a motivating factor for machine learning models. In practice, when such databases are updated it is through the addition of more labelled drug-pair context observations for better coverage [42]. The number of drugs considered rarely increases, as drugs require many years of development, clinical trials, massive investment and regulatory processes before they enter studies for application domains of drug pair scoring [7, 8].
148
+
149
+ Therefore we can argue that learning distributed representations of the molecular graphs of the drugs in drug pair scoring tasks is sensible. Importantly, the number of discrete substructure patterns grows with the number of unique drugs, not the number of drug-pair-context observations within the dataset. Hence, as long as the number of drugs stays the same, trained drug embeddings can be carried over to any model being trained over the drug-pair context triples with minimal augmentation, as we show in Section 3.3. To add further motivation, the number of discrete substructure patterns in the considered set of drugs is driven by the unique atom types and the substructure patterns arising out of the bonded atoms. The set of unique atom types is theoretically limited to the periodic table and is in practice a far smaller subset of it in drugs. Furthermore, the size of the molecular graphs tends to be considerably smaller than social network scale graphs and less random due to chemical bonding rules, hence the resulting substructure patterns are fewer and more informative, making them suitable descriptors in these settings [29, 34, 43].
150
+
151
+ § 3.3 INCORPORATING DISTRIBUTED REPRESENTATIONS OF GRAPHS INTO EXISTING DRUG PAIR SCORING PIPELINES
152
+
153
+ Through retrieval of the SMILES strings, we generated the molecular graphs for each of the drugs $\mathbb{G} = \left\{ {{\mathcal{G}}_{d} \mid d \in \mathcal{D}}\right\}$ using TorchDrug [44] and RDKit [45]. Given this set of graphs we considered two discrete substructure patterns to induce over the graphs. For the first substructure pattern we considered rooted subgraphs up to depth $k = 3$ . These may be induced as a side effect of the Weisfeiler-Lehman graph isomorphism test [33, 43]. The second substructure pattern we considered was the set of all shortest paths of the molecular graph, which may be induced using the Floyd-Warshall algorithm [30-32]. Both choices were made based on their completeness and the deterministic nature of their inducing algorithms, for which there are also fast implementations [28, 46].
154
+
155
+ In either case, the set of unique substructure patterns found across all molecular graphs in $\mathbb{G}$ gives us the molecular substructure vocabulary $\mathbb{V}$ . We construct a target-context corpus of the drugs ${\mathcal{R}}_{\mathcal{D}} = \left\{ {\left( {{\mathcal{G}}_{d},{p}_{j}}\right) \mid {\mathcal{G}}_{d} \in \mathbb{G},{p}_{j} \in {\mathcal{G}}_{d},{p}_{j} \in \mathbb{V}}\right\}$ . We use a Skipgram model with negative sampling to learn the desired drug embeddings, optimising the objective function in Equation 6.
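For illustration, a possible sketch of the shortest-path flavour of this preprocessing using RDKit and NetworkX follows; the exact preprocessing code is provided in the supplementary materials, so treat this as an assumption-laden outline.

```python
import networkx as nx
from rdkit import Chem

def shortest_path_patterns(smiles):
    """Build the molecular graph of a drug from its SMILES string and induce
    discrete shortest-path patterns (atom symbol sequences along each path)."""
    mol = Chem.MolFromSmiles(smiles)
    g = nx.Graph()
    for atom in mol.GetAtoms():
        g.add_node(atom.GetIdx(), label=atom.GetSymbol())
    for bond in mol.GetBonds():
        g.add_edge(bond.GetBeginAtomIdx(), bond.GetEndAtomIdx())
    patterns = set()
    for source, paths in nx.all_pairs_shortest_path(g):
        for target, path in paths.items():
            if source < target:                      # count each path once
                patterns.add("-".join(g.nodes[v]["label"] for v in path))
    return patterns

vocab_contribution = shortest_path_patterns("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
```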
156
+
157
158
+
159
+ Figure 1: A summary of the proposed pipeline for learning and utilising distributed representations of drugs for drug pair scoring. The pipeline consists of two main stages: the learning of the distributed representations and the augmentation of existing models to utilise the new drug embeddings $\mathbf{\Phi }$ which become part of the drug feature set described in Section 2.1. As the learning of the distributed representations is separate from the drug pair scoring task we may transfer the embeddings into the drug feature set of any existing drug pair scoring model without retraining.
160
+
161
+ After training and obtaining the distributed representations of drugs $\Phi$ we add the embeddings to the drug feature set $\left( {{\mathbf{x}}^{d},{\mathbf{\Phi }}^{d},{\mathcal{G}}^{d},{\mathbf{X}}_{N}^{d},{\mathbf{X}}_{E}^{d}}\right) \in {\mathcal{X}}_{\mathcal{D}},\forall d \in \mathcal{D}$ . The remaining task is to develop downstream models which utilise the distributed representations. As the self supervised learning of the distributed representations is separate from the learning for the drug pair scoring task, we may transfer the embeddings into any of the existing drug pair scoring models. A diagram of this workflow can be seen in Figure 1.
162
+
163
+ In order to validate the usefulness of the distributed representations we chose to extend existing drug pair scoring models from different application domains. As a sanity check to see whether the distributed representations carry any useful signal, we also implemented a simple MLP with three hidden layers based on DeepSynergy, called DROnly, which only utilises the embeddings learned. We took seminal models representing the state of the art and recent models containing graph neural networks that operate over the molecular graphs of the drugs. Each augmented model we propose takes the original name of the model and is suffixed with "DR" and the substructure pattern induced over the graphs (WL or SP for rooted subgraphs and shortest paths respectively). In most cases we simply concatenate the distributed representation of the first and second drug (drugs $\alpha$ and $\beta$ in Figure 1) to the corresponding molecular feature vectors being used in the model, as sketched below. In the case of EPGCN-DS-DR and DeepDrug-DR the left and right drug embeddings are concatenated to the outputs of the graph neural network drug encoders and fed into the decoder. All of the code for these models is available in our supplementary materials.
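The concatenation step itself is simple; the following is a minimal sketch with hypothetical names (the actual augmented models are in the supplementary materials).

```python
import torch

def augment_drug_features(x_d, drug_id, Phi, drug_index):
    """Hypothetical '-DR' augmentation: concatenate a drug's learned distributed
    representation onto its existing molecular feature vector."""
    return torch.cat([x_d, Phi[drug_index[drug_id]]], dim=-1)

# e.g. a 256-d Morgan fingerprint extended with a 64-d drug embedding
Phi = torch.rand(10, 64)                     # rows: learned drug embeddings
drug_index = {"drug_a": 0, "drug_b": 1}      # hypothetical id -> row mapping
x_aug = augment_drug_features(torch.rand(256), "drug_a", Phi, drug_index)
```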
164
+
165
+ Table 2: Table of results with information about the original drug pair scoring models, such as year of publication and their original application domains. We report the average AUROC on the hold-out test set with standard deviations from 5 seeded random splits. Bolded numbers indicate the best performing model for each dataset.
166
+
167
+ | Model | Year | Orig. application | DrugCombDB | DrugComb | DrugbankDDI | TwoSides |
+ |---|---|---|---|---|---|---|
+ | DeepSynergy [26] | 2018 | Synergy | 0.796 ± 0.010 | 0.739 ± 0.005 | 0.987 ± 0.001 | 0.933 ± 0.001 |
+ | EPGCN-DS [22] | 2020 | Interaction | 0.703 ± 0.006 | 0.623 ± 0.002 | 0.724 ± 0.002 | 0.809 ± 0.006 |
+ | DeepDrug [25] | 2020 | Interaction | 0.743 ± 0.001 | 0.648 ± 0.001 | 0.862 ± 0.002 | 0.926 ± 0.001 |
+ | DeepDDS [24] | 2021 | Synergy | 0.791 ± 0.005 | 0.697 ± 0.002 | 0.988 ± 0.001 | 0.944 ± 0.001 |
+ | MatchMaker [27] | 2021 | Synergy | 0.788 ± 0.002 | 0.720 ± 0.003 | 0.991 ± 0.001 | 0.928 ± 0.001 |
+ | DROnly (WL k=3) | Proposed | Not applicable | 0.763 ± 0.002 | 0.651 ± 0.002 | 0.809 ± 0.005 | 0.917 ± 0.002 |
+ | DROnly (SP) | Proposed | Not applicable | 0.711 ± 0.004 | 0.621 ± 0.002 | 0.710 ± 0.005 | 0.823 ± 0.005 |
+ | DeepSynergy-DR (WL k=3) | Proposed | Not applicable | **0.814 ± 0.004** | 0.738 ± 0.001 | 0.988 ± 0.000 | 0.934 ± 0.002 |
+ | DeepSynergy-DR (SP) | Proposed | Not applicable | 0.813 ± 0.003 | **0.740 ± 0.004** | 0.988 ± 0.001 | 0.935 ± 0.000 |
+ | EPGCN-DS-DR (WL k=3) | Proposed | Not applicable | 0.711 ± 0.002 | 0.627 ± 0.001 | 0.741 ± 0.004 | 0.822 ± 0.006 |
+ | EPGCN-DS-DR (SP) | Proposed | Not applicable | 0.704 ± 0.001 | 0.622 ± 0.001 | 0.730 ± 0.003 | 0.808 ± 0.002 |
+ | DeepDrug-DR (WL k=3) | Proposed | Not applicable | 0.743 ± 0.001 | 0.648 ± 0.001 | 0.863 ± 0.000 | 0.926 ± 0.001 |
+ | DeepDrug-DR (SP) | Proposed | Not applicable | 0.743 ± 0.000 | 0.648 ± 0.001 | 0.863 ± 0.001 | 0.926 ± 0.000 |
+ | DeepDDS-DR (WL k=3) | Proposed | Not applicable | 0.799 ± 0.004 | 0.700 ± 0.002 | 0.989 ± 0.000 | 0.944 ± 0.001 |
+ | DeepDDS-DR (SP) | Proposed | Not applicable | 0.790 ± 0.003 | 0.696 ± 0.001 | 0.988 ± 0.001 | 0.943 ± 0.001 |
+ | MatchMaker-DR (WL k=3) | Proposed | Not applicable | 0.783 ± 0.004 | 0.714 ± 0.003 | **0.992 ± 0.000** | 0.930 ± 0.001 |
+ | MatchMaker-DR (SP) | Proposed | Not applicable | 0.784 ± 0.002 | 0.714 ± 0.004 | 0.991 ± 0.001 | 0.928 ± 0.002 |
223
+
224
+ § 4 EXPERIMENTAL SETUP
225
+
226
+ We empirically validate the usefulness of the distributed drug representations in downstream drug pair scoring tasks. We consider 4 datasets from the domains of drug synergy prediction, polypharmacy prediction, and drug interaction prediction to evaluate our augmented models, which we previously outlined in Table 1. Five seeded random 0.5/0.5 train and test set splits were made, and the average AUROC over the hold-out test set is reported with standard deviations in Table 2.
227
+
228
+ For the distributed representations of the graphs we set the desired dimensionality at $d = {64}$ and the Skipgram model was trained for 1000 epochs. These hyperparameter values were chosen arbitrarily to simplify the following comparative analysis, however we explore their effects on downstream performance in an ablation study in Appendix A.
229
+
230
+ To obtain the non-DR drug-level features as used in DeepSynergy and MatchMaker we retrieved the canonical SMILES strings [13] for each of the drugs in the labeled drug-pair context triples. 256-dimensional Morgan fingerprints [4] were computed for each drug with a radius of 2. Molecular graphs for entry into models with GNNs were generated using TorchDrug (and the underlying RDKit utilities) from the SMILES strings for each drug.
231
+
232
+ We utilised the default hyperparameters for each of the drug pair scoring models as in [47] which are summarised in Table 3 of appendix B. Augmentation of the models affects the input shapes of the drug encoders or the final decoder by the chosen dimensionality of the distributed representations, but does not affect any other original model hyperparameters.
233
+
234
+ Optimisation hyperparameters for training of the models were all kept the same. All drug pair scoring models were trained using an Adam optimiser [48] for 250 epochs with a batch size of 8192 observations, an initial learning rate of ${10}^{-2}$ , ${\beta }_{1} = 0.9$ , ${\beta }_{2} = 0.99$ , $\epsilon = {10}^{-7}$ , and a weight decay of ${10}^{-5}$ . A dropout rate of 0.5 was applied for regularisation.
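For concreteness, these settings correspond to the following PyTorch configuration (a sketch with a stand-in module in place of an actual pair scoring model):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                      nn.Dropout(p=0.5),       # dropout rate used for regularisation
                      nn.Linear(128, 1))       # stand-in for any pair scoring model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2, betas=(0.9, 0.99),
                             eps=1e-7, weight_decay=1e-5)
```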
235
+
236
+ Naturally in addition to these details we make all of our code containing all implementations and scripts for evaluation available in the supplementary materials for reproducibility.
237
+
238
+ § 5 RESULTS AND DISCUSSION
239
+
240
+ Looking at our main results in Table 2 we can make 3 main observations. First, looking at the original methods, we can see that methods using precomputed drug features and contextual features instead of graph neural networks, such as DeepSynergy and MatchMaker, perform better across drug pair scoring tasks. Combined with the fact that they train and evaluate much faster than methods using graph neural networks, it is generally advisable to use these models in the first instance, validating the results in [47]. DeepDDS is the best performing model utilising a graph neural network. It is worth noting that it utilises contextual features like DeepSynergy and MatchMaker, unlike EPGCN-DS and DeepDrug. Secondly, looking at the DROnly model that serves as the sanity check for our embeddings, we can see that it is significantly better than a random model. This indicates the usefulness of the structural affinities and distributive inductive biases within the drug representations for the drug pair scoring tasks. Thirdly, we can see that the incorporation of the distributed representations generally increases the performance of the models. In particular, we observe that the best performances for 3 out of 4 tasks are achieved by models incorporating our embeddings, with the final one being a tie (within rounding error at 3 decimal points) between DeepDDS and its DR-incorporating equivalent DeepDDS-DR (WL k=3) on TwoSides.
241
+
242
+ The horizontal analysis of the drug pair scoring models highlights that the significantly more expensive graph neural network based models generally perform worse than simpler models employing precomputed drug and context features with MLPs. This is in spite of the graph neural network modules also having access to additional atom features on the molecular graphs as computed in TorchDrug. These include features such as the one-hot embedding of the atomic chiral tag, whether the atom participates in a ring, whether it is aromatic, and the number of radical electrons on the atom. Hence, despite the wealth of additional information inside the provided molecular graph, we surmise that the primary bottleneck for the drug level representations arises from the comparatively simple permutation invariant operators used to pool the node representations, such as the global mean operator used in EPGCN-DS. There is an inevitable and large amount of information loss in the attempt to summarise a variable number of higher level smooth node representations coming out of GNNs into a single vector of the same size, without any trainable parameters. We may partially attribute the additional performance boosts brought in by the distributed representations to the more refined algorithm for constructing the graph level representations, despite the input molecular graph only detailing the atom types and no additional node features. We can also attribute the performance boosts to the usefulness of substructure affinities for the drug pair scoring tasks, as indicated by the DROnly performances across the tasks.
243
+
244
+ The learning of the distributed representations comes with two hyperparameters which may affect downstream performance when incorporated into the drug pair scoring models. These user-specified hyperparameters are: (i) the dimensionality of the drug embeddings and (ii) the number of epochs for which the Skipgram model is trained. We give full details of an ablation study on how varying these hyperparameters affects downstream performance, along with the experimental setup, in Appendix A. To summarise the main points, the downstream performance caused by varying the desired dimensionality initially rises and then falls, as expected due to the information bottleneck in very small dimensions and the curse of dimensionality in higher dimensions. For varying the training epochs we find a slight but statistically significant positive correlation with performance as the number of epochs increases in two out of four datasets. However, in both cases there is little variation ($\pm 0.02$ AUROC in both ablation studies over the ranges studied) in the final performance of the downstream models given the hyperparameter choices, except on the extreme ends of the studied ranges. This indicates the stable nature of the output embeddings and their usefulness in downstream tasks. As such we can generally recommend low dimensional embeddings on par with any other drug features being utilised and a high number of training epochs to obtain good performance. As mentioned previously, more details can be found in Appendix A.
245
+
246
+ § 6 CONCLUSION
247
+
248
+ We have answered our two research questions posed in the introduction. We presented a methodology for learning and incorporating distributed representations of graphs into machine learning pipelines for drug pair scoring, answering the first question on how we may integrate distributed representations. We assessed the usefulness of the distributed representations of drugs in two parts. In the first part we showed that a model only using the learned drug embeddings performs significantly better than random, suggesting the usefulness of the substructure pattern affinities between drugs in drug pair scoring. Subsequently, for the second part, we augmented recent and state-of-the-art models from synergy, polypharmacy, and drug interaction type prediction to utilise our distributed representations. Horizontal evaluation of these models shows that the incorporation of the distributed representations improves performance across different tasks and datasets.
papers/LOG/LOG 2022/LOG 2022 Conference/IXvfIex0mX6f/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,742 @@
1
+ # DiffWire: Inductive Graph Rewiring via the Lovász Bound
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Graph Neural Networks (GNNs) have been shown to achieve competitive results to tackle graph-related tasks, such as node and graph classification, link prediction and node and graph clustering in a variety of domains. Most GNNs use a message passing framework and hence are called MPNNs. Despite their promising results, MPNNs have been reported to suffer from over-smoothing, over-squashing and under-reaching. Graph rewiring and graph pooling have been proposed in the literature as solutions to address these limitations. However, most state-of-the-art graph rewiring methods fail to preserve the global topology of the graph, are neither differentiable nor inductive, and require the tuning of hyper-parameters. In this paper, we propose DIFFWIRE, a novel framework for graph rewiring in MPNNs that is principled, fully differentiable and parameter-free by leveraging the Lovász bound. Our approach provides a unified theory for graph rewiring by proposing two new, complementary layers in MPNNs: CT-LAYER, a layer that learns the commute times and uses them as a relevance function for edge re-weighting; and GAP-LAYER, a layer to optimize the spectral gap, depending on the nature of the network and the task at hand. We empirically validate the value of each of these layers separately with benchmark datasets for graph classification. DIFFWIRE brings together the learnability of commute times to related definitions of curvature, opening the door to creating more expressive MPNNs.
12
+
13
+ ## 1 Introduction
14
+
15
+ Graph Neural Networks (GNNs) [1, 2] are a class of deep learning models applied to graph structured data. They have been shown to achieve state-of-the-art results in many graph-related tasks, such as node and graph classification [3, 4], link prediction [5] and node and graph clustering [6, 7], and in a variety of domains, including image or molecular structure classification, recommender systems and social influence prediction [8].
16
+
17
+ Most GNNs use a message passing framework and thus are referred to as Message Passing Neural Networks (MPNNs) [4] . In these networks, every node in each layer receives a message from its adjacent neighbors. All the incoming messages at each node are then aggregated and used to update the node's representation via a learnable non-linear function -which is typically implemented by means of a neural network. The final node representations (called node embeddings) are used to perform the graph-related task at hand (e.g. graph classification). MPNNs are extensible, simple and have proven to yield competitive empirical results. Examples of MPNNs include GCN [3], GAT [9], GATv2 [10], GIN [11] and GraphSAGE [12]. However, they typically use transductive learning, i.e. the model observes both the training and testing data during the training phase, which might limit their applicability to graph classification tasks.
18
+
19
+ MPNNs have important limitations due to the inherent complexity of graphs, the limited depth of most state-of-the-art MPNNs and the inability of current methods to capture the global structural information of the graph. The literature has reported best results when MPNNs have a small number of layers, because networks with many layers tend to suffer from over-smoothing [13] and over-squashing [14]. Over-smoothing takes place when the embeddings of nodes that belong to different classes become indistinguishable. Over-squashing refers to the distortion of information flowing
20
+
21
+ from distant nodes due to graph bottlenecks ${}^{1}$ that emerge when the number of $\mathrm{k}$ -hop neighbors grows exponentially with k. They both tend to occur in networks with a large number of layers [15]. Moreover, simple MPNNs with a small number of layers fail to capture information that depends on the entire structure of the graph (e.g., random walk probabilities [16]) and prevent the information flow from reaching distant nodes. This phenomenon is called under-reaching [17] and occurs when the MPNN's depth is smaller than the graph's diameter.
22
+
23
+ Graph pooling and graph rewiring have been proposed in the literature as solutions to address these limitations [14]. Given that the main infrastructure for message passing in MPNNs are the edges in the graph, and given that many of these edges might be noisy or inadequate for the downstream task [18], graph rewiring aims to identify such edges and edit them.
24
+
25
+ Many graph rewiring methods rely on edge sampling strategies: first, the edges are assigned new weights according to a relevance function and then they are re-sampled according to the new weights to retain the most relevant edges (i.e. those with larger weights). Edge relevance might be computed in different ways, including randomly [19], based on similarity [20] or on the edge's curvature [21].
26
+
27
+ Due to the diversity of possible graphs and tasks to be performed with those graphs, optimal graph rewiring should include a variety of strategies that are suited not only to the task at hand but also to the nature and structure of the graph.
28
+
29
+ Motivation. State-of-the-art edge sampling strategies have three significant limitations. First, most of the proposed methods fail to preserve the global topology of the graph. Second, most graph rewiring methods are neither differentiable nor inductive [21]. Third, relevance functions that depend on a diffusion measure (typically in the spectral domain) are not parameter-free, which adds a layer of complexity in the models. In this paper, we address these three limitations.
30
+
31
+ Contributions and outline. The main contribution of our work is to propose a theoretical framework called DIFFWIRE for graph rewiring in MPNNs that is principled, fully differentiable, inductive, and parameter-free by leveraging the Lovász bound [16] given by Eq. 1. This bound is a mathematical expression of the relationship between the commute times (effective resistance distance) and the network's spectral gap. Inductive means that given an unseen test graph, DIFFWIRE predicts the optimal graph structure for the task at hand without any parameter tuning. Given the recently reported connection between commute times and curvature [22], and between curvature and the spectral gap [21], our framework provides a unified theory linking these concepts. Our aim is to leverage diffusion and curvature theories to propose a new approach for graph rewiring that preserves the graph's structure.
32
+
33
+ We first propose using the commute times as a relevance function for edge re-weighting. Moreover, we develop a differentiable, parameter-free layer in the GNN (CT-LAYER) to learn the commute times. Second, we propose an alternative graph rewiring approach by adding a layer in the network (GAP-LAYER) that optimizes the spectral gap according to the nature of the network and the task at hand. Finally, we empirically validate the proposed layers with state-of-the-art benchmark datasets in a graph classification task. We select a graph classification task to emphasize the inductive nature of DIFFWIRE: the layers in the GNN (CT-LAYER and GAP-LAYER) are trained to predict the CTs embedding and minimize the spectral gap for unseen graphs, respectively. This approach gives a great advantage when compared to SoTA methods that require optimizing the parameters of the models for each graph. CT-LAYER and GAP-LAYER learn the weights during training to predict the optimal changes in the topology of any unseen graph in test time.
34
+
35
+ The paper is organized as follows: Section 2 provides a summary of the most relevant related literature. Our core technical contribution is described in Section 3, followed by our experimental evaluation and discussion in Section 4. Finally, Section 5 is devoted to conclusions and an outline of our future lines of research.
36
+
37
+ ## 2 Related Work
38
+
39
+ In this section we provide an overview of the most relevant works that have been proposed in the literature to tackle the challenges of over-smoothing, over-squashing and under-reaching in MPNNs by means of graph rewiring and pooling.
40
+
41
+ ---
42
+
43
+ ${}^{1}$ A graph bottleneck is defined as a topological property of the graph that leads to over-squashing.
44
+
45
+ ---
46
+
47
+ Limitations of MPNNs. MPNNs are widely used to tackle many real-world tasks -from social network analysis to protein modeling- with competitive results. However, MPNNs also have important limitations due to the inherent complexity of graphs. Despite such complexity, the literature has reported best results when MPNNs have a small number of layers, because networks with many layers tend to suffer from over-smoothing [13] and over-squashing [14].
48
+
49
+ Over-smoothing $\left\lbrack {8,{15},{23},{24}}\right\rbrack$ takes place when the embeddings of nodes that belong to different classes become indistinguishable. It tends to occur in MPNNs with many layers that are used to tackle short-range tasks, i.e. tasks where a node's correct prediction mostly depends on its local neighborhood. Given this local dependency, it makes intuitive sense that adding layers to the network would not help the network's performance.
50
+
51
+ Conversely, long-range tasks require as many layers in the network as the range of the interaction between the nodes. However, as the number of layers in the network increases, the number of nodes feeding into each of the node's receptive field also increases exponentially, leading to over-squashing $\left\lbrack {{14},{21}}\right\rbrack$ : the information flowing from the receptive field composed of many nodes is compressed in fixed-length node vectors, and hence the graph fails to correctly propagate the messages coming from distant nodes. Thus, over-squashing emerges when there is a bottleneck in the graph and a long-range task.
52
+
53
+ To prevent over-smoothing and over-squashing, the number of layers in MPNNs is typically kept small. However, simple models with a small number of layers fail to capture information that depends on the entire structure of the graph (e.g., random walk probabilities [16]) and prevent the information flow from reaching distant nodes. This phenomenon is called under-reaching [17] and occurs when the MPNN's depth is smaller than the graph's diameter.
54
+
55
+ Graph rewiring in MPNNs. Rewiring is a process of changing the graph's structure to control the information flow and hence improve the ability of the network to perform the task at hand (e.g. node or graph classification, link prediction...). Several approaches have been proposed in the literature for graph rewiring, such as connectivity diffusion [25] or evolution [21], adding new bridge-nodes [26] and multi-hop filters [27], and neighborhood [12], node [28] and edge [29] sampling.
56
+
57
+ Edge sampling methods sample the graph's edges based on their weights or relevance, which might be computed in different ways. Huang et al. [19] prove that randomly dropping edges during training improves the performance of GNNs. Klicpera et al. [25] define edge relevance according to the coefficients of a parameterized diffusion process over the graph. Then, the k-hop diffusion matrix is truncated to discard long-range interactions. For Kazi et al. [20], edge relevance is given by the similarity between the nodes' attributes. In addition, a reinforcement learning process rewards edges leading to a correct classification and penalizes the rest.
58
+
59
+ Edge sampling-based rewiring has been proposed to tackle over-smoothing and over-squashing in MPNNs. Over-smoothing may be relieved by removing inter-class edges [30]. However, this strategy is only valid when the graph is homophilic, i.e. connected nodes tend to share similar attributes. Otherwise, removing these edges could lead to over-squashing [21] if their removal obstructs the message passing between distant nodes belonging to the same class (heterophily). Increasing the size of the bottlenecks of the graph via rewiring has been shown to improve node classification performance in heterophilic graphs, but not in homophilic graphs [21]. Recently, Topping et al. [21] propose an edge relevance function given by the edge curvature to mitigate over-squashing. They identify the bottleneck of the graph by computing the Ricci curvature of the edges. Next, they remove edges with high curvature and add edges around minimal curvature edges.
60
+
61
+ Graph Structure Learning (GSL). GSL methods [31] aim to learn an optimized graph structure and its corresponding representations at the same time. DIFFWIRE could be seen from the perspective of GSL: CT-LAYER, as a metric-based, neural approach, and GAP-LAYER, as a direct-neural approach to optimize the structure of the graph to the task at hand.
62
+
63
+ Pooling in MPNNs. In addition to graph rewiring, pooling layers simplify the original graph by compressing it into a smaller graph or a vector via pooling operators, which range from simple [32] to more sophisticated approaches, such as DiffPool [33] and MinCut pool [34]. Although graph pooling methods do not consider the edge representations, there is a clear relationship between pooling methods and rewiring since both of them try to reduce the flow of information through the graph's bottleneck.
64
+
65
+ ![01963ef6-344f-7649-932d-264024e54631_3_421_201_934_340_0.jpg](images/01963ef6-344f-7649-932d-264024e54631_3_421_201_934_340_0.jpg)
66
+
67
+ Figure 1: DiffWire. Left: Original graph. Center: Rewired graph after CT-LAYER. Right: Rewired graph after GAP-LAYER. Colors indicate the strength of the edges.
68
+
69
+ ## 3 Proposed Approach: DIFFWIRE for Inductive Graph Rewiring
70
+
71
+ DIFFWIRE provides a unified theory for graph rewiring by proposing two new, complementary layers in MPNNs: first, CT-LAYER, a layer that learns the commute times and uses them as a relevance function for edge re-weighting; and second, GAP-LAYER, a layer to optimize the spectral gap, depending on the nature of the network and the task at hand.
72
+
73
+ In this section, we present the theoretical foundations for the definitions of CT-LAYER and GAP-LAYER. First, we introduce the bound that our approach is based on: The Lovász bound. Table 2 in A. 1 summarizes the notation used in the paper.
74
+
75
+ ### 3.1 The Lovász Bound
76
+
77
+ The Lovász bound, given by Eq. 1, was derived by Lovász in [16] as a means of linking the spectrum governing a random walk in an undirected graph $G = \left( {V, E}\right)$ with the hitting time ${H}_{uv}$ between any two nodes $u$ and $v$ of the graph. ${H}_{uv}$ is the expected number of steps needed to reach (or hit) $v$ from $u$ ; ${H}_{vu}$ is defined similarly. The sum of both hitting times between the two nodes, $v$ and $u$ , is the commute time $C{T}_{uv} = {H}_{uv} + {H}_{vu}$ . Thus, $C{T}_{uv}$ is the expected number of steps needed to hit $v$ from $u$ and go back to $u$ . According to the Lovász bound:
78
+
79
+ $$
80
+ \left| {\frac{1}{\operatorname{vol}\left( G\right) }C{T}_{uv} - \left( {\frac{1}{{d}_{u}} + \frac{1}{{d}_{v}}}\right) }\right| \leq \frac{1}{{\lambda }_{2}^{\prime }}\frac{2}{{d}_{\min }} \tag{1}
81
+ $$
82
+
83
+ where ${\lambda }_{2}^{\prime } \geq 0$ is the spectral gap, i.e. the first non-zero eigenvalue of $\mathcal{L} = \mathbf{I} - {\mathbf{D}}^{-1/2}{\mathbf{{AD}}}^{-1/2}$ (normalized Laplacian [35], where $\mathbf{D}$ is the degree matrix and $\mathbf{A}$ , the adjacency matrix); $\operatorname{vol}\left( G\right)$ is the volume of the graph (sum of degrees); ${d}_{u}$ and ${d}_{v}$ are the degrees of nodes $u$ and $v$ , respectively; and ${d}_{min}$ is the minimum degree of the graph.
84
+
85
+ The term $C{T}_{uv}/\operatorname{vol}\left( G\right)$ in Eq. 1 is referred to as the effective resistance, ${R}_{uv}$ , between nodes $u$ and $v$ . The bound states that the effective resistance between two nodes in the graph converges to or diverges from $\left( {1/{d}_{u} + 1/{d}_{v}}\right)$ , depending on whether the graph’s spectral gap diverges from or tends to zero. The larger the spectral gap, the closer $C{T}_{uv}/\operatorname{vol}\left( G\right)$ will be to $\frac{1}{{d}_{u}} + \frac{1}{{d}_{v}}$ and hence the less informative the commute times will be.
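As an illustrative sanity check (not part of DIFFWIRE itself), the two sides of Eq. 1 can be computed directly for a small graph with NumPy and NetworkX:

```python
import networkx as nx
import numpy as np

G = nx.barbell_graph(5, 2)                    # two cliques joined by a path: a clear bottleneck
A = nx.to_numpy_array(G)
d = A.sum(axis=1)
vol = d.sum()
L = np.diag(d) - A                            # unnormalized Laplacian
L_norm = np.eye(len(d)) - np.diag(d ** -0.5) @ A @ np.diag(d ** -0.5)
gap = np.sort(np.linalg.eigvalsh(L_norm))[1]  # spectral gap lambda_2'
L_pinv = np.linalg.pinv(L)

u, v = 0, len(d) - 1
R_uv = L_pinv[u, u] + L_pinv[v, v] - 2 * L_pinv[u, v]   # effective resistance CT_uv / vol(G)
lhs = abs(R_uv - (1 / d[u] + 1 / d[v]))
rhs = (1 / gap) * (2 / d.min())
assert lhs <= rhs                              # the Lovász bound holds
```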
86
+
87
+ We propose two novel MPNNs layers based on each side of the inequality in Eq. 1: CT-LAYER, focuses on the left-hand side, and GAP-LAYER, on the right-hand side. The use of each layer depends on the nature of the network and the task at hand. In a graph classification task (our focus), CT-LAYER is expected to yield good results when the graph's spectral gap is small; conversely, GAP-LAYER would be the layer of choice in graphs with large spectral gap.
88
+
89
+ The Lovász bound was later refined by von Luxburg et al. [36]. App. A.2.2 presents this bound along with its relationship with ${R}_{uv}$ as a global measure of node similarity. Once we have defined both sides of the Lovász bound, we proceed to describe their implications for graph rewiring.
90
+
91
+ ### 3.2 CT-LAYER: Commute Times for Graph Rewiring
92
+
93
+ We focus first on the left-hand side of the Lovász bound which concerns the effective resistances $C{T}_{uv}/\operatorname{vol}\left( G\right) = {R}_{uv}$ (or commute times) ${}^{2}$ between any two nodes in the graph.
94
+
95
+ Spectral Sparsification leads to Commute Times. Graph sparsification in undirected graphs may be formulated as finding a graph $H = \left( {V,{E}^{\prime }}\right)$ that is spectrally similar to the original graph $G = \left( {V, E}\right)$ with ${E}^{\prime } \subset E$ . Thus, the spectra of their Laplacians, ${\mathbf{L}}_{G}$ and ${\mathbf{L}}_{H}$ should be similar.
96
+
97
+ Theorem 1 (Spielman and Srivastava [37]). Let Sparsify $\left( {G, q}\right) \rightarrow {G}^{\prime }$ be a sampling algorithm of graph $G = \left( {V, E}\right)$ , where edges $e \in E$ are sampled with probability $q \propto {R}_{e}$ (proportional to the effective resistance). For $n = \left| V\right|$ sufficiently large and $1/\sqrt{n} < \epsilon \leq 1$ , $O\left( {n\log n/{\epsilon }^{2}}\right)$ samples are needed to satisfy $\forall \mathbf{x} \in {\mathbb{R}}^{n} : \left( {1 - \epsilon }\right) {\mathbf{x}}^{T}{\mathbf{L}}_{G}\mathbf{x} \leq {\mathbf{x}}^{T}{\mathbf{L}}_{{G}^{\prime }}\mathbf{x} \leq \left( {1 + \epsilon }\right) {\mathbf{x}}^{T}{\mathbf{L}}_{G}\mathbf{x}$ , with probability $\geq 1/2$ .
98
+
99
+ The above theorem has a simple explanation in terms of Dirichlet energies. The Laplacian $\mathbf{L} = \mathbf{D} - \mathbf{A} \succcurlyeq 0$ , i.e. it is positive semi-definite (all its eigenvalues are non-negative). Then, if we consider $\mathbf{x} : V \rightarrow \mathbb{R}$ as a real-valued function of the $n$ nodes of $G = \left( {V, E}\right)$ , we have that $\mathcal{E}\left( \mathbf{x}\right) \mathrel{\text{:=}} {\mathbf{x}}^{T}{\mathbf{L}}_{G}\mathbf{x} = \mathop{\sum }\limits_{{e = \left( {u, v}\right) \in E}}{\left( {\mathbf{x}}_{u} - {\mathbf{x}}_{v}\right) }^{2} \geq 0$ for any $\mathbf{x}$ . In particular, the eigenvectors $\mathbf{f} \mathrel{\text{:=}} \left\{ {{\mathbf{x}}_{i} : {\mathbf{{Lf}}}_{i} = {\lambda }_{i}{\mathbf{x}}_{i}}\right\}$ are the set of special functions (mutually orthogonal and normalized) that minimize the energies $\mathcal{E}\left( {\mathbf{f}}_{i}\right)$ , i.e. they are the orthogonal functions with the minimal variabilities achievable by the topology of $G$ . Therefore, Theorem 1 states that any minimal variability of ${G}^{\prime }$ is bounded by $\left( {1 \pm \epsilon }\right)$ times that of $G$ if we sample enough edges with probability $q \propto {R}_{e}$ .
100
+
101
+ Therefore, the effective resistance is a principled relevance function, since the resulting graph ${G}^{\prime }$ retains the main properties of $G$ . In particular, we have that the spectra of ${\mathbf{L}}_{G}$ and ${\mathbf{L}}_{{G}^{\prime }}$ are related by $\left( {1 - \epsilon }\right) {\lambda }_{i}^{G} \leq {\lambda }_{i}^{{G}^{\prime }} \leq \left( {1 + \epsilon }\right) {\lambda }_{i}^{G}$ : in short $\left( {1 - \epsilon }\right) {\mathbf{L}}_{G} \preccurlyeq {\mathbf{L}}_{{G}^{\prime }} \preccurlyeq \left( {1 + \epsilon }\right) {\mathbf{L}}_{G}$ . This is a direct result of the theorem since ${\lambda }_{i} = \frac{\mathcal{E}\left( {\mathbf{f}}_{i}\right) }{{\mathbf{f}}_{i}^{T}{\mathbf{f}}_{i}}$ are the normalized minimal variabilities.
102
+
103
+ This first result implies that edge sampling based on effective resistances (or commute times) is a principled way to rewire a graph while preserving its original structure. Next, we present what is a commute times embedding and how it can be spectrally computed.
104
+
105
+ Commute Times Embedding. The choice of effective resistances in Theorem 1 is explained by the fact that ${R}_{uv}$ can be computed from ${R}_{uv} = {\left( {\mathbf{e}}_{u} - {\mathbf{e}}_{v}\right) }^{T}{\mathbf{L}}^{ + }\left( {{\mathbf{e}}_{u} - {\mathbf{e}}_{v}}\right)$ , where ${\mathbf{e}}_{u}$ is the unit vector with a unit value at $u$ and zero elsewhere. ${\mathbf{L}}^{ + } = \mathop{\sum }\limits_{{i \geq 2}}{\lambda }_{i}^{-1}{\mathbf{f}}_{i}{\mathbf{f}}_{i}^{T}$ , where ${\mathbf{f}}_{i},{\lambda }_{i}$ are the eigenvectors and eigenvalues of $\mathbf{L}$ , is the pseudo-inverse or Green’s function of $G = \left( {V, E}\right)$ if it is connected, and from the theorem we also have ${\left( 1 + \epsilon \right) }^{-1}{\mathbf{L}}_{G}^{ + } \preccurlyeq {\mathbf{L}}_{{G}^{\prime }}^{ + } \preccurlyeq {\left( 1 - \epsilon \right) }^{-1}{\mathbf{L}}_{G}^{ + }$ .
106
+
107
+ The Green’s function leads us to envision ${R}_{uv}$ (and therefore $C{T}_{uv}$ ) as metrics relating pairs of nodes of $G$ . For instance ${\mathbf{R}}_{uv} = {\mathbf{L}}_{uu}^{ + } + {\mathbf{L}}_{vv}^{ + } - 2{\mathbf{L}}_{uv}^{ + }$ is the resistance distance [38], i.e., as noted by Qiu and Hancock [39], the elements ${\mathbf{L}}_{uv}^{ + }$ encode dot products between the embeddings ${\mathbf{z}}_{u}$ and ${\mathbf{z}}_{v}$ of $u$ and $v$ . As a result, the latent space can not only be described spectrally but also in a parameter-free manner, which is not the case for other spectral embeddings, such as heat kernel or diffusion maps, as they rely on a time parameter $t$ . More precisely, the embedding matrix $\mathbf{Z}$ whose columns contain the nodes’ embeddings is given by:
108
+
109
+ $$
110
+ \mathbf{Z} \mathrel{\text{:=}} \sqrt{\operatorname{vol}\left( G\right) }{\Lambda }^{-1/2}{\mathbf{F}}^{T} = \sqrt{\operatorname{vol}\left( G\right) }{\Lambda }^{\prime - 1/2}{\mathbf{G}}^{T}{\mathbf{D}}^{-1/2} \tag{2}
111
+ $$
112
+
113
+ where $\Lambda$ is the diagonal matrix of the unnormalized Laplacian $\mathbf{L}$ eigenvalues and $\mathbf{F}$ is the matrix of their associated eigenvectors. Similarly, ${\Lambda }^{\prime }$ contains the eigenvalues of the normalized Laplacian $\mathcal{L}$ and $\mathbf{G}$ the eigenvectors. We have $\mathbf{F} = {\mathbf{{GD}}}^{-1/2}$ or ${\mathbf{f}}_{i} = {\mathbf{g}}_{i}{\mathbf{D}}^{-1/2}$ , where $\mathbf{D}$ is the degree matrix.
114
+
115
+ Finally, the commute times are given by the squared Euclidean distances between the embeddings, $C{T}_{uv} = {\begin{Vmatrix}{\mathbf{z}}_{u} - {\mathbf{z}}_{v}\end{Vmatrix}}^{2}$ . Their spectral form is
116
+
117
+ $$
118
+ {R}_{uv} = \frac{C{T}_{uv}}{\operatorname{vol}\left( G\right) } = \mathop{\sum }\limits_{{i = 2}}^{n}\frac{1}{{\lambda }_{i}}{\left( {\mathbf{f}}_{i}\left( u\right) - {\mathbf{f}}_{i}\left( v\right) \right) }^{2} = \mathop{\sum }\limits_{{i = 2}}^{n}\frac{1}{{\lambda }_{i}^{\prime }}{\left( \frac{{\mathbf{g}}_{i}\left( u\right) }{\sqrt{{d}_{u}}} - \frac{{\mathbf{g}}_{i}\left( v\right) }{\sqrt{{d}_{v}}}\right) }^{2} \tag{3}
119
+ $$
120
+
121
+ Note how in Eq. 3 the commute times rely on the Fiedler vector ${\mathbf{f}}_{2}$ (or ${\mathbf{g}}_{2}$ ) downscaled by the spectral gap ${\lambda }_{2}$ (or more formally ${\lambda }_{2}^{\prime }$ ). The downscaled Fiedler vector dominates the expansion because the Fiedler vector is the solution to the relaxed ratio-cut problem. This is consistent with the fact that $p$ -resistances become the inverse of mincut when $p \rightarrow \infty$ .
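The equivalence between the embedding of Eq. 2 and the spectral form of Eq. 3 can be verified numerically with a short sketch (illustrative only, assuming a connected graph):

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()                 # small connected graph
A = nx.to_numpy_array(G)
d = A.sum(axis=1)
vol = d.sum()
L = np.diag(d) - A

lam, F = np.linalg.eigh(L)                 # ascending eigenvalues, columns are eigenvectors
lam, F = lam[1:], F[:, 1:]                 # drop the trivial zero eigenvalue
Z = np.sqrt(vol) * np.diag(lam ** -0.5) @ F.T          # columns are node embeddings z_u (Eq. 2)

u, v = 0, 5
ct_embed = np.sum((Z[:, u] - Z[:, v]) ** 2)            # ||z_u - z_v||^2
Lp = np.linalg.pinv(L)
ct_green = vol * (Lp[u, u] + Lp[v, v] - 2 * Lp[u, v])  # vol(G) * R_uv via the Green's function
assert np.isclose(ct_embed, ct_green)
```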
122
+
123
+ ---
124
+
125
+ ${}^{2}$ We use commute times and effective resistances interchangeably as per their use in the literature
126
+
127
+ ---
128
+
129
+ Commute Times as an Optimization Problem. In this section, we demonstrate how the CTs may be computed as an optimization problem by means of a differentiable layer in a GNN. Constraining neighboring nodes to have a similar embedding leads to
130
+
131
+ $$
132
+ \mathbf{Z} = \arg \mathop{\min }\limits_{{{\mathbf{Z}}^{T}\mathbf{Z} = \mathbf{I}}}\frac{\mathop{\sum }\limits_{{u, v}}{\begin{Vmatrix}{\mathbf{z}}_{u} - {\mathbf{z}}_{v}\end{Vmatrix}}^{2}{\mathbf{A}}_{uv}}{\mathop{\sum }\limits_{{u, v}}{\mathbf{Z}}_{uv}^{2}{d}_{u}} = \frac{\mathop{\sum }\limits_{{\left( {u, v}\right) \in E}}{\begin{Vmatrix}{\mathbf{z}}_{u} - {\mathbf{z}}_{v}\end{Vmatrix}}^{2}}{\mathop{\sum }\limits_{{u, v}}{\mathbf{Z}}_{uv}^{2}{d}_{u}} = \frac{\operatorname{Tr}\left\lbrack {{\mathbf{Z}}^{T}\mathbf{L}\mathbf{Z}}\right\rbrack }{\operatorname{Tr}\left\lbrack {{\mathbf{Z}}^{T}\mathbf{D}\mathbf{Z}}\right\rbrack }, \tag{4}
133
+ $$
134
+
135
+ which reveals that the CT embeddings result from a Laplacian regularization down-weighted by the degree. As a result, frontier nodes or hubs (i.e., nodes with inter-community edges), which tend to have larger degrees than those lying inside their respective communities, will be embedded far away from their neighbors, increasing the distance between communities. Note that the above quotient-of-traces formulation is easily differentiable and differs from $\operatorname{Tr}\left\lbrack \frac{{\mathbf{Z}}^{T}\mathbf{{LZ}}}{{\mathbf{Z}}^{T}\mathbf{{DZ}}}\right\rbrack$ proposed in [39].
136
+
137
+ With the above elements we define CT-LAYER, the first rewiring layer proposed in this paper. See Figure 2 for a graphical representation of the layer.
138
+
139
+ Definition 1 (CT-Layer). Given the matrix ${\mathbf{X}}_{n \times F}$ encoding the features of the nodes after any message passing (MP) layer, ${\mathbf{Z}}_{n \times O\left( n\right) } = \tanh \left( {{MLP}\left( \mathbf{X}\right) }\right)$ learns the association $\mathbf{X} \rightarrow \mathbf{Z}$ while $\mathbf{Z}$ is optimized according to the loss ${L}_{CT} = \frac{\operatorname{Tr}\left\lbrack {{\mathbf{Z}}^{T}\mathbf{L}\mathbf{Z}}\right\rbrack }{\operatorname{Tr}\left\lbrack {{\mathbf{Z}}^{T}\mathbf{D}\mathbf{Z}}\right\rbrack } + {\begin{Vmatrix}\frac{{\mathbf{Z}}^{T}\mathbf{Z}}{{\begin{Vmatrix}{\mathbf{Z}}^{T}\mathbf{Z}\end{Vmatrix}}_{F}} - {\mathbf{I}}_{n}\end{Vmatrix}}_{F}$ . This results in the following resistance diffusion ${\mathbf{T}}^{CT} = \mathbf{R}\left( \mathbf{Z}\right) \odot \mathbf{A}$ , i.e. the Hadamard product between the resistance distance and the adjacency matrix, providing as input to the subsequent MP layer a learnt convolution matrix.
140
+
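+ A minimal dense-tensor sketch of Definition 1 in plain PyTorch is shown below (the actual model is implemented with PyTorch Geometric; the MLP sizes and the random inputs are illustrative assumptions). It computes ${L}_{CT}$ for $\mathbf{Z} = \tanh \left( {MLP}\left( \mathbf{X}\right) \right)$ and the resistance diffusion ${\mathbf{T}}^{CT} = \mathbf{R}\left( \mathbf{Z}\right) \odot \mathbf{A}$ passed to the next MP layer:
+
+ ```python
+ import torch
+
+ def ct_layer(X, A, mlp):
+     """Sketch of CT-LAYER: returns the rewired diffusion T_CT and the loss L_CT."""
+     d = A.sum(dim=1)
+     D = torch.diag(d)
+     L = D - A
+     Z = torch.tanh(mlp(X))                                 # n x k node embeddings
+
+     # L_CT = Tr[Z^T L Z] / Tr[Z^T D Z] + || Z^T Z / ||Z^T Z||_F - I ||_F
+     quotient = torch.trace(Z.T @ L @ Z) / torch.trace(Z.T @ D @ Z)
+     gram = Z.T @ Z
+     ortho = torch.norm(gram / torch.norm(gram) - torch.eye(Z.shape[1]))
+     loss_ct = quotient + ortho
+
+     # Pairwise resistances R(Z)_uv = ||z_u - z_v||^2 / vol(G), masked by A.
+     R = torch.cdist(Z, Z) ** 2 / d.sum()
+     T_ct = R * A                                           # Hadamard product with A
+     return T_ct, loss_ct
+
+ # Usage sketch on random inputs (all sizes are arbitrary assumptions).
+ n, F_in, k = 8, 4, 8
+ X = torch.randn(n, F_in)
+ A = (torch.rand(n, n) < 0.4).float()
+ A = torch.triu(A, 1); A = A + A.T                          # symmetric, no self-loops
+ mlp = torch.nn.Sequential(torch.nn.Linear(F_in, 16), torch.nn.ReLU(), torch.nn.Linear(16, k))
+ T_ct, loss_ct = ct_layer(X, A, mlp)
+ ```
+
+ During training, `loss_ct` would be added to the task loss so that $\mathbf{R}\left( \mathbf{Z}\right)$ converges towards the actual effective resistances; the snippet only shows the forward computation.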
141
+ Thus, CT-LAYER learns the CTs and rewires an input graph according to them: the edges with maximal resistance will tend to be the most important edges so as to preserve the topology of the graph.
142
+
143
+ ![01963ef6-344f-7649-932d-264024e54631_5_426_1060_944_280_0.jpg](images/01963ef6-344f-7649-932d-264024e54631_5_426_1060_944_280_0.jpg)
144
+
145
+ Figure 2: Detailed depiction of CT-LAYER
146
+
147
+ Below, we present the relationship between the CTs and the graph's bottleneck and curvature.
148
+
149
+ ${\mathbf{T}}^{CT}$ and Graph Bottlenecks. Beyond the principled sparsification of ${\mathbf{T}}^{CT}$ (enabled by Theorem 1), this layer rewires the graph $G = \left( {V, E}\right)$ in such a way that edges with maximal resistance will tend to be the most critical to preserve the topology of the graph. More precisely, although $\mathop{\sum }\limits_{{e \in E}}{R}_{e} = n - 1$ , the bulk of the resistance distribution will be located at graph bottlenecks, if they exist. Otherwise, their magnitude is upper-bounded and the distribution becomes more uniform.
150
+
151
+ Graph bottlenecks are controlled by the graph’s conductance or Cheeger constant, ${h}_{G} = \mathop{\min }\limits_{{S \subseteq V}}{h}_{S}$ , where: ${h}_{S} = \frac{\left| \partial S\right| }{\min \left( {\operatorname{vol}\left( S\right) ,\operatorname{vol}\left( \bar{S}\right) }\right) },\partial S = \{ e = \left( {u, v}\right) : u \in S, v \in \bar{S}\}$ and $\operatorname{vol}\left( S\right) = \mathop{\sum }\limits_{{u \in S}}{d}_{u}$ .
152
+
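+ For small graphs, ${h}_{G}$ can be evaluated exactly by enumerating all candidate cuts, as in the brute-force sketch below (purely illustrative and exponential in $n$ ; the toy graph is an assumption):
+
+ ```python
+ import itertools
+ import networkx as nx
+
+ def cheeger_constant(G):
+     """Brute-force h_G = min_S |boundary(S)| / min(vol(S), vol(complement of S))."""
+     nodes = list(G.nodes())
+     deg = dict(G.degree())
+     best = float("inf")
+     for r in range(1, len(nodes)):
+         for S in itertools.combinations(nodes, r):
+             S = set(S)
+             cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
+             vol_S = sum(deg[u] for u in S)
+             vol_rest = sum(deg[u] for u in nodes if u not in S)
+             best = min(best, cut / min(vol_S, vol_rest))
+     return best
+
+ G = nx.barbell_graph(4, 0)           # bottleneck = the single bridge edge
+ print(cheeger_constant(G))           # 1 / vol(one clique side) = 1/13
+ ```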
153
+ The interplay between the graph's conductance and effective resistances is given by:
154
+
155
+ Theorem 2 (Alev et al. [40]). Given a graph $G = \left( {V, E}\right)$ , a subset $S \subseteq V$ with $\operatorname{vol}\left( S\right) \leq \operatorname{vol}\left( G\right) /2$ ,
156
+
157
+ $$
158
+ {h}_{S} \geq \frac{c}{\operatorname{vol}{\left( S\right) }^{1/2 - \epsilon }} \Leftrightarrow \left| {\partial S}\right| \geq c \cdot \operatorname{vol}{\left( S\right) }^{1/2 - \epsilon }, \tag{5}
159
+ $$
160
+
161
+ for some constant $c$ and $\epsilon \in \left\lbrack {0,1/2}\right\rbrack$ . Then, ${R}_{uv} \leq \left( {\frac{1}{{d}_{u}^{2\epsilon }} + \frac{1}{{d}_{v}^{2\epsilon }}}\right) \cdot \frac{1}{\epsilon \cdot {c}^{2}}$ for any pair $u, v$ .
162
+
163
+ According to this theorem, the larger the graph’s bottleneck, the tighter the bound on ${R}_{uv}$ is. Moreover, $\max \left( {R}_{uv}\right) \leq 1/{h}_{S}^{2}$ , i.e., the maximal resistance is upper-bounded by the inverse of the squared bottleneck.
164
+
165
+ This bound partially explains the rewiring of the graph in Figure 1-center. As seen in the Figure, rewiring using CT-LAYER sparsifies the graph and assigns larger weights to the edges located in the graph's bottleneck. The interplay between the above theorem and Theorem 1 is described in App. A.1.
166
+
167
+ Recent work has proposed using curvature for graph rewiring. We outline below the relationship between CTs and curvature.
168
+
169
+ Effective Resistances and Curvature. Topping et al. [21] propose an approach for graph rewiring where the relevance function is given by the Ricci curvature. However, this measure is non-differentiable. More recent definitions of curvature [22] have been formulated based on resistance distances, which would be differentiable using our approach. The resistance curvature of an edge $e = \left( {u, v}\right)$ is ${\kappa }_{uv} \mathrel{\text{:=}} 2\left( {{p}_{u} + {p}_{v}}\right) /{R}_{uv}$ , where ${p}_{u} \mathrel{\text{:=}} 1 - \frac{1}{2}\mathop{\sum }\limits_{{v \sim u}}{R}_{uv}$ is the node’s curvature. Relevant properties of the edge resistance curvature are discussed in App. A.1.3, along with a related theorem proposed by Devriendt and Lambiotte [22].
170
+
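+ For illustration only (the toy graph is an assumption), the node and edge resistance curvatures defined above can be computed directly from the effective resistances:
+
+ ```python
+ import numpy as np
+ import networkx as nx
+
+ G = nx.barbell_graph(4, 0)                      # two 4-cliques joined by a bridge
+ A = nx.to_numpy_array(G)
+ Lp = np.linalg.pinv(np.diag(A.sum(1)) - A)
+
+ def resistance(u, v):
+     return Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
+
+ # Node curvature p_u = 1 - 1/2 * sum_{v ~ u} R_uv
+ p = {u: 1 - 0.5 * sum(resistance(u, v) for v in G.neighbors(u)) for u in G}
+
+ # Edge curvature kappa_uv = 2 (p_u + p_v) / R_uv
+ kappa = {(u, v): 2 * (p[u] + p[v]) / resistance(u, v) for u, v in G.edges()}
+
+ # The bridge edge has the lowest (negative) curvature: it is the bottleneck.
+ print(min(kappa, key=kappa.get))
+ ```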
171
+ ### 3.3 GAP-LAYER: Spectral Gap Optimization for Graph Rewiring
172
+
173
+ The right-hand side of the Lovász bound in Eq. 1 relies on the graph’s spectral gap ${\lambda }_{2}^{\prime }$ : the larger the spectral gap, the closer the commute times are to their non-informative regime. Note that the spectral gap is typically large in commonly observed graphs (such as communities in social networks, which may be bridged by many edges [41]) and, hence, in these cases it would be desirable to rewire the adjacency matrix $\mathbf{A}$ so that ${\lambda }_{2}^{\prime }$ is minimized.
174
+
175
+ In this section, we explain how to rewire the graph’s adjacency matrix $\mathbf{A}$ to minimize the spectral gap. We propose using the gradient of ${\lambda }_{2}$ wrt each component of $\widetilde{\mathbf{A}}$ . These gradients can be computed either using the unnormalized Laplacian ( $\mathbf{L}$ , with Fiedler value ${\lambda }_{2}$ ) or the normalized Laplacian ( $\mathcal{L}$ , with Fiedler value ${\lambda }_{2}^{\prime }$ ). We also present an approximation of the Fiedler vectors needed to compute those gradients, and propose computing them as a GNN layer called the GAP-LAYER. A detailed schematic of the GAP-LAYER is shown in Figure 3.
176
+
177
+ Ratio-cut (Rcut) Approximation. We propose to rewire the adjacency matrix, $\mathbf{A}$ , so that ${\lambda }_{2}$ is minimized. We consider a matrix $\widetilde{\mathbf{A}}$ close to $\mathbf{A}$ that satisfies $\widetilde{\mathbf{L}}{\mathbf{f}}_{2} = {\lambda }_{2}{\mathbf{f}}_{2}$ , where ${\mathbf{f}}_{2}$ is the solution to the ratio-cut relaxation [42]. Following [43], the gradient of ${\lambda }_{2}$ wrt each component of $\widetilde{\mathbf{A}}$ is given by
178
+
179
+ $$
180
+ {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2} \mathrel{\text{:=}} \operatorname{Tr}\left\lbrack {{\left( {\nabla }_{\widetilde{\mathbf{L}}}{\lambda }_{2}\right) }^{T} \cdot {\nabla }_{\widetilde{\mathbf{A}}}\widetilde{\mathbf{L}}}\right\rbrack = \operatorname{diag}\left( {{\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}}\right) {\mathbf{{11}}}^{T} - {\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T} \tag{6}
181
+ $$
182
+
183
+ where 1 is the vector of $n$ ones; and ${\left\lbrack {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}\right\rbrack }_{uv}$ is the gradient of ${\lambda }_{2}$ wrt ${\widetilde{\mathbf{A}}}_{uv}$ . The driving force of this gradient is the correlation ${\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}$ . Using this gradient to minimize ${\lambda }_{2}$ results in breaking the graph's bottleneck while simultaneously preserving the inter-cluster structure. We delve into this matter in App. A.2.
184
+
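+ The gradient in Eq. 6 only requires the Fiedler vector; entry $\left( {u, v}\right)$ equals ${\mathbf{f}}_{2}{\left( u\right) }^{2} - {\mathbf{f}}_{2}\left( u\right) {\mathbf{f}}_{2}\left( v\right)$ . The standalone NumPy sketch below computes it on a toy graph and takes one illustrative rewiring step (the step size and the symmetrization are our assumptions, not the paper's training code):
+
+ ```python
+ import numpy as np
+ import networkx as nx
+
+ G = nx.barbell_graph(5, 0)
+ A = nx.to_numpy_array(G)
+ L = np.diag(A.sum(1)) - A
+
+ # Fiedler vector of the (unnormalized) Laplacian.
+ f2 = np.linalg.eigh(L)[1][:, 1]
+
+ # Eq. 6: grad[u, v] = f2(u)^2 - f2(u) * f2(v)
+ grad = np.outer(f2 ** 2, np.ones(len(f2))) - np.outer(f2, f2)
+
+ # One gradient step decreasing lambda_2; only existing edges are updated and
+ # the result is symmetrized and kept non-negative (both choices are assumptions).
+ eta = 0.5
+ A_tilde = np.clip(A - eta * (grad + grad.T) / 2, 0.0, None) * (A > 0)
+ ```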
185
+ Normalized-cut (Ncut) Approximation. Similarly, considering now ${\lambda }_{2}^{\prime }$ for rewiring leads to
186
+
187
+ $$
+ {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}^{\prime } \mathrel{\text{:=}} \operatorname{Tr}\left\lbrack {{\left( {\nabla }_{\widetilde{\mathcal{L}}}{\lambda }_{2}^{\prime }\right) }^{T} \cdot {\nabla }_{\widetilde{\mathbf{A}}}\widetilde{\mathcal{L}}}\right\rbrack = {\mathbf{d}}^{\prime }\left\{ {{\mathbf{g}}_{2}^{T}{\widetilde{\mathbf{A}}}^{T}{\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{g}}_{2}}\right\} {\mathbf{1}}^{T} + {\mathbf{d}}^{\prime }\left\{ {{\mathbf{g}}_{2}^{T}\widetilde{\mathbf{A}}{\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{g}}_{2}}\right\} {\mathbf{1}}^{T} + {\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{g}}_{2}{\mathbf{g}}_{2}^{T}{\widetilde{\mathbf{D}}}^{-1/2} \tag{7}
+ $$
194
+
195
+ where ${\mathbf{d}}^{\prime }$ is an $n \times 1$ vector containing derivatives of the degrees wrt the adjacency and related terms. This gradient relies on the Fiedler vector ${\mathbf{g}}_{2}$ (the solution to the normalized-cut relaxation), and on the incoming and outgoing one-hop random walks. This approximation breaks the bottleneck while preserving the global topology of the graph (Figure 1-left). More details and the proof are included in App. A.2.
196
+
197
+ We present next an approximation of the Fiedler vector, followed by a proposed new layer in the GNN called the GAP-LAYER to learn how to minimize the spectral gap of the graph.
198
+
199
+ Approximating the Fiedler vector. Given that ${\mathbf{g}}_{2} = {\widetilde{\mathbf{D}}}^{1/2}{\mathbf{f}}_{2}$ , we can obtain the normalized-cut gradient in terms of ${\mathbf{f}}_{2}$ . From [23] we have that
200
+
201
+ $$
202
+ {\mathbf{f}}_{2}\left( u\right) = \begin{cases} +1/\sqrt{n} & \text{if } u \text{ belongs to the first cluster} \\ -1/\sqrt{n} & \text{if } u \text{ belongs to the second cluster} \end{cases} \; + \; O\left( \frac{\log n}{n}\right) \tag{8}
203
+ $$
204
+
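+ As a standalone illustration of Eq. 8 (the planted two-block graph is an assumption), on a graph with two well-separated communities the entries of the true Fiedler vector are close to $\pm 1/\sqrt{n}$ and their signs recover the two clusters:
+
+ ```python
+ import numpy as np
+ import networkx as nx
+
+ # Two dense blocks of 20 nodes, sparsely interconnected (seeded for reproducibility).
+ sizes, probs = [20, 20], [[0.5, 0.02], [0.02, 0.5]]
+ G = nx.stochastic_block_model(sizes, probs, seed=0)
+ A = nx.to_numpy_array(G)
+ L = np.diag(A.sum(1)) - A
+
+ f2 = np.linalg.eigh(L)[1][:, 1]                 # Fiedler vector of L
+ labels = np.array([0] * 20 + [1] * 20)
+
+ # Signs of f2 split the nodes into the two planted blocks (up to a global sign flip).
+ pred = (f2 > 0).astype(int)
+ agreement = max((pred == labels).mean(), (pred != labels).mean())
+ print(agreement, 1 / np.sqrt(len(f2)))          # agreement close to 1.0; scale ~ 0.16
+ ```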
205
+ ![01963ef6-344f-7649-932d-264024e54631_7_420_202_964_272_0.jpg](images/01963ef6-344f-7649-932d-264024e54631_7_420_202_964_272_0.jpg)
206
+
207
+ Figure 3: GAP-LAYER (Rcut). For GAP-LAYER (Ncut), substitute ${\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}$ by Eq. 7
208
+
209
+ Definition 2 (GAP-Layer). Given the matrix ${\mathbf{X}}_{n \times F}$ encoding the features of the nodes after any message passing (MP) layer, ${\mathbf{S}}_{n \times 2} = \operatorname{Softmax}\left( {\operatorname{MLP}\left( \mathbf{X}\right) }\right)$ learns the association $\mathbf{X} \rightarrow \mathbf{S}$ while $\mathbf{S}$ is optimized according to the loss ${L}_{Cut} = - \frac{\operatorname{Tr}\left\lbrack {{\mathbf{S}}^{T}\mathbf{A}\mathbf{S}}\right\rbrack }{\operatorname{Tr}\left\lbrack {{\mathbf{S}}^{T}\mathbf{D}\mathbf{S}}\right\rbrack } + {\begin{Vmatrix}\frac{{\mathbf{S}}^{T}\mathbf{S}}{{\begin{Vmatrix}{\mathbf{S}}^{T}\mathbf{S}\end{Vmatrix}}_{F}} - \frac{{\mathbf{I}}_{n}}{\sqrt{2}}\end{Vmatrix}}_{F}$ . Then the Fiedler vector ${\mathbf{f}}_{2}$ is approximated by applying a softmaxed version of Eq. 8 and considering the loss ${L}_{\text{Fiedler }} =$ $\parallel \widetilde{\mathbf{A}} - \mathbf{A}{\parallel }_{F} + \alpha {\left( {\lambda }_{2}^{ * }\right) }^{2}$ , where ${\lambda }_{2}^{ * } = {\lambda }_{2}$ if we use the ratio-cut approximation (and gradient) and ${\lambda }_{2}^{ * } = {\lambda }_{2}^{\prime }$ if we use the normalized-cut approximation and gradient. This returns $\widetilde{\mathbf{A}}$ and the GAP diffusion ${\mathbf{T}}^{GAP} = \widetilde{\mathbf{A}}\left( \mathbf{S}\right) \odot \mathbf{A}$ results from minimizing ${L}_{GAP} \mathrel{\text{:=}} {L}_{Cut} + {L}_{\text{Fiedler }}$ .
210
+
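+ The sketch below spells out the two losses of Definition 2 on dense tensors (a plain-PyTorch reading we provide for clarity; the soft Fiedler surrogate built from $\mathbf{S}$ , the single gradient step producing $\widetilde{\mathbf{A}}$ , and the Dirichlet-energy stand-in for ${\lambda }_{2}^{ * }$ are assumptions rather than the paper's exact implementation):
+
+ ```python
+ import torch
+
+ def gap_layer(X, A, mlp, alpha=1.0):
+     """Sketch of GAP-LAYER (Rcut variant): returns T_GAP and the loss L_GAP."""
+     n = A.shape[0]
+     d = A.sum(dim=1)
+     D = torch.diag(d)
+
+     # L_Cut: soft 2-way assignment S, MinCut-style quotient plus orthogonality term.
+     S = torch.softmax(mlp(X), dim=1)                          # n x 2
+     cut = -torch.trace(S.T @ A @ S) / torch.trace(S.T @ D @ S)
+     gram = S.T @ S
+     ortho = torch.norm(gram / torch.norm(gram) - torch.eye(2) / (2 ** 0.5))
+     loss_cut = cut + ortho
+
+     # Softmaxed version of Eq. 8: approximate Fiedler vector from the assignment S.
+     f2 = (S[:, 0] - S[:, 1]) / (n ** 0.5)
+
+     # Ratio-cut gradient of lambda_2 (Eq. 6) and one symmetrized step on A.
+     grad = torch.outer(f2 ** 2, torch.ones(n)) - torch.outer(f2, f2)
+     grad = 0.5 * (grad + grad.T)
+     A_tilde = torch.clamp(A - grad, min=0.0) * (A > 0).float()
+
+     # L_Fiedler: stay close to A while shrinking lambda_2, here approximated by
+     # the Dirichlet energy of f2 on the rewired graph.
+     L_tilde = torch.diag(A_tilde.sum(dim=1)) - A_tilde
+     lam2_approx = f2 @ L_tilde @ f2
+     loss_fiedler = torch.norm(A_tilde - A) + alpha * lam2_approx ** 2
+
+     T_gap = A_tilde * A                                       # GAP diffusion
+     return T_gap, loss_cut + loss_fiedler
+ ```
+
+ It can be exercised exactly like the CT-LAYER sketch above, with an MLP whose output dimension is 2.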
211
+ ## 4 Experiments and Discussion
212
+
213
+ In this section, we study the properties and performance of CT-LAYER and GAP-LAYER in a graph classification task with several benchmark datasets. To illustrate the merits of our approach, we compare CT-LAYER and GAP-LAYER with three baselines, including state-of-the-art diffusion- and curvature-based graph rewiring methods. Note that the aim of the evaluation is to shed light on the properties of both layers and illustrate their inductive performance, not to perform a benchmark comparison with all previously proposed graph rewiring methods.
214
+
215
+ ![01963ef6-344f-7649-932d-264024e54631_7_372_1160_1049_223_0.jpg](images/01963ef6-344f-7649-932d-264024e54631_7_372_1160_1049_223_0.jpg)
216
+
217
+ Figure 4: GNN models used in the experiments. Left: MinCut Baseline model. Right: CT-LAYER or GAP-LAYER models, depending on what method is used for rewiring.
218
+
219
+ Baselines. The first baseline architecture is based on MINCUT Pool [34] and is shown in Figure 4a. It is the base GNN that we use for graph classification without rewiring. The MINCUT Pool layer learns $\left( {{\mathbf{A}}_{n \times n},{\mathbf{X}}_{n \times F}}\right) \rightarrow \left( {{\mathbf{A}}^{\prime }{}_{k \times k},{\mathbf{X}}_{k \times F}}\right)$ , where $k < n$ is the new number of node clusters. The next two baselines are graph rewiring methods that belong to the same family of methods as DIFFWIRE, i.e. methods based on diffusion and curvature, namely DIGL (PPR) [25] and SDRF [21]. DIGL is a diffusion-based preprocessing method within the family of metric-based GSL approaches. We set the teleporting probability $\alpha = {0.001}$ , and $\epsilon$ is set to keep the same average degree for each graph. Once preprocessed with DIGL, the graphs are provided as input to the MINCUT Pool (Baseline 1) architecture. The third baseline model is SDRF, which performs curvature-based rewiring. SDRF is also a preprocessing method and has 3 parameters that are highly graph-dependent. We set these parameters to $\tau = {20}$ and ${C}^{ + } = 0$ for all experiments as per [21]. The number of iterations is set dynamically to ${0.7} \cdot \left| V\right|$ for each graph.
220
+
221
+ Both DIGL and SDRF aim to preserve the global topology of the graph but require optimizing their parameters for each input graph via hyper-parameter search. In a graph classification task, this search is $O\left( {n}^{3}\right)$ per graph. Details about the parameter tuning in these methods can be found in App. A.3.3.
222
+
223
+ To shed light on the performance and properties of CT-LAYER and GAP-LAYER, we add the corresponding layer between the Linear(X) and Conv1(A, X) layers. We build 3 different models: CT-LAYER, GAP-LAYER (Rcut) and GAP-LAYER (Ncut), depending on the layer used. For CT-LAYER, we learn ${\mathbf{T}}^{CT}$ , which is used as a convolution matrix afterwards. For GAP-LAYER, we learn ${\mathbf{T}}^{GAP}$ using either the Rcut or the Ncut approximation. A schematic of the architectures is shown in Figure 4b and in App. A.3.2.
224
+
225
+ Table 1: Experimental results on common graph classification benchmarks. Red denotes the best model row-wise and blue marks the runner-up. '*' means degree as node feature.
226
+
227
+ <table><tr><td/><td>MinCutPool</td><td>DIGL</td><td>SDRF</td><td>CT-LAYER</td><td>GAP-LAYER (Rcut)</td><td>GAP-LAYER (Ncut)</td></tr><tr><td>REDDIT-B*</td><td>${66.53} \pm {4.47}$</td><td>${76.02} \pm {4.31}$</td><td>${65.3} \pm {7.7}$</td><td>${78.45} \pm {4.59}$</td><td>${77.63} \pm {4.96}$</td><td>${76.00} \pm {5.30}$</td></tr><tr><td>IMDB-B*</td><td>${60.75} \pm {7.03}$</td><td>${59.35} \pm {7.76}$</td><td>${59.2} \pm {6.9}$</td><td>${69.84} \pm {4.60}$</td><td>${69.93} \pm {3.32}$</td><td>${68.80} \pm {3.10}$</td></tr><tr><td>COLLAB*</td><td>${58.00} \pm {6.22}$</td><td>${57.51} \pm {5.95}$</td><td>${56.6} \pm {10}$</td><td>${69.87} \pm {2.40}$</td><td>${64.47} \pm {4.07}$</td><td>${65.89} \pm {4.90}$</td></tr><tr><td>MUTAG</td><td>${84.21} \pm {6.34}$</td><td>${85.00} \pm {5.65}$</td><td>${82.4} \pm {6.8}$</td><td>${86.05} \pm {4.99}$</td><td>${86.90} \pm {4.00}$</td><td>${86.90} \pm {4.00}$</td></tr><tr><td>PROTEINS</td><td>${74.84} \pm {2.39}$</td><td>${74.49} \pm {2.88}$</td><td>${74.4} \pm {2.7}$</td><td>${75.38} \pm {2.97}$</td><td>${75.03} \pm {3.09}$</td><td>${75.34} \pm {2.10}$</td></tr><tr><td>SBM*</td><td>${53.00} \pm {9.90}$</td><td>${56.93} \pm {12.8}$</td><td>${54.1} \pm {7.1}$</td><td>${81.40} \pm {11.7}$</td><td>${90.80} \pm {7.00}$</td><td>${92.26} \pm {2.92}$</td></tr><tr><td>Erdös-Rényi*</td><td>${81.86} \pm {6.26}$</td><td>${81.93} \pm {6.32}$</td><td>${73.6} \pm {9.1}$</td><td>${79.06} \pm {9.89}$</td><td>${79.26} \pm {10.46}$</td><td>${82.26} \pm {3.20}$</td></tr></table>
228
+
229
+ As shown in Table 1, we use common benchmark datasets for graph classification in our experiments. We select datasets both with and without node features; in the latter case, we use the node degree as the feature. These datasets are diverse regarding the topology of their networks: REDDIT-B, IMDB-B and COLLAB contain truncated scale-free graphs (social networks), whereas MUTAG and PROTEINS contain graphs from biology or chemistry. In addition, we use two synthetic datasets with 2 classes: Erdös-Rényi with ${p}_{1} \in \left\lbrack {{0.3},{0.5}}\right\rbrack$ and ${p}_{2} \in \left\lbrack {{0.4},{0.8}}\right\rbrack$ , and Stochastic Block Model (SBM) with parameters ${p}_{1} = {0.8},{p}_{2} = {0.5},{q}_{1} \in \left\lbrack {{0.1},{0.15}}\right\rbrack$ and ${q}_{2} \in \left\lbrack {{0.01},{0.1}}\right\rbrack$ . More details are given in App. A.3.1.
230
+
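+ A possible way to generate the two synthetic datasets with NetworkX is sketched below; the graph sizes, the per-class reading of the parameters and the uniform sampling of $p$ and $q$ within the stated intervals are our assumptions:
+
+ ```python
+ import random
+ import networkx as nx
+
+ def make_erdos_renyi(label, n=60, seed=None):
+     rng = random.Random(seed)
+     p = rng.uniform(0.3, 0.5) if label == 0 else rng.uniform(0.4, 0.8)
+     return nx.erdos_renyi_graph(n, p, seed=seed), label
+
+ def make_sbm(label, sizes=(30, 30), seed=None):
+     rng = random.Random(seed)
+     if label == 0:
+         p_intra, q_inter = 0.8, rng.uniform(0.10, 0.15)
+     else:
+         p_intra, q_inter = 0.5, rng.uniform(0.01, 0.10)
+     probs = [[p_intra, q_inter], [q_inter, p_intra]]
+     return nx.stochastic_block_model(list(sizes), probs, seed=seed), label
+
+ # A balanced two-class SBM dataset of 100 graphs.
+ dataset = [make_sbm(label=i % 2, seed=i) for i in range(100)]
+ ```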
231
+ Table 1 reports average accuracies and standard deviations over 10 random data splits, using an 85/15 stratified train-test split, training for 60 epochs and reporting the results of the last epoch for each random run. We use the PyTorch Geometric framework and our code is publicly available ${}^{3}$ .
232
+
233
+ The experiments support our hypothesis that rewiring based on CT-LAYER and GAP-LAYER improves the performance of the baselines on graph classification. Since both layers are differentiable, they learn how to rewire unseen graphs. The improvements are significant in graphs with a social component (REDDIT-B, IMDB-B, COLLAB), i.e. graphs with small-world properties and power-law degree distributions whose topology is based on hubs and authorities. These are graphs where bottlenecks arise easily and our approach is able to properly rewire them. However, the improvements observed in planar or grid-like networks (MUTAG and PROTEINS) are more limited: the bottleneck does not seem to be critical for the graph classification task.
234
+
235
+ Moreover, our method performs better on graphs with featureless nodes than on graphs with node features because it is able to leverage the information encoded in the topology of the graphs. Note that in attribute-based graphs, the node attributes typically outweigh the graph’s structure in the classification task, whereas in graphs without node features, the information is encoded in the graph’s structure. App. A.3.4 contains an in-depth analysis of the latent space produced by each method.
236
+
237
+ CT-Layer vs GAP-Layer. The real-world datasets explored in this paper are characterized by mild bottlenecks from the perspective of the Lovász bound. For completeness, we have included two synthetic datasets (SBM and Erdös-Rényi) where the Lovász bound is very restrictive. As a result, CT-LAYER is outperformed by GAP-LAYER on SBM. Note that the results on the synthetic datasets suffer from large variability. The smaller the graph's bottleneck, the more useful CT-LAYER is. Conversely, the larger the bottleneck, the more useful GAP-LAYER is.
238
+
239
+ ## 5 Conclusion and Future Work
240
+
241
+ In this paper, we have proposed DIFFWIRE, a unified framework for graph rewiring that links the two components of the Lovász bound: CTs and the spectral gap. We have presented two novel, fully differentiable and inductive rewiring layers: CT-LAYER and GAP-LAYER. We have empirically evaluated these layers on benchmark datasets for graph classification with competitive results when compared to SoTA baselines, especially on graphs where the nodes have no attributes and which have small-world properties.
242
+
243
+ In future work, we plan to test our approach in other graph-related tasks and intend to apply DIFFWIRE to real-world applications, particularly in social networks, which have unique topology, statistics and direct implications in society.
244
+
245
+ ---
246
+
247
+ ${}^{3}$ https://anonymous.4open.science/r/DiffWireLoG22/readme.md
248
+
249
+ ---
250
+
251
+ References
252
+
253
+ [1] Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings. 2005 IEEE international joint conference on neural networks, volume 2, pages 729-734, 2005.1
254
+
255
+ [2] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE transactions on neural networks, 20(1):61-80, 2008. 1
256
+
257
+ [3] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017. 1
258
+
259
+ [4] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, page 1263-1272. JMLR.org, 2017. 1
260
+
261
+ [5] Thomas N Kipf and Max Welling. Variational graph auto-encoders. In NeurIPS Workshop on Bayesian Deep Learning, 2016. 1
262
+
263
+ [6] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Deep neural networks for learning graph representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016. 1
264
+
265
+ [7] Fei Tian, Bin Gao, Qing Cui, Enhong Chen, and Tie-Yan Liu. Learning deep representations for graph clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28, 2014. 1
266
+
267
+ [8] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32 (1):4-24, 2021. doi: 10.1109/TNNLS.2020.2978386. 1, 3
268
+
269
+ [9] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. International Conference on Learning Representations, 2018. URL https: //openreview.net/forum?id=rJXMpikCZ. l
270
+
271
+ [10] Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021. 1
272
+
273
+ [11] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id= ryGs6iA5Km. 1
274
+
275
+ [12] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31rd International Conference on NeurIPS, 2017. 1, 3
276
+
277
+ [13] Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI Press, 2018. ISBN 978-1-57735-800-8.1, 3
278
+
279
+ [14] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id= i800PhOCVH2.1,2,3
280
+
281
+ [15] Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Graph neural networks: A review of methods and applications. CoRR, abs/1812.08434, 2018. URL http://arxiv.org/ abs/1812.08434.2,3
282
+
283
+ [16] L. Lovász. Random walks on graphs: A survey. In D. Miklós, V. T. Sós, and T. Szőnyi, editors, Combinatorics, Paul Erdős is Eighty, volume 2, pages 353-398. János Bolyai Mathematical Society, Budapest, 1996. 2, 3, 4
284
+
285
+ [17] Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=r11Z7AEKvB.2, 3
286
+
287
+ [18] Petar Veličković. Message passing all the way up. arXiv preprint arXiv:2202.11097, 2022. 2
288
+
289
+ [19] Wenbing Huang, Yu Rong, Tingyang Xu, Fuchun Sun, and Junzhou Huang. Tackling over-smoothing for general graph convolutional networks. arXiv preprint arXiv:2008.09864, 2020. 2, 3
290
+
291
+ [20] Anees Kazi, Luca Cosmo, Seyed-Ahmad Ahmadi, Nassir Navab, and Michael Bronstein. Differentiable graph module (dgm) for graph convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1-1, 2022. doi: 10.1109/TPAMI.2022.3170249. 2, 3
292
+
293
+ [21] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=7UmjRGzp-A.2, 3, 7, 8, 17, 21
294
+
295
+ [22] Karel Devriendt and Renaud Lambiotte. Discrete curvature on graphs from the effective resistance. arXiv preprint arXiv:2201.06385, 2022. doi: 10.48550/ARXIV.2201.06385. URL https://arxiv.org/abs/2201.06385.2,7,17
296
+
297
+ [23] NT Hoang, Takanori Maehara, and Tsuyoshi Murata. Revisiting graph neural networks: Graph filtering perspective. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 8376-8383. IEEE, 2021. 3, 7, 18
298
+
299
+ [24] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=S1ld02EFPr.3
300
+
301
+ [25] Johannes Klicpera, Stefan Weißenberger, and Stephan Günnemann. Diffusion improves graph learning. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019. 3, 8, 21
302
+
303
+ [26] Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018. 3
304
+
305
+ [27] Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Benjamin Chamberlain, Michael Bronstein, and Federico Monti. Sign: Scalable inception graph neural networks. In ICML 2020 Workshop on Graph Representation Learning and Beyond, 2020. 3
306
+
307
+ [28] Pál András Papp, Karolis Martinkus, Lukas Faber, and Roger Wattenhofer. Dropgnn: Random dropouts increase the expressiveness of graph neural networks. In 35th Conference on Neural Information Processing Systems (NeurIPS), 2021. 3
308
+
309
+ [29] Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. Dropedge: Towards deep graph convolutional networks on node classification. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=Hkx1qkrKPr.3
310
+
311
+ [30] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):3438-3445, Apr. 2020. doi: 10.1609/aaai.v34i04.5747. URL https://ojs.aaai.org/index.php/AAAI/article/view/5747.3
312
+
313
+ [31] Yanqiao Zhu, Weizhi Xu, Jinghao Zhang, Yuanqi Du, Jieyu Zhang, Qiang Liu, Carl Yang, and Shu Wu. A survey on graph structure learning: Progress and opportunities. arXiv e-prints, pages arXiv-2103, 2021. 3
314
+
315
+ [32] Diego Mesquita, Amauri Souza, and Samuel Kaski. Rethinking pooling in graph neural networks. Advances in Neural Information Processing Systems, 33:2220-2231, 2020. 3
316
+
317
+ [33] Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. Advances in neural information processing systems, 31, 2018. 3
318
+
319
+ [34] Filippo Maria Bianchi, Daniele Grattarola, and Cesare Alippi. Spectral clustering with graph neural networks for graph pooling. In Proceedings of the 37th international conference on Machine learning, pages 2729-2738. ACM, 2020. 3, 8
320
+
321
+ [35] F. R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997. 4
322
+
323
+ [36] Ulrike von Luxburg, Agnes Radl, and Matthias Hein. Hitting and commute times in large random neighborhood graphs. Journal of Machine Learning Research, 15(52):1751-1798, 2014. URL http: //jmlr.org/papers/v15/vonluxburg14a.html. 4, 19
324
+
325
+ [37] Daniel A. Spielman and Nikhil Srivastava. Graph sparsification by effective resistances. SIAM Journal on Computing, 40(6):1913-1926, 2011. doi: 10.1137/080734029. URL https://doi.org/10.1137/080734029.5
326
+
327
+ [38] D. J. Klein and M. Randić. Resistance distance. Journal of Mathematical Chemistry, 12(1):81-95, 1993. doi: 10.1007/BF01164627. URL https://doi.org/10.1007/BF01164627.5
328
+
329
+ [39] Huaijun Qiu and Edwin R. Hancock. Clustering and embedding using commute times. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1873-1890, 2007. doi: 10.1109/TPAMI.2007.1103. 5, 6
330
+
331
+ [40] Vedat Levi Alev, Nima Anari, Lap Chi Lau, and Shayan Oveis Gharan. Graph Clustering using Effective Resistance. In Anna R. Karlin, editor, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018), volume 94 of Leibniz International Proceedings in Informatics (LIPIcs), pages 41:1-41:16, Dagstuhl, Germany, 2018. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik. ISBN 978-3-95977-060-6. doi: 10.4230/LIPIcs.ITCS.2018.41. URL http://drops.dagstuhl.de/opus/volltexte/2018/8369.6, 16
332
+
333
+ [41] Emmanuel Abbe. Community detection and stochastic block models: Recent developments. J. Mach. Learn. Res., 18(1):6446-6531, jan 2017. ISSN 1532-4435. 7, 19
334
+
335
+ [42] Thomas Bühler and Matthias Hein. Spectral clustering based on the graph p-laplacian. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, page 81-88, New York, NY, USA, 2009. Association for Computing Machinery. ISBN 9781605585161. doi: 10.1145/1553374.1553385. URL https://doi.org/10.1145/1553374.1553385.7, 19
336
+
337
+ [43] Jian Kang and Hanghang Tong. N2n: Network derivative mining. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19, page 861-870, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450369763. doi: 10.1145/ 3357384.3357910. URL https://doi.org/10.1145/3357384.3357910.7, 18
338
+
339
+ [44] Joshua Batson, Daniel A. Spielman, Nikhil Srivastava, and Shang-Hua Teng. Spectral sparsification of graphs: Theory and algorithms. Commun. ACM, 56(8):87-94, aug 2013. ISSN 0001-0782. doi: 10.1145/2492007.2492029. URL https://doi.org/10.1145/2492007.2492029.15
340
+
341
+ [45] Morteza Alamgir and Ulrike von Luxburg. Phase transition in the family of p-resistances. In Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS'11, page 379-387, Red Hook, NY, USA, 2011. Curran Associates Inc. ISBN 9781618395993. 19
342
+
343
+ [46] Pan Li and Olgica Milenkovic. Submodular hypergraphs: p-laplacians, Cheeger inequalities and spectral clustering. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3014-3023. PMLR, 10-15 Jul 2018. URL https://proceedings.mlr.press/v80/li18e.html.19
344
+
345
+ [47] Gregory Berkolaiko, James B Kennedy, Pavel Kurasov, and Delio Mugnolo. Edge connectivity and the spectral gap of combinatorial and quantum graphs. Journal of Physics A: Mathematical and Theoretical, 50(36):365201, 2017. 19
346
+
347
+ [48] Zoran Stanić. Graphs with small spectral gap. Electronic Journal of Linear Algebra, 26:28, 2013. 19
348
+
349
+ ## A Appendix
350
+
351
+ In Appendix A we include a table with the notation used in the paper and we provide an analysis of the diffusion and its relationship with curvature. In Appendix B, we study in detail the GAP-LAYER and the implications of the proposed spectral gradients. Appendix C reports statistics and characteristics of the datasets used in the experimental section, provides more information about the experimental results, describes additional experimental results, and includes a summary of the computing infrastructure used in our experiments.
352
+
353
+ Table 2: Notation.
354
+
355
+ <table><tr><td>Symbol</td><td>Description</td></tr><tr><td>$G = \left( {V, E}\right)$</td><td>Graph = (Nodes, Edges)</td></tr><tr><td>A</td><td>Adjacency matrix: $\mathbf{A} \in {\mathbb{R}}^{n \times n}$</td></tr><tr><td>X</td><td>Feature matrix: $\mathbf{X} \in {\mathbb{R}}^{n \times F}$</td></tr><tr><td>$v$</td><td>Node $v \in V$ or $u \in V$</td></tr><tr><td>$e$</td><td>Edge $e \in E$</td></tr><tr><td>$x$</td><td>Features of node $v:x \in X$</td></tr><tr><td>$n$</td><td>Number of nodes: $n = \left| V\right|$</td></tr><tr><td>$F$</td><td>Number of features</td></tr><tr><td>D</td><td>Degree diagonal matrix where ${d}_{v}$ in ${D}_{vv}$</td></tr><tr><td>${d}_{v}$</td><td>Degree of node $v$</td></tr><tr><td>$\operatorname{vol}\left( G\right)$</td><td>Sum of the degrees of the graph $\operatorname{vol}\left( G\right) = \operatorname{Tr}\left\lbrack D\right\rbrack$</td></tr><tr><td>L</td><td>Laplacian: $\mathbf{L} = \mathbf{D} - \mathbf{A}$</td></tr><tr><td>B</td><td>Signed edge-vertex incidence matrix</td></tr><tr><td>${\mathbf{b}}_{e}$</td><td>Incidence vector: Row vector of $\mathbf{B}$ , with ${\mathbf{b}}_{e = \left( {u, v}\right) } = \left( {{\mathbf{e}}_{u} - {\mathbf{e}}_{v}}\right)$</td></tr><tr><td>${\mathbf{v}}_{e}$</td><td>Projected incidence vector: ${\mathbf{v}}_{e} = {\mathbf{L}}^{+/2}{\mathbf{b}}_{e}$</td></tr><tr><td>$\Gamma$</td><td>Ratio $\Gamma = \frac{1 + \epsilon }{1 - \epsilon }$</td></tr><tr><td>$\varepsilon$</td><td>Dirichlet Energy wrt $\mathbf{L} : \mathcal{E}\left( \mathbf{x}\right) \mathrel{\text{:=}} {\mathbf{x}}^{T}\mathbf{L}\mathbf{x}$</td></tr><tr><td>$\mathcal{L}$</td><td>Normalized Laplacian: $\mathcal{L} = \mathbf{I} - {\mathbf{D}}^{-1/2}{\mathbf{{AD}}}^{-1/2}$</td></tr><tr><td>$\Lambda$</td><td>Eigenvalue matrix of $\mathbf{L}$</td></tr><tr><td>${\Lambda }^{\prime }$</td><td>Eigenvalue matrix of $\mathcal{L}$</td></tr><tr><td>${\lambda }_{i}$</td><td>$i$ -th eigenvalue of $\mathbf{L}$</td></tr><tr><td>${\lambda }_{2}$</td><td>Second eigenvalue of $\mathbf{L}$ : Spectral gap</td></tr><tr><td>${\lambda }_{i}^{\prime }$</td><td>$i$ -th eigenvalue of $\mathcal{L}$</td></tr><tr><td>${\lambda }_{2}^{\prime }$</td><td>Second eigenvalue of $\mathcal{L}$ : Spectral gap</td></tr><tr><td>$\mathbf{F}$</td><td>Matrix of eigenvectors of $\mathbf{L}$</td></tr><tr><td>G</td><td>Matrix of eigenvectors of $\mathcal{L}$</td></tr><tr><td>${\mathbf{f}}_{i}$</td><td>$i$ eigenvector of $\mathbf{L}$</td></tr><tr><td>${\mathrm{f}}_{2}$</td><td>Second eigenvector of $\mathbf{L}$ : Fiedler vector</td></tr><tr><td>${\mathrm{g}}_{i}$</td><td>$i$ eigenvector of $\mathcal{L}$</td></tr><tr><td>${\mathrm{g}}_{2}$</td><td>Second eigenvector of $\mathcal{L}$ : Fiedler vector</td></tr><tr><td>$\widetilde{\mathbf{A}}$</td><td>New Adjacency matrix</td></tr><tr><td>${E}^{\prime }$</td><td>New edges</td></tr><tr><td>${H}_{uv}$</td><td>Hitting time between $u$ and $v$</td></tr><tr><td>$C{T}_{uv}$</td><td>Commute time: $C{T}_{uv} = {H}_{uv} + {H}_{vu}$</td></tr><tr><td>${R}_{uv}$</td><td>Effective resistance: ${R}_{uv} = C{T}_{uv}/\operatorname{vol}\left( G\right)$</td></tr><tr><td>Z</td><td>Matrix of commute times embeddings for all nodes in $G$</td></tr><tr><td>${\mathbf{Z}}_{u}$</td><td>Commute time embedding of node $u$</td></tr><tr><td>${\mathbf{T}}^{CT}$</td><td>Resistance diffusion</td></tr><tr><td>S</td><td>Cluster assignment matrix: $\mathbf{S} \in {\mathbb{R}}^{n \times 2}$</td></tr><tr><td>${\mathbf{T}}^{GAP}$</td><td>GAP diffusion</td></tr><tr><td>${\mathbf{e}}_{u}$</td><td>Unit 
vector with unit value at $u$ and 0 elsewhere</td></tr><tr><td>${\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}$</td><td>Gradient of ${\lambda }_{2}$ wrt $\mathbf{A}$</td></tr><tr><td>${\left\lbrack {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}\right\rbrack }_{ij}$</td><td>Gradient of ${\lambda }_{2}$ wrt ${\mathbf{A}}_{uv}$</td></tr><tr><td>${p}_{u}$</td><td>Node curvature: ${p}_{u} \mathrel{\text{:=}} 1 - \frac{1}{2}\mathop{\sum }\limits_{{u \sim w}}{R}_{uv}$</td></tr><tr><td>${\kappa }_{uv}$</td><td>Edge curvature: ${\kappa }_{uv} \mathrel{\text{:=}} 2\left( {{p}_{u} + {p}_{v}}\right) /{R}_{uv}$</td></tr></table>
356
+
357
+ ### A.1 Appendix A: CT-LAYER
358
+
359
+ #### A.1.1 Notation
360
+
361
+ Table 2 summarizes the notation used in the paper.
362
+
363
+ #### A.1.2 Analysis of Commute Times rewiring
364
+
365
+ First, we provide an answer to the following question:
366
+
367
+ Is resistance diffusion via ${\mathbf{T}}^{CT}$ a principled way of preserving the Cheeger constant?
368
+
369
+ We answer the question above by linking Theorems 1 and 2 in the paper with the Lovász bound. The outline of our explanation follows three steps.
370
+
371
+ - Proposition 1: Theorem 1 (Sparsification) provides a principled way to bias the adjacency matrix so that the edges with the largest weights in the rewired graph correspond to the edges in graph's bottleneck.
372
+
373
+ - Proposition 2: Theorem 2 (Cheeger vs Resistance) can be used to demonstrate that increasing the effective resistance leads to a mild reduction of the Cheeger constant.
374
+
375
+ - Proposition 3: (Conclusion) The effectiveness of the above theorems to contain the Cheeger constant is constrained by the Lovász bound.
376
+
377
+ Next, we provide a thorough explanation of each of the propositions above.
378
+
379
+ Proposition 1 (Biasing). Let ${G}^{\prime } = \operatorname{Sparsify}\left( {G, q}\right)$ be a sampling algorithm of graph $G = \left( {V, E}\right)$ , where edges $e \in E$ are sampled with probability $q \propto {R}_{e}$ (proportional to the effective resistance). This choice is necessary to retain the global structure of $G$ , i.e., to satisfy
380
+
381
+ $$
382
+ \forall \mathbf{x} \in {\mathbb{R}}^{n} : \left( {1 - \epsilon }\right) {\mathbf{x}}^{T}{\mathbf{L}}_{G}\mathbf{x} \leq {\mathbf{x}}^{T}{\mathbf{L}}_{{G}^{\prime }}\mathbf{x} \leq \left( {1 + \epsilon }\right) {\mathbf{x}}^{T}{\mathbf{L}}_{G}\mathbf{x}, \tag{9}
383
+ $$
384
+
385
+ with probability at least $1/2$ by sampling $O\left( {n\log n/{\epsilon }^{2}}\right)$ edges, with $1/\sqrt{n} < \epsilon \leq 1$ , instead of $O\left( m\right)$ , where $m = \left| E\right|$ . In addition, this choice biases the uniform distribution in favor of critical edges in the graph.
386
+
387
+ Proof. We start by expressing the Laplacian $\mathbf{L}$ in terms of the edge-vertex incidence matrix ${\mathbf{B}}_{m \times n}$ :
388
+
389
+ $$
390
+ {\mathbf{B}}_{eu} = \left\{ \begin{matrix} 1 & \text{ if }u\text{ is the head of }e \\ - 1 & \text{ if }u\text{ is the tail of }e \\ 0 & \text{ otherwise } \end{matrix}\right. \tag{10}
391
+ $$
392
+
393
+ where edges in undirected graphs are counted once, i.e. $e = \left( {u, v}\right) = \left( {v, u}\right)$ . Then, we have $\mathbf{L} = {\mathbf{B}}^{T}\mathbf{B} = \mathop{\sum }\limits_{e}{\mathbf{b}}_{e}{\mathbf{b}}_{e}^{T}$ , where ${\mathbf{b}}_{e}$ is a row vector (incidence vector) of $\mathbf{B}$ , with ${\mathbf{b}}_{e = \left( {u, v}\right) } = \left( {{\mathbf{e}}_{u} - {\mathbf{e}}_{v}}\right)$ . In addition, the Dirichlet energies can be expressed as norms:
394
+
395
+ $$
396
+ \mathcal{E}\left( \mathbf{x}\right) = {\mathbf{x}}^{T}\mathbf{{Lx}} = {\mathbf{x}}^{T}{\mathbf{B}}^{T}\mathbf{{Bx}} = \parallel \mathbf{{Bx}}{\parallel }_{2}^{2} = \mathop{\sum }\limits_{{e = \left( {u, v}\right) \in E}}{\left( {\mathbf{x}}_{u} - {\mathbf{x}}_{v}\right) }^{2}. \tag{11}
397
+ $$
398
+
399
+ As a result, the effective resistance ${R}_{e}$ between the two nodes of an edge $e = \left( {u, v}\right)$ can be defined as
400
+
401
402
+
403
+ $$
404
+ {R}_{e} = {\left( {\mathbf{e}}_{u} - {\mathbf{e}}_{v}\right) }^{T}{\mathbf{L}}^{ + }\left( {{\mathbf{e}}_{u} - {\mathbf{e}}_{v}}\right) = {\mathbf{b}}_{e}^{T}{\mathbf{L}}^{ + }{\mathbf{b}}_{e} \tag{12}
405
+ $$
406
+
407
+ Next, we reformulate the spectral constraints in Eq. 9, i.e. $\left( {1 - \epsilon }\right) {\mathbf{L}}_{G} \preccurlyeq {\mathbf{L}}_{{G}^{\prime }} \preccurlyeq \left( {1 + \epsilon }\right) {\mathbf{L}}_{G}$ as
408
+
409
+ $$
410
+ {\mathbf{L}}_{G} \preccurlyeq {\mathbf{L}}_{{G}^{\prime }} \preccurlyeq \Gamma {\mathbf{L}}_{G},\Gamma = \frac{1 + \epsilon }{1 - \epsilon }. \tag{13}
411
+ $$
412
+
413
+ This simplifies the analysis, since the above expression can be interpreted as follows: the Dirichlet energies of ${\mathbf{L}}_{{G}^{\prime }}$ are lower-bounded by those of ${\mathbf{L}}_{G}$ and upper-bounded by $\Gamma$ times the energies of ${\mathbf{L}}_{G}$ . Considering that the energies define hyper-ellipsoids, the hyper-ellipsoid associated with ${\mathbf{L}}_{{G}^{\prime }}$ is between the hyper-ellipsoids of ${\mathbf{L}}_{G}$ and $\Gamma$ times the ${\mathbf{L}}_{G}$ .
414
+
415
+ The hyper-ellipsoid analogy provides a framework to prove that the inclusion relationships are preserved under scaling: $M{\mathbf{L}}_{G}M \preccurlyeq M{\mathbf{L}}_{{G}^{\prime }}M \preccurlyeq {M\Gamma }{\mathbf{L}}_{G}M$ , where $M$ is a matrix applied on both sides. In this case, if we set $M \mathrel{\text{:=}} {\left( {\mathbf{L}}_{G}^{ + }\right) }^{1/2} = {\mathbf{L}}_{G}^{+/2}$ we have:
416
+
417
+ $$
418
+ {\mathbf{L}}_{G}^{+/2}{\mathbf{L}}_{G}{\mathbf{L}}_{G}^{+/2} \preccurlyeq {\mathbf{L}}_{G}^{+/2}{\mathbf{L}}_{{G}^{\prime }}{\mathbf{L}}_{G}^{+/2} \preccurlyeq {\mathbf{L}}_{G}^{+/2}\Gamma {\mathbf{L}}_{G}^{+/2}, \tag{14}
419
+ $$
420
+
421
+ which leads to
422
+
423
+ $$
424
+ {\mathbf{I}}_{n} \preccurlyeq {\mathbf{L}}_{G}^{+/2}{\mathbf{L}}_{{G}^{\prime }}{\mathbf{L}}_{G}^{+/2} \preccurlyeq \Gamma {\mathbf{I}}_{n}. \tag{15}
425
+ $$
426
+
427
+ We seek a Laplacian ${\mathbf{L}}_{{G}^{\prime }}$ satisfying the similarity constraints in Eq. 13. Since ${E}^{\prime } \subset E$ , i.e. we want to remove structurally irrelevant edges, we can design ${\mathbf{L}}_{{G}^{\prime }}$ by considering all the edges in $E$ :
428
+
429
+ $$
430
+ {\mathbf{L}}_{{G}^{\prime }} \mathrel{\text{:=}} {\mathbf{B}}_{G}^{T}{\mathbf{B}}_{G} = \mathop{\sum }\limits_{e}{s}_{e}{\mathbf{b}}_{e}{\mathbf{b}}_{e}^{T} \tag{16}
431
+ $$
432
+
433
+ and let the similarity constraint define the sampling weights and the choice of $e$ (setting ${s}_{e} \geq 0$ property). More precisely:
434
+
435
+ $$
436
+ {\mathbf{I}}_{n} \preccurlyeq {\mathbf{L}}_{G}^{+/2}\mathop{\sum }\limits_{e}{\mathbf{b}}_{e}{\mathbf{b}}_{e}^{T}{\mathbf{L}}_{G}^{+/2} \preccurlyeq \Gamma {\mathbf{I}}_{n}. \tag{17}
437
+ $$
438
+
439
+ Then if we define ${\mathbf{v}}_{e} \mathrel{\text{:=}} {\mathbf{L}}_{G}^{+/2}{\mathbf{b}}_{e}$ as the projected incidence vector, we have
440
+
441
+ $$
442
+ {\mathbf{I}}_{n} \preccurlyeq \mathop{\sum }\limits_{e}{s}_{e}{\mathbf{v}}_{e}{\mathbf{v}}_{e}^{T} \preccurlyeq \Gamma {\mathbf{I}}_{n}. \tag{18}
443
+ $$
444
+
445
+ Consequently, a spectral sparsifier must find ${s}_{e} \geq 0$ so that the above similarity constraint is satisfied. Since there are $m$ edges in $E$ , ${s}_{e}$ must be zero for most of the edges. But, what are the best candidates to retain? Interestingly, the similarity constraint provides the answer. From Eq. 12 we have
446
+
447
+ $$
448
+ {\mathbf{v}}_{e}^{T}{\mathbf{v}}_{e} = {\begin{Vmatrix}{\mathbf{v}}_{e}\end{Vmatrix}}^{2} = {\begin{Vmatrix}{\mathbf{L}}_{G}^{+/2}{\mathbf{b}}_{e}\end{Vmatrix}}_{2}^{2} = {\mathbf{b}}_{e}^{T}{\mathbf{L}}_{G}^{ + }{\mathbf{b}}_{e} = {R}_{e}. \tag{19}
449
+ $$
450
+
451
+ This result explains why sampling the edges with probability $q \propto {R}_{e}$ leads to a ranking of $m$ edges of $G = \left( {V, E}\right)$ such that edges with large ${R}_{e} = {\begin{Vmatrix}{\mathbf{v}}_{e}\end{Vmatrix}}^{2}$ are preferred ${}^{4}$ .
452
+
453
+ Algorithm 1 implements a deterministic greedy version of Sparsify(G, q), where we build incrementally ${E}^{\prime } \subset E$ by creating a budget of decreasing resistances ${R}_{{e}_{1}} \geq {R}_{{e}_{2}} \geq \ldots \geq {R}_{{e}_{O\left( {n\log n/{\epsilon }^{2}}\right) }}$ .
454
+
455
+ Note that this rewiring strategy preserves the spectral similarities of the graphs, i.e. the global structure of $G = \left( {V, E}\right)$ is captured by ${G}^{\prime } = \left( {V,{E}^{\prime }}\right)$ .
456
+
457
+ Moreover, the maximum ${R}_{e}$ in each graph determines an upper bound on the Cheeger constant and hence an upper bound on the size of the graph's bottleneck, as per the following proposition.
458
+
459
+ Algorithm 1: GREEDYSparsify
460
+
461
+ ---
462
+
463
+ Input $: G = \left( {V, E}\right) ,\epsilon \in (1/\sqrt{n},1\rbrack , n = \left| V\right|$ .
464
+
465
+ Output : ${G}^{\prime } = \left( {V,{E}^{\prime }}\right)$ with ${E}^{\prime } \subset E$ such that $\left| {E}^{\prime }\right| = O\left( {n\log n/{\epsilon }^{2}}\right)$ .
466
+
467
+ $L \leftarrow \operatorname{List}\left( \left\{ {{\mathbf{v}}_{e} : e \in E}\right\} \right)$
468
+
469
+ $Q \leftarrow \operatorname{Sort}\left( {L\text{, descending, criterion} = {\begin{Vmatrix}{\mathbf{v}}_{e}\end{Vmatrix}}^{2}}\right) \vartriangleright$ Sort candidate edges by descending Resistance
470
+
471
+ ${E}^{\prime } \leftarrow \varnothing$
472
+
473
+ $\mathcal{I} \leftarrow {\mathbf{0}}_{n \times n}$
474
+
475
+ repeat
476
+
477
+ ${\mathbf{v}}_{e} \leftarrow \operatorname{pop}\left( Q\right)$ $\vartriangleright$ Remove the head of the queue
478
+
479
+ $\mathcal{I} \leftarrow \mathcal{I} + {\mathbf{v}}_{e}{\mathbf{v}}_{e}^{T}$
480
+
481
+ if $\mathcal{I} \preccurlyeq \Gamma {\mathbf{I}}_{n}$ then
482
+
483
+ ${E}^{\prime } \leftarrow {E}^{\prime } \cup \{ e\}$ $\vartriangleright$ Update the current budget of edges
484
+
485
+ else
486
+
487
+ return ${G}^{\prime } = \left( {V,{E}^{\prime }}\right)$
488
+
489
+ until $Q = \varnothing$
490
+
491
+ ---
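+ A dense NumPy reading of GREEDYSparsify is sketched below (illustrative only: the toy graph and $\epsilon$ are assumptions, no attempt is made at the efficiency of proper sparsifiers, and with unit weights the $\Gamma$ test is loose on such small graphs, so the snippet mainly exposes the descending-resistance ranking that drives the rewiring):
+
+ ```python
+ import numpy as np
+ import networkx as nx
+
+ def greedy_sparsify(G, eps=0.9):
+     """Dense, illustrative reading of Algorithm 1 (GREEDYSparsify)."""
+     n = G.number_of_nodes()
+     A = nx.to_numpy_array(G)
+     L = np.diag(A.sum(1)) - A
+     Gamma = (1 + eps) / (1 - eps)
+
+     # L^{+/2} from the eigendecomposition (zero eigenvalue skipped).
+     lam, F = np.linalg.eigh(L)
+     Lp_half = F[:, 1:] @ np.diag(lam[1:] ** -0.5) @ F[:, 1:].T
+
+     # Projected incidence vectors v_e = L^{+/2}(e_u - e_v); ||v_e||^2 = R_e (Eq. 19).
+     vecs = {(u, v): Lp_half[:, u] - Lp_half[:, v] for u, v in G.edges()}
+     queue = sorted(vecs, key=lambda e: -vecs[e] @ vecs[e])    # descending resistance
+
+     E_prime, I_mat = [], np.zeros((n, n))
+     for e in queue:
+         I_next = I_mat + np.outer(vecs[e], vecs[e])
+         if np.linalg.eigvalsh(I_next).max() <= Gamma:         # test I_next <= Gamma * I_n
+             E_prime.append(e)
+             I_mat = I_next
+         else:
+             break
+     return queue, E_prime
+
+ queue, kept = greedy_sparsify(nx.barbell_graph(6, 0))
+ print(queue[0])     # (5, 6): the bridge edge has the largest effective resistance
+ ```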
492
+
493
+ Proposition 2 (Resistance Diameter). Let ${G}^{\prime } = \operatorname{Sparsify}\left( {G, q}\right)$ be a sampling algorithm of graph $G = \left( {V, E}\right)$ , where edges $e \in E$ are sampled with probability $q \propto {R}_{e}$ (proportional to the effective resistance). Consider the resistance diameter ${\mathcal{R}}_{\text{diam }} \mathrel{\text{:=}} \mathop{\max }\limits_{{u, v}}{R}_{uv}$ . Then, for the pair $\left( {u, v}\right)$ attaining this maximum there
494
+
495
+ ---
496
+
497
+ ${}^{4}$ Although some of the elements of this section are derived from [44], we note that Nikhil Srivastava’s lectures at the Simons Institute (2014) are considerably more illuminating.
498
+
499
+ ---
500
+
501
+ exists an edge $e = \left( {u, v}\right) \in {E}^{\prime }$ in ${G}^{\prime } = \left( {V,{E}^{\prime }}\right)$ such that ${R}_{e} = {\mathcal{R}}_{\text{diam }}$ . As a result, the Cheeger constant ${h}_{G}$ of $G$ is upper-bounded as follows:
502
+
503
+ $$
504
+ {h}_{G} \leq \frac{{\alpha }^{\epsilon }}{\sqrt{{\mathcal{R}}_{\text{diam }} \cdot \epsilon }}\operatorname{vol}{\left( S\right) }^{\epsilon - 1/2}, \tag{20}
505
+ $$
506
+
507
+ with $0 < \epsilon < 1/2$ and ${d}_{u} \geq 1/\alpha$ for all $u \in V$ .
508
+
509
+ Proof. The fact that the maximum resistance ${\mathcal{R}}_{\text{diam }}$ is located at an edge is derived from two observations: a) the resistance is upper-bounded by the shortest-path distance; and b) edges with maximal resistance are prioritized (Proposition 1).
510
+
511
+ Theorem 2 states that any attempt to increase the graph's bottleneck in a multiplicative way (i.e. multiplying it by a constant $c \geq 0$ ) results in decreasing the effective resistances as follows:
512
+
513
+ $$
514
+ {R}_{uv} \leq \left( {\frac{1}{{d}_{u}^{2\epsilon }} + \frac{1}{{d}_{v}^{2\epsilon }}}\right) \cdot \frac{1}{\epsilon \cdot {c}^{2}} \tag{21}
515
+ $$
516
+
517
+ with $\epsilon \in \left\lbrack {0,1/2}\right\rbrack$ . This equation is called the resistance bound. Therefore, a multiplicative increase of the bottleneck leads to a quadratic decrease of the resistances.
518
+
519
+ Following Corollary 2 of [40], we obtain an upper bound on any ${h}_{S}$ , i.e. the Cheeger constant for $S \subseteq V$ with $\operatorname{vol}\left( S\right) \leq \operatorname{vol}\left( G\right) /2$ , by defining $c$ properly. In particular, we seek a value of $c$ that would lead to a contradiction, which is obtained by setting
520
+
521
+ $$
522
+ c = \sqrt{\frac{\left( \frac{1}{{d}_{{u}^{ * }}^{2\epsilon }} + \frac{1}{{d}_{{v}^{ * }}^{2\epsilon }}\right) }{{\mathcal{R}}_{\text{diam }} \cdot \epsilon }}, \tag{22}
523
+ $$
524
+
525
+ where $\left( {{u}^{ * },{v}^{ * }}\right)$ is a pair of nodes with maximal resistance, i.e. ${R}_{{u}^{ * }{v}^{ * }} = {\mathcal{R}}_{\text{diam }}$ .
526
+
527
+ Consider now any other pair of nodes $\left( {s, t}\right)$ with ${R}_{st} < {\mathcal{R}}_{\text{diam }}$ . Following Theorem 2, if the bottleneck of ${h}_{S}$ is multiplied by $c$ , we should have
528
+
529
+ $$
530
+ {R}_{st} \leq \left( {\frac{1}{{d}_{s}^{2\epsilon }} + \frac{1}{{d}_{s}^{2\epsilon }}}\right) \cdot \frac{1}{\epsilon \cdot {c}^{2}} = \left( {\frac{1}{{d}_{s}^{2\epsilon }} + \frac{1}{{d}_{s}^{2\epsilon }}}\right) \cdot \frac{{\mathcal{R}}_{\text{diam }}}{\left( \frac{1}{{d}_{{u}^{ * }}^{2\epsilon }} + \frac{1}{{d}_{{v}^{ * }}^{2\epsilon }}\right) }. \tag{23}
531
+ $$
532
+
533
+ However, since ${\mathcal{R}}_{\text{diam }} \leq \left( {\frac{1}{{d}_{{u}^{ * }}^{2\epsilon }} + \frac{1}{{d}_{{v}^{ * }}^{2\epsilon }}}\right)$ we have that ${R}_{st}$ can satisfy
534
+
535
+ $$
536
+ {R}_{st} > \left( {\frac{1}{{d}_{s}^{2\epsilon }} + \frac{1}{{d}_{s}^{2\epsilon }}}\right) \cdot \frac{1}{\epsilon \cdot {c}^{2}} \tag{24}
537
+ $$
538
+
539
+ which is a contradiction and enables
540
+
541
+ $$
542
+ {h}_{S} \leq \frac{c}{\operatorname{vol}{\left( S\right) }^{1/2 - \epsilon }} \Leftrightarrow \left| {\partial S}\right| \leq c \cdot \operatorname{vol}{\left( S\right) }^{1/2 - \epsilon }. \tag{25}
543
+ $$
544
+
545
+ Using $c$ as defined in Eq. 22 and ${d}_{u} \geq 1/\alpha$ we obtain
546
+
547
+ $$
548
+ c = \sqrt{\frac{\left( \frac{1}{{d}_{{u}^{ * }}^{2\epsilon }} + \frac{1}{{d}_{{v}^{ * }}^{2\epsilon }}\right) }{{\mathcal{R}}_{\text{diam }} \cdot \epsilon }} \leq \sqrt{\frac{{\alpha }^{\epsilon }}{{\mathcal{R}}_{\text{diam }} \cdot \epsilon }} \leq \frac{{\alpha }^{\epsilon }}{\sqrt{{\mathcal{R}}_{\text{diam }} \cdot \epsilon }}. \tag{26}
549
+ $$
550
+
551
+ Therefore,
552
+
553
+ $$
554
+ {h}_{S} \leq \frac{c}{\operatorname{vol}{\left( S\right) }^{1/2 - \epsilon }} \leq \frac{\frac{{\alpha }^{\epsilon }}{\sqrt{{\mathcal{R}}_{\text{diam }} \cdot \epsilon }}}{\operatorname{vol}{\left( S\right) }^{1/2 - \epsilon }} = \frac{{\alpha }^{\epsilon }}{\sqrt{{\mathcal{R}}_{\text{diam }} \cdot \epsilon }} \cdot \operatorname{vol}{\left( S\right) }^{\epsilon - 1/2}. \tag{27}
555
+ $$
556
+
557
+ As a result, the Cheeger constant of $G = \left( {V, E}\right)$ is mildly reduced (by the square root of the maximal resistance).
558
+
559
+ Proposition 3 (Conclusion). Let $\left( {{u}^{ * },{v}^{ * }}\right)$ be a pair of nodes (may be not unique) in $G = \left( {V, E}\right)$ with maximal resistance, i.e. ${R}_{{u}^{ * }{v}^{ * }} = {\mathcal{R}}_{\text{diam }}$ . Then, the Cheeger constant ${h}_{G}$ relies on the ratio between the maximal resistance ${\mathcal{R}}_{\text{diam }}$ and its uninformative approximation $\left( {\frac{1}{{d}_{u}^{ * }} + \frac{1}{{d}_{v}^{ * }}}\right)$ . The closer this ratio is to the unit, the easier it is to contain the Cheeger constant.
560
+
561
+ ![01963ef6-344f-7649-932d-264024e54631_16_317_223_1162_477_0.jpg](images/01963ef6-344f-7649-932d-264024e54631_16_317_223_1162_477_0.jpg)
562
+
563
+ Figure 5: Left: Original graph with nodes colored according to their Louvain communities. Middle: ${\mathbf{T}}^{CT}$ learnt by CT-LAYER, with edge colors denoting importance in [0,1]. Right: Node and edge curvatures derived from ${\mathbf{T}}^{CT}$ using ${p}_{u} \mathrel{\text{:=}} 1 - \frac{1}{2}\mathop{\sum }\limits_{{v \sim u}}{\mathbf{T}}_{uv}^{CT}$ and ${\kappa }_{uv} \mathrel{\text{:=}} 2\left( {{p}_{u} + {p}_{v}}\right) /{\mathbf{T}}_{uv}^{CT}$ ,
564
+
565
+ with edge and node curvatures shown as colors.
566
+
567
+ Proof. The ratio referred to above is the ratio leading to a proper $c$ in Proposition 2. This is consistent with a Lovász regime where the spectral gap ${\lambda }_{2}^{\prime }$ has a moderate value. However, for regimes with very small spectral gaps, i.e. ${\lambda }_{2}^{\prime } \rightarrow 0$ , according to the Lovász bound, ${\mathcal{R}}_{\text{diam }} \gg \left( {\frac{1}{{d}_{u}^{ * }} + \frac{1}{{d}_{v}^{ * }}}\right)$ and hence the Cheeger constant provided by Proposition 2 will tend to zero.
568
+
569
+ We conclude that we can always find a moderate upper bound for the Cheeger constant of $G =$ (V, E), provided that the regime of the Lovász bound is also moderate. Therefore, as the global properties of $G = \left( {V, E}\right)$ are captured by ${G}^{\prime } = \left( {V,{E}^{\prime }}\right)$ , a moderate Cheeger constant, when achievable, also controls the bottlenecks in ${G}^{\prime } = \left( {V,{E}^{\prime }}\right)$ .
570
+
571
+ Our methodology has focused on first exploring the properties of the commute times / effective resistances in $G = \left( {V, E}\right)$ . Next, we have leveraged the spectral similarity to reason about the properties -particularly the Cheeger constant- of $G = \left( {V,{E}^{\prime }}\right)$ . In sum, we conclude that resistance diffusion via ${\mathbf{T}}^{CT}$ is a principled way of preserving the Cheeger constant of $G = \left( {V, E}\right)$ .
572
+
573
+ #### A.1.3 Resistance-based Curvatures
574
+
575
+ We refer to recent work by Devriendt and Lambiotte [22] to complement the contributions of Topping et al. [21] regarding the use of curvature to rewire the edges in a graph.
576
+
577
+ Theorem 3 (Devriendt and Lambiotte [22]). The edge resistance curvature has the following properties: (1) It is bounded by $\left( {4 - {d}_{u} - {d}_{v}}\right) \leq {\kappa }_{uv} \leq 2/{R}_{uv}$ , with equality in the lower bound iff all incident edges to $u$ and $v$ are cut links; (2) It is upper-bounded by the Ollivier-Ricci curvature ${\kappa }_{uv}^{OR} \geq {\kappa }_{uv}$ , with equality if $\left( {u, v}\right)$ is a cut link; and (3) the Forman-Ricci curvature is bounded as follows: ${\kappa }_{uv}^{FR}/{R}_{uv} \leq {\kappa }_{uv}$ , with equality in the bound if the edge is a cut link.
578
+
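+ The bounds in property (1) of Theorem 3 can be checked directly from the definitions on a small graph (the graph choice is arbitrary):
+
+ ```python
+ import numpy as np
+ import networkx as nx
+
+ G = nx.barbell_graph(4, 1)               # two 4-cliques joined through one path node
+ A = nx.to_numpy_array(G)
+ Lp = np.linalg.pinv(np.diag(A.sum(1)) - A)
+ R = lambda u, v: Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
+
+ p = {u: 1 - 0.5 * sum(R(u, w) for w in G.neighbors(u)) for u in G}
+ for u, v in G.edges():
+     kappa = 2 * (p[u] + p[v]) / R(u, v)
+     lower, upper = 4 - G.degree(u) - G.degree(v), 2 / R(u, v)
+     assert lower <= kappa + 1e-9 and kappa <= upper + 1e-9
+ print("Theorem 3 (1) bounds hold on this graph")
+ ```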
579
+ The new definition of curvature given in [21] is related to the resistance distance and thus it is learnable with the proposed framework (CT-LAYER). Actually, the Balanced-Forman curvature (Definition 1 in [21]) relies on the uninformative approximation of the resistance distance.
580
+
581
+ Figure 5 illustrates the relationship between effective resistances / commute times and curvature on an exemplary graph from the COLLAB dataset.
582
+
583
+ As seen in the Figure, effective resistances prioritize the edges connecting outer nodes with hubs or central nodes, while the intra-community connections are de-prioritized. This observation is consistent with the aforementioned theoretical explanations about preserving the bottleneck while breaking the intra-cluster structure. In addition, we also observe that the original edges between hubs have been deleted or have been extremely down-weighted. Regarding curvature, hubs or central nodes have the lowest node curvature (this curvature increases with the number of nodes in a cluster/community). Edge curvatures, which rely on node curvatures, depend on the long-term neighborhoods of the connecting nodes. In general, edge curvatures can be seen as a smoothed version (since they integrate node curvatures) of the inverse of the resistance distances.
584
+
585
+ We observe that edges linking nodes of a given community with hubs tend to have similar edge-curvature values. However, edges linking nodes of different communities with hubs have different edge curvatures (Figure 5-right). This is due to the different number of nodes belonging to each community, and to their different average degree inside their respective communities (property 1 of Theorem 3).
586
+
587
+ Finally, note that the range of edge curvatures is larger than that of resistance distances. The sparsifier transforms a uniform distribution of the edge weights into a less entropic one: in the example of Figure 5 we observe a power-law distribution of edge resistances. As a result, ${\kappa }_{uv} \mathrel{\text{:=}} 2\left( {{p}_{u} + {p}_{v}}\right) /{\mathbf{T}}_{uv}^{CT}$ becomes very large on average (edges with infinite curvature are not shown in the plot) and a log scale is needed to appreciate the differences between edge resistances and edge curvatures.
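+
+ As an illustration of these quantities (and not part of the proposed layers), the following minimal sketch computes effective resistances from the Laplacian pseudo-inverse and the resistance curvatures of Theorem 3 on a small example graph; the choice of graph and of the numpy/networkx libraries is ours.
+
+ ```python
+ # Illustration: effective resistances from the Laplacian pseudo-inverse (Green's function)
+ # and the resistance curvatures of Theorem 3, on a small example graph.
+ import numpy as np
+ import networkx as nx
+
+ G = nx.karate_club_graph()                       # any connected graph works here
+ n = G.number_of_nodes()
+ L = nx.laplacian_matrix(G).toarray().astype(float)
+ L_pinv = np.linalg.pinv(L)                       # L^+
+
+ def R(u, v):
+     # R_uv = L+_uu + L+_vv - 2 L+_uv
+     return L_pinv[u, u] + L_pinv[v, v] - 2 * L_pinv[u, v]
+
+ # Node resistance curvature p_u = 1 - (1/2) * sum_{w ~ u} R_uw
+ p = {u: 1.0 - 0.5 * sum(R(u, w) for w in G.neighbors(u)) for u in G}
+ # Edge resistance curvature kappa_uv = 2 (p_u + p_v) / R_uv
+ kappa = {(u, v): 2.0 * (p[u] + p[v]) / R(u, v) for u, v in G.edges()}
+
+ # Foster's theorem: the effective resistances over the edges sum to n - 1
+ assert abs(sum(R(u, v) for u, v in G.edges()) - (n - 1)) < 1e-6
+ print(min(kappa.values()), max(kappa.values()))  # edge curvatures span a wide range
+ ```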
588
+
589
+ ### A.2 Appendix B: GAP-LAYER
590
+
591
+ #### A.2.1 Spectral Gradients
592
+
593
+ The proposed GAP-LAYER relies on gradients wrt the Laplacian eigenvalues, and particularly the spectral gap ( ${\lambda }_{2}$ for $\mathbf{L}$ and ${\lambda }_{2}^{\prime }$ for $\mathcal{L}$ ). Although the GAP-LAYER inductively rewires the adjacency matrix $\mathbf{A}$ so that ${\lambda }_{2}$ is minimized, the gradients derived in this section may also be applied for gap maximization.
594
+
595
+ Note that while our cost function ${L}_{\text{Fiedler }} = \parallel \widetilde{\mathbf{A}} - \mathbf{A}{\parallel }_{F} + \alpha {\left( {\lambda }_{2}^{ * }\right) }^{2}$ , with ${\lambda }_{2}^{ * } \in \left\{ {{\lambda }_{2},{\lambda }_{2}^{\prime }}\right\}$ , relies on an eigenvalue, we do not compute it explicitly, as its computation has a complexity of $O\left( {n}^{3}\right)$ and it would need to be computed in every learning iteration. Instead, we learn an approximation of ${\lambda }_{2}$ ’s eigenvector ${\mathbf{f}}_{2}$ and use its Dirichlet energy $\mathcal{E}\left( {\mathbf{f}}_{2}\right)$ to approximate the eigenvalue. In addition, since ${\mathbf{g}}_{2} = {\mathbf{D}}^{1/2}{\mathbf{f}}_{2}$ , we first approximate ${\mathbf{g}}_{2}$ and then approximate ${\lambda }_{2}^{\prime }$ from $\mathcal{E}\left( {\mathbf{g}}_{2}\right)$ .
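+
+ The following sketch illustrates this approximation strategy: the Dirichlet energy (Rayleigh quotient) of a candidate vector that is orthogonal to the constant eigenvector approximates ${\lambda }_{2}$ without an eigendecomposition. The two-block graph and the $\pm 1/\sqrt{n}$ indicator (cf. Eq. 30 below) are illustrative choices, not the learned MLP output used by the GAP-LAYER.
+
+ ```python
+ # Illustration: estimating the spectral gap from a candidate Fiedler vector via its
+ # Dirichlet energy (Rayleigh quotient), avoiding the O(n^3) eigendecomposition.
+ import numpy as np
+ import networkx as nx
+
+ G = nx.planted_partition_graph(2, 50, 0.3, 0.02, seed=0)   # two loosely connected clusters
+ A = nx.to_numpy_array(G)
+ d = A.sum(axis=1)
+ L = np.diag(d) - A                                          # unnormalized Laplacian
+ n = A.shape[0]
+
+ # Candidate vector: the +-1/sqrt(n) two-block indicator, not a learned MLP output
+ f2 = np.where(np.arange(n) < n // 2, 1.0, -1.0) / np.sqrt(n)
+ f2 -= f2.mean()                                             # orthogonal to the constant eigenvector
+ f2 /= np.linalg.norm(f2)
+
+ energy = f2 @ L @ f2                                        # Dirichlet energy approximates lambda_2
+ exact = np.linalg.eigvalsh(L)[1]                            # exact lambda_2, for comparison only
+ print(f"energy {energy:.4f} vs exact lambda_2 {exact:.4f}")
+ ```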
596
+
597
+ Gradients of the Ratio-cut Approximation. Let $\mathbf{A}$ be the adjacency matrix of $G = \left( {V, E}\right)$ ; and $\widetilde{\mathbf{A}}$ , a matrix similar to the original adjacency but with minimal ${\lambda }_{2}$ . Then, the gradient of ${\lambda }_{2}$ wrt each component of $\widetilde{\mathbf{A}}$ is given by
598
+
599
+ $$
600
+ {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2} \mathrel{\text{:=}} \operatorname{Tr}\left\lbrack {{\left( {\nabla }_{\widetilde{\mathbf{L}}}{\lambda }_{2}\right) }^{T} \cdot {\nabla }_{\widetilde{\mathbf{A}}}\widetilde{\mathbf{L}}}\right\rbrack = \operatorname{diag}\left( {{\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}}\right) {\mathbf{{11}}}^{T} - {\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}, \tag{28}
601
+ $$
602
+
603
+ where 1 is the vector of $n$ ones; and ${\left\lbrack {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}\right\rbrack }_{uv}$ is the gradient of ${\lambda }_{2}$ wrt ${\widetilde{\mathbf{A}}}_{uv}$ . The above formula is an instance of the network derivative mining approach [43]. In this framework, ${\lambda }_{2}$ is seen as a function of $\widetilde{\mathbf{A}}$ and ${\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}$ , the gradient of ${\lambda }_{2}$ wrt $\widetilde{\mathbf{A}}$ , comes from the chain rule of the matrix derivative $\operatorname{Tr}\left\lbrack {{\left( {\nabla }_{\widetilde{\mathbf{L}}}{\lambda }_{2}\right) }^{T} \cdot {\nabla }_{\widetilde{\mathbf{A}}}\widetilde{\mathbf{L}}}\right\rbrack$ . More precisely,
604
+
605
+ $$
606
+ {\nabla }_{\widetilde{\mathbf{L}}}{\lambda }_{2} \mathrel{\text{:=}} \frac{\partial {\lambda }_{2}}{\partial \widetilde{\mathbf{L}}} = {\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}, \tag{29}
607
+ $$
608
+
609
+ is a matrix relying on an outer product (correlation). In the proposed GAP-LAYER, ${\mathbf{f}}_{2}$ is approximated by:
610
+
611
+ $$
612
+ {\mathbf{f}}_{2}\left( u\right) = \left\{ \begin{array}{ll} + 1/\sqrt{n} & \text{ if }u\text{ belongs to the first cluster } \\ - 1/\sqrt{n} & \text{ if }u\text{ belongs to the second cluster } \end{array}\right. \tag{30}
613
+ $$
614
+
615
+ i.e., in Eq. 30 we discard the $O\left( \frac{\log n}{n}\right)$ term (the non-linearities conjectured in [23]) in order to simplify the analysis. After reordering the entries of ${\mathbf{f}}_{2}$ for the sake of clarity, ${\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}$ is the following block matrix:
616
+
617
+ $$
618
+ {\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T} = \left\lbrack \begin{matrix} 1/n & - 1/n \\ - 1/n & 1/n \end{matrix}\right\rbrack \text{whose diagonal matrix is}\operatorname{diag}\left( {{\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}}\right) = \left\lbrack \begin{matrix} 1/n & 0 \\ 0 & 1/n \end{matrix}\right\rbrack \tag{31}
619
+ $$
620
+
621
+ Then, we have
622
+
623
+ $$
624
+ {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2} = \left\lbrack \begin{matrix} 1/n & 1/n \\ 1/n & 1/n \end{matrix}\right\rbrack - \left\lbrack \begin{matrix} 1/n & - 1/n \\ - 1/n & 1/n \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} 0 & 2/n \\ 2/n & 0 \end{matrix}\right\rbrack \tag{32}
625
+ $$
626
+
627
+ which explains the results in Figure 1-right: edges linking nodes belonging to the same cluster remain unchanged, whereas inter-cluster edges have a gradient of $2/n$ . This provides a simple explanation for ${\mathbf{T}}^{GAP} = \widetilde{\mathbf{A}}\left( \mathbf{S}\right) \odot \mathbf{A}$ : the additional masking by the adjacency matrix ensures that we do not create new links.
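+
+ The block structure of Eq. 32 can be checked numerically with a few lines; the toy two-cluster setting below is only illustrative.
+
+ ```python
+ # Numerical check of Eq. 28 with the +-1/sqrt(n) approximation of f2 (Eq. 30):
+ # intra-cluster entries of the gradient vanish and inter-cluster entries equal 2/n (Eq. 32).
+ import numpy as np
+
+ n = 8                                                   # two clusters of 4 nodes
+ f2 = np.where(np.arange(n) < n // 2, 1.0, -1.0) / np.sqrt(n)
+
+ outer = np.outer(f2, f2)                                # f2 f2^T
+ grad = np.diag(np.diag(outer)) @ np.ones((n, n)) - outer
+
+ print(grad[0, 1], grad[0, n - 1])                       # 0.0 and 0.25 (= 2/n)
+ ```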
628
+
629
+ Gradients of the Normalized-cut Approximation. Similarly, using ${\lambda }_{2}^{\prime }$ for graph rewiring leads to the following more complex expression:
630
+
631
+ $$
632
+ {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}^{\prime } \mathrel{\text{:=}} \operatorname{Tr}\left\lbrack {{\left( {\nabla }_{\widetilde{\mathcal{L}}}{\lambda }_{2}\right) }^{T} \cdot {\nabla }_{\widetilde{\mathbf{A}}}\widetilde{\mathcal{L}}}\right\rbrack =
633
+ $$
634
+
635
+ $$
636
+ {\mathbf{d}}^{\prime }\left\{ {{\mathbf{g}}_{2}^{T}{\widetilde{\mathbf{A}}}^{T}{\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{g}}_{2}}\right\} {\mathbf{1}}^{T} + {\mathbf{d}}^{\prime }\left\{ {{\mathbf{g}}_{2}^{T}\widetilde{\mathbf{A}}{\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{g}}_{2}}\right\} {\mathbf{1}}^{T} + {\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{g}}_{2}{\mathbf{g}}_{2}^{T}{\widetilde{\mathbf{D}}}^{-1/2}. \tag{33}
637
+ $$
638
+
639
+ However, since ${\mathbf{g}}_{2} = {\mathbf{D}}^{1/2}{\mathbf{f}}_{2}$ and ${\mathbf{f}}_{2} = {\mathbf{D}}^{-1/2}{\mathbf{g}}_{2}$ , the gradient may be simplified as follows:
640
+
641
+ $$
642
+ {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}^{\prime } \mathrel{\text{:=}} \operatorname{Tr}\left\lbrack {{\left( {\nabla }_{\widetilde{\mathcal{L}}}{\lambda }_{2}\right) }^{T} \cdot {\nabla }_{\widetilde{\mathbf{A}}}\widetilde{\mathcal{L}}}\right\rbrack =
643
+ $$
644
+
645
+ $$
646
+ {\mathbf{d}}^{\prime }\left\{ {{\mathbf{f}}_{2}^{T}{\widetilde{\mathbf{D}}}^{1/2}{\widetilde{\mathbf{A}}}^{T}{\mathbf{f}}_{2}}\right\} {\mathbf{1}}^{T} + {\mathbf{d}}^{\prime }\left\{ {{\mathbf{f}}_{2}^{T}{\widetilde{\mathbf{D}}}^{1/2}\widetilde{\mathbf{A}}{\mathbf{f}}_{2}}\right\} {\mathbf{1}}^{T} + {\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}{\widetilde{\mathbf{D}}}^{-1/2}. \tag{34}
647
+ $$
648
+
649
+ In addition, considering symmetry for the undirected graph case, we obtain:
650
+
651
+ $$
652
+ {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}^{\prime } \mathrel{\text{:=}} \operatorname{Tr}\left\lbrack {{\left( {\nabla }_{\widetilde{\mathcal{L}}}{\lambda }_{2}\right) }^{T} \cdot {\nabla }_{\widetilde{\mathbf{A}}}\widetilde{\mathcal{L}}}\right\rbrack =
653
+ $$
654
+
655
+ $$
656
+ 2{\mathbf{d}}^{\prime }\left\{ {{\mathbf{f}}_{2}^{T}{\widetilde{\mathbf{D}}}^{1/2}\widetilde{\mathbf{A}}{\mathbf{f}}_{2}}\right\} {\mathbf{1}}^{T} + {\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}{\widetilde{\mathbf{D}}}^{-1/2}. \tag{35}
657
+ $$
658
+
659
+ where ${\mathbf{d}}^{\prime }$ is an $n \times 1$ negative vector including derivatives of the degree wrt the adjacency and related terms. The obtained gradient is composed of two terms.
660
+
661
+ The first term contains the matrix ${\widetilde{\mathbf{D}}}^{1/2}\widetilde{\mathbf{A}}$ which is the adjacency matrix weighted by the square root of the degree; ${\mathbf{f}}_{2}^{T}{\widetilde{\mathbf{D}}}^{1/2}\widetilde{\mathbf{A}}{\mathbf{f}}_{2}$ is a quadratic form (similar to a Dirichlet energy for the Laplacian) which approximates an eigenvalue of ${\widetilde{\mathbf{D}}}^{1/2}\widetilde{\mathbf{A}}$ . We plan to further analyze the properties of this term in future work.
662
+
663
+ The second term, ${\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}{\widetilde{\mathbf{D}}}^{-1/2}$ , down-weights the correlation term of the Ratio-cut case, ${\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}$ , by the degrees, as in the normalized Laplacian. This results in a normalization of the Fiedler correlation: $- 1/n$ becomes $- 1/\left( {n\sqrt{{d}_{u}{d}_{v}}}\right)$ at the ${uv}$ entry, and similarly for $1/n$ , i.e. each entry is rescaled by the degrees of its incident nodes.
664
+
665
+ #### A.2.2 Beyond the Lovász Bound: the von Luxburg et al. bound
666
+
667
+ The Lovász bound was later refined by von Luxburg et al. [36] via a new, tighter bound which replaces ${d}_{\min }$ by ${d}_{\min }^{2}$ in Eq. 1. Given that ${\lambda }_{2}^{\prime } \in (0,2\rbrack$ , as the number of nodes in the graph $\left( {n = \left| V\right| }\right)$ and the average degree increase, we have ${R}_{uv} \approx 1/{d}_{u} + 1/{d}_{v}$ . This is likely to happen in certain types of graphs, such as Gaussian similarity graphs (graphs where two nodes are linked if the negative exponential of the distance between their respective node features is large enough), $\epsilon$ -graphs (graphs where the Euclidean distances between the node features are $\leq \epsilon$ ), and $k$ -NN graphs with large $k$ wrt $n$ . The authors report a linear collapse of ${R}_{uv}$ with the density of the graph in scale-free networks, such as social network graphs, whereas a faster collapse of ${R}_{uv}$ has been reported in community graphs, i.e. graphs congruent with Stochastic Block Models (SBMs) [41].
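+
+ This collapse is easy to observe numerically: on a dense Erdös-Rényi graph, ${R}_{uv}$ is already very close to $1/{d}_{u} + 1/{d}_{v}$ . The sketch below is illustrative only (graph size and density are our choices).
+
+ ```python
+ # Illustration: on a dense Erdos-Renyi graph, R_uv is already close to 1/d_u + 1/d_v.
+ import numpy as np
+ import networkx as nx
+
+ G = nx.erdos_renyi_graph(300, 0.3, seed=1)
+ L_pinv = np.linalg.pinv(nx.laplacian_matrix(G).toarray().astype(float))
+
+ u, v = 0, 1
+ R_uv = L_pinv[u, u] + L_pinv[v, v] - 2 * L_pinv[u, v]
+ print(R_uv, 1 / G.degree(u) + 1 / G.degree(v))          # the two values nearly coincide
+ ```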
668
+
669
+ Given the importance of the effective resistance ${R}_{uv}$ as a global measure of node similarity, von Luxburg et al.’s refinement motivated the development of robust effective resistances, mostly in the form of $p$ -resistances given by ${R}_{uv}^{p} = \mathop{\min }\limits_{\mathbf{f}}\left\{ {\mathop{\sum }\limits_{{e \in E}}{r}_{e}{\left| {f}_{e}\right| }^{p}}\right\}$ , where $\mathbf{f}$ is a unit flow injected in $u$ and recovered in $v$ ; and ${r}_{e} = 1/{w}_{e}$ with ${w}_{e}$ being the edge’s weight [45]. For $p = 1$ , ${R}_{uv}^{p}$ corresponds to the shortest-path distance; $p = 2$ results in the effective resistance; and $p \rightarrow \infty$ leads to the inverse of the unweighted $u - v$ mincut ${}^{5}$ . Note that the optimal $p$ value depends on the type of graph [45] and $p$ -resistances may be studied from the perspective of $p$ -Laplacians [42,46].
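+
+ The two tractable special cases ( $p = 1$ and $p = 2$ ) can be made concrete on a small cycle graph, where the two parallel paths lower the effective resistance below the shortest-path distance; the example below is illustrative only.
+
+ ```python
+ # The two easy special cases of p-resistances on a 6-cycle with unit resistances:
+ # p = 1 gives the shortest-path distance (3), p = 2 gives the effective resistance (1.5).
+ import numpy as np
+ import networkx as nx
+
+ G = nx.cycle_graph(6)
+ L_pinv = np.linalg.pinv(nx.laplacian_matrix(G).toarray().astype(float))
+
+ u, v = 0, 3
+ r_p1 = nx.shortest_path_length(G, u, v)                         # p = 1
+ r_p2 = L_pinv[u, u] + L_pinv[v, v] - 2 * L_pinv[u, v]           # p = 2
+ print(r_p1, round(r_p2, 3))                                     # parallel paths lower R_uv
+ ```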
670
+
671
+ Although ${R}_{uv}$ could escape the uninformative regime by minimizing the spectral gap ${\lambda }_{2}^{\prime }$ , this strategy has received little attention in the literature on the mathematical characterization of graphs with small spectral gaps [47][48]: instead of tackling the daunting problem of explicitly minimizing the gap, researchers in this field have preferred to characterize graphs that already have small spectral gaps.
672
+
673
+ ---
674
+
675
+ ${}^{5}$ The link between CTs and mincuts is leveraged in the paper as an essential element of our approach.
676
+
677
+ ---
678
+
679
+ ### A.3 Appendix C: Experiments
680
+
681
+ In this section, we provide details about the graphs contained in each of the datasets used in our experiments, describe the architectures and experimental setup and, finally, report additional experimental results.
682
+
683
+ #### A.3.1 Datasets statistics
684
+
685
+ Table 3 depicts the number of nodes, edges, average degree, number of triangles, transitivity and clustering coefficient (mean and standard deviation) of all the graphs contained in each of the benchmark datasets used in our experiments. As seen in the Table, the datasets are very diverse in their characteristics. In addition, we use two synthetic datasets with 2 classes: Erdös-Rényi, with edge probabilities ${p}_{1} \in \left\lbrack {{0.3},{0.5}}\right\rbrack$ and ${p}_{2} \in \left\lbrack {{0.4},{0.8}}\right\rbrack$ for the two classes, and a Stochastic Block Model (SBM), with intra-community probabilities ${p}_{1} = {0.8}$ , ${p}_{2} = {0.5}$ and inter-community probabilities ${q}_{1} \in \left\lbrack {{0.1},{0.15}}\right\rbrack$ and ${q}_{2} \in \left\lbrack {{0.01},{0.1}}\right\rbrack$ (a generation sketch is provided after Table 3).
686
+
687
+ Table 3: Dataset statistics.
688
+
689
+ <table><tr><td/><td>Nodes</td><td>Edges</td><td>AVG Degree</td><td>Triangles</td><td>Transitivity</td><td>Clustering</td></tr><tr><td>REDDIT-BINARY</td><td>${429.6} \pm {554}$</td><td>${497.7} \pm {622}$</td><td>${2.33} \pm {0.3}$</td><td>${24} \pm {41}$</td><td>${0.01} \pm {0.02}$</td><td>${0.04} \pm {0.06}$</td></tr><tr><td>IMDB-BINARY</td><td>${19.7} \pm {10}$</td><td>${96.5} \pm {105}$</td><td>${8.88} \pm {5.0}$</td><td>${391} \pm {868}$</td><td>${0.77} \pm {0.15}$</td><td>${0.94} \pm {0.03}$</td></tr><tr><td>COLLAB</td><td>${74.5} \pm {62}$</td><td>${2457} \pm {6438}$</td><td>${37.36} \pm {44}$</td><td>${12} \times {10}^{4} \pm {48} \times {10}^{4}$</td><td>${0.76} \pm {0.21}$</td><td>${0.89} \pm {0.08}$</td></tr><tr><td>MUTAG</td><td>${2.2} \pm {0.1}$</td><td>${19.8} \pm {5.6}$</td><td>${2.18} \pm {0.1}$</td><td>${0.00} \pm {0.0}$</td><td>${0.00} \pm {0.00}$</td><td>${0.00} \pm {0.00}$</td></tr><tr><td>PROTEINS</td><td>${39.1} \pm {45.8}$</td><td>${72.8} \pm {84.6}$</td><td>${3.73} \pm {0.4}$</td><td>${27.4} \pm {30}$</td><td>${0.48} \pm {0.20}$</td><td>${0.51} \pm {0.23}$</td></tr></table>
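+
+ A possible way to generate the two synthetic datasets with the parameters stated above is sketched below; the number of nodes per graph (50) and the balanced 500-graphs-per-class split are assumptions for illustration (Table 4 only fixes the total of 1000 graphs per synthetic dataset).
+
+ ```python
+ # Illustrative generation of the two-class synthetic datasets described above.
+ import random
+ import networkx as nx
+
+ rng = random.Random(0)
+
+ def make_er(label, n_nodes=50):
+     lo, hi = (0.3, 0.5) if label == 0 else (0.4, 0.8)
+     return nx.erdos_renyi_graph(n_nodes, rng.uniform(lo, hi), seed=rng.randint(0, 10**6)), label
+
+ def make_sbm(label, n_nodes=50):
+     p_in = 0.8 if label == 0 else 0.5                       # intra-community probability
+     q = rng.uniform(0.1, 0.15) if label == 0 else rng.uniform(0.01, 0.1)
+     sizes = [n_nodes // 2, n_nodes - n_nodes // 2]
+     return nx.stochastic_block_model(sizes, [[p_in, q], [q, p_in]],
+                                      seed=rng.randint(0, 10**6)), label
+
+ er_dataset = [make_er(y) for y in (0, 1) for _ in range(500)]     # 1000 graphs, 2 classes
+ sbm_dataset = [make_sbm(y) for y in (0, 1) for _ in range(500)]   # 1000 graphs, 2 classes
+ ```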
690
+
691
+ In addition, Figure 6 depicts the histograms of the average node degree for all the graphs in each of the datasets used in our experiments. The datasets are also very diverse in terms of topology, corresponding to social networks, biochemical networks and meshes.
696
+
697
+ ![01963ef6-344f-7649-932d-264024e54631_19_337_1205_1116_605_0.jpg](images/01963ef6-344f-7649-932d-264024e54631_19_337_1205_1116_605_0.jpg)
698
+
699
+ Figure 6: Histograms of the average node degree of all the graphs in each of the datasets.
700
+
701
+ #### A.3.2 GNN architectures
702
+
703
+ Figure 7 shows the specific GNN architectures used in the experiments described in Section 4 of the manuscript. Although the specific calculations of ${\mathbf{T}}^{GAP}$ and ${\mathbf{T}}^{CT}$ are given in Definitions 2 and 1, respectively, we also provide diagrams for better intuition.
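+
+ For additional intuition, the following sketch shows one possible implementation of the CT-LAYER computation of Definition 1 (embedding, loss ${L}_{CT}$ and resistance diffusion ${\mathbf{T}}^{CT} = \mathbf{R}\left( \mathbf{Z}\right) \odot \mathbf{A}$ ). It is a simplified dense-matrix illustration, not the code released in the repository.
+
+ ```python
+ # Simplified dense sketch of the CT-LAYER computation of Definition 1.
+ import torch
+
+ def ct_layer(X, A, mlp):
+     Z = torch.tanh(mlp(X))                                   # n x k embedding
+     d = A.sum(dim=1)
+     L = torch.diag(d) - A                                    # unnormalized Laplacian
+     ztz = Z.T @ Z
+     loss = torch.trace(Z.T @ L @ Z) / torch.trace(Z.T @ torch.diag(d) @ Z) \
+         + torch.norm(ztz / torch.norm(ztz) - torch.eye(Z.shape[1]))
+     R = torch.cdist(Z, Z) ** 2 / d.sum()                     # resistance distances, vol(G) = sum of degrees
+     return R * A, loss                                       # T^CT: Hadamard product with the adjacency
+
+ # Toy usage with a random graph and random features
+ A = (torch.rand(10, 10) > 0.6).float()
+ A = torch.triu(A, 1)
+ A = A + A.T
+ X = torch.rand(10, 5)
+ mlp = torch.nn.Sequential(torch.nn.Linear(5, 16), torch.nn.ReLU(), torch.nn.Linear(16, 8))
+ T_ct, loss_ct = ct_layer(X, A, mlp)
+ ```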
704
+
705
+ ![01963ef6-344f-7649-932d-264024e54631_20_532_197_729_891_0.jpg](images/01963ef6-344f-7649-932d-264024e54631_20_532_197_729_891_0.jpg)
706
+
707
+ Figure 7: Diagrams of the GNNs used in the experiments.
708
+
709
+ #### A.3.3 Training parameters
710
+
711
+ The values of the hyperparameters used in the experiments are the default ones in the anonymous repository ${}^{6}$ . We report average accuracies and standard deviations over 10 random iterations, using a different ${85}/{15}$ stratified train-test split in each iteration (we do not perform hyperparameter search), training for 60 epochs and reporting the results of the last epoch for each random run. We use an Adam optimizer with a learning rate of ${5e} - 4$ and a weight decay of ${1e} - 4$ (a sketch of this setup is provided after Table 4). In addition, the batch sizes used for the experiments are shown in Table 4. Regarding the synthetic datasets, the parameters are: Erdös-Rényi with ${p}_{1} \in \left\lbrack {{0.3},{0.5}}\right\rbrack$ and ${p}_{2} \in \left\lbrack {{0.4},{0.8}}\right\rbrack$ , and Stochastic Block Model (SBM) with ${p}_{1} = {0.8}$ , ${p}_{2} = {0.5},{q}_{1} \in \left\lbrack {{0.1},{0.15}}\right\rbrack$ and ${q}_{2} \in \left\lbrack {{0.01},{0.1}}\right\rbrack$ .
712
+
713
+ Table 4: Batch size and dataset size (number of graphs) for each dataset.
714
+
715
+ <table><tr><td/><td>Batch</td><td>Dataset size</td></tr><tr><td>REDDIT-BINARY</td><td>64</td><td>1000</td></tr><tr><td>IMDB-BINARY</td><td>64</td><td>2000</td></tr><tr><td>COLLAB</td><td>64</td><td>5000</td></tr><tr><td>MUTAG</td><td>32</td><td>188</td></tr><tr><td>PROTEINS</td><td>64</td><td>1113</td></tr><tr><td>SBM</td><td>32</td><td>1000</td></tr><tr><td>Erdös-Rényi</td><td>32</td><td>1000</td></tr></table>
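+
+ A minimal sketch of this optimization setup (Adam with the stated learning rate and weight decay, stratified ${85}/{15}$ split, 60 epochs) is given below; the linear model and random features are toy stand-ins, not the actual GNNs of Figure 7.
+
+ ```python
+ # Toy sketch of the training setup described above.
+ import torch
+ from sklearn.model_selection import train_test_split
+
+ labels = [0, 1] * 50                                               # toy labels for 100 graphs
+ train_idx, test_idx = train_test_split(list(range(100)), test_size=0.15,
+                                        stratify=labels, random_state=0)
+
+ model = torch.nn.Linear(16, 2)                                     # stand-in for the GNN
+ optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=1e-4)
+ for epoch in range(60):
+     optimizer.zero_grad()
+     out = model(torch.randn(len(train_idx), 16))                   # stand-in for a batch of graphs
+     loss = torch.nn.functional.cross_entropy(out, torch.tensor([labels[i] for i in train_idx]))
+     loss.backward()
+     optimizer.step()
+ ```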
716
+
717
+ Our experiments also use two preprocessing methods, DIGL and SDRF. Unlike our proposed methods, both SDRF [21] and DIGL [25] use a set of hyperparameters that must be optimized for each specific graph, since neither method is inductive. This is manageable for node classification, where there is only one graph; however, for graph classification the number of graphs is large (see Table 4) and it is not computationally feasible to optimize the parameters for each specific graph. For DIGL, we use a fixed $\alpha = {0.001}$ and choose $\epsilon$ so as to keep the same average degree for each graph, i.e., we use a different, dynamically chosen $\epsilon$ for each graph in each dataset which maintains the same number of edges as the original graph (see the sketch below). In the case of SDRF, the parameters define how stochastic the edge addition is $\left( \tau \right)$ , the graph edit distance upper bound (number of iterations) and an optional Ricci-curvature upper bound above which an edge is removed at each iteration $\left( {C}^{ + }\right)$ . We set $\tau = {20}$ (the edge added is always near the edge of lowest curvature), ${C}^{ + } = 0$ (to force one edge to be removed at every iteration), and the number of iterations dynamically according to ${0.7} \cdot \left| V\right|$ . Thus, we maintain the same number of edges in the new graph $\left( {\tau = {20},{C}^{ + } = 0}\right)$ , i.e., the same average degree, and we keep the graph edit distance to the original graph bounded by ${0.7} \cdot \left| V\right|$ .
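+
+ The dynamic choice of $\epsilon$ for DIGL can be illustrated as follows: diffuse with PPR and keep exactly as many (weighted) off-diagonal entries as the original graph has directed edge slots. This is a simplified dense sketch of the idea, not the exact preprocessing code.
+
+ ```python
+ # Illustrative per-graph choice of epsilon for DIGL; dense matrices, no isolated nodes assumed.
+ import numpy as np
+
+ def ppr_diffusion(A, alpha=0.001):
+     d = A.sum(axis=1)
+     T = A / d[:, None]                                  # row-stochastic transition matrix
+     n = A.shape[0]
+     return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * T)
+
+ def sparsify_keep_degree(S, A):
+     m = int(A.sum())                                    # number of directed edge slots to keep
+     off_diag = S - np.diag(np.diag(S))
+     eps = np.sort(off_diag.ravel())[-m]                 # dynamic threshold
+     return np.where(off_diag >= eps, off_diag, 0.0)
+
+ A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
+ S = sparsify_keep_degree(ppr_diffusion(A), A)           # same number of nonzeros as A
+ ```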
718
+
719
+ ---
720
+
721
+ ${}^{6}$ https://anonymous.4open.science/r/DiffWireLoG22/readme.md
722
+
723
+ ---
724
+
725
+ #### A.3.4 Latent Space Analysis
726
+
727
+ In this section, we analyze the latent space produced by the models that use MINCUTPOOL (Figure 7a), GAP-LAYER (Figure 7b) and CT-LAYER (Figure 7c). We plot the output of the readout layer for each model and then perform dimensionality reduction with t-SNE, as sketched below.
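+
+ The visualization step itself is straightforward; in the sketch below, random vectors stand in for the actual readout outputs of the trained models.
+
+ ```python
+ # Sketch of the t-SNE projection of readout-layer graph embeddings (placeholders used).
+ import numpy as np
+ from sklearn.manifold import TSNE
+ import matplotlib.pyplot as plt
+
+ rng = np.random.default_rng(0)
+ embeddings = rng.normal(size=(200, 32))            # placeholder: num_graphs x readout dimension
+ labels = rng.integers(0, 2, size=200)              # placeholder: the two graph classes
+
+ xy = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
+ plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=10)
+ plt.title("Readout embeddings projected with t-SNE")
+ plt.show()
+ ```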
728
+
729
+ Observing the latent space of the REDDIT-BINARY dataset (Figure 8), CT-LAYER creates a dispersed yet structured latent space for the embeddings of the graphs. This structure in the latent space shows that the method is able to capture different topological details. The main reason is the expressiveness of the commute times as a distance metric when performing rewiring, which has been shown to be an optimal metric to measure node structural similarity. In addition, GAP-LAYER creates a latent space where, although the 2 classes are also separable, the embeddings are more compressed, due to a more aggressive -yet still informative- change in topology. This change in topology is due to the change in bottleneck size that GAP-LAYER applies to the graph. Finally, MINCUT creates a more squeezed and compressed embedding, where both classes lie in the same region of the space and most of the graphs have collapsed representations, due to the limited expressiveness of this architecture.
730
+
731
+ ![01963ef6-344f-7649-932d-264024e54631_21_318_1088_1164_435_0.jpg](images/01963ef6-344f-7649-932d-264024e54631_21_318_1088_1164_435_0.jpg)
732
+
733
+ Figure 8: REDDIT embeddings produced by GAP-LAYER (Ncut), CT-LAYER and MINCUT.
734
+
735
+ #### A.3.5 Computing infrastructure
736
+
737
+ Table 5 summarizes the computing infrastructure used in our experiments.
738
+
739
+ Table 5: Computing infrastructure.
740
+
741
+ <table><tr><td>Component</td><td>Details</td></tr><tr><td>GPU</td><td>2x A100-SXM4-40GB</td></tr><tr><td>RAM</td><td>1 TiB</td></tr><tr><td>CPU</td><td>255x AMD 7742 64-Core @ 2.25 GHz</td></tr><tr><td>OS</td><td>Ubuntu 20.04.4 LTS</td></tr></table>
742
+
papers/LOG/LOG 2022/LOG 2022 Conference/IXvfIex0mX6f/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,262 @@
1
+ § DIFFWIRE: INDUCTIVE GRAPH REWIRING VIA THE LOVÁSZ BOUND
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Graph Neural Networks (GNNs) have been shown to achieve competitive results to tackle graph-related tasks, such as node and graph classification, link prediction and node and graph clustering in a variety of domains. Most GNNs use a message passing framework and hence are called MPNNs. Despite their promising results, MPNNs have been reported to suffer from over-smoothing, over-squashing and under-reaching. Graph rewiring and graph pooling have been proposed in the literature as solutions to address these limitations. However, most state-of-the-art graph rewiring methods fail to preserve the global topology of the graph, are neither differentiable nor inductive, and require the tuning of hyper-parameters. In this paper, we propose DIFFWIRE, a novel framework for graph rewiring in MPNNs that is principled, fully differentiable and parameter-free by leveraging the Lovász bound. Our approach provides a unified theory for graph rewiring by proposing two new, complementary layers in MPNNs: CT-LAYER, a layer that learns the commute times and uses them as a relevance function for edge re-weighting; and GAP-LAYER, a layer to optimize the spectral gap, depending on the nature of the network and the task at hand. We empirically validate the value of each of these layers separately with benchmark datasets for graph classification. DIFFWIRE brings together the learnability of commute times to related definitions of curvature, opening the door to creating more expressive MPNNs.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Graph Neural Networks (GNNs) [1, 2] are a class of deep learning models applied to graph structured data. They have been shown to achieve state-of-the-art results in many graph-related tasks, such as node and graph classification [3, 4], link prediction [5] and node and graph clustering [6, 7], and in a variety of domains, including image or molecular structure classification, recommender systems and social influence prediction [8].
16
+
17
+ Most GNNs use a message passing framework and thus are referred to as Message Passing Neural Networks (MPNNs) [4] . In these networks, every node in each layer receives a message from its adjacent neighbors. All the incoming messages at each node are then aggregated and used to update the node's representation via a learnable non-linear function -which is typically implemented by means of a neural network. The final node representations (called node embeddings) are used to perform the graph-related task at hand (e.g. graph classification). MPNNs are extensible, simple and have proven to yield competitive empirical results. Examples of MPNNs include GCN [3], GAT [9], GATv2 [10], GIN [11] and GraphSAGE [12]. However, they typically use transductive learning, i.e. the model observes both the training and testing data during the training phase, which might limit their applicability to graph classification tasks.
18
+
19
+ MPNNs have important limitations due to the inherent complexity of graphs, the limited depth of most state-of-the-art MPNNs and the inability of current methods to capture the global structural information of the graph. The literature has reported best results when MPNNs have a small number of layers, because networks with many layers tend to suffer from over-smoothing [13] and over-squashing [14]. Over-smoothing takes place when the embeddings of nodes that belong to different classes become indistinguishable. Over-squashing refers to the distortion of information flowing
20
+
21
+ from distant nodes due to graph bottlenecks ${}^{1}$ that emerge when the number of $\mathrm{k}$ -hop neighbors grows exponentially with k. They both tend to occur in networks with a large number of layers [15]. Moreover, simple MPNNs with a small number of layers fail to capture information that depends on the entire structure of the graph (e.g., random walk probabilities [16]) and prevent the information flow to reach distant nodes. This phenomenon is called under-reaching [17] and occurs when the MPNNs depth is smaller than the graph's diameter.
22
+
23
+ Graph pooling and graph rewiring have been proposed in the literature as solutions to address these limitations [14]. Given that the main infrastructure for message passing in MPNNs are the edges in the graph, and given that many of these edges might be noisy or inadequate for the downstream task [18], graph rewiring aims to identify such edges and edit them.
24
+
25
+ Many graph rewiring methods rely on edge sampling strategies: first, the edges are assigned new weights according to a relevance function and then they are re-sampled according to the new weights to retain the most relevant edges (i.e. those with larger weights). Edge relevance might be computed in different ways, including randomly [19], based on similarity [20] or on the edge's curvature [21].
26
+
27
+ Due to the diversity of possible graphs and tasks to be performed with those graphs, optimal graph rewiring should include a variety of strategies that are suited not only to the task at hand but also to the nature and structure of the graph.
28
+
29
+ Motivation. State-of-the-art edge sampling strategies have three significant limitations. First, most of the proposed methods fail to preserve the global topology of the graph. Second, most graph rewiring methods are neither differentiable nor inductive [21]. Third, relevance functions that depend on a diffusion measure (typically in the spectral domain) are not parameter-free, which adds a layer of complexity in the models. In this paper, we address these three limitations.
30
+
31
+ Contributions and outline. The main contribution of our work is to propose a theoretical framework called DIFFWIRE for graph rewiring in MPNNs that is principled, fully differentiable, inductive, and parameter-free by leveraging the Lovász bound [16] given by Eq. 1. This bound is a mathematical expression of the relationship between the commute times (effective resistance distance) and the network's spectral gap. Inductive means that given an unseen test graph, DIFFWIRE predicts the optimal graph structure for the task at hand without any parameter tuning. Given the recently reported connection between commute times and curvature [22], and between curvature and the spectral gap [21], our framework provides a unified theory linking these concepts. Our aim is to leverage diffusion and curvature theories to propose a new approach for graph rewiring that preserves the graph's structure.
32
+
33
+ We first propose using the commute times as a relevance function for edge re-weighting. Moreover, we develop a differentiable, parameter-free layer in the GNN (CT-LAYER) to learn the commute times. Second, we propose an alternative graph rewiring approach by adding a layer in the network (GAP-LAYER) that optimizes the spectral gap according to the nature of the network and the task at hand. Finally, we empirically validate the proposed layers with state-of-the-art benchmark datasets in a graph classification task. We select a graph classification task to emphasize the inductive nature of DIFFWIRE: the layers in the GNN (CT-LAYER and GAP-LAYER) are trained to predict the CTs embedding and minimize the spectral gap for unseen graphs, respectively. This approach gives a great advantage when compared to SoTA methods that require optimizing the parameters of the models for each graph. CT-LAYER and GAP-LAYER learn the weights during training to predict the optimal changes in the topology of any unseen graph in test time.
34
+
35
+ The paper is organized as follows: Section 2 provides a summary of the most relevant related literature. Our core technical contribution is described in Section 3, followed by our experimental evaluation and discussion in Section 4. Finally, Section 5 is devoted to conclusions and an outline of our future lines of research.
36
+
37
+ § 2 RELATED WORK
38
+
39
+ In this section we provide an overview of the most relevant works that have been proposed in the literature to tackle the challenges of over-smoothing, over-squashing and under-reaching in MPNNs by means of graph rewiring and pooling.
40
+
41
+ ${}^{1}$ A graph bottleneck is defined as a topological property of the graph that leads to over-squashing.
42
+
43
+ Limitations of MPNNs. MPNNs are widely used to tackle many real-world tasks -from social network analysis to protein modeling- with competitive results. However, MPNNs also have important limitations due to the inherent complexity of graphs. Despite such complexity, the literature has reported best results when MPNNs have a small number of layers, because networks with many layers tend to suffer from over-smoothing [13] and over-squashing [14].
44
+
45
+ Over-smoothing $\left\lbrack {8,{15},{23},{24}}\right\rbrack$ takes place when the embeddings of nodes that belong to different classes become indistinguishable. It tends to occur in MPNNs with many layers that are used to tackle short-range tasks, i.e. tasks where a node's correct prediction mostly depends on its local neighborhood. Given this local dependency, it makes intuitive sense that adding layers to the network would not help the network's performance.
46
+
47
+ Conversely, long-range tasks require as many layers in the network as the range of the interaction between the nodes. However, as the number of layers in the network increases, the number of nodes feeding into each of the node's receptive field also increases exponentially, leading to over-squashing $\left\lbrack {{14},{21}}\right\rbrack$ : the information flowing from the receptive field composed of many nodes is compressed in fixed-length node vectors, and hence the graph fails to correctly propagate the messages coming from distant nodes. Thus, over-squashing emerges when there is a bottleneck in the graph and a long-range task.
48
+
49
+ To prevent over-smoothing and over-squashing, the number of layers in MPNNs is typically kept small. However, simple models with a small number of layers fail to capture information that depends on the entire structure of the graph (e.g., random walk probabilities [16]) and prevent the information flow to reach distant nodes. This phenomenon is called under-reaching [17] and occurs when the MPNN's depth is smaller than the graph's diameter.
50
+
51
+ Graph rewiring in MPNNs. Rewiring is a process of changing the graph's structure to control the information flow and hence improve the ability of the network to perform the task at hand (e.g. node or graph classification, link prediction...). Several approaches have been proposed in the literature for graph rewiring, such as connectivity diffusion [25] or evolution [21], adding new bridge-nodes [26] and multi-hop filters [27], and neighborhood [12], node [28] and edge [29] sampling.
52
+
53
+ Edge sampling methods sample the graph's edges based on their weights or relevance, which might be computed in different ways. Huang et al. [19] prove that randomly dropping edges during training improves the performance of GNNs. Klicpera et al. [25] define edge relevance according to the coefficients of a parameterized diffusion process over the graph. Then, the k-hop diffusion matrix is truncated to discard long-range interactions. For Kazi et al. [20], edge relevance is given by the similarity between the nodes' attributes. In addition, a reinforcement learning process rewards edges leading to a correct classification and penalizes the rest.
54
+
55
+ Edge sampling-based rewiring has been proposed to tackle over-smoothing and over-squashing in MPNNs. Over-smoothing may be relieved by removing inter-class edges [30]. However, this strategy is only valid when the graph is homophilic, i.e. connected nodes tend to share similar attributes. Otherwise, removing these edges could lead to over-squashing [21] if their removal obstructs the message passing between distant nodes belonging to the same class (heterophily). Increasing the size of the bottlenecks of the graph via rewiring has been shown to improve node classification performance in heterophilic graphs, but not in homophilic graphs [21]. Recently, Topping et al. [21] propose an edge relevance function given by the edge curvature to mitigate over-squashing. They identify the bottleneck of the graph by computing the Ricci curvature of the edges. Next, they remove edges with high curvature and add edges around minimal curvature edges.
56
+
57
+ Graph Structure Learning (GSL). GSL methods [31] aim to learn an optimized graph structure and its corresponding representations at the same time. DIFFWIRE could be seen from the perspective of GSL: CT-LAYER, as a metric-based, neural approach, and GAP-LAYER, as a direct-neural approach to optimize the structure of the graph to the task at hand.
58
+
59
+ Pooling in MPNNs. In addition to graph rewiring, pooling layers simplify the original graph by compressing it into a smaller graph or a vector via pooling operators, which range from simple [32] to more sophisticated approaches, such as DiffPool [33] and MinCut pool [34]. Although graph pooling methods do not consider the edge representations, there is a clear relationship between pooling methods and rewiring since both of them try to reduce the flow of information through the graph's bottleneck.
60
+
61
+ < g r a p h i c s >
62
+
63
+ Figure 1: DiffWire. Left: Original graph. Center: Rewired graph after CT-LAYER. Right: Rewired graph after GAP-LAYER. Colors indicate the strength of the edges.
64
+
65
+ § 3 PROPOSED APPROACH: DIFFWIRE FOR INDUCTIVE GRAPH REWIRING
66
+
67
+ DIFFWIRE provides a unified theory for graph rewiring by proposing two new, complementary layers in MPNNs: first, CT-LAYER, a layer that learns the commute times and uses them as a relevance function for edge re-weighting; and second, GAP-LAYER, a layer to optimize the spectral gap, depending on the nature of the network and the task at hand.
68
+
69
+ In this section, we present the theoretical foundations for the definitions of CT-LAYER and GAP-LAYER. First, we introduce the bound that our approach is based on: The Lovász bound. Table 2 in A. 1 summarizes the notation used in the paper.
70
+
71
+ § 3.1 THE LOVÁSZ BOUND
72
+
73
+ The Lovász bound, given by Eq. 1, was derived by Lovász in [16] as a means of linking the spectrum governing a random walk in an undirected graph $G = \left( {V,E}\right)$ with the hitting time ${H}_{uv}$ between any two nodes $u$ and $v$ of the graph. ${H}_{uv}$ is the expected number of steps needed to reach (or hit) $v$ from $u;{H}_{vu}$ is defined similarly. The sum of both hitting times between the two nodes, $v$ and $u$ , is the commute time $C{T}_{uv} = {H}_{uv} + {H}_{vu}$ . Thus, $C{T}_{uv}$ is the expected number of steps needed to hit $v$ from $u$ and go back to $u$ . According to the Lovász bound:
74
+
75
+ $$
76
+ \left| {\frac{1}{\operatorname{vol}\left( G\right) }C{T}_{uv} - \left( {\frac{1}{{d}_{u}} + \frac{1}{{d}_{v}}}\right) }\right| \leq \frac{1}{{\lambda }_{2}^{\prime }}\frac{2}{{d}_{\min }} \tag{1}
77
+ $$
78
+
79
+ where ${\lambda }_{2}^{\prime } \geq 0$ is the spectral gap, i.e. the first non-zero eigenvalue of $\mathcal{L} = \mathbf{I} - {\mathbf{D}}^{-1/2}{\mathbf{{AD}}}^{-1/2}$ (normalized Laplacian [35], where $\mathbf{D}$ is the degree matrix and $\mathbf{A}$ , the adjacency matrix); $\operatorname{vol}\left( G\right)$ is the volume of the graph (sum of degrees); ${d}_{u}$ and ${d}_{v}$ are the degrees of nodes $u$ and $v$ , respectively; and ${d}_{min}$ is the minimum degree of the graph.
80
+
81
+ The term $C{T}_{uv}/\operatorname{vol}\left( G\right)$ in Eq. 1 is referred to as the effective resistance, ${R}_{uv}$ , between nodes $u$ and $v$ . The bound states that the effective resistance between two nodes in the graph converges to or diverges from $\left( {1/{d}_{u} + 1/{d}_{v}}\right)$ , depending on whether the graph’s spectral gap diverges from or tends to zero. The larger the spectral gap, the closer $C{T}_{uv}/\operatorname{vol}\left( G\right)$ will be to $\frac{1}{{d}_{u}} + \frac{1}{{d}_{v}}$ and hence the less informative the commute times will be.
82
+
83
+ We propose two novel MPNNs layers based on each side of the inequality in Eq. 1: CT-LAYER, focuses on the left-hand side, and GAP-LAYER, on the right-hand side. The use of each layer depends on the nature of the network and the task at hand. In a graph classification task (our focus), CT-LAYER is expected to yield good results when the graph's spectral gap is small; conversely, GAP-LAYER would be the layer of choice in graphs with large spectral gap.
84
+
85
+ The Lovász bound was later refined by von Luxburg et al. [36]. App. A.2.2 presents this bound along with its relationship with ${R}_{uv}$ as a global measure of node similarity. Once we have defined both sides of the Lovász bound, we proceed to describe their implications for graph rewiring.
86
+
87
+ § 3.2 CT-LAYER: COMMUTE TIMES FOR GRAPH REWIRING
88
+
89
+ We focus first on the left-hand side of the Lovász bound which concerns the effective resistances $C{T}_{uv}/\operatorname{vol}\left( G\right) = {R}_{uv}$ (or commute times) ${}^{2}$ between any two nodes in the graph.
90
+
91
+ Spectral Sparsification leads to Commute Times. Graph sparsification in undirected graphs may be formulated as finding a graph $H = \left( {V,{E}^{\prime }}\right)$ that is spectrally similar to the original graph $G = \left( {V,E}\right)$ with ${E}^{\prime } \subset E$ . Thus, the spectra of their Laplacians, ${\mathbf{L}}_{G}$ and ${\mathbf{L}}_{H}$ should be similar.
92
+
93
+ Theorem 1 (Spielman and Srivastava [37]). Let Sparsify $\left( {G,q}\right) \rightarrow {G}^{\prime }$ be a sampling algorithm of graph $G = \left( {V,E}\right)$ , where edges $e \in E$ are sampled with probability $q \propto {R}_{e}$ (proportional to the effective resistance). For $n = \left| V\right|$ sufficiently large and $1/\sqrt{n} < \epsilon \leq 1,O\left( {n\log n/{\epsilon }^{2}}\right)$ samples are needed to satisfy $\forall \mathbf{x} \in {\mathbb{R}}^{n} : \left( {1 - \epsilon }\right) {\mathbf{x}}^{T}{\mathbf{L}}_{G}\mathbf{x} \leq {\mathbf{x}}^{T}{\mathbf{L}}_{{G}^{\prime }}\mathbf{x} \leq \left( {1 + \epsilon }\right) {\mathbf{x}}^{T}{\mathbf{L}}_{G}\mathbf{x}$ , with probability $\geq 1/2$ .
94
+
95
+ The above theorem has a simple explanation in terms of Dirichlet energies. The Laplacian $\mathbf{L} =$ $\mathbf{D} - \mathbf{A} \succcurlyeq 0$ , i.e. it is positive semi-definite (all its eigenvalues are non-negative). Then, if we consider $\mathbf{x} : V \rightarrow \mathbb{R}$ as a real-valued function of the $n$ nodes of $G = \left( {V,E}\right)$ , we have that $\mathcal{E}\left( \mathbf{x}\right) \mathrel{\text{ := }} {\mathbf{x}}^{T}{\mathbf{L}}_{G}\mathbf{x} =$ $\mathop{\sum }\limits_{{e = \left( {u,v}\right) \in E}}{\left( {\mathbf{x}}_{u} - {\mathbf{x}}_{v}\right) }^{2} \geq 0$ for any $\mathbf{x}$ . In particular, the eigenvectors $\mathbf{f} \mathrel{\text{ := }} \left\{ {{\mathbf{f}}_{i} : \mathbf{L}{\mathbf{f}}_{i} = {\lambda }_{i}{\mathbf{f}}_{i}}\right\}$ are the set of special functions (mutually orthogonal and normalized) that minimize the energies $\mathcal{E}\left( {\mathbf{f}}_{i}\right)$ , i.e. they are the orthogonal functions with the minimal variabilities achievable by the topology of $G$ . Therefore, Theorem 1 states that any minimal variability of ${G}^{\prime }$ is bounded by $\left( {1 \pm \epsilon }\right)$ times that of $G$ if we sample enough edges with probability $q \propto {R}_{e}$ .
96
+
97
+ Therefore, the effective resistance is a principled relevance function, since the resulting graph ${G}^{\prime }$ retains the main properties of $G$ . In particular, we have that the spectra of ${\mathbf{L}}_{G}$ and ${\mathbf{L}}_{{G}^{\prime }}$ are related by $\left( {1 - \epsilon }\right) {\lambda }_{i}^{G} \leq {\lambda }_{i}^{{G}^{\prime }} \leq \left( {1 + \epsilon }\right) {\lambda }_{i}^{G}$ : in short $\left( {1 - \epsilon }\right) {\mathbf{L}}_{G} \preccurlyeq {\mathbf{L}}_{{G}^{\prime }} \preccurlyeq \left( {1 + \epsilon }\right) {\mathbf{L}}_{G}$ . This is a direct result of the theorem since ${\lambda }_{i} = \frac{\mathcal{E}\left( {\mathbf{f}}_{i}\right) }{{\mathbf{f}}_{i}^{T}{\mathbf{f}}_{i}}$ are the normalized minimal variabilities.
98
+
99
+ This first result implies that edge sampling based on effective resistances (or commute times) is a principled way to rewire a graph while preserving its original structure. Next, we present what is a commute times embedding and how it can be spectrally computed.
100
+
101
+ Commute Times Embedding. The choice of effective resistances in Theorem 1 is explained by the fact that ${R}_{uv}$ can be computed from ${R}_{uv} = {\left( {\mathbf{e}}_{u} - {\mathbf{e}}_{v}\right) }^{T}{\mathbf{L}}^{ + }\left( {{\mathbf{e}}_{u} - {\mathbf{e}}_{v}}\right)$ , where ${\mathbf{e}}_{u}$ is the unit vector with a unit value at $u$ and zero elsewhere. ${\mathbf{L}}^{ + } = \mathop{\sum }\limits_{{i > 2}}{\lambda }_{i}^{-1}{\mathbf{f}}_{i}{\mathbf{f}}_{i}^{T}$ , where ${\mathbf{f}}_{i},{\lambda }_{i}$ are the eigenvectors and eigenvalues of $\mathbf{L}$ , is the pseudo-inverse or Green’s function of $G = \left( {V,E}\right)$ if it is connected, and from the theorem we also have ${\left( 1 + \epsilon \right) }^{-1}{\mathbf{L}}_{G}^{ + } \preccurlyeq {\mathbf{L}}_{{G}^{\prime }}^{ + } \preccurlyeq {\left( 1 - \epsilon \right) }^{-1}{\mathbf{L}}_{G}^{ + }$ .
102
+
103
+ The Green’s function leads to envision ${R}_{uv}$ (and therefore $C{T}_{uv}$ ) as metrics relating pairs of nodes of $G$ . For instance ${\mathbf{R}}_{uv} = {\mathbf{L}}_{uu}^{ + } + {\mathbf{L}}_{vv}^{ + } - 2{\mathbf{L}}_{uv}^{ + }$ , is the resistance distance [38] i.e., as noted by Qiu and Hancock [39] the elements ${\mathbf{L}}_{uv}^{ + }$ encode dot products between the embeddings ${\mathbf{z}}_{u}$ and ${\mathbf{z}}_{v}$ of $u$ and $v$ . As a result, the latent space can not only be described spectrally but also in a parameter free-manner, which is not the case for other spectral embeddings, such as heat kernel or diffusion maps as they rely on a time parameter $t$ . More precisely, the embedding matrix $\mathbf{Z}$ whose columns contain the nodes’ embeddings is given by:
104
+
105
+ $$
106
+ \mathbf{Z} \mathrel{\text{ := }} \sqrt{\operatorname{vol}\left( G\right) }{\Lambda }^{-1/2}{\mathbf{F}}^{T} = \sqrt{\operatorname{vol}\left( G\right) }{\Lambda }^{\prime - 1/2}{\mathbf{G}}^{T}{\mathbf{D}}^{-1/2} \tag{2}
107
+ $$
108
+
109
+ where $\Lambda$ is the diagonal matrix of the unnormalized Laplacian $\mathbf{L}$ eigenvalues and $\mathbf{F}$ is the matrix of their associated eigenvectors. Similarly, ${\Lambda }^{\prime }$ contains the eigenvalues of the normalized Laplacian $\mathcal{L}$ and $\mathbf{G}$ the eigenvectors. We have $\mathbf{F} = {\mathbf{{GD}}}^{-1/2}$ or ${\mathbf{f}}_{i} = {\mathbf{g}}_{i}{\mathbf{D}}^{-1/2}$ , where $\mathbf{D}$ is the degree matrix.
110
+
111
+ Finally, the commute times are given by the Euclidean distances between the embeddings $C{T}_{uv} =$ ${\begin{Vmatrix}{\mathbf{z}}_{u} - {\mathbf{z}}_{v}\end{Vmatrix}}^{2}$ . Their spectral form is
112
+
113
+ $$
114
+ {R}_{uv} = \frac{C{T}_{uv}}{\operatorname{vol}\left( G\right) } = \mathop{\sum }\limits_{{i = 2}}^{n}\frac{1}{{\lambda }_{i}}{\left( {\mathbf{f}}_{i}\left( u\right) - {\mathbf{f}}_{i}\left( v\right) \right) }^{2} = \mathop{\sum }\limits_{{i = 2}}^{n}\frac{1}{{\lambda }_{i}^{\prime }}{\left( \frac{{\mathbf{g}}_{i}\left( u\right) }{\sqrt{{d}_{u}}} - \frac{{\mathbf{g}}_{i}\left( v\right) }{\sqrt{{d}_{v}}}\right) }^{2} \tag{3}
115
+ $$
116
+
117
+ Note how in Eq. 3 the commute times rely on the Fiedler vector ${\mathbf{f}}_{2}$ (or ${\mathbf{g}}_{2}$ ) downscaled by the spectral gap ${\lambda }_{2}$ (or more formally ${\lambda }_{2}^{\prime }$ ). The downscaled Fiedler vector dominates the expansion because the Fiedler vector is the solution to the relaxed ratio-cut problem. This is consistent with the fact that $p$ -resistances become the inverse of mincut when $p \rightarrow \infty$ .
118
+
119
+ ${}^{2}$ We use commute times and effective resistances interchangeably as per their use in the literature
120
+
121
+ Commute Times as an Optimization Problem. In this section, we demonstrate how the CTs may be computed as an optimization problem by means of a differentiable layer in a GNN. Constraining neighboring nodes to have a similar embedding leads to
122
+
123
+ $$
124
+ \mathbf{Z} = \arg \mathop{\min }\limits_{{{\mathbf{Z}}^{T}\mathbf{Z} = \mathbf{I}}}\frac{\mathop{\sum }\limits_{{u,v}}{\begin{Vmatrix}{\mathbf{z}}_{u} - {\mathbf{z}}_{v}\end{Vmatrix}}^{2}{\mathbf{A}}_{uv}}{\mathop{\sum }\limits_{{u,v}}{\mathbf{Z}}_{uv}^{2}{d}_{u}} = \frac{\mathop{\sum }\limits_{{\left( {u,v}\right) \in E}}{\begin{Vmatrix}{\mathbf{z}}_{u} - {\mathbf{z}}_{v}\end{Vmatrix}}^{2}}{\mathop{\sum }\limits_{{u,v}}{\mathbf{Z}}_{uv}^{2}{d}_{u}} = \frac{\operatorname{Tr}\left\lbrack {{\mathbf{Z}}^{T}\mathbf{L}\mathbf{Z}}\right\rbrack }{\operatorname{Tr}\left\lbrack {{\mathbf{Z}}^{T}\mathbf{D}\mathbf{Z}}\right\rbrack }, \tag{4}
125
+ $$
126
+
127
+ which reveals that CTs embeddings result from a Laplacian regularization down-weighted by the degree. As a result, frontier nodes or hubs -i.e. nodes with inter-community edges- which tend to have larger degrees than those lying inside their respective communities will be embedded far away from their neighbors, increasing the distance between communities. Note that the above quotient of traces formulation is easily differentiable and different from $\operatorname{Tr}\left\lbrack \frac{{\mathbf{Z}}^{T}\mathbf{{LZ}}}{{\mathbf{Z}}^{T}\mathbf{{DZ}}}\right\rbrack$ proposed in [39].
128
+
129
+ With the above elements we define CT-LAYER, the first rewiring layer proposed in this paper. See Figure 2 for a graphical representation of the layer.
130
+
131
+ Definition 1 (CT-Layer). Given the matrix ${\mathbf{X}}_{n \times F}$ encoding the features of the nodes after any message passing(MP)layer; ${\mathbf{Z}}_{n \times O\left( n\right) } = \tanh \left( {{MLP}\left( \mathbf{X}\right) }\right)$ learns the association $\mathbf{X} \rightarrow \mathbf{Z}$ while $\mathbf{Z}$ is optimized according to the loss ${L}_{CT} = \frac{\operatorname{Tr}\left\lbrack {{\mathbf{Z}}^{T}\mathbf{L}\mathbf{Z}}\right\rbrack }{\operatorname{Tr}\left\lbrack {{\mathbf{Z}}^{T}\mathbf{D}\mathbf{Z}}\right\rbrack } + {\begin{Vmatrix}\frac{{\mathbf{Z}}^{T}\mathbf{Z}}{{\begin{Vmatrix}{\mathbf{Z}}^{T}\mathbf{Z}\end{Vmatrix}}_{F}} - {\mathbf{I}}_{n}\end{Vmatrix}}_{F}$ . This results in the following resistance diffusion ${\mathbf{T}}^{CT} = \mathbf{R}\left( \mathbf{Z}\right) \odot \mathbf{A}$ , i.e. the Hadamard product between the resistance distance and the adjacency matrix, providing as input to the subsequent MP layer a learnt convolution matrix.
132
+
133
+ Thus, CT-LAYER learns the CTs and rewires an input graph according to them: the edges with maximal resistance will tend to be the most important edges so as to preserve the topology of the graph.
134
+
135
+ < g r a p h i c s >
136
+
137
+ Figure 2: Detailed depiction of CT-LAYER
138
+
139
+ Below, we present the relationship between the CTs and the graph's bottleneck and curvature.
140
+
141
+ ${\mathbf{T}}^{CT}$ and Graph Bottlenecks. Beyond the principled sparsification of ${\mathbf{T}}^{CT}$ (enabled by Theorem 1), this layer rewires the graph $G = \left( {V,E}\right)$ in such a way that edges with maximal resistance will tend to be the most critical to preserve the topology of the graph. More precisely, although $\mathop{\sum }\limits_{{e \in E}}{R}_{e} = n - 1$ , the bulk of the resistance distribution will be located at graph bottlenecks, if they exist. Otherwise, their magnitude is upper-bounded and the distribution becomes more uniform.
142
+
143
+ Graph bottlenecks are controlled by the graph’s conductance or Cheeger constant, ${h}_{G} = \mathop{\min }\limits_{{S \subseteq V}}{h}_{S}$ , where: ${h}_{S} = \frac{\left| \partial S\right| }{\min \left( {\operatorname{vol}\left( S\right) ,\operatorname{vol}\left( \bar{S}\right) }\right) },\partial S = \{ e = \left( {u,v}\right) : u \in S,v \in \bar{S}\}$ and $\operatorname{vol}\left( S\right) = \mathop{\sum }\limits_{{u \in S}}{d}_{u}$ .
144
+
145
+ The interplay between the graph's conductance and effective resistances is given by:
146
+
147
+ Theorem 2 (Alev et al. [40]). Given a graph $G = \left( {V,E}\right)$ , a subset $S \subseteq V$ with $\operatorname{vol}\left( S\right) \leq \operatorname{vol}\left( G\right) /2$ ,
148
+
149
+ $$
150
+ {h}_{S} \geq \frac{c}{\operatorname{vol}{\left( S\right) }^{1/2 - \epsilon }} \Leftrightarrow \left| {\partial S}\right| \geq c \cdot \operatorname{vol}{\left( S\right) }^{1/2 - \epsilon }, \tag{5}
151
+ $$
152
+
153
+ for some constant $c$ and $\epsilon \in \left\lbrack {0,1/2}\right\rbrack$ . Then, ${R}_{uv} \leq \left( {\frac{1}{{d}_{u}^{2\epsilon }} + \frac{1}{{d}_{v}^{2\epsilon }}}\right) \cdot \frac{1}{\epsilon \cdot {c}^{2}}$ for any pair $u,v$ .
154
+
155
+ According to this theorem, the larger the graph’s bottleneck, the tighter the bound on ${R}_{uv}$ are. Moreover, $\max \left( {R}_{uv}\right) \leq 1/{h}_{S}^{2}$ , i.e., the resistance is bounded by the square of the bottleneck.
156
+
157
+ This bound partially explains the rewiring of the graph in Figure 1-center. As seen in the Figure, rewiring using CT-LAYER sparsifies the graph and assigns larger weights to the edges located in the graph's bottleneck. The interplay between the above theorem and Theorem 1 is described in App. A.1.
158
+
159
+ Recent work has proposed using curvature for graph rewiring. We outline below the relationship between CTs and curvature.
160
+
161
+ Effective Resistances and Curvature. Topping et al. [21] propose an approach for graph rewiring, where the relevance function is given by the Ricci curvature. However, this measure is nondifferentiable. More recent definitions of curvature [22] have been formulated based on resistance distances that would be differentiable using our approach. The resistance curvature of an edge $e = \left( {u,v}\right)$ is ${\kappa }_{uv} \mathrel{\text{ := }} 2\left( {{p}_{u} + {p}_{v}}\right) /{R}_{uv}$ where ${p}_{u} \mathrel{\text{ := }} 1 - \frac{1}{2}\mathop{\sum }\limits_{{u \sim w}}{R}_{uv}$ is the node’s curvature. Relevant properties of the edge resistance curvature are discussed in App. A.1.3, along with a related Theorem proposed in Devriendt and Lambiotte [22].
162
+
163
+ § 3.3 GAP-LAYER: SPECTRAL GAP OPTIMIZATION FOR GRAPH REWIRING
164
+
165
+ The right-hand side of the Lovász bound in Eq. 1 relies on the graph’s spectral gap ${\lambda }_{2}^{\prime }$ , such that the larger the spectral gap, the closer the commute times would be to their non-informative regime. Note that the spectral gap is typically large in commonly observed graphs -such as communities in social networks which may be bridged by many edges [41]- and, hence, in these cases it would be desirable to rewire the adjacency matrix $\mathbf{A}$ so that ${\lambda }_{2}^{\prime }$ is minimized.
166
+
167
+ In this section, we explain how to rewire the graph's adjacency matrix A to minimize the spectral gap. We propose using the gradient of ${\lambda }_{2}$ wrt each component of $\widetilde{\mathbf{A}}$ . Then, we can compute these gradients either using Laplacians (L, with Fiedler value ${\lambda }_{2}$ ) or normalized Laplacians ( $\mathcal{L}$ , with Fiedler value ${\lambda }_{2}^{\prime }$ ). We also present an approximation of the Fiedler vectors needed to compute those gradients, and propose computing them in a GNN layer called the GAP-LAYER. A detailed schematic of the GAP-LAYER is shown in Figure 3.
168
+
169
+ Ratio-cut (Rcut) Approximation. We propose to rewire the adjacency matrix, A, so that ${\lambda }_{2}$ is minimized. We consider a matrix $\widetilde{\mathbf{A}}$ close to $\mathbf{A}$ that satisfies $\widetilde{\mathbf{L}}{\mathbf{f}}_{2} = {\lambda }_{2}{\mathbf{f}}_{2}$ , where ${\mathbf{f}}_{2}$ is the solution to the ratio-cut relaxation [42]. Following [43], the gradient of ${\lambda }_{2}$ wrt each component of $\mathbf{A}$ is given by
170
+
171
+ $$
172
+ {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2} \mathrel{\text{ := }} \operatorname{Tr}\left\lbrack {{\left( {\nabla }_{\widetilde{\mathbf{L}}}{\lambda }_{2}\right) }^{T} \cdot {\nabla }_{\widetilde{\mathbf{A}}}\widetilde{\mathbf{L}}}\right\rbrack = \operatorname{diag}\left( {{\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}}\right) {\mathbf{{11}}}^{T} - {\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T} \tag{6}
173
+ $$
174
+
175
+ where 1 is the vector of $n$ ones; and ${\left\lbrack {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}\right\rbrack }_{uv}$ is the gradient of ${\lambda }_{2}$ wrt ${\widetilde{\mathbf{A}}}_{uv}$ . The driving force of this gradient relies on the correlation ${\mathbf{f}}_{2}{\mathbf{f}}_{2}^{T}$ . Using this gradient to minimize ${\lambda }_{2}$ results in breaking the graph's bottleneck while simultaneously preserving the intra-cluster structure. We delve into this matter in App. A.2.
176
+
177
+ Normalized-cut (Ncut) Approximation. Similarly, considering now ${\lambda }_{2}^{\prime }$ for rewiring leads to
178
+
179
+ $$
180
+ {\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}^{\prime } \mathrel{\text{ := }} \operatorname{Tr}\left\lbrack {{\left( {\nabla }_{\widetilde{\mathcal{L}}}{\lambda }_{2}\right) }^{T} \cdot {\nabla }_{\widetilde{\mathbf{A}}}\widetilde{\mathcal{L}}}\right\rbrack =
181
+ $$
182
+
183
+ $$
184
+ {\mathbf{d}}^{\prime }\left\{ {{\mathbf{g}}_{2}^{T}{\widetilde{\mathbf{A}}}^{T}{\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{g}}_{2}}\right\} {\mathbf{1}}^{T} + {\mathbf{d}}^{\prime }\left\{ {{\mathbf{g}}_{2}^{T}\widetilde{\mathbf{A}}{\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{g}}_{2}}\right\} {\mathbf{1}}^{T} + {\widetilde{\mathbf{D}}}^{-1/2}{\mathbf{g}}_{2}{\mathbf{g}}_{2}^{T}{\widetilde{\mathbf{D}}}^{-1/2} \tag{7}
185
+ $$
186
+
187
+ where ${\mathbf{d}}^{\prime }$ is an $n \times 1$ vector including derivatives of the degree wrt the adjacency and related terms. This gradient relies on the Fiedler vector ${\mathbf{g}}_{2}$ (the solution to the normalized-cut relaxation), and on the incoming and outgoing one-hop random walks. This approximation breaks the bottleneck while preserving the global topology of the graph (Figure 1-right). More details and proofs are included in App. A.2.
188
+
189
+ We present next an approximation of the Fiedler vector, followed by a proposed new layer in the GNN called the GAP-LAYER to learn how to minimize the spectral gap of the graph.
190
+
191
+ Approximating the Fiedler vector. Given that ${\mathbf{g}}_{2} = {\widetilde{\mathbf{D}}}^{1/2}{\mathbf{f}}_{2}$ , we can obtain the normalized-cut gradient in terms of ${\mathbf{f}}_{2}$ . From [23] we have that
192
+
193
+ $$
194
+ {\mathbf{f}}_{2}\left( u\right) = \left\{ {\begin{array}{ll} + 1/\sqrt{n} & \text{ if }u\text{ belongs to the first cluster } \\ - 1/\sqrt{n} & \text{ if }u\text{ belongs to the second cluster } \end{array} + O\left( \frac{\log n}{n}\right) }\right. \tag{8}
195
+ $$
196
+
197
+ < g r a p h i c s >
198
+
199
+ Figure 3: GAP-LAYER (Rcut). For GAP-LAYER (Ncut), substitute ${\nabla }_{\widetilde{\mathbf{A}}}{\lambda }_{2}$ by Eq. 7
200
+
201
+ Definition 2 (GAP-Layer). Given the matrix ${\mathbf{X}}_{n \times F}$ encoding the features of the nodes after any message passing (MP) layer, ${\mathbf{S}}_{n \times 2} = \operatorname{Softmax}\left( {\operatorname{MLP}\left( \mathbf{X}\right) }\right)$ learns the association $\mathbf{X} \rightarrow \mathbf{S}$ while $\mathbf{S}$ is optimized according to the loss ${L}_{Cut} = - \frac{\operatorname{Tr}\left\lbrack {{\mathbf{S}}^{T}\mathbf{A}\mathbf{S}}\right\rbrack }{\operatorname{Tr}\left\lbrack {{\mathbf{S}}^{T}\mathbf{D}\mathbf{S}}\right\rbrack } + {\begin{Vmatrix}\frac{{\mathbf{S}}^{T}\mathbf{S}}{{\begin{Vmatrix}{\mathbf{S}}^{T}\mathbf{S}\end{Vmatrix}}_{F}} - \frac{{\mathbf{I}}_{n}}{\sqrt{2}}\end{Vmatrix}}_{F}$ . Then the Fiedler vector ${\mathbf{f}}_{2}$ is approximated by applying a softmaxed version of Eq. 8 and considering the loss ${L}_{\text{ Fiedler }} =$ $\parallel \widetilde{\mathbf{A}} - \mathbf{A}{\parallel }_{F} + \alpha {\left( {\lambda }_{2}^{ * }\right) }^{2}$ , where ${\lambda }_{2}^{ * } = {\lambda }_{2}$ if we use the ratio-cut approximation (and gradient) and ${\lambda }_{2}^{ * } = {\lambda }_{2}^{\prime }$ if we use the normalized-cut approximation and gradient. This returns $\widetilde{\mathbf{A}}$ and the ${GAP}$ diffusion ${\mathbf{T}}^{GAP} = \widetilde{\mathbf{A}}\left( \mathbf{S}\right) \odot \mathbf{A}$ results from minimizing ${L}_{GAP} \mathrel{\text{ := }} {L}_{Cut} + {L}_{\text{ Fiedler }}$ .
202
+
203
+ § 4 EXPERIMENTS AND DISCUSSION
204
+
205
+ In this section, we study the properties and performance of CT-LAYER and GAP-LAYER in a graph classification task with several benchmark datasets. To illustrate the merits of our approach, we compare CT-LAYER and GAP-LAYER with 3 state-of-the-art diffusion and curvature-based graph rewiring methods. Note that the aim of the evaluation is to shed light on the properties of both layers and illustrate their inductive performance, not to perform a benchmark comparison with all previously proposed graph rewiring methods.
206
+
207
+ < g r a p h i c s >
208
+
209
+ Figure 4: GNN models used in the experiments. Left: MinCut Baseline model. Right: CT-LAYER or GAP-LAYER models, depending on what method is used for rewiring.
210
+
211
+ Baselines. The first baseline architecture is based on MINCUT Pool [34] and it is shown in Figure 4a. It is the base GNN that we use for graph classification without rewiring. The MINCUT Pool layer learns $\left( {{\mathbf{A}}_{n \times n},{\mathbf{X}}_{n \times F}}\right) \rightarrow \left( {{\mathbf{A}}^{\prime }{}_{k \times k},{\mathbf{X}}_{k \times F}}\right)$ , where $k < n$ is the new number of node clusters. The next two baselines are graph rewiring methods that belong to the same family of methods as DIFFWIRE, i.e., methods based on diffusion and curvature, namely DIGL (PPR) [25] and SDRF [21]. DIGL is a diffusion-based preprocessing method within the family of metric-based GSL approaches. We set the teleporting probability $\alpha = {0.001}$ and $\epsilon$ is set to keep the same average degree for each graph. Once preprocessed with DIGL, the graphs are provided as input to the MinCut Pool (Baseline1) architecture. The third baseline model is SDRF, which performs curvature-based rewiring. SDRF is also a preprocessing method which has 3 parameters that are highly graph-dependent. We set these parameters to $\tau = {20}$ and ${C}^{ + } = 0$ for all experiments as per [21]. The number of iterations is estimated dynamically according to ${0.7} * \left| V\right|$ for each graph.
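+ For reference, a minimal sketch of the PPR diffusion at the core of DIGL-style preprocessing (dense numpy, illustrative helper; in DIGL the threshold $\epsilon$ is chosen per graph to preserve the average degree, whereas a fixed value is shown here):

```python
import numpy as np

def ppr_diffusion(A, alpha=0.001, eps=1e-4):
    """Personalized PageRank diffusion S = alpha * (I - (1 - alpha) * T)^{-1},
    with T the column-stochastic transition matrix, followed by eps-sparsification."""
    n = A.shape[0]
    T = A / A.sum(axis=0, keepdims=True)         # column-normalized transition matrix
    S = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * T)
    S[S < eps] = 0.0                             # drop weak diffusion weights to keep the rewired graph sparse
    return S
```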
212
+
213
+ Both DIGL and SDRF aim to preserve the global topology of the graph but require optimizing their parameters for each input graph via hyper-parameter search. In a graph classification task, this search is $O\left( {n}^{3}\right)$ per graph. Details about the parameter tuning in these methods can be found in App. A.3.3.
214
+
215
+ To shed light on the performance and properties of CT-LAYER and GAP-LAYER, we add the corresponding layer between the initial Linear(X) layer and the first convolution Conv1(A, X); a rough sketch of this wiring is given below. We build 3 different models: CT-LAYER, GAP-LAYER (Rcut) and GAP-LAYER (Ncut), depending on the layer used. For CT-LAYER, we learn ${\mathbf{T}}^{CT}$ , which is used as the convolution matrix afterwards. For GAP-LAYER, we learn ${\mathbf{T}}^{GAP}$ using either the Rcut or the Ncut approximation. A schematic of the architectures is shown in Figure 4b and in App. A.3.2.
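+ A rough dense-tensor sketch of that wiring (hypothetical class and argument names; the actual models use PyTorch Geometric layers and the pooling head of Figure 4):

```python
import torch
import torch.nn as nn

class RewiredGNNSketch(nn.Module):
    """Linear(X) -> rewiring layer -> convolution with the learned diffusion -> readout."""
    def __init__(self, in_dim, hidden, n_classes, rewire_layer):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden)
        self.rewire = rewire_layer                     # e.g. GapLayerSketch above, or a CT analogue
        self.conv_w = nn.Linear(hidden, hidden)        # weights of a dense graph convolution
        self.readout = nn.Linear(hidden, n_classes)

    def forward(self, X, A):
        H = torch.relu(self.lin(X))
        T, aux_loss = self.rewire(H, A)                # learned diffusion matrix replaces A
        H = torch.relu(self.conv_w(T @ H))             # convolution driven by the rewired graph
        return self.readout(H.mean(dim=0)), aux_loss   # mean pooling for graph classification
```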
216
+
217
+ Table 1: Experimental results on common graph classification benchmarks. Red denotes the best model row-wise and blue marks the runner-up. '*' means degree as node feature.
218
+
220
+
221
+ Dataset MinCutPool DIGL SDRF CT-LAYER GAP-LAYER (Rcut) GAP-LAYER (Ncut)
222
+
224
+ REDDIT-B* ${66.53} \pm {4.47}$ ${76.02} \pm {4.31}$ ${65.3} \pm {7.7}$ ${78.45} \pm {4.59}$ ${77.63} \pm {4.96}$ ${76.00} \pm {5.30}$
225
+
227
+ IMDB-B* ${60.75} \pm {7.03}$ ${59.35} \pm {7.76}$ ${59.2} \pm {6.9}$ ${69.84} \pm {4.60}$ ${69.93} \pm {3.32}$ ${68.80} \pm {3.10}$
228
+
230
+ COLLAB* ${58.00} \pm {6.22}$ ${57.51} \pm {5.95}$ ${56.6} \pm {10}$ ${69.87} \pm {2.40}$ ${64.47} \pm {4.07}$ ${65.89} \pm {4.90}$
231
+
233
+ MUTAG ${84.21} \pm {6.34}$ ${85.00} \pm {5.65}$ ${82.4} \pm {6.8}$ ${86.05} \pm {4.99}$ ${86.90} \pm {4.00}$ ${86.90} \pm {4.00}$
234
+
236
+ PROTEINS ${74.84} \pm {2.39}$ ${74.49} \pm {2.88}$ ${74.4} \pm {2.7}$ ${75.38} \pm {2.97}$ ${75.03} \pm {3.09}$ ${75.34} \pm {2.10}$
237
+
239
+ SBM* ${53.00} \pm {9.90}$ ${56.93} \pm {12.8}$ ${54.1} \pm {7.1}$ ${81.40} \pm {11.7}$ ${90.80} \pm {7.00}$ ${92.26} \pm {2.92}$
240
+
242
+ Erdös-Rényi* ${81.86} \pm {6.26}$ ${81.93} \pm {6.32}$ ${73.6} \pm {9.1}$ ${79.06} \pm {9.89}$ ${79.26} \pm {10.46}$ ${82.26} \pm {3.20}$
243
+
245
+
246
+ As shown in Table 1, we use common benchmark datasets for graph classification in our experiments. We select datasets both with node features and without them; in the latter case we use the node degree as the feature. These datasets are diverse regarding the topology of their networks: REDDIT-B, IMDB-B and COLLAB contain truncated scale-free graphs (social networks), whereas MUTAG and PROTEINS contain graphs from biology or chemistry. In addition, we use two synthetic datasets with 2 classes: Erdös-Rényi with ${p}_{1} \in \left\lbrack {{0.3},{0.5}}\right\rbrack$ and ${p}_{2} \in \left\lbrack {{0.4},{0.8}}\right\rbrack$ , and a Stochastic block model (SBM) with parameters ${p}_{1} = {0.8},{p}_{2} = {0.5},{q}_{1} \in \left\lbrack {{0.1},{0.15}}\right\rbrack$ and ${q}_{2} \in \left\lbrack {{0.01},{0.1}}\right\rbrack$ . More details are given in App. A.3.1.
247
+
248
+ Table 1 reports average accuracies and standard deviations over 10 random data splits, using an 85/15 stratified train-test split, training for 60 epochs and reporting the results of the last epoch for each random run; a sketch of this split protocol is given below. We use the PyTorch Geometric framework and our code is publicly available ${}^{3}$ .
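+ A minimal sketch of the split protocol (seeds and helper names are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def make_splits(labels, n_runs=10, test_size=0.15):
    """10 random stratified 85/15 train/test splits, one per seed."""
    idx = np.arange(len(labels))
    return [train_test_split(idx, test_size=test_size, stratify=labels, random_state=seed)
            for seed in range(n_runs)]
```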
249
+
250
+ The experiments support our hypothesis that rewiring based on CT-LAYER and GAP-LAYER improves the performance of the baselines on graph classification. Since both layers are differentiable, they learn how to rewire unseen graphs. The improvements are significant in graphs where social components arise (REDDIT-B, IMDB-B, COLLAB), i.e., graphs with small-world properties and power-law degree distributions with a topology based on hubs and authorities. These are graphs where bottlenecks arise easily and our approach is able to properly rewire the graphs. However, the improvements observed in planar or grid networks (MUTAG and PROTEINS) are more limited: the bottleneck does not seem to be critical for the graph classification task.
251
+
252
+ Moreover, our method performs better on graphs with featureless nodes than on graphs with node features because it is able to leverage the information encoded in the topology of the graphs. Note that in attribute-based graphs, the node attributes typically outweigh the graph's structure in the classification task, whereas in graphs without node features, the information is encoded in the graph's structure. App. A.3.4 contains an in-depth analysis of the latent space produced by each method.
253
+
254
+ CT-Layer vs GAP-Layer. The real-world datasets explored in this paper are characterized by mild bottlenecks from the perspective of the Lovász bound. For completeness, we have included two synthetic datasets (SBM and Erdös-Rényi) where the Lovász bound is very restrictive. As a result, CT-LAYER is outperformed by GAP-LAYER in SBM. Note that the results on the synthetic datasets suffer from large variability. The smaller the graph's bottleneck, the more useful CT-LAYER is. Conversely, the larger the bottleneck, the more useful GAP-LAYER is.
255
+
256
+ § 5 CONCLUSION AND FUTURE WORK
257
+
258
+ In this paper, we have proposed DIFFWIRE, a unified framework for graph rewiring that links the two components of the Lovász bound: CTs and the spectral gap. We have presented two novel, fully differentiable and inductive rewiring layers: CT-LAYER and GAP-LAYER. We have empirically evaluated these layers on benchmark datasets for graph classification with competitive results when compared to SoTA baselines, especially in graphs where the nodes have no attributes and have small-world properties.
259
+
260
+ In future work, we plan to test our approach in other graph-related tasks and intend to apply DIFFWIRE to real-world applications, particularly in social networks, which have unique topology, statistics and direct implications in society.
261
+
262
+ ${}^{3}$ https://anonymous.4open.science/r/DiffWireLoG22/readme.md
papers/LOG/LOG 2022/LOG 2022 Conference/KQNsbAmJEug/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,175 @@
1
+ # Not too little, not too much: a theoretical analysis of graph (over)smoothing
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ We analyze graph smoothing with mean aggregation, where each node successively receives the average of the features of its neighbors. Indeed, it has been observed that Graph Neural Networks (GNNs), which generally follow some variant of Message-Passing (MP) with repeated aggregation, may be subject to the over-smoothing phenomenon: by performing too many rounds of MP, the node features tend to converge to a non-informative limit. At the other end of the spectrum, it is intuitively obvious that some MP rounds are necessary, but existing analyses do not exhibit both phenomena at once. In this paper, we consider simplified linear GNNs, and rigorously analyze two examples of random graphs for which a finite number of mean aggregation steps provably improves the learning performance, before oversmoothing kicks in. We identify two key phenomena: graph smoothing shrinks non-principal directions in the data faster than principal ones, which is useful for regression, and shrinks nodes within communities faster than they collapse together, which improves classification.
12
+
13
+ ## 1 Introduction
14
+
15
+ In recent years, deep architectures such as Graph Neural Networks (GNNs), along with the availability of large sets of graph data, have significantly broadened the field of machine learning on graphs and structured data, see $\left\lbrack {3,4,9,{19}}\right\rbrack$ for reviews. Most GNNs rely on the Message-Passing (MP) framework $\left\lbrack {8,{11}}\right\rbrack$ . At each layer $k$ , for each node $i$ , a representation ${z}_{i}^{\left( k\right) }$ is computed using the representations of the neighbors ${\mathcal{N}}_{i}$ of $i$ in the graph at the previous layer: ${z}_{i}^{\left( k\right) } = \operatorname{AGG}\left( {\left\{ {z}_{j}^{\left( k - 1\right) }\right\} }_{j \in {\mathcal{N}}_{i}}\right)$ , where AGG is an aggregation function that treats ${\left\{ {z}_{j}^{\left( k - 1\right) }\right\} }_{j \in {\mathcal{N}}_{i}}$ as an unordered set, to respect the absence of node ordering in the graph. Here we consider one of the most classical, mean aggregation:
16
+
17
+ $$
18
+ {z}_{i}^{\left( k\right) } = \frac{1}{\mathop{\sum }\limits_{j}{a}_{ij}}\mathop{\sum }\limits_{j}{a}_{ij}\Psi \left( {z}_{j}^{\left( k - 1\right) }\right) \tag{1}
19
+ $$
20
+
21
+ where the ${a}_{ij} \in {\mathbb{R}}_{ + }$ are the entries of the adjacency matrix of the graph, and $\Psi$ is some function (usually a Multi-Layer Perceptron). While MP is a natural and rather general framework, its limitations were quickly observed by researchers and practitioners. Foremost among them is the so-called oversmoothing phenomenon [14]: as the GNN gets deeper and many rounds of MP are performed, the node features ${z}_{i}^{\left( k\right) }$ tend to become too similar across the graph. To relieve it, researchers have explored residual mechanisms $\left\lbrack {7,{13}}\right\rbrack$ , dropping connections $\left\lbrack {10}\right\rbrack$ , clever normalizations $\left\lbrack {21}\right\rbrack$ or regularizations [6], among others. Some works have acknowledged the important role of the aggregation function, and proposed new exotic diffusion strategies [2] or ways to optimize it [12].
22
+
23
+ On the theoretical side, oversmoothing has mostly been analyzed in the infinite-layer limit $k \rightarrow \infty$ . In this case, classical spectral analysis of graph operators such as the Laplacian can be leveraged to indeed show that node features will always converge to some limit that carries a limited amount of information [17]. This is particularly true for mean aggregation (1). However, there has been little research at the other end of the spectrum. Generally, researchers show the power of GNNs for a sufficient (unbounded) number of layers, such as the now-famous ability to distinguish graph isomorphism as well as the Weisfeiler-Lehman test and all its variants $\left\lbrack {{16},{20}}\right\rbrack$ . Since these results are valid for an unbounded number of layers, the settings adopted in these works are, by definition, incompatible with non-informative oversmoothing.
24
+
25
+ In a recent preprint ${\left\lbrack 1\right\rbrack }^{1}$ , we showcase two representative examples, of regression and classification, on which linear GNNs (sometimes called SGC [18]) are provably subject to this double phenomenon: some smoothing is useful for learning, while too much smoothing inevitably leads to oversmoothing.
26
+
27
+ We adopt a model of latent-space random graphs with node features. We identify two key phenomena: smoothing shrinks non-principal directions in the data faster than principal ones (Sec. 4), and shrinks communities faster than they collapse together (Sec. 5). Although our theoretical settings are obviously simplified, we believe this is a step towards a better comprehension of graph aggregation and of the relationship between node features and graph structure. All proofs are given in the full paper [1], of which the present document is an extended abstract.
28
+
29
+ ## 2 Preliminaries
30
+
31
+ Notations. The norm $\parallel \cdot \parallel$ is the Euclidean norm for vectors and spectral norm for (rectangular) matrices. For a psd matrix $\sum$ , the Mahalanobis norm is $\parallel x{\parallel }_{\sum }^{2}\overset{\text{ def. }}{ = }{x}^{\top }{\sum x}$ . The determinant of a matrix is $\left| S\right|$ , and its smallest eigenvalue is ${\lambda }_{\min }\left( S\right)$ . The multivariate Gaussian distribution with mean $\mu$ and covariance $\sum$ is denoted by ${\mathcal{N}}_{\mu ,\sum }\left( x\right) = \det {\left( 2\pi \sum \right) }^{-\frac{1}{2}}{e}^{-\frac{1}{2}\parallel x - \mu {\parallel }_{{\sum }^{-1}}^{2}}$ .
32
+
33
+ SSL.. In this paper, we consider Semi-Supervised Learning (SSL) [5, 11] on an undirected graph of size $n$ . We observe a weighted adjacency matrix $A = {\left\lbrack {a}_{ij}\right\rbrack }_{i, j = 1}^{n} \in {\mathbb{R}}_{ + }^{n \times n}$ as well as node features ${z}_{1},\ldots {z}_{n} \in {\mathbb{R}}^{p}$ at each node of the graph. We also observe some labels ${y}_{1},\ldots ,{y}_{{n}_{\mathrm{{tr}}}} \in \mathbb{R}$ at training time and aim to predict the remaining labels ${y}_{{n}_{\mathrm{{tr}}} + 1},\ldots ,{y}_{n}$ . For simplicity, we assume that ${n}_{\mathrm{{tr}}}$ and ${n}_{\text{te }}$ are both in $\mathcal{O}\left( n\right)$ . We denote by $Z \in {\mathbb{R}}^{n \times p}$ the matrix whose rows contain the node features, ${Z}_{\mathrm{{tr}}},{Z}_{\mathrm{{te}}}$ respectively its first ${n}_{\mathrm{{tr}}}$ and last ${n}_{\mathrm{{te}}}$ rows, and similarly ${Y}_{\mathrm{{tr}}},{Y}_{\mathrm{{te}}}$ the vectors containing the observed and non-observed labels.
34
+
35
+ Graph smoothing with mean aggregation. Here we consider a simplified situation of linear GNN with mean aggregation, often used as a theoretical baseline [18]. A linear GNN with $k$ layers just corresponds to performing $k$ rounds of mean aggregation on the node features, then learning on the smoothed features. We denote by ${d}_{A} = {\left\lbrack \mathop{\sum }\limits_{i}{a}_{ij}\right\rbrack }_{j} \in {\mathbb{R}}_{ + }^{n}$ the vector containing the degrees of the graph and $D = \operatorname{diag}\left( {d}_{A}\right)$ . Assuming that all degrees are non-zero, performing one round of mean aggregation corresponds to multiplying $Z$ by $L = {D}^{-1}A$ . The smoothed node features after $k$ rounds of mean aggregation are: ${Z}^{\left( k\right) } = {L}^{k}Z$ . Each row, denoted by ${z}_{i}^{\left( k\right) } \in {\mathbb{R}}^{p}$ , contains the smoothed features of an individual node. Its first ${n}_{\mathrm{{tr}}}$ and last ${n}_{\mathrm{{te}}}$ rows are denoted ${Z}_{\mathrm{{tr}}}^{\left( k\right) },{Z}_{\mathrm{{te}}}^{\left( k\right) }$ .
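+ A minimal numpy sketch of this smoothing operator (dense weighted adjacency assumed; names are illustrative):

```python
import numpy as np

def smooth_features(A, Z, k):
    """k rounds of mean aggregation: Z^(k) = (D^{-1} A)^k Z."""
    L = A / A.sum(axis=1, keepdims=True)   # row-normalized: each node averages its neighbors
    Zk = Z.copy()
    for _ in range(k):
        Zk = L @ Zk
    return Zk
```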
36
+
37
+ Learning. In this paper, we consider learning with a Mean Square Error (MSE) loss and Ridge regularization. For $\lambda > 0$ , the regression coefficients vector on the smoothed features is
38
+
39
+ $$
40
+ {\widehat{\beta }}^{\left( k\right) }\overset{\text{ def. }}{ = }{\operatorname{argmin}}_{\beta }\frac{1}{{n}_{\mathrm{{tr}}}}{\begin{Vmatrix}{Y}_{\mathrm{{tr}}} - {Z}_{\mathrm{{tr}}}^{\left( k\right) }\beta \end{Vmatrix}}^{2} + \lambda \parallel \beta {\parallel }^{2} = {\left( \frac{{\left( {Z}_{\mathrm{{tr}}}^{\left( k\right) }\right) }^{\top }{Z}_{\mathrm{{tr}}}^{\left( k\right) }}{{n}_{\mathrm{{tr}}}} + \lambda \mathrm{{Id}}\right) }^{-1}\frac{{\left( {Z}_{\mathrm{{tr}}}^{\left( k\right) }\right) }^{\top }{Y}_{\mathrm{{tr}}}}{{n}_{\mathrm{{tr}}}} \tag{2}
41
+ $$
42
+
43
+ Then, the test risk is defined as
44
+
45
+ $$
46
+ {\mathcal{R}}^{\left( k\right) }\overset{\text{ def. }}{ = }{n}_{\text{te }}^{-1}{\begin{Vmatrix}{Y}_{\text{te }} - {\widehat{{Y}_{\text{te }}}}^{\left( k\right) }\end{Vmatrix}}^{2}\;\text{ where }{\widehat{{Y}_{\text{te }}}}^{\left( k\right) } = {Z}_{\text{te }}^{\left( k\right) }{\widehat{\beta }}^{\left( k\right) } \tag{3}
47
+ $$
48
+
49
+ Our goal is to illustrate some situations where a finite amount of smoothing provably improves the test risk, that is, there is an optimal ${k}^{ \star } > 0$ such that ${\mathcal{R}}^{\left( {k}^{ \star }\right) } < \min \left( {{\mathcal{R}}^{\left( 0\right) },{\mathcal{R}}^{\left( \infty \right) }}\right)$ .
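+ In numpy, the closed form of Eq. (2) and the test risk of Eq. (3) read as follows (a small dense sketch, not tuned code):

```python
import numpy as np

def ridge_fit(Z_tr, Y_tr, lam):
    """Closed-form ridge coefficients of Eq. (2)."""
    n_tr, p = Z_tr.shape
    return np.linalg.solve(Z_tr.T @ Z_tr / n_tr + lam * np.eye(p), Z_tr.T @ Y_tr / n_tr)

def test_risk(Z_te, Y_te, beta):
    """MSE test risk of Eq. (3)."""
    return np.mean((Y_te - Z_te @ beta) ** 2)
```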
50
+
51
+ Random graph model. We adopt popular latent space random graph models akin to graphons [15]. In these models, to each node $i$ is associated an unobserved latent variable ${x}_{i} \in {\mathbb{R}}^{d}$ with $d \geq p$ , and edge weights are assumed to be equal to ${a}_{ij} = W\left( {{x}_{i},{x}_{j}}\right)$ where $W : {\mathbb{R}}^{d} \times {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}_{ + }$ is a connectivity kernel. Note that edges may also be taken as random Bernoulli variables, but we do not consider this here for simplicity. Moreover, we consider that the $\left( {{x}_{i},{y}_{i}}\right)$ are drawn iid from
52
+
53
+ ---
54
+
55
+ ${}^{1}$ the reference has been anonymized for review purpose.
56
+
57
+ ---
58
+
59
+ some joint distribution, and the node features are a linear projection of the latent variables to a lower dimension: ${z}_{i} = {M}^{\top }{x}_{i}$ for some unknown $M \in {\mathbb{R}}^{d \times p}$ that satisfies ${M}^{\top }M = {\operatorname{Id}}_{p}$ . To summarize:
60
+
61
+ $$
62
+ \forall i, j,\;\left( {{x}_{i},{y}_{i}}\right) \overset{\text{ iid }}{ \sim }P,\;{z}_{i} = {M}^{\top }{x}_{i},\;{a}_{ij} = W\left( {{x}_{i},{x}_{j}}\right) \tag{4}
63
+ $$
64
+
65
+ For this model, note that ${Z}^{\left( k\right) } = {X}^{\left( k\right) }M$ where ${X}^{\left( k\right) } = {L}^{k}X$ . In the rest of the paper, we use the Gaussian kernel with a small additive term $\varepsilon > 0$ :
66
+
67
+ $$
68
+ W\left( {x, y}\right) = \varepsilon + {W}_{g}\left( {x, y}\right) \;\text{ where }{W}_{g}\left( {x, y}\right) \overset{\text{ def. }}{ = }{e}^{-\frac{1}{2}\parallel x - y{\parallel }^{2}} \tag{5}
69
+ $$
70
+
71
+ The coefficient $\varepsilon$ is added to lower-bound the degrees of the graph and avoid degenerate situations.
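+ A sketch of the graph generation of Eqs. (4)-(5) (the covariance, projection and seed are illustrative):

```python
import numpy as np

def sample_graph(n, Sigma, M, eps=1e-2, seed=0):
    """Latent Gaussian positions, Gaussian-kernel edge weights (Eq. 5), projected features (Eq. 4)."""
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma, size=n)   # latent variables
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = eps + np.exp(-0.5 * sq_dists)                                      # W(x, y) = eps + exp(-||x - y||^2 / 2)
    return A, X, X @ M                                                     # adjacency, latents, node features Z
```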
72
+
73
+ ## 3 Oversmoothing
74
+
75
+ In this section, we briefly examine the oversmoothing case, when $k \rightarrow \infty$ while all other parameters are fixed. In this case, it is well-known that all node features converge even for general GNNs [17]. For completeness, we state below this result in our settings.
76
+
77
+ Theorem 1. Define $v = {Z}^{\top }\bar{d}$ and ${\bar{y}}_{\mathrm{{tr}}} = {n}_{\mathrm{{tr}}}^{-1}\mathop{\sum }\limits_{{i = 1}}^{{n}_{\mathrm{{tr}}}}{y}_{i}$ . We have ${\widehat{Y}}_{\mathrm{{te}}}^{\left( k\right) }\xrightarrow[{k \rightarrow \infty }]{}\left( {\frac{\parallel v{\parallel }^{2}}{\lambda + \parallel v{\parallel }^{2}}{\bar{y}}_{\mathrm{{tr}}}}\right) {1}_{{n}_{\mathrm{{te}}}}$ .
78
+
79
+ Hence, in the limit $k \rightarrow \infty$ , the predicted labels become all equal. Using simple concentration inequalities, it is generally easy to show that ${\mathcal{R}}^{\left( \infty \right) } \approx \operatorname{Var}\left( y\right) + \mathcal{O}\left( {1/\sqrt{n}}\right)$ when $\lambda \rightarrow 0$ . In most cases, this leads to situations where ${\mathcal{R}}^{\left( 0\right) } < {\mathcal{R}}^{\left( \infty \right) }$ , and oversmoothing occurs.
80
+
81
+ ## 4 Finite smoothing: Linear Regression
82
+
83
+ In this section, we consider a problem of linear regression on Gaussian data. We consider $x \sim {\mathcal{N}}_{0,\sum }$ for some positive definite covariance matrix $\sum$ , and $y = {x}^{\top }{\beta }^{ \star }$ , without noise for simplicity. For a symmetric positive semi-definite matrix $S \in {\mathbb{R}}^{d \times d}$ , we define the following function
84
+
85
+ $$
86
+ {R}_{\text{reg. }}\left( S\right) \overset{\text{ def. }}{ = }{\left( {\sum }^{\frac{1}{2}}{\beta }^{ \star }\right) }^{\top }{\left( \operatorname{Id} - {S}^{\frac{1}{2}}M{\left( \lambda \operatorname{Id} + {M}^{\top }SM\right) }^{-1}{M}^{\top }{S}^{\frac{1}{2}}\right) }^{2}\left( {{\sum }^{\frac{1}{2}}{\beta }^{ \star }}\right) \in {\mathbb{R}}_{ + } \tag{6}
87
+ $$
88
+
89
+ where we recall that $M$ is the projection matrix to obtain the node features $z = {M}^{\top }x$ . Note that it satisfies $0 \leq R\left( S\right) \leq {\begin{Vmatrix}{\beta }^{ \star }\end{Vmatrix}}_{\sum }^{2}$ . We additionally define ${\sum }^{\left( k\right) } = {\left( \operatorname{Id} + {\sum }^{-1}\right) }^{-{2k}}\sum$ . The main result of this section is the following.
90
+
91
+ Theorem 2 (Regression risk without smoothing.). With probability at least $1 - \rho$ ,
92
+
93
+ $$
94
+ {\mathcal{R}}^{\left( 0\right) } = {R}_{\text{reg. }}\left( \sum \right) + \mathcal{O}\left( \frac{\parallel \sum \parallel {\begin{Vmatrix}{\beta }^{ \star }\end{Vmatrix}}^{2}d\sqrt{\log \left( {1/\rho }\right) }}{\left( {\lambda + {\lambda }_{\text{min }}^{\left( 0\right) }}\right) \sqrt{n}}\right) \tag{7}
95
+ $$
96
+
97
+ and
98
+
99
+ $$
100
+ {\mathcal{R}}^{\left( 1\right) } = {R}_{\text{reg. }}\left( {\sum }^{\left( 1\right) }\right) + \mathcal{O}\left( {C{\varepsilon }^{1/5}}\right) + \mathcal{O}\left( \frac{{C}^{\prime }\log n\sqrt{d + \log \left( {1/\rho }\right) }}{\left( {\lambda + {\lambda }_{\min }^{\left( 1\right) }}\right) \sqrt{n}}\right) \tag{8}
101
+ $$
102
+
103
+ where $C = \operatorname{poly}\left( {\parallel \sum \parallel ,{e}^{d},\left| {\mathrm{{Id}} + \sum }\right| }\right) ,{C}^{\prime } = \operatorname{poly}\left( {{\varepsilon }^{-1},\parallel \sum \parallel ,\begin{Vmatrix}{\beta }^{ \star }\end{Vmatrix}}\right)$ and ${\lambda }_{\min }^{\left( k\right) } = {\lambda }_{\min }\left( {{M}^{\top }{\sum }^{\left( k\right) }M}\right)$ .
104
+
105
+ This theorem gives a limiting expression of ${\mathcal{R}}^{\left( 0\right) }$ and ${\mathcal{R}}^{\left( 1\right) }$ with additional error terms. Since it is easy to show that for $n$ large enough ${\mathcal{R}}^{\left( 0\right) } \leq {\mathcal{R}}^{\left( \infty \right) } \approx \operatorname{Var}\left( y\right)$ , we obtain the following result.
106
+
107
+ Corollary 1. Take any $\rho > 0$ , and suppose ${R}_{\text{reg. }}\left( {\sum }^{\left( 1\right) }\right) < {R}_{\text{reg. }}\left( \sum \right)$ . If $\varepsilon$ is sufficiently small and $n$ is sufficiently large, then with probability $1 - \rho$ , there is ${k}^{ \star } > 0$ such that ${\mathcal{R}}^{\left( {k}^{ \star }\right) } < \min \left( {{\mathcal{R}}^{\left( 0\right) },{\mathcal{R}}^{\left( \infty \right) }}\right)$ .
108
+
109
+ In other words, under some hypothesis on ${R}_{\text{reg. }}$ , there is indeed coexistence of beneficial finite smoothing and oversmoothing. Below we exhibit a simple example where this hypothesis is satisfied.
110
+
111
+ As expected in linear regression, the covariance of the ${x}_{i}^{\left( k\right) }$ is key in the expression of the risk. It can be seen in the proof of Theorem 2 (available at [1]) that ${x}^{\left( 1\right) }$ behaves like ${\left( \operatorname{Id} + {\sum }^{-1}\right) }^{-1}x$ , whose covariance is ${\sum }^{\left( 1\right) }$ , hence the consequence that ${\mathcal{R}}^{\left( 1\right) } \approx {R}_{\text{reg. }}\left( {\sum }^{\left( 1\right) }\right)$ . Similarly, by applying repeated smoothing we can extrapolate that ${x}^{\left( k\right) }$ behaves like ${\left( \operatorname{Id} + {\sum }^{-1}\right) }^{-k}x$ , such that ${\mathcal{R}}^{\left( k\right) } \approx {R}_{\text{reg. }}\left( {\sum }^{\left( k\right) }\right)$ .
112
+
113
+ ![01963f04-5f42-7270-80f2-976e2cf1c094_3_323_205_1154_287_0.jpg](images/01963f04-5f42-7270-80f2-976e2cf1c094_3_323_205_1154_287_0.jpg)
114
+
115
+ Figure 1: Illustration of mean aggregation smoothing on the $2\mathrm{D}$ example described in the text. First three figures on the left, top: unobserved latent variables ${X}^{\left( k\right) }$ in dimension $d = 2$ where the colors indicate $Y$ ; bottom: observed node features ${Z}^{\left( k\right) } = {X}^{\left( k\right) }M$ in dimension $p = 1$ on the x-axis, labels $Y$ on the y-axis. From left to right, three orders of smoothing, $k = 0,1$ and 2, are represented. Figure on the right: comparison of empirical and theoretical MSE (details in [1]) with respect to the order of smoothing $k$ .
116
+
117
+ ![01963f04-5f42-7270-80f2-976e2cf1c094_3_322_674_1153_276_0.jpg](images/01963f04-5f42-7270-80f2-976e2cf1c094_3_322_674_1153_276_0.jpg)
118
+
119
+ Figure 2: Illustration of mean aggregation smoothing on a classification task with two Gaussians with dimensions $d = 2, p = 1$ , where $M$ projects on the first coordinate. First three figures on the left, top: density of unobserved latent variables ${X}^{\left( k\right) }$ in dimension $d = 2$ ; bottom: density of observed node features ${Z}^{\left( k\right) } =$ ${X}^{\left( k\right) }M$ in dimension $p = 1$ . From left to right, three orders of smoothing, $k = 0,1$ and 2, are represented. Figure on the right: comparison of empirical and theoretical MSE with respect to the order of smoothing $k$ .
120
+
121
+ The matrix ${\sum }^{\left( k\right) }$ has the same eigendecomposition as $\sum$ , but where every ${\lambda }_{i}$ is replaced by ${\lambda }_{i}^{\left( k\right) } =$ ${\left( 1 + 1/{\lambda }_{i}\right) }^{-{2k}}{\lambda }_{i}$ . This can be interpreted as follows: when ${\lambda }_{i} \gg 1$ is large, ${\lambda }_{i}^{\left( k\right) } \sim {\lambda }_{i}$ , while if ${\lambda }_{i} \ll 1$ is small, ${\lambda }_{i}^{\left( k\right) } \sim {\lambda }_{i}^{{2k} + 1}$ . Hence smoothing shrinks the directions of the small eigenvalues faster than that of the large ones. Thus, if ${\beta }^{ \star }$ is mostly aligned with the eigenvectors of large eigenvalues, smoothing may reduce unwanted noise in the node features $z = {M}^{\top }x$ .
122
+
123
+ We illustrate this on a toy situation (Fig. 1). Consider the following settings: $d = 2, p = 1,\sum$ has two eigenvalues ${\lambda }_{1} = 2$ and ${\lambda }_{2} = 1/2$ , with respective eigenvectors ${u}_{1} = \left\lbrack {1,1}\right\rbrack /\sqrt{2}$ and ${u}_{2} = \left\lbrack {-1,1}\right\rbrack /\sqrt{2}$ , and ${\beta }^{ \star } = b{u}_{1}$ . Finally, ${M}^{\top } = \left\lbrack {1,0}\right\rbrack$ is the projection on the first coordinate. In this case, we can compute explicitly: ${\mathcal{R}}^{\left( k\right) } \approx {R}_{\text{reg. }}\left( {\sum }^{\left( k\right) }\right) = {\lambda }_{1}{b}^{2}\frac{{\left( 2\lambda + {\lambda }_{2}^{\left( k\right) }\right) }^{2} + {\lambda }_{2}^{\left( k\right) }{\lambda }_{1}^{\left( k\right) }}{{\left( 2\lambda + {\lambda }_{1}^{\left( k\right) } + {\lambda }_{2}^{\left( k\right) }\right) }^{2}}$ . So, if ${\lambda }_{2}^{\left( k\right) }$ decreases faster than ${\lambda }_{1}^{\left( k\right) }$ , this function will first decrease to a minimum of approximately ${\lambda }_{1}{b}^{2}{\left( \frac{2\lambda }{{2\lambda } + {\lambda }_{1}^{\left( {k}^{ \star }\right) }}\right) }^{2}$ (when ${\lambda }_{2}^{\left( k\right) } \approx 0$ ), before increasing again to ${\lambda }_{1}{b}^{2} = {\begin{Vmatrix}{\beta }^{ \star }\end{Vmatrix}}_{\sum }^{2} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mathcal{R}}^{\left( \infty \right) }$ .
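+ Plugging in illustrative values (say $b = 1$ and ridge parameter $\lambda = {10}^{-2}$ , both chosen arbitrarily here) makes the dip-then-rise of the risk visible numerically:

```python
import numpy as np

# Toy 2-D example: lambda_1 = 2, lambda_2 = 1/2, beta* = b u_1; b and the ridge lambda are illustrative.
lam1, lam2, b, lam = 2.0, 0.5, 1.0, 1e-2
for k in range(6):
    l1k = (1 + 1 / lam1) ** (-2 * k) * lam1
    l2k = (1 + 1 / lam2) ** (-2 * k) * lam2
    Rk = lam1 * b**2 * ((2 * lam + l2k) ** 2 + l2k * l1k) / (2 * lam + l1k + l2k) ** 2
    print(k, Rk)   # decreases while lambda_2^(k) shrinks, then climbs back towards lam1 * b^2
```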
124
+
125
+ ## 5 Finite smoothing: classification
126
+
127
+ In this last section, we examine a simple classification problem for two balanced classes with Gaussian distribution: $\left( {x, y}\right) \sim \left( {1/2}\right) \left( {{\mathcal{N}}_{\mu ,\mathrm{{Id}}}\otimes \{ 1\} + {\mathcal{N}}_{-\mu ,\mathrm{{Id}}}\otimes \{ - 1\} }\right)$ . We note that this is not a difficult problem per se, and that linear regression is certainly not the method of choice to solve it. Our main goal is to illustrate the smoothing phenomenon. Our main result is the following.
128
+
129
+ Theorem 3. Take any $\rho > 0$ . If $\varepsilon$ is sufficiently small, and $\parallel \mu \parallel , n$ are sufficiently large, and $\begin{Vmatrix}{{M}^{\top }\mu }\end{Vmatrix} > 0$ , then with probability $1 - \rho$ , there is ${k}^{ \star } > 0$ such that ${\mathcal{R}}^{\left( {k}^{ \star }\right) } < \min \left( {{\mathcal{R}}^{\left( 0\right) },{\mathcal{R}}^{\left( \infty \right) }}\right)$ .
130
+
131
+ Note that we have assumed $\parallel \mu \parallel$ to be sufficiently large here. However, we do not assume that $\begin{Vmatrix}{{M}^{\top }\mu }\end{Vmatrix}$ is large (just non-zero), and the classification problem on the ${z}_{i}$ alone may be very difficult. As seen in the proof [1] and Fig. 2 on a $d = 2$ example, the interpretation here is the following: in the proper regime, communities will initially concentrate in the latent space faster than they get closer to each other, which helps learning on the ${z}^{\left( k\right) }$ . Then, they eventually collapse together. A small simulation of this effect is sketched below.
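+ A small simulation of this effect (all constants are illustrative): under mean aggregation, the within-community spread shrinks faster than the distance between the community means.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu = 400, np.array([3.0, 0.0])
y = np.repeat([1, -1], n // 2)
X = rng.normal(size=(n, 2)) + np.outer(y, mu)                         # two Gaussian communities
A = 1e-2 + np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))    # Gaussian kernel + eps
L = A / A.sum(axis=1, keepdims=True)

Xk = X.copy()
for k in range(4):
    gap = np.linalg.norm(Xk[y == 1].mean(0) - Xk[y == -1].mean(0))    # distance between community means
    spread = Xk[y == 1].std()                                         # spread within one community
    print(k, round(gap, 3), round(spread, 3))
    Xk = L @ Xk
```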
132
+
133
+ References
134
+
135
+ [1] Anonymized. Not too little, not too much: a theoretical analysis of graph (over)smoothing. ArXiv, pages 1-24, 2022. 2, 3, 4
136
+
137
+ [2] Cristian Bodnar, Francesco Di Giovanni, Benjamin Paul Chamberlain, Pietro Liò, and Michael M. Bronstein. Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs. 2022. 1
138
+
139
+ [3] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. 2021. 1
140
+
141
+ [4] Michael M. Bronstein, Joan Bruna, Yann Lecun, Arthur Szlam, and Pierre Vandergheynst. Geometric Deep Learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017. 1
142
+
143
+ [5] Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien. Semi-Supervised Learning. 2010. 2
144
+
145
+ [6] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. AAAI 2020 - 34th AAAI Conference on Artificial Intelligence, pages 3438-3445, 2020. 1
146
+
147
+ [7] Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. 37th International Conference on Machine Learning, ICML 2020, PartF16814:1703-1713, 2020. 1
148
+
149
+ [8] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural Message Passing for Quantum Chemistry. In International Conference on Machine Learning (ICML), pages 1-14, 2017. 1
150
+
151
+ [9] William L. Hamilton. Graph Representation Learning. 2020. 1
152
+
153
+ [10] Wenbing Huang, Yu Rong, Tingyang Xu, Fuchun Sun, and Junzhou Huang. Tackling Over-Smoothing for General Graph Convolutional Networks. 14(8):1-14, 2020. 1
154
+
155
+ [11] Thomas N Kipf and Max Welling. Semi-Supervised Learning with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR), 2017. 1, 2
156
+
157
+ [12] Kwei Herng Lai, Daochen Zha, Kaixiong Zhou, and Xia Hu. Policy-GNN: Aggregation Optimization for Graph Neural Networks. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 461-471, 2020. 1
158
+
159
+ [13] Guohao Li, Matthias Muller, Ali Thabet, and Bernard Ghanem. DeepGCNs: Can GCNs go as deep as CNNs? Proceedings of the IEEE International Conference on Computer Vision, 2019-Octob:9266-9275, 2019. 1
160
+
161
+ [14] Qimai Li, Zhichao Han, and Xiao Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, pages 3538-3545, 2018. 1
162
+
163
+ [15] László Lovász. Large networks and graph limits. Colloquium Publications, 60:487, 2012. 2
164
+
165
+ [16] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably Powerful Graph Networks. In Advances in Neural Information Processing Systems (NeurIPS), pages 1-12, 2019. 2
166
+
167
+ [17] Kenta Oono and Taiji Suzuki. Graph Neural Networks Exponentially Lose Expressive Power for Node Classification. In International Conference on Learning Representation (ICLR), 2020. 1,3
168
+
169
+ [18] Felix Wu, Tianyi Zhang, Amauri Holanda de Souza, Christopher Fifty, Tao Yu, and Kilian Q. Weinberger. Simplifying Graph Convolutional Networks. 2019. 2
170
+
171
+ [19] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A Comprehensive Survey on Graph Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, pages 1-21, 2020. 1
172
+
173
+ [20] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How Powerful are Graph Neural Networks? In ICLR, pages 1-15, 2019. 2
174
+
175
+ [21] Lingxiao Zhao and Leman Akoglu. PairNorm: Tackling Oversmoothing in GNNs. pages 1-17, 2019. 1
papers/LOG/LOG 2022/LOG 2022 Conference/KQNsbAmJEug/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,127 @@
1
+ § NOT TOO LITTLE, NOT TOO MUCH: A THEORETICAL ANALYSIS OF GRAPH (OVER)SMOOTHING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ We analyze graph smoothing with mean aggregation, where each node successively receives the average of the features of its neighbors. Indeed, it has been observed that Graph Neural Networks (GNNs), which generally follow some variant of Message-Passing (MP) with repeated aggregation, may be subject to the over-smoothing phenomenon: by performing too many rounds of MP, the node features tend to converge to a non-informative limit. At the other end of the spectrum, it is intuitively obvious that some MP rounds are necessary, but existing analyses do not exhibit both phenomena at once. In this paper, we consider simplified linear GNNs, and rigorously analyze two examples of random graphs for which a finite number of mean aggregation steps provably improves the learning performance, before oversmoothing kicks in. We identify two key phenomena: graph smoothing shrinks non-principal directions in the data faster than principal ones, which is useful for regression, and shrinks nodes within communities faster than they collapse together, which improves classification.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ In recent years, deep architectures such as Graph Neural Networks (GNNs), along with the availability of large sets of graph data, have significantly broadened the field of machine learning on graphs and structured data, see $\left\lbrack {3,4,9,{19}}\right\rbrack$ for reviews. Most GNNs rely on the Message-Passing (MP) framework $\left\lbrack {8,{11}}\right\rbrack$ . At each layer $k$ , for each node $i$ , a representation ${z}_{i}^{\left( k\right) }$ is computed using the representations of the neighbors ${\mathcal{N}}_{i}$ of $i$ in the graph at the previous layer: ${z}_{i}^{\left( k\right) } = \operatorname{AGG}\left( {\left\{ {z}_{j}^{\left( k - 1\right) }\right\} }_{j \in {\mathcal{N}}_{i}}\right)$ , where AGG is an aggregation function that treats ${\left\{ {z}_{j}^{\left( k - 1\right) }\right\} }_{j \in {\mathcal{N}}_{i}}$ as an unordered set, to respect the absence of node ordering in the graph. Here we consider one of the most classical, mean aggregation:
16
+
17
+ $$
18
+ {z}_{i}^{\left( k\right) } = \frac{1}{\mathop{\sum }\limits_{j}{a}_{ij}}\mathop{\sum }\limits_{j}{a}_{ij}\Psi \left( {z}_{j}^{\left( k - 1\right) }\right) \tag{1}
19
+ $$
20
+
21
+ where the ${a}_{ij} \in {\mathbb{R}}_{ + }$ are the entries of the adjacency matrix of the graph, and $\Psi$ is some function (usually a Multi-Layer Perceptron).While MP is a natural and rather general framework, its limitations were quickly observed by researchers and practitioners. Foremost among them is the so-called oversmoothing phenomenon [14]: as the GNN gets deeper and many rounds of MP are performed, the node features ${z}_{i}^{\left( k\right) }$ tend to become too similar across the graph. To relieve it, researchers have explored residual mechanisms $\left\lbrack {7,{13}}\right\rbrack$ , dropping connections $\left\lbrack {10}\right\rbrack$ , clever normalizations $\left\lbrack {21}\right\rbrack$ or regularizations [6], among others. Some works have acknowledged the important role of the aggregation function, and proposed new exotic diffusion strategies [2] or to optimize it [12].
22
+
23
+ On the theoretical side, oversmoothing has mostly been analyzed in the infinite-layer limit $k \rightarrow \infty$ . In this case, classical spectral analysis of graph operators such as the Laplacian can be leveraged to indeed show that node features will always converge to some limit that carries a limited amount of information [17]. This is particularly true for mean aggregation (1). However, there has been little research at the other end of the spectrum. Generally, researchers show the power of GNNs for a sufficient (unbounded) number of layers, such as the now-famous ability to distinguish graph isomorphism as well as the Weisfeiler-Lehman test and all its variants $\left\lbrack {{16},{20}}\right\rbrack$ . Since these results are valid for an unbounded number of layers, the settings adopted in these works are, by definition, incompatible with non-informative oversmoothing.
24
+
25
+ In a recent preprint ${\left\lbrack 1\right\rbrack }^{1}$ , we showcase two representative examples, of regression and classification, on which linear GNNs (sometimes called SGC [18]) are provably subject to this double phenomenon: some smoothing is useful for learning, while too much smoothing inevitably leads to oversmoothing.
26
+
27
+ We adopt a model of latent-space random graphs with node features. We identify two key phenomena: smoothing shrinks non-principal directions in the data faster than principal ones (Sec. 4), and shrinks communities faster than they collapse together (Sec. 5). Although our theoretical settings are obviously simplified, we believe this is a step towards a better comprehension of graph aggregation and of the relationship between node features and graph structure. All proofs are given in the full paper [1], of which the present document is an extended abstract.
28
+
29
+ § 2 PRELIMINARIES
30
+
31
+ Notations. The norm $\parallel \cdot \parallel$ is the Euclidean norm for vectors and spectral norm for (rectangular) matrices. For a psd matrix $\sum$ , the Mahalanobis norm is $\parallel x{\parallel }_{\sum }^{2}\overset{\text{ def. }}{ = }{x}^{\top }{\sum x}$ . The determinant of a matrix is $\left| S\right|$ , and its smallest eigenvalue is ${\lambda }_{\min }\left( S\right)$ . The multivariate Gaussian distribution with mean $\mu$ and covariance $\sum$ is denoted by ${\mathcal{N}}_{\mu ,\sum }\left( x\right) = \det {\left( 2\pi \sum \right) }^{-\frac{1}{2}}{e}^{-\frac{1}{2}\parallel x - \mu {\parallel }_{{\sum }^{-1}}^{2}}$ .
32
+
33
+ SSL.. In this paper, we consider Semi-Supervised Learning (SSL) [5, 11] on an undirected graph of size $n$ . We observe a weighted adjacency matrix $A = {\left\lbrack {a}_{ij}\right\rbrack }_{i,j = 1}^{n} \in {\mathbb{R}}_{ + }^{n \times n}$ as well as node features ${z}_{1},\ldots {z}_{n} \in {\mathbb{R}}^{p}$ at each node of the graph. We also observe some labels ${y}_{1},\ldots ,{y}_{{n}_{\mathrm{{tr}}}} \in \mathbb{R}$ at training time and aim to predict the remaining labels ${y}_{{n}_{\mathrm{{tr}}} + 1},\ldots ,{y}_{n}$ . For simplicity, we assume that ${n}_{\mathrm{{tr}}}$ and ${n}_{\text{ te }}$ are both in $\mathcal{O}\left( n\right)$ . We denote by $Z \in {\mathbb{R}}^{n \times p}$ the matrix whose rows contain the node features, ${Z}_{\mathrm{{tr}}},{Z}_{\mathrm{{te}}}$ respectively its first ${n}_{\mathrm{{tr}}}$ and last ${n}_{\mathrm{{te}}}$ rows, and similarly ${Y}_{\mathrm{{tr}}},{Y}_{\mathrm{{te}}}$ the vectors containing the observed and non-observed labels.
34
+
35
+ Graph smoothing with mean aggregation. Here we consider a simplified situation of linear GNN with mean aggregation, often used as a theoretical baseline [18]. A linear GNN with $k$ layers just corresponds to performing $k$ rounds of mean aggregation on the node features, then learning on the smoothed features. We denote by ${d}_{A} = {\left\lbrack \mathop{\sum }\limits_{i}{a}_{ij}\right\rbrack }_{j} \in {\mathbb{R}}_{ + }^{n}$ the vector containing the degrees of the graph and $D = \operatorname{diag}\left( {d}_{A}\right)$ . Assuming that all degrees are non-zero, performing one round of mean aggregation corresponds to multiplying $Z$ by $L = {D}^{-1}A$ . The smoothed node features after $k$ rounds of mean aggregation are: ${Z}^{\left( k\right) } = {L}^{k}Z$ . Each row, denoted by ${z}_{i}^{\left( k\right) } \in {\mathbb{R}}^{p}$ , contains the smoothed features of an individual node. Its first ${n}_{\mathrm{{tr}}}$ and last ${n}_{\mathrm{{te}}}$ rows are denoted ${Z}_{\mathrm{{tr}}}^{\left( k\right) },{Z}_{\mathrm{{te}}}^{\left( k\right) }$ .
36
+
37
+ Learning. In this paper, we consider learning with a Mean Square Error (MSE) loss and Ridge regularization. For $\lambda > 0$ , the regression coefficients vector on the smoothed features is
38
+
39
+ $$
40
+ {\widehat{\beta }}^{\left( k\right) }\overset{\text{ def. }}{ = }{\operatorname{argmin}}_{\beta }\frac{1}{{n}_{\mathrm{{tr}}}}{\begin{Vmatrix}{Y}_{\mathrm{{tr}}} - {Z}_{\mathrm{{tr}}}^{\left( k\right) }\beta \end{Vmatrix}}^{2} + \lambda \parallel \beta {\parallel }^{2} = {\left( \frac{{\left( {Z}_{\mathrm{{tr}}}^{\left( k\right) }\right) }^{\top }{Z}_{\mathrm{{tr}}}^{\left( k\right) }}{{n}_{\mathrm{{tr}}}} + \lambda \mathrm{{Id}}\right) }^{-1}\frac{{\left( {Z}_{\mathrm{{tr}}}^{\left( k\right) }\right) }^{\top }{Y}_{\mathrm{{tr}}}}{{n}_{\mathrm{{tr}}}} \tag{2}
41
+ $$
42
+
43
+ Then, the test risk is defined as
44
+
45
+ $$
46
+ {\mathcal{R}}^{\left( k\right) }\overset{\text{ def. }}{ = }{n}_{\text{ te }}^{-1}{\begin{Vmatrix}{Y}_{\text{ te }} - {\widehat{{Y}_{\text{ te }}}}^{\left( k\right) }\end{Vmatrix}}^{2}\;\text{ where }{\widehat{{Y}_{\text{ te }}}}^{\left( k\right) } = {Z}_{\text{ te }}^{\left( k\right) }{\widehat{\beta }}^{\left( k\right) } \tag{3}
47
+ $$
48
+
49
+ Our goal is to illustrate some situations where a finite amount of smoothing provably improves the test risk, that is, there is an optimal ${k}^{ \star } > 0$ such that ${\mathcal{R}}^{\left( {k}^{ \star }\right) } < \min \left( {{\mathcal{R}}^{\left( 0\right) },{\mathcal{R}}^{\left( \infty \right) }}\right)$ .
50
+
51
+ Random graph model. We adopt popular latent space random graph models akin to graphons [15]. In these models, to each node $i$ is associated an unobserved latent variable ${x}_{i} \in {\mathbb{R}}^{d}$ with $d \geq p$ , and edge weights are assumed to be equal to ${a}_{ij} = W\left( {{x}_{i},{x}_{j}}\right)$ where $W : {\mathbb{R}}^{d} \times {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}_{ + }$ is a connectivity kernel. Note that edges may also be taken as random Bernoulli variables, but we do not consider this here for simplicity. Moreover, we consider that the $\left( {{x}_{i},{y}_{i}}\right)$ are drawn iid from
52
+
53
+ ${}^{1}$ the reference has been anonymized for review purpose.
54
+
55
+ some joint distribution, and the node features are a linear projection of the latent variables to a lower dimension: ${z}_{i} = {M}^{\top }{x}_{i}$ for some unknown $M \in {\mathbb{R}}^{d \times p}$ that satisfies ${M}^{\top }M = {\operatorname{Id}}_{p}$ . To summarize:
56
+
57
+ $$
58
+ \forall i,j,\;\left( {{x}_{i},{y}_{i}}\right) \overset{\text{ iid }}{ \sim }P,\;{z}_{i} = {M}^{\top }{x}_{i},\;{a}_{ij} = W\left( {{x}_{i},{x}_{j}}\right) \tag{4}
59
+ $$
60
+
61
+ For this model, note that ${Z}^{\left( k\right) } = {X}^{\left( k\right) }M$ where ${X}^{\left( k\right) } = {L}^{k}X$ . In the rest of the paper, we use the Gaussian kernel with a small additive term $\varepsilon > 0$ :
62
+
63
+ $$
64
+ W\left( {x,y}\right) = \varepsilon + {W}_{g}\left( {x,y}\right) \;\text{ where }{W}_{g}\left( {x,y}\right) \overset{\text{ def. }}{ = }{e}^{-\frac{1}{2}\parallel x - y{\parallel }^{2}} \tag{5}
65
+ $$
66
+
67
+ The coefficient $\varepsilon$ is added to lower-bound the degrees of the graph and avoid degenerate situations.
68
+
69
+ § 3 OVERSMOOTHING
70
+
71
+ In this section, we briefly examine the oversmoothing case, when $k \rightarrow \infty$ while all other parameters are fixed. In this case, it is well-known that all node features converge even for general GNNs [17]. For completeness, we state below this result in our settings.
72
+
73
+ Theorem 1. Define $v = {Z}^{\top }\bar{d}$ and ${\bar{y}}_{\mathrm{{tr}}} = {n}_{\mathrm{{tr}}}^{-1}\mathop{\sum }\limits_{{i = 1}}^{{n}_{\mathrm{{tr}}}}{y}_{i}$ . We have ${\widehat{Y}}_{\mathrm{{te}}}^{\left( k\right) }\xrightarrow[{k \rightarrow \infty }]{}\left( {\frac{\parallel v{\parallel }^{2}}{\lambda + \parallel v{\parallel }^{2}}{\bar{y}}_{\mathrm{{tr}}}}\right) {1}_{{n}_{\mathrm{{te}}}}$ .
74
+
75
+ Hence, in the limit $k \rightarrow \infty$ , the predicted labels become all equal. Using simple concentration inequalities, it is generally easy to show that ${\mathcal{R}}^{\left( \infty \right) } \approx \operatorname{Var}\left( y\right) + \mathcal{O}\left( {1/\sqrt{n}}\right)$ when $\lambda \rightarrow 0$ . In most cases, this leads to situations where ${\mathcal{R}}^{\left( 0\right) } < {\mathcal{R}}^{\left( \infty \right) }$ , and oversmoothing occurs.
76
+
77
+ § 4 FINITE SMOOTHING: LINEAR REGRESSION
78
+
79
+ In this section, we consider a problem of linear regression on Gaussian data. We consider $x \sim {\mathcal{N}}_{0,\sum }$ for some positive definite covariance matrix $\sum$ , and $y = {x}^{\top }{\beta }^{ \star }$ , without noise for simplicity. For a symmetric positive semi-definite matrix $S \in {\mathbb{R}}^{d \times d}$ , we define the following function
80
+
81
+ $$
82
+ {R}_{\text{ reg. }}\left( S\right) \overset{\text{ def. }}{ = }{\left( {\sum }^{\frac{1}{2}}{\beta }^{ \star }\right) }^{\top }{\left( \operatorname{Id} - {S}^{\frac{1}{2}}M{\left( \lambda \operatorname{Id} + {M}^{\top }SM\right) }^{-1}{M}^{\top }{S}^{\frac{1}{2}}\right) }^{2}\left( {{\sum }^{\frac{1}{2}}{\beta }^{ \star }}\right) \in {\mathbb{R}}_{ + } \tag{6}
83
+ $$
84
+
85
+ where we recall that $M$ is the projection matrix to obtain the node features $z = {M}^{\top }x$ . Note that it satisfies $0 \leq R\left( S\right) \leq {\begin{Vmatrix}{\beta }^{ \star }\end{Vmatrix}}_{\sum }^{2}$ . We additionally define ${\sum }^{\left( k\right) } = {\left( \operatorname{Id} + {\sum }^{-1}\right) }^{-{2k}}\sum$ . The main result of this section is the following.
86
+
87
+ Theorem 2 (Regression risk without smoothing.). With probability at least $1 - \rho$ ,
88
+
89
+ $$
90
+ {\mathcal{R}}^{\left( 0\right) } = {R}_{\text{ reg. }}\left( \sum \right) + \mathcal{O}\left( \frac{\parallel \sum \parallel {\begin{Vmatrix}{\beta }^{ \star }\end{Vmatrix}}^{2}d\sqrt{\log \left( {1/\rho }\right) }}{\left( {\lambda + {\lambda }_{\text{ min }}^{\left( 0\right) }}\right) \sqrt{n}}\right) \tag{7}
91
+ $$
92
+
93
+ and
94
+
95
+ $$
96
+ {\mathcal{R}}^{\left( 1\right) } = {R}_{\text{ reg. }}\left( {\sum }^{\left( 1\right) }\right) + \mathcal{O}\left( {C{\varepsilon }^{1/5}}\right) + \mathcal{O}\left( \frac{{C}^{\prime }\log n\sqrt{d + \log \left( {1/\rho }\right) }}{\left( {\lambda + {\lambda }_{\min }^{\left( 1\right) }}\right) \sqrt{n}}\right) \tag{8}
97
+ $$
98
+
99
+ where $C = \operatorname{poly}\left( {\parallel \sum \parallel ,{e}^{d},\left| {\mathrm{{Id}} + \sum }\right| }\right) ,{C}^{\prime } = \operatorname{poly}\left( {{\varepsilon }^{-1},\parallel \sum \parallel ,\begin{Vmatrix}{\beta }^{ \star }\end{Vmatrix}}\right)$ and ${\lambda }_{\min }^{\left( k\right) } = {\lambda }_{\min }\left( {{M}^{\top }{\sum }^{\left( k\right) }M}\right)$ .
100
+
101
+ This theorem gives a limiting expression of ${\mathcal{R}}^{\left( 0\right) }$ and ${\mathcal{R}}^{\left( 1\right) }$ with additional error terms. Since it is easy to show that for $n$ large enough ${\mathcal{R}}^{\left( 0\right) } \leq {\mathcal{R}}^{\left( \infty \right) } \approx \operatorname{Var}\left( y\right)$ , we obtain the following result.
102
+
103
+ Corollary 1. Take any $\rho > 0$ , and suppose ${R}_{\text{ reg. }}\left( {\sum }^{\left( 1\right) }\right) < {R}_{\text{ reg. }}\left( \sum \right)$ . If $\varepsilon$ is sufficiently small and $n$ is sufficiently large, then with probability $1 - \rho$ , there is ${k}^{ \star } > 0$ such that ${\mathcal{R}}^{\left( {k}^{ \star }\right) } < \min \left( {{\mathcal{R}}^{\left( 0\right) },{\mathcal{R}}^{\left( \infty \right) }}\right)$ .
104
+
105
+ In other words, under some hypothesis on ${R}_{\text{ reg. }}$ , there is indeed coexistence of beneficial finite smoothing and oversmoothing. Below we exhibit a simple example where this hypothesis is satisfied.
106
+
107
+ As expected in linear regression, the covariance of the ${x}_{k}^{\left( k\right) }$ is key in the expression of the risk. It can be seen in the proof of Theorem 2 (available at [1]) that ${x}^{\left( 1\right) }$ behaves like ${\left( \operatorname{Id} + {\sum }^{-1}\right) }^{-1}x$ , whose covariance is ${\sum }^{\left( 1\right) }$ , hence the consequence that ${\mathcal{R}}^{\left( 1\right) } \approx {R}_{\text{ reg. }}\left( {\sum }^{\left( 1\right) }\right)$ . Similarly, by applying repeated smoothing we can extrapolate that ${x}^{\left( k\right) }$ behaves like ${\left( \operatorname{Id} + {\sum }^{-1}\right) }^{-k}x$ , such that ${\mathcal{R}}^{\left( k\right) } \approx {R}_{\text{ reg. }}\left( {\sum }^{\left( k\right) }\right)$ .
108
+
109
+ < g r a p h i c s >
110
+
111
+ Figure 1: Illustration of mean aggregation smoothing on the $2\mathrm{D}$ example described in the text. First three figures on the left, top: unobserved latent variables ${X}^{\left( k\right) }$ in dimension $d = 2$ where the colors are the $Y$ ; bottom: observed node features ${Z}^{\left( k\right) } = {X}^{\left( k\right) }M$ in dimension $p = 1$ on the x-axis, labels $Y$ on the y-axis. From left to right, three order of smoothing $k = 0,1$ and 2 are represented. Figure on the right: comparison of empirical and theoretical MSE (details in [1]) with respect to order of smoothing $k$ .
112
+
113
+ < g r a p h i c s >
114
+
115
+ Figure 2: Illustration of mean aggregation smoothing on a classification task with two Gaussians with dimensions $d = 2,p = 1$ , where $M$ projects on the first coordinate. First three figures on the left, top: density of unobserved latent variables ${X}^{\left( k\right) }$ in dimension $d = 2$ ; bottom: density of observed node features ${Z}^{\left( k\right) } =$ ${X}^{\left( k\right) }M$ in dimension $p = 1$ . From left to right, three order of smoothing $k = 0,1$ and 2 are represented. Figure on the right: comparison of empirical and theoretical MSE with respect to order of smoothing $k$ .
116
+
117
+ The matrix ${\sum }^{\left( k\right) }$ has the same eigendecomposition as $\sum$ , but where every ${\lambda }_{i}$ is replaced by ${\lambda }_{i}^{\left( k\right) } =$ ${\left( 1 + 1/{\lambda }_{i}\right) }^{-{2k}}{\lambda }_{i}$ . This can be interpreted as follows: when ${\lambda }_{i} \gg 1$ is large, ${\lambda }_{i}^{\left( k\right) } \sim {\lambda }_{i}$ , while if ${\lambda }_{i} \ll 1$ is small, ${\lambda }_{i}^{\left( k\right) } \sim {\lambda }_{i}^{{2k} + 1}$ . Hence smoothing shrinks the directions of the small eigenvalues faster than that of the large ones. Thus, if ${\beta }^{ \star }$ is mostly aligned with the eigenvectors of large eigenvalues, smoothing may reduce unwanted noise in the node features $z = {M}^{\top }x$ .
118
+
119
+ We illustrate this on a toy situation (Fig. 1). Consider the following settings: $d = 2,p = 1,\sum$ has two eigenvalues ${\lambda }_{1} = 2$ and ${\lambda }_{2} = 1/2$ , with respective eigenvectors ${u}_{1} = \left\lbrack {1,1}\right\rbrack /\sqrt{2}$ and ${u}_{2} = \left\lbrack {-1,1}\right\rbrack /\sqrt{2}$ , and ${\beta }^{ \star } = b{u}_{1}$ . Finally, ${M}^{\top } = \left\lbrack {1,0}\right\rbrack$ is the projection on the first coordinate. In this case, we can compute explicitely: ${\mathcal{R}}^{\left( k\right) } \approx {R}_{\text{ reg. }}\left( {\sum }^{\left( k\right) }\right) = {\lambda }_{1}{b}^{2}\frac{{\left( 2\lambda + {\lambda }_{2}^{\left( k\right) }\right) }^{2} + {\lambda }_{2}^{\left( k\right) }{\lambda }_{1}^{\left( k\right) }}{{\left( 2\lambda + {\lambda }_{1}^{\left( k\right) } + {\lambda }_{2}^{\left( k\right) }\right) }^{2}}$ . So, if ${\lambda }_{2}^{\left( k\right) }$ decreases faster than ${\lambda }_{1}^{\left( k\right) }$ , this function will first decrease to a minimum of approximately ${\lambda }_{1}{b}^{2}{\left( \frac{2\lambda }{{2\lambda } + {\lambda }_{1}^{\left( {k}^{ \star }\right) }}\right) }^{2}$ (when ${\lambda }_{2}^{\left( k\right) } \approx 0$ ), before increasing again to ${\lambda }_{1}{b}^{2} = {\begin{Vmatrix}{\beta }^{ \star }\end{Vmatrix}}_{\sum }^{2} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mathcal{R}}^{\left( \infty \right) }$ .
120
+
121
+ § 5 FINITE SMOOTHING: CLASSIFICATION
122
+
123
+ In this last section, we examine a simple classification problem for two balanced classes with Gaussian distribution: $\left( {x,y}\right) \sim \left( {1/2}\right) \left( {{\mathcal{N}}_{\mu ,\mathrm{{Id}}}\otimes \{ 1\} + {\mathcal{N}}_{-\mu ,\mathrm{{Id}}}\otimes \{ - 1\} }\right)$ . We note that this is not a difficult problem per se, and that linear regression is certainly not the method of choice to solve it. Our main goal is to illustrate the smoothing phenomenon. Our main result is the following.
124
+
125
+ Theorem 3. Take any $\rho > 0$ . If $\varepsilon$ is sufficiently small, and $\parallel \mu \parallel ,n$ are sufficiently large, and $\begin{Vmatrix}{{M}^{\top }\mu }\end{Vmatrix} > 0$ , then with probability $1 - \rho$ , there is ${k}^{ \star } > 0$ such that ${\mathcal{R}}^{\left( {k}^{ \star }\right) } < \min \left( {{\mathcal{R}}^{\left( 0\right) },{\mathcal{R}}^{\left( \infty \right) }}\right)$ .
126
+
127
+ Note that we have assumed $\parallel \mu \parallel$ to be sufficiently large here. However, we do not assume that $\begin{Vmatrix}{{M}^{\top }\mu }\end{Vmatrix}$ is large (just non-zero), and the classification problem on the ${z}_{i}$ alone may be very difficult. As seen in the proof [1] and Fig. 2 on a $d = 2$ example, the interpretation here is the following: in the proper regime, communities will initially concentrate in the latent space faster than they get closer to each other, which helps learning on the ${z}^{\left( k\right) }$ . Then, they eventually collapse together.
papers/LOG/LOG 2022/LOG 2022 Conference/KUGwmnSdPV3/Initial_manuscript_md/Initial_manuscript.md ADDED
The diff for this file is too large to render. See raw diff
 
papers/LOG/LOG 2022/LOG 2022 Conference/KUGwmnSdPV3/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,288 @@
1
+ § MSGNN: A SPECTRAL GRAPH NEURAL NETWORK BASED ON A NOVEL MAGNETIC SIGNED LAPLACIAN
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Signed and directed networks are ubiquitous in real-world applications. However, there has been relatively little work proposing spectral graph neural networks (GNNs) for such networks. Here we introduce a signed directed Laplacian matrix, which we call the magnetic signed Laplacian, as a natural generalization of both the signed Laplacian on signed graphs and the magnetic Laplacian on directed graphs. We then use this matrix to construct a novel efficient spectral GNN architecture and conduct extensive experiments on both node clustering and link prediction tasks. In these experiments, we consider tasks related to signed information, tasks related to directional information, and tasks related to both signed and directional information. We demonstrate that our proposed spectral GNN is effective for incorporating both signed and directional information, and attains leading performance on a wide range of data sets. Additionally, we provide a novel synthetic network model, which we refer to as the signed directed stochastic block model, and a number of novel real-world data sets based on lead-lag relationships in financial time series.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Graph Neural Networks (GNNs) have emerged as a powerful tool for extracting information from graph-structured data and have achieved state-of-the-art performance on a variety of machine learning tasks. However, compared to research on constructing GNNs for unsigned and undirected graphs, and graphs with multiple types of edges, GNNs for graphs where the edges have a natural notion of sign, direction, or both, have received relatively little attention.
16
+
17
+ There is a demand for such tools because many important and interesting phenomena are naturally modeled as signed and/or directed graphs, i.e., graphs in which objects may have either positive or negative relationships, and/or in which such relationships are not necessarily symmetric [1]. For example, in the analysis of social networks, positive and negative edges could model friendship or enmity, and directional information could model the influence of one person on another [2, 3]. Signed/directed networks also arise when analyzing time-series data with lead-lag relationships [4], detecting influential groups in social networks [5], and computing rankings from pairwise comparisons [6]. Additionally, signed and directed networks are a natural model for group conflict analysis [7], modeling the interaction network of the agents during a rumor spreading process [8], and maximizing positive influence while formulating opinions [9].
18
+
19
+ Broadly speaking, most GNNs may be classified as either spectral or spatial. Spatial methods typically define convolution on graphs as a localized aggregation operator whereas spectral methods rely on the eigen-decomposition of a suitable graph Laplacian. The goal of this paper is to introduce a novel Laplacian and an associated spectral GNN for signed directed graphs. While spatial GNNs exist, such as SDGNN [3], SiGAT [10], SNEA [11], and SSSNET [12] proposed for signed (and possibly directed) networks, this is one of the first works to propose a spectral GNN for such networks.
20
+
21
+ A principal challenge in extending traditional spectral GNNs to this setting is to define a proper notion of the signed, directed graph Laplacian. Such a Laplacian should be positive semidefinite, have a bounded spectrum when properly normalized, and encode information about both the sign and direction of each edge. Here, we unify the magnetic Laplacian, which has been used in [13] to construct a GNN on an (unsigned) directed graph, with a signed Laplacian which has been used for a variety of data science tasks on (undirected) signed graphs [14-17]. Importantly, our proposed matrix, which we refer to as the magnetic signed Laplacian, reduces to either the magnetic Laplacian or the signed Laplacian when the graph is directed, but not signed, or signed, but not directed.
22
+
23
+ Although this magnetic signed Laplacian is fairly straightforward to obtain, it is novel and surprisingly powerful: We show that our proposed Magnetic Signed GNN (MSGNN) is effective for a variety of node clustering and link prediction tasks. Specifically, we consider several variations of the link prediction task, some of which prioritize signed information over directional information, some of which prioritize directional information over signed information, while others emphasize the method's ability to extract both signed and directional information simultaneously.
24
+
25
+ In addition to testing MSGNN on established data sets, we also devise a novel synthetic model which we call the Signed Directed Stochastic Block Model (SDSBM), which generalizes both the (undirected) Signed Stochastic Block Model from [12] and the (unsigned) Directed Stochastic Block Model from [5]. Analogous to these previous models, our SDSBM can be defined by a meta-graph structure and additional parameters describing density and noise levels. We also introduce a number of signed directed networks for link prediction tasks using lead-lag relationships in real-world financial time series.
26
+
27
+ Main Contributions. The main contributions of our work are: (1) We devise a novel matrix called the magnetic signed Laplacian, which can naturally be applied to signed and directed networks. The magnetic signed Laplacian is Hermitian, positive semidefinite, and the eigenvalues of its normalized counterpart lie in $\left\lbrack {0,2}\right\rbrack$ . It reduces to existing Laplacians when the network is unsigned and/or undirected. (2) We propose an efficient spectral graph neural network architecture, MSGNN, based on this magnetic signed Laplacian, which attains leading performance on extensive node clustering and link prediction tasks, including novel tasks that consider edge sign and directionality jointly. To the best of our knowledge, this is the first work to evaluate ${\mathrm{{GNNs}}}^{1}$ on tasks that are related to both edge sign and directionality. (3) We introduce a novel synthetic model for signed and directed networks, called Signed Directed Stochastic Block Model (SDSBM), and also contribute a number of new real-world data sets constructed from lead-lag relationships of financial time series data.
28
+
29
+ § 2 RELATED WORK
30
+
31
+ In this section, we review related work constructing neural networks for directed graphs and signed graphs. We refer the reader to [1] for more background information.
32
+
33
+ Several works have aimed to define neural networks on directed graphs by constructing various directed graph Laplacians and defining convolution as multiplication in the associated eigenbasis. [18] defines a directed graph Laplacian by generalizing identities involving the undirected graph Laplacian and the stationary distribution of a random walk. [19] uses a similar idea, but with PageRank in place of a random walk. [20] constructs three different first- and second-order symmetric adjacency matrices and uses these adjacency matrices to define associated Laplacians. Similarly, [21] uses several different graph Laplacians based on various graph motifs.
34
+
35
+ Quite closely related to our work, [13] constructs a graph neural network using the magnetic Laplacian. Indeed, in the case where all links are positive, our GNN exactly reduces to the one proposed in [13]. Importantly, unlike the other directed graph Laplacians mentioned here, the magnetic Laplacian is a complex, Hermitian matrix rather than a real, symmetric matrix. We also note [5], which constructs a GNN for node clustering on directed graphs based on flow imbalance.
36
+
37
+ All of the above works are restricted to unsigned graphs, i.e., graphs with positive edge weights. However, there are also a number of neural networks introduced for signed (and possibly also directed) graphs, mostly focusing on the task of link sign prediction, i.e., predicting whether a link between two nodes will be positive or negative. SGCN by [22] is one of the first graph neural network methods to be applicable to signed networks, using an approach based on balance theory [23]. However, its design is mainly aimed at undirected graphs. SiGAT [10] utilizes a graph attention mechanism based on [24] to learn node embeddings for signed, directed graphs, using a novel motif-based GNN architecture based on balance theory and status theory [25]. Subsequently, SDGNN by [3] builds upon this work by increasing its efficiency and proposing a new objective function. In a similar vein, SNEA [11] proposes a signed graph neural network for link sign prediction based on a novel objective function. In a different line of work, [12] proposes SSSNET, a GNN not based on balance theory designed for semi-supervised node clustering in signed (and possibly directed) graphs.
38
+
39
+ ${}^{1}$ Some previous work, such as [3], evaluates GNNs on signed and directed graphs. However, they focus on tasks where either only signed information is important, or where only directional information is important.
40
+
41
+ Additionally, we note that several GNNs [26-28] have been introduced for multi-relational graphs, i.e., graphs with different types of edges. In such networks, the number of learnable parameters typically increases linearly with the number of edge types. Signed graphs, at least if the graph is unweighted or the weighting function $w$ only takes finitely many values, can be thought of as special cases of multi-relational graphs. However, in the context of (possibly weighted) signed graphs, there is an implicit relationship between the different edge-types, namely that a negative edge is interpreted as the opposite of a positive edge and that edges with large weights are deemed more important than edges with small weights. These relationships will allow us to construct a network with significantly fewer trainable parameters than if we were considering an arbitrary multi-relational graph.
42
+
43
+ We are aware of a concurrent preprint [29] which also constructs a GNN, SigMaNet, based on a signed magnetic Laplacian. Notably, the signed magnetic Laplacian considered in [29] is different from the magnetic signed Laplacian proposed here. The claimed advantage of SigMaNet is that it does not require the tuning of a charge parameter $q$ and is invariant to, e.g., doubling the weight of every edge. In our work, for the sake of simplicity, we usually set $q = {0.25}$ , except for when the graph is undirected (in which case we set $q = 0$ ). However, a user may choose to also tune $q$ through a standard cross-validation procedure as in [13]. Moreover, one can readily address the latter issue by normalizing the adjacency matrix via a preprocessing step (e.g., [30]). In contrast to our magnetic signed Laplacian, in the case where the graph is not signed but is weighted and directed, the matrix proposed in [29] does not reduce to the magnetic Laplacian considered in [13]. For example, denoting the graph adjacency matrix by $\mathbf{A}$ , consider the case where $0 < {\mathbf{A}}_{j,i} < {\mathbf{A}}_{i,j}$ . Let $m = \frac{1}{2}\left( {{\mathbf{A}}_{i,j} + {\mathbf{A}}_{j,i}}\right) ,\delta = {\mathbf{A}}_{i,j} - {\mathbf{A}}_{j,i}$ , and let $\mathrm{i}$ denote the imaginary unit. Then the(i, j)-th entry of the matrix ${\mathbf{L}}^{\sigma }$ proposed in [29] is given by ${\mathbf{L}}_{i,j}^{\sigma } = m\mathbf{i}$ , whereas the corresponding entry of the unnormalized magnetic Laplacian is given by ${\left( {\mathbf{L}}_{U}^{\left( q\right) }\right) }_{i,j} = m\exp \left( {{2\pi }\mathrm{i}{q\delta }}\right)$ . Moreover, while SigMaNet is in principle well-defined on signed and directed graphs, the experiments in [29] are restricted to tasks where only signed or directional information is important (but not both). In our experiments, we find that our proposed method outperforms SigMaNet on a variety of tasks on signed and/or directed networks. Moreover, we observe that the signed magnetic Laplacian ${\mathbf{L}}^{\sigma }$ proposed in [29] has an undesirable property when the graph is unweighted - a node is assigned to have degree zero if it has an equal number of positive and negative connections. Our proposed Laplacian does not suffer from this issue.
44
+
45
+ § 3 PROPOSED METHOD
46
+
47
+ § 3.1 PROBLEM FORMULATION
48
+
49
+ Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},w,{\mathbf{X}}_{\mathcal{V}}}\right)$ denote a signed, and possibly directed, weighted graph with node attributes, where $\mathcal{V}$ is the set of nodes (or vertices), $\mathcal{E}$ is the set of (directed) edges (or links), and $w : \mathcal{E} \rightarrow \left( {-\infty ,\infty }\right) \smallsetminus \{ 0\}$ is the weighting function. Let ${\mathcal{E}}^{ + } = \{ e \in \mathcal{E} : w\left( e\right) > 0\}$ denote the set of positive edges and let ${\mathcal{E}}^{ - } = \{ e \in \mathcal{E} : w\left( e\right) < 0\}$ denote the set of negative edges so that $\mathcal{E} = {\mathcal{E}}^{ + } \cup {\mathcal{E}}^{ - }$ . Here, we do allow self loops but not multiple edges; if ${v}_{i},{v}_{j} \in \mathcal{V}$ , there is at most one edge $e \in \mathcal{E}$ from ${v}_{i}$ to ${v}_{j}$ . Let $n = \left| \mathcal{V}\right|$ , and let ${d}_{\text{ in }}$ be the number of attributes at each node, so that ${\mathbf{X}}_{\mathcal{V}}$ is an $n \times {d}_{\text{ in }}$ matrix whose rows are the attributes of each node. We let $\mathbf{A} = {\left( {A}_{ij}\right) }_{i,j \in \mathcal{V}}$ denote the weighted, signed adjacency matrix where ${\mathbf{A}}_{i,j} = {w}_{i,j}$ if $\left( {{v}_{i},{v}_{j}}\right) \in \mathcal{E}$ , and ${\mathbf{A}}_{i,j} = 0$ otherwise.
50
+
51
+ § 3.2 MAGNETIC SIGNED LAPLACIAN
52
+
53
+ In this section, we define Hermitian matrices ${\mathbf{L}}_{U}^{\left( q\right) }$ and ${\mathbf{L}}_{N}^{\left( q\right) }$ which we refer to as the unnormalized and normalized magnetic signed Laplacian matrices, respectively. We first define a symmetrized adjacency matrix and an absolute degree matrix by
54
+
55
+ $$
56
+ {\widetilde{\mathbf{A}}}_{i,j} \mathrel{\text{ := }} \frac{1}{2}\left( {{\mathbf{A}}_{i,j} + {\mathbf{A}}_{j,i}}\right) ,\;1 \leq i,j \leq n,{\widetilde{\mathbf{D}}}_{i,i} \mathrel{\text{ := }} \frac{1}{2}\mathop{\sum }\limits_{{j = 1}}^{n}\left( {\left| {\mathbf{A}}_{i,j}\right| + \left| {\mathbf{A}}_{j,i}\right| }\right) ,1 \leq i \leq n,
57
+ $$
58
+
59
+ with ${\widetilde{\mathbf{D}}}_{i,j} = 0$ for $i \neq j$ . Importantly, the use of absolute values ensures that the entries of $\widetilde{\mathbf{D}}$ are nonnegative. Furthermore, it ensures that all ${\widetilde{\mathbf{D}}}_{i,i}$ will be strictly positive if the graph is connected. This is in contrast to the construction in [29] which will give a node degree zero if it has an equal number of positive and negative neighbors (for unweighted networks). To capture directional information, we next define a phase matrix ${\Theta }^{\left( q\right) }$ by ${\Theta }_{i,j}^{\left( q\right) } \mathrel{\text{ := }} {2\pi q}\left( {{\mathbf{A}}_{i,j} - {\mathbf{A}}_{j,i}}\right)$ , where $q \in \mathbb{R}$ is the so-called "charge parameter." In our experiments, for simplicity, we set $q = 0$ when the task at hand is unrelated to directionality, or when the underlying graph is undirected, and we set $q = {0.25}$ for all the other tasks (except in an ablation study on the role of $q$ ). With $\odot$ denoting elementwise multiplication, and $\mathrm{i}$ denoting the imaginary unit, we now construct a complex Hermitian matrix ${\mathbf{H}}^{\left( q\right) }$ by
60
+
61
+ $$
62
+ {\mathbf{H}}^{\left( q\right) } \mathrel{\text{ := }} \widetilde{\mathbf{A}} \odot \exp \left( {\mathrm{i}{\mathbf{\Theta }}^{\left( q\right) }}\right)
63
+ $$
64
+
65
+ where $\exp \left( {\mathrm{i}{\mathbf{\Theta }}^{\left( q\right) }}\right)$ is defined elementwise by $\exp {\left( \mathrm{i}{\mathbf{\Theta }}^{\left( q\right) }\right) }_{i,j} \mathrel{\text{ := }} \exp \left( {\mathrm{i}{\mathbf{\Theta }}_{i,j}^{\left( q\right) }}\right)$ .
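As a concrete illustration, the following is a minimal numpy sketch (not the authors' implementation) of the quantities defined above, i.e. $\widetilde{\mathbf{A}}$, $\widetilde{\mathbf{D}}$, ${\mathbf{\Theta }}^{\left( q\right) }$ and ${\mathbf{H}}^{\left( q\right) }$, for a small dense adjacency matrix; a practical implementation would only evaluate $\exp ( \mathrm{i}{\mathbf{\Theta }}^{\left( q\right) })$ on the nonzero entries of $\widetilde{\mathbf{A}}$.

```python
import numpy as np

def magnetic_signed_parts(A, q=0.25):
    """Return (A_tilde, D_tilde, Theta, H) as defined in Sec. 3.2 (dense sketch)."""
    A = np.asarray(A, dtype=float)
    A_sym = 0.5 * (A + A.T)                          # symmetrized, signed adjacency
    d_abs = 0.5 * (np.abs(A) + np.abs(A).T).sum(axis=1)
    D_abs = np.diag(d_abs)                           # absolute degree matrix
    Theta = 2.0 * np.pi * q * (A - A.T)              # phase matrix (skew-symmetric)
    H = A_sym * np.exp(1j * Theta)                   # H = A_tilde elementwise exp(i Theta)
    return A_sym, D_abs, Theta, H

# toy signed, directed graph on 3 nodes (illustrative)
A = np.array([[0., 1., -1.],
              [0., 0.,  1.],
              [0., 0.,  0.]])
A_sym, D_abs, Theta, H = magnetic_signed_parts(A, q=0.25)
print(np.allclose(H, H.conj().T))   # Hermitian check -> True
```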
66
+
67
+ Note that ${\mathbf{H}}^{\left( q\right) }$ is Hermitian, as $\widetilde{\mathbf{A}}$ is symmetric and ${\mathbf{\Theta }}^{\left( q\right) }$ is skew-symmetric. In particular, when $q = 0$ , we have ${\mathbf{H}}^{\left( 0\right) } = \widetilde{\mathbf{A}}$ . Therefore, setting $q = 0$ is equivalent to making the input graph symmetric and discarding directional information. In general, however, ${\mathbf{H}}^{\left( q\right) }$ captures information about a link’s sign, through $\widetilde{\mathbf{A}}$ , and about its direction, through ${\mathbf{\Theta }}^{\left( q\right) }$ .
68
+
69
+ We observe that flipping the direction of an edge, i.e., replacing a positive or negative link from ${v}_{i}$ to ${v}_{j}$ with a link of the same sign from ${v}_{j}$ to ${v}_{i}$ corresponds to complex conjugation of ${\mathbf{H}}_{i,j}^{\left( q\right) }$ (assuming either that there is not already a link from ${v}_{j}$ to ${v}_{i}$ or that we also flip the direction of that link if there is one). We also note that if $q = {0.25},{\mathbf{A}}_{i,j} = \pm 1$ , and ${\mathbf{A}}_{j,i} = 0$ , we have
70
+
71
+ $$
72
+ {\mathbf{H}}_{i,j}^{\left( {0.25}\right) } = \pm \frac{i}{2} = - {\mathbf{H}}_{j,i}^{\left( {0.25}\right) }.
73
+ $$
74
+
75
+ Thus, a unit-weight edge from ${v}_{i}$ to ${v}_{j}$ is treated as the opposite of a unit-weight edge from ${v}_{j}$ to ${v}_{i}$ . Given ${\mathbf{H}}^{\left( q\right) }$ , we next define the unnormalized magnetic signed Laplacian by
76
+
77
+ $$
78
+ {\mathbf{L}}_{U}^{\left( q\right) } \mathrel{\text{ := }} \widetilde{\mathbf{D}} - {\mathbf{H}}^{\left( q\right) } = \widetilde{\mathbf{D}} - \widetilde{\mathbf{A}} \odot \exp \left( {\mathrm{i}{\mathbf{\Theta }}^{\left( q\right) }}\right) , \tag{1}
79
+ $$
80
+
81
+ and also define the normalized magnetic signed Laplacian by
82
+
83
+ $$
84
+ {\mathbf{L}}_{N}^{\left( q\right) } \mathrel{\text{ := }} \mathbf{I} - \left( {{\widetilde{\mathbf{D}}}^{-1/2}\widetilde{\mathbf{A}}{\widetilde{\mathbf{D}}}^{-1/2}}\right) \odot \exp \left( {\mathrm{i}{\mathbf{\Theta }}^{\left( q\right) }}\right) . \tag{2}
85
+ $$
86
+
87
+ When the graph $\mathcal{G}$ is directed, but not signed, ${\mathbf{L}}_{U}^{\left( q\right) }$ and ${\mathbf{L}}_{N}^{\left( q\right) }$ reduce to the magnetic Laplacians utilized in works such as $\left\lbrack {{13},{31},{32}}\right\rbrack$ and $\left\lbrack {33}\right\rbrack$ . Similarly, when $\mathcal{G}$ is signed, but not directed, ${\mathbf{L}}_{U}^{\left( q\right) }$ and ${\mathbf{L}}_{N}^{\left( q\right) }$ reduce to the signed Laplacian matrices considered in e.g., [14, 17] and [34]. Additionally, when the graph is neither signed nor directed, they reduce to the standard normalized and unnormalized graph Laplacians [35]. The following theorems show that ${\mathbf{L}}_{U}^{\left( q\right) }$ and ${\mathbf{L}}_{N}^{\left( q\right) }$ satisfy properties analogous to the traditional graph Laplacians. The proofs are in Appendix A.
88
+
89
+ Theorem 1. For any signed directed graph $\mathcal{G}$ defined in Sec. 3.1, $\forall q \in \mathbb{R}$ , both the unnormalized magnetic signed Laplacian ${\mathbf{L}}_{U}^{\left( q\right) }$ and its normalized counterpart ${\mathbf{L}}_{N}^{\left( q\right) }$ are positive semidefinite.
90
+
91
+ Theorem 2. For any signed directed graph $\mathcal{G}$ defined in Sec. 3.1, $\forall q \in \mathbb{R}$ , the eigenvalues of the normalized magnetic signed Laplacian ${\mathbf{L}}_{N}^{\left( q\right) }$ are contained in the interval $\left\lbrack {0,2}\right\rbrack$ .
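Both theorems are easy to check numerically. The sketch below (numpy, dense matrices, an illustrative random graph) builds ${\mathbf{L}}_{U}^{\left( q\right) }$ and ${\mathbf{L}}_{N}^{\left( q\right) }$ and verifies that their spectra are nonnegative and that the normalized spectrum lies in $\left\lbrack {0,2}\right\rbrack$; it assumes every node has a strictly positive absolute degree.

```python
import numpy as np

def magnetic_signed_laplacians(A, q=0.25):
    """Unnormalized and normalized magnetic signed Laplacians (dense numpy sketch)."""
    A = np.asarray(A, dtype=float)
    A_sym = 0.5 * (A + A.T)
    d_abs = 0.5 * (np.abs(A) + np.abs(A).T).sum(axis=1)
    Theta = 2.0 * np.pi * q * (A - A.T)
    H = A_sym * np.exp(1j * Theta)
    L_U = np.diag(d_abs) - H
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d_abs))       # assumes every absolute degree > 0
    L_N = np.eye(len(A)) - D_inv_sqrt @ H @ D_inv_sqrt
    return L_U, L_N

rng = np.random.default_rng(0)
A = rng.choice([-1.0, 0.0, 1.0], size=(8, 8), p=[0.2, 0.5, 0.3])
np.fill_diagonal(A, 0.0)
for i in range(8):
    A[i, (i + 1) % 8] = 1.0              # a positive ring keeps all degrees nonzero

L_U, L_N = magnetic_signed_laplacians(A, q=0.25)
for L in (L_U, L_N):
    evals = np.linalg.eigvalsh(L)        # Hermitian -> real spectrum
    print(evals.min() >= -1e-9)          # Theorem 1: positive semidefinite
print(np.linalg.eigvalsh(L_N).max() <= 2 + 1e-9)   # Theorem 2: spectrum in [0, 2]
```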
92
+
93
+ By construction, ${\mathbf{L}}_{U}^{\left( q\right) }$ and ${\mathbf{L}}_{N}^{\left( q\right) }$ are Hermitian, and Theorem 1 shows they are positive semidefinite. In particular they are diagonalizable by an orthonormal basis of complex eigenvectors ${\mathbf{u}}_{1},\ldots ,{\mathbf{u}}_{n}$ associated to real, nonnegative eigenvalues ${\lambda }_{1} \leq \ldots \leq {\lambda }_{n} = {\lambda }_{\max }$ . Thus, similar to the traditional normalized Laplacian, we may factor ${\mathbf{L}}_{N}^{\left( q\right) } = \mathbf{U}\mathbf{\Lambda }{\mathbf{U}}^{ \dagger }$ , where $\mathbf{U}$ is an $n \times n$ matrix whose $k$ -th column is ${\mathbf{u}}_{k}$ , for $1 \leq k \leq n,\mathbf{\Lambda }$ is a diagonal matrix with ${\mathbf{\Lambda }}_{k,k} = {\lambda }_{k}$ , and ${\mathbf{U}}^{ \dagger }$ is the conjugate transpose of U. A similar formula holds for ${\mathbf{L}}_{U}^{\left( q\right) }$ .
94
+
95
+ § 3.3 SPECTRAL CONVOLUTION VIA THE MAGNETIC SIGNED LAPLACIAN
96
+
97
+ In this section, we show how to use a Hermitian, positive semidefinite matrix $\mathbf{L}$ such as the normalized or unnormalized magnetic signed Laplacian introduced in Sec. 3.2, to define convolution on a signed directed graph. This method is similar to the ones proposed for unsigned (possibly directed) graphs in, e.g., [36] and [13], but we provide details in order to keep our work reasonably self-contained.
98
+
99
+ Given $\mathbf{L}$ , let ${\mathbf{u}}_{1}\ldots ,{\mathbf{u}}_{n}$ be an orthonormal basis of eigenvectors such that ${\mathbf{{Lu}}}_{k} = {\lambda }_{k}{\mathbf{u}}_{k}$ , and let $\mathbf{U}$ be an $n \times n$ matrix whose $k$ -th column is ${\mathbf{u}}_{k}$ , for $1 \leq k \leq n$ . For a signal $\mathbf{x} : \mathcal{V} \rightarrow \mathbb{C}$ , we define its Fourier transform $\widehat{\mathbf{x}} \in {\mathbb{C}}^{n}$ by $\widehat{\mathbf{x}}\left( k\right) = \left\langle {\mathbf{x},{\mathbf{u}}_{k}}\right\rangle \mathrel{\text{ := }} {\mathbf{u}}_{k}^{ \dagger }\mathbf{x}$ , and equivalently, $\widehat{\mathbf{x}} = {\mathbf{U}}^{ \dagger }\mathbf{x}$ . Since $\mathbf{U}$ is unitary, we readily obtain the Fourier inversion formula
100
+
101
+ $$
102
+ \mathbf{x} = \mathbf{U}\widehat{\mathbf{x}} = \mathop{\sum }\limits_{{k = 1}}^{n}\widehat{\mathbf{x}}\left( k\right) {\mathbf{u}}_{k}. \tag{3}
103
+ $$
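A short sketch of the transform and the inversion formula (3), using a random Hermitian positive semidefinite matrix as an illustrative stand-in for $\mathbf{L}$:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
L = B @ B.conj().T                      # Hermitian PSD stand-in for a Laplacian
x = rng.standard_normal(5)              # a real graph signal

lams, U = np.linalg.eigh(L)             # columns of U are the eigenvectors u_k
x_hat = U.conj().T @ x                  # Fourier transform: x_hat = U^dagger x
x_rec = U @ x_hat                       # inversion formula (3): x = U x_hat
print(np.allclose(x_rec, x))            # True (up to floating point error)
```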
104
+
105
+ Analogous to the well-known convolution theorem in Euclidean domains, we define the convolution of $\mathbf{x}$ with a filter $\mathbf{y}$ as multiplication in the Fourier domain, i.e., $\widehat{\mathbf{y} * \mathbf{x}}\left( k\right) = \widehat{\mathbf{y}}\left( k\right) \widehat{\mathbf{x}}\left( k\right)$ . By (3), this implies $\mathbf{y} * \mathbf{x} = \mathbf{U}\operatorname{Diag}\left( \widehat{\mathbf{y}}\right) \widehat{\mathbf{x}} = \left( {\mathbf{U}\operatorname{Diag}\left( \widehat{\mathbf{y}}\right) {\mathbf{U}}^{ \dagger }}\right) \mathbf{x}$ , where $\operatorname{Diag}\left( \mathbf{z}\right)$ denotes a diagonal matrix with the vector $\mathbf{z}$ on its diagonal. Therefore, we declare that $\mathbf{Y}$ is a generalized convolution matrix if
106
+
107
+ $$
108
+ \mathbf{Y} = \mathbf{U}\mathbf{\Sigma }{\mathbf{U}}^{ \dagger }, \tag{4}
109
+ $$
110
+
111
+ for a diagonal matrix $\mathbf{\Sigma }$. This is a natural generalization of the class of convolutions used in [37].
112
+
113
+ Some potential drawbacks exist when defining a convolution via (4). First, it requires one to compute the eigen-decomposition of $\mathbf{L}$ which is expensive for large graphs. Second, the number of trainable parameters equals the size of the graph (the number of nodes), rendering GNNs constructed via (4) prone to overfitting. To remedy these issues, we follow [38] (see also [39]) and observe that spectral convolution may also be implemented in the spatial domain via polynomials of $\mathbf{L}$ by setting $\mathbf{\Sigma }$ equal to a polynomial of $\mathbf{\Lambda }$ . This reduces the number of trainable parameters from the size of the graph to the degree of the polynomial and also enhances robustness to perturbations [40]. As in [38], we let $\widetilde{\mathbf{\Lambda }} = \frac{2}{{\lambda }_{\max }}\mathbf{\Lambda } - \mathbf{I}$ denote the normalized eigenvalue matrix (with entries in $\left\lbrack {-1,1}\right\rbrack$ ) and choose $\mathbf{\Sigma } = \mathop{\sum }\limits_{{k = 0}}^{K}{\theta }_{k}{T}_{k}\left( \widetilde{\mathbf{\Lambda }}\right)$ , for some ${\theta }_{0},\ldots ,{\theta }_{K} \in \mathbb{R}$ , where for $0 \leq k \leq K$ , ${T}_{k}$ is the Chebyshev polynomial defined by ${T}_{0}\left( x\right) = 1,{T}_{1}\left( x\right) = x$ , and ${T}_{k}\left( x\right) = {2x}{T}_{k - 1}\left( x\right) - {T}_{k - 2}\left( x\right)$ for $k \geq 2$ . Since $\mathbf{U}$ is unitary, we have ${\left( \mathbf{U}\widetilde{\mathbf{\Lambda }}{\mathbf{U}}^{ \dagger }\right) }^{k} = \mathbf{U}{\widetilde{\mathbf{\Lambda }}}^{k}{\mathbf{U}}^{ \dagger }$ , and thus, letting $\widetilde{\mathbf{L}} \mathrel{\text{ := }} \frac{2}{{\lambda }_{\max }}\mathbf{L} - \mathbf{I}$ , we have
114
+
115
+ $$
116
+ \mathbf{Y}\mathbf{x} = \mathbf{U}\mathop{\sum }\limits_{{k = 0}}^{K}{\theta }_{k}{T}_{k}\left( \widetilde{\mathbf{\Lambda }}\right) {\mathbf{U}}^{ \dagger }\mathbf{x} = \mathop{\sum }\limits_{{k = 0}}^{K}{\theta }_{k}{T}_{k}\left( \widetilde{\mathbf{L}}\right) \mathbf{x}. \tag{5}
117
+ $$
118
+
119
+ This is the class of convolutional filters we will use in our experiments. However, one could also imitate Sec. 3.1 of [13] to produce a class of filters based on [36] rather than [38].
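Note that the filter in (5) only needs repeated matrix products with $\widetilde{\mathbf{L}}$ rather than an explicit eigen-decomposition. A minimal sketch (numpy; the coefficients ${\theta }_{k}$ and the test matrix are illustrative):

```python
import numpy as np

def chebyshev_filter(L, x, theta, lam_max=None):
    """Apply y = sum_k theta_k T_k(L_tilde) x via the Chebyshev recurrence (sketch)."""
    n = L.shape[0]
    if lam_max is None:
        lam_max = np.linalg.eigvalsh(L).max()
    L_tilde = (2.0 / lam_max) * L - np.eye(n)           # rescale spectrum to [-1, 1]
    T_prev, T_curr = x, L_tilde @ x                     # T_0(L~)x = x, T_1(L~)x = L~ x
    out = theta[0] * T_prev
    if len(theta) > 1:
        out = out + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_next = 2.0 * (L_tilde @ T_curr) - T_prev      # T_k = 2x T_{k-1} - T_{k-2}
        out = out + theta[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return out

# usage on any Hermitian PSD matrix L (e.g. the normalized magnetic signed Laplacian)
rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
L = B @ B.T
x = rng.standard_normal(6)
y = chebyshev_filter(L, x, theta=[0.5, -0.2, 0.1])
print(y.shape)
```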
120
+
121
+ It is important to note that $\widetilde{\mathbf{L}}$ is constructed so that, in (5), ${\left( \mathbf{Y}\mathbf{x}\right) }_{i}$ depends on all nodes within $K$ -hops from ${v}_{i}$ on the undirected, unsigned counterpart of $\mathcal{G}$ , i.e. the graph whose adjacency matrix is given by ${\mathbf{A}}_{i,j}^{\prime } = \frac{1}{2}\left( {\left| {\mathbf{A}}_{i,j}\right| + \left| {\mathbf{A}}_{j,i}\right| }\right)$ . Therefore, this notion of convolution does not favor "outgoing neighbors" $\left\{ {{v}_{j} \in \mathcal{V} : \left( {{v}_{i},{v}_{j}}\right) \in \mathcal{E}}\right\}$ over "incoming neighbors" $\left\{ {{v}_{j} \in V : \left( {{v}_{j},{v}_{i}}\right) \in \mathcal{E}}\right\}$ (or vice versa). This is important since for a given node ${v}_{i}$ , both sets may contain different, useful information. Furthermore, since the phase matrix ${\mathbf{\Theta }}^{\left( q\right) }$ encodes an outgoing edge and an incoming edge differently, the filter matrix $\mathbf{Y}$ is also able to aggregate information from these two sets in different ways. For computational complexity, we note that while the matrix $\exp \left( {\mathrm{i}{\mathbf{\Theta }}^{\left( q\right) }}\right)$ is dense in theory, in practice, one only needs to compute a small fraction of its entries corresponding to the nonzero entries of $\widetilde{\mathbf{A}}$ (which is sparse for most real-world data sets). Thus, the computational complexity of the convolution proposed here is equivalent to that of its undirected, unsigned counterparts.
122
+
123
+ § 3.4 THE MSGNN ARCHITECTURE
124
+
125
+ We now define our network, MSGNN. Let ${\mathbf{X}}^{\left( 0\right) }$ be an $n \times {F}_{0}$ input matrix with columns ${\mathbf{x}}_{1}^{\left( 0\right) },\ldots {\mathbf{x}}_{{F}_{0}}^{\left( 0\right) }$ , and $L$ denote the number of convolution layers. As in [13], we use a complex version of the Rectified Linear Unit defined by $\sigma \left( z\right) = z$ , if $- \pi /2 \leq \arg \left( z\right) < \pi /2$ , and $\sigma \left( z\right) = 0$ otherwise, where $\arg \left( \cdot \right)$ is the complex argument of $z \in \mathbb{C}$ . Let ${F}_{\ell }$ be the number of channels in the $\ell$ -th layer. For $1 \leq \ell \leq L$ , $1 \leq i \leq {F}_{\ell }$ , and $1 \leq j \leq {F}_{\ell - 1}$ , let ${\mathbf{Y}}_{ij}^{\left( \ell \right) }$ be a convolution matrix defined by (4) or (5). Given the $\left( {\ell - 1}\right)$ -st layer hidden representation matrix ${\mathbf{X}}^{\left( \ell - 1\right) }$ , we define ${\mathbf{X}}^{\left( \ell \right) }$ columnwise by
126
+
127
+ $$
128
+ {\mathbf{x}}_{j}^{\left( \ell \right) } = \sigma \left( {\mathop{\sum }\limits_{{i = 1}}^{{F}_{\ell - 1}}{\mathbf{Y}}_{ij}^{\left( \ell \right) }{\mathbf{x}}_{i}^{\left( \ell - 1\right) } + {\mathbf{b}}_{j}^{\left( \ell \right) }}\right) , \tag{6}
129
+ $$
130
+
131
+ where ${\mathbf{b}}_{j}^{\left( \ell \right) }$ is a bias vector with equal real and imaginary parts, $\operatorname{Real}\left( {\mathbf{b}}_{j}^{\left( \ell \right) }\right) = \operatorname{Imag}\left( {\mathbf{b}}_{j}^{\left( \ell \right) }\right)$ . In matrix form we write ${\mathbf{X}}^{\left( \ell \right) } = {\mathbf{Z}}^{\left( \ell \right) }\left( {\mathbf{X}}^{\left( \ell - 1\right) }\right)$ , where ${\mathbf{Z}}^{\left( \ell \right) }$ is a hidden layer of the form (6). In our experiments, we utilize convolutions of the form (5) with $\mathbf{L} = {\mathbf{L}}_{N}^{\left( q\right) }$ and set $K = 1$ , in which case we obtain
132
+
133
+ $$
134
+ {\mathbf{X}}^{\left( \ell \right) } = \sigma \left( {{\mathbf{X}}^{\left( \ell - 1\right) }{\mathbf{W}}_{\text{ self }}^{\left( \ell \right) } + {\widetilde{\mathbf{L}}}_{N}^{\left( q\right) }{\mathbf{X}}^{\left( \ell - 1\right) }{\mathbf{W}}_{\text{ neigh }}^{\left( \ell \right) } + {\mathbf{B}}^{\left( \ell \right) }}\right) ,
135
+ $$
136
+
137
+ where ${\mathbf{W}}_{\text{ self }}^{\left( \ell \right) }$ and ${\mathbf{W}}_{\text{ neigh }}^{\left( \ell \right) }$ are learned weight matrices corresponding to the filter weights of different channels and ${\mathbf{B}}^{\left( \ell \right) } = \left( {{\mathbf{b}}_{1}^{\left( \ell \right) },\ldots ,{\mathbf{b}}_{{F}_{\ell }}^{\left( \ell \right) }}\right)$ . After the convolutional layers, we unwind the complex matrix ${\mathbf{X}}^{\left( L\right) }$ into a real-valued $n \times 2{F}_{L}$ matrix. For node clustering, we then apply a fully connected layer followed by the softmax function. By default, we set $L = 2$ , in which case, our network is given by
138
+
139
+ $$
140
+ \operatorname{softmax}\left( {\operatorname{unwind}\left( {{\mathbf{Z}}^{\left( 2\right) }\left( {{\mathbf{Z}}^{\left( 1\right) }\left( {\mathbf{X}}^{\left( 0\right) }\right) }\right) }\right) {\mathbf{W}}^{\left( 3\right) }}\right) .
141
+ $$
142
+
143
+ For link prediction, we apply the same method, except we concatenate rows corresponding to pairs of nodes after the unwind layer before applying the linear layer and softmax.
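A minimal dense sketch of a single MSGNN layer with the complex ReLU described above (numpy; random matrices stand in for learned parameters, and the layer corresponds to the $K = 1$ filter used in the experiments):

```python
import numpy as np

def complex_relu(Z):
    # sigma(z) = z if -pi/2 <= arg(z) < pi/2, else 0 (complex ReLU from the text)
    ang = np.angle(Z)
    return Z * ((ang >= -np.pi / 2) & (ang < np.pi / 2))

def msgnn_layer(X, L_tilde, W_self, W_neigh, b):
    """One layer: sigma(X W_self + L~ X W_neigh + B), a dense numpy sketch.

    X: n x F_in complex features; L_tilde: rescaled normalized magnetic signed
    Laplacian (n x n, Hermitian); W_self, W_neigh: F_in x F_out weights;
    b: length-F_out bias with equal real and imaginary parts, broadcast over rows.
    """
    return complex_relu(X @ W_self + L_tilde @ X @ W_neigh + b)

# illustrative shapes only (weights would be learned in practice)
rng = np.random.default_rng(3)
n, F_in, F_out = 10, 4, 8
X = rng.standard_normal((n, F_in)) + 0j
Lt = rng.standard_normal((n, n))
Lt = 0.5 * (Lt + Lt.T) + 0j                       # Hermitian stand-in for L~
W_self = rng.standard_normal((F_in, F_out)) + 0j
W_neigh = rng.standard_normal((F_in, F_out)) + 0j
b_real = rng.standard_normal(F_out)
b = b_real + 1j * b_real                          # equal real and imaginary parts
H1 = msgnn_layer(X, Lt, W_self, W_neigh, b)
print(H1.shape)   # (10, 8); unwinding would stack real and imaginary parts
```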
144
+
145
+ § 4 EXPERIMENTS
146
+
147
+ § 4.1 TASKS AND EVALUATION METRICS
148
+
149
+ Node Clustering. In the node clustering task, one aims to partition the nodes of the graph into the disjoint union of $C$ sets ${\mathcal{C}}_{0},\ldots ,{\mathcal{C}}_{C - 1}$ . Typically in an unsigned, undirected network, one aims to choose the ${\mathcal{C}}_{i}$ ’s so that there are many links within each cluster and comparably few links between clusters, in which case nodes within each cluster are similar due to dense connections. In general, however, similarity could be defined differently [41]. In a signed graph, clusters can be formed by grouping together nodes with positive links and separating nodes with negative links (see [12]). In a directed graph, clusters can be determined by a directed flow on the network (see [5]). More generally, we can define clusters based on an underlying meta-graph, where meta-nodes, each of which corresponds to a cluster in the network, can be distinguished based on either signed or directional information (e.g., flow imbalance [5]). This general meta-graph idea motivates our introduction of a novel synthetic network model, which we will define in Sec. 4.2, driven by both link sign and directionality.
150
+
151
+ All of our node clustering experiments are done in the semi-supervised setting, where one selects a fraction of the nodes in each cluster as seed nodes, with known cluster membership labels. In all of our node clustering tasks, we measure our performance using the Adjusted Rand Index (ARI) [42].
152
+
153
+ Link Prediction. On undirected, unsigned graphs, link prediction is simply the task of predicting whether or not there is a link between a pair of nodes. Here, we consider five different variations of the link prediction task for signed and/or directed networks. In our first task, link sign prediction (SP), one assumes that there is a link from ${v}_{i}$ to ${v}_{j}$ and aims to predict whether that link is positive or negative, i.e., whether $\left( {{v}_{i},{v}_{j}}\right) \in {\mathcal{E}}^{ + }$ or $\left( {{v}_{i},{v}_{j}}\right) \in {\mathcal{E}}^{ - }$ . In our second task, direction prediction (DP), one aims to predict whether $\left( {{v}_{i},{v}_{j}}\right) \in \mathcal{E}$ or $\left( {{v}_{j},{v}_{i}}\right) \in \mathcal{E}$ under the assumption that exactly one of these two conditions holds. We also consider three-, four-, and five-class prediction problems. In the three-class problem (3C), the possibilities are $\left( {{v}_{i},{v}_{j}}\right) \in \mathcal{E},\left( {{v}_{j},{v}_{i}}\right) \in \mathcal{E}$ , or that neither $\left( {{v}_{i},{v}_{j}}\right)$ nor $\left( {{v}_{j},{v}_{i}}\right)$ are in $\mathcal{E}$ . For the four-class problem (4C), the possibilities are $\left( {{v}_{i},{v}_{j}}\right) \in {\mathcal{E}}^{ + },\left( {{v}_{i},{v}_{j}}\right) \in {\mathcal{E}}^{ - },\left( {{v}_{j},{v}_{i}}\right) \in {\mathcal{E}}^{ + }$ , and $\left( {{v}_{j},{v}_{i}}\right) \in {\mathcal{E}}^{ - }$ . For the five-class problem (5C), we also add in the possibility that neither $\left( {{v}_{i},{v}_{j}}\right)$ nor $\left( {{v}_{j},{v}_{i}}\right)$ are in $\mathcal{E}$ . For all tasks, we evaluate the performance with classification accuracy. Notably, while (SP), (DP), and (3C) only require a method to be able to extract signed or directed information, the tasks (4C) and (5C) require it to be able to effectively process both sign and directional information. Also, we discard those edges that satisfy more than one condition in the possibilities for training and evaluation, but these edges are kept in the input network observed during training.
154
+
155
+ § 4.2 SYNTHETIC DATA FOR NODE CLUSTERING
156
+
157
+ Established Synthetic Models. We conduct experiments on the Signed Stochastic Block Models (SSBMs) and polarized SSBMs (POL-SSBMs) introduced in [12]. Notably, both of these models are signed, but undirected. In the $\operatorname{SSBM}\left( {n,C,p,\rho ,\eta }\right)$ model, $n$ represents the number of nodes, $C$ is the number of clusters, $p$ is the probability that there is a link (of either sign) between two nodes, $\rho$ is the approximate ratio between the largest cluster size and the smallest cluster size, and $\eta$ is the probability that an edge will have the "wrong" sign, i.e., that an intra-cluster edge will be negative or an inter-cluster edge will be positive. POL-SSBM $\left( {n,r,p,\rho ,\eta ,N}\right)$ is a hierarchical variation of the SSBM model consisting of $r$ communities, each of which is itself an SSBM. We refer the reader to [12] for details of both models.
158
+
159
+ A novel Synthetic Model: Signed Directed Stochastic Block Model (SDSBM). Given a meta-graph adjacency matrix $\mathbf{F} = {\left( {\mathbf{F}}_{k,l}\right) }_{k,l = 0,\ldots ,C - 1}$ , an edge sparsity level $p$ , a number of nodes $n$ , and a sign flip noise level parameter $0 \leq \eta \leq {0.5}$ , we define an SDSBM model, denoted by SDSBM $\left( {\mathbf{F},n,p,\rho ,\eta }\right)$ , as follows: 1) Assign block sizes ${n}_{0} \leq {n}_{1} \leq \cdots \leq {n}_{C - 1}$ based on a parameter $\rho \geq 1$ , which approximately represents the ratio between the size of the largest block and the size of the smallest block, using the same method as in [12]. 2) Assign each node to one of the $C$ blocks, so that each block ${\mathcal{C}}_{i}$ has size ${n}_{i}$ . 3) For nodes ${v}_{i} \in {\mathcal{C}}_{k}$ , and ${v}_{j} \in {\mathcal{C}}_{l}$ , independently sample an edge from ${v}_{i}$ to ${v}_{j}$ with probability $p \cdot \left| {\mathbf{F}}_{k,l}\right|$ . Give this edge weight 1 if ${F}_{k,l} \geq 0$ and weight -1 if ${F}_{k,l} < 0$ . 4) Flip the sign of each edge in the generated graph independently with sign-flip probability $\eta$ .
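A possible sampler following steps 1)-4) is sketched below (numpy). The block-size rule is a simple stand-in for the recipe of [12], which may differ in detail, and the meta-graph used in the usage example is illustrative rather than one of the matrices ${\mathbf{F}}_{1},{\mathbf{F}}_{2}$ given below.

```python
import numpy as np

def sample_sdsbm(F, n, p, rho, eta, rng=None):
    """Sample a Signed Directed Stochastic Block Model graph (sketch of steps 1-4)."""
    rng = np.random.default_rng(rng)
    C = F.shape[0]
    # 1) block sizes, smallest to largest, ratio approximately rho (simplified rule)
    raw = np.geomspace(1.0, rho, C)
    sizes = np.maximum(1, np.round(raw / raw.sum() * n).astype(int))
    sizes[-1] += n - sizes.sum()                       # fix rounding so sizes sum to n
    # 2) node-to-block assignment
    labels = np.repeat(np.arange(C), sizes)
    # 3) sample signed directed edges
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            f = F[labels[i], labels[j]]
            if rng.random() < p * abs(f):
                A[i, j] = 1.0 if f >= 0 else -1.0
    # 4) flip the sign of each edge independently with probability eta
    flips = (rng.random((n, n)) < eta) & (A != 0)
    A[flips] *= -1.0
    return A, labels

# illustrative two-block meta-graph, not F_1 or F_2 from the text
F_toy = np.array([[0.5, 0.02],
                  [0.98, -0.5]])
A, labels = sample_sdsbm(F_toy, n=60, p=0.2, rho=1.5, eta=0.1)
print(A.shape, np.bincount(labels))
```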
160
+
161
+ In our experiments, we use two sets of specific meta-graph structures $\left\{ {{\mathbf{F}}_{1}\left( \gamma \right) }\right\} ,\left\{ {{\mathbf{F}}_{2}\left( \gamma \right) }\right\}$ , with three and four clusters, respectively, where $0 \leq \gamma \leq {0.5}$ is the directional noise level. Specifically, we are interested in $\operatorname{SDSBM}\left( {{\mathbf{F}}_{1}\left( \gamma \right) ,n,p,\rho ,\eta }\right)$ and $\operatorname{SDSBM}\left( {{\mathbf{F}}_{2}\left( \gamma \right) ,n,p,\rho ,\eta }\right)$ models with varying $\gamma$ where
162
+
163
+ $$
164
+ {\mathbf{F}}_{1}\left( \gamma \right) = \left\lbrack \begin{matrix} {0.5} & \gamma & - \gamma \\ 1 - \gamma & {0.5} & - {0.5} \\ - 1 + \gamma & - {0.5} & {0.5} \end{matrix}\right\rbrack ,{\mathbf{F}}_{2}\left( \gamma \right) = \left\lbrack \begin{matrix} {0.5} & \gamma & - \gamma & - \gamma \\ 1 - \gamma & {0.5} & - {0.5} & - \gamma \\ - 1 + \gamma & - {0.5} & {0.5} & - \gamma \\ - 1 + \gamma & - 1 + \gamma & - 1 + \gamma & {0.5} \end{matrix}\right\rbrack .
165
+ $$
166
+
167
+ To better understand the above SDSBM models, toy examples are provided in Appendix B. We also note that the SDSBM model proposed here is a generalization of both the SSBM model from [12] and the Directed Stochastic Block Model from [5] when we have suitable meta-graph structures.
168
+
169
+ § 4.3 REAL-WORLD DATA FOR LINK PREDICTION
170
+
171
+ Standard Real-World Data Sets. We consider four standard real-world signed and directed data sets. BitCoin-Alpha and BitCoin-OTC [2] describe bitcoin trading. Slashdot [43] is related to a technology news website, and Epinions [44] describes consumer reviews. These networks range in size from 3783 to 131580 nodes. Only Slashdot and Epinions are unweighted.
172
+
173
+ Novel Financial Data Sets from Stock Returns. Using financial time series data, we build signed directed networks where the weighted edges encode lead-lag relationships inherent in the financial market, for each year in the interval 2000-2020. The lead-lag matrices are built from time series of daily price returns ${}^{2}$ . We refer to these networks as our Financial Lead-Lag (FiLL) data sets. For each year in the data set, we build a signed directed graph (FiLL-pvCLCL) based on the price return of 444 stocks at market close times on consecutive days. We also build another graph (FiLL-OPCL), based on the price return of 430 stocks from market open to close. The difference between 444 and 430 stems from the non-availability of certain open and close prices on some days for certain stocks. The lead-lag metric that is captured by the entry ${\mathbf{A}}_{i,j}$ in each network encodes a measure that quantifies the extent to which stock ${v}_{i}$ leads stock ${v}_{j}$ , and is obtained by computing the linear regression coefficient when regressing the time series (of length 245) of daily returns of stock ${v}_{i}$ against the lag-one version of the time series (of length 245) of the daily returns of stock ${v}_{j}$ . Specifically, we use the beta coefficient of the corresponding simple linear regression, to serve as the one-day lead-lag metric. The resulting matrix is asymmetric and signed, rendering it amenable to a signed, directed network interpretation. The initial matrix is dense, with nonzero entries outside the main diagonal, since we do not consider each stock's own auto-correlation. Note that an alternative approach to building the directed network could be based on Granger causality, or other measures that quantify the lead-lag between a pair of time series, potentially while accounting for nonlinearity, such as second-order log signatures from rough paths theory as in [4].
174
+
175
+ Next, we sparsify each network, keeping only ${20}\%$ of the edges with the largest magnitudes. We also report the average results across all the yearly data sets (a total of 42 networks), where the combined data set is denoted by FiLL (avg.). To facilitate future research using these data sets as benchmarks, both the dense lead-lag matrices and their sparsified counterparts will be made publicly available.
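A sketch of this construction (numpy, with synthetic returns in place of the CRSP data) that computes the regression-based lead-lag matrix described above and keeps the top 20% of entries by magnitude:

```python
import numpy as np

def lead_lag_matrix(R):
    """Signed, directed lead-lag matrix from daily returns (sketch).

    R: T x N matrix of daily returns. A[i, j] is the simple-linear-regression beta
    obtained by regressing the returns of stock i against the lag-one returns of
    stock j, as described in the text (one-day lead-lag metric).
    """
    Y = R[1:, :]                                  # returns of stock i at day t (response)
    X = R[:-1, :]                                 # lag-one returns of stock j (regressor)
    Yc = Y - Y.mean(axis=0)
    Xc = X - X.mean(axis=0)
    cov = Yc.T @ Xc / (len(Y) - 1)                # cov(returns_i, lag-one returns_j)
    A = cov / Xc.var(axis=0, ddof=1)[None, :]     # beta coefficient of the regression
    np.fill_diagonal(A, 0.0)                      # own auto-correlation is not used
    return A

def sparsify_top_fraction(A, frac=0.2):
    """Keep only the fraction `frac` of nonzero entries with largest magnitude."""
    mags = np.abs(A[A != 0])
    thresh = np.quantile(mags, 1 - frac)
    return np.where(np.abs(A) >= thresh, A, 0.0)

rng = np.random.default_rng(4)
R = rng.standard_normal((245, 30))                # 245 days, 30 stocks (illustrative sizes)
A = sparsify_top_fraction(lead_lag_matrix(R), frac=0.2)
print((A != 0).mean())                            # roughly 0.2 of the entries survive
```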
176
+
177
+ ${}^{2}$ Raw CRSP data accessed through https://wrds-www.wharton.upenn.edu/.
178
+
179
+ § 4.4 EXPERIMENTAL RESULTS
180
+
181
+ We compare MSGNN against representative GNNs which are described in Section 2. The six methods we consider are 1) SGCN [22], 2) SDGNN [3], 3) SiGAT [10], 4) SNEA [11], 5) SSSNET [12], and 6) SigMaNet [29]. For all link prediction tasks, comparisons are carried out on all baselines; for the node clustering tasks, we only compare MSGNN against SSSNET and SigMaNet as adapting the other methods to this task is nontrivial. Implementation details are provided in Appendix C, along with a runtime comparison which shows that MSGNN is the fastest method, see Table 2 in Appendix C. Extended results are in Appendix D and E. Anonymous codes and some preprocessed data are available at https://anonymous.4open.science/r/MSGNN.
182
+
183
+ § 4.4.1 NODE CLUSTERING
184
+
185
186
+
187
+ Figure 1: Node clustering test ARI comparison on synthetic data. Dashed lines highlight MSGNN's performance with $q = {.25}$ . Error bars indicate one standard error. Results are averaged over ten runs - five different networks, each with two distinct data splits.
188
+
189
+ Figure 1 compares the node clustering performance of MSGNN with two other signed GNNs on synthetic data, and against variants of MSGNN on SDSBMs that take different $q$ values. For signed, undirected networks $q$ does not have an effect, and hence we only report one MSGNN variant. Error bars are given by one standard error. We conclude that MSGNN outperforms SigMaNet on all data sets and is competitive with SSSNET. On the majority of data sets, MSGNN achieves leading performance, whereas on the others it is slightly outperformed by SSSNET, depending on the network density, noise level, network size, and the underlying meta-graph structure. On these relatively small data sets, MSGNN and SSSNET have comparable runtime and are faster than SigMaNet. Comparing the MSGNN variants, we conclude that the directional information in these SDSBM models plays a vital role since MSGNN with $q = 0$ is usually outperformed by MSGNN with nonzero $q$ .
190
+
191
+ § 4.4.2 LINK PREDICTION
192
+
193
+ Our results for link prediction in Table 1 indicate that MSGNN is the top performing method, achieving the highest level of accuracy in 13 out of 25 total cases and being among the leading two in 19 out of 25 total cases. SNEA is the second best performing method, but is the least efficient in speed due to its use of graph attention, see runtime comparison in Appendix C.
194
+
195
+ Specifically, the "avg." results for the novel financial data sets first average the accuracy values across all individual networks (a total of 42 networks), then report the mean and standard deviation over the five runs. Results for individual FiLL networks are reported in Appendix E. Note that $\pm {0.0}$ in the result tables indicates that the standard deviation is less than 0.05%.
196
+
197
+ Table 1: Test accuracy (%) comparison for the signed and directed link prediction tasks introduced in Sec. 4.1. The best is marked in bold red and the second best is marked in underline blue.
198
+
199
+ | Data Set | Link Task | SGCN | SDGNN | SiGAT | SNEA | SSSNET | SigMaNet | MSGNN |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | BitCoin-Alpha | SP | 64.7 ± 0.9 | 64.5 ± 1.1 | 62.9 ± 0.9 | 64.1 ± 1.3 | 67.4 ± 1.1 | 47.8 ± 3.9 | 69.8 ± 2.5 |
+ | BitCoin-Alpha | DP | 60.4 ± 1.7 | 61.5 ± 1.0 | 61.9 ± 1.9 | 60.9 ± 1.7 | 68.1 ± 2.3 | 49.4 ± 3.1 | 68.3 ± 3.7 |
+ | BitCoin-Alpha | 3C | 81.4 ± 0.5 | 79.2 ± 0.9 | 77.1 ± 0.7 | 83.2 ± 0.5 | 78.3 ± 4.7 | 37.4 ± 16.7 | 82.7 ± 1.2 |
+ | BitCoin-Alpha | 4C | 51.1 ± 0.8 | 52.5 ± 1.1 | 49.3 ± 0.7 | 52.4 ± 1.8 | 54.3 ± 2.9 | 20.6 ± 6.3 | 55.9 ± 4.0 |
+ | BitCoin-Alpha | 5C | 79.5 ± 0.3 | 78.2 ± 0.5 | 76.5 ± 0.3 | 81.1 ± 0.3 | 77.9 ± 0.3 | 34.2 ± 6.5 | 81.4 ± 0.6 |
+ | BitCoin-OTC | SP | 65.6 ± 0.9 | 65.3 ± 1.2 | 62.8 ± 1.3 | 67.7 ± 0.5 | 70.1 ± 1.2 | 50.0 ± 2.3 | 65.5 ± 10.1 |
+ | BitCoin-OTC | DP | 63.8 ± 1.2 | 63.2 ± 1.5 | 64.0 ± 2.0 | 65.3 ± 1.2 | 69.6 ± 1.0 | 48.4 ± 4.9 | 73.1 ± 1.2 |
+ | BitCoin-OTC | 3C | 79.0 ± 0.7 | 77.3 ± 0.7 | 73.6 ± 0.7 | 82.2 ± 0.4 | 76.9 ± 1.1 | 26.8 ± 10.9 | 81.3 ± 1.6 |
+ | BitCoin-OTC | 4C | 51.5 ± 0.4 | 55.3 ± 0.8 | 51.2 ± 1.8 | 56.9 ± 0.7 | 57.0 ± 2.0 | 23.3 ± 7.4 | 55.5 ± 3.3 |
+ | BitCoin-OTC | 5C | 77.4 ± 0.7 | 77.3 ± 0.8 | 74.1 ± 0.5 | 80.5 ± 0.5 | 74.0 ± 1.6 | 25.9 ± 6.2 | 79.0 ± 2.1 |
+ | Slashdot | SP | 74.7 ± 0.5 | 74.1 ± 0.7 | 64.0 ± 1.3 | 70.6 ± 1.0 | 86.6 ± 2.2 | 57.9 ± 5.3 | 92.4 ± 0.6 |
+ | Slashdot | DP | 74.8 ± 0.9 | 74.2 ± 1.4 | 62.8 ± 0.9 | 71.1 ± 1.1 | 87.8 ± 1.0 | 53.0 ± 4.0 | 84.3 ± 17.2 |
+ | Slashdot | 3C | 69.7 ± 0.3 | 66.3 ± 1.8 | 49.1 ± 1.2 | 72.5 ± 0.7 | 79.3 ± 1.2 | 42.0 ± 7.9 | 84.7 ± 0.6 |
+ | Slashdot | 4C | 63.2 ± 0.3 | 64.0 ± 0.7 | 53.4 ± 0.2 | 60.5 ± 0.6 | 72.7 ± 0.6 | 25.7 ± 8.9 | 73.5 ± 3.7 |
+ | Slashdot | 5C | 64.4 ± 0.3 | 62.6 ± 2.0 | 44.4 ± 1.4 | 66.4 ± 0.5 | 70.4 ± 0.7 | 19.3 ± 8.6 | 74.2 ± 0.9 |
+ | Epinions | SP | 62.9 ± 0.5 | 67.7 ± 0.8 | 63.6 ± 0.5 | 66.5 ± 1.0 | 78.5 ± 2.1 | 53.3 ± 10.6 | 69.7 ± 16.2 |
+ | Epinions | DP | 61.7 ± 0.5 | 67.9 ± 0.6 | 63.6 ± 0.8 | 66.4 ± 1.2 | 73.9 ± 6.2 | 49.0 ± 3.2 | 81.0 ± 1.5 |
+ | Epinions | 3C | 70.3 ± 0.8 | 73.2 ± 0.8 | 52.3 ± 1.3 | 72.8 ± 0.2 | 72.7 ± 2.0 | 30.5 ± 8.3 | 81.5 ± 1.3 |
+ | Epinions | 4C | 66.7 ± 1.2 | 71.0 ± 0.6 | 62.3 ± 0.5 | 69.5 ± 0.7 | 70.2 ± 5.2 | 29.9 ± 6.4 | 74.9 ± 3.6 |
+ | Epinions | 5C | 73.5 ± 0.8 | 76.6 ± 0.7 | 52.9 ± 0.7 | 74.2 ± 0.1 | 70.3 ± 4.6 | 22.1 ± 6.1 | 77.9 ± 1.4 |
+ | FiLL (avg.) | SP | 88.4 ± 0.0 | 82.0 ± 0.3 | 76.9 ± 0.1 | 90.0 ± 0.0 | 88.7 ± 0.3 | 50.4 ± 1.8 | 88.4 ± 1.3 |
+ | FiLL (avg.) | DP | 88.5 ± 0.1 | 82.0 ± 0.2 | 76.9 ± 0.1 | 90.0 ± 0.0 | 88.8 ± 0.3 | 48.0 ± 2.7 | 88.6 ± 2.1 |
+ | FiLL (avg.) | 3C | 63.0 ± 0.1 | 59.3 ± 0.0 | 55.3 ± 0.1 | 64.3 ± 0.1 | 62.2 ± 0.3 | 33.7 ± 1.3 | 64.2 ± 0.8 |
+ | FiLL (avg.) | 4C | 81.7 ± 0.0 | 78.8 ± 0.1 | 70.5 ± 0.1 | 83.2 ± 0.1 | 80.0 ± 0.3 | 24.9 ± 0.9 | 78.8 ± 1.2 |
+ | FiLL (avg.) | 5C | 63.8 ± 0.0 | 61.1 ± 0.1 | 55.5 ± 0.1 | 64.8 ± 0.1 | 60.4 ± 0.4 | 19.8 ± 1.1 | 62.1 ± 0.8 |
279
+
280
+ § 4.4.3 ABLATION STUDY AND DISCUSSION
281
+
282
+ We now discuss the influence of the charge parameter $q$ in the Laplacian matrix, and how input features affect the performance. Table 3 in Appendix D compares different variants of MSGNN on the link prediction tasks, with respect to (1) whether we use a traditional signed Laplacian that is initially designed for undirected networks (for which $q = 0$ ) and a magnetic signed Laplacian that strongly emphasizes directionality (for which $q = {0.25}$ ); (2) whether to include sign in input node features (if False, then only in- and out-degrees are computed like in [13] regardless of edge signs, otherwise features are constructed based on the positive and negative subgraphs separately); and (3) whether we take edge weights into account (if False, we view all edge weights as having magnitude one). Taking the standard errors into account, we find that incorporating directionality into the Laplacian matrix (i.e., having nonzero $q$ ) typically leads to slightly better performance in the directionality-related tasks (DP, $3\mathrm{C},4\mathrm{C},5\mathrm{C}$ ). Drilling deeper, a comparison using different $q$ values in the magnetic part shown in Table 4 provides further evidence that nonzero $q$ values usually boost performance compared with no emphasis on directionality at all $\left( {q = 0}\right)$ , in addition to Figure 1 for node clustering.
283
+
284
+ Moreover, signed features are in general helpful for tasks involving sign prediction. For constructing weighted features we see no significant difference in simply summing up entries in the adjacency matrix compared to summing the absolute values of the entries. Besides, it seems that calculating degrees based on unit-magnitude weights is generally not best, as even when it leads in performance it is within one standard error of another method. Treating negative edge weights as the negation of positive ones is also not helpful (by not having separate degree features for the positive and negative subgraphs), which may explain why SigMaNet performs poorly in most scenarios due to its undesirable property. Surprisingly often, including only signed information but not weighted features does well. To conclude, constructing features based on the positive and negative subgraphs separately is helpful, and including directional information is generally beneficial.
285
+
286
+ § 5 CONCLUSION AND OUTLOOK
287
+
288
+ In this paper, we propose a spectral graph neural network based on a novel magnetic signed Laplacian matrix, introduce a novel synthetic network model and new real-world data sets, and conduct experiments on node clustering and link prediction tasks that are not restricted to considering either link sign or directionality alone. MSGNN performs as well or better than leading GNNs, while being considerably faster on real-world data sets. Future plans include investigating an extension to temporal/dynamic graphs, where node features and/or edge information could evolve over time [45, 46]. We are also interested in extending our work to being able to encode nontrivial edge features, to develop objectives which explicitly handle heterogeneous edge densities throughout the graph, and to extend our approach to hypergraphs and other complex network structures.
papers/LOG/LOG 2022/LOG 2022 Conference/KgjU7K0yEgt/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,111 @@
1
+ # Graph Heavy Light Decomposed Networks: Towards learning scalable long-range graph patterns
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ We present graph heavy light decomposed networks (GraphHLDNs), novel neural network architectures allowing reasoning about long-range relationships on graphs reducible to trees. By decomposing the trees into a set of interconnected chains in a way similar to the heavy-light decomposition algorithm, we rewire a tree with $n$ vertices so that its depth is of order $O\left( {{\log }^{2}n}\right)$ after building a binary-tree-shaped neural network over each chain. This enables faster propagation and aggregation of information over the whole graph while being able to reason about long-range sequences of nodes and considering their ordering. We show that in this way the method partially addresses the previous need of message-passing architectures for step-by-step supervision to execute certain algorithms out-of-distribution. Our method is also applicable to real-world datasets, achieving results competitive with other state-of-the-art architectures targeted on learning long-range dependencies or using positional encodings on several molecular datasets.
12
+
13
+ ## 1 Introduction
14
+
15
+ In most graph neural network architectures where in each layer nodes aggregate information from their neighbours, the range in which the information can travel is limited by the number of propagation layers. This hinders the ability of such architectures to reason about long-range dependencies, patterns and metrics such as orderings of vertices, their distances, or attributes of paths between two or more nodes.
16
+
17
+ Furthermore, even if the network manages to learn and recognize such patterns on smaller graphs for example by using a step-by-step supervision signal as in [1], the networks have poor ability to generalize such patterns to graphs of larger scales and sizes [1, 2].
18
+
19
+ Several recent works have tried to tackle long-range reasoning. Approaches include the addition of various positional encodings [3-5], hierarchical networks that make connections between distant nodes [6], or the inclusion of modules that dynamically change the number of propagation layers based on the task or graph size [2]. These methods, however, have their limitations: for example, hierarchical networks merge multiple nodes together, which leads to a loss of information about their original edge connections. On the other hand, modules that dynamically change the number of propagation layers usually require a linear number of steps in the graph diameter, leading to over-smoothing and vanishing/exploding gradient problems on large graphs.
20
+
21
+ In this work, we propose a novel architecture that allows better reasoning over long-range distances on graphs reducible to trees. This is done by decomposing the trees into a set of chains, similarly to the heavy-light decomposition algorithm (introduced by Sleator and Tarjan [7]), and connecting different chains through binary-tree-shaped neural networks. This design allows the network to reason not only about neighbouring relationships but also about larger units such as paths. We show that in this way our model is able to learn, execute and strongly generalize, without a step-by-step supervision signal, several algorithms, such as finding the shortest path, the lowest common ancestor or a minimum vertex cover. Further, we show that GraphHLDN has strong utility on real-world datasets and is competitive with, or even outperforms, the best models on molecular datasets such as AQSOL [4], ESOL [8] or Peptides-struct [9].
22
+
23
+ ![01963ee8-a182-75c4-86fc-84d3ded32e47_1_319_226_1167_443_0.jpg](images/01963ee8-a182-75c4-86fc-84d3ded32e47_1_319_226_1167_443_0.jpg)
24
+
25
+ Figure 1: On the left is an example of a rooted input tree with edges separated into heavy and light edges. On the right, each heavy chain was transformed into a binary tree. Binary trees were then connected along light edges. To compute the graph-level feature we use the encode-process-decode method, where first all inputs are encoded. In the process phase, the nodes are evaluated bottom-up layer by layer. In internal binary-tree nodes (yellow), the binary merging MLP $\phi$ is used.
26
+
27
+ ## 2 Methodology
28
+
29
+ The key step in our method is the generation of the heavy-light decomposed (HLD) tree. This consists of three main sub-steps: first selecting light edges that split the tree into chains, then creating binary trees over each chain, and finally connecting those trees along the light edges. Firstly, after rooting the tree, the edges are split into heavy and light ones in a similar way as in the heavy-light decomposition algorithm: i.e. so that each vertex has at most one heavy child and from each vertex the path to the root contains at most $O{\left( \log n\right) }^{1}$ light edges. After creating the binary trees, we connect them along the light edges. The root of the binary tree over each chain that hangs off a light edge in the original tree is connected to the parent endpoint of that light edge, as displayed in Figure 1.
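The chain-construction step can be sketched as follows (plain Python; the example tree is illustrative and not the one in Figure 1). Each vertex keeps its largest-subtree child as the heavy child, and every other child starts a new chain reached through a light edge.

```python
from collections import defaultdict

def heavy_light_chains(children, root=0):
    """Split a rooted tree into heavy chains (sketch of the decomposition step).

    `children[v]` lists the children of v. Keeping the largest-subtree child as
    the single heavy child guarantees O(log n) light edges on any root-to-leaf
    path; a binary tree would then be built over every returned chain.
    """
    size = defaultdict(int)

    def subtree_size(v):                  # recursion is fine for small trees
        size[v] = 1
        for c in children.get(v, []):
            size[v] += subtree_size(c)
        return size[v]

    subtree_size(root)
    chains, light_parent = [], {}

    def build_chain(head, parent_of_head):
        light_parent[head] = parent_of_head   # light edge connecting this chain upward
        chain, v = [], head
        while True:
            chain.append(v)
            kids = children.get(v, [])
            if not kids:
                break
            heavy = max(kids, key=lambda c: size[c])
            for c in kids:
                if c != heavy:
                    build_chain(c, v)         # every non-heavy child starts a new chain
            v = heavy
        chains.append(chain)

    build_chain(root, None)
    return chains, light_parent

# illustrative 9-vertex tree
children = {0: [1, 2], 1: [3, 4], 2: [5], 4: [6, 7], 5: [8]}
chains, light_parent = heavy_light_chains(children)
print(chains)
print(light_parent)
```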
30
+
31
+ To process the input tree we closely follow the encode-process-decode [10] architecture as used in [1] and [11], where GraphHLDN is at the heart of the processing phase. To describe the flow of information through the GraphHLDN, we split the nodes into two categories: merging nodes (which were created as part of GraphHLDN) and original nodes. To compute a graph-level output we traverse the graph in layers determined by the depths of the individual vertices, from the deepest vertices to the root. In each layer we combine aggregated information from deeper layers to obtain aggregated information at the higher level. In each merging node, the representations of its two children ${x}_{l}$ and ${x}_{r}$ are combined using a trainable multi-layer perceptron (MLP) $\phi$ as $\phi \left( {{x}_{l},{x}_{r}}\right) .{}^{2}$ In some of the original nodes we need to process light children as well. To do this, we take the representation of each light child, process it through a separate MLP ${\phi }_{2}$, and combine the results using sum aggregation. Then an MLP ${\phi }_{3}$ combines the representation of the original node ${x}_{p}$ and the element-wise sum ${x}_{c}$ obtained from its light-edge children as ${\phi }_{3}\left( {{x}_{p},{x}_{c}}\right)$. Applying these rules layer by layer from the bottom up, the network gradually aggregates the information towards the root.
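+ The bottom-up pass can be sketched as follows. This is an illustrative sketch under our own assumptions (chains are represented by the nested tuples from the previous snippet, `x` holds one encoded vector per original node, `light_children[v]` lists the chain indices attached to `v` by light edges, and the three-layer LeakyReLU MLPs follow the setup described in the evaluation section), not the authors' code:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def make_mlp(d_in, dim):
+     return nn.Sequential(nn.Linear(d_in, dim), nn.LeakyReLU(),
+                          nn.Linear(dim, dim), nn.LeakyReLU(),
+                          nn.Linear(dim, dim))
+
+ class BottomUpProcessor(nn.Module):
+     def __init__(self, dim=64):
+         super().__init__()
+         self.phi = make_mlp(2 * dim, dim)    # merges two chain segments (merging nodes)
+         self.phi2 = make_mlp(dim, dim)       # transforms each light child before the sum
+         self.phi3 = make_mlp(2 * dim, dim)   # combines a node with its summed light children
+
+     def original_node(self, v, x, light_children, chain_trees):
+         h = x[v]
+         attached = light_children.get(v, [])
+         if attached:                          # aggregate light-edge subtrees into node v
+             summed = sum(self.phi2(self.chain(chain_trees[c], x, light_children, chain_trees))
+                          for c in attached)
+             h = self.phi3(torch.cat([h, summed], dim=-1))
+         return h
+
+     def chain(self, tree, x, light_children, chain_trees):
+         if isinstance(tree, tuple):           # internal merging node of the binary tree
+             left = self.chain(tree[0], x, light_children, chain_trees)
+             right = self.chain(tree[1], x, light_children, chain_trees)
+             return self.phi(torch.cat([left, right], dim=-1))
+         return self.original_node(tree, x, light_children, chain_trees)
+ ```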
32
+
33
+ When the goal is to compute node-level targets, we can also send information downwards by traversing the tree top-down: in each layer we combine the representation of each node from the bottom-up pass with the representation of its parent from the top-down pass. These aggregation (bottom-up) and spreading (top-down) passes can be interleaved multiple times in order to allow more general functions to be learned.
34
+
35
+ This choice of structure is beneficial for two reasons. Firstly, it is very similar to segment trees, which are often used together with heavy-light decomposition to answer queries about trees in $O\left( {{\log }^{2}n}\right)$ time.
36
+
37
+ ---
38
+
39
+ ${}^{1}n$ denotes the number of vertices in the tree.
40
+
41
+ ${}^{2}$ We also note that each merging node merges two parts of a heavy chain along some edge in the original tree. Therefore, in this merging process we can, besides the encoded children representations, also include the representation of this edge.
42
+
43
+ ---
44
+
45
+ Table 1: Results comparing MPNN with GraphHLDN on synthetic tasks. Each task has an exact solution, so average test accuracy is reported both in and out of distribution.
46
+
47
+ <table><tr><td rowspan="2">Task description</td><td colspan="2">GraphHLDN</td><td colspan="2">MPNN</td></tr><tr><td>$n \leq {100}$</td><td>$n \leq {10000}$</td><td>$n \leq {100}$</td><td>$n \leq {10000}$</td></tr><tr><td>Predict nodes on shortest path</td><td>100%</td><td>99.95%</td><td>71.13%</td><td>81.89%</td></tr><tr><td>Find LCA of 2 nodes with given root</td><td>99.5%</td><td>91.4%</td><td>32.16%</td><td>20.68%</td></tr><tr><td>Predict nodes in MVC ${}^{3}$</td><td>99.37%</td><td>99.55%</td><td>91.27%</td><td>91.02%</td></tr></table>
48
+
49
+ This allows the computation to be done in $O\left( {{\log }^{2}n}\right)$ message-passing iterations, mitigating the vanishing gradient problem and also improving out-of-distribution generalisation, because multiplying the number of nodes results in only a constant increase in the number of iterations. Secondly, this structure preserves the ordering information of vertices along a path, as the learnable merging function receives the left and right vertex as separate and distinguishable inputs. Note that we use the same merging MLP in all layers. This means that the merging function should work on segments of all sizes (it should be able to merge a vertex representing one node with a vertex representing 1024 nodes). One of the properties we therefore expect from the merging function is associativity. We enforce this by adding an associativity consistency loss (regularisation), which is computed by taking random triplets of nodes from the tree with their representations $a, b, c$ and then enforcing that $\left| {\mathrm{{BN}}\left( {\phi \left( {\phi \left( {a, b}\right) , c}\right) }\right) - \mathrm{{BN}}\left( {\phi \left( {a,\phi \left( {b, c}\right) }\right) }\right) }\right|$ is minimized. The batch normalisation function $\mathrm{{BN}}$ is used to normalize across the features. To make the effect of the normalization stronger, instead of creating just one heavy-light tree from a fixed root, we choose multiple random roots with different corresponding HLD trees, and during testing we average the outputs over these trees.
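+ A minimal sketch of this associativity consistency loss is given below (the triplet count, the concatenation-based calling convention for $\phi$ and the use of `BatchNorm1d` are our assumptions about the exact setup, not details taken from the paper):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def associativity_loss(phi, bn: nn.BatchNorm1d, node_reprs, num_triplets=64):
+     """Penalise |BN(phi(phi(a, b), c)) - BN(phi(a, phi(b, c)))| for random node triplets."""
+     idx = torch.randint(0, node_reprs.size(0), (num_triplets, 3))
+     a, b, c = node_reprs[idx[:, 0]], node_reprs[idx[:, 1]], node_reprs[idx[:, 2]]
+     merge = lambda u, v: phi(torch.cat([u, v], dim=-1))
+     left = merge(merge(a, b), c)      # ((a merged with b) merged with c)
+     right = merge(a, merge(b, c))     # (a merged with (b merged with c))
+     return (bn(left) - bn(right)).abs().mean()
+ ```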
50
+
51
+ ## 3 Evaluation
52
+
53
+ We evaluate the proposed architecture on both synthetic algorithmic datasets and molecular benchmarks. In the algorithmic datasets, the input graphs are uniformly randomly selected trees, with the training and validation sets containing up to 100 nodes and the test sets up to 10000 nodes to test out-of-distribution generalization to larger graphs. The evaluation focuses on the following node classification tasks: predicting the nodes on the shortest path between two marked nodes, finding the lowest common ancestor of two randomly selected nodes given a randomly marked root, and predicting the nodes in the minimum vertex cover. We use a GraphHLDN network with hidden embeddings of size 64 and three-layer multi-layer perceptrons with LeakyReLUs. For comparison we train a 30-iteration message-passing neural network with sum aggregation and the same hidden embedding size and multi-layer perceptrons.
54
+
55
+ We compare the performance of GraphHLDN on the Peptides-struct, AQSOL and ESOL benchmarking datasets with previously reported baseline results from [4], [9] and [12]. The only difference is that in the case of Peptides-struct we use hidden embeddings of size 128. For each graph in the datasets, we select a random spanning tree of the graph; if the graph has multiple components we randomly select just one. We then create 30 randomly chosen transformed HLD trees from the selected spanning trees, and during testing we report how the prediction averaged over all 30 trees compares with the targets.
56
+
57
+ Discussion and conclusions. As can be seen from Table 1, GraphHLDN is able to learn the algorithmic patterns from the synthetic tasks and generalizes out of the training distribution to graphs with a hundred times more nodes. This is despite the fact that no step-by-step supervision signal was used to learn intermediate algorithmic steps, as required by previous works that could only generalize to much smaller graphs. For most tasks, the precision of GraphHLDN is near perfect, suggesting that the network learns the actual algorithm behind the dataset target rather than some approximation of it.
58
+
59
+ Due to the non-invariance of the proposed network to the choice of HLD tree and the lack of a strong local-neighbourhood inductive bias, it might be expected that this architecture would generalize poorly on noisy real-world data. This seems especially likely due to the tree-shaped structure of GraphHLDN, where the nodes in each layer need to aggregate and summarize information from nearly twice as many nodes in the deeper layer. This introduces a bottleneck causing over-squashing of exponentially growing information into fixed-size vectors, which was shown to negatively impact the performance of graph neural networks [13] on graphs with negatively curved edges [14]. However, as can be seen in Tables 2, 3 and 4, GraphHLDN is not only applicable to synthetic tasks but can also be practically useful on molecular datasets. GraphHLDN outperforms all models reported in [4] on the AQSOL dataset while using a significantly smaller number of parameters. Similarly, on ESOL it almost matches the performance of D-MPNN, and on the Peptides-struct dataset focused on long-range dependencies, GraphHLDN is competitive with transformer-based architectures.
60
+
61
+ ---
62
+
63
+ ${}^{3}$ Weighted minimum vertex cover; if there are conflicts, we prefer solutions where the selected nodes are as close to the root of the HLD tree as possible, which leads to unique solutions. Weights are integers between 1 and 5.
64
+
65
+ ---
66
+
67
+ Table 2: Results comparing test MAE on the AQSOL dataset. The suffix LapPE denotes the use of Laplacian eigenvectors as node positional encodings with dimension 4.
68
+
69
+ <table><tr><td>Model</td><td>L</td><td>$\mathbf{\# {Params}}$</td><td>Test MAE $\pm$ s.d.</td></tr><tr><td>RingGNN</td><td>2</td><td>123k</td><td>${3.769} \pm {1.012}$</td></tr><tr><td>GIN</td><td>16</td><td>514k</td><td>${1.962} \pm {0.058}$</td></tr><tr><td>MoNet</td><td>16</td><td>507k</td><td>${1.501} \pm {0.056}$</td></tr><tr><td>GAT</td><td>16</td><td>540k</td><td>${1.403} \pm {0.008}$</td></tr><tr><td>GCN</td><td>16</td><td>511k</td><td>${1.333} \pm {0.013}$</td></tr><tr><td>GatedGCN</td><td>16</td><td>507k</td><td>${1.308} \pm {0.013}$</td></tr><tr><td>3WLGNN</td><td>3</td><td>525k</td><td>${1.108} \pm {0.036}$</td></tr><tr><td>GatedGCN-LapPE</td><td>16</td><td>507k</td><td>${0.996} \pm {0.008}$</td></tr><tr><td>GraphHLDN</td><td>N/A</td><td>87k</td><td>$\mathbf{{0.882} \pm {0.012}}$</td></tr></table>
70
+
71
+ Table 3: Results comparing test MAE on Peptides-struct dataset.
72
+
73
+ <table><tr><td>Model</td><td>L</td><td>$\mathbf{\# {Params}}$</td><td>Test MAE $\pm$ s.d.</td></tr><tr><td>GINE</td><td>5</td><td>547k</td><td>${0.354} \pm {0.0045}$</td></tr><tr><td>GCN</td><td>5</td><td>508k</td><td>${0.349} \pm {0.0013}$</td></tr><tr><td>GatedGCN</td><td>5</td><td>509k</td><td>${0.342} \pm {0.0013}$</td></tr><tr><td>GatedGCN+RWSE</td><td>5</td><td>506k</td><td>${0.335} \pm {0.0006}$</td></tr><tr><td>SAN+LapPE</td><td>4</td><td>493k</td><td>${0.268} \pm {0.0043}$</td></tr><tr><td>SAN+RWSE</td><td>4</td><td>500k</td><td>${0.254} \pm {0.0012}$</td></tr><tr><td>Transformer+LapPE</td><td>4</td><td>488k</td><td>${0.252} \pm {0.0016}$</td></tr><tr><td>GraphHLDN</td><td>N/A</td><td>351k</td><td>${0.288} \pm {0.0032}$</td></tr></table>
74
+
75
+ It is also notable that this performance is achieved despite ignoring the edges not included in the spanning trees when the input graphs are not trees. We hope that our work will inspire further research into extending the capabilities of GraphHLDN to other graph topologies and into further enhancing or combining classical message passing with GraphHLDN.
76
+
77
+ Table 4: Results comparing test RMSE on ESOL dataset.
78
+
79
+ <table><tr><td>Model</td><td>L</td><td>$\mathbf{\# {Params}}$</td><td>Test RMSE $\pm$ s.d.</td></tr><tr><td>Fingerprint + MLP</td><td>5</td><td>401k</td><td>${0.922} \pm {0.017}$</td></tr><tr><td>GIN</td><td>5</td><td>626k</td><td>${0.665} \pm {0.026}$</td></tr><tr><td>GAT</td><td>5</td><td>671k</td><td>${0.654} \pm {0.028}$</td></tr><tr><td>D-MPNN</td><td>5</td><td>100k</td><td>${0.635} \pm {0.027}$</td></tr><tr><td>GraphHLDN</td><td>N/A</td><td>87k</td><td>${0.639} \pm {0.019}$</td></tr></table>
80
+
81
+ ## References
82
+
83
+ [1] Petar Velickovic, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. ArXiv, abs/1910.10593, 2020. 1, 2
86
+
87
+ [2] Hao Tang, Zhiao Huang, Jiayuan Gu, Baoliang Lu, and Hao Su. Towards scale-invariant graph-related problem solving by iterative homogeneous gnns. the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), 2020. 1
88
+
89
+ [3] Rickard Brüel-Gabrielsson, Mikhail Yurochkin, and Justin Solomon. Rewiring with positional encodings for graph neural networks, 01 2022. 1
90
+
91
+ [4] Vijay Prakash Dwivedi, Chaitanya K Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. arXiv preprint arXiv:2003.00982, 2020. 1, 3, 4
92
+
93
+ [5] Hongya Wang, Haoteng Yin, Muhan Zhang, and Pan Li. Equivariant and stable positional encoding for more powerful graph neural networks. ArXiv, abs/2203.00199, 2022. 1
94
+
95
+ [6] Ladislav Rampášek and Guy Wolf. Hierarchical graph neural nets can capture long-range interactions, 2021. 1
96
+
97
+ [7] Daniel D Sleator and Robert Endre Tarjan. A data structure for dynamic trees. In Proceedings of the thirteenth annual ACM symposium on Theory of computing, pages 114-122, 1981. 1
98
+
99
+ [8] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020. 1
100
+
101
+ [9] Vijay Prakash Dwivedi, Ladislav Rampášek, Mikhail Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, and Dominique Beaini. Long range graph benchmark. arXiv:2206.08164, 2022. 1, 3
102
+
103
+ [10] Jessica Hamrick, Kelsey Allen, Victor Bapst, Tina Zhu, Kevin McKee, Joshua Tenenbaum, and Peter Battaglia. Relational inductive bias for physical construction in humans and machines, 06 2018. 2
104
+
105
+ [11] Petar Veličković, Adrià Puigdomènech Badia, David Budden, Razvan Pascanu, Andrea Banino, Misha Dashevskiy, Raia Hadsell, and Charles Blundell. The CLRS algorithmic reasoning benchmark. arXiv preprint arXiv:2205.15659, 2022. 2
106
+
107
+ [12] Gary Becigneul, Octavian Ganea, Benson Chen, Regina Barzilay, and Tommi Jaakkola. Optimal transport graph neural networks, 06 2020. 3
108
+
109
+ [13] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=i800PhOCVH2. 4
110
+
111
+ [14] Jake Topping, Francesco Di Giovanni, Benjamin Chamberlain, Xiaowen Dong, and Michael Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature, 11 2021. 4
papers/LOG/LOG 2022/LOG 2022 Conference/KgjU7K0yEgt/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,165 @@
1
+ § GRAPH HEAVY LIGHT DECOMPOSED NETWORKS: TOWARDS LEARNING SCALABLE LONG-RANGE GRAPH PATTERNS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ We present graph heavy light decomposed networks (GraphHLDNs), novel neural network architectures allowing reasoning about long-range relationships on graphs reducible to trees. By decomposing the trees into a set of interconnected chains in a way similar to the heavy-light decomposition algorithm, we rewire a tree with $n$ vertices so that its depth is on the order of $O\left( {{\log }^{2}n}\right)$ after building a binary-tree-shaped neural network over each chain. This enables faster propagation and aggregation of information over the whole graph while being able to reason about long-range sequences of nodes and their ordering. We show that in this way the method partially addresses the previous need of message-passing architectures for step-by-step supervision to execute certain algorithms out-of-distribution. Our method is also applicable to real-world datasets, achieving results competitive with other state-of-the-art architectures targeted at learning long-range dependencies or using positional encodings on several molecular datasets.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ In most graph neural network architectures, nodes aggregate information from their neighbours in each layer, so the range over which information can travel is limited by the number of propagation layers. This hinders the ability of such architectures to reason about long-range dependencies, patterns and metrics such as orderings of vertices, their distances, or attributes of paths between two or more nodes.
16
+
17
+ Furthermore, even if a network manages to learn and recognize such patterns on smaller graphs, for example by using a step-by-step supervision signal as in [1], it generalizes poorly to graphs of larger scales and sizes [1, 2].
18
+
19
+ Several recent works have tried to tackle long-range reasoning. Approaches include the addition of various positional encodings [3-5], hierarchical networks that create connections between distant nodes [6], or the inclusion of modules that dynamically change the number of propagation layers based on the task or graph size [2]. These methods, however, have their limitations: hierarchical networks merge multiple nodes together, which loses information about the original edge connections, while modules that dynamically change the number of propagation layers usually require a number of steps linear in the graph diameter, leading to over-smoothing and vanishing/exploding gradients on large graphs.
20
+
21
+ In this work, we propose a novel architecture that allows better reasoning over long-range distances on graphs reducible to trees. This is done by decomposing the trees into a set of chains, similarly to the heavy-light decomposition algorithm (introduced by Sleator and Tarjan [7]), and connecting different chains through binary-tree-shaped neural networks. This design allows the network to reason both about neighbouring relationships and about larger units such as paths. We show that in this way our model is able to learn, execute and strongly generalize several algorithms, such as finding the shortest path, the lowest common ancestor or the minimum vertex cover, without a step-by-step supervision signal. Further, we show that GraphHLDN has strong utility on real-world datasets and is competitive with, or even outperforms, the best models on molecular datasets such as AQSOL [4], ESOL [8] or Peptides-struct [9].
22
+
23
+ [Figure 1 graphic]
24
+
25
+ Figure 1: On the left is an example input tree, rooted, with its edges separated into heavy and light edges. On the right, each heavy chain was transformed into a binary tree, and the binary trees were then connected along the light edges. To compute the graph-level feature we use the encode-process-decode method, where first all inputs are encoded. In the process phase, the nodes are evaluated bottom-up, layer by layer. In binary-tree internal nodes (yellow), the binary merging MLP $\phi$ is used.
26
+
27
+ § 2 METHODOLOGY
28
+
29
+ The key step in our method is the generation of the heavy-light decomposed (HLD) tree. This consists of three main sub-steps: first selecting the light edges that split the tree into chains, then creating a binary tree over each chain, and finally connecting those trees along the light edges. After rooting the tree, the edges are split into heavy and light ones in a similar way as in the heavy-light decomposition algorithm, i.e. so that each vertex has at most one heavy child and the path from each vertex to the root contains at most $O{\left( \log n\right) }^{1}$ light edges. After creating the binary trees, we connect them along the light edges: the binary-tree root of each chain that is attached by a light edge in the original tree is connected to that light edge's parent in the original tree, as displayed in Figure 1.
30
+
31
+ To process the input tree we closely follow the encode-process-decode [10] architecture as used in [1] and [11], where GraphHLDN is at the heart of the processing phase. To describe the flow of information through the GraphHLDN, we split the nodes into two categories: merging nodes (which were created as part of GraphHLDN) and original nodes. To compute a graph-level output we traverse the graph in layers determined by the depths of the individual vertices, from the deepest vertices to the root. In each layer we combine aggregated information from deeper layers to obtain aggregated information at the higher level. In each merging node, the representations of its two children ${x}_{l}$ and ${x}_{r}$ are combined using a trainable multi-layer perceptron (MLP) $\phi$ as $\phi \left( {{x}_{l},{x}_{r}}\right) .{}^{2}$ In some of the original nodes we need to process light children as well. To do this, we take the representation of each light child, process it through a separate MLP ${\phi }_{2}$, and combine the results using sum aggregation. Then an MLP ${\phi }_{3}$ combines the representation of the original node ${x}_{p}$ and the element-wise sum ${x}_{c}$ obtained from its light-edge children as ${\phi }_{3}\left( {{x}_{p},{x}_{c}}\right)$. Applying these rules layer by layer from the bottom up, the network gradually aggregates the information towards the root.
32
+
33
+ When the goal is to compute node-level targets, we can also send information downwards by traversing the tree top-down: in each layer we combine the representation of each node from the bottom-up pass with the representation of its parent from the top-down pass. These aggregation (bottom-up) and spreading (top-down) passes can be interleaved multiple times in order to allow more general functions to be learned.
34
+
35
+ This choice of structure is beneficial for two reasons. Firstly, it is very similar to segment trees, which are often used together with heavy-light decomposition to answer queries about trees in $O\left( {{\log }^{2}n}\right)$ time.
36
+
37
+ ${}^{1}n$ denotes the number of vertices in the tree.
38
+
39
+ ${}^{2}$ We also note that each merging node merges two parts of a heavy chain along some edge in the original tree. Therefore, in this merging process we can, besides the encoded children representations, also include the representation of this edge.
40
+
41
+ Table 1: Results comparing MPNN with GraphHLDN on synthetic tasks. Each task has an exact solution, so average test accuracy is reported both in and out of distribution.
42
+
43
+ \begin{tabular}{l|cc|cc}
+ \hline
+ \multirow{2}{*}{Task description} & \multicolumn{2}{c|}{GraphHLDN} & \multicolumn{2}{c}{MPNN} \\
+ \cline{2-5}
+  & $n \leq {100}$ & $n \leq {10000}$ & $n \leq {100}$ & $n \leq {10000}$ \\
+ \hline
+ Predict nodes on shortest path & 100\% & 99.95\% & 71.13\% & 81.89\% \\
+ Find LCA of 2 nodes with given root & 99.5\% & 91.4\% & 32.16\% & 20.68\% \\
+ Predict nodes in MVC${}^{3}$ & 99.37\% & 99.55\% & 91.27\% & 91.02\% \\
+ \hline
+ \end{tabular}
60
+
61
+ This allows the computation to be done in $O\left( {{\log }^{2}n}\right)$ message-passing iterations, mitigating the vanishing gradient problem and also improving out-of-distribution generalisation, because multiplying the number of nodes results in only a constant increase in the number of iterations. Secondly, this structure preserves the ordering information of vertices along a path, as the learnable merging function receives the left and right vertex as separate and distinguishable inputs. Note that we use the same merging MLP in all layers. This means that the merging function should work on segments of all sizes (it should be able to merge a vertex representing one node with a vertex representing 1024 nodes). One of the properties we therefore expect from the merging function is associativity. We enforce this by adding an associativity consistency loss (regularisation), which is computed by taking random triplets of nodes from the tree with their representations $a, b, c$ and then enforcing that $\left| {\mathrm{{BN}}\left( {\phi \left( {\phi \left( {a, b}\right) , c}\right) }\right) - \mathrm{{BN}}\left( {\phi \left( {a,\phi \left( {b, c}\right) }\right) }\right) }\right|$ is minimized. The batch normalisation function $\mathrm{{BN}}$ is used to normalize across the features. To make the effect of the normalization stronger, instead of creating just one heavy-light tree from a fixed root, we choose multiple random roots with different corresponding HLD trees, and during testing we average the outputs over these trees.
62
+
63
+ § 3 EVALUATION
64
+
65
+ We evaluate the proposed architecture on both synthetic algorithmic datasets and molecular benchmarks. In the algorithmic datasets, the input graphs are uniformly randomly selected trees, with the training and validation sets containing up to 100 nodes and the test sets up to 10000 nodes to test out-of-distribution generalization to larger graphs. The evaluation focuses on the following node classification tasks: predicting the nodes on the shortest path between two marked nodes, finding the lowest common ancestor of two randomly selected nodes given a randomly marked root, and predicting the nodes in the minimum vertex cover. We use a GraphHLDN network with hidden embeddings of size 64 and three-layer multi-layer perceptrons with LeakyReLUs. For comparison we train a 30-iteration message-passing neural network with sum aggregation and the same hidden embedding size and multi-layer perceptrons.
66
+
67
+ We compare the performance of GraphHLDN on the Peptides-struct, AQSOL and ESOL benchmarking datasets with previously reported baseline results from [4], [9] and [12]. The only difference is that in the case of Peptides-struct we use hidden embeddings of size 128. For each graph in the datasets, we select a random spanning tree of the graph; if the graph has multiple components we randomly select just one. We then create 30 randomly chosen transformed HLD trees from the selected spanning trees, and during testing we report how the prediction averaged over all 30 trees compares with the targets.
68
+
69
+ Discussion and conclusions. As can be seen from Table 1, GraphHLDN is able to learn the algorithmic patterns from the synthetic tasks and generalizes out of the training distribution to graphs with a hundred times more nodes. This is despite the fact that no step-by-step supervision signal was used to learn intermediate algorithmic steps, as required by previous works that could only generalize to much smaller graphs. For most tasks, the precision of GraphHLDN is near perfect, suggesting that the network learns the actual algorithm behind the dataset target rather than some approximation of it.
70
+
71
+ Due to the non-invariance of the proposed network to the choice of HLD tree and the lack of a strong local-neighbourhood inductive bias, it might be expected that this architecture would generalize poorly on noisy real-world data. This seems especially likely due to the tree-shaped structure of GraphHLDN, where the nodes in each layer need to aggregate and summarize information from nearly twice as many nodes in the deeper layer. This introduces a bottleneck causing over-squashing of exponentially growing information into fixed-size vectors, which was shown to negatively impact the performance of graph neural networks [13] on graphs with negatively curved edges [14]. However, as can be seen in Tables 2, 3 and 4, GraphHLDN is not only applicable to synthetic tasks but can also be practically useful on molecular datasets. GraphHLDN outperforms all models reported in [4] on the AQSOL dataset while using a significantly smaller number of parameters. Similarly, on ESOL it almost matches the performance of D-MPNN, and on the Peptides-struct dataset focused on long-range dependencies, GraphHLDN is competitive with transformer-based architectures.
72
+
73
+ ${}^{3}$ Weighted minimum vertex cover; if there are conflicts, we prefer solutions where the selected nodes are as close to the root of the HLD tree as possible, which leads to unique solutions. Weights are integers between 1 and 5.
74
+
75
+ Table 2: Results comparing test MAE on the AQSOL dataset. The suffix LapPE denotes the use of Laplacian eigenvectors as node positional encodings with dimension 4.
76
+
77
+ \begin{tabular}{l|c|c|c}
+ \hline
+ Model & L & \#Params & Test MAE $\pm$ s.d. \\
+ \hline
+ RingGNN & 2 & 123k & ${3.769} \pm {1.012}$ \\
+ GIN & 16 & 514k & ${1.962} \pm {0.058}$ \\
+ MoNet & 16 & 507k & ${1.501} \pm {0.056}$ \\
+ GAT & 16 & 540k & ${1.403} \pm {0.008}$ \\
+ GCN & 16 & 511k & ${1.333} \pm {0.013}$ \\
+ GatedGCN & 16 & 507k & ${1.308} \pm {0.013}$ \\
+ 3WLGNN & 3 & 525k & ${1.108} \pm {0.036}$ \\
+ GatedGCN-LapPE & 16 & 507k & ${0.996} \pm {0.008}$ \\
+ GraphHLDN & N/A & 87k & $\mathbf{{0.882} \pm {0.012}}$ \\
+ \hline
+ \end{tabular}
109
+
110
+ Table 3: Results comparing test MAE on Peptides-struct dataset.
111
+
112
+ \begin{tabular}{l|c|c|c}
+ \hline
+ Model & L & \#Params & Test MAE $\pm$ s.d. \\
+ \hline
+ GINE & 5 & 547k & ${0.354} \pm {0.0045}$ \\
+ GCN & 5 & 508k & ${0.349} \pm {0.0013}$ \\
+ GatedGCN & 5 & 509k & ${0.342} \pm {0.0013}$ \\
+ GatedGCN+RWSE & 5 & 506k & ${0.335} \pm {0.0006}$ \\
+ SAN+LapPE & 4 & 493k & ${0.268} \pm {0.0043}$ \\
+ SAN+RWSE & 4 & 500k & ${0.254} \pm {0.0012}$ \\
+ Transformer+LapPE & 4 & 488k & ${0.252} \pm {0.0016}$ \\
+ GraphHLDN & N/A & 351k & ${0.288} \pm {0.0032}$ \\
+ \hline
+ \end{tabular}
141
+
142
+ It is also notable that this performance is achieved despite ignoring the edges not included in the spanning trees when the input graphs are not trees. We hope that our work will inspire further research into extending the capabilities of GraphHLDN to other graph topologies and into further enhancing or combining classical message passing with GraphHLDN.
143
+
144
+ Table 4: Results comparing test RMSE on ESOL dataset.
145
+
146
+ \begin{tabular}{l|c|c|c}
+ \hline
+ Model & L & \#Params & Test RMSE $\pm$ s.d. \\
+ \hline
+ Fingerprint + MLP & 5 & 401k & ${0.922} \pm {0.017}$ \\
+ GIN & 5 & 626k & ${0.665} \pm {0.026}$ \\
+ GAT & 5 & 671k & ${0.654} \pm {0.028}$ \\
+ D-MPNN & 5 & 100k & ${0.635} \pm {0.027}$ \\
+ GraphHLDN & N/A & 87k & ${0.639} \pm {0.019}$ \\
+ \hline
+ \end{tabular}
papers/LOG/LOG 2022/LOG 2022 Conference/O7msz8Ou7o/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,403 @@
1
+ # Collaboration-aware Graph Neural Network for Recommender Systems
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Graph Neural Networks (GNNs) have been successfully adopted in recommendation systems by virtue of message-passing that implicitly captures the collaborative effect. Nevertheless, most of the existing message-passing mechanisms for recommendation are directly inherited from GNNs without scrutinizing whether the captured collaborative effect would benefit the prediction of user preferences. To quantify the benefit of the captured collaborative effect, we propose a recommendation-oriented topological metric, Common Interacted Ratio (CIR), which measures the level of interaction between a specific neighbor of a node and the rest of its neighbors. We then propose a recommendation-tailored GNN, the Collaboration-Aware Graph Convolutional Network (CAGCN), that goes beyond the 1-WL test in distinguishing non-bipartite-subgraph-isomorphic graphs. Experiments on six benchmark datasets show that the best CAGCN variant outperforms the most representative GNN-based recommendation model, LightGCN, by nearly 10% in Recall@20 and also achieves more than ${80}\%$ speedup. Our code is available at https://github.com/submissionconf2023/CAGCN
12
+
13
+ ## 1 Introduction
14
+
15
+ Recommendation aims to alleviate information overload by helping users discover items of interest [1, 2]. Given historical user-item interactions, the key to recommendation systems is to leverage the Collaborative Effect [3-5] to predict how likely users are to interact with items. A common paradigm for modeling the collaborative effect is to first learn embeddings of users/items capable of recovering historical user-item interactions, and then perform top-k recommendation based on the pairwise similarity between the learned user/item embeddings.
16
+
17
+ Since user-item interactions can be naturally represented as a bipartite graph, recent research has started to leverage GNNs to learn user/item embeddings for recommendation [5-7]. Two pioneering works, NGCF [5] and LightGCN [7], leverage graph convolutions to aggregate messages from local neighborhoods, which directly injects the collaborative signal into user/item embeddings. However, blindly passing messages following existing styles of GNNs could capture harmful collaborative signals from unreliable interactions, which corrupts user/item embeddings and hinders the performance of GNN-based models [8]. Despite the fundamental importance of capturing beneficial collaborative signals, the related studies are still in their infancy. To fill this crucial gap, we aim to customize message-passing for recommendation and propose a recommendation-tailored GNN, namely the Collaboration-Aware Graph Convolutional Network, that selectively passes neighborhood information based on the Common Interacted Ratio (CIR). Our contributions are:
18
+
19
+ - Novel Recommendation-tailored Topological Metric: We propose a recommendation-oriented topological metric, Common Interacted Ratio (CIR), and demonstrate the capability of CIR to quantify the benefits of aggregating messages from neighborhoods.
20
+
21
+ - Novel Recommendation-tailored Graph Convolution: We incorporate CIR into message-passing and propose a novel Collaboration-Aware Graph Convolutional Network (CAGCN). We then prove that it goes beyond the 1-WL test in detecting non-bipartite-subgraph-isomorphic graphs, demonstrate its superiority via comprehensive experiments on real-world datasets (including two newly collected datasets), and provide an in-depth interpretation of its advantages.
22
+
23
+ ![01963ec7-5751-77cc-9084-2440c3b619c1_1_332_204_1047_374_0.jpg](images/01963ec7-5751-77cc-9084-2440c3b619c1_1_332_204_1047_374_0.jpg)
24
+
25
+ Figure 1: In (a)-(b), since ${j}_{1},{j}_{2}$ have more interactions (paths) with (to) $i$ ’s neighbors than ${j}_{3}$ , leveraging more collaborations from ${j}_{1},{j}_{2}$ than ${j}_{3}$ would increase $u$ ’s ranking over $i$ . In (c), we quantify the CIR between ${j}_{1}$ and $u$ via the paths (and associated nodes) between ${j}_{1}$ and ${\widehat{\mathcal{N}}}_{u}^{1}$ .
26
+
27
+ ## 2 Method
28
+
29
+ In this section, we introduce notations used in this work, a novel recommendation-oriented topological metric (i.e., Common Interacted Ratio (CIR)) and then propose the collaboration-aware GNN.
30
+
31
+ Preliminary. Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ be the user-item bipartite graph, where the node set $\mathcal{V} = \mathcal{U} \cup \mathcal{I}$ includes the user set $\mathcal{U}$ and the item set $\mathcal{I}$ . User-item interactions are denoted as edges $\mathcal{E}$ where ${e}_{pq}$ represents the edge between node $p$ and $q$ . The network topology is described by adjacency matrix $\mathbf{A} \in \{ 0,1{\} }^{\left( {\left| \mathcal{U}\right| + \left| \mathcal{I}\right| }\right) \times \left( {\left| \mathcal{U}\right| + \left| \mathcal{I}\right| }\right) }$ , where ${\mathbf{A}}_{pq} = 1$ when ${e}_{pq} \in \mathcal{E}$ , and ${\mathbf{A}}_{pq} = 0$ otherwise. Let ${\mathcal{N}}_{p}^{l}$ and ${\widehat{\mathcal{N}}}_{p}^{l}$ denote the set of neighbors that are exactly $l$ -hops away from $p$ in the training and testing set. Let ${\mathcal{S}}_{p} = \left( {{\mathcal{V}}_{{\mathcal{S}}_{p}},{\mathcal{E}}_{{\mathcal{S}}_{p}}}\right)$ be the neighborhood subgraph [9] induced in $\mathcal{G}$ by ${\widetilde{\mathcal{N}}}_{p}^{1} = {\mathcal{N}}_{p}^{1} \cup \{ p\}$ . We use ${\mathcal{P}}_{pq}^{l}$ to denote the set of shortest paths of length $l$ between node $p$ and $q$ and denote one of such paths as ${P}_{pq}^{l}$ . Note that ${\mathcal{P}}_{pq}^{l} = \varnothing$ if it is impossible to have a path between $p$ and $q$ of length $l$ , e.g., ${\mathcal{P}}_{11}^{1} = \varnothing$ in an acyclic graph. Furthermore, we denote the initial embeddings of users/items in graph $\mathcal{G}$ as ${\mathbf{E}}^{0} \in {\mathbb{R}}^{\left( {n + m}\right) \times {d}^{0}}$ where ${\mathbf{e}}_{p}^{0} = {\mathbf{E}}_{p}^{0}$ is the node $p$ ’s embedding and let ${d}_{p}$ be the degree of node $p$ .
32
+
33
+ ### 2.1 Common Interacted Ratio
34
+
35
+ Graph-based methods capture collaboration from other users/items by message-passing. However, we cannot guarantee that all of these collaborations benefit the prediction of users' preferences. For example, in Figure 1(a)-(b), given a center user $u$, we expect to leverage more collaborations from $u$'s observed neighboring items that have a higher level of interaction (e.g., ${j}_{1},{j}_{2}$ rather than ${j}_{3}$) with items that $u$ would interact with (e.g., $i$). To mathematically quantify such a level of interaction, we propose a graph topological metric, the Common Interacted Ratio (CIR):
36
+
37
+ Definition 2.1. Common Interacted Ratio (CIR): For an observed neighboring item $j \in {\mathcal{N}}_{u}^{1}$ of user $u$ , the CIR of $j$ around $u$ considering nodes up to $\left( {L + 1}\right)$ -hops away from $u$ , i.e., ${\widehat{\phi }}_{u}^{L}\left( j\right)$ , is defined as the average interacting ratio of $j$ with all neighboring items of $u$ in ${\widehat{\mathcal{N}}}_{u}^{1}$ through paths of length less than or equal to ${2L}$ :
38
+
39
+ $$
40
+ {\widehat{\phi }}_{u}^{L}\left( j\right) = \frac{1}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\mathop{\sum }\limits_{{l = 1}}^{L}{\beta }^{2l}\mathop{\sum }\limits_{{{P}_{ji}^{2l} \in {\mathcal{P}}_{ji}^{2l}}}\frac{1}{f\left( \left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\} \right) },\forall j \in {\mathcal{N}}_{u}^{1},\forall u \in \mathcal{U}, \tag{1}
41
+ $$
42
+
43
+ where $\left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\}$ represents the set of 1-hop neighborhoods of the nodes $k$ along the path ${P}_{ji}^{2l}$ from node $j$ to $i$ of length ${2l}$. $f$ is a normalization function to differentiate the importance of different paths in ${\mathcal{P}}_{ji}^{2l}$, and its value depends on the neighborhood of each node on the path ${P}_{ji}^{2l}$. As shown in Figure 1(c), the CIR of ${j}_{1}$ centering around $u$, ${\widehat{\phi }}_{u}^{L}\left( {j}_{1}\right)$, is decided by paths of length between 2 and ${2L}$. By configuring different $L$ and $f$, $\mathop{\sum }\limits_{{{P}_{ji}^{2l} \in {\mathcal{P}}_{ji}^{2l}}}\frac{1}{f\left( \left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\} \right) }$ could express many existing graph
44
+
45
+ similarity metrics [10-14], which we thoroughly discuss in Appendix A.2. Calculating ${\widehat{\phi }}_{u}^{L}\left( j\right)$ exactly is not feasible since we do not have access to the testing set ${\widehat{\mathcal{N}}}_{u}^{1}$ in advance. Therefore, we propose to approximate ${\widehat{\phi }}_{u}\left( j\right)$ by enumerating $i$ over the observed training set ${\mathcal{N}}_{u}^{1}$ instead of ${\widehat{\mathcal{N}}}_{u}^{1}$, and denote this estimated version as ${\phi }_{u}^{L}\left( j\right)$. This approximation assumes that neighboring nodes interacting more with other neighboring nodes in the training set would also interact more with neighboring nodes in the testing set, which is verified in Appendix A.3. We further empirically show in Appendix A.7.3 that edges with higher ${\phi }_{u}\left( j\right)$ are more important to the recommendation performance.
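+ As a concrete illustration (our own sketch, not the released code), the estimated CIR for $L = 1$ reduces to averaging a pairwise similarity between $j$ and every observed neighbor of $u$: length-2 paths between two items are exactly their common users, and choosing $f$ as the size of the neighborhood union recovers a Jaccard-style ratio in the spirit of the "-jc" variant ($\beta$ is set to 1 here):
+
+ ```python
+ # Sketch of the estimated CIR from Eq. (1) with L = 1 and a Jaccard-style normalisation.
+ def estimated_cir(train_neighbors, u):
+     """train_neighbors[x]: set of 1-hop training neighbors of node x (users of an item,
+     items of a user). Returns {j: phi_u(j)} for every observed item j of user u."""
+     items = train_neighbors[u]                           # observed neighborhood N_u^1
+     phi = {}
+     for j in items:
+         ratios = []
+         for i in items:                                  # training stand-in for the test set
+             common = train_neighbors[j] & train_neighbors[i]
+             union = train_neighbors[j] | train_neighbors[i]
+             ratios.append(len(common) / len(union) if union else 0.0)
+         phi[j] = sum(ratios) / len(items)                # average interacting ratio over N_u^1
+     return phi
+ ```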
46
+
47
+ ### 2.2 Collaboration-Aware Graph Convolutional Network
48
+
49
+ In order to pass node messages based on the benefits of their corresponding collaborations, we develop Collaboration-Aware Graph Convolutional Network. The core idea is to strengthen/weaken the messages passed from neighbors with higher/lower estimated CIR to center nodes. To achieve this, we compute the edge weight as: ${\mathbf{\Phi }}_{ij} = {\phi }_{i}\left( j\right)$ when ${\mathbf{A}}_{ij} > 0$ (and 0 otherwise), where ${\phi }_{i}\left( j\right)$ is the estimated CIR of neighboring node $j$ centering around $i$ . Note that unlike the symmetric graph convolution ${\mathbf{D}}^{-{0.5}}\mathbf{A}{\mathbf{D}}^{-{0.5}}$ used in LightGCN, here $\mathbf{\Phi }$ is asymmetrical: the interacting level of node $j$ with $i$ ’s neighborhood is likely to be different from the interacting level of node $i$ with $j$ ’s neighborhood. We further normalize $\Phi$ and combine it with the LightGCN convolution:
50
+
51
+ $$
52
+ {\mathbf{e}}_{i}^{l + 1} = \mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}^{1}}}g\left( {{\gamma }_{i}\frac{{\mathbf{\Phi }}_{ij}}{\mathop{\sum }\limits_{{k \in {\mathcal{N}}_{i}^{1}}}{\mathbf{\Phi }}_{ik}},{d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}}}\right) {\mathbf{e}}_{j}^{l},\forall i \in \mathcal{V} \tag{2}
53
+ $$
54
+
55
+ where ${\gamma }_{i}$ is a coefficient that varies the total amount of message flowing to each node $i$ and controls the embedding magnitude of that node [15]. $g$ is a function combining the edge weights computed according to CIR with those of LightGCN. In Appendix A.4, we prove that under a certain choice of $g$ and ${\gamma }_{i}$, CAGCN can go beyond the 1-WL test in distinguishing non-bipartite-subgraph-isomorphic graphs. Following the principle of LightGCN that the designed graph convolution should be light and easy to train, all components of our architecture other than the message-passing are exactly the same as in LightGCN, as covered in Appendix A.1 and A.5.
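+ A dense-matrix sketch of one such propagation layer for the CAGCN* variant (where $g$ is a weighted sum) is given below. The mixing coefficient `alpha` and the use of dense tensors are our own simplifications for illustration; a practical implementation would use sparse message passing:
+
+ ```python
+ import torch
+
+ def cagcn_star_layer(E, A, Phi, gamma=1.0, alpha=0.5):
+     """E: (N, d) embeddings; A: (N, N) float 0/1 adjacency; Phi: (N, N) estimated CIR weights."""
+     deg = A.sum(dim=1).clamp(min=1.0)
+     sym = A * (deg.rsqrt().unsqueeze(1) * deg.rsqrt().unsqueeze(0))    # d_i^-0.5 d_j^-0.5 term
+     cir = gamma * Phi / Phi.sum(dim=1, keepdim=True).clamp(min=1e-12)  # row-normalised CIR term
+     weights = alpha * cir + (1.0 - alpha) * sym                        # g as a weighted sum
+     return weights @ E                                                 # one step of Eq. (2)
+ ```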
56
+
57
+ ## 3 Experiments
58
+
59
+ ### 3.1 Experimental Settings
60
+
61
+ We used six datasets including two newly collected datasets from other domains. MF [16], NGCF [5], LightGCN [7], UltraGCN [6], and GTN [8] are baselines. More details about datasets, baselines and experimental setup are provided in Appendix A.6. For the first model variant CAGCN, we set $g\left( {A, B}\right) = g\left( A\right)$ where we remove $B = {d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}}$ to solely demonstrate the power of passing messages according to CIR and set ${\gamma }_{i} = \mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}^{1}}}{d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}}$ to ensure the same embedding magnitude. For the second model variant CAGCN*, we set $g$ as weighted sum and ${\gamma }_{i} = \gamma$ as a constant controlling the contributions of capturing different collaborations.
62
+
63
+ ### 3.2 Experimental Results
64
+
65
+ Here we describe the main experimental result observations with detailed insights in Appendix A.7.
66
+
67
+ Performance Comparison. The performance of all baselines is provided in Table 1. We first compare the performance of LightGCN and the CAGCN variants. Clearly, CAGCN-jc/sc/lhn achieves higher performance than LightGCN because we selectively propagate node embeddings according to the proposed CIR metrics (JC, SC, LHN). However, CAGCN-cn mostly performs worse than LightGCN, because nodes having more common neighbors with other nodes are more likely to have higher degrees and hence aggregate more false-positive neighbors' information during message-passing. Comparing the CAGCN* variants with the other competing baselines, CAGCN*-jc/sc almost consistently achieves higher performance than the other baselines, except UltraGCN on Amazon. This is because UltraGCN allows multiple negative samples for each positive interaction. Since GTN [8] uses a different embedding size, we compare our model with GTN separately in Table 4 in Appendix A.7.
68
+
69
+ Efficiency Comparison. As recommendation models will eventually be deployed on user-item data of real-world scale, it is crucial to compare the efficiency of the proposed CAGCN(*) with other baselines. For a fair comparison, we use a uniform code framework that we implemented ourselves for all models and run them on the same machine. As clearly shown in Figure 2(a), CAGCN* achieves substantially higher performance in significantly less time. This is because the designed graph convolution can recognize the neighbors whose collaborations are most beneficial to users' rankings and pass stronger messages from those neighbors.
70
+
71
+ Impact of Propagation Layers. We increase the number of propagation layers of CAGCN* and LightGCN from 1 to 4 and visualize the corresponding performance in Figure 2(b). The performance first increases as the number of layers grows from 1 to 3 and then decreases on both datasets, which is consistent with the findings in [7]. Our CAGCN* is better than LightGCN at all numbers of layers.
72
+
73
+ Table 1: Results on R@20 and N@20 (i.e., Recall and NDCG) with best and runner-up highlighted.
74
+
75
+ <table><tr><td>Model</td><td rowspan="2">$\mathbf{{Metric}}$</td><td rowspan="2">$\mathbf{{MF}}$</td><td rowspan="2">NGCF</td><td rowspan="2">LightGCN</td><td rowspan="2">UltraGCN</td><td colspan="4">CAGCN</td><td colspan="3">CAGCN*</td></tr><tr><td/><td>-jc</td><td>-sc</td><td>-cn</td><td>-lhn</td><td>-jc</td><td>-sc</td><td>-lhn</td></tr><tr><td rowspan="2">Gowalla</td><td>Recall@20</td><td>0.1554</td><td>0.1563</td><td>0.1817</td><td>0.1867</td><td>0.1825</td><td>0.1826</td><td>0.1632</td><td>0.1821</td><td>0.1878</td><td>0.1878</td><td>0.1857</td></tr><tr><td>NDCG@20</td><td>0.1301</td><td>0.1300</td><td>0.1570</td><td>0.1580</td><td>0.1575</td><td>0.1577</td><td>0.1381</td><td>0.1577</td><td>0.1591</td><td>0.1588</td><td>0.1563</td></tr><tr><td rowspan="2">Yelp2018</td><td>Recall@20</td><td>0.0539</td><td>0.0596</td><td>0.0659</td><td>0.0675</td><td>0.0674</td><td>0.0671</td><td>0.0661</td><td>0.0661</td><td>0.0708</td><td>0.0711</td><td>0.0676</td></tr><tr><td>NDCG@20</td><td>0.0460</td><td>0.0489</td><td>0.0554</td><td>0.0553</td><td>0.0564</td><td>0.0560</td><td>0.0546</td><td>0.0555</td><td>0.0586</td><td>0.0590</td><td>0.0554</td></tr><tr><td rowspan="2">Amazon</td><td>Recall@20</td><td>0.0337</td><td>0.0336</td><td>0.0420</td><td>0.0682</td><td>0.0435</td><td>0.0435</td><td>0.0403</td><td>0.0422</td><td>0.0510</td><td>0.0506</td><td>0.0457</td></tr><tr><td>NDCG@20</td><td>0.0265</td><td>0.0262</td><td>0.0331</td><td>0.0553</td><td>0.0343</td><td>0.0342</td><td>0.0321</td><td>0.0333</td><td>0.0403</td><td>0.0400</td><td>0.0361</td></tr><tr><td rowspan="2">Ml-1M</td><td>Recall@20</td><td>0.2604</td><td>0.2619</td><td>0.2752</td><td>0.2783</td><td>0.2780</td><td>0.2786</td><td>0.2730</td><td>0.2760</td><td>0.2822</td><td>0.2827</td><td>0.2799</td></tr><tr><td>NDCG@20</td><td>0.2697</td><td>0.2729</td><td>0.2820</td><td>0.2638</td><td>0.2871</td><td>0.2881</td><td>0.2818</td><td>0.2871</td><td>0.2775</td><td>0.2776</td><td>0.2745</td></tr><tr><td rowspan="2">Loseit</td><td>Recall@20</td><td>0.0539</td><td>0.0574</td><td>0.0588</td><td>0.0621</td><td>0.0622</td><td>0.0625</td><td>0.0502</td><td>0.0592</td><td>0.0654</td><td>0.0658</td><td>0.0658</td></tr><tr><td>NDCG@20</td><td>0.0420</td><td>0.0442</td><td>0.0465</td><td>0.0446</td><td>0.0474</td><td>0.0470</td><td>0.0379</td><td>0.0461</td><td>0.0486</td><td>0.0484</td><td>0.0489</td></tr><tr><td rowspan="2">News</td><td>Recall@20</td><td>0.1942</td><td>0.1994</td><td>0.2035</td><td>0.2034</td><td>0.2135</td><td>0.2132</td><td>0.1726</td><td>0.2084</td><td>0.2182</td><td>0.2172</td><td>0.2053</td></tr><tr><td>NDCG@20</td><td>0.1235</td><td>0.1291</td><td>0.1311</td><td>0.1301</td><td>0.1385</td><td>0.1384</td><td>0.1064</td><td>0.1327</td><td>0.1405</td><td>0.1414</td><td>0.1311</td></tr><tr><td rowspan="2">$\mathbf{{Avg}.{Rank}}$</td><td>Recall@20</td><td>9.83</td><td>9.17</td><td>7.33</td><td>4.17</td><td>4.67</td><td>4.33</td><td>8.83</td><td>6.17</td><td>1.67</td><td>1.50</td><td>3.33</td></tr><tr><td>NDCG@20</td><td>9.50</td><td>9.17</td><td>5.83</td><td>6.00</td><td>3.67</td><td>4.00</td><td>8.33</td><td>5.00</td><td>2.50</td><td>2.50</td><td>5.17</td></tr></table>
76
+
77
+ ![01963ec7-5751-77cc-9084-2440c3b619c1_3_308_685_1188_482_0.jpg](images/01963ec7-5751-77cc-9084-2440c3b619c1_3_308_685_1188_482_0.jpg)
78
+
79
+ Figure 2: Efficiency comparison in column (a). Performance of different propagation layers in column (b). Performance of different models on users with different degrees in (c).
80
+
81
+ Interpretation of the advantages of CAGCN(*). In Figure 2(c) we visualize the performance of all models for nodes in different degree groups. Compared with non-graph-based methods (e.g., MF), graph-based methods (e.g., LightGCN, CAGCN(*)) achieve higher performance for lower-degree nodes $\lbrack 0,{300})$ but lower performance for higher-degree nodes $\lbrack {300},\operatorname{Inf})$. Because node degrees follow a power-law distribution [17], the average performance of graph-based methods is still higher. To the best of our knowledge, this is the first work discovering such a performance imbalance/unfairness issue w.r.t. node degree in graph-based recommendation models. On one hand, graph-based models can leverage neighborhood information to augment the weak supervision for low-degree nodes. On the other hand, doing so introduces many noisy/unreliable interactions for higher-degree nodes. It is therefore crucial to design an unbiased graph-based recommendation model that can achieve high performance on both low- and high-degree nodes. In addition, the opposite performance trends between NDCG and Recall indicate that different evaluation metrics have different levels of sensitivity to node degree.
82
+
83
+ ## 4 Conclusion
84
+
85
+ In this paper, we propose the Common Interacted Ratio (CIR) to determine whether the captured collaborative effect would benefit the prediction of user preferences. We then propose the Collaboration-Aware Graph Convolutional Network to aggregate neighboring nodes' information based on their CIRs. We further define a new type of isomorphism, bipartite-subgraph-isomorphism, and prove that our CAGCN* can be more expressive than the 1-WL test in distinguishing subtree(subgraph)-isomorphic yet non-bipartite-subgraph-isomorphic graphs. Experimental results demonstrate the advantages of the proposed CAGCN(*) over other baselines. Specifically, CAGCN* outperforms the most representative graph-based recommendation model, LightGCN [7], by 9% in Recall@20 and also achieves more than 79% speedup. In the future, we plan to explore the imbalanced performance improvement among nodes in different degree groups, as observed in Figure 2(c), especially from a GNN fairness perspective [18].
86
+
87
+ ## References
88
+
89
+ [1] Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM conference on recommender systems, pages 191-198, 2016.
90
+
91
+ [2] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD, pages 974-983, 2018.
92
+
93
+ [3] Travis Ebesu, Bin Shen, and Yi Fang. Collaborative memory network for recommendation systems. In The 41st international ACM SIGIR conference on research & development in information retrieval, pages 515-524, 2018.
94
+
95
+ [4] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In Proceedings of the 26th international conference on world wide web, pages 173-182, 2017.
96
+
97
+ [5] Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. Neural graph collaborative filtering. In Proceedings of the 42nd international ACM SIGIR conference on Research and development in Information Retrieval, pages 165-174, 2019.
98
+
99
+ [6] Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, and Xiuqiang He. Ultragen: Ultra simplification of graph convolutional networks for recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 1253-1262, 2021.
100
+
101
+ [7] Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 639-648, 2020.
102
+
103
+ [8] Wenqi Fan, Xiaorui Liu, Wei Jin, Xiangyu Zhao, Jiliang Tang, and Qing Li. Graph trend networks for recommendations. arXiv preprint arXiv:2108.05552, 2021.
104
+
105
+ [9] Asiri Wijesinghe and Qing Wang. A new perspective on "How graph neural networks go beyond Weisfeiler-Lehman?". In ICLR, 2021.
106
+
107
+ [10] Elizabeth A Leicht, Petter Holme, and Mark EJ Newman. Vertex similarity in networks. Physical Review E, 73(2):026120, 2006.
108
+
109
+ [11] Tao Zhou, Linyuan Lü, and Yi-Cheng Zhang. Predicting missing links via local information. The European Physical Journal B, 71(4):623-630, 2009.
110
+
111
+ [12] Mark EJ Newman. Clustering and preferential attachment in growing networks. Physical Review E, 64(2):025102, 2001.
112
+
113
+ [13] Gerard Salton. Automatic text processing: The transformation, analysis, and retrieval of. Reading: Addison-Wesley, 169, 1989.
114
+
115
+ [14] David Liben-Nowell and Jon Kleinberg. The link-prediction problem for social networks. Journal of the American society for information science and technology, 58(7):1019-1031, 2007.
116
+
117
+ [15] Dongmin Park, Hwanjun Song, Minseok Kim, and Jae-Gil Lee. Trap: Two-level regularized autoencoder-based embedding for power-law distributed data. In Proceedings of The Web Conference 2020, pages 1615-1624, 2020.
118
+
119
+ [16] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. Bpr: Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1205.2618, 2012.
120
+
121
+ [17] Andrew T Stephen and Olivier Toubia. Explaining the power-law degree distribution in a social commerce network. Social Networks, 31(4):262-270, 2009.
122
+
123
+ [18] Yu Wang. Fair graph representation learning with imbalanced and biased data. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 2022.
124
+
125
+ [19] William Webber, Alistair Moffat, and Justin Zobel. A similarity measure for indefinite rankings. ACM Transactions on Information Systems (TOIS), 28(4):1-38, 2010.
126
+
127
+ [20] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
128
+
129
+ [21] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
130
+
131
+ [22] Ruining He and Julian McAuley. Vbpr: visual bayesian personalized ranking from implicit feedback. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
132
+
133
+ [23] Hao-Ming Fu, Patrick Poirson, Kwot Sin Lee, and Chen Wang. Revisiting neighborhood-based link prediction for collaborative filtering. arXiv preprint arXiv:2203.15789, 2022.
134
+
135
+ [24] Huiyuan Chen, Kaixiong Zhou, Kwei-Herng Lai, Xia Hu, Fei Wang, and Hao Yang. Adversarial graph perturbations for recommendations at scale. 2022.
136
+
137
+ [25] Huiyuan Chen, Chin-Chia Michael Yeh, Fei Wang, and Hao Yang. Graph neural transport networks with non-local attentions for recommender systems. In Proceedings of the ACM Web Conference 2022, pages 1955-1964, 2022.
138
+
139
+ [26] Huiyuan Chen, Lan Wang, Yusan Lin, Chin-Chia Michael Yeh, Fei Wang, and Hao Yang. Structured graph convolutional networks with stochastic masks for recommender systems. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 614-623, 2021.
140
+
141
+ [27] David Goldberg, David Nichols, Brian M Oki, and Douglas Terry. Using collaborative filtering to weave an information tapestry. Communications of the ACM, 35(12):61-70, 1992.
142
+
143
+ [28] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. Bpr: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 452-461, 2009.
144
+
145
+ [29] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30-37, 2009.
146
+
147
+ [30] Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. Latent relational metric learning via memory-based attention for collaborative ranking. In WWW, pages 729-739, 2018.
148
+
149
+ [31] Marco Gori, Augusto Pucci, V Roma, and I Siena. Itemrank: A random-walk based scoring algorithm for recommender engines. In IJCAI, volume 7, pages 2766-2771, 2007.
150
+
151
+ [32] Xiangnan He, Ming Gao, Min-Yen Kan, and Dingxian Wang. Birank: Towards ranking on bipartite graphs. IEEE Transactions on Knowledge and Data Engineering, 29(1):57-71, 2016.
152
+
153
+ [33] Jheng-Hong Yang, Chih-Ming Chen, Chuan-Ju Wang, and Ming-Feng Tsai. Hop-rec: high-order proximity for implicit recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems, pages 140-144, 2018.
154
+
155
+ [34] Yu Wang and Tyler Derr. Tree decomposed graph neural network. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2040-2049, 2021.
156
+
157
+ [35] Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, and Xing Xie. Self-supervised graph learning for recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 726-735, 2021.
158
+
159
+ [36] Yu Wang, Wei Jin, and Tyler Derr. Graph neural networks: Self-supervised learning. In Graph Neural Networks: Foundations, Frontiers, and Applications, pages 391-420. Springer, 2022.
160
+
161
+ ## A Appendix
162
+
163
+ ### A.1 Model Architecture and Training of LightGCN
164
+
165
+ Since our analysis is performed on the architecture of LightGCN, here we introduce the framework of LightGCN. Given the initial user and item embeddings ${\mathbf{E}}^{0} \in {\mathbb{R}}^{\left( {n + m}\right) \times {d}^{0}}$ , LightGCN performs L layers' message-passing as:
166
+
167
+ $$
168
+ {\mathbf{E}}^{l} = {\widetilde{\mathbf{A}}}^{l}{\mathbf{E}}^{0},\;\forall l \in \{ 1,2,\ldots , L\} , \tag{3}
169
+ $$
170
+
171
+ where $\widetilde{\mathbf{A}} = {\widetilde{\mathbf{D}}}^{-{0.5}}\mathbf{A}{\widetilde{\mathbf{D}}}^{-{0.5}}$ and $\widetilde{\mathbf{D}}$ is the degree matrix of $\mathbf{A}$. Then the embeddings propagated at all $L$ layers are aggregated together by mean-pooling:
172
+
173
+ $$
174
+ {\mathbf{E}}^{L} = \frac{1}{\left( L + 1\right) }\mathop{\sum }\limits_{{l = 0}}^{L}{\mathbf{E}}^{l}. \tag{4}
175
+ $$
176
+
177
+ In the training stage, for each observed user-item interaction $(u, i)$, LightGCN randomly samples a negative item ${i}^{ - }$ that $u$ has never interacted with before and forms the triple $\left( {u, i,{i}^{ - }}\right)$; these triples collectively form the set of observed training triples $\mathcal{O}$. After that, the ranking scores of the user over these two items are computed as ${y}_{ui} = {\mathbf{e}}_{u}^{\top }{\mathbf{e}}_{i}$ and ${y}_{u{i}^{ - }} = {\mathbf{e}}_{u}^{\top }{\mathbf{e}}_{{i}^{ - }}$, which are finally used in optimizing the pairwise Bayesian Personalized Ranking (BPR) loss [16]:
178
+
179
+ $$
180
+ {\mathcal{L}}_{\mathrm{{BPR}}} = \mathop{\sum }\limits_{{\left( {u, i,{i}^{ - }}\right) \in \mathcal{O}}} - \ln \sigma \left( {{y}_{ui} - {y}_{u{i}^{ - }}}\right) , \tag{5}
181
+ $$
182
+
183
+ where $\sigma \left( \cdot \right)$ is the Sigmoid function, and here we omit the ${L}_{2}$ regularization term since it is mainly for alleviating overfitting and has no influence on collaborative effect captured by message passing.
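To make Eqs. (3)-(5) concrete, the following minimal sketch reproduces LightGCN-style propagation with mean-pooling and the BPR loss on a toy interaction graph. It is an illustrative NumPy reimplementation under our own variable names (`adj`, `emb0`, `triples`), not the authors' code.

```python
import numpy as np

def lightgcn_propagate(adj, emb0, num_layers=3):
    """Propagate through A_tilde = D^-0.5 A D^-0.5 for L layers and mean-pool (Eqs. 3-4)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    a_tilde = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    embs, e = [emb0], emb0
    for _ in range(num_layers):
        e = a_tilde @ e                       # E^l = A_tilde E^{l-1}
        embs.append(e)
    return np.mean(embs, axis=0)              # E^L = (1 / (L + 1)) * sum_l E^l

def bpr_loss(emb, triples):
    """Pairwise BPR loss of Eq. (5) over (user, pos_item, neg_item) node-index triples."""
    loss = 0.0
    for u, i, j in triples:
        diff = emb[u] @ emb[i] - emb[u] @ emb[j]      # y_ui - y_ui^-
        loss += -np.log(1.0 / (1.0 + np.exp(-diff)))  # -ln sigmoid(y_ui - y_ui^-)
    return loss

# Toy bipartite graph: users are nodes 0-1, items are nodes 2-4.
adj = np.zeros((5, 5))
for u, i in [(0, 2), (0, 3), (1, 3), (1, 4)]:
    adj[u, i] = adj[i, u] = 1.0
emb0 = np.random.default_rng(0).normal(size=(5, 8))
emb = lightgcn_propagate(adj, emb0)
print(bpr_loss(emb, [(0, 2, 4), (1, 4, 2)]))
```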
184
+
185
+ ### A.2 Graph Topological Metrics for CIR
186
+
187
+ Here we demonstrate that by configuring different $f$ and $L$, ${\widehat{\phi }}_{u}^{L}\left( j\right)$ can express many existing graph similarity metrics.
188
+
189
+ $$
190
+ {\widehat{\phi }}_{u}^{L}\left( j\right) = \frac{1}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\mathop{\sum }\limits_{{l = 1}}^{L}{\beta }^{2l}\mathop{\sum }\limits_{{{P}_{ji}^{2l} \in {\mathcal{P}}_{ji}^{2l}}}\frac{1}{f\left( \left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\} \right) },\forall j \in {\mathcal{N}}_{u}^{1},\forall u \in \mathcal{U}, \tag{6}
191
+ $$
192
+
193
+ - Jaccard Similarity (JC) [14]: The JC score is a classic measure of similarity between two neighborhood sets, which is defined as the ratio of the intersection of two neighborhood sets to the
194
+
195
+ union of these two sets:
196
+
197
+ $$
198
+ \operatorname{JC}\left( {i, j}\right) = \frac{\left| {\mathcal{N}}_{i}^{1} \cap {\mathcal{N}}_{j}^{1}\right| }{\left| {\mathcal{N}}_{i}^{1} \cup {\mathcal{N}}_{j}^{1}\right| } \tag{7}
199
+ $$
200
+
201
+ Let $L = 1$ and set $f\left( \left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\} \right) = \left| {{\mathcal{N}}_{i}^{1} \cup {\mathcal{N}}_{j}^{1}}\right|$ , then we have:
202
+
203
+ $$
204
+ {\widehat{\phi }}_{u}^{L}\left( j\right) = \frac{1}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}{\beta }^{2}\mathop{\sum }\limits_{{{P}_{ji}^{2l} \in {\mathcal{P}}_{ji}^{2l}}}\frac{1}{\left| {\mathcal{N}}_{i}^{1} \cup {\mathcal{N}}_{j}^{1}\right| } = \frac{{\beta }^{2}}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\frac{\left| {\mathcal{N}}_{i}^{1} \cap {\mathcal{N}}_{j}^{1}\right| }{\left| {\mathcal{N}}_{i}^{1} \cup {\mathcal{N}}_{j}^{1}\right| } = \frac{{\beta }^{2}}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\operatorname{JC}\left( {i, j}\right) \tag{8}
205
+ $$
206
+
207
+ - Salton Cosine Similarity (SC) [13]: The SC score measures the cosine similarity between the neighborhood sets of two nodes:
208
+
209
+ $$
210
+ \operatorname{SC}\left( {i, j}\right) = \frac{\left| {\mathcal{N}}_{i}^{1} \cap {\mathcal{N}}_{j}^{1}\right| }{\sqrt{\left| {\mathcal{N}}_{i}^{1} \cup {\mathcal{N}}_{j}^{1}\right| }} \tag{9}
211
+ $$
212
+
213
+ Let $L = 1$ and set $f\left( \left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\} \right) = \sqrt{\left| {\mathcal{N}}_{i}^{1} \cup {\mathcal{N}}_{j}^{1}\right| }$, then we have:
214
+
215
+ $$
216
+ {\widehat{\phi }}_{u}^{L}\left( j\right) = \frac{1}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}{\beta }^{2}\mathop{\sum }\limits_{{{P}_{ji}^{2l} \in {\mathcal{P}}_{ji}^{2l}}}\frac{1}{\sqrt{\left| {\mathcal{N}}_{i}^{1} \cup {\mathcal{N}}_{j}^{1}\right| }} = \frac{{\beta }^{2}}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\frac{\left| {\mathcal{N}}_{i}^{1} \cap {\mathcal{N}}_{j}^{1}\right| }{\sqrt{\left| {\mathcal{N}}_{i}^{1} \cup {\mathcal{N}}_{j}^{1}\right| }} = \frac{{\beta }^{2}}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\operatorname{SC}\left( {i, j}\right) \tag{10}
217
+ $$
218
+
219
+ - Common Neighbors (CN) [12]: The CN score measures the number of common neighbors of two nodes and is frequently used for measuring the proximity between two nodes:
220
+
221
+ $$
222
+ \operatorname{CN}\left( {i, j}\right) = \left| {{\mathcal{N}}_{i}^{1} \cap {\mathcal{N}}_{j}^{1}}\right| \tag{11}
223
+ $$
224
+
225
+ Let $L = 1$ and set $f\left( \left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\} \right) = 1$ , then we have:
226
+
227
+ $$
228
+ {\widehat{\phi }}_{u}^{L}\left( j\right) = \frac{1}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}{\beta }^{2}\mathop{\sum }\limits_{{{P}_{ji}^{2l} \in {\mathcal{P}}_{ji}^{2l}}}1 = \frac{{\beta }^{2}}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\left| {{\mathcal{N}}_{i}^{1} \cap {\mathcal{N}}_{j}^{1}}\right| = \frac{{\beta }^{2}}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\operatorname{CN}\left( {i, j}\right) \tag{12}
229
+ $$
230
+
231
+ Note that CN does not contain any normalization to remove the degree bias in quantifying proximity, and hence it performs worse than the other metrics, as demonstrated by our recommendation experiments in Table 1.
232
+
233
+ - Leicht-Holme-Newman (LHN) [10]: LHN is very similar to SC. However, it removes the square
234
+
235
+ root in the denominator and is more sensitive to node degree:
236
+
237
+ $$
238
+ \operatorname{LHN}\left( {i, j}\right) = \frac{\left| {\mathcal{N}}_{i}^{1} \cap {\mathcal{N}}_{j}^{1}\right| }{\left| {\mathcal{N}}_{i}^{1}\right| \cdot \left| {\mathcal{N}}_{j}^{1}\right| } \tag{13}
239
+ $$
240
+
241
+ Let $L = 1$ and set $f\left( \left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\} \right) = \left| {\mathcal{N}}_{i}^{1}\right| \cdot \left| {\mathcal{N}}_{j}^{1}\right|$ , then we have:
242
+
243
+ $$
244
+ {\widehat{\phi }}_{u}^{L}\left( j\right) = \frac{1}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}{\beta }^{2}\mathop{\sum }\limits_{{{P}_{ji}^{2l} \in {\mathcal{P}}_{ji}^{2l}}}\frac{1}{\left| {\mathcal{N}}_{i}^{1}\right| \cdot \left| {\mathcal{N}}_{j}^{1}\right| } = \frac{{\beta }^{2}}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\frac{\left| {\mathcal{N}}_{i}^{1} \cap {\mathcal{N}}_{j}^{1}\right| }{\left| {\mathcal{N}}_{i}^{1}\right| \cdot \left| {\mathcal{N}}_{j}^{1}\right| } = \frac{{\beta }^{2}}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\operatorname{LHN}\left( {i, j}\right) \tag{14}
245
+ $$
246
+
247
248
+
249
+ - Resource Allocation (RA) [11]: RA measures the proximity between two nodes by the amount of resource transmitted through their common neighbors, where common neighbors of higher degree transmit less:
250
+
251
+ $$
252
+ \operatorname{RA}\left( {i, j}\right) = \mathop{\sum }\limits_{{k \in {\mathcal{N}}_{i}^{1} \cap {\mathcal{N}}_{j}^{1}}}\frac{1}{\left| {\mathcal{N}}_{k}^{1}\right| } \tag{15}
253
+ $$
254
+
255
+ Let $L = 1$ and set $f\left( \left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\} \right) = \mathop{\prod }\limits_{{k \in {P}_{ji}^{2l}/\{ i, j\} }}\left| {\mathcal{N}}_{k}^{1}\right|$ , then we have:
256
+
257
+ $$
258
+ {\widehat{\phi }}_{u}^{L}\left( j\right) = \frac{1}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}{\beta }^{2}\mathop{\sum }\limits_{{{P}_{ji}^{2l} \in {\mathcal{P}}_{ji}^{2l}}}\frac{1}{\mathop{\prod }\limits_{{k \in {P}_{ji}^{2l}/\{ i, j\} }}\left| {\mathcal{N}}_{k}^{1}\right| } = \frac{{\beta }^{2}}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\mathop{\sum }\limits_{{k \in {\mathcal{N}}_{i}^{1} \cap {\mathcal{N}}_{j}^{1}}}\frac{1}{\left| {\mathcal{N}}_{k}^{1}\right| } = \frac{{\beta }^{2}}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\operatorname{RA}\left( {i, j}\right) \tag{16}
259
+ $$
260
+
261
262
+
263
+ We further emphasize that our proposed CIR is a generalized version of these five existing metrics and can be deliberately designed to fit downstream tasks. We leave the exploration of the choice of $f$ as potential future work.
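To make the relation between Eq. (6) and these metrics concrete, the sketch below computes JC, SC (as written in Eq. (9)), CN, LHN and RA on plain Python neighbor sets and then averages a chosen pairwise metric over a user's observed neighborhood, i.e., the $L = 1$, $\beta = 1$ training-set approximation of CIR discussed in Appendix A.3. The helper names and toy neighborhoods are ours, not taken from the paper's implementation.

```python
from math import sqrt

def jc(ni, nj):  return len(ni & nj) / len(ni | nj)          # Jaccard, Eq. (7)
def sc(ni, nj):  return len(ni & nj) / sqrt(len(ni | nj))     # Salton cosine as written in Eq. (9)
def cn(ni, nj):  return len(ni & nj)                          # common neighbors, Eq. (11)
def lhn(ni, nj): return len(ni & nj) / (len(ni) * len(nj))    # Leicht-Holme-Newman, Eq. (13)
def ra(ni, nj, nbr):                                          # resource allocation, Eq. (15)
    return sum(1.0 / len(nbr[k]) for k in ni & nj)

def cir(u, j, nbr, metric=jc, beta=1.0):
    """Estimated CIR phi_u(j): average the pairwise metric between j and every
    observed neighbor i of u (Eq. (6) with L = 1 and the chosen normalization f)."""
    neighbors = nbr[u]
    return beta ** 2 / len(neighbors) * sum(metric(nbr[i], nbr[j]) for i in neighbors)

# Toy one-hop neighborhoods of a bipartite graph: users u, v, w and items j1, j2.
nbr = {
    'u': {'j1', 'j2'}, 'v': {'j1', 'j2'}, 'w': {'j1'},
    'j1': {'u', 'v', 'w'}, 'j2': {'u', 'v'},
}
print(cir('u', 'j1', nbr))                                    # JC-based CIR
print(cir('u', 'j1', nbr, metric=lambda a, b: ra(a, b, nbr))) # RA-based CIR
```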
264
+
265
+ ### A.3 Approximation of CIR
266
+
267
+ Calculating ${\widehat{\phi }}_{u}\left( j\right)$ is unrealistic since we do not have access to the testing set ${\widehat{\mathcal{N}}}_{u}^{1}$ in advance. Therefore, we propose to approximate ${\widehat{\phi }}_{u}\left( j\right)$ by enumerating $i$ from the observed training set ${\mathcal{N}}_{u}^{1}$ instead of ${\widehat{\mathcal{N}}}_{u}^{1}$ and denote this estimated version as ${\phi }_{u}\left( j\right)$. Such an approximation assumes that neighboring nodes interacting more with other neighboring nodes in the training set would also interact more with neighboring nodes in the testing set. We empirically verify this approximation by comparing the ranking consistency among CIRs calculated from training neighborhoods (i.e., ${\phi }_{u}\left( j\right)$), from testing neighborhoods (i.e., ${\widehat{\phi }}_{u}\left( j\right)$) and from full neighborhoods (we replace ${\widehat{\mathcal{N}}}_{u}^{1}$ with ${\mathcal{N}}_{u}^{1} \cup {\widehat{\mathcal{N}}}_{u}^{1}$ in (6)). Here we respectively use four topological metrics (JC, SC, LHN, and CN) to define $f$ and rank the obtained three lists. Then, we measure the similarity of the ranked lists between Train-Test and between Train-Full by Rank-Biased Overlap (RBO) [19]. The averaged RBO values over all nodes $u \in \mathcal{U}$ on three datasets are shown in Table 2. We can clearly see that the RBO values on all these datasets using all topological metrics are beyond 0.5, which verifies our approximation. The RBO value between Train-Full is always higher than the one between Train-Test because most interactions are in the training set.
268
+
269
+ Table 2: Average Rank-Biased Overlap (RBO) of the ranked neighbor lists between the training (i.e., ${\mathcal{N}}_{u}^{1}$) and testing/full (i.e., ${\widehat{\mathcal{N}}}_{u}^{1}$ and ${\mathcal{N}}_{u}^{1} \cup {\widehat{\mathcal{N}}}_{u}^{1}$, respectively) sets over all nodes $u \in \mathcal{U}$
270
+
271
+ <table><tr><td rowspan="2">Metric</td><td colspan="2">Gowalla</td><td colspan="2">Yelp</td><td colspan="2">Ml-1M</td></tr><tr><td>Train-Test</td><td>Train-Full</td><td>Train-Test</td><td>Train-Full</td><td>Train-Test</td><td>Train-Full</td></tr><tr><td>JC</td><td>${0.604} \pm {0.129}$</td><td>${0.902} \pm {0.084}$</td><td>${0.636} \pm {0.124}$</td><td>${0.897} \pm {0.081}$</td><td>${0.848} \pm {0.092}$</td><td>${0.978} \pm {0.019}$</td></tr><tr><td>SC</td><td>${0.611} \pm {0.127}$</td><td>${0.896} \pm {0.084}$</td><td>${0.657} \pm {0.124}$</td><td>${0.900} \pm {0.077}$</td><td>${0.876} \pm {0.077}$</td><td>${0.983} \pm {0.015}$</td></tr><tr><td>LHN</td><td>${0.598} \pm {0.121}$</td><td>${0.974} \pm {0.036}$</td><td>${0.578} \pm {0.100}$</td><td>0.976±0.029</td><td>${0.845} \pm {0.082}$</td><td>${0.987} \pm {0.009}$</td></tr><tr><td>CN</td><td>${0.784} \pm {0.120}$</td><td>${0.979} \pm {0.029}$</td><td>${0.836} \pm {0.100}$</td><td>${0.983} \pm {0.023}$</td><td>${0.957} \pm {0.039}$</td><td>${0.995} \pm {0.006}$</td></tr></table>
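As a reference for how the ranking consistency in Table 2 can be scored, the following is a minimal sketch of a truncated Rank-Biased Overlap [19] between two ranked neighbor lists. It only computes the prefix-agreement sum with persistence parameter p and ignores the extrapolation terms of the full RBO definition, so it should be read as an illustration rather than the exact evaluation script.

```python
def rbo(list1, list2, p=0.9):
    """Truncated Rank-Biased Overlap: (1 - p) * sum_d p^(d-1) * |prefix overlap at depth d| / d."""
    depth = min(len(list1), len(list2))
    seen1, seen2, score = set(), set(), 0.0
    for d in range(1, depth + 1):
        seen1.add(list1[d - 1])
        seen2.add(list2[d - 1])
        score += (p ** (d - 1)) * len(seen1 & seen2) / d
    return (1 - p) * score

# Neighbor lists of the same user ranked by training-based vs. test-based CIR.
print(rbo(['i3', 'i1', 'i4', 'i2'], ['i3', 'i4', 'i1', 'i2']))
```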
272
+
273
+ ### A.4 Expressiveness of CAGCN
274
+
275
+ Here we thoroughly prove that when $g$ is set to be an MLP, CAGCN can be more expressive than the 1-WL test. First, we review the concepts of subtree-isomorphism and subgraph-isomorphism.
276
+
277
+ Definition A.1. Subtree-isomorphism [9]: ${\mathcal{S}}_{u}$ and ${\mathcal{S}}_{i}$ are subtree-isomorphic, denoted as ${\mathcal{S}}_{u}{ \cong }_{\text{subtree }}{\mathcal{S}}_{i}$ , if there exists a bijective mapping $h : {\widetilde{\mathcal{N}}}_{u}^{1} \rightarrow {\widetilde{\mathcal{N}}}_{i}^{1}$ such that $h\left( u\right) = i$ and $\forall v \in {\widetilde{\mathcal{N}}}_{u}^{1}, h\left( v\right) = j,{\mathbf{e}}_{v}^{l} = {\mathbf{e}}_{j}^{l}.$
278
+
279
+ Definition A.2. Subgraph-isomorphism [9]: ${\mathcal{S}}_{u}$ and ${\mathcal{S}}_{i}$ are subgraph-isomorphic, denoted as ${\mathcal{S}}_{u}{ \cong }_{\text{subgraph }}{\mathcal{S}}_{i}$ , if there exists a bijective mapping $h : {\widetilde{\mathcal{N}}}_{u}^{1} \rightarrow {\widetilde{\mathcal{N}}}_{i}^{1}$ such that $h\left( u\right) = i$ and $\forall {v}_{1},{v}_{2} \in {\widetilde{\mathcal{N}}}_{u}^{1},{e}_{{v}_{1}{v}_{2}} \in {\mathcal{E}}_{{\mathcal{S}}_{u}}$ iff ${e}_{h\left( {v}_{1}\right) h\left( {v}_{2}\right) } \in {\mathcal{E}}_{{\mathcal{S}}_{i}}$ and ${\mathbf{e}}_{{v}_{1}}^{l} = {\mathbf{e}}_{h\left( {v}_{1}\right) }^{l},{\mathbf{e}}_{{v}_{2}}^{l} = {\mathbf{e}}_{h\left( {v}_{2}\right) }^{l}$ .
280
+
281
+ Then we theoretically demonstrate the equivalence between the subtree-isomorphism and the subgraph-isomorphism in bipartite graphs:
282
+
283
+ Theorem 1. In bipartite graphs, two subgraphs are subtree-isomorphic if and only if they are subgraph-isomorphic.
284
+
285
+ Proof. We prove this theorem in two directions. Firstly $\left( \Rightarrow \right)$, we prove by contradiction that in a bipartite graph, two subgraphs that are subtree-isomorphic are also subgraph-isomorphic. Assume that there exist two subgraphs ${\mathcal{S}}_{u},{\mathcal{S}}_{i}$ that are subtree-isomorphic yet not subgraph-isomorphic in a bipartite graph, i.e., ${\mathcal{S}}_{u}{ \cong }_{\text{subtree }}{\mathcal{S}}_{i}$ and ${\mathcal{S}}_{u}{ \not\cong }_{\text{subgraph }}{\mathcal{S}}_{i}$. By the definition of subtree-isomorphism, we trivially have ${\mathbf{e}}_{v}^{l} = {\mathbf{e}}_{h\left( v\right) }^{l},\forall v \in {\mathcal{V}}_{{\mathcal{S}}_{u}}$. Then, to guarantee ${\mathcal{S}}_{u}{ \not\cong }_{\text{subgraph }}{\mathcal{S}}_{i}$, and since edges are only allowed to connect $u$ and its neighbors ${\mathcal{N}}_{u}^{1}$ in the bipartite graph, there must exist at least one edge ${e}_{uv}$ between $u$ and one of its neighbors $v \in {\mathcal{N}}_{u}^{1}$ such that ${e}_{uv} \in {\mathcal{E}}_{{\mathcal{S}}_{u}}$ but ${e}_{h\left( u\right) h\left( v\right) } \notin {\mathcal{E}}_{{\mathcal{S}}_{i}}$, which contradicts the assumption that ${\mathcal{S}}_{u}{ \cong }_{\text{subtree }}{\mathcal{S}}_{i}$. Secondly $\left( \Leftarrow \right)$, in a bipartite graph, two subgraphs that are subgraph-isomorphic are also subtree-isomorphic, which trivially holds since in any graph subgraph-isomorphism implies subtree-isomorphism [9].
286
+
287
+ Since the discriminative power of the 1-WL test is characterized by subtree-isomorphism [9], the equivalence between these two isomorphisms indicates that, in bipartite graphs, the two notions coincide: the 1-WL test cannot tell apart neighborhood subgraphs that are subtree-(equivalently, subgraph-)isomorphic. Therefore, to go beyond 1-WL in bipartite graphs, we propose a novel bipartite-subgraph-isomorphism in Definition A.3, which is an even stricter notion than subgraph-isomorphism and hence beyond what the 1-WL test can distinguish:
288
+
289
+ Definition A.3. Bipartite-subgraph-isomorphism: ${\mathcal{S}}_{u}$ and ${\mathcal{S}}_{i}$ are bipartite-subgraph-isomorphic, denoted as ${\mathcal{S}}_{u}{ \cong }_{\text{bi-subgraph }}{\mathcal{S}}_{i}$ , if there exists a bijective mapping $h : {\widetilde{\mathcal{N}}}_{u}^{1} \cup {\mathcal{N}}_{u}^{2} \rightarrow {\widetilde{\mathcal{N}}}_{i}^{1} \cup {\mathcal{N}}_{i}^{2}$ such that $h\left( u\right) = i$ and $\forall v,{v}^{\prime } \in {\widetilde{\mathcal{N}}}_{u}^{1} \cup {\mathcal{N}}_{u}^{2},{e}_{v{v}^{\prime }} \in \mathcal{E} \Leftrightarrow {e}_{h\left( v\right) h\left( {v}^{\prime }\right) } \in \mathcal{E}$ and ${\mathbf{e}}_{v}^{l} = {\mathbf{e}}_{h\left( v\right) }^{l},{\mathbf{e}}_{{v}^{\prime }}^{l} = {\mathbf{e}}_{h\left( {v}^{\prime }\right) }^{l}$ .
290
+
291
+ With the bipartite-subgraph-isomorphism defined, we prove the following injective property: Lemma 1. If $g$ is an MLP, then $g\left( \left\{ \left( {\gamma }_{i}{\widetilde{\mathbf{\Phi }}}_{ij},{\mathbf{e}}_{j}^{l}\right) \mid j \in {\mathcal{N}}_{i}^{1}\right\} ,\left\{ \left( {d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}},{\mathbf{e}}_{j}^{l}\right) \mid j \in {\mathcal{N}}_{i}^{1}\right\} \right)$ is injective.
292
+
293
+ Proof. If we assume that all node embeddings share the same discretization precision, then the embeddings of all nodes in a graph form a countable set $\mathcal{H}$. Similarly, for each edge in a graph, its CIR-based weight ${\widetilde{\mathbf{\Phi }}}_{ij}$ and degree-based weight ${d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}}$ can also form two different countable sets ${\mathcal{W}}_{1},{\mathcal{W}}_{2}$ with $\left| {\mathcal{W}}_{1}\right| = \left| {\mathcal{W}}_{2}\right|$. Then ${\mathcal{P}}_{1} = \left\{ {{\widetilde{\mathbf{\Phi }}}_{ij}{\mathbf{e}}_{i} \mid {\widetilde{\mathbf{\Phi }}}_{ij} \in {\mathcal{W}}_{1},{\mathbf{e}}_{i} \in \mathcal{H}}\right\} ,{\mathcal{P}}_{2} = \left\{ {{d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}}{\mathbf{e}}_{i} \mid {d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}} \in {\mathcal{W}}_{2},{\mathbf{e}}_{i} \in \mathcal{H}}\right\}$ are also two countable sets. Let ${P}_{1},{P}_{2}$ be two multisets containing elements from ${\mathcal{P}}_{1}$ and ${\mathcal{P}}_{2}$, respectively, with $\left| {P}_{1}\right| = \left| {P}_{2}\right|$. Then by Lemma 1 in [9], there exists a function $f$ such that $\pi \left( {{P}_{1},{P}_{2}}\right) = \mathop{\sum }\limits_{{{p}_{1} \in {P}_{1},{p}_{2} \in {P}_{2}}}f\left( {{p}_{1},{p}_{2}}\right)$ is unique for any distinct pair of multisets $\left( {{P}_{1},{P}_{2}}\right)$. Since the MLP-based $g$ is a universal approximator [20] and hence can learn this function $f$, we know that $g$ is injective.
294
+
295
+ Theorem 2. Let $M$ be a GNN with a sufficient number of CAGC-based convolution layers defined by (2). If $g$ is an MLP, then $M$ is strictly more expressive than 1-WL in distinguishing subtree-isomorphic yet non-bipartite-subgraph-isomorphic graphs.
296
+
297
+ Proof. We prove this theorem in two directions. Firstly $\left( \Rightarrow \right)$ , following [9], we prove that the designed CAGCN here can distinguish any two graphs that are distinguishable by 1-WL by contradiction. Assume that there exist two graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ which can be distinguished by 1-WL but cannot be distinguished by CAGCN. Further, suppose that 1-WL cannot distinguish these two graphs in the iterations from 0 to $L - 1$ , but can distinguish them in the ${L}^{\text{th }}$ iteration. Then, there must exist two neighborhood subgraphs ${S}_{u}$ and ${S}_{i}$ whose neighboring nodes correspond to two different sets of node labels at the ${L}^{\text{th }}$ iteration, i.e., $\left\{ {{\mathbf{e}}_{v}^{l} \mid v \in {\mathcal{N}}_{u}^{1}}\right\} \neq \left\{ {{\mathbf{e}}_{j}^{l} \mid j \in {\mathcal{N}}_{i}^{1}}\right\}$ . Since $g$ is injective by Lemma 1, for ${S}_{u}$ and ${S}_{i}, g$ would yield two different feature vectors at the ${L}^{\text{th }}$ iteration. This means that CAGCN can also distinguish ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ , which contradicts the assumption.
298
+
299
+ Secondly $\left( \Leftarrow \right)$, we prove that there exist at least two graphs that can be distinguished by CAGCN but cannot be distinguished by 1-WL. Figure 3 presents two such graphs ${\mathcal{S}}_{u},{\mathcal{S}}_{{u}^{\prime }}$, which are subgraph-isomorphic but non-bipartite-subgraph-isomorphic. Assuming $u$ and ${u}^{\prime }$ have exactly the same neighborhood feature vectors $\mathbf{e}$, then directly propagating according to 1-WL, or even considering node degree as the edge weight as in GCN [21], can still end up with the same propagated feature for $u$ and ${u}^{\prime }$. However, if we leverage JC to calculate CIR as introduced in Appendix A.2, then we would end up with $\left\{ {\left( {d}_{u}{d}_{{j}_{1}}\right) }^{-{0.5}}\mathbf{e},{\left( {d}_{u}{d}_{{j}_{2}}\right) }^{-{0.5}}\mathbf{e},{\left( {d}_{u}{d}_{{j}_{3}}\right) }^{-{0.5}}\mathbf{e}\right\} \neq \left\{ \left( {{d}_{{u}^{\prime }}^{-{0.5}}{d}_{{j}_{1}^{\prime }}^{-{0.5}} + {\widetilde{\mathbf{\Phi }}}_{{u}^{\prime }{j}_{1}^{\prime }}}\right) \mathbf{e},\left( {{d}_{{u}^{\prime }}^{-{0.5}}{d}_{{j}_{2}^{\prime }}^{-{0.5}} + {\widetilde{\mathbf{\Phi }}}_{{u}^{\prime }{j}_{2}^{\prime }}}\right) \mathbf{e},\left( {{d}_{{u}^{\prime }}^{-{0.5}}{d}_{{j}_{3}^{\prime }}^{-{0.5}} + {\widetilde{\mathbf{\Phi }}}_{{u}^{\prime }{j}_{3}^{\prime }}}\right) \mathbf{e}\right\}$. Since $g$ is injective by Lemma 1, CAGCN would yield two different embeddings for $u$ and ${u}^{\prime }$.
300
+
301
+ ![01963ec7-5751-77cc-9084-2440c3b619c1_9_301_800_1187_350_0.jpg](images/01963ec7-5751-77cc-9084-2440c3b619c1_9_301_800_1187_350_0.jpg)
302
+
303
+ Figure 3: An example showing two neighborhood subgraphs ${\mathcal{S}}_{u},{\mathcal{S}}_{{u}^{\prime }}$ that are subgraph-isomorphic but not bipartite-subgraph-isomorphic.
304
+
305
+ Theorem 2 indicates that GNNs whose aggregation scheme is CAGC can distinguish non-bipartite-subgraph-isomorphic graphs that are indistinguishable by 1-WL.
306
+
307
+ ### A.5 Model Architecture of CAGCN
308
+
309
+ ![01963ec7-5751-77cc-9084-2440c3b619c1_9_318_1437_1128_516_0.jpg](images/01963ec7-5751-77cc-9084-2440c3b619c1_9_318_1437_1128_516_0.jpg)
310
+
311
+ Figure 4: Comparing the model architecture of CAGCN and LightGCN.
312
+
313
+ The model architecture of our CAGCN is shown in Figure 4. We take a specific example of computing the ranking of user $u$ over item $i$. We first calculate the estimated CIR of each neighbor with respect to the rest of the corresponding neighborhood as in (2), and then we iteratively propagate neighbors' embeddings with awareness of the collaboration benefits by following the calculated CIR. Then we combine the propagated embeddings of each layer with a weighted sum to obtain the aggregated embeddings for $u$ and $i$ as in (3). After that, we calculate the ranking based on the dot-product similarity. The optimization of CAGCN is the same as that of LightGCN shown in (5).
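For illustration, the sketch below implements one propagation layer of the architecture in Figure 4 in NumPy, taking g to be the weighted sum used by CAGCN* (Appendix A.6.4): each message is weighted by the row-normalized CIR term plus the LightGCN degree term. The matrices `adj`, `phi` and the constant `gamma` are illustrative placeholders; in particular, we fill `phi` with symmetric toy values although the CIR matrix is asymmetric in general.

```python
import numpy as np

def cagcn_star_layer(adj, phi, emb, gamma=1.5):
    """e_i^{l+1} = sum_{j in N_i} (gamma * Phi_ij / sum_k Phi_ik + d_i^-0.5 d_j^-0.5) e_j^l."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    degree_term = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]   # LightGCN weights
    row_sum = phi.sum(axis=1, keepdims=True)
    row_sum[row_sum == 0] = 1.0
    cir_term = gamma * phi / row_sum                                # row-normalized CIR weights
    return (degree_term + cir_term) @ emb

# Toy bipartite graph: users are nodes 0-1, items are nodes 2-4.
edges = [(0, 2), (0, 3), (1, 3), (1, 4)]
adj = np.zeros((5, 5))
phi = np.zeros((5, 5))
rng = np.random.default_rng(0)
for u, i in edges:
    adj[u, i] = adj[i, u] = 1.0
    phi[u, i] = phi[i, u] = rng.uniform(0.1, 1.0)   # stand-in CIR values on observed edges
emb = rng.normal(size=(5, 8))
print(cagcn_star_layer(adj, phi, emb).shape)
```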
314
+
315
+ Table 3: Basic dataset statistics.
316
+
317
+ <table><tr><td>Dataset</td><td>#Users</td><td>#Items</td><td>#Interactions</td><td>Density</td></tr><tr><td>Gowalla</td><td>29,858</td><td>40,981</td><td>1,027,370</td><td>0.084%</td></tr><tr><td>Yelp</td><td>31,668</td><td>38,048</td><td>1,561,406</td><td>0.130%</td></tr><tr><td>Amazon</td><td>52,643</td><td>91,599</td><td>2,984,108</td><td>0.062%</td></tr><tr><td>Ml-1M</td><td>6,022</td><td>3,043</td><td>895,699</td><td>4.888%</td></tr><tr><td>Loseit</td><td>5,334</td><td>54,595</td><td>230,866</td><td>0.08%</td></tr><tr><td>News</td><td>29,785</td><td>21,549</td><td>766,874</td><td>0.119%</td></tr></table>
318
+
319
+ ### A.6 Experimental Setting
320
+
321
+ #### A.6.1 Datasets
322
+
323
+ Following [5, 7], we validate the proposed approach on four widely used benchmark datasets in recommender systems, including Gowalla, Yelp, Amazon, and Ml-1M, the details of which are provided in [5, 7]. Moreover, we collect two extra datasets to further demonstrate the superiority of our proposed model in even broader user-item interaction domains: (1) Loseit: This dataset is collected from the subreddit loseit - Lose the Fat ${}^{1}$ from March 2020 to March 2022, where users discuss healthy and sustainable methods of losing weight via posts. To ensure the quality of this dataset, we use the 10-core setting [22], i.e., retaining users and posts with at least ten interactions. (2) News: This dataset includes the interactions from the subreddit World News ${}^{2}$, where users share major news around the world via posts. Similarly, we use the 10-core setting to ensure the quality of this dataset. We summarize the statistics of all six datasets in Table 3.
324
+
325
+ #### A.6.2 Baselines
326
+
327
+ We compare our proposed CAGCN with the following baselines:
328
+
329
+ - MF [16]: This is the most classic collaborative filtering method equipped with the BPR loss [16], which preserves users' ranking over interacted items with respect to uninteracted items.
330
+
331
+ - NGCF [5]: This was the very first GNN-based collaborative filtering model to incorporate high-order connectivity of user-item interactions for recommendation.
332
+
333
+ - LightGCN [7]: This is the most popular collaborative filtering model based on GNNs, which extends NGCF by removing feature transformation and nonlinear activation, and achieves better trade-off between the performance and efficiency.
334
+
335
+ - UltraGCN [6]: This model simplifies GCNs for collaborative filtering by omitting infinite layers of message passing for efficient recommendation, and it constructs the user-user graphs to leverage higher-order relationships. Thus, it achieves both better performance and shorter running time than LightGCN.
336
+
337
+ - GTN [8]: This model leverages a robust and adaptive propagation based on the trend of the aggregated messages to avoid the unreliable user-item interactions.
338
+
339
+ Note that here we only focus on baselines leveraging graph convolution (besides the classic MF), including the state-of-the-art GNN-based recommendation models (i.e., UltraGCN and GTN). There are some other developing methodological directions (e.g., [23-26]) that can obtain comparable results to the aforementioned baselines on some of the benchmark datasets. However, these methods are either not GNN-based [23] or incorporate other general machine learning techniques rather than focusing on graph convolution, e.g., SGCN [26] leverages stochastic binary masks to remove noisy edges, and GOTNet [25] performs k-Means clustering on node embeddings to capture long-range dependencies. Given that our main focus is on advancing the frontier of graph convolution in recommendation systems, we omit these comparable baselines. Note that our work could be further enhanced by incorporating these general techniques, but we leave this as one future direction.
340
+
341
+ ---
342
+
343
+ ${}^{1}$ https://www.reddit.com/r/loseit/
344
+
345
+ ${}^{2}$ https://www.reddit.com/r/worldnews/
346
+
347
+ ---
348
+
349
+ #### A.6.3 Evaluation Metrics
350
+
351
+ Two popular metrics, Recall@K and Normalized Discounted Cumulative Gain (NDCG@K) [5], are adopted to evaluate all models. We set the default value of $K$ to 20 and report the average of Recall@20 and NDCG@20 over all users in the test set. In the inference phase, we treat items that the user has never interacted with in the training set as candidate items. All recommendation models predict the user's preference scores over these candidate items and rank them based on the computed scores to further calculate Recall@20 and NDCG@20.
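For reference, the per-user computation of the two metrics can be sketched as follows, assuming a ranked candidate list and a held-out test set per user; this mirrors the standard definitions rather than the paper's exact evaluation code.

```python
import numpy as np

def recall_at_k(ranked_items, test_items, k=20):
    hits = sum(1 for item in ranked_items[:k] if item in test_items)
    return hits / len(test_items) if test_items else 0.0

def ndcg_at_k(ranked_items, test_items, k=20):
    dcg = sum(1.0 / np.log2(rank + 2)   # rank is 0-based, so the top position contributes 1/log2(2)
              for rank, item in enumerate(ranked_items[:k]) if item in test_items)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(min(len(test_items), k)))
    return dcg / idcg if idcg > 0 else 0.0

ranked = [5, 9, 2, 7, 1]   # candidate items sorted by predicted score
test = {9, 1, 4}           # held-out interactions of this user
print(recall_at_k(ranked, test, k=5), ndcg_at_k(ranked, test, k=5))
```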
352
+
353
+ #### A.6.4 Hyperparameter Settings
354
+
355
+ We strictly follow the experimental setting used in LightGCN [7] to ensure a fair comparison. For all baselines, we adopt exactly the same hyper-parameters as suggested by the corresponding papers to avoid any biased comparison: the embedding size ${d}^{0} = {64}$, learning rate ${lr} = {0.001}$, the number of propagation layers $L = 3$, and training batch size 2048. The coefficient of ${L}_{2}$-regularization is searched in $\left\{ {1{e}^{-4},1{e}^{-3}}\right\}$. As the user/item embedding is the main network parameter, it is crucial to ensure the same embedding size for a fair comparison between different models. Therefore, when comparing with GTN [8], we set the embedding size to 256 to align with [8]. For CAGCN, we set ${\gamma }_{i}$ as $\mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}^{1}}}{d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}}$ to ensure the same embedding magnitude as LightGCN. For CAGCN*, we set $g$ as the weighted sum in Eq. (2) for efficiency. Although using the weighted sum cannot guarantee the universal approximation of $g$ as an MLP would [20], we empirically find that it still achieves superior performance over existing work. Furthermore, we set ${\gamma }_{i} = \gamma$ as a constant controlling the contributions of capturing different collaborations. Note that we search the optimal $\gamma$ within $\{ 1,{1.2},{1.5},{1.7},{2.0}\}$. In addition, we term the model variant $\operatorname{CAGCN}\left( *\right)$-jc if we use JC to compute $\phi$.
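The search described above amounts to a small configuration grid; a sketch of how it could be expressed is given below (the key names and the `train_and_evaluate` placeholder are illustrative, not the paper's actual training script).

```python
from itertools import product

base_config = {"embed_dim": 64, "lr": 1e-3, "num_layers": 3, "batch_size": 2048}
l2_grid = [1e-4, 1e-3]                  # L2-regularization coefficients
gamma_grid = [1.0, 1.2, 1.5, 1.7, 2.0]  # gamma values for CAGCN*

for l2, gamma in product(l2_grid, gamma_grid):
    config = {**base_config, "l2_reg": l2, "gamma": gamma}
    # train_and_evaluate(config)        # placeholder for the actual training loop
    print(config)
```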
356
+
357
+ ### A.7 Additional Experimental Results
358
+
359
+ #### A.7.1 Performance Comparison between CAGCN and GTN
360
+
361
+ Here we compare the performance between CAGCN and GTN. We first increase the embedding size ${d}^{0}$ to 256 following ${\left\lbrack 8\right\rbrack }^{3}$ and observe the consistent superiority of our model over GTN in Table 4. This is because in GTN [8], the edge weights for message-passing are still computed based on node embeddings that implicitly encode noisy collaborative signals from unreliable interactions. Conversely, our CAGCN* directly alleviates propagation along unreliable interactions based on their CIR values, which removes noisy interactions at the source.
362
+
363
+ Table 4: Performance comparison of CAGCN* with GTN.
364
+
365
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">$\mathbf{{Metric}}$</td><td rowspan="2">GTN</td><td colspan="3">CAGCN*</td></tr><tr><td>-jc</td><td>-sc</td><td>-lhn</td></tr><tr><td rowspan="2">Gowalla</td><td>R@20</td><td>0.1870</td><td>0.1901</td><td>0.1899</td><td>0.1885</td></tr><tr><td>N@20</td><td>0.1588</td><td>0.1604</td><td>0.1603</td><td>0.1576</td></tr><tr><td rowspan="2">Yelp2018</td><td>R@20</td><td>0.0679</td><td>0.0731</td><td>0.0729</td><td>0.0689</td></tr><tr><td>N@20</td><td>0.0554</td><td>0.0605</td><td>0.0601</td><td>0.0565</td></tr><tr><td rowspan="2">Amazon</td><td>R@20</td><td>0.0450</td><td>0.0573</td><td>0.0575</td><td>0.0520</td></tr><tr><td>N@20</td><td>0.0346</td><td>0.0456</td><td>0.0458</td><td>0.0409</td></tr></table>
366
+
367
+ #### A.7.2 Efficiency Comparison
368
+
369
+ As justified in Section 3, efficiency plays a significant role in evaluating recommendation systems. As recommendation models will eventually be deployed on user-item data of real-world scale, it is crucial to compare the efficiency of the proposed CAGCN(*) with other baselines. For a fair comparison, we use a uniform code framework that we implemented ourselves for all models and run them on the same machine with the Ubuntu 20.04 system, an AMD Ryzen 9 5900 12-Core Processor (2200 MHz), 128 GB RAM and an NVIDIA GeForce RTX 3090 GPU. Following the experimental setting in Figure 2(a), we present the NDCG@20 along with the training time in Figure 5. Clearly, CAGCN* achieves substantially higher performance in significantly less time because the collaboration-aware graph convolution leverages more beneficial collaborations from neighborhoods.
370
+
371
+ ---
372
+
373
+ ${}^{3}$ As the user/item embedding is the main network parameters, it is crucial to ensure the same embedding size when comparing different models and hence we use the exactly the same embedding size as GTN.
374
+
375
+ ---
376
+
377
+ ![01963ec7-5751-77cc-9084-2440c3b619c1_12_322_469_1141_280_0.jpg](images/01963ec7-5751-77cc-9084-2440c3b619c1_12_322_469_1141_280_0.jpg)
378
+
379
+ Figure 5: Comparing the training time of CAGCN(*) with other baselines on four datasets. For clear visualization, we only report the efficiency of the best CAGCN(*) variant based on Table 1 for each dataset. CAGCN* almost always achieves substantially higher NDCG@20 in significantly less time.
380
+
381
+ Furthermore, in Table 5 we report the first time at which our best CAGCN* variant reaches the best performance of LightGCN on each dataset. To ensure a fair comparison, we also include the time for precomputing the CIR matrix as preprocessing time for our CAGCN*. We can see that CAGCN* spends significantly less time to reach the same best performance as LightGCN, which highlights its broad prospects for deployment in real-world recommendation.
382
+
383
+ Table 5: Efficiency comparison of CAGCN* with LightGCN.
384
+
385
+ <table><tr><td>Model</td><td>Stage</td><td>Gowalla</td><td>Yelp</td><td>Amazon</td><td>MI-1M</td><td>Loseit</td><td>News</td></tr><tr><td>LightGCN</td><td>Training</td><td>16432.0</td><td>28788.0</td><td>81976.5</td><td>18872.3</td><td>39031.0</td><td>13860.8</td></tr><tr><td rowspan="3">CAGCN*</td><td>Preprocess</td><td>167.4</td><td>281.6</td><td>1035.8</td><td>33.8</td><td>31.4</td><td>169.0</td></tr><tr><td>Training</td><td>2963.2</td><td>1904.4</td><td>1983.9</td><td>11304.7</td><td>10417.7</td><td>1088.4</td></tr><tr><td>Total</td><td>3130.6</td><td>2186.0</td><td>3019.7</td><td>11338.5</td><td>10449.1</td><td>1157.4</td></tr><tr><td rowspan="2">Improve</td><td>Training</td><td>82.0%</td><td>93.4%</td><td>97.6%</td><td>40.1%</td><td>73.3%</td><td>92.1%</td></tr><tr><td>Total</td><td>80.9%</td><td>92.4%</td><td>96.3%</td><td>39.9%</td><td>73.2%</td><td>91.6%</td></tr></table>
386
+
387
+ #### A.7.3 Empirical Analysis of CIR
388
+
389
+ To rationalize that edges with higher CIR are more important to the recommendation performance, we leverage the LightGCN model with pre-trained user/item embeddings, remove all edges among nodes, and then add edges back incrementally. Here we take two strategies: (1) Global Strategy: adding the top-k edges among all edges in the whole graph according to their CIR; (2) Local Strategy: adding the top-k edges among the edges around each node according to their CIR. Specifically, for the local strategy, we first add the edge with the highest CIR around each node, then the edge with the ${2}^{\text{nd }}$ highest CIR around each node, and so on. For both strategies, we keep adding edges until the total number of added edges reaches the predefined budget. We rank edges according to JC, SC, LHN and CN, respectively, and also compare them with random addition. We can clearly see that in most cases, adding edges with higher JC/SC/LHN leads to better performance than random addition, which demonstrates the importance of edges with higher JC/SC/LHN.
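The two edge-addition strategies can be sketched as follows, assuming each candidate edge already carries its precomputed CIR score; the data structures and function names are ours and only illustrate the selection order, not the full experiment.

```python
def add_edges_global(scored_edges, budget):
    """Global strategy: take the top-`budget` edges over the whole graph by CIR."""
    return sorted(scored_edges, key=lambda e: e[2], reverse=True)[:budget]

def add_edges_local(scored_edges, budget):
    """Local strategy: take each node's highest-CIR edge first, then each node's
    second-highest, and so on, until the budget is met."""
    per_node = {}
    for u, v, score in scored_edges:
        per_node.setdefault(u, []).append((u, v, score))
    for node_edges in per_node.values():
        node_edges.sort(key=lambda e: e[2], reverse=True)
    added, rank = [], 0
    while len(added) < budget and any(rank < len(e) for e in per_node.values()):
        for node_edges in per_node.values():
            if rank < len(node_edges) and len(added) < budget:
                added.append(node_edges[rank])
        rank += 1
    return added

edges = [('u1', 'i1', 0.9), ('u1', 'i2', 0.4), ('u2', 'i1', 0.7), ('u2', 'i3', 0.2)]
print(add_edges_global(edges, 2))
print(add_edges_local(edges, 2))
```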
390
+
391
+ ### A.8 Related Work
392
+
393
+ #### A.8.1 Collaborative Filtering (CF)
394
+
395
+ As an effective tool for personalized recommendation, CF assumes that people sharing similar interests in one thing tend to have similar preferences for another, and it predicts the interest of a user (filtering) by utilizing the preferences of other users who have similar interests (collaborative) [27]. Early CF methods used MF techniques [28], which generally map the IDs of users and items to a joint latent factor space and take the inner product of the embeddings to estimate the user-item interactions [16, 29]. Despite the initial success, these methods failed to capture nonlinear user-item relationships due to their intrinsic linearity. To address this issue, deep learning was used to capture the non-linearity (e.g., by replacing the linear inner product with nonlinear neural networks) [5, 30]. All of the above methods capture the CF effect by optimizing embedding similarity based on observed user-item interactions. Going a step further, graph-based methods were proposed to leverage message-passing to directly inject the CF effect into the user/item embeddings [5, 7].
396
+
397
+ ![01963ec7-5751-77cc-9084-2440c3b619c1_13_322_204_1146_276_0.jpg](images/01963ec7-5751-77cc-9084-2440c3b619c1_13_322_204_1146_276_0.jpg)
398
+
399
+ Figure 6: Performance of recommendation when adding edges randomly and according to different variants of CIR. (a) Adding edges
400
+
401
+ #### A.8.2 Graph-based Methods for Recommendation
402
+
403
+ Since user-item interactions can be naturally modeled as a bipartite graph, another line of research $\left\lbrack {5,7,{31},{32}}\right\rbrack$ infers users' preferences by exploring the topological patterns of user-item bipartite graphs. Two pioneering works, ItemRank [31] and BiRank [32], define users' preferences based on their observed interacted items and perform label propagation to capture the CF effect. Although users' ranking scores are computed based on the structural proximity between the observed items and the target item, the non-trainable user preferences and the lack of recommendation-based objectives in these methods lead to inferior performance compared with embedding-based methods such as MF-BPR [16]. Furthermore, HOP-Rec [33] combines graph-based methods, which better capture the collaboration among nodes, with embedding-based methods, which better optimize the recommendation objective. Yet, the interactions captured by random walks do not fully explore higher-order neighbors and multi-hop dependencies [34]. By contrast, GNN-based recommendation methods are superior at encoding structural proximity (especially higher-order connections) in user/item embeddings, which is crucial for capturing the CF effect $\left\lbrack {5,7,8}\right\rbrack$. For example, SGL $\left\lbrack {35}\right\rbrack$ further leverages contrastive learning [36] with graph augmentation to enhance model robustness against noisy interactions, but it still follows the existing message-passing mechanism of GNNs without any justification. In fact, all of these GNN-based models directly borrow the traditional graph convolution operation from node/graph classification and blindly propagate neighboring users'/items' embeddings without any recommendation-tailored modification. Our work has demonstrated that the collaboration captured by message-passing may not always improve users' ranking over items, which inspires us to design a new generation of graph convolutions that adaptively pass messages based on the benefits provided by the captured collaborations.
papers/LOG/LOG 2022/LOG 2022 Conference/O7msz8Ou7o/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,135 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § COLLABORATION-AWARE GRAPH NEURAL NETWORK FOR RECOMMENDER SYSTEMS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Graph Neural Networks (GNNs) have been successfully adopted in recommendation systems by virtue of the message-passing that implicitly captures collaborative effect. Nevertheless, most of the existing message-passing mechanisms for recommendation are directly inherited from GNNs without scrutinizing whether the captured collaborative effect would benefit the prediction of user preferences. To quantify the benefit of the captured collaborative effect, we propose a recommendation-oriented topological metric, Common Interacted Ratio (CIR), which measures the level of interaction between a specific neighbor of a node with the rest of its neighbors. Then we propose a recommendation-tailored GNN, Collaboration-Aware Graph Convolutional Network (CAGCN), that goes beyond 1-WL test in distinguishing non-bipartite-subgraph-isomorphic graphs. Experiments on six benchmark datasets show that the best CAGCN variant outperforms the most representative GNN-based recommendation model, LightGCN, by nearly 10% in Recall @20 and also achieves more than ${80}\%$ speedup. Our code is available at https://github.com/submissionconf2023/CAGCN
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Recommendation aims to alleviate information overload through helping users discover items of interest $\left\lbrack {1,2}\right\rbrack$ . Given historical user-item interactions, the key of recommendation systems is to leverage the Collaborative Effect [3-5] to predict how likely users will interact with items. A common paradigm for modeling collaborative effect is to first learn embeddings of users/items capable of recovering historical user-item interactions and then perform top-k recommendation based on the pairwise similarity between the learned user/item embeddings.
16
+
17
+ Since user-item interactions can be naturally represented as a bipartite graph, recent research has started to leverage GNNs to learn user/item embeddings for recommendation [5-7]. Two pioneering works, NGCF [5] and LightGCN [7], leverage graph convolutions to aggregate messages from local neighborhoods, which directly injects the collaborative signal into user/item embeddings. However, blindly passing messages following existing styles of GNNs could capture harmful collaborative signals from unreliable interactions, which corrupts user/item embeddings and hinders the performance of GNN-based models [8]. Despite the fundamental importance of capturing beneficial collaborative signals, the related studies are still in their infancy. To fill this crucial gap, we aim to customize message-passing for recommendations and propose a recommendation-tailored GNN, namely the Collaboration-Aware Graph Convolutional Network, that selectively passes neighborhood information based on their Common Interacted Ratio (CIR). Our contributions are:
18
+
19
+ * Novel Recommendation-tailored Topological Metric: We propose a recommendation-oriented topological metric, Common Interacted Ratio (CIR), and demonstrate the capability of CIR to quantify the benefits of aggregating messages from neighborhoods.
20
+
21
+ * Novel Recommendation-tailored Graph Convolution: We incorporate CIR into message-passing and propose a novel Collaboration-Aware Graph Convolutional Network (CAGCN). Then we prove that it can go beyond the 1-WL test in distinguishing non-bipartite-subgraph-isomorphic graphs, demonstrate its superiority via comprehensive experiments on real-world datasets including two newly collected datasets, and provide an in-depth interpretation of its advantages.
22
+
23
+ < g r a p h i c s >
24
+
25
+ Figure 1: In (a)-(b), since ${j}_{1},{j}_{2}$ have more interactions (paths) with (to) $i$ ’s neighbors than ${j}_{3}$ , leveraging more collaborations from ${j}_{1},{j}_{2}$ than ${j}_{3}$ would increase $u$ ’s ranking over $i$ . In (c), we quantify the CIR between ${j}_{1}$ and $u$ via the paths (and associated nodes) between ${j}_{1}$ and ${\widehat{\mathcal{N}}}_{u}^{1}$ .
26
+
27
+ § 2 METHOD
28
+
29
+ In this section, we introduce notations used in this work, a novel recommendation-oriented topological metric (i.e., Common Interacted Ratio (CIR)) and then propose the collaboration-aware GNN.
30
+
31
+ Preliminary. Let $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ be the user-item bipartite graph, where the node set $\mathcal{V} = \mathcal{U} \cup \mathcal{I}$ includes the user set $\mathcal{U}$ and the item set $\mathcal{I}$ . User-item interactions are denoted as edges $\mathcal{E}$ where ${e}_{pq}$ represents the edge between node $p$ and $q$ . The network topology is described by adjacency matrix $\mathbf{A} \in \{ 0,1{\} }^{\left( {\left| \mathcal{U}\right| + \left| \mathcal{I}\right| }\right) \times \left( {\left| \mathcal{U}\right| + \left| \mathcal{I}\right| }\right) }$ , where ${\mathbf{A}}_{pq} = 1$ when ${e}_{pq} \in \mathcal{E}$ , and ${\mathbf{A}}_{pq} = 0$ otherwise. Let ${\mathcal{N}}_{p}^{l}$ and ${\widehat{\mathcal{N}}}_{p}^{l}$ denote the set of neighbors that are exactly $l$ -hops away from $p$ in the training and testing set. Let ${\mathcal{S}}_{p} = \left( {{\mathcal{V}}_{{\mathcal{S}}_{p}},{\mathcal{E}}_{{\mathcal{S}}_{p}}}\right)$ be the neighborhood subgraph [9] induced in $\mathcal{G}$ by ${\widetilde{\mathcal{N}}}_{p}^{1} = {\mathcal{N}}_{p}^{1} \cup \{ p\}$ . We use ${\mathcal{P}}_{pq}^{l}$ to denote the set of shortest paths of length $l$ between node $p$ and $q$ and denote one of such paths as ${P}_{pq}^{l}$ . Note that ${\mathcal{P}}_{pq}^{l} = \varnothing$ if it is impossible to have a path between $p$ and $q$ of length $l$ , e.g., ${\mathcal{P}}_{11}^{1} = \varnothing$ in an acyclic graph. Furthermore, we denote the initial embeddings of users/items in graph $\mathcal{G}$ as ${\mathbf{E}}^{0} \in {\mathbb{R}}^{\left( {n + m}\right) \times {d}^{0}}$ where ${\mathbf{e}}_{p}^{0} = {\mathbf{E}}_{p}^{0}$ is the node $p$ ’s embedding and let ${d}_{p}$ be the degree of node $p$ .
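To make the notation above concrete, a minimal sketch of how the bipartite adjacency matrix $\mathbf{A}$, the one-hop neighbor sets ${\mathcal{N}}_{p}^{1}$ and the degrees ${d}_{p}$ could be materialized from raw (user, item) interaction pairs is given below; indices and variable names are illustrative.

```python
import numpy as np

interactions = [(0, 0), (0, 1), (1, 1), (1, 2)]   # (user index, item index) pairs
num_users, num_items = 2, 3

# Items are offset by num_users so users and items share a single node index space.
adj = np.zeros((num_users + num_items, num_users + num_items))
for u, i in interactions:
    adj[u, num_users + i] = adj[num_users + i, u] = 1.0

one_hop = {p: set(np.flatnonzero(adj[p]).tolist()) for p in range(adj.shape[0])}
degrees = adj.sum(axis=1)
print(one_hop[0], degrees[0])   # N^1_0 = {2, 3}, d_0 = 2.0
```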
32
+
33
+ § 2.1 COMMON INTERACTED RATIO
34
+
35
+ Graph-based methods capture collaboration from other users/items by message-passing. However, we cannot guarantee that all of these collaborations benefit the prediction of users' preferences. For example, in Figure 1(a)-(b), given a center user $u$, we expect to leverage more collaborations from $u$'s observed neighboring items that have a higher level of interactions (e.g., ${j}_{1},{j}_{2}$ rather than ${j}_{3}$) with items that $u$ would interact with (e.g., $i$). To mathematically quantify such a level of interactions, we propose a graph topological metric, Common Interacted Ratio (CIR):
36
+
37
+ Definition 2.1. Common Interacted Ratio (CIR): For an observed neighboring item $j \in {\mathcal{N}}_{u}^{1}$ of user $u$ , the CIR of $j$ around $u$ considering nodes up to $\left( {L + 1}\right)$ -hops away from $u$ , i.e., ${\widehat{\phi }}_{u}^{L}\left( j\right)$ , is defined as the average interacting ratio of $j$ with all neighboring items of $u$ in ${\widehat{\mathcal{N}}}_{u}^{1}$ through paths of length less than or equal to ${2L}$ :
38
+
39
+ $$
40
+ {\widehat{\phi }}_{u}^{L}\left( j\right) = \frac{1}{\left| {\widehat{\mathcal{N}}}_{u}^{1}\right| }\mathop{\sum }\limits_{{i \in {\widehat{\mathcal{N}}}_{u}^{1}}}\mathop{\sum }\limits_{{l = 1}}^{L}{\beta }^{2l}\mathop{\sum }\limits_{{{P}_{ji}^{2l} \in {\mathcal{P}}_{ji}^{2l}}}\frac{1}{f\left( \left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\} \right) },\forall j \in {\mathcal{N}}_{u}^{1},\forall u \in \mathcal{U}, \tag{1}
41
+ $$
42
+
43
+ where $\left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\}$ represents the set of the 1-hop neighborhood of node $k$ along the path ${P}_{ji}^{2l}$ from node $j$ to $i$ of length ${2l}$. $f$ is a normalization function to differentiate the importance of different paths in ${\mathcal{P}}_{ji}^{2l}$ and its value depends on the neighborhood of each node on the path ${P}_{ji}^{2l}$. As shown in Figure 1(c), the CIR of ${j}_{1}$ centering around $u$, ${\widehat{\phi }}_{u}^{L}\left( {j}_{1}\right)$, is decided by paths of length between 2 and ${2L}$. By configuring different $L$ and $f$, $\mathop{\sum }\limits_{{{P}_{ji}^{2l} \in {\mathcal{P}}_{ji}^{2l}}}\frac{1}{f\left( \left\{ {{\mathcal{N}}_{k}^{1} \mid k \in {P}_{ji}^{2l}}\right\} \right) }$ could express many existing graph
44
+
45
+ similarity metrics [10-14], and we thoroughly discuss them in Appendix A.2. Calculating ${\widehat{\phi }}_{u}^{L}\left( j\right)$ is unrealistic since we do not have access to the testing set ${\widehat{\mathcal{N}}}_{u}^{1}$ in advance. Therefore, we propose to approximate ${\widehat{\phi }}_{u}\left( j\right)$ by enumerating $i$ from the observed training set ${\mathcal{N}}_{u}^{1}$ instead of ${\widehat{\mathcal{N}}}_{u}^{1}$ and denote this estimated version as ${\phi }_{u}^{L}\left( j\right)$. Such an approximation assumes that neighboring nodes interacting more with other neighboring nodes in the training set would also interact more with neighboring nodes in the testing set, which is verified in Appendix A.3. We further empirically rationalize that edges with higher ${\phi }_{u}\left( j\right)$ are more important to the recommendation performance in Appendix A.7.3.
46
+
47
+ § 2.2 COLLABORATION-AWARE GRAPH CONVOLUTIONAL NETWORK
48
+
49
+ In order to pass node messages based on the benefits of their corresponding collaborations, we develop Collaboration-Aware Graph Convolutional Network. The core idea is to strengthen/weaken the messages passed from neighbors with higher/lower estimated CIR to center nodes. To achieve this, we compute the edge weight as: ${\mathbf{\Phi }}_{ij} = {\phi }_{i}\left( j\right)$ when ${\mathbf{A}}_{ij} > 0$ (and 0 otherwise), where ${\phi }_{i}\left( j\right)$ is the estimated CIR of neighboring node $j$ centering around $i$ . Note that unlike the symmetric graph convolution ${\mathbf{D}}^{-{0.5}}\mathbf{A}{\mathbf{D}}^{-{0.5}}$ used in LightGCN, here $\mathbf{\Phi }$ is asymmetrical: the interacting level of node $j$ with $i$ ’s neighborhood is likely to be different from the interacting level of node $i$ with $j$ ’s neighborhood. We further normalize $\Phi$ and combine it with the LightGCN convolution:
50
+
51
+ $$
52
+ {\mathbf{e}}_{i}^{l + 1} = \mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}^{1}}}g\left( {{\gamma }_{i}\frac{{\mathbf{\Phi }}_{ij}}{\mathop{\sum }\limits_{{k \in {\mathcal{N}}_{i}^{1}}}{\mathbf{\Phi }}_{ik}},{d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}}}\right) {\mathbf{e}}_{j}^{l},\forall i \in \mathcal{V} \tag{2}
53
+ $$
54
+
55
+ where ${\gamma }_{i}$ is a coefficient that varies the total amount of message flowing to each node $i$ and controls the embedding magnitude of that node [15]. $g$ is a function combining the edge weights computed according to CIR and LightGCN. In Appendix A.4, we prove that under certain choices of $g$ and ${\gamma }_{i}$, CAGCN can go beyond the 1-WL test in distinguishing non-bipartite-subgraph-isomorphic graphs. Following the principle of LightGCN that the designed graph convolution should be light and easy to train, all other components of our architecture except the message-passing are exactly the same as in LightGCN, as covered in Appendices A.1 and A.5.
56
+
57
+ § 3 EXPERIMENTS
58
+
59
+ § 3.1 EXPERIMENTAL SETTINGS
60
+
61
+ We use six datasets, including two newly collected datasets from other domains. MF [16], NGCF [5], LightGCN [7], UltraGCN [6], and GTN [8] are used as baselines. More details about the datasets, baselines and experimental setup are provided in Appendix A.6. For the first model variant, CAGCN, we set $g\left( {A,B}\right) = g\left( A\right)$, where we remove $B = {d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}}$ to solely demonstrate the power of passing messages according to CIR, and set ${\gamma }_{i} = \mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}^{1}}}{d}_{i}^{-{0.5}}{d}_{j}^{-{0.5}}$ to ensure the same embedding magnitude. For the second model variant, CAGCN*, we set $g$ as a weighted sum and ${\gamma }_{i} = \gamma$ as a constant controlling the contributions of capturing different collaborations.
62
+
63
+ § 3.2 EXPERIMENTAL RESULTS
64
+
65
+ Here we describe the main experimental result observations with detailed insights in Appendix A.7.
66
+
67
+ Performance Comparison. The performance of all baselines is provided in Table 1. We first compare the performance of LightGCN and the CAGCN variants. Clearly, CAGCN-jc/sc/lhn achieves higher performance than LightGCN because we selectively propagate node embeddings according to the proposed CIR metrics (JC, SC, LHN). However, CAGCN-cn mostly performs worse than LightGCN because nodes having more common neighbors with other nodes are more likely to have higher degrees and hence aggregate more false-positive neighbors' information during message-passing. Comparing the CAGCN* variants with other competing baselines, CAGCN*-jc/sc almost consistently achieves higher performance than other baselines except UltraGCN on Amazon. This is because UltraGCN allows multiple negative samples for each positive interaction. Since GTN [8] uses a different embedding size, we compare our model with GTN separately in Table 4 in Appendix A.7.
68
+
69
+ Efficiency Comparison. As recommendation models will eventually be deployed on user-item data of real-world scale, it is crucial to compare the efficiency of the proposed CAGCN(*) with other baselines. For a fair comparison, we use a uniform code framework that we implemented ourselves for all models and run them on the same machine. Clearly, in Figure 2(a), CAGCN* achieves substantially higher performance in significantly less time. This is because the designed graph convolution recognizes the neighbors whose collaborations are most beneficial to users' rankings and passes stronger messages from these neighbors.
70
+
71
+ Impact of Propagation Layers. We increase the number of propagation layers of CAGCN* and LightGCN from 1 to 4 and visualize the corresponding performance in Figure 2(b). The performance first increases as the number of layers grows from 1 to 3 and then decreases on both datasets, which is consistent with the findings in [7]. Our CAGCN* is better than LightGCN at all layer counts.
72
+
73
+ Table 1: Results on R@20 and N@20 (i.e., Recall and NDCG) with best and runner-up highlighted.
74
+
75
+ \begin{tabular}{llccccccccccc}
+ \toprule
+ Model & Metric & MF & NGCF & LightGCN & UltraGCN & \multicolumn{4}{c}{CAGCN} & \multicolumn{3}{c}{CAGCN*} \\
+ \cmidrule(lr){7-10} \cmidrule(lr){11-13}
+  &  &  &  &  &  & -jc & -sc & -cn & -lhn & -jc & -sc & -lhn \\
+ \midrule
+ Gowalla & Recall@20 & 0.1554 & 0.1563 & 0.1817 & 0.1867 & 0.1825 & 0.1826 & 0.1632 & 0.1821 & 0.1878 & 0.1878 & 0.1857 \\
+  & NDCG@20 & 0.1301 & 0.1300 & 0.1570 & 0.1580 & 0.1575 & 0.1577 & 0.1381 & 0.1577 & 0.1591 & 0.1588 & 0.1563 \\
+ \midrule
+ Yelp2018 & Recall@20 & 0.0539 & 0.0596 & 0.0659 & 0.0675 & 0.0674 & 0.0671 & 0.0661 & 0.0661 & 0.0708 & 0.0711 & 0.0676 \\
+  & NDCG@20 & 0.0460 & 0.0489 & 0.0554 & 0.0553 & 0.0564 & 0.0560 & 0.0546 & 0.0555 & 0.0586 & 0.0590 & 0.0554 \\
+ \midrule
+ Amazon & Recall@20 & 0.0337 & 0.0336 & 0.0420 & 0.0682 & 0.0435 & 0.0435 & 0.0403 & 0.0422 & 0.0510 & 0.0506 & 0.0457 \\
+  & NDCG@20 & 0.0265 & 0.0262 & 0.0331 & 0.0553 & 0.0343 & 0.0342 & 0.0321 & 0.0333 & 0.0403 & 0.0400 & 0.0361 \\
+ \midrule
+ Ml-1M & Recall@20 & 0.2604 & 0.2619 & 0.2752 & 0.2783 & 0.2780 & 0.2786 & 0.2730 & 0.2760 & 0.2822 & 0.2827 & 0.2799 \\
+  & NDCG@20 & 0.2697 & 0.2729 & 0.2820 & 0.2638 & 0.2871 & 0.2881 & 0.2818 & 0.2871 & 0.2775 & 0.2776 & 0.2745 \\
+ \midrule
+ Loseit & Recall@20 & 0.0539 & 0.0574 & 0.0588 & 0.0621 & 0.0622 & 0.0625 & 0.0502 & 0.0592 & 0.0654 & 0.0658 & 0.0658 \\
+  & NDCG@20 & 0.0420 & 0.0442 & 0.0465 & 0.0446 & 0.0474 & 0.0470 & 0.0379 & 0.0461 & 0.0486 & 0.0484 & 0.0489 \\
+ \midrule
+ News & Recall@20 & 0.1942 & 0.1994 & 0.2035 & 0.2034 & 0.2135 & 0.2132 & 0.1726 & 0.2084 & 0.2182 & 0.2172 & 0.2053 \\
+  & NDCG@20 & 0.1235 & 0.1291 & 0.1311 & 0.1301 & 0.1385 & 0.1384 & 0.1064 & 0.1327 & 0.1405 & 0.1414 & 0.1311 \\
+ \midrule
+ Avg. Rank & Recall@20 & 9.83 & 9.17 & 7.33 & 4.17 & 4.67 & 4.33 & 8.83 & 6.17 & 1.67 & 1.50 & 3.33 \\
+  & NDCG@20 & 9.50 & 9.17 & 5.83 & 6.00 & 3.67 & 4.00 & 8.33 & 5.00 & 2.50 & 2.50 & 5.17 \\
+ \bottomrule
+ \end{tabular}
126
+
127
128
+
129
+ Figure 2: (a) Efficiency comparison. (b) Performance with different numbers of propagation layers. (c) Performance of different models on users in different degree groups.
130
+
131
+ Interpretation of the advantages of CAGCN(*). In Figure 2(c), we visualize the performance of all models for nodes in different degree groups. Compared with non-graph-based methods (e.g., MF), graph-based methods (e.g., LightGCN, CAGCN(*)) achieve higher performance for lower-degree nodes $\lbrack 0,{300})$ but lower performance for higher-degree nodes $\lbrack {300},\operatorname{Inf})$. Because node degrees follow a power-law distribution [17], the average performance of graph-based methods is still higher. To the best of our knowledge, this is the first work to uncover such a performance imbalance/unfairness issue with respect to node degree in graph-based recommendation models. On the one hand, graph-based models can leverage neighborhood information to augment the weak supervision available for low-degree nodes; on the other hand, message passing introduces many noisy/unreliable interactions for higher-degree nodes. It is therefore crucial to design an unbiased graph-based recommendation model that achieves high performance on both low- and high-degree nodes. In addition, the opposite performance trends between NDCG and Recall indicate that different evaluation metrics have different levels of sensitivity to node degrees.
132
+
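+ The degree-group analysis in Figure 2(c) can be reproduced in outline by bucketing test users by their training-set degree and averaging a per-user metric within each bucket. A minimal sketch follows; the intermediate bucket edges are illustrative assumptions, since only the $\lbrack 0, 300)$ versus $\lbrack 300, \operatorname{Inf})$ split is stated in the text.

```python
import numpy as np

def metric_by_degree_group(user_degrees, user_metrics,
                           bins=(0, 30, 100, 300, np.inf)):
    """Average a per-user metric (e.g., Recall@20) within degree buckets.

    user_degrees: dict user -> number of training interactions.
    user_metrics: dict user -> metric value for that user.
    """
    groups = {f"[{lo}, {hi})": [] for lo, hi in zip(bins[:-1], bins[1:])}
    for u, d in user_degrees.items():
        for lo, hi in zip(bins[:-1], bins[1:]):
            if lo <= d < hi:
                groups[f"[{lo}, {hi})"].append(user_metrics[u])
                break
    return {g: float(np.mean(v)) if v else float("nan")
            for g, v in groups.items()}
```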
133
+ § 4 CONCLUSION
134
+
135
+ In this paper, we propose the Common Interacted Ratio (CIR) to determine whether the captured collaborative effect would benefit the prediction of user preferences. We then propose the Collaboration-Aware Graph Convolutional Network to aggregate neighboring nodes' information based on their CIRs. We further define a new type of isomorphism, bipartite-subgraph-isomorphism, and prove that our CAGCN* can be more expressive than 1-WL in distinguishing subtree(subgraph)-isomorphic yet non-bipartite-subgraph-isomorphic graphs. Experimental results demonstrate the advantages of the proposed CAGCN(*) over other baselines. Specifically, CAGCN* outperforms the most representative graph-based recommendation model, LightGCN [7], by 9% in Recall@20 while also achieving a speedup of more than 79%. In the future, we plan to explore the imbalanced performance improvement among nodes in different degree groups observed in Figure 2(c), especially from a GNN fairness perspective [18].
papers/LOG/LOG 2022/LOG 2022 Conference/PTz0aXJp7A/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,380 @@
1
+ # Similarity-based Link Prediction from Modular Compression of Network Flows
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Node similarity scores are a foundation for machine learning in graphs for clustering, node classification, anomaly detection, and link prediction with applications in biological systems, information networks, and recommender systems. Recent works on link prediction use vector space embeddings to calculate node similarities in undirected networks with good performance, but they have several disadvantages: limited interpretability, the need for hyperparameter tuning and manual model fitting through dimensionality reduction, and poor performance of symmetric similarities in directed link prediction. We propose MapSim, an information-theoretic measure to assess node similarities based on modular compression of network flows. Different from vector space embeddings, MapSim represents nodes in a discrete, non-metric space of communities and yields asymmetric similarities in an unsupervised fashion. We compare MapSim on a link prediction task to popular embedding-based algorithms across 47 data sets of networks and find that MapSim’s average performance across all networks is more than 7% higher than its closest competitor, outperforming all embedding methods in 14 of the 47 networks. Our method demonstrates the potential of compression-based approaches in graph representation learning, with promising applications in other graph learning tasks.
12
+
13
+ ## 1 Introduction
14
+
15
+ Calculating similarity scores between objects is a fundamental problem in supervised and unsupervised machine learning tasks from clustering, anomaly detection, and text mining to classification and recommender systems. In Euclidean feature spaces, similarities between feature vectors are commonly calculated based on lengths, norms, angles, or other geometric concepts, possibly using kernel functions that perform implicit non-linear mappings to high-dimensional feature spaces [1]. For relational data represented as graphs, methods using the graph topology to calculate pairwise node similarities can address learning problems such as graph clustering, node classification, and link prediction. For link prediction, many recent works take a multi-step approach that separates representation learning and link prediction $\left\lbrack {2,3}\right\rbrack$ : First, they learn a node embedding in a latent vector space from the graph's topology, using methods such as graph or matrix factorisation [4, 5], or random-walk based techniques [6-8]. Then, they interpret node positions as points in a high-dimensional feature space, possibly applying downstream dimensionality reduction. Finally, they use node positions in the resulting feature space to assign new "features" to pairs of nodes, which can be used to predict links. Taking an unsupervised approach, we predict links based on node similarities [9] by calculating distance metrics or similarity scores between node pairs and then ranking them. We can alternatively use a supervised approach [10] by (i) using binary operators like the Hadamard product [7], (ii) sampling negative instances (node pairs not connected by links), and (iii) using the features of positive and negative instances to train a supervised binary classifier [7].
16
+
17
+ Advances in graph embedding and representation learning have considerably improved our ability to predict links in networks, with applications in biological [11] and social [12] networks and in recommender systems [13]. However, these methods also introduce challenges for real-world link prediction tasks: First, they require specifying hyperparameters that control aspects regarding the scale of patterns in graphs, influence of local and non-local structures, and latent space dimensionality
18
+
19
+ ![01963ee2-b813-70d9-996a-a15eb62a6ee3_1_308_194_1180_234_0.jpg](images/01963ee2-b813-70d9-996a-a15eb62a6ee3_1_308_194_1180_234_0.jpg)
20
+
21
+ Figure 1: We calculate node similarities for predicting links based on the map equation's modular coding scheme for the network. Blue and orange nodes each have a unique codeword within their module, derived from their stationary visit rates. Decimal numbers show the theoretical lower limit for the codeword length in bits. Map equation similarity derives description lengths for predicted links; connecting more similar nodes requires fewer bits. Intra-community links tend to have shorter description lengths than inter-community links.
22
+
23
+ [14]. Network-specific hyperparameter tuning addresses these issues, but is challenging in real applications and aggravates the risk of overfitting; recent systematic comparisons reveal that the performance of different methods largely varies across data sets $\left\lbrack {2,3}\right\rbrack$ . Altogether, this makes it difficult for practitioners to choose and optimally parametrise an embedding method. Second, using latent metric spaces implies symmetric similarities, limiting the performance when predicting directed links [5, 15]. Third, an important issue is that, compared with manually-crafted features, embeddings tend to have low interpretability: We can assess the similarity of nodes, but we cannot explain why some nodes are more similar than others [2-4]. Finally, recent works have highlighted fundamental limitations of low-dimensional representations of complex networks [16], questioning to what extent Euclidean embeddings can capture patterns that are relevant for link prediction.
24
+
25
+ Motivated by recent works highlighting the importance of community structures for link prediction $\left\lbrack {2,{17},{18}}\right\rbrack$ , we propose a novel approach to similarity-based link prediction that addresses these issues. Our contributions are:
26
+
27
+ - We introduce map equation similarity, MapSim for short, an information-theoretic method to calculate asymmetric node similarities. MapSim builds on the map equation [19], a framework that applies coding theory to compress random walks based on hierarchical cluster structures.
28
+
29
+ - Different from other random walk-based embedding techniques, our work builds on an analytical approach to calculate the expected description length of random walks in the limit, and, thus, requires neither simulating random walks nor tuning hyperparameters.
30
+
31
+ - Following the minimum description length principle, MapSim incorporates Occam's razor and balances explanatory power with model complexity, making dimensionality reduction superfluous. With hierarchical cluster structures, MapSim captures patterns at multiple scales simultaneously and combines advantages of local and non-local similarity scores.
32
+
33
+ - We validate MapSim in an unsupervised, similarity-based link prediction task and compare its performance to six widely used random walk-based embedding techniques in 47 directed and undirected empirical networks from different domains. Highlighting challenges in the generalisability of embedding techniques and parametrisations across different networks, this analysis is a contribution in itself.
34
+
35
+ - Confirming recent surveys, we find that, without network-specific hyperparameter tuning, the performance of popular embedding techniques for unsupervised link prediction heavily depends on the data. In contrast, MapSim provides high performance across a wide range of networks, with an average performance ${7.7}\%$ and ${7.5}\%$ better than the best competitor in undirected and directed networks, respectively. MapSim outperforms all competing methods in 14 of the 47 networks with a standard deviation less than half of that of the closest competitor. We find that the worst-case performance of our method is ${44}\%$ and ${33}\%$ better compared to popular embedding techniques in undirected and directed networks, respectively.
36
+
37
+ In summary, we take a novel perspective on graph representation learning that fundamentally differs from other random walk-based graph embeddings. Rather than embedding nodes into a metric space, leading to symmetric similarities, we develop an unsupervised learning framework where (i) positions of nodes in a coding tree capture their representation in a non-metric latent space, and (ii) node similarities are calculated based on how well transitions between nodes are compressed by a network's hierarchical modular structure (figure 1). Apart from node similarities that can be "explained" based on community structures captured in the coding tree, MapSim yields asymmetric similarity scores that naturally support link prediction in directed networks. We provide a simple, non-parametric, and scalable unsupervised method with high generalisability across data sets, and which should thus be of interest for practitioners. Our work demonstrates the power of compression-based approaches to graph representation learning, with promising applications in other graph learning tasks.
38
+
39
+ ## 2 Related Work and Background
40
+
41
+ We first summarise recent works on graph embedding and similarity-based link prediction. Then, we review the map equation, an information-theoretic objective function for community detection and theoretical foundation of our compression-based similarity score.
42
+
43
+ ### 2.1 Related Work
44
+
45
+ Since we focus on unsupervised similarity-based link prediction, we consider methods to calculate a bivariate function $\operatorname{sim}\left( {u, v}\right) \in {\mathbb{R}}^{d}$ , where $u, v \in V$ are nodes in a directed or undirected, possibly weighted graph $G = \left( {V, E}\right) \left\lbrack {{20},{21}}\right\rbrack$ . While similarity metrics often consider scalar functions $\left( {d = 1}\right)$ , recent vector space embeddings use binary operators to assign vector-valued "features" with $d > 1$ to node pairs. Since vectorial features are typically used in downstream classification techniques, this can be seen as an implicit mapping to similarities, for example "similar" features being assigned similar class probabilities. We limit our discussion to topological or structural approaches [20], functions $\operatorname{sim}\left( {u, v}\right)$ that can be calculated solely based on the edges $E$ in graph $G$ without requiring additional information such as node attributes or other non-topological graph properties.
46
+
47
+ Several works define scalar similarities based on local topological characteristics such as the Jaccard index of neighbour sets, degrees of nodes, or degree-weighted measures of common neighbours [22]. Other methods define similarities based on random walks, paths, or topological distance between nodes $\left\lbrack {9,{23} - {25}}\right\rbrack$ . Compared to purely local approaches, an advantage of random walk-based methods is their ability to incorporate both local and non-local information, which is crucial for sparse networks where nodes may lack common neighbours. Since walk-based methods reveal cluster patterns in networks [19], they generally perform well in downstream tasks such as link prediction and graph clustering [2]. Graph factorisation approaches that use eigenvectors of different types of Laplacian matrices that represent relationships between nodes share this high performance [26], likely because (i) Laplacians capture the dynamics of continuous-time random walks [27], and (ii) spectral methods can capture small cuts in graphs [28].
48
+
49
+ Building on these ideas, some recent works on graph representation learning combine random walks and deep learning to obtain high-dimensional vector space embeddings of nodes, serving as features in downstream learning tasks [3, 14]: Perozzi et al. [6] generate a large number of short random walks to learn latent space representations of nodes by applying a word embedding technique that considers node sequences as word sequences in a sentence. This corresponds to an implicit factorisation of a matrix whose entries capture the logarithm of the expected probabilities to walk between nodes in a given number of steps [29]. Following a similar walk-based approach, Grover and Leskovec [7] generate node sequences with a biased random walker whose exploration behaviour can be tuned by search bias parameters $p$ and $q$ . The resulting walk sequences are used as input for the word embedding algorithm word2vec [30], which embeds objects in a latent vector space with configurable dimensionality. Tang et al. [8] construct vector space embeddings of nodes that simultaneously preserve first- and second-order proximities between nodes. Similar to Adamic and Adar [22], second-order node proximities are defined based on common neighbours. Extending the random walk approach in [6], Perozzi et al. [31] learn embeddings from so-called walklets, random walks that skip some nodes, resulting in embeddings that capture structural features at multiple scales.
50
+
51
+ The abovementioned graph embedding methods compute a representation of nodes in a Euclidean space whose dimensionality is low compared to the number of nodes in the network. A suitably defined metric for similarity or distance of nodes enables recovering the link topology with high fidelity [32], forming the basis for similarity-based link prediction. In contrast, [10] argued for a new perspective that uses supervised classifiers based on (i) multi-dimensional features of node pairs, and (ii) an undersampling of negative instances to address inherent class imbalances in link prediction. Recent applications of graph embedding to link prediction have taken a similar supervised approach, for example using vector-valued binary operators to construct features for node pairs from node vectors $\left\lbrack {6,7,{21}}\right\rbrack$ . Despite good performance, recent works have cast a more critical light on such applications of low-dimensional graph embeddings. Questioning the distinction between deep learning-based embeddings and graph factorisation techniques, Qiu et al. [4] show that popular embedding techniques can be understood as (approximate) factorisations of matrices that capture graph topology. Thus, low-dimensional embeddings can be viewed as a (lossy) compression of graphs, while link prediction or graph reconstruction can be viewed as the decompression step. Fitting this view, a recent study of the topological characteristics of networks' low-dimensional Euclidean representations has highlighted fundamental limitations of embeddings to capture complex structures found in real networks [16].
52
+
53
+ Techniques like node2vec, LINE, or DeepWalk have been reported to perform well for link prediction despite those limitations. However, recent surveys concur that finetuning their hyperparameters to the specific data set is required $\left\lbrack {2,{18},{33}}\right\rbrack$ , which can be problematic in large data sets and increases the risk of overfitting. When used for link prediction, graph embedding methods are typically combined with dimensionality reduction and supervised classification algorithms, possibly using non-linear kernels. Recent comparative studies found that the performance of Euclidean graph embeddings for link prediction is connected to their ability to represent communities in the graph as clusters in the feature space [2], which, due to the non-linear nature of graph data [34], strongly depends on their topology. Using symmetric operators or distance measures in metric spaces limits their ability to predict directed links because the ground truth for (u, v) can differ from (v, u) [15].
54
+
55
+ These issues raise the general question whether we should address link prediction based on low-dimensional Euclidean embeddings. Recent works addressed some of those open questions, for example with hyperbolic or non-linear embeddings [17, 34], extensions of Euclidean embeddings for directed link prediction [15], or embeddings that explicitly account for community structures $\left\lbrack {{18},{35},{36}}\right\rbrack$ . However, existing works still use hyperparameters, require separate dimensionality reduction or model selection techniques to identify the optimal number of dimensions, fail to capture rich hierarchically nested community structures present in real-world networks [37], or do not integrate community detection with the actual representation learning. Addressing all issues at once, we take a novel approach that treats graph representation learning as a compression problem: We use the map equation [19], an analytical information-theoretic approach to compress flows of random walks in directed or undirected, possibly weighted networks based on their modular structure. The resulting hierarchical coding tree with node assignments can be viewed as an embedding in a discrete, non-metric latent space of (possibly hierarchical) community labels with automatically optimised dimensionality using a minimum description length approach. As an analytical approach, our method neither introduces hyperparameters nor needs to simulate random walks. Using a non-metric latent space naturally yields asymmetric node similarities suitable to predict directed links.
56
+
57
+ ### 2.2 Background: the map equation
58
+
59
+ The map equation is an information-theoretic objective function for community detection that, conceptually, models network flows with random walks [19]. To detect communities, the map equation compresses the random walks' per-step description length by searching for sets of nodes with long flow persistence: network areas where a random walker tends to stay for a longer time.
60
+
61
+ Consider a communication game where the sender observes a random walker on a network, and uses binary codewords to update the receiver about the random walker's location. In the simplest case, all nodes belong to the same module and we use a Huffman code to assign unique codewords to the nodes based on their stationary visit rates. With a one-module partition, the sender communicates one codeword per random walk step to the receiver. The theoretical lower limit for the per-step description length, we call it codelength, is the entropy of the nodes' visit rates [38],
62
+
63
+ $$
64
+ L\left( {\mathrm{\;M}}_{1}\right) = \mathcal{H}\left( P\right) = - \mathop{\sum }\limits_{{u \in V}}{p}_{u}{\log }_{2}{p}_{u} \tag{1}
65
+ $$
66
+
67
+ Here ${\mathrm{M}}_{1}$ is the one-module partition, $\mathcal{H}$ is the Shannon entropy, $P$ is the set of the nodes’ visit rates, and ${p}_{u}$ is node $u$ ’s visit rate.
68
+
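+ As a concrete illustration of equation 1: for an undirected, unweighted network, the stationary visit rates are $p_u = \deg(u)/2|E|$, so the one-module codelength is the entropy of the degree-normalised rates. The sketch below assumes this special case and is not the Infomap implementation.

```python
import math

def one_module_codelength(edges):
    """Codelength L(M1) = H(P) for an undirected, unweighted network,
    where the visit rates are p_u = deg(u) / (2 * |E|)."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    total = 2 * len(edges)
    return -sum((d / total) * math.log2(d / total) for d in degree.values())

# Toy example: a triangle plus a pendant node.
print(one_module_codelength([(1, 2), (2, 3), (1, 3), (3, 4)]))
```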
69
+ In networks with modular structure, we can compress the random walks' description by grouping nodes into more than one module such that a random walker tends to remain within modules and module switches become rare. This lets us re-use codewords across modules and design a codebook per module based on the nodes' module-normalised visit rates. However, sender and receiver need
70
+
71
+ ![01963ee2-b813-70d9-996a-a15eb62a6ee3_4_341_207_1116_302_0.jpg](images/01963ee2-b813-70d9-996a-a15eb62a6ee3_4_341_207_1116_302_0.jpg)
72
+
73
+ Figure 2: Coding principles behind the map equation. Left: An example network with nine nodes, ten links, and two communities, A and B, as indicated by colours. Each random-walker step is encoded by one codeword for intra-module transitions or three codewords for inter-module transitions. Codewords are shown next to nodes in colours, their length in the information-theoretic limit in black. Module entry and exit codewords are shown to the left and right of the coloured arrows, respectively. The black trace shows a possible section of a random walk with its encoding at the bottom. Right: Coding tree corresponding to the network's community structure. Links are annotated with transition rates because we calculate similarities in the information-theoretic limit. Each path in the coding tree corresponds to a network link, which may or may not exist. The coder remembers the random walker's module but not the most recently visited node. To describe the intra-module transition from node 5 to 3, we use $- {\log }_{2}\left( {3/{12}}\right) = 2$ bits. The inter-module transition from node 5 to 7 takes three steps in the coding tree and requires $- {\log }_{2}\left( {1/{12} \cdot 1/2 \cdot 2/{10}}\right) \approx {6.9}$ bits.
74
+
75
+ a way to encode module switches. The map equation uses a designated module exit codeword per module and an index-level codebook with module entry codewords. In a two-level partition, the sender communicates one codeword for intra-module random-walker steps to the receiver and three codewords for inter-module steps (figure 2). The lower limit for the codelength is given by the sum of entropies associated with module and index codebooks, weighted by their usage rates. Given a partition of the network’s nodes into modules, $M$ , the map equation [19] formalises this relationship,
76
+
77
+ $$
78
+ L\left( \mathrm{\;M}\right) = q\mathcal{H}\left( Q\right) + \mathop{\sum }\limits_{{\mathrm{m} \in \mathrm{M}}}{p}_{\mathrm{m}}\mathcal{H}\left( {P}_{\mathrm{m}}\right) . \tag{2}
79
+ $$
80
+
81
+ Here $q = \sum_{\mathrm{m} \in \mathrm{M}} q_{\mathrm{m}}$ is the index-level codebook usage rate, $q_{\mathrm{m}}$ is the entry rate for module $\mathrm{m}$, and $Q = \{ q_{\mathrm{m}} \mid \mathrm{m} \in \mathrm{M} \}$ is the set of module entry rates; $\mathrm{m}_{\text{exit}}$ is the exit rate for module $\mathrm{m}$, $p_{\mathrm{m}} = \mathrm{m}_{\text{exit}} + \sum_{u \in \mathrm{m}} p_u$ is the codebook usage rate for module $\mathrm{m}$, and $P_{\mathrm{m}} = \{ \mathrm{m}_{\text{exit}} \} \cup \{ p_u \mid u \in \mathrm{m} \}$ is the set of node visit rates in $\mathrm{m}$, including $\mathrm{m}$'s module exit rate.
82
+
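+ A minimal sketch of equation 2 for the undirected, unweighted case, where visit rates are proportional to degree and a module's entry and exit rates both equal its number of inter-module link endpoints over $2|E|$; this is a simplified illustration, not the Infomap implementation.

```python
import math

def plogp(p):
    return -p * math.log2(p) if p > 0 else 0.0

def two_level_codelength(edges, module_of):
    """Map equation L(M) for an undirected, unweighted network and a
    two-level partition given by module_of: node -> module id."""
    total = 2 * len(edges)
    degree, cut = {}, {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
        if module_of[u] != module_of[v]:
            cut[module_of[u]] = cut.get(module_of[u], 0) + 1
            cut[module_of[v]] = cut.get(module_of[v], 0) + 1

    modules = set(module_of.values())
    q_m = {m: cut.get(m, 0) / total for m in modules}  # entry rate = exit rate
    q = sum(q_m.values())

    # Index-level term: q * H(Q).
    index_term = sum(plogp(q_m[m] / q) for m in modules) * q if q > 0 else 0.0

    # Module-level terms: p_m * H(P_m), where P_m includes the exit rate.
    module_term = 0.0
    for m in modules:
        members = [u for u in degree if module_of[u] == m]
        rates = [degree[u] / total for u in members] + [q_m[m]]
        p_m = sum(rates)
        module_term += p_m * sum(plogp(r / p_m) for r in rates)
    return index_term + module_term

# Two triangles joined by one link: nodes 1-3 in module "A", 4-6 in "B".
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
print(two_level_codelength(edges, {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}))
```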
83
+ ## 3 MapSim: node similarities from modular flow compression
84
+
85
+ Compression-based similarity measures consider pairs of objects more similar if they jointly compress better. Extending this idea to networks, we exploit the coding of network flows based on the map equation, and use it to calculate information-theoretic pairwise similarities between nodes: MapSim. We interpret a network's community structure as an implicit embedding and, roughly speaking, consider nodes in the same community as more similar than nodes in different communities.
86
+
87
+ To calculate node similarities, we begin with a network partition and its corresponding modular coding scheme, which can be visualised as a tree, annotated with the transition rates defined by the link patterns in the network (figure 2). While the network's topology constrains random walks to transitions along existing links, the coding scheme is more flexible and can describe transitions between any pair of nodes. To describe the transition from node $u$ to $v$ , we find the corresponding path in the partition tree and multiply the transition rates along that path, that is, we use the coarse-grained description of the network's community structure, not the network's actual link pattern; it can describe any transition regardless of whether the link (u, v) exists in the network or not. The description length in bits for a path with transition rate $r$ is $- {\log }_{2}\left( r\right)$ . For example, consider the scenario in figure 2 where we calculate similarity scores for the two directed links (5, 3) and (5, 7), neither of which exists in the network. Nodes 5 and 3 are in module $A$ , and the rate at which a random walker in $A$ visits node 3 is $3/{12}$ , requiring $- {\log }_{2}\left( {3/{12}}\right) = 2$ bits to describe that transition. Node 7 is in module $B$ , and a random walker in $A$ exits $A$ at rate $1/{12}$ , enters $B$ at rate $1/2$ , and then visits node 7 at rate $2/{10}$ , that is, at rate $1/{120}$ , requiring $- {\log }_{2}\left( {1/{120}}\right) \approx {6.9}$ bits.
88
+
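+ The two transitions above can be checked numerically; in the sketch below the rates are transcribed from figure 2 rather than computed from the network.

```python
import math

def bits(*rates):
    """Description length of a coding-tree path with the given transition rates."""
    p = 1.0
    for r in rates:
        p *= r
    return -math.log2(p)

# Intra-module transition 5 -> 3: one codeword, node 3's rate within module A.
print(bits(3 / 12))                 # 2.0 bits

# Inter-module transition 5 -> 7: exit A, enter B, then node 7's rate within B.
print(bits(1 / 12, 1 / 2, 2 / 10))  # ~6.9 bits
```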
89
+ ![01963ee2-b813-70d9-996a-a15eb62a6ee3_5_606_206_582_463_0.jpg](images/01963ee2-b813-70d9-996a-a15eb62a6ee3_5_606_206_582_463_0.jpg)
90
+
91
+ Figure 3: Illustration of map equation similarity between nodes $u$ and $v$ with addresses $\operatorname{addr}\left( {\mathrm{M}, u}\right) = \left\lbrack {{p}_{1},\ldots ,{p}_{i},{u}_{j},{u}_{k}}\right\rbrack$ and $\operatorname{addr}\left( {\mathrm{M}, v}\right) = \left\lbrack {{p}_{1},\ldots ,{p}_{i},{v}_{j},{v}_{k},{v}_{l}}\right\rbrack$ , respectively, where $\mathrm{M}$ is the complete network partition. The longest common prefix shared by the addresses for $u$ and $v$ is $p = \left\lbrack {{p}_{1},\ldots ,{p}_{i}}\right\rbrack$ , and ${\mathrm{M}}_{\langle p\rangle }$ is the sub-module at address $p$ within $\mathrm{M}$ , that is the smallest module that contains $u$ and $v$ .
92
+
93
+ Paths to derive similarities emanate from modules, not from nodes, because the model must generalise to unobserved data. If compression was our sole purpose, we would use node-specific codebooks containing codewords for neighbouring nodes, no longer detect communities, and be able to describe merely observed links. Instead, the map equation's coding scheme is designed to capitalise on modular network structures: The modular code structure provides a model that generalises to unobserved data, coarse-grains the path descriptions, and prevents overfitting.
94
+
95
+ For the general case, where $\mathrm{M}$ can be a hierarchical network partition, we number the sub-modules within each module $\mathrm{m}$ from 1 to ${n}_{\mathrm{m}}$ - we refer to these numbers as addresses - such that an ordered sequence of addresses uniquely identifies a path starting at the root of the partition tree. We let $\operatorname{addr}: \mathrm{M} \times N \rightarrow \operatorname{List}(N)$ be a function that takes a network partition and a node as input, and returns the node's address in the partition. To calculate the similarity of node $v$ to $u$, we identify the longest common prefix $p$ of the nodes' addresses, $\operatorname{addr}(\mathrm{M}, u)$ and $\operatorname{addr}(\mathrm{M}, v)$, and select the partition tree's sub-tree ${\mathrm{M}}_{\langle p\rangle }$ that corresponds to the prefix $p$: ${\mathrm{M}}_{\langle p\rangle }$ is the smallest sub-tree that contains $u$ and $v$. We obtain the addresses for $u$ and $v$ within sub-tree ${\mathrm{M}}_{\langle p\rangle }$ by removing the prefix $p$ from their addresses. That is, $\operatorname{addr}(\mathrm{M}, u) = p +\!+ \operatorname{addr}({\mathrm{M}}_{\langle p\rangle }, u)$ and $\operatorname{addr}(\mathrm{M}, v) = p +\!+ \operatorname{addr}({\mathrm{M}}_{\langle p\rangle }, v)$, where $+\!+$ is list concatenation. The rate at which a random walker transitions from $u$ to $v$ is the product of (i) the rate at which the random walker moves along the path $\operatorname{addr}({\mathrm{M}}_{\langle p\rangle }, u)$ in reverse direction, $\operatorname{rev}({\mathrm{M}}_{\langle p\rangle }, \operatorname{addr}({\mathrm{M}}_{\langle p\rangle }, u))$, that is from $u$ to the root of ${\mathrm{M}}_{\langle p\rangle }$, and (ii) the rate at which the random walker moves along the path $\operatorname{addr}({\mathrm{M}}_{\langle p\rangle }, v)$ in forward direction, $\operatorname{forw}({\mathrm{M}}_{\langle p\rangle }, \operatorname{addr}({\mathrm{M}}_{\langle p\rangle }, v))$, that is from the root of ${\mathrm{M}}_{\langle p\rangle }$ to $v$, where
96
+
97
+ $$
98
+ \operatorname{rev}\left( {\mathrm{M}, a}\right) = \left\{ \begin{array}{ll} 1 & \text{ if }a = \left\lbrack x\right\rbrack \\ {\mathrm{M}}_{\langle \left\lbrack x\right\rbrack \rangle ,\operatorname{exit}} \cdot \operatorname{rev}\left( {{\mathrm{M}}_{\langle \left\lbrack x\right\rbrack \rangle },{a}^{\prime }}\right) & \text{ if }a = \left\lbrack x\right\rbrack + + {a}^{\prime } \end{array}\right. \tag{3}
99
+ $$
100
+
101
+
102
+
103
+ $$
104
+ \text{ forw }\left( {\mathrm{M}, a}\right) = \left\{ \begin{array}{ll} {p}_{\langle \lbrack x\rbrack \rangle }/{p}_{\mathrm{M}} & \text{ if }a = \left\lbrack x\right\rbrack \\ {\mathrm{M}}_{\langle \lbrack x\rbrack \rangle ,\text{ enter }} \cdot \operatorname{forw}\left( {{\mathrm{M}}_{\langle \lbrack x\rbrack \rangle },{a}^{\prime }}\right) & \text{ if }a = \left\lbrack x\right\rbrack + + {a}^{\prime } \end{array}\right. \tag{4}
105
+ $$
106
+
107
+ and ${a}^{\prime }$ denotes non-empty sequences. Here ${p}_{\mathrm{M}}$ is the codebook use rate for module $\mathrm{M}$ and ${p}_{\langle \left\lbrack x\right\rbrack \rangle }$ is the visit rate for the node identified by address $x$ within the given module. The final addresses in equation 3 and equation 4 are treated differently, reflecting that the map equation forgets the most recently visited node.
108
+
109
+ We illustrate these ideas in a generic example (figure 3). In short, we define map equation similarity,
110
+
111
+ $$
112
+ \operatorname{MapSim}\left( {M, u, v}\right) = \operatorname{rev}\left( {{\mathrm{M}}_{\langle p\rangle },\operatorname{addr}\left( {{\mathrm{M}}_{\langle p\rangle }, u}\right) }\right) \cdot \operatorname{forw}\left( {{\mathrm{M}}_{\langle p\rangle },\operatorname{addr}\left( {{\mathrm{M}}_{\langle p\rangle }, v}\right) }\right) , \tag{5}
113
+ $$
114
+
115
+ where $p$ is the longest common prefix shared by the addresses of $u$ and $v$ in the partition tree defined by $\mathrm{M}$ . To express map equation similarity in terms of description length, we take the $- {\log }_{2}$ of MapSim and regard pairs of nodes that yield a shorter description length as more similar.
116
+
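+ A minimal sketch of equations 3-5 operating on an explicit coding tree follows. The nested-dict tree layout and field names ("p", "enter", "exit", "children") are assumptions made for illustration; the rates themselves would come from the inferred partition.

```python
import math

# A module is a dict with:
#   "p": its codebook usage rate,
#   "children": {address: child}, where a child is either a sub-module
#               (with additional "enter"/"exit" rates) or a leaf node
#               {"p": visit_rate}.

def rev(module, address):
    head, *tail = address
    if not tail:                       # final address element (eq. 3, base case)
        return 1.0
    child = module["children"][head]
    return child["exit"] * rev(child, tail)

def forw(module, address):
    head, *tail = address
    child = module["children"][head]
    if not tail:                       # final element: visit rate over p_M (eq. 4)
        return child["p"] / module["p"]
    return child["enter"] * forw(child, tail)

def mapsim_bits(tree, addr_u, addr_v):
    """-log2 MapSim(M, u, v), with addresses given in the full partition tree."""
    i = 0                              # longest common prefix of the two addresses
    while i < min(len(addr_u), len(addr_v)) and addr_u[i] == addr_v[i]:
        i += 1
    sub = tree
    for x in addr_u[:i]:               # descend to the smallest common module
        sub = sub["children"][x]
    rate = rev(sub, addr_u[i:]) * forw(sub, addr_v[i:])
    return -math.log2(rate)
```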
117
+ MapSim is asymmetric since module entry and exit rates are, in general, different and $u$ and $v$ can have different visit rates. MapSim is zero if one node is in a disconnected component; the exit rate for regions without out-links is zero, so the corresponding description length is infinitely long. This issue can be addressed with the regularised map equation [39], which addresses networks with incomplete observations using weak links between all pairs of nodes to connect all modules with modelled link strengths that depend on the connection patterns of each node.
118
+
119
+ We calculate node similarities in three steps: (i) inferring a network's community structure with Infomap [40], a greedy, search-based optimisation algorithm for the map equation, (ii) representing the corresponding coding scheme in a suitable data structure, and (iii) using MapSim to compute similarities based on the coding scheme. The overall approach is illustrated in figs. 1-3 and algorithm 1.
120
+
121
+ Algorithm 1: Pseudo-code of the function MapSim, calculating the similarity score for a node pair (u, v).
+
+ ---
+
+ Input: graph $G$ and pair of nodes (u, v)
+ Output: similarity score of (u, v)
+
+ // Use Infomap to construct the coding tree for compression
+ modules = infomap.minimiseMapEquation($G$)
+ tree = buildPartitionTree($G$, modules)
+ p = longestCommonPrefix(tree, $u$, $v$)
+ tree$_{\langle p\rangle}$ = smallestSubtree(tree, p)
+
+ // Calculate the code length of a random-walk step from $u$ to $v$
+ addrU = addr(tree$_{\langle p\rangle}$, $u$)
+ addrV = addr(tree$_{\langle p\rangle}$, $v$)
+ revRate = rev(tree$_{\langle p\rangle}$, addrU)
+ fwdRate = forw(tree$_{\langle p\rangle}$, addrV)
+ return $-\log_2$(revRate $\cdot$ fwdRate)
+
+ ---
152
+
153
+ ## 4 Experimental Validation
154
+
155
+ We evaluate the performance of MapSim in unsupervised, similarity-based link prediction for 47 real-world networks, 35 directed (table 1) and 12 undirected (table 3), retrieved from Netzschleuder [41] and Konect [42]. Details of the directed and undirected networks are shown in tables 2 and 4, respectively. Our analysis is based on a Python-implementation available on GitHub ${}^{1}$ , building on Infomap, a fast and greedy search algorithm for minimising the map equation with an open source implementation in C++ [40, 43]. As baseline, we use four random walk-based embedding methods: DeepWalk [6], node2vec [7], LINE [8], and NERD [15], using the respective author's implementation. We also include results for MapSim based on the one-module partition for comparison. Adopting the argument by [7], we exclude graph factorisation methods and simple local similarity scores because they have already been shown to be inferior to node2vec. We include NERD because it is a recent random walk-based embedding method proposed for directed link prediction with higher reported performance than other walk-based embeddings [15].
156
+
157
+ ### 4.1 Unsupervised Link Prediction
158
+
159
+ Different from works that use graph embeddings for supervised link prediction, we address unsupervised link prediction. Like Goyal and Ferrara [2] and Khosla et al. [15], we take a similarity-based approach that does not require training a classifier using a set of negative and positive samples. Different from, for example, Grover and Leskovec [7], we compute similarity scores based on node embeddings, rather than applying a supervised classifier to multi-dimensional features computed for node pairs. To simplify the comparison with the recent graph embedding NERD specifically designed for directed networks, we adopt the approach by Khosla et al. [15] and calculate node similarities as the sigmoid over the feature vectors' dot product.
160
+
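+ For the embedding baselines, the per-pair score used here then reduces to a few lines; a minimal sketch:

```python
import numpy as np

def embedding_score(emb, u, v):
    """Similarity score for a candidate link (u, v) from node embeddings:
    the sigmoid of the dot product of the two node vectors."""
    x = float(np.dot(emb[u], emb[v]))
    return 1.0 / (1.0 + np.exp(-x))
```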
161
+ ---
162
+
163
+ ${}^{1}$ Link not included for double-blind review.
164
+
165
+ ---
166
+
167
+ Considering how different embedding techniques generalise across data sets, we purposefully refrained from hyperparameter tuning. We chose a single set of hyperparameters for each method, informed by the default parameters given by the respective authors and recent surveys' discussion regarding which hyperparameter values generally provide good link prediction performance. For DeepWalk and node2vec, we sample $r = {80}$ random walks of length $l = {40}$ per node, and use a window size of $w = {10}$ . For both methods, the underlying word embedding is applied using the default model parameters fixed by the authors, ${skipgram} = 1, k = {10}$ and mincount $= 0$ . For node2vec we set the return parameter to $p = 1$ . Since for $q = p = 1$ node2vec is identical to DeepWalk, we use $q = 4$ , which was found to provide good performance for link prediction [2]. We run LINE with first-order $\left( {\mathrm{{LINE}}}_{1}\right)$ , second-order $\left( {\mathrm{{LINE}}}_{2}\right)$ , and combined first-and-second-order proximity $\left( {\mathrm{{LINE}}}_{1 + 2}\right)$ , use 1,000 samples per node, and $s = 5$ negative samples. For NERD, we use 800 samples and $\kappa = 3$ negative samples per node. We set the number of neighbourhood nodes to $n = 1$ , as suggested by the authors for link prediction. We use $d = {128}$ dimensions for all embeddings. Since MapSim is a non-parametric method, it does not require setting any hyperparameters. However, to avoid local optima when heuristically minimising the map equation, we run Infomap 100 times and select the partition with the shortest description length.
168
+
169
+ We use 5-fold cross-validation to split links into train and test sets, treating weighted links as indivisible. We calculate the node embedding (for MapSim the coding tree) in the training network, derive predictions based on node similarities, and evaluate them based on the links in the validation set. For each fold, we restrict the resulting training network to its largest weakly connected component in directed networks, and largest connected component in undirected networks, respectively. For a validation set with $k$ positive links, we sample $k$ negative links uniformly at random, and calculate scores for all ${2k}$ links. In undirected networks, for each positive link (u, v), we also consider (v, u) as positive, and, therefore, sample two negative links per positive link. Varying the discrimination threshold, we obtain a receiver operating characteristic (ROC) per fold, and calculate the area under the curve (AUC). Detailed results, including average and worst-case performance, are shown in tables 2 and 4; we also report precision-recall performance (table 5). We include MapSim based on the one-module partition in the results and note that it performs better than using a modular partition in some cases: this suggests that the network does not have a strong community structure, which could be addressed with the regularised map equation [39]. When mentioning MapSim in the following, we refer to using modular partitions.
170
+
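+ Per fold, the evaluation reduces to scoring the held-out links against an equal number of sampled non-links and computing the ROC AUC. A minimal sketch using scikit-learn follows; the negative sampling is simplified to uniform resampling of node pairs, and `score(u, v)` is a hypothetical stand-in for MapSim or an embedding score.

```python
import random
from sklearn.metrics import roc_auc_score

def evaluate_fold(nodes, test_links, train_links, score):
    """AUC for one fold: positives are held-out links, negatives are an equal
    number of node pairs sampled uniformly among non-links.

    nodes: list of node ids; test_links / train_links: lists of (u, v) pairs.
    """
    positives = list(test_links)
    existing = set(train_links) | set(test_links)
    negatives = []
    while len(negatives) < len(positives):
        u, v = random.sample(nodes, 2)
        if (u, v) not in existing:
            negatives.append((u, v))

    pairs = positives + negatives
    labels = [1] * len(positives) + [0] * len(negatives)
    scores = [score(u, v) for u, v in pairs]
    return roc_auc_score(labels, scores)
```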
171
+ Our results show that, on average, MapSim outperforms all baseline methods across the 47 data sets with an increase in average AUC by approximately ${7.7}\%$ and ${7.5}\%$ for directed and undirected networks, respectively. Using a one-sided two-sample $t$ -test, we find that MapSim’s average performance across all networks is significantly higher than that of the best graph embedding method, ${\mathrm{{LINE}}}_{1 + 2}$ , both in directed and undirected networks ( $p \approx {0.008}$ and $p \approx {0.039}$ , respectively). MapSim provides the best performance in 14 of the 47 networks, with a standard deviation of the AUC score less than half of that of the best embedding-based method $\left( {\mathrm{{LINE}}}_{1 + 2}\right)$ . For undirected networks, MapSim achieves the best performance for five of the 12 networks, while none of the embedding methods beats MapSim's performance in more than two networks. We find the largest performance gain in the directed network linux, where MapSim yields an increase of AUC of approximately 22.6% compared to the best embedding (NERD).
172
+
173
+ MapSim’s worst-case performance across all networks is approximately 44% and 33% above the worst-case performance of the best-performing embedding for directed and undirected networks, respectively. MapSim’s performance advantage can be as high as ${84}\%$ , for example ${AUC} = {0.988}$ of MapSim in foursquare-friendships-new vs. ${AUC} = {0.537}$ for node2vec. While node2vec performs best in the largest directed network, MapSim performs best in the largest undirected network and in some of the small (directed and undirected) networks, suggesting that MapSim works well both for small and large networks.
174
+
175
+ We attribute those encouraging results to multiple features of our method: Different from graph embedding techniques that require downstream dimensionality reduction, MapSim's compression approach implicitly includes model selection and avoids overfitting. Moreover, the representation of nodes in the coding tree is integrated with the optimisation of hierarchical community structures in the network. Due to its non-parametric approach and the use of the analytical map equation, MapSim performs well in the absence of tuning to the specific data set.
176
+
177
+ ![01963ee2-b813-70d9-996a-a15eb62a6ee3_8_597_202_602_331_0.jpg](images/01963ee2-b813-70d9-996a-a15eb62a6ee3_8_597_202_602_331_0.jpg)
178
+
179
+ Figure 4: Runtime behaviour for inferring the community structure with Infomap and constructing the coding tree for MapSim in synthetic $k$ -regular networks of different sizes.
180
+
181
+ ### 4.2 Scalability Analysis
182
+
183
+ We analyse MapSim's scalability in synthetically generated networks with modular structure and tunable size and link density. We generate $k$ -regular random graphs with $N$ nodes and (mean) degree $k$ . To avoid trivial configurations where a modular structure is absent, we create a network by first generating two $k$ -regular random graphs with $\frac{N}{2}$ nodes each and "crossing" two links, one from each of the two graphs, to obtain a single connected network with strong community structure. We then apply Infomap to (i) minimise the map equation and extract the network's modular structure, and (ii) construct the coding tree for calculating node similarities. We repeat this 10 times for random networks with different numbers of nodes $N$ and degrees $k$ . The average run times are reported in figure 4, which shows that, for sparse networks, the runtime of MapSim is linear in the size of the network. Edler et al. [43] report that the theoretical asymptotic bound of computational complexity for the optimisation of the map equation is in $\mathcal{O}(N \log N)$ , which is the same as for vector space embedding techniques like node2vec and DeepWalk ${}^{2}$ . Thus, MapSim does not entail higher computational complexity compared to popular graph embeddings. This makes it an interesting choice for practitioners looking for a simple and scalable method that works well in small, large, directed, and undirected networks.
184
+
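+ The synthetic test networks can be generated along these lines with networkx; this is a sketch of the construction described above, where the two links to "cross" are chosen arbitrarily.

```python
import networkx as nx

def two_community_regular_graph(n, k, seed=0):
    """Join two k-regular random graphs on n/2 nodes each by removing one
    edge from each part and cross-connecting their endpoints."""
    g1 = nx.random_regular_graph(k, n // 2, seed=seed)
    g2 = nx.random_regular_graph(k, n // 2, seed=seed + 1)
    g2 = nx.relabel_nodes(g2, {u: u + n // 2 for u in g2.nodes})

    g = nx.union(g1, g2)
    (a, b) = next(iter(g1.edges))
    (c, d) = next(iter(g2.edges))
    g.remove_edge(a, b)
    g.remove_edge(c, d)
    g.add_edge(a, c)  # cross the two removed links; all degrees stay equal to k
    g.add_edge(b, d)
    return g

g = two_community_regular_graph(1000, 4)
print(g.number_of_nodes(), g.number_of_edges(), nx.is_connected(g))
```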
185
+ ## 5 Conclusion and Outlook
186
+
187
+ We propose MapSim, a novel information-theoretic approach to compute node similarities based on a modular compression of network flows. Different from vector space embeddings, MapSim represents nodes in a discrete, non-metric space of communities that yields asymmetric similarities suitable to predict links in directed and undirected networks. The resulting similarities can be explained based on a network's modular structure. Using description length minimisation, MapSim naturally accounts for Occam's razor, which avoids overfitting and yields a parsimonious coding tree. Addressing unsupervised link prediction, we compare MapSim to popular embedding-based algorithms on 47 data sets that cover networks from a few hundred to hundreds of thousands of nodes and millions of edges. Our analysis shows that the average performance of MapSim is more than $7\%$ higher than its closest competitor, outperforming all competing methods in 14 of the 47 networks. Taking a new perspective on graph representation learning, our work demonstrates the potential of compression-based methods, with promising applications in other graph learning tasks. Moreover, recent generalisations of the map equation to temporal and higher-order networks [43] suggest that our method can be applied to graphs with non-dyadic or time-stamped relationships.
188
+
189
+ ## References
190
+
191
+ [1] Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. Kernel methods in machine learning. The annals of statistics, 36(3):1171-1220, 2008. 1
192
+
193
+ [2] Palash Goyal and Emilio Ferrara. Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems, 151:78-94, 2018. ISSN 0950-7051. 1, 2, 3, 4, 7, 8, 9
194
+
195
+ ---
196
+
197
+ ${}^{2}\left\lbrack {2,3}\right\rbrack$ report linear complexity, which we could not confirm in the literature.
198
+
199
+ ---
200
+
201
+ [3] Nino Arsov and Georgina Mirceva. Network embedding: An overview. arXiv preprint arXiv:1911.11726,2019.1,2,3,9
202
+
203
+ [4] Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, and Jie Tang. Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. In Proc. of the Eleventh ACM Intl. Conf. on Web Search and Data Mining, WSDM '18, page 459-467, New York, 2018. ACM. ISBN 9781450355810. 1, 2, 4
204
+
205
+ [5] Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, 2016. 1, 2
206
+
207
+ [6] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701-710. ACM, 2014. 1, 3, 4, 7
208
+
209
+ [7] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855-864. ACM, 2016. 1, 3, 4, 7
210
+
211
+ [8] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067-1077, 2015. 1, 3, 7
212
+
213
+ [9] David Liben-Nowell and Jon Kleinberg. The link-prediction problem for social networks. Journal of the American society for information science and technology, 58(7):1019-1031, 2007. 1,3
214
+
215
+ [10] Ryan N. Lichtenwalter, Jake T. Lussier, and Nitesh V. Chawla. New perspectives and methods in link prediction. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, page 243-252, New York, NY, USA, 2010. ACM. ISBN 9781450300551. 1, 3
216
+
217
+ [11] Zachary Stanfield, Mustafa Coşkun, and Mehmet Koyutürk. Drug response prediction as a link prediction problem. Scientific reports, 7(1):1-13, 2017. 1
218
+
219
+ [12] Mohammad Al Hasan and Mohammed J Zaki. A survey of link prediction in social networks. In Social network data analytics, pages 243-275. Springer, 2011. 1
220
+
221
+ [13] Hsinchun Chen, Xin Li, and Zan Huang. Link prediction approach to collaborative filtering. In Proceedings of the 5th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL'05), pages 141-142. IEEE, 2005. 1
222
+
223
+ [14] Mengjia Xu. Understanding graph embedding methods and their applications. SIAM Review, 63(4):825-853,2021. 2,3
224
+
225
+ [15] Megha Khosla, Jurek Leonhardt, Wolfgang Nejdl, and Avishek Anand. Node representation learning for directed graphs. In Machine Learning and Knowledge Discovery in Databases, pages 395-411, Cham, 2020. Springer. ISBN 978-3-030-46150-8. 2, 4, 7
226
+
227
+ [16] C. Seshadhri, Aneesh Sharma, Andrew Stolman, and Ashish Goel. The impossibility of low-rank representations for triangle-rich complex networks. Proceedings of the National Academy of Sciences, 117(11):5631-5637, 2020. ISSN 0027-8424. 2, 4
228
+
229
+ [17] Ali Faqeeh, Saeed Osat, and Filippo Radicchi. Characterizing the analogy between hyperbolic embedding and community structure of complex networks. Phys. Rev. Lett., 121:098301, Aug 2018.2,4
230
+
231
+ [18] Yi-Jiao Zhang, Kai-Cheng Yang, and Filippo Radicchi. Systematic comparison of graph embedding methods in practical tasks. Phys. Rev. E, 104, 2021. ISSN 2470-0053. 2, 4
232
+
233
+ [19] Martin Rosvall and Carl T Bergstrom. Maps of random walks on complex networks reveal community structure. Proceedings of the national academy of sciences, 105(4):1118-1123, 2008. 2,3,4,5
234
+
235
+ [20] Linyuan Lü and Tao Zhou. Link prediction in complex networks: A survey. Physica A: Statistical Mechanics and its Applications, 390(6):1150-1170, 2011. ISSN 0378-4371. 3
236
+
237
+ [21] Víctor Martínez, Fernando Berzal, and Juan-Carlos Cubero. A survey of link prediction in complex networks. ACM Computing Surveys, 49(4), 2016. 3, 4
238
+
239
+ [22] Lada A Adamic and Eytan Adar. Friends and neighbors on the web. Social networks, 25(3): 211-230, 2003. 3
240
+
241
+ [23] Zhenjiang Lin, Michael R Lyu, and Irwin King. Pagesim: a novel link-based measure of web page similarity. In Proceedings of the 15th international conference on World Wide Web, pages 1019-1020, 2006. 3
242
+
243
+ [24] Weiping Liu and Linyuan Lü. Link prediction based on local random walk. EPL (Europhysics Letters), 89(5):58007, mar 2010.
244
+
245
+ [25] Joydeep Chandra, Ingo Scholtes, Niloy Ganguly, and Frank Schweitzer. A tunable mechanism for identifying trusted nodes in large scale distributed networks. In 2012 IEEE 11th International Conference on Trust, Security and Privacy in Computing and Communications, pages 722-729. IEEE, 2012. 3
246
+
247
+ [26] Matthew Brand and Kun Huang. A unifying theorem for spectral embedding and clustering. In International Workshop on Artificial Intelligence and Statistics, pages 41-48. PMLR, 2003. 3
248
+
249
+ [27] Naoki Masuda, Mason A. Porter, and Renaud Lambiotte. Random walks and diffusion on networks. Physics Reports, 716-717:1-58, 2017. ISSN 0370-1573. 3
250
+
251
+ [28] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. Advances in neural information processing systems, 14, 2001. 3
252
+
253
+ [29] Cheng Yang and Zhiyuan Liu. Comprehend deepwalk as matrix factorization. arXiv preprint arXiv:1501.00358, 2015. 3
254
+
255
+ [30] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26, 2013. 3
256
+
257
+ [31] Bryan Perozzi, Vivek Kulkarni, Haochen Chen, and Steven Skiena. Don't walk, skip! online learning of multi-scale network embeddings. In Proc. of the 2017 IEEE/ACM Intl. Conf. on Advances in Social Networks Analysis and Mining 2017, ASONAM '17, page 258-265, New York, NY, USA, 2017. ACM. ISBN 9781450349932. 3
258
+
259
+ [32] Sudhanshu Chanpuriya, Cameron Musco, Konstantinos Sotiropoulos, and Charalampos E. Tsourakakis. Node embeddings and exact low-rank representations of complex networks, 2020. 3
260
+
261
+ [33] Thanh Tam Nguyen and Chi Thang Duong. A comparison of network embedding approaches. Technical report, School of Computer and Communication Sciences, EPFL, Technical Report, 2018.4
262
+
263
+ [34] Xin Sun, Zenghui Song, Yongbo Yu, Junyu Dong, Claudia Plant, and Christian Böhm. Network embedding via deep prediction model. arXiv preprint arXiv:2104.13323, 2021. 4
264
+
265
+ [35] Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, and Shiqiang Yang. Community preserving network embedding. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, page 203-209. AAAI Press, 2017. 4
266
+
267
+ [36] Sandro Cavallari, Vincent W. Zheng, Hongyun Cai, Kevin Chen-Chuan Chang, and Erik Cambria. Learning community embedding with community detection and node embedding on graphs. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM '17, page 377-386, New York, NY, USA, 2017. ACM. ISBN 9781450349185. 4
268
+
269
+ [37] Martin Rosvall and Carl T Bergstrom. Multilevel compression of random walks on networks reveals hierarchical organization in large integrated systems. PloSone, 6(4):e18209, 2011. 4
270
+
271
+ [38] Claude Elwood Shannon. A mathematical theory of communication. Bell Labs Tech. J., 27(3): 379-423, 7 1948. 4
272
+
273
+ [39] Jelena Smiljanić, Christopher Blöcker, Daniel Edler, and Martin Rosvall. Mapping flows on weighted and directed networks with incomplete observations. Journal of Complex Networks, 9 (6), 122021. ISSN 2051-1329. 7, 8
274
+
275
+ [40] D. Edler, A. Eriksson, and M. Rosvall. The infomap software package. https://www.mapequation.org, 2020. 7
276
+
277
+ [41] Tiago P. Peixoto. The netzschleuder network catalogue and repository. https://networks.skewed.de/, 2020. 7
278
+
279
+ [42] Jérôme Kunegis. Konect: The koblenz network collection. In Proceedings of the 22nd International Conference on World Wide Web, WWW '13 Companion, page 1343-1350, New York, NY, USA, 2013. ACM. ISBN 9781450320382. 7, 14, 16
280
+
281
+ [43] Daniel Edler, Ludvig Bohlin, and Martin Rosvall. Mapping Higher-Order Network Flows in Memory and Multilayer Networks with Infomap. Algorithms, 10:112, 2017. 7, 9
282
+
283
+ [44] R. Guimerà, L. Danon, A. Díaz-Guilera, F. Giralt, and A. Arenas. Self-similar community structure in a network of human interactions. Phys. Rev. E, 68:065103, Dec 2003. 14
284
+
285
+ [45] Lada A. Adamic and Natalie Glance. The political blogosphere and the 2004 u.s. election: Divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery, LinkKDD '05, New York, 2005. ACM. ISBN 1595932151. 14
286
+
287
+ [46] Ulrich Stelzl, Uwe Worm, Maciej Lalowski, Christian Haenig, Felix H Brembeck, Heike Goehler, Martin Stroedicke, Martina Zenkner, Anke Schoenherr, Susanne Koeppen, et al. A human protein-protein interaction network: a resource for annotating the proteome. Cell, 122 (6):957-968,2005. 14
288
+
289
+ [47] Rob M Ewing, Peter Chu, Fred Elisma, Hongyan Li, Paul Taylor, Shane Climie, Linda McBroom-Cerajewski, Mark D Robinson, Liam O'Connor, Michael Li, et al. Large-scale mapping of human protein-protein interactions by mass spectrometry. Molecular systems biology, 3(1):89, 2007. 14
290
+
291
+ [48] Bureau of Transportation Statistics. T-100 domestic market. https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=310,2017.14
292
+
293
+ [49] Ron Milo, Shalev Itzkovitz, Nadav Kashtan, Reuven Levitt, Shai Shen-Orr, Inbal Ayzenshtat, Michal Sheffer, and Uri Alon. Superfamilies of evolved and designed networks. Science, 303 (5663):1538-1542, 2004. 14
294
+
295
+ [50] The openflights.org website. https://openflights.org/data.html, 2022. 14
296
+
297
+ [51] Paolo Massa, Martino Salvetti, and Danilo Tomasoni. Bowling alone and trust decline in social network sites. In 2009 Eighth IEEE International Conference on Dependable, Autonomic and Secure Computing, pages 658-663, 2009. 14
298
+
299
+ [52] Michael Ley. The dblp computer science bibliography: Evolution, research issues, perspectives. In String Processing and Information Retrieval, pages 1-10, Berlin, Heidelberg, 2002. Springer Berlin Heidelberg. ISBN 978-3-540-45735-0. 14
300
+
301
+ [53] Michael Fire, Rami Puzis, and Yuval Elovici. Link Prediction in Highly Fractional Data Sets, pages 283-300. Springer New York, New York, NY, 2013. ISBN 978-1-4614-5311-6. 14
302
+
303
+ [54] B. Stabler. Transportation network test problems. https://github.com/bstabler/ TransportationNetworks, 2022. 14
304
+
305
+ [55] V. Batagelj, A. Mrvar, and M. Zaversnik. Network analysis of texts. Language Technologies, pages 143-148, 2002. 14
306
+
307
+ [56] Gergely Palla, Illés J Farkas, Péter Pollner, Imre Derényi, and Tamás Vicsek. Directed network modules. New Journal of Physics, 9(6):186-186, jun 2007. 14
308
+
309
+ [57] George R Kiss, Christine Armstrong, Robert Milroy, and James Piper. An associative thesaurus of english and its computer analysis. The computer and literary studies, pages 153-165, 1973. 14
310
+
311
+ [58] Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. Automating the construction of internet portals with machine learning. Information Retrieval, 3(2):127-163, 2000. 14
312
+
313
+ [59] Johannes Gehrke, Paul Ginsparg, and Jon Kleinberg. Overview of the 2003 kdd cup. SIGKDD Explor. Newsl., 5(2):149-151, dec 2003. ISSN 1931-0145. 14
314
+
315
+ [60] Munmun De Choudhury, Hari Sundaram, Ajita John, and Dorée Duncan Seligmann. Social synchrony: Predicting mimicry of user actions in online social media. In 2009 International Conference on Computational Science and Engineering, volume 4, pages 151-158, 2009. 14
316
+
317
+ [61] Bryan Klimt and Yiming Yang. The enron corpus: A new dataset for email classification research. In Machine Learning: ECML 2004, pages 217-226, Berlin, Heidelberg, 2004. Springer Berlin Heidelberg. ISBN 978-3-540-30115-8. 14
318
+
319
+ [62] Furkan Gursoy and Dilek Gunnec. Influence maximization in social networks under deterministic linear threshold model. Knowledge-Based Systems, 161:111-123, 2018. ISSN 0950-7051. 14
320
+
321
+ [63] Oliver Richters and Tiago P Peixoto. Trust transitivity in social networks. PloS one, 6(4): e18384, 2011. 14
322
+
323
+ [64] Bimal Viswanath, Alan Mislove, Meeyoung Cha, and Krishna P. Gummadi. On the evolution of user interaction in facebook. In Proceedings of the 2nd ACM Workshop on Online Social Networks, page 37-42, New York, 2009. ACM. ISBN 9781605584454. 14
324
+
325
+ [65] Vicenç Gómez, Andreas Kaltenbrunner, and Vicente López. Statistical analysis of the social network and discussion threads in slashdot. In Proceedings of the 17th International Conference on World Wide Web, WWW '08, page 645-654, New York, NY, USA, 2008. ACM. ISBN 9781605580852. 14
326
+
327
+ [66] Kevin Gullikson. Python dependency analysis. http://kgullikson88.github.io/blog/ pypi-analysis.html, 2016. 14
328
+
329
+ [67] Matthew Richardson, Rakesh Agrawal, and Pedro Domingos. Trust management for the semantic web. In The Semantic Web - ISWC 2003, pages 351-368, Berlin, Heidelberg, 2003. Springer Berlin Heidelberg. ISBN 978-3-540-39718-2. 14
330
+
331
+ [68] Michael Fire, Lena Tenenboim-Chekina, Rami Puzis, Ofrit Lesser, Lior Rokach, and Yuval Elovici. Computationally efficient link prediction in a variety of social networks. ACM Trans. Intell. Syst. Technol., 5(1), jan 2014. ISSN 2157-6904. 14, 16
332
+
333
+ [69] Manlio De Domenico, Antonio Lima, Paul Mougel, and Mirco Musolesi. The anatomy of a scientific rumor. Scientific reports, 3(1):1-9, 2013. 14
334
+
335
+ [70] Jure Leskovec, Lada A. Adamic, and Bernardo A. Huberman. The dynamics of viral marketing. ACM Trans. Web, 1(1):5-es, may 2007. ISSN 1559-1131. 14
336
+
337
+ [71] Réka Albert, Hawoong Jeong, and Albert-László Barabási. Diameter of the world-wide web. nature, 401(6749):130-131, 1999. 14
338
+
339
+ [72] Munmun De Choudhury. Discovery of information disseminators and receptors on online social media. In Proceedings of the 21st ACM Conference on Hypertext and Hypermedia, page 279-280, New York, 2010. ACM. ISBN 9781450300414. 14
340
+
341
+ [73] Samin Aref, David Friggens, and Shaun Hendy. Analysing scientific collaborations of new zealand institutions using scopus bibliometric data. In Proceedings of the Australasian Computer Science Week Multiconference, ACSW '18, New York, NY, USA, 2018. ACM. ISBN 9781450354363. 16
342
+
343
+ [74] Paolo Crucitti, Vito Latora, and Sergio Porta. Centrality measures in spatial networks of urban streets. Phys. Rev. E, 73:036125, Mar 2006. 16
344
+
345
+ [75] Duncan J Watts and Steven H Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440-442, 1998. 16
346
+
347
+ [76] Michael Fire, Rami Puzis, and Yuval Elovici. Organization mining using online social networks. CoRR, abs/1303.3741, 2013. 16
348
+
349
+ [77] G. Joshi-Tope, M. Gillespie, I. Vastrik, P. D'Eustachio, E. Schmidt, B. de Bono, B. Jassal, G.R. Gopinath, G.R. Wu, L. Matthews, S. Lewis, E. Birney, and L. Stein. Reactome: a knowledgebase of biological pathways. Nucleic Acids Research, 33(suppl_1):D428-D432, 01 2005. ISSN 0305-1048. 16
350
+
351
+ [78] Manlio De Domenico, Andrea Lancichinetti, Alex Arenas, and Martin Rosvall. Identifying modular flows on multilayer networks reveals highly overlapping organization in interconnected systems. Phys. Rev. X, 5:011027, Mar 2015. 16
352
+
353
+ [79] R. Alberich, J. Miro-Julia, and F. Rossello. Marvel universe looks almost like a real social network, 2002. 16
354
+
355
+ [80] Brian Karrer, M. E. J. Newman, and Lenka Zdeborová. Percolation on sparse networks. Phys. Rev. Lett., 113:208702, Nov 2014. 16
356
+
357
+ [81] Dingqi Yang, Daqing Zhang, Longbiao Chen, and Bingqing Qu. Nationtelescope: Monitoring and visualizing large-scale collective behavior in LBSNs. Journal of Network and Computer Applications, 55:170-180, 2015. ISSN 1084-8045. 16
358
+
359
+ ## A Appendix
360
+
361
+ Table 1: Properties of 35 directed networks, where weighted networks are marked with W, temporal link counts before aggregation into a static network are marked with $*$ , and $\rho$ is link reciprocity.
362
+
363
+ <table><tr><td>Data</td><td>Ref</td><td>Nodes</td><td>Edges</td><td>$\rho$</td></tr><tr><td>uni-email</td><td>[44]</td><td>1,133</td><td>10,903</td><td>1.000</td></tr><tr><td>polblogs</td><td>[45]</td><td>1,490</td><td>19,090</td><td>0.243</td></tr><tr><td>interactome-stelzl</td><td>[46]</td><td>1,706</td><td>6,207</td><td>0.972</td></tr><tr><td>interactome-figeys</td><td>[47]</td><td>2,239</td><td>6,452</td><td>0.006</td></tr><tr><td>us-air-traffic</td><td>[48]</td><td>2,278</td><td>*6,390,340</td><td>0.757</td></tr><tr><td>word-adjacency-japanese</td><td>[49]</td><td>2,704</td><td>8,300</td><td>0.073</td></tr><tr><td>openflights ${}^{\mathrm{v}}$</td><td>[50]</td><td>3,214</td><td>66,771</td><td>0.978</td></tr><tr><td>jdk</td><td>[42]</td><td>6,434</td><td>150,985</td><td>0.009</td></tr><tr><td>advogato ${}^{W}$</td><td>[51]</td><td>6,541</td><td>51,127</td><td>0.307</td></tr><tr><td>word-adjacency-spanish</td><td>[49]</td><td>11,586</td><td>45,129</td><td>0.091</td></tr><tr><td>dblp-cite</td><td>[52]</td><td>12,590</td><td>49,759</td><td>0.004</td></tr><tr><td>anybeat</td><td>[53]</td><td>12,645</td><td>67,053</td><td>0.535</td></tr><tr><td>chicago-road</td><td>[54]</td><td>12,982</td><td>39,018</td><td>0.943</td></tr><tr><td>foldoc ${}^{\mathrm{w}}$</td><td>[55]</td><td>13,356</td><td>120,238</td><td>0.479</td></tr><tr><td>google</td><td>[56]</td><td>15,763</td><td>171,206</td><td>0.254</td></tr><tr><td>word-assoc ${}^{\mathrm{w}}$</td><td>[57]</td><td>23,132</td><td>312,342</td><td>0.094</td></tr><tr><td>cora</td><td>[58]</td><td>23,166</td><td>91,500</td><td>0.051</td></tr><tr><td>arxiv-citation-HepTh</td><td>[59]</td><td>27,770</td><td>352,807</td><td>0.003</td></tr><tr><td>digg-reply ${}^{W}$</td><td>[60]</td><td>30,398</td><td>*87,627</td><td>0.002</td></tr><tr><td>linux</td><td>[42]</td><td>30,837</td><td>213,954</td><td>0.002</td></tr><tr><td>arxiv-citation-HepPh</td><td>[59]</td><td>34,546</td><td>421,578</td><td>0.003</td></tr><tr><td>email-enron</td><td>[61]</td><td>36,692</td><td>367,662</td><td>1.000</td></tr><tr><td>inploid</td><td>[62]</td><td>39,749</td><td>57,276</td><td>0.272</td></tr><tr><td>pgp-strong</td><td>[63]</td><td>39,796</td><td>301,498</td><td>0.660</td></tr><tr><td>facebook-wall ${}^{w}$</td><td>[64]</td><td>46,952</td><td>*876,993</td><td>0.588</td></tr><tr><td>slashdot-threads ${}^{\mathrm{w}}$</td><td>[65]</td><td>51,083</td><td>*140,778</td><td>0.210</td></tr><tr><td>python-dependency</td><td>[66]</td><td>58,743</td><td>108,399</td><td>0.004</td></tr><tr><td>lkml-reply ${}^{\mathrm{W}}$</td><td>[42]</td><td>63,399</td><td>*1,096,440</td><td>0.635</td></tr><tr><td>epinions-trust</td><td>[67]</td><td>75,888</td><td>508,837</td><td>0.405</td></tr><tr><td>prosper</td><td>[42]</td><td>89,269</td><td>3,394,979</td><td>< 0.001</td></tr><tr><td>google-plus</td><td>[68]</td><td>211,187</td><td>1,506,896</td><td>0.482</td></tr><tr><td>twitter-higgs-retweet ${}^{\mathrm{w}}$</td><td>[69]</td><td>256,491</td><td>328,132</td><td>0.005</td></tr><tr><td>amazon-copurchases-302</td><td>[70]</td><td>262,111</td><td>1,234,877</td><td>0.543</td></tr><tr><td>notre-dame-web</td><td>[71]</td><td>325,729</td><td>1,497,134</td><td>0.507</td></tr><tr><td>twitter-followers</td><td>[72]</td><td>465,017</td><td>834,797</td><td>0.003</td></tr></table>
364
+
365
+ Table 2: ROC AUC for link prediction in 35 directed networks for DeepWalk (DW), node2vec (n2v), ${\mathrm{LINE}}_{1}$ (${\mathrm{L}}_{1}$), ${\mathrm{LINE}}_{2}$ (${\mathrm{L}}_{2}$), ${\mathrm{LINE}}_{1+2}$ (${\mathrm{L}}_{1+2}$), NERD, MapSim based on the one-level partition (${\mathrm{MS}}_{1}$), and MapSim based on modular partitions (MS). Networks marked with W are weighted. $\dagger$ marks cases with AUC $< 0.5$ where we flipped the predicted link scores to obtain AUC $> 0.5$. Results are rounded; the best result per network is shown in bold and the second-best is underlined.
366
+
367
+ <table><tr><td>Data</td><td>DW</td><td>n2v</td><td>${\mathrm{L}}_{1}$</td><td>${\mathrm{L}}_{2}$</td><td>${\mathrm{L}}_{1 + 2}$</td><td>NERD</td><td>${\mathrm{{MS}}}_{1}$</td><td>MS</td></tr><tr><td>uni-email</td><td>0.911</td><td>${}^{ \dagger }{0.505}$</td><td>0.957</td><td>0.903</td><td>0.932</td><td>0.667</td><td>0.711</td><td>0.852</td></tr><tr><td>polblogs</td><td>0.705</td><td>0.695</td><td>0.804</td><td>0.823</td><td>0.841</td><td>0.652</td><td>$\underline{0.868}$</td><td>0.914</td></tr><tr><td>interactome-stelzl</td><td>0.810</td><td>${}^{ \dagger }{0.505}$</td><td>0.913</td><td>0.758</td><td>$\underline{0.849}$</td><td>0.524</td><td>0.710</td><td>0.755</td></tr><tr><td>interactome-figeys</td><td>0.524</td><td>${}^{ \dagger }{0.828}$</td><td>10.905</td><td>0.529</td><td>${}^{ \dagger }{0.850}$</td><td>0.605</td><td>0.773</td><td>0.839</td></tr><tr><td>us-air-traffic ${}^{\mathrm{W}}$</td><td>0.649</td><td>0.572</td><td>0.563</td><td>0.935</td><td>$\underline{0.933}$</td><td>0.774</td><td>0.858</td><td>0.916</td></tr><tr><td>word-adjacency-japanese</td><td>10.538</td><td>${}^{ \dagger }{0.645}$</td><td>${}^{ \dagger }{0.580}$</td><td>0.748</td><td>0.743</td><td>0.526</td><td>0.811</td><td>$\underline{0.800}$</td></tr><tr><td>openflights ${}^{\mathrm{w}}$</td><td>0.782</td><td>${}^{ \dagger }{0.665}$</td><td>0.918</td><td>0.934</td><td>0.948</td><td>0.708</td><td>0.838</td><td>0.941</td></tr><tr><td>jdk</td><td>0.746</td><td>0.857</td><td>0.820</td><td>0.695</td><td>0.755</td><td>0.725</td><td>0.974</td><td>0.986</td></tr><tr><td>advogato"</td><td>0.738</td><td>0.563</td><td>0.806</td><td>0.865</td><td>0.883</td><td>0.742</td><td>0.812</td><td>$\underline{0.878}$</td></tr><tr><td>word-adjacency-spanish</td><td>10.538</td><td>0.672</td><td>${}^{ \dagger }{0.713}$</td><td>0.824</td><td>0.791</td><td>0.632</td><td>0.811</td><td>0.805</td></tr><tr><td>dblp-cite</td><td>0.840</td><td>${}^{ \dagger }{0.537}$</td><td>${}^{ \dagger }{0.589}$</td><td>0.646</td><td>0.549</td><td>0.877</td><td>0.823</td><td>0.890</td></tr><tr><td>anybeat</td><td>0.647</td><td>0.539</td><td>0.644</td><td>0.841</td><td>0.857</td><td>0.683</td><td>0.834</td><td>$\underline{0.850}$</td></tr><tr><td>chicago-road</td><td>0.998</td><td>0.816</td><td>$\underline{0.981}$</td><td>0.670</td><td>0.835</td><td>${}^{ \dagger }{0.583}$</td><td>†0.608</td><td>0.848</td></tr><tr><td>foldoc"</td><td>0.927</td><td>0.549</td><td>0.951</td><td>0.832</td><td>0.905</td><td>0.571</td><td>0.618</td><td>0.845</td></tr><tr><td>google</td><td>0.844</td><td>0.792</td><td>0.831</td><td>0.868</td><td>0.896</td><td>0.697</td><td>0.867</td><td>0.962</td></tr><tr><td>word-assoc ${}^{W}$</td><td>0.729</td><td>0.830</td><td>0.813</td><td>0.869</td><td>0.916</td><td>0.884</td><td>0.837</td><td>0.849</td></tr><tr><td>cora</td><td>0.939</td><td>0.839</td><td>0.950</td><td>0.761</td><td>0.831</td><td>0.830</td><td>0.839</td><td>0.906</td></tr><tr><td>arxiv-citation-HepTh</td><td>0.878</td><td>0.839</td><td>0.958</td><td>0.857</td><td>0.901</td><td>0.850</td><td>0.842</td><td>0.942</td></tr><tr><td>digg-reply ${}^{W}$</td><td>${}^{ \dagger }{0.546}$</td><td>0.618</td><td>${}^{ \dagger 
}{0.552}$</td><td>0.714</td><td>0.693</td><td>0.841</td><td>0.845</td><td>0.836</td></tr><tr><td>linux</td><td>0.704</td><td>0.726</td><td>0.567</td><td>0.722</td><td>0.734</td><td>0.784</td><td>0.959</td><td>0.961</td></tr><tr><td>arxiv-citation-HepPh</td><td>$\underline{0.959}$</td><td>0.897</td><td>0.975</td><td>0.835</td><td>0.898</td><td>0.860</td><td>0.830</td><td>0.942</td></tr><tr><td>email-enron</td><td>0.823</td><td>${}^{ \dagger }{0.594}$</td><td>0.983</td><td>0.946</td><td>0.963</td><td>0.819</td><td>0.840</td><td>0.931</td></tr><tr><td>inploid</td><td>0.631</td><td>0.766</td><td>0.516</td><td>0.838</td><td>0.828</td><td>0.753</td><td>$\underline{0.845}$</td><td>0.870</td></tr><tr><td>pgp-strong</td><td>0.873</td><td>0.527</td><td>0.984</td><td>0.890</td><td>0.924</td><td>0.795</td><td>0.782</td><td>0.925</td></tr><tr><td>facebook-wall ${}^{w}$</td><td>0.877</td><td>0.789</td><td>0.931</td><td>0.809</td><td>0.855</td><td>0.813</td><td>0.768</td><td>0.867</td></tr><tr><td>slashdot-threads ${}^{\mathrm{w}}$</td><td>0.565</td><td>0.781</td><td>0.629</td><td>0.748</td><td>0.771</td><td>0.796</td><td>0.877</td><td>$\underline{0.876}$</td></tr><tr><td>python-dependency</td><td>0.751</td><td>0.735</td><td>${}^{ \dagger }{0.556}$</td><td>0.520</td><td>${}^{ \dagger }{0.505}$</td><td>0.832</td><td>0.965</td><td>0.913</td></tr><tr><td>lkml-reply ${}^{W}$</td><td>0.537</td><td>0.731</td><td>0.590</td><td>0.945</td><td>0.944</td><td>0.724</td><td>0.908</td><td>0.933</td></tr><tr><td>epinions-trust</td><td>0.599</td><td>0.777</td><td>0.806</td><td>0.943</td><td>0.952</td><td>0.887</td><td>0.916</td><td>0.937</td></tr><tr><td>prosper</td><td>0.828</td><td>0.631</td><td>0.697</td><td>†0.614</td><td>${}^{ \dagger }{0.518}$</td><td>0.952</td><td>0.891</td><td>$\underline{0.945}$</td></tr><tr><td>google-plus</td><td>0.752</td><td>0.725</td><td>0.957</td><td>0.787</td><td>0.893</td><td>0.891</td><td>0.862</td><td>0.946</td></tr><tr><td>twitter-higgs-retweet ${}^{\mathrm{w}}$</td><td>0.620</td><td>0.879</td><td>${}^{ \dagger }{0.695}$</td><td>†0.522</td><td>${}^{ \dagger }{0.569}$</td><td>0.799</td><td>0.977</td><td>0.820</td></tr><tr><td>amazon-copurchases-302</td><td>0.963</td><td>0.826</td><td>0.980</td><td>0.896</td><td>0.936</td><td>0.575</td><td>0.638</td><td>0.910</td></tr><tr><td>notre-dame-web</td><td>0.965</td><td>0.926</td><td>0.975</td><td>0.919</td><td>0.964</td><td>0.923</td><td>0.867</td><td>0.962</td></tr><tr><td>twitter-followers</td><td>0.526</td><td>10.993</td><td>†0.993</td><td>0.510</td><td>${}^{ \dagger }{0.973}$</td><td>0.917</td><td>0.809</td><td>0.871</td></tr><tr><td>Average</td><td>0.750</td><td>0.719</td><td>0.802</td><td>0.786</td><td>$\underline{0.832}$</td><td>0.757</td><td>0.830</td><td>0.892</td></tr><tr><td>Worst</td><td>0.524</td><td>0.505</td><td>0.516</td><td>0.510</td><td>0.505</td><td>0.524</td><td>0.608</td><td>0.755</td></tr><tr><td>Standard Deviation</td><td>0.148</td><td>0.131</td><td>0.164</td><td>0.129</td><td>0.128</td><td>0.118</td><td>$\underline{0.088}$</td><td>0.054</td></tr></table>
368
+
369
+ Table 3: Properties of 12 undirected networks, where weighted networks are marked with W.
370
+
371
+ <table><tr><td>Data</td><td>Ref</td><td>Nodes</td><td>Edges</td></tr><tr><td>new-zealand-collab ${}^{\mathrm{W}}$</td><td>[73]</td><td>1,511</td><td>4,273</td></tr><tr><td>urban-streets-venice</td><td>[74]</td><td>1,840</td><td>2,407</td></tr><tr><td>urban-streets-ahmedabad</td><td>[74]</td><td>2,870</td><td>4,387</td></tr><tr><td>power</td><td>[75]</td><td>4,941</td><td>6,594</td></tr><tr><td>facebook-organizations-L1</td><td>[76]</td><td>5,793</td><td>45,266</td></tr><tr><td>reactome</td><td>[77]</td><td>6,327</td><td>147,547</td></tr><tr><td>physics-collab-arXiv ${}^{W}$</td><td>[78]</td><td>14,488</td><td>59,026</td></tr><tr><td>marvel-universe</td><td>[79]</td><td>19,428</td><td>95,497</td></tr><tr><td>internet-as</td><td>[80]</td><td>22,963</td><td>48,436</td></tr><tr><td>marker-cafe</td><td>[68]</td><td>69,413</td><td>1,644,849</td></tr><tr><td>livemocha</td><td>[42]</td><td>104,103</td><td>2,193,083</td></tr><tr><td>foursquare-friendships-new</td><td>[81]</td><td>114,324</td><td>607,333</td></tr></table>
372
+
373
+ Table 4: ROC AUC on 12 undirected networks for DeepWalk (DW), node2vec (n2v), ${\mathrm{LINE}}_{1}$ (${\mathrm{L}}_{1}$), ${\mathrm{LINE}}_{2}$ (${\mathrm{L}}_{2}$), ${\mathrm{LINE}}_{1+2}$ (${\mathrm{L}}_{1+2}$), NERD, MapSim based on the one-level partition (${\mathrm{MS}}_{1}$), and MapSim based on modular partitions (MS). Networks marked with W are weighted. $\dagger$ marks cases with AUC $< 0.5$ where we flipped the predicted link scores to obtain AUC $> 0.5$. Results are rounded; the best result per network is shown in bold and the second-best is underlined.
374
+
375
+ <table><tr><td>Data</td><td>DW</td><td>n2v</td><td>${\mathrm{L}}_{1}$</td><td>${\mathrm{L}}_{2}$</td><td>${\mathrm{L}}_{1 + 2}$</td><td>NERD</td><td>${\mathrm{{MS}}}_{1}$</td><td>MS</td></tr><tr><td>new-zealand-collab ${}^{\mathrm{W}}$</td><td>0.616</td><td>0.734</td><td>${}^{ \dagger }{0.660}$</td><td>0.921</td><td>0.895</td><td>${}^{ \dagger }{0.559}$</td><td>0.834</td><td>0.839</td></tr><tr><td>urban-streets-venice</td><td>0.872</td><td>0.834</td><td>0.777</td><td>0.570</td><td>0.668</td><td>0.573</td><td>${}^{ \dagger }{0.607}$</td><td>0.889</td></tr><tr><td>urban-streets-ahmedabad</td><td>0.939</td><td>0.890</td><td>0.828</td><td>${}^{ \dagger }{0.533}$</td><td>0.629</td><td>${}^{ \dagger }{0.575}$</td><td>†0.731</td><td>0.897</td></tr><tr><td>power</td><td>0.919</td><td>0.863</td><td>0.827</td><td>0.741</td><td>0.777</td><td>0.600</td><td>0.552</td><td>0.959</td></tr><tr><td>facebook-organizations-L1</td><td>0.937</td><td>0.516</td><td>0.968</td><td>0.954</td><td>0.966</td><td>0.846</td><td>0.864</td><td>0.979</td></tr><tr><td>reactome</td><td>0.934</td><td>0.592</td><td>0.983</td><td>0.925</td><td>0.950</td><td>0.846</td><td>0.820</td><td>0.978</td></tr><tr><td>physics-collab-arXiv ${}^{W}$</td><td>0.929</td><td>0.521</td><td>0.977</td><td>0.807</td><td>0.871</td><td>0.695</td><td>0.568</td><td>0.955</td></tr><tr><td>marvel-universe</td><td>0.854</td><td>${}^{ \dagger }{0.633}$</td><td>0.879</td><td>0.834</td><td>0.902</td><td>0.852</td><td>0.679</td><td>0.900</td></tr><tr><td>internet-as</td><td>0.641</td><td>${}^{ \dagger }{0.705}$</td><td>0.535</td><td>0.921</td><td>0.920</td><td>0.744</td><td>0.766</td><td>0.927</td></tr><tr><td>marker-cafe</td><td>0.576</td><td>0.906</td><td>0.760</td><td>0.920</td><td>0.914</td><td>0.930</td><td>0.907</td><td>0.916</td></tr><tr><td>livemocha</td><td>0.708</td><td>0.758</td><td>0.839</td><td>0.861</td><td>0.876</td><td>0.924</td><td>0.855</td><td>0.876</td></tr><tr><td>foursquare-friendships-new</td><td>0.924</td><td>0.537</td><td>$\underline{0.968}$</td><td>0.932</td><td>0.950</td><td>0.836</td><td>0.791</td><td>0.988</td></tr><tr><td>Average</td><td>0.821</td><td>0.707</td><td>0.834</td><td>0.826</td><td>0.860</td><td>0.748</td><td>0.748</td><td>0.925</td></tr><tr><td>Worst</td><td>0.576</td><td>0.521</td><td>0.535</td><td>0.533</td><td>0.629</td><td>0.559</td><td>0.552</td><td>0.839</td></tr><tr><td>Standard Deviation</td><td>0.136</td><td>0.140</td><td>0.132</td><td>0.137</td><td>0.106</td><td>0.136</td><td>0.116</td><td>0.045</td></tr></table>
376
+
377
+ Table 5: Average precision on 47 directed and undirected networks for DeepWalk (DW), node2vec (n2v), ${\mathrm{LINE}}_{1}$ (${\mathrm{L}}_{1}$), ${\mathrm{LINE}}_{2}$ (${\mathrm{L}}_{2}$), ${\mathrm{LINE}}_{1+2}$ (${\mathrm{L}}_{1+2}$), NERD, MapSim based on the one-level partition (${\mathrm{MS}}_{1}$), and MapSim based on modular partitions (MS). Weighted networks are marked with W. Results marked with $\dagger$ correspond to cases with AUC $< 0.5$ where we flipped the predicted link scores. Results are rounded; the best results are shown in bold and the second-best are underlined.
378
+
379
+ <table><tr><td>Data</td><td>DW</td><td>n2v</td><td>${\mathrm{L}}_{1}$</td><td>${\mathrm{L}}_{2}$</td><td>${\mathrm{L}}_{1 + 2}$</td><td>NERD</td><td>${\mathrm{{MS}}}_{1}$</td><td>MS</td></tr><tr><td>uni-email</td><td>0.914</td><td>${}^{ \dagger }{0.513}$</td><td>0.964</td><td>0.916</td><td>0.940</td><td>0.736</td><td>0.692</td><td>0.870</td></tr><tr><td>polblogs</td><td>0.627</td><td>0.631</td><td>0.817</td><td>0.838</td><td>0.853</td><td>0.724</td><td>0.851</td><td>0.903</td></tr><tr><td>new-zealand-collab ${}^{W}$</td><td>0.643</td><td>0.661</td><td>†0.768</td><td>0.925</td><td>0.907</td><td>10.606</td><td>0.855</td><td>0.865</td></tr><tr><td>interactome-stelzl</td><td>0.835</td><td>${}^{ \dagger }{0.513}$</td><td>0.944</td><td>0.773</td><td>0.853</td><td>0.612</td><td>0.757</td><td>0.820</td></tr><tr><td>urban-streets-venice</td><td>0.897</td><td>0.870</td><td>0.828</td><td>0.634</td><td>0.711</td><td>0.597</td><td>${}^{ \dagger }{0.564}$</td><td>$\underline{0.890}$</td></tr><tr><td>interactome-figeys</td><td>0.533</td><td>${}^{ \dagger }{0.703}$</td><td>10.889</td><td>0.653</td><td>${}^{ \dagger }{0.865}$</td><td>0.730</td><td>0.730</td><td>0.819</td></tr><tr><td>us-air-traffic ${}^{\mathrm{W}}$</td><td>0.616</td><td>0.552</td><td>0.685</td><td>0.937</td><td>0.934</td><td>0.835</td><td>0.833</td><td>0.903</td></tr><tr><td>word-adjacency-japanese</td><td>${}^{ \dagger }{0.494}$</td><td>${}^{ \dagger }{0.570}$</td><td>${}^{ \dagger }{0.623}$</td><td>0.801</td><td>0.796</td><td>0.628</td><td>0.855</td><td>0.831</td></tr><tr><td>urban-streets-ahmedabad</td><td>0.953</td><td>0.919</td><td>0.864</td><td>${}^{ \dagger }{0.577}$</td><td>0.685</td><td>${}^{ \dagger }{0.523}$</td><td>${}^{ \dagger }{0.658}$</td><td>0.915</td></tr><tr><td>openflights ${}^{\mathrm{w}}$</td><td>0.767</td><td>${}^{ \dagger }{0.621}$</td><td>0.934</td><td>0.950</td><td>0.960</td><td>0.798</td><td>0.840</td><td>0.950</td></tr><tr><td>power</td><td>0.936</td><td>0.897</td><td>0.874</td><td>0.800</td><td>0.828</td><td>0.620</td><td>0.566</td><td>0.962</td></tr><tr><td>facebook-organizations-L1</td><td>0.919</td><td>0.508</td><td>0.977</td><td>0.966</td><td>0.974</td><td>0.882</td><td>0.835</td><td>0.976</td></tr><tr><td>reactome</td><td>0.908</td><td>0.580</td><td>0.985</td><td>0.944</td><td>0.961</td><td>0.890</td><td>0.786</td><td>0.978</td></tr><tr><td>jdk</td><td>0.777</td><td>0.862</td><td>0.891</td><td>0.737</td><td>0.807</td><td>0.761</td><td>0.973</td><td>0.987</td></tr><tr><td>advogato ${}^{W}$</td><td>0.769</td><td>0.505</td><td>0.868</td><td>0.892</td><td>0.905</td><td>0.805</td><td>0.810</td><td>0.890</td></tr><tr><td>word-adjacency-spanish</td><td>${}^{ \dagger }{0.496}$</td><td>0.652</td><td>${}^{ \dagger }{0.754}$</td><td>0.863</td><td>0.848</td><td>0.732</td><td>0.863</td><td>0.851</td></tr><tr><td>dblp-cite</td><td>0.834</td><td>${}^{ \dagger }{0.485}$</td><td>${}^{ \dagger }{0.551}$</td><td>0.742</td><td>0.646</td><td>0.908</td><td>0.828</td><td>0.905</td></tr><tr><td>anybeat</td><td>0.672</td><td>0.523</td><td>0.748</td><td>0.884</td><td>0.894</td><td>0.784</td><td>0.867</td><td>0.883</td></tr><tr><td>chicago-road</td><td>0.998</td><td>0.863</td><td>0.986</td><td>0.735</td><td>0.874</td><td>${}^{ \dagger }{0.559}$</td><td>${}^{ \dagger }{0.579}$</td><td>0.909</td></tr><tr><td>foldoe ${}^{\mathrm{W}}$</td><td>0.946</td><td>0.575</td><td>0.966</td><td>0.848</td><td>0.914</td><td>0.629</td><td>0.658</td><td>0.888</td></tr><tr><td>physics-collab-arXiv 
${}^{W}$</td><td>0.939</td><td>0.592</td><td>0.983</td><td>0.858</td><td>0.899</td><td>0.725</td><td>0.634</td><td>0.964</td></tr><tr><td>google</td><td>0.859</td><td>0.775</td><td>0.903</td><td>0.878</td><td>0.907</td><td>0.775</td><td>0.889</td><td>0.976</td></tr><tr><td>marvel-universe</td><td>0.864</td><td>${}^{ \dagger }{0.666}$</td><td>0.914</td><td>0.840</td><td>0.899</td><td>0.884</td><td>0.615</td><td>0.910</td></tr><tr><td>internet-as</td><td>0.685</td><td>${}^{ \dagger }{0.742}$</td><td>0.659</td><td>0.930</td><td>0.930</td><td>0.822</td><td>0.817</td><td>0.932</td></tr><tr><td>word-assoc ${}^{\mathrm{W}}$</td><td>0.727</td><td>0.846</td><td>0.873</td><td>0.896</td><td>0.922</td><td>0.902</td><td>0.848</td><td>0.862</td></tr><tr><td>cora</td><td>$\underline{0.938}$</td><td>0.815</td><td>0.958</td><td>0.834</td><td>0.880</td><td>0.847</td><td>0.826</td><td>0.926</td></tr><tr><td>arxiv-citation-HepTh</td><td>0.865</td><td>0.812</td><td>0.966</td><td>0.896</td><td>0.925</td><td>0.868</td><td>0.839</td><td>0.952</td></tr><tr><td>digg-reply ${}^{W}$</td><td>${}^{ \dagger }{0.501}$</td><td>0.585</td><td>${}^{ \dagger }{0.604}$</td><td>0.772</td><td>0.761</td><td>0.873</td><td>0.835</td><td>0.834</td></tr><tr><td>linux</td><td>0.734</td><td>0.663</td><td>0.701</td><td>0.733</td><td>0.754</td><td>0.835</td><td>0.959</td><td>0.965</td></tr><tr><td>arxiv-citation-HepPh</td><td>0.952</td><td>0.890</td><td>0.975</td><td>0.881</td><td>0.923</td><td>0.870</td><td>0.813</td><td>0.952</td></tr><tr><td>email-enron</td><td>0.816</td><td>${}^{ \dagger }{0.541}$</td><td>0.988</td><td>0.963</td><td>0.974</td><td>0.873</td><td>0.860</td><td>0.949</td></tr><tr><td>inploid</td><td>0.667</td><td>0.736</td><td>0.532</td><td>0.879</td><td>0.875</td><td>0.819</td><td>0.869</td><td>0.891</td></tr><tr><td>pgp-strong</td><td>0.879</td><td>0.568</td><td>0.989</td><td>0.927</td><td>0.946</td><td>0.848</td><td>0.804</td><td>0.954</td></tr><tr><td>facebook-wall ${}^{W}$</td><td>0.865</td><td>0.744</td><td>0.951</td><td>0.865</td><td>0.890</td><td>0.833</td><td>0.753</td><td>0.890</td></tr><tr><td>slashdot-threads ${}^{\mathrm{W}}$</td><td>0.604</td><td>0.769</td><td>0.744</td><td>0.835</td><td>0.848</td><td>0.855</td><td>0.883</td><td>0.886</td></tr><tr><td>python-dependency</td><td>0.790</td><td>0.763</td><td>${}^{ \dagger }{0.715}$</td><td>0.653</td><td>${}^{ \dagger }{0.632}$</td><td>0.889</td><td>0.965</td><td>0.915</td></tr><tr><td>1kml-reply ${}^{\mathrm{v}}$</td><td>0.494</td><td>0.612</td><td>0.719</td><td>0.959</td><td>0.958</td><td>0.821</td><td>0.920</td><td>0.942</td></tr><tr><td>marker-cafe</td><td>0.539</td><td>0.853</td><td>0.832</td><td>0.921</td><td>0.917</td><td>0.949</td><td>0.901</td><td>0.912</td></tr><tr><td>epinions-trust</td><td>0.611</td><td>0.679</td><td>0.875</td><td>0.960</td><td>0.964</td><td>0.925</td><td>0.921</td><td>0.947</td></tr><tr><td>prosper</td><td>0.818</td><td>0.531</td><td>0.616</td><td>10.650</td><td>${}^{ \dagger }{0.465}$</td><td>0.956</td><td>0.855</td><td>0.927</td></tr><tr><td>livemocha</td><td>0.694</td><td>0.737</td><td>0.880</td><td>0.868</td><td>0.884</td><td>0.930</td><td>0.854</td><td>0.881</td></tr><tr><td>foursquare-friendships-new</td><td>0.918</td><td>0.520</td><td>0.976</td><td>0.948</td><td>0.961</td><td>0.858</td><td>0.792</td><td>0.988</td></tr><tr><td>google-plus</td><td>0.704</td><td>0.679</td><td>0.960</td><td>0.870</td><td>0.921</td><td>0.921</td><td>0.870</td><td>0.960</td></tr><tr><td>twitter-higgs-retweet 
${}^{\mathrm{W}}$</td><td>0.630</td><td>0.880</td><td>†0.800</td><td>${}^{ \dagger }{0.634}$</td><td>${}^{ \dagger }{0.707}$</td><td>0.874</td><td>0.976</td><td>0.822</td></tr><tr><td>amazon-copurchases-302</td><td>0.966</td><td>0.850</td><td>0.987</td><td>0.931</td><td>0.957</td><td>0.589</td><td>0.656</td><td>0.946</td></tr><tr><td>notre-dame-web</td><td>0.967</td><td>0.938</td><td>0.980</td><td>0.946</td><td>0.971</td><td>0.930</td><td>0.891</td><td>0.971</td></tr><tr><td>twitter-followers</td><td>0.549</td><td>${}^{ \dagger }{0.987}$</td><td>10.989</td><td>0.714</td><td>${}^{ \dagger }{0.977}$</td><td>0.955</td><td>0.839</td><td>0.887</td></tr></table>
380
+
papers/LOG/LOG 2022/LOG 2022 Conference/PTz0aXJp7A/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,179 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § SIMILARITY-BASED LINK PREDICTION FROM MODULAR COMPRESSION OF NETWORK FLOWS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Node similarity scores are a foundation for machine learning in graphs for clustering, node classification, anomaly detection, and link prediction with applications in biological systems, information networks, and recommender systems. Recent works on link prediction use vector space embeddings to calculate node similarities in undirected networks with good performance, but they have several disadvantages: limited interpretability, the need for hyperparameter tuning and manual model fitting through dimensionality reduction, and poor performance of symmetric similarities in directed link prediction. We propose MapSim, an information-theoretic measure to assess node similarities based on modular compression of network flows. Different from vector space embeddings, MapSim represents nodes in a discrete, non-metric space of communities and yields asymmetric similarities in an unsupervised fashion. We compare MapSim on a link prediction task to popular embedding-based algorithms across 47 data sets of networks and find that MapSim's average performance across all networks is more than 7% higher than its closest competitor, outperforming all embedding methods in 14 of the 47 networks. Our method demonstrates the potential of compression-based approaches in graph representation learning, with promising applications in other graph learning tasks.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Calculating similarity scores between objects is a fundamental problem in supervised and unsupervised machine learning tasks from clustering, anomaly detection, and text mining to classification and recommender systems. In Euclidean feature spaces, similarities between feature vectors are commonly calculated based on lengths, norms, angles, or other geometric concepts, possibly using kernel functions that perform implicit non-linear mappings to high-dimensional feature spaces [1]. For relational data represented as graphs, methods using the graph topology to calculate pairwise node similarities can address learning problems such as graph clustering, node classification, and link prediction. For link prediction, many recent works take a multi-step approach that separates representation learning and link prediction $\left\lbrack {2,3}\right\rbrack$ : First, they learn a node embedding in a latent vector space from the graph's topology, using methods such as graph or matrix factorisation [4, 5], or random-walk based techniques [6-8]. Then, they interpret node positions as points in a high-dimensional feature space, possibly applying downstream dimensionality reduction. Finally, they use node positions in the resulting feature space to assign new "features" to pairs of nodes, which can be used to predict links. Taking an unsupervised approach, we predict links based on node similarities [9] by calculating distance metrics or similarity scores between node pairs and then ranking them. We can alternatively use a supervised approach [10] by (i) using binary operators like the Hadamard product [7], (ii) sampling negative instances (node pairs not connected by links), and (iii) using the features of positive and negative instances to train a supervised binary classifier [7].
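+
+ To make the unsupervised protocol concrete, the following minimal sketch (our illustration, not code from this work) scores held-out positive node pairs and sampled negative pairs with a generic `similarity` function and evaluates the resulting ranking with ROC AUC; the helper name and the simple negative-sampling scheme are assumptions for illustration only.
+
+ ```python
+ # Unsupervised similarity-based link prediction: rank node pairs by a
+ # similarity score and evaluate the ranking with ROC AUC.
+ import random
+ from sklearn.metrics import roc_auc_score
+
+ def evaluate_similarity(similarity, nodes, positive_pairs, n_negative=1000, seed=0):
+     """similarity(u, v) -> float is any pairwise score, e.g. MapSim."""
+     rng = random.Random(seed)
+     positives = set(positive_pairs)
+     negatives = []
+     while len(negatives) < n_negative:
+         u, v = rng.choice(nodes), rng.choice(nodes)
+         if u != v and (u, v) not in positives:
+             negatives.append((u, v))
+     pairs = list(positive_pairs) + negatives
+     labels = [1] * len(positive_pairs) + [0] * len(negatives)
+     scores = [similarity(u, v) for u, v in pairs]
+     return roc_auc_score(labels, scores)
+ ```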
16
+
17
+ Advances in graph embedding and representation learning have considerably improved our ability to predict links in networks, with applications in biological [11] and social [12] networks and in recommender systems [13]. However, these methods also introduce challenges for real-world link prediction tasks: First, they require specifying hyperparameters that control aspects regarding the scale of patterns in graphs, influence of local and non-local structures, and latent space dimensionality
18
+
19
+ (Figure 1 graphic: the original network, its modular code books, and the resulting modular path costs in bits for MapSim and the map equation.)
20
+
21
+ Figure 1: We calculate node similarities for predicting links based on a network's modular coding scheme of the map equation. Blue and orange nodes have a unique codeword within their module derived from their stationary visit rates. Decimal numbers show the theoretical lower limit for the codeword length in bits. Map equation similarity derives description lengths for predicted links, connecting more similar nodes uses fewer bits. Intra-community links tend to have shorter description lengths than inter-community links.
22
+
23
+ [14]. Network-specific hyperparameter tuning addresses these issues, but is challenging in real applications and aggravates the risk of overfitting; recent systematic comparisons reveal that the performance of different methods largely varies across data sets $\left\lbrack {2,3}\right\rbrack$ . Altogether, this makes it difficult for practitioners to choose and optimally parametrise an embedding method. Second, using latent metric spaces implies symmetric similarities, limiting the performance when predicting directed links [5, 15]. Third, an important issue is that, compared with manually-crafted features, embeddings tend to have low interpretability: We can assess the similarity of nodes, but we cannot explain why some nodes are more similar than others [2-4]. Finally, recent works have highlighted fundamental limitations of low-dimensional representations of complex networks [16], questioning to what extent Euclidean embeddings can capture patterns that are relevant for link prediction.
24
+
25
+ Motivated by recent works highlighting the importance of community structures for link prediction $\left\lbrack {2,{17},{18}}\right\rbrack$ , we propose a novel approach to similarity-based link prediction that addresses these issues. Our contributions are:
26
+
27
+ * We introduce map equation similarity, MapSim for short, an information-theoretic method to calculate asymmetric node similarities. MapSim builds on the map equation [19], a framework that applies coding theory to compress random walks based on hierarchical cluster structures.
28
+
29
+ * Different from other random walk-based embedding techniques, our work builds on an analytical approach to calculate the expected description length of random walks in the limit, and, thus, requires neither simulating random walks nor tuning hyperparameters.
30
+
31
+ * Following the minimum description length principle, MapSim incorporates Occam's razor and balances explanatory power with model complexity, making dimensionality reduction superfluous. With hierarchical cluster structures, MapSim captures patterns at multiple scales simultaneously and combines advantages of local and non-local similarity scores.
32
+
33
+ * We validate MapSim in an unsupervised, similarity-based link prediction task and compare its performance to six widely used random walk-based embedding techniques in 47 directed and undirected empirical networks from different domains. Highlighting challenges in the generalisability of embedding techniques and parametrisations across different networks, this analysis is a contribution in itself.
34
+
35
+ * Confirming recent surveys, we find that, without network-specific hyperparameter tuning, the performance of popular embedding techniques for unsupervised link prediction heavily depends on the data. In contrast, MapSim provides high performance across a wide range of networks, with an average performance that is ${7.7}\%$ and ${7.5}\%$ better than the best competitor in undirected and directed networks, respectively. MapSim outperforms all competing methods in 14 of the 47 networks with a standard deviation less than half of that of the closest competitor. We find that the worst-case performance of our method is ${44}\%$ and ${33}\%$ better than that of popular embedding techniques in undirected and directed networks, respectively.
36
+
37
+ In summary, we take a novel perspective on graph representation learning that fundamentally differs from other random walk-based graph embeddings. Rather than embedding nodes into a metric space, leading to symmetric similarities, we develop an unsupervised learning framework where (i) positions of nodes in a coding tree capture their representation in a non-metric latent space, and (ii) node similarities are calculated based on how well transitions between nodes are compressed by a network's hierarchical modular structure (figure 1). Apart from node similarities that can be "explained" based on community structures captured in the coding tree, MapSim yields asymmetric similarity scores that naturally support link prediction in directed networks. We provide a simple, non-parametric, and scalable unsupervised method with high generalisability across data sets, and which should thus be of interest for practitioners. Our work demonstrates the power of compression-based approaches to graph representation learning, with promising applications in other graph learning tasks.
38
+
39
+ § 2 RELATED WORK AND BACKGROUND
40
+
41
+ We first summarise recent works on graph embedding and similarity-based link prediction. Then, we review the map equation, an information-theoretic objective function for community detection and theoretical foundation of our compression-based similarity score.
42
+
43
+ § 2.1 RELATED WORK
44
+
45
+ Since we focus on unsupervised similarity-based link prediction, we consider methods to calculate a bivariate function $\operatorname{sim}\left( {u,v}\right) \in {\mathbb{R}}^{d}$ , where $u,v \in V$ are nodes in a directed or undirected, possibly weighted graph $G = \left( {V,E}\right) \left\lbrack {{20},{21}}\right\rbrack$ . While similarity metrics often consider scalar functions $\left( {d = 1}\right)$ , recent vector space embeddings use binary operators to assign vector-valued "features" with $d > 1$ to node pairs. Since vectorial features are typically used in downstream classification techniques, this can be seen as an implicit mapping to similarities, for example "similar" features being assigned similar class probabilities. We limit our discussion to topological or structural approaches [20], functions $\operatorname{sim}\left( {u,v}\right)$ that can be calculated solely based on the edges $E$ in graph $G$ without requiring additional information such as node attributes or other non-topological graph properties.
46
+
47
+ Several works define scalar similarities based on local topological characteristics such as the Jaccard index of neighbour sets, degrees of nodes, or degree-weighted measures of common neighbours [22]. Other methods define similarities based on random walks, paths, or topological distance between nodes $\left\lbrack {9,{23} - {25}}\right\rbrack$ . Compared to purely local approaches, an advantage of random walk-based methods is their ability to incorporate both local and non-local information, which is crucial for sparse networks where nodes may lack common neighbours. Since walk-based methods reveal cluster patterns in networks [19], they generally perform well in downstream tasks such as link prediction and graph clustering [2]. Graph factorisation approaches that use eigenvectors of different types of Laplacian matrices that represent relationships between nodes share this high performance [26], likely because (i) Laplacians capture the dynamics of continuous-time random walks [27], and (ii) spectral methods can capture small cuts in graphs [28].
48
+
49
+ Building on these ideas, some recent works on graph representation learning combine random walks and deep learning to obtain high-dimensional vector space embeddings of nodes, serving as features in downstream learning tasks [3, 14]: Perozzi et al. [6] generate a large number of short random walks to learn latent space representations of nodes by applying a word embedding technique that considers node sequences as word sequences in a sentence. This corresponds to an implicit factorisation of a matrix whose entries capture the logarithm of the expected probabilities to walk between nodes in a given number of steps [29]. Following a similar walk-based approach, Grover and Leskovec [7] generate node sequences with a biased random walker whose exploration behaviour can be tuned by search bias parameters $p$ and $q$ . The resulting walk sequences are used as input for the word embedding algorithm word2vec [30], which embeds objects in a latent vector space with configurable dimensionality. Tang et al. [8] construct vector space embeddings of nodes that simultaneously preserve first- and second-order proximities between nodes. Similar to Adamic and Adar [22], second-order node proximities are defined based on common neighbours. Extending the random walk approach in [6], Perozzi et al. [31] learn embeddings from so-called walklets, random walks that skip some nodes, resulting in embeddings that capture structural features at multiple scales.
50
+
51
+ The abovementioned graph embedding methods compute a representation of nodes in a, compared to the number of nodes in the network, low-dimensional Euclidean space. A suitably defined metric for similarity or distance of nodes enables recovering the link topology with high fidelity [32], forming the basis for similarity-based link prediction. In contrast, [10] argued for a new perspective that uses supervised classifiers based on (i) multi-dimensional features of node pairs, and (ii) an undersampling of negative instances to address inherent class imbalances in link prediction. Recent applications of graph embedding to link prediction have taken a similar supervised approach, for example using vector-valued binary operators to construct features for node pairs from node vectors $\left\lbrack {6,7,{21}}\right\rbrack$ . Despite good performance, recent works have cast a more critical light on such applications of low-dimensional graph embeddings. Questioning the distinction between deep learning-based embeddings and graph factorisation techniques, Qiu et al. [4] show that popular embedding techniques can be understood as (approximate) factorisations of matrices that capture graph topology. Thus, low-dimensional embeddings can be viewed as a (lossy) compression of graphs, while link prediction or graph reconstruction can be viewed as the decompression step. Fitting this view, a recent study of the topological characteristics of networks' low-dimensional Euclidean representations has highlighted fundamental limitations of embeddings to capture complex structures found in real networks [16].
52
+
53
+ Techniques like node2vec, LINE, or DeepWalk have been reported to perform well for link prediction despite those limitations. However, recent surveys concur that fine-tuning their hyperparameters to the specific data set is required $\left\lbrack {2,{18},{33}}\right\rbrack$, which can be problematic in large data sets and increases the risk of overfitting. When used for link prediction, graph embedding methods are typically combined with dimensionality reduction and supervised classification algorithms, possibly using non-linear kernels. Recent comparative studies found that the performance of Euclidean graph embeddings for link prediction is connected to their ability to represent communities in the graph as clusters in the feature space [2], which, due to the non-linear nature of graph data [34], strongly depends on their topology. Using symmetric operators or distance measures in metric spaces limits their ability to predict directed links because the ground truth for $(u, v)$ can differ from $(v, u)$ [15].
54
+
55
+ These issues raise the general question whether we should address link prediction based on low-dimensional Euclidean embeddings. Recent works addressed some of those open questions, for example with hyperbolic or non-linear embeddings [17, 34], extensions of Euclidean embeddings for directed link prediction [15], or embeddings that explicitly account for community structures $\left\lbrack {{18},{35},{36}}\right\rbrack$ . However, existing works still use hyperparameters, require separate dimensionality reduction or model selection techniques to identify the optimal number of dimensions, fail to capture rich hierarchically nested community structures present in real-world networks [37], or do not integrate community detection with the actual representation learning. Addressing all issues at once, we take a novel approach that treats graph representation learning as a compression problem: We use the map equation [19], an analytical information-theoretic approach to compress flows of random walks in directed or undirected, possibly weighted networks based on their modular structure. The resulting hierarchical coding tree with node assignments can be viewed as an embedding in a discrete, non-metric latent space of (possibly hierarchical) community labels with automatically optimised dimensionality using a minimum description length approach. As an analytical approach, our method neither introduces hyperparameters nor needs to simulate random walks. Using a non-metric latent space naturally yields asymmetric node similarities suitable to predict directed links.
56
+
57
+ § 2.2 BACKGROUND: THE MAP EQUATION
58
+
59
+ The map equation is an information-theoretic objective function for community detection that, conceptually, models network flows with random walks [19]. To detect communities, the map equation compresses the random walks' per-step description length by searching for sets of nodes with long flow persistence: network areas where a random walker tends to stay for a longer time.
60
+
61
+ Consider a communication game where the sender observes a random walker on a network, and uses binary codewords to update the receiver about the random walker's location. In the simplest case, all nodes belong to the same module and we use a Huffman code to assign unique codewords to the nodes based on their stationary visit rates. With a one-module partition, the sender communicates one codeword per random walk step to the receiver. The theoretical lower limit for the per-step description length, which we call the codelength, is the entropy of the nodes' visit rates [38],
62
+
63
+ $$
64
+ L\left( {\mathrm{\;M}}_{1}\right) = \mathcal{H}\left( P\right) = - \mathop{\sum }\limits_{{u \in V}}{p}_{u}{\log }_{2}{p}_{u} \tag{1}
65
+ $$
66
+
67
+ Here ${\mathrm{M}}_{1}$ is the one-module partition, $\mathcal{H}$ is the Shannon entropy, $P$ is the set of the nodes’ visit rates, and ${p}_{u}$ is node $u$ ’s visit rate.
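+
+ For concreteness, a minimal sketch (our illustration) of equation 1: the one-level codelength is simply the Shannon entropy of the nodes' visit rates.
+
+ ```python
+ from math import log2
+
+ def one_level_codelength(visit_rates):
+     """Equation (1): entropy of the nodes' stationary visit rates.
+     visit_rates maps each node to its visit rate; the rates sum to 1."""
+     return -sum(p * log2(p) for p in visit_rates.values() if p > 0)
+ ```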
68
+
69
+ In networks with modular structure, we can compress the random walks' description by grouping nodes into more than one module such that a random walker tends to remain within modules and module switches become rare. This lets us re-use codewords across modules and design a codebook per module based on the nodes' module-normalised visit rates. However, sender and receiver need
70
+
71
+ (Figure 2 graphic: example network with per-module codewords; the highlighted walk at the bottom is encoded as 0 01 00 10 1101 1 10 11 01, using 18.1 bits.)
72
+
73
+ Figure 2: Coding principles behind the map equation. Left: An example network with nine nodes, ten links, and two communities, A and B, as indicated by colours. Each random-walker step is encoded by one codeword for intra-module transitions or three codewords for inter-module transitions. Codewords are shown next to nodes in colours, their length in the information-theoretic limit in black. Module entry and exit codewords are shown to the left and right of the coloured arrows, respectively. The black trace shows a possible section of a random walk with its encoding at the bottom. Right: Coding tree corresponding to the network's community structure. Links are annotated with transition rates because we calculate similarities in the information-theoretic limit. Each path in the coding tree corresponds to a network link, which may or may not exist. The coder remembers the random walker's module but not the most recently visited node. To describe the intra-module transition from node 5 to 3, we use $- {\log }_{2}\left( {3/{12}}\right) = 2$ bits. The inter-module transition from node 5 to 7 takes three steps in the coding tree and requires $- {\log }_{2}\left( {1/{12} \cdot 1/2 \cdot 2/{10}}\right) \approx {6.9}$ bits.
74
+
75
+ a way to encode module switches. The map equation uses a designated module exit codeword per module and an index-level codebook with module entry codewords. In a two-level partition, the sender communicates one codeword for intra-module random-walker steps to the receiver and three codewords for inter-module steps (figure 2). The lower limit for the codelength is given by the sum of entropies associated with module and index codebooks, weighted by their usage rates. Given a partition of the network’s nodes into modules, $M$ , the map equation [19] formalises this relationship,
76
+
77
+ $$
78
+ L\left( \mathrm{\;M}\right) = q\mathcal{H}\left( Q\right) + \mathop{\sum }\limits_{{\mathrm{m} \in \mathrm{M}}}{p}_{\mathrm{m}}\mathcal{H}\left( {P}_{\mathrm{m}}\right) . \tag{2}
79
+ $$
80
+
81
+ Here $q = \mathop{\sum }\limits_{{\mathrm{m} \in \mathrm{M}}}{q}_{\mathrm{m}}$ is the index-level codebook usage rate, ${q}_{\mathrm{m}}$ is the entry rate for module $\mathrm{m}$, and $Q = \left\{ {{q}_{\mathrm{m}} \mid \mathrm{m} \in \mathrm{M}}\right\}$ is the set of module entry rates; ${\mathrm{m}}_{\text{exit}}$ is the exit rate for module $\mathrm{m}$, ${p}_{\mathrm{m}} = {\mathrm{m}}_{\text{exit}} + \mathop{\sum }\limits_{{u \in \mathrm{m}}}{p}_{u}$ is the codebook usage rate for module $\mathrm{m}$, and ${P}_{\mathrm{m}} = \left\{ {\mathrm{m}}_{\text{exit}}\right\} \cup \left\{ {{p}_{u} \mid u \in \mathrm{m}}\right\}$ is the set of node visit rates in $\mathrm{m}$, including $\mathrm{m}$'s module exit rate.
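+
+ A minimal sketch of equation 2 (our illustration, assuming the entry, exit, and node visit rates are already given):
+
+ ```python
+ from math import log2
+
+ def entropy(rates):
+     """Entropy of codeword-use frequencies, normalised within one codebook."""
+     total = sum(rates)
+     if total <= 0:
+         return 0.0
+     return -sum(r / total * log2(r / total) for r in rates if r > 0)
+
+ def two_level_codelength(modules):
+     """Equation (2): L(M) = q H(Q) + sum_m p_m H(P_m).
+     modules is a list of dicts with keys 'enter', 'exit', and 'visit_rates'."""
+     q = sum(m["enter"] for m in modules)                  # index codebook use rate
+     index_term = q * entropy([m["enter"] for m in modules])
+     module_term = 0.0
+     for m in modules:
+         rates = [m["exit"]] + list(m["visit_rates"])      # P_m, incl. the exit rate
+         module_term += sum(rates) * entropy(rates)        # p_m * H(P_m)
+     return index_term + module_term
+ ```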
82
+
83
+ § 3 MAPSIM: NODE SIMILARITIES FROM MODULAR FLOW COMPRESSION
84
+
85
+ Compression-based similarity measures consider pairs of objects more similar if they jointly compress better. Extending this idea to networks, we exploit the coding of network flows based on the map equation, and use it to calculate information-theoretic pairwise similarities between nodes: MapSim. We interpret a network's community structure as an implicit embedding and, roughly speaking, consider nodes in the same community as more similar than nodes in different communities.
86
+
87
+ To calculate node similarities, we begin with a network partition and its corresponding modular coding scheme, which can be visualised as a tree, annotated with the transition rates defined by the link patterns in the network (figure 2). While the network's topology constrains random walks to transitions along existing links, the coding scheme is more flexible and can describe transitions between any pair of nodes. To describe the transition from node $u$ to $v$, we find the corresponding path in the partition tree and multiply the transition rates along that path, that is, we use the coarse-grained description of the network's community structure, not the network's actual link pattern; it can describe any transition regardless of whether the link $(u, v)$ exists in the network or not. The description length in bits for a path with transition rate $r$ is $- {\log }_{2}\left( r\right)$. For example, consider the scenario in figure 2 where we calculate similarity scores for the two directed links $(5,3)$ and $(5,7)$, neither of which exists in the network. Nodes 5 and 3 are in module $A$, and the rate at which a random walker in $A$ visits node 3 is $3/{12}$, requiring $- {\log }_{2}\left( {3/{12}}\right) = 2$ bits to describe that transition. Node 7 is in module $B$, and a random walker in $A$ exits $A$ at rate $1/{12}$, enters $B$ at rate $1/2$, and then visits node 7 at rate $2/{10}$, that is, at rate $1/{120}$, requiring $- {\log }_{2}\left( {1/{120}}\right) \approx {6.9}$ bits.
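+
+ The same arithmetic in code (our illustration, using the rates read off figure 2):
+
+ ```python
+ from math import log2
+
+ def description_length(rates):
+     """Bits to describe a transition whose coding-tree path has the given rates."""
+     cost = 0.0
+     for r in rates:
+         cost -= log2(r)
+     return cost
+
+ # Intra-module link (5, 3): visit node 3 within module A at rate 3/12.
+ print(description_length([3 / 12]))                 # 2.0 bits
+ # Inter-module link (5, 7): exit A (1/12), enter B (1/2), visit node 7 (2/10).
+ print(description_length([1 / 12, 1 / 2, 2 / 10]))  # ~6.9 bits
+ ```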
88
+
89
+ (Figure 3 graphic: partition tree showing the common prefix $p = \left\lbrack {{p}_{1},\ldots ,{p}_{i}}\right\rbrack$, the reverse path $\operatorname{rev}\left( {{\mathrm{M}}_{\langle p\rangle },\left\lbrack {{u}_{j},{u}_{k}}\right\rbrack }\right)$ from $u$, and the forward path $\operatorname{forw}\left( {{\mathrm{M}}_{\langle p\rangle },\left\lbrack {{v}_{j},{v}_{k},{v}_{l}}\right\rbrack }\right)$ to $v$.)
90
+
91
+ Figure 3: Illustration of map equation similarity between nodes $u$ and $v$ with addresses $\operatorname{addr}\left( {\mathrm{M},u}\right) = \left\lbrack {{p}_{1},\ldots ,{p}_{i},{u}_{j},{u}_{k}}\right\rbrack$ and $\operatorname{addr}\left( {\mathrm{M},v}\right) = \left\lbrack {{p}_{1},\ldots ,{p}_{i},{v}_{j},{v}_{k},{v}_{l}}\right\rbrack$ , respectively, where $\mathrm{M}$ is the complete network partition. The longest common prefix shared by the addresses for $u$ and $v$ is $p = \left\lbrack {{p}_{1},\ldots ,{p}_{i}}\right\rbrack$ , and ${\mathrm{M}}_{\langle p\rangle }$ is the sub-module at address $p$ within $\mathrm{M}$ , that is the smallest module that contains $u$ and $v$ .
92
+
93
+ Paths to derive similarities emanate from modules, not from nodes, because the model must generalise to unobserved data. If compression were our sole purpose, we would use node-specific codebooks containing codewords for neighbouring nodes, no longer detect communities, and only be able to describe observed links. Instead, the map equation's coding scheme is designed to capitalise on modular network structures: The modular code structure provides a model that generalises to unobserved data, coarse-grains the path descriptions, and prevents overfitting.
94
+
95
+ For the general case, where $\mathrm{M}$ can be a hierarchical network partition, we number the sub-modules within each module $\mathrm{m}$ from 1 to ${n}_{\mathrm{m}}$, and refer to these numbers as addresses, such that an ordered sequence of addresses uniquely identifies a path starting at the root of the partition tree. We let addr: $\mathrm{M} \times N \rightarrow \operatorname{List}(N)$ be a function that takes a network partition and a node as input, and returns the node's address in the partition. To calculate the similarity of node $v$ to $u$, we identify the longest common prefix $p$ of the nodes' addresses, $\operatorname{addr}(\mathrm{M}, u)$ and $\operatorname{addr}(\mathrm{M}, v)$, and select the partition tree's sub-tree ${\mathrm{M}}_{\langle p\rangle }$ that corresponds to the prefix $p$: ${\mathrm{M}}_{\langle p\rangle }$ is the smallest sub-tree that contains $u$ and $v$. We obtain the addresses for $u$ and $v$ within sub-tree ${\mathrm{M}}_{\langle p\rangle }$ by removing the prefix $p$ from their addresses. That is, $\operatorname{addr}\left( {\mathrm{M},u}\right) = p + + \operatorname{addr}\left( {{\mathrm{M}}_{\langle p\rangle },u}\right)$ and $\operatorname{addr}\left( {\mathrm{M},v}\right) = p + + \operatorname{addr}\left( {{\mathrm{M}}_{\langle p\rangle },v}\right)$, where $+ +$ is list concatenation. The rate at which a random walker transitions from $u$ to $v$ is the product of (i) the rate at which the random walker moves along the path $\operatorname{addr}\left( {{\mathrm{M}}_{\langle p\rangle },u}\right)$ in reverse direction, $\operatorname{rev}\left( {{\mathrm{M}}_{\langle p\rangle },\operatorname{addr}\left( {{\mathrm{M}}_{\langle p\rangle },u}\right) }\right)$, that is, from $u$ to the root of ${\mathrm{M}}_{\langle p\rangle }$, and (ii) the rate at which the random walker moves along the path $\operatorname{addr}\left( {{\mathrm{M}}_{\langle p\rangle },v}\right)$ in forward direction, $\operatorname{forw}\left( {{\mathrm{M}}_{\langle p\rangle },\operatorname{addr}\left( {{\mathrm{M}}_{\langle p\rangle },v}\right) }\right)$, that is, from the root of ${\mathrm{M}}_{\langle p\rangle }$ to $v$, where
96
+
97
+ $$
98
+ \operatorname{rev}\left( {\mathrm{M},a}\right) = \left\{ \begin{array}{ll} 1 & \text{ if }a = \left\lbrack x\right\rbrack \\ {\mathrm{M}}_{\langle \left\lbrack x\right\rbrack \rangle ,\operatorname{exit}} \cdot \operatorname{rev}\left( {{\mathrm{M}}_{\langle \left\lbrack x\right\rbrack \rangle },{a}^{\prime }}\right) & \text{ if }a = \left\lbrack x\right\rbrack + + {a}^{\prime } \end{array}\right. \tag{3}
99
+ $$
100
+
101
+
102
+
103
+ $$
104
+ \text{ forw }\left( {\mathrm{M},a}\right) = \left\{ \begin{array}{ll} {p}_{\langle \lbrack x\rbrack \rangle }/{p}_{\mathrm{M}} & \text{ if }a = \left\lbrack x\right\rbrack \\ {\mathrm{M}}_{\langle \lbrack x\rbrack \rangle ,\text{ enter }} \cdot \operatorname{forw}\left( {{\mathrm{M}}_{\langle \lbrack x\rbrack \rangle },{a}^{\prime }}\right) & \text{ if }a = \left\lbrack x\right\rbrack + + {a}^{\prime } \end{array}\right. \tag{4}
105
+ $$
106
+
107
+ and ${a}^{\prime }$ denotes non-empty sequences. Here ${p}_{\mathrm{M}}$ is the codebook use rate for module $\mathrm{M}$ and ${p}_{\langle \left\lbrack x\right\rbrack \rangle }$ is the visit rate for the node identified by address $x$ within the given module. The final addresses in equation 3 and equation 4 are treated differently, reflecting that the map equation forgets the most recently visited node.
108
+
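To make equations 3 and 4 concrete, consider an illustrative address $a = \left\lbrack x, y\right\rbrack$ relative to a module $\mathrm{M}$ with assumed (made-up) rates: the sub-module ${\mathrm{M}}_{\langle \left\lbrack x\right\rbrack \rangle }$ has exit rate $0.1$ , entry rate $0.2$ , and codebook use rate $0.25$ , and the node at address $y$ within it has visit rate $0.05$ . Then

$$
\operatorname{rev}\left( {\mathrm{M},\left\lbrack x, y\right\rbrack }\right) = {\mathrm{M}}_{\langle \left\lbrack x\right\rbrack \rangle ,\operatorname{exit}} \cdot 1 = 0.1, \qquad \operatorname{forw}\left( {\mathrm{M},\left\lbrack x, y\right\rbrack }\right) = {\mathrm{M}}_{\langle \left\lbrack x\right\rbrack \rangle ,\text{enter}} \cdot \frac{{p}_{\langle \left\lbrack y\right\rbrack \rangle }}{{p}_{{\mathrm{M}}_{\langle \left\lbrack x\right\rbrack \rangle }}} = 0.2 \cdot \frac{0.05}{0.25} = 0.04.
$$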
109
+ We illustrate these ideas in a generic example (figure 3). In short, we define map equation similarity,
110
+
111
+ $$
112
+ \operatorname{MapSim}\left( {M,u,v}\right) = \operatorname{rev}\left( {{\mathrm{M}}_{\langle p\rangle },\operatorname{addr}\left( {{\mathrm{M}}_{\langle p\rangle },u}\right) }\right) \cdot \operatorname{forw}\left( {{\mathrm{M}}_{\langle p\rangle },\operatorname{addr}\left( {{\mathrm{M}}_{\langle p\rangle },v}\right) }\right) , \tag{5}
113
+ $$
114
+
115
+ where $p$ is the longest common prefix shared by the addresses of $u$ and $v$ in the partition tree defined by $\mathrm{M}$ . To express map equation similarity in terms of description length, we take the $- {\log }_{2}$ of MapSim and regard pairs of nodes that yield a shorter description length as more similar.
116
+
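As a worked example, under the illustrative rates used after equations 3 and 4, $\operatorname{rev} = 0.1$ and $\operatorname{forw} = 0.04$ give $\operatorname{MapSim} = 0.1 \cdot 0.04 = 0.004$ , corresponding to a description length of $-\log_2 0.004 \approx 8$ bits.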
117
+ MapSim is asymmetric since module entry and exit rates are, in general, different, and $u$ and $v$ can have different visit rates. MapSim is zero if one node is in a disconnected component; the exit rate for regions without out-links is zero, so the corresponding description length is infinitely long. This issue can be addressed with the regularised map equation [39], which handles networks with incomplete observations by adding weak links between all pairs of nodes, connecting all modules with modelled link strengths that depend on each node's connection patterns.
118
+
119
+ We calculate node similarities in three steps: (i) inferring a network's community structure with Infomap [40], a greedy, search-based optimisation algorithm for the map equation, (ii) representing the corresponding coding scheme in a suitable data structure, and (iii) using MapSim to compute similarities based on the coding scheme. The overall approach is illustrated in figs. 1-3 and algorithm 1.
120
+
121
+ Algorithm 1: Pseudo-code of function MapSim calculating the similarity score for a node pair (u, v).
122
+
123
+ Input: graph $G$ and pair of nodes (u, v)
124
+
125
+ Output: similarity score of (u, v)
126
+
127
+ // Use Infomap to construct coding tree for compression
128
+
129
+ modules = infomap.minimiseMapEquation(G)
130
+
131
+ tree = buildPartitionTree(G, modules)
132
+
133
+ p = longestCommonPrefix(tree, u, v)
134
+
135
+ tree${}_{\langle p\rangle}$ = smallestSubtree(tree, p)
136
+
137
+ // calculate code length of random walks from $u$ to $v$
138
+
139
+ addrU = addr(tree${}_{\langle p\rangle}$, u)
140
+
141
+ addrV = addr(tree${}_{\langle p\rangle}$, v)
142
+
143
+ revRate = rev(tree${}_{\langle p\rangle}$, addrU)
144
+
145
+ fwdRate = forw(tree${}_{\langle p\rangle}$, addrV)
146
+
147
+ return $-{\log}_{2}$(revRate $\cdot$ fwdRate)
148
+
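To make algorithm 1 and equations 3-5 concrete, the following minimal Python sketch computes MapSim on a hand-built hierarchical partition. All names (Module, rev_rate, forw_rate, mapsim) and the example rates are illustrative only; in practice, the per-module entry, exit, and codebook use rates and the node visit rates would be read from Infomap's output rather than set by hand.

```python
import math

class Module:
    """One node of the partition tree with the rates MapSim needs."""
    def __init__(self, use_rate, enter=0.0, exit=0.0):
        self.use_rate = use_rate   # p_M: codebook use rate of this module
        self.enter = enter         # module entry rate
        self.exit = exit           # module exit rate
        self.children = {}         # address -> sub-Module
        self.visit = {}            # address -> visit rate of a leaf node

def rev_rate(module, address):
    """Equation 3: rate of walking from the node at `address` up to the module root."""
    head, *tail = address
    if not tail:
        return 1.0                              # the last step up is free
    child = module.children[head]
    return child.exit * rev_rate(child, tail)

def forw_rate(module, address):
    """Equation 4: rate of walking from the module root down to the node at `address`."""
    head, *tail = address
    if not tail:
        return module.visit[head] / module.use_rate
    child = module.children[head]
    return child.enter * forw_rate(child, tail)

def mapsim(addr_u, addr_v, root):
    """Equation 5: strip the longest common prefix, then rev from u and forw to v."""
    i = 0
    while i < min(len(addr_u), len(addr_v)) and addr_u[i] == addr_v[i]:
        i += 1
    sub = root                                  # descend to the smallest common module
    for a in addr_u[:i]:
        sub = sub.children[a]
    return rev_rate(sub, addr_u[i:]) * forw_rate(sub, addr_v[i:])

# Toy two-level partition: two modules with two nodes each (illustrative rates).
root = Module(use_rate=1.0)
m1 = Module(use_rate=0.4, enter=0.1, exit=0.1)
m2 = Module(use_rate=0.4, enter=0.1, exit=0.1)
m1.visit = {1: 0.2, 2: 0.2}
m2.visit = {1: 0.2, 2: 0.2}
root.children = {1: m1, 2: m2}

similarity = mapsim([1, 1], [2, 2], root)        # u in module 1, v in module 2
description_length = -math.log2(similarity)      # as in the last line of algorithm 1
```

Here, mapsim([1, 1], [2, 2], root) multiplies module 1's exit rate by module 2's entry rate and by the relative visit rate of $v$ inside module 2, mirroring equation 5.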
149
+ § 4 EXPERIMENTAL VALIDATION
150
+
151
+ We evaluate the performance of MapSim in unsupervised, similarity-based link prediction for 47 real-world networks, 35 directed (table 1) and 12 undirected (table 3), retrieved from Netzschleuder [41] and Konect [42]. Details of the directed and undirected networks are shown in tables 2 and 4, respectively. Our analysis is based on a Python implementation available on GitHub ${}^{1}$ , building on Infomap, a fast and greedy search algorithm for minimising the map equation with an open source implementation in C++ [40, 43]. As baselines, we use four random-walk-based embedding methods: DeepWalk [6], node2vec [7], LINE [8], and NERD [15], using the respective authors' implementations. We also include results for MapSim based on the one-module partition for comparison. Adopting the argument by [7], we exclude graph factorisation methods and simple local similarity scores because they have already been shown to be inferior to node2vec. We include NERD because it is a recent random walk-based embedding method proposed for directed link prediction with higher reported performance than other walk-based embeddings [15].
152
+
153
+ § 4.1 UNSUPERVISED LINK PREDICTION
154
+
155
+ Different from works that use graph embeddings for supervised link prediction, we address unsupervised link prediction. Like Goyal and Ferrara [2] and Khosla et al. [15], we take a similarity-based approach that does not require training a classifier using a set of negative and positive samples. Different from, for example, Grover and Leskovec [7], we compute similarity scores based on node embeddings, rather than applying a supervised classifier to multi-dimensional features computed for node pairs. To simplify the comparison with the recent graph embedding NERD specifically designed for directed networks, we adopt the approach by Khosla et al. [15] and calculate node similarities as the sigmoid over the feature vectors' dot product.
156
+
157
+ ${}^{1}$ Link not included for double-blind review.
158
+
159
+ To assess how different embedding techniques generalise across data sets, we purposefully refrained from hyperparameter tuning. We chose a single set of hyperparameters for each method, informed by the default parameters given by the respective authors and recent surveys' discussions regarding which hyperparameter values generally provide good link prediction performance. For DeepWalk and node2vec, we sample $r = {80}$ random walks of length $l = {40}$ per node, and use a window size of $w = {10}$ . For both methods, the underlying word embedding is trained using the default model parameters fixed by the authors, $\mathit{skipgram} = 1$ , $k = {10}$ , and $\mathit{mincount} = 0$ . For node2vec we set the return parameter to $p = 1$ . Since for $q = p = 1$ node2vec is identical to DeepWalk, we use $q = 4$ , which was found to provide good performance for link prediction [2]. We run LINE with first-order $\left( {\mathrm{{LINE}}}_{1}\right)$ , second-order $\left( {\mathrm{{LINE}}}_{2}\right)$ , and combined first-and-second-order proximity $\left( {\mathrm{{LINE}}}_{1 + 2}\right)$ , use 1,000 samples per node, and $s = 5$ negative samples. For NERD, we use 800 samples and $\kappa = 3$ negative samples per node. We set the number of neighbourhood nodes to $n = 1$ , as suggested by the authors for link prediction. We use $d = {128}$ dimensions for all embeddings. Since MapSim is a non-parametric method, it does not require setting any hyperparameters. However, to avoid local optima when heuristically minimising the map equation, we run Infomap 100 times and select the partition with the shortest description length.
160
+
161
+ We use 5-fold cross-validation to split links into train and test sets, treating weighted links as indivisible. We calculate the node embedding (for MapSim the coding tree) in the training network, derive predictions based on node similarities, and evaluate them based on the links in the validation set. For each fold, we restrict the resulting training network to its largest weakly connected component in directed networks, and largest connected component in undirected networks, respectively. For a validation set with $k$ positive links, we sample $k$ negative links uniformly at random, and calculate scores for all ${2k}$ links. In undirected networks, for each positive link (u, v), we also consider (v, u) as positive, and, therefore, sample two negative links per positive link. Varying the discrimination threshold, we obtain a receiver operating characteristic (ROC) per fold, and calculate the area under the curve (AUC). Detailed results, including average and worst-case performance, are shown in tables 2 and 4; we also report precision-recall performance (table 5). We include MapSim based on the one-module partition in the results and note that it performs better than using a modular partition in some cases: this suggests that the network does not have a strong community structure, which could be addressed with the regularised map equation [39]. When mentioning MapSim in the following, we refer to using modular partitions.
162
+
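The following sketch illustrates this evaluation protocol for a single fold, assuming networkx graphs and a generic score(u, v) similarity function (for example MapSim, or the sigmoid of an embedding dot product); the helper name and its details are illustrative rather than the evaluation code behind the reported results.

```python
import random
from sklearn.metrics import roc_auc_score

def evaluate_fold(G_full, G_train, test_links, score):
    """ROC AUC for one fold: held-out positives vs. uniformly sampled non-links."""
    nodes = list(G_train.nodes())
    # keep only test links whose endpoints survive in the training component
    positives = [(u, v) for u, v in test_links if u in G_train and v in G_train]
    negatives = []
    while len(negatives) < len(positives):
        u, v = random.sample(nodes, 2)
        if not G_full.has_edge(u, v):          # negatives are non-links in the full graph
            negatives.append((u, v))
    y_true = [1] * len(positives) + [0] * len(negatives)
    y_score = [score(u, v) for u, v in positives + negatives]
    return roc_auc_score(y_true, y_score)
```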
163
+ Our results show that, on average, MapSim outperforms all baseline methods across the 47 data sets with an increase in average AUC by approximately ${7.7}\%$ and ${7.5}\%$ for directed and undirected networks, respectively. Using a one-sided two-sample $t$ -test, we find that MapSim’s average performance across all networks is significantly higher than that of the best graph embedding method, ${\mathrm{{LINE}}}_{1 + 2}$ , both in directed and undirected networks ( $p \approx {0.008}$ and $p \approx {0.039}$ , respectively). MapSim provides the best performance in 14 of the 47 networks, with a standard deviation of the AUC score less than half of that of the best embedding-based method $\left( {\mathrm{{LINE}}}_{1 + 2}\right)$ . For undirected networks, MapSim achieves the best performance for five of the 12 networks, while none of the embedding methods beats MapSim's performance in more than two networks. We find the largest performance gain in the directed network linux, where MapSim yields an increase of AUC of approximately 22.6% compared to the best embedding (NERD).
164
+
165
+ MapSim’s worst-case performance across all networks is approximately 44% and 33% above the worst-case performance of the best-performing embedding for directed and undirected networks, respectively. MapSim’s performance advantage can be as high as ${84}\%$ , for example $\mathrm{AUC} = {0.988}$ for MapSim on foursquare-friendships-new vs. $\mathrm{AUC} = {0.537}$ for node2vec. While node2vec performs best in the largest directed network, MapSim performs best in the largest undirected network and in some of the small (directed and undirected) networks, suggesting that MapSim works well both for small and large networks.
166
+
167
+ We attribute these encouraging results to multiple features of our method: different from graph embedding techniques that require downstream dimensionality reduction, MapSim's compression approach implicitly includes model selection and avoids overfitting. Moreover, the representation of nodes in the coding tree is integrated with the optimisation of hierarchical community structures in the network. Due to its non-parametric approach and the use of the analytical map equation, MapSim performs well in the absence of tuning to the specific data set.
168
+
169
+ [Figure 4 plot: time [seconds] on a log scale ( ${10}^{-3}$ to ${10}^{2}$ ) against network size $N$ ( ${10}^{2}$ to ${10}^{6}$ ), with curves for $k = 3$ , $k = 5$ , and $k = 10$ .]
170
+
171
+ Figure 4: Runtime behaviour for inferring the community structure with Infomap and constructing the coding tree for MapSim in synthetic $k$ -regular networks of different sizes.
172
+
173
+ § 4.2 SCALABILITY ANALYSIS
174
+
175
+ We analyse MapSim's scalability in synthetically generated networks with modular structure and tunable size and link density. We generate $k$ -regular random graphs with $N$ nodes and (mean) degree $k$ . To avoid trivial configurations where a modular structure is absent, we create a network by first generating two $k$ -regular random graphs with $\frac{N}{2}$ nodes each and then "crossing" two links, one from each of the two graphs, to obtain a single connected network with strong community structure. We then apply Infomap to (i) minimise the map equation and extract the network's modular structure, and (ii) construct the coding tree for calculating node similarities. We repeat this 10 times for random networks with different numbers of nodes $N$ and degrees $k$ . The average run times are reported in figure 4, which shows that, for sparse networks, the runtime of MapSim is linear in the size of the network. Edler et al. [43] report that the theoretical asymptotic bound of computational complexity for the optimisation of the map equation is in $\mathcal{O}\left( {N\log N}\right)$ , which is the same as for vector space embedding techniques like node2vec and DeepWalk ${}^{2}$ . Thus, MapSim does not entail higher computational complexity compared to popular graph embeddings. This makes it an interesting choice for practitioners looking for a simple and scalable method that works well in small, large, directed, and undirected networks.
176
+
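A minimal sketch of how such benchmark graphs can be generated with networkx is shown below; the function name and the choice of which edges to cross are illustrative, and connectivity within each half is assumed rather than enforced.

```python
import networkx as nx

def crossed_regular_graph(N, k, seed=0):
    """Two k-regular random graphs of N/2 nodes each, joined by crossing one edge from each half."""
    g1 = nx.random_regular_graph(k, N // 2, seed=seed)
    g2 = nx.random_regular_graph(k, N // 2, seed=seed + 1)
    G = nx.disjoint_union(g1, g2)            # g2's nodes are relabelled to N/2 .. N-1
    a, b = next(iter(g1.edges()))            # one edge from the first half
    c, d = next(iter(g2.edges()))            # one edge from the second half
    c, d = c + N // 2, d + N // 2            # indices after the union
    G.remove_edge(a, b)
    G.remove_edge(c, d)
    G.add_edge(a, c)                         # "cross" the two removed links,
    G.add_edge(b, d)                         # which preserves k-regularity
    return G
```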
177
+ § 5 CONCLUSION AND OUTLOOK
178
+
179
+ We propose MapSim, a novel information-theoretic approach to compute node similarities based on a modular compression of network flows. Different from vector space embeddings, MapSim represents nodes in a discrete, non-metric space of communities that yields asymmetric similarities suitable to predict links in directed and undirected networks. The resulting similarities can be explained based on a network's modular structure. Using description length minimisation, MapSim naturally accounts for Occam's razor, which avoids overfitting and yields a parsimonious coding tree. Addressing unsupervised link prediction, we compare MapSim to popular embedding-based algorithms on 47 data sets that cover networks from a few hundred to hundreds of thousands of nodes and millions of edges. Our analysis shows that the average performance of MapSim is more than $7\%$ higher than its closest competitor, outperforming all competing methods in 14 of the 47 networks. Taking a new perspective on graph representation learning, our work demonstrates the potential of compression-based methods, with promising applications in other graph learning tasks. Moreover, recent generalisations of the map equation to temporal and higher-order networks [43] suggest that our method can be applied to graphs with non-dyadic or time-stamped relationships.
papers/LOG/LOG 2022/LOG 2022 Conference/QBGYYu3l3dG/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,397 @@
1
+ # Reasoning-Modulated Representations
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g. that observations must obey certain laws of physics) that any "tabula rasa" neural network would need to re-learn from scratch, penalising performance. We incorporate this information into a pre-trained reasoning module, and investigate its role in shaping the discovered representations in diverse self-supervised learning settings from pixels. Our approach paves the way for a new class of representation learning, grounded in algorithmic priors.
12
+
13
+ ## 1 Introduction
14
+
15
+ Neural networks are able to learn policies in environments without access to their specifics [1], generate large quantities of text [2], or automatically fold proteins to high accuracy [3]. However, such "tabula rasa" approaches hinge on having access to substantial quantities of data, from which robust representations can be learned. Without a large training set that spans the data distribution, representation learning is difficult [4-6].
16
+
17
+ Here, we study ways to construct neural networks with representations that are robust, while retaining a data-driven approach. We rely on a simple observation: very often, we have some (partial) knowledge of the underlying dynamics of the data, which could help make stronger predictions from fewer observations. This knowledge, however, usually requires us to be mindful of abstract properties of the data-and such properties cannot always be robustly extracted from natural observations.
18
+
19
+ ![01963f0e-8468-7cbf-8608-24641db11925_0_306_1435_1185_294_0.jpg](images/01963f0e-8468-7cbf-8608-24641db11925_0_306_1435_1185_294_0.jpg)
20
+
21
+ Figure 1: Bouncing balls example (re-printed, with permission, from Battaglia et al. [7]). Natural inputs, $\mathbf{x}$ , correspond to pixel observations. Predicting future observations (natural outputs, $\mathbf{y}$ ) can be simplified as follows: if we are able to extract a set of abstract inputs, $\overline{\mathbf{x}}$ , (e.g. the radius, position and velocity for each ball), the movements in this space must obey the laws of physics.
22
+
23
+ Motivation. Consider the task of predicting the future state of a system of $n$ bouncing balls, from a pixel input $\mathbf{x}$ (Figure 1). Reliably estimating future pixel observations, $\mathbf{y}$ , is a challenging reconstruction task. However, the generative properties of this system are simple. Assuming knowledge of simple abstract inputs (radius, ${r}_{c}$ , position, ${\mathbf{x}}_{c}$ , and velocity, ${\mathbf{v}}_{c}$ ) for every ball, ${\overline{\mathbf{x}}}_{c}$ , the future movements in this abstract space are the result of applying the laws of physics to these
24
+
25
+ low-dimensional quantities. Hence, future abstract states, $\overline{\mathbf{y}}$ , can be computed via a simple algorithm that aggregates pair-wise forces between objects.
26
+
27
+ While this gives us a potentially simpler path from pixel inputs to pixel outputs, via abstract inputs to abstract outputs $\left( {\mathbf{x} \rightarrow \overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}} \rightarrow \mathbf{y}}\right)$ , it still places potentially unrealistic demands on our task setup, every step of the way:
28
+
29
+ $\mathbf{x} \rightarrow \overline{\mathbf{x}}$ : Necessitates either upfront knowledge of how to abstract away $\overline{\mathbf{x}}$ from $\mathbf{x}$ , or a massive dataset of paired $\left( {\mathbf{x},\overline{\mathbf{x}}}\right)$ to learn such a mapping from;
30
+
31
+ $\overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}} :$ Implies that the algorithm perfectly simulates all aspects of the output. In reality, an algorithm may often only give partial context about $\mathbf{y}$ . Further, algorithms often assume that $\overline{\mathbf{x}}$ is provided without error, exposing an algorithmic bottleneck [8]: if $\overline{\mathbf{x}}$ is incorrectly predicted, this will negatively compound in $\overline{\mathbf{y}}$ , hence $\mathbf{y}$ ;
32
+
33
+ $\overline{\mathbf{y}} \rightarrow \mathbf{y} :$ Necessitates a renderer that generates $\mathbf{y}$ from $\overline{\mathbf{y}}$ , or a dataset of paired $\left( {\overline{\mathbf{y}},\mathbf{y}}\right)$ to learn it.
34
+
35
+ We will assume a general setting where none of the above constraints hold: we know that the mapping $\overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}}$ is likely of use to our predictor, but we do not assume a trivial mapping or a paired dataset which would allow us to convert directly from $\mathbf{x}$ to $\overline{\mathbf{x}}$ or from $\overline{\mathbf{y}}$ to $\mathbf{y}$ . Our only remaining assumption is that the algorithm $\overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}}$ can be efficiently computed, allowing us to generate massive quantities of paired abstract input-output pairs, $\left( {\overline{\mathbf{x}},\overline{\mathbf{y}}}\right)$ .
36
+
37
+ Present work. In this setting, we propose Reasoning-Modulated Representations (RMR), an approach that first learns a latent-space processor of abstract data; i.e. a mapping $\overline{\mathbf{x}}\overset{f}{ \rightarrow }\mathbf{z}\overset{P}{ \rightarrow }{\mathbf{z}}^{\prime }\overset{g}{ \rightarrow }\overline{\mathbf{y}}$ , where $\mathbf{z} \in {\mathbb{R}}^{k}$ are high-dimensional latent vectors. $f$ and $g$ are an encoder and decoder, designed to take abstract representations to and from this latent space, and $P$ is a processor network which simulates the algorithm $\overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}}$ in the latent space.
38
+
39
+ We then observe, in the spirit of neural algorithmic reasoning [9], that such a processor network can be used as a drop-in differentiable component for any task where the $\overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}}$ kind of reasoning may be
40
+
41
+ applicable. Hence, we then learn a pipeline $\mathbf{x}\overset{\widetilde{f}}{ \rightarrow }\mathbf{z}\overset{P}{ \rightarrow }{\mathbf{z}}^{\prime }\overset{\widetilde{g}}{ \rightarrow }\mathbf{y}$ , which modulates the representations $\mathbf{z}$ obtained from $\mathbf{x}$ , forcing them to pass through the pre-trained processor network. By doing so, we have ameliorated the original requirement for a massive natural dataset of (x, y) pairs. Instead, we inject knowledge from a massive abstract dataset of $\left( {\overline{\mathbf{x}},\overline{\mathbf{y}}}\right)$ pairs, directly through the pre-trained parameters of $P$ . This has the potential to relieve the pressure on encoders and decoders $\widetilde{f}$ and $\widetilde{g}$ , which we experimentally validate on several challenging representation learning domains.
42
+
43
+ Our contributions can be summarised as follows:
44
+
45
+ - Verifying and extending prior work, we show that meaningful latent-space models can be learned from massive abstract datasets, on physics simulations and Atari 2600 games;
46
+
47
+ - We then show that these latent-space models can be used as differentiable components within neural pipelines that process raw observations. In doing so, we recover a neural network pipeline that relies solely on the existence of massive abstract datasets (which can often be automatically generated).
48
+
49
+ - Finally, we demonstrate early signs of processor reusability: latent-space abstract models can be used in tasks which do not even directly align with their environment, so long as these tasks can benefit from their underlying reasoning procedure.
50
+
51
+ ## 2 Related Work
52
+
53
+ Neural algorithmic reasoning. RMR relies on being able to construct robust latent-space models that imitate abstract reasoning procedures. This makes it well aligned with neural algorithmic reasoning [10], which is concerned with constructing neuralised versions of classical algorithms (typically by learning to execute them in a manner that extrapolates). Leveraging the ideas of algorithmic alignment [11], several known algorithmic primitives have already been successfully neuralised. This includes iterative computation [12, 13], linearithmic algorithms [14], and data structures $\left\lbrack {{15},{16}}\right\rbrack$ . Further, the XLVIN model [8] demonstrates how such primitives can be re-used for data-efficient planning, paving the way for a blueprint [9] that we leverage in RMR as well.
54
+
55
+ Physical simulation with neural networks. Our work also has contact points with prior art in using (graph) neural networks for physics simulations. In fact, there is a tight coupling between algorithmic computation and simulations, as the latter are typically realised using the former. Within this space, abstract GNN models of physics have been proposed by works such as interaction networks [7] and NPE [17], and extended to pixel-based inputs by visual interaction networks [18]. The generalisation power of these models has increased drastically in recent years, with effective models of systems of particles [19] as well as meshes [20] being proposed. Excitingly, it has also been demonstrated that rudimentary laws of physics can occasionally be recovered from the update rules of these GNNs [21], and that they can be used to uncover new physical knowledge [22].
56
+
57
+ Recent work has also explored placing additional constraints on learning-based physical simulators, for example by using Hamiltonian ODE integrators in conjunction with GNN models [23], or by coupling a (non-neural) differentiable physics engine directly to visual inputs and optimizing its parameters via backprop [24, 25].
58
+
59
+ Object-centric and modular models for dynamic environments. RMR with factored latents can be viewed as a form of object-centric neural network, in which visual objects in an image or video are represented as separate latent variables in the model and their temporal dynamics and pairwise interactions are modeled via GNNs or self-attention mechanisms. There is a rich literature on discovering objects and learning their dynamics from raw visual data without supervision, with object-centric models such as R-NEM [26], SQAIR [27], OP3 [28], SCALOR [29], G-SWM [30]. Recent work has explored using contrastive losses [31] in this context or other losses directly in latent space [32]. Related approaches discover and use keypoints [33] to describe objects and even discover causal relations from visual input [34] using neural relational inference [35] in conjunction with a keypoint discovery method. A related line of work integrates attention mechanisms in modular and object-centric models to interface latent variables with visual input, including models such as RMC [36], RIM [37], Slot Attention [38], SCOFF [39], and NPS [40].
60
+
61
+ ## 3 RMR architecture
62
+
63
+ Having provided a high-level overview of RMR and surveyed the relevant related work, we proceed to carefully detail the blueprint of RMR's various components. This will allow us to ground any subsequent RMR experiments on diverse domains directly in our blueprint. Throughout this section, it will be useful to refer to Figure 2 which presents a visual overview of this section.
64
+
65
+ Preliminaries. We assume a set of natural inputs, $\mathcal{X}$ , and a set of natural outputs, $\mathcal{Y}$ . These sets represent the possible inputs and outputs of a target function, $\Phi : \mathcal{X} \rightarrow \mathcal{Y}$ , which we would like to learn based on a (potentially small) dataset of input-output pairs, (x, y), where $\mathbf{y} = \Phi \left( \mathbf{x}\right)$ .
66
+
67
+ We further assume that the inner workings of $\Phi$ can be related to an algorithm, $A : \overline{\mathcal{X}} \rightarrow \overline{\mathcal{Y}}$ . The algorithm operates over a set of abstract inputs, $\overline{\mathcal{X}}$ , and produces outputs from an abstract output set $\overline{\mathcal{Y}}$ . Typically, it will be the case that $\dim \overline{\mathcal{X}} \ll \dim \mathcal{X}$ ; that is, abstract inputs are assumed substantially lower-dimensional than natural inputs. We do not assume the existence of any aligned input pairs $\left( {\mathbf{x},\overline{\mathbf{x}}}\right)$ , and we do not assume that $A$ perfectly explains the computations of $\Phi$ . What we do assume is that $A$ is either known or can be trivially computed, giving rise to a massive dataset of abstract input-output pairs, $\left( {\overline{\mathbf{x}},\overline{\mathbf{y}}}\right)$ , where $\overline{\mathbf{y}} = A\left( \overline{\mathbf{x}}\right)$ .
68
+
69
+ Lastly, we assume a latent space, $\mathcal{Z}$ , and that we can construct neural network components to both encode into and decode from it. Typically, $\mathcal{Z}$ will be a real-valued vector space $\left( {\mathcal{Z} = {\mathbb{R}}^{k}}\right)$ which is high-dimensional; that is, $k > \dim \overline{\mathcal{X}}$ . This ensures that any neural networks operating over $\mathcal{Z}$ are not vulnerable to bottleneck effects.
70
+
71
+ Note that either the natural or abstract input set may be factorised, e.g., into objects; in this case, we can accordingly factorise the latent space, enforcing $\mathcal{Z} = {\mathbb{R}}^{n \times k}$ , where $n$ is the assumed maximal number of objects (typically a hyperparameter of the models if not known upfront).
72
+
73
+ Abstract pipeline. RMR training proceeds by first learning a model of the algorithm $A$ , which is bound to pass through a latent-space representation. That is, we learn a neural network approximator $g\left( {P\left( {f\left( \overline{\mathbf{x}}\right) }\right) }\right) \approx A\left( \overline{\mathbf{x}}\right)$ , which follows the encode-process-decode paradigm [41]. It consists of the following three building blocks: Encoder, $f : \overline{\mathcal{X}} \rightarrow \mathcal{Z}$ , tasked with projecting the abstract inputs into the latent space; Processor, $P : \mathcal{Z} \rightarrow \mathcal{Z}$ , simulating individual steps of the algorithm in the
74
+
75
+ ![01963f0e-8468-7cbf-8608-24641db11925_3_314_201_1172_917_0.jpg](images/01963f0e-8468-7cbf-8608-24641db11925_3_314_201_1172_917_0.jpg)
76
+
77
+ Figure 2: Reasoning-modulated representation learner (RMR).
78
+
79
+ latent space; Decoder, $g : \mathcal{Z} \rightarrow \overline{\mathcal{Y}}$ , tasked with projecting latents back into the abstract output space. Such a pipeline is now widely used both in neural algorithmic reasoning [12] and learning physical simulations [19], and can be trained end-to-end with gradient descent.
80
+
81
+ For reasons that will become apparent, it is favourable for most of the computational effort to be performed by $P$ . Accordingly, encoders and decoders are often designed to be simple learnable linear projections, while processors tend to be either deep MLPs or graph neural networks-depending on whether the latent space is factorised into nodes.
82
+
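As a concrete illustration, a minimal PyTorch sketch of such an abstract pipeline is given below, assuming a factorised latent space $\mathcal{Z} = {\mathbb{R}}^{n \times k}$ . For simplicity the processor here is a plain MLP with a skip connection over the concatenated slots; in the experiments $P$ is typically an MPNN over the slots, and all class and argument names are illustrative.

```python
import torch
import torch.nn as nn

class AbstractPipeline(nn.Module):
    """Encode-process-decode over abstract inputs: g(P(f(x_bar))) ~ A(x_bar)."""
    def __init__(self, abstract_dim, latent_dim, n_slots):
        super().__init__()
        self.f = nn.Linear(abstract_dim, latent_dim)        # linear encoder
        self.P = nn.Sequential(                             # latent-space processor
            nn.Linear(n_slots * latent_dim, n_slots * latent_dim),
            nn.ReLU(),
            nn.Linear(n_slots * latent_dim, n_slots * latent_dim),
        )
        self.g = nn.Linear(latent_dim, abstract_dim)        # linear decoder

    def forward(self, x_bar):                               # x_bar: [B, n_slots, abstract_dim]
        z = self.f(x_bar)                                   # [B, n_slots, latent_dim]
        z_next = self.P(z.flatten(1)).view_as(z) + z        # processor with skip connection
        return self.g(z_next)                               # [B, n_slots, abstract_dim]

# Example: 10 balls with 6 abstract features each (e.g. a short history of 2D positions).
model = AbstractPipeline(abstract_dim=6, latent_dim=64, n_slots=10)
y_bar_pred = model(torch.randn(8, 10, 6))                   # trained e.g. with MSE against y_bar
```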
83
+ Natural pipeline. Once an appropriate processor, $P$ , has been learned, it may be observed that it corresponds to a highly favourable component in our setting. Namely, we can relate its operations to the algorithm $A$ , and since it stays high-dimensional, it is a differentiable component we can easily plug into other neural networks without incurring any bottleneck effects. This insight was originally recovered in XLVIN [8], where it yielded a generic implicit planner. As an ablation, we have also rediscovered the bottleneck effect in our settings; see Appendix C. We now leverage similar insights for general representation learning tasks.
84
+
85
+ On a high level, what we need to do is simple and elegant: swap out $f$ and $g$ for natural encoders and decoders, $\widetilde{f} : \mathcal{X} \rightarrow \mathcal{Z}$ and $\widetilde{g} : \mathcal{Z} \rightarrow \mathcal{Y}$ , respectively. We are then able to learn a function $\widetilde{g}\left( {P\left( {\widetilde{f}\left( \mathbf{x}\right) }\right) }\right) \approx \Phi \left( \mathbf{x}\right)$ , which is once again to be optimised through gradient descent. We would like $P$ to retain its semantics during training, and therefore it is typically kept frozen in the natural pipeline. Note that $P$ might not perfectly represent $A$ which in turn might not perfectly represent $\Phi$ . While we rely on a skip connection in our implementation of $P$ , it has no learnable parameters and does not offer the system the ability to learn a correction of $P$ in the natural setting. Our choice is motivated by the desire to both maintain the semantics and interpretability of $P$ and to make this processor a bottleneck forcing the model to rely on it. We show empirically that our pipeline is surprisingly robust to imperfect $P$ models even with weak (linear) encoders/decoders.
86
+
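A minimal sketch of this swap, again with illustrative names, is shown below: the pre-trained processor is wrapped with its parameters frozen, so only the natural encoder and decoder receive gradients (action conditioning, used in the Atari setting, is omitted here).

```python
import torch.nn as nn

class NaturalPipeline(nn.Module):
    """Natural pipeline g~(P(f~(x))) with a frozen, pre-trained processor P."""
    def __init__(self, natural_encoder, pretrained_processor, natural_decoder):
        super().__init__()
        self.f_tilde = natural_encoder          # e.g. a CNN mapping pixels to latent slots
        self.P = pretrained_processor           # assumed to map latents to latents
        self.g_tilde = natural_decoder          # latents -> natural outputs
        for p in self.P.parameters():
            p.requires_grad = False             # keep the algorithmic prior fixed

    def forward(self, x):
        z = self.f_tilde(x)
        return self.g_tilde(self.P(z))
```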
87
+ It is worth noting several potential challenges that may arise while training the natural pipeline, especially if the training data for it is sparsely available. We also suggest remedies for each:
88
+
89
+ - If $\mathrm{x}$ and/or $\mathrm{y}$ exhibit any nontrivial geometry, simple linear projections will rarely suffice for $\widetilde{f}$ and $\widetilde{g}$ . For example, our natural inputs will often be pixel-based, necessitating a convolutional neural network for $\widetilde{f}$ .
90
+
91
+ - Further, since the parameters of $P$ are kept frozen, $\widetilde{f}$ is left with a challenging task of mapping natural inputs into an appropriate manifold that $P$ can meaningfully operate over. While we demonstrate clear empirical evidence that such meaningful mappings definitely occur, we remark that its success may hinge on carefully tuning the hyperparameters of $\widetilde{f}$ .
92
+
93
+ - A very common setting assumes that the abstract inputs and latents are factorised into objects, but the natural inputs are not. In this case, $\widetilde{f}$ is tasked with predicting appropriate object representations from the natural inputs. This is known to be a challenging feat [42], but can be successfully performed. Sometimes arbitrarily factorising the feature maps of a CNN [31] is sufficient, while at other times, models such as slot attention [38] may be required.
94
+
95
+ - One corollary of using automated object extractors for $\widetilde{f}$ is that it’s very difficult to enforce their slot representations to line up in the same way as in the abstract inputs. This implies that $P$ should be permutation equivariant (and hence motivates using a GNN for it).
96
+
97
+ ## 4 RMR for bouncing balls
98
+
99
+ To evaluate the capability of the RMR pipeline for transfer from the abstract space to the pixel space, we apply it to the "bouncing balls" problem. The bouncing balls problem is an instance of a physics simulation problem, where the task is to predict the next state of an environment in which multiple balls are bouncing between each other and a bounding box. Though this problem has been studied in the context of physics simulation from (abstract) trajectories [7] and from (natural) videos $\left\lbrack {{18},{26},{43}}\right\rbrack$ , here we focus on the aptitude of RMR to transfer learned representations from trajectories to videos.
100
+
101
+ Our results affirm that strong abstract models can be trained on such tasks, and that including them in a video pipeline induces more robust representations. See Appendix A for more details on hyperparameters and experimental setup.
102
+
103
+ Preliminaries. Here, trajectories are represented by 2D coordinates of 10 balls through time, defining our abstract inputs and outputs $\overline{\mathcal{X}} = \overline{\mathcal{Y}} = {\mathbb{R}}^{{10} \times 2}$ . We slice these trajectories into a series of moving windows containing the input, ${\overline{\mathbf{x}}}^{ * }$ , spanning a history of three previous states, and the target, $\overline{\mathbf{y}}$ , representing the next state. We obtain these trajectories from a 3D simulator (MuJoCo [44]), together with their short-video renderings, which represent our natural input and output space $\mathcal{X} = \mathcal{Y} = {\mathbb{R}}^{{64} \times {64} \times 3}$ . Our goal is to train an RMR abstract model on trajectories and transfer learned representations to improve a dynamics model trained on these videos.
104
+
105
+ Abstract pipeline. So as to model the dynamics of trajectories, we closely follow the RMR desiderata for the abstract model. We set $f$ to a linear projection over the input concatenation, $P$ to a Message Passing Neural Network (MPNN), following previous work [7,19], and $g$ to a linear projection.
106
+
107
+ Our model learns a transition function $g\left( P\left( f\left( {\overline{\mathbf{x}}}^{ * }\right) \right) \right) \approx \overline{\mathbf{y}}$ , supervised using Mean Squared Error (MSE) over ball positions in the next step. It achieves an MSE of ${4.59} \times {10}^{-4}$ , which, evaluated qualitatively, demonstrates the ability of the model to predict physically realistic behavior when unrolled for 10 steps (the model is trained on 1-step dynamics only). Next, we take the processor $P$ from the abstract pipeline and re-use it in the natural pipeline.
108
+
109
+ Natural pipeline. Here we evaluate whether the pre-trained RMR processor can be reused for learning the dynamics of the bouncing balls from videos. The pixel-based encoder $\widetilde{f}$ is a concatenation of per-input-image Slot Attention models [38] passed through a linear layer, and we use a Broadcast Decoder [45] as the pixel-based decoder $\widetilde{g}$ .
110
+
111
+ The full model is a transition function $\widetilde{g}\left( P\left( \widetilde{f}\left( {\mathbf{x}}^{ * }\right) \right) \right) \approx \mathbf{y}$ , supervised by a pixel reconstruction loss over the next-step image. We compare the performance of the RMR model with a pre-trained processor
112
+
113
+ $P$ against a baseline in which $P$ is trained fully end-to-end. The RMR model achieves an MSE of ${7.94} \pm {0.41}\left( {\times {10}^{-4}}\right)$ , whereas the baseline achieves ${9.47} \pm {0.24}\left( {\times {10}^{-4}}\right)$ . We take a qualitative look at the reconstruction rollout of the RMR model in Figure 3, and expose the algorithmic bottleneck properties for this task in Appendix C.
114
+
115
+ ![01963f0e-8468-7cbf-8608-24641db11925_5_311_373_1180_299_0.jpg](images/01963f0e-8468-7cbf-8608-24641db11925_5_311_373_1180_299_0.jpg)
116
+
117
+ Figure 3: RMR for bouncing balls reconstruction rollout. States marked in green are the natural input, followed by the reconstructed output. The states below the reconstruction are the ground truth.
118
+
119
+ ## 5 Contrastive RMR for Atari
120
+
121
+ We evaluate the potential of our RMR pipeline for state representation learning on the Atari 2600 [46]. We find RMR applicable here because there is a potential wealth of information that can be obtained about the Atari's operation, namely by inspecting its RAM traces.
122
+
123
+ Preliminaries. Accordingly, we will define our set of abstract inputs and outputs as Atari RAM matrices. Given that the Atari has 128 bytes of memory, $\overline{\mathcal{X}} = \overline{\mathcal{Y}} = {\mathbb{B}}^{{128} \times 8}$ (where $\mathbb{B} = \{ 0,1\}$ is the set of bits). We collect data about how the console modifies the RAM by acting in the environment and recording the trace of RAM arrays we observe. These traces will be of the form $\left( {\overline{\mathbf{x}}, a,\overline{\mathbf{y}}}\right)$ which signify that the agent’s initial RAM state was $\overline{\mathbf{x}}$ , and that after performing action $a \in \mathcal{A}$ , its RAM state was updated to $\overline{\mathbf{y}}$ . We assume that $a$ is encoded as an 18-way one-hot vector.
124
+
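As a small illustration of this abstract input format, the snippet below (using numpy, with an illustrative function name) turns a 128-byte RAM snapshot into the ${\mathbb{B}}^{{128} \times 8}$ bit matrix; the MSB-first bit order is an assumption of this sketch.

```python
import numpy as np

def ram_to_bits(ram_bytes):
    """Convert a 128-byte Atari RAM snapshot into a 128 x 8 binary matrix."""
    ram = np.asarray(ram_bytes, dtype=np.uint8)      # shape (128,)
    return np.unpackbits(ram).reshape(128, 8)        # one row of 8 bits per byte

bits = ram_to_bits(np.zeros(128, dtype=np.uint8))    # all-zero example RAM state
```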
125
+ We would like to leverage any reasoning module obtained over RAM states to support representation learning from raw pixels. Accordingly, our natural inputs, $\mathcal{X}$ , are pixel arrays representing the Atari’s framebuffer.
126
+
127
+ Mirroring prior work, we perform contrastive learning directly in the latent space, and set $\mathcal{Y} = \mathcal{Z}$ ; that is, our natural outputs correspond to an estimate of the "updated" latents after taking an action. All our models use latent representations of 64 dimensions per slot, meaning $\mathcal{Z} = {\mathbb{R}}^{{128} \times {64}}$ .
128
+
129
+ We note that it is important to generate a diverse dataset of experiences in order to train a robust RAM model. To simulate a dataset which might be gathered by human players of varying skill, we sample our data using the 32 policy heads of a pre-trained Agent57 [47]. Each policy head collects data over three episodes in the studied games. Note that this implies a substantially more challenging dataset than the one reported by [48], wherein data was collected by a purely random policy, which may well fail to explore many relevant regions of the games.
130
+
131
+ Abstract pipeline. Firstly, we set out to verify that it is possible to train nontrivial Atari RAM transition models. The construction of this abstract experiment follows almost exactly the abstract RMR setup: $f$ and $g$ are appropriately sized linear projections, while $P$ needs to take into account which action was taken when updating the latents. To simplify the implementation and allow further model re-use, we treat the action $a$ as part of $P$ 's inputs. See Appendix F for detailed equations.
132
+
133
+ This implies that our transition model learns a function $g\left( {P\left( {f\left( \bar{\mathbf{x}}\right) , a}\right) }\right) \approx \overline{\mathbf{y}}$ . We supervise this model using binary cross-entropy to predict each bit of the resulting RAM state. Since RAM transitions are assumed deterministic, we assume a fully Markovian setup and learn 1-step dynamics.
134
+
135
+ For brevity, we detail our exact hyperparameters and results for each Atari game considered in Appendix B. Our results establish the message passing neural network (MPNN) [49] as a highly potent processor network in Atari: it ranked best in 17 out of 24 games considered, compared to MLPs and Deep Sets [50]. Accordingly, we will focus on leveraging pre-trained MPNN processors for the next phase of the RMR pipeline.
136
+
137
+ Natural pipeline. We now set out to evaluate whether our pre-trained RMR processors can be meaningfully re-used by an encoder in a pixel-based contrastive learning pipeline.
138
+
139
+ Table 1: Natural modelling results for Atari 2600. Bit-level ${\mathrm{F}}_{1}$ reported for slots with high entropy, as in Anand et al. [48]. Results assumed significant at $p < {0.05}$ (one-sided paired Wilcoxon test).
140
+
141
+ <table><tr><td>Game</td><td>Agent57</td><td>C-SWM</td><td>$\mathbf{{RMR}}$</td><td>$p$ -value</td></tr><tr><td>Asteroids</td><td>${0.514} \pm {0.001}$</td><td>${0.582} \pm {0.009}$</td><td>0.593 $\pm {0.004}$</td><td>$< {10}^{-5}$</td></tr><tr><td>Battlezone</td><td>${0.351} \pm {0.003}$</td><td>${0.592} \pm {0.005}$</td><td>${0.589} \pm {0.007}$</td><td>0.056</td></tr><tr><td>Berzerk</td><td>${0.454} \pm {0.084}$</td><td>${0.463} \pm {0.053}$</td><td>${0.470} \pm {0.025}$</td><td>0.364</td></tr><tr><td>Bowling</td><td>${0.554} \pm {0.004}$</td><td>${0.944} \pm {0.006}$</td><td>${0.946} \pm {0.003}$</td><td>0.071</td></tr><tr><td>Boxing</td><td>${0.558} \pm {0.002}$</td><td>${0.667} \pm {0.012}$</td><td>${0.669} \pm {0.011}$</td><td>0.215</td></tr><tr><td>Breakout</td><td>${0.657} \pm {0.001}$</td><td>${0.836} \pm {0.009}$</td><td>0.852±0.008</td><td>$< {10}^{-5}$</td></tr><tr><td>Demon Attack</td><td>${0.539} \pm {0.004}$</td><td>${0.653} \pm {0.006}$</td><td>0.658 $\pm {0.004}$</td><td>0.002</td></tr><tr><td>Freeway</td><td>${0.424} \pm {0.052}$</td><td>${0.912} \pm {0.025}$</td><td>${\mathbf{{0.919}}}_{\pm {0.035}}$</td><td>0.032</td></tr><tr><td>Frostbite</td><td>${0.405} \pm {0.001}$</td><td>${0.580}_{\pm {0.025}}$</td><td>0.594 $\pm {0.016}$</td><td>0.035</td></tr><tr><td>H.E.R.O.</td><td>${0.481} \pm {0.001}$</td><td>${0.729} \pm {0.026}$</td><td>0.779±0.021</td><td>$< {10}^{-5}$</td></tr><tr><td>Montezuma's Revenge</td><td>${0.743} \pm {0.003}$</td><td>${0.824} \pm {0.012}$</td><td>${0.821} \pm {0.016}$</td><td>0.156</td></tr><tr><td>Ms. Pac-Man</td><td>${0.506} \pm {0.001}$</td><td>${0.599} \pm {0.004}$</td><td>0.602±0.006</td><td>0.038</td></tr><tr><td>Pitfall!</td><td>${0.495} \pm {0.003}$</td><td>0.626 $\pm {0.015}$</td><td>${0.603} \pm {0.010}$</td><td>$< {10}^{-5}$</td></tr><tr><td>Pong</td><td>${0.392} \pm {0.001}$</td><td>${0.750} \pm {0.016}$</td><td>0.762±0.010</td><td>0.001</td></tr><tr><td>Private Eye</td><td>${0.594} \pm {0.001}$</td><td>${0.863} \pm {0.010}$</td><td>0.867 $\pm {0.008}$</td><td>0.045</td></tr><tr><td>Q*Bert</td><td>${0.536} \pm {0.010}$</td><td>${0.588} \pm {0.015}$</td><td>${0.590} \pm {0.017}$</td><td>0.165</td></tr><tr><td>River Raid</td><td>${0.686} \pm {0.001}$</td><td>${0.762} \pm {0.005}$</td><td>0.764 $\pm {0.007}$</td><td>0.032</td></tr><tr><td>Seaquest</td><td>${0.472} \pm {0.007}$</td><td>${0.634} \pm {0.013}$</td><td>0.653±0.008</td><td>$< {10}^{-5}$</td></tr><tr><td>Skiing</td><td>${0.599} \pm {0.007}$</td><td>${0.766} \pm {0.028}$</td><td>0.775±0.014</td><td>0.174</td></tr><tr><td>Space Invaders</td><td>${0.588} \pm {0.002}$</td><td>${0.719} \pm {0.012}$</td><td>0.761 $\pm {0.006}$</td><td>$< {10}^{-5}$</td></tr><tr><td>Tennis</td><td>${0.533} \pm {0.008}$</td><td>${0.724} \pm {0.007}$</td><td>0.729 $\pm {0.005}$</td><td>0.007</td></tr><tr><td>Venture</td><td>${0.567} \pm {0.001}$</td><td>${0.632} \pm {0.005}$</td><td>${0.633} \pm {0.004}$</td><td>0.392</td></tr><tr><td>Video Pinball</td><td>${0.375} \pm {0.011}$</td><td>${0.724} \pm {0.009}$</td><td>0.745±0.008</td><td>$< {10}^{-5}$</td></tr><tr><td>Yars’ Revenge</td><td>${0.608} \pm {0.001}$</td><td>${0.715} \pm {0.008}$</td><td>0.751±0.010</td><td>$< {10}^{-5}$</td></tr></table>
142
+
143
+ For our pixel-based encoder $\widetilde{f}$ , we use the same CNN trunk as Anand et al. [48]; however, as we require slot-level rather than flat embeddings, the final layers of our encoder are different. Namely, we apply a $1 \times 1$ convolution computing ${128m}$ feature maps (where $m$ is the number of feature maps per slot). We then flatten the spatial axes, giving every slot $m \times h \times w$ features, which we finally linearly project to 64-dimensional features per slot, aligning with our pre-trained $P$ . Note that setting $m = 1$ recovers exactly the style of object detection employed by C-SWM [31]. Since our desired outputs are themselves latents, $\widetilde{g}$ is a single linear projection to 64 dimensions.
144
+
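A minimal PyTorch sketch of this slot head (names and shapes ours, with the CNN trunk assumed to produce a feature map of fixed spatial size) is:

```python
import torch
import torch.nn as nn

class SlotHead(nn.Module):
    """1x1 conv to 128*m maps, flatten spatial axes per slot, project each slot to 64 dims."""
    def __init__(self, in_channels, m, h, w, n_slots=128, slot_dim=64):
        super().__init__()
        self.m, self.n_slots = m, n_slots
        self.conv1x1 = nn.Conv2d(in_channels, n_slots * m, kernel_size=1)
        self.proj = nn.Linear(m * h * w, slot_dim)

    def forward(self, feats):                           # feats: [B, in_channels, h, w]
        x = self.conv1x1(feats)                         # [B, 128*m, h, w]
        b, _, h, w = x.shape
        x = x.reshape(b, self.n_slots, self.m * h * w)  # one flat feature vector per slot
        return self.proj(x)                             # [B, 128, 64]

slots = SlotHead(in_channels=64, m=4, h=9, w=9)(torch.randn(2, 64, 9, 9))
```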
145
+ Overall, our pixel-based transition model learns a function $\widetilde{g}\left( {P\left( {\widetilde{f}\left( \mathbf{x}\right) , a}\right) }\right) \approx \widetilde{f}\left( \mathbf{y}\right)$ , where $\mathbf{y}$ is the next state observed after applying action $a$ in state $\mathrm{x}$ . To optimise it, we re-use exactly the same TransE-inspired [51] contrastive loss that C-SWM [31] used.
146
+
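A simplified sketch of a TransE-style hinge loss of this kind is shown below; it only approximates the exact C-SWM formulation (scaling constants and the latent translation model are omitted), and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_pred, z_next, z_neg, margin=1.0):
    """Pull predicted next-state latents towards the encoded next frame,
    push latents of a randomly sampled negative state away from it."""
    # z_*: [B, n_slots, slot_dim]; squared distances summed over features,
    # averaged over slots.
    pos = ((z_pred - z_next) ** 2).sum(-1).mean(-1)
    neg = ((z_neg - z_next) ** 2).sum(-1).mean(-1)
    return (pos + F.relu(margin - neg)).mean()
```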
147
+ Once state representation learning concludes, all components are typically thrown away except for the encoder, $\widetilde{f}$ , which is used for downstream tasks. As a proxy for evaluating the quality of the encoder, we train linear classifiers on the concatenation of all slot embeddings obtained from $\widetilde{f}$ to predict individual RAM bits, exactly as in the abstract model case. Note that we have not violated our assumption that paired $\left( {\mathbf{x},\overline{\mathbf{x}}}\right)$ samples will not be provided while training the natural model: in this phase, the encoder $\widetilde{f}$ is frozen, and gradients can only flow into the linear probe.
148
+
149
+ Our comparison for this experiment, evaluating our RMR pipeline against an identical architecture with an unfrozen $P$ (equivalent to C-SWM [31]), is provided in Table 1. For each of 20 random seeds, we feed identical batches to both models, hence we can perform a paired Wilcoxon test to assess the statistical significance of any observed differences on validation episodes. The results are in line with our hypothesis: representations learnt by RMR are significantly better $\left( {p < {0.05}}\right)$ on 15 out of 24 games, significantly worse only on Pitfall!, and indistinguishable from C-SWM's on the others. This is despite the fact that their architectures are identical, indicating that the pre-trained abstract model induces stronger representations for predicting the underlying data factors. As was the case for bouncing balls, we expose the algorithmic bottleneck here too; see Appendix C.
150
+
151
+ As a relevant initial baseline, and to emphasise the difficulty of our task, we also include in Table 1 the performance of the latent embeddings extracted from a pre-trained Agent57 [47]. These embeddings are, perhaps unsurprisingly, substantially worse than both the RMR and C-SWM models. As Agent57 was trained on a reward-maximising objective, its embeddings are likely to capture the controllable aspects of the input, while filtering out the environment's background.
152
+
153
+ Abstract model transfer. While the results above indicate that endowing a self-supervised Atari representation learner with knowledge of the underlying game's RAM transitions yields stronger representations, one may still argue that this constitutes a form of "privileged information". This is due to the fact we knew upfront which game we were learning the representations for, and hence could leverage the specific RAM dynamics of this game.
154
+
155
+ ![01963f0e-8468-7cbf-8608-24641db11925_7_329_707_1148_1149_0.jpg](images/01963f0e-8468-7cbf-8608-24641db11925_7_329_707_1148_1149_0.jpg)
156
+
157
+ Figure 4: A hierarchically clustered heatmap depiction of the transfer results, where abstract models trained on the Train Game are transferred to the Test Games. The results are summarised by a Wilcoxon test, denoting where the performance of RMR is better than, does not differ from, or is worse than C-SWM's. In each cell, the mean relative $\%$ improvement is noted. The significance cutoff is $p < {0.05}$ .
158
+
159
+ In the final experiment, we study a more general setting: we are given access to RAM traces showing us how the Atari console manipulates its memory in response to player input, but we cannot guarantee these traces came from the same game that we are performing representation learning over. Can we still effectively leverage this knowledge?
160
+
161
+ Specifically, we test abstract model transfer in the following way: first, we take a pre-trained abstract processor network $P$ from one game (the "train game") and freeze it. Then, using this processor, we perform the aforementioned natural pipeline training and testing over frames from another game (the "test game"). We then evaluate whether the recovered performance improves over the C-SWM model with unfrozen weights, once again using a paired one-sided Wilcoxon test over 20 seeds to ascertain the statistical significance of any differences observed. The final outcome of our experiment is hence a ${24} \times {24}$ matrix, indicating the quality of abstract model transfer from every game to every other game. Table 1 corresponds to the "diagonal" entries of this matrix.
162
+
163
+ The results, presented in Figure 4, testify to the performance of RMR. Representations learned by RMR transfer better in 64.6% of the train/test game pairs, are indistinguishable from C-SWM in ${33.7}\%$ of the game pairs and perform worse than C-SWM in only 1.7% of game pairs. Therefore, in plentiful circumstances, the answer to our original question is positive: discovering a trace of Atari RAM transitions of unknown origin can often be of high significance for representation learning from Atari pixels, regardless of whether the underlying games match.
164
+
165
+ Qualitative analysis of transfer. While we find this result interesting in and of itself, it also raises interesting follow-up questions. Is representation learning on certain games more prone to being improved just by knowing anything about the Atari console? Figure 4 certainly implies so: several games (such as Seaquest or Space Invaders) have "fully-green" columns, making them "universal recipients". Similarly, we may be interested in the "donor" properties of each game: to what extent are their RAM models useful across a broad range of test games?
166
+
167
+ We study both of the above questions by performing a hierarchical (complete-link) clustering of the rows and columns of the ${24} \times {24}$ matrix of transfer performances, to identify clusters of related donor and recipient games. Both clusterings are marked in Figure 4 (on the sides of the matrix).
168
+
169
+ The analysis reveals several well-formed clusters, from which we are able to make some preliminary observations, based on the properties of the various games. To name a few examples:
170
+
171
+ - The strongest recipients (e.g. Yars' Revenge, H.E.R.O., Seaquest, Space Invaders and Asteroids) tend to include elements of "shooting" in their gameplay.
172
+
173
+ - Conversely, the weakest recipients (River Raid, Berzerk, Private Eye, Ms. Pac-Man and Pitfall!) are all games in which movement is generally unrestricted across most of the screen, indicating a larger range of possible coordinate values to model.
174
+
175
+ - Pong, Breakout, Battlezone and Skiing cluster closely in terms of donor properties, and they are all games in which movement is restricted to one axis only.
176
+
177
+ - Lastly, strong donors (Montezuma’s Revenge, Yars’ Revenge, ${\mathrm{Q}}^{ * }$ Bert and Venture) all feature massive, abrupt changes to the game state (as they feature multiple rooms, for example). This implies that they might generally have seen a more diverse set of transitions. Further, Yars' Revenge features a massive laser which, conveniently, prints an RGB projection of the RAM itself onto the frame.
178
+
179
+ ## 6 Conclusions
180
+
181
+ We presented Reasoning-Modulated Representations (RMR), a novel approach for leveraging background algorithmic knowledge within a representation learning pipeline. By encoding the underlying algorithmic priors as weights of a processor neural network, we alleviate requirements on alignment between abstract and natural inputs and protect our model against bottlenecks. We believe that RMR paves the way to a novel class of algorithmic reasoning-inspired representations, with a high potential for transfer across tasks-a feat that has largely eluded deep reinforcement learning research.
182
+
183
+ ## References
184
+
185
+ [1] Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering
186
+
187
+ atari, go, chess and shogi by planning with a learned model. Nature, 588(7839):604-609, 2020. 1
188
+
189
+ [2] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. 1
190
+
191
+ [3] Andrew W Senior, Richard Evans, John Jumper, James Kirkpatrick, Laurent Sifre, Tim Green, Chongli Qin, Augustin Zídek, Alexander WR Nelson, Alex Bridgland, et al. Improved protein structure prediction using potentials from deep learning. Nature, 577(7792):706-710, 2020. 1
192
+
193
+ [4] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias-variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849-15854, 2019. 1
194
+
195
+ [5] Yann LeCun. The power and limits of deep learning. Research-Technology Management, 61(6): 22-27, 2018.
196
+
197
+ [6] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017. 1
198
+
199
+ [7] Peter W Battaglia, Razvan Pascanu, Matthew Lai, Danilo Rezende, and Koray Kavukcuoglu. Interaction networks for learning about objects, relations and physics. arXiv preprint arXiv:1612.00222, 2016. 1, 3, 5, 14
200
+
201
+ [8] Andreea-Ioana Deac, Petar Veličković, Ognjen Milinkovic, Pierre-Luc Bacon, Jian Tang, and Mladen Nikolic. Neural algorithmic reasoners are implicit planners. Advances in Neural Information Processing Systems, 34, 2021. 2, 4, 13, 16
202
+
203
+ [9] Petar Veličković and Charles Blundell. Neural algorithmic reasoning. Patterns, 2(7):100273, 2021. 2
204
+
205
+ [10] Quentin Cappart, Didier Chételat, Elias Khalil, Andrea Lodi, Christopher Morris, and Petar Veličković. Combinatorial optimization and reasoning with graph neural networks. arXiv preprint arXiv:2102.09544, 2021. 2
206
+
207
+ [11] Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? arXiv preprint arXiv:1905.13211, 2019. 2
208
+
209
+ [12] Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. arXiv preprint arXiv:1910.10593, 2019. 2, 4, 16
210
+
211
+ [13] Hao Tang, Zhiao Huang, Jiayuan Gu, Bao-Liang Lu, and Hao Su. Towards scale-invariant graph-related problem solving by iterative homogeneous graph neural networks. arXiv preprint arXiv:2010.13547, 2020. 2
212
+
213
+ [14] Kārlis Freivalds, Emīls Ozoliņš, and Agris Šostaks. Neural shuffle-exchange networks: sequence processing in O(n log n) time. arXiv preprint arXiv:1907.07897, 2019. 2
214
+
215
+ [15] Heiko Strathmann, Mohammadamin Barekatain, Charles Blundell, and Petar Veličković. Persistent message passing. arXiv preprint arXiv:2103.01043, 2021. 2, 13
216
+
217
+ [16] Petar Veličković, Lars Buesing, Matthew C Overlan, Razvan Pascanu, Oriol Vinyals, and Charles Blundell. Pointer graph networks. arXiv preprint arXiv:2006.06380, 2020. 2, 13
218
+
219
+ [17] Michael B Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A compositional object-based approach to learning physical dynamics. arXiv preprint arXiv:1612.00341, 2016. 3
220
+
221
+ [18] Nicholas Watters, Daniel Zoran, Theophane Weber, Peter Battaglia, Razvan Pascanu, and Andrea Tacchetti. Visual interaction networks: Learning a physics simulator from video. Advances in neural information processing systems, 30:4539-4547, 2017. 3, 5
222
+
223
+ [19] Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter Battaglia. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, pages 8459-8468. PMLR, 2020. 3, 4, 5
224
+
225
+ [20] Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W Battaglia. Learning mesh-based simulation with graph networks. arXiv preprint arXiv:2010.03409, 2020. 3
226
+
227
+ [21] Miles Cranmer, Alvaro Sanchez-Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel, and Shirley Ho. Discovering symbolic models from deep learning with inductive biases. arXiv preprint arXiv:2006.11287, 2020. 3
228
+
229
+ [22] Victor Bapst, Thomas Keck, A Grabska-Barwińska, Craig Donner, Ekin Dogus Cubuk, Samuel S Schoenholz, Annette Obika, Alexander WR Nelson, Trevor Back, Demis Hassabis, et al. Unveiling the predictive power of static structure in glassy systems. Nature Physics, 16(4): 448-454, 2020. 3
230
+
231
+ [23] Alvaro Sanchez-Gonzalez, Victor Bapst, Kyle Cranmer, and Peter Battaglia. Hamiltonian graph networks with ode integrators. arXiv preprint arXiv:1909.12790, 2019. 3
232
+
233
+ [24] Miguel Jaques, Michael Burke, and Timothy Hospedales. Physics-as-inverse-graphics: Unsupervised physical parameter estimation from video. arXiv preprint arXiv:1905.11169, 2019. 3
234
+
235
+ [25] Krishna Murthy Jatavallabhula, Miles Macklin, Florian Golemo, Vikram Voleti, Linda Petrini, Martin Weiss, Breandan Considine, Jerome Parent-Levesque, Kevin Xie, Kenny Erleben, et al. gradsim: Differentiable simulation for system identification and visuomotor control. arXiv preprint arXiv:2104.02646, 2021. 3
236
+
237
+ [26] Sjoerd Van Steenkiste, Michael Chang, Klaus Greff, and Jürgen Schmidhuber. Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. arXiv preprint arXiv:1802.10353, 2018. 3, 5
238
+
239
+ [27] Adam R Kosiorek, Hyunjik Kim, Ingmar Posner, and Yee Whye Teh. Sequential attend, infer, repeat: Generative modelling of moving objects. arXiv preprint arXiv:1806.01794, 2018. 3
240
+
241
+ [28] Rishi Veerapaneni, John D Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua Tenenbaum, and Sergey Levine. Entity abstraction in visual model-based reinforcement learning. In Conference on Robot Learning, pages 1439-1456. PMLR, 2020. 3
242
+
243
+ [29] Jindong Jiang, Sepehr Janghorbani, Gerard De Melo, and Sungjin Ahn. Scalor: Generative world models with scalable object representations. arXiv preprint arXiv:1910.02384, 2019. 3
244
+
245
+ [30] Zhixuan Lin, Yi-Fu Wu, Skand Peri, Bofeng Fu, Jindong Jiang, and Sungjin Ahn. Improving generative imagination in object-centric world models. In International Conference on Machine Learning, pages 6140-6149. PMLR, 2020. 3
246
+
247
+ [31] Thomas Kipf, Elise van der Pol, and Max Welling. Contrastive learning of structured world models. arXiv preprint arXiv:1911.12247, 2019. 3, 5, 7, 13
248
+
249
+ [32] Vincent François-Lavet, Yoshua Bengio, Doina Precup, and Joelle Pineau. Combined reinforcement learning via abstract representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3582-3589, 2019. 3
250
+
251
+ [33] Tejas D Kulkarni, Ankush Gupta, Catalin Ionescu, Sebastian Borgeaud, Malcolm Reynolds, Andrew Zisserman, and Volodymyr Mnih. Unsupervised learning of object keypoints for perception and control. Advances in neural information processing systems, 32:10724-10734, 2019. 3
252
+
253
+ [34] Yunzhu Li, Antonio Torralba, Animashree Anandkumar, Dieter Fox, and Animesh Garg. Causal discovery in physical systems from videos. arXiv preprint arXiv:2007.00631, 2020. 3
254
+
255
+ [35] Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. In International Conference on Machine Learning, pages 2688-2697. PMLR, 2018. 3
256
+
257
+ [36] Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. Relational recurrent neural networks. arXiv preprint arXiv:1806.01822, 2018. 3
258
+
259
+ [37] Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Schölkopf. Recurrent independent mechanisms. arXiv preprint arXiv:1909.10893, 2019. 3
260
+
261
+ [38] Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. Object-centric learning with slot attention. arXiv preprint arXiv:2006.15055, 2020. 3, 5, 13
262
+
263
+ [39] Anirudh Goyal, Alex Lamb, Phanideep Gampa, Philippe Beaudoin, Sergey Levine, Charles Blundell, Yoshua Bengio, and Michael Mozer. Factorizing declarative and procedural knowledge in structured, dynamic environments. In International Conference on Learning Representations, 2021. 3
264
+
265
+ [40] Anirudh Goyal, Aniket Didolkar, Nan Rosemary Ke, Charles Blundell, Philippe Beaudoin, Nicolas Heess, Michael Mozer, and Yoshua Bengio. Neural production systems. arXiv preprint arXiv:2103.01937, 2021. 3
266
+
267
+ [41] Jessica B Hamrick, Kelsey R Allen, Victor Bapst, Tina Zhu, Kevin R McKee, Joshua B Tenenbaum, and Peter W Battaglia. Relational inductive bias for physical construction in humans and machines. arXiv preprint arXiv:1806.01203, 2018. 3
268
+
269
+ [42] Klaus Greff, Sjoerd van Steenkiste, and Jürgen Schmidhuber. On the binding problem in artificial neural networks. arXiv preprint arXiv:2012.05208, 2020. 5
270
+
271
+ [43] Sindy Löwe, Klaus Greff, Rico Jonschkowski, Alexey Dosovitskiy, and Thomas Kipf. Learning object-centric video models by contrasting sets. arXiv preprint arXiv:2011.10287, 2020. 5
272
+
273
+ [44] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033. IEEE, 2012. 5
274
+
275
+ [45] Nicholas Watters, Loic Matthey, Christopher P Burgess, and Alexander Lerchner. Spatial broadcast decoder: A simple architecture for learning disentangled representations in vaes. arXiv preprint arXiv:1901.07017, 2019. 5, 13
276
+
277
+ [46] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, 2013. 6
278
+
279
+ [47] Adrià Puigdomènech Badia, Bilal Piot, Steven Kapturowski, Pablo Sprechmann, Alex Vitvitskyi, Zhaohan Daniel Guo, and Charles Blundell. Agent57: Outperforming the atari human benchmark. In International Conference on Machine Learning, pages 507-517. PMLR, 2020. 6, 8
280
+
281
+ [48] Ankesh Anand, Evan Racah, Sherjil Ozair, Yoshua Bengio, Marc-Alexandre Côté, and R Devon Hjelm. Unsupervised state representation learning in atari. arXiv preprint arXiv:1906.08226, 2019. 6, 7, 13, 14
282
+
283
+ [49] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pages 1263-1272. PMLR, 2017. 6, 13, 16
284
+
285
+ [50] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, and Alexander Smola. Deep sets. arXiv preprint arXiv:1703.06114, 2017. 6, 13
286
+
287
+ [51] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems (NIPS), pages 1-9, 2013. 7
288
+
289
+ [52] Adam Santoro, David Raposo, David GT Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. arXiv preprint arXiv:1706.01427, 2017. 13
290
+
291
+ [53] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 13
292
+
293
+ [54] Keyulu Xu, Mozhi Zhang, Jingling Li, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. How neural networks extrapolate: From feedforward to graph neural networks. arXiv preprint arXiv:2009.11848, 2020. 16
294
+
295
+ ## A Bouncing balls modelling setup
296
+
297
+ Abstract pipeline. $f$ is a linear projection over the concatenation of inputs, outputting a 128-dimensional representation for each of the 10 balls in the input. $P$ is an MPNN over the fully connected graph of the ball representations. It is a 2-pass MPNN with a 3-layer ReLU-activated MLP as the message function (without an activation after the final layer), projecting to the same 128-dimensional space per object. Finally, $g$ is a linear projection applied to each object representation output by $P$.
298
+
299
+ The model is MSE-supervised with ball positions on the next step. It is trained on an 8-core TPU for 10000 epochs, with a batch size of 512, using the Adam optimizer with an initial learning rate of 0.0001.
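To make the abstract pipeline concrete, the following is a minimal PyTorch-style sketch of the encode-process-decode model described above: a linear $f$, a 2-pass MPNN $P$ with 3-layer ReLU message MLPs over the fully connected graph of the 10 balls, and a linear $g$. All names are illustrative, and the node-update step (a single linear layer) is an assumption on our part rather than a detail stated in the text.

```python
import torch
import torch.nn as nn

class MPNNProcessor(nn.Module):
    """2-pass message passing over the fully connected graph of ball slots."""
    def __init__(self, dim=128, passes=2):
        super().__init__()
        self.passes = passes
        # 3-layer MLP message function; no activation after the final layer.
        self.message = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim),
        )
        self.update = nn.Linear(2 * dim, dim)   # assumed node-update step

    def forward(self, z):                        # z: [batch, n_balls, dim]
        for _ in range(self.passes):
            senders = z.unsqueeze(2).expand(-1, -1, z.size(1), -1)
            receivers = z.unsqueeze(1).expand(-1, z.size(1), -1, -1)
            msgs = self.message(torch.cat([senders, receivers], dim=-1))
            agg = msgs.sum(dim=1)                # aggregate incoming messages per node
            z = self.update(torch.cat([z, agg], dim=-1))
        return z

class AbstractRMR(nn.Module):
    """Encode-process-decode over abstract ball states (3-step history -> next positions)."""
    def __init__(self, history=3, coord_dim=2, dim=128):
        super().__init__()
        self.f = nn.Linear(history * coord_dim, dim)   # abstract encoder
        self.P = MPNNProcessor(dim)                    # processor
        self.g = nn.Linear(dim, coord_dim)             # abstract decoder

    def forward(self, x_bar):                    # x_bar: [batch, n_balls, history * coord_dim]
        return self.g(self.P(self.f(x_bar)))

model = AbstractRMR()
x_bar = torch.randn(512, 10, 6)                  # batch of 512, 10 balls, 3-step (x, y) history
loss = nn.functional.mse_loss(model(x_bar), torch.randn(512, 10, 2))
```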
300
+
301
+ Natural pipeline. $\bar{f}$ applies a Slot Attention model [38] to each input image, concatenating the results and passing them through a linear layer, outputting a 128-dimensional vector for each of the objects. $\bar{g}$ is a Broadcast Decoder [45], containing a sequence of 5 transposed convolutions and a linear layer before calculating the reconstructions and their masks.
302
+
303
+ The model is MSE-supervised by pixel reconstruction (per-pixel MSE) over the next-step image. Both the RMR and the baseline are trained on an 8-core TPU for 1000 epochs, with a batch size of 512, using the Adam optimiser with an initial learning rate of 0.0001, all over 3 random seeds.
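Operationally, the only change in the natural pipeline is that the pre-trained processor is dropped in with its weights frozen, while the Slot Attention encoder and the Broadcast Decoder are trained around it. A hedged sketch of that wiring is below; `encoder` and `decoder` stand in for the models of [38] and [45], which are not spelled out here.

```python
import torch.nn as nn

class NaturalRMR(nn.Module):
    """Pixel pipeline re-using a pre-trained processor P with frozen weights."""
    def __init__(self, encoder: nn.Module, pretrained_P: nn.Module, decoder: nn.Module):
        super().__init__()
        self.f_nat, self.P, self.g_nat = encoder, pretrained_P, decoder
        for p in self.P.parameters():        # freeze P: gradients only reach encoder/decoder
            p.requires_grad = False

    def forward(self, frames):               # frames: a short history of input images
        z = self.f_nat(frames)               # slot latents, [batch, n_slots, dim]
        return self.g_nat(self.P(z))         # reconstructed next-step frame
```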
304
+
305
+ ## B Atari abstract modelling setup and results
306
+
307
+ Our best processor network is an MPNN [49] over a fully connected graph [52] of RAM slots, which concatenates the action embedding to every node (as done in [31]). It uses three-layer MLPs as message functions, with the ReLU activation applied after each hidden layer. The entire model is trained for every game in isolation, over 48 distinct episodes of Agent57 experience. We use the Adam optimiser [53] with a batch size of 50 and a learning rate of 0.001 across all Atari experiments. To evaluate the benefits of message passing, we also compare our model to Deep Sets [50], which is equivalent to our MPNN model, except that it passes messages over the identity adjacency matrix. Lastly, we evaluate the benefits of factorised latents by comparing our methods against a three-layer MLP applied on the flattened RAM state.
308
+
309
+ One immediate observation is that RAM updates in Atari are extremely sparse, with a copy baseline already being very strong for many games. To prevent the model from having to repeatedly re-learn identity functions, we also make it predict masks of the shape $\mathcal{M} = {\mathbb{B}}^{128}$, specifying which cells are to be overwritten by the model at this step. This strategy, coupled with teacher forcing (as done by [15, 16]), yielded substantially stronger predictors. We also use this observation to prevent over-inflating our prediction scores: we only display prediction accuracy over RAM slots with label entropy larger than 0.6 (as done by [48]).
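As an illustration of the mask-based objective, here is one way the loss could be written; the exact weighting of the two terms and the per-slot definition of "changed" are assumptions on our part, not details specified in the text.

```python
import torch
import torch.nn.functional as F

def masked_ram_loss(value_logits, mask_logits, ram_prev, ram_next):
    """Sketch of a mask-augmented RAM transition loss with teacher forcing.

    value_logits: [batch, 128, 8]  predicted bit logits for the next RAM state
    mask_logits:  [batch, 128]     predicted per-slot "will this slot change?" logits
    ram_prev, ram_next: [batch, 128, 8] binary RAM states before / after the action
    """
    # Ground-truth mask: a slot counts as changed if any of its 8 bits changed.
    true_mask = (ram_prev != ram_next).any(dim=-1).float()
    mask_loss = F.binary_cross_entropy_with_logits(mask_logits, true_mask)

    # Teacher forcing: supervise the value head only on slots that the ground-truth
    # mask marks as changed; unchanged slots are copied forward rather than re-learned.
    bit_loss = F.binary_cross_entropy_with_logits(value_logits, ram_next, reduction="none")
    value_loss = (bit_loss.mean(dim=-1) * true_mask).sum() / true_mask.sum().clamp(min=1.0)

    return mask_loss + value_loss
```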
310
+
311
+ The full results of training Atari RAM transition models, for the games studied in [48], are provided in Table 2. We evaluate both bit-level ${\mathrm{F}}_{1}$ scores, as well as slot-level accuracy (for which all 8 bits need to be predicted correctly in order to count), over the remaining 48 Agent57 episodes as validation. To the best of our knowledge, this is the first comprehensive feasibility study for learning Atari RAM transition models.
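For completeness, a small sketch of the evaluation protocol described above: slots are first filtered by empirical label entropy, then slot-level accuracy (all 8 bits correct) and bit-level ${\mathrm{F}}_{1}$ are computed over the surviving slots. Whether the 0.6 threshold is applied to the entropy (in bits) of the byte-value distribution, as assumed here, is our reading rather than a stated detail.

```python
import numpy as np
from sklearn.metrics import f1_score

def high_entropy_slots(ram_bytes, threshold=0.6):
    """ram_bytes: [num_samples, 128] integer byte value per RAM slot over the validation set."""
    keep = []
    for slot in range(ram_bytes.shape[1]):
        _, counts = np.unique(ram_bytes[:, slot], return_counts=True)
        p = counts / counts.sum()
        if -(p * np.log2(p)).sum() > threshold:   # empirical label entropy of the slot
            keep.append(slot)
    return np.array(keep)

def evaluate(pred_bits, true_bits, slots):
    """pred_bits, true_bits: [num_samples, 128, 8] binary arrays; `slots` from above."""
    pred, true = pred_bits[:, slots], true_bits[:, slots]
    slot_acc = (pred == true).all(axis=-1).mean()           # all 8 bits must be correct
    bit_f1 = f1_score(true.reshape(-1), pred.reshape(-1))   # bit-level F1
    return slot_acc, bit_f1
```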
312
+
313
+ ## C Rediscovering the algorithmic bottleneck
314
+
315
+ As mentioned in the main text body, one of the key reasons in favour of a high-dimensional algorithmic component is to avoid the algorithmic bottleneck, as first exposed by Deac et al. [8].
316
+
317
+ In short, the performance guarantees of running classical algorithms rely on having the exactly correct inputs for them. If there are any errors in the predictions of these (usually very low-dimensional) inputs, these errors may propagate to the algorithmic computations and yield suboptimal results. Further, there is no room for any kind of fallback if such an event occurs.
318
+
319
+ In contrast, the high-dimensional neural processors like the ones we study here are not vulnerable to bottleneck effects: if any dimensions of the latent state are poorly predicted, the other components of it could step in and compensate for this. Further, we can easily support skip connections in the case where the algorithm is not fully descriptive of the problem we're solving.
320
+
321
+ In this section, we reaffirm the bottleneck effect for both Atari and bouncing ball experiments by providing additional sets of ablations on the processor network's latent size.
322
+
323
+ We ablate various RMR processor network architectures over $\dim \mathbf{z} \in \{ 2,4,8,{16},{32},{64}\}$, leaving all other components and operations unchanged.
324
+
325
+ Table 2: Abstract modelling results for Atari 2600. Entire-slot accuracies and bit-level ${\mathrm{F}}_{1}$ scores are reported only for slots with high entropy, as per [48].
326
+
327
+ <table><tr><td rowspan="2">Game</td><td colspan="2">Copy baseline</td><td colspan="2">MLP</td><td colspan="2">Deep Sets</td><td colspan="2">$\mathbf{{MPNN}}$</td></tr><tr><td>Slot acc.</td><td>Bit ${\mathrm{F}}_{1}$</td><td>Slot acc.</td><td>Bit ${\mathrm{F}}_{1}$</td><td>Slot acc.</td><td>Bit ${\mathrm{F}}_{1}$</td><td>Slot acc.</td><td>Bit ${\mathrm{F}}_{1}$</td></tr><tr><td>Asteroids</td><td>70.65%</td><td>0.856</td><td>71.28%</td><td>0.872</td><td>72.84%</td><td>0.879</td><td>$\mathbf{{80.69}\% }$</td><td>0.930</td></tr><tr><td>Battlezone</td><td>57.09%</td><td>0.841</td><td>61.19%</td><td>0.840</td><td>61.71%</td><td>0.867</td><td>71.06%</td><td>0.892</td></tr><tr><td>Berzerk</td><td>84.32%</td><td>0.905</td><td>86.17%</td><td>0.930</td><td>84.16%</td><td>0.923</td><td>86.67%</td><td>0.933</td></tr><tr><td>Bowling</td><td>93.86%</td><td>0.972</td><td>97.43%</td><td>0.991</td><td>90.72%</td><td>0.966</td><td>98.41%</td><td>0.995</td></tr><tr><td>Boxing</td><td>59.78%</td><td>0.848</td><td>54.45%</td><td>0.834</td><td>59.79%</td><td>0.890</td><td>58.56%</td><td>0.877</td></tr><tr><td>Breakout</td><td>89.80%</td><td>0.949</td><td>92.77%</td><td>0.970</td><td>94.34%</td><td>0.979</td><td>96.45%</td><td>0.988</td></tr><tr><td>Demon Attack</td><td>67.90%</td><td>0.850</td><td>68.43%</td><td>0.864</td><td>66.70%</td><td>0.877</td><td>69.51%</td><td>0.879</td></tr><tr><td>Freeway</td><td>46.65%</td><td>0.787</td><td>75.93%</td><td>0.921</td><td>84.68%</td><td>0.959</td><td>89.17%</td><td>0.965</td></tr><tr><td>Frostbite</td><td>76.83%</td><td>0.904</td><td>79.09%</td><td>0.904</td><td>78.25%</td><td>0.946</td><td>76.52%</td><td>0.918</td></tr><tr><td>H.E.R.O.</td><td>76.71%</td><td>0.891</td><td>82.96%</td><td>0.932</td><td>80.17%</td><td>0.929</td><td>89.07%</td><td>0.956</td></tr><tr><td>Montezuma's Revenge</td><td>82.58%</td><td>0.907</td><td>87.30%</td><td>0.941</td><td>85.90%</td><td>0.951</td><td>85.44%</td><td>0.932</td></tr><tr><td>Ms. 
Pac-Man</td><td>83.80%</td><td>0.941</td><td>80.60%</td><td>0.935</td><td>81.50%</td><td>0.952</td><td>85.88%</td><td>0.966</td></tr><tr><td>Pitfall!</td><td>66.60%</td><td>0.862</td><td>78.28%</td><td>0.923</td><td>81.92%</td><td>0.947</td><td>80.40%</td><td>0.941</td></tr><tr><td>Pong</td><td>68.76%</td><td>0.873</td><td>73.58%</td><td>0.911</td><td>74.71%</td><td>0.920</td><td>83.23%</td><td>0.952</td></tr><tr><td>Private Eye</td><td>75.25%</td><td>0.889</td><td>81.95%</td><td>0.932</td><td>84.77%</td><td>0.954</td><td>86.41%</td><td>0.955</td></tr><tr><td>${\mathrm{Q}}^{ * }$ Bert</td><td>83.00%</td><td>0.915</td><td>90.07%</td><td>0.966</td><td>87.87%</td><td>0.943</td><td>89.26%</td><td>0.946</td></tr><tr><td>River Raid</td><td>76.95%</td><td>0.895</td><td>80.82%</td><td>0.927</td><td>69.96%</td><td>0.865</td><td>$\mathbf{{86.79}\% }$</td><td>0.954</td></tr><tr><td>Seaquest</td><td>71.23%</td><td>0.859</td><td>78.48%</td><td>0.898</td><td>75.53%</td><td>0.906</td><td>70.94%</td><td>0.798</td></tr><tr><td>Skiing</td><td>91.02%</td><td>0.966</td><td>93.42%</td><td>0.980</td><td>93.51%</td><td>0.983</td><td>96.37%</td><td>0.992</td></tr><tr><td>Space Invaders</td><td>81.67%</td><td>0.942</td><td>84.62%</td><td>0.957</td><td>89.38%</td><td>0.974</td><td>91.98%</td><td>0.985</td></tr><tr><td>Tennis</td><td>78.13%</td><td>0.890</td><td>82.13%</td><td>0.926</td><td>71.60%</td><td>0.856</td><td>80.20%</td><td>0.893</td></tr><tr><td>Venture</td><td>61.29%</td><td>0.858</td><td>63.16%</td><td>0.863</td><td>64.88%</td><td>0.886</td><td>76.56%</td><td>0.935</td></tr><tr><td>Video Pinball</td><td>76.71%</td><td>0.848</td><td>85.92%</td><td>0.912</td><td>78.64%</td><td>0.877</td><td>86.61%</td><td>0.913</td></tr><tr><td>Yars’ Revenge</td><td>69.25%</td><td>0.896</td><td>74.87%</td><td>0.929</td><td>72.69%</td><td>0.948</td><td>$\mathbf{{84.11}\% }$</td><td>0.969</td></tr></table>
328
+
329
+ The results of this ablation for the Atari representation learning setting are provided in Figure 5. It can be clearly observed that reducing the size of the latents typically induces a performance regression in the downstream bit ${\mathrm{F}}_{1}$ scores. This indicates the algorithmic bottleneck effect.
330
+
331
+ On the bouncing balls task, we observe the algorithmic bottleneck as well, with more pronounced effects. Namely, for all latent sizes of 32 and under, the validation MSE shoots up even though the training MSE keeps improving. Lastly, we also tried using an Interaction Network [7] as a bottlenecked RMR processor (which requires projecting onto $\bar{\mathbf{x}}$ rather than $\mathbf{z}$); it exhibited exactly the same behaviour, at a larger scale. The IN processor achieved a final validation MSE of ${341.40} \pm {5.60}\left( {\times {10}^{-4}}\right)$, roughly ${43} \times$ worse than the non-bottlenecked RMR.
332
+
333
+ ## D On the weak acceptor properties of Battlezone
334
+
335
+ Out of all the Atari games studied in our work, Battlezone can be singled out as the least "receptive" to RMR processors, with only four games successfully donating their RAM representations to its pixel-based encoder (Figure 4).
336
+
337
+ We set out to study why this effect took place. Upon inspection of the game’s dynamics ${}^{1}$ , we determined that Battlezone has certain properties that make representation learning uniquely challenging and somewhat decoupled from the underlying console computation.
338
+
339
+ Namely, Battlezone is the only game in our dataset that is played in the "first-person" perspective. Therefore, from the point of view of the pixel inputs, it may seem as if the player is always in the same place, and we would expect the mechanics and the way of thinking about the 'avatar' to be substantially different from all other Atari games considered.
340
+
341
+ ---
342
+
343
+ https://www.youtube.com/watch?v=9X4_xy7rC1A
344
+
345
+ ---
346
+
347
+ ![01963f0e-8468-7cbf-8608-24641db11925_14_426_360_941_1508_0.jpg](images/01963f0e-8468-7cbf-8608-24641db11925_14_426_360_941_1508_0.jpg)
348
+
349
+ Figure 5: Bit-level ${\mathrm{F}}_{1}$ scores for the Atari experiments while varying the latent size. A clear decreasing trend is apparent as $\dim \mathbf{z}$ is reduced, indicating the bottleneck effect.
350
+
351
+ ## E On the data distribution and setup used for training $P$
352
+
353
+ In many cases, while an algorithmic prior may be known, generative aspects of the relevant abstract inputs for the natural task could be unknown. For example, it may be known that the natural task could benefit from a sorting subroutine, but it may not be known upfront how many objects will need to be sorted.
354
+
355
+ When such details are unknown, it may still be possible to fall back to abstract inputs sampled from some sensible generic random distribution-so long as the processor network is trained in a way that promotes extrapolation. For a recent theoretical treatment of OOD generalisation in algorithmic reasoners, we refer the reader to Xu et al. [54]. Further, as observed by Deac et al. [8] in the case of implicit planning, even generic random abstract distributions can promote useful transfer to noisy pixel-based acting settings such as Atari.
356
+
357
+ Lastly, it is important to ensure that, in the abstract pipeline, the processor network $P$ carries the brunt of the computational burden; otherwise, the reasoning task may be partially captured in either $f$ or $g$, and not left to $P$’s weights. In our work, we generally promote such behaviour by keeping $f$ and $g$ as simple linear layers; even in settings where this is not appropriate, we recommend taking care not to overpower the abstract encoder and decoder.
358
+
359
+ ## F Atari abstract model equations
360
+
361
+ In this appendix, we provide a "bird's eye" view of the modelling steps taken by our abstract Atari pipeline, aiming to support future implementations of the RMR blueprint. We will readily re-use the notation from the main text for the various components.
362
+
363
+ Firstly, the abstract encoder $f$ is applied to the relevant Atari RAM representations, $\overline{\mathbf{x}}$, augmented by a one-hot representation which aims to provide a generic embedding of the semantics of each RAM slot. $f$ is implemented as a linear layer, hence:
364
+
365
+ $$
366
+ \mathbf{z} = f\left( {\overline{\mathbf{x}}\parallel \mathbf{o}}\right) = {\mathbf{W}}^{f}\overline{\mathbf{x}} + {\mathbf{U}}^{f}\mathbf{o} + {\mathbf{b}}^{f} \tag{1}
367
+ $$
368
+
369
+ where ${\mathbf{W}}^{f},{\mathbf{U}}^{f},{\mathbf{b}}^{f}$ are learnable weights, and $\mathbf{o} \in {\mathbb{B}}^{{128} \times {128}}$ is a one-hot encoding s.t. $\mathbf{o} = {\mathbf{I}}_{128}$ .
370
+
371
+ In a separate pipeline (formally considered part of the processor network), the performed actions are also encoded using a linear action encoder:
372
+
373
+ $$
374
+ \mathbf{\alpha } = {f}_{a}\left( \mathbf{a}\right) = {\mathbf{W}}^{{f}_{a}}\mathbf{a} + {\mathbf{b}}^{{f}_{a}} \tag{2}
375
+ $$
376
+
377
+ where $\mathbf{a}$ is a one-hot encoded action representation.
378
+
379
+ Then, the processor network GNN is called to update these latents, and a sum-based skip connection is employed to promote a model-free path:
380
+
381
+ $$
382
+ {\mathbf{z}}^{\prime } = P\left( {\mathbf{z} + \mathbf{\alpha }}\right) + \mathbf{z} + \mathbf{\alpha } \tag{3}
383
+ $$
384
+
385
+ Here, the processor network $P$ is implemented as a standard message passing neural network [49] over a fully connected graph. The equations of such a $P$ are given in, e.g., Veličković et al. [12].
386
+
387
+ Lastly, the relevant decoder networks predict two properties: a mask which signifies which RAM slots have been changed as a result of applying $\mathbf{a}$ , and the updated states $\overline{\mathbf{y}}$ .
388
+
389
+ $$
390
+ \mathbf{\mu } = {g}_{\mu }\left( {\mathbf{z}}^{\prime }\right) = \sigma \left( {{\mathbf{W}}^{\mu }{\mathbf{z}}^{\prime } + {\mathbf{b}}^{\mu }}\right) \tag{4}
391
+ $$
392
+
393
+ $$
394
+ \overline{\mathbf{y}} = g\left( {\mathbf{z}}^{\prime }\right) \odot {\mathbb{I}}_{\mathbf{\mu } > {0.5}} = \left( {{\mathbf{W}}^{g}{\mathbf{z}}^{\prime } + {\mathbf{b}}^{g}}\right) \odot {\mathbb{I}}_{\mathbf{\mu } > {0.5}} \tag{5}
395
+ $$
396
+
397
+ Here, $\sigma$ is the logistic sigmoid activation, $\odot$ is the elementwise product, and $\mathbb{I}$ is the indicator function, which thresholds the mask. While this thresholding is non-differentiable, we note that the mask values can be directly supervised from known trajectories, and at training time teacher forcing can be applied, slotting ground-truth masks in place of the indicator function.
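To tie the equations together, a hedged PyTorch-style transcription of (1)-(5) follows. Shapes, the action broadcast and the teacher-forcing switch are written as we read them from the text; the internals of the MPNN processor are elided behind a generic module, and all names are illustrative.

```python
import torch
import torch.nn as nn

class AbstractAtariModel(nn.Module):
    """Sketch of equations (1)-(5); `processor` is any permutation-equivariant P."""
    def __init__(self, processor: nn.Module, num_slots=128, slot_bits=8, num_actions=18, dim=64):
        super().__init__()
        self.register_buffer("o", torch.eye(num_slots))       # one-hot slot identities, o = I_128
        self.f = nn.Linear(slot_bits + num_slots, dim)         # eq. (1): single linear on x_bar || o
        self.f_a = nn.Linear(num_actions, dim)                 # eq. (2): linear action encoder
        self.P = processor                                      # eq. (3): MPNN over the full graph
        self.g_mu = nn.Linear(dim, 1)                           # eq. (4): mask head
        self.g = nn.Linear(dim, slot_bits)                      # eq. (5): value head

    def forward(self, x_bar, a, true_mask=None):
        # x_bar: [batch, 128, 8] RAM bits; a: [batch, 18] one-hot action
        o = self.o.expand(x_bar.size(0), -1, -1)
        z = self.f(torch.cat([x_bar, o], dim=-1))                            # eq. (1)
        alpha = self.f_a(a).unsqueeze(1)                                     # eq. (2), broadcast to slots
        z_prime = self.P(z + alpha) + z + alpha                              # eq. (3), with skip connection
        mu = torch.sigmoid(self.g_mu(z_prime)).squeeze(-1)                   # eq. (4)
        mask = true_mask if true_mask is not None else (mu > 0.5).float()    # teacher forcing at train time
        y_bar = self.g(z_prime) * mask.unsqueeze(-1)                         # eq. (5)
        return y_bar, mu

# e.g. AbstractAtariModel(processor=nn.Identity())  # identity stand-in; a real P is an MPNN [49]
```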
papers/LOG/LOG 2022/LOG 2022 Conference/QBGYYu3l3dG/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,257 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § REASONING-MODULATED REPRESENTATIONS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g. that observations must obey certain laws of physics) that any "tabula rasa" neural network would need to re-learn from scratch, penalising performance. We incorporate this information into a pre-trained reasoning module, and investigate its role in shaping the discovered representations in diverse self-supervised learning settings from pixels. Our approach paves the way for a new class of representation learning, grounded in algorithmic priors.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Neural networks are able to learn policies in environments without access to their specifics [1], generate large quantities of text [2], or automatically fold proteins to high accuracy [3]. However, such "tabula rasa" approaches hinge on having access to substantial quantities of data, from which robust representations can be learned. Without a large training set that spans the data distribution, representation learning is difficult [4-6].
16
+
17
+ Here, we study ways to construct neural networks with representations that are robust, while retaining a data-driven approach. We rely on a simple observation: very often, we have some (partial) knowledge of the underlying dynamics of the data, which could help make stronger predictions from fewer observations. This knowledge, however, usually requires us to be mindful of abstract properties of the data-and such properties cannot always be robustly extracted from natural observations.
18
+
19
+ < g r a p h i c s >
20
+
21
+ Figure 1: Bouncing balls example (re-printed, with permission, from Battaglia et al. [7]). Natural inputs, $\mathbf{x}$ , correspond to pixel observations. Predicting future observations (natural outputs, $\mathbf{y}$ ), can be simplified as follows: if we are able to extract a set of abstract inputs, $\overline{\mathbf{x}}$ ,(e.g. the radius, position and velocity for each ball), the movements in this space must obey the laws of physics.
22
+
23
+ Motivation. Consider the task of predicting the future state of a system of $n$ bouncing balls, from a pixel input $\mathbf{x}$ (Figure 1). Reliably estimating future pixel observations, $\mathbf{y}$ , is a challenging reconstruction task. However, the generative properties of this system are simple. Assuming knowledge of simple abstract inputs (radius, ${r}_{c}$ , position, ${\mathbf{x}}_{c}$ , and velocity, ${\mathbf{v}}_{c}$ ) for every ball, ${\overline{\mathbf{x}}}_{c}$ , the future movements in this abstract space are the result of applying the laws of physics to these
24
+
25
+ low-dimensional quantities. Hence, future abstract states, $\overline{\mathbf{y}}$ , can be computed via a simple algorithm that aggregates pair-wise forces between objects.
26
+
27
+ While this gives us a potentially simpler path from pixel inputs to pixel outputs, via abstract inputs to abstract outputs $\left( {\mathbf{x} \rightarrow \overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}} \rightarrow \mathbf{y}}\right)$ , it still places potentially unrealistic demands on our task setup, every step of the way:
28
+
29
+ $\mathbf{x} \rightarrow \overline{\mathbf{x}}$ : Necessitates either upfront knowledge of how to abstract away $\overline{\mathbf{x}}$ from $\mathbf{x}$ , or a massive dataset of paired $\left( {\mathbf{x},\overline{\mathbf{x}}}\right)$ to learn such a mapping from;
30
+
31
+ $\overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}} :$ Implies that the algorithm perfectly simulates all aspects of the output. In reality, an algorithm may often only give partial context about y. Further, algorithms often assume that $\mathbf{x}$ is provided without error, exposing an algorithmic bottleneck [8]: if $\overline{\mathbf{x}}$ is incorrectly predicted, this will negatively compound in $\overline{\mathbf{y}}$ , hence $\mathbf{y}$ ;
32
+
33
+ $\overline{\mathbf{y}} \rightarrow \mathbf{y} :$ Necessitates a renderer that generates $\mathbf{y}$ from $\overline{\mathbf{y}}$ , or a dataset of paired $\left( {\overline{\mathbf{y}},\mathbf{y}}\right)$ to learn it.
34
+
35
+ We will assume a general setting where none of the above constraints hold: we know that the mapping $\overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}}$ is likely of use to our predictor, but we do not assume a trivial mapping or a paired dataset which would allow us to convert directly from $\mathbf{x}$ to $\overline{\mathbf{x}}$ or from $\overline{\mathbf{y}}$ to $\mathbf{y}$ . Our only remaining assumption is that the algorithm $\overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}}$ can be efficiently computed, allowing us to generate massive quantities of paired abstract input-output pairs, $\left( {\overline{\mathbf{x}},\overline{\mathbf{y}}}\right)$ .
36
+
37
+ Present work. In this setting, we propose Reasoning-Modulated Representations (RMR), an approach that first learns a latent-space processor of abstract data; i.e. a mapping $\overline{\mathbf{x}}\overset{f}{ \rightarrow }\mathbf{z}\overset{P}{ \rightarrow }{\mathbf{z}}^{\prime }\overset{g}{ \rightarrow }\overline{\mathbf{y}}$ , where $\mathbf{z} \in {\mathbb{R}}^{k}$ are high-dimensional latent vectors. $f$ and $g$ are an encoder and decoder, designed to take abstract representations to and from this latent space, and $P$ is a processor network which simulates the algorithm $\overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}}$ in the latent space.
38
+
39
+ We then observe, in the spirit of neural algorithmic reasoning [9], that such a processor network can be used as a drop-in differentiable component for any task where the $\overline{\mathbf{x}} \rightsquigarrow \overline{\mathbf{y}}$ kind of reasoning may be
40
+
41
+ applicable. Hence, we then learn a pipeline $\mathbf{x}\overset{\widetilde{f}}{ \rightarrow }\mathbf{z}\overset{P}{ \rightarrow }{\mathbf{z}}^{\prime }\overset{\widetilde{g}}{ \rightarrow }\mathbf{y}$ , which modulates the representations $\mathbf{z}$ obtained from $\mathbf{x}$ , forcing them to pass through the pre-trained processor network. By doing so, we have ameliorated the original requirement for a massive natural dataset of $(\mathbf{x}, \mathbf{y})$ pairs. Instead, we inject knowledge from a massive abstract dataset of $\left( {\overline{\mathbf{x}},\overline{\mathbf{y}}}\right)$ pairs, directly through the pre-trained parameters of $P$ . This has the potential to relieve the pressure on encoders and decoders $\widetilde{f}$ and $\widetilde{g}$ , which we experimentally validate on several challenging representation learning domains.
42
+
43
+ Our contributions can be summarised as follows:
44
+
45
+ * Verifying and extending prior work, we show that meaningful latent-space models can be learned from massive abstract datasets, on physics simulations and Atari 2600 games;
46
+
47
+ * We then show that these latent-space models can be used as differentiable components within neural pipelines that process raw observations. In doing so, we recover a neural network pipeline that relies solely on the existence of massive abstract datasets (which can often be automatically generated).
48
+
49
+ * Finally, we demonstrate early signs of processor reusability: latent-space abstract models can be used in tasks which do not even directly align with their environment, so long as these tasks can benefit from their underlying reasoning procedure.
50
+
51
+ § 2 RELATED WORK
52
+
53
+ Neural algorithmic reasoning. RMR relies on being able to construct robust latent-space models that imitate abstract reasoning procedures. This makes it well aligned with neural algorithmic reasoning [10], which is concerned with constructing neuralised versions of classical algorithms (typically by learning to execute them in a manner that extrapolates). Leveraging the ideas of algorithmic alignment [11], several known algorithmic primitives have already been successfully neuralised. This includes iterative computation [12, 13], linearithmic algorithms [14], and data structures [15, 16]. Further, the XLVIN model [8] demonstrates how such primitives can be re-used for data-efficient planning, paving the way for a blueprint [9] that we leverage in RMR as well.
54
+
55
+ Physical simulation with neural networks. Our work also has contact points with prior art in using (graph) neural networks for physics simulations. In fact, there is a tight coupling between algorithmic computation and simulations, as the latter are typically realised using the former. Within this space, abstract GNN models of physics have been proposed by works such as interaction networks [7] and NPE [17], and extended to pixel-based inputs by visual interaction networks [18]. The generalisation power of these models has increased drastically in recent years, with effective models of systems of particles [19] as well as meshes [20] being proposed. Excitingly, it has also been demonstrated that rudimentary laws of physics can occasionally be recovered from the update rules of these GNNs [21], and that they can be used to uncover new physical knowledge [22].
56
+
57
+ Recent work has also explored placing additional constraints on learning-based physical simulators, for example by using Hamiltonian ODE integrators in conjunction with GNN models [23], or by coupling a (non-neural) differentiable physics engine directly to visual inputs and optimizing its parameters via backprop [24, 25].
58
+
59
+ Object-centric and modular models for dynamic environments. RMR with factored latents can be viewed as a form of object-centric neural network, in which visual objects in an image or video are represented as separate latent variables in the model and their temporal dynamics and pairwise interactions are modeled via GNNs or self-attention mechanisms. There is a rich literature on discovering objects and learning their dynamics from raw visual data without supervision, with object-centric models such as R-NEM [26], SQAIR [27], OP3 [28], SCALOR [29], G-SWM [30]. Recent work has explored using contrastive losses [31] in this context or other losses directly in latent space [32]. Related approaches discover and use keypoints [33] to describe objects and even discover causal relations from visual input [34] using neural relational inference [35] in conjunction with a keypoint discovery method. A related line of works integrate attention-mechanisms in modular and object-centric models to interface latent variables with visual input, including models such as RMC [36], RIM [37], Slot Attention [38], SCOFF [39], and NPS [40].
60
+
61
+ § 3 RMR ARCHITECTURE
62
+
63
+ Having provided a high-level overview of RMR and surveyed the relevant related work, we proceed to carefully detail the blueprint of RMR's various components. This will allow us to ground any subsequent RMR experiments on diverse domains directly in our blueprint. Throughout this section, it will be useful to refer to Figure 2 which presents a visual overview of this section.
64
+
65
+ Preliminaries. We assume a set of natural inputs, $\mathcal{X}$ , and a set of natural outputs, $\mathcal{Y}$ . These sets represent the possible inputs and outputs of a target function, $\Phi : \mathcal{X} \rightarrow \mathcal{Y}$ , which we would like to learn based on a (potentially small) dataset of input-output pairs, $(\mathbf{x}, \mathbf{y})$, where $\mathbf{y} = \Phi \left( \mathbf{x}\right)$ .
66
+
67
+ We further assume that the inner workings of $\Phi$ can be related to an algorithm, $A : \overline{\mathcal{X}} \rightarrow \overline{\mathcal{Y}}$ . The algorithm operates over a set of abstract inputs, $\overline{\mathcal{X}}$ , and produces outputs from an abstract output set, $\overline{\mathcal{Y}}$ . Typically, it will be the case that $\dim \overline{\mathcal{X}} \ll \dim \mathcal{X}$ ; that is, abstract inputs are assumed substantially lower-dimensional than natural inputs. We do not assume existence of any aligned input pairs $\left( {\mathbf{x},\overline{\mathbf{x}}}\right)$ , and we do not assume that $A$ perfectly explains the computations of $\Phi$ . What we do assume is that $A$ is either known or can be trivially computed, giving rise to a massive dataset of abstract input-output pairs, $\left( {\overline{\mathbf{x}},\overline{\mathbf{y}}}\right)$ , where $\overline{\mathbf{y}} = A\left( \overline{\mathbf{x}}\right)$ .
68
+
69
+ Lastly, we assume a latent space, $\mathcal{Z}$ , and that we can construct neural network components to both encode and decode from it. Typically, $\mathcal{Z}$ will be a real-valued vector space $\left( {\mathcal{Z} = {\mathbb{R}}^{k}}\right)$ which is high-dimensional; that is, $k > \dim \overline{\mathcal{X}}$ . This ensures that any neural networks operating over $\mathcal{Z}$ are not vulnerable to bottleneck effects.
70
+
71
+ Note that either the natural or abstract input set may be factorised, e.g., into objects; in this case, we can accordingly factorise the latent space, enforcing $\mathcal{Z} = {\mathbb{R}}^{n \times k}$ , where $n$ is the assumed maximal number of objects (typically a hyperparameter of the models if not known upfront).
72
+
73
+ Abstract pipeline. RMR training proceeds by first learning a model of the algorithm $A$ , which is bound to pass through a latent-space representation. That is, we learn a neural network approximator $g\left( {P\left( {f\left( \overline{\mathbf{x}}\right) }\right) }\right) \approx A\left( \overline{\mathbf{x}}\right)$ , which follows the encode-process-decode paradigm [41]. It consists of the following three building blocks: Encoder, $f : \overline{\mathcal{X}} \rightarrow \mathcal{Z}$ , tasked with projecting the abstract inputs into the latent space; Processor, $P : \mathcal{Z} \rightarrow \mathcal{Z}$ , simulating individual steps of the algorithm in the
74
+
75
+ < g r a p h i c s >
76
+
77
+ Figure 2: Reasoning-modulated representation learner (RMR).
78
+
79
+ latent space; Decoder, $g : \mathcal{Z} \rightarrow \overline{\mathcal{Y}}$ , tasked with projecting latents back into the abstract output space. Such a pipeline is now widely used both in neural algorithmic reasoning [12] and learning physical simulations [19], and can be trained end-to-end with gradient descent.
80
+
81
+ For reasons that will become apparent, it is favourable for most of the computational effort to be performed by $P$ . Accordingly, encoders and decoders are often designed to be simple learnable linear projections, while processors tend to be either deep MLPs or graph neural networks-depending on whether the latent space is factorised into nodes.
82
+
83
+ Natural pipeline. Once an appropriate processor, $P$ , has been learned, it may be observed that it corresponds to a highly favourable component in our setting. Namely, we can relate its operations to the algorithm $A$ , and since it stays high-dimensional, it is a differentiable component we can easily plug into other neural networks without incurring any bottleneck effects. This insight was originally recovered in XLVIN [8], where it yielded a generic implicit planner. As an ablation, we have also rediscovered the bottleneck effect in our settings; see Appendix C. We now leverage similar insights for general representation learning tasks.
84
+
85
+ On a high level, what we need to do is simple and elegant: swap out $f$ and $g$ for natural encoders and decoders, $\widetilde{f} : \mathcal{X} \rightarrow \mathcal{Z}$ and $\widetilde{g} : \mathcal{Z} \rightarrow \mathcal{Y}$ , respectively. We are then able to learn a function $\widetilde{g}\left( {P\left( {\widetilde{f}\left( \mathbf{x}\right) }\right) }\right) \approx \Phi \left( \mathbf{x}\right)$ , which is once again to be optimised through gradient descent. We would like $P$ to retain its semantics during training, and therefore it is typically kept frozen in the natural pipeline. Note that $P$ might not perfectly represent $A$ which in turn might not perfectly represent $\Phi$ . While we rely on a skip connection in our implementation of $P$ , it has no learnable parameters and does not offer the system the ability to learn a correction of $P$ in the natural setting. Our choice is motivated by the desire to both maintain the semantics and interpretability of $P$ and to make this processor a bottleneck forcing the model to rely on it. We show empirically that our pipeline is surprisingly robust to imperfect $P$ models even with weak (linear) encoders/decoders.
86
+
87
+ It is worth noting several potential challenges that may arise while training the natural pipeline, especially if the training data for it is sparsely available. We also suggest remedies for each:
88
+
89
+ * If $\mathrm{x}$ and/or $\mathrm{y}$ exhibit any nontrivial geometry, simple linear projections will rarely suffice for $\widetilde{f}$ and $\widetilde{g}$ . For example, our natural inputs will often be pixel-based, necessitating a convolutional neural network for $\widetilde{f}$ .
90
+
91
+ * Further, since the parameters of $P$ are kept frozen, $\widetilde{f}$ is left with a challenging task of mapping natural inputs into an appropriate manifold that $P$ can meaningfully operate over. While we demonstrate clear empirical evidence that such meaningful mappings definitely occur, we remark that its success may hinge on carefully tuning the hyperparameters of $\widetilde{f}$ .
92
+
93
+ * A very common setting assumes that the abstract inputs and latents are factorised into objects, but the natural inputs are not. In this case, $\widetilde{f}$ is tasked with predicting appropriate object representations from the natural inputs. This is known to be a challenging feat [42], but can be successfully performed. Sometimes arbitrarily factorising the feature maps of a CNN [31] is sufficient, while at other times, models such as slot attention [38] may be required.
94
+
95
+ * One corollary of using automated object extractors for $\widetilde{f}$ is that it’s very difficult to enforce their slot representations to line up in the same way as in the abstract inputs. This implies that $P$ should be permutation equivariant (and hence motivates using a GNN for it).
96
+
97
+ § 4 RMR FOR BOUNCING BALLS
98
+
99
+ To evaluate the capability of the RMR pipeline for transfer from the abstract space to the pixel space, we apply it on the "bouncing balls" problem. The bouncing balls problem is an instance of a physics simulation problem, where the task is to predict the next state of an environment in which multiple balls are bouncing between each other and a bounding box. Though this problem has been studied in the context of physics simulation from (abstract) trajectories [7] and from (natural) videos [18, 26, 43], here we focus on the aptitude of RMR to transfer learned representations from trajectories to videos.
100
+
101
+ Our results affirm that strong abstract models can be trained on such tasks, and that including them in a video pipeline induces more robust representations. See Appendix A for more details on hyperparameters and experimental setup.
102
+
103
+ Preliminaries. Here, trajectories are represented by 2D coordinates of 10 balls through time, defining our abstract inputs and outputs $\overline{\mathcal{X}} = \overline{\mathcal{Y}} = {\mathbb{R}}^{{10} \times 2}$. We slice these trajectories into a series of moving windows containing the input, ${\overline{\mathbf{x}}}^{ * }$, spanning a history of three previous states, and the target, $\overline{\mathbf{y}}$, representing the next state. We obtain these trajectories from a 3D simulator (MuJoCo [44]), together with their short-video renderings, which represent our natural input and output space $\mathcal{X} = \mathcal{Y} = {\mathbb{R}}^{{64} \times {64} \times 3}$. Our goal is to train an RMR abstract model on trajectories and transfer learned representations to improve a dynamics model trained on these videos.
104
+
105
+ Abstract pipeline. So as to model the dynamics of trajectories, we closely follow the RMR desiderata for the abstract model. We set $f$ to a linear projection over the input concatenation, $P$ to a Message Passing Neural Network (MPNN), following previous work [7,19], and $g$ to a linear projection.
106
+
107
+ Our model learns a transition function $g(P(f({\overline{\mathbf{x}}}^{ * }))) \approx \overline{\mathbf{y}}$, supervised using Mean Squared Error (MSE) over ball positions in the next step. It achieves an MSE of ${4.59} \times {10}^{-4}$, which, evaluated qualitatively, demonstrates the ability of the model to predict physically realistic behavior when unrolled for 10 steps (the model is trained on 1-step dynamics only). Next, we take the processor $P$ from the abstract pipeline and re-use it in the natural pipeline.
108
+
109
+ Natural pipeline. Here we evaluate whether the pre-trained RMR processor can be reused for learning the dynamics of the bouncing balls from videos. The pixel-based encoder $\bar{f}$ applies a Slot Attention model [38] to each input image, concatenates the results and passes them through a linear layer; the pixel-based decoder $\bar{g}$ is a Broadcast Decoder [45].
110
+
111
+ The full model is a transition function $\bar{g}(P(\bar{f}({\mathbf{x}}^{ * }))) \approx \mathbf{y}$, supervised by pixel reconstruction loss over the next-step image. We compare the performance of the RMR model with a pre-trained processor
112
+
113
+ $P$ against a baseline in which $P$ is trained fully end-to-end. The RMR model achieves an MSE of ${7.94} \pm {0.41}\left( {\times {10}^{-4}}\right)$ , whereas the baseline achieves ${9.47} \pm {0.24}\left( {\times {10}^{-4}}\right)$ . We take a qualitative look at the reconstruction rollout of the RMR model in Figure 3, and expose the algorithmic bottleneck properties for this task in Appendix C.
114
+
115
+ < g r a p h i c s >
116
+
117
+ Figure 3: RMR for bouncing balls reconstruction rollout. States marked in green are the natural input, followed by the reconstructed output. The states below the reconstruction are the ground truth.
118
+
119
+ § 5 CONTRASTIVE RMR FOR ATARI
120
+
121
+ We evaluate the potential of our RMR pipeline for state representation learning on the Atari 2600 [46]. We find the RMR applicable here because there is a potential wealth of information that can be obtained about the Atari's operation-namely, by inspecting its RAM traces.
122
+
123
+ Preliminaries. Accordingly, we will define our set of abstract inputs and outputs as Atari RAM matrices. Given that the Atari has 128 bytes of memory, $\overline{\mathcal{X}} = \overline{\mathcal{Y}} = {\mathbb{B}}^{{128} \times 8}$ (where $\mathbb{B} = \{ 0,1\}$ is the set of bits). We collect data about how the console modifies the RAM by acting in the environment and recording the trace of RAM arrays we observe. These traces will be of the form $\left( {\overline{\mathbf{x}},a,\overline{\mathbf{y}}}\right)$ which signify that the agent’s initial RAM state was $\overline{\mathbf{x}}$ , and that after performing action $a \in \mathcal{A}$ , its RAM state was updated to $\overline{\mathbf{y}}$ . We assume that $a$ is encoded as an 18-way one-hot vector.
124
+
125
+ We would like to leverage any reasoning module obtained over RAM states to support representation learning from raw pixels. Accordingly, our natural inputs, $\mathcal{X}$ , are pixel arrays representing the Atari’s framebuffer.
126
+
127
+ Mirroring prior work, we perform contrastive learning directly in the latent space, and set $\mathcal{Y} = \mathcal{Z}$ ; that is, our natural outputs correspond to an estimate of the "updated" latents after taking an action. All our models use latent representations of 64 dimensions per slot, meaning $\mathcal{Z} = {\mathbb{R}}^{{128} \times {64}}$ .
128
+
129
+ We note that it is important to generate a diverse dataset of experiences in order to train a robust RAM model. To simulate a dataset which might be gathered by human players of varying skill, we sample our data using the 32 policy heads of a pre-trained Agent57 [47]. Each policy head collects data over three episodes in the studied games. Note that this implies a substantially more challenging dataset than the one reported by [48], wherein data was collected by a purely random policy, which may well fail to explore many relevant regions of the games.
130
+
131
+ Abstract pipeline. Firstly, we set out to verify that it is possible to train nontrivial Atari RAM transition models. The construction of this abstract experiment follows almost exactly the abstract RMR setup: $f$ and $g$ are appropriately sized linear projections, while $P$ needs to take into account which action was taken when updating the latents. To simplify the implementation and allow further model re-use, we consider the action a part of the $P$ ’s inputs. See Appendix F for detailed equations.
132
+
133
+ This implies that our transition model learns a function $g\left( {P\left( {f\left( \bar{\mathbf{x}}\right) ,a}\right) }\right) \approx \overline{\mathbf{y}}$ . We supervise this model using binary cross-entropy to predict each bit of the resulting RAM state. Since RAM transitions are assumed deterministic, we assume a fully Markovian setup and learn 1-step dynamics.
134
+
135
+ For brevity purposes, we detail our exact hyperparameters and results for each Atari game considered in Appendix B. Our results establish the message passing neural network (MPNN) [49] as a highly potent processor network in Atari: it ranked best in 17 out of 24 games considered, compared to MLPs and Deep Sets [50]. Accordingly, we will focus on leveraging pre-trained MPNN processors for the next phase of the RMR pipeline.
136
+
137
+ Natural pipeline. We now set out to evaluate whether our pre-trained RMR processors can be meaningfully re-used by an encoder in a pixel-based contrastive learning pipeline.
138
+
139
+ Table 1: Natural modelling results for Atari 2600. Bit-level ${\mathrm{F}}_{1}$ reported for slots with high entropy, as in Anand et al. [48]. Results assumed significant at $p < {0.05}$ (one-sided paired Wilcoxon test).
140
+
141
+ \begin{tabular}{lcccc}
+ \toprule
+ Game & Agent57 & C-SWM & $\mathbf{RMR}$ & $p$-value \\
+ \midrule
+ Asteroids & $0.514 \pm 0.001$ & $0.582 \pm 0.009$ & $0.593 \pm 0.004$ & $< 10^{-5}$ \\
+ Battlezone & $0.351 \pm 0.003$ & $0.592 \pm 0.005$ & $0.589 \pm 0.007$ & $0.056$ \\
+ Berzerk & $0.454 \pm 0.084$ & $0.463 \pm 0.053$ & $0.470 \pm 0.025$ & $0.364$ \\
+ Bowling & $0.554 \pm 0.004$ & $0.944 \pm 0.006$ & $0.946 \pm 0.003$ & $0.071$ \\
+ Boxing & $0.558 \pm 0.002$ & $0.667 \pm 0.012$ & $0.669 \pm 0.011$ & $0.215$ \\
+ Breakout & $0.657 \pm 0.001$ & $0.836 \pm 0.009$ & $0.852 \pm 0.008$ & $< 10^{-5}$ \\
+ Demon Attack & $0.539 \pm 0.004$ & $0.653 \pm 0.006$ & $0.658 \pm 0.004$ & $0.002$ \\
+ Freeway & $0.424 \pm 0.052$ & $0.912 \pm 0.025$ & $0.919 \pm 0.035$ & $0.032$ \\
+ Frostbite & $0.405 \pm 0.001$ & $0.580 \pm 0.025$ & $0.594 \pm 0.016$ & $0.035$ \\
+ H.E.R.O. & $0.481 \pm 0.001$ & $0.729 \pm 0.026$ & $0.779 \pm 0.021$ & $< 10^{-5}$ \\
+ Montezuma's Revenge & $0.743 \pm 0.003$ & $0.824 \pm 0.012$ & $0.821 \pm 0.016$ & $0.156$ \\
+ Ms. Pac-Man & $0.506 \pm 0.001$ & $0.599 \pm 0.004$ & $0.602 \pm 0.006$ & $0.038$ \\
+ Pitfall! & $0.495 \pm 0.003$ & $0.626 \pm 0.015$ & $0.603 \pm 0.010$ & $< 10^{-5}$ \\
+ Pong & $0.392 \pm 0.001$ & $0.750 \pm 0.016$ & $0.762 \pm 0.010$ & $0.001$ \\
+ Private Eye & $0.594 \pm 0.001$ & $0.863 \pm 0.010$ & $0.867 \pm 0.008$ & $0.045$ \\
+ Q*Bert & $0.536 \pm 0.010$ & $0.588 \pm 0.015$ & $0.590 \pm 0.017$ & $0.165$ \\
+ River Raid & $0.686 \pm 0.001$ & $0.762 \pm 0.005$ & $0.764 \pm 0.007$ & $0.032$ \\
+ Seaquest & $0.472 \pm 0.007$ & $0.634 \pm 0.013$ & $0.653 \pm 0.008$ & $< 10^{-5}$ \\
+ Skiing & $0.599 \pm 0.007$ & $0.766 \pm 0.028$ & $0.775 \pm 0.014$ & $0.174$ \\
+ Space Invaders & $0.588 \pm 0.002$ & $0.719 \pm 0.012$ & $0.761 \pm 0.006$ & $< 10^{-5}$ \\
+ Tennis & $0.533 \pm 0.008$ & $0.724 \pm 0.007$ & $0.729 \pm 0.005$ & $0.007$ \\
+ Venture & $0.567 \pm 0.001$ & $0.632 \pm 0.005$ & $0.633 \pm 0.004$ & $0.392$ \\
+ Video Pinball & $0.375 \pm 0.011$ & $0.724 \pm 0.009$ & $0.745 \pm 0.008$ & $< 10^{-5}$ \\
+ Yars' Revenge & $0.608 \pm 0.001$ & $0.715 \pm 0.008$ & $0.751 \pm 0.010$ & $< 10^{-5}$ \\
+ \bottomrule
+ \end{tabular}
218
+
219
+ For our pixel-based encoder $\widetilde{f}$ , we use the same CNN trunk as Anand et al. [48]; however, as we require slot-level rather than flat embeddings, the final layers of our encoder are different. Namely, we apply a $1 \times 1$ convolution computing ${128m}$ feature maps (where $m$ is the number of feature maps per slot). We then flatten the spatial axes, giving every slot $m \times h \times w$ features, which we finally linearly project to 64-dimensional features per slot, aligning with our pre-trained $P$. Note that setting $m = 1$ recovers exactly the style of object detection employed by C-SWM [31]. Since our desired outputs are themselves latents, $\widetilde{g}$ is a single linear projection to 64 dimensions.
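As a sketch of this encoder head (the CNN trunk of [48] is abstracted behind `trunk`; `trunk_channels`, `h` and `w` denote its output shape and are assumptions needed to make the sketch self-contained):

```python
import torch.nn as nn

class SlotEncoderHead(nn.Module):
    """1x1 conv to 128*m maps, flatten the spatial axes per slot, project to 64 dims per slot."""
    def __init__(self, trunk: nn.Module, trunk_channels: int, h: int, w: int,
                 m: int = 1, num_slots: int = 128, slot_dim: int = 64):
        super().__init__()
        self.trunk = trunk                                      # CNN trunk as in Anand et al. [48]
        self.to_slots = nn.Conv2d(trunk_channels, num_slots * m, kernel_size=1)
        self.proj = nn.Linear(m * h * w, slot_dim)
        self.num_slots, self.m = num_slots, m

    def forward(self, frames):                                  # frames: [batch, C, H, W]
        feats = self.to_slots(self.trunk(frames))               # [batch, 128*m, h, w]
        b, _, h, w = feats.shape
        feats = feats.view(b, self.num_slots, self.m * h * w)   # one flattened feature bank per slot
        return self.proj(feats)                                 # [batch, 128, 64]
```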
220
+
221
+ Overall, our pixel-based transition model learns a function $\widetilde{g}\left( {P\left( {\widetilde{f}\left( \mathbf{x}\right) ,a}\right) }\right) \approx \widetilde{f}\left( \mathbf{y}\right)$ , where $\mathbf{y}$ is the next state observed after applying action $a$ in state $\mathrm{x}$ . To optimise it, we re-use exactly the same TransE-inspired [51] contrastive loss that C-SWM [31] used.
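The contrastive objective itself is not restated here; as a rough sketch, a TransE-style hinge loss of the kind used by C-SWM [31] can be written as below, with a squared-distance energy and margin $\gamma$ (the exact form and hyperparameters are those of [31], so treat this as an approximation rather than the definitive objective).

```python
import torch

def contrastive_transition_loss(z_next_pred, z_next, z_neg, gamma: float = 1.0):
    """TransE-style hinge loss in latent space (C-SWM-like, sketched).

    z_next_pred: predicted next latents, g~(P(f~(x), a))   [batch, slots, dim]
    z_next:      encoded true next state, f~(y)            [batch, slots, dim]
    z_neg:       encodings of negative (shuffled) states   [batch, slots, dim]
    """
    def energy(a, b):                                  # mean squared distance over slots and dims
        return ((a - b) ** 2).mean(dim=(1, 2))

    positive = energy(z_next_pred, z_next)                            # pull prediction towards the true latents
    negative = torch.clamp(gamma - energy(z_neg, z_next), min=0.0)    # push negatives at least gamma away
    return (positive + negative).mean()
```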
222
+
223
+ Once state representation learning concludes, all components are typically thrown away except for the encoder, $\widetilde{f}$ , which is used for downstream tasks. As a proxy for evaluating the quality of the encoder, we train linear classifiers on the concatenation of all slot embeddings obtained from $\widetilde{f}$ to predict individual RAM bits, exactly as in the abstract model case. Note that we have not violated our assumption that paired $\left( {\mathbf{x},\overline{\mathbf{x}}}\right)$ samples will not be provided while training the natural model-in this phase, the encoder $\widetilde{f}$ is frozen, and gradients can only flow into the linear probe.
224
+
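+ Schematically, the probing step can be sketched with scikit-learn as below (random arrays stand in for the frozen slot embeddings and RAM bits; the probe setup and metric here are assumptions for illustration):
+
+ ```python
+ import numpy as np
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.metrics import f1_score
+
+ rng = np.random.default_rng(0)
+ emb  = rng.normal(size=(256, 128 * 64))       # concatenated frozen slot embeddings
+ bits = rng.integers(0, 2, size=(256, 16))     # a handful of RAM-bit targets (placeholder)
+
+ scores = []
+ for b in range(bits.shape[1]):
+     y = bits[:, b]
+     probe = LogisticRegression(max_iter=200).fit(emb[:192], y[:192])
+     scores.append(f1_score(y[192:], probe.predict(emb[192:])))
+ print(f"mean probe F1 over {len(scores)} bits: {np.mean(scores):.3f}")
+ ```
+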
225
+ Our comparison for this experiment, evaluating our RMR pipeline against an identical architecture with an unfrozen $P$ (equivalent to C-SWM [31]), is provided in Table 1. For each of 20 random seeds, we feed identical batches to both models, hence we can perform a paired Wilcoxon test to assess the statistical significance of any observed differences on validation episodes. The results are in line with our hypothesis: representations learnt by RMR are significantly better $\left( {p < {0.05}}\right)$ on 15 out of 24 games, significantly worse only on Pitfall!, and indistinguishable from C-SWM's on the others. This is despite the fact that their architectures are identical, indicating that the pre-trained abstract model induces stronger representations for predicting the underlying data factors. As was the case for bouncing balls, we expose the algorithmic bottleneck here too; see Appendix C.
226
+
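+ For reference, such a paired one-sided Wilcoxon signed-rank test can be run with SciPy (the score arrays below are placeholders, not our measurements):
+
+ ```python
+ import numpy as np
+ from scipy.stats import wilcoxon
+
+ rng = np.random.default_rng(0)
+ # per-seed validation scores for the same 20 seeds and identical batches (placeholder data)
+ rmr_scores  = rng.normal(0.75, 0.02, size=20)
+ cswm_scores = rmr_scores - rng.normal(0.01, 0.01, size=20)
+
+ # one-sided alternative: RMR scores tend to be greater than C-SWM's
+ stat, p = wilcoxon(rmr_scores, cswm_scores, alternative="greater")
+ print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
+ ```
+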
227
+ As a relevant initial baseline, and to emphasise the difficulty of our task, we also include in Table 1 the performance of the latent embeddings extracted from a pre-trained Agent57 [47]. These embeddings are, perhaps unsurprisingly, substantially worse than both the RMR and C-SWM models. As Agent57 was trained on a reward-maximising objective, its embeddings are likely to capture the controllable aspects of the input, while filtering out the environment's background.
228
+
229
+ Abstract model transfer. While the results above indicate that endowing a self-supervised Atari representation learner with knowledge of the underlying game's RAM transitions yields stronger representations, one may still argue that this constitutes a form of "privileged information". This is due to the fact that we knew upfront which game we were learning the representations for, and hence could leverage the specific RAM dynamics of this game.
230
+
231
+ [Figure 4 graphic omitted]
232
+
233
+ Figure 4: A hierarchically clustered heatmap depiction of the transfer results, where abstract models pre-trained on the Train Games are transferred to the Test Games. The results are summarised by a Wilcoxon test, denoting where the performance of RMR is better than, does not differ from, or is worse than C-SWM's. In each cell, the mean relative $\%$ improvement is noted. The significance cutoff is $p < {0.05}$.
234
+
235
+ In the final experiment, we study a more general setting: we are given access to RAM traces showing us how the Atari console manipulates its memory in response to player input, but we cannot guarantee these traces came from the same game that we are performing representation learning over. Can we still effectively leverage this knowledge?
236
+
237
+ Specifically, we test abstract model transfer in the following way: first, we take a pre-trained abstract processor network $P$ from one game (the "train game") and freeze it. Then, using this processor, we perform the aforementioned natural pipeline training and testing over frames from another game (the "test game"). We then evaluate whether the recovered performance improves over the C-SWM model with unfrozen weights, once again using a paired one-sided Wilcoxon test over 20 seeds to ascertain the statistical significance of any differences observed. The final outcome of our experiment is hence a ${24} \times {24}$ matrix, indicating the quality of abstract model transfer from every game to every other game. Table 1 corresponds to the "diagonal" entries of this matrix.
238
+
239
+ The results, presented in Figure 4, testify to the performance of RMR. Representations learned by RMR transfer better in 64.6% of the train/test game pairs, are indistinguishable from C-SWM in ${33.7}\%$ of the game pairs, and perform worse than C-SWM in only 1.7% of game pairs. Therefore, in many circumstances, the answer to our original question is positive: discovering a trace of Atari RAM transitions of unknown origin can often be highly valuable for representation learning from Atari pixels, regardless of whether the underlying games match.
240
+
241
+ Qualitative analysis of transfer. While we find this result interesting in and of itself, it also raises interesting follow-up questions. Is representation learning on certain games more prone to being improved just by knowing anything about the Atari console? Figure 4 certainly implies so: several games (such as Seaquest or Space Invaders) have "fully-green" columns, making them "universal recipients". Similarly, we may be interested in the "donor" properties of each game: to what extent are their RAM models useful across a broad range of test games?
242
+
243
+ We study both of the above questions by performing a hierarchical (complete-link) clustering of the rows and columns of the ${24} \times {24}$ matrix of transfer performances, to identify clusters of related donor and recipient games. Both clusterings are marked in Figure 4 (on the sides of the matrix).
244
+
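+ Concretely, this kind of complete-link clustering can be obtained with SciPy (the $24 \times 24$ matrix below is a random placeholder standing in for the measured relative improvements):
+
+ ```python
+ import numpy as np
+ from scipy.cluster.hierarchy import fcluster, linkage
+ from scipy.spatial.distance import pdist
+
+ rng = np.random.default_rng(0)
+ transfer = rng.normal(size=(24, 24))   # rows: donor (train) games, columns: recipient (test) games
+
+ donor_linkage     = linkage(pdist(transfer), method="complete")     # cluster rows
+ recipient_linkage = linkage(pdist(transfer.T), method="complete")   # cluster columns
+
+ print(fcluster(donor_linkage, t=4, criterion="maxclust"))       # donor cluster labels
+ print(fcluster(recipient_linkage, t=4, criterion="maxclust"))   # recipient cluster labels
+ ```
+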
245
+ The analysis reveals several well-formed clusters, from which we are able to make some preliminary observations, based on the properties of the various games. To name a few examples:
246
+
247
+ * The strongest recipients (e.g. Yars' Revenge, H.E.R.O., Seaquest, Space Invaders and Asteroids) tend to include elements of "shooting" in their gameplay.
248
+
249
+ * Conversely, the weakest recipients (River Raid, Berzerk, Private Eye, Ms. Pac-Man and Pitfall!) are all games in which movement is generally unrestricted across most of the screen, indicating a larger range of possible coordinate values to model.
250
+
251
+ * Pong, Breakout, Battlezone and Skiing cluster closely in terms of donor properties; they are all games in which movement is restricted to one axis only.
252
+
253
+ * Lastly, strong donors (Montezuma's Revenge, Yars' Revenge, Q*Bert and Venture) all feature massive, abrupt changes to the game state (for example, because they feature multiple rooms). This implies that they might generally have seen a more diverse set of transitions. Further, Yars' Revenge features a massive laser which, conveniently, renders an RGB projection of the RAM itself onto the frame.
254
+
255
+ ## 6 Conclusions
256
+
257
+ We presented Reasoning-Modulated Representations (RMR), a novel approach for leveraging background algorithmic knowledge within a representation learning pipeline. By encoding the underlying algorithmic priors as weights of a processor neural network, we alleviate requirements on alignment between abstract and natural inputs and protect our model against bottlenecks. We believe that RMR paves the way to a novel class of algorithmic reasoning-inspired representations, with a high potential for transfer across tasks, a feat that has largely eluded deep reinforcement learning research.
papers/LOG/LOG 2022/LOG 2022 Conference/QDN0jSXuvtX/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,410 @@
1
+ # FakeEdge: Alleviate Dataset Shift in Link Prediction
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Link prediction is a crucial problem in graph-structured data. Due to the recent success of graph neural networks (GNNs), a variety of GNN-based models have been proposed to tackle the link prediction task. Specifically, GNNs leverage the message passing paradigm to obtain node representations, which relies on link connectivity. However, in a link prediction task, links in the training set are always present while those in the testing set are not yet formed, resulting in a discrepancy in connectivity patterns and a bias in the learned representations. This leads to a dataset shift problem which degrades the model performance. In this paper, we first identify the dataset shift problem in the link prediction task and provide theoretical analyses on how existing link prediction methods are vulnerable to it. We then propose FakeEdge, a model-agnostic technique, to address the problem by mitigating the graph topological gap between training and testing sets. Extensive experiments demonstrate the applicability and superiority of FakeEdge on multiple datasets across various domains.
12
+
13
+ ## 1 Introduction
14
+
15
+ Graph structured data is ubiquitous across a variety of domains, including social networks [1], protein-protein interactions [2], movie recommendations [3], and citation networks [4]. It provides a non-Euclidean structure to describe the relations among entities. The link prediction task is to predict missing links or new forming links in an observed network [5]. Recently, with the success of graph neural networks (GNNs) for graph representation learning [6-9], several GNN-based methods have been developed [10-14] to solve link prediction tasks. These methods encode the representation of target links with the topological structures and node/edge attributes in their local neighborhood. After recognizing the pattern of observed links (training sets), they predict the likelihood of forming new links between node pairs (testing sets) where no link is yet observed.
16
+
17
+ Nevertheless, existing methods pose a discrepancy of the target link representation between training and testing sets. As the target link is never observed in the testing set by the nature of the task, it will have a different local topological structure when compared to its counterpart from the training set. Thus, the corrupted topological structure shifts the target link representation in the testing set, which we recognize as a dataset shift problem [15, 16] in link prediction.
18
+
19
+ We give a concrete example to illustrate how dataset shift can happen in the link prediction task, especially for GNN-based models with message passing paradigm [17] simulating the 1-dimensional Weisfeiler-Lehman (1-WL) test [18]. In Figure 1, we have two local neighborhoods sampled as subgraphs from the training (top) and testing (bottom) set respectively. The node pairs of interest, which we call focal node pairs, are denoted by black bold circles. From a bird's-eye viewpoint, these two subgraphs are isomorphic when we consider the existence of the positive test link (dashed line), even though the test link has not been observed. Ideally, two isomorphic graphs should have the same representation encoded by GNNs, leading to the same link prediction outcome. However, one iteration of 1-WL in Figure 1 produces different colors for the focal node pairs between training and testing sets, which indicates that the one-layer GNN can encode different representations for these two isomorphic subgraphs, giving rise to dataset shift issue.
20
+
21
+ Dataset shift can substantially degrade model performance since it violates the common assumption that the joint distribution of inputs and outputs stays the same in both the training and testing set. The root cause of this phenomenon in link prediction is the unique characteristic of the target link: the link always plays a dual role in the problem setting and determines both the input and the output for a link prediction task. The existence of the link apparently decides whether it is a positive or negative sample (output). Simultaneously, the presence of the link can also influence how the representation is learned through the introduction of different topological structures around the link (input). Thus, it entangles representation learning and labels in the link prediction problem.
22
+
23
+ ![01963ef3-6c52-7a03-8a97-3dd88902a9e0_1_310_202_1175_321_0.jpg](images/01963ef3-6c52-7a03-8a97-3dd88902a9e0_1_310_202_1175_321_0.jpg)
24
+
25
+ Figure 1: 1-WL test is performed to exhibit the learning process of GNNs. Two node pairs (denoted as bold black circles) and their surrounding subgraphs are sampled from the graph as a training (top) and testing (bottom) instance respectively. Two subgraphs are isomorphic when we omit the focal links. One iteration of 1-WL assigns different colors, indicating the occurrence of dataset shift.
26
+
27
+ To decouple the dual role of the link, we advocate a framework, namely subgraph link prediction, which disentangles the label of the link and its topological structure. As most practical link prediction methods make a prediction by capturing the local neighborhood of the link [1, 11, 12, 19, 20], we unify them all into this framework, where the input is the extracted subgraph around the focal node pair and the output is the likelihood of forming a link incident with the focal node pair in the subgraph. From the perspective of the framework, we find that the dataset shift issue is mainly caused by the presence/absence of the focal link in the subgraph from the training/testing set. This motivates us to propose a simple but effective technique, FakeEdge, to deliberately add or remove the focal link in the subgraph so that the subgraph can stay consistent across training and testing. FakeEdge is a model-agnostic technique, allowing it to be applied to any subgraph link prediction model. It assures that the model would learn the same subgraph representation regardless of the existence of the focal link. Lastly, empirical experiments prove that diminishing the dataset shift issue can significantly boost the link prediction performance on different baseline models.
28
+
29
+ We summarize our contributions as follows. We first unify most of the link prediction methods into a common framework named as subgraph link prediction, which treats link prediction as a subgraph classification task. In the view of the framework, we theoretically investigate the dataset shift issue in link prediction tasks, which motivates us to propose FakeEdge, a model-agnostic augmentation technique, to ease the distribution gap between the training and testing. We further conduct extensive experiments on a variety of baseline models to reveal the performance improvement with FakeEdge to show its capability of alleviating the dataset shift issue on a broad range of benchmarks.
30
+
31
+ ## 2 Related work
32
+
33
+ Link Prediction. Early studies on link prediction problems mainly focus on heuristics methods, which require expertise on the underlying trait of network or hand-crafted features, including Common Neighbor [1], Adamic-Adar index [20] and Preferential Attachment [21], etc. WLNM [22] suggests a method to encode the induced subgraph of the target link as an adjacency matrix to represent the link. With the huge success of GNN [9], GNN-based link prediction methods have become dominant across different areas. Graph Auto Encoder(GAE) and Variational Graph Auto Encoder(VGAE) [10] perform link prediction tasks by reconstructing the graph structure. SEAL [11] and DE [13] propose methods to label the nodes according to the distance to the focal node pair. To better exploit the structural motifs [23] in distinct graphs, a walk-based pooling method (WalkPool) [12] is designed to extract the representation of the local neighborhood. PLNLP [14] sheds light on pairwise learning to rank the node pairs of interest. Based on two-dimensional Weisfeiler-Lehman tests, Hu et al. propose a link prediction method that can directly obtain node pair representation [24].
34
+
35
+ Graph Data Augmentation. Several data augmentation methods are introduced to modify the graph connectivity by adding or removing edges [25]. DropEdge [26] acts like a message passing reducer to tackle over-smoothing or overfitting problems [27]. Topping et al. modify the graph's topological structure by removing negatively curved edges to solve the bottleneck issue [29] of message passing [28]. GDC [30] applies graph diffusion methods on the observed graph to generate a diffused counterpart as the computation graph. For the link prediction task, CFLP [31] generates counterfactual links to augment the original graph. Edge Proposal Set [32] injects edges into the training graph, which are recognized by other link predictors in order to improve performance.
36
+
37
+ ## 3 A proposed unified framework for link prediction
38
+
39
+ In this section, we formally introduce the link prediction task and formulate several existing GNN-based methods into a common general framework.
40
+
41
+ ### 3.1 Preliminary
42
+
43
+ Let $\mathcal{G} = \left( {V, E,{\mathbf{x}}^{V},{\mathbf{x}}^{E}}\right)$ be an undirected graph. $V$ is the set of nodes with size $n$, which can be indexed as $\{i\}_{i=1}^{n}$. $E \subseteq V \times V$ is the observed set of edges. ${\mathbf{x}}_{i}^{V} \in {\mathcal{X}}^{V}$ represents the feature of node $i$. ${\mathbf{x}}_{i, j}^{E} \in {\mathcal{X}}^{E}$ represents the feature of the edge $(i, j)$ if $\left( {i, j}\right) \in E$. The other, unobserved set of edges is ${E}_{c} \subseteq V \times V \smallsetminus E$; these edges are either missing or will form in the future in the original graph $\mathcal{G}$. $d\left( {i, j}\right)$ denotes the shortest path distance between nodes $i$ and $j$. The $r$-hop enclosing subgraph ${\mathcal{G}}_{i, j}^{r}$ for nodes $i, j$ is the subgraph induced from $\mathcal{G}$ by the node set ${V}_{i, j}^{r} = \{ v \mid v \in V, d\left( {v, i}\right) \leq r \text{ or } d\left( {v, j}\right) \leq r\}$. The edge set of ${\mathcal{G}}_{i, j}^{r}$ is ${E}_{i, j}^{r} = \left\{ {\left( {p, q}\right) \mid \left( {p, q}\right) \in E\text{ and }p, q \in {V}_{i, j}^{r}}\right\}$. An enclosing subgraph ${\mathcal{G}}_{i, j}^{r} = \left( {{V}_{i, j}^{r},{E}_{i, j}^{r},{\mathbf{x}}_{{V}_{i, j}}^{V},{\mathbf{x}}_{{E}_{i, j}}^{E}}\right)$ contains all the information in the neighborhood of nodes $i, j$. The node set $\{ i, j\}$ is called the focal node pair, for which we are interested in whether there exists (observed) or should exist (unobserved) an edge between nodes $i$ and $j$. In the context of link prediction, we will use the term subgraph to denote an enclosing subgraph in the following sections.
44
+
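+ For concreteness, extracting an $r$-hop enclosing subgraph can be sketched with NetworkX (a minimal sketch; practical implementations such as SEAL's also perform node labeling and sampling, which are omitted here):
+
+ ```python
+ import networkx as nx
+
+ def enclosing_subgraph(G: nx.Graph, i, j, r: int = 1) -> nx.Graph:
+     """Return the r-hop enclosing subgraph around the focal node pair (i, j)."""
+     # nodes within r hops of either endpoint
+     near_i = nx.single_source_shortest_path_length(G, i, cutoff=r)
+     near_j = nx.single_source_shortest_path_length(G, j, cutoff=r)
+     nodes = set(near_i) | set(near_j)
+     return G.subgraph(nodes).copy()
+
+ G = nx.karate_club_graph()
+ sub = enclosing_subgraph(G, 0, 33, r=1)
+ print(sub.number_of_nodes(), sub.number_of_edges())
+ ```
+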
45
+ ### 3.2 Subgraph link prediction
46
+
47
+ In this section, we discuss the definition of Subgraph Link Prediction and investigate how current link prediction methods can be unified in this framework. We mainly focus on link prediction methods based on GNNs, which propagate the message to each node's neighbors in order to learn the representation. We start by giving the definition of the subgraph's properties:
48
+
49
+ Definition 1. Given a graph $\mathcal{G} = \left( {V, E,{\mathbf{x}}^{V},{\mathbf{x}}^{E}}\right)$ and the unobserved edge set ${E}_{c}$ , a subgraph ${\mathcal{G}}_{i, j}^{r}$ have the following properties:
50
+
51
+ 1. a label $\mathrm{y} \in \{ 0,1\}$ of the subgraph indicates whether there exists, or will form, an edge incident with the focal node pair $\{ i, j\}$. That is, ${\mathcal{G}}_{i, j}^{r}$ has label $\mathrm{y} = 1$ if and only if $\left( {i, j}\right) \in E \cup {E}_{c}$. Otherwise, the label is $\mathrm{y} = 0$.
52
+
53
+ 2. the existence $\mathrm{e} \in \{ 0,1\}$ of an edge in the subgraph indicates whether there is an edge observed at the focal node pair $\{ i, j\}$ . If $\left( {i, j}\right) \in E,\mathrm{e} = 1$ . Otherwise $\mathrm{e} = 0$ .
54
+
55
+ 3. a phase $\mathrm{c} \in \{\text{train}, \text{test}\}$ denotes whether the subgraph belongs to the training or testing stage. In particular, for a positive subgraph $\left( {\mathrm{y} = 1}\right)$, if $\left( {i, j}\right) \in E$ then $\mathrm{c} = \text{train}$, and if $\left( {i, j}\right) \in {E}_{c}$ then $\mathrm{c} = \text{test}$.
56
+
57
+ Note that, the label $\mathrm{y} = 1$ does not necessarily indicate the observation of the edge at the focal node pair $\{ i, j\}$ . A subgraph in the testing set may have the label $\mathrm{y} = 1$ but the edge may not be present. The existence $\mathrm{e} = 1$ only when the edge is observed at the focal node pair.
58
+
59
+ Definition 2. Given a subgraph ${\mathcal{G}}_{i, j}^{r}$ , Subgraph Link Prediction is a task to learn a feature $\mathbf{h}$ of the subgraph ${\mathcal{G}}_{i, j}^{r}$ and uses it to predict the label $\mathrm{y} \in \{ 0,1\}$ of the subgraph.
60
+
61
+ Generally, subgraph link prediction regards the link prediction task as a subgraph classification task. The pipeline of subgraph link prediction starts with extracting the subgraph ${\mathcal{G}}_{i, j}^{r}$ around the focal node pair $\{ i, j\}$ , and then applies GNNs to encode the node representation $\mathbf{Z}$ . The latent feature $\mathbf{h}$ of the subgraph is obtained by pooling methods on $\mathbf{Z}$ . In the end, the subgraph feature $\mathbf{h}$ is fed into a classifier. In summary, the whole pipeline entails:
62
+
63
+ 1. Subgraph Extraction: Extract the subgraph ${\mathcal{G}}_{i, j}^{r}$ around the focal node pair $\{ i, j\}$ ;
64
+
65
+ 2. Node Representation Learning: $Z = \operatorname{GNN}\left( {\mathcal{G}}_{i, j}^{r}\right)$ , where $Z \in {\mathbb{R}}^{\left| {V}_{i, j}^{r}\right| \times {F}_{\text{hidden }}}$ is the node embedding matrix learned by the GNN encoder;
66
+
67
+ 3. Pooling: $\mathbf{h} = \operatorname{Pooling}\left( {\mathbf{Z};{\mathcal{G}}_{i, j}^{r}}\right)$, where $\mathbf{h} \in {\mathbb{R}}^{{F}_{\text{pooled }}}$ is the latent feature of the subgraph ${\mathcal{G}}_{i, j}^{r}$;
+
+ 4. Classification: $y = \operatorname{Classifier}\left( \mathbf{h}\right)$.
68
+
69
+ There are two main streams of GNN-based link prediction models. Models like SEAL [11] and WalkPool [12] can naturally fall into the subgraph link prediction framework, as they thoroughly follow the pipeline. In SEAL, SortPooling [33] serves as a readout to aggregate the node's features in the subgraph. WalkPool designs a random-walk based pooling method to extract the subgraph feature h. Both methods take advantage of the node's representation from the entire subgraph.
70
+
71
+ In addition, there is another stream of link prediction models, such as GAE [10] and PLNLP [14], which learns node representations and then devises a score function on the representations of the focal node pair to represent the likelihood of forming a link. We find that these GNN-based methods with the message passing paradigm also fit into the subgraph link prediction framework. Considering a GAE with $l$ layers, each node $v$ essentially learns its embedding from its $l$-hop neighbors $\{ i \mid i \in V, d\left( {i, v}\right) \leq l\}$. The score function can then be regarded as a center pooling on the subgraph, which only aggregates the features of the focal node pair as $\mathbf{h}$ to represent the subgraph. For a focal node pair $\{ i, j\}$ and a GAE with $l$ layers, an $l$-hop subgraph ${\mathcal{G}}_{i, j}^{l}$ contains all the information needed to learn the representation of nodes in the subgraph and score the focal node pair $i, j$. Thus, these GNN-based models can also be seen as members of the subgraph link prediction family. In terms of the score function, there are plenty of options with different predictive power in practice. In general, the common choices are: (1) Hadamard product: $\mathbf{h} = {z}_{i} \circ {z}_{j}$; (2) MLP: $\mathbf{h} = \operatorname{MLP}\left( {{z}_{i} \circ {z}_{j}}\right)$ where MLP is a Multi-Layer Perceptron; (3) BiLinear: $\mathbf{h} = {z}_{i}\mathbf{W}{z}_{j}$ where $\mathbf{W}$ is a learnable matrix; (4) BiLinearMLP: $\mathbf{h} = \operatorname{MLP}\left( {z}_{i}\right) \circ \operatorname{MLP}\left( {z}_{j}\right)$.
72
+
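+ The four score functions above can be sketched in PyTorch roughly as follows (the embedding dimension and MLP shape are illustrative assumptions):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ d = 64                                                        # node embedding dimension (assumed)
+ mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
+ W = nn.Parameter(torch.randn(d, d))
+
+ def hadamard(zi, zj):     return zi * zj                      # (1) Hadamard product
+ def mlp_score(zi, zj):    return mlp(zi * zj)                 # (2) MLP on the Hadamard product
+ def bilinear(zi, zj):     return zi @ W @ zj                  # (3) bilinear form (scalar)
+ def bilinear_mlp(zi, zj): return mlp(zi) * mlp(zj)            # (4) BiLinearMLP
+
+ zi, zj = torch.randn(d), torch.randn(d)
+ print(hadamard(zi, zj).shape, bilinear(zi, zj).item())
+ ```
+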
73
+ In addition to GNN-based methods, the concept of the subgraph link prediction can be extended to low-order heuristics link predictors, like Common Neighbor [1], Adamic-Adar index [20], Preferential Attachment [21], Jaccard Index [34], and Resource Allocation [35]. The predictors with the order $r$ can be computed by the subgraph ${\mathcal{G}}_{i, j}^{r}$ . The scalar value can be seen as the latent feature $\mathbf{h}$ .
74
+
75
+ ## 4 FakeEdge: Mitigates dataset shift in subgraph link prediction
76
+
77
+ In this section, we start by giving the definition of dataset shift in the general case, and then formally discuss how dataset shift occurs with regard to subgraph link prediction. Then we propose FakeEdge as a graph augmentation technique to ease the distribution gap of the subgraph representation between the training and testing sets. Lastly, we discuss how FakeEdge can enhance the expressive power of any GNN-based subgraph link prediction model.
78
+
79
+ ### 4.1 Dataset shift
80
+
81
+ Definition 3. Dataset Shift happens when the joint distribution between train and test is different. That is, $p\left( {\mathbf{h},\mathrm{y} \mid \mathrm{c} = \text{ train }}\right) \neq p\left( {\mathbf{h},\mathrm{y} \mid \mathrm{c} = \text{ test }}\right)$ .
82
+
83
+ A simple example of dataset shift is an object detection system. If the system is only designed and trained under good weather conditions, it may fail to capture objects in bad weather. In general, dataset shift is often caused by some unknown latent variable, like the weather condition in the example above. The unknown variable is not observable during the training phase so the model cannot fully capture the conditions during testing. Similarly, the edge existence $\mathrm{e} \in \{ 0,1\}$ in the subgraph poses as an "unknown" variable in the subgraph link prediction task. Most of the current GNN-based models neglect the effect of the edge existence on encoding the subgraph's feature.
84
+
85
+ Definition 4. A subgraph’s feature $\mathbf{h}$ is called Edge Invariant if $p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e}}\right) = p\left( {\mathbf{h},\mathbf{y}}\right)$ .
86
+
87
+ To explain, an Edge Invariant subgraph embedding stays the same whether or not the edge is present at the focal node pair. It disentangles the edge's existence and the subgraph representation learning. For example, the common neighbor predictor is Edge Invariant because the existence of an edge at the focal node pair will not affect the number of common neighbors that two nodes can have. However, Preferential Attachment, another widely used link prediction heuristic, is not Edge Invariant because the node degrees vary depending on the existence of the edge.
88
+
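+ A tiny NetworkX example illustrating the difference (the toy graph is an arbitrary assumption):
+
+ ```python
+ import networkx as nx
+
+ G = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)])
+ i, j = 0, 3                                            # focal node pair, currently without an edge
+
+ cn = lambda: len(list(nx.common_neighbors(G, i, j)))   # common neighbor heuristic
+ pa = lambda: G.degree(i) * G.degree(j)                 # preferential attachment heuristic
+
+ print(cn(), pa())   # without the focal edge -> 2 4
+ G.add_edge(i, j)
+ print(cn(), pa())   # with the focal edge    -> 2 9
+ # common neighbors is unchanged (Edge Invariant); preferential attachment is not
+ ```
+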
89
+ Theorem 1. GNN cannot learn the subgraph feature $\mathbf{h}$ to be Edge Invariant.
90
+
91
+ Recall that the subgraphs in Figure 1 are encoded differently between the training and testing set because of the presence/absence of the focal link. Thus, the vanilla GNN cannot learn the Edge
92
+
93
+ ![01963ef3-6c52-7a03-8a97-3dd88902a9e0_4_309_208_1162_364_0.jpg](images/01963ef3-6c52-7a03-8a97-3dd88902a9e0_4_309_208_1162_364_0.jpg)
94
+
95
+ Figure 2: The proposed four FakeEdge methods. In general, FakeEdge encourages the link prediction model to learn the subgraph representation by always deliberately adding or removing the edges at the focal node pair in each subgraph. In this way, FakeEdge can reduce the distribution gap of the learned subgraph representation between the training and testing set.
96
+
97
+ Invariant subgraph feature. Learning Edge Invariant subgraph feature is crucial to mitigate the dataset shift problem. Here, we give our main theorem about the issue in the link prediction task:
98
+
99
+ Theorem 2. Given $p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e},\mathbf{c}}\right) = p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e}}\right)$ , there is no Dataset Shift in the link prediction if the subgraph embedding is Edge Invariant. That is, $p\left( {\mathbf{h},\mathrm{y} \mid \mathbf{e}}\right) = p\left( {\mathbf{h},\mathrm{y}}\right) \Rightarrow p\left( {\mathbf{h},\mathrm{y} \mid \mathbf{c}}\right) = p\left( {\mathbf{h},\mathrm{y}}\right)$ .
100
+
101
+ The assumption $p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e},\mathbf{c}}\right) = p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e}}\right)$ states that when the edge at the focal node pair is taken into consideration, the joint distribution keeps the same across the training and testing stages, which means that there is no other underlying unobserved latent variable shifting the distribution. The theorem shows an Edge Invariant subgraph embedding will not cause a dataset shift phenomenon.
102
+
103
+ Theorem 2 gives us the motivation to design the subgraph embedding to be Edge Invariant. When it comes to GNNs, the practical GNN is essentially a message passing neural network [17]. The existence of the edge incident at the focal node pair can determine the computational graph for message passing when learning the node representation.
104
+
105
+ ### 4.2 Proposed methods
106
+
107
+ Having developed conditions of dataset shift phenomenon in link prediction, we next introduce several straightforward subgraph augmentation techniques, FakeEdge (Figure 2), which satisfies the conditions in Theorem 2. The motivation is to mitigate the distribution shift of the subgraph embedding by eliminating the different patterns of target link existence between training and testing sets. Note that all of the strategies follow the same discipline: align the topological structure around the focal node pair in the training and testing datasets, especially for the isomorphic subgraphs. Therefore, we expect that it can gain comparable performance improvement across different strategies.
108
+
109
+ Compared to the vanilla GNN-based subgraph link prediction methods, FakeEdge augments the computation graph for node representation learning and subgraph pooling step to obtain an Edge Invariant embedding for the entire subgraph.
110
+
111
+ Edge Plus A simple strategy is to always make the edge present at the focal node pair for all training and testing samples. Namely, we add an edge into the edge set of subgraph by ${E}_{i, j}^{r + } = {E}_{i, j}^{r} \cup \{ \left( {i, j}\right) \}$ , and use this edge set to calculate the representation ${\mathbf{h}}^{\text{plus }}$ of the subgraph ${\mathcal{G}}_{i, j}^{r + }$ .
112
+
113
+ Edge Minus Another straightforward modification is to remove the edge at the focal node pair if existing. That is, we remove the edge from the edge set of subgraph by ${E}_{i, j}^{r - } = {E}_{i, j}^{r} \smallsetminus \{ \left( {i, j}\right) \}$ , and obtain the representation ${\mathbf{h}}^{\text{minus }}$ from ${\mathcal{G}}_{i, j}^{r - }$ .
114
+
115
+ For GNN-based models, adding or removing edges at the focal node pair can amplify or reduce message propagation along the subgraph. It may also change the connectivity of the subgraph. We are interested to see if it can be beneficial to take both situations into consideration by combining them. Based on Edge Plus and Edge Minus, we further develop another two Edge Invariant methods:
116
+
117
+ Edge Mean To combine Edge Plus and Edge Minus, one can extract these two features and fuse them into one view. One way is to take the average of the two latent features by ${\mathbf{h}}^{\text{mean }} = \frac{{\mathbf{h}}^{\text{plus }} + {\mathbf{h}}^{\text{minus }}}{2}$ .
118
+
119
+ Edge Att Edge Mean weighs ${\mathcal{G}}_{i, j}^{r + }$ and ${\mathcal{G}}_{i, j}^{r - }$ equally on all subgraphs. To vary the importance of the two modified subgraphs, we can apply an adaptive weighted sum operation. Similar to the practice in text translation [36], we apply an attention mechanism to fuse ${\mathbf{h}}^{\text{plus }}$ and ${\mathbf{h}}^{\text{minus }}$ by:
120
+
121
+ $$
122
+ {\mathbf{h}}^{\text{att }} = {w}^{\text{plus }} * {\mathbf{h}}^{\text{plus }} + {w}^{\text{minus }} * {\mathbf{h}}^{\text{minus }}, \tag{1}
123
+ $$
124
+
125
+ $$
126
+ \text{where } {w}^{ \cdot } = \operatorname{SoftMax}\left( {{\mathbf{q}}^{\top } \cdot \tanh \left( {\mathbf{W} \cdot {\mathbf{h}}^{ \cdot } + \mathbf{b}}\right) }\right) \tag{2}
127
+ $$
128
+
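+ A minimal sketch of the four variants: Edge Plus/Minus operate on the subgraph's edge set, while Edge Mean/Edge Att fuse the two pooled embeddings (edges are assumed to be stored as canonical tuples; the dimensions and attention parameters are illustrative placeholders):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def edge_plus(edges, focal):
+     return set(edges) | {focal}               # always add the focal edge
+
+ def edge_minus(edges, focal):
+     return set(edges) - {focal}               # always remove the focal edge (if present)
+
+ d = 64
+ q = nn.Parameter(torch.randn(d))
+ W = nn.Parameter(torch.randn(d, d))
+ b = nn.Parameter(torch.randn(d))
+
+ def fuse(h_plus, h_minus, mode="att"):
+     if mode == "mean":                        # Edge Mean
+         return 0.5 * (h_plus + h_minus)
+     # Edge Att: attention weights over the two views, as in Eqs. (1)-(2)
+     scores = torch.stack([q @ torch.tanh(W @ h + b) for h in (h_plus, h_minus)])
+     w = torch.softmax(scores, dim=0)
+     return w[0] * h_plus + w[1] * h_minus
+
+ h_plus, h_minus = torch.randn(d), torch.randn(d)   # pooled embeddings of the +/- subgraphs
+ print(fuse(h_plus, h_minus).shape)
+ ```
+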
129
+ ### 4.3 Expressive power of structural representation
130
+
131
+ In addition to solving the issue of dataset shift, FakeEdge can tackle another problem that impedes the expressive power of link prediction methods on the structural representation [37]. In general, a powerful model is expected to discriminate most of the non-isomorphic focal node pairs. For instance, in Figure 3 we have two isomorphic subgraphs $A$ and $B$ , which do not have any overlapping nodes. Suppose that the focal node pairs we are interested in are $\{ u, w\}$ and $\{ v, w\}$ . Obviously, those two focal node pairs have different structural roles in the graph, and we expect different structural representations for them. With GNN-based methods like GAE, the node representation of the node $u$ and $v$ will be the same ${z}_{u} = {z}_{v}$ , due to the fact that they have isomorphic neighborhoods. GAE applies a score function on the focal node pair to pool the subgraph's feature. Hence, the structural representation of node sets $\{ u, w\}$ and $\{ v, w\}$ would be the same, leaving them inseparable in the embedding space. This issue is caused by the limitation of GNNs, whose expressive power is bounded by 1-WL test [38].
132
+
133
+ ![01963ef3-6c52-7a03-8a97-3dd88902a9e0_5_334_606_592_496_0.jpg](images/01963ef3-6c52-7a03-8a97-3dd88902a9e0_5_334_606_592_496_0.jpg)
134
+
135
+ Figure 3: Given two isomorphic but non-overlapping subgraphs $A$ and $B$, GNNs learn the same representation for the nodes $u$ and $v$. Hence, GNN-based methods cannot distinguish the focal node pairs $\{ u, w\}$ and $\{ v, w\}$. However, adding a FakeEdge at $\{ u, w\}$ (shown as the dashed line in the figure) breaks the tie between the representations of $u$ and $v$, thanks to $u$'s modified neighborhood.
136
+
137
+ Zhang et al. address this problem by assigning distinct labels between the focal node pair and the rest of the nodes in the subgraph [19]. With FakeEdge, we can utilize the Edge Plus strategy to deliberately add an edge between nodes $u$ and $w$ (shown as the dashed line in Figure 3). Note that the edge between $v$ and $w$ has already existed, so there is no need to add an edge between them. Therefore, the node $u$ and $v$ will have different neighborhoods ( $u$ has 4 neighbors and $v$ has 3 neighbors), resulting in the different node representation between the node $u$ and $v$ after the first iteration of message propagation with GNN. In the end, we can obtain different representations for two focal node pairs.
138
+
139
+ According to Theorem 2 in [19], such non-isomorphic focal node pairs $\{ u, w\} ,\{ v, w\}$ are not sporadic cases in a graph. Given an $n$ nodes graph whose node degree is $\mathcal{O}\left( {{\log }^{\frac{1 - \epsilon }{2r}}n}\right)$ for any constant $\epsilon > 0$ , there exists $\omega \left( {n}^{2\epsilon }\right)$ pairs of such kind of $\{ u, w\}$ and $\{ v, w\}$ , which cannot be distinguished by GNN-based models like GAE. However, FakeEdge can enhance the expressive power of link prediction methods by modifying the subgraph's local connectivity.
140
+
141
+ ## 5 Experiments
142
+
143
+ In this section, we conduct extensive experiments to evaluate how FakeEdge can mitigate the dataset shift issue on various baseline models in the link prediction task. Then we empirically show the distribution gap of the subgraph representation between the training and testing and discuss how the dataset shift issue can worsen with deeper GNNs.
144
+
145
+ Table 1: Comparison with and without FakeEdge (AUC). The best results are highlighted in bold.
146
+
147
+ <table><tr><td>Models</td><td>FakeEdge</td><td>$\mathbf{{Cora}}$</td><td>Citeseer</td><td>Pubmed</td><td>USAir</td><td>NS</td><td>$\mathbf{{PB}}$</td><td>Yeast</td><td>C.ele</td><td>Power</td><td>Router</td><td>E.coli</td></tr><tr><td rowspan="5">GCN</td><td>Original</td><td>${84.92} \pm {1.95}$</td><td>77.05±2.18</td><td>${81.58} \pm {4.62}$</td><td>${94.07} \pm {1.50}$</td><td>96.92±0.73</td><td>93.17±0.45</td><td>93.76±0.65</td><td>${88.78} \pm {1.85}$</td><td>76.32±4.65</td><td>${60.72} \pm {5.88}$</td><td>95.35±0.36</td></tr><tr><td>Edge Plus</td><td>${91.94}_{\pm {0.90}}$</td><td>89.54±1.17</td><td>${97.91}_{\pm {0.14}}$</td><td>${97.10} \pm {1.01}$</td><td>98.03±0.72</td><td>${95.48} \pm {0.42}$</td><td>97.86±0.27</td><td>89.65±1.74</td><td>${\mathbf{{85.42}}}_{\pm {0.91}}$</td><td>95.96±0.41</td><td>98.05±0.30</td></tr><tr><td>Edge Minus</td><td>${92.01}_{\pm {0.94}}$</td><td>90.29±0.ss</td><td>97.87±0.15</td><td>97.16±0.97</td><td>${\mathbf{{98.14}}}_{\pm {0.66}}$</td><td>${95.50}_{\pm {0.43}}$</td><td>${\mathbf{{97.90}}}_{\pm {0.29}}$</td><td>89.47±1.86</td><td>${85.39}_{\pm {1.08}}$</td><td>96.05±0.37</td><td>97.97±0.31</td></tr><tr><td>Edge Mean</td><td>${91.86} \pm {0.76}$</td><td>${89.61}_{\pm {0.96}}$</td><td>${97.94}_{\pm {0.13}}$</td><td>${97.19}_{\pm {1.00}}$</td><td>${98.08}_{\pm {0.66}}$</td><td>95.52±0.43</td><td>${97.70}_{\pm {0.36}}$</td><td>89.62±1.82</td><td>${85.23} \pm {1.00}$</td><td>${\mathbf{{96.08}}}_{\pm {0.35}}$</td><td>${\mathbf{{98.07}}}_{\pm {0.27}}$</td></tr><tr><td>Edge Att</td><td>${\mathbf{{92.06}}}_{\pm {0.85}}$</td><td>${}_{{88.96} \pm {1.05}}$</td><td>${\mathbf{{97.96}}}_{\pm {0.12}}$</td><td>${\mathbf{{97.20}}}_{\pm {0.69}}$</td><td>97.96±0.39</td><td>${95.46}_{\pm {0.45}}$</td><td>97.65±0.17</td><td>89.76±2.06</td><td>${}_{{85.26} \pm {1.32}}$</td><td>${95.90}_{\pm {0.47}}$</td><td>${98.04}_{\pm {0.16}}$</td></tr><tr><td rowspan="5">SAGE</td><td>Original</td><td>89.12±0.90</td><td>87.76±0.97</td><td>${94.95}_{\pm {0.44}}$</td><td>96.57±0.57</td><td>${98.11} \pm {0.48}$</td><td>${94.12} \pm {0.45}$</td><td>${97.11}_{\pm {0.31}}$</td><td>87.62±1.63</td><td>79.35±1.66</td><td>${88.37} \pm {1.46}$</td><td>95.70±0.44</td></tr><tr><td>Edge Plus</td><td>${93.21}_{\pm {0.82}}$</td><td>90.88±0.80</td><td>${97.91}_{\pm {0.14}}$</td><td>97.64±0.73</td><td>${\mathbf{{98.72}}}_{\pm {0.59}}$</td><td>95.68±0.39</td><td>98.20±0.13</td><td>${\mathbf{{90.94}}}_{\pm {1.48}}$</td><td>86.36±0.97</td><td>96.46±0.38</td><td>${98.41}_{\pm {0.19}}$</td></tr><tr><td>Edge Minus</td><td>${92.45} \pm {0.78}$</td><td>90.14±1.04</td><td>97.93±0.14</td><td>97.50±0.67</td><td>98.66±0.55</td><td>95.57±0.39</td><td>${}_{{98.13} \pm {0.10}}$</td><td>90.83±1.59</td><td>${85.62} \pm {1.17}$</td><td>92.91±1.09</td><td>98.34±0.26</td></tr><tr><td>Edge Mean</td><td>92.77±0.69</td><td>90.60±0.94</td><td>97.96±0.13</td><td>97.67±0.70</td><td>${98.62} \pm {0.61}$</td><td>${\mathbf{{95.69}}}_{\pm {0.37}}$</td><td>${98.20}_{\pm {0.13}}$</td><td>90.86±1.51</td><td>${86.24} \pm {1.01}$</td><td>${96.22} \pm {0.38}$</td><td>${98.41}_{\pm {0.21}}$</td></tr><tr><td>Edge Att</td><td>${\mathbf{{93.31}}}_{\pm {1.02}}$</td><td>${\mathbf{{91.01}}}_{\pm {1.14}}$</td><td>${\mathbf{{98.01}}}_{\pm {0.13}}$</td><td>97.40±0.94</td><td>${98.70}_{\pm {0.59}}$</td><td>${95.49}_{\pm {0.49}}$</td><td>${\mathbf{{98.22}}}_{\pm {0.24}}$</td><td>90.64±1.88</td><td>$\mathbf{{86.46}} \pm {0.91}$</td><td>96.31±0.59</td><td>${\mathbf{{98.43}}}_{\pm {0.13}}$</td></tr><tr><td 
rowspan="5">GIN</td><td>Original</td><td>${82.70}_{\pm {1.93}}$</td><td>77.85±2.64</td><td>${91.32} \pm {1.13}$</td><td>${94.89}_{\pm {0.89}}$</td><td>96.05±1.10</td><td>${92.95} \pm {0.51}$</td><td>${94.50}_{\pm {0.65}}$</td><td>${85.23} \pm {2.56}$</td><td>${73.29}_{\pm {3.88}}$</td><td>${84.29}_{\pm {1.20}}$</td><td>${94.34}_{\pm {0.57}}$</td></tr><tr><td>Edge Plus</td><td>90.72±1.11</td><td>89.54±1.19</td><td>97.63±0.14</td><td>96.03±1.37</td><td>98.51±0.55</td><td>${95.38}_{\pm {0.35}}$</td><td>${\mathbf{{97.84}}}_{\pm {0.40}}$</td><td>${\mathbf{{89.71}}}_{\pm {2.06}}$</td><td>${\mathbf{{86.61}}}_{\pm {0.87}}$</td><td>${95.79}_{\pm {0.48}}$</td><td>97.67±0.23</td></tr><tr><td>Edge Minus</td><td>89.88±1.26</td><td>${89.30} \pm {1.08}$</td><td>97.27±0.17</td><td>96.36±0.83</td><td>98.62±0.45</td><td>${95.35}_{\pm {0.35}}$</td><td>${97.80}_{\pm {0.41}}$</td><td>${89.40}_{\pm {1.91}}$</td><td>${86.55} \pm {0.83}$</td><td>95.72±0.45</td><td>97.33±0.36</td></tr><tr><td>Edge Mean</td><td>90.30±1.22</td><td>89.47±1.13</td><td>97.53±0.19</td><td>${\mathbf{{96.45}}}_{\pm {0.90}}$</td><td>98.66±0.45</td><td>${\mathbf{{95.39}}}_{\pm {0.37}}$</td><td>97.78±0.40</td><td>89.66±2.00</td><td>${86.51}_{\pm {0.92}}$</td><td>95.73±0.43</td><td>97.57±0.32</td></tr><tr><td>Edge Att</td><td>90.76±0.88</td><td>89.55±0.61</td><td>${97.50}_{\pm {0.15}}$</td><td>${96.34}_{\pm {0.82}}$</td><td>98.35±0.54</td><td>${95.29}_{\pm {0.29}}$</td><td>97.66±0.33</td><td>${89.39}_{\pm {1.61}}$</td><td>${86.21}_{\pm {0.67}}$</td><td>95.78±0.52</td><td>${\mathbf{{97.74}}}_{\pm {0.33}}$</td></tr><tr><td rowspan="5">PLNLP</td><td>Original</td><td>${82.37} \pm {1.70}$</td><td>${82.93} \pm {1.73}$</td><td>87.36±4.90</td><td>${95.37}_{\pm {0.87}}$</td><td>97.86±0.93</td><td>${92.99}_{\pm {0.71}}$</td><td>${95.09}_{\pm {1.47}}$</td><td>${88.31}_{\pm {2.21}}$</td><td>${81.59} \pm {4.31}$</td><td>${86.41}_{\pm {1.63}}$</td><td>90.63±1.68</td></tr><tr><td>Edge Plus</td><td>${91.62} \pm {0.87}$</td><td>${\mathbf{{89.88}}}_{\pm {1.19}}$</td><td>${98.31}_{\pm {0.21}}$</td><td>${98.09}_{\pm {0.73}}$</td><td>${\mathbf{{98.77}}}_{\pm {0.39}}$</td><td>${\mathbf{{95.33}}}_{\pm {0.39}}$</td><td>${98.10}_{\pm {0.33}}$</td><td>${91.77} \pm {2.16}$</td><td>90.04±0.57</td><td>${96.45}_{\pm {0.40}}$</td><td>${\mathbf{{98.03}}}_{\pm {0.23}}$</td></tr><tr><td>Edge Minus</td><td>${\mathbf{{91.84}}}_{\pm {1.42}}$</td><td>${88.99}_{\pm {1.48}}$</td><td>${\mathbf{{98.44}}}_{\pm {0.14}}$</td><td>97.92±0.52</td><td>${98.59}_{\pm {0.44}}$</td><td>${95.20}_{\pm {0.34}}$</td><td>${98.01}_{\pm {0.38}}$</td><td>${91.60}_{\pm {2.23}}$</td><td>89.26±0.58</td><td>${95.01}_{\pm {0.47}}$</td><td>97.80±0.16</td></tr><tr><td>Edge Mean</td><td>${91.77} \pm {1.49}$</td><td>${89.45} \pm {1.50}$</td><td>98.36±0.16</td><td>${\mathbf{{98.17}}}_{\pm {0.60}}$</td><td>${98.66} \pm {0.56}$</td><td>${95.30}_{\pm {0.37}}$</td><td>98.10±0.39</td><td>${91.70} \pm {2.18}$</td><td>90.05±0.52</td><td>${96.29} \pm {0.47}$</td><td>${98.02} \pm {0.20}$</td></tr><tr><td>Edge Att</td><td>${91.22} \pm {1.34}$</td><td>${88.75} \pm {1.70}$</td><td>${98.41}_{\pm {0.17}}$</td><td>98.13±0.61</td><td>${98.70} \pm {0.40}$</td><td>${95.32} \pm {0.38}$</td><td>98.06±0.37</td><td>${91.72} \pm {2.12}$</td><td>${\mathbf{{90.08}}}_{\pm {0.54}}$</td><td>${96.40} \pm {0.40}$</td><td>${98.01}_{\pm {0.18}}$</td></tr><tr><td rowspan="5">SEAL</td><td>Original</td><td>${90.13} \pm {1.94}$</td><td>${87.59} \pm {1.57}$</td><td>${95.79}_{\pm {0.78}}$</td><td>97.26±0.58</td><td>97.44±1.07</td><td>${95.06}_{\pm 
{0.46}}$</td><td>${96.91}_{\pm {0.45}}$</td><td>${88.75} \pm {1.90}$</td><td>${}_{{78.14} \pm {3.14}}$</td><td>${92.35} \pm {1.21}$</td><td>97.33±0.28</td></tr><tr><td>Edge Plus</td><td>${90.01} \pm {1.95}$</td><td>89.65±1.22</td><td>97.30±0.34</td><td>97.34±0.59</td><td>98.35±0.63</td><td>95.35±0.38</td><td>97.67±0.32</td><td>${89.20} \pm {1.86}$</td><td>${85.25} \pm {0.80}$</td><td>95.47±0.58</td><td>97.84±0.25</td></tr><tr><td>Edge Minus</td><td>${91.04} \pm {1.91}$</td><td>89.74±1.16</td><td>97.50±0.33</td><td>97.27±0.63</td><td>98.17±0.74</td><td>${\mathbf{{95.36}}}_{\pm {0.37}}$</td><td>97.64±0.30</td><td>89.35±1.98</td><td>${\mathbf{{85.30}}}_{\pm {0.91}}$</td><td>${\mathbf{{95.77}}}_{\pm {0.79}}$</td><td>97.79±0.30</td></tr><tr><td>Edge Mean</td><td>90.36±2.17</td><td>89.87±1.14</td><td>${\mathbf{{97.52}}}_{\pm {0.34}}$</td><td>${\mathbf{{97.38}}}_{\pm {0.68}}$</td><td>${98.23} \pm {0.70}$</td><td>${95.30}_{\pm {0.34}}$</td><td>97.68±0.33</td><td>${89.19}_{\pm {1.85}}$</td><td>${85.30}_{\pm {0.87}}$</td><td>${95.61} \pm {0.64}$</td><td>97.83±0.23</td></tr><tr><td>Edge Att</td><td>$\mathbf{{91.08}} \pm {1.67}$</td><td>89.35±1.43</td><td>97.26±0.45</td><td>97.04±0.79</td><td>${\mathbf{{98.52}}}_{\pm {0.57}}$</td><td>95.19±0.43</td><td>${\mathbf{{97.70}}}_{\pm {0.40}}$</td><td>${\mathbf{{89.37}}}_{\pm {1.40}}$</td><td>${}_{{85.24} \pm {1.39}}$</td><td>${95.14}_{\pm {0.62}}$</td><td>${\mathbf{{97.90}}}_{\pm {0.33}}$</td></tr><tr><td rowspan="5">WalkPool</td><td>Original</td><td>${92.00} \pm {0.79}$</td><td>89.64±1.01</td><td>97.70±0.19</td><td>97.83±0.97</td><td>${99.00} \pm {0.45}$</td><td>94.53±0.44</td><td>${96.81} \pm {0.92}$</td><td>93.71±1.11</td><td>${82.43} \pm {3.57}$</td><td>87.46±7.45</td><td>${95.00} \pm {0.90}$</td></tr><tr><td>Edge Plus</td><td>91.96±0.79</td><td>${89.49}_{\pm {0.96}}$</td><td>98.36±0.13</td><td>97.97±0.96</td><td>${98.99}_{\pm {0.58}}$</td><td>95.47±0.32</td><td>98.28±0.24</td><td>93.79±1.11</td><td>${91.24} \pm {0.84}$</td><td>${97.31}_{\pm {0.26}}$</td><td>98.65±0.17</td></tr><tr><td>Edge Minus</td><td>${91.97} \pm {0.80}$</td><td>89.61±1.04</td><td>${\mathbf{{98.43}}}_{\pm {0.10}}$</td><td>98.03±0.95</td><td>99.02±0.54</td><td>95.47±0.32</td><td>${\mathbf{{98.30}}}_{\pm {0.23}}$</td><td>${\mathbf{{93.83}}}_{\pm {1.13}}$</td><td>${\mathbf{{91.28}}}_{\pm {0.90}}$</td><td>${\mathbf{{97.35}}}_{\pm {0.28}}$</td><td>98.66±0.17</td></tr><tr><td>Edge Mean</td><td>${91.77} \pm {0.74}$</td><td>89.55±1.09</td><td>${98.39}_{\pm {0.11}}$</td><td>98.01±0.89</td><td>99.02±0.56</td><td>${95.47} \pm {0.29}$</td><td>${98.30}_{\pm {0.24}}$</td><td>93.70±1.12</td><td>${91.26} \pm {0.81}$</td><td>97.27±0.29</td><td>98.65±0.19</td></tr><tr><td>Edge Att</td><td>${91.98} \pm {0.80}$</td><td>${}_{{89.36} \pm {0.74}}$</td><td>98.37±0.19</td><td>${\mathbf{{98.12}}}_{\pm {0.81}}$</td><td>99.03±0.50</td><td>${\mathbf{{95.47}}}_{\pm {0.27}}$</td><td>98.28±0.24</td><td>93.63±1.11</td><td>${91.25} \pm {0.60}$</td><td>97.27±0.27</td><td>${\mathbf{{98.70}}}_{\pm {0.14}}$</td></tr></table>
148
+
149
+ ### 5.1 Experimental setup
150
+
151
+ Baseline methods. We show how FakeEdge techniques can improve the existing link prediction methods, including GAE-like models [10], PLNLP [14], SEAL [11], and WalkPool [12]. To examine the effectiveness of FakeEdge, we compare the model performance with subgraph representation learned on the original unmodified subgraph and the FakeEdge augmented ones. For GAE-like models, we apply different GNN encoders, including GCN [9], SAGE [39] and GIN [38]. SEAL and WalkPool have already been implemented in the fashion of the subgraph link prediction. However, a subgraph extraction preprocessing is needed for GAE and PLNLP, since they are not initially implemented as the subgraph link prediction. GCN, SAGE, and PLNLP use a score function to pool the subgraph. GCN and SAGE use the Hadamard product as the score function, while MLP is applied for PLNLP (see Section 3.2 for discussions about the score function). Moreover, GIN applies a subgraph-level pooling strategy, called "mean readout" [38], whose pooling is based on the entire subgraph. Similarly, SEAL and WalkPool also utilize the pooling on the entire subgraph to aggregate the representation. More details about the model implementation can be found in Appendix C.
152
+
153
+ Benchmark datasets. For the experiment, we use 3 datasets with node attributes and 8 without attributes. The graph datasets with node attributes are three citation networks: Cora [40], Citeseer [41], and Pubmed [42]. The graph datasets without node attributes are eight graphs in a variety of domains: USAir [43], NS [44], PB [45], Yeast [46], C.ele [47], Power [47], Router [48], and E.coli [49]. More details about the benchmark datasets can be found in Appendix D.
154
+
155
+ Evaluation protocols. Following the same experimental setting as in [11, 12], the links are split into 3 parts: ${85}\%$ for training, $5\%$ for validation, and ${10}\%$ for testing. The links in validation and testing are unobserved during the training phase. We also implement a universal data pipeline for different methods to eliminate the data perturbation caused by the train/test split. We perform 10 random data splits to reduce the performance disturbance. Area under the curve (AUC) [50] is used as the evaluation metric and is reported for the epoch with the highest score on the validation set.
156
+
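+ A minimal sketch of such a split, with an equal number of sampled negative pairs (the graph and fractions below are placeholders for illustration):
+
+ ```python
+ import random
+ import networkx as nx
+
+ def split_edges(G, val_frac=0.05, test_frac=0.10, seed=0):
+     rng = random.Random(seed)
+     edges = list(G.edges())
+     rng.shuffle(edges)
+     n_val, n_test = int(val_frac * len(edges)), int(test_frac * len(edges))
+     val, test, train = edges[:n_val], edges[n_val:n_val + n_test], edges[n_val + n_test:]
+     # negative samples: node pairs with no observed edge
+     nodes, negs = list(G.nodes()), set()
+     while len(negs) < n_val + n_test:
+         u, v = rng.sample(nodes, 2)
+         if not G.has_edge(u, v):
+             negs.add((u, v))
+     return train, val, test, sorted(negs)
+
+ train, val, test, negs = split_edges(nx.karate_club_graph())
+ print(len(train), len(val), len(test), len(negs))
+ ```
+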
157
+ ### 5.2 Results
158
+
159
+ FakeEdge on GAE-like models. The results of models with (Edge Plus, Edge Minus, Edge Mean, and Edge Att) and without (Original) FakeEdge are shown in Table 1. We observe that FakeEdge is a vital component for all different methods. With FakeEdge, the link prediction model can obtain a significant performance improvement on all datasets. GAE-like models and PLNLP achieve the most
160
+
161
+ ![01963ef3-6c52-7a03-8a97-3dd88902a9e0_7_308_196_1181_342_0.jpg](images/01963ef3-6c52-7a03-8a97-3dd88902a9e0_7_308_196_1181_342_0.jpg)
162
+
163
+ Figure 4: Distribution gap (AUC) of the positive samples between the training and testing set.
164
+
165
+ remarkable performance improvement when FakeEdge alleviates the dataset shift issue. FakeEdge boosts them by $2\% - {11}\%$ on different datasets. GCN, SAGE, and PLNLP all have a score function as the pooling methods, which is solely based on the focal node pair. In particular, the focal node pair is incident with the target link, which determines how the message passes around it. Therefore, the most severe dataset shift issues happen at the embedding of the focal node pair during the node representation learning step. FakeEdge is expected to bring a notable improvement to these situations.
166
+
167
+ Encoder matters. In addition, the choice of encoder plays an important role when GAE is deployed on the Original subgraph. We can see that SAGE shows the best performance without FakeEdge among these 3 encoders. However, after applying FakeEdge, all GAE-like methods achieve comparably good results regardless of the choice of encoder. We hypothesize that plain SAGE itself leverages the idea of FakeEdge to partially mitigate the dataset shift issue. Each node's neighborhood in SAGE is a fixed-size set of nodes, which is uniformly sampled from the full neighborhood set. Thus, when learning the node representation of the focal node pair in the positive training sets, it is possible that one node of the focal node pair is not selected as the neighbor of the other node during the neighborhood sampling stage. In this case, the subgraph is effectively modified in the same way as the FakeEdge technique Edge Minus.
168
+
169
+ FakeEdge on subgraph-based models. In terms of SEAL and WalkPool, FakeEdge can still robustly enhance the model performance across different datasets. Especially for datasets like Power and Router, FakeEdge increases the AUC by over 10% for both methods. Both methods achieve better results across different datasets, except the WalkPool model on the datasets Cora and Citeseer. One of the crucial components of WalkPool is the walk-based pooling method, which actually operates on both the Edge Plus and Edge Minus graphs. Unlike the FakeEdge technique, WalkPool tackles the dataset shift problem mainly at the subgraph pooling stage. Thus, WalkPool shows similar model performance between the Original and FakeEdge augmented graphs. Moreover, SEAL and WalkPool have utilized one of the FakeEdge techniques as a trick in their initial implementations. However, their papers do not explicitly point out that this trick fixes the dataset shift issue.
170
+
171
+ Different FakeEdge techniques. When comparing different FakeEdge techniques, Edge Att appears to be the most stable, with a slightly better overall performance and a smaller variance. However, there is no significant difference between these techniques. This observation is consistent with our expectation since all FakeEdge techniques follow the same discipline to fix the dataset shift issue.
172
+
173
+ ### 5.3 Further discussions
174
+
175
+ In this section, we conduct experiments to more thoroughly study why FakeEdge can improve the performance of the link prediction methods. We first give an empirical experiment to show how severe the distribution gap can be between training and testing. Then, we discuss the dataset shift issue with deeper GNNs.
176
+
177
+ #### 5.3.1 Distribution gap between the training and testing
178
+
179
+ FakeEdge aims to produce Edge Invariant subgraph embeddings during the training and testing phases of the link prediction task, especially for the positive samples $p\left( {\mathbf{h} \mid \mathbf{y} = 1}\right)$. That is, the subgraph representations of positive samples from training and testing should be difficult, if not impossible, to distinguish from each other. Formally, we ask whether $p\left( \mathbf{h} \mid \mathrm{y} = 1, \mathrm{c} = \text{train} \right) = p\left( \mathbf{h} \mid \mathrm{y} = 1, \mathrm{c} = \text{test} \right)$ by conducting an empirical experiment on the subgraph embedding.
180
+
181
+ Table 2: GIN's performance improvement by Edge Att compared to Original with a different number of layers. GIN utilizes mean-pooling as the subgraph-level readout.
182
+
183
+ <table><tr><td>Layers</td><td>$\mathbf{{Cora}}$</td><td>Citeseer</td><td>Pubmed</td><td>USAir</td><td>NS</td><td>$\mathbf{{PB}}$</td><td>Yeast</td><td>C.ele</td><td>Power</td><td>Router</td><td>E.coli</td></tr><tr><td>1</td><td>†2.80%</td><td>↑3.65%</td><td>†4.53%</td><td>10.29%</td><td>†1.30%</td><td>†1.02%</td><td>†1.54%</td><td>↑2.13%</td><td>†5.24%</td><td>†11.19%</td><td>†1.67%</td></tr><tr><td>2</td><td>↑4.66%</td><td>↑14.53%</td><td>↑6.64%</td><td>10.73%</td><td>↑1.55%</td><td>↑2.16%</td><td>↑3.40%</td><td>↑5.41%</td><td>↑25.32%</td><td>↑14.73%</td><td>↑2.59%</td></tr><tr><td>3</td><td>↑9.78%</td><td>↑15.19%</td><td>†6.57%</td><td>10.98%</td><td>†2.49%</td><td>†2.43%</td><td>↑3.60%</td><td>†4.48%</td><td>†20.46%</td><td>↑13.38%</td><td>↑3.14%</td></tr></table>
184
+
185
+ We retrieve the subgraph embeddings of the positive samples from both the training and testing stages, and randomly shuffle them. Then we classify whether each sample is from training ($\mathrm{c} = \text{train}$) or testing ($\mathrm{c} = \text{test}$). The shuffled positive samples are split ${80}\% /{20}\%$ into train and inference sets. Note that the train set here, as well as the inference set, contains shuffled positive samples from both the training and testing sets of the link prediction task. Then we feed the subgraph embedding into a 2-layer MLP classifier to investigate whether the classifier can differentiate the training samples ($\mathrm{c} = \text{train}$) from the testing samples ($\mathrm{c} = \text{test}$). In general, the classifier will struggle to perform this classification if the embeddings of training and testing samples are drawn from the same underlying distribution, which indicates there is no significant dataset shift issue.
186
+
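+ This probe can be sketched with scikit-learn as follows (the embeddings are random placeholders standing in for the GAE subgraph embeddings; an AUC near 0.5 would indicate no detectable gap):
+
+ ```python
+ import numpy as np
+ from sklearn.model_selection import train_test_split
+ from sklearn.neural_network import MLPClassifier
+ from sklearn.metrics import roc_auc_score
+
+ rng = np.random.default_rng(0)
+ h_train_phase = rng.normal(0.0, 1.0, size=(500, 32))   # positive-sample embeddings with c = train
+ h_test_phase  = rng.normal(0.2, 1.0, size=(500, 32))   # positive-sample embeddings with c = test
+
+ X = np.vstack([h_train_phase, h_test_phase])
+ y = np.array([0] * 500 + [1] * 500)                     # target: which phase the sample came from
+
+ X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
+ clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0).fit(X_tr, y_tr)
+ print(f"phase-probe AUC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")
+ ```
+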
187
+ We use GAE with the GCN as the encoder to run the experiment. AUC is used to measure the discriminating power of the classifier. The results are shown in Figure 4. Without FakeEdge, the classifier shows a significant ability to separate positive samples between training and testing. When it comes to the subgraph embedding with FakeEdge, the classifier stumbles in distinguishing the samples. The comparison clearly reveals how different the subgraph embedding can be between the training and testing, while FakeEdge can both provably and empirically diminish the distribution gap.
188
+
189
+ #### 5.3.2 Dataset shift with deeper GNNs
190
+
191
+ Given two graphs with $n$ nodes in each graph, the 1-WL test may take up to $n$ iterations to determine whether the two graphs are isomorphic [51]. Thus, GNNs, which mimic the 1-WL test, tend to discriminate more non-isomorphic graphs when the number of GNN layers increases. SEAL [19] has empirically observed stronger representation power and obtained more expressive link representations with deeper GNNs. However, we notice that the dataset shift issue in subgraph link prediction becomes more severe when GNNs try to capture long-range information with more layers.
192
+
193
+ We reproduce the experiments on GIN by using $l = 1,2,3$ message passing layers and compare the model performance by AUC scores with and without FakeEdge. Here we only apply Edge Att as the FakeEdge technique. The relative AUC score improvement of Edge Att is reported, namely $\left( \mathrm{AUC}_{\text{EdgeAtt}} - \mathrm{AUC}_{\text{Original}} \right) / \mathrm{AUC}_{\text{Original}}$. The results are shown in Table 2. As we can observe, the relative performance improvement between Edge Att and Original becomes more significant with more layers, which indicates that the dataset shift issue can be potentially more critical when we seek deeper GNNs for greater predictive power.
194
+
195
+ To explain this phenomenon, we hypothesize that GNNs with more layers involve more nodes in the subgraph whose computation graphs depend on the existence of the edge at the focal node pair. For example, select a node $v$ from the subgraph ${\mathcal{G}}_{i, j}^{r}$ that is at least $l$ hops away from the focal node pair $\{ i, j\}$ , namely $l = \min \left( {d\left( {i, v}\right) , d\left( {j, v}\right) }\right)$ . If the GNN has only $l$ layers, the edge $(i, j)$ is not part of $v$'s computation graph; with $l + 1$ layers, the edge $(i, j)$ does affect $v$'s computation graph, as the sketch below illustrates. We leave the validation of this hypothesis to future work.
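+ A small sketch (hypothetical helper, using networkx) of the condition behind this hypothesis: the focal edge $(i, j)$ enters node $v$'s computation graph only when the number of message passing layers exceeds $\min(d(i, v), d(j, v))$:

```python
import networkx as nx

def focal_edge_in_receptive_field(G, i, j, v, num_layers):
    # Assumes v is connected to both focal nodes in the subgraph G.
    d_iv = nx.shortest_path_length(G, i, v)
    d_jv = nx.shortest_path_length(G, j, v)
    # A message over edge (i, j) needs one hop to cross the edge itself plus
    # min(d_iv, d_jv) further hops to reach v.
    return min(d_iv, d_jv) + 1 <= num_layers
```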
196
+
197
+ ## 6 Conclusion
198
+
199
+ Dataset shift is arguably one of the most challenging problems in the world of machine learning. However, to the best of our knowledge, none of the previous studies sheds light on this notable phenomenon in link prediction. In this paper, we studied the issue of dataset shift in link prediction tasks with GNN-based models. We first unified several existing models into a framework of subgraph link prediction. Then, we theoretically investigated the phenomenon of dataset shift in subgraph link prediction and proposed a model-agnostic technique FakeEdge to amend the issue. Experiments with different models over a wide range of datasets verified the effectiveness of FakeEdge.
200
+
201
+ ## References
202
+
203
+ [1] David Liben-Nowell and Jon Kleinberg. The link prediction problem for social networks. In Proceedings of the twelfth international conference on Information and knowledge management, CIKM '03, pages 556-559, New York, NY, USA, November 2003. Association for Computing Machinery. ISBN 978-1-58113-723-1. doi: 10.1145/956863.956972. URL http://doi.org/ 10.1145/956863.956972.1, 2, 4
204
+
205
+ [2] Damian Szklarczyk, Annika L. Gable, David Lyon, Alexander Junge, Stefan Wyder, Jaime Huerta-Cepas, Milan Simonovic, Nadezhda T. Doncheva, John H. Morris, Peer Bork, Lars J. Jensen, and Christian von Mering. STRING v11: protein-protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. Nucleic Acids Research, 47(D1):D607-D613, January 2019. ISSN 1362-4962. doi: 10.1093/ nar/gky1131. 1
206
+
207
+ [3] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30-37, 2009. Publisher: IEEE. 1
208
+
209
+ [4] Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning, pages 40-48. PMLR, 2016. 1
210
+
211
+ [5] Yang Yang, Ryan N. Lichtenwalter, and Nitesh V. Chawla. Evaluating link prediction methods. Knowledge and Information Systems, 45(3):751-782, December 2015. ISSN 0219-3116. doi: 10.1007/s10115-014-0789-0. URL https://doi.org/10.1007/s10115-014-0789-0.1
212
+
213
+ [6] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper/2016/ file/04df4d434d481c5bb723be1b6df1ee65-Paper.pdf. 1
214
+
215
+ [7] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional Networks on Graphs for Learning Molecular Fingerprints. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/ f9be311e65d81a9ad8150a60844bb94c-Paper.pdf.
216
+
217
+ [8] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral Networks and Locally Connected Networks on Graphs. arXiv:1312.6203 [cs], May 2014. URL http: //arxiv.org/abs/1312.6203. arXiv: 1312.6203.
218
+
219
+ [9] Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. arXiv:1609.02907 [cs, stat], February 2017. URL http://arxiv.org/abs/1609.02907.arXiv: 1609.02907. 1, 2, 7, 14
220
+
221
+ [10] Thomas N. Kipf and Max Welling. Variational Graph Auto-Encoders, 2016. _eprint: 1611.07308.1,2,4,7
222
+
223
+ [11] Muhan Zhang and Yixin Chen. Link Prediction Based on Graph Neural Networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ 53f0d7c537d99b3824f0f99d62ea2428-Paper.pdf. 2, 4, 7, 14
224
+
225
+ [12] Liming Pan, Cheng Shi, and Ivan Dokmanić. Neural Link Prediction with Walk Pooling. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=CCu6RcUMwK0.2,4,7,14
226
+
227
+ [13] Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 4465-4478. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/ 2f73168bf3656f697507752ec592c437-Paper.pdf.2
228
+
229
+ [14] Zhitao Wang, Yong Zhou, Litao Hong, Yuanhang Zou, Hanjing Su, and Shouzhi Chen. Pairwise Learning for Neural Link Prediction. arXiv:2112.02936 [cs], January 2022. URL http: //arxiv.org/abs/2112.02936. arXiv: 2112.02936. 1, 2, 4, 7, 14
230
+
231
+ [15] Joaquin Quiñonero-Candela, Masashi Sugiyama, Anton Schwaighofer, Neil D. Lawrence, Michael I. Jordan, and Thomas G. Dietterich, editors. Dataset Shift in Machine Learning. Neural Information Processing series. MIT Press, Cambridge, MA, USA, December 2008. ISBN 978-0-262-17005-5. 1
232
+
233
+ [16] Jose G. Moreno-Torres, Troy Raeder, RocíO Alaiz-RodríGuez, Nitesh V. Chawla, and Francisco Herrera. A unifying view on dataset shift in classification. Pattern Recognition, 45(1):521-530, January 2012. ISSN 0031-3203. doi: 10.1016/j.patcog.2011.06.019. URL http://doi.org/ 10.1016/j.patcog.2011.06.019.1
234
+
235
+ [17] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural Message Passing for Quantum Chemistry. CoRR, abs/1704.01212, 2017. URL http: //arxiv.org/abs/1704.01212. arXiv: 1704.01212. 1, 5, 13
236
+
237
+ [18] Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. NTI, Series, 2(9):12-16, 1968. 1
238
+
239
+ [19] Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 9061-9073. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/4be49c79f233b4f4070794825c323733-Paper.pdf. 2, 6, 9, 14
240
+
241
+ [20] Lada A. Adamic and Eytan Adar. Friends and neighbors on the Web. Social Networks, 25(3): 211-230, 2003. ISSN 0378-8733. doi: https://doi.org/10.1016/S0378-8733(03)00009-1.URL https://www.sciencedirect.com/science/article/pii/S0378873303000091.2,4
242
+
243
+ [21] Albert-László Barabási and Réka Albert. Emergence of Scaling in Random Networks. Science, 286(5439):509-512, 1999. doi: 10.1126/science.286.5439.509. URL https://www.science.org/doi/abs/10.1126/science.286.5439.509._eprint: https://www.science.org/doi/pdf/10.1126/science.286.5439.509.2, 4
244
+
245
+ [22] Muhan Zhang and Yixin Chen. Weisfeiler-Lehman Neural Machine for Link Prediction. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 575-583, Halifax NS Canada, August 2017. ACM. ISBN 978-1-4503-4887- 4. doi: 10.1145/3097983.3097996. URL https://dl.acm.org/doi/10.1145/3097983.3097996.2
246
+
247
+ [23] Ron Milo, Shalev Itzkovitz, Nadav Kashtan, Reuven Levitt, Shai Shen-Orr, Inbal Ayzenshtat, Michal Sheffer, and Uri Alon. Superfamilies of Evolved and Designed Networks. Science, 303(5663):1538-1542, 2004. doi: 10.1126/science.1089167. URL https://www.science.org/doi/abs/10.1126/science.1089167. 2
248
+
249
+ [24] Yang Hu, Xiyuan Wang, Zhouchen Lin, Pan Li, and Muhan Zhang. Two-Dimensional Weisfeiler-Lehman Graph Neural Networks for Link Prediction, June 2022. URL http://arxiv.org/ abs/2206.09567. arXiv:2206.09567 [cs]. 2
250
+
251
+ [25] Tong Zhao, Gang Liu, Stephan Günnemann, and Meng Jiang. Graph Data Augmentation for Graph Machine Learning: A Survey. arXiv e-prints, page arXiv:2202.08871, February 2022. _eprint: 2202.08871. 3
252
+
253
+ [26] Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=Hkx1qkrKPr.3
254
+
255
+ [27] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View. In AAAI, 2020. 3
256
+
257
+ [28] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. arXiv:2111.14522 [cs, stat], November 2021. URL http://arxiv.org/abs/2111.14522.arXiv: 2111.14522. 3
258
+
259
+ [29] Uri Alon and Eran Yahav. On the Bottleneck of Graph Neural Networks and its Practical Implications. arXiv:2006.05205 [cs, stat], March 2021. URL http://arxiv.org/abs/2006.05205.arXiv: 2006.05205. 3
260
+
261
+ [30] Johannes Klicpera, Stefan Weißenberger, and Stephan Günnemann. Diffusion Improves Graph Learning. arXiv:1911.05485 [cs, stat], December 2019. URL http://arxiv.org/abs/1911.05485.arXiv: 1911.05485. 3
262
+
263
+ [31] Tong Zhao, Gang Liu, Daheng Wang, Wenhao Yu, and Meng Jiang. Learning from Counterfactual Links for Link Prediction. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepes-vari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 26911- 26926. PMLR, July 2022. URL https://proceedings.mlr.press/v162/zhao22e.html.3
264
+
265
+ [32] Abhay Singh, Qian Huang, Sijia Linda Huang, Omkar Bhalerao, Horace He, Ser-Nam Lim, and Austin R. Benson. Edge Proposal Sets for Link Prediction. Technical Report arXiv:2106.15810, arXiv, June 2021. URL http://arxiv.org/abs/2106.15810.arXiv:2106.15810 [cs] type: article. 3
266
+
267
+ [33] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An End-to-End Deep Learning Architecture for Graph Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), April 2018. ISSN 2374-3468. doi: 10.1609/aaai.v32i1.11782. URL https://ojs.aaai.org/index.php/AAAI/article/view/11782.Number: 1.4
268
+
269
+ [34] Paul Jaccard. The Distribution of the Flora in the Alpine Zone.1. New Phytologist, 11(2): 37-50, 1912. ISSN 1469-8137. doi: 10.1111/j.1469-8137.1912.tb05611.x. URL https: //onlinelibrary.wiley.com/doi/abs/10.1111/j.1469-8137.1912.tb05611.x. _eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1469-8137.1912.tb05611.x.4
270
+
271
+ [35] Tao Zhou, Linyuan Lü, and Yi-Cheng Zhang. Predicting missing links via local information. The European Physical Journal B, 71(4):623-630, 2009. Publisher: Springer. 4
272
+
273
+ [36] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective Approaches to Attention-based Neural Machine Translation, 2015. URL https://arxiv.org/abs/1508.04025.6
274
+
275
+ [37] Balasubramaniam Srinivasan and Bruno Ribeiro. On the Equivalence between Positional Node Embeddings and Structural Graph Representations. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SJxzFySKwH.6
276
+
277
+ [38] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How Powerful are Graph Neural Networks? CoRR, abs/1810.00826, 2018. URL http://arxiv.org/abs/1810.00826.arXiv: 1810.00826. 6, 7, 14
278
+
279
+ [39] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive Representation Learning on Large Graphs. arXiv:1706.02216 [cs, stat], September 2018. URL http://arxiv.org/abs/ 1706.02216. arXiv: 1706.02216. 7, 14
280
+
281
+ [40] Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. Automating the construction of internet portals with machine learning. Information Retrieval, 3(2):127-163, 2000. Publisher: Springer. 7, 15
282
+
283
+ [41] C Lee Giles, Kurt D Bollacker, and Steve Lawrence. CiteSeer: An automatic citation indexing system. In Proceedings of the third ACM conference on Digital libraries, pages 89-98, 1998. 7, 15
284
+
285
+ [42] Galileo Namata, Ben London, Lise Getoor, and Bert Huang. Query-driven active surveying for collective classification. In 10th International Workshop on Mining and Learning with Graphs, volume 8, page 1, 2012. 7, 15
286
+
287
+ [43] Vladimir Batagelj and Andrej Mrvar. Pajek datasets website, 2006. URL http://vlado.fmf.uni-lj.si/pub/networks/data/. 7, 15
288
+
289
+ [44] Mark EJ Newman. Finding community structure in networks using the eigenvectors of matrices. Physical review E, 74(3):036104, 2006. Publisher: APS. 7, 15
290
+
291
+ [45] Robert Ackland and others. Mapping the US political blogosphere: Are conservative bloggers more prominent? In BlogTalk Downunder 2005 Conference, Sydney, 2005. 7, 15
292
+
293
+ [46] Christian Von Mering, Roland Krause, Berend Snel, Michael Cornell, Stephen G Oliver, Stanley Fields, and Peer Bork. Comparative assessment of large-scale data sets of protein-protein interactions. Nature, 417(6887):399-403, 2002. Publisher: Nature Publishing Group. 7, 15
294
+
295
+ [47] Duncan J Watts and Steven H Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440-442, 1998. Publisher: Nature Publishing Group. 7, 15
296
+
297
+ [48] Neil Spring, Ratul Mahajan, and David Wetherall. Measuring ISP topologies with Rocketfuel. ACM SIGCOMM Computer Communication Review, 32(4):133-145, 2002. Publisher: ACM New York, NY, USA. 7, 15
298
+
299
+ [49] Muhan Zhang, Zhicheng Cui, Shali Jiang, and Yixin Chen. Beyond link prediction: Predicting hyperlinks in adjacency space. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.7,15
300
+
301
+ [50] Andrew P. Bradley. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, 30(7):1145-1159, July 1997. ISSN 0031-3203. doi: 10.1016/S0031-3203(96)00142-2. URL https://www.sciencedirect.com/science/ article/pii/S0031320396001422. 7
302
+
303
+ [51] Nino Shervashidze, Pascal Schweitzer, Erik Jan Van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. Weisfeiler-lehman graph kernels. Journal of Machine Learning Research, 12(9), 2011.9
304
+
305
+ [52] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for Machine Learning on Graphs. arXiv:2005.00687 [cs, stat], February 2021. URL http://arxiv.org/abs/2005.00687.arXiv: 2005.00687. 15
306
+
307
+ ## A Proof of Theorem 1
308
+
309
+ We restate Theorem 1: GNNs cannot learn the subgraph feature $\mathbf{h}$ to be Edge Invariant.
310
+
311
+ Proof. Recall that the computation of subgraph feature $\mathbf{h}$ involves steps such as:
312
+
313
+ 1. Subgraph Extraction: Extract the subgraph ${\mathcal{G}}_{i, j}^{r}$ around the focal node pair $\{ i, j\}$ ;
314
+
315
+ 2. Node Representation Learning: $Z = \operatorname{GNN}\left( {\mathcal{G}}_{i, j}^{r}\right)$ , where $Z \in {\mathbb{R}}^{\left| {V}_{i, j}^{r}\right| \times {F}_{\text{hidden }}}$ is the node embedding matrix learned by the GNN encoder;
316
+
317
+ 3. Pooling: $\mathbf{h} = \operatorname{Pooling}\left( {\mathbf{Z};{\mathcal{G}}_{i, j}^{r}}\right)$ , where $\mathbf{h} \in {\mathbb{R}}^{{F}_{\text{pooled }}}$ is the latent feature of the subgraph ${\mathcal{G}}_{i, j}^{r}$ ;
318
+
319
+ Here, the GNN is a Message Passing Neural Network [17]. Given a subgraph $\mathcal{G} = \left( {V, E,{\mathbf{x}}^{V},{\mathbf{x}}^{E}}\right)$ , a GNN with $T$ layers applies the following rule to update the representation of node $i \in V$ :
320
+
321
+ $$
322
+ {h}_{i}^{\left( t + 1\right) } = {U}_{t}\left( {{h}_{i}^{\left( t\right) },\mathop{\sum }\limits_{{w \in \mathcal{N}\left( i\right) }}{M}_{t}\left( {{h}_{i}^{\left( t\right) },{h}_{w}^{\left( t\right) },{\mathbf{x}}_{i, w}^{E}}\right) }\right) , \tag{3}
323
+ $$
324
+
325
+ where $\mathcal{N}\left( i\right)$ is the neighborhood of node $i$ in $\mathcal{G},{M}_{t}$ is the message passing function at layer $t$ and ${U}_{t}$ is the node update function at layer $t$ . The hidden states at the first layer are set as ${h}_{i}^{\left( 0\right) } = {\mathbf{x}}_{i}^{V}$ . The hidden states at the last layer are the outputs ${Z}_{i} = {h}_{i}^{\left( T\right) }$ .
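+ A minimal sketch of one such message passing layer in the spirit of Eq. (3), assuming a sum aggregator and simple MLPs for $M_t$ and $U_t$ (PyTorch; illustrative, not the implementation used in the experiments):

```python
import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    def __init__(self, node_dim, edge_dim, hidden_dim):
        super().__init__()
        # M_t: message built from (h_i, h_w, x_{i,w}^E)
        self.msg = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden_dim), nn.ReLU())
        # U_t: update of h_i from (h_i, aggregated messages)
        self.upd = nn.Sequential(nn.Linear(node_dim + hidden_dim, node_dim), nn.ReLU())

    def forward(self, h, edge_index, edge_attr):
        # h: [num_nodes, node_dim]; edge_index: [2, num_edges] (both directions
        # for an undirected subgraph); edge_attr: [num_edges, edge_dim]
        src, dst = edge_index
        m = self.msg(torch.cat([h[dst], h[src], edge_attr], dim=-1))
        agg = torch.zeros(h.size(0), m.size(-1), device=h.device)
        agg.index_add_(0, dst, m)                      # sum over w in N(i)
        return self.upd(torch.cat([h, agg], dim=-1))   # h_i^{(t+1)}
```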
326
+
327
+ Given any subgraph ${\mathcal{G}}_{i, j}^{r} = \left( {{V}_{i, j}^{r},{E}_{i, j}^{r},{\mathbf{x}}_{{V}_{i, j}}^{V},{\mathbf{x}}_{{E}_{i, j}}^{E}}\right)$ with the edge present at the focal node pair $\left( {i, j}\right) \in {E}_{i, j}^{r}$ , we construct another isomorphic subgraph ${\mathcal{G}}_{\bar{i},\bar{j}}^{r} = \left( {{V}_{\bar{i},\bar{j}}^{r},{E}_{\bar{i},\bar{j}}^{r},{x}_{{V}_{\bar{i},\bar{j}}^{r}}^{V},{x}_{{E}_{\bar{i},\bar{j}}^{r}}^{E}}\right)$ , but remove the edge $\left( {\bar{i},\bar{j}}\right)$ from the edge set ${E}_{\bar{i},\bar{j}}^{r}$ of the subgraph. ${\mathcal{G}}_{\bar{i},\bar{j}}^{r}$ can be seen as the counterpart of ${\mathcal{G}}_{i, j}^{r}$ in the testing set.
328
+
329
+ Thus, for the first iteration of node updates $t = 1$ :
330
+
331
+ $$
332
+ {h}_{i}^{\left( 1\right) } = {U}_{t}\left( {{h}_{i}^{\left( 0\right) },\mathop{\sum }\limits_{{w \in \mathcal{N}\left( i\right) }}{M}_{t}\left( {{h}_{i}^{\left( 0\right) },{h}_{w}^{\left( 0\right) },{\mathbf{x}}_{i, w}^{E}}\right) }\right) , \tag{4}
333
+ $$
334
+
335
+ $$
336
+ {h}_{\bar{i}}^{\left( 1\right) } = {U}_{t}\left( {{h}_{\bar{i}}^{\left( 0\right) },\mathop{\sum }\limits_{{w \in \mathcal{N}\left( \bar{i}\right) }}{M}_{t}\left( {{h}_{\bar{i}}^{\left( 0\right) },{h}_{w}^{\left( 0\right) },{\mathbf{x}}_{\bar{i}, w}^{E}}\right) }\right) , \tag{5}
337
+ $$
338
+
339
+ Note that $\mathcal{N}\left( \bar{i}\right) \cup \{ j\} = \mathcal{N}\left( i\right)$ . We have:
340
+
341
+ $$
342
+ {h}_{i}^{\left( 1\right) } = {U}_{t}\left( {{h}_{i}^{\left( 0\right) },\mathop{\sum }\limits_{{w \in \mathcal{N}\left( i\right) \smallsetminus \{ j\} }}{M}_{t}\left( {{h}_{i}^{\left( 0\right) },{h}_{w}^{\left( 0\right) },{\mathbf{x}}_{i, w}^{E}}\right) + {M}_{t}\left( {{h}_{i}^{\left( 0\right) },{h}_{j}^{\left( 0\right) },{\mathbf{x}}_{i, j}^{E}}\right) }\right) \tag{6}
343
+ $$
344
+
345
+ $$
346
+ = {U}_{t}\left( {{h}_{\widetilde{i}}^{\left( 0\right) },\mathop{\sum }\limits_{{w \in \mathcal{N}\left( \widetilde{i}\right) }}{M}_{t}\left( {{h}_{\widetilde{i}}^{\left( 0\right) },{h}_{w}^{\left( 0\right) },{\mathbf{x}}_{\widetilde{i}, w}^{E}}\right) + {M}_{t}\left( {{h}_{\widetilde{i}}^{\left( 0\right) },{h}_{\widetilde{j}}^{\left( 0\right) },{\mathbf{x}}_{\widetilde{i},\widetilde{j}}^{E}}\right) }\right) , \tag{7}
347
+ $$
348
+
349
+ As ${U}_{t}$ is injective, $p\left( {{h}_{i}^{\left( 1\right) },\mathrm{y} = 1 \mid \mathrm{e} = 1}\right) \neq p\left( {{h}_{i}^{\left( 1\right) },\mathrm{y} = 1}\right) = p\left( {{h}_{i}^{\left( 1\right) },\mathrm{y} = 1 \mid \mathrm{e} = 0}\right)$ . Similarly, we can conclude that $p\left( {{h}_{i}^{\left( T\right) },\mathrm{y} = 1 \mid \mathrm{e} = 1}\right) \neq p\left( {{h}_{i}^{\left( T\right) },\mathrm{y} = 1 \mid \mathrm{e} = 0}\right)$ .
350
+
351
+ As we use the last iteration of node updates ${h}_{i}^{\left( T\right) }$ as the final node representation $\mathbf{Z}$ , we have $p\left( {\mathbf{Z},\mathrm{y} \mid \mathrm{e} = 1}\right) \neq p\left( {\mathbf{Z},\mathrm{y} \mid \mathrm{e} = 0}\right)$ , which leads to $p\left( {\mathrm{\;h},\mathrm{y} \mid \mathrm{e} = 1}\right) \neq p\left( {\mathrm{\;h},\mathrm{y} \mid \mathrm{e} = 0}\right)$ and concludes the proof.
352
+
353
+ ## B Proof of Theorem 2
354
+
355
+ We restate the Theorem 2: Given $p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e},\mathbf{c}}\right) = p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e}}\right)$ , there is no Dataset Shift in the link prediction if the subgraph embedding is Edge Invariant. That is, $p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e}}\right) = p\left( {\mathbf{h},\mathbf{y}}\right) \Rightarrow p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{c}}\right) =$ $p\left( {\mathbf{h},\mathbf{y}}\right)$ .
356
+
357
+ Proof.
358
+
359
+ $$
360
+ p\left( {\mathbf{h} = \mathbf{h},\mathrm{y} = y \mid \mathrm{c} = c}\right) \tag{8}
361
+ $$
362
+
363
+ $$
364
+ = {\mathbb{E}}_{\mathrm{e}}\left\lbrack {p\left( {\mathbf{h} = \mathbf{h},\mathrm{y} = y \mid \mathrm{c} = c,\mathrm{e}}\right) p\left( {\mathrm{e} \mid \mathrm{c} = c}\right) }\right\rbrack \tag{9}
365
+ $$
366
+
367
+ $$
368
+ = {\mathbb{E}}_{\mathrm{e}}\left\lbrack {p\left( {\mathbf{h} = \mathbf{h},\mathrm{y} = y}\right) p\left( {\mathrm{e} \mid \mathrm{c} = c}\right) }\right\rbrack \tag{10}
369
+ $$
370
+
371
+ $$
372
+ = p\left( {\mathbf{h} = \mathbf{h},\mathrm{y} = y}\right) \text{.} \tag{11}
373
+ $$
374
+
375
376
+
377
+ ## C Details about the baseline methods
378
+
379
+ To verify the effectiveness of FakeEdge, we introduce minimal modifications to the baseline models to make them compatible with the FakeEdge techniques. The baseline models in our experiments come from the two main streams of link prediction models. One is the GAE-like family, including GCN [9], SAGE [39], GIN [38] and PLNLP [14]. The other includes SEAL [11] and WalkPool [12]. GCN, SAGE and PLNLP learn node representations and apply a score function on the focal node pair to represent the link. As GAE-like models are not implemented in the fashion of subgraph link prediction, a subgraph extraction step is added for them as preprocessing. We follow the code from the labeling trick [19], which implements the GAE-like models as subgraph link prediction. In particular, GIN concatenates the node embeddings from different layers to learn the node representation and applies a subgraph-level readout to aggregate them into the subgraph representation.
380
+
381
+ For the selection of hyperparameters, we use the same configuration as [19] on Cora, Citeseer and Pubmed. Since they do not report experiments on the other 8 non-attributed networks, we set the subgraph hop number to 2 and leave the remaining hyperparameters at their defaults. For PLNLP, we also add a subgraph extraction step without modifying the core pairwise learning strategy. We find that the performance of PLNLP is very unstable across different train/test splits, with a standard deviation of over 10% in each experiment. Therefore, we apply Double-Radius Node Labeling [11] to stabilize the model, as sketched below.
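+ For reference, a sketch of the Double-Radius Node Labeling function as we understand it from [11] (paraphrased; the original computes each distance with the other focal node temporarily removed and assigns label 0 to nodes unreachable from either focal node):

```python
def drnl_label(d_x: int, d_y: int) -> int:
    # d_x, d_y: distances from a node to the two focal nodes x and y.
    if d_x == 0 or d_y == 0:        # the focal nodes themselves
        return 1
    d = d_x + d_y
    return 1 + min(d_x, d_y) + (d // 2) * ((d // 2) + (d % 2) - 1)
```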
382
+
383
+ SEAL and WalkPool already apply one of the FakeEdge techniques in their original implementations. SEAL uses an Edge Minus strategy to remove all edges at the focal node pair as a preprocessing step, while WalkPool applies Edge Plus to always inject an edge into the subgraph for node representation learning (both strategies are sketched below). Additionally, WalkPool's walk-based pooling operates on both the Edge Plus and Edge Minus graphs; this design is kept in our experiments. Thus, our FakeEdge technique only takes effect in the node representation step for WalkPool. From the results in Section 5.2, we can conclude that the dataset shift issue in the node representation step alone already impacts model performance significantly. We also use the same hyperparameter settings as originally reported in their papers. The code will be publicly available.
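+ To make the two strategies concrete, a minimal sketch of Edge Minus and Edge Plus on a subgraph's edge set (illustrative only; the actual implementations operate on the tensors fed to each model):

```python
def edge_minus(edges, focal):
    # Always remove the focal edge before node representation learning.
    i, j = focal
    return set(edges) - {(i, j), (j, i)}   # both orientations (undirected)

def edge_plus(edges, focal):
    # Always inject the focal edge before node representation learning.
    i, j = focal
    return set(edges) | {(i, j), (j, i)}
```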
384
+
385
+ ## D Benchmark dataset descriptions
386
+
387
+ The graph datasets with node attributes are three citation networks: Cora [40], Citeseer [41] and Pubmed [42], where nodes represent publications and edges represent citation links. The graph datasets without node attributes are: (1) USAir [43]: a graph of US airlines; (2) NS [44]: a collaboration network of network science researchers; (3) PB [45]: a graph of links between web pages on US political topics; (4) Yeast [46]: a protein-protein interaction network in yeast; (5) C.ele [47]: the neural network of Caenorhabditis elegans; (6) Power [47]: the electrical grid of the western US; (7) Router [48]: a router-level Internet topology; (8) E.coli [49]: the reaction network of metabolites in Escherichia coli. Detailed statistics of the datasets can be found in Table 3.
388
+
389
+ Table 3: Statistics of link prediction datasets.
390
+
391
+ <table><tr><td>Dataset</td><td>#Nodes</td><td>$\mathbf{\# {Edges}}$</td><td>Avg. node deg.</td><td>Density</td><td>Attr. Dimension</td></tr><tr><td>Cora</td><td>2708</td><td>10556</td><td>3.90</td><td>0.2880%</td><td>1433</td></tr><tr><td>Citeseer</td><td>3327</td><td>9104</td><td>2.74</td><td>0.1645%</td><td>3703</td></tr><tr><td>Pubmed</td><td>19717</td><td>88648</td><td>4.50</td><td>0.0456%</td><td>500</td></tr><tr><td>USAir</td><td>332</td><td>4252</td><td>12.81</td><td>7.7385%</td><td>-</td></tr><tr><td>NS</td><td>1589</td><td>5484</td><td>3.45</td><td>0.4347%</td><td>-</td></tr><tr><td>$\mathbf{{PB}}$</td><td>1222</td><td>33428</td><td>27.36</td><td>4.4808%</td><td>-</td></tr><tr><td>Yeast</td><td>2375</td><td>23386</td><td>9.85</td><td>0.8295%</td><td>-</td></tr><tr><td>C.ele</td><td>297</td><td>4296</td><td>14.46</td><td>9.7734%</td><td>-</td></tr><tr><td>Power</td><td>4941</td><td>13188</td><td>2.67</td><td>0.1081%</td><td>-</td></tr><tr><td>Router</td><td>5022</td><td>12516</td><td>2.49</td><td>0.0993%</td><td>-</td></tr><tr><td>E.coli</td><td>1805</td><td>29320</td><td>16.24</td><td>1.8009%</td><td>-</td></tr></table>
392
+
393
+ ## E Results measured by Hits@20 and statistical significance of results
394
+
395
+ We adopt another widely used metric in the link prediction task [52], Hits@20, to evaluate the model performance with and without FakeEdge. The results are shown in Table 4. FakeEdge boosts the predictive power of all models across datasets.
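+ For reference, a sketch of the Hits@K computation in the style of [52]: a positive test link counts as a hit if its score exceeds the $K$-th highest score among negative test links (hypothetical score arrays):

```python
import numpy as np

def hits_at_k(pos_scores, neg_scores, k=20):
    pos_scores, neg_scores = np.asarray(pos_scores), np.asarray(neg_scores)
    if len(neg_scores) < k:
        return 1.0
    kth_neg = np.sort(neg_scores)[-k]          # K-th highest negative score
    return float(np.mean(pos_scores > kth_neg))
```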
396
+
397
+ Note that the AUC scores on several datasets in Table 1 are almost saturated. To further verify the statistical significance of the improvement, a two-sided $t$-test is conducted with the null hypothesis that the augmented Edge Att and the Original representation learning reach identical average scores. The $p$-values of the different methods can be found in Table 5. Recall that a $p$-value smaller than 0.05 is considered statistically significant. GAE-like methods obtain significant improvement on almost all datasets, except GCN on C.ele. SEAL shows significant improvement with Edge Att on 7 out of 11 datasets. For WalkPool, the improvement is significant on more than half of the datasets.
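+ A minimal sketch of this test, assuming `auc_original` and `auc_edge_att` hold the per-run AUC scores of the two settings on one dataset (hypothetical arrays):

```python
from scipy.stats import ttest_ind

def significant_improvement(auc_original, auc_edge_att, alpha=0.05):
    # Two-sided two-sample t-test; reject the null of equal means if p < alpha.
    _, p_value = ttest_ind(auc_edge_att, auc_original)
    return p_value, p_value < alpha
```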
398
+
399
+ ## F Limitation
400
+
401
+ FakeEdge aligns the embeddings of isomorphic subgraphs in the training and testing sets. However, this introduces a limitation that hinders one aspect of the GNN-based model's expressive power. Figure 1 gives an example where the two subgraphs come from the training and testing phases, respectively. Now consider instead that both subgraphs come from the training set $\left( {\mathrm{c} = \text{train}}\right)$ , but the top subgraph has an edge observed at the focal node pair $\left( {\mathrm{y} = 1}\right)$ while the other does not $\left( {\mathrm{y} = 0}\right)$ . With FakeEdge, the two subgraphs are modified to be isomorphic and thus yield the same representation, even though they are non-isomorphic before the modification. To the best of our knowledge, no existing method can simultaneously achieve the most expressive power and eliminate the dataset shift issue, because the edge at the focal node pair in the testing set can never be observed in a practical problem setting.
402
+
403
+ Table 4: Comparison with and without FakeEdge (Hits@20). The best results are highlighted in bold.
404
+
405
+ <table><tr><td>Models</td><td>FakeEdge</td><td>$\mathbf{{Cora}}$</td><td>Citeseer</td><td>Pubmed</td><td>$\mathbf{{USAir}}$</td><td>NS</td><td>$\mathbf{{PB}}$</td><td>Yeast</td><td>C.ele</td><td>Power</td><td>Router</td><td>E.coli</td></tr><tr><td rowspan="5">GCN</td><td>Original</td><td>${}_{{65.35} \pm {3.64}}$</td><td>${61.71} \pm {2.60}$</td><td>${48.97} \pm {1.92}$</td><td>${87.69} \pm {3.92}$</td><td>92.77±1.72</td><td>${41.60} \pm {2.52}$</td><td>${85.26} \pm {1.90}$</td><td>${65.33} \pm {7.55}$</td><td>${39.64} \pm {5.47}$</td><td>${39.41}_{\pm {2.38}}$</td><td>${82.21} \pm {2.02}$</td></tr><tr><td>Edge Plus</td><td>${68.31}_{\pm {2.89}}$</td><td>${65.80} \pm {3.28}$</td><td>${\mathbf{{55.70}}}_{\pm {3.07}}$</td><td>${89.34}_{\pm {4.09}}$</td><td>93.28±1.69</td><td>${43.98} \pm {6.25}$</td><td>${87.19} \pm {2.13}$</td><td>$\mathbf{{66.68}} \pm {5.25}$</td><td>46.92 $\pm {3.78}$</td><td>${72.03} \pm {2.85}$</td><td>${86.03} \pm {1.40}$</td></tr><tr><td>Edge Minus</td><td>67.97±2.62</td><td>${66.13} \pm {3.30}$</td><td>${54.29} \pm {2.66}$</td><td>90.57±3.30</td><td>${\mathbf{{93.61}}}_{\pm {1.68}}$</td><td>${43.92} \pm {5.82}$</td><td>${86.66} \pm {2.18}$</td><td>${66.07}_{\pm {6.14}}$</td><td>47.97±2.58</td><td>${\mathbf{{72.34}}}_{\pm {2.58}}$</td><td>${}_{{85.68} \pm {1.84}}$</td></tr><tr><td>Edge Mean</td><td>67.76±3.02</td><td>${66.11}_{\pm {2.48}}$</td><td>${54.55} \pm {2.88}$</td><td>89.48±3.52</td><td>92.77±1.99</td><td>${44.64} \pm {6.93}$</td><td>${86.64} \pm {2.03}$</td><td>${}_{{65.28} \pm {6.33}}$</td><td>47.54±2.95</td><td>${72.26} \pm {2.68}$</td><td>${}_{{85.62} \pm {1.71}}$</td></tr><tr><td>Edge Att</td><td>${\mathbf{{68.43}}}_{\pm {3.72}}$</td><td>67.65±4.11</td><td>${55.55} \pm {2.70}$</td><td>${\mathbf{{90.80}}}_{\pm {4.50}}$</td><td>${92.88}_{\pm {2.27}}$</td><td>${\mathbf{{44.80}}}_{\pm {6.60}}$</td><td>${\mathbf{{87.83}}}_{\pm {0.92}}$</td><td>${65.93}_{\pm {11.06}}$</td><td>${\mathbf{{48.50}}}_{\pm {2.20}}$</td><td>${70.96}_{\pm {2.85}}$</td><td>$\mathbf{{86.56}} \pm {1.69}$</td></tr><tr><td rowspan="5">SAGE</td><td>Original</td><td>${61.67} \pm {3.68}$</td><td>${61.10} \pm {1.54}$</td><td>${45.29} \pm {3.99}$</td><td>${89.20} \pm {2.80}$</td><td>${91.93} \pm {2.74}$</td><td>${39.51}_{\pm {4.44}}$</td><td>${84.11}_{\pm {1.47}}$</td><td>${58.55} \pm {7.17}$</td><td>${42.97} \pm {5.34}$</td><td>${30.02} \pm {2.75}$</td><td>${75.30} \pm {2.77}$</td></tr><tr><td>Edge Plus</td><td>${68.58}_{\pm {2.77}}$</td><td>${65.47} \pm {3.58}$</td><td>${55.23} \pm {2.81}$</td><td>${92.59}_{\pm {3.71}}$</td><td>${93.83} \pm {2.54}$</td><td>${\mathbf{{49.10}}}_{\pm {5.38}}$</td><td>89.36±0.72</td><td>69.72±6.02</td><td>${\mathbf{{49.70}}}_{\pm {2.57}}$</td><td>${\mathbf{{74.90}}}_{\pm {3.73}}$</td><td>88.16±1.29</td></tr><tr><td>Edge Minus</td><td>66.26±2.54</td><td>62.97±3.50</td><td>${53.43} \pm {3.52}$</td><td>${91.32} \pm {3.42}$</td><td>${}^{{93.54} \pm {1.96}}$</td><td>${48.72} \pm {4.90}$</td><td>${88.27} \pm {1.00}$</td><td>${\mathbf{{69.81}}}_{\pm {5.34}}$</td><td>47.63±1.87</td><td>56.67±7.20</td><td>${87.89}_{\pm {1.59}}$</td></tr><tr><td>Edge Mean</td><td>66.74±2.71</td><td>${}_{{65.96} \pm {2.62}}$</td><td>${55.21} \pm {2.84}$</td><td>${91.51} \pm {3.49}$</td><td>${93.25} \pm {2.88}$</td><td>${48.89}_{\pm {6.14}}$</td><td>${}_{{89.30} \pm {0.72}}$</td><td>69.21±7.17</td><td>47.54±3.52</td><td>${73.89}_{\pm {3.50}}$</td><td>${}_{{88.05} \pm {1.62}}$</td></tr><tr><td>Edge Att</td><td>${\mathbf{{68.80}}}_{\pm {2.65}}$</td><td>66.62 $\pm {3.67}$</td><td>${55.18}_{\pm 
{2.99}}$</td><td>${\mathbf{{92.92}}}_{\pm {3.11}}$</td><td>${\mathbf{{94.09}}}_{\pm {1.60}}$</td><td>${48.53} \pm {5.15}$</td><td>${}_{{89.10} \pm {1.17}}$</td><td>69.30±7.53</td><td>47.06±2.21</td><td>${73.60}_{\pm {4.68}}$</td><td>87.63±1.66</td></tr><tr><td rowspan="5">GIN</td><td>Original</td><td>${55.71} \pm {4.38}$</td><td>${51.71} \pm {4.31}$</td><td>${40.14} \pm {3.98}$</td><td>${}_{{86.08} \pm {3.14}}$</td><td>90.51±3.45</td><td>${38.79} \pm {5.32}$</td><td>79.57±1.74</td><td>${54.95} \pm {5.91}$</td><td>41.56±1.47</td><td>${55.47} \pm {4.37}$</td><td>77.37±2.84</td></tr><tr><td>Edge Plus</td><td>${\mathbf{{64.42}}}_{\pm {2.67}}$</td><td>63.56±2.92</td><td>49.75±4.50</td><td>${}_{{88.68} \pm {4.10}}$</td><td>${94.85} \pm {1.90}$</td><td>${46.17}_{\pm {6.12}}$</td><td>87.58±2.22</td><td>${64.49}_{\pm {6.52}}$</td><td>${48.59}_{\pm {3.33}}$</td><td>${70.67} \pm {3.58}$</td><td>${84.13} \pm {2.12}$</td></tr><tr><td>Edge Minus</td><td>${63.17}_{\pm {2.96}}$</td><td>${63.65} \pm {4.63}$</td><td>50.37±4.01</td><td>${89.81}_{\pm {1.80}}$</td><td>94.53±2.09</td><td>${45.93} \pm {6.09}$</td><td>${\mathbf{{88.37}}}_{\pm {2.00}}$</td><td>67.06±11.03</td><td>47.56±1.88</td><td>${\mathbf{{71.10}}}_{\pm {1.90}}$</td><td>${83.23} \pm {2.62}$</td></tr><tr><td>Edge Mean</td><td>${61.46} \pm {4.64}$</td><td>${63.74} \pm {4.20}$</td><td>46.97±6.49</td><td>89.86±2.62</td><td>93.98±2.88</td><td>${43.48} \pm {7.74}$</td><td>${}_{{88.16} \pm {2.11}}$</td><td>66.73±6.79</td><td>47.66±2.91</td><td>${71.09} \pm {2.68}$</td><td>${82.48} \pm {1.99}$</td></tr><tr><td>Edge Att</td><td>${63.26} \pm {3.33}$</td><td>${60.64} \pm {4.29}$</td><td>49.71±4.40</td><td>${88.87} \pm {4.71}$</td><td>${94.49} \pm {1.51}$</td><td>${44.94} \pm {5.37}$</td><td>87.92±1.45</td><td>${65.93} \pm {8.55}$</td><td>${48.19}_{\pm {2.70}}$</td><td>70.03±3.05</td><td>${\mathbf{{84.38}}}_{\pm {2.54}}$</td></tr><tr><td rowspan="5">PLNLP</td><td>Original</td><td>${58.77} \pm {2.59}$</td><td>57.21±3.91</td><td>${40.03} \pm {3.46}$</td><td>${88.87} \pm {2.75}$</td><td>93.76±1.65</td><td>${38.90} \pm {43}\mathrm{\;s}$</td><td>${81.17} \pm {3.54}$</td><td>66.36±5.65</td><td>43.52±6.47</td><td>${34.61} \pm {11.29}$</td><td>${65.68} \pm {1.56}$ 。_____</td></tr><tr><td>Edge Plus</td><td>${66.79} \pm {2.77}$</td><td>${67.69} \pm {4.13}$</td><td>${44.44} \pm {14.29}$</td><td>${95.19}_{\pm {1.60}}$</td><td>${95.84} \pm {1.09}$</td><td>${45.18} \pm {4.87}$</td><td>${}_{{88.04} \pm {2.42}}$</td><td>${71.21} \pm {8.04}$</td><td>${52.37} \pm {3.95}$</td><td>${\mathbf{{75.01}}}_{\pm {1.83}}$</td><td>${84.73} \pm {1.70}$</td></tr><tr><td>Edge Minus</td><td>67.40±3.53</td><td>${62.84} \pm {2.88}$</td><td>47.80±11.11</td><td>${94.10} \pm {2.42}$</td><td>${95.22} \pm {1.60}$</td><td>${45.40} \pm {6.29}$</td><td>87.94±1.64</td><td>${69.91}_{\pm {6.80}}$</td><td>${52.19} \pm {4.23}$</td><td>${68.24} \pm {4.01}$</td><td>${83.59}_{\pm {1.56}}$</td></tr><tr><td>Edge Mean</td><td>${\mathbf{{68.61}}}_{\pm {3.40}}$</td><td>${64.81}_{\pm {3.57}}$</td><td>${51.92} \pm {13.30}$</td><td>${95.24} \pm {2.09}$</td><td>${\mathbf{{95.95}}}_{\pm {0.78}}$</td><td>${45.37} \pm {5.07}$</td><td>${}_{{88.08} \pm {2.30}}$</td><td>71.26±8.05</td><td>${51.97} \pm {3.41}$</td><td>74.42±2.33</td><td>${84.78} \pm {1.82}$</td></tr><tr><td>Edge Att</td><td>${67.82} \pm {358}$</td><td>64.37±3.73</td><td>${48.47} \pm {12.01}$</td><td>${\mathbf{{95.38}}}_{\pm {2.02}}$</td><td>${95.62} \pm {0.81}$</td><td>${45.28} \pm {5.11}$</td><td>${\mathbf{{88.57}}}_{\pm 
{1.80}}$</td><td>70.65±8.11</td><td>${51.79}_{\pm {4.07}}$</td><td>74.99±1.92</td><td>${\mathbf{{85.10}}}_{\pm {1.88}}$</td></tr><tr><td rowspan="5">SEAL</td><td>Original</td><td>${60.95} \pm {8.00}$</td><td>61.56±2.12</td><td>${48.80} \pm {3.33}$</td><td>${91.27} \pm {2.53}$</td><td>${91.72} \pm {2.01}$</td><td>${43.44} \pm {6.82}$</td><td>${85.33} \pm {1.76}$</td><td>${64.21} \pm {5.86}$</td><td>${39.30} \pm {3.79}$</td><td>${59.47} \pm {6.66}$</td><td>${84.15} \pm {2.16}$</td></tr><tr><td>Edge Plus</td><td>${60.51}_{\pm {7.70}}$</td><td>${65.12} \pm {2.18}$</td><td>${50.90}_{\pm {3.96}}$</td><td>90.85±4.12</td><td>${93.61}_{\pm {1.87}}$</td><td>${46.77} \pm {4.80}$</td><td>${86.66} \pm {1.59}$</td><td>65.47±7.68</td><td>${45.90} \pm {2.85}$</td><td>70.06±3.57</td><td>${}_{{85.76} \pm {2.04}}$</td></tr><tr><td>Edge Minus</td><td>${60.74}_{\pm {6.60}}$</td><td>${\mathbf{{65.14}}}_{\pm {2.93}}$</td><td>${51.23} \pm {3.82}$</td><td>90.66±3.49</td><td>${92.19} \pm {2.03}$</td><td>47.21 $\pm {473}$</td><td>${86.49}_{\pm {2.08}}$</td><td>${63.64}_{\pm {6.93}}$</td><td>46.42±3.42</td><td>70.43±4.40</td><td>${85.50}_{\pm {2.06}}$</td></tr><tr><td>Edge Mean</td><td>${\mathbf{{62.94}}}_{\pm {5.78}}$</td><td>64.99±4.36</td><td>${51.83} \pm {3.66}$</td><td>${\mathbf{{91.84}}}_{\pm {2.93}}$</td><td>${92.92} \pm {2.12}$</td><td>${46.02} \pm {4.22}$</td><td>${86.25} \pm {2.17}$</td><td>${\mathbf{{65.93}}}_{\pm {6.87}}$</td><td>${46.57} \pm {3.22}$</td><td>70.08±3.85</td><td>${85.85} \pm {1.81}$</td></tr><tr><td>Edge Att</td><td>${62.03} \pm {4.95}$</td><td>63.52±4.39</td><td>${48.42} \pm {5.69}$</td><td>${91.42} \pm {3.31}$</td><td>${94.64} \pm {1.49}$</td><td>${44.73} \pm {5.29}$</td><td>$\mathbf{{86.83}} \pm {1.63}$</td><td>${65.93} \pm {4.74}$</td><td>${\mathbf{{47.91}}}_{\pm {3.45}}$</td><td>${67.46} \pm {3.49}$</td><td>$\mathbf{{86.02}} \pm {1.58}$</td></tr><tr><td rowspan="5">WalkPool</td><td>Original</td><td>69.98 $\pm {3.37}$</td><td>${64.22} \pm {2.84}$</td><td>${57.30} \pm {2.56}$</td><td>${95.09} \pm {2.78}$</td><td>96.02±1.64</td><td>${47.74} \pm {5.81}$</td><td>${}_{{88.24} \pm {1.33}}$</td><td>${\mathbf{{78.55}}}_{\pm {5.83}}$</td><td>43.58±4.40</td><td>${56.21} \pm {13.92}$</td><td>${83.41}_{\pm {1.72}}$</td></tr><tr><td>Edge Plus</td><td>${69.13} \pm {231}$</td><td>${64.51} \pm {2.25}$</td><td>${59.23} \pm {3.09}$</td><td>${95.00} \pm {3.09}$</td><td>96.06±1.65</td><td>${46.18} \pm {5.40}$</td><td>${89.79}_{\pm {0.70}}$</td><td>${}_{{78.36} \pm {5.30}}$</td><td>${56.27} \pm {4.17}$</td><td>77.65±2.83</td><td>${86.44} \pm {1.52}$</td></tr><tr><td>Edge Minus</td><td>${69.34}_{\pm {2.45}}$</td><td>${64.26} \pm {1.93}$</td><td>${59.44}_{\pm {3.10}}$</td><td>95.14±2.93</td><td>${95.99}_{\pm {1.67}}$</td><td>${46.79} \pm {4.88}$</td><td>89.57±0.85</td><td>77.90±4.49</td><td>${55.72} \pm {3.63}$</td><td>77.62±2.64</td><td>${\mathbf{{87.24}}}_{\pm {0.77}}$</td></tr><tr><td>Edge Mean</td><td>70.27±2.96</td><td>${62.84} \pm {4.79}$</td><td>${\mathbf{{59.85}}}_{\pm {3.84}}$</td><td>${95.24} \pm {2.45}$</td><td>96.17±1.63</td><td>46.27±5.00</td><td>${89.58}_{\pm {0.91}}$</td><td>77.94±455</td><td>${56.18} \pm {3.74}$</td><td>76.88±2.76</td><td>${86.89}_{\pm {0.84}}$</td></tr><tr><td>Edge Att</td><td>${69.60} \pm {4.11}$</td><td>${64.35} \pm {3.64}$</td><td>${59.63} \pm {3.28}$</td><td>${\mathbf{{95.61}}}_{\pm {2.53}}$</td><td>96.06±1.62</td><td>${46.77} \pm {5.36}$</td><td>89.84±0.71</td><td>77.94±4.89</td><td>${56.46} \pm {3.55}$</td><td>${76.90} \pm {2.82}$</td><td>87.02±1.64</td></tr></table>
406
+
407
+ Table 5: $p$ -values by comparing AUC scores with Original and Edge Att. Significant differences are highlighted in bold.
408
+
409
+ <table><tr><td>Models</td><td>$\mathbf{{Cora}}$</td><td>Citeseer</td><td>Pubmed</td><td>USAir</td><td>NS</td><td>$\mathbf{{PB}}$</td><td>Yeast</td><td>C.ele</td><td>Power</td><td>Router</td><td>E.coli</td></tr><tr><td>GCN</td><td>${3.50} \cdot {10}^{-{09}}$</td><td>${6.92} \cdot {10}^{-{12}}$</td><td>${1.52} \cdot {10}^{-{09}}$</td><td>${1.10} \cdot {10}^{-{05}}$</td><td>${9.89} \cdot {10}^{-{04}}$</td><td>${1.21} \cdot {10}^{-{09}}$</td><td>${4.95} \cdot {10}^{-{13}}$</td><td>${2.76} \cdot {10}^{-{01}}$</td><td>${1.55} \cdot {10}^{-{05}}$</td><td>${2.62} \cdot {10}^{-{13}}$</td><td>${2.44} \cdot {10}^{-{14}}$</td></tr><tr><td>SAGE</td><td>${1.32} \cdot {10}^{-{08}}$</td><td>${2.04} \cdot {10}^{-{06}}$</td><td>${3.48} \cdot {10}^{-{14}}$</td><td>${2.78} \cdot {10}^{-{02}}$</td><td>${2.33} \cdot {10}^{-{02}}$</td><td>${4.13} \cdot {10}^{-{06}}$</td><td>${4.87} \cdot {10}^{-{08}}$</td><td>${1.23} \cdot {10}^{-{03}}$</td><td>${6.12} \cdot {10}^{-{10}}$</td><td>${4.40} \cdot {10}^{-{12}}$</td><td>${3.54} \cdot {10}^{-{13}}$</td></tr><tr><td>GIN</td><td>${4.86} \cdot {10}^{-{10}}$</td><td>${6.09} \cdot {10}^{-{11}}$</td><td>${1.46} \cdot {10}^{-{12}}$</td><td>${1.27} \cdot {10}^{-{03}}$</td><td>${1.29} \cdot {10}^{-{05}}$</td><td>${2.47} \cdot {10}^{-{10}}$</td><td>${5.34} \cdot {10}^{-{11}}$</td><td>${3.84} \cdot {10}^{-{04}}$</td><td>${5.10} \cdot {10}^{-{09}}$</td><td>${3.11} \cdot {10}^{-{16}}$</td><td>${3.04} \cdot {10}^{-{12}}$</td></tr><tr><td>PLNLP</td><td>${1.47} \cdot {10}^{-{10}}$</td><td>${5.30} \cdot {10}^{-{07}}$</td><td>${1.22} \cdot {10}^{-{06}}$</td><td>${1.66} \cdot {10}^{-{07}}$</td><td>${1.70} \cdot {10}^{-{02}}$</td><td>${3.40} \cdot {10}^{-{08}}$</td><td>${7.69} \cdot {10}^{-{06}}$</td><td>${2.46} \cdot {10}^{-{03}}$</td><td>${7.84} \cdot {10}^{-{06}}$</td><td>${2.68} \cdot {10}^{-{13}}$</td><td>${5.27} \cdot {10}^{-{11}}$</td></tr><tr><td>SEAL</td><td>${2.59} \cdot {10}^{-{01}}$</td><td>${1.72} \cdot {10}^{-{02}}$</td><td>${6.45} \cdot {10}^{-{05}}$</td><td>${4.82} \cdot {10}^{-{01}}$</td><td>${1.15} \cdot {10}^{-{02}}$</td><td>${5.20} \cdot {10}^{-{01}}$</td><td>${5.91} \cdot {10}^{-{04}}$</td><td>${4.12} \cdot {10}^{-{01}}$</td><td>${3.78} \cdot {10}^{-{06}}$</td><td>${3.91} \cdot {10}^{-{06}}$</td><td>${5.67} \cdot {10}^{-{04}}$</td></tr><tr><td>WalkPool</td><td>${9.52} \cdot {10}^{-{01}}$</td><td>${4.96} \cdot {10}^{-{01}}$</td><td>${2.83} \cdot {10}^{-{07}}$</td><td>${4.77} \cdot {10}^{-{01}}$</td><td>${8.91} \cdot {10}^{-{01}}$</td><td>${1.84} \cdot {10}^{-{05}}$</td><td>${1.07} \cdot {10}^{-{04}}$</td><td>${8.74} \cdot {10}^{-{01}}$</td><td>${4.15} \cdot {10}^{-{07}}$</td><td>${5.89} \cdot {10}^{-{04}}$</td><td>${1.83} \cdot {10}^{-{10}}$</td></tr></table>
410
+
papers/LOG/LOG 2022/LOG 2022 Conference/QDN0jSXuvtX/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,306 @@
1
+ § FAKEEDGE: ALLEVIATE DATASET SHIFT IN LINK PREDICTION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Link prediction is a crucial problem in graph-structured data. Due to the recent success of graph neural networks (GNNs), a variety of GNN-based models were proposed to tackle the link prediction task. Specifically, GNNs leverage the message passing paradigm to obtain node representation, which relies on link connectivity. However, in a link prediction task, links in the training set are always present while ones in the testing set are not yet formed, resulting in a discrepancy of the connectivity pattern and bias of the learned representation. It leads to a problem of dataset shift which degrades the model performance. In this paper, we first identify the dataset shift problem in the link prediction task and provide theoretical analyses on how existing link prediction methods are vulnerable to it. We then propose FakeEdge, a model-agnostic technique, to address the problem by mitigating the graph topological gap between training and testing sets. Extensive experiments demonstrate the applicability and superiority of FakeEdge on multiple datasets across various domains.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Graph structured data is ubiquitous across a variety of domains, including social networks [1], protein-protein interactions [2], movie recommendations [3], and citation networks [4]. It provides a non-Euclidean structure to describe the relations among entities. The link prediction task is to predict missing links or new forming links in an observed network [5]. Recently, with the success of graph neural networks (GNNs) for graph representation learning [6-9], several GNN-based methods have been developed [10-14] to solve link prediction tasks. These methods encode the representation of target links with the topological structures and node/edge attributes in their local neighborhood. After recognizing the pattern of observed links (training sets), they predict the likelihood of forming new links between node pairs (testing sets) where no link is yet observed.
16
+
17
+ Nevertheless, existing methods pose a discrepancy of the target link representation between training and testing sets. As the target link is never observed in the testing set by the nature of the task, it will have a different local topological structure when compared to its counterpart from the training set. Thus, the corrupted topological structure shifts the target link representation in the testing set, which we recognize as a dataset shift problem [15, 16] in link prediction.
18
+
19
+ We give a concrete example to illustrate how dataset shift can happen in the link prediction task, especially for GNN-based models with message passing paradigm [17] simulating the 1-dimensional Weisfeiler-Lehman (1-WL) test [18]. In Figure 1, we have two local neighborhoods sampled as subgraphs from the training (top) and testing (bottom) set respectively. The node pairs of interest, which we call focal node pairs, are denoted by black bold circles. From a bird's-eye viewpoint, these two subgraphs are isomorphic when we consider the existence of the positive test link (dashed line), even though the test link has not been observed. Ideally, two isomorphic graphs should have the same representation encoded by GNNs, leading to the same link prediction outcome. However, one iteration of 1-WL in Figure 1 produces different colors for the focal node pairs between training and testing sets, which indicates that the one-layer GNN can encode different representations for these two isomorphic subgraphs, giving rise to dataset shift issue.
20
+
21
+ Dataset shift can substantially degrade model performance since it violates the common assumption that the joint distribution of inputs and outputs stays the same in both the training and testing set. The root cause of this phenomenon in link prediction is the unique characteristic of the target link: the link always plays a dual role in the problem setting and determines both the input and the output for a link prediction task. The existence of the link apparently decides whether it is a positive or negative sample (output). Simultaneously, the presence of the link can also influence how the representation is learned through the introduction of different topological structures around the link (input). Thus, it entangles representation learning and labels in the link prediction problem.
22
+
23
+ [Figure 1 graphic omitted; see caption below]
24
+
25
+ Figure 1: 1-WL test is performed to exhibit the learning process of GNNs. Two node pairs (denoted as bold black circles) and their surrounding subgraphs are sampled from the graph as a training (top) and testing (bottom) instance respectively. Two subgraphs are isomorphic when we omit the focal links. One iteration of 1-WL assigns different colors, indicating the occurrence of dataset shift.
26
+
27
+ To decouple the dual role of the link, we advocate a framework, namely subgraph link prediction, which disentangles the label of the link and its topological structure. As most practical link prediction methods make a prediction by capturing the local neighborhood of the link [1, 11, 12, 19, 20], we unify them all into this framework, where the input is the extracted subgraph around the focal node pair and the output is the likelihood of forming a link incident with the focal node pair in the subgraph. From the perspective of the framework, we find that the dataset shift issue is mainly caused by the presence/absence of the focal link in the subgraph from the training/testing set. This motivates us to propose a simple but effective technique, FakeEdge, to deliberately add or remove the focal link in the subgraph so that the subgraph can stay consistent across training and testing. FakeEdge is a model-agnostic technique, allowing it to be applied to any subgraph link prediction model. It assures that the model would learn the same subgraph representation regardless of the existence of the focal link. Lastly, empirical experiments prove that diminishing the dataset shift issue can significantly boost the link prediction performance on different baseline models.
28
+
29
+ We summarize our contributions as follows. We first unify most of the link prediction methods into a common framework named as subgraph link prediction, which treats link prediction as a subgraph classification task. In the view of the framework, we theoretically investigate the dataset shift issue in link prediction tasks, which motivates us to propose FakeEdge, a model-agnostic augmentation technique, to ease the distribution gap between the training and testing. We further conduct extensive experiments on a variety of baseline models to reveal the performance improvement with FakeEdge to show its capability of alleviating the dataset shift issue on a broad range of benchmarks.
30
+
31
+ § 2 RELATED WORK
32
+
33
+ Link Prediction. Early studies on link prediction problems mainly focus on heuristics methods, which require expertise on the underlying trait of network or hand-crafted features, including Common Neighbor [1], Adamic-Adar index [20] and Preferential Attachment [21], etc. WLNM [22] suggests a method to encode the induced subgraph of the target link as an adjacency matrix to represent the link. With the huge success of GNN [9], GNN-based link prediction methods have become dominant across different areas. Graph Auto Encoder(GAE) and Variational Graph Auto Encoder(VGAE) [10] perform link prediction tasks by reconstructing the graph structure. SEAL [11] and DE [13] propose methods to label the nodes according to the distance to the focal node pair. To better exploit the structural motifs [23] in distinct graphs, a walk-based pooling method (WalkPool) [12] is designed to extract the representation of the local neighborhood. PLNLP [14] sheds light on pairwise learning to rank the node pairs of interest. Based on two-dimensional Weisfeiler-Lehman tests, Hu et al. propose a link prediction method that can directly obtain node pair representation [24].
34
+
35
+ Graph Data Augmentation. Several data augmentation methods are introduced to modify the graph connectivity by adding or removing edges [25]. DropEdge [26] acts like a message passing reducer to tackle over-smoothing or overfitting problems [27]. Topping et al. modify the graph's topological structure by removing negatively curved edges to solve the bottleneck issue [29] of message passing [28]. GDC [30] applies graph diffusion methods on the observed graph to generate a diffused counterpart as the computation graph. For the link prediction task, CFLP [31] generates counterfactual links to augment the original graph. Edge Proposal Set [32] injects edges into the training graph, which are recognized by other link predictors in order to improve performance.
36
+
37
+ § 3 A PROPOSED UNIFIED FRAMEWORK FOR LINK PREDICTION
38
+
39
+ In this section, we formally introduce the link prediction task and formulate several existing GNN-based methods into a common general framework.
40
+
41
+ § 3.1 PRELIMINARY
42
+
43
+ Let $\mathcal{G} = \left( {V,E,{\mathbf{x}}^{V},{\mathbf{x}}^{E}}\right)$ be an undirected graph. $V$ is the set of nodes with size $n$, which can be indexed as $\{ i\}_{i = 1}^{n}$. $E \subseteq V \times V$ is the observed set of edges. ${\mathbf{x}}_{i}^{V} \in {\mathcal{X}}^{V}$ represents the feature of node $i$. ${\mathbf{x}}_{i,j}^{E} \in {\mathcal{X}}^{E}$ represents the feature of the edge $(i,j)$ if $\left( {i,j}\right) \in E$. The unobserved set of edges is ${E}_{c} \subseteq V \times V \smallsetminus E$, consisting of edges that are either missing or going to form in the future in the original graph $\mathcal{G}$. $d\left( {i,j}\right)$ denotes the shortest path distance between nodes $i$ and $j$. The $r$-hop enclosing subgraph ${\mathcal{G}}_{i,j}^{r}$ for nodes $i,j$ is the subgraph induced from $\mathcal{G}$ by the node set ${V}_{i,j}^{r} = \{ v \mid v \in V, d\left( {v,i}\right) \leq r \text{ or } d\left( {v,j}\right) \leq r\}$. The edge set of ${\mathcal{G}}_{i,j}^{r}$ is ${E}_{i,j}^{r} = \left\{ {\left( {p,q}\right) \mid \left( {p,q}\right) \in E\text{ and }p,q \in {V}_{i,j}^{r}}\right\}$. An enclosing subgraph ${\mathcal{G}}_{i,j}^{r} = \left( {{V}_{i,j}^{r},{E}_{i,j}^{r},{\mathbf{x}}_{{V}_{i,j}}^{V},{\mathbf{x}}_{{E}_{i,j}}^{E}}\right)$ contains all the information in the neighborhood of nodes $i,j$. The node set $\{ i,j\}$ is called the focal node pair, where we are interested in whether there exists (observed) or should exist (unobserved) an edge between nodes $i$ and $j$. In the context of link prediction, we use the term subgraph to denote an enclosing subgraph in the following sections.
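+ A minimal sketch of extracting the $r$-hop enclosing subgraph with networkx (illustrative only; node and edge features are omitted):

```python
import networkx as nx

def enclosing_subgraph(G: nx.Graph, i, j, r: int) -> nx.Graph:
    # V_{i,j}^r: all nodes within r hops of i or of j
    near_i = nx.single_source_shortest_path_length(G, i, cutoff=r)
    near_j = nx.single_source_shortest_path_length(G, j, cutoff=r)
    nodes = set(near_i) | set(near_j)
    # The induced subgraph keeps every observed edge with both endpoints in V_{i,j}^r.
    return G.subgraph(nodes).copy()
```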
44
+
45
+ § 3.2 SUBGRAPH LINK PREDICTION
46
+
47
+ In this section, we discuss the definition of Subgraph Link Prediction and investigate how current link prediction methods can be unified in this framework. We mainly focus on link prediction methods based on GNNs, which propagate the message to each node's neighbors in order to learn the representation. We start by giving the definition of the subgraph's properties:
48
+
49
+ Definition 1. Given a graph $\mathcal{G} = \left( {V,E,{\mathbf{x}}^{V},{\mathbf{x}}^{E}}\right)$ and the unobserved edge set ${E}_{c}$ , a subgraph ${\mathcal{G}}_{i,j}^{r}$ have the following properties:
50
+
51
+ 1. a label $\mathrm{y} \in \{ 0,1\}$ of the subgraph indicates if there exists, or will form, an edge incident with focal node pair $\{ i,j\}$ . That is, ${\mathcal{G}}_{i,j}^{r}$ label $\mathrm{y} = 1$ if and only if $\left( {i,j}\right) \in E \cup {E}_{c}$ . Otherwise, label $\mathrm{y} = 0$ .
52
+
53
+ 2. the existence $\mathrm{e} \in \{ 0,1\}$ of an edge in the subgraph indicates whether there is an edge observed at the focal node pair $\{ i,j\}$ . If $\left( {i,j}\right) \in E,\mathrm{e} = 1$ . Otherwise $\mathrm{e} = 0$ .
54
+
55
+ 3. a phase $\mathrm{c} \in \{$ train, test $\}$ denotes whether the subgraph belongs to training or testing stage. Especially for a positive subgraph $\left( {\mathrm{y} = 1}\right)$ , if $\left( {i,j}\right) \in E$ , then $\mathrm{c} =$ train. If $\left( {i,j}\right) \in {E}_{c}$ , then $\mathrm{c} =$ test.
56
+
57
+ Note that the label $\mathrm{y} = 1$ does not necessarily indicate the observation of the edge at the focal node pair $\{ i,j\}$ : a subgraph in the testing set may have the label $\mathrm{y} = 1$ even though the edge is not present. The existence is $\mathrm{e} = 1$ only when the edge is observed at the focal node pair.
58
+
59
+ Definition 2. Given a subgraph ${\mathcal{G}}_{i,j}^{r}$ , Subgraph Link Prediction is a task to learn a feature $\mathbf{h}$ of the subgraph ${\mathcal{G}}_{i,j}^{r}$ and uses it to predict the label $\mathrm{y} \in \{ 0,1\}$ of the subgraph.
60
+
61
+ Generally, subgraph link prediction regards the link prediction task as a subgraph classification task. The pipeline of subgraph link prediction starts with extracting the subgraph ${\mathcal{G}}_{i,j}^{r}$ around the focal node pair $\{ i,j\}$ , and then applies GNNs to encode the node representation $\mathbf{Z}$ . The latent feature $\mathbf{h}$ of the subgraph is obtained by pooling methods on $\mathbf{Z}$ . In the end, the subgraph feature $\mathbf{h}$ is fed into a classifier. In summary, the whole pipeline entails:
62
+
63
+ 1. Subgraph Extraction: Extract the subgraph ${\mathcal{G}}_{i,j}^{r}$ around the focal node pair $\{ i,j\}$ ;
64
+
65
+ 2. Node Representation Learning: $Z = \operatorname{GNN}\left( {\mathcal{G}}_{i,j}^{r}\right)$ , where $Z \in {\mathbb{R}}^{\left| {V}_{i,j}^{r}\right| \times {F}_{\text{ hidden }}}$ is the node embedding matrix learned by the GNN encoder;
66
+
67
+ 3. Pooling: $\mathbf{h} = \operatorname{Pooling}\left( {\mathbf{Z};{\mathcal{G}}_{i,j}^{r}}\right)$ , where $\mathbf{h} \in {\mathbb{R}}^{{F}_{\text{pooled}}}$ is the latent feature of the subgraph ${\mathcal{G}}_{i,j}^{r}$ ;
+
+ 4. Classification: $y = \operatorname{Classifier}(\mathbf{h})$.
68
+
69
+ There are two main streams of GNN-based link prediction models. Models like SEAL [11] and WalkPool [12] naturally fall into the subgraph link prediction framework, as they follow the pipeline end to end. In SEAL, SortPooling [33] serves as a readout to aggregate the node features in the subgraph. WalkPool designs a random-walk based pooling method to extract the subgraph feature $\mathbf{h}$. Both methods take advantage of node representations from the entire subgraph.
70
+
71
+ In addition, there is another stream of link prediction models, such as GAE [10] and PLNLP [14], which learn node representations and then apply a score function to the representations of the focal node pair to measure the likelihood of forming a link. We find that these GNN-based methods with the message passing paradigm also fit into the subgraph link prediction framework. Considering a GAE with $l$ layers, each node $v$ essentially learns its embedding from its $l$-hop neighbors $\{ i \mid i \in V,d\left( {i,v}\right) \leq l\}$. The score function can then be regarded as a center pooling on the subgraph, which only aggregates the features of the focal node pair into $\mathbf{h}$ to represent the subgraph. For a focal node pair $\{ i,j\}$ and a GAE with $l$ layers, an $l$-hop subgraph ${\mathcal{G}}_{i,j}^{l}$ contains all the information needed to learn the representations of nodes in the subgraph and score the focal node pair $i,j$. Thus, these GNN-based models can also be seen as instances of subgraph link prediction. In terms of the score function, there are plenty of options with different predictive power in practice. The common choices are: (1) Hadamard product: $\mathbf{h} = {z}_{i} \circ {z}_{j}$; (2) MLP: $\mathbf{h} = \operatorname{MLP}\left( {{z}_{i} \circ {z}_{j}}\right)$ where MLP is a Multi-Layer Perceptron; (3) BiLinear: $\mathbf{h} = {z}_{i}\mathbf{W}{z}_{j}$ where $\mathbf{W}$ is a learnable matrix; (4) BiLinearMLP: $\mathbf{h} = \operatorname{MLP}\left( {z}_{i}\right) \circ \operatorname{MLP}\left( {z}_{j}\right)$.
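+
+ For illustration, the four score functions could be written as the following small helpers (a sketch only; `mlp` stands for any multi-layer perceptron module and `W` for a learnable square matrix, both assumed to be created elsewhere):
+
+ ```python
+ import torch
+
+ def hadamard(z_i, z_j):
+     # (1) element-wise product of the focal pair's embeddings
+     return z_i * z_j
+
+ def mlp_score(z_i, z_j, mlp):
+     # (2) MLP applied on top of the Hadamard product
+     return mlp(z_i * z_j)
+
+ def bilinear(z_i, z_j, W):
+     # (3) bilinear form z_i^T W z_j with a learnable matrix W
+     return z_i @ W @ z_j
+
+ def bilinear_mlp(z_i, z_j, mlp):
+     # (4) element-wise product of the two MLP-transformed embeddings
+     return mlp(z_i) * mlp(z_j)
+ ```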
72
+
73
+ In addition to GNN-based methods, the concept of subgraph link prediction can be extended to low-order heuristic link predictors, such as Common Neighbors [1], the Adamic-Adar index [20], Preferential Attachment [21], the Jaccard Index [34], and Resource Allocation [35]. A predictor of order $r$ can be computed from the subgraph ${\mathcal{G}}_{i,j}^{r}$, and the resulting scalar value can be seen as the latent feature $\mathbf{h}$.
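+
+ As a toy illustration (not tied to any particular implementation), all of these heuristics can be computed from the adjacency structure of the enclosing subgraph alone, e.g. with the subgraph given as a dictionary mapping each node to the set of its neighbors:
+
+ ```python
+ import math
+
+ def heuristics(adj, i, j):
+     """adj: dict mapping node -> set of its neighbors inside the subgraph."""
+     cn = adj[i] & adj[j]                                  # Common Neighbors
+     aa = sum(1.0 / math.log(len(adj[k])) for k in cn)     # Adamic-Adar
+     pa = len(adj[i]) * len(adj[j])                        # Preferential Attachment
+     jac = len(cn) / len(adj[i] | adj[j])                  # Jaccard Index
+     ra = sum(1.0 / len(adj[k]) for k in cn)               # Resource Allocation
+     return {"CN": len(cn), "AA": aa, "PA": pa, "Jaccard": jac, "RA": ra}
+ ```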
74
+
75
+ § 4 FAKEEDGE: MITIGATES DATASET SHIFT IN SUBGRAPH LINK PREDICTION
76
+
77
+ In this section, we start by giving the definition of dataset shift in the general case, and then formally discuss how dataset shift occurs with regard to subgraph link prediction. Then we propose FakeEdge as a graph augmentation technique to ease the distribution gap of the subgraph representation between the training and testing sets. Lastly, we discuss how FakeEdge can enhance the expressive power of any GNN-based subgraph link prediction model.
78
+
79
+ § 4.1 DATASET SHIFT
80
+
81
+ Definition 3. Dataset Shift happens when the joint distribution between train and test is different. That is, $p\left( {\mathbf{h},\mathrm{y} \mid \mathrm{c} = \text{ train }}\right) \neq p\left( {\mathbf{h},\mathrm{y} \mid \mathrm{c} = \text{ test }}\right)$ .
82
+
83
+ A simple example of dataset shift is an object detection system. If the system is only designed and trained under good weather conditions, it may fail to capture objects in bad weather. In general, dataset shift is often caused by some unknown latent variable, like the weather condition in the example above. The unknown variable is not observable during the training phase so the model cannot fully capture the conditions during testing. Similarly, the edge existence $\mathrm{e} \in \{ 0,1\}$ in the subgraph poses as an "unknown" variable in the subgraph link prediction task. Most of the current GNN-based models neglect the effect of the edge existence on encoding the subgraph's feature.
84
+
85
+ Definition 4. A subgraph’s feature $\mathbf{h}$ is called Edge Invariant if $p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e}}\right) = p\left( {\mathbf{h},\mathbf{y}}\right)$ .
86
+
87
+ In other words, an Edge Invariant subgraph embedding stays the same whether or not the edge is present at the focal node pair. It disentangles the edge's existence from the subgraph representation learning. For example, the Common Neighbors predictor is Edge Invariant because the existence of an edge at the focal node pair does not affect the number of common neighbors the two nodes have. However, Preferential Attachment, another widely used heuristic link predictor, is not Edge Invariant because the node degrees vary depending on the existence of the edge.
88
+
89
+ Theorem 1. GNN cannot learn the subgraph feature $\mathbf{h}$ to be Edge Invariant.
90
+
91
+ Recall that the subgraphs in Figure 1 are encoded differently between the training and testing set because of the presence/absence of the focal link. Thus, the vanilla GNN cannot learn the Edge
92
+
93
+ < g r a p h i c s >
94
+
95
+ Figure 2: The proposed four FakeEdge methods. In general, FakeEdge encourages the link prediction model to learn the subgraph representation by always deliberately adding or removing the edges at the focal node pair in each subgraph. In this way, FakeEdge can reduce the distribution gap of the learned subgraph representation between the training and testing set.
96
+
97
+ Invariant subgraph feature. Learning an Edge Invariant subgraph feature is crucial for mitigating the dataset shift problem. Here, we give our main theorem about this issue in the link prediction task:
98
+
99
+ Theorem 2. Given $p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e},\mathbf{c}}\right) = p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e}}\right)$ , there is no Dataset Shift in the link prediction if the subgraph embedding is Edge Invariant. That is, $p\left( {\mathbf{h},\mathrm{y} \mid \mathbf{e}}\right) = p\left( {\mathbf{h},\mathrm{y}}\right) \Rightarrow p\left( {\mathbf{h},\mathrm{y} \mid \mathbf{c}}\right) = p\left( {\mathbf{h},\mathrm{y}}\right)$ .
100
+
101
+ The assumption $p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e},\mathbf{c}}\right) = p\left( {\mathbf{h},\mathbf{y} \mid \mathbf{e}}\right)$ states that, once the edge at the focal node pair is taken into consideration, the joint distribution stays the same across the training and testing stages; that is, there is no other underlying unobserved latent variable shifting the distribution. The theorem shows that an Edge Invariant subgraph embedding does not induce the dataset shift phenomenon.
102
+
103
+ Theorem 2 gives us the motivation to design the subgraph embedding to be Edge Invariant. When it comes to GNNs, practical GNNs are essentially message passing neural networks [17]. The existence of the edge incident to the focal node pair can determine the computation graph for message passing when learning node representations.
104
+
105
+ § 4.2 PROPOSED METHODS
106
+
107
+ Having developed the conditions for the dataset shift phenomenon in link prediction, we next introduce several straightforward subgraph augmentation techniques, collectively called FakeEdge (Figure 2), which satisfy the conditions in Theorem 2. The motivation is to mitigate the distribution shift of the subgraph embedding by eliminating the differing patterns of target-link existence between the training and testing sets. Note that all of the strategies follow the same discipline: align the topological structure around the focal node pair in the training and testing datasets, especially for isomorphic subgraphs. Therefore, we expect comparable performance improvements across the different strategies.
108
+
109
+ Compared to vanilla GNN-based subgraph link prediction methods, FakeEdge augments the computation graph for the node representation learning and subgraph pooling steps, in order to obtain an Edge Invariant embedding for the entire subgraph.
110
+
111
+ Edge Plus A simple strategy is to always make the edge present at the focal node pair for all training and testing samples. Namely, we add an edge to the subgraph's edge set via ${E}_{i,j}^{r + } = {E}_{i,j}^{r} \cup \{ \left( {i,j}\right) \}$, and use this edge set to compute the representation ${\mathbf{h}}^{\text{ plus }}$ of the subgraph ${\mathcal{G}}_{i,j}^{r + }$.
112
+
113
+ Edge Minus Another straightforward modification is to remove the edge at the focal node pair if it exists. That is, we remove the edge from the subgraph's edge set via ${E}_{i,j}^{r - } = {E}_{i,j}^{r} \smallsetminus \{ \left( {i,j}\right) \}$, and obtain the representation ${\mathbf{h}}^{\text{ minus }}$ from ${\mathcal{G}}_{i,j}^{r - }$.
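+
+ A minimal sketch of these two augmentations on a PyTorch `edge_index` tensor is shown below; it assumes the subgraph has already been extracted and relabeled so that `i` and `j` index the focal node pair, and it does not deduplicate edges in the Edge Plus case:
+
+ ```python
+ import torch
+
+ def edge_plus(edge_index, i, j):
+     # Always make the focal edge present (both directions, undirected graph).
+     fake = torch.tensor([[i, j], [j, i]],
+                         dtype=edge_index.dtype, device=edge_index.device)
+     return torch.cat([edge_index, fake], dim=1)
+
+ def edge_minus(edge_index, i, j):
+     # Always drop the focal edge if it is present.
+     src, dst = edge_index
+     keep = ~(((src == i) & (dst == j)) | ((src == j) & (dst == i)))
+     return edge_index[:, keep]
+ ```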
114
+
115
+ For GNN-based models, adding or removing edges at the focal node pair can amplify or reduce message propagation along the subgraph. It may also change the connectivity of the subgraph. We are interested to see if it can be beneficial to take both situations into consideration by combining them. Based on Edge Plus and Edge Minus, we further develop another two Edge Invariant methods:
116
+
117
+ Edge Mean To combine Edge Plus and Edge Minus, one can extract these two features and fuse them into one view. One way is to take the average of the two latent features by ${\mathbf{h}}^{\text{ mean }} = \frac{{\mathbf{h}}^{\text{ plus }} + {\mathbf{h}}^{\text{ minus }}}{2}$ .
118
+
119
+ Edge Att Edge Mean weighs ${\mathcal{G}}_{i,j}^{r + }$ and ${\mathcal{G}}_{i,j}^{r - }$ equally on all subgraphs. To vary the importance of the two modified subgraphs, we can apply an adaptive weighted sum. Similar to common practice in text translation [36], we apply an attention mechanism to fuse ${\mathbf{h}}^{\text{ plus }}$ and ${\mathbf{h}}^{\text{ minus }}$:
120
+
121
+ $$
122
+ {\mathbf{h}}^{\text{ att }} = {w}^{\text{ plus }} * {\mathbf{h}}^{\text{ plus }} + {w}^{\text{ minus }} * {\mathbf{h}}^{\text{ minus }}, \tag{1}
123
+ $$
124
+
125
+ $$
126
+ \text{ where }{w}^{ \cdot } = \operatorname{SoftMax}\left( {{\mathbf{q}}^{\top } \cdot \tanh \left( {\mathbf{W} \cdot {\mathbf{h}}^{ \cdot } + \mathbf{b}}\right) }\right) \tag{2}
127
+ $$
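+
+ A sketch of this fusion step (assuming ${\mathbf{h}}^{\text{ plus }}$ and ${\mathbf{h}}^{\text{ minus }}$ are already-computed embedding vectors of the same dimension) might look as follows:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class EdgeAtt(nn.Module):
+     """Attention fusion of h_plus and h_minus as in Eqs. (1)-(2)."""
+     def __init__(self, dim):
+         super().__init__()
+         self.W = nn.Linear(dim, dim)          # W and b in Eq. (2)
+         self.q = nn.Parameter(torch.randn(dim))
+
+     def forward(self, h_plus, h_minus):
+         h = torch.stack([h_plus, h_minus], dim=0)        # (2, dim)
+         scores = torch.tanh(self.W(h)) @ self.q          # (2,)
+         w = torch.softmax(scores, dim=0)                 # w_plus, w_minus
+         return w[0] * h_plus + w[1] * h_minus
+ ```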
128
+
129
+ § 4.3 EXPRESSIVE POWER OF STRUCTURAL REPRESENTATION
130
+
131
+ In addition to addressing dataset shift, FakeEdge can tackle another problem that limits the expressive power of link prediction methods on structural representations [37]. In general, a powerful model is expected to discriminate most non-isomorphic focal node pairs. For instance, in Figure 3 we have two isomorphic subgraphs $A$ and $B$, which do not share any nodes. Suppose that the focal node pairs of interest are $\{ u,w\}$ and $\{ v,w\}$. Obviously, these two focal node pairs play different structural roles in the graph, so we expect different structural representations for them. With GNN-based methods like GAE, the node representations of $u$ and $v$ will be the same, ${z}_{u} = {z}_{v}$, because they have isomorphic neighborhoods. GAE applies a score function on the focal node pair to pool the subgraph's feature. Hence, the structural representations of the node sets $\{ u,w\}$ and $\{ v,w\}$ would be the same, leaving them inseparable in the embedding space. This issue is caused by a limitation of GNNs, whose expressive power is bounded by the 1-WL test [38].
132
+
133
+ < g r a p h i c s >
134
+
135
+ Figure 3: Given two isomorphic but non-overlapping subgraphs $A$ and $B$, GNNs learn the same representation for the nodes $u$ and $v$. Hence, GNN-based methods cannot distinguish the focal node pairs $\{ u,w\}$ and $\{ v,w\}$. However, adding a FakeEdge at $\{ u,w\}$ (shown as the dashed line in the figure) breaks the tie between the representations of $u$ and $v$, thanks to $u$'s modified neighborhood.
136
+
137
+ Zhang et al. address this problem by assigning distinct labels to the focal node pair and the rest of the nodes in the subgraph [19]. With FakeEdge, we can utilize the Edge Plus strategy to deliberately add an edge between nodes $u$ and $w$ (shown as the dashed line in Figure 3). Note that the edge between $v$ and $w$ already exists, so there is no need to add an edge there. Therefore, the nodes $u$ and $v$ will have different neighborhoods ($u$ has 4 neighbors and $v$ has 3 neighbors), resulting in different node representations for $u$ and $v$ after the first iteration of message passing with a GNN. In the end, we obtain different representations for the two focal node pairs.
138
+
139
+ According to Theorem 2 in [19], such non-isomorphic focal node pairs $\{ u,w\} ,\{ v,w\}$ are not sporadic cases in a graph. Given a graph with $n$ nodes whose node degrees are $\mathcal{O}\left( {{\log }^{\frac{1 - \epsilon }{2r}}n}\right)$ for any constant $\epsilon > 0$, there exist $\omega \left( {n}^{2\epsilon }\right)$ such pairs $\{ u,w\}$ and $\{ v,w\}$ that cannot be distinguished by GNN-based models like GAE. FakeEdge, however, can enhance the expressive power of link prediction methods by modifying the subgraph's local connectivity.
140
+
141
+ § 5 EXPERIMENTS
142
+
143
+ In this section, we conduct extensive experiments to evaluate how FakeEdge can mitigate the dataset shift issue on various baseline models in the link prediction task. Then we empirically show the distribution gap of the subgraph representation between the training and testing and discuss how the dataset shift issue can worsen with deeper GNNs.
144
+
145
+ Table 1: Comparison with and without FakeEdge (AUC). The best results are highlighted in bold.
146
+
147
148
+
149
+ Models FakeEdge Cora Citeseer Pubmed USAir NS PB Yeast C.ele Power Router E.coli
150
+
151
+ 1-13
152
+ 5*GCN Original ${84.92} \pm {1.95}$ 77.05±2.18 ${81.58} \pm {4.62}$ ${94.07} \pm {1.50}$ 96.92±0.73 93.17±0.45 93.76±0.65 ${88.78} \pm {1.85}$ 76.32±4.65 ${60.72} \pm {5.88}$ 95.35±0.36
153
+
154
+ 2-13
155
+ Edge Plus ${91.94}_{\pm {0.90}}$ 89.54±1.17 ${97.91}_{\pm {0.14}}$ ${97.10} \pm {1.01}$ 98.03±0.72 ${95.48} \pm {0.42}$ 97.86±0.27 89.65±1.74 ${\mathbf{{85.42}}}_{\pm {0.91}}$ 95.96±0.41 98.05±0.30
156
+
157
+ 2-13
158
+ Edge Minus ${92.01}_{\pm {0.94}}$ 90.29±0.ss 97.87±0.15 97.16±0.97 ${\mathbf{{98.14}}}_{\pm {0.66}}$ ${95.50}_{\pm {0.43}}$ ${\mathbf{{97.90}}}_{\pm {0.29}}$ 89.47±1.86 ${85.39}_{\pm {1.08}}$ 96.05±0.37 97.97±0.31
159
+
160
+ 2-13
161
+ Edge Mean ${91.86} \pm {0.76}$ ${89.61}_{\pm {0.96}}$ ${97.94}_{\pm {0.13}}$ ${97.19}_{\pm {1.00}}$ ${98.08}_{\pm {0.66}}$ 95.52±0.43 ${97.70}_{\pm {0.36}}$ 89.62±1.82 ${85.23} \pm {1.00}$ ${\mathbf{{96.08}}}_{\pm {0.35}}$ ${\mathbf{{98.07}}}_{\pm {0.27}}$
162
+
163
+ 2-13
164
+ Edge Att ${\mathbf{{92.06}}}_{\pm {0.85}}$ ${}_{{88.96} \pm {1.05}}$ ${\mathbf{{97.96}}}_{\pm {0.12}}$ ${\mathbf{{97.20}}}_{\pm {0.69}}$ 97.96±0.39 ${95.46}_{\pm {0.45}}$ 97.65±0.17 89.76±2.06 ${}_{{85.26} \pm {1.32}}$ ${95.90}_{\pm {0.47}}$ ${98.04}_{\pm {0.16}}$
165
+
166
+ 1-13
167
+ 5*SAGE Original 89.12±0.90 87.76±0.97 ${94.95}_{\pm {0.44}}$ 96.57±0.57 ${98.11} \pm {0.48}$ ${94.12} \pm {0.45}$ ${97.11}_{\pm {0.31}}$ 87.62±1.63 79.35±1.66 ${88.37} \pm {1.46}$ 95.70±0.44
168
+
169
+ 2-13
170
+ Edge Plus ${93.21}_{\pm {0.82}}$ 90.88±0.80 ${97.91}_{\pm {0.14}}$ 97.64±0.73 ${\mathbf{{98.72}}}_{\pm {0.59}}$ 95.68±0.39 98.20±0.13 ${\mathbf{{90.94}}}_{\pm {1.48}}$ 86.36±0.97 96.46±0.38 ${98.41}_{\pm {0.19}}$
171
+
172
+ 2-13
173
+ Edge Minus ${92.45} \pm {0.78}$ 90.14±1.04 97.93±0.14 97.50±0.67 98.66±0.55 95.57±0.39 ${}_{{98.13} \pm {0.10}}$ 90.83±1.59 ${85.62} \pm {1.17}$ 92.91±1.09 98.34±0.26
174
+
175
+ 2-13
176
+ Edge Mean 92.77±0.69 90.60±0.94 97.96±0.13 97.67±0.70 ${98.62} \pm {0.61}$ ${\mathbf{{95.69}}}_{\pm {0.37}}$ ${98.20}_{\pm {0.13}}$ 90.86±1.51 ${86.24} \pm {1.01}$ ${96.22} \pm {0.38}$ ${98.41}_{\pm {0.21}}$
177
+
178
+ 2-13
179
+ Edge Att ${\mathbf{{93.31}}}_{\pm {1.02}}$ ${\mathbf{{91.01}}}_{\pm {1.14}}$ ${\mathbf{{98.01}}}_{\pm {0.13}}$ 97.40±0.94 ${98.70}_{\pm {0.59}}$ ${95.49}_{\pm {0.49}}$ ${\mathbf{{98.22}}}_{\pm {0.24}}$ 90.64±1.88 $\mathbf{{86.46}} \pm {0.91}$ 96.31±0.59 ${\mathbf{{98.43}}}_{\pm {0.13}}$
180
+
181
+ 1-13
182
+ 5*GIN Original ${82.70}_{\pm {1.93}}$ 77.85±2.64 ${91.32} \pm {1.13}$ ${94.89}_{\pm {0.89}}$ 96.05±1.10 ${92.95} \pm {0.51}$ ${94.50}_{\pm {0.65}}$ ${85.23} \pm {2.56}$ ${73.29}_{\pm {3.88}}$ ${84.29}_{\pm {1.20}}$ ${94.34}_{\pm {0.57}}$
183
+
184
+ 2-13
185
+ Edge Plus 90.72±1.11 89.54±1.19 97.63±0.14 96.03±1.37 98.51±0.55 ${95.38}_{\pm {0.35}}$ ${\mathbf{{97.84}}}_{\pm {0.40}}$ ${\mathbf{{89.71}}}_{\pm {2.06}}$ ${\mathbf{{86.61}}}_{\pm {0.87}}$ ${95.79}_{\pm {0.48}}$ 97.67±0.23
186
+
187
+ 2-13
188
+ Edge Minus 89.88±1.26 ${89.30} \pm {1.08}$ 97.27±0.17 96.36±0.83 98.62±0.45 ${95.35}_{\pm {0.35}}$ ${97.80}_{\pm {0.41}}$ ${89.40}_{\pm {1.91}}$ ${86.55} \pm {0.83}$ 95.72±0.45 97.33±0.36
189
+
190
+ 2-13
191
+ Edge Mean 90.30±1.22 89.47±1.13 97.53±0.19 ${\mathbf{{96.45}}}_{\pm {0.90}}$ 98.66±0.45 ${\mathbf{{95.39}}}_{\pm {0.37}}$ 97.78±0.40 89.66±2.00 ${86.51}_{\pm {0.92}}$ 95.73±0.43 97.57±0.32
192
+
193
+ 2-13
194
+ Edge Att 90.76±0.88 89.55±0.61 ${97.50}_{\pm {0.15}}$ ${96.34}_{\pm {0.82}}$ 98.35±0.54 ${95.29}_{\pm {0.29}}$ 97.66±0.33 ${89.39}_{\pm {1.61}}$ ${86.21}_{\pm {0.67}}$ 95.78±0.52 ${\mathbf{{97.74}}}_{\pm {0.33}}$
195
+
196
+ 1-13
197
+ 5*PLNLP Original ${82.37} \pm {1.70}$ ${82.93} \pm {1.73}$ 87.36±4.90 ${95.37}_{\pm {0.87}}$ 97.86±0.93 ${92.99}_{\pm {0.71}}$ ${95.09}_{\pm {1.47}}$ ${88.31}_{\pm {2.21}}$ ${81.59} \pm {4.31}$ ${86.41}_{\pm {1.63}}$ 90.63±1.68
198
+
199
+ 2-13
200
+ Edge Plus ${91.62} \pm {0.87}$ ${\mathbf{{89.88}}}_{\pm {1.19}}$ ${98.31}_{\pm {0.21}}$ ${98.09}_{\pm {0.73}}$ ${\mathbf{{98.77}}}_{\pm {0.39}}$ ${\mathbf{{95.33}}}_{\pm {0.39}}$ ${98.10}_{\pm {0.33}}$ ${91.77} \pm {2.16}$ 90.04±0.57 ${96.45}_{\pm {0.40}}$ ${\mathbf{{98.03}}}_{\pm {0.23}}$
201
+
202
+ 2-13
203
+ Edge Minus ${\mathbf{{91.84}}}_{\pm {1.42}}$ ${88.99}_{\pm {1.48}}$ ${\mathbf{{98.44}}}_{\pm {0.14}}$ 97.92±0.52 ${98.59}_{\pm {0.44}}$ ${95.20}_{\pm {0.34}}$ ${98.01}_{\pm {0.38}}$ ${91.60}_{\pm {2.23}}$ 89.26±0.58 ${95.01}_{\pm {0.47}}$ 97.80±0.16
204
+
205
+ 2-13
206
+ Edge Mean ${91.77} \pm {1.49}$ ${89.45} \pm {1.50}$ 98.36±0.16 ${\mathbf{{98.17}}}_{\pm {0.60}}$ ${98.66} \pm {0.56}$ ${95.30}_{\pm {0.37}}$ 98.10±0.39 ${91.70} \pm {2.18}$ 90.05±0.52 ${96.29} \pm {0.47}$ ${98.02} \pm {0.20}$
207
+
208
+ 2-13
209
+ Edge Att ${91.22} \pm {1.34}$ ${88.75} \pm {1.70}$ ${98.41}_{\pm {0.17}}$ 98.13±0.61 ${98.70} \pm {0.40}$ ${95.32} \pm {0.38}$ 98.06±0.37 ${91.72} \pm {2.12}$ ${\mathbf{{90.08}}}_{\pm {0.54}}$ ${96.40} \pm {0.40}$ ${98.01}_{\pm {0.18}}$
210
+
211
+ 1-13
212
+ 5*SEAL Original ${90.13} \pm {1.94}$ ${87.59} \pm {1.57}$ ${95.79}_{\pm {0.78}}$ 97.26±0.58 97.44±1.07 ${95.06}_{\pm {0.46}}$ ${96.91}_{\pm {0.45}}$ ${88.75} \pm {1.90}$ ${}_{{78.14} \pm {3.14}}$ ${92.35} \pm {1.21}$ 97.33±0.28
213
+
214
+ 2-13
215
+ Edge Plus ${90.01} \pm {1.95}$ 89.65±1.22 97.30±0.34 97.34±0.59 98.35±0.63 95.35±0.38 97.67±0.32 ${89.20} \pm {1.86}$ ${85.25} \pm {0.80}$ 95.47±0.58 97.84±0.25
216
+
217
+ 2-13
218
+ Edge Minus ${91.04} \pm {1.91}$ 89.74±1.16 97.50±0.33 97.27±0.63 98.17±0.74 ${\mathbf{{95.36}}}_{\pm {0.37}}$ 97.64±0.30 89.35±1.98 ${\mathbf{{85.30}}}_{\pm {0.91}}$ ${\mathbf{{95.77}}}_{\pm {0.79}}$ 97.79±0.30
219
+
220
+ 2-13
221
+ Edge Mean 90.36±2.17 89.87±1.14 ${\mathbf{{97.52}}}_{\pm {0.34}}$ ${\mathbf{{97.38}}}_{\pm {0.68}}$ ${98.23} \pm {0.70}$ ${95.30}_{\pm {0.34}}$ 97.68±0.33 ${89.19}_{\pm {1.85}}$ ${85.30}_{\pm {0.87}}$ ${95.61} \pm {0.64}$ 97.83±0.23
222
+
223
+ 2-13
224
+ Edge Att $\mathbf{{91.08}} \pm {1.67}$ 89.35±1.43 97.26±0.45 97.04±0.79 ${\mathbf{{98.52}}}_{\pm {0.57}}$ 95.19±0.43 ${\mathbf{{97.70}}}_{\pm {0.40}}$ ${\mathbf{{89.37}}}_{\pm {1.40}}$ ${}_{{85.24} \pm {1.39}}$ ${95.14}_{\pm {0.62}}$ ${\mathbf{{97.90}}}_{\pm {0.33}}$
225
+
226
+ 1-13
227
+ 5*WalkPool Original ${92.00} \pm {0.79}$ 89.64±1.01 97.70±0.19 97.83±0.97 ${99.00} \pm {0.45}$ 94.53±0.44 ${96.81} \pm {0.92}$ 93.71±1.11 ${82.43} \pm {3.57}$ 87.46±7.45 ${95.00} \pm {0.90}$
228
+
229
+ 2-13
230
+ Edge Plus 91.96±0.79 ${89.49}_{\pm {0.96}}$ 98.36±0.13 97.97±0.96 ${98.99}_{\pm {0.58}}$ 95.47±0.32 98.28±0.24 93.79±1.11 ${91.24} \pm {0.84}$ ${97.31}_{\pm {0.26}}$ 98.65±0.17
231
+
232
+ 2-13
233
+ Edge Minus ${91.97} \pm {0.80}$ 89.61±1.04 ${\mathbf{{98.43}}}_{\pm {0.10}}$ 98.03±0.95 99.02±0.54 95.47±0.32 ${\mathbf{{98.30}}}_{\pm {0.23}}$ ${\mathbf{{93.83}}}_{\pm {1.13}}$ ${\mathbf{{91.28}}}_{\pm {0.90}}$ ${\mathbf{{97.35}}}_{\pm {0.28}}$ 98.66±0.17
234
+
235
+ 2-13
236
+ Edge Mean ${91.77} \pm {0.74}$ 89.55±1.09 ${98.39}_{\pm {0.11}}$ 98.01±0.89 99.02±0.56 ${95.47} \pm {0.29}$ ${98.30}_{\pm {0.24}}$ 93.70±1.12 ${91.26} \pm {0.81}$ 97.27±0.29 98.65±0.19
237
+
238
+ 2-13
239
+ Edge Att ${91.98} \pm {0.80}$ ${}_{{89.36} \pm {0.74}}$ 98.37±0.19 ${\mathbf{{98.12}}}_{\pm {0.81}}$ 99.03±0.50 ${\mathbf{{95.47}}}_{\pm {0.27}}$ 98.28±0.24 93.63±1.11 ${91.25} \pm {0.60}$ 97.27±0.27 ${\mathbf{{98.70}}}_{\pm {0.14}}$
240
+
241
+ 1-13
242
+
243
+ § 5.1 EXPERIMENTAL SETUP
244
+
245
+ Baseline methods. We show how FakeEdge techniques can improve existing link prediction methods, including GAE-like models [10], PLNLP [14], SEAL [11], and WalkPool [12]. To examine the effectiveness of FakeEdge, we compare model performance with the subgraph representation learned on the original unmodified subgraphs and on the FakeEdge-augmented ones. For GAE-like models, we apply different GNN encoders, including GCN [9], SAGE [39] and GIN [38]. SEAL and WalkPool are already implemented in the fashion of subgraph link prediction. However, a subgraph extraction preprocessing step is needed for GAE and PLNLP, since they were not originally implemented as subgraph link prediction methods. GCN, SAGE, and PLNLP use a score function to pool the subgraph. GCN and SAGE use the Hadamard product as the score function, while an MLP is applied for PLNLP (see Section 3.2 for discussion of the score functions). Moreover, GIN applies a subgraph-level pooling strategy, called "mean readout" [38], whose pooling is based on the entire subgraph. Similarly, SEAL and WalkPool also pool over the entire subgraph to aggregate the representation. More details about the model implementation can be found in Appendix C.
246
+
247
+ Benchmark datasets. For the experiment, we use 3 datasets with node attributes and 8 without attributes. The graph datasets with node attributes are three citation networks: Cora [40], Cite-seer [41], and Pubmed [42]. The graph datasets without node attributes are eight graphs in a variety of domains: USAir [43], NS [44], PB [45], Yeast [46], C.ele [47], Power [47], Router [48], and E.coli [49]. More details about the benchmark datasets can be found in Appendix D.
248
+
249
+ Evaluation protocols. Following the same experimental setting as $\left\lbrack {{11},{12}}\right\rbrack$, the links are split into 3 parts: ${85}\%$ for training, $5\%$ for validation, and ${10}\%$ for testing. The links in validation and testing are unobserved during the training phase. We also implement a universal data pipeline for the different methods to eliminate data perturbation caused by the train/test split. We perform 10 random data splits to reduce performance disturbance. Area under the curve (AUC) [50] is used as the evaluation metric and is reported at the epoch with the highest score on the validation set.
250
+
251
+ § 5.2 RESULTS
252
+
253
+ FakeEdge on GAE-like models. The results of models with (Edge Plus, Edge Minus, Edge Mean, and Edge Att) and without (Original) FakeEdge are shown in Table 1. We observe that FakeEdge is a vital component for all different methods. With FakeEdge, the link prediction model can obtain a significant performance improvement on all datasets. GAE-like models and PLNLP achieve the most
254
+
255
+ < g r a p h i c s >
256
+
257
+ Figure 4: Distribution gap (AUC) of the positive samples between the training and testing set.
258
+
259
+ remarkable performance improvement when FakeEdge alleviates the dataset shift issue. FakeEdge boosts them by $2\% - {11}\%$ on different datasets. GCN, SAGE, and PLNLP all use a score function as the pooling method, which is solely based on the focal node pair. In particular, the focal node pair is incident to the target link, whose presence determines how messages pass around it. Therefore, the most severe dataset shift issues happen at the embedding of the focal node pair during the node representation learning step, and FakeEdge is expected to bring a notable improvement in these situations.
260
+
261
+ Encoder matters. In addition, the choice of encoder plays an important role when GAE is deployed on the Original subgraph. We can see that SAGE shows the best performance without FakeEdge among these 3 encoders. However, after applying FakeEdge, all GAE-like methods achieve comparably good results regardless of the choice of encoder. We hypothesize that plain SAGE itself partially leverages the idea of FakeEdge to mitigate the dataset shift issue. Each node's neighborhood in SAGE is a fixed-size set of nodes, uniformly sampled from the full neighborhood set. Thus, when learning the node representations of the focal node pair in the positive training samples, it is possible that one node of the focal node pair is not selected as a neighbor of the other during the neighborhood sampling stage. In such cases, the subgraph is effectively modified in the same way as the FakeEdge technique Edge Minus.
262
+
263
+ FakeEdge on subgraph-based models. In terms of SEAL and WalkPool, FakeEdge can still robustly enhance model performance across different datasets. Especially for datasets like Power and Router, FakeEdge increases the AUC by over 10% for both methods. Both methods achieve better results across different datasets, except for the WalkPool model on Cora and Citeseer. One of the crucial components of WalkPool is its walk-based pooling method, which actually operates on both the Edge Plus and Edge Minus graphs. Different from the FakeEdge technique, WalkPool tackles the dataset shift problem mainly at the subgraph pooling stage. Thus, WalkPool shows similar model performance between the Original and FakeEdge-augmented graphs. Moreover, SEAL and WalkPool have utilized one of the FakeEdge techniques as a trick in their initial implementations. However, their papers do not explicitly point out that such a trick fixes the dataset shift issue.
264
+
265
+ Different FakeEdge techniques. When comparing different FakeEdge techniques, Edge Att appears to be the most stable, with a slightly better overall performance and a smaller variance. However, there is no significant difference between these techniques. This observation is consistent with our expectation since all FakeEdge techniques follow the same discipline to fix the dataset shift issue.
266
+
267
+ § 5.3 FURTHER DISCUSSIONS
268
+
269
+ In this section, we conduct experiments to more thoroughly study why FakeEdge can improve the performance of the link prediction methods. We first give an empirical experiment to show how severe the distribution gap can be between training and testing. Then, we discuss the dataset shift issue with deeper GNNs.
270
+
271
+ § 5.3.1 DISTRIBUTION GAP BETWEEN THE TRAINING AND TESTING
272
+
273
+ FakeEdge aims to produce Edge Invariant subgraph embeddings during the training and testing phases of the link prediction task, especially for the positive samples $p\left( {\mathbf{h} \mid \mathbf{y} = 1}\right)$. That is, the subgraph representations of positive samples from training and testing should be difficult, if not impossible, to distinguish from each other. Formally, we ask whether $p\left( {\mathbf{h} \mid \mathrm{y} = 1,\mathrm{c} = \text{ train }}\right) = p\left( {\mathbf{h} \mid \mathrm{y} = 1,\mathrm{c} = \text{ test }}\right)$ by conducting an empirical experiment on the subgraph embedding.
274
+
275
+ Table 2: GIN's performance improvement with Edge Att compared to Original for different numbers of layers. GIN utilizes mean-pooling as the subgraph-level readout.
276
+
277
278
+
279
+ Layers Cora Citeseer Pubmed USAir NS PB Yeast C.ele Power Router E.coli
280
+
281
+ 1-12
282
+ 1 ↑2.80% ↑3.65% ↑4.53% 10.29% ↑1.30% ↑1.02% ↑1.54% ↑2.13% ↑5.24% ↑11.19% ↑1.67%
283
+
284
+ 1-12
285
+ 2 ↑4.66% ↑14.53% ↑6.64% 10.73% ↑1.55% ↑2.16% ↑3.40% ↑5.41% ↑25.32% ↑14.73% ↑2.59%
286
+
287
+ 1-12
288
+ 3 ↑9.78% ↑15.19% ↑6.57% 10.98% ↑2.49% ↑2.43% ↑3.60% ↑4.48% ↑20.46% ↑13.38% ↑3.14%
289
+
290
+ 1-12
291
+
292
+ We retrieve the subgraph embeddings of the positive samples from both the training and testing stages and randomly shuffle them. Then we classify whether each sample comes from training ($\mathrm{c} =$ train) or testing ($\mathrm{c} =$ test). The shuffled positive samples are split ${80}\% /{20}\%$ into train and inference sets. Note that the train set here, as well as the inference set, contains shuffled positive samples from both the training and testing sets of the link prediction task. We then feed the subgraph embedding into a 2-layer MLP classifier to investigate whether the classifier can differentiate the training samples ($\mathrm{c} =$ train) from the testing samples ($\mathrm{c} =$ test). In general, the classifier will struggle with this classification if the embeddings of training and testing samples are drawn from the same underlying distribution, which indicates there is no significant dataset shift issue.
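+
+ A minimal sketch of this probing experiment (assuming `h_train` and `h_test` are tensors of positive-sample subgraph embeddings collected from the two stages) is given below; an AUC close to 0.5 on the held-out 20% would indicate no noticeable dataset shift:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def probe_distribution_gap(h_train, h_test, epochs=100):
+     x = torch.cat([h_train, h_test])
+     c = torch.cat([torch.zeros(len(h_train)), torch.ones(len(h_test))])
+     perm = torch.randperm(len(x))                 # shuffle train/test samples
+     split = int(0.8 * len(x))
+     tr, te = perm[:split], perm[split:]
+     mlp = nn.Sequential(nn.Linear(x.size(1), 64), nn.ReLU(), nn.Linear(64, 1))
+     opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
+     for _ in range(epochs):
+         opt.zero_grad()
+         loss = nn.functional.binary_cross_entropy_with_logits(
+             mlp(x[tr]).squeeze(-1), c[tr])
+         loss.backward()
+         opt.step()
+     with torch.no_grad():
+         scores = mlp(x[te]).squeeze(-1)
+     return scores, c[te]   # e.g. feed into sklearn.metrics.roc_auc_score
+ ```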
293
+
294
+ We use a GAE with GCN as the encoder to run the experiment. AUC is used to measure the discriminating power of the classifier. The results are shown in Figure 4. Without FakeEdge, the classifier shows a significant ability to separate positive samples between training and testing. With the FakeEdge-augmented subgraph embeddings, the classifier stumbles in distinguishing the samples. The comparison clearly reveals how different the subgraph embedding can be between training and testing, and shows that FakeEdge both provably and empirically diminishes the distribution gap.
295
+
296
+ § 5.3.2 DATASET SHIFT WITH DEEPER GNNS
297
+
298
+ Given two graphs with $n$ nodes each, the 1-WL test may take up to $n$ iterations to determine whether the two graphs are isomorphic [51]. Thus, GNNs, which mimic the 1-WL test, tend to discriminate more non-isomorphic graphs as the number of GNN layers increases. SEAL [19] has empirically witnessed stronger representation power and more expressive link representations with deeper GNNs. However, we notice that the dataset shift issue in subgraph link prediction becomes more severe when GNNs try to capture long-range information with more layers.
299
+
300
+ We reproduce the experiments on GIN by using $l = 1,2,3$ message passing layers and compare the model performance by AUC scores with and without FakeEdge. Here we only apply Edge Att as the FakeEdge technique. The relative AUC score improvement of Edge Att is reported, namely $\left( {{AU}{C}_{\text{ EdgeAtt }} - {AU}{C}_{\text{ Original }}}\right) /{AU}{C}_{\text{ Original }}$ . The results are shown in Table 2. As we can observe, the relative performance improvement between Edge Att and Original becomes more significant with more layers, which indicates that the dataset shift issue can be potentially more critical when we seek deeper GNNs for greater predictive power.
301
+
302
+ To explain this phenomenon, we hypothesize that GNNs with more layers involve more nodes in the subgraph whose computation graphs depend on the existence of the edge at the focal node pair. For example, select a node $v$ from the subgraph ${\mathcal{G}}_{i,j}^{r}$ that is at least $l$ hops away from the focal node pair $\{ i,j\}$, namely $l = \min \left( {d\left( {i,v}\right) ,d\left( {j,v}\right) }\right)$. If the GNN has only $l$ layers, $v$ will not include the edge $(i,j)$ in its computation graph. But with a GNN of $l + 1$ layers, the edge $(i,j)$ will affect $v$'s computation graph. We leave the validation of this hypothesis to future work.
303
+
304
+ § 6 CONCLUSION
305
+
306
+ Dataset shift is arguably one of the most challenging problems in the world of machine learning. However, to the best of our knowledge, none of the previous studies sheds light on this notable phenomenon in link prediction. In this paper, we studied the issue of dataset shift in link prediction tasks with GNN-based models. We first unified several existing models into a framework of subgraph link prediction. Then, we theoretically investigated the phenomenon of dataset shift in subgraph link prediction and proposed a model-agnostic technique, FakeEdge, to remedy the issue. Experiments with different models over a wide range of datasets verified the effectiveness of FakeEdge.
papers/LOG/LOG 2022/LOG 2022 Conference/R8v95EwI7NL/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,142 @@
1
+ # Consensus Label Propagation with Graph Convolutional Networks for Single-Cell RNA Sequencing Cell Type Annotation
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Single-cell RNA sequencing (scRNA-seq) data, annotated by cell type, is useful in a variety of downstream biological applications, such as profiling gene expression at the single-cell level. However, manually assigning these annotations with known marker genes is both time-consuming and subjective. We present a Graph Convolutional Network (GCN) based approach to automate the annotation process. Our process builds upon existing labelling approaches, using state-of-the-art tools to find highly-confident cells through consensus and spreading these confident labels with a semi-supervised GCN. Using simulated data and two scRNA-seq datasets from different tissues, we show that our method improves accuracy over a simple consensus algorithm and the average of the underlying tools. We also demonstrate that our GCN method allows for feature interpretation, pulling out important genes for cell type classification. We present our completed pipeline, written in Pytorch, as an end-to-end tool for automating and interpreting the classification of scRNA-seq data.
12
+
13
+ ## 1 Introduction
14
+
15
+ Single-cell RNA sequencing (scRNA-seq) measures the RNA from each gene present in an individual cell, serving as a proxy for gene expression. High-quality labels of cell type based on the transcriptional profile produced by scRNA-seq have proven valuable for characterizing gene expression of cells, and for discovering cell types and genetic drivers of disease. Traditionally, these labels are produced by unsupervised clustering followed by labelling clusters with known marker genes. However, unsupervised clustering is limited by issues such as the size of scRNA-seq datasets as well as subjectivity in reclustering and biological interpretation of clusters [1].
16
+
17
+ The limitations of traditional cell type annotation methods have necessitated the development of automated methods for cell labelling. Three main categories of tools have emerged: marker gene based, correlation based, and supervised classification based [2]. Marker based approaches employ previously known marker genes for labelling, while correlation and supervised learning based approaches require previous manually labelled scRNA-seq data sets with the cell types of interest. Within these broad categories, the performance of individual tools varies widely across data sets [3]. As a result, using the consensus of multiple classification tools could yield higher accuracy. However, to the best knowledge of the authors, there currently exist no tools for researchers to easily apply multiple classification algorithms to their scRNA-seq data.
18
+
19
+ We address these issues in two ways. First, we provide a pipeline for annotation of scRNA-seq data with multiple state-of-the-art annotation algorithms. Second, we implement and test a semi-supervised Graph Convolutional Network (GCN) as a mechanism to propagate labels from confidently labelled cells to unconfidently labelled cells. We show our method improves overall classification accuracy (and, more specifically, classification accuracy on unconfidently labelled cells) compared to taking the consensus of the labels from the underlying tools. We also demonstrate the use of DeepLIFT[4] as an effective interpretation tool for our GCN model, allowing researchers insight into classification decisions and important cell type gene markers.
20
+
21
+ ![01963f06-dde0-7c9b-97ca-c5db72c65905_1_386_193_1026_432_0.jpg](images/01963f06-dde0-7c9b-97ca-c5db72c65905_1_386_193_1026_432_0.jpg)
22
+
23
+ Figure 1: The model takes in scRNA-seq counts matrix, cell type marker genes, and/or a pre labelled reference data set. This data is passed to any number of cell type annotation tools. Each cell is labelled confidently if a majority of tools agree on a cell type. The scRNA-seq data is then embedded as a k-NN graph and passed through a GCN to propagate confident labels to all other cells.
24
+
25
+ ## 2 Our Model
26
+
27
+ Picking Confident Labels via Consensus. Our method first involves picking confident labels for a subset of cells in a given data set. Our pipeline currently includes five different state-of-the-art annotation methods: SCINA [5], ScType [6], ScSorter [7], SingleR [8], and ScPred [9]. Our pipeline also optionally allows researchers to upload their own predictions and utilize other tools. We designate a cell as being confidently labelled (and keep that cell's label) if a majority of tools agree on that label.
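+
+ As a toy sketch of the voting rule (a hypothetical helper, not the pipeline's exact code), a cell keeps a label only if a strict majority of the tools agree on it:
+
+ ```python
+ from collections import Counter
+
+ def consensus(votes_per_cell, n_tools):
+     """votes_per_cell: one list of tool votes per cell, e.g. ["B", "B", "T", "B", "NK"]."""
+     labels = []
+     for votes in votes_per_cell:
+         label, count = Counter(votes).most_common(1)[0]
+         labels.append(label if count > n_tools / 2 else None)  # None = unconfident
+     return labels
+ ```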
28
+
29
+ Semi-supervised GCN. We construct a GCN with $l$ EdgeConv [10] layers with SiLU activation function and summation aggregation. Each layer propagates embedding vectors between each node and its $k$ nearest neighbors (including itself). A final linear layer projects node embeddings into label space (whose dimension is the number of cell types in our dataset). For architecture details see Appendix A. We train our GCN for 150 epochs, with the Adam optimizer[11] at a learning rate of 0.0001 . Our training loss is Cross-Entropy loss on the set of confidently labelled cells. Batches are randomly selected and our graph is reconstructed for each batch and epoch.
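+
+ A minimal sketch of the described architecture (not the released code; dimensions follow Appendix A, and `torch_geometric` with `torch-cluster` is assumed for `EdgeConv` and `knn_graph`) is:
+
+ ```python
+ import torch.nn as nn
+ from torch_geometric.nn import EdgeConv, knn_graph
+
+ class CellTypeGCN(nn.Module):
+     def __init__(self, in_dim=500, hidden=1000, embed=25, n_types=4, k=2, layers=2):
+         super().__init__()
+         self.k = k
+         dims = [in_dim] + [hidden] * (layers - 1) + [embed]
+         # Each EdgeConv maps [x_i, x_j - x_i] (2 * d_in) to d_out with SiLU,
+         # using summation aggregation over the k-NN neighborhood.
+         self.convs = nn.ModuleList(
+             EdgeConv(nn.Sequential(nn.Linear(2 * d_in, d_out), nn.SiLU()), aggr='add')
+             for d_in, d_out in zip(dims[:-1], dims[1:]))
+         self.out = nn.Linear(embed, n_types)
+
+     def forward(self, x):
+         # Rebuild the k-NN graph (with self-loops) for the current batch.
+         edge_index = knn_graph(x, k=self.k, loop=True)
+         for conv in self.convs:
+             x = conv(x, edge_index)
+         return self.out(x)   # logits over cell types
+ ```
+
+ Training would then minimize cross-entropy on the confidently labelled cells only, e.g. with `torch.optim.Adam(model.parameters(), lr=1e-4)` for 150 epochs, as described above.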
30
+
31
+ Interpretation with DeepLIFT. We employ DeepLIFT[4] with the Rescale rule as implemented by Captum[12]. We use the same hyperparameters batch size $b$ , neighbors $k$ , number of message passing steps $l$ , and final embedding layer size $e$ as used during training. DeepLIFT uses the gradient of a neural network's outputs with respect to inputs to determine how much a given classification depended on a given input variable (in our case, how much classification as a given cell type depends on each gene). Calculating these attribution scores is only possible for a differentiable model (like our GCN) - making it impossible to use on any of the underlying tools in the pipeline.
32
+
33
+ ## 3 Data Sets
34
+
35
+ Preprocessing Data. The initial input to our pipeline is a scRNA-seq count matrix $X$ where ${X}_{ng}$ corresponds to the observed count for gene $g$ in cell $n$. Cells expressing $< {200}$ genes and genes expressed in $< 3$ cells are removed, and $X$ is row-normalized according to ${x}_{i} = \log \left( {1 + \left( {{x}_{i} \times {10000}}\right) /\operatorname{sum}\left( {x}_{i}\right) }\right)$. Both of these steps are common scRNA-seq preprocessing steps [13][14]. Finally, we use Principal Components Analysis (PCA) to project $X$ down to 500 features per cell.
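+
+ A brief sketch of these preprocessing steps (using numpy and scikit-learn for illustration; scanpy would be a common alternative, and the cell/gene filtering is assumed to have been applied already) is:
+
+ ```python
+ import numpy as np
+ from sklearn.decomposition import PCA
+
+ def preprocess(X):
+     """X: cells-by-genes count matrix, already filtered as described above."""
+     # Row-normalize each cell to 10,000 counts, then log1p.
+     X = np.log1p(X * 10000.0 / X.sum(axis=1, keepdims=True))
+     # Project each cell down to 500 principal components.
+     return PCA(n_components=500).fit_transform(X)
+ ```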
36
+
37
+ Simulated Data. We generated our simulated data sets with Splatter[15] and parameters estimated from 4000 Pan T Cells from a healthy donor [16]. Each simulated data set consists of 1000 reference cells and 1000 query cells separated by a batch.facScale of 0.5 . The reference cells allow for the use of correlation-based annotation tools. Each set of cells contains four cell types with equal proportions and the transcriptomic profiles of each type corresponding across reference and query. Simulated data sets vary by the de.facScale parameter which determines how different the gene expression profile of different cell type groups are. Five markers were selected randomly from the top ten differentially expressed genes from each cell type. The 0.7 de.facScale and 0.8 de.facScale simulated data sets had 156 and 85 unconfidently labelled cells respectively.
38
+
39
+ Table 1: Accuracy scores for all datasets for both all cells and unconfidently labelled cells only. GCN accuracies are the mean of accuracies from five randomly initialized trials.
40
+
41
+ <table><tr><td rowspan="2">Method</td><td colspan="2">Simulation 0.7</td><td colspan="2">Simulation 0.8</td><td colspan="2">Testis</td><td colspan="2">PBMC</td></tr><tr><td>All</td><td>Unconf.</td><td>All</td><td>Unconf.</td><td>All</td><td>Unconf.</td><td>All</td><td>Unconf.</td></tr><tr><td>Ours (GCN)</td><td>90.0</td><td>66.0</td><td>96.1</td><td>83.3</td><td>86.2</td><td>80.9</td><td>93.1</td><td>70.6</td></tr><tr><td>Max Consensus</td><td>86.4</td><td>42.9</td><td>91.2</td><td>25.9</td><td>80.8</td><td>0.0</td><td>91.2</td><td>6.8</td></tr><tr><td>Tool Avg.</td><td>69.3</td><td>33.1</td><td>76.7</td><td>34.6</td><td>72.1</td><td>21.7</td><td>75.0</td><td>36.9</td></tr><tr><td>ScType</td><td>64.8</td><td>13.5</td><td>79.9</td><td>29.4</td><td>84.8</td><td>63.0</td><td>85.5</td><td>81.8</td></tr><tr><td>ScSorter</td><td>85.8</td><td>51.3</td><td>88.7</td><td>38.8</td><td>77.5</td><td>2.2</td><td>71.5</td><td>65.9</td></tr><tr><td>SCINA</td><td>53.7</td><td>7.7</td><td>58.7</td><td>2.4</td><td>53.9</td><td>0.0</td><td>73.3</td><td>6.2</td></tr><tr><td>SingleR</td><td>83.1</td><td>80.1</td><td>85.2</td><td>77.6</td><td>${NA}$</td><td>${NA}$</td><td>70.3</td><td>13.4</td></tr><tr><td>ScPred</td><td>59.2</td><td>12.8</td><td>71.0</td><td>24.7</td><td>${NA}$</td><td>${NA}$</td><td>74.4</td><td>17.1</td></tr></table>
42
+
43
+ Real Data. An essential aspect of creating and testing a labelling tool is ground truth labels. However, it is not usually feasible to acquire ground truth labels for scRNA-seq data. An alternative gold standard is Fluorescence-activated Cell Sorting (FACS), a technique for pre-sorting cells by markers prior to conducting scRNA-seq [17]. We selected two such data sets to test our method, giving us gold-standard cell type labels.
44
+
45
+ First, we use a scRNA-seq data set generated from mouse testis cells[18]. This data contains three cell types: 292 Spermatogonia, 244 Spermatocytes, and 156 Spermatids after filtering. Cell type markers were selected from relevant literature[19] and no reference data set was used for this data. There were 46 unconfidently labelled cells after prediction tool voting on the data set.
46
+
47
+ Second, we use a scRNA-seq data set generated from human peripheral blood mononuclear cells (PBMCs) [20]. This data contains ten cell types; however, we removed cell types not purely sorted by FACS, combined the CD4+ T cell subtypes, and combined the CD8+ T cell subtypes. This resulted in five cell types: 9,106 B Cells, 2,341 Monocytes, 7,572 Natural killer (NK) Cells, 38,006 CD4+ T Cells, and 19,856 CD8+ T Cells. We used the same markers as ScSorter [7] and used the 10X PBMC 3k data set, annotated as shown in [21], as a reference. There were 2,310 unconfidently labelled cells after prediction tool voting on the data set.
48
+
49
+ ## 4 Results
50
+
51
+ ### 4.1 Accuracy on Test Sets
52
+
53
+ Experiment Settings. For each data set, 20 percent of confidently labelled cells were masked and held out as a validation set. We performed a hyperparameter search (see Appendix A for details) over batch size $b$, neighbors $k$, layers $l$, and embedding layer size $e$, selecting the GCN architecture with the highest validation accuracy. Five random initializations of the optimal model were then trained for 150 epochs as described above, and the mean accuracy was recorded. We use max consensus and the other tool accuracies as baselines for our model. Max consensus simply chooses the cell type with the most votes; in the event of a tie, this method returns "unknown". We report total accuracy and unconfident cell accuracy for each data set. The unconfident cell accuracy refers to cells that had no majority vote for a cell type among the underlying tools.
54
+
55
+ Simulated Data Sets. For both simulated datasets, the optimal model has batch size 20, 2 nearest neighbors, and 2 EdgeConv layers. For the simulation with 0.7 de.facScale, a 25-dimensional embedding space was optimal, whereas for the simulation with 0.8 de.facScale the optimal value was 40. Table 1 shows our model outperforms all other methods for total accuracy and slightly underperforms SingleR for unconfident cell accuracy on the 0.7 de.facScale data.
56
+
57
+ Testis Data Set. Only marker-based prediction tools were used for this data set, as no pre-labelled reference was easily available. The optimal model for this data set used batch size 20, 2 nearest neighbors, 2 EdgeConv layers, and an embedding layer size of 25. Table 1 shows accuracy results, demonstrating that our GCN model outperforms all other methods for both total accuracy and accuracy on unconfident cells.
58
+
59
+ ![01963f06-dde0-7c9b-97ca-c5db72c65905_3_382_198_1026_544_0.jpg](images/01963f06-dde0-7c9b-97ca-c5db72c65905_3_382_198_1026_544_0.jpg)
60
+
61
+ Figure 2: Heatmap of DeepLIFT attribution scores after absolute value and scaling by cell type and heatmap of average log normalized gene expression scaled by gene. a. Top five most important genes for testis data set. b. Top three most important genes for PBMC data set. See Appendix B for extended versions of these plots.
62
+
63
+ PBMC Data Set. The optimal model for this data set used batch size 50, 2 nearest neighbors, 2 EdgeConv layers, and an embedding layer size of 25. Table 1 shows accuracy results. For accuracy on unconfident cells, the GCN model places second behind ScType. Our model still outperforms all other tools and the max consensus. We view max consensus and the underlying tool average as the true baselines for this task, since which tool has the highest accuracy is unknown in practice.
64
+
65
+ ### 4.2 Feature Interpretation
66
+
67
+ Figure 2A shows the top five most important genes by cell type for the testis data set, as identified by running DeepLIFT on our trained model. Interestingly, all of these top genes seem to have uniquely high attribution in their important cell type. It also shows the average scaled gene expression of the same genes in each cell type group. The highly attributed genes for a cell type also have relatively high gene expression in that cell type. We also observe high expression of Spermatocyte genes in Spermatid cells.
68
+
69
+ Figure 2B shows the top three most important genes by cell type for the PBMC data, and the scaled gene expression of the same set of important genes. For B Cells, Monocytes, and NK Cells we see a clear connection between the genes picked out as important by DeepLIFT and the genes expressed by those cell types. However, for CD4 and CD8 T Cells, the expression is not clearly higher for all genes. Importantly, we do observe CD8B as the most important gene for CD8 T Cell classification, a key marker for the cell type. We also observe CD3E (another important marker for all T Cells) as an important gene for both sub types of T Cells. One potential reason for less informative DeepLIFT scores for CD4 and CD8 T Cells is that the GCN often misclassifies CD8 T Cells as CD4 T Cells.
70
+
71
+ ## 5 Discussion
72
+
73
+ In this work we propose a novel framework for scRNA-seq cell type annotation. Building upon existing annotation tools, we implement an EdgeConv based GCN model to propagate consensus based confident labels to the remaining unlabelled cells. We show an improvement in accuracy over a baseline max consensus algorithm and the average tool accuracy. We also demonstrate the ability to identify important genes for classification via model interpretation with DeepLIFT. The model interpretation is especially valuable for researchers as it has the potential to uncover novel gene markers and provide insight into the model's decisions.
74
+
75
+ References
76
+
77
+ [1] Vladimir Yu Kiselev, Tallulah S Andrews, and Martin Hemberg. Challenges in unsupervised clustering of single-cell rna-seq data. Nature Reviews Genetics, 20(5):273-282, 2019. 1
78
+
79
+ [2] Giovanni Pasquini, Jesus Eduardo Rojo Arias, Patrick Schäfer, and Volker Busskamp. Automated methods for cell type annotation on scrna-seq data. Computational and Structural Biotechnology Journal, 19:961-969, 2021. 1
80
+
81
+ [3] Tamim Abdelaal, Lieke Michielsen, Davy Cats, Dylan Hoogduin, Hailiang Mei, Marcel JT Reinders, and Ahmed Mahfouz. A comparison of automatic cell identification methods for single-cell rna sequencing data. Genome biology, 20:1-19, 2019. 1
82
+
83
+ [4] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In International conference on machine learning, pages 3145-3153. PMLR, 2017. 1, 2
84
+
85
+ [5] Ze Zhang, Danni Luo, Xue Zhong, Jin Huk Choi, Yuanqing Ma, Stacy Wang, Elena Mahrt, Wei Guo, Eric W Stawiski, Zora Modrusan, et al. Scina: A semi-supervised subtyping algorithm of single cells and bulk samples. Genes, page 531, 2019. 2
86
+
87
+ [6] Aleksandr Ianevski, Anil K Giri, and Tero Aittokallio. Fully-automated and ultra-fast cell-type identification using specific marker combinations from single-cell transcriptomic data. Nature communications, 13(1):1-10, 2022. 2
88
+
89
+ [7] H. Guo and J Li. scsorter: assigning cells to known cell types according to marker genes. Genome Biol, 2021. 2, 3
90
+
91
+ [8] D. Aran, A.P. Looney, L. Liu, et al. Reference-based analysis of lung single-cell sequencing reveals a transitional profibrotic macrophage. Nat Immunol, 2019. 2
92
+
93
+ [9] J. Alquicira-Hernandez, A. Sathe, H.P. Ji, et al. scpred: accurate supervised method for cell-type classification from single-cell rna-seq data. Genome Biol, 2019. 2
94
+
95
+ [10] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38(5):1-12, 2019. 2
96
+
97
+ [11] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv, 2014. 2
98
+
99
+ [12] Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. Captum: A unified and generic model interpretability library for pytorch, 2020. 2
100
+
101
+ [13] Shuai He, Lin-He Wang, Yang Liu, Yi-Qi Li, Hai-Tian Chen, Jing-Hong Xu, Wan Peng, Guo-Wang Lin, Pan-Pan Wei, Bo Li, et al. Single-cell transcriptome profiling of an adult human cell atlas of 15 major organs. Genome biology, 21(1):1-34, 2020. 2
102
+
103
+ [14] Joseph Collin, Rachel Queen, Darin Zerti, Sanja Bojic, Birthe Dorgau, Nicky Moyse, Marina Moya Molina, Chunbo Yang, Sunanda Dey, Gary Reynolds, et al. A single cell atlas of human cornea that defines its development, limbal progenitor cells and their interactions with the immune cells. The ocular surface, 21:279-298, 2021. 2
104
+
105
+ [15] Luke Zappia, Belinda Phipson, and Alicia Oshlack. Splatter: simulation of single-cell rna sequencing data. Genome biology, 18(1):1-15, 2017. 2
106
+
107
+ [16] 4k pan t cells from a healthy donor. URL https://www.10xgenomics.com/resources/datasets/4-k-pan-t-cells-from-a-healthy-donor-2-standard-2-1-0. 2
108
+
109
+ [17] James W Tung, Kartoosh Heydari, Rabin Tirouvanziam, Bita Sahaf, David R Parks, Leonard A Herzenberg, and Leonore A Herzenberg. Modern flow cytometry: a practical approach. Clinics in laboratory medicine, 27(3):453-468, 2007. 3
110
+
111
+ [18] Min Jung, Daniel Wells, Jannette Rusch, Suhaira Ahmad, Jonathan Marchini, Simon R Myers, and Donald F Conrad. Unified single-cell analysis of testis gene regulation and pathology in five mouse strains. Elife, 8:e43966, 2019. 3
112
+
113
+ [19] Christopher Daniel Green, Qianyi Ma, Gabriel L Manske, Adrienne Niederriter Shami, Xianing Zheng, Simone Marini, Lindsay Moritz, Caleb Sultan, Stephen J Gurczynski, Bethany B Moore,
114
+
115
+ et al. A comprehensive roadmap of murine spermatogenesis defined by single-cell rna-seq. Developmental cell, 46(5):651-667, 2018. 3
116
+
117
+ [20] Grace XY Zheng, Jessica M Terry, Phillip Belgrader, Paul Ryvkin, Zachary W Bent, Ryan Wilson, Solongo B Ziraldo, Tobias D Wheeler, Geoff P McDermott, Junjie Zhu, et al. Massively parallel digital transcriptional profiling of single cells. Nature communications, 8(1):1-12, 2017. 3
118
+
119
+ [21] Seurat - guided clustering tutorial, Jan 2022. URL https://satijalab.org/seurat/articles/pbmc3k_tutorial.html. 3
120
+
121
+ ![01963f06-dde0-7c9b-97ca-c5db72c65905_6_305_211_1180_910_0.jpg](images/01963f06-dde0-7c9b-97ca-c5db72c65905_6_305_211_1180_910_0.jpg)
122
+
123
+ Figure 3: Spread of validation accuracy scores as a function of various hyperparameters. The hyperparameters included are number of neighbors, batch size, GCN layers, and final embedding layer size. a. Testis data set. b. PBMC data set. c. Simulation 0.7 data set. d. Simulation 0.8 data set
124
+
125
+ ## A Hyperparameter Search Details
126
+
127
+ See Figure 3 for details of hyperparameter search on validation set of each data set.
128
+
129
+ Our model architecture consists of $l$ EdgeConv layers. Each EdgeConv layer consists of one round of message passing along edges of the graph, followed by a dense neural network that maps from one layer's embedding space to the next layer's. Each node aggregates information using the sum of its received messages (from neighbors and itself). In all of our model architectures, the first layer takes input embedding size 500 and outputs embedding size 1000. The middle layers accept embedding size 1000 and output embeddings of the same size. The final layer accepts embedding size 1000 and outputs the final embedding size $e$. Both hyperparameters, the number of layers $l$ and the final embedding size $e$, are included in the hyperparameter search.
130
+
131
+ ## B Extended DeepLIFT Plots
132
+
133
+ See Figure 4 for specific gene expression of each highly important gene for all cell types in testis and PBMC data sets.
134
+
135
+ ## C Anonymized Github Link
136
+
137
+ The code for our pipeline used to generate the results in this paper is available at https://anonymous.4open.science/r/scSHARP-DA63.
138
+
139
+ ![01963f06-dde0-7c9b-97ca-c5db72c65905_7_301_385_1198_1396_0.jpg](images/01963f06-dde0-7c9b-97ca-c5db72c65905_7_301_385_1198_1396_0.jpg)
140
+
141
+ Figure 4: Heatmap of DeepLIFT attribution scores after absolute value and scaling by cell type for top five most important features by cell type and violin plot of log normalized expression for each gene. a. Attribution heatmap for testis data set. b. Expression plots for testis data set. c. Attribution heatmap for PBMC data set. d. Expression plots for PBMC data.
142
+
papers/LOG/LOG 2022/LOG 2022 Conference/R8v95EwI7NL/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,104 @@
1
+ § CONSENSUS LABEL PROPAGATION WITH GRAPH CONVOLUTIONAL NETWORKS FOR SINGLE-CELL RNA SEQUENCING CELL TYPE ANNOTATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Single-cell RNA sequencing (scRNA-seq) data, annotated by cell type, is useful in a variety of downstream biological applications, such as profiling gene expression at the single-cell level. However, manually assigning these annotations with known marker genes is both time-consuming and subjective. We present a Graph Convolutional Network (GCN) based approach to automate the annotation process. Our process builds upon existing labelling approaches, using state-of-the-art tools to find highly-confident cells through consensus and spreading these confident labels with a semi-supervised GCN. Using simulated data and two scRNA-seq datasets from different tissues, we show that our method improves accuracy over a simple consensus algorithm and the average of the underlying tools. We also demonstrate that our GCN method allows for feature interpretation, pulling out important genes for cell type classification. We present our completed pipeline, written in Pytorch, as an end-to-end tool for automating and interpreting the classification of scRNA-seq data.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Single-cell RNA sequencing (scRNA-seq) measures the RNA from each gene present in an individual cell, serving as a proxy for gene expression. High-quality labels of cell type based on the transcriptional profile produced by scRNA-seq have proven valuable for characterizing gene expression of cells, and for discovering cell types and genetic drivers of disease. Traditionally, these labels are produced by unsupervised clustering followed by labelling clusters with known marker genes. However, unsupervised clustering is limited by issues such as the size of scRNA-seq datasets as well as subjectivity in reclustering and biological interpretation of clusters [1].
16
+
17
+ The limitations of traditional cell type annotation methods have necessitated the development of automated methods for cell labelling. Three main categories of tools have emerged: marker gene based, correlation based, and supervised classification based [2]. Marker based approaches employ previously known marker genes for labelling, while correlation and supervised learning based approaches require previous manually labelled scRNA-seq data sets with the cell types of interest. Within these broad categories, the performance of individual tools varies widely across data sets [3]. As a result, using the consensus of multiple classification tools could yield higher accuracy. However, to the best knowledge of the authors, there currently exist no tools for researchers to easily apply multiple classification algorithms to their scRNA-seq data.
18
+
19
+ We address these issues in two ways. First, we provide a pipeline for annotation of scRNA-seq data with multiple state-of-the-art annotation algorithms. Second, we implement and test a semi-supervised Graph Convolutional Network (GCN) as a mechanism to propagate labels from confidently labelled cells to unconfidently labelled cells. We show our method improves overall classification accuracy (and, more specifically, classification accuracy on unconfidently labelled cells) compared to taking the consensus of the labels from the underlying tools. We also demonstrate the use of DeepLIFT[4] as an effective interpretation tool for our GCN model, allowing researchers insight into classification decisions and important cell type gene markers.
20
+
21
22
+
23
+ Figure 1: The model takes in scRNA-seq counts matrix, cell type marker genes, and/or a pre labelled reference data set. This data is passed to any number of cell type annotation tools. Each cell is labelled confidently if a majority of tools agree on a cell type. The scRNA-seq data is then embedded as a k-NN graph and passed through a GCN to propagate confident labels to all other cells.
24
+
25
+ § 2 OUR MODEL
26
+
27
+ Picking Confident Labels via Consensus. Our method first involves picking confident labels for a subset of cells in a given data set. Our pipeline currently includes five different state-of-the-art annotation methods: SCINA [5], ScType [6], ScSorter [7], SingleR [8], and ScPred [9]. Our pipeline also optionally allows researchers to upload their own predictions and utilize other tools. We designate a cell as being confidently labelled (and keep that cell's label) if a majority of tools agree on that label.
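+
+ A minimal sketch of this voting step is shown below; the function name and the handling of missing predictions are our assumptions rather than the pipeline's exact behaviour.
+
+ ```python
+ import numpy as np
+ import pandas as pd
+
+ def confident_labels(pred_df: pd.DataFrame) -> pd.Series:
+     """pred_df: one row per cell, one column per annotation tool.
+     Returns the majority label per cell, or NaN when no majority exists."""
+     n_tools = pred_df.shape[1]
+
+     def vote(row):
+         counts = row.value_counts()          # missing predictions are ignored
+         if counts.empty:
+             return np.nan
+         return counts.index[0] if counts.iloc[0] > n_tools / 2 else np.nan
+
+     return pred_df.apply(vote, axis=1)
+ ```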
28
+
29
+ Semi-supervised GCN. We construct a GCN with $l$ EdgeConv [10] layers with SiLU activation function and summation aggregation. Each layer propagates embedding vectors between each node and its $k$ nearest neighbors (including itself). A final linear layer projects node embeddings into label space (whose dimension is the number of cell types in our dataset). For architecture details see Appendix A. We train our GCN for 150 epochs with the Adam optimizer [11] at a learning rate of 0.0001. Our training loss is Cross-Entropy loss on the set of confidently labelled cells. Batches are randomly selected and our graph is reconstructed for each batch and epoch.
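+
+ The following sketch outlines this training loop under the assumption that random batches of cells are drawn and the loss is evaluated only on the confidently labelled cells within each batch; `model` is an EdgeConv GCN whose forward pass rebuilds the k-NN graph for the batch.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def train(model, X, labels, confident_mask, epochs=150, batch_size=20, lr=1e-4):
+     """X: cells x 500 PCA features; labels: integer cell types, only
+     meaningful where confident_mask is True."""
+     opt = torch.optim.Adam(model.parameters(), lr=lr)
+     n = X.shape[0]
+     for _ in range(epochs):
+         perm = torch.randperm(n)
+         for start in range(0, n, batch_size):
+             idx = perm[start:start + batch_size]
+             mask = confident_mask[idx]
+             if not mask.any():
+                 continue                      # nothing to supervise in this batch
+             opt.zero_grad()
+             logits = model(X[idx])            # k-NN graph is rebuilt for every batch
+             loss = F.cross_entropy(logits[mask], labels[idx][mask])
+             loss.backward()
+             opt.step()
+ ```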
30
+
31
+ Interpretation with DeepLIFT. We employ DeepLIFT [4] with the Rescale rule as implemented by Captum [12]. We use the same hyperparameters (batch size $b$, neighbors $k$, number of message passing steps $l$, and final embedding layer size $e$) as used during training. DeepLIFT uses the gradient of a neural network's outputs with respect to inputs to determine how much a given classification depended on a given input variable (in our case, how much classification as a given cell type depends on each gene). Calculating these attribution scores is only possible for a differentiable model (like our GCN) - making it impossible to use on any of the underlying tools in the pipeline.
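+
+ A minimal Captum sketch is shown below; `model` and `X` refer to the trained GCN and its input feature matrix, `target_class` is a chosen cell type index, and the reduction of attributions to per-feature importance scores is our simplification of the post-processing.
+
+ ```python
+ from captum.attr import DeepLift
+
+ model.eval()
+ dl = DeepLift(model)
+
+ # Attribution of every input feature towards the prediction of one cell type.
+ target_class = 0                                      # index of the cell type of interest
+ attributions = dl.attribute(X, target=target_class)   # same shape as X
+
+ # A simple per-feature importance score: mean absolute attribution over cells.
+ importance = attributions.abs().mean(dim=0)
+ ```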
32
+
33
+ § 3 DATA SETS
34
+
35
+ Preprocessing Data. The initial input to our pipeline is a scRNA-seq count matrix $X$ where ${X}_{ng}$ corresponds to the observed counts for gene $g$ in cell $n$. Cells expressing $< 200$ genes and genes expressed in $< 3$ cells are removed, and $X$ is row-normalized according to ${x}_{i} = \log \left( 1 + \left( {x}_{i} \times 10000 \right) / \operatorname{sum}\left( {x}_{i}\right) \right)$. Both of these steps are common scRNA-seq preprocessing steps [13][14]. Finally, we use Principal Components Analysis (PCA) to project $X$ down to 500 features per cell.
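+
+ As a sketch of these steps with NumPy and scikit-learn (the order of the cell and gene filters and the use of dense arrays are simplifications):
+
+ ```python
+ import numpy as np
+ from sklearn.decomposition import PCA
+
+ def preprocess(counts: np.ndarray, n_components: int = 500) -> np.ndarray:
+     """counts: cells x genes raw count matrix."""
+     # Remove cells expressing < 200 genes and genes expressed in < 3 cells.
+     counts = counts[(counts > 0).sum(axis=1) >= 200]
+     counts = counts[:, (counts > 0).sum(axis=0) >= 3]
+     # Row normalisation: x_i = log(1 + (x_i * 10000) / sum(x_i)).
+     totals = counts.sum(axis=1, keepdims=True)
+     norm = np.log1p(counts * 10000 / totals)
+     # Project down to 500 features per cell with PCA.
+     return PCA(n_components=n_components).fit_transform(norm)
+ ```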
36
+
37
+ Simulated Data. We generated our simulated data sets with Splatter [15] and parameters estimated from 4000 Pan T Cells from a healthy donor [16]. Each simulated data set consists of 1000 reference cells and 1000 query cells separated by a batch.facScale of 0.5. The reference cells allow for the use of correlation-based annotation tools. Each set of cells contains four cell types in equal proportions, with the transcriptomic profile of each type corresponding across the reference and query sets. Simulated data sets vary by the de.facScale parameter, which determines how different the gene expression profiles of the cell type groups are. Five markers were selected randomly from the top ten differentially expressed genes of each cell type. The 0.7 de.facScale and 0.8 de.facScale simulated data sets had 156 and 85 unconfidently labelled cells respectively.
38
+
39
+ Table 1: Accuracy scores for all datasets for both all cells and unconfidently labelled cells only. GCN accuracies are the mean of accuracies from five randomly initialized trials.
40
+
41
+ \begin{tabular}{|l|cc|cc|cc|cc|}
+ \hline
+ \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Simulation 0.7} & \multicolumn{2}{c|}{Simulation 0.8} & \multicolumn{2}{c|}{Testis} & \multicolumn{2}{c|}{PBMC} \\
+ \cline{2-9}
+ & All & Unconf. & All & Unconf. & All & Unconf. & All & Unconf. \\
+ \hline
+ Ours (GCN) & 90.0 & 66.0 & 96.1 & 83.3 & 86.2 & 80.9 & 93.1 & 70.6 \\
+ \hline
+ Max Consensus & 86.4 & 42.9 & 91.2 & 25.9 & 80.8 & 0.0 & 91.2 & 6.8 \\
+ \hline
+ Tool Avg. & 69.3 & 33.1 & 76.7 & 34.6 & 72.1 & 21.7 & 75.0 & 36.9 \\
+ \hline
+ ScType & 64.8 & 13.5 & 79.9 & 29.4 & 84.8 & 63.0 & 85.5 & 81.8 \\
+ \hline
+ ScSorter & 85.8 & 51.3 & 88.7 & 38.8 & 77.5 & 2.2 & 71.5 & 65.9 \\
+ \hline
+ SCINA & 53.7 & 7.7 & 58.7 & 2.4 & 53.9 & 0.0 & 73.3 & 6.2 \\
+ \hline
+ SingleR & 83.1 & 80.1 & 85.2 & 77.6 & NA & NA & 70.3 & 13.4 \\
+ \hline
+ ScPred & 59.2 & 12.8 & 71.0 & 24.7 & NA & NA & 74.4 & 17.1 \\
+ \hline
+ \end{tabular}
73
+
74
+ Real Data. An essential aspect of creating and testing a labelling tool is ground truth labels. However, it is not usually feasible to acquire ground truth labels for scRNA-seq data. An alternative gold standard is Fluorescence-activated Cell Sorting (FACS), a technique of pre-sorting cells by markers prior to conduction of scRNA-seq [17]. We selected two such data sets to test our method, giving us gold-standard cell type labels.
75
+
76
+ First, we use a scRNA-seq data set generated from mouse testis cells[18]. This data contains three cell types: 292 Spermatogonia, 244 Spermatocytes, and 156 Spermatids after filtering. Cell type markers were selected from relevant literature[19] and no reference data set was used for this data. There were 46 unconfidently labelled cells after prediction tool voting on the data set.
77
+
78
+ Second, we use an scRNA-seq data set generated from human peripheral blood mononuclear cells (PBMCs) [20]. This data contains ten cell types; however, we removed cell types not purely sorted by FACS, combined the CD4+ T cell types, and combined the CD8+ T cell types. This resulted in five cell types: 9,106 B Cells, 2,341 Monocytes, 7,572 Natural killer (NK) Cells, 38,006 CD4+ T Cells, and 19,856 CD8+ T Cells. We used the same markers as ScSorter [7] and used the 10X PBMC $3\mathrm{k}$ data set annotated as shown in [21] as a reference. There were 2,310 unconfidently labelled cells after prediction tool voting on the data set.
79
+
80
+ § 4 RESULTS
81
+
82
+ § 4.1 ACCURACY ON TEST SETS
83
+
84
+ Experiment Settings. For each data set, 20 percent of confidently labelled cells were masked and held out as a validation set. We performed a hyperparameter search (see Appendix A for details) over batch size $b$, neighbors $k$, layers $l$, and embedding layer size $e$, selecting the GCN architecture with the highest validation accuracy. Five random initializations of the optimal model were then trained for 150 epochs as described above and the mean accuracy was recorded. We use max consensus and other tool accuracies as baselines for our model. Max consensus simply chooses the cell type with the most votes. In the event of a tie, this method returns "unknown". We report total accuracy and unconfident cell accuracy for each data set. The unconfident cell accuracy refers to cells that had no majority vote for a cell type by the underlying tools.
85
+
86
+ Simulated Data Sets. For both simulated datasets, the optimal model has batch size 20, 2 nearest neighbors, and 2 EdgeConv layers. For the simulation with 0.7 de.facScale, a 25-dimensional embedding space was optimal, whereas for the simulation with 0.8 de.facScale the optimal value was 40. Table 1 shows our model outperforms all other methods for total accuracy and slightly underperforms SingleR for unconfident cell accuracy on the 0.7 de.facScale data.
87
+
88
+ Testis Data Set. Only marker based prediction tools were used for this data set as no pre-labelled reference was easily available. The optimal model for this data set used batch size 20, 2 nearest neighbors, 2 EdgeConv layers, and an embedding layer size of 25. Table 1 shows accuracy results, demonstrating our GCN model outperforms all other methods for both total accuracy and accuracy on unconfident cells.
89
+
90
91
+
92
+ Figure 2: Heatmap of DeepLIFT attribution scores after absolute value and scaling by cell type and heatmap of average log normalized gene expression scaled by gene. a. Top five most important genes for testis data set. b. Top three most important genes for PBMC data set. See Appendix B for extended versions of these plots.
93
+
94
+ PBMC Data Set. The optimal model for this data set used batch size 50, 2 nearest neighbors, 2 EdgeConv layers, and an embedding layer size of 25. Table 1 shows accuracy results. For accuracy on unconfident cells, the GCN model places second behind ScType. Our model still outperforms all other tools and the max consensus. We view max consensus and the underlying tool average as the true baselines for this task, since it is unknown in practice which tool will have the highest accuracy.
95
+
96
+ § 4.2 FEATURE INTERPRETATION
97
+
98
+ Figure 2A shows the top five most important genes by cell type for the testis data set, as identified by running DeepLIFT on our trained model. Interestingly, all of these top genes seem to have uniquely high attribution in their important cell type. It also shows the average scaled gene expression of the same genes in each cell type group. The highly attributed genes for a cell type also have relatively high gene expression in that cell type. We also observe high expression of Spermatocyte genes in Spermatid cells.
99
+
100
+ Figure 2B shows the top three most important genes by cell type for the PBMC data, and the scaled gene expression of the same set of important genes. For B Cells, Monocytes, and NK Cells we see a clear connection between the genes picked out as important by DeepLIFT and the genes expressed by those cell types. However, for CD4 and CD8 T Cells, the expression is not clearly higher for all genes. Importantly, we do observe CD8B as the most important gene for CD8 T Cell classification, a key marker for the cell type. We also observe CD3E (another important marker for all T Cells) as an important gene for both sub types of T Cells. One potential reason for less informative DeepLIFT scores for CD4 and CD8 T Cells is that the GCN often misclassifies CD8 T Cells as CD4 T Cells.
101
+
102
+ § 5 DISCUSSION
103
+
104
+ In this work we propose a novel framework for scRNA-seq cell type annotation. Building upon existing annotation tools, we implement an EdgeConv based GCN model to propagate consensus based confident labels to the remaining unlabelled cells. We show an improvement in accuracy over a baseline max consensus algorithm and the average tool accuracy. We also demonstrate the ability to identify important genes for classification via model interpretation with DeepLIFT. The model interpretation is especially valuable for researchers as it has the potential to uncover novel gene markers and provide insight into the model's decisions.
papers/LOG/LOG 2022/LOG 2022 Conference/Ri2dzVt_a1h/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,354 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Revisiting Embeddings for Graph Neural Networks
+
+ Anonymous Author(s)
2
+
3
+ Anonymous Affiliation
4
+
5
+ Anonymous Email
6
+
7
+ ## Abstract
8
+
9
+ Current graph representation learning techniques use Graph Neural Networks (GNNs) to extract features from dataset embeddings. In this work, we examine the quality of these embeddings and assess how changing them can affect the accuracy of GNNs. We explore different embedding extraction techniques for both images and texts. We find that the choice of embedding biases the performance of different GNN architectures, and thus that the embedding should influence the selection of a GNN regardless of the underlying dataset. In addition, only some GNN models improve accuracy over models trained from scratch or fine-tuned on the underlying data without utilizing the graph connections. As an alternative, we propose Graph-connected Network (GraNet) layers which use GNN message passing within large models to allow neighborhood aggregation. This gives a chance for the model to inherit weights from large pre-trained models if possible and we demonstrate that this approach improves the accuracy compared to the previous methods: on Flickr_v2, GraNet beats GAT2 and GraphSAGE by 7.7% and 1.7% respectively.
10
+
11
+ ## 1 Introduction
12
+
13
+ Graph Neural Networks (GNNs) have been successful on a wide array of applications ranging from computational biology [17] to social networks [5]. The input for GNNs, although sourced from many different domains, is often data that has been preprocessed to a computationally digestible format. These digestible formats are commonly known as embeddings.
14
+
15
+ Currently, improvements made to GNN architecture are tested against these embeddings and the state of the art is determined based on those results. However, this does not necessarily correlate with a GNN's accuracy on the underlying dataset and ignores the influence that the source and style of these embeddings have on the performance of particular GNN architectures. To test existing GNN architectures, and demonstrate the importance of the embeddings used in training them, we provide three new datasets each with a set of embeddings generated from different methods.
16
+
17
+ We further analyse the benefit of using GNNs on fixed embeddings. We compare GNNs to standard models that have been trained or fine-tuned on the target raw data; these models treat each data point as unconnected, ignoring the underlying graph information in data. This simple unconnected baseline surprisingly outperforms some strong GNN models. This then prompts the question: Will mixing the two approaches unlock the classification power of existing large models by allowing them to utilize the graph structure in our data?
18
+
19
+ Based on the question above, we propose a new method of mixing GNNs with large models, allowing them to train simultaneously. To achieve this we introduce a variation of the standard message passing framework. With this new framework a subset of the large model's layers can each be graph-connected - exploiting useful graph structure information during the forward pass. We demonstrate that this new approach improves the accuracy of using only a pre-trained or fine-tuned large model and outperforms a stand-alone GNN on a fixed embedding. We call this new approach GraNet (Graph-connected Network), and in summary, this paper has the following contributions:
20
+
21
24
+
25
+ - We provide new datasets and a rich set of accompanying embeddings to better test the performance of GNNs.
26
+
27
28
+
29
+ - We empirically demonstrate that only some existing GNNs improve on unconnected model accuracy and those that do vary depending on the embeddings used. We urge unconnected models as a baseline for assessing GNN performance.
30
+
31
+ - We provide a new method, named GraNet, that combines GNNs and large models (fine-tuned or trained from scratch) to efficiently exploit the graph structure in raw data.
32
+
33
+ - We empirically show that GraNet outperforms both unconnected models (the strong baseline) and GNNs on a range of datasets and accompanying embeddings.
34
+
35
+ ## 2 Related Work
36
+
37
+ Table 1: An overview of popular datasets detailing statistics and how they are built
38
+
39
+ <table><tr><td>Name</td><td>Classification Task</td><td>Classes</td><td>Feature Length</td><td>Embedding Function</td></tr><tr><td>Amazon [12]</td><td>Text</td><td>10,8</td><td>767,745</td><td>Bag of Words</td></tr><tr><td>AmazonProducts [16]</td><td>Text</td><td>107</td><td>200</td><td>4-gram with SVD</td></tr><tr><td>Flickr [16]</td><td>Image</td><td>7</td><td>500</td><td>Bag of Words</td></tr><tr><td>Reddit [5, 16]</td><td>Text</td><td>41</td><td>602</td><td>Avg. GloVe vectors</td></tr><tr><td>Cora [8]</td><td>Text</td><td>7</td><td>1,433</td><td>Bag of Words</td></tr><tr><td>CiteSeer [8]</td><td>Text</td><td>6</td><td>3,703</td><td>Bag of Words</td></tr><tr><td>PubMed [8]</td><td>Text</td><td>3</td><td>500</td><td>Bag of Words</td></tr></table>
40
+
41
+ Graph Neural Networks Let $\mathcal{G}\left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ denote a graph where $\mathcal{V} = \left\{ {{v}_{1},{v}_{2},\ldots ,{v}_{n}}\right\}$ is the set of nodes and $N = \left| \mathcal{V}\right|$ is the number of nodes in the graph, $\mathcal{E}$ is the set of edges between nodes in $\mathcal{V}$ such that ${e}_{i, j} \in \mathcal{E}$ denotes a directed connection from node ${v}_{i}$ to node ${v}_{j}$ . We say each node ${v}_{i}$ has a neighbourhood ${\mathcal{N}}_{i}$ such that ${v}_{j} \in {\mathcal{N}}_{i} \Leftrightarrow {e}_{j, i} \in \mathcal{E}$ and we say that ${v}_{j}$ is a neighbour node to ${v}_{i}$ .
42
+
43
+ $\mathbf{X}$ is the raw data matrix; there normally exists a transformation function to project the raw data to a more compact feature ${\mathbf{X}}_{e}$ space such that ${\mathbf{X}}_{e} = {f}_{e}\left( \mathbf{X}\right)$ .
44
+
45
+ For instance, we can transform a set of images ($\mathbf{X} \in {\mathbb{R}}^{\left| \mathcal{V}\right| \times C \times H \times W}$, where $C$, $H$ and $W$ are the number of channels, height and width of an image) to 1D features (${\mathbf{X}}_{e} \in {\mathbb{R}}^{\left| \mathcal{V}\right| \times F}$, where $F$ denotes the feature dimension). In this case, we have an embedding function ${f}_{e} : {\mathbb{R}}^{\left| \mathcal{V}\right| \times C \times H \times W} \rightarrow {\mathbb{R}}^{\left| \mathcal{V}\right| \times F}$ for the dimensionality reduction.
46
+
47
+ This paper puts a special focus on the design of ${f}_{e}$ , and reveals later how the design choice of ${f}_{e}$ can influence the performance of GNN models without making any changes to the underlying data $\mathcal{G}\left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ . An overview of popular datasets and the processes they used to create their embeddings is presented in Table 1. We see that the popular graph datasets [5, 8, 12, 16] focus heavily on Bag of Words (BoW) and word vectors. This implies that current GNNs are being tested on and designed for a very narrow class of embedding styles. A more detailed discussion is available in Appendix C.
48
+
49
+ Current GNNs can be thought of as Message Passing layers, the $l$ -th layer can be represented as
50
+
51
+ $$
52
+ {\mathbf{h}}_{i}^{l} = {\gamma }_{{\mathbf{\theta }}_{\gamma }}\left( {{\mathbf{h}}_{i}^{l - 1},{\psi }_{j \in \mathcal{N}\left( i\right) }\left( {{\phi }_{{\mathbf{\theta }}_{\phi }}\left( {{\mathbf{h}}_{i}^{l - 1},{\mathbf{h}}_{j}^{l - 1},{\mathbf{e}}_{j, i}}\right) }\right) }\right) \tag{1}
53
+ $$
54
+
55
+ where $\psi$ is a differentiable aggregation function and ${\gamma }_{{\theta }_{\gamma }}$ and ${\phi }_{{\theta }_{\phi }}$ represent differentiable functions with trainable parameters ${\theta }_{\gamma }$ and ${\theta }_{\phi }$ respectively. ${\mathbf{h}}_{i}^{l}$ is the node representation of ${v}_{i}$ at layer $l$, with ${\mathbf{h}}_{i}^{0} = {f}_{e}\left( {\mathbf{x}}_{i}\right)$. Kipf et al. [8] introduce GCN (Graph Convolutional Networks) - a method of applying convolutional layers from CNNs to graph neural networks. It focuses on spectral filters applied to the whole graph structure rather than at the node level. Hamilton et al. [5] introduce the GraphSAGE model which builds on prior work from GCN focusing on individual node representations. This gives rise to the iterative message passing process on the node level. Though simpler than newer models, we find that this approach, when given the right embedding style, can outperform some recently published GNNs. Velickovic et al. [14] introduce the idea of graph attention which alters how a node aggregates its neighbours' representations. This adds an additional attention mechanism to discern which aspects of the node representations in a node's neighbourhood are important at a given layer. This approach is improved upon in Brody et al. [1] to provide a more attentive network. Further discussion of GAT is available in Appendix E.
56
+
57
+ ## 3 Method
58
+
59
+ Our proposed method of converting pre-trained models into graph-connected models can easily be broken down into individual layers. Taking ${f}_{\mathbf{\theta }}^{l}$ as the $l$-th layer in a pre-trained model ${f}_{\mathbf{\theta }}$, where we are given pre-trained weights $\mathbf{\theta }$, we can describe this new layer by reformulating Equation (1) as such
60
+
61
+ $$
62
+ {\mathbf{h}}_{i}^{l} = {\gamma }_{{\mathbf{\theta }}_{\gamma }}\left( {{\mathbf{h}}_{i}^{l - 1},{\psi }_{j \in \mathcal{N}\left( i\right) }\left( {{\phi }_{{\mathbf{\theta }}_{\phi }}\left( {{f}_{{\mathbf{\theta }}_{l - 1}}^{l - 1}\left( {\mathbf{h}}_{i}^{l - 1}\right) ,{f}_{{\mathbf{\theta }}_{l - 1}}^{l - 1}\left( {\mathbf{h}}_{j}^{l - 1}\right) ,{\mathbf{e}}_{j, i}}\right) }\right) }\right) \tag{2}
63
+ $$
64
+
65
+ where ${\mathbf{h}}_{i}^{0} = {\mathbf{x}}_{i}$ (without an embedding function); more details and definitions can be found in Appendix A.
66
+
67
+ Figure 1: Graphical representation of how our proposed GraNet layer operates for image networks
68
+
69
70
+
71
+ Figure 1 is a representation of Equation (2) specifically in the case where the pre-trained model is a CNN and $\left( {f}_{{\theta }_{l - 1}}^{l - 1}\right)$ is a convolutional layer. The light blue regions perform the standard convolution feature extraction, these extracted feature maps are then adjusted, in the light orange region, by a Message Passing layer with graph attentions. These two representations are then combined.
72
+
73
+ The light blue regions and resulting channel stacks, ${A}_{i} \in {\mathbb{R}}^{1 \times {C}_{l} \times {H}_{l} \times {W}_{l}}$ through ${A}_{n} \in {\mathbb{R}}^{1 \times {C}_{l} \times {H}_{l} \times {W}_{l}}$, represent the forward pass through a single CNN layer ${f}_{{\theta }_{l - 1}}^{l - 1}$. ${A}_{i}$ represents the forward pass of the current node ${h}_{i}$ and ${A}_{j..n}$ represent the forward passes of the neighbours of ${h}_{i}$.
74
+
75
+ The light orange region and resulting channel stacks, $\mathcal{B}$ , represent the graph-based Message Passing stage where the new representations are altered $\left( {\phi }_{{\theta }_{\phi }}\right)$ , aggregated $\left( \psi \right)$ and finally combined with the current node representation $\left( {\gamma }_{{\mathbf{\theta }}_{\gamma }}\right)$ , following the description in Equation (2).
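+
+ A simplified sketch of such a graph-connected layer is shown below; the class name, the use of GATv2Conv for the attention-based message passing, flattening the feature maps, and summation as the combination step are our illustrative choices rather than the exact implementation.
+
+ ```python
+ from torch import nn
+ from torch_geometric.nn import GATv2Conv
+
+ class GraNetLayer(nn.Module):
+     """One graph-connected layer (cf. Equation (2)): apply the pre-trained
+     layer to every node, adjust the result with attention-based message
+     passing over the node's neighbours, and combine the two representations."""
+     def __init__(self, pretrained_layer: nn.Module, feat_dim: int):
+         super().__init__()
+         self.f = pretrained_layer                 # inherits the pre-trained weights
+         self.mp = GATv2Conv(feat_dim, feat_dim)   # message passing with graph attention
+
+     def forward(self, x, edge_index):
+         h = self.f(x)                             # standard forward pass (light blue region)
+         shape = h.shape
+         h_flat = h.flatten(start_dim=1)           # (num_nodes, C*H*W) for a conv layer
+         msg = self.mp(h_flat, edge_index)         # neighbourhood aggregation (light orange region)
+         return (h_flat + msg).view(shape)         # combine and restore the feature maps
+ ```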
76
+
77
+ ## 4 Evaluation
78
+
79
+ Experiment Setup For all test results we run 3 train-test runs, each with a different random seed, and take the arithmetic mean. Training runs for 300 epochs unless otherwise stated. A discussion of how the datasets were created is available in Appendix D and a breakdown of the hyperparameters used is available in Appendix G.
80
+
81
+ Re-evaluating Embeddings Table 2 demonstrates that the particular embedding function used determines which GNN model performs best on the dataset. For instance, GAT2 has the best performance if the data is embedded as BoW (Bag of Words), but performs poorly on other embeddings. Graph-SAGE, in contrast, performs poorly on BoW but shows a good performance otherwise. This also signifies the importance of good embeddings as in this case BoW is better than roBERTa. We make the following key observations:
82
+
83
+ - The function ${f}_{e}$ used to extract embeddings influences the performance of different GNNs, so the embeddings should influence the choice of GNNs regardless of the underlying data.
84
+
85
+ - GNN models do not always outperform simple unconnected models; graph structure alone is not enough to compete against good classifiers.
86
+
87
+ - The choice of an embedding function contributes more to the final performance compared to the choice of a GNN model.
88
+
89
+ We also see the same effect when using Flickr_v2 and AmazonInstruments in Appendix F, Tables 7 and 8 respectively.
90
+
91
+ Graph-connected Network (GraNet) Layers Table 3 demonstrates that extending our large models with graph connections provides a significant improvement compared to both GNNs and the unconnected baseline models.
92
+
93
+ In Flickr_v2, both GraNet and Unconnected Model inherit either ResNet18 or ResNet50 weights pre-trained on ImageNet [3] and then fine-tuned on the target dataset. We observe a significant
94
+
95
+ Table 2: Test accuracy on AmazonElectronics with different embeddings compared against a standard unconnected MLP model. The embedding styles are explained in Appendix D, Table 5.
96
+
97
+ <table><tr><td rowspan="2">Model</td><td colspan="4">Embedding styles</td></tr><tr><td>Bag of Words</td><td>Byte Pair</td><td>roBERTa Encoded</td><td>roBERTa</td></tr><tr><td>Unconnected MLP</td><td>71.6% (+0.0)</td><td>21.6% (+0.0)</td><td>55.8% (+0.0)</td><td>51.9% (+0.0)</td></tr><tr><td>GCN</td><td>69.1% (-2.5)</td><td>21.7% (+0.1)</td><td>22.7% (-33.1)</td><td>22.3% (-29.6)</td></tr><tr><td>GAT</td><td>81.1% (+10.5)</td><td>22.2% (+0.6)</td><td>46.1% (-9.7)</td><td>40.3% (-11.6)</td></tr><tr><td>GAT2</td><td>81.8% (+10.2)</td><td>22.2% (+0.6)</td><td>41.8% (-14.0)</td><td>35.7% (-16.2)</td></tr><tr><td>GraphSAGE (Random)</td><td>71.3% (-0.3)</td><td>26.8%(+5.2)</td><td>57.0% (+1.2)</td><td>53.7% (+1.8)</td></tr><tr><td>GraphSAGE (Neighbour)</td><td>76.4% (+4.8)</td><td>40.4% (+20.8)</td><td>67.8%(+12.0)</td><td>66.4% (+12.5)</td></tr></table>
98
+
99
+ increase $\left( {+{1.5}\% }\right)$ from the GraNet style of training compared to both Unconnected Model and other GNN models. Intuitively, GraNet layers reframe graph representation learning from training a GNN on a fixed pre-extracted embedding to training both the GNN and the embedding function $\left( {f}_{e}\right)$ together on the underlying data. We also make the following key observations:
100
+
101
+ - Graph-connecting a pre-trained network improves performance by fine-tuning feature extraction based on the graph structure.
102
+
103
+ - GraNet models outperform their counterparts by facilitating fine-tuning on the graph dataset.
104
+
105
+ The architecture for each GraNet is a mixture of the best performing GNN for that embedding and the embedding extraction model $\left( {f}_{e}\right)$ (a Multi-Layer Perceptron in the case of Bag of Words). The details of the GraNet models are described in Appendix B.
106
+
107
+ Table 3: Comparison of GraNet models against the best performing GNNs for a specific embedding. Unconnected Model refers to ResNet18 or ResNet50, in the case of Flickr_v2, and an MLP otherwise. The setup of each GraNet model is detailed in Appendix B and the specifics of the 3 datasets can be found in Appendix D.
108
+
109
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Flickr_v2</td><td>Electronics</td><td>Instruments</td></tr><tr><td>ResNet18</td><td>ResNet50</td><td>Bag of Words</td><td>Bag of Words</td></tr><tr><td>Unconnected Model</td><td>45.2% (+0.0)</td><td>46.9% (+0.0)</td><td>71.6% (+0.0)</td><td>66.4% (+0.0)</td></tr><tr><td>GAT2</td><td>42.1% (-3.1)</td><td>41.0% (-5.9)</td><td>81.8% (+10.2)</td><td>79.4% (+13.0)</td></tr><tr><td>GraphSAGE (Random)</td><td>45.4% (+0.2)</td><td>47.0% (+0.1)</td><td>71.3% (-0.3)</td><td>67.5% (+1.1)</td></tr><tr><td>GraphSAGE (Neighbour)</td><td>45.8% (+0.4)</td><td>44.5% (-2.4)</td><td>76.4% (+4.8)</td><td>72.6% (+6.2)</td></tr><tr><td>GraNet ${}^{1}$</td><td>46.7% (+1.5)</td><td>48.7% (+1.8)</td><td>81.9% (+10.3)</td><td>79.7% (+13.3)</td></tr></table>
110
+
111
+ ## 5 Conclusion
112
+
113
+ In this paper, we reveal that GNN model designs are overfitting to certain embeddings styles (eg. BoW and word vectors). To demonstrate this we introduced three new datasets each with a range of embedding styles to be used as a better all round benchmark of GNN performance. We demonstrated that embedding style influences the performance of GNNs regardless of the underlying raw data of the dataset. Equally, the quality of the embedding, measured by how well an unconnected baseline model performs, is a greater indicator of GNN accuracy than the GNN model chosen. We therefore stress the importance of creating high quality embeddings and choosing the best GNN model based on the style of embedding created rather than using (or trying to improve) the same GNN model for every task.
114
+
115
+ We then introduced a new approach named GraNet. This approach allows for any large pre-trained model to be fine-tuned to a graph-connected dataset by altering the standard message passing function. In this way we exploit graph structure information to enhance the pre-trained model performance on graph-connected datasets. We have demonstrated that GraNet outperforms both unconnected pre-trained models and GNNs on a range of datasets.
116
+
117
+ There is an increasing trend towards large pre-trained models and graph-connected datasets. Our work demonstrates potential pitfalls in the way GNN architectures are currently evaluated and proposes a new technique to fully exploit the benefits of pre-trained models within a GNN.
118
+
119
+ ## References
120
+
121
+ [1] S. Brody, U. Alon, and E. Yahav. How attentive are graph attention networks? CoRR, abs/2105.14491, 2021. 2, 10
122
+
123
+ [2] T.-S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y.-T. Zheng. Nus-wide: A real-world web image database from national university of singapore. In Proc. of ACM Conf. on Image and Video Retrieval (CIVR'09), Santorini, Greece., July 8-10, 2009. 8
124
+
125
+ [3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. Ieee, 2009. 3, 5
126
+
127
+ [4] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98-136, Jan. 2015. 8
128
+
129
+ [5] W. L. Hamilton, Z. Ying, and J. Leskovec. Inductive Representation Learning on Large Graphs. In NIPS, pages 1024-1034, 2017. 1, 2, 8, 13
130
+
131
+ [6] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 5, 8
132
+
133
+ [7] M. J. Huiskes and M. S. Lew. The mir flickr retrieval evaluation. In MIR '08: Proceedings of the 2008 ACM International Conference on Multimedia Information Retrieval, New York, NY, USA, 2008. ACM. 8
134
+
135
+ [8] T. N. Kipf and M. Welling. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR, 2017. 2, 8
136
+
137
+ [9] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. 5, 8
138
+
139
+ [10] J. McAuley and J. Leskovec. Image labeling on a network: using social-network metadata for image classification. In European conference on computer vision, pages 828-841. Springer, 2012. 7, 8
140
+
141
+ [11] J. McAuley, C. Targett, Q. Shi, and A. van den Hengel. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '15, page 43-52, New York, NY, USA, 2015. Association for Computing Machinery. 7
142
+
143
+ [12] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann. Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868, 2018. 2, 7, 8
144
+
145
+ [13] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 8
146
+
147
+ [14] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph Attention Networks. In ICLR, 2018. 2, 9
148
+
149
+ [15] A. Williams, N. Nangia, and S. Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics, 2018. 8
150
+
151
+ [16] H. Zeng, H. Zhou, A. Srivastava, R. Kannan, and V. Prasanna. Graphsaint: Graph sampling based inductive learning method. arXiv preprint arXiv:1907.04931, 2019. 2, 7, 8, 13
152
+
153
+ [17] M. Zitnik and J. Leskovec. Predicting multicellular function through multi-layer tissue networks. Bioinformatics, 33(14):i190-i198, 2017. 1
154
+
155
+ ## A Definitions
156
+
157
+ Pre-trained models are models with a specific neural network architecture that have been trained on a dataset $\mathcal{D}$ for a specific task; this could be ImageNet classification [3] for vision networks [6] or the entire English Wikipedia for language models [9]. These networks therefore have pre-trained weights $\mathbf{\theta }$ that can be loaded into the model for further training or evaluation.
158
+
159
+ We denote these pre-trained models as ${f}_{\mathbf{\theta }}$, parameterised by weights $\mathbf{\theta }$. We say that a pre-trained model has a set of functions $\left\{ {{f}_{{\mathbf{\theta }}_{1}}^{1},{f}_{{\mathbf{\theta }}_{2}}^{2},\ldots ,{f}_{{\mathbf{\theta }}_{M}}^{M}}\right\}$ for an $M$ -layer model, where ${f}_{{\mathbf{\theta }}_{i}}^{i} : {\mathbb{R}}^{F} \rightarrow {\mathbb{R}}^{{F}^{\prime }}$ and ${f}_{{\mathbf{\theta }}_{i + 1}}^{i + 1} : {\mathbb{R}}^{{F}^{\prime }} \rightarrow {\mathbb{R}}^{{F}^{\prime \prime }}$ and $F$ is the feature dimension. If we compose these layers, we have ${f}_{\mathbf{\theta }}\left( x\right) = {f}_{{\mathbf{\theta }}_{M}}^{M}\left( {\ldots {f}_{{\mathbf{\theta }}_{2}}^{2}\left( {{f}_{{\mathbf{\theta }}_{1}}^{1}\left( x\right) }\right) }\right)$.
160
+
161
+ Fine-tuning is therefore adapting $\mathbf{\theta }$ to a new dataset ${\mathcal{D}}^{\prime }$ which is related to $\mathcal{G}\left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ as illustrated in Section 2.
162
+
163
+ In a more standard setup, the target dataset we fine-tune to has the same underlying data-structure as the pre-training dataset, for instance, they might both be images, but the target dataset is a different type of classification. This may involve adding, removing or altering specific layers within the model or simply retraining the model with different labels on $\mathcal{D}$ .
164
+
165
+ If the architecture of a pre-trained model is altered then a new weight matrix ${\mathbf{\theta }}^{\prime }$ must be created from $\mathbf{\theta }$ by adding, removing or reshaping weights.
166
+
167
+ Freezing layers is the process whereby a selection of weights ${\mathbf{\theta }}_{f} \subset \mathbf{\theta }$ do not have gradients and thus do not change during back-propagation.
168
+
169
+ These ideas allow us to alter these pre-trained models to use information about the graph connections whilst utilising their pre-trained weight $\mathbf{\theta }$ .
170
+
171
+ Blending and Fine-tuning models is therefore the process of using existing models, ${f}_{{\mathbf{\theta }}_{f}}$ and ${g}_{{\mathbf{\theta }}_{q}}$ , with defined set of layers, $\left\{ {{f}_{{\mathbf{\theta }}_{f1}}^{1},{f}_{{\mathbf{\theta }}_{f2}}^{2},\ldots ,{f}_{{\mathbf{\theta }}_{fN}}^{N}}\right\}$ and $\left\{ {{g}_{{\mathbf{\theta }}_{g1}}^{1},{g}_{{\mathbf{\theta }}_{g2}}^{2},\ldots ,{g}_{{\mathbf{\theta }}_{gM}}^{M}}\right\}$ , and creating a new model, ${h}_{{\mathbf{\theta }}_{f, g}}$ , such that ${h}_{{\mathbf{\theta }}_{f, g}}^{i} = {f}_{{\mathbf{\theta }}_{f}}^{j} \circ {g}_{{\mathbf{\theta }}_{g}}^{k}$ . We can use pre-trained weights ${\mathbf{\theta }}_{f}$ or ${\mathbf{\theta }}_{g}$ and/or freeze either model, and where one of these models is a GNN we say this is fine-tuning on a graph dataset.
172
+
173
+ ## B GraNet
174
+
175
+ Below are the specifics of the GraNet models we present in this paper
176
+
177
+ ### B.1 Flickr_v2 GraNet
178
+
179
+ Graph-connecting every layer in a large pre-trained model is hardware intensive. Therefore, rather than graph-connect every single layer in a pre-trained network ${f}_{\mathbf{\theta }}$ we can graph-connect only the final layer. We thus split the model into two portions: the first set of unconnected layers and the final GraNet layer.
180
+
181
+ We can therefore look at Equation (2) and see that in this case we partially carry out the forward pass of ${f}_{\theta }$, which we will denote as ${f}_{e}$, and then carry out a forward pass through a GraNet layer. This allows us to ignore all the steps involved in applying ${f}_{e}$ and focus on the final GraNet layer. Denoting this final GraNet layer as $g$ we achieve the following equation
182
+
183
+ $$
184
+ y = g\left( {{f}_{e}\left( \mathbf{X}\right) }\right) \tag{3}
185
+ $$
186
+
187
+ where $y$ is our output for the given task.
188
+
189
+ ${f}_{e}\left( \mathbf{X}\right)$ is the same as described in Section 2 and indeed if we were to freeze ${f}_{\mathbf{\theta }}$ this would be equivalent to training GraNet with a single layer on the embeddings created by embedding function ${f}_{e}$ . So instead we also allow ${f}_{e}$ to train thus fine-tuning the weights $\mathbf{\theta }$ of ${f}_{\mathbf{\theta }}$ . This indirectly allows ${f}_{\mathbf{\theta }}$ to learn the graph structure by providing $g$ with better embeddings.
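+
+ A minimal sketch of this setup (Equation (3)) is given below, using a GraphSAGE layer as the final graph-connected layer in line with Appendix F; the class name and layer sizes are illustrative.
+
+ ```python
+ import torch.nn as nn
+ from torchvision.models import resnet18, ResNet18_Weights
+ from torch_geometric.nn import SAGEConv
+
+ class FlickrGraNet(nn.Module):
+     """y = g(f_e(X)): a trainable ResNet18 embedding function followed by
+     a single graph-connected layer producing the class logits."""
+     def __init__(self, num_classes: int = 7):
+         super().__init__()
+         backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
+         backbone.fc = nn.Identity()           # keep the 512-d pooled features
+         self.f_e = backbone                   # fine-tuned, not frozen
+         self.g = SAGEConv(512, num_classes)   # final GraNet layer
+
+     def forward(self, images, edge_index):
+         feats = self.f_e(images)              # (num_nodes, 512)
+         return self.g(feats, edge_index)      # neighbourhood-aware logits
+ ```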
190
+
191
+ ### B.2 Amazon GraNet
192
+
193
+ In the case of the Amazon dataset we found that bag of words embedding performed the best. As this does not have an associated pre-trained model we design a multi-layer perceptron (MLP) as a baseline. We then convert all the layers within the MLP to GraNet layers.
194
+
195
+ However, a single layer of an MLP is ${f}_{{\mathbf{\theta }}_{l}}^{l}\left( \mathbf{x}\right) = {\mathbf{\theta }}_{l}\mathbf{x}$ . This would therefore mean that Equation (2) becomes
196
+
197
+ $$
198
+ {\alpha }_{ij} = \frac{\exp \left( {{\mathbf{a}}^{T}\operatorname{LeakyReLU}\left( \left\lbrack {{\mathbf{\theta }}_{l}{\mathbf{h}}_{i}\parallel {\mathbf{\theta }}_{l}{\mathbf{h}}_{j}}\right\rbrack \right) }\right) }{\mathop{\sum }\limits_{{k \in {\mathcal{N}}_{i}}}\exp \left( {{\mathbf{a}}^{T}\operatorname{LeakyReLU}\left( \left\lbrack {{\mathbf{\theta }}_{l}{\mathbf{h}}_{i}\parallel {\mathbf{\theta }}_{l}{\mathbf{h}}_{j}}\right\rbrack \right) }\right) }
199
+ $$
200
+
201
+ This is incredibly similar to Equations (4) and (6), with the only difference being when we apply the weight matrix, concatenation and attention mechanism. We find this model behaves the same as GAT2 and therefore, given its small size, we tried a different approach as a comparison.
202
+
203
+ We introduce a new weight matrix $\mathbf{A}$ , to replace $\mathbf{a}$ , as our attention matrix. This allows more complex interactions between the node representations to be exploited by our attention. As demonstrated in Table 7 in Appendix F we do see an increase in accuracy.
204
+
205
+ This approach was too costly to apply to a pre-trained CNN as the size of $\mathbf{A}$ would be far too large.
206
+
207
+ ## C An In-depth Overview of Existing Datasets
208
+
209
+ Given the importance of ${f}_{e}$ to GNN performance discussed above, the prior datasets that exist within the space of GNNs deserve a closer look with regard to these embedding functions.
210
+
211
+ ### C.1 Pytorch Geometric
212
+
213
+ The Pytorch Geometric Python library provides a standard interface on top of Pytorch to allow for the development of graph based machine learning. The library also provides a sample of datasets from previous papers published in this field. A brief overview of popular graph-connected datasets, their tasks and embeddings is available in Table 1.
214
+
215
+ As is clear from the table the current standard embedding for datasets is bag of words. In the cases where bag of words approaches are not used the approach is grounded in classical text representations such as n-grams and word vectors.
216
+
217
+ The tasks in these popular datasets are node classification where the node data is frequently text. We therefore say that on the node level these are text classification tasks. The only instance of a non-text classification task is Flickr [16], though based on the fact that the underlying data is image descriptions this could also be considered a text classification task.
218
+
219
+ This demonstrates how limited the reach of GNNs currently is, as they are being trained on datasets that behave very similarly, where the only difference is the specifics of the available data. We feel that this does not therefore fully test the capabilities of GNNs and puts too much emphasis on bag of words and text classification.
220
+
221
+ ### C.2 Flickr
222
+
223
+ The prior Flickr dataset used in Zeng et al. [16] originated from McAuley et al. [10] which aimed to utilize network connections and image descriptions rather than the images themselves. The specific embedding function that the paper used is Bag of Words.
224
+
225
+ This embedding function is a valid representation of images but it is not easily applicable to other image datasets. Thus GNNs trained on this dataset are confined to images with descriptions that have been transformed using the same top 500 words. Note that this list of top 500 words is not readily available.
226
+
227
+ ### C.3 Amazon
228
+
229
+ Zeng et al. [16] also provides an Amazon dataset (AmazonProducts) covering the entirety of Amazon. Without a known source, we instead use available Amazon databases online to download and generate our own dataset. The embedding function used is to tokenise the reviews by 4-grams and take the singular value decomposition. This is, as with Flickr, not easily applicable outside of the original dataset.
230
+
231
+ An alternative Amazon dataset (Amazon) is also available from Shchur et al. [12] created originally in McAuley et al. [11]. Though the original source of the dataset used a pre-trained Caffe model to embed the product images this dataset did not use these. Instead they created their own embeddings using the bag of words standard with the product reviews as the raw data.
232
+
233
+ ## D Our Datasets
234
+
235
+ In this paper we look at the Flickr [5] and Amazon [5, 12] datasets (specifically the Electronics and Instruments subsections of Amazon).
236
+
237
+ As demonstrated in Section 2, the process of building a graph dataset currently amounts to choosing an embedding function ${f}_{e}$ to apply to the underlying dataset $\mathbf{X}$. We provide three cases of the underlying dataset $\mathbf{X}$ and provide a selection of embeddings from sensible embedding functions. We also provide a raw graph-connected image dataset, by which we mean the underlying image dataset $\mathbf{X}$ is not reduced to a matrix of 1D feature vectors but remains a multi-dimensional tensor of multi-dimensional image tensors.
238
+
239
+ As discussed in Section 2, versions of these datasets [12, 16] utilize classical word embeddings. We propose using large pre-trained models as embedding functions as these can extract richer features $\left( {\mathbf{X}}_{e}\right)$ that may better represent the underlying data ($\mathbf{X}$) of the dataset.
240
+
241
+ We will host our datasets on a website and make both these baselines and datasets accessible to the community (the website link will be added after acceptance).
242
+
243
+ ### D.1 Flickr_v2
244
+
245
+ The prior Flickr [16] dataset contains neither the raw image data nor the Flickr identifiers, only the embeddings created in McAuley et al. [10]; however, SNAP provides an adjacency matrix and Flickr identifiers. We follow how Zeng et al. [16] generated their dataset by downloading all the images still available, but we are unable to match the exact images they sourced or the labels. This does mean we cannot directly compare results, but we find that the benchmark results we have achieved are similar.
246
+
247
+ The underlying dataset $\mathbf{X}$ is now the images themselves and so Convolutional Neural Networks (CNNs) are best placed as our embedding functions. As a sample of existing pre-trained CNNs we use ResNet18, ResNet50 [6] and VGG16 [13]. In all three cases we use the pre-trained models provided by torchvision, using the feature vectors after the final pooling stage before the classification stage.
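+
+ For example, extracting the fixed ResNet18 embeddings might look like the sketch below (image normalisation and batching omitted; this is our approximation of the extraction step rather than the exact script used):
+
+ ```python
+ import torch
+ from torchvision.models import resnet18, ResNet18_Weights
+
+ model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
+ model.fc = torch.nn.Identity()     # keep the vector after the final pooling stage
+ model.eval()
+
+ @torch.no_grad()
+ def embed(images):                 # images: (N, 3, 224, 224), already normalised
+     return model(images)           # (N, 512) fixed embeddings, matching Table 4
+ ```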
248
+
249
+ As the dataset is created from four separate image classification challenges ([2, 4, 7]) the existing labels and tags for these datasets are not well aligned. We therefore only select the NUS-Wide [2] subsection. To create labels for our dataset we use GloVe to generate vector representations of the NUS-Wide tags and find the closest vector representation of 7 hand-picked labels.
250
+
251
+ The hand-picked labels: ["People", "Buildings", "Places", "Plants", "Animals", "Vehicles", "Scenery"]
252
+
253
+ This new dataset we call Flickr_v2.
254
+
255
+ ### D.2 AmazonElectronics and AmazonInstruments
256
+
257
+ We use the review text as our raw input features; this means a number of Amazon items are unsuitable as they do not have any reviews. The connections between nodes are based on Amazon's three similarity metrics: "Co-viewed", "Co-bought" and "Similar Items". We only consider direct connections between nodes in a specific Amazon category (in our case Electronics and Instruments), removing any nodes that are not directly connected to another node.
258
+
259
+ The raw data is text, so text classification transformers such as roBERTa [9] are ideal; specifically, we use large roBERTa fine-tuned for the MNLI task [15]. We extract three different embeddings from roBERTa using the pre-trained model provided by fairseq. The first is the byte pair tokenisation used by roBERTa, the second is the feature extraction provided by fairseq which occurs after roBERTa's transformer heads and before classification, and the final is the feature vector present before the last fully connected layer. Due to restrictions in the token size for roBERTa we remove all nodes that have reviews with greater than 512 tokens.
260
+
261
+ We also provide the standard bag of words embedding as, in the case of text classification, this is a common embedding practice; keeping in line with prior datasets [8, 16] we use the top 500 words to create our Bag of Words embeddings.
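+
+ For instance, with scikit-learn (whether the counts are binarised afterwards is not specified, so raw counts are assumed; `review_texts` is the list of review strings per node):
+
+ ```python
+ from sklearn.feature_extraction.text import CountVectorizer
+
+ # Bag of words over the 500 most frequent words in the review texts.
+ vectorizer = CountVectorizer(max_features=500)
+ bow = vectorizer.fit_transform(review_texts)   # (num_nodes, 500) sparse count matrix
+ ```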
262
+
263
+ Amazon products do not belong to a single category and therefore a product's categories are closer to tags than labels. Thus, similar to Flickr_v2, we use GloVe word vectors to label our dataset based on the set of categories that each product belongs to against a hand-picked selection of categories.
264
+
265
+ Hand-picked labels for Electronics: ["Camera Photo and Lighting", "Audio and Video", "Bags, Cases and Covers", "Batteries and Chargers", "Peripherals, Keyboards and Mice", "Storage and Networking"]
266
+
267
+ Hand-picked labels for Instruments: ["Electric Guitar", "Acoustic Guitar", "Percussion", "Live Sound & Stage", "Studio Recording Equipment", "Microphones & Cable", "Amplifiers & Effects"]
268
+
269
+ These new datasets we call AmazonElectronics and AmazonInstruments.
270
+
271
+ The statistics for these datasets are available in Tables 4 to 6.
272
+
273
+ Table 4: Flickr_v2
274
+
275
+ <table><tr><td>Embedding</td><td>Number of Nodes</td><td>Number of Classes</td><td>Input Shape</td></tr><tr><td>-</td><td>76659</td><td>7</td><td>(3, 224, 224)</td></tr><tr><td>ResNet50</td><td>76659</td><td>7</td><td>(1024)</td></tr><tr><td>ResNet18</td><td>76659</td><td>7</td><td>(512)</td></tr><tr><td>VGG16</td><td>76659</td><td>7</td><td>(25088)</td></tr></table>
276
+
277
+ Table 5: AmazonElectronics
278
+
279
+ <table><tr><td>Embedding</td><td>Number of Nodes</td><td>Number of Classes</td><td>Input Shape</td></tr><tr><td>Bag of Words</td><td>92048</td><td>6</td><td>(500)</td></tr><tr><td>roBERTa Byte Pair</td><td>92048</td><td>6</td><td>(512)</td></tr><tr><td>roBERTa Encoded</td><td>92048</td><td>6</td><td>(1024)</td></tr><tr><td>roBERTa</td><td>92048</td><td>6</td><td>(1024)</td></tr></table>
280
+
281
+ Table 6: AmazonInstruments
282
+
283
+ <table><tr><td>Embedding</td><td>Number of Nodes</td><td>Number of Classes</td><td>Input Shape</td></tr><tr><td>Bag of Words</td><td>21127</td><td>7</td><td>(500)</td></tr><tr><td>roBERTa Byte Pair</td><td>21127</td><td>7</td><td>(512)</td></tr><tr><td>roBERTa Encoded</td><td>21127</td><td>7</td><td>(1024)</td></tr><tr><td>roBERTa</td><td>21127</td><td>7</td><td>(1024)</td></tr></table>
284
+
285
+ ## E Graph Attention Networks
286
+
287
+ The original paper Velickovic et al. [14] proposed a new approach to graph representation learning incorporating the popular attention mechanisms of sequence-based tasks.
288
+
289
+ The basic concept is to provide an attention mechanism $\mathbf{a}$ which, employing self-attention, calculates an attention coefficient between a node ${v}_{i}$ and each of its neighbours ${v}_{j} \in {\mathcal{N}}_{i}$. These coefficients are then normalised to compare across nodes using the softmax transform, resulting in the final coefficient for a given neighbour ${\alpha }_{ij}$
290
+
291
+ $$
292
+ {\alpha }_{ij} = \frac{\exp \left( {\operatorname{LeakyReLU}\left( {{\mathbf{a}}^{T}\left\lbrack {\mathbf{\theta }{\mathbf{h}}_{i}\parallel \mathbf{\theta }{\mathbf{h}}_{j}}\right\rbrack }\right) }\right) }{\mathop{\sum }\limits_{{k \in {\mathcal{N}}_{i}}}\exp \left( {\operatorname{LeakyReLU}\left( {{\mathbf{a}}^{T}\left\lbrack {\mathbf{\theta }{\mathbf{h}}_{i}\parallel \mathbf{\theta }{\mathbf{h}}_{j}}\right\rbrack }\right) }\right) } \tag{4}
293
+ $$
294
+
295
+ This attention mechanism occurs for each head in a layer allowing for multi-headed attention. The attention coefficient and weight matrices for each head are denoted with a superscript $k$ where $k \in \left\lbrack K\right\rbrack$ and $K$ is the total number of heads. We can thus relate GAT to the basic MESSAGE-PASSING network equation 1 as such
296
+
297
+ $$
298
+ {\mathbf{h}}_{i}^{l} = {\operatorname{AGGREGATE}}_{k}^{K}\left( {\sigma \left( {\mathop{\sum }\limits_{{j \in {\mathcal{N}}_{i}}}{\alpha }_{ij}^{k}{\mathbf{\theta }}^{k}{\mathbf{h}}_{j}^{l - 1}}\right) }\right) \tag{5}
299
+ $$
300
+
301
+ where AGGREGATE is some aggregation function; the original paper used MEAN and CONCATENATE.
302
+
303
+ A later paper Brody et al. [1] questioned the attentiveness of the proposed method and provided the following attention mechanism instead of equation 4
304
+
305
+ $$
306
+ {\alpha }_{ij} = \frac{\exp \left( {{\mathbf{a}}^{T}\operatorname{LeakyReLU}\left( {\mathbf{\theta }\left\lbrack {{\mathbf{h}}_{i}\parallel {\mathbf{h}}_{j}}\right\rbrack }\right) }\right) }{\mathop{\sum }\limits_{{k \in {\mathcal{N}}_{i}}}\exp \left( {{\mathbf{a}}^{T}\operatorname{LeakyReLU}\left( {\mathbf{\theta }\left\lbrack {{\mathbf{h}}_{i}\parallel {\mathbf{h}}_{j}}\right\rbrack }\right) }\right) } \tag{6}
307
+ $$
308
+
309
+ This graph connection approach allows us to train the attention mechanism $\mathbf{a}$ across the graph $\mathcal{G}$ without having to train the weight matrix $\mathbf{\theta }$ . So if we are provided with suitable weights we can greatly reduce the training cost.
310
+
311
+ ## F Further Tests
312
+
313
+ Tables 7 and 8 both demonstrate similar results to those seen in Table 2.
314
+
315
+ Table 7 demonstrates the same pattern, that the embedding function (in this case a pre-trained vision model) influences which GNN performs the best. We do not provide a BoW embedding for Flickr_v2 as there is no text for raw images. It is interesting to note that none of the GAT models achieves the highest accuracy on any of the Flickr_v2 embeddings. Instead, similar to the roBERTa embeddings, we see that GraphSAGE performs the best. This is the reason we chose GraphSAGE for our Flickr_v2 GraNet models.
316
+
317
+ It is important to note for VGG16 that the surprisingly poor performance of GNNs is more likely due to the size of the embedding produced by VGG16, 25088. A smaller embedding space, potentially obtained by using layers within the VGG16 classifier, may produce better results on par with ResNet. But there is also the possibility that, similar to roBERTa, VGG16 provides worse embeddings than ResNet. Further tests were not carried out on VGG16 due to limitations with time.
318
+
319
+ In comparison to Table 2 we see a far smaller increase in accuracy from the best performing GNN compared to the unconnected pre-trained model. This is mainly due to the fact that we were unable to fully fine-tune roBERTa to our datasets given limited hardware and time; we hypothesise that the improvements seen in Table 2 would be smaller when compared to a fully fine-tuned roBERTa. Similarly, we did not attempt to create a GraNet model using VGG16 as, for one, the results on Flickr_v2 are worse than the ResNet models, but also due to a limitation of resources.
320
+
321
+ Table 8 demonstrates very similar results to Table 2, which is to be expected as they are generated in the same way. We see an overall decrease in accuracy across the models but this is attributed to the fact that there are more classes in the dataset.
322
+
323
+ Tables 9 and 10 are identical to Tables 2 and 3 but provide standard deviation for each of the runs
324
+
325
+ ## G Hyperparameters
326
+
327
+ Table 11 details the layers of each model used, giving the output hidden features of each layer, the sampler used (with specifics shown in Table 12), and the maximum and minimum learning rates. Where the maximum and minimum learning rates differ, we use a learning rate scheduler that decreases the learning rate when validation accuracy plateaus. Where two models use the same sampler, the sampler parameters are identical to keep the tests consistent.
328
+
329
+ In the case of GraNet we start with a high learning rate for 20 epochs with the pre-trained model frozen, allowing the GNN components to train. We then unfreeze the entire model, drop the learning rate sharply, and train for another 100 epochs. In the first stage no learning rate scheduler is employed, as for all GNNs; in the second stage we apply the learning rate scheduler.
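+
+ A minimal PyTorch-style sketch of this two-stage schedule, assuming a GraNet `model` exposing its `pretrained` backbone and `gnn` components; the helper `run_epoch`, the loss, and the exact learning-rate values are illustrative rather than the code used for the experiments.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def run_epoch(model, loader, opt):
+     model.train()
+     for x, y in loader:
+         opt.zero_grad()
+         F.cross_entropy(model(x), y).backward()
+         opt.step()
+
+ def train_granet(model, loader, val_accuracy):
+     # Stage 1: freeze the pre-trained backbone and train the GNN part at a high learning rate.
+     for p in model.pretrained.parameters():
+         p.requires_grad = False
+     opt = torch.optim.Adam(model.gnn.parameters(), lr=1e-3)
+     for _ in range(20):
+         run_epoch(model, loader, opt)
+
+     # Stage 2: unfreeze everything, drop the learning rate sharply, and reduce it on plateaus.
+     for p in model.parameters():
+         p.requires_grad = True
+     opt = torch.optim.Adam(model.parameters(), lr=1e-5)
+     sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode="max")
+     for _ in range(100):
+         run_epoch(model, loader, opt)
+         sched.step(val_accuracy(model))  # validation accuracy drives the scheduler
+ ```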
330
+
331
+ Table 7: Test accuracy on Flickr_v2 with different embeddings, compared against the corresponding unconnected vision model. Included is the difference $\Delta$ of each model to the unconnected model and the standard deviation of each result. The details of the embedding styles are available in Appendix D, Table 4.
332
+
333
+ <table><tr><td rowspan="2">Model</td><td colspan="3">Embedding Styles</td></tr><tr><td>ResNet18</td><td>ResNet50</td><td>VGG16</td></tr><tr><td>Unconnected Model $\Delta \uparrow$</td><td>45.2% ± 0.1 +0.0</td><td>46.9% ± 0.0 +0.0</td><td>${47.0} \pm {0.1}$ +0.0</td></tr><tr><td>GCN</td><td>${41.8}\% \pm {0.4}$</td><td>${38.3}\% \pm {0.5}$</td><td>35.5% ± 0.3</td></tr><tr><td>Δ↑</td><td>-3.4</td><td>-8.6</td><td>-11.5</td></tr><tr><td>GAT</td><td>${38.1}\% \pm {0.6}$</td><td>37.1% ± 1.1</td><td>27.3% ± 1.2</td></tr><tr><td>Δ↑</td><td>-7.1</td><td>-9.8</td><td>-19.7</td></tr><tr><td>GAT2</td><td>42.1% ± 1.8</td><td>41.0% ± 1.5</td><td>${34.2}\% \pm {0.8}$</td></tr><tr><td>Δ↑</td><td>-3.1</td><td>-5.9</td><td>-12.8</td></tr><tr><td>GraphSAGE (Random)</td><td>45.4% ± 0.1</td><td>47.0% ± 0.0</td><td>35.2% ± 0.2</td></tr><tr><td>Δ↑</td><td>+0.2</td><td>+0.1</td><td>-11.8</td></tr><tr><td>GraphSAGE (Neighbour)</td><td>$\mathbf{{45.8}\% \pm {0.2}}$</td><td>44.5% ± 0.1</td><td>34.5% ± 0.2</td></tr><tr><td>Δ↑</td><td>+0.6</td><td>-2.4</td><td>-12.5</td></tr></table>
334
+
335
+ Table 8: Test accuracy on AmazonInstruments with different embeddings compared against a standard unconnected MLP model. Included is the difference $\Delta$ of each model to the unconnected MLP and the standard deviation of each result. The embedding styles are explained in Appendix D, Table 6.
336
+
337
+ <table><tr><td rowspan="2">Model</td><td colspan="4">Embedding Styles</td></tr><tr><td>Bag of Words</td><td>Byte Pair</td><td>roBERTa Encoded</td><td>roBERTa</td></tr><tr><td>Unconnected MLP $\Delta \uparrow$</td><td>${66.1}\% \pm {0.2}$ +0.0</td><td>${21.0}\% \pm {0.3}$ +0.0</td><td>43.9% ± 0.4 +0.0</td><td>39.8% ± 0.7 +0.0</td></tr><tr><td>GCN</td><td>${64.0}\% \pm {0.5}$</td><td>${20.8}\% \pm {0.3}$</td><td>${20.4}\% \pm {0.8}$</td><td>${20.4}\% \pm {0.8}$</td></tr><tr><td>Δ↑</td><td>-2.1</td><td>-0.2</td><td>-23.5</td><td>-19.4</td></tr><tr><td>GAT</td><td>79.3% ± 0.6</td><td>${21.6}\% \pm {0.9}$</td><td>47.5% ± 1.9</td><td>46.1% ± 4.3</td></tr><tr><td>Δ↑</td><td>+13.2</td><td>+0.6</td><td>+3.6</td><td>+6.3</td></tr><tr><td>GAT2</td><td>$\mathbf{{79.4}\% \pm {0.3}}$</td><td>${21.2}\% \pm {0.6}$</td><td>49.8% ± 5.0</td><td>47.8% ± 2.8</td></tr><tr><td>Δ↑</td><td>+13.3</td><td>+0.2</td><td>+5.9</td><td>+8.0</td></tr><tr><td>GraphSAGE (Random)</td><td>67.5% ± 0.3</td><td>23.9% ± 0.6</td><td>45.1% ± 1.2</td><td>41.9% ± 0.6</td></tr><tr><td>Δ↑</td><td>+1.4</td><td>+2.9</td><td>+1.2</td><td>+2.1</td></tr><tr><td>GraphSAGE (Neighbour)</td><td>${72.6}\% \pm {0.3}$</td><td>$\mathbf{{43.4}\% \pm {0.5}}$</td><td>$\mathbf{{62.4}\% \pm {0.5}}$</td><td>$\mathbf{{59.9}\% \pm {0.6}}$</td></tr><tr><td>$\Delta \uparrow$</td><td>+6.5</td><td>+22.8</td><td>+18.5</td><td>+20.1</td></tr></table>
338
+
339
+ Table 9: Test accuracy on AmazonElectronics with different embeddings compared against a standard unconnected MLP model. Included is the difference $\Delta$ of each model to the unconnected MLP and the standard deviation of each result. The embedding styles are explained in Appendix D, Table 5.
340
+
341
+ <table><tr><td rowspan="2">Model</td><td colspan="4">Embedding styles</td></tr><tr><td>Bag of Words</td><td>Byte Pair</td><td>roBERTa Encoded</td><td>roBERTa</td></tr><tr><td>Unconnected MLP $\Delta \uparrow$</td><td>71.6% ± 0.3 +0.0</td><td>${21.6}\% \pm {0.0}$ +0.0</td><td>${55.8}\% \pm {0.1}$ +0.0</td><td>${51.9}\% \pm {0.2}$ +0.0</td></tr><tr><td>GCN</td><td>69.1% ± 0.1</td><td>21.7% ± 0.2</td><td>${22.7}\% \pm {1.1}$</td><td>${22.3}\% \pm {1.2}$</td></tr><tr><td>Δ↑</td><td>-2.5</td><td>+0.1</td><td>-33.1</td><td>-29.6</td></tr><tr><td>GAT</td><td>81.1% ± 0.2</td><td>${22.2}\% \pm {0.5}$</td><td>46.1% ± 1.5</td><td>40.3% ± 2.9</td></tr><tr><td>Δ↑</td><td>$+ {10.5}$</td><td>+0.6</td><td>-9.7</td><td>-11.6</td></tr><tr><td>GAT2</td><td>$\mathbf{{81.8}\% \pm {0.3}}$</td><td>${22.2}\% \pm {0.6}$</td><td>${41.8}\% \pm {5.1}$</td><td>35.7% ± 5.6</td></tr><tr><td>Δ↑</td><td>+10.2</td><td>+0.6</td><td>-14.0</td><td>-16.2</td></tr><tr><td>GraphSAGE (Random)</td><td>71.3% ± 0.1</td><td>26.3% ± 0.3</td><td>57.0% ± 0.5</td><td>53.7% ± 0.5</td></tr><tr><td>Δ↑</td><td>-0.3</td><td>+4.7</td><td>$+ {1.2}$</td><td>+1.8</td></tr><tr><td>GraphSAGE (Neighbour)</td><td>${76.4}\% \pm {0.3}$</td><td>$\mathbf{{40.4}\% } \pm {0.4}$</td><td>$\mathbf{{67.8}\% \pm {0.4}}$</td><td>$\mathbf{{66.4}\% \pm {0.3}}$</td></tr><tr><td>Δ↑</td><td>+4.8</td><td>+20.8</td><td>+12.0</td><td>+12.5</td></tr></table>
342
+
343
+ Table 10: Comparison of GraNet models against the best performing GNNs for a specific embedding. Included is the difference $\Delta$ of each model to the unconnected model and the standard deviation of each result. The specifics of the 3 datasets is explained in Appendix D.
344
+
345
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Flickr_v2</td><td>Electronics</td><td>Instruments</td></tr><tr><td>ResNet18</td><td>ResNet50</td><td>Bag of Words</td><td>Bag of Words</td></tr><tr><td>Unconnected Model $\Delta \uparrow$</td><td>45.2% ± 0.1 +0.0</td><td>46.9% ± 0.0 +0.0</td><td>${71.6}\% \pm {0.3}$ +0.0</td><td>${66.1}\% \pm {0.2}$ +0.0</td></tr><tr><td>GAT2</td><td>${42.1}\% \pm {1.8}$</td><td>41.0% ± 1.5</td><td>81.8%</td><td>79.4% ± 0.3</td></tr><tr><td>Δ↑</td><td>-3.1</td><td>-5.9</td><td>$+ {10.2}$</td><td>+13.3</td></tr><tr><td>GraphSAGE (Random)</td><td>45.4% ± 0.1</td><td>47.0% ± 0.0</td><td>71.3%</td><td>67.5% ± 0.3</td></tr><tr><td>Δ↑</td><td>+0.2</td><td>+0.1</td><td>-0.3</td><td>+1.4</td></tr><tr><td>GraphSAGE (Neighbour)</td><td>45.8% ± 0.2</td><td>44.5% ± 0.1</td><td>76.4%</td><td>${72.6}\% \pm {0.3}$</td></tr><tr><td>Δ↑</td><td>+0.4</td><td>-2.4</td><td>+4.8</td><td>+6.5</td></tr><tr><td>GraNet</td><td>46.7% ± 0.1</td><td>$\mathbf{{48.7}\% \pm {0.1}}$</td><td>$\mathbf{{81.9}\% \pm {0.5}}$</td><td>79.7% ± 0.9</td></tr><tr><td>$\Delta \uparrow$</td><td>+1.5</td><td>+1.8</td><td>+10.3</td><td>+13.6</td></tr></table>
346
+
347
+ Table 11: Model architecture, sampler and learning rate
348
+
349
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Hidden Features</td><td rowspan="2">Sampler</td><td colspan="2">Learning Rate</td></tr><tr><td>Max.</td><td>Min.</td></tr><tr><td rowspan="2">GCN</td><td>256</td><td rowspan="2">Random Node</td><td>1e-2</td><td>1e-2</td></tr><tr><td>256</td><td/><td/></tr><tr><td>GAT</td><td>256 256</td><td>GraphSAINT RW [16]</td><td>1e-2</td><td>1e-2</td></tr><tr><td>GAT2</td><td>256 256</td><td>GraphSAINT RW [16]</td><td>1e-2</td><td>1e-2</td></tr><tr><td rowspan="2">GraphSAGE (Random)</td><td>256</td><td rowspan="2">Random Node</td><td>1e-3</td><td>1e-3</td></tr><tr><td>256</td><td/><td/></tr><tr><td rowspan="2">GraphSAGE (Neighbour)</td><td>256</td><td rowspan="2">Neighbour [5]</td><td rowspan="2">1e-3</td><td rowspan="2">1e-3</td></tr><tr><td>256</td></tr><tr><td rowspan="3">MLP</td><td>256</td><td rowspan="3">-</td><td rowspan="3">1e-5</td><td rowspan="3">5e-7</td></tr><tr><td>256</td></tr><tr><td>128</td></tr><tr><td>ResNet18</td><td>as provided</td><td>-</td><td>1e-4</td><td>5e-6</td></tr><tr><td>ResNet50</td><td>as provided</td><td>-</td><td>1e-4</td><td>5e-6</td></tr><tr><td>VGG16</td><td>as provided</td><td>-</td><td>1e-4</td><td>5e-6</td></tr><tr><td rowspan="3">GraNet (MLP + GAT2)</td><td>256</td><td rowspan="3">GraphSAINT RW [16]</td><td rowspan="3">1e-2</td><td rowspan="3">1e-2</td></tr><tr><td>256</td></tr><tr><td>128</td></tr><tr><td>GraNet (ResNet18 + GraphSAGE)</td><td>as provided</td><td>Random Node</td><td>1e-3</td><td>5e-8</td></tr><tr><td>GraNet (ResNet50 + GraphSAGE)</td><td>as provided</td><td>Random Node</td><td>1e-3</td><td>5e-8</td></tr></table>
350
+
351
+ Table 12: Sampler parameters
352
+
353
+ <table><tr><td>Sampler</td><td>Dataset Split</td><td>Setup</td></tr><tr><td rowspan="3">GraphSAINT[16] RW</td><td>Train</td><td>roots: 6000, walk length:2, # steps:5, coverage:100</td></tr><tr><td>Validation</td><td>roots: 1250, walk length:2, # steps:5, coverage:100</td></tr><tr><td>Test</td><td>roots: 2000, walk length:2, # steps:5, coverage:100</td></tr><tr><td rowspan="3">Random Node</td><td>Train</td><td>#partitions:512</td></tr><tr><td>Validation</td><td>#partitions:128</td></tr><tr><td>Test</td><td>#partitions:256</td></tr><tr><td rowspan="3">Neighbour[5]</td><td>Train</td><td>#neighbours: $\left\lbrack {{25},{10}}\right\rbrack$ , batch size:512</td></tr><tr><td>Validation</td><td>#neighbours: $\left\lbrack {{25},{10}}\right\rbrack$ , batch size:512</td></tr><tr><td>Test</td><td>#neighbours: $\left\lbrack {{25},{10}}\right\rbrack$ , batch size:512</td></tr></table>
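+
+ For reference, the sampler settings in Table 12 correspond roughly to the following PyTorch Geometric loaders (training split shown). This is a sketch assuming a recent PyTorch Geometric release and a toy `Data` object; it is not the exact experiment code.
+
+ ```python
+ import torch
+ from torch_geometric.data import Data
+ from torch_geometric.loader import (GraphSAINTRandomWalkSampler,
+                                     NeighborLoader, RandomNodeLoader)
+
+ # Toy graph standing in for the real training split.
+ data = Data(x=torch.randn(100, 16),
+             edge_index=torch.randint(0, 100, (2, 400)),
+             y=torch.randint(0, 7, (100,)))
+
+ # GraphSAINT random-walk sampler: 6000 roots, walk length 2, 5 steps, coverage 100.
+ saint_train = GraphSAINTRandomWalkSampler(
+     data, batch_size=6000, walk_length=2, num_steps=5, sample_coverage=100)
+
+ # Random node sampler: the node set is split into 512 partitions.
+ random_train = RandomNodeLoader(data, num_parts=512, shuffle=True)
+
+ # Neighbour sampler: 25 then 10 neighbours over two hops, batch size 512.
+ neighbour_train = NeighborLoader(data, num_neighbors=[25, 10], batch_size=512, shuffle=True)
+ ```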
354
+
papers/LOG/LOG 2022/LOG 2022 Conference/Ri2dzVt_a1h/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,185 @@
1
+ Anonymous Author(s)
2
+
3
+ Anonymous Affiliation
4
+
5
+ Anonymous Email
6
+
7
+ § ABSTRACT
8
+
9
+ Current graph representation learning techniques use Graph Neural Networks (GNNs) to extract features from dataset embeddings. In this work, we examine the quality of these embeddings and assess how changing them can affect the accuracy of GNNs. We explore different embedding extraction techniques for both images and texts. We find that the choice of embedding biases the performance of different GNN architectures and thus the choice of embedding influences the selection of GNNs regardless of the underlying dataset. In addition, we only see an improvement in accuracy from some GNN models compared to the accuracy of models trained from scratch or fine-tuned on the underlying data without utilizing the graph connections. As an alternative, we propose Graph-connected Network (GraNet) layers which use GNN message passing within large models to allow neighborhood aggregation. This gives a chance for the model to inherit weights from large pre-trained models if possible and we demonstrate that this approach improves the accuracy compared to the previous methods: on Flickr_v2, GraNet beats GAT2 and GraphSAGE by 7.7% and 1.7% respectively.
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ Graph Neural Networks (GNNs) have been successful on a wide array of applications ranging from computational biology [17] to social networks [5]. The input for GNNs, although sourced from many different domains, is often data that has been preprocessed to a computationally digestible format. These digestible formats are commonly known as embeddings.
14
+
15
+ Currently, improvements made to GNN architectures are tested against these embeddings and the state of the art is determined based on those results. However, this does not necessarily correlate with a GNN's accuracy on the underlying dataset and ignores the influence that the source and style of these embeddings have on the performance of particular GNN architectures. To test existing GNN architectures, and demonstrate the importance of the embeddings used in training them, we provide three new datasets each with a set of embeddings generated from different methods.
16
+
17
+ We further analyse the benefit of using GNNs on fixed embeddings. We compare GNNs to standard models that have been trained or fine-tuned on the target raw data; these models treat each data point as unconnected, ignoring the underlying graph information in the data. This simple unconnected baseline surprisingly outperforms some strong GNN models. This then prompts the question: will mixing the two approaches unlock the classification power of existing large models by allowing them to utilize the graph structure in our data?
18
+
19
+ Based on the question above, we propose a new method of mixing GNNs with large models, allowing them to train simultaneously. To achieve this we introduce a variation of the standard message passing framework. With this new framework a subset of the large model's layers can each be graph-connected - exploiting useful graph structure information during the forward pass. We demonstrate that this new approach improves the accuracy of using only a pre-trained or fine-tuned large model and outperforms a stand-alone GNN on a fixed embedding. We call this new approach GraNet (Graph-connected Network), and in summary, this paper has the following contributions:
20
+
23
+ * We provide new datasets and a rich set of accompanying embeddings to better test the performance of GNNs.
24
+
25
+ * We empirically demonstrate that only some existing GNNs improve on unconnected model accuracy and those that do vary depending on the embeddings used. We urge unconnected models as a baseline for assessing GNN performance.
26
+
27
+ * We provide a new method, named GraNet, that combines GNNs and large models (fine-tuned or trained from scratch) to efficiently exploit the graph structure in raw data.
28
+
29
+ * We empirically show that GraNet outperforms both unconnected models (the strong baseline) and GNNs on a range of datasets and accompanying embeddings.
30
+
31
+ § 2 RELATED WORK
32
+
33
+ Table 1: An overview of popular datasets detailing statistics and how they are built
34
+
35
+ Name                   Classification Task   Classes   Feature Length   Embedding Function
+ Amazon [12]            Text                  10,8      767,745          Bag of Words
+ AmazonProducts [16]    Text                  107       200              4-gram with SVD
+ Flickr [16]            Image                 7         500              Bag of Words
+ Reddit [5, 16]         Text                  41        602              Avg. GloVe vectors
+ Cora [8]               Text                  7         1,433            Bag of Words
+ CiteSeer [8]           Text                  6         3,703            Bag of Words
+ PubMed [8]             Text                  3         500              Bag of Words
61
+
62
+ Graph Neural Networks Let $\mathcal{G}\left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ denote a graph where $\mathcal{V} = \left\{ {{v}_{1},{v}_{2},\ldots ,{v}_{n}}\right\}$ is the set of nodes and $N = \left| \mathcal{V}\right|$ is the number of nodes in the graph, $\mathcal{E}$ is the set of edges between nodes in $\mathcal{V}$ such that ${e}_{i,j} \in \mathcal{E}$ denotes a directed connection from node ${v}_{i}$ to node ${v}_{j}$ . We say each node ${v}_{i}$ has a neighbourhood ${\mathcal{N}}_{i}$ such that ${v}_{j} \in {\mathcal{N}}_{i} \Leftrightarrow {e}_{j,i} \in \mathcal{E}$ and we say that ${v}_{j}$ is a neighbour node to ${v}_{i}$ .
63
+
64
+ $\mathbf{X}$ is the raw data matrix; there normally exists a transformation function to project the raw data to a more compact feature ${\mathbf{X}}_{e}$ space such that ${\mathbf{X}}_{e} = {f}_{e}\left( \mathbf{X}\right)$ .
65
+
66
+ For instance, we can transform a set of images $\left( {\mathbf{X} \in {\mathbb{R}}^{\left| \mathcal{V}\right| \times C \times H \times W}\text{ , where }C,H}\right.$ and $W$ are the number of channels, width and height of an image) to 1D features $\left( {{\mathbf{X}}_{e} \in {\mathbb{R}}^{\left| \mathcal{V}\right| \times F}}\right.$ , where $F$ denotes the feature dimension). In this case, we have an embedding function ${f}_{e} : {\mathbb{R}}^{\left| \mathcal{V}\right| \times C \times H \times W} \rightarrow {\mathbb{R}}^{\left| \mathcal{V}\right| \times F}$ for the dimensional reduction.
67
+
68
+ This paper puts a special focus on the design of ${f}_{e}$ , and reveals later how the design choice of ${f}_{e}$ can influence the performance of GNN models without making any changes to the underlying data $\mathcal{G}\left( {\mathcal{V},\mathcal{E},\mathbf{X}}\right)$ . An overview of popular datasets and the processes they used to create their embeddings is presented in Table 1. We see that the popular graph datasets [5, 8, 12, 16] focus heavily on Bag of Words (BoW) and word vectors. This implies that current GNNs are being tested on and designed for a very narrow class of embedding styles. A more detailed discussion is available in Appendix C.
69
+
70
+ Current GNNs can be thought of as Message Passing layers, the $l$ -th layer can be represented as
71
+
72
+ $$
73
+ {\mathbf{h}}_{i}^{l} = {\gamma }_{{\mathbf{\theta }}_{\gamma }}\left( {{\mathbf{h}}_{i}^{l - 1},{\psi }_{j \in \mathcal{N}\left( i\right) }\left( {{\phi }_{{\mathbf{\theta }}_{\phi }}\left( {{\mathbf{h}}_{i}^{l - 1},{\mathbf{h}}_{j}^{l - 1},{\mathbf{e}}_{j,i}}\right) }\right) }\right) \tag{1}
74
+ $$
75
+
76
+ where $\psi$ is a differentiable aggregation function and ${\gamma }_{{\theta }_{\gamma }}$ and ${\phi }_{{\theta }_{\phi }}$ represent differentiable functions with trainable parameters ${\theta }_{\gamma }$ and ${\theta }_{\phi }$ respectively. ${\mathbf{h}}_{i}^{l}$ is the node representation of ${v}_{i}$ at layer $l$ , with ${\mathbf{h}}_{i}^{0} = {f}_{e}\left( {\mathbf{x}}_{i}\right)$ . Kipf et al. [8] introduce GCN (Graph Convolutional Networks) - a method of applying convolutional layers from CNNs to graph neural networks. It focuses on spectral filters applied to the whole graph structure rather than at the node level. Hamilton et al. [5] introduce the GraphSAGE model which builds on prior work from GCN focusing on individual node representations. This gives rise to the iterative message passing process on the node level. Though simpler than newer models, we find that this approach, when given the right embedding style, can outperform some recently published GNNs. Velickovic et al. [14] introduce the idea of graph attention which alters how a node aggregates its neighbours' representations. This adds an additional attention mechanism to discern which aspects of the node representations in a node's neighbourhood are important at a given layer. This approach is improved upon in Brody et al. [1] to provide a more attentive network. Further discussion of GAT is available in Appendix E.
77
+
78
+ § 3 METHOD
79
+
80
+ Our proposed method of converting pre-trained models into graph-connected models can easily be broken down into individual layers. Taking ${f}_{\mathbf{\theta }}^{l}$ as the $l$ -th layer in a pre-trained model ${f}_{\mathbf{\theta }}$ , where we are given pre-trained weights $\mathbf{\theta }$ , we can describe this new layer by reformulating Equation (1) as follows
81
+
82
+ $$
83
+ {\mathbf{h}}_{i}^{l} = {\gamma }_{{\mathbf{\theta }}_{\gamma }}\left( {{\mathbf{h}}_{i}^{l - 1},{\psi }_{j \in \mathcal{N}\left( i\right) }\left( {{\phi }_{{\mathbf{\theta }}_{\phi }}\left( {{f}_{{\mathbf{\theta }}_{l - 1}}^{l - 1}\left( {\mathbf{h}}_{i}^{l - 1}\right) ,{f}_{{\mathbf{\theta }}_{l - 1}}^{l - 1}\left( {\mathbf{h}}_{j}^{l - 1}\right) ,{\mathbf{e}}_{j,i}}\right) }\right) }\right) \tag{2}
84
+ $$
85
+
86
+ where ${\mathbf{h}}_{i}^{0} = {\mathbf{x}}_{i}$ (without an embedding function); more details and definitions can be found in Appendix A.
87
+
88
+ Figure 1: Graphical representation of how our proposed GraNet layer operates for image networks
89
+
92
+ Figure 1 is a representation of Equation (2), specifically in the case where the pre-trained model is a CNN and ${f}_{{\theta }_{l - 1}}^{l - 1}$ is a convolutional layer. The light blue regions perform the standard convolutional feature extraction; these extracted feature maps are then adjusted, in the light orange region, by a Message Passing layer with graph attention. These two representations are then combined.
93
+
94
+ The light blue regions and resulting channel stacks, ${A}_{i} \in {\mathbb{R}}^{1 \times {C}_{l} \times {H}_{l} \times {W}_{l}}$ through ${A}_{n} \in {\mathbb{R}}^{1 \times {C}_{l} \times {H}_{l} \times {W}_{l}}$ , represent the forward pass through a single CNN layer ${f}_{{\theta }_{l - 1}}^{l - 1}$ . ${A}_{i}$ represents the forward pass of the current node ${h}_{i}$ and ${A}_{j..n}$ represent the forward passes of the neighbours of ${h}_{i}$ .
95
+
96
+ The light orange region and resulting channel stacks, $\mathcal{B}$ , represent the graph-based Message Passing stage where the new representations are altered $\left( {\phi }_{{\theta }_{\phi }}\right)$ , aggregated $\left( \psi \right)$ and finally combined with the current node representation $\left( {\gamma }_{{\mathbf{\theta }}_{\gamma }}\right)$ , following the description in Equation (2).
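+
+ A simplified PyTorch-style sketch of one such graph-connected convolutional layer is given below; it uses mean aggregation in place of graph attention and illustrative module names, so it should be read as a sketch of Equation (2) rather than the exact implementation.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class GraNetConvLayer(nn.Module):
+     """Run the pre-trained conv on every node, aggregate neighbour feature maps, combine."""
+     def __init__(self, pretrained_conv: nn.Conv2d):
+         super().__init__()
+         self.conv = pretrained_conv                          # f^{l-1} with pre-trained weights
+         c = pretrained_conv.out_channels
+         self.combine = nn.Conv2d(2 * c, c, kernel_size=1)    # gamma: merge self and neighbours
+
+     def forward(self, h, neighbours):
+         # h: (N, C, H, W) node inputs; neighbours: list of neighbour index tensors per node.
+         a = self.conv(h)                                     # per-node convolution (A_1..A_n)
+         agg = torch.stack([a[idx].mean(dim=0) if len(idx) > 0 else torch.zeros_like(a[0])
+                            for idx in neighbours])           # psi: aggregate neighbour maps (B)
+         return self.combine(torch.cat([a, agg], dim=1))      # combine with the node's own maps
+
+ # Example: 4 nodes in a ring, 3-channel 8x8 inputs.
+ layer = GraNetConvLayer(nn.Conv2d(3, 16, kernel_size=3, padding=1))
+ out = layer(torch.randn(4, 3, 8, 8),
+             [torch.tensor([1, 3]), torch.tensor([0, 2]),
+              torch.tensor([1, 3]), torch.tensor([0, 2])])    # (4, 16, 8, 8)
+ ```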
97
+
98
+ § 4 EVALUATION
99
+
100
+ Experiment Setup For all test results we perform 3 train-test runs, each with a different random seed, and take the arithmetic mean. Training runs for 300 epochs unless otherwise stated. A discussion of how the datasets were created is available in Appendix D and a breakdown of the hyperparameters used is available in Appendix G.
101
+
102
+ Re-evaluating Embeddings Table 2 demonstrates that the particular embedding function used determines which GNN model performs best on the dataset. For instance, GAT2 has the best performance if the data is embedded as BoW (Bag of Words), but performs poorly on other embeddings. GraphSAGE, in contrast, performs poorly on BoW but performs well otherwise. This also signifies the importance of good embeddings, as in this case BoW is better than roBERTa. We make the following key observations:
103
+
104
+ * The function ${f}_{e}$ used to extract embeddings influences the performance of different GNNs, so the embeddings should influence the choice of GNNs regardless of the underlying data.
105
+
106
+ * GNN models do not always outperform simple unconnected models; graph structure alone is not enough to compete against good classifiers.
107
+
108
+ * The choice of an embedding function contributes more to the final performance compared to the choice of a GNN model.
109
+
110
+ We also see the same effect when using Flickr_v2 and AmazonInstruments in Appendix F, Tables 7 and 8 respectively.
111
+
112
+ Graph-connected Network (GraNet) Layers Table 3 demonstrates that extending our large models with graph connections provides a significant improvement compared to both GNNs and the unconnected baseline models.
113
+
114
+ In Flickr_v2, both GraNet and Unconnected Model inherit either ResNet18 or ResNet50 weights pre-trained on ImageNet [3] and then fine-tuned on the target dataset. We observe a significant
115
+
116
+ Table 2: Test accuracy on AmazonElectronics with different embeddings compared against a standard unconnected MLP model. The embedding styles are explained in Appendix D, Table 5.
117
+
118
+ Model                    Bag of Words    Byte Pair       roBERTa Encoded   roBERTa
+ Unconnected MLP          71.6% (+0.0)    21.6% (+0.0)    55.8% (+0.0)      51.9% (+0.0)
+ GCN                      69.1% (-2.5)    21.7% (+0.1)    22.7% (-33.1)     22.3% (-29.6)
+ GAT                      81.1% (+10.5)   22.2% (+0.6)    46.1% (-9.7)      40.3% (-11.6)
+ GAT2                     81.8% (+10.2)   22.2% (+0.6)    41.8% (-14.0)     35.7% (-16.2)
+ GraphSAGE (Random)       71.3% (-0.3)    26.8% (+5.2)    57.0% (+1.2)      53.7% (+1.8)
+ GraphSAGE (Neighbour)    76.4% (+4.8)    40.4% (+20.8)   67.8% (+12.0)     66.4% (+12.5)
144
+
145
+ increase $\left( {+{1.5}\% }\right)$ from the GraNet style of training compared to both the unconnected model and the other GNN models. Intuitively, GraNet layers reframe graph representation learning from training a GNN on a fixed pre-extracted embedding to training both the GNN and the embedding function $\left( {f}_{e}\right)$ together on the underlying data. We also make the following key observations:
146
+
147
+ * Graph-connecting a pre-trained network improves performance by fine-tuning feature extraction based on the graph structure.
148
+
149
+ * GraNet models outperform their counterparts by facilitating fine-tuning on the graph dataset.
150
+
151
+ The architecture for each GraNet is a mixture of the best performing GNN for that embedding and the embedding extraction model $\left( {f}_{e}\right)$ (a Multi-Layer Perceptron in the case of Bag of Words). The details of the GraNet models are described in Appendix B.
152
+
153
+ Table 3: Comparison of GraNet models against the best performing GNNs for a specific embedding. Unconnected Model refers to ResNet18 or ResNet50, in the case of Flickr_v2, and an MLP otherwise. The setup of each GraNet model is detailed in Appendix B and the specifics of the 3 datasets can be found in Appendix D.
154
+
155
+ Model                    Flickr_v2 (ResNet18)   Flickr_v2 (ResNet50)   Electronics (Bag of Words)   Instruments (Bag of Words)
+ Unconnected Model        45.2% (+0.0)           46.9% (+0.0)           71.6% (+0.0)                 66.4% (+0.0)
+ GAT2                     42.1% (-3.1)           41.0% (-5.9)           81.8% (+10.2)                79.4% (+13.0)
+ GraphSAGE (Random)       45.4% (+0.2)           47.0% (+0.1)           71.3% (-0.3)                 67.5% (+1.1)
+ GraphSAGE (Neighbour)    45.8% (+0.4)           44.5% (-2.4)           76.4% (+4.8)                 72.6% (+6.2)
+ GraNet ${}^{1}$          46.7% (+1.5)           48.7% (+1.8)           81.9% (+10.3)                79.7% (+13.3)
178
+
179
+ § 5 CONCLUSION
180
+
181
+ In this paper, we reveal that GNN model designs are overfitting to certain embedding styles (e.g. BoW and word vectors). To demonstrate this we introduced three new datasets, each with a range of embedding styles, to be used as a better all-round benchmark of GNN performance. We demonstrated that embedding style influences the performance of GNNs regardless of the underlying raw data of the dataset. Equally, the quality of the embedding, measured by how well an unconnected baseline model performs, is a greater indicator of GNN accuracy than the GNN model chosen. We therefore stress the importance of creating high-quality embeddings and choosing the best GNN model based on the style of embedding created rather than using (or trying to improve) the same GNN model for every task.
182
+
183
+ We then introduced a new approach named GraNet. This approach allows for any large pre-trained model to be fine-tuned to a graph-connected dataset by altering the standard message passing function. In this way we exploit graph structure information to enhance the pre-trained model performance on graph-connected datasets. We have demonstrated that GraNet outperforms both unconnected pre-trained models and GNNs on a range of datasets.
184
+
185
+ There is an increasing trend towards large pre-trained models and graph-connected datasets. Our work demonstrates potential pitfalls in the way GNN architectures are currently evaluated and proposes a new technique to fully exploit the benefits of pre-trained models within a GNN.
papers/LOG/LOG 2022/LOG 2022 Conference/RqN8W3R76J/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,491 @@
1
+ # AutoGDA: Automated Graph Data Augmentation for Node Classification
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Graph data augmentation has been used to improve generalizability of graph machine learning. However, by only applying fixed augmentation operations on entire graphs, existing methods overlook the unique characteristics of communities which naturally exist in the graphs. For example, different communities can have various degree distributions and homophily ratios. Ignoring such discrepancy with unified augmentation strategies on the entire graph could lead to sub-optimal performance for graph data augmentation methods. In this paper, we study a novel problem of automated graph data augmentation for node classification from the localized perspective of communities. We formulate it as a bilevel optimization problem: finding a set of augmentation strategies for each community, which maximizes the performance of graph neural networks on node classification. As the bilevel optimization is hard to solve directly and the search space for community-customized augmentation strategies is huge, we propose a reinforcement learning framework AutoGDA that learns the local-optimal augmentation strategy for each community sequentially. Our proposed approach outperforms established and popular baselines on public node classification benchmarks as well as real industry e-commerce networks by up to +12.5% accuracy.
12
+
13
+ ## 1 Introduction
14
+
15
+ Data augmentation methods are widely used to improve the generalizability and robustness of machine learning (ML) models [1]. They aim to create plausible variations of existing data without the need for additional human effort. It has been shown that customized data augmentation, i.e., customizing augmentation strategies for each (batch of) object, is beneficial for ML models [2-4]. For example, customized augmentation strategies $\left\lbrack {2,5}\right\rbrack$ have shown improved performance over having a uniform augmentation on the entire dataset. To this end, automated data augmentation methods efficiently seek the optimal customized augmentation strategies for samples/batches [2,4-6].
16
+
17
+ Recently, with graph neural networks (GNNs) [7-10] emerging as one of the preferred approaches for learning on graph structured data, graph data augmentation methods [11-17] have shown promising results in improving GNNs. For example, DropEdge [18] randomly removes a fraction of edges in each training epoch to promote GNN's robustness during test-time inference. The AdaEdge [11] approach iteratively adds (or removes) edges between nodes that are predicted to have the same (or different) labels with high confidence. GAugM and GAugO [13] manipulate the graph structure according to edge probabilities learned by link predictors. Despite the promising improvements on various node classification tasks, existing graph data augmentation approaches are manually designed for the entire graph and only explore graph properties and characteristics globally.
18
+
19
+ Applying automated data augmentation to graphs is more involved than for images and text, because the unique properties of graph data pose a great challenge. While existing automated augmentation approaches $\left\lbrack {2,5}\right\rbrack$ assume that samples are independent and identically distributed (i.i.d.) in the dataset, nodes in a graph are naturally connected and are dependent on each other in a non-Euclidean manner. Therefore, it is not straightforward to apply existing automated augmentation methods to graph data. On the other hand, the unique properties of graph data may give us clues for designing new and effective solutions. Nodes in the graph are naturally grouped into communities $\left\lbrack {{19},{20}}\right\rbrack$ , providing a natural separation of data objects (nodes) for node classification. Chiang et al. [21] show that nodes from the same community are the most important neighbors for aggregation-based graph learning algorithms. As communities in graphs such as social networks are usually disparate in characteristics [22-24] such as density, centrality, homophily, etc., we argue that data augmentation strategies should be localized (community-specific) to achieve optimal results. However, how to augment graph data according to the localized characteristics of communities in the graph remains underexplored.
20
+
21
+ To address the aforementioned challenges, we propose to tackle the problem of automated graph data augmentation from the local perspective, i.e., communities in graphs. We first analyze the disparate characteristics of communities using benchmark datasets. Motivated by these observations and insights, we define automated graph data augmentation as a bilevel optimization problem, that is, to learn the augmentation strategies that lead to the best node classification performance of GNNs. As finding the optimal augmentation strategies requires combinatorial optimization, it is impractical in the real world due to the huge computational cost. We propose AutoGDA, which learns community-customized augmentation strategies with a reinforcement learning (RL) approach, inspired by the auto-augmentation literature in computer vision [2, 5]. Specifically, given communities in a graph, AutoGDA relies on an RL agent to sequentially pick the optimal strategy from several graph data augmentation operations for each community. The RL agent in AutoGDA generalizes the learning and selection of augmentation methods from one community to another, and thus automates and accelerates the process of finding localized augmentation strategies.
22
+
23
+ We conduct extensive experiments across different GNN backbones and datasets to evaluate AutoGDA against state-of-the-art baselines. We demonstrate that AutoGDA with traditional community detection algorithms (e.g., the Louvain method [25]) and existing graph data augmentation operations (DropEdge [18], GAugM [13], and AttrMask [26]) can achieve consistent performance improvements over the baselines. Specifically, AutoGDA improves on the best-performing baseline method by up to 12.5%. Moreover, we show that the graph representations learned by AutoGDA are robust against graph adversarial attacks [27].
24
+
25
+ Our main contributions are as follows.
26
+
27
+ - We tackle the problem of automated graph data augmentation for supervised node classification by proposing community-customized augmentations from a localized perspective. To the best of our knowledge, we are the first to investigate community-customized graph data augmentation for the task of node classification.
28
+
29
+ - We propose AutoGDA, an RL-based framework that automatically learns optimal community-customized graph data augmentation strategies. The AutoGDA framework is flexible in its choice of augmentation operations and can be easily generalized to heterogeneous graphs.
30
+
31
+ - We conduct extensive experiments on eight benchmark datasets (including two real industrial graph anomaly detection benchmarks) with three widely used GNN backbones to validate AutoGDA. The experimental results show that (1) AutoGDA consistently outperforms state-of-the-art graph data augmentation baselines across all datasets, and (2) AutoGDA learns robust representations that give comparable or better classification performance than state-of-the-art graph defense methods against adversarial attacks.
32
+
33
+ ## 2 Preliminaries and Problem Definition
34
+
35
+ Notations Let $G = \left( {\mathcal{V},\mathcal{E}}\right)$ be an undirected graph of $N$ nodes, where $\mathcal{V} = \left\{ {{v}_{1},{v}_{2},\ldots ,{v}_{N}}\right\}$ is the set of nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. We denote the adjacency matrix as $\mathbf{A} \in \{ 0,1{\} }^{N \times N}$ , where ${A}_{i, j} = 1$ indicates nodes ${v}_{i}$ and ${v}_{j}$ are connected. We denote the node feature matrix as $\mathbf{X} \in {\mathbb{R}}^{N \times F}$ , where $F$ is the number of features and ${\mathbf{x}}_{i}$ (the $i$ -th row of $\mathbf{X}$ ) indicates the feature vector of node ${v}_{i}$ . We denote the node labels for classification as $\mathbf{y} \in \{ 1,\ldots , M{\} }^{N}$ , where $M$ is the number of classes. We denote the set of graph communities as $\mathcal{C} = \left\{ {{C}_{1},{C}_{2},\ldots ,{C}_{{N}_{c}}}\right\}$ where ${N}_{c}$ is the number of communities and each community ${C}_{k}$ is defined by a set of nodes ${\mathcal{V}}_{Ck}$ s.t. ${\mathcal{V}}_{Ci} \cap {\mathcal{V}}_{Cj} = \varnothing ,\forall i, j \in \left\{ {1,2,\ldots ,{N}_{c}}\right\}$ and $i \neq j$ . We denote the subgraph containing the community ${C}_{k}$ as ${G}_{Ck} = \left( {{\mathcal{V}}_{Ck},{\mathcal{E}}_{Ck}}\right)$ , where ${\mathcal{E}}_{Ck} \subseteq {\mathcal{V}}_{Ck} \times {\mathcal{V}}_{Ck}$ is the set of edges within this subgraph. With a bit of notation abuse, we use the union symbol to denote the combination of subgraphs, i.e., $G = \mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}{G}_{Ck}$ .
36
+
37
+ Graph Neural Networks Without the loss of generality, we take the commonly used graph convolutional network (GCN) [7] as an example when explaining GNNs in the following sections. The graph convolution operation of each GCN layer is defined as ${\mathbf{H}}^{\left( l\right) } = \sigma \left( {{\widetilde{\mathbf{D}}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}{\widetilde{\mathbf{D}}}^{-\frac{1}{2}}{\mathbf{H}}^{\left( l - 1\right) }{\mathbf{W}}^{\left( l\right) }}\right)$ , where $l$ is the layer index, $\widetilde{\mathbf{A}} = \mathbf{A} + \mathbf{I}$ is the adjacency matrix with added self-loops, $\widetilde{\mathbf{D}}$ is the diagonal degree matrix ${\widetilde{D}}_{ii} = \mathop{\sum }\limits_{j}{\widetilde{A}}_{ij},{\mathbf{H}}^{\left( 0\right) } = \mathbf{X},{\mathbf{W}}^{\left( l\right) }$ is the learnable weight matrix at the $l$ -th layer, and $\sigma \left( \cdot \right)$ denotes a nonlinear activation such as ReLU.
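+
+ As a concrete reference, one such graph convolution can be written in a few lines of PyTorch; this is a generic dense-matrix sketch of the layer defined above, not the authors' implementation.
+
+ ```python
+ import torch
+
+ def gcn_layer(A, H, W):
+     """sigma(D^-1/2 (A + I) D^-1/2 H W) with ReLU as the nonlinearity."""
+     A_tilde = A + torch.eye(A.size(0))
+     d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)
+     A_norm = d_inv_sqrt.unsqueeze(1) * A_tilde * d_inv_sqrt.unsqueeze(0)
+     return torch.relu(A_norm @ H @ W)
+
+ # 5 nodes on a path graph, 8 input features, 4 output features.
+ A = torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
+ H1 = gcn_layer(A, torch.randn(5, 8), torch.randn(8, 4))  # shape (5, 4)
+ ```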
38
+
39
+ Graph Data Augmentation We follow prior literature [13] and classify graph data augmentation methods for node classification into two categories: stochastic operations for the original-graph setting and deterministic operations for the modified-graph setting. Let $h : G \rightarrow {G}_{m}$ be a graph data augmentation operation that generates a variant ${G}_{m}$ of the original graph $G$ . In the original-graph setting, $h$ can be stochastic and applying it $T$ times results in $T$ graph variants ${G}_{m}$ , such that $G \cup {\left\{ {G}_{m}^{i}\right\} }_{i = 1}^{T}$ is used in training while only $G$ is used for inference. On the other hand, in the modified-graph setting, $h$ is deterministic and outputs one ${G}_{m}$ , such that ${G}_{m}$ replaces $G$ for both training and inference.
40
+
41
+ In this work, we consider four typical state-of-the-art graph data augmentation operations for node classification: $\mathcal{A} = \{$ DROPEDGE, ATTRMASK, GAUGM_ADD, GAUGM_RM ${\} }^{1}$ , which include both stochastic and deterministic operations and apply to both the graph structure and the node features. It is worth noting that our proposed AutoGDA is not limited to these four augmentation operations and can take any set of graph data augmentation operations as $\mathcal{A}$ .
42
+
43
+ DROPEDGE [18] and ATTRMASK [26] are stochastic augmentation operations: they randomly drop (mask) a given percentage of edges (node attributes) in each training epoch of the GNN. DROPEDGE implies the graph structure has a certain robustness to the edge connectivity and also alleviates the well-known over-smoothing problem of GNNs [18]. ATTRMASK encourages GNNs to recover masked node attributes from their context information in the local neighborhood. On the other hand, GAUGM_ADD and GAUGM_RM [13] are deterministic augmentation operations that modify the graph structure to promote the graph's homophily and hence improve the model's performance for node classification [13]. GAUGM_ADD and GAUGM_RM use the edge probabilities predicted for all node pairs by VGAE [28] and add new edges (remove existing edges) with the highest (lowest) edge probabilities.
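+
+ To make the stochastic operations concrete, a minimal sketch of DROPEDGE- and ATTRMASK-style augmentation on COO-format graph data is shown below; the function names and the entry-wise masking variant are ours, chosen for illustration.
+
+ ```python
+ import torch
+
+ def drop_edge(edge_index, p):
+     """Keep each edge of a (2, E) edge list independently with probability 1 - p."""
+     keep = torch.rand(edge_index.size(1)) >= p
+     return edge_index[:, keep]
+
+ def attr_mask(x, p):
+     """Zero out each entry of the (N, F) feature matrix independently with probability p."""
+     return x * (torch.rand_like(x) >= p)
+
+ edge_index = torch.randint(0, 10, (2, 40))
+ x = torch.randn(10, 16)
+ edge_index_aug, x_aug = drop_edge(edge_index, 0.2), attr_mask(x, 0.1)  # redrawn every epoch
+ ```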
44
+
45
+ Each of the above four augmentation operations has one parameter controlling the percentage magnitude of the augmentation, resulting in a ${100}^{4}$ search space when finding the optimal strategy for each dataset. As we separate the graph into multiple communities and aim to search for the optimal community-customized augmentation strategy, the search space grows to ${100}^{4 \times {N}_{c}}$ , where ${N}_{c}$ is the number of communities (already ${10}^{160}$ configurations for ${N}_{c} = {20}$ ). As ${N}_{c}$ gets larger, the search space becomes infeasible for traditional parameter search methods such as grid search. Therefore, a new efficient approach for automated graph augmentation search is desired.
46
+
47
+ Problem Definition Following the definition of previous literature [13, 18] on graph data augmentation for node classification, the problem of finding a hand-crafted graph data augmentation strategy can be defined as follows. Given the graph data $G$ , a set of graph data augmentation operations $\mathcal{A}$ , and GNN model $f : G \rightarrow \widehat{\mathbf{y}}$ , find the best strategy of applying $\mathcal{A}$ on $G$ such that the node classification performance of $f$ on $G$ is maximized.
48
+
49
+ In this work, as we propose to find the best augmentation strategy for each community in the graph, the problem of automated graph data augmentation is then defined as: given the graph data $G$ , graph communities $\mathcal{C}$ in $G$ (the community detection method for generating $\mathcal{C}$ is treated as a hyperparameter), a set of graph data augmentation operations $\mathcal{A}$ , and GNN model $f : G \rightarrow \widehat{\mathbf{y}}$ , find the best strategy of applying $\mathcal{A}$ on the subgraphs of each community ${C}_{k} \in \mathcal{C}$ such that the node classification performance of $f$ on $G$ is maximized. The main difference between our problem definition and the previous literature is the use of graph communities $\mathcal{C}$ to find the best data augmentation strategies.
50
+
51
+ ---
52
+
53
+ ${}^{1}$ To differentiate from the baseline methods (normal font), we use the small caps font to denote the augmentation operations.
54
+
55
+ ---
56
+
57
+ ![01963ecd-3031-794b-bbc2-e8594ad0053c_3_323_203_1154_238_0.jpg](images/01963ecd-3031-794b-bbc2-e8594ad0053c_3_323_203_1154_238_0.jpg)
58
+
59
+ Figure 1: Graph communities detected by the Louvain method on the PubMed dataset show diverse distribution on different characteristics of graph structure.
60
+
61
+ ![01963ecd-3031-794b-bbc2-e8594ad0053c_3_322_528_1149_315_0.jpg](images/01963ecd-3031-794b-bbc2-e8594ad0053c_3_322_528_1149_315_0.jpg)
62
+
63
+ Figure 2: Overview of one iteration of our proposed AutoGDA on an example graph with three communities. In each step, the policy network takes the observation of one graph community as input and outputs the augmentation strategy for it. The GNN is then fine-tuned with the augmented graph and the validation accuracy is used as the reward to update the policy network.
64
+
65
+ ## 3 Automated Graph Data Augmentation
66
+
67
+ Section 3.1 shows our motivation of customizing graph data augmentation strategies for different communities. Section 3.2 models automated graph data augmentation as a bilevel optimization problem. Section 3.3 presents the reinforcement learning-based framework AutoGDA.
68
+
69
+ ### 3.1 Motivation
70
+
71
+ Community structures naturally exist in graphs $\left\lbrack {{19},{20}}\right\rbrack$ and graph community detection has been extensively studied in the past few decades [29]. Graph community detection methods (e.g., the Louvain method [25]) separate the set of nodes in the graph into disjoint subsets such that the quality of the communities, usually measured by modularity [20], is maximized. Thus, the nodes within the same community are more densely connected and also more important to each other for node classification [21] compared with nodes in different communities.
72
+
73
+ Our idea is based on the observation that different communities in the same graph mostly show disparate data distributions, as has also been shown in previous literature [22-24]. The structure of communities commonly varies in terms of density, centrality, etc. For example, Figure 1 shows the characteristics of communities (detected by Louvain) on the PubMed graph [7]. Figures 1(a), 1(b), and 1(c) show the distributions of averaged degree, number of triangles, and centrality, respectively. Figure 1(d) presents the homophily ratio [30] of community subgraphs, where the homophily ratio is calculated as the fraction of edges that connect nodes with the same class label. Previous works $\left\lbrack {{11},{13}}\right\rbrack$ have shown that graph homophily is strongly correlated with the node classification performance of GNNs, because semi-supervised graph learning methods are mainly based on the homophily assumption [31]. From Figure 1 we observe that the communities are indeed disparate, with different distributions under different measurements.
74
+
75
+ Moreover, for certain deterministic graph data augmentation methods such as GAugM [13], minority communities may be ignored during the augmentation process. GAugM [13] modifies the graph structure according to the edge probabilities given by a trained edge predictor: it adds missing edges with the highest probabilities and removes existing edges with the lowest probabilities. However, it is possible that all of the modifications happen in only one or a few graph communities, namely those showing the strongest homophily patterns.
76
+
77
+ With the above observations on the disparate characteristics of graph communities, we argue that applying the same augmentation operations to the whole graph, as state-of-the-art methods do, may not be the best practice for graph data augmentation. The auto-augmentation literature in computer vision [2, 5] has shown that customizing augmentation operations for data objects/batches is more effective than using the same strategy for the entire dataset. Although it is infeasible to learn the best operation for each data object (node) for node classification due to the dependent nature of graph data, graph data augmentation could benefit from a customized augmentation strategy for each community. The next sub-section formulates the problem of automated graph data augmentation via community customization.
78
+
79
+ ### 3.2 Bilevel Optimization Formulation
80
+
81
+ We formulate the problem of automated graph data augmentation in a similar way to the auto-augmentation problem in vision tasks $\left\lbrack {2,5}\right\rbrack$ : it aims to find a set of graph data augmentation operations for each community in the graph, which maximizes the performance of a graph neural network model on the task of (semi-)supervised node classification.
82
+
83
+ Let the graph data augmentation policy network be defined as ${g}_{\theta } : G \rightarrow \{ 0,1,\ldots ,{99}\} \left| \mathcal{A}\right|$ , which is a multi-layer perceptron (MLP) that is parameteraized by $\theta$ . The policy takes a (sub)graph as input and outputs the augmentation strategy for this (sub)graph, which in our case is the four percentage magnitudes for $\mathcal{A} = \{$ DROPEDGE, ATTRMASK, GAUGM_ADD, GAUGM_RM $\}$ . Let $\operatorname{aug}\left( {{g}_{\theta }\left( G\right) }\right)$ be a function that applies the augmentation strategy ${g}_{\theta }\left( G\right)$ on (sub)graph $G$ , then the automated graph data augmentation process for subgraph ${G}_{Ck}$ is formulated as
84
+
85
+ $$
86
+ {G}_{Ck}^{\prime } = \operatorname{aug}\left( {{g}_{\theta }\left( {G}_{Ck}\right) }\right) \tag{1}
87
+ $$
88
+
89
+ where ${G}_{Ck}^{\prime }$ denotes the subgraph ${G}_{Ck}$ after augmentation.
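+
+ A minimal sketch of such a policy network in PyTorch is shown below; here the policy consumes a pooled community representation (as used later in Section 3.3) and emits one integer magnitude per operation in $\mathcal{A}$ . The architecture and names are illustrative assumptions, not the exact implementation.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class AugmentationPolicy(nn.Module):
+     """g_theta: community representation -> |A| = 4 augmentation magnitudes in {0, ..., 99}."""
+     def __init__(self, in_dim, n_ops=4, hidden=64):
+         super().__init__()
+         self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
+                                  nn.Linear(hidden, n_ops))
+
+     def forward(self, community_repr):
+         # Squash to [0, 1) and discretise to integer percentages.
+         return (torch.sigmoid(self.net(community_repr)) * 100).long()
+
+ policy = AugmentationPolicy(in_dim=32)
+ magnitudes = policy(torch.randn(32))  # e.g. tensor([12,  0, 35,  7]) for the four operations
+ ```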
90
+
91
+ We denote the GNN model as ${f}_{\omega } : G \rightarrow \widehat{\mathbf{y}}$ , which is parameterized by $\omega$ . It takes the graph as input and outputs the predicted node labels $\widehat{\mathbf{y}}$ . Let ${\mathbf{y}}_{tr}$ and ${\mathbf{y}}_{val}$ denote the node labels of the training set and validation set, respectively. The objective of obtaining the best augmentation policy (solving for $\theta$ ) can be described as a bilevel optimization problem [32]. The inner level is the model weight optimization, which solves for the optimal ${\omega }_{\theta }$ given a fixed augmentation policy $\left( {g}_{\theta }\right)$ :
92
+
93
+ $$
94
+ {\omega }_{\theta } = \underset{\omega }{\arg \min }\mathcal{L}\left( {{f}_{\omega }\left( {\mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}\operatorname{aug}\left( {{g}_{\theta }\left( {G}_{Ck}\right) }\right) }\right) ,{\mathbf{y}}_{tr}}\right) , \tag{2}
95
+ $$
96
+
97
+ where $\mathcal{L}$ denotes the loss function (cross entropy).
98
+
99
+ The outer level is the augmentation policy optimization, which is optimizing the policy parameter $\theta$ using the result of the inner level problem. Here we take the validation performance (accuracy) as the optimization objective. Then we have the problem formulated as below:
100
+
101
+ $$
102
+ {\theta }^{ * } = \underset{\theta }{\arg \max }{ACC}\left( {{f}_{{\omega }_{\theta }}\left( {\mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}\operatorname{aug}\left( {{g}_{\theta }\left( {G}_{Ck}\right) }\right) }\right) ,{\mathbf{y}}_{\text{val }}}\right) , \tag{3}
103
+ $$
104
+
105
+ $$
106
+ \text{where } {\omega }_{\theta } = \underset{\omega }{\arg \min }\mathcal{L}\left( {\omega ,\theta }\right) \;\left( \text{Eq. (2)}\right) ,
107
+ $$
108
+
109
+ where ${\theta }^{ * }$ denotes the parameter of the optimal policy, and ${ACC}\left( {f\left( G\right) ,{\mathbf{y}}_{\text{val }}}\right)$ denotes the validation accuracy.
110
+
111
+ ### 3.3 AutoGDA Framework
112
+
113
+ As the graph data augmentation operations $\mathcal{A}$ modify the graph structure and also affect the training of the GNN model, applying the augmentation operations community by community lets us formulate graph data augmentation as a sequential process on the graph. Figure 2 illustrates this sequential process: in each step of the iteration, the policy network takes the observation of a community and outputs the action containing the set of augmentation magnitudes to be applied to this community. We finetune the GNN model after applying the augmentations and take the validation accuracy as the reward.
114
+
115
+ Parameter Sharing As solving bilevel optimization problems is extremely time-consuming due to the repeated solving of the inner loop [32], we utilize the weight-sharing scheme for automated
116
+
117
+ augmentation proposed by Tian et al. [5]. At the start of each episode, we reset the current graph to the original graph, pretrain the GNN model on the original graph (without optimizing the outer loop or applying any data augmentation operation), and obtain $\bar{\omega }$ . That is,
118
+
119
+ $$
120
+ \bar{\omega } = \underset{\omega }{\arg \min }\mathcal{L}\left( {{f}_{\omega }\left( G\right) ,{\mathbf{y}}_{tr}}\right) \tag{4}
121
+ $$
122
+
123
+ In each step of this episode, instead of training a new GNN model from scratch to get ${\omega }_{\theta }$ , we load the parameters $\bar{\omega }$ from pretraining and finetune the GNN model for only a small number of epochs with the given actions (augmentation strategy) to get ${\bar{\omega }}_{\theta }$ . Therefore, the outer level for optimizing the augmentation policy parameters becomes
124
+
125
+ $$
126
+ {\theta }^{ * } = \underset{\theta }{\arg \max }{ACC}\left( {{f}_{{\bar{\omega }}_{\theta }}\left( {\mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}\operatorname{aug}\left( {{g}_{\theta }\left( {G}_{Ck}\right) }\right) }\right) ,{\mathbf{y}}_{val}}\right) . \tag{5}
127
+ $$
128
+
129
+ Reinforcement Learning (RL) Environment The set of graph data augmentation operations $\mathcal{A}$ contains both stochastic and deterministic augmentation operations, where the stochastic operations (DROPEDGE, ATTRMASK) affect the GNN model in training and the deterministic operations (GAUGM_ADD, GAUGM_RM) directly modify the graph structure. Therefore, as we apply the augmentation strategy on each community, the node/graph representation obtained by the GNN trained with the augmentations also changes. This process forms a Markov decision process whose length is equal to the number of communities.
130
+
131
+ In the RL environment, we take the current graph with the given augmentation strategy as the state and use the graph representation of one community as the observation for one step. That is, for each graph community ${C}_{k}$ , the observation is the pooled graph representation of the subgraph ${G}_{Ck}$ , i.e., the element-wise mean of the node representations of all nodes in ${\mathcal{V}}_{Ck}$ . As the output dimension of the GNN's last layer is the number of unique labels, which is usually very small, we take the output of the GNN's second-to-last layer as the node representations. The policy network ${g}_{\theta }$ takes the observation (i.e., the graph representation of the community) as input and outputs the magnitudes of the different augmentation operations for the community. Note that an augmentation operation is not applied if its magnitude is zero.
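+
+ A small sketch of how such an observation can be computed; `node_repr` stands for the penultimate-layer output of the GNN and the node indices are illustrative.
+
+ ```python
+ import torch
+
+ def community_observation(node_repr, community_nodes):
+     """Mean-pool the node representations of one community into a single observation vector."""
+     return node_repr[community_nodes].mean(dim=0)
+
+ node_repr = torch.randn(100, 64)                    # GNN second-to-last layer, 100 nodes
+ obs = community_observation(node_repr, torch.tensor([3, 17, 42, 56]))  # shape (64,)
+ ```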
132
+
133
+ For optimizing our proposed method, we opt for simplicity and employ the widely used Proximal Policy Optimization (PPO) [33] algorithm. We use the validation performance of the GNN model after finetuning in each step as the reward for the RL policy.
134
+
135
+ Summary Algorithm 1 summarizes
136
+
137
+ ---
138
+
139
+ Algorithm 1: AutoGDA
140
+
141
+ ---
142
+
143
+ Input : $G,\mathcal{C},{\mathbf{y}}_{tr},{\mathbf{y}}_{val}$
144
+
145
+ /* Policy Optimization */
146
+
147
+ for episode in range(n_episodes) do
148
+
149
+ Pretrain GNN and obtain $\bar{\omega }$ by Eq. (4) ;
150
+
151
+ $\left\{ {{G}_{C1}^{\prime },\ldots ,{G}_{C{N}_{c}}^{\prime }}\right\} = \left\{ {{G}_{C1},\ldots ,{G}_{C{N}_{c}}}\right\} ;$
152
+
153
+ for $k$ in $\left\{ {1,2,\ldots ,{N}_{c}}\right\}$ do
154
+
155
+ ${G}_{Ck}^{\prime } = \operatorname{aug}\left( {{g}_{\theta }\left( {G}_{Ck}\right) }\right) ;\;//$ Eq. (1)
156
+
157
+ Load $\bar{\omega }$ ;
158
+
159
+ Finetune GNN with $\mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}{G}_{Ck}^{\prime }$ and get ${\bar{\omega }}_{\theta }$ ;
160
+
161
+ Use val. ACC to update $\theta ;\;//$ Eq. (5)
162
+
163
+ end
164
+
165
+ end
166
+
167
+ ${\theta }^{ * } = \theta$ ;
168
+
169
+ /* Inference */
170
+
171
+ ${G}^{ * } = \mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}\operatorname{aug}\left( {{g}_{{\theta }^{ * }}\left( {G}_{Ck}\right) }\right) ;$
172
+
173
+ Reset and train GNN with ${G}^{ * }$ and get ${\widehat{\mathbf{y}}}_{\text{test }}$ ;
174
+
175
+ Output: ${\widehat{\mathbf{y}}}_{\text{test }}$
176
+
177
+ ---
178
+
179
+ ---
180
+
181
+ the whole process of AutoGDA. In each episode of the policy optimization stage, we first pretrain the GNN model by Eq. (4) and reset the graph. The subgraphs ${G}_{Ck}^{\prime }$ in line 3 are for tracking the augmented communities. Then for each community, we first obtain and apply its augmentation strategy given by the policy (line 5), then load the pretrained parameters $\bar{\omega }$ for the GNN model, finetune it with the current augmentation strategies, and use the validation accuracy as the reward to update the policy. After the policy network is sufficiently trained, we obtain the final graph data augmentation strategy for the whole graph (all communities) from the trained policy ${g}_{{\theta }^{ * }}$ . Finally, we reset the GNN and train it with $\mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}\operatorname{aug}\left( {{g}_{{\theta }^{ * }}\left( {G}_{Ck}\right) }\right)$ to obtain the predicted labels ${\widehat{\mathbf{y}}}_{\text{test }}$ .
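+
+ The toy script below mirrors only the control flow of Algorithm 1 (episodes, one step per community, reward-driven strategy updates). Every component is a deliberately trivial stand-in: random strategy proposals and a simulated reward replace the policy network and the finetuned GNN, so this is a structural sketch rather than the AutoGDA implementation.
+
+ ```python
+ import random
+
+ communities = {k: list(range(k * 10, k * 10 + 10)) for k in range(3)}  # 3 toy communities
+ strategy = {k: [0, 0, 0, 0] for k in communities}                      # magnitudes per community
+
+ def propose_strategy(obs):               # stand-in for the policy network g_theta
+     return [random.randrange(100) for _ in range(4)]
+
+ def finetune_and_validate(strategies):   # stand-in for finetuning the GNN and measuring val. ACC
+     return random.random()
+
+ best_reward = -1.0
+ for episode in range(5):                 # outer loop of Algorithm 1
+     for k in communities:                # one step per community
+         candidate = {**strategy, k: propose_strategy(communities[k])}
+         reward = finetune_and_validate(candidate)   # AutoGDA would update the policy via PPO here
+         if reward > best_reward:                    # toy update rule: keep the best strategy seen
+             best_reward, strategy = reward, candidate
+ print(strategy)
+ ```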
182
+
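+ The parameter-sharing pattern in lines 2 and 6 of Algorithm 1 (pretrain once per episode, reload $\bar{\omega }$ before every per-community finetuning) can be sketched as follows; train_fn, finetune_fn, and val_acc_fn are hypothetical placeholders standing in for the corresponding steps, not the authors' code:
+
+ ```python
+ import copy
+
+ def run_episode(gnn, graph, communities, policy, train_fn, finetune_fn, val_acc_fn):
+     train_fn(gnn, graph)                          # pretrain and obtain w_bar (Eq. (4))
+     w_bar = copy.deepcopy(gnn.state_dict())       # cache the pretrained parameters
+     aug = list(communities)                       # G'_Ck, initially the original subgraphs
+     rewards = []
+     for k, G_k in enumerate(communities):
+         aug[k] = policy.augment(G_k)              # aug(g_theta(G_Ck)), Eq. (1)
+         gnn.load_state_dict(w_bar)                # reload w_bar (line 6 of Algorithm 1)
+         finetune_fn(gnn, aug)                     # finetune on the union of subgraphs
+         rewards.append(val_acc_fn(gnn))           # validation accuracy -> policy reward
+     return rewards
+ ```
+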
183
+ For the edges that connect nodes belonging to different communities, we optionally group them into a special community of edges $\left( {C}_{{N}_{c} + 1}\right)$ in AutoGDA. In the case we use this community, it is only
184
+
185
+ Table 1: Node classification accuracy across GNN architectures and public benchmarks. Bold denotes the best performance and comparable ones (within the standard deviation of the best).
186
+
187
+ <table><tr><td>GNN</td><td>Method</td><td>Cora</td><td>CiteSeer</td><td>PubMed</td><td>Flickr</td></tr><tr><td rowspan="7">GCN</td><td>Original</td><td>${81.5} \pm {0.4}$</td><td>${70.3} \pm {0.5}$</td><td>${79.0} \pm {0.3}$</td><td>${61.2} \pm {0.4}$</td></tr><tr><td>+AdaEdge</td><td>${81.9} \pm {0.7}$</td><td>${72.8} \pm {0.7}$</td><td>79.8±0.4</td><td>${61.2} \pm {0.5}$</td></tr><tr><td>+DropEdge</td><td>${82.0} \pm {0.8}$</td><td>${71.8} \pm {0.2}$</td><td>79.3±0.3</td><td>61.4±0.7</td></tr><tr><td>+FLAG</td><td>${80.2} \pm {0.3}$</td><td>${68.1} \pm {0.5}$</td><td>${78.5} \pm {0.2}$</td><td>${62.3} \pm {0.4}$</td></tr><tr><td>+GAugM</td><td>${83.5} \pm {0.4}$</td><td>${72.3} \pm {0.4}$</td><td>${80.2} \pm {0.3}$</td><td>${68.2} \pm {0.7}$</td></tr><tr><td>+GAugO</td><td>${83.6} \pm {0.5}$</td><td>73.3±1.1</td><td>79.3±0.4</td><td>${62.2} \pm {0.3}$</td></tr><tr><td>+AutoGDA</td><td>$\mathbf{{84.4}} \pm {0.3}$</td><td>73.0±0.4</td><td>$\mathbf{{81.6}} \pm {0.5}$</td><td>71.4±0.5</td></tr><tr><td rowspan="7">GSAGE</td><td>Original</td><td>${81.3} \pm {0.5}$</td><td>${70.6} \pm {0.5}$</td><td>${78.3} \pm {0.6}$</td><td>57.4±0.5</td></tr><tr><td>+AdaEdge</td><td>${81.5} \pm {0.6}$</td><td>${71.3} \pm {0.8}$</td><td>${78.5} \pm {0.2}$</td><td>57.7±0.7</td></tr><tr><td>+DropEdge</td><td>${81.6} \pm {0.5}$</td><td>${70.8} \pm {0.5}$</td><td>78.7±0.7</td><td>${58.4} \pm {0.7}$</td></tr><tr><td>+FLAG</td><td>79.2±0.9</td><td>${67.9} \pm {1.4}$</td><td>77.4±0.3</td><td>${48.5} \pm {0.6}$</td></tr><tr><td>+GAugM</td><td>83.2±0.4</td><td>71.2±0.4</td><td>${78.7} \pm {0.3}$</td><td>${65.2} \pm {0.4}$</td></tr><tr><td>+GAugO</td><td>${82.0} \pm {0.5}$</td><td>${72.7} \pm {0.7}$</td><td>79.4±0.9</td><td>${56.3} \pm {0.6}$</td></tr><tr><td>+AutoGDA</td><td>83.2±0.5</td><td>${72.5} \pm {0.4}$</td><td>$\mathbf{{80.0}} \pm {0.5}$</td><td>73.4±0.6</td></tr><tr><td rowspan="7">GAT</td><td>Original</td><td>${83.0} \pm {0.7}$</td><td>${72.5} \pm {0.7}$</td><td>${79.0} \pm {0.3}$</td><td>${46.9} \pm {1.6}$</td></tr><tr><td>+AdaEdge</td><td>${82.0} \pm {0.6}$</td><td>${71.1} \pm {0.8}$</td><td>79.1±0.6</td><td>${48.2} \pm {1.0}$</td></tr><tr><td>+DropEdge</td><td>${81.9} \pm {0.6}$</td><td>${71.0} \pm {0.5}$</td><td>${78.9} \pm {0.6}$</td><td>${50.0} \pm {1.6}$</td></tr><tr><td>+FLAG</td><td>79.6±0.6</td><td>67.7±0.7</td><td>${78.2} \pm {0.5}$</td><td>${48.9} \pm {1.1}$</td></tr><tr><td>+GAugM</td><td>${82.1} \pm {1.0}$</td><td>${71.5} \pm {0.5}$</td><td>79.0±0.5</td><td>${63.7} \pm {0.9}$</td></tr><tr><td>+GAugO</td><td>${82.2} \pm {0.8}$</td><td>71.6±1.1</td><td>${78.5} \pm {0.8}$</td><td>${51.9} \pm {0.5}$</td></tr><tr><td>+AutoGDA</td><td>$\mathbf{{84.8}} \pm {0.2}$</td><td>73.2±0.4</td><td>79.8±0.6</td><td>65.1±0.9</td></tr></table>
188
+
189
+ disjoint with the other communities in edges (i.e., ${\mathcal{V}}_{C{N}_{c} + 1} \cap \left( {{ \cup }_{k = 1}^{{N}_{c}}{\mathcal{V}}_{Ck}}\right) = \varnothing$ and ${\mathcal{E}}_{C{N}_{c} + 1} = \mathcal{E} \smallsetminus \left( {{ \cup }_{k = 1}^{{N}_{c}}{\mathcal{E}}_{Ck}}\right)$ ), so we only apply DROPEDGE and GAUGM_RM on it, as the other two augmentation operations are covered by the other communities.
190
+
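+ In set terms, this optional community simply collects the edges not covered by any detected community. A minimal sketch with plain Python sets (function and variable names are illustrative):
+
+ ```python
+ def leftover_edge_community(all_edges, community_edge_sets):
+     """Edges connecting different communities form the extra community C_{N_c+1}."""
+     covered = set().union(*community_edge_sets)        # union of all E_Ck
+     return {e for e in all_edges if e not in covered}  # E \ (union of E_Ck)
+
+ # Tiny example: edge (1, 2) crosses the two communities {0, 1} and {2, 3}.
+ print(leftover_edge_community({(0, 1), (1, 2), (2, 3)}, [{(0, 1)}, {(2, 3)}]))
+ ```
+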
191
+ ## 4 Experiments
192
+
193
+ In this section, we evaluate the performance of the proposed AutoGDA across different GNN backbones and datasets, and over alternative graph data augmentation methods.
194
+
195
+ ### 4.1 Experimental Setup
196
+
197
+ Datasets We evaluate with 4 public benchmark datasets across domains: citation networks with strong homophily (Cora, CiteSeer, PubMed [7]) and a social network that exhibits heterophily [30] (Flickr [34]). We also evaluate with 2 real industry application benchmarks, ECom20K and ECom43K, for the task of graph anomaly detection. To validate the possibility of extending AutoGDA to heterogeneous graphs, we also evaluate with 2 heterogeneous graph benchmarks for the task of entity classification: AIFB and MUTAG [35]. Details for the datasets are provided in Appendix B.
198
+
199
+ Baselines We evaluate AutoGDA and the baselines using 3 widely used GNN architectures: GCN [7], GraphSAGE [8], and GAT [36]. We compare the node classification performance of AutoGDA with that of standard GNNs, as well as five state-of-the-art graph data augmentation baselines: AdaEdge [11], DropEdge [18], FLAG [12], GAugM [13], and GAugO [13]. To evaluate the robustness of AutoGDA under graph adversarial attacks, we evaluate AutoGDA and state-of-the-art graph defense methods (GNN-Jaccard [37] and ElasticGNN [38]) against the graph adversarial attacks DICE [39] and Metattack [27]. Moreover, to evaluate the generalizability of AutoGDA on heterogeneous graphs, we also test AutoGDA and baseline methods with R-GCN [40] on the heterogeneous graph benchmarks.
200
+
201
+ ### 4.2 Experimental Results
202
+
203
+ We show comparative results of AutoGDA and baseline methods in Tables 1 and 2 and Figure 3. Table 1 is organized by GNN architecture (rows), dataset (columns), and method (within rows). Table 2 is organized by dataset (rows), graph adversarial attack method (columns), attack level (within columns), and method (within rows). Due to customer privacy concerns, Figure 3 reports relative improvements instead of absolute performances. We bold the best and comparable performances. In short, our proposed AutoGDA consistently achieves the best or comparable performance in all combinations of GNN architectures and datasets.
204
+
205
+ ![01963ecd-3031-794b-bbc2-e8594ad0053c_7_524_209_756_599_0.jpg](images/01963ecd-3031-794b-bbc2-e8594ad0053c_7_524_209_756_599_0.jpg)
206
+
207
+ Figure 3: Relative improvements over GNNs for accuracy on two real-world industry anomaly detection datasets.
208
+
209
+ Table 2: Node classification accuracy against different levels of adversarial attacks. Bold denotes the best performance and comparable ones (within the standard deviation of the best).
210
+
211
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Attack Method Ptb. Rate</td><td colspan="3">DICE [39]</td><td colspan="3">Metattack [27]</td></tr><tr><td>10%</td><td>30%</td><td>50%</td><td>10%</td><td>30%</td><td>50%</td></tr><tr><td rowspan="5">Cora</td><td>GCN [7]</td><td>${78.4} \pm {0.6}$</td><td>${73.6} \pm {0.9}$</td><td>${66.8} \pm {1.2}$</td><td>${70.2} \pm {0.9}$</td><td>${32.6} \pm {2.0}$</td><td>${16.6} \pm {0.8}$</td></tr><tr><td>GAugO [13]</td><td>${78.8} \pm {0.4}$</td><td>74.0±0.3</td><td>67.3±0.5</td><td>${72.5} \pm {1.1}$</td><td>57.1±0.7</td><td>${40.6} \pm {1.1}$</td></tr><tr><td>GNN-Jaccard [37]</td><td>73.1±0.5</td><td>${66.4} \pm {0.5}$</td><td>${66.9} \pm {0.4}$</td><td>${72.1} \pm {0.7}$</td><td>${51.6} \pm {0.5}$</td><td>${38.4} \pm {0.7}$</td></tr><tr><td>ElasticGNN [38]</td><td>79.8±0.8</td><td>${74.5} \pm {0.8}$</td><td>67.6±1.4</td><td>74.3±1.2</td><td>${49.0} \pm {1.3}$</td><td>${35.4} \pm {1.4}$</td></tr><tr><td>AutoGDA</td><td>$\mathbf{{80.2}} \pm {0.7}$</td><td>74.7±0.3</td><td>67.9±0.9</td><td>${74.5} \pm {1.0}$</td><td>$\mathbf{{63.2}} \pm {1.9}$</td><td>$\mathbf{{53.7}} \pm {1.2}$</td></tr><tr><td rowspan="5">CiteSeer</td><td>GCN [7]</td><td>${65.8} \pm {1.1}$</td><td>${60.1} \pm {0.8}$</td><td>${56.5} \pm {0.8}$</td><td>${38.4} \pm {1.0}$</td><td>${15.9} \pm {0.8}$</td><td>${10.7} \pm {1.9}$</td></tr><tr><td>GAugO [13]</td><td>67.0±0.5</td><td>${60.9} \pm {0.4}$</td><td>${56.5} \pm {0.9}$</td><td>${50.2} \pm {0.9}$</td><td>${34.3} \pm {0.6}$</td><td>${29.1} \pm {1.2}$</td></tr><tr><td>GNN-Jaccard [37]</td><td>${66.5} \pm {1.3}$</td><td>59.8±0.7</td><td>${54.1} \pm {1.0}$</td><td>${43.7} \pm {1.2}$</td><td>${27.0} \pm {0.7}$</td><td>${19.8} \pm {0.4}$</td></tr><tr><td>ElasticGNN [38]</td><td>${66.7} \pm {1.4}$</td><td>${59.7} \pm {0.8}$</td><td>${56.3} \pm {1.4}$</td><td>47.5±1.3</td><td>${31.8} \pm {0.3}$</td><td>${23.5} \pm {4.3}$</td></tr><tr><td>AutoGDA</td><td>${67.8} \pm {1.1}$</td><td>${62.1} \pm {1.3}$</td><td>57.6±1.1</td><td>${61.5} \pm {1.6}$</td><td>$\mathbf{{52.6}} \pm {1.3}$</td><td>47.8±1.7</td></tr><tr><td rowspan="5">PubMed</td><td>GCN [7]</td><td>${73.9} \pm {0.3}$</td><td>${67.1} \pm {0.3}$</td><td>63.6±0.7</td><td>${67.2} \pm {0.4}$</td><td>${41.3} \pm {1.5}$</td><td>${27.5} \pm {1.7}$</td></tr><tr><td>GAugO [13]</td><td>${74.6} \pm {0.4}$</td><td>${67.8} \pm {0.3}$</td><td>${64.8} \pm {0.5}$</td><td>${71.6} \pm {0.9}$</td><td>${51.8} \pm {0.7}$</td><td>${40.7} \pm {1.0}$</td></tr><tr><td>GNN-Jaccard [37]</td><td>${73.7} \pm {0.4}$</td><td>${67.3} \pm {0.5}$</td><td>${64.2} \pm {0.4}$</td><td>${68.2} \pm {0.4}$</td><td>${41.9} \pm {0.9}$</td><td>${29.1} \pm {1.1}$</td></tr><tr><td>ElasticGNN [38]</td><td>${75.5} \pm {0.6}$</td><td>${68.7} \pm {0.7}$</td><td>65.4±0.7</td><td>${71.6} \pm {0.5}$</td><td>${49.8} \pm {0.4}$</td><td>${40.1} \pm {0.8}$</td></tr><tr><td>AutoGDA</td><td>75.1±0.8</td><td>${68.6} \pm {0.5}$</td><td>65.6±0.4</td><td>73.1±1.2</td><td>${55.9} \pm {1.8}$</td><td>${42.2} \pm {1.6}$</td></tr></table>
212
+
213
+ Effectiveness on graph data augmentation From Table 1 and Figure 3 we observe that our proposed AutoGDA achieves improvements over all three GNN architectures (averaged across datasets): AutoGDA improves 5.0% (GCN), 6.1% (GraphSAGE), and 7.5% (GAT), respectively. From the dataset perspective, AutoGDA also achieves improvements over all 6 datasets (averaged across GNN architectures): AutoGDA improves 2.7%, 2.2%, 2.1%, 27.6%, 0.8%, and 1.7% respectively for each dataset (Cora, CiteSeer, PubMed, Flickr, ECom20k, ECom43k). Notably, AutoGDA is especially effective on social networks (Flickr), which are naturally more heterophilous. Although GAugM [13] outperformed all other baselines by a large margin, AutoGDA still improved significantly over GAugM. Finally, we note that AutoGDA outperforms all graph data augmentation baselines: specifically, AutoGDA improves 5.6%, 5.4%, 6.1%, 2.0%, and 4.7% respectively over AdaEdge, DropEdge, FLAG, GAugM, and GAugO (averaged across datasets and GNNs).
214
+
215
+ Table 3: Ablation experiments using GCN on PubMed.
216
+
217
+ <table><tr><td/><td>PubMed</td></tr><tr><td>DropEdge</td><td>${79.3} \pm {0.3}$</td></tr><tr><td>AutoGDA with DROPEDGE (single community)</td><td>79.3±0.4</td></tr><tr><td>AutoGDA with DROPEDGE</td><td>79.4±0.2</td></tr><tr><td>GAugM</td><td>${80.2} \pm {0.3}$</td></tr><tr><td>AutoGDA with GAUGM (single community)</td><td>${80.2} \pm {0.3}$</td></tr><tr><td>AutoGDA with GAUGM</td><td>${81.2} \pm {0.4}$</td></tr><tr><td>AutoGDA (single community)</td><td>${80.4} \pm {0.3}$</td></tr><tr><td>AutoGDA</td><td>$\mathbf{{81.6}} \pm {0.5}$</td></tr></table>
218
+
219
+ ![01963ecd-3031-794b-bbc2-e8594ad0053c_8_308_620_1183_268_0.jpg](images/01963ecd-3031-794b-bbc2-e8594ad0053c_8_308_620_1183_268_0.jpg)
220
+
221
+ Figure 4: AutoGDA learns diverse augmentation strategies for different communities in the PubMed graph.
222
+
223
+ We reason that learning customized augmentation for each graph community and combining several state-of-the-art graph data augmentation operations both contributed to the performance improvement of AutoGDA.
224
+
225
+ Robustness against graph adversarial attacks From Table 2 we observe that the proposed AutoGDA is able to effectively learn robust representations under graph adversarial attacks. Although the recent baseline ElasticGNN [38] also achieved good performance against Random Injection and DICE [39], AutoGDA outperformed ElasticGNN by large margins under Metattack [27]. In short, our proposed AutoGDA achieved the best performance in 21 out of 27 combinations of datasets and graph adversarial attack methods.
226
+
227
+ Necessity of community-adaptive augmentations Table 3 shows ablation experiments on PubMed using GCN. We compare the performance of AutoGDA using only one augmentation operation versus using all augmentation operations in $\mathcal{A}$ , and we also compare the performance of AutoGDA with a single community (viewing the whole graph as the only community) versus our default setting (using the Louvain method for community detection). We note that when using only one augmentation operation (combining GAUGM_ADD and GAUGM_RM as GAUGM) and under the single-community setting, AutoGDA reduces to a parameter search over the existing graph data augmentation methods. We also observe from Table 3 that the default AutoGDA (with multiple graph communities) consistently outperforms the single-community setting, showing that the community-customized augmentation strategy is crucial for the performance improvement.
228
+
229
+ Case Study Figure 4 showcases the augmentation strategies learned by our proposed AutoGDA with GCN for seven different communities of the PubMed dataset. We observe that AutoGDA is able to learn diverse augmentation strategies for different communities in the graph.
230
+
231
+ ## 5 Conclusions
232
+
233
+ In this paper, we studied the problem of community-customized data augmentation for node classification. Our work showed that different communities require different augmentation strategies for the best node classification performance due to their disparate characteristics. We proposed an automated graph data augmentation framework AutoGDA that adopted reinforcement learning to automatically learn the optimal augmentation strategy for each community. Through extensive experiments on benchmark graph datasets from multiple domains, our proposed approach AutoGDA achieved up to 12.5% accuracy improvement over the state-of-the-art graph data augmentation baselines.
234
+
235
+ References
236
+
237
+ [1] Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):1-48, 2019. 1
238
+
239
+ [2] Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 113-123, 2019. 1, 2, 5, 15
240
+
241
+ [3] Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, and Sungwoong Kim. Fast autoaugment. Advances in Neural Information Processing Systems, 32:6665-6675, 2019. 15
242
+
243
+ [4] Daniel Ho, Eric Liang, Xi Chen, Ion Stoica, and Pieter Abbeel. Population based augmentation: Efficient learning of augmentation policy schedules. In International Conference on Machine Learning, pages 2731-2741. PMLR, 2019. 1, 15
244
+
245
+ [5] Keyu Tian, Chen Lin, Ming Sun, Luping Zhou, Junjie Yan, and Wanli Ouyang. Improving auto-augment via augmentation-wise weight sharing. In arXiv:2009.14737, 2020. 1, 2, 5, 6, 15
246
+
247
+ [6] Chen Lin, Minghao Guo, Chuming Li, Xin Yuan, Wei Wu, Junjie Yan, Dahua Lin, and Wanli Ouyang. Online hyper-parameter learning for auto-augmentation strategy. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6579-6588, 2019. 1, 15
248
+
249
+ [7] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In arXiv:1609.02907, 2016. 1, 3, 4, 7, 8, 15, 16
250
+
251
+ [8] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024-1034, 2017. 7, 15
252
+
253
+ [9] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems, 32(1):4-24, 2020.
254
+
255
+ [10] Ziwei Zhang, Peng Cui, and Wenwu Zhu. Deep learning on graphs: A survey. IEEE Transactions on Knowledge and Data Engineering, 2020. 1
256
+
257
+ [11] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3438-3445, 2020. 1, 4, 7, 15, 16, 17, 18
258
+
259
+ [12] Kezhi Kong, Guohao Li, Mucong Ding, Zuxuan Wu, Chen Zhu, Bernard Ghanem, Gavin Taylor, and Tom Goldstein. Flag: Adversarial data augmentation for graph neural networks. In arXiv:2010.09891, 2020. 15, 18
260
+
261
+ [13] Tong Zhao, Yozen Liu, Leonardo Neves, Oliver Woodford, Meng Jiang, and Neil Shah. Data augmentation for graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11015-11023, 2021. 1, 2, 3, 4, 7, 8, 15, 16, 17, 18
262
+
263
+ [14] Hyeonjin Park, Seunghun Lee, Sihyeon Kim, Jinyoung Park, Jisu Jeong, Kyung-Min Kim, Jung-Woo Ha, and Hyunwoo J Kim. Metropolis-hastings data augmentation for graph neural networks. Advances in Neural Information Processing Systems, 34:19010-19020, 2021.
264
+
265
+ [15] Zhengzheng Tang, Ziyue Qiao, Xuehai Hong, Yang Wang, Fayaz Ali Dharejo, Yuanchun Zhou, and Yi Du. Data augmentation for graph convolutional network on semi-supervised classification. In arXiv:2106.08848, 2021. 15
266
+
267
+ [16] Tong Zhao, Bo Ni, Wenhao Yu, Zhichun Guo, Neil Shah, and Meng Jiang. Action sequence augmentation for early graph-based anomaly detection. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2668-2678, 2021. 15
268
+
269
+ [17] Tong Zhao, Gang Liu, Stephan Günnemann, and Meng Jiang. Graph data augmentation for graph machine learning: A survey. arXiv preprint arXiv:2202.08871, 2022. 1, 15
270
+
271
+ [18] Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. Dropedge: Towards deep graph convolutional networks on node classification. In International Conference on Learning Representations, 2019. 1, 2, 3, 7, 15, 16, 17, 18
272
+
273
+ [19] Michelle Girvan and Mark EJ Newman. Community structure in social and biological networks. Proceedings of the national academy of sciences, 99:7821-7826, 2002. 2, 4
274
+
275
+ [20] Mark EJ Newman and Michelle Girvan. Finding and evaluating community structure in networks. Physical review E, 69(2):026113, 2004. 2, 4
276
+
277
+ [21] Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 257-266, 2019. 2, 4
278
+
279
+ [22] Yong-Yeol Ahn, James P Bagrow, and Sune Lehmann. Link communities reveal multiscale complexity in networks. nature, 466(7307):761-764, 2010. 2, 4
280
+
281
+ [23] Mohammad Hamdaqa, Ladan Tahvildari, Neil LaChapelle, and Brian Campbell. Cultural scene detection using reverse louvain optimization. Science of Computer Programming, 95:44-72, 2014.
282
+
283
+ [24] Filippo Radicchi, Claudio Castellano, Federico Cecconi, Vittorio Loreto, and Domenico Parisi. Defining and identifying communities in networks. Proceedings of the national academy of sciences, 101(9):2658-2663, 2004. 2, 4
284
+
285
+ [25] Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment, 2008(10):P10008, 2008. 2, 4, 17, 18
286
+
287
+ [26] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, and Yang Shen. Graph contrastive learning with augmentations. Advances in Neural Information Processing Systems, 33:5812-5823, 2020. 2, 3, 15
288
+
289
+ [27] Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. In International Conference on Learning Representations, 2019. 2, 7, 8, 9
290
+
291
+ [28] Thomas N Kipf and Max Welling. Variational graph auto-encoders. In arXiv:1611.07308, 2016. 3, 15
292
+
293
+ [29] Fragkiskos D Malliaros and Michalis Vazirgiannis. Clustering and community detection in directed networks: A survey. Physics reports, 533(4):95-142, 2013. 4
294
+
295
+ [30] Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in Neural Information Processing Systems, 33:7793-7804, 2020. 4, 7
296
+
297
+ [31] Wenzheng Feng, Jie Zhang, Yuxiao Dong, Yu Han, Huanbo Luan, Qian Xu, Qiang Yang, Evgeny Kharlamov, and Jie Tang. Graph random neural network for semi-supervised learning on graphs. In arXiv:2005.11079, 2020. 4, 15
298
+
299
+ [32] Benoît Colson, Patrice Marcotte, and Gilles Savard. An overview of bilevel optimization. Annals of operations research, 153(1):235-256, 2007. 5
300
+
301
+ [33] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. In arXiv:1707.06347, 2017. 6, 17
302
+
303
+ [34] Xiao Huang, Jundong Li, and Xia Hu. Label informed attributed network embedding. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 731-739, 2017. 7, 16
304
+
305
+ [35] Petar Ristoski, Gerben Klaas Dirk de Vries, and Heiko Paulheim. A collection of benchmark datasets for systematic evaluations of machine learning on the semantic web. In International Semantic Web Conference, pages 186-194, 2016. 7, 16
306
+
307
+ [36] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In arXiv:1710.10903, 2017. 7, 15, 16
308
+
309
+ [37] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial examples on graph data: Deep insights into attack and defense. In arXiv:1903.01610, 2019. 7, 8
310
+
311
+ [38] Xiaorui Liu, Wei Jin, Yao Ma, Yaxin Li, Hua Liu, Yiqi Wang, Ming Yan, and Jiliang Tang. Elastic graph neural networks. In International Conference on Machine Learning, pages 6837-6849, 2021. 7, 8, 9
312
+
313
+ [39] Marcin Waniek, Tomasz P Michalak, Michael J Wooldridge, and Talal Rahwan. Hiding individuals and communities in a social network. volume 2, pages 139-147, 2018. 7, 8, 9
314
+
315
+ [40] Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European semantic web conference, pages 593-607. Springer, 2018. 7, 16, 18
316
+
317
+ [41] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701-710, 2014. 15
318
+
319
+ [42] Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1225-1234, 2016.
320
+
321
+ [43] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067-1077, 2015. 15
322
+
323
+ [44] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844-3852, 2016. 15
324
+
325
+ [45] Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. In arXiv:1506.05163, 2015.
326
+
327
+ [46] Ruoyu Li, Sheng Wang, Feiyun Zhu, and Junzhou Huang. Adaptive graph convolutional neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. 15
328
+
329
+ [47] Ron Levie, Federico Monti, Xavier Bresson, and Michael M Bronstein. Cayleynets: Graph convolutional neural networks with complex rational spectral filters. IEEE Transactions on Signal Processing, 67(1):97-109, 2018.
330
+
331
+ [48] Yao Ma, Xiaorui Liu, Tong Zhao, Yozen Liu, Jiliang Tang, and Neil Shah. A unified view on graph neural networks as graph signal denoising. In arXiv:2010.01777, 2020. 15
332
+
333
+ [49] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In arXiv:1312.6203, 2013. 15
334
+
335
+ [50] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5115-5124, 2017. 15
336
+
337
+ [51] Hongyang Gao, Zhengyang Wang, and Shuiwang Ji. Large-scale learnable graph convolutional networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1416-1424, 2018.
338
+
339
+ [52] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International conference on machine learning, pages 2014-2023, 2016. 15
340
+
341
+ [53] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 974-983, 2018. 15
342
+
343
+ [54] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In arXiv:1806.03536, 2018. 15
344
+
345
+ [55] Guohao Li, Matthias Muller, Ali Thabet, and Bernard Ghanem. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE International Conference on Computer Vision, pages 9267-9276, 2019. 15
346
+
347
+ [56] Vikas Verma, Meng Qu, Alex Lamb, Yoshua Bengio, Juho Kannala, and Jian Tang. Graph-mix: Regularized training of graph neural networks for semi-supervised learning. In arXiv:1909.11715, 2019. 15
348
+
349
+ [57] Yang Gao, Hong Yang, Peng Zhang, Chuan Zhou, and Yue Hu. Graph neural architecture search. In IJCAI, volume 20, pages 1403-1409, 2020. 15
350
+
351
+ [58] Kaixiong Zhou, Qingquan Song, Xiao Huang, and Xia Hu. Auto-gnn: Neural architecture search of graph neural networks. In arXiv:1909.03184, 2019. 15
352
+
353
+ [59] Kaize Ding, Zhe Xu, Hanghang Tong, and Huan Liu. Data augmentation for deep graph learning: A survey. arXiv preprint arXiv:2202.08235, 2022. 15
354
+
355
+ [60] Gang Liu, Tong Zhao, Jiaxin Xu, Tengfei Luo, and Meng Jiang. Graph rationalization with environment-based augmentations. In Proceedings of the 28th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2022.
356
+
357
+ [61] Xiaotian Han, Zhimeng Jiang, Ninghao Liu, and Xia Hu. G-mixup: Graph data augmentation for graph classification. International Conference on Machine Learning, 2022. 15
358
+
359
+ [62] Yiwei Wang, Yujun Cai, Yuxuan Liang, Henghui Ding, Changhu Wang, Siddharth Bhatia, and Bryan Hooi. Adaptive data augmentation on temporal graphs. Advances in Neural Information Processing Systems, 34, 2021. 15
360
+
361
+ [63] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014. 15
362
+
363
+ [64] Zhan Gao, Subhrajit Bhattacharya, Leiming Zhang, Rick S Blum, Alejandro Ribeiro, and Brian M Sadler. Training robust graph neural networks with topology adaptive edge dropping. In arXiv:2106.02892, 2021. 15
364
+
365
+ [65] Dongsheng Luo, Wei Cheng, Wenchao Yu, Bo Zong, Jingchao Ni, Haifeng Chen, and Xiang Zhang. Learning to drop: Robust graph neural network via topological denoising. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 779-787, 2021.
366
+
367
+ [66] Cheng Zheng, Bo Zong, Wei Cheng, Dongjin Song, Jingchao Ni, Wenchao Yu, Haifeng Chen, and Wei Wang. Robust graph representation learning via neural sparsification. In International Conference on Machine Learning, pages 11458-11468. PMLR, 2020. 15
368
+
369
+ [67] Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. In International Conference on Machine Learning, pages 2688-2697. PMLR, 2018. 15
370
+
371
+ [68] Aditya Grover, Aaron Zweig, and Stefano Ermon. Graphite: Iterative generative modeling of graphs. In International conference on machine learning, pages 2434-2444. PMLR, 2019.
372
+
373
+ [69] Luca Franceschi, Mathias Niepert, Massimiliano Pontil, and Xiao He. Learning discrete structures for graph neural networks. In International conference on machine learning, pages 1972-1982. PMLR, 2019. 15
374
+
375
+ [70] Yingxue Zhang, Soumyasundar Pal, Mark Coates, and Deniz Ustebay. Bayesian graph convolutional neural networks for semi-supervised classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 5829-5836, 2019. 15
376
+
377
+ [71] Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, Juncheng Liu, and Bryan Hooi. Nodeaug: Semi-supervised node classification with data augmentation. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 207-217, 2020. 15
378
+
379
+ [72] Tong Zhao, Gang Liu, Daheng Wang, Wenhao Yu, and Meng Jiang. Learning from counterfactual links for link prediction. In International Conference on Machine Learning, pages 26911-26926. PMLR, 2022. 15
380
+
381
+ [73] Xinyu Zhang, Qiang Wang, Jian Zhang, and Zhao Zhong. Adversarial autoaugment. In arXiv:1912.11188, 2019. 15
382
+
383
+ [74] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 702-703, 2020. 15
384
+
385
+ [75] Wei Jin, Xiaorui Liu, Xiangyu Zhao, Yao Ma, Neil Shah, and Jiliang Tang. Automated self-supervised learning for graphs. In arXiv:2106.05470, 2021. 15
386
+
387
+ [76] Yuning You, Tianlong Chen, Yang Shen, and Zhangyang Wang. Graph contrastive learning automated. In arXiv:2106.07594, 2021.
388
+
389
+ [77] Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. Adversarial graph augmentation to improve graph contrastive learning. Advances in Neural Information Processing Systems, 34, 2021. 15
390
+
391
+ [78] Youzhi Luo, Michael McThrow, Wing Yee Au, Tao Komikado, Kanji Uchino, Koji Maruhash, and Shuiwang Ji. Automated data augmentations for graph classification. arXiv preprint arXiv:2202.13248, 2022. 15
392
+
393
+ [79] Leo A Goodman. Snowball sampling. The annals of mathematical statistics, pages 148-170, 1961. 16
394
+
395
+ [80] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of machine learning research, 13(2), 2012. 18
396
+
397
+ [81] Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 2623-2631, 2019. 18
398
+
399
+ [82] Amine Abou-Rjeili and George Karypis. Multilevel algorithms for partitioning power-law graphs. In Proceedings 20th IEEE International Parallel & Distributed Processing Symposium, pages 10-pp. IEEE, 2006. 18
400
+
401
+ Table 4: Summary statistics and experimental setup for the homogeneous datasets.
402
+
403
+ <table><tr><td/><td>Cora</td><td>CiteSeer</td><td>PubMed</td><td>Flickr</td><td>ECom20k</td><td>ECom43k</td></tr><tr><td>#Nodes</td><td>2,708</td><td>3,327</td><td>19,717</td><td>7,575</td><td>20,799</td><td>43,117</td></tr><tr><td>#Edges</td><td>5,278</td><td>4,552</td><td>44,338</td><td>239,738</td><td>47,661</td><td>117,469</td></tr><tr><td>#Features</td><td>1,433</td><td>3,703</td><td>500</td><td>12,047</td><td>132</td><td>132</td></tr><tr><td>#Classes</td><td>7</td><td>6</td><td>3</td><td>9</td><td>2</td><td>2</td></tr><tr><td>#Training nodes</td><td>140</td><td>120</td><td>60</td><td>757</td><td>2,275</td><td>4,209</td></tr><tr><td>#Validation nodes</td><td>500</td><td>500</td><td>500</td><td>1,515</td><td>2,275</td><td>4,209</td></tr><tr><td>#Test nodes</td><td>1,000</td><td>1,000</td><td>1,000</td><td>5,303</td><td>6,825</td><td>12,627</td></tr></table>
404
+
405
+ ## A Related Work
406
+
407
+ Graph Neural Networks GNNs enjoy widespread use in modern graph-based machine learning due to their flexibility in incorporating node features, custom aggregations, and inductive operation, unlike earlier works which were based on embedding lookups [41-43]. In recent years, many spectral GNN variants $\left\lbrack {7,{44} - {48}}\right\rbrack$ were proposed following the initial idea of convolution based on spectral graph theory [49]. As spectral GNNs usually require (expensive) operations on the full adjacency matrix, spatial GNNs which perform graph convolution with neighborhood aggregation became prominent $\left\lbrack {8,{36},{50} - {52}}\right\rbrack$ owing to their scalability and flexibility [53]. Several other works also proposed advanced architectures which add residual connections to facilitate deep GNN training $\left\lbrack {{54},{55}}\right\rbrack$ . GraphMix $\left\lbrack {56}\right\rbrack$ proposed to regularize the GNN model with a fully connected network. More recently, graph neural architecture search (NAS) methods [57, 58] utilizing reinforcement learning were proposed to learn the optimal GNN architecture.
408
+
409
+ Graph Data Augmentation As GNNs have emerged as a rising approach for learning with graph data, Graph Data Augmentation (GDA) methods [17, 59-61] for GNNs have been proposed and studied in recent years. Due to the complex, non-Euclidean structure of graphs, most GDA work focused on manipulating the graph structure $\left\lbrack {{13},{62}}\right\rbrack$ . DropEdge $\left\lbrack {18}\right\rbrack$ randomly drops a fraction of edges during each training epoch, in a way similar to Dropout [63]. Following DropEdge, several works [64-66] proposed methods that learn to drop instead of dropping at random. Graph structure learning methods $\left\lbrack {{46},{67} - {69}}\right\rbrack$ can also be viewed as graph data augmentation as they learn from graphs whose structures are partially or totally unknown. AdaEdge [11] and BGCN [70] are iterative methods that update the graph structure with the predictions of GNNs. Zhao et al. [13] showed that graph homophily critically affects message passing-based GNNs and proposed GAugM and GAugO that manipulate the graph structure with the edge probabilities given by VGAE [28] to augment the graph data. Kong et al. [12] and Tang et al. [15] proposed augmentation methods that operate on the node features. Aside from the methods that directly use GDA for semi-supervised node classification, several works that used GDA in self-supervised graph learning were also proposed. NodeAug [71] and Grand [31] used augmentation with a self-supervised consistency loss as an additional term to the cross entropy loss. GraphCL [26] used augmentation in self-supervised graph contrastive learning for graph classification. Eland [16] proposed action sequence augmentation for graph anomaly detection. Zhao et al. [72] studied counterfactual data augmentation for link prediction.
410
+
411
+ Automated Data Augmentation Several automated data augmentation approaches [2-4, 6, 73] have been proposed in CV in the past few years. These methods seek to find the optimal data augmentation policies for each given dataset automatically. Cubuk et al. [2] formulated automated data augmentation as a discrete search problem and proposed a reinforcement learning framework to search for the best augmentation operations via proxy tasks (i.e., a smaller model). Several works $\left\lbrack {3,4,6}\right\rbrack$ were then proposed to improve the efficiency of automated data augmentation. Cubuk et al. [74] showed that models with different numbers of parameters benefit from different magnitudes of augmentation operations and proposed RandAugment, which searches for a single augmentation magnitude used for all operations. Tian et al. [5] proposed the Augmentation-Wise Weight Sharing method that enables fast evaluation on the original model without sacrificing efficiency. Recently, several automated augmentation methods [75-77] have been proposed for self-supervised graph representation learning. Luo et al. [78] studied automated data augmentation for the task of graph classification. Nevertheless, automated data augmentation is rather under-explored for (semi-)supervised node classification tasks.
412
+
413
+ Table 5: Summary statistics and experimental setup for the heterogeneous datasets.
414
+
415
+ <table><tr><td/><td>AIFB</td><td>MUTAG</td></tr><tr><td>#Nodes</td><td>8,285</td><td>23,644</td></tr><tr><td>#Relations</td><td>45</td><td>23</td></tr><tr><td>#Edges</td><td>29,043</td><td>74,227</td></tr><tr><td>#Classes</td><td>4</td><td>2</td></tr><tr><td>#Training nodes</td><td>218</td><td>112</td></tr><tr><td>#Validation nodes</td><td>54</td><td>28</td></tr><tr><td>#Test nodes</td><td>68</td><td>36</td></tr></table>
416
+
417
+ ## B Additional Dataset Details
418
+
419
+ In this section, we provide some additional, relevant dataset details. The statistics of all datasets are provided in Tables 4 and 5.
420
+
421
+ Citation networks. Cora, CiteSeer and PubMed are citation networks that are commonly used as benchmarks in GNN-related prior works [7, 11, 13, 18, 36]. In these citation networks, the nodes are published papers; the features are preprocessed (e.g., bag-of-words) vectors of the corresponding paper title and/or abstract; the edges represent the citation relation between papers; the labels are the category of each paper.
422
+
423
+ Social networks. Flickr is an online social network platform where users can follow each other as well as post images and videos. The user-specified lists of interest tags are used as user features, and the groups that users joined are used as labels [34].
424
+
425
+ E-commerce networks. ECom20k and ECom43k are e-commerce networks that were constructed with customer purchase/review records from a leading international e-commerce website. The network contains four types of nodes: customers, sellers, products, and reviews, in which customers purchase products from sellers and leave reviews on products. They are two graphs constructed with records from different time periods. The task for these two datasets is abusive customer detection, and the customers with golden labels are split into train/validation/test sets for node classification. The node features contain the original node attributes as well as the one-hot encoded vectors of the node types. As the golden labels are limited and severely biased, the datasets are sampled with snowball sampling [79] with labeled abusive customers to ensure the relative independence of the graph. The node attributes used are anonymized and do not contain any personally identifiable information.
426
+
427
+ Heterogeneous networks. AIFB and MUTAG are commonly used Resource Description Framework (RDF) formatted datasets [35, 40]. Each graph contains multiple types of entities (nodes) and multiple types of relations (edges). In each dataset, the task is to classify the target properties of one group of entities. The datasets do not contain raw node features.
428
+
429
+ Validation Method. For Cora, Citeseer, PubMed, and Flickr, we follow the commonly used semi-supervised setting in most GNN literature [7, 13, 36]. For ECom20k and ECom43k, we use 20/20/60% for train/validation/test splitting. For AIFB and MUTAG, we use the official splitting provided in the DGL package ${}^{2}$ .
430
+
431
+ ## C Implementation Details
432
+
433
+ Our code package can be found at https://tinyurl.com/autogda for anonymous reviewing, and it will be made publicly available.
434
+
435
+ All the experiments in this work were conducted on either an AWS EC2 P4 Instance ${}^{3}$ or a G4dn Instance ${}^{4}$ . The P4 instance is equipped with 48 Intel Cascade Lake processor cores (96 vCPUs), 1.1
436
+
437
+ ---
438
+
439
+ 2 https://www.dgl.ai/
440
+
441
+ ${}^{3}$ https://aws.amazon.com/ec2/instance-types/p4/
442
+
443
+ ${}^{4}$ https://aws.amazon.com/ec2/instance-types/g4/
444
+
445
+ ---
446
+
447
+ Table 6: Node classification accuracy and total hyperparameter searching time on PubMed dataset.
448
+
449
+ <table><tr><td/><td>Accuracy</td><td>Search Time (mins)</td></tr><tr><td>AdaEdge</td><td>${79.8} \pm {0.4}$</td><td>283.5</td></tr><tr><td>DropEdge</td><td>79.3±0.3</td><td>48.1</td></tr><tr><td>FLAG</td><td>${78.5} \pm {0.2}$</td><td>23.6</td></tr><tr><td>GAugM</td><td>${80.2} \pm {0.3}$</td><td>53.2</td></tr><tr><td>GAugO</td><td>${79.3} \pm {0.4}$</td><td>149.4</td></tr></table>
450
+
451
+ <table><tr><td colspan="3">Different hyperparameter searching method for</td></tr><tr><td colspan="3">community-specific augmentations</td></tr><tr><td>Random Search</td><td>${81.8} \pm {0.6}$</td><td>196.2</td></tr><tr><td>Bayesian Search</td><td>${81.3} \pm {0.5}$</td><td>126.5</td></tr><tr><td>AutoGDA</td><td>${81.6} \pm {0.5}$</td><td>74.6</td></tr></table>
452
+
453
+ TB of RAM, and 8 Nvidia A100 GPU cards (40 GB of RAM each). The G4dn instance is equipped with 48 Intel Cascade Lake vCPUs, 192 GB of RAM, and 4 Nvidia T4 GPU cards (16 GB of RAM each). To ensure a fair comparison for the search-time experiments, all the experiments in Table 6 are conducted on the same G4dn instance. Our method is implemented in Python 3.8.5 with PyTorch. A list of used packages is included in requirements.txt in the code package. The commands for reproducing our results can be found in README.md in the code package. Note that although the EC2 instances are equipped with multiple GPU cards, AutoGDA only needs one GPU to run all the experiments.
454
+
455
+ We report test accuracy averaged over 20 runs along with the respective standard deviations. For baseline methods evaluated on the same datasets in their original papers, we directly use their reported performances; numbers reported in the original paper are preferred over results reproduced in other papers.
456
+
457
+ ### C.1 Baseline methods
458
+
459
+ For the newly reproduced results in this work, all original GNN architectures are implemented in DGL ${}^{5}$ with the Adam optimizer. For a fair comparison, we use a hidden size of 128 for all methods. For the baseline methods, we implemented AdaEdge [11] and DropEdge [18] with PyTorch and DGL, and used the official code packages ${}^{6}$ from the authors for GAugM and GAugO [13].
460
+
461
+ ### C.2 AutoGDA variants
462
+
463
+ As with the baselines, all GNNs in AutoGDA are implemented with DGL and use a hidden size of 128. We use the public PPO [33] implementation in the stable-baselines3 package ${}^{7}$ and implement our RL environment with gym ${}^{8}$ . All parameters of the PPO algorithm are kept at the stable-baselines3 defaults. We use the Louvain method [25] as the default community detection method for all experiments in Section 4 except those in the sensitivity analysis (Figure 5). The number of communities is treated as a hyperparameter and determined with the help of the modularity measurement, as described in the sensitivity analysis.
464
+
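+ Concretely, with a gym-style environment such as the one sketched earlier, the policy can be trained with the default PPO configuration roughly as follows (the AutoGDAEnv constructor and the timestep budget are illustrative assumptions, not values taken from the paper):
+
+ ```python
+ from stable_baselines3 import PPO
+
+ env = AutoGDAEnv(communities, obs_dim=128)   # hypothetical env from the earlier sketch
+ model = PPO("MlpPolicy", env, verbose=0)     # all PPO hyperparameters left at defaults
+ model.learn(total_timesteps=10_000)          # roughly n_episodes * N_c environment steps
+ ```
+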
465
+ ## D Additional Experimental Results
466
+
467
+ Efficiency of the proposed AutoGDA. Table 6 shows the node classification performance and total runtime (including hyperparameter search) of the baseline graph data augmentation methods, our proposed AutoGDA, and community-specific augmentations found with other search methods. All baseline methods require an additional hyperparameter search over the magnitudes of the augmentation operations, while the policy network in AutoGDA searches for community-customized augmentation strategies automatically. Therefore, our proposed AutoGDA is capable of outperforming the baseline augmentation methods without requiring much additional runtime. Moreover, we also evaluate substituting the policy network in AutoGDA with other hyperparameter search methods: Random Search [80] and Bayesian Search (implemented with Optuna [81]). We observe that although they can achieve performance similar to AutoGDA, they require much more time than AutoGDA to find the optimal augmentation strategy due to the huge search space.
468
+
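+ For reference, a Bayesian Search of this kind corresponds to an Optuna study over one magnitude per operation and per community, roughly along the following lines (the evaluation helper, num_communities, and the trial budget are placeholders, not the paper's exact setup):
+
+ ```python
+ import optuna
+
+ OPS = ("dropedge", "attrmask", "gaugm_add", "gaugm_rm")
+
+ def objective(trial):
+     strategy = {(k, op): trial.suggest_float(f"{op}_{k}", 0.0, 1.0)
+                 for k in range(num_communities) for op in OPS}   # num_communities: placeholder
+     return finetune_and_eval_with(strategy)   # placeholder: returns validation accuracy
+
+ study = optuna.create_study(direction="maximize")
+ study.optimize(objective, n_trials=100)       # illustrative search budget
+ ```
+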
469
+ ---
470
+
471
+ ${}^{5}$ https://www.dgl.ai/
472
+
473
+ ${}^{6}$ https://github.com/zhao-tong/GAug
474
+
475
+ ${}^{7}$ https://github.com/DLR-RM/stable-baselines3
476
+
477
+ ${}^{8}$ https://github.com/openai/gym
478
+
479
+ ---
480
+
481
+ Table 7: Node classification accuracy of different graph data augmentation methods on heterogeneous graphs.
482
+
483
+ <table><tr><td/><td>AIFB</td><td>MUTAG</td></tr><tr><td>RGCN</td><td>${93.1} \pm {1.4}$</td><td>${68.4} \pm {3.1}$</td></tr><tr><td>RGCN+AdaEdge</td><td>${93.9} \pm {1.1}$</td><td>${69.4} \pm {3.4}$</td></tr><tr><td>RGCN+DropEdge</td><td>94.1±2.7</td><td>${70.6} \pm {2.1}$</td></tr><tr><td>RGCN+AutoGDA</td><td>${95.6} \pm {2.2}$</td><td>${72.2} \pm {1.8}$</td></tr></table>
484
+
485
+ ![01963ecd-3031-794b-bbc2-e8594ad0053c_17_343_549_1104_460_0.jpg](images/01963ecd-3031-794b-bbc2-e8594ad0053c_17_343_549_1104_460_0.jpg)
486
+
487
+ Figure 5: AutoGDA is robust to the choices of community detection algorithms as well as the number of communities. The number of communities can be decided with the modularity measurement.
488
+
489
+ Extension of AutoGDA on heterogeneous graphs. Table 7 shows the entity classification performances of AdaEdge [11], DropEdge [18], and AutoGDA using RGCN [40] as the backbone. We note that the more advanced graph data augmentation baselines FLAG [12], GAugM, and GAugO [13] are not compatible with heterogeneous graphs. Moreover, as the datasets do not have raw node features, we only search for the DROPEDGE operation in AutoGDA on these datasets (i.e., $\mathcal{A} = \{$ DROPEDGE $\}$ ). We observe that our proposed AutoGDA achieves the best performance on both datasets, showing that AutoGDA can be easily generalized to heterogeneous graphs.
490
+
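+ A sketch of the relation-wise DROPEDGE used here, assuming DGL's heterograph API (this is an illustration, not the authors' released implementation):
+
+ ```python
+ import dgl
+ import torch
+
+ def dropedge_hetero(g, p):
+     """Randomly drop a fraction p of the edges of every relation type."""
+     for etype in g.canonical_etypes:
+         n = g.num_edges(etype)
+         n_drop = int(p * n)
+         if n_drop > 0:
+             eids = torch.randperm(n)[:n_drop]
+             g = dgl.remove_edges(g, eids, etype=etype)
+     return g
+ ```
+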
491
+ Sensitivity of AutoGDA. Figure 5a shows the sensitivity analysis of our proposed AutoGDA with respect to the choice of community detection method and the number of communities. Figure 5b shows the modularity measurement for the two community detection methods (Louvain [25] and METIS [82]) at different numbers of communities. We observe that AutoGDA performs well when $8 \leq {N}_{c} \leq {10}$ for the PubMed dataset, which is also where the modularity curve converges. Thus, a good number of communities for AutoGDA is generally easy to find with the help of the modularity measurement.
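+ The modularity-guided choice of ${N}_{c}$ can be reproduced with a short sweep, for example with the python-louvain package (the resolution values and the toy graph below are only for illustration and are not taken from the paper):
+
+ ```python
+ import networkx as nx
+ import community as community_louvain   # python-louvain package
+
+ G = nx.karate_club_graph()               # stand-in graph for illustration
+ for resolution in (0.5, 1.0, 2.0, 4.0):
+     partition = community_louvain.best_partition(G, resolution=resolution)
+     n_c = len(set(partition.values()))
+     q = community_louvain.modularity(partition, G)
+     print(f"resolution={resolution}: N_c={n_c}, modularity={q:.3f}")
+ ```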
papers/LOG/LOG 2022/LOG 2022 Conference/RqN8W3R76J/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,368 @@
1
+ § AUTOGDA: AUTOMATED GRAPH DATA AUGMENTATION FOR NODE CLASSIFICATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Graph data augmentation has been used to improve the generalizability of graph machine learning. However, by only applying fixed augmentation operations on entire graphs, existing methods overlook the unique characteristics of communities which naturally exist in the graphs. For example, different communities can have various degree distributions and homophily ratios. Ignoring such discrepancies with unified augmentation strategies on the entire graph could lead to sub-optimal performance for graph data augmentation methods. In this paper, we study a novel problem of automated graph data augmentation for node classification from the localized perspective of communities. We formulate it as a bilevel optimization problem: finding a set of augmentation strategies for each community, which maximizes the performance of graph neural networks on node classification. As the bilevel optimization is hard to solve directly and the search space of community-customized augmentation strategies is huge, we propose a reinforcement learning framework AutoGDA that learns the local-optimal augmentation strategy for each community sequentially. Our proposed approach outperforms established and popular baselines on public node classification benchmarks as well as real industry e-commerce networks by up to +12.5% accuracy.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Data augmentation methods are widely used to improve the generalizability and robustness of machine learning (ML) models [1]. They aim to create plausible variations of existing data without the need for additional human effort. It has been shown that customized data augmentation, i.e., customizing augmentation strategies for each object (or batch of objects), is beneficial for ML models [2-4]. For example, customized augmentation strategies $\left\lbrack {2,5}\right\rbrack$ have shown improved performance over a uniform augmentation applied to the entire dataset. To this end, automated data augmentation methods efficiently seek the optimal customized augmentation strategies for samples/batches [2,4-6].
16
+
17
+ Recently, with graph neural networks (GNNs) [7-10] emerging as one of the preferred approaches for learning on graph structured data, graph data augmentation methods [11-17] have shown promising results in improving GNNs. For example, DropEdge [18] randomly removes a fraction of edges in each training epoch to promote GNN's robustness during test-time inference. The AdaEdge [11] approach iteratively adds (or removes) edges between nodes that are predicted to have the same (or different) labels with high confidence. GAugM and GAugO [13] manipulate the graph structure according to edge probabilities learned by link predictors. Despite the promising improvements on various node classification tasks, existing graph data augmentation approaches are manually designed for the entire graph and only explore graph properties and characteristics globally.
18
+
19
+ Applying automated data augmentation to graphs is more involved than for images and text, because the unique properties of graph data bring great challenges to the effort. While existing automated augmentation approaches $\left\lbrack {2,5}\right\rbrack$ assume that samples are independent and identically distributed (i.i.d.) in the dataset, nodes in a graph are naturally connected and are dependent on each other in a non-Euclidean manner. Therefore, it is not straightforward to apply existing automated augmentation methods to graph data. On the other hand, the unique properties of graph data may give us clues for designing new and effective solutions. Nodes in the graph are naturally grouped into communities $\left\lbrack {{19},{20}}\right\rbrack$ , providing a natural separation of data objects (nodes) for node classification. Chiang et al. [21] show that nodes from the same community are the most important neighbors for aggregation-based graph learning algorithms. As communities in graphs such as social networks are usually disparate in characteristics [22-24] such as density, centrality, homophily, etc., we argue that data augmentation strategies should be localized (community-specific) to achieve optimal results. However, how to augment graph data according to the localized characteristics of communities in the graph remains underexplored.
20
+
21
+ To address the aforementioned challenges, we propose to tackle the problem of automated graph data augmentation from the local perspective, i.e., communities in graphs. We first analyze the disparate characteristics of communities using benchmark datasets. Motivated by these observations and insights, we define automated graph data augmentation as a bilevel optimization problem, that is, to learn the augmentation strategies that lead to the best node classification performance of GNNs. As finding the optimal augmentation strategies requires combinatorial optimization, it is impractical in the real world due to the huge computational cost. We propose AutoGDA, which learns community-customized augmentation strategies with a reinforcement learning (RL) approach, inspired by the auto-augmentation literature in computer vision [2, 5]. Specifically, given the communities in a graph, AutoGDA relies on an RL agent to sequentially pick the optimal strategy from several graph data augmentation operations for each community. The RL agent in AutoGDA generalizes the learning and selection of augmentations from one community to another, and thus automates and accelerates the process of finding localized augmentation strategies.
22
+
23
+ We conduct extensive experiments across different GNN backbones and datasets to evaluate AutoGDA against state-of-the-art baselines. We demonstrate that AutoGDA with traditional community detection algorithms (e.g., the Louvain method [25]) and existing graph data augmentation operations (DropEdge [18], GAugM [13], and AttrMask [26]) can achieve consistent performance improvements over the baselines. Specifically, AutoGDA shows up to 12.5% improvement over the best-performing baseline method. Moreover, we show that the graph representations learned by AutoGDA are robust against graph adversarial attacks [27].
24
+
25
+ Our main contributions are as follows.
26
+
27
+ * We tackle the problem of automated graph data augmentation for supervised node classification by proposing community-customized augmentations from a localized perspective. To the best of our knowledge, we are the first to investigate community-customized graph data augmentation for the task of node classification.
28
+
29
+ * We propose AutoGDA, an RL-based framework that automatically learns optimal community-customized graph data augmentation strategies. The AutoGDA framework is flexible in its choice of augmentation operations and can be easily generalized to heterogeneous graphs.
30
+
31
+ * We conduct extensive experiments on eight benchmark datasets (including two real industrial graph anomaly detection benchmarks) with three widely used GNN backbones to validate AutoGDA. The experimental results show that (1) AutoGDA consistently outperforms state-of-the-art graph data augmentation baselines across all datasets, and (2) AutoGDA learns robust representations that give comparable or better classification performance than state-of-the-art graph defense methods against adversarial attacks.
32
+
33
+ § 2 PRELIMINARIES AND PROBLEM DEFINITION
34
+
35
+ Notations Let $G = \left( {\mathcal{V},\mathcal{E}}\right)$ be an undirected graph of $N$ nodes, where $\mathcal{V} = \left\{ {{v}_{1},{v}_{2},\ldots ,{v}_{N}}\right\}$ is the set of nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. We denote the adjacency matrix as $\mathbf{A} \in \{ 0,1{\} }^{N \times N}$ , where ${A}_{i,j} = 1$ indicates nodes ${v}_{i}$ and ${v}_{j}$ are connected. We denote the node feature matrix as $\mathbf{X} \in {\mathbb{R}}^{N \times F}$ , where $F$ is the number of features and ${\mathbf{x}}_{i}$ (the $i$ -th row of $\mathbf{X}$ ) indicates the feature vector of node ${v}_{i}$ . We denote the node labels for classification as $\mathbf{y} \in \{ 1,\ldots ,M{\} }^{N}$ , where $M$ is the number of classes. We denote the set of graph communities as $\mathcal{C} = \left\{ {{C}_{1},{C}_{2},\ldots ,{C}_{{N}_{c}}}\right\}$ where ${N}_{c}$ is the number of communities and each community ${C}_{k}$ is defined by a set of nodes ${\mathcal{V}}_{Ck}$ s.t. ${\mathcal{V}}_{Ci} \cap {\mathcal{V}}_{Cj} = \varnothing ,\forall i,j \in \left\{ {1,2,\ldots ,{N}_{c}}\right\}$ and $i \neq j$ . We denote the subgraph containing the community ${C}_{k}$ as ${G}_{Ck} = \left( {{\mathcal{V}}_{Ck},{\mathcal{E}}_{Ck}}\right)$ , where ${\mathcal{E}}_{Ck} \subseteq {\mathcal{V}}_{Ck} \times {\mathcal{V}}_{Ck}$ is the set of edges within this subgraph. With a bit of notation abuse, we use the union symbol to denote the combination of subgraphs, i.e., $G = \mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}{G}_{Ck}$ .
36
+
37
+ Graph Neural Networks Without loss of generality, we take the commonly used graph convolutional network (GCN) [7] as an example when explaining GNNs in the following sections. The graph convolution operation of each GCN layer is defined as ${\mathbf{H}}^{\left( l\right) } = \sigma \left( {{\widetilde{\mathbf{D}}}^{-\frac{1}{2}}\widetilde{\mathbf{A}}{\widetilde{\mathbf{D}}}^{-\frac{1}{2}}{\mathbf{H}}^{\left( l - 1\right) }{\mathbf{W}}^{\left( l\right) }}\right)$ , where $l$ is the layer index, $\widetilde{\mathbf{A}} = \mathbf{A} + \mathbf{I}$ is the adjacency matrix with added self-loops, $\widetilde{\mathbf{D}}$ is the diagonal degree matrix with ${\widetilde{D}}_{ii} = \mathop{\sum }\limits_{j}{\widetilde{A}}_{ij}$, ${\mathbf{H}}^{\left( 0\right) } = \mathbf{X}$, ${\mathbf{W}}^{\left( l\right) }$ is the learnable weight matrix at the $l$ -th layer, and $\sigma \left( \cdot \right)$ denotes a nonlinear activation such as ReLU.
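For readers who prefer code, the propagation rule above can be written compactly; the following is a minimal dense-matrix sketch in PyTorch (our own illustration on a toy 4-node graph, not a reference implementation).

```python
import torch

def gcn_layer(A: torch.Tensor, H: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """One GCN layer: H' = ReLU(D~^{-1/2} (A + I) D~^{-1/2} H W)."""
    A_tilde = A + torch.eye(A.size(0))            # add self-loops
    d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)     # D~^{-1/2} as a vector
    A_hat = d_inv_sqrt.unsqueeze(1) * A_tilde * d_inv_sqrt.unsqueeze(0)
    return torch.relu(A_hat @ H @ W)

# Toy example: 4 nodes on a path, 3 input features, 2 hidden units.
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
H1 = gcn_layer(A, torch.randn(4, 3), torch.randn(3, 2))   # shape (4, 2)
```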
38
+
39
+ Graph Data Augmentation We follow prior literature [13] and classify graph data augmentation methods for node classification into two categories: stochastic operations for the original-graph setting and deterministic operations for the modified-graph setting. Let $h : G \rightarrow {G}_{m}$ be a graph data augmentation operation that generates a variant ${G}_{m}$ of the original graph $G$ . In the original-graph setting, $h$ can be stochastic and applying it $T$ times results in $T$ graph variants ${\left\{ {G}_{m}^{i}\right\} }_{i = 1}^{T}$ , such that $G \cup {\left\{ {G}_{m}^{i}\right\} }_{i = 1}^{T}$ is used in training while only $G$ is used for inference. On the other hand, in the modified-graph setting, $h$ is deterministic and outputs one ${G}_{m}$ , such that ${G}_{m}$ replaces $G$ for both training and inference.
40
+
41
+ In this work, we consider four typical state-of-the-art graph data augmentation operations for node classification: $\mathcal{A} = \{$ DROPEDGE, ATTRMASK, GAUGM_ADD, GAUGM_RM ${\} }^{1}$ , which include both stochastic and deterministic operations and which act on both the graph structure and the node features. It is worth noting that our proposed AutoGDA is not limited to these four augmentation operations; $\mathcal{A}$ can include arbitrary graph data augmentation operations.
42
+
43
+ DROPEDGE [18] and ATTRMASK [26] are stochastic augmentation operations that randomly drop a given percentage of edges (respectively, mask a given percentage of node attributes) in each training epoch of the GNN. DROPEDGE relies on the assumption that the graph structure has a certain robustness to edge connectivity and also alleviates the well-known over-smoothing problem of GNNs [18]. ATTRMASK encourages GNNs to recover masked node attributes from the context information in the local neighborhood. On the other hand, GAUGM_ADD and GAUGM_RM [13] are deterministic augmentation operations that modify the graph structure to promote the graph's homophily and hence improve the model's performance for node classification [13]. GAUGM_ADD and GAUGM_RM use the edge probabilities predicted for all node pairs by VGAE [28] and add new edges with the highest edge probabilities (respectively, remove existing edges with the lowest edge probabilities).
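As a rough illustration (our own simplification, not the reference implementations of DropEdge or AttrMask), the two stochastic operations amount to the following:

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edge(edge_index: np.ndarray, p: float) -> np.ndarray:
    """Keep each edge independently with probability 1 - p; edge_index has shape (2, E)."""
    keep = rng.random(edge_index.shape[1]) >= p
    return edge_index[:, keep]

def attr_mask(X: np.ndarray, p: float) -> np.ndarray:
    """Zero out a random fraction p of the node-feature entries."""
    return X * (rng.random(X.shape) >= p)

edge_index = np.array([[0, 1, 2, 3], [1, 2, 3, 0]])   # 4 directed edges
X = rng.random((4, 5))                                 # 4 nodes, 5 features
edge_index_aug = drop_edge(edge_index, p=0.25)         # resampled every training epoch
X_aug = attr_mask(X, p=0.10)
```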
44
+
45
+ Each of the above four augmentation operations has one parameter controlling the percentage magnitude of the augmentation, resulting in a ${100}^{4}$ search space when finding the optimal strategy for each dataset. As we separate the graph into multiple communities and aim to search for the optimal community-customized augmentation strategy, the search space grows to ${100}^{4 \times {N}_{c}}$ , where ${N}_{c}$ is the number of communities. As ${N}_{c}$ gets larger, the search space becomes infeasible for traditional parameter search methods such as grid search. Therefore, a new efficient approach for automated graph augmentation search is desired.
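To make the combinatorial blow-up concrete, assuming for illustration a graph with 30 communities:

```python
n_ops, n_magnitudes, n_communities = 4, 100, 30
whole_graph_space = n_magnitudes ** n_ops                      # 10**8 candidate strategies
per_community_space = n_magnitudes ** (n_ops * n_communities)  # 10**240 candidate strategies
print(whole_graph_space, per_community_space)
```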
46
+
47
+ Problem Definition Following the definition of previous literature [13, 18] on graph data augmentation for node classification, the problem of finding a hand-crafted graph data augmentation strategy can be defined as follows. Given the graph data $G$ , a set of graph data augmentation operations $\mathcal{A}$ , and GNN model $f : G \rightarrow \widehat{\mathbf{y}}$ , find the best strategy of applying $\mathcal{A}$ on $G$ such that the node classification performance of $f$ on $G$ is maximized.
48
+
49
+ In this work, as we propose to find the best augmentation strategy for each community in the graph, the problem of automated graph data augmentation is then defined as: given the graph data $G$ , graph communities $\mathcal{C}$ in $G$ (the community detection method for generating $\mathcal{C}$ is treated as a hyperparameter), a set of graph data augmentation operations $\mathcal{A}$ , and GNN model $f : G \rightarrow \widehat{\mathbf{y}}$ , find the best strategy of applying $\mathcal{A}$ on the subgraphs of each community ${C}_{k} \in \mathcal{C}$ such that the node classification performance of $f$ on $G$ is maximized. The main difference between our problem definition and the previous literature is the use of graph communities $\mathcal{C}$ to find the best data augmentation strategies.
50
+
51
+ ${}^{1}$ To differentiate from the baseline methods (normal font), we use the small caps font to denote the augmentation operations.
52
+
53
54
+
55
+ Figure 1: Graph communities detected by the Louvain method on the PubMed dataset show diverse distributions over different characteristics of the graph structure.
56
+
57
58
+
59
+ Figure 2: Overview of one iteration of our proposed AutoGDA on an example graph with three communities. In each step, the policy network takes the observation of one graph community as input and outputs the augmentation strategy for it. The GNN is then fine-tuned with the augmented graph and the validation accuracy is used as the reward to update the policy network.
60
+
61
+ § 3 AUTOMATED GRAPH DATA AUGMENTATION
62
+
63
+ Section 3.1 shows our motivation of customizing graph data augmentation strategies for different communities. Section 3.2 models automated graph data augmentation as a bilevel optimization problem. Section 3.3 presents the reinforcement learning-based framework AutoGDA.
64
+
65
+ § 3.1 MOTIVATION
66
+
67
+ Community structures naturally exist in graphs $\left\lbrack {{19},{20}}\right\rbrack$ and graph community detection has been extensively studied in the past few decades [29]. Graph community detection methods (e.g., the Louvain method [25]) separate the set of nodes in the graph into disjoint subsets such that the quality of the communities, usually measured by modularity [20], is maximized. Thus, the nodes within the same community are more densely connected and also more important to each other for node classification [21] compared with nodes in different communities.
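For concreteness, disjoint communities of the kind used throughout this paper can be obtained with off-the-shelf tooling; the snippet below is a sketch using the Louvain implementation shipped with recent NetworkX releases on a toy graph (not the exact pipeline used in our experiments).

```python
import networkx as nx

G = nx.karate_club_graph()                                 # small toy graph
communities = nx.community.louvain_communities(G, seed=0)  # list of disjoint node sets C_1 .. C_{N_c}
subgraphs = [G.subgraph(nodes).copy() for nodes in communities]
print(len(communities), nx.community.modularity(G, communities))
```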
68
+
69
+ Our idea is based on the observation that different communities in the same graph mostly show disparate data distributions, which was also shown in previous literature [22-24]. The structure of communities commonly varies in terms of density, centrality, etc. For example, Figure 1 shows the characteristics of communities (detected by Louvain) on the PubMed graph [7]. Figures 1(a), 1(b), and 1(c) show the distributions of averaged degree, number of triangles, and centrality, respectively. Figure 1(d) presents the homophily ratio [30] of community subgraphs, where the homophily ratio is calculated as the fraction of edges that connect nodes with the same class label. Previous works $\left\lbrack {{11},{13}}\right\rbrack$ have shown that graph homophily is strongly correlated with the node classification performance of GNNs, because semi-supervised graph learning methods are mainly based on the homophily assumption [31]. From Figure 1 we observe that the communities exhibit disparate distributions under different measurements.
70
+
71
+ Moreover, for certain deterministic graph data augmentation methods such as GAugM [13], minority communities may be ignored during the augmentation process. For example, GAugM [13] modifies the graph structure according to the edge probabilities given by a trained edge predictor, adding missing edges with the highest probabilities and removing existing edges with the lowest probabilities. However, it is possible that all of these modifications happen in only one or a few graph communities, namely those showing the strongest homophily patterns.
72
+
73
+ With the above observations on the disparate characteristics of graph communities, we argue that the state-of-the-art methods that apply the augmentation operations on the whole graph may not be the best practice for graph data augmentation. The auto-augmentation literature in computer vision [2, 5] has shown that customizing augmentation operations for data objects/batches is more effective than using the same strategy for the entire dataset. Although it is infeasible to learn the best operation for each data object (node) in node classification due to the dependent nature of graph data, graph data augmentation could benefit from having a customized augmentation strategy for each community. The next subsection formulates the problem of automated graph data augmentation via community customization.
74
+
75
+ § 3.2 BILEVEL OPTIMIZATION FORMULATION
76
+
77
+ We formulate the problem of automated graph data augmentation in a similar way to the auto-augmentation problem in vision tasks $\left\lbrack {2,5}\right\rbrack$ : it aims to find a set of graph data augmentation operations for each community in the graph, which maximizes the performance of a graph neural network model on the task of (semi-)supervised node classification.
78
+
79
+ Let the graph data augmentation policy network be defined as ${g}_{\theta } : G \rightarrow \{ 0,1,\ldots ,{99}{\} }^{\left| \mathcal{A}\right| }$ , which is a multi-layer perceptron (MLP) parameterized by $\theta$ . The policy takes a (sub)graph as input and outputs the augmentation strategy for this (sub)graph, which in our case is the four percentage magnitudes for $\mathcal{A} = \{$ DROPEDGE, ATTRMASK, GAUGM_ADD, GAUGM_RM $\}$ . Let $\operatorname{aug}\left( {{g}_{\theta }\left( G\right) }\right)$ be a function that applies the augmentation strategy ${g}_{\theta }\left( G\right)$ on (sub)graph $G$ ; then the automated graph data augmentation process for subgraph ${G}_{Ck}$ is formulated as
80
+
81
+ $$
82
+ {G}_{Ck}^{\prime } = \operatorname{aug}\left( {{g}_{\theta }\left( {G}_{Ck}\right) }\right) \tag{1}
83
+ $$
84
+
85
+ where ${G}_{Ck}^{\prime }$ denotes the subgraph ${G}_{Ck}$ after augmentation.
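A minimal sketch of what such a policy network could look like (our own illustration; the formulation above only requires an MLP mapping a community representation to $|\mathcal{A}|$ magnitudes):

```python
import torch
import torch.nn as nn

N_OPS = 4  # DropEdge, AttrMask, GAugM_add, GAugM_rm

class AugmentationPolicy(nn.Module):
    """MLP mapping a pooled community representation to one magnitude in {0, ..., 99} per operation."""
    def __init__(self, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, N_OPS * 100),   # one 100-way categorical per operation
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        logits = self.net(obs).view(-1, N_OPS, 100)
        return logits.argmax(dim=-1)              # greedy pick; an RL policy would sample instead

policy = AugmentationPolicy(obs_dim=16)
magnitudes = policy(torch.randn(1, 16))           # shape (1, 4): one magnitude per operation
```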
86
+
87
+ We denote the GNN model as ${f}_{\omega } : G \rightarrow \widehat{\mathbf{y}}$ , which is parameterized by $\omega$ . It takes the graph as input and outputs the predicted node labels $\widehat{\mathbf{y}}$ . Let ${\mathbf{y}}_{tr}$ and ${\mathbf{y}}_{val}$ be the node labels for the training set and validation set, respectively. The objective of obtaining the best augmentation policy (solving for $\theta$ ) can be described as a bilevel optimization problem [32]. The inner level is the model weight optimization, which solves for the optimal ${\omega }_{\theta }$ given a fixed augmentation policy $\left( {g}_{\theta }\right)$ :
88
+
89
+ $$
90
+ {\omega }_{\theta } = \underset{\omega }{\arg \min }\mathcal{L}\left( {{f}_{\omega }\left( {\mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}\operatorname{aug}\left( {{g}_{\theta }\left( {G}_{Ck}\right) }\right) }\right) ,{\mathbf{y}}_{tr}}\right) , \tag{2}
91
+ $$
92
+
93
+ where $\mathcal{L}$ denotes the loss function (cross entropy).
94
+
95
+ The outer level is the augmentation policy optimization, which is optimizing the policy parameter $\theta$ using the result of the inner level problem. Here we take the validation performance (accuracy) as the optimization objective. Then we have the problem formulated as below:
96
+
97
+ $$
98
+ {\theta }^{ * } = \underset{\theta }{\arg \max }{ACC}\left( {{f}_{{\omega }_{\theta }}\left( {\mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}\operatorname{aug}\left( {{g}_{\theta }\left( {G}_{Ck}\right) }\right) }\right) ,{\mathbf{y}}_{\text{ val }}}\right) , \tag{3}
99
+ $$
100
+
101
+ $$
102
+ \text{ where }{\omega }_{\theta } = \underset{\omega }{\arg \min }\,\mathcal{L}\left( {\omega ,\theta }\right) \text{ as in Eq. (2)},
103
+ $$
104
+
105
+ where ${\theta }^{ * }$ denotes the parameter of the optimal policy, and ${ACC}\left( {f\left( G\right) ,{\mathbf{y}}_{\text{ val }}}\right)$ denotes the validation accuracy.
106
+
107
+ § 3.3 AUTOGDA FRAMEWORK
108
+
109
+ As the graph data augmentation operations $\mathcal{A}$ modify the graph structure and also affect the training of the GNN model, when applying the augmentation operations community by community, the graph data augmentation can be formulated as a sequential process on the graph. Figure 2 illustrates this sequential process: in each step of the iteration, the policy network takes the observation of a community and outputs the action containing the set of augmentation magnitudes to be applied on this community. We finetune the GNN model after applying the augmentations and take the validation accuracy as the reward.
110
+
111
+ Parameter Sharing As solving bilevel optimization problems is extremely time-consuming due to the repeated solving of the inner loop [32], we utilize the weight-sharing scheme for automated
112
+
113
+ augmentation proposed by Tian et al. [5]. At the start of each episode, we reset the current graph to the original graph, pretrain the GNN model on the original graph (without optimizing the outer loop or applying any data augmentation operation), and obtain $\bar{\omega }$ . That is,
114
+
115
+ $$
116
+ \bar{\omega } = \underset{\omega }{\arg \min }\mathcal{L}\left( {{f}_{\omega }\left( G\right) ,{\mathbf{y}}_{tr}}\right) \tag{4}
117
+ $$
118
+
119
+ In each step of this episode, instead of training a new GNN model from scratch to get ${\omega }_{\theta }$ , we load the parameters $\bar{\omega }$ from pretraining and finetune the GNN model for only a small number of epochs with the given actions (augmentation strategy) to get ${\bar{\omega }}_{\theta }$ . Therefore, the outer level for optimizing the augmentation policy parameters becomes
120
+
121
+ $$
122
+ {\theta }^{ * } = \underset{\theta }{\arg \max }{ACC}\left( {{f}_{{\bar{\omega }}_{\theta }}\left( {\mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}\operatorname{aug}\left( {{g}_{\theta }\left( {G}_{Ck}\right) }\right) }\right) ,{\mathbf{y}}_{val}}\right) . \tag{5}
123
+ $$
124
+
125
+ Reinforcement Learning (RL) Environment The set of graph data augmentation operations $\mathcal{A}$ contains both stochastic and deterministic augmentation operations, where the stochastic operations (DROPEDGE, ATTRMASK) affect the GNN model during training and the deterministic operations (GAUGM_ADD, GAUGM_RM) directly modify the graph structure. Therefore, as we apply the augmentation strategy on each community, the node/graph representation obtained by the GNN trained with the augmentations also changes. This process forms a Markov decision process whose length is equal to the number of communities.
126
+
127
+ In the RL environment, we take the current graph with the given augmentation strategy as the state and use the graph representation of one community as the observation for one step. That is, for each graph community ${C}_{k}$ , the observation is the pooled graph representation of the subgraph ${G}_{Ck}$ , i.e., the element-wise mean of the node representations for all nodes in ${\mathcal{V}}_{Ck}$ . As the output dimension of the GNN's last layer is the number of unique labels, which is usually very small, we take the output of the GNN's second-to-last layer as the node representations. The policy network ${g}_{\theta }$ takes the observation (i.e., the graph representation of the community) as input and outputs the magnitudes of the different augmentation operations for the community. Note that an augmentation operation is not applied if its magnitude is zero.
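A sketch of how such an observation could be computed, assuming the GNN exposes its second-to-last layer activations as `hidden` (an assumed variable name, for illustration only):

```python
import torch

def community_observation(hidden: torch.Tensor, community_nodes: list) -> torch.Tensor:
    """Element-wise mean of the penultimate-layer representations of the community's nodes."""
    return hidden[community_nodes].mean(dim=0)

hidden = torch.randn(100, 16)                         # penultimate-layer states for 100 nodes
obs = community_observation(hidden, [0, 3, 7, 42])    # shape (16,)
```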
128
+
129
+ For optimizing our proposed method, we opt for simplicity and employ the widely used Proximal Policy Optimization (PPO) [33] algorithm. We use the validation performance of the GNN model after finetuning in each step as the reward for the RL policy.
130
+
131
+ Summary Algorithm 1 summarizes
132
+
133
+ Algorithm 1: AutoGDA
134
+
135
+ Input : $G,\mathcal{C},{\mathbf{y}}_{tr},{\mathbf{y}}_{val}$
136
+
137
+ /* Policy Optimization */
138
+
139
+ for episode in range $\left( {n\text{ \_episodes }}\right)$ do
140
+
141
+ Pretrain GNN and obtain $\bar{\omega }$ by Eq. (4) ;
142
+
143
+ $\left\{ {{G}_{C1}^{\prime },\ldots ,{G}_{C{N}_{c}}^{\prime }}\right\} = \left\{ {{G}_{C1},\ldots ,{G}_{C{N}_{c}}}\right\} ;$
144
+
145
+ for $k$ in $\left\{ {1,2,\ldots ,{N}_{c}}\right\}$ do
146
+
147
+ ${G}_{Ck}^{\prime } = \operatorname{aug}\left( {{g}_{\theta }\left( {G}_{Ck}\right) }\right) ;\;//$ Eq. (1)
148
+
149
+ Load $\bar{\omega }$ ;
150
+
151
+ Finetune GNN with $\mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}{G}_{Ck}^{\prime }$ and get ${\bar{\omega }}_{\theta }$ ;
152
+
153
+ Use val. ACC to update $\theta ;\;//$ Eq. (5)
154
+
155
+ end
156
+
157
+ end
158
+
159
+ ${\theta }^{ * } = \theta$ ;
160
+
161
+ /* Inference */
162
+
163
+ ${G}^{ * } = \mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}\operatorname{aug}\left( {{g}_{{\theta }^{ * }}\left( {G}_{Ck}\right) }\right) ;$
164
+
165
+ Reset and train GNN with ${G}^{ * }$ and get ${\widehat{\mathbf{y}}}_{\text{ test }}$ ;
166
+
167
+ Output: ${\widehat{\mathbf{y}}}_{\text{ test }}$
168
+
169
+ the whole process of AutoGDA. In each episode of the policy optimization stage, we first pretrain the GNN model by Eq. (4) and reset the graph. The subgraphs ${G}_{Ck}^{\prime }$ in line 3 are used for tracking the augmented communities. Then, for each community, we first obtain and apply its augmentation strategy given by the policy (line 5), then load the pretrained parameters $\bar{\omega }$ for the GNN model, finetune it with the current augmentation strategies, and use the validation accuracy as the reward to update the policy. After the policy network is sufficiently trained, we obtain the final graph data augmentation strategy for the whole graph (all communities) from the trained policy ${g}_{{\theta }^{ * }}$ . Finally, we reset the GNN and train it with $\mathop{\bigcup }\limits_{{k = 1}}^{{N}_{c}}\operatorname{aug}\left( {{g}_{{\theta }^{ * }}\left( {G}_{Ck}\right) }\right)$ to get the predicted labels ${\widehat{\mathbf{y}}}_{\text{ test }}$ .
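To make the control flow of Algorithm 1 concrete, the sketch below uses toy stand-ins for every component (`pretrain_gnn`, `finetune`, and the other helpers are placeholders we define inline, not the actual training code):

```python
import random

# Toy stand-ins, defined only so that the control flow of Algorithm 1 runs end to end.
def pretrain_gnn(graph):                 return {"weights": "pretrained"}            # Eq. (4)
def policy(theta, community):            return [random.randrange(100) for _ in range(4)]
def apply_augmentation(community, mags): return (community, mags)                    # Eq. (1)
def finetune(weights, augmented_graph):  return dict(weights, finetuned=True)
def validation_accuracy(weights):        return random.random()
def update_policy(theta, reward):        return theta                                # PPO step in practice

graph, communities, theta = "G", ["C1", "C2", "C3"], {"mlp": None}
for episode in range(2):                                   # n_episodes
    omega_bar = pretrain_gnn(graph)                        # reset graph, pretrain GNN
    augmented = list(communities)
    for k, community in enumerate(communities):
        augmented[k] = apply_augmentation(community, policy(theta, community))
        omega = finetune(omega_bar, augmented)             # load pretrained weights, short finetune
        theta = update_policy(theta, validation_accuracy(omega))  # reward = validation accuracy
```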
170
+
171
+ For the edges that connect nodes belonging to different communities, we optionally group them into a special community of edges $\left( {C}_{{N}_{c} + 1}\right)$ in AutoGDA. When we use this community, it is only
172
+
173
+ Table 1: Node classification accuracy across GNN architectures and public benchmarks. Bolded are the best performance and the comparable ones (within the standard deviation of the best performance).
174
+
175
+ | GNN | Method | Cora | CiteSeer | PubMed | Flickr |
+ |---|---|---|---|---|---|
+ | GCN | Original | 81.5 ± 0.4 | 70.3 ± 0.5 | 79.0 ± 0.3 | 61.2 ± 0.4 |
+ | GCN | +AdaEdge | 81.9 ± 0.7 | 72.8 ± 0.7 | 79.8 ± 0.4 | 61.2 ± 0.5 |
+ | GCN | +DropEdge | 82.0 ± 0.8 | 71.8 ± 0.2 | 79.3 ± 0.3 | 61.4 ± 0.7 |
+ | GCN | +FLAG | 80.2 ± 0.3 | 68.1 ± 0.5 | 78.5 ± 0.2 | 62.3 ± 0.4 |
+ | GCN | +GAugM | 83.5 ± 0.4 | 72.3 ± 0.4 | 80.2 ± 0.3 | 68.2 ± 0.7 |
+ | GCN | +GAugO | 83.6 ± 0.5 | 73.3 ± 1.1 | 79.3 ± 0.4 | 62.2 ± 0.3 |
+ | GCN | +AutoGDA | **84.4 ± 0.3** | 73.0 ± 0.4 | **81.6 ± 0.5** | 71.4 ± 0.5 |
+ | GSAGE | Original | 81.3 ± 0.5 | 70.6 ± 0.5 | 78.3 ± 0.6 | 57.4 ± 0.5 |
+ | GSAGE | +AdaEdge | 81.5 ± 0.6 | 71.3 ± 0.8 | 78.5 ± 0.2 | 57.7 ± 0.7 |
+ | GSAGE | +DropEdge | 81.6 ± 0.5 | 70.8 ± 0.5 | 78.7 ± 0.7 | 58.4 ± 0.7 |
+ | GSAGE | +FLAG | 79.2 ± 0.9 | 67.9 ± 1.4 | 77.4 ± 0.3 | 48.5 ± 0.6 |
+ | GSAGE | +GAugM | 83.2 ± 0.4 | 71.2 ± 0.4 | 78.7 ± 0.3 | 65.2 ± 0.4 |
+ | GSAGE | +GAugO | 82.0 ± 0.5 | 72.7 ± 0.7 | 79.4 ± 0.9 | 56.3 ± 0.6 |
+ | GSAGE | +AutoGDA | 83.2 ± 0.5 | 72.5 ± 0.4 | **80.0 ± 0.5** | 73.4 ± 0.6 |
+ | GAT | Original | 83.0 ± 0.7 | 72.5 ± 0.7 | 79.0 ± 0.3 | 46.9 ± 1.6 |
+ | GAT | +AdaEdge | 82.0 ± 0.6 | 71.1 ± 0.8 | 79.1 ± 0.6 | 48.2 ± 1.0 |
+ | GAT | +DropEdge | 81.9 ± 0.6 | 71.0 ± 0.5 | 78.9 ± 0.6 | 50.0 ± 1.6 |
+ | GAT | +FLAG | 79.6 ± 0.6 | 67.7 ± 0.7 | 78.2 ± 0.5 | 48.9 ± 1.1 |
+ | GAT | +GAugM | 82.1 ± 1.0 | 71.5 ± 0.5 | 79.0 ± 0.5 | 63.7 ± 0.9 |
+ | GAT | +GAugO | 82.2 ± 0.8 | 71.6 ± 1.1 | 78.5 ± 0.8 | 51.9 ± 0.5 |
+ | GAT | +AutoGDA | **84.8 ± 0.2** | 73.2 ± 0.4 | 79.8 ± 0.6 | 65.1 ± 0.9 |
243
+
244
+ disjoint from the other communities in its edges (i.e., ${\mathcal{V}}_{C{N}_{c} + 1} \cap \left( {{ \cup }_{k = 1}^{{N}_{c}}{\mathcal{V}}_{Ck}}\right) = \varnothing$ and ${\mathcal{E}}_{C{N}_{c} + 1} = \mathcal{E} \smallsetminus \left( {{ \cup }_{k = 1}^{{N}_{c}}{\mathcal{E}}_{Ck}}\right)$ ), so we only apply DROPEDGE and GAUGM_RM on it, as the other two augmentation operations are covered by the other communities.
245
+
246
+ § 4 EXPERIMENTS
247
+
248
+ In this section, we evaluate the performance of the proposed AutoGDA across different GNN backbones and datasets, and over alternative graph data augmentation methods.
249
+
250
+ § 4.1 EXPERIMENTAL SETUP
251
+
252
+ Datasets We evaluate on 4 public benchmark datasets across domains: citation networks with strong homophily (Cora, CiteSeer, PubMed [7]) and social networks that exhibit heterophily [30] (Flickr [34]). We also evaluate on 2 real industry application benchmarks, ECom20K and ECom43K, for the task of graph anomaly detection. To validate the possibility of extending AutoGDA to heterogeneous graphs, we also evaluate on 2 heterogeneous graph benchmarks for the task of entity classification: AIFB and MUTAG [35]. Details of the datasets are provided in Appendix B.
253
+
254
+ Baselines We evaluate AutoGDA and the baselines using 3 widely used GNN architectures: GCN [7], GraphSAGE [8], and GAT [36]. We compare the node classification performance of AutoGDA with that achieved by a standard GNN, as well as with five state-of-the-art graph data augmentation baselines: AdaEdge [11], DropEdge [18], FLAG, GAugM [13], and GAugO [13]. To evaluate the robustness of AutoGDA under graph adversarial attacks, we compare AutoGDA with state-of-the-art graph defense methods (GNN-Jaccard [37] and ElasticGNN [38]) against the graph adversarial attacks DICE [39] and Metattack [27]. Moreover, to evaluate the generalizability of AutoGDA to heterogeneous graphs, we also test AutoGDA and the baseline methods with R-GCN [40] on the heterogeneous graph benchmarks.
255
+
256
+ § 4.2 EXPERIMENTAL RESULTS
257
+
258
+ We show comparative results of AutoGDA and the baseline methods in Tables 1 and 2 and Figure 3. Table 1 is organized per GNN architecture (row), per dataset (column), and per method (within row). Table 2 is organized per dataset (row), per graph adversarial attack method (column), per attack level (within column), and per method (within row). Due to customer privacy concerns, relative improvements rather than absolute performances are reported in Figure 3. We bold the best and comparable performances. In short, our proposed AutoGDA consistently achieves the best or comparable performance in all combinations of GNN architectures and datasets.
259
+
260
261
+
262
+ Figure 3: Relative improvements over GNNs for accuracy on two real-world industry anomaly detection datasets.
263
+
264
+ Table 2: Node classification accuracy against different levels of adversarial attacks. Bolded are the best performance and the comparable ones (within the standard deviation of the best performance).
265
+
266
+ | Dataset | Method | DICE [39] 10% | DICE [39] 30% | DICE [39] 50% | Metattack [27] 10% | Metattack [27] 30% | Metattack [27] 50% |
+ |---|---|---|---|---|---|---|---|
+ | Cora | GCN [7] | 78.4 ± 0.6 | 73.6 ± 0.9 | 66.8 ± 1.2 | 70.2 ± 0.9 | 32.6 ± 2.0 | 16.6 ± 0.8 |
+ | Cora | GAugO [13] | 78.8 ± 0.4 | 74.0 ± 0.3 | 67.3 ± 0.5 | 72.5 ± 1.1 | 57.1 ± 0.7 | 40.6 ± 1.1 |
+ | Cora | GNN-Jaccard [37] | 73.1 ± 0.5 | 66.4 ± 0.5 | 66.9 ± 0.4 | 72.1 ± 0.7 | 51.6 ± 0.5 | 38.4 ± 0.7 |
+ | Cora | ElasticGNN [38] | 79.8 ± 0.8 | 74.5 ± 0.8 | 67.6 ± 1.4 | 74.3 ± 1.2 | 49.0 ± 1.3 | 35.4 ± 1.4 |
+ | Cora | AutoGDA | **80.2 ± 0.7** | 74.7 ± 0.3 | 67.9 ± 0.9 | 74.5 ± 1.0 | **63.2 ± 1.9** | **53.7 ± 1.2** |
+ | CiteSeer | GCN [7] | 65.8 ± 1.1 | 60.1 ± 0.8 | 56.5 ± 0.8 | 38.4 ± 1.0 | 15.9 ± 0.8 | 10.7 ± 1.9 |
+ | CiteSeer | GAugO [13] | 67.0 ± 0.5 | 60.9 ± 0.4 | 56.5 ± 0.9 | 50.2 ± 0.9 | 34.3 ± 0.6 | 29.1 ± 1.2 |
+ | CiteSeer | GNN-Jaccard [37] | 66.5 ± 1.3 | 59.8 ± 0.7 | 54.1 ± 1.0 | 43.7 ± 1.2 | 27.0 ± 0.7 | 19.8 ± 0.4 |
+ | CiteSeer | ElasticGNN [38] | 66.7 ± 1.4 | 59.7 ± 0.8 | 56.3 ± 1.4 | 47.5 ± 1.3 | 31.8 ± 0.3 | 23.5 ± 4.3 |
+ | CiteSeer | AutoGDA | 67.8 ± 1.1 | 62.1 ± 1.3 | 57.6 ± 1.1 | 61.5 ± 1.6 | **52.6 ± 1.3** | 47.8 ± 1.7 |
+ | PubMed | GCN [7] | 73.9 ± 0.3 | 67.1 ± 0.3 | 63.6 ± 0.7 | 67.2 ± 0.4 | 41.3 ± 1.5 | 27.5 ± 1.7 |
+ | PubMed | GAugO [13] | 74.6 ± 0.4 | 67.8 ± 0.3 | 64.8 ± 0.5 | 71.6 ± 0.9 | 51.8 ± 0.7 | 40.7 ± 1.0 |
+ | PubMed | GNN-Jaccard [37] | 73.7 ± 0.4 | 67.3 ± 0.5 | 64.2 ± 0.4 | 68.2 ± 0.4 | 41.9 ± 0.9 | 29.1 ± 1.1 |
+ | PubMed | ElasticGNN [38] | 75.5 ± 0.6 | 68.7 ± 0.7 | 65.4 ± 0.7 | 71.6 ± 0.5 | 49.8 ± 0.4 | 40.1 ± 0.8 |
+ | PubMed | AutoGDA | 75.1 ± 0.8 | 68.6 ± 0.5 | 65.6 ± 0.4 | 73.1 ± 1.2 | 55.9 ± 1.8 | 42.2 ± 1.6 |
319
+
320
+ Effectiveness on graph data augmentation From Table 1 and Figure 3 we observe that our proposed AutoGDA achieves improvements over all three GNN architectures (averaged across datasets): AutoGDA improves by 5.0% (GCN), 6.1% (GraphSAGE), and 7.5% (GAT), respectively. From the dataset perspective, AutoGDA also achieves improvements over all 6 datasets (averaged across GNN architectures): AutoGDA improves by 2.7%, 2.2%, 2.1%, 27.6%, 0.8%, and 1.7%, respectively, for each dataset (Cora, CiteSeer, PubMed, Flickr, ECom20k, ECom43k). Notably, AutoGDA is especially effective on social networks (Flickr), which are naturally more heterophilous. Although GAugM [13] outperforms all other baselines by a large margin, AutoGDA still significantly improves over GAugM. Finally, we note that AutoGDA outperforms all graph data augmentation baselines: specifically, AutoGDA improves by 5.6%, 5.4%, 6.1%, 2.0%, and 4.7%, respectively, over AdaEdge, DropEdge, FLAG, GAugM, and GAugO (averaged across datasets and GNNs). We reason that learning cus-
321
+
322
+ Table 3: Ablation experiments using GCN on PubMed.
323
+
324
+ | Method | PubMed |
+ |---|---|
+ | DropEdge | 79.3 ± 0.3 |
+ | AutoGDA with DROPEDGE (single community) | 79.3 ± 0.4 |
+ | AutoGDA with DROPEDGE | 79.4 ± 0.2 |
+ | GAugM | 80.2 ± 0.3 |
+ | AutoGDA with GAUGM (single community) | 80.2 ± 0.3 |
+ | AutoGDA with GAUGM | 81.2 ± 0.4 |
+ | AutoGDA (single community) | 80.4 ± 0.3 |
+ | AutoGDA | **81.6 ± 0.5** |
353
+
354
355
+
356
+ Figure 4: AutoGDA learns diverse augmentation strategies for different communities in the PubMed graph.
357
+
358
+ tomized augmentation for each graph community and combining several state-of-the-art graph data augmentation operations both contributed to the performance improvement of AutoGDA.
359
+
360
+ Robustness against graph adversarial attacks From Table 2 we observe that the proposed AutoGDA is able to effectively learn robust representations under graph adversarial attacks. Although the recent baseline ElasticGNN [38] also achieves good performance against Random Injection and DICE [39], AutoGDA outperforms ElasticGNN by large margins under Metattack [27]. In short, our proposed AutoGDA achieves the best performance for 21 out of 27 combinations of datasets and graph adversarial attack methods.
361
+
362
+ Necessity of community-adaptive augmentations Table 3 shows ablation experiments on PubMed using GCN. We compare the performance of AutoGDA using only one augmentation operation versus using all augmentation operations in $\mathcal{A}$ , and we also compare the performance of AutoGDA with a single community (viewing the whole graph as the only community) versus our default setting (using the Louvain method for community detection). We note that when using only one augmentation operation (combining GAUGM_ADD and GAUGM_RM as GAUGM) and under the single-community setting, AutoGDA performs parameter search over the existing graph data augmentation methods. We also observe from Table 3 that the default AutoGDA (with multiple graph communities) consistently outperforms the single-community setting, showing that the community-customized augmentation strategy is crucial for the performance improvement.
363
+
364
+ Case Study Figure 4 showcases the augmentation strategies learned by our proposed AutoGDA with GCN for seven different communities on the PubMed dataset. We observe that our proposed AutoGDA is able to learn diverse augmentation strategies for different communities in the graph.
365
+
366
+ § 5 CONCLUSIONS
367
+
368
+ In this paper, we studied the problem of community-customized data augmentation for node classification. Our work showed that different communities require different augmentation strategies for the best node classification performance due to their disparate characteristics. We proposed an automated graph data augmentation framework AutoGDA that adopted reinforcement learning to automatically learn the optimal augmentation strategy for each community. Through extensive experiments on benchmark graph datasets from multiple domains, our proposed approach AutoGDA achieved up to 12.5% accuracy improvement over the state-of-the-art graph data augmentation baselines.
papers/LOG/LOG 2022/LOG 2022 Conference/Sq9Orta9l5i/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,495 @@
 
1
+ # REFACTOR GNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Factorisation-based Models (FMs), such as DistMult, have enjoyed enduring success for Knowledge Graph Completion (KGC) tasks, often outperforming Graph Neural Networks (GNNs). However, unlike GNNs, FMs struggle to incorporate node features and generalise to unseen nodes in inductive settings. Our work bridges the gap between FMs and GNNs by proposing REFACTOR GNNs. This new architecture draws upon both modelling paradigms, which previously were largely thought of as disjoint. Concretely, using a message-passing formalism, we show how FMs can be cast as GNNs by reformulating the gradient descent procedure as message-passing operations, which forms the basis of our REFACTOR GNNs. Our REFACTOR GNNs achieve state-of-the-art inductive performance while using an order of magnitude fewer parameters.
12
+
13
+ ## 1 Introduction
14
+
15
+ In recent years, machine learning on graphs has attracted significant attention due to the abundance of graph-structured data and developments in graph learning algorithms. Graph Neural Networks (GNNs) have shown state-of-the-art performance for many graph-related problems, such as node classification [1] and graph classification [2]. Their main advantage is that they can easily be applied in an inductive setting: generalising to new nodes and graphs without re-training. However, despite many attempts at applying GNNs to multi-relational link prediction tasks such as Knowledge Graph Completion [3], there are still few positive results compared to factorisation-based models (FMs) [4, 5]. As it stands, GNNs either - after resolving reproducibility concerns - deliver significantly lower performance $\left\lbrack {6,7}\right\rbrack$ or yield negligible performance gains at the cost of highly sophisticated architecture designs [8]. A notable exception is NBFNet [9], but even here the advance comes at the price of a high computational inference cost compared to FMs. Furthermore, it is unclear how NBFNet could incorporate node features, which - as we will see in this work - leads to remarkably lower performance in an inductive setting. On the flip side, FMs, despite being a simpler architecture, have been found to be very accurate for knowledge graph completion when coupled with appropriate training strategies [10] and training objectives [11, 12]. However, they also come with shortcomings in that they, unlike GNNs, cannot be applied in an inductive setting.
16
+
17
+ Given the respective strengths and weaknesses of FMs and GNNs, can we bridge these two seemingly different model categories? While exploring this question, we make the following contributions:
18
+
19
+ - By reformulating the training process using message-passing primitives, we show a practical connection between FMs and GNNs, i.e. FMs can be treated as a special instance of GNNs.
20
+
21
+ - Based on this connection, we propose a new family of architectures, REFACTOR GNNs, that interpolates between FMs and GNNs and allow FMs to be used inductively.
22
+
23
+ - In an empirical investigation across well-established benchmarks (see the appendix), our REFACTOR GNNs achieve state-of-the-art inductive performance across the board and comparable transductive performance to FMs - despite using an order of magnitude fewer parameters.
24
+
25
+ ## 2 Background
26
+
27
+ Knowledge Graph Completion [KGC, 13] is a canonical task of multi-relational link prediction. The goal is to predict missing edges given the existing edges in the knowledge graph. Formally, a knowledge graph contains a set of entities (nodes) $\mathcal{E} = \{ 1,\ldots ,\left| \mathcal{E}\right| \}$ , a set of relation (or edge) types $\mathcal{R} = \{ 1,\ldots ,\left| \mathcal{R}\right| \}$ , and a set of typed edges between the entities $\mathcal{T} = {\left\{ \left( {v}_{i},{r}_{i},{w}_{i}\right) \right\} }_{i = 1}^{\left| \mathcal{T}\right| }$ , where each triple $\left( {{v}_{i},{r}_{i},{w}_{i}}\right)$ indicates a relationship of type ${r}_{i} \in \mathcal{R}$ between the subject ${v}_{i} \in \mathcal{E}$ and the object ${w}_{i} \in \mathcal{E}$ of the triple. Given a (training) knowledge graph, the KGC task [3] aims at identifying missing links by answering $\left( {v, r,?}\right)$ queries i.e. predicting the object given the subject and the relation.
28
+
29
+ Multi-relational link prediction models can be trained via maximum likelihood, by fitting a parameterized conditional categorical distribution ${P}_{\theta }\left( {w \mid v, r}\right)$ over the candidate objects of a relation, given the subject $v$ and the relation type $r : {P}_{\theta }\left( {w \mid v, r}\right) = \operatorname{Softmax}\left( {{\Gamma }_{\theta }\left( {v, r, \cdot }\right) }\right) \left\lbrack w\right\rbrack$ , where ${\Gamma }_{\theta } : \mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \mathbb{R}$ is a scoring function that, given a triple $\left( {v, r, w}\right)$ , returns the likelihood that the corresponding edge appears in the graph. In this paper, we illustrate our derivations using DistMult [4] as the score function $\Gamma$ and defer extensions to general score functions, e.g. ComplEx [5], to the appendix. In DistMult, the score function ${\Gamma }_{\theta }$ is defined as the tri-linear dot product of the embeddings of the subject, relation type, and object of the triple: ${\Gamma }_{\theta }\left( {v, r, w}\right) = \mathop{\sum }\limits_{{i = 1}}^{K}{f}_{\phi }{\left( v\right) }_{i}{f}_{\phi }{\left( w\right) }_{i}{g}_{\psi }{\left( r\right) }_{i}$ , where ${f}_{\phi } : \mathcal{E} \rightarrow {\mathbb{R}}^{K}$ and ${g}_{\psi } : \mathcal{R} \rightarrow {\mathbb{R}}^{K}$ are learnable maps parameterised by $\phi$ and $\psi$ that encode entities and relation types into $K$ -dimensional representations, and $\theta = \left( {\phi ,\psi }\right)$ . We will refer to $f$ and $g$ as the entity and relational encoders, respectively.
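For instance, with lookup-table encoders (cf. Equation (2)), the DistMult score and the induced conditional distribution over objects can be sketched as follows (a toy example, not the full training setup):

```python
import torch

num_entities, num_relations, K = 5, 2, 8
phi = torch.randn(num_entities, K, requires_grad=True)    # entity embeddings f_phi
psi = torch.randn(num_relations, K, requires_grad=True)   # relation embeddings g_psi

def distmult_scores(v: int, r: int) -> torch.Tensor:
    """Tri-linear scores Gamma(v, r, w) for all candidate objects w."""
    return (phi[v] * psi[r] * phi).sum(dim=-1)             # shape (num_entities,)

def p_object(v: int, r: int) -> torch.Tensor:
    """P(w | v, r) = Softmax over candidate objects of Gamma(v, r, .)."""
    return torch.softmax(distmult_scores(v, r), dim=-1)

loss = -torch.log(p_object(v=0, r=1)[3])                   # negative log-likelihood of object w = 3
loss.backward()                                            # gradients flow into both phi and psi
```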
30
+
31
+ We can learn the model parameters $\theta$ by minimising the expected negative log-likelihood $\mathcal{L}\left( \theta \right)$ of the ground-truth entities for the queries $\left( {v, r,?}\right)$ obtained from $\mathcal{T}$ :
32
+
33
+ $$
34
+ \arg \mathop{\min }\limits_{\theta }\mathcal{L}\left( \theta \right) \;\text{ where }\;\mathcal{L}\left( \theta \right) = - \frac{1}{\left| \mathcal{T}\right| }\mathop{\sum }\limits_{{\left( {v, r, w}\right) \in \mathcal{T}}}\log {P}_{\theta }\left( {w \mid v, r}\right) . \tag{1}
35
+ $$
36
+
37
+ During inference, we use the distribution ${P}_{\theta }$ for ranking missing links.
38
+
39
+ Factorisation-based Models for KGC. In factorisation-based models, which we assume to be DistMult, ${f}_{\phi }$ and ${g}_{\psi }$ are simply parameterised as look-up tables, associating each entity and relation with a continuous distributed representation:
40
+
41
+ $$
42
+ {f}_{\phi }\left( v\right) = \phi \left\lbrack v\right\rbrack ,\phi \in {\mathbb{R}}^{\left| \mathcal{E}\right| \times K}\;\text{ and }\;{g}_{\psi }\left( r\right) = \psi \left\lbrack r\right\rbrack ,\psi \in {\mathbb{R}}^{\left| \mathcal{R}\right| \times K}. \tag{2}
43
+ $$
44
+
45
+ GNN-based Models for KGC. GNNs were originally proposed for node or graph classification tasks $\left\lbrack {{14},{15}}\right\rbrack$ . To adapt them to KGC, previous work has explored two different paradigms: node-wise entity representations [16] and pair-wise entity representations [9, 17]. Though the latter paradigm has shown promising results, it requires computing an embedding representation for any pair of nodes, which can be too computationally expensive for large-scale graphs with millions of entities. Additionally, node-wise representations allow for using a single evaluation of ${f}_{\phi }\left( v\right)$ for multiple queries involving $v$ . Models based on the first paradigm differ from pure FMs only in the entity encoder and lend themselves well for a fairer comparison with pure FMs. We will therefore focus on this class and leave the investigation of pair-wise representations to future work.
46
+
47
+ Let ${q}_{\phi } : \mathcal{G} \times \mathcal{X} \rightarrow \mathop{\bigcup }\limits_{{S \in {\mathbb{N}}^{ + }}}{\mathbb{R}}^{S \times K}$ be a GNN encoder, where $\mathcal{G} = \{ G \mid G \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}\}$ is the set of all possible multi-relational graphs defined over $\mathcal{E}$ and $\mathcal{R}$ , and $\mathcal{X}$ is the input feature space, respectively. Then we can set ${f}_{\phi }\left( v\right) = {q}_{\phi }\left( {\mathcal{T}, X}\right) \left\lbrack v\right\rbrack$ . Following the standard message-passing framework $\left\lbrack {2,{18}}\right\rbrack$ used by the GNNs, we view ${q}_{\phi } = {q}^{L} \circ \ldots \circ {q}^{1}$ as the recursive composition of $L \in {\mathbb{N}}^{ + }$ layers that compute intermediate representations ${h}^{l}$ for $l \in \{ 1,\ldots , L\}$ (and ${h}^{0} = X$ ) for all entities in the KG. Each layer is made up of the following three functions:
48
+
49
+ - A message function ${q}_{\mathrm{M}}^{l} : {\mathbb{R}}^{K} \times \mathcal{R} \times {\mathbb{R}}^{K} \rightarrow {\mathbb{R}}^{K}$ that computes the message along each edge. Given an edge $\left( {v, r, w}\right) \in \mathcal{T},{q}_{\mathrm{M}}^{l}$ not only makes use of the node states ${h}^{l - 1}\left\lbrack v\right\rbrack$ and ${h}^{l - 1}\left\lbrack w\right\rbrack$ (as in standard GNNs) but also uses the relation $r$ . Denote the message as ${m}^{l}\left\lbrack {v, r, w}\right\rbrack = {q}_{\mathrm{M}}^{l}\left( {{h}^{l - 1}\left\lbrack v\right\rbrack , r,{h}^{l - 1}\left\lbrack w\right\rbrack }\right)$ ;
50
+
51
+ - An aggregation function ${q}_{\mathrm{A}}^{l} : \mathop{\bigcup }\limits_{{S \in \mathbb{N}}}{\mathbb{R}}^{S \times K} \rightarrow {\mathbb{R}}^{K}$ that aggregates all messages from the 1-hop neighbourhood of a node; denote the aggregated message as ${z}^{l}\left\lbrack v\right\rbrack =$ ${q}_{\mathrm{A}}^{l}\left( \left\{ {{m}^{l}\left\lbrack {v, r, w}\right\rbrack \mid \left( {r, w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }\right\} \right) ;$
52
+
53
+ - An update function ${q}_{\mathrm{U}}^{l} : {\mathbb{R}}^{K} \times {\mathbb{R}}^{K} \rightarrow {\mathbb{R}}^{K}$ that produces the new node states ${h}^{l}$ by combining previous node states ${h}^{l - 1}$ and the aggregated messages ${z}^{l} : {h}^{l}\left\lbrack v\right\rbrack = {q}_{\mathrm{U}}^{l}\left( {{h}^{l - 1}\left\lbrack v\right\rbrack ,{z}^{l}\left\lbrack v\right\rbrack }\right)$ .
54
+
55
+ ## 3 Implicit Message-Passing in FMs
56
+
57
+ The sharp difference in analytical forms might give rise to the misconception that GNNs incorporate message-passing over the neighbourhood of each node (up to $L$ -hops), while FMs do not. In this work, we show that by explicitly considering the training dynamics of FMs, we can uncover and analyse the hidden message-passing mechanism within FMs. In turn, this will lead us to the formulation of a novel class of GNNs well suited for multi-relational link prediction tasks (Section 4). Specifically, we propose to interpret the FMs' optimisation process of their objective (1) as the entity encoder. If we consider, for simplicity, a gradient descent training dynamic, then
58
+
59
+ $$
60
+ {f}_{{\phi }^{t}}\left( v\right) = {\phi }^{t}\left\lbrack v\right\rbrack = {\mathrm{{GD}}}^{t}\left( {{\phi }^{t - 1},\mathcal{T}}\right) \left\lbrack v\right\rbrack = \underset{t}{\underbrace{{\mathrm{{GD}}}^{t}\ldots {\mathrm{{GD}}}^{1}}}\left( {{\phi }^{0},\mathcal{T}}\right) \left\lbrack v\right\rbrack , \tag{3}
61
+ $$
62
+
63
+ where ${\phi }^{t}$ is the embedding vector at the $t$ -th step, $t \in {\mathbb{N}}^{ + }$ is the total number of iterations and ${\phi }^{0}$ is a random initialisation. GD is the gradient descent operator:
64
+
65
+ $$
66
+ \mathrm{{GD}}\left( {\phi ,\mathcal{T}}\right) = \phi - \alpha {\nabla }_{\phi }\mathcal{L} = \phi + \alpha \mathop{\sum }\limits_{{\left( {v, r, w}\right) \in \mathcal{T}}}\frac{\partial \log P\left( {w \mid v, r}\right) }{\partial \phi }, \tag{4}
67
+ $$
68
+
69
+ where $\alpha = \beta {\left| \mathcal{T}\right| }^{-1}$ and $\beta > 0$ is the learning rate. We now dissect Equation (4) in two different (but equivalent) ways. In the first, which we dub the edge view, we separately consider each addend of the gradient ${\nabla }_{\phi }\mathcal{L}$ . In the second, we aggregate the contributions from all the triples to the update of a particular node. With this latter decomposition, which we call the node view, we can explicate the message-passing mechanism at the core of the FMs. While the edge view suits a vectorised implementation better, the node view further exposes the information flow among nodes, allowing us to draw an analogy to message-passing GNNs.
70
+
71
+ To fully uncover the message-passing mechanism of FMs, we now focus on the gradient descent operation over a single node $v \in \mathcal{E}$ , referred to as the central node in the GNN literature. Recalling Equation (4), we have:
72
+
73
+ $$
74
+ \mathrm{{GD}}\left( {\phi ,\mathcal{T}}\right) \left\lbrack v\right\rbrack = \phi \left\lbrack v\right\rbrack + \alpha \mathop{\sum }\limits_{{\left( {v, r, w}\right) \in \mathcal{T}}}\frac{\partial \log P\left( {w \mid v, r}\right) }{\partial \phi \left\lbrack v\right\rbrack }, \tag{5}
75
+ $$
76
+
77
+ which aggregates the information stemming from the updates presented in the edge view. The next theorem describes how this total information flow to a particular node can be recast as an instance of message passing (cf. Section 2). We defer the proof to the appendix.
78
+
79
+ Theorem 3.1 (Message passing in FMs). The gradient descent operator GD (Equation (5)) on the node embeddings of a DistMult model (Equation (2)) with the maximum likelihood objective in Equation (1) and a multi-relational graph $\mathcal{T}$ defined over entities $\mathcal{E}$ induces a message-passing operator whose composing functions are:
80
+
81
+ $$
82
+ {q}_{\mathrm{M}}\left( {\phi \left\lbrack v\right\rbrack , r,\phi \left\lbrack w\right\rbrack }\right) = \left\{ \begin{array}{ll} \phi \left\lbrack w\right\rbrack \odot g\left( r\right) & \text{ if }\left( {r, w}\right) \in {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack , \\ \left( {1 - {P}_{\theta }\left( {v \mid w, r}\right) }\right) \phi \left\lbrack w\right\rbrack \odot g\left( r\right) & \text{ if }\left( {r, w}\right) \in {\mathcal{N}}_{ - }^{1}\left\lbrack v\right\rbrack ; \end{array}\right. \tag{6}
83
+ $$
84
+
85
+ $$
86
+ {q}_{\mathrm{A}}\left( \left\{ {m\left\lbrack {v, r, w}\right\rbrack : \left( {r, w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }\right\} \right) = \mathop{\sum }\limits_{{\left( {r, w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }}m\left\lbrack {v, r, w}\right\rbrack ; \tag{7}
87
+ $$
88
+
89
+ $$
90
+ {q}_{\mathrm{U}}\left( {\phi \left\lbrack v\right\rbrack , z\left\lbrack v\right\rbrack }\right) = \phi \left\lbrack v\right\rbrack + {\alpha z}\left\lbrack v\right\rbrack - {\beta n}\left\lbrack v\right\rbrack , \tag{8}
91
+ $$
92
+
93
+ where, defining the set of triples ${\mathcal{T}}^{-v} = \{ \left( {s, r, o}\right) \in \mathcal{T} : s \neq v \land o \neq v\}$ ,
94
+
95
+ $$
96
+ n\left\lbrack v\right\rbrack = \frac{\left| {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack \right| }{\left| \mathcal{T}\right| }{\mathbb{E}}_{{P}_{{\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack }}{\mathbb{E}}_{u \sim {P}_{\theta }\left( {\cdot \mid v, r}\right) }g\left( r\right) \odot \phi \left\lbrack u\right\rbrack + \frac{\left| {\mathcal{T}}^{-v}\right| }{\left| \mathcal{T}\right| }{\mathbb{E}}_{{P}_{{\mathcal{T}}^{-v}}}{P}_{\theta }\left( {v \mid s, r}\right) g\left( r\right) \odot \phi \left\lbrack s\right\rbrack , \tag{9}
97
+ $$
98
+
99
+ where ${P}_{{\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack }$ and ${P}_{{\mathcal{T}}^{-v}}$ are the empirical probability distributions associated with the respective sets.
100
+
101
+ What emerges from the equations is that each gradient step contains an explicit information flow from the neighbourhood of each node, which is then aggregated with a simple summation. Through this direct information path, $t$ steps of gradient descent cover the $t$ -hop neighbourhood of $v$ . As $t$ goes towards infinity, or in practice as training converges, FMs capture the global graph structure. The update function (8) somewhat deviates from classic message-passing as $n\left\lbrack v\right\rbrack$ of Equation (9) involves global information. However, we note that we can interpret this mechanism under the framework of augmented message passing [19] and, in particular, as an instance of graph rewiring.
102
+
103
+ Based on Theorem 3.1 and Equation (3), we can now view $\phi$ as the transient node states $h$ (cf. Section 2) and GD on node embeddings as a message-passing layer. This dualism sits at the core of the ReFactor GNN model, which we describe next.
104
+
105
+ ## 4 REFACTOR GNNs
106
+
107
+ FMs are trained by minimising the objective (1), initialising both sets of parameters $\left( {\phi \text{and}\psi }\right)$ and performing GD until approximate convergence (or until early stopping terminates the training). The implications are twofold: $i$ ) the initial value of the entity lookup table $\phi$ does not play any major role in the final model after convergence; and ii) if we introduce a new set of entities, the conventional wisdom is to retrain ${}^{1}$ the model on the expanded knowledge graph. This is computationally rather expensive compared to the "inductive" models that require no additional training and can leverage node features like entity descriptions. However, as we have just seen in Theorem 3.1, the training procedure of FMs may be naturally recast as a message-passing operation, which suggests that it is possible to use FMs for inductive learning tasks. In fact, we envision that there is an entire novel spectrum of model architectures interpolating between pure FMs and (various instantiations of) GNNs. Here we propose one simple implementation of such an architecture which we dub REFACTOR GNNS. Figure 1 gives an overview of REFACTOR GNNs.
108
+
109
+ The ReFactor Layer. A REFACTOR GNN contains $L$ REFACTOR layers, which we derive from Theorem 3.1. Aligning with the GNN notations introduced in Section 2, given a KG $\mathcal{T}$ and entity representations ${h}^{l - 1} \in {\mathbb{R}}^{\left| \mathcal{E}\right| \times K}$ , the REFACTOR layer computes the representation of a node $v$ as follows:
110
+
111
+ $$
112
+ {h}^{l}\left\lbrack v\right\rbrack = {q}^{l}\left( {\mathcal{T},{h}^{l - 1}}\right) \left\lbrack v\right\rbrack = {h}^{l - 1}\left\lbrack v\right\rbrack - \beta {n}^{l}\left\lbrack v\right\rbrack + \alpha \mathop{\sum }\limits_{{\left( {r, w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }}{q}_{M}^{l}\left( {{h}^{l - 1}\left\lbrack v\right\rbrack , r,{h}^{l - 1}\left\lbrack w\right\rbrack }\right) , \tag{10}
113
+ $$
114
+
115
+ where the terms ${n}^{l}$ and ${q}_{\mathrm{M}}^{l}$ derive from Equation (9) and Equation (6), respectively. Differing from R-GCN, the first GNN on multi-relational graphs, where the incoming and outgoing neighbourhoods are treated equally [16], REFACTOR GNNs treat incoming and outgoing neighbourhoods differently. As we will show in the experiments, this allows REFACTOR GNNs to achieve good performance also on datasets containing non-symmetric relationships. In fact, the REFACTOR layer is built upon DistMult, which, despite being a symmetric operator, induces asymmetry into the final representation.
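The sketch below illustrates only the first message type of Equation (6), aggregated as in Equation (10), on dense tensors and without the global term ${n}^{l}\left\lbrack v\right\rbrack$ or mini-batching; it is our simplification, meant purely to show the information flow of a single REFACTOR layer.

```python
import torch

def refactor_layer_simplified(h, triples, psi, alpha=0.1):
    """Send the message phi[w] * g(r) along each triple (v, r, w) into node v and sum.

    h: (N, K) node states, triples: list of (v, r, w), psi: (R, K) relation embeddings.
    The full layer additionally uses the second message type of Eq. (6) and the
    global correction term n[v]; both are omitted here for brevity.
    """
    z = torch.zeros_like(h)
    for v, r, w in triples:
        z[v] += h[w] * psi[r]           # first case of Eq. (6)
    return h + alpha * z                # update of Eq. (8) without the -beta * n[v] term

h = torch.randn(4, 8)                                    # 4 entities, K = 8
psi = torch.randn(2, 8)                                  # 2 relation types
h_next = refactor_layer_simplified(h, [(0, 0, 1), (1, 1, 2), (2, 0, 3)], psi)
```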
116
+
117
+ Equation (10) describes the full-batch setting, which can be expensive if the KG contains many edges. Therefore, in practice, whenever the graph is big, we adopt a stochastic evaluation of the REFACTOR layer by decomposing the evaluation into several mini-batches. We partition $\mathcal{T}$ into a set of computationally tractable mini-batches. For each of them, we restrict the neighbourhoods to the subgraph induced by it and readjust the computation of ${n}^{l}\left\lbrack v\right\rbrack$ to include only entities and edges present in it. We leave the investigation of other stochastic strategies (e.g. by taking Monte Carlo estimations of the expectations in Equation (9)) to future work. Finally, we cascade the mini-batch evaluations to produce one full layer evaluation.
118
+
119
+ Training. The learnable parameters of REFACTOR GNNS are the relation embeddings $\psi$ . Inspired by [20], we learn $\psi$ by layer-wise (stochastic) gradient descent. This is in contrast to conventional GNN training, where we need to backpropagate through all the layers. A (full-batch) GD training dynamic for $\psi$ can be written as ${\psi }_{t + 1} = {\psi }_{t} - \eta \nabla {\mathcal{L}}_{t}\left( {\psi }_{t}\right)$ , where ${\mathcal{L}}_{t}\left( {\psi }_{t}\right) = - {\left| \mathcal{T}\right| }^{-1}\mathop{\sum }\limits_{\mathcal{T}}\log {P}_{{\psi }_{t}}\left( {w \mid v, r}\right)$ , with:
120
+
121
+ $$
122
+ {P}_{{\psi }_{t}}\left( {w \mid v, r}\right) = \operatorname{Softmax}\left( {\Gamma \left( {v, r, \cdot }\right) }\right) \left\lbrack w\right\rbrack ,\;\Gamma \left( {v, r, w}\right) = \left\langle {{h}^{t}\left\lbrack v\right\rbrack ,{h}^{t}\left\lbrack w\right\rbrack ,{g}_{{\psi }_{t}}\left( r\right) }\right\rangle
123
+ $$
124
+
125
+ and the node state update as
126
+
127
+ $$
128
+ {h}^{t} = \left\{ \begin{matrix} X & \text{ if }t{\;\operatorname{mod}\;L} = 0 \\ {q}^{t{\;\operatorname{mod}\;L}}\left( {\mathcal{T},{h}^{t - 1}}\right) & \text{ otherwise } \end{matrix}\right. \tag{11}
129
+ $$
130
+
131
+ Implementation-wise, such a training dynamic is equivalent to using an external memory for storing the historical node states ${h}^{t - 1}$ , akin to the procedure introduced in [21]. The memory can then be queried to compute ${h}^{t}$ using Equation (10). Under this perspective, we periodically clear the node state cache every $L$ full batches to force the model to predict based on on-the-fly $L$ -layer message-passing. After training, we obtain ${\psi }^{ * }$ and perform inference by running $L$ -layer message-passing with ${\psi }^{ * }$ .
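The periodic cache reset of Equation (11) can be sketched as follows; `layer` is a trivial stand-in for one REFACTOR layer evaluation and the squared-norm loss is a stand-in for the KGC objective, so the snippet only demonstrates the caching and the layer-wise updates of $\psi$ .

```python
import torch

L = 3                                         # number of REFACTOR layers
X = torch.randn(4, 8)                         # input node features
psi = torch.randn(2, 8, requires_grad=True)   # relation embeddings (the only learned parameters)
opt = torch.optim.SGD([psi], lr=1e-2)

def layer(h, psi):                            # stand-in for one REFACTOR layer evaluation
    return h + 0.1 * (h * psi.mean(dim=0))

h = X.clone()
for t in range(12):                           # full-batch steps
    if t % L == 0:
        h = X.clone()                         # clear the node-state cache every L steps, Eq. (11)
    h = layer(h.detach(), psi)                # one more layer of message passing on cached states
    loss = h.pow(2).mean()                    # stand-in for the loss on P_psi(w | v, r)
    opt.zero_grad(); loss.backward(); opt.step()
```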
132
+
133
+ Due to page limits, we defer the empirical study of the proposed REFACTOR GNNs to the appendix. In general, we observe that REFACTOR GNNs achieve state-of-the-art inductive performance.
134
+
135
+ ---
136
+
137
+ ${}^{1}$ Typically until convergence, possibly by partially warm-starting $\theta$ .
138
+
139
+ ---
140
+
141
+ References
142
+
143
+ [1] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 1, 11
144
+
145
+ [2] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pages 1263-1272. PMLR, 2017. 1, 2, 11
146
+
147
+ [3] Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. Holographic embeddings of knowledge graphs. In AAAI, pages 1955-1961. AAAI Press, 2016. 1, 2, 10
148
+
149
+ [4] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In ICLR (Poster), 2015. 1, 2
150
+
151
+ [5] Théo Trouillon, Johannes Welbl, Sebastian Riedel, Eric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In Maria Florina Balcan and Kilian Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 2071-2080, New York, New York, USA, 20-22 Jun 2016. PMLR. URL https://proceedings.mlr.press/v48/trouillon16.html. 1, 2, 10
152
+
153
+ [6] Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. Learning attention-based embeddings for relation prediction in knowledge graphs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4710-4723, 2019. 1
154
+
155
+ [7] Zhiqing Sun, Shikhar Vashishth, Soumya Sanyal, Partha Talukdar, and Yiming Yang. A re-evaluation of knowledge graph completion methods. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5516-5522, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.489. URL https://aclanthology.org/2020.acl-main.489. 1
156
+
157
+ [8] Xiaoran Xu, Wei Feng, Yunsheng Jiang, Xiaohui Xie, Zhiqing Sun, and Zhi-Hong Deng. Dynamically pruned message passing networks for large-scale knowledge graph reasoning. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkeuAhVKvB. 1
158
+
159
+ [9] Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal A. C. Xhonneux, and Jian Tang. Neural bellman-ford networks: A general graph neural network framework for link prediction. CoRR, abs/2106.06935, 2021. URL https://arxiv.org/abs/2106.06935. 1, 2, 8, 9, 10, 13
160
+
161
+ [10] Daniel Ruffinelli, Samuel Broscheit, and Rainer Gemulla. You can teach an old dog new tricks! on training knowledge graph embeddings. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=BkxSmlBFvr. 1, 10
162
+
163
+ [11] Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. Canonical tensor decomposition for knowledge base completion. In ICML, volume 80 of Proceedings of Machine Learning Research, pages 2869-2878. PMLR, 2018. 1, 10, 13, 16
164
+
165
+ [12] Yihong Chen, Pasquale Minervini, Sebastian Riedel, and Pontus Stenetorp. Relation prediction as an auxiliary training objective for improving multi-relational graph representations. In 3rd Conference on Automated Knowledge Base Construction, 2021. URL https://openreview.net/forum?id=Qa3uS3H7-Le. 1, 10
166
+
167
+ [13] Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Proc. IEEE, 104(1):11-33, 2016. 2
168
+
169
+ [14] Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., 2:729-734 vol. 2, 2005. 2
170
+
171
+ [15] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Trans. Neural Networks, 20(1):61-80, 2009. 2
172
+
173
+ [16] Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European semantic web conference, pages 593-607. Springer, 2018. 2, 4, 10
174
+
175
+ [17] Komal K. Teru, Etienne G. Denis, and William L. Hamilton. Inductive relation prediction by subgraph reasoning. In ICML, volume 119 of Proceedings of Machine Learning Research, pages 9448-9457. PMLR, 2020. 2, 7, 8, 13
176
+
177
+ [18] William L. Hamilton. Graph representation learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 14(3):1-159. 2
178
+
179
+ [19] Petar Veličković. Message passing all the way up, 2022. URL https://arxiv.org/abs/2202.11097. 3, 9, 11
180
+
181
+ [20] Yuning You, Tianlong Chen, Zhangyang Wang, and Yang Shen. L2-gcn: Layer-wise and learned efficient training of graph convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2127-2135, 2020. 4
182
+
183
+ [21] Matthias Fey, Jan Eric Lenssen, Frank Weichert, and Jure Leskovec. Gnnautoscale: Scalable and expressive graph neural networks via historical embeddings. In ICML, 2021. 4
184
+
185
+ [22] Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. In AAAI, pages 381-388. AAAI Press, 2006. 7
186
+
187
+ [23] Tara Safavi and Danai Koutra. Codex: A comprehensive knowledge graph completion benchmark. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8328-8350, 2020. 7
188
+
189
+ [24] Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd workshop on continuous vector space models and their compositionality, pages 57-66, 2015. 7
190
+
191
+ [25] William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 1025-1035, 2017. 8
192
+
193
+ [26] Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, and Quanquan Gu. Few-shot representation learning for out-of-vocabulary words. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems, NeurIPS, 2019. 8, 11, 16
194
+
195
+ [27] Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor Prasanna. GraphSAINT: Graph sampling based inductive learning method. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=BJe8pkHFwS. 8, 11
196
+
197
+ [28] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.8, 11
198
+
199
+ [29] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 11 2019. URL https://arxiv.org/abs/1908.10084. 8, 15
200
+
201
+ [30] Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In ICML, pages 809-816. Omnipress, 2011. 10
202
+
203
+ [31] Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2d knowledge graph embeddings. In AAAI, pages 1811-1818. AAAI Press, 2018.
204
+
205
+ [32] Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Q. Phung. A novel embedding model for knowledge base completion based on convolutional neural network. In NAACL-HLT (2), pages 327-333. Association for Computational Linguistics, 2018. 10
206
+
207
+ [33] Xiaoran Xu, Wei Feng, Yunsheng Jiang, Xiaohui Xie, Zhiqing Sun, and Zhi-Hong Deng. Dynamically pruned message passing networks for large-scale knowledge graph reasoning. In ICLR. OpenReview.net, 2020. 10
208
+
209
+ [34] Zhao Zhang, Fuzhen Zhuang, Hengshu Zhu, Zhi-Ping Shi, Hui Xiong, and Qing He. Relational graph neural network with hierarchical attention for knowledge graph completion. In AAAI, pages 9612-9619. AAAI Press, 2020.
210
+
211
+ [35] Ren Li, Yanan Cao, Qiannan Zhu, Guanqun Bi, Fang Fang, Yi Liu, and Qian Li. How does knowledge graph embedding extrapolate to unseen data: a semantic evidence view. CoRR, abs/2109.11800, 2021. 10
212
+
213
+ [36] Prachi Jain, Sushant Rathi, Mausam, and Soumen Chakrabarti. Knowledge base completion: Baseline strikes back (again). CoRR, abs/2005.00804, 2020. URL https://arxiv.org/abs/2005.00804. 10
214
+
215
+ [37] Balasubramaniam Srinivasan and Bruno Ribeiro. On the equivalence between positional node embeddings and structural graph representations. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SJxzFySKwH. 10
216
+
217
+ [38] Yifei Shen, Yongji Wu, Yao Zhang, Caihua Shan, Jun Zhang, Khaled B Letaief, and Dongsheng Li. How powerful is graph convolution for recommendation? arXiv preprint arXiv:2108.07567, 2021.10
218
+
219
+ [39] David JC MacKay. Information theory, inference and learning algorithms. Cambridge university press, 2003. 11
220
+
221
+ [40] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. Gated graph sequence neural networks. In ICLR (Poster), 2016. 11
222
+
223
+ [41] Léon Bottou. Stochastic Gradient Descent Tricks, pages 421-436. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012. ISBN 978-3-642-35289-8. doi: 10.1007/978-3-642-35289-8_25. URL https://doi.org/10.1007/978-3-642-35289-8_25. 11
224
+
225
+ [42] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(61):2121-2159, 2011. URL http://jmlr.org/papers/v12/duchi11a.html. 13
226
+
227
+ ## A Architecture
228
+
229
+ ![01963eef-1097-7b31-b870-0251d4b72293_6_357_1016_1082_318_0.jpg](images/01963eef-1097-7b31-b870-0251d4b72293_6_357_1016_1082_318_0.jpg)
230
+
231
+ Figure 1: ReFactor GNN architecture - the left figure describes the messages (coloured edges) used to update the representation of node ${v}_{1}$ , which depend on the type of relationship between the sender nodes and ${v}_{1}$ in the graph $G = \left\{ {\left( {{v}_{2},{r}_{1},{v}_{1}}\right) ,\left( {{v}_{3},{r}_{2},{v}_{1}}\right) ,\left( {{v}_{1},{r}_{3},{v}_{4}}\right) }\right\}$ ; the right figure describes the computation graph for calculating $P\left( {v \mid w, r}\right)$ , where $v, w \in \mathcal{E}$ and $r \in \mathcal{R}$ : the embedding representations of $w, r$ , and $v$ are used to score the edge $\left( {w, r, v}\right)$ via the scoring function $\Gamma$ , which is then normalised via the Softmax function.
232
+
233
+ Figure 1 shows the architecture of REFACTOR GNNs.
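+
+ As a small illustration of the decoder in Figure 1 (a sketch with hypothetical variable names, not the paper's code), DistMult's tri-linear score $\Gamma(v, r, w) = \langle h[v], h[w], g(r)\rangle$ can be computed for all candidate objects at once and normalised with a softmax:
+
+ ```python
+ import torch
+
+ def distmult_log_prob(h, g, v, r):
+     """h: [num_entities, d] node states; g: [num_relations, d] relation embeddings."""
+     scores = (h[v] * g[r]) @ h.t()            # Gamma(v, r, w) for every candidate object w
+     return torch.log_softmax(scores, dim=-1)  # log P(. | v, r), as in the right panel of Figure 1
+ ```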
234
+
235
+ ## B Experiments
236
+
237
+ We perform experiments to answer the following questions regarding REFACTOR GNNS:
238
+
239
+ - Q1. REFACTOR GNNs are derived from a message-passing reformulation of FMs: do they also inherit their predictive accuracy in transductive KGC tasks? Appendix B.1
240
+
241
+ - Q2. Are REFACTOR GNNs statistically more accurate than other GNN baselines in inductive KGC tasks? Appendix B.2
242
+
243
+ - Q3. Can we simplify REFACTOR GNNs by removing the term $n\left\lbrack v\right\rbrack$ , which involves nodes not in the 1-hop neighbourhood? Appendix B.3
244
+
245
+ For transductive experiments, we used three well-established KGC datasets: UMLS [22], CoDEx-S [23], and FB15K237 [24]. For inductive experiments, we used the inductive KGC benchmarks introduced by GraIL [17], which include 12 datasets, or rather 12 pairs of knowledge graphs:
246
+
247
+ <table><tr><td>Entity Encoder</td><td>UMLS</td><td>CoDEx-S</td><td>FB15K237</td></tr><tr><td>Lookup (FM, specif. DistMult)</td><td>0.90</td><td>0.43</td><td>0.30</td></tr><tr><td>REFACTOR GNNS $\left( {L = \infty }\right)$</td><td>0.93</td><td>0.44</td><td>0.33</td></tr></table>
248
+
249
+ Table 1: Test MRR for transductive KGC tasks.
250
+
251
+ (FB15K237_vi, FB15K237_vi_ind), (WN18RR_vi, WN18RR_vi_ind), and (NELL_vi, NELL_vi_ind), where $i \in \left\lbrack {1,2,3,4}\right\rbrack$ and (_vi, _vi_ind) denotes a pair of graphs with a shared relation vocabulary and non-overlapping entities. We follow the standard KGC evaluation protocol: we fully rank all candidate entities and compute two metrics from the ranks of the ground-truth entities, Mean Reciprocal Rank (MRR) and Hit Ratio at Top $K$ (Hits@$K$) with $K \in \left\lbrack {1,3,{10}}\right\rbrack$. For inductive KGC, we additionally consider the partial-ranking evaluation protocol used by GraIL for a fair comparison. Empirically, we find full ranking more difficult than partial ranking, and thus more suitable for reflecting the differences among models on the GraIL datasets - we would like to call for future work on these datasets to also adopt the full-ranking protocol.
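+
+ For reference, a minimal sketch of the full-ranking metrics (a hypothetical helper, not the paper's evaluation code; the filtering of other known true triplets is omitted for brevity):
+
+ ```python
+ import torch
+
+ def full_ranking_metrics(scores, true_idx, ks=(1, 3, 10)):
+     """scores: [num_queries, num_entities]; true_idx: [num_queries] ground-truth entity ids."""
+     gold = scores.gather(1, true_idx.unsqueeze(1))             # score of the ground-truth entity
+     ranks = (scores > gold).sum(dim=1) + 1                     # 1-based rank against all candidates
+     mrr = (1.0 / ranks.float()).mean().item()
+     hits = {k: (ranks <= k).float().mean().item() for k in ks}
+     return mrr, hits
+ ```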
252
+
253
+ We grid-searched over the hyper-parameters and selected the best configuration based on validation MRR. Since training deep GNNs with full-graph message passing can be slow for large knowledge graphs, we follow the literature [25-27] and sample sub-graphs for training the GNNs. Since on-the-fly sampling often prevents high GPU utilisation, we resort to a two-stage process: we first sample and serialise sub-graphs around the target edges in the mini-batches; we then train the GNNs on the serialised sub-graphs. To ensure we have sufficient sub-graphs for training the models, we sample for 20 epochs for each knowledge graph, i.e. 20 full passes over the full graph. The sub-graph sampler we currently use is LADIES [26].
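+
+ The two-stage process can be sketched as follows (function and class names are hypothetical; the sampler actually used in the paper is LADIES [26]):
+
+ ```python
+ import torch
+
+ def serialise_subgraphs(graph, sampler, batches, num_epochs=20, path="subgraphs.pt"):
+     """Stage one: sample sub-graphs around the target edges of each mini-batch and store them."""
+     cached = []
+     for _ in range(num_epochs):                      # roughly 20 full passes over the graph
+         for seed_triplets in batches:
+             cached.append(sampler.sample(graph, seeds=seed_triplets))
+     torch.save(cached, path)
+
+ def train_on_serialised(model, optimiser, path="subgraphs.pt", num_epochs=10):
+     """Stage two: train the GNN on the pre-computed sub-graphs, keeping the GPU busy."""
+     cached = torch.load(path)
+     for _ in range(num_epochs):
+         for subgraph in cached:
+             loss = model.training_loss(subgraph)
+             optimiser.zero_grad(); loss.backward(); optimiser.step()
+ ```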
254
+
255
+ ### B.1 REFACTOR GNNS for Transductive Learning (Q1)
256
+
257
+ REFACTOR GNNs are derived from the message-passing reformulation of FMs. We expect them to have roughly the same performance as FMs on transductive KGC tasks. To verify this, we run experiments on the datasets UMLS, CoDEx-S, and FB15K237. For a fair comparison, we use DistMult as the decoder and consider i) a lookup embedding table as the entity encoder, which forms the FM when combined with the decoder (Section 2), and ii) REFACTOR GNNs as the entity encoder. REFACTOR GNNs are trained with $L = \infty$, i.e. we never clear the node state cache. Since transductive KGC tasks do not involve new entities, the node state cache in REFACTOR GNNs can be directly used for link prediction. Table 1 summarises the results. We observe that REFACTOR GNNs achieve similar or slightly better performance compared to the FM. This shows that REFACTOR GNNs capture the essence of FMs and thus remain strong at transductive KGC.
258
+
259
+ ### B.2 REFACTOR GNNS for Inductive Learning (Q2)
260
+
261
+ Despite FMs' good empirical performance on transductive KGC tasks, they are not inductive in the way GNNs are. According to our reformulation, this is due to the infinite number of message-passing layers hidden in FMs' optimisation. By discarding the infinite message passing, REFACTOR GNNs enable FMs to perform inductive reasoning: like GNNs, they learn to use a finite number of message-passing layers for prediction.
262
+
263
+ Here we present experiments to verify REFACTOR GNNs' capability for inductive reasoning. Specifically, we study the task of inductive KGC and investigate whether REFACTOR GNNs can generalise to unseen entities. Following [17], on the GraIL datasets we trained models on the original graph and ran zero-shot link prediction on the _ind test graph. As in the transductive experiments, we use DistMult as the decoder and vary the entity encoder. We denote three-layer REFACTOR GNNs as REFACTOR GNNs (3) and six-layer REFACTOR GNNs as REFACTOR GNNs (6). We consider several baseline entity encoders: i) no-pretrain, models without any pretraining on the original graph; ii) GAT (3), a three-layer graph attention network [28]; iii) GAT (6), a six-layer graph attention network; iv) GraIL, a sub-graph-based relational GNN [17]; v) NBFNet, a path-based GNN [9] and the current SoTA on the GraIL datasets. In addition to randomly initialised vectors as node features, we also use RoBERTa encodings of the entity descriptions, produced by SentenceBERT [29], as node features. Due to space constraints, we present the results on (FB15K237_v1, FB15K237_v1_ind) in Figure 2; results on the other datasets are similar and can be found in Appendix F. We can see that without RoBERTa embeddings as node features, REFACTOR GNNs perform better than GraIL (+23%); with RoBERTa embeddings as node features, REFACTOR GNNs outperform both GraIL (+43%) and NBFNet (+10%), achieving new SoTA results on inductive KGC tasks.
264
+
265
+ ![01963eef-1097-7b31-b870-0251d4b72293_8_340_231_1181_698_0.jpg](images/01963eef-1097-7b31-b870-0251d4b72293_8_340_231_1181_698_0.jpg)
266
+
267
+ Figure 2: Inductive KGC Performance. Models are trained on the KG FB15K237_v1 and tested on another KG FB15K237_v1_ind, where the entities are completely new. The results of GraIL and NBFNet are taken from [9]. It is unclear how to incorporate node features in GraIL and NBFNet.
268
+
269
+ Performance vs Parameter Efficiency as #Message-Passing Layers Increases. Usually, as the number of message-passing layers in GNNs increases, over-smoothing occurs while the computational cost also increases exponentially. REFACTOR GNNs avoid this through layer-wise training and by sharing the weights across layers. Here we compare REFACTOR GNNs with $\{ 1,3,6,9\}$ message-passing layer(s) against same-depth GATs - the results are summarised in Figure 3. We observe that increasing the number of message-passing layers in GATs does not necessarily improve predictive accuracy - the best results were obtained with 3 message-passing layers on FB15K237_v1, while using 6 and 9 layers leads to performance degradation. On the other hand, REFACTOR GNNs obtain consistent improvements when increasing #Layers from 1 to 3, 6, and 9. REFACTOR GNNs (6) and (9) clearly outperform their GAT counterparts. Most importantly, REFACTOR GNNs are more parameter-efficient than GATs, with a constant #Parameters as #Layers increases.
270
+
271
+ ### B.3 Beyond Message-Passing (Q3)
272
+
273
+ As shown by Theorem 3.1, REFACTOR GNNs contain not only terms capturing information flow from the 1-hop neighbourhood, which falls into the classic message-passing framework, but also a term $n\left\lbrack v\right\rbrack$ that involves nodes outside the 1-hop neighbourhood. The term $n\left\lbrack v\right\rbrack$ can be treated as augmented message-passing on a dynamically rewired graph [19]. Here we perform ablation experiments to measure the impact of the $n\left\lbrack v\right\rbrack$ term. Table 2 summarises the ablation results: we can see that, without the term $n\left\lbrack v\right\rbrack$ , REFACTOR GNNs with random vectors as node features yield a 2% lower MRR, while REFACTOR GNNs with RoBERTa encodings as node features produce a 7% lower MRR. This suggests that augmented message-passing also plays a significant role in REFACTOR GNNs' generalisation properties in downstream link prediction tasks. Future work might gain more insights by further dissecting the $n\left\lbrack v\right\rbrack$ term.
274
+
275
+ ![01963eef-1097-7b31-b870-0251d4b72293_9_339_232_1196_725_0.jpg](images/01963eef-1097-7b31-b870-0251d4b72293_9_339_232_1196_725_0.jpg)
276
+
277
+ Figure 3: Performance vs Parameter Efficiency as #Layers Increases on FB15K237_v1. The left axis is Test MRR while the right axis is #Parameters. The solid lines and dashed lines indicate the changes of Test MRR and the changes of #Parameters.
278
+
279
+ <table><tr><td>Test MRR</td><td>Frozen Random Representations</td><td>RoBERTa Encodings</td></tr><tr><td>with $n\left\lbrack v\right\rbrack$</td><td>0.425</td><td>0.486</td></tr><tr><td>without $n\left\lbrack v\right\rbrack$</td><td>0.418</td><td>0.452</td></tr></table>
280
+
281
+ Table 2: Ablation on $n\left\lbrack v\right\rbrack$ for REFACTOR GNNs (6) trained on FB15K237_v1.
282
+
283
+ ## C Related Work
284
+
285
+ Multi-Relational Graph Representation Learning. Previous work on multi-relational graph representation learning focused either on FMs [3-5, 11, 12, 30-32] or on GNN-based models [16, 33-35]. Recently, FMs were found to be significantly more accurate than GNNs in KGC tasks when coupled with specific training strategies [10, 11, 36]. While more advanced GNNs [9] for KGC are showing promise at the cost of extra algorithmic complexity, little effort has been devoted to establishing the links between plain GNNs and FMs, which remain strong multi-relational link predictors despite their simplicity. Our work aims to align GNNs with FMs so that we can combine the strengths of both families of models.
286
+
287
+ Relationships between FMs and GNNs. We would like to clarify our scope by highlighting that our "FM" refers to factorisation-based models used for KGC, different from matrix factorisation, where there are no relational parameters. Similarly, our "GNN" refers to GNNs developed for KGC, which incorporate (positional) node features as elaborated in Section 2. We recognise that a very recent work [37] builds a theoretical link between structural GNNs and node (positional) embeddings, where the second model category encompasses not only FMs but also many practical GNNs. Both our FMs and GNNs fall into this second model category. Therefore, we consider our work as building a more fine-grained connection between positional node embeddings produced by FMs and positional node embeddings produced by GNNs, while focusing on KGC. Beyond FMs in KGC, using graph signal processing theory, [38] show that matrix factorisation (MF) based recommender models correspond to ideal low-pass graph convolutional filters. Coincidentally, they also find infinite neighbourhood coverage in MF, although using a completely different approach and focusing on a different domain than ours.
288
+
289
+ Message Passing in GNNs. Message passing allows one to recursively decompose a global function into simple, local, parallelisable computations [39]. Recently, [2] provided a unified message-passing reformulation for various GNN architectures, including Graph Attention Networks [28], Gated Graph Neural Networks [40], and Graph Convolutional Networks [1]. In this work, we show that FMs can also be cast as a special type of GNN by considering SGD updates [41] over node embeddings as message-passing operations between nodes; to the best of our knowledge, our work is the first to establish such a connection between FMs and GNNs. A characteristic feature lies in which nodes are considered during message passing: our REFACTOR GNNs can be seen as performing an augmented message-passing process on a dynamically re-wired graph [19].
290
+
291
+ ## D Conclusion & Future Work
292
+
293
+ Our work establishes a link between FMs and GNNs on the task of multi-relational link prediction. The reformulation of FMs as GNNs addresses the question of why FMs are stronger multi-relational link predictors than plain GNNs. Guided by this reformulation, we further propose a new variant of GNNs, REFACTOR GNNs, which combines the strengths of both FMs and classic GNNs. Empirical experiments show that REFACTOR GNNs produce significantly more accurate results than our GNN baselines on link prediction tasks.
294
+
295
+ Since we adopt a two-stage (sub-graph serialisation and model training) approach instead of online sampling, there can be side effects from the low sub-graph diversity. In our experiments, we only used LADIES [26] for sub-graph sampling. We plan to experiment with different sub-graph sampling algorithms, such as GraphSAINT [27], and see how this affects the downstream link prediction results. Furthermore, it would be interesting to analyse decoders other than DistMult, as well as additional optimisation schemes beyond SGD and AdaGrad.
296
+
297
+ ## E Theorem 1 Proof
298
+
299
+ In this section, we prove Theorem 1, which we restate here for convenience.
300
+
301
+ Theorem E.1 (Message passing in FMs). The gradient descent operator GD (5) on the node embeddings of a DistMult model (Equation (2)) with the maximum likelihood objective in Equation (1) and a multi-relational graph $\mathcal{T}$ defined over entities $\mathcal{E}$ induces a message-passing operator whose composing functions are:
302
+
303
+ $$
304
+ {q}_{\mathrm{M}}\left( {\phi \left\lbrack v\right\rbrack , r,\phi \left\lbrack w\right\rbrack }\right) = \left\{ \begin{array}{ll} \phi \left\lbrack w\right\rbrack \odot g\left( r\right) & \text{ if }\left( {r, w}\right) \in {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack , \\ \left( {1 - {P}_{\theta }\left( {v \mid w, r}\right) }\right) \phi \left\lbrack w\right\rbrack \odot g\left( r\right) & \text{ if }\left( {r, w}\right) \in {\mathcal{N}}_{ - }^{1}\left\lbrack v\right\rbrack ; \end{array}\right. \tag{12}
305
+ $$
306
+
307
+ $$
308
+ {q}_{\mathrm{A}}\left( \left\{ {m\left\lbrack {v, r, w}\right\rbrack : \left( {r, w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }\right\} \right) = \mathop{\sum }\limits_{{\left( {r, w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }}m\left\lbrack {v, r, w}\right\rbrack ; \tag{13}
309
+ $$
310
+
311
+ $$
312
+ {q}_{\mathrm{U}}\left( {\phi \left\lbrack v\right\rbrack , z\left\lbrack v\right\rbrack }\right) = \phi \left\lbrack v\right\rbrack + {\alpha z}\left\lbrack v\right\rbrack - {\beta n}\left\lbrack v\right\rbrack , \tag{14}
313
+ $$
314
+
315
+ where, defining the set of triplets ${\mathcal{T}}^{-v} = \{ \left( {s, r, o}\right) \in \mathcal{T} : s \neq v \land o \neq v\}$ ,
316
+
317
+ $$
318
+ n\left\lbrack v\right\rbrack = \frac{\left| {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack \right| }{\left| \mathcal{T}\right| }{\mathbb{E}}_{{P}_{{\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack }}{\mathbb{E}}_{u \sim {P}_{\theta }\left( {\cdot \mid v, r}\right) }g\left( r\right) \odot \phi \left\lbrack u\right\rbrack + \frac{\left| {\mathcal{T}}^{-v}\right| }{\left| \mathcal{T}\right| }{\mathbb{E}}_{{P}_{{\mathcal{T}}^{-v}}}{P}_{\theta }\left( {v \mid s, r}\right) g\left( r\right) \odot \phi \left\lbrack s\right\rbrack , \tag{15}
319
+ $$
320
+
321
+ where ${P}_{{\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack }$ and ${P}_{{\mathcal{T}}^{-v}}$ are the empirical probability distributions associated with the respective sets.
322
+
323
+ Proof. Remember that we assume there are no triplets where the source and the target node are the same (i.e. $\left( {v, r, v}\right)$ with $v \in \mathcal{E}$ and $r \in \mathcal{R}$ ), and let $v \in \mathcal{E}$ be a node. First, let us consider the gradient descent operator GD over $v$ 's node embedding $\phi \left\lbrack v\right\rbrack$ :
324
+
325
+ $$
326
+ \mathrm{{GD}}\left( {\phi ,\mathcal{T}}\right) \left\lbrack v\right\rbrack = \phi \left\lbrack v\right\rbrack + \alpha \mathop{\sum }\limits_{{\left( {\overline{\mathbf{v}},\overline{\mathbf{r}},\overline{\mathbf{w}}}\right) \in \mathcal{T}}}\frac{\partial \log P\left( {\overline{\mathbf{w}} \mid \overline{\mathbf{v}},\overline{\mathbf{r}}}\right) }{\partial \phi \left\lbrack v\right\rbrack }.
327
+ $$
328
+
329
+ The gradient is a sum over components associated with the triplets $\left( {\bar{\mathrm{v}},\bar{\mathrm{r}},\bar{\mathrm{w}}}\right) \in \mathcal{T}$ ; based on whether the corresponding triplet involves $v$ in the subject or object position, or does not involve $v$ at all, these components can be grouped into three categories:
330
+
331
+ 1. Components corresponding to the triplets where $\overline{\mathrm{v}} = v \land \overline{\mathrm{w}} \neq v$ . The sum over these components is given by:
334
+
335
+ $$
336
+ \mathop{\sum }\limits_{{\left( {v,\overline{\mathrm{r}},\overline{\mathrm{w}}}\right) \in \mathcal{T}}}\frac{\partial \log P\left( {\overline{\mathrm{w}} \mid v,\overline{\mathrm{r}}}\right) }{\partial \phi \left\lbrack v\right\rbrack } = \mathop{\sum }\limits_{{\left( {v,\overline{\mathrm{r}},\overline{\mathrm{w}}}\right) \in \mathcal{T}}}\left\lbrack {\frac{\partial \Gamma \left( {v,\overline{\mathrm{r}},\overline{\mathrm{w}}}\right) }{\partial \phi \left\lbrack v\right\rbrack } - \mathop{\sum }\limits_{u}P\left( {u \mid v,\overline{\mathrm{r}}}\right) \frac{\partial \Gamma \left( {v,\overline{\mathrm{r}}, u}\right) }{\partial \phi \left\lbrack v\right\rbrack }}\right\rbrack
337
+ $$
338
+
339
+ $$
340
+ = \mathop{\sum }\limits_{{\left( {\overline{\mathrm{r}},\overline{\mathrm{w}}}\right) \in {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack }}\phi \left\lbrack \overline{\mathrm{w}}\right\rbrack \odot g\left( \overline{\mathrm{r}}\right) - \mathop{\sum }\limits_{{\left( {v,\overline{\mathrm{r}},\overline{\mathrm{w}}}\right) \in \mathcal{T}}}\mathop{\sum }\limits_{u}P\left( {u \mid v,\overline{\mathrm{r}}}\right) g\left( \overline{\mathrm{r}}\right) \odot \phi \left\lbrack u\right\rbrack .
341
+ $$
342
+
343
+ 2. Components corresponding to the triplets where $\overline{\mathrm{v}} \neq v \land \overline{\mathrm{w}} = v$ . The sum over these components is given by:
344
+
345
+ $$
346
+ \mathop{\sum }\limits_{{\left( {\overline{\mathbf{v}},\overline{\mathbf{r}}, v}\right) \in \mathcal{T}}}\frac{\partial \log P\left( {v \mid \overline{\mathbf{v}},\overline{\mathbf{r}}}\right) }{\partial \phi \left\lbrack v\right\rbrack } = \mathop{\sum }\limits_{{\left( {\overline{\mathbf{v}},\overline{\mathbf{r}}, v}\right) \in \mathcal{T}}}\left\lbrack {\frac{\partial \Gamma \left( {\overline{\mathbf{v}},\overline{\mathbf{r}}, v}\right) }{\partial \phi \left\lbrack v\right\rbrack } - \mathop{\sum }\limits_{u}P\left( {u \mid \overline{\mathbf{v}},\overline{\mathbf{r}}}\right) \frac{\partial \Gamma \left( {\overline{\mathbf{v}},\overline{\mathbf{r}}, u}\right) }{\partial \phi \left\lbrack v\right\rbrack }}\right\rbrack
347
+ $$
348
+
349
+ $$
350
+ = \mathop{\sum }\limits_{{\left( {\overline{\mathrm{v}},\overline{\mathrm{r}}}\right) \in {\mathcal{N}}_{ - }^{1}\left\lbrack v\right\rbrack }}g\left( \overline{\mathrm{r}}\right) \odot \phi \left\lbrack \overline{\mathrm{v}}\right\rbrack \left( {1 - P\left( {v \mid \overline{\mathrm{v}},\overline{\mathrm{r}}}\right) }\right) .
351
+ $$
352
+
353
+ 3. Components corresponding to the triplets where $\overline{\mathrm{v}} \neq v \land \overline{\mathrm{w}} \neq v$ . The sum over these components is given by:
356
+
357
+ $$
358
+ \mathop{\sum }\limits_{{\left( {\bar{\mathbf{v}},\bar{\mathbf{r}},\bar{\mathbf{w}}}\right) \in \mathcal{T}}}\frac{\partial \log P\left( {\overline{\mathbf{w}} \mid \overline{\mathbf{v}},\overline{\mathbf{r}}}\right) }{\partial \phi \left\lbrack v\right\rbrack } = \mathop{\sum }\limits_{{\left( {\overline{\mathbf{v}},\overline{\mathbf{r}},\overline{\mathbf{w}}}\right) \in \mathcal{T}}}\left\lbrack {0 - \mathop{\sum }\limits_{u}P\left( {u \mid \overline{\mathbf{v}},\overline{\mathbf{r}}}\right) \frac{\partial \Gamma \left( {\overline{\mathbf{v}},\overline{\mathbf{r}}, u}\right) }{\partial \phi \left\lbrack v\right\rbrack }}\right\rbrack
359
+ $$
360
+
361
+ $$
362
+ = \mathop{\sum }\limits_{{\left( {\overline{\mathrm{v}},\overline{\mathrm{r}},\overline{\mathrm{w}}}\right) \in \mathcal{T}}} - P\left( {v \mid \overline{\mathrm{v}},\overline{\mathrm{r}}}\right) \frac{\partial \Gamma \left( {\overline{\mathrm{v}},\overline{\mathrm{r}}, v}\right) }{\partial \phi \left\lbrack v\right\rbrack }.
363
+ $$
364
+
365
+ $$
366
+ = \mathop{\sum }\limits_{{\left( {\overline{\mathrm{v}},\overline{\mathrm{r}},\overline{\mathrm{w}}}\right) \in \mathcal{T}}} - P\left( {v \mid \overline{\mathrm{v}},\overline{\mathrm{r}}}\right) g\left( \overline{\mathrm{r}}\right) \odot \phi \left\lbrack \overline{\mathrm{v}}\right\rbrack .
367
+ $$
368
+
369
+ Collecting these three categories, the GD operator over $\phi \left\lbrack v\right\rbrack$ , or rather the node representation update in DistMult, can be rewritten as:
370
+
371
+ $$
372
+ \mathrm{{GD}}\left( {\phi ,\mathcal{T}}\right) \left\lbrack v\right\rbrack = \phi \left\lbrack v\right\rbrack + \alpha \underset{v\text{'s neighbourhood } \rightarrow v}{\underbrace{\mathop{\sum }\limits_{{\left( {\bar{\mathrm{r}},\bar{\mathrm{w}}}\right) \in {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack }}\phi \left\lbrack \bar{\mathrm{w}}\right\rbrack \odot g\left( \bar{\mathrm{r}}\right) + \mathop{\sum }\limits_{{\left( {\bar{\mathrm{r}},\bar{\mathrm{v}}}\right) \in {\mathcal{N}}_{ - }^{1}\left\lbrack v\right\rbrack }}\phi \left\lbrack \bar{\mathrm{v}}\right\rbrack \odot g\left( \bar{\mathrm{r}}\right) \left( {1 - P\left( {v \mid \bar{\mathrm{v}},\bar{\mathrm{r}}}\right) }\right) }} \tag{16}
373
+ $$
374
+
375
+ $$
376
+ - \alpha \underset{\text{beyond neighbourhood } \rightarrow v}{\underbrace{\left( \mathop{\sum }\limits_{{\left( {\bar{\mathrm{v}},\bar{\mathrm{r}},\bar{\mathrm{w}}}\right) \in \mathcal{T},\bar{\mathrm{v}} \neq v,\bar{\mathrm{w}} \neq v}}P\left( {v \mid \bar{\mathrm{v}},\bar{\mathrm{r}}}\right) g\left( \bar{\mathrm{r}}\right) \odot \phi \left\lbrack \bar{\mathrm{v}}\right\rbrack + \mathop{\sum }\limits_{{\left( {v,\bar{\mathrm{r}},\bar{\mathrm{w}}}\right) \in \mathcal{T}}}\mathop{\sum }\limits_{u}P\left( {u \mid v,\bar{\mathrm{r}}}\right) g\left( \bar{\mathrm{r}}\right) \odot \phi \left\lbrack u\right\rbrack \right) }} . \tag{17}
377
+ $$
378
+
379
+ Note that the component "$v$'s neighbourhood $\rightarrow v$" (highlighted in red) in Equation (16) is a sum over $v$ 's neighbourhood - gathering information from positive neighbours $\phi \left\lbrack \overline{\mathrm{w}}\right\rbrack ,\left( {\cdot ,\overline{\mathrm{w}}}\right) \in {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack$ and negative neighbours $\phi \left\lbrack \overline{\mathrm{v}}\right\rbrack ,\left( {\cdot ,\overline{\mathrm{v}}}\right) \in {\mathcal{N}}_{ - }^{1}\left\lbrack v\right\rbrack$ . Hence, each atomic term of the sum can be seen as a message vector between $v$ and one of $v$ 's neighbouring nodes. Formally, letting $w$ be a neighbouring node of $v$ , the message vector can be written as follows
380
+
381
+ $$
382
+ m\left\lbrack {v, r, w}\right\rbrack = {q}_{\mathrm{M}}\left( {\phi \left\lbrack v\right\rbrack , r,\phi \left\lbrack w\right\rbrack }\right) = \left\{ \begin{array}{l} \phi \left\lbrack w\right\rbrack \odot g\left( r\right) ,\text{ if }\left( {r, w}\right) \in {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack , \\ \phi \left\lbrack w\right\rbrack \odot g\left( r\right) \left( {1 - P\left( {v \mid w, r}\right) }\right) ,\text{ if }\left( {r, w}\right) \in {\mathcal{N}}_{ - }^{1}\left\lbrack v\right\rbrack , \end{array}\right. \tag{18}
383
+ $$
384
+
385
+ which induces a bi-directional message function ${q}_{M}$ . On the other hand, the summation over these atomic terms (message vectors) induces the aggregate function ${q}_{\mathrm{A}}$ :
386
+
387
+ $$
388
+ z\left\lbrack v\right\rbrack = {q}_{\mathrm{A}}\left( \left\{ {m\left\lbrack {v, r, w}\right\rbrack : \left( {r, w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }\right\} \right)
389
+ $$
390
+
391
+ $$
392
+ = \mathop{\sum }\limits_{{\left( {\overline{\mathrm{r}},\overline{\mathrm{w}}}\right) \in {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack }}{m}^{l}\left\lbrack {v,\overline{\mathrm{r}},\overline{\mathrm{w}}}\right\rbrack + \mathop{\sum }\limits_{{\left( {\overline{\mathrm{r}},\overline{\mathrm{v}}}\right) \in {\mathcal{N}}_{ - }^{1}\left\lbrack v\right\rbrack }}{m}^{l}\left\lbrack {\overline{\mathrm{v}},\overline{\mathrm{r}}, v}\right\rbrack = \mathop{\sum }\limits_{{\left( {r, w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }}m\left\lbrack {v, r, w}\right\rbrack . \tag{19}
393
+ $$
394
+
395
+ Finally, the component "beyond neighbourhood $\rightarrow v$ " (highlighted in blue) is a term that contains dynamic information flow from global nodes to $v$ . If we define:
396
+
397
+ $$
398
+ n\left\lbrack v\right\rbrack = \frac{1}{\left| \mathcal{T}\right| }\mathop{\sum }\limits_{{\left( {v,\bar{r},\overline{\mathrm{w}}}\right) \in \mathcal{T}}}\mathop{\sum }\limits_{u}P\left( {u \mid v,\overline{\mathrm{r}}}\right) g\left( \overline{\mathrm{r}}\right) \odot \phi \left\lbrack u\right\rbrack + \frac{1}{\left| \mathcal{T}\right| }\mathop{\sum }\limits_{{\left( {\overline{\mathrm{v}},\overline{\mathrm{r}},\overline{\mathrm{w}}}\right) \in \mathcal{T},\overline{\mathrm{v}} \neq v,\overline{\mathrm{w}} \neq v}}P\left( {v \mid \overline{\mathrm{v}},\overline{\mathrm{r}}}\right) g\left( \overline{\mathrm{r}}\right) \odot \phi \left\lbrack \overline{\mathrm{v}}\right\rbrack ,
399
+ $$
400
+
401
+ the GD operator over $\phi \left\lbrack v\right\rbrack$ then boils down to an update function which uses the previous node state $\phi \left\lbrack v\right\rbrack$ , the aggregated message $z\left\lbrack v\right\rbrack$ , and a global term $n\left\lbrack v\right\rbrack$ to produce the new node state:
402
+
403
+ $$
404
+ \mathrm{{GD}}\left( {\phi ,\mathcal{T}}\right) \left\lbrack v\right\rbrack = {q}_{\mathrm{U}}\left( {\phi \left\lbrack v\right\rbrack , z\left\lbrack v\right\rbrack }\right) = \phi \left\lbrack v\right\rbrack + {\alpha z}\left\lbrack v\right\rbrack - {\beta n}\left\lbrack v\right\rbrack . \tag{20}
405
+ $$
406
+
407
+ Furthermore, $n\left\lbrack v\right\rbrack$ can be seen as a weighted sum of expectations by recasting the summations over triplets as expectations:
408
+
409
+ $$
410
+ n\left\lbrack v\right\rbrack = \frac{\left| {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack \right| }{\left| \mathcal{T}\right| }{\mathbb{E}}_{\left( {v,\bar{\mathrm{r}},\bar{\mathrm{w}}}\right) \sim {P}_{{\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack }}{\mathbb{E}}_{u \sim P\left( {\cdot \mid v,\bar{\mathrm{r}}}\right) }g\left( \bar{\mathrm{r}}\right) \odot \phi \left\lbrack u\right\rbrack + \frac{\left| {\mathcal{T}}^{-v}\right| }{\left| \mathcal{T}\right| }{\mathbb{E}}_{\left( {\bar{\mathrm{v}},\bar{\mathrm{r}},\bar{\mathrm{w}}}\right) \sim {P}_{{\mathcal{T}}^{-v}}}P\left( {v \mid \bar{\mathrm{v}},\bar{\mathrm{r}}}\right) g\left( \bar{\mathrm{r}}\right) \odot \phi \left\lbrack \bar{\mathrm{v}}\right\rbrack \tag{21}
+ $$
+
414
+
415
+ where ${\mathcal{T}}^{-v} = \left\{ {\left( {\overline{\mathrm{v}},\overline{\mathrm{r}},{\overline{\mathrm{v}}}^{\prime }}\right) \in \mathcal{T} \mid \overline{\mathrm{v}} \neq v \land {\overline{\mathrm{v}}}^{\prime } \neq v}\right\}$ is the set of triplets that do not contain $v$ .
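+
+ As a toy sanity check of the closed-form messages above (a sketch, not the paper's code), one can compare the autograd gradient of the DistMult log-likelihood for a single observed triplet $(s, r, o)$ with the negative-neighbour message $\left( 1 - P\left( o \mid s, r\right)\right) \phi\left\lbrack s\right\rbrack \odot g\left( r\right)$ from Equation (18), and with the beyond-neighbourhood contribution $-P\left( u \mid s, r\right) \phi\left\lbrack s\right\rbrack \odot g\left( r\right)$ for a node $u$ that does not appear in the triplet:
+
+ ```python
+ import torch
+
+ torch.manual_seed(0)
+ num_entities, dim = 5, 4
+ phi = torch.randn(num_entities, dim, requires_grad=True)   # node embeddings
+ g_r = torch.randn(dim)                                      # embedding g(r) of relation r
+ s, o, u = 1, 2, 3                                           # subject, object, and an uninvolved node
+
+ scores = (phi[s] * g_r) @ phi.t()                # Gamma(s, r, .) over all candidate objects
+ log_p = torch.log_softmax(scores, dim=-1)        # log P(. | s, r)
+ grad = torch.autograd.grad(log_p[o], phi)[0]     # d log P(o | s, r) / d phi
+
+ p = torch.softmax(scores, dim=-1).detach()
+ msg_object = (1 - p[o]) * phi[s].detach() * g_r  # negative-neighbour message (Equation 18)
+ msg_beyond = -p[u] * phi[s].detach() * g_r       # contribution to n[u] (beyond-neighbourhood term)
+
+ print(torch.allclose(grad[o], msg_object, atol=1e-5))  # expected: True
+ print(torch.allclose(grad[u], msg_beyond, atol=1e-5))  # expected: True
+ ```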
416
+
417
+ ### E.1 Extension to AdaGrad and N3 Regularisation
418
+
419
+ State-of-the-art FMs are often trained with training strategies adapted for each model category. For example, using an N3 regularizer [11] and AdaGrad optimiser [42], which we use for our experiments. For N3 regularizer, we add a gradient term induced by the regularised loss:
420
+
421
+ $$
422
+ \frac{\partial L}{\partial \phi \left\lbrack v\right\rbrack } = \frac{\partial {L}_{\mathrm{{fit}}}}{\partial \phi \left\lbrack v\right\rbrack } + \lambda \frac{\partial {L}_{\mathrm{{reg}}}}{\partial \phi \left\lbrack v\right\rbrack } = \frac{\partial {L}_{\mathrm{{fit}}}}{\partial \phi \left\lbrack v\right\rbrack } + \lambda \operatorname{sign}\left( {\phi \left\lbrack v\right\rbrack }\right) \phi {\left\lbrack v\right\rbrack }^{2}
423
+ $$
424
+
425
+ where ${L}_{\text{fit }}$ is the training loss, ${L}_{\text{reg }}$ is the regularisation term, $\operatorname{sign}\left( \cdot \right)$ is an element-wise sign function, and $\lambda \in {\mathbb{R}}_{ + }$ is a hyper-parameter specifying the regularisation strength. The component added by this regulariser fits into the message function as follows:
426
+
427
+ $$
428
+ {q}_{\mathrm{M}}\left( {\phi \left\lbrack v\right\rbrack , r,\phi \left\lbrack w\right\rbrack }\right) = \left\{ \begin{array}{l} \phi \left\lbrack w\right\rbrack \odot g\left( r\right) - \lambda \operatorname{sign}\left( {\phi \left\lbrack w\right\rbrack }\right) \phi {\left\lbrack w\right\rbrack }^{2},\text{ if }\left( {r, w}\right) \in {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack , \\ \phi \left\lbrack w\right\rbrack \odot g\left( r\right) \left( {1 - P\left( {v \mid w, r}\right) }\right) - \lambda \operatorname{sign}\left( {\phi \left\lbrack w\right\rbrack }\right) \phi {\left\lbrack w\right\rbrack }^{2},\text{ if }\left( {r, w}\right) \in {\mathcal{N}}_{ - }^{1}\left\lbrack v\right\rbrack . \end{array}\right. \tag{22}
+ $$
+
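+ In code, the extra term added to each message is a one-liner (sketch):
+
+ ```python
+ def n3_message_penalty(phi_w, lam):
+     # lambda * sign(phi[w]) * phi[w]^2, applied element-wise as in Equation (22)
+     return lam * phi_w.sign() * phi_w.pow(2)
+ ```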
432
+
433
+ Our derivation in Section 3 focuses on (stochastic) gradient descent as the optimiser for training FMs. Going beyond this, more complex gradient-based optimisers like AdaGrad use running statistics of the gradients. For example, with AdaGrad the gradient is re-scaled element-wise as $\frac{1}{\sqrt{{s}_{v}} + \epsilon }{\nabla }_{\phi \left\lbrack v\right\rbrack }L$ , where ${s}_{v}$ is the running sum of squared gradients and $\epsilon > 0$ is a hyper-parameter added to the denominator to improve numerical stability. Such re-scaling can be absorbed into the update equation:
436
+
437
+ $$
438
+ \operatorname{AdaGrad}\left( {\phi ,\mathcal{T}}\right) \left\lbrack v\right\rbrack = \phi \left\lbrack v\right\rbrack + \left( {{\alpha z}\left\lbrack v\right\rbrack - {\beta n}\left\lbrack v\right\rbrack }\right) * \frac{1}{\sqrt{s\left\lbrack v\right\rbrack } + \epsilon }.
439
+ $$
440
+
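+ A minimal sketch of this absorbed update (hypothetical names; the running statistic $s\left\lbrack v\right\rbrack$ is assumed to be maintained elsewhere, as in a standard AdaGrad implementation):
+
+ ```python
+ def refactor_adagrad_update(phi_v, z_v, n_v, s_v, alpha, beta, eps=1e-10):
+     # Mirrors AdaGrad(phi, T)[v] above: the SGD-style update alpha*z[v] - beta*n[v]
+     # is re-scaled element-wise by 1 / (sqrt(s[v]) + eps).
+     return phi_v + (alpha * z_v - beta * n_v) / (s_v.sqrt() + eps)
+ ```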
441
+ ## F Additional Results on Inductive KGC Tasks
442
+
443
+ In Appendix B.2, we describe the results on FB15K237_v1_ind under a single random seed. To confirm significance and sensitivity, we further experiment with 5 additional random seeds. Due to our computational budget, for this experiment we resorted to a coarse grid when performing the hyper-parameter sweeps. Following standard evaluation protocols, we report the mean values and standard deviations of the filtered Hits@10 over 5 random seeds. Numbers for Neural-LP, DRUM, RuleN, GraIL, and NBFNet are taken from the literature [9, 17]; "-" means the numbers are not applicable. Table 3 summarises the results. REFACTOR GNNs are able to make use of both types of input features, and textual features benefit both GAT and REFACTOR GNNs on most datasets. Increasing depth benefits WN18RR_vi_ind $\left( {i \in \left\lbrack {1,2,3,4}\right\rbrack }\right)$ most. Future work could consider the impact of textual node features provided by different pretrained language models. Another interesting direction is to investigate the impact of depth on GNNs for datasets like WN18RR, where many kinds of hierarchies are observed in the data. In addition to the partial-ranking evaluation protocol, where the ground-truth subject/object entity is ranked against 50 sampled entities, ${}^{2}$ we also consider the full-ranking evaluation protocol, where the ground-truth subject/object entity is ranked against all entities. Table 4 summarises the results. Empirically, we observe that full ranking is more suitable for reflecting the differences between models than partial ranking. It also has less variance than partial ranking, since it requires no sampling of candidate entities. Hence, we believe there is good reason to recommend that the community use full ranking for these datasets in the future.
444
+
445
+ ---
446
+
447
+ ${}^{2}$ One implementation of such evaluation can be found at https://github.com/kkteru/grail/blob/master/test_ranking.py#L448.
448
+
449
+ ---
450
+
451
+ Table 3: Hits@10 with Partial Ranking against 50 Negative Samples. "[T]" indicates using textual encodings of entity descriptions [29] as input (positional) node features; "[R]" indicates using frozen random vectors as input (positional) node feature.
452
+
453
+ <table><tr><td rowspan="2"/><td colspan="4">WN18RR</td><td colspan="4">FB15k-237</td><td/><td colspan="3">NELL-995</td></tr><tr><td>v1</td><td>v2</td><td>v3</td><td>v4</td><td>v1</td><td>v2</td><td>v3</td><td>v4</td><td>v1</td><td>v2</td><td>v3</td><td>v4</td></tr><tr><td>No Pretrain [R]</td><td>${0.220} \pm {0.048}$</td><td>${0.226} \pm {0.013}$</td><td>${0.244} \pm {0.020}$</td><td>${0.218} \pm {0.050}$</td><td>${0.215} \pm {0.019}$</td><td>${0.207} \pm {0.008}$</td><td>${0.211} \pm {0.002}$</td><td>${0.205} \pm {0.008}$</td><td>${0.543} \pm {0.022}$</td><td>${0.207} \pm {0.008}$</td><td>${0.216} \pm {0.004}$</td><td>${0.198} \pm {0.006}$</td></tr><tr><td>No Pretrain [T]</td><td>${0.267} \pm {0.020}$</td><td>${0.236} \pm {0.020}$</td><td>${0.292} \pm {0.025}$</td><td>${0.253} \pm {0.022}$</td><td>${0.242} \pm {0.018}$</td><td>${0.227} \pm {0.007}$</td><td>${0.240} \pm {0.011}$</td><td>${0.244} \pm {0.003}$</td><td>${0.538} \pm {0.079}$</td><td>${0.234} \pm {0.017}$</td><td>${0.242} \pm {0.020}$</td><td>${0.191} \pm {0.036}$</td></tr><tr><td>Neural-LP</td><td>0.744</td><td>0.689</td><td>0.462</td><td>0.671</td><td>0.529</td><td>0.589</td><td>0.529</td><td>0.559</td><td>0.408</td><td>0.787</td><td>0.827</td><td>0.806</td></tr><tr><td>DRUM</td><td>0.744</td><td>0.689</td><td>0.462</td><td>0.671</td><td>0.529</td><td>0.587</td><td>0.529</td><td>0.559</td><td>0.194</td><td>0.786</td><td>0.827</td><td>0.806</td></tr><tr><td>RuleN</td><td>0.809</td><td>0.782</td><td>0.534</td><td>0.716</td><td>0.498</td><td>0.778</td><td>0.877</td><td>0.856</td><td>0.535</td><td>0.818</td><td>0.773</td><td>0.614</td></tr><tr><td>GAT(3) [R]</td><td>${0.583} \pm {0.022}$</td><td>${0.797} \pm {0.002}$</td><td>${0.560} \pm {0.005}$</td><td>${0.660} \pm {0.015}$</td><td>${0.333} \pm {0.042}$</td><td>${0.312} \pm {0.036}$</td><td>${0.407} \pm {0.072}$</td><td>${0.363} \pm {0.050}$</td><td>${0.906} \pm {0.004}$</td><td>${0.303} \pm {0.031}$</td><td>${0.351} \pm {0.009}$</td><td>${0.187} \pm {0.098}$</td></tr><tr><td>$\operatorname{GAT}\left( 6\right) \left\lbrack \mathrm{R}\right\rbrack$</td><td>${0.850} \pm {0.014}$</td><td>${0.841} \pm {0.001}$</td><td>${0.631} \pm {0.020}$</td><td>${0.802} \pm {0.004}$</td><td>${0.401} \pm {0.020}$</td><td>${0.445} \pm {0.018}$</td><td>${0.461} \pm {0.048}$</td><td>${0.406} \pm {0.143}$</td><td>${0.811} \pm {0.039}$</td><td>${0.670} \pm {0.055}$</td><td>${0.341} \pm {0.042}$</td><td>${0.301} \pm {0.002}$</td></tr><tr><td>GAT(3) [T]</td><td>${0.970} \pm {0.002}$</td><td>${0.980} \pm {0.001}$</td><td>${0.897} \pm {0.005}$</td><td>${0.960} \pm {0.001}$</td><td>${0.806} \pm {0.003}$</td><td>${0.942} \pm {0.001}$</td><td>${0.941} \pm {0.002}$</td><td>${0.954} \pm {0.001}$</td><td>${0.938} \pm {0.005}$</td><td>${0.839} \pm {0.001}$</td><td>${0.962} \pm {0.001}$</td><td>${0.354} \pm {0.002}$</td></tr><tr><td>$\operatorname{GAT}\left( 6\right) \left\lbrack \mathrm{T}\right\rbrack$</td><td>${0.965} \pm {0.002}$</td><td>${0.986} \pm {0.001}$</td><td>${0.920} \pm {0.002}$</td><td>${0.970} \pm {0.003}$</td><td>${0.826} \pm {0.004}$</td><td>${0.943} \pm {0.001}$</td><td>${0.927} \pm {0.003}$</td><td>${0.927} \pm {0.001}$</td><td>${0.904} \pm {0.000}$</td><td>${0.811} \pm {0.001}$</td><td>${0.880} \pm {0.001}$</td><td>${0.297} \pm 
{0.003}$</td></tr><tr><td>Grall</td><td>0.825</td><td>0.787</td><td>0.584</td><td>0.734</td><td>0.642</td><td>0.818</td><td>0.828</td><td>0.893</td><td>0.595</td><td>0.933</td><td>0.914</td><td>0.732</td></tr><tr><td>NBFNet</td><td>0.948</td><td>0.905</td><td>0.893</td><td>0.890</td><td>0.834</td><td>0.949</td><td>0.951</td><td>0.960</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>ReFactorGNN(3) [R]</td><td>${0.899} \pm {0.003}$</td><td>${0.842} \pm {0.004}$</td><td>${0.605} \pm {0.000}$</td><td>${0.801} \pm {0.002}$</td><td>${0.673} \pm {0.000}$</td><td>${0.812} \pm {0.002}$</td><td>${0.833} \pm {0.003}$</td><td>${0.877} \pm {0.002}$</td><td>${0.913} \pm {0.000}$</td><td>${0.913} \pm {0.011}$</td><td>${0.893} \pm {0.000}$</td><td>${0.838} \pm {0.002}$</td></tr><tr><td>ReFactorGNN(6) [R]</td><td>${0.885} \pm {0.000}$</td><td>${0.854} \pm {0.003}$</td><td>${0.738} \pm {0.006}$</td><td>${0.817} \pm {0.004}$</td><td>${0.787} \pm {0.007}$</td><td>${0.903} \pm {0.003}$</td><td>${0.903} \pm {0.002}$</td><td>${0.920} \pm {0.002}$</td><td>${0.971} \pm {0.007}$</td><td>${0.957} \pm {0.003}$</td><td>${0.935} \pm {0.003}$</td><td>${0.927} \pm {0.001}$</td></tr><tr><td>ReFactorGNN(3) [T]</td><td>${0.918} \pm {0.002}$</td><td>${0.973} \pm {0.001}$</td><td>${0.910} \pm {0.003}$</td><td>${0.934} \pm {0.001}$</td><td>${0.900} \pm {0.004}$</td><td>${0.959} \pm {0.001}$</td><td>${0.952} \pm {0.002}$</td><td>${0.968} \pm {0.001}$</td><td>${0.955} \pm {0.004}$</td><td>${0.931} \pm {0.001}$</td><td>${0.978} \pm {0.001}$</td><td>${0.929} \pm {0.001}$</td></tr><tr><td>ReFactorGNN(6) [T]</td><td>${0.970} \pm {0.002}$</td><td>${0.988} \pm {0.001}$</td><td>${0.944} \pm {0.002}$</td><td>${0.987} \pm {0.000}$</td><td>${0.920} \pm {0.001}$</td><td>${0.963} \pm {0.001}$</td><td>${0.962} \pm {0.002}$</td><td>${0.970} \pm {0.002}$</td><td>${0.949} \pm {0.011}$</td><td>${0.963} \pm {0.001}$</td><td>${0.994} \pm {0.000}$</td><td>${0.955} \pm {0.002}$</td></tr></table>
454
+
455
+ Table 4: Hits@10 with Full Ranking against All Candidate Entities. "[T]" indicates using textual encodings of entity descriptions [29] as input (positional) node features; "[R]" indicates using frozen random vectors as input (positional) node feature.
456
+
457
+ <table><tr><td rowspan="2"/><td colspan="4">WN18RR</td><td colspan="4">FB15k-237</td><td colspan="4">NELL-995</td></tr><tr><td>v1</td><td>v2</td><td>v3</td><td>v4</td><td>v1</td><td>v2</td><td>v3</td><td>v4</td><td>v1</td><td>v2</td><td>v3</td><td>v4</td></tr><tr><td>No Pretrain [R]</td><td>${0.020} \pm {0.006}$</td><td>${0.004} \pm {0.001}$</td><td>${0.004} \pm {0.003}$</td><td>${0.003} \pm {0.001}$</td><td>${0.013} \pm {0.003}$</td><td>${0.012} \pm {0.001}$</td><td>${0.004} \pm {0.001}$</td><td>${0.002} \pm {0.001}$</td><td>${0.255} \pm {0.021}$</td><td>${0.004} \pm {0.001}$</td><td>${0.001} \pm {0.001}$</td><td>${0.003} \pm {0.001}$</td></tr><tr><td>No Pretrain [T]</td><td>${0.027} \pm {0.009}$</td><td>${0.007} \pm {0.003}$</td><td>${0.006} \pm {0.001}$</td><td>${0.005} \pm {0.001}$</td><td>${0.014} \pm {0.001}$</td><td>${0.010} \pm {0.001}$</td><td>${0.007} \pm {0.001}$</td><td>${0.006} \pm {0.001}$</td><td>${0.262} \pm {0.031}$</td><td>${0.006} \pm {0.002}$</td><td>${0.006} \pm {0.002}$</td><td>${0.003} \pm {0.001}$</td></tr><tr><td>GAT(3) [R]</td><td>${0.171} \pm {0.008}$</td><td>${0.504} \pm {0.026}$</td><td>${0.260} \pm {0.022}$</td><td>${0.089} \pm {0.017}$</td><td>${0.074} \pm {0.003}$</td><td>${0.050} \pm {0.014}$</td><td>${0.051} \pm {0.019}$</td><td>${0.023} \pm {0.012}$</td><td>${0.806} \pm {0.019}$</td><td>${0.003} \pm {0.002}$</td><td>${0.008} \pm {0.007}$</td><td>${0.008} \pm {0.004}$</td></tr><tr><td>$\operatorname{GAT}\left( 6\right) \left\lbrack \mathrm{R}\right\rbrack$</td><td>${0.575} \pm {0.005}$</td><td>${0.698} \pm {0.003}$</td><td>${0.312} \pm {0.000}$</td><td>${0.606} \pm {0.002}$</td><td>${0.048} \pm {0.004}$</td><td>${0.028} \pm {0.004}$</td><td>${0.033} \pm {0.018}$</td><td>${0.015} \pm {0.026}$</td><td>${0.491} \pm {0.112}$</td><td>${0.110} \pm {0.048}$</td><td>${0.031} \pm {0.010}$</td><td>${0.031} \pm {0.002}$</td></tr><tr><td>$\operatorname{GAT}\left( 3\right) \left\lbrack \mathrm{T}\right\rbrack$</td><td>${0.794} \pm {0.000}$</td><td>${0.826} \pm {0.000}$</td><td>${0.468} \pm {0.000}$</td><td>${0.705} \pm {0.000}$</td><td>${0.331} \pm {0.000}$</td><td>${0.585} \pm {0.000}$</td><td>${0.505} \pm {0.000}$</td><td>${0.449} \pm {0.000}$</td><td>${0.856} \pm {0.000}$</td><td>${0.245} \pm {0.000}$</td><td>${0.345} \pm {0.000}$</td><td>${0.078} \pm {0.000}$</td></tr><tr><td>GAT(6) [T]</td><td>${0.815} \pm {0.000}$</td><td>${0.808} \pm {0.000}$</td><td>${0.469} \pm {0.000}$</td><td>${0.701} \pm {0.000}$</td><td>${0.416} \pm {0.000}$</td><td>${0.483} \pm {0.000}$</td><td>${0.391} \pm {0.000}$</td><td>${0.388} \pm {0.000}$</td><td>${0.851} \pm {0.000}$</td><td>${0.189} \pm {0.000}$</td><td>${0.137} \pm {0.000}$</td><td>${0.023} \pm {0.000}$</td></tr><tr><td>ReFactorGNN(3) [R]</td><td>${0.826} \pm {0.000}$</td><td>0.758 ± 0.002</td><td>${0.374} \pm {0.004}$</td><td>${0.707} \pm {0.000}$</td><td>${0.455} \pm {0.010}$</td><td>${0.603} \pm {0.008}$</td><td>${0.556} \pm {0.003}$</td><td>${0.587} \pm {0.003}$</td><td>${0.907} \pm {0.004}$</td><td>${0.700} \pm {0.001}$</td><td>${0.630} \pm {0.001}$</td><td>${0.511} \pm {0.001}$</td></tr><tr><td>ReFactorGNN(6) [R]</td><td>${0.826} \pm {0.001}$</td><td>${0.769} \pm {0.005}$</td><td>${0.440} \pm {0.001}$</td><td>${0.731} \pm {0.000}$</td><td>${0.558} \pm {0.007}$</td><td>${0.694} \pm {0.006}$</td><td>${0.639} \pm {0.006}$</td><td>${0.640} \pm {0.000}$</td><td>${0.967} \pm {0.005}$</td><td>${0.764} \pm {0.009}$</td><td>${0.697} \pm {0.005}$</td><td>${0.703} \pm {0.001}$</td></tr><tr><td>ReFactorGNN(3) [T]</td><td>${0.805} 
\pm {0.000}$</td><td>${0.796} \pm {0.003}$</td><td>${0.483} \pm {0.000}$</td><td>${0.682} \pm {0.000}$</td><td>${0.589} \pm {0.001}$</td><td>${0.672} \pm {0.001}$</td><td>${0.610} \pm {0.001}$</td><td>${0.611} \pm {0.001}$</td><td>${0.918} \pm {0.000}$</td><td>${0.629} \pm {0.001}$</td><td>${0.634} \pm {0.000}$</td><td>${0.305} \pm {0.000}$</td></tr><tr><td>ReFactorGNN(6) [T]</td><td>${0.844} \pm {0.004}$</td><td>${0.848} \pm {0.003}$</td><td>${0.522} \pm {0.001}$</td><td>${0.781} \pm {0.001}$</td><td>${0.619} \pm {0.000}$</td><td>${0.721} \pm {0.001}$</td><td>${0.663} \pm {0.000}$</td><td>${0.660} \pm {0.000}$</td><td>${0.913} \pm {0.000}$</td><td>${0.733} \pm {0.000}$</td><td>${0.711} \pm {0.000}$</td><td>${0.417} \pm {0.000}$</td></tr></table>
458
+
459
+ <table><tr><td>Depth</td><td>3</td><td>6</td><td>$\infty$</td></tr><tr><td>$\Delta$ Test MRR</td><td>0.060</td><td>0.045</td><td>0.016</td></tr></table>
460
+
461
+ Table 5: The Impact of Meaningful Node Features on FB15K237_v1. $\Delta$ Test MRR is computed as Test MRR (RoBERTa encodings as node features) minus Test MRR (random vectors as node features). A larger $\Delta$ means meaningful node features bring more benefit.
462
+
463
+ ## G Additional Results on The Impact of Meaningful Node Features
464
+
465
+ To better understand the impact meaningful node features have on REFACTOR GNNs for the task of knowledge graph completion, we compare REFACTOR GNNs trained with RoBERTa encodings (one example of meaningful node features) and REFACTOR GNNs trained with random vectors (no meaningful node features). We perform experiments on FB15K237_v1 and vary the number of message-passing layers: $L \in \{ 3,6,\infty \}$ . Table 5 summarises the differences. We can see that meaningful node features are highly beneficial if REFACTOR GNNs are only provided with a small number of message-passing layers. As more message-passing layers are allowed, the benefit of meaningful node features diminishes. The extreme case is $L = \infty$ , where the benefit of meaningful node features becomes negligible. We hypothesise that this might be why meaningful node features have not been found to be useful for transductive knowledge graph completion.
466
+
467
+ ## H Additional Results on Parameter Efficiency
468
+
469
+ Figure 4 shows the parameter efficiency on the dataset FB15K237_v2.
470
+
471
+ ## I Experimental Details: Setup, Hyper-Parameters, and Implementation
472
+
473
+ As we stated in the experiments section, we used a two-stage training process. In stage one, we sample subgraphs around query links and serialise them. In stage two, we load the serialised subgraphs and train the GNNs. For transductive knowledge graph completion, we test the model on the same graph (but different splits). For inductive knowledge graph completion, we test the model on the new graph, where the relation vocabulary is shared with the training graph, while the entities are novel. We use the validation split for selecting the best hyper-parameter configuration and report the corresponding test performance. We include reciprocal triplets into the training triplets following standard practice [11].
474
+
475
+ For subgraph serialisation, we first sample a mini-batch of triplets and then use these nodes as seed nodes for sampling subgraphs. We also randomly draw a node globally and add it to the seed nodes. The training batch size is 256 while the valid/test batch size is 8 . We use the LADIES algorithm [26] and sample subgraphs with depths in $\left\lbrack {1,2,3,6,9}\right\rbrack$ and a width of 256. For each graph, we keep sampling for 20 epochs, i.e. roughly 20 full passes over the graph.
476
+
477
+ For general model training, we consider hyper-parameters including learning rates in $\left\lbrack {{0.01},{0.001}}\right\rbrack$ , weight decay values in $\left\lbrack {0,{0.1},{0.01}}\right\rbrack$ , and dropout values in $\left\lbrack {0,{0.5}}\right\rbrack$ . For GATs, we use 768 as the hidden size and 8 as the number of attention heads. We train GATs with 3 layers and 6 layers. We also consider whether or not to combine the outputs from all the layers. For REFACTOR GNNs, we use the same hidden size as GAT. We consider whether the ReFactor Layer is induced by an SGD operator or by an AdaGrad operator. Within a ReFactor Layer, we also consider the N3 regulariser strength values $\left\lbrack {0,{0.005},{0.0005}}\right\rbrack$ , the $\alpha$ values $\left\lbrack {{0.1},{0.01}}\right\rbrack$ , and the option of removing the $n\left\lbrack v\right\rbrack$ term, in which case the message-passing layer only involves information flow within the 1-hop neighbourhood, as most classic message-passing GNNs do.
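+
+ Written out as a search space (a sketch; the key names are ours, not from released configuration files):
+
+ ```python
+ search_space = {
+     "learning_rate": [0.01, 0.001],
+     "weight_decay": [0, 0.1, 0.01],
+     "dropout": [0, 0.5],
+     # ReFactor-layer-specific options
+     "layer_operator": ["sgd", "adagrad"],
+     "n3_regularisation": [0, 0.005, 0.0005],
+     "alpha": [0.1, 0.01],
+     "keep_n_v_term": [True, False],
+ }
+ ```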
478
+
479
+ We use grid search to find the best hyper-parameter configuration based on the validation MRR. Each training run is done using two Tesla V100 (16GB) GPUs, where data parallelism was implemented via the DistributedDataParallel component of pytorch-lightning. For inductive learning experiments, inference for all the validation and test queries on small datasets like FB15K237_v1 takes about 1-5 seconds, on medium datasets approximately 20 seconds, and on big datasets like WN18RR_v4 approximately 60 seconds. For most training runs, the memory footprint is less than ${40}\%$ (13GB). The training time for 20 full passes over the graph is about 1, 7, and 21 minutes for small, medium, and large datasets, respectively.
480
+
481
+ ![01963eef-1097-7b31-b870-0251d4b72293_16_339_231_1118_671_0.jpg](images/01963eef-1097-7b31-b870-0251d4b72293_16_339_231_1118_671_0.jpg)
482
+
483
+ Figure 4: Performance vs Parameter Efficiency as #Layers Increases on FB15K237_v2. The left axis is Test MRR while the right axis is #Parameters. The solid lines and dashed lines indicate the changes in Test MRR and in #Parameters, respectively.
484
+
485
+ We adapt the LADIES code base for sampling on knowledge graphs ${}^{3}$ . The datasets we use can be downloaded from https://github.com/villmow/datasets_knowledge_embedding and https://github.com/kkteru/grail. We implement REFACTOR GNNs using the MessagePassing ${\mathrm{{API}}}^{4}$ in PyG. Specifically, we use the message_and_aggregate function ${}^{5}$ to compute the aggregated messages.
486
+
487
+ ---
488
+
489
+ ${}^{3}$ https://github.com/acbull/LADIES
490
+
491
+ ${}^{4}$ https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html
492
+
493
+ ${}^{5}$ https://pytorch-geometric.readthedocs.io/en/latest/notes/sparse_tensor.html
494
+
495
+ ---
papers/LOG/LOG 2022/LOG 2022 Conference/Sq9Orta9l5i/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,135 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § REFACTOR GNNS: REVISITING FACTORISATION-BASED MODELS FROM A MESSAGE-PASSING PERSPECTIVE
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Factorisation-based Models (FMs), such as DistMult, have enjoyed enduring success for Knowledge Graph Completion (KGC) tasks, often outperforming Graph Neural Networks (GNNs). However, unlike GNNs, FMs struggle to incorporate node features and generalise to unseen nodes in inductive settings. Our work bridges the gap between FMs and GNNs by proposing REFACTOR GNNs. This new architecture draws upon both modelling paradigms, which previously were largely thought of as disjoint. Concretely, using a message-passing formalism, we show how FMs can be cast as GNNs by reformulating the gradient descent procedure as message-passing operations, which forms the basis of our REFACTOR GNNs. Our REFACTOR GNNs achieve state-of-the-art inductive performance while using an order of magnitude fewer parameters.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ In recent years, machine learning on graphs has attracted significant attention due to the abundance of graph-structured data and developments in graph learning algorithms. Graph Neural Networks (GNNs) have shown state-of-the-art performance for many graph-related problems, such as node classification [1] and graph classification [2]. Their main advantage is that they can easily be applied in an inductive setting: generalising to new nodes and graphs without re-training. However, despite many attempts at applying GNNs for multi-relational link prediction such as Knowledge Graph Completion [3], there are still few positive results compared to factorisation-based models (FMs) [4, 5]. As it stands, GNNs either - after resolving reproducibility concerns - deliver significantly lower performance $\left\lbrack {6,7}\right\rbrack$ or yield negligible performance gains at the cost of highly sophisticated architecture designs [8]. A notable exception is NBFNet [9], but even here the advance comes at the price of a high computational inference cost compared to FMs. Furthermore, it is unclear how NBFNet could incorporate node features, which - as we will see in this work - leads to remarkably lower performance in an inductive setting. On the flip side, FMs, despite being a simpler architecture, have been found to be very accurate for knowledge graph completion when coupled with appropriate training strategies [10] and training objectives [11, 12]. However, they also come with shortcomings in that they, unlike GNNs, can not be applied in an inductive setting.
16
+
17
+ Given the respective strengths and weaknesses of FMs and GNNs, can we bridge these two seemingly different model categories? While exploring this question, we make the following contributions:
18
+
19
+ * By reformulating the training process using message-passing primitives, we show a practical connection between FMs and GNNs, i.e. FMs can be treated as a special instance of GNNs.
20
+
21
+ * Based on this connection, we propose a new family of architectures, REFACTOR GNNs, that interpolates between FMs and GNNs and allow FMs to be used inductively.
22
+
23
+ * In an empirical investigation across well-established benchmarks (see the appendix), our REFACTOR GNNS achieve state-of-the-art inductive performance across the board and comparable transductive performance to FMs - despite using an order of magnitude fewer parameters.
24
+
25
+ § 2 BACKGROUND
26
+
27
+ Knowledge Graph Completion [KGC, 13] is a canonical task of multi-relational link prediction. The goal is to predict missing edges given the existing edges in the knowledge graph. Formally, a knowledge graph contains a set of entities (nodes) $\mathcal{E} = \{ 1,\ldots ,\left| \mathcal{E}\right| \}$ , a set of relation (or edge) types $\mathcal{R} = \{ 1,\ldots ,\left| \mathcal{R}\right| \}$ , and a set of typed edges between the entities $\mathcal{T} = {\left\{ \left( {v}_{i},{r}_{i},{w}_{i}\right) \right\} }_{i = 1}^{\left| \mathcal{T}\right| }$ , where each triple $\left( {{v}_{i},{r}_{i},{w}_{i}}\right)$ indicates a relationship of type ${r}_{i} \in \mathcal{R}$ between the subject ${v}_{i} \in \mathcal{E}$ and the object ${w}_{i} \in \mathcal{E}$ of the triple. Given a (training) knowledge graph, the KGC task [3] aims at identifying missing links by answering $\left( {v,r,?}\right)$ queries i.e. predicting the object given the subject and the relation.
28
+
29
+ Multi-relational link prediction models can be trained via maximum likelihood, by fitting a parameterized conditional categorical distribution ${P}_{\theta }\left( {w \mid v,r}\right)$ over the candidate objects of a relation, given the subject $v$ and the relation type $r : {P}_{\theta }\left( {w \mid v,r}\right) = \operatorname{Softmax}\left( {{\Gamma }_{\theta }\left( {v,r, \cdot }\right) }\right) \left\lbrack w\right\rbrack$ , where ${\Gamma }_{\theta } : \mathcal{E} \times \mathcal{R} \times \mathcal{E} \rightarrow \mathbb{R}$ is a scoring function that, given a triple(v, r, w), returns the likelihood that the corresponding edge appears in the graph. In this paper, we illustrate our derivations using DistMult [4] as the score function $\Gamma$ and defer extensions to general score functions, e.g. ComplEx [5] to the appendix. In DistMult, the score function ${\Gamma }_{\theta }$ is defined as the tri-linear dot product of the embeddings of the subject, relation type, and object of the triple: ${\Gamma }_{\theta }\left( {v,r,w}\right) = \mathop{\sum }\limits_{{i = 1}}^{K}{f}_{\phi }{\left( v\right) }_{i}{f}_{\phi }{\left( w\right) }_{i}{g}_{\psi }{\left( r\right) }_{i}$ , where ${f}_{\phi } : \mathcal{E} \rightarrow {\mathbb{R}}^{K}$ and ${g}_{\psi } : \mathcal{R} \rightarrow {\mathbb{R}}^{K}$ are learnable maps parameterised by $\phi$ and $\psi$ that encode entities and relation types into $K$ -dimensional representations, and $\theta = \left( {\phi ,\psi }\right)$ . We will refer to $f$ and $g$ as the entity and relational encoders, respectively.
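+ For concreteness, the tri-linear DistMult score can be written as follows (a small PyTorch sketch; the tensor names are ours):
+ ```python
+ import torch
+
+ def distmult_score(e_v: torch.Tensor, e_w: torch.Tensor, e_r: torch.Tensor) -> torch.Tensor:
+     # <f(v), f(w), g(r)> = sum_i f(v)_i * f(w)_i * g(r)_i
+     return (e_v * e_w * e_r).sum(dim=-1)
+ ```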
30
+
31
+ We can learn the model parameters $\theta$ by minimising the expected negative log-likelihood $\mathcal{L}\left( \theta \right)$ of the ground-truth entities for the queries $\left( {v,r,?}\right)$ obtained from $\mathcal{T}$ :
32
+
33
+ $$
34
+ \arg \mathop{\min }\limits_{\theta }\mathcal{L}\left( \theta \right) \;\text{ where }\;\mathcal{L}\left( \theta \right) = - \frac{1}{\left| \mathcal{T}\right| }\mathop{\sum }\limits_{{\left( {v,r,w}\right) \in \mathcal{T}}}\log {P}_{\theta }\left( {w \mid v,r}\right) . \tag{1}
35
+ $$
36
+
37
+ During inference, we use the distribution ${P}_{\theta }$ for ranking missing links.
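+ A minimal sketch of the objective in Equation (1) for a batch of training triples, assuming lookup tables phi (entities) and psi (relations); the softmax over candidate objects becomes a cross-entropy over $\left| \mathcal{E}\right|$ scores:
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def nll_loss(phi: torch.Tensor, psi: torch.Tensor, batch: torch.Tensor) -> torch.Tensor:
+     # phi: [num_entities, K], psi: [num_relations, K], batch: [B, 3] with columns (v, r, w).
+     v, r, w = batch[:, 0], batch[:, 1], batch[:, 2]
+     scores = (phi[v] * psi[r]) @ phi.t()   # [B, num_entities] DistMult scores for (v, r, ?)
+     return F.cross_entropy(scores, w)      # -log P(w | v, r), averaged over the batch
+ ```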
38
+
39
+ Factorisation-based Models for KGC. In factorisation-based models, which we assume to be DistMult, ${f}_{\phi }$ and ${g}_{\psi }$ are simply parameterised as look-up tables, associating each entity and relation with a continuous distributed representation:
40
+
41
+ $$
42
+ {f}_{\phi }\left( v\right) = \phi \left\lbrack v\right\rbrack ,\phi \in {\mathbb{R}}^{\left| \mathcal{E}\right| \times K}\;\text{ and }\;{g}_{\psi }\left( r\right) = \psi \left\lbrack r\right\rbrack ,\psi \in {\mathbb{R}}^{\left| \mathcal{R}\right| \times K}. \tag{2}
43
+ $$
44
+
45
+ GNN-based Models for KGC. GNNs were originally proposed for node or graph classification tasks $\left\lbrack {{14},{15}}\right\rbrack$ . To adapt them to KGC, previous work has explored two different paradigms: node-wise entity representations [16] and pair-wise entity representations [9, 17]. Though the latter paradigm has shown promising results, it requires computing an embedding representation for any pair of nodes, which can be too computationally expensive for large-scale graphs with millions of entities. Additionally, node-wise representations allow for using a single evaluation of ${f}_{\phi }\left( v\right)$ for multiple queries involving $v$ . Models based on the first paradigm differ from pure FMs only in the entity encoder and lend themselves well for a fairer comparison with pure FMs. We will therefore focus on this class and leave the investigation of pair-wise representations to future work.
46
+
47
+ Let ${q}_{\phi } : \mathcal{G} \times \mathcal{X} \rightarrow \mathop{\bigcup }\limits_{{S \in {\mathbb{N}}^{ + }}}{\mathbb{R}}^{S \times K}$ be a GNN encoder, where $\mathcal{G} = \{ G \mid G \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}\}$ is the set of all possible multi-relational graphs defined over $\mathcal{E}$ and $\mathcal{R}$ , and $\mathcal{X}$ is the input feature space, respectively. Then we can set ${f}_{\phi }\left( v\right) = {q}_{\phi }\left( {\mathcal{T},X}\right) \left\lbrack v\right\rbrack$ . Following the standard message-passing framework $\left\lbrack {2,{18}}\right\rbrack$ used by the GNNs, we view ${q}_{\phi } = {q}^{L} \circ \ldots \circ {q}^{1}$ as the recursive composition of $L \in {\mathbb{N}}^{ + }$ layers that compute intermediate representations ${h}^{l}$ for $l \in \{ 1,\ldots ,L\}$ (and ${h}^{0} = X$ ) for all entities in the KG. Each layer is made up of the following three functions:
48
+
49
+ * A message function ${q}_{\mathrm{M}}^{l} : {\mathbb{R}}^{K} \times \mathcal{R} \times {\mathbb{R}}^{K} \rightarrow {\mathbb{R}}^{K}$ that computes the message along each edge. Given an edge $\left( {v,r,w}\right) \in \mathcal{T},{q}_{\mathrm{M}}^{l}$ not only makes use of the node states ${h}^{l - 1}\left\lbrack v\right\rbrack$ and ${h}^{l - 1}\left\lbrack w\right\rbrack$ (as in standard GNNs) but also uses the relation $r$ . Denote the message as ${m}^{l}\left\lbrack {v,r,w}\right\rbrack = {q}_{\mathrm{M}}^{l}\left( {{h}^{l - 1}\left\lbrack v\right\rbrack ,r,{h}^{l - 1}\left\lbrack w\right\rbrack }\right)$ ;
50
+
51
+ * An aggregation function ${q}_{\mathrm{A}}^{l} : \mathop{\bigcup }\limits_{{S \in \mathbb{N}}}{\mathbb{R}}^{S \times K} \rightarrow {\mathbb{R}}^{K}$ that aggregates all messages from the 1-hop neighbourhood of a node; denote the aggregated message as ${z}^{l}\left\lbrack v\right\rbrack =$ ${q}_{\mathrm{A}}^{l}\left( \left\{ {{m}^{l}\left\lbrack {v,r,w}\right\rbrack \mid \left( {r,w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }\right\} \right) ;$
52
+
53
+ * An update function ${q}_{\mathrm{U}}^{l} : {\mathbb{R}}^{K} \times {\mathbb{R}}^{K} \rightarrow {\mathbb{R}}^{K}$ that produces the new node states ${h}^{l}$ by combining previous node states ${h}^{l - 1}$ and the aggregated messages ${z}^{l} : {h}^{l}\left\lbrack v\right\rbrack = {q}_{\mathrm{U}}^{l}\left( {{h}^{l - 1}\left\lbrack v\right\rbrack ,{z}^{l}\left\lbrack v\right\rbrack }\right)$ .
54
+
55
+ § 3 IMPLICIT MESSAGE-PASSING IN FMS
56
+
57
+ The sharp difference in analytical forms might give rise to the misconception that GNNs incorporate message-passing over the neighbourhood of each node (up to $L$ -hops), while FMs do not. In this work, we show that by explicitly considering the training dynamics of FMs, we can uncover and analyse the hidden message-passing mechanism within FMs. In turn, this will lead us to the formulation of a novel class of GNNs well suited for multi-relational link prediction tasks (Section 4). Specifically, we propose to interpret the FMs' optimisation process of their objective (1) as the entity encoder. If we consider, for simplicity, a gradient descent training dynamic, then
58
+
59
+ $$
60
+ {f}_{{\phi }^{t}}\left( v\right) = {\phi }^{t}\left\lbrack v\right\rbrack = {\mathrm{{GD}}}^{t}\left( {{\phi }^{t - 1},\mathcal{T}}\right) \left\lbrack v\right\rbrack = \underset{t}{\underbrace{{\mathrm{{GD}}}^{t}\ldots {\mathrm{{GD}}}^{1}}}\left( {{\phi }^{0},\mathcal{T}}\right) \left\lbrack v\right\rbrack , \tag{3}
61
+ $$
62
+
63
+ where ${\phi }^{t}$ is the embedding vector at the $t$ -th step, $t \in {\mathbb{N}}^{ + }$ is the total number of iterations and ${\phi }^{0}$ is a random initialisation. GD is the gradient descent operator:
64
+
65
+ $$
66
+ \mathrm{{GD}}\left( {\phi ,\mathcal{T}}\right) = \phi - \alpha {\nabla }_{\phi }\mathcal{L} = \phi + \alpha \mathop{\sum }\limits_{{\left( {v,r,w}\right) \in \mathcal{T}}}\frac{\partial \log P\left( {w \mid v,r}\right) }{\partial \phi }, \tag{4}
67
+ $$
68
+
69
+ where $\alpha = \beta {\left| \mathcal{T}\right| }^{-1}$ and $\beta > 0$ is the learning rate. We now dissect Equation (4) in two different (but equivalent) ways. In the first, which we dub the edge view, we separately consider each addend of the gradient ${\nabla }_{\phi }\mathcal{L}$ . In the second, we aggregate the contributions from all the triples to the update of a particular node. With this latter decomposition, which we call the node view, we can explicate the message-passing mechanism at the core of the FMs. While the edge view suits a vectorised implementation better, the node view further exposes the information flow among nodes, allowing us to draw an analogy to message-passing GNNs.
70
+
71
+ To fully uncover the message-passing mechanism of FMs, we now focus on the gradient descent operation over a single node $v \in \mathcal{E}$ , referred to as the central node in the GNN literature. Recalling Equation (4), we have:
72
+
73
+ $$
74
+ \mathrm{{GD}}\left( {\phi ,\mathcal{T}}\right) \left\lbrack v\right\rbrack = \phi \left\lbrack v\right\rbrack + \alpha \mathop{\sum }\limits_{{\left( {\bar{v},\bar{r},\bar{w}}\right) \in \mathcal{T}}}\frac{\partial \log P\left( {\bar{w} \mid \bar{v},\bar{r}}\right) }{\partial \phi \left\lbrack v\right\rbrack }, \tag{5}
75
+ $$
76
+
77
+ which aggregates the information stemming from the updates presented in the edge view. The next theorem describes how this total information flow to a particular node can be recast as an instance of message passing (cf. Section 2). We defer the proof to the appendix.
78
+
79
+ Theorem 3.1 (Message passing in FMs). The gradient descent operator GD (Equation (5)) on the node embeddings of a DistMult model (Equation (2)) with the maximum likelihood objective in Equation (1) and a multi-relational graph $\mathcal{T}$ defined over entities $\mathcal{E}$ induces a message-passing operator whose composing functions are:
80
+
81
+ $$
82
+ {q}_{\mathrm{M}}\left( {\phi \left\lbrack v\right\rbrack ,r,\phi \left\lbrack w\right\rbrack }\right) = \left\{ \begin{array}{ll} \phi \left\lbrack w\right\rbrack \odot g\left( r\right) & \text{ if }\left( {r,w}\right) \in {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack , \\ \left( {1 - {P}_{\theta }\left( {v \mid w,r}\right) }\right) \phi \left\lbrack w\right\rbrack \odot g\left( r\right) & \text{ if }\left( {r,w}\right) \in {\mathcal{N}}_{ - }^{1}\left\lbrack v\right\rbrack ; \end{array}\right. \tag{6}
83
+ $$
84
+
85
+ $$
86
+ {q}_{\mathrm{A}}\left( \left\{ {m\left\lbrack {v,r,w}\right\rbrack : \left( {r,w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }\right\} \right) = \mathop{\sum }\limits_{{\left( {r,w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }}m\left\lbrack {v,r,w}\right\rbrack ; \tag{7}
87
+ $$
88
+
89
+ $$
90
+ {q}_{\mathrm{U}}\left( {\phi \left\lbrack v\right\rbrack ,z\left\lbrack v\right\rbrack }\right) = \phi \left\lbrack v\right\rbrack + {\alpha z}\left\lbrack v\right\rbrack - {\beta n}\left\lbrack v\right\rbrack , \tag{8}
91
+ $$
92
+
93
+ where, defining the sets of triples ${\mathcal{T}}^{-v} = \{ \left( {s,r,o}\right) \in \mathcal{T} : s \neq v \land o \neq v\}$ ,
94
+
95
+ $$
96
+ n\left\lbrack v\right\rbrack = \frac{\left| {\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack \right| }{\left| \mathcal{T}\right| }{\mathbb{E}}_{{P}_{{\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack }}{\mathbb{E}}_{u \sim {P}_{\theta }\left( {\cdot \mid v,r}\right) }g\left( r\right) \odot \phi \left\lbrack u\right\rbrack + \frac{\left| {\mathcal{T}}^{-v}\right| }{\left| \mathcal{T}\right| }{\mathbb{E}}_{{P}_{{\mathcal{T}}^{-v}}}{P}_{\theta }\left( {v \mid s,r}\right) g\left( r\right) \odot \phi \left\lbrack s\right\rbrack , \tag{9}
97
+ $$
98
+
99
+ where ${P}_{{\mathcal{N}}_{ + }^{1}\left\lbrack v\right\rbrack }$ and ${P}_{{\mathcal{T}}^{-v}}$ are the empirical probability distributions associated with the respective sets.
100
+
101
+ What emerges from the equations is that each gradient step contains an explicit information flow from the neighbourhood of each node, which is then aggregated with a simple summation. Through this direct information path, $t$ steps of gradient descent cover $t$ -hop neighbourhood of $v$ . As $t$ goes towards infinity - or in practice - as training converges, FMs capture the global graph structure. The update function (8) somewhat deviates from classic message-passing as $n\left\lbrack v\right\rbrack$ of Equation (9) involves global information. However, we note that we can interpret this mechanism under the framework of augmented message passing [19] and, in particular, as an instance of graph rewiring.
102
+
103
+ Based on Theorem 3.1 and Equation (3), we can now view $\phi$ as the transient node states $h$ (cf. Section 2) and GD on node embeddings as a message-passing layer. This dualism sits at the core of the ReFactor GNN model, which we describe next.
104
+
105
+ § 4 REFACTOR GNNS
106
+
107
+ FMs are trained by minimising the objective (1), initialising both sets of parameters $\left( {\phi \text{ and }\psi }\right)$ and performing GD until approximate convergence (or until early stopping terminates the training). The implications are twofold: $i$ ) the initial value of the entity lookup table $\phi$ does not play any major role in the final model after convergence; and ii) if we introduce a new set of entities, the conventional wisdom is to retrain ${}^{1}$ the model on the expanded knowledge graph. This is computationally rather expensive compared to the "inductive" models that require no additional training and can leverage node features like entity descriptions. However, as we have just seen in Theorem 3.1, the training procedure of FMs may be naturally recast as a message-passing operation, which suggests that it is possible to use FMs for inductive learning tasks. In fact, we envision that there is an entire novel spectrum of model architectures interpolating between pure FMs and (various instantiations of) GNNs. Here we propose one simple implementation of such an architecture which we dub REFACTOR GNNS. Figure 1 gives an overview of REFACTOR GNNs.
108
+
109
+ The ReFactor Layer. A REFACTOR GNN contains $L$ REFACTOR layers, that we derive from Theorem 3.1. Aligning with the GNN notations we introduced in Section 2, given a KG $\mathcal{T}$ and entity representations ${h}^{l - 1} \in {\mathbb{R}}^{\left| \mathcal{E}\right| \times K}$ , the REFACTOR layer computes the representation of a node $v$ as follows:
110
+
111
+ $$
112
+ {h}^{l}\left\lbrack v\right\rbrack = {q}^{l}\left( {\mathcal{T},{h}^{l - 1}}\right) \left\lbrack v\right\rbrack = {h}^{l - 1}\left\lbrack v\right\rbrack - \beta {n}^{l}\left\lbrack v\right\rbrack + \alpha \mathop{\sum }\limits_{{\left( {r,w}\right) \in {\mathcal{N}}^{1}\left\lbrack v\right\rbrack }}{q}_{M}^{l}\left( {{h}^{l - 1}\left\lbrack v\right\rbrack ,r,{h}^{l - 1}\left\lbrack w\right\rbrack }\right) , \tag{10}
113
+ $$
114
+
115
+ where the terms ${n}^{l}$ and ${q}_{\mathrm{M}}^{l}$ derive from Equation (9) and Equation (6), respectively. Differing from the R-GCN, the first GNN on multi-relational graphs, where the incoming and outgoing neighbourhoods are treated equally [16], REFACTOR GNNS treat incoming and outgoing neighbourhoods differently. As we will show in the experiments, this allows REFACTOR GNNS to achieve good performances also on datasets containing non-symmetric relationships. In fact, the REFACTOR layer is built upon DistMult, which, despite being a symmetric operator, induces asymmetry into the final representation.
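+ A dense, full-batch sketch of the node update in Equation (10) for a single node (not the actual implementation; the messages and the global term are assumed to be computed elsewhere):
+ ```python
+ import torch
+
+ def refactor_update(h_v: torch.Tensor, messages: torch.Tensor,
+                     n_v: torch.Tensor, alpha: float, beta: float) -> torch.Tensor:
+     # h_v:      [K]    previous state of node v
+     # messages: [D, K] one message per (r, w) in the 1-hop neighbourhood of v
+     # n_v:      [K]    the global correction term of Equation (9)
+     return h_v - beta * n_v + alpha * messages.sum(dim=0)
+ ```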
116
+
117
+ Equation (10) describes the full batch setting, which can be expensive if the KG contains many edges. Therefore, in practice, whenever the graph is big, we adopt a stochastic evaluation of the REFACTOR layer by decomposing the evaluation into several mini-batches. We partition $\mathcal{T}$ into a set of computationally tractable mini-batches. For each of them, we restrict the neighbourhoods to the subgraph induced by it and readjust the computation of ${n}^{l}\left\lbrack v\right\rbrack$ to include only entities and edges present in it. We leave the investigation of other stochastic strategies (e.g. by taking Monte Carlo estimations of the expectations in Equation (9)) to future work. Finally, we cascade the mini-batch evaluation to produce one full layer evaluation.
118
+
119
+ Training. The learnable parameters of REFACTOR GNNS are the relation embeddings $\psi$ . Inspired by [20], we learn $\psi$ by layer-wise (stochastic) gradient descent. This is in contrast to conventional GNN training, where we need to backpropagate through all the layers. A (full-batch) GD training dynamic for $\psi$ can be written as ${\psi }_{t + 1} = {\psi }_{t} - \eta \nabla {\mathcal{L}}_{t}\left( {\psi }_{t}\right)$ , where ${\mathcal{L}}_{t}\left( {\psi }_{t}\right) = - {\left| \mathcal{T}\right| }^{-1}\mathop{\sum }\limits_{\mathcal{T}}\log {P}_{{\psi }_{t}}\left( {w \mid v,r}\right)$ , with:
120
+
121
+ $$
122
+ {P}_{{\psi }_{t}}\left( {w \mid v,r}\right) = \operatorname{Softmax}\left( {\Gamma \left( {v,r, \cdot }\right) }\right) \left\lbrack w\right\rbrack ,\;\Gamma \left( {v,r,w}\right) = \left\langle {{h}^{t}\left\lbrack v\right\rbrack ,{h}^{t}\left\lbrack w\right\rbrack ,{g}_{{\psi }_{t}}\left( r\right) }\right\rangle
123
+ $$
124
+
125
+ and the node state update as
126
+
127
+ $$
128
+ {h}^{t} = \left\{ \begin{matrix} X & \text{ if }t{\;\operatorname{mod}\;L} = 0 \\ {q}^{t{\;\operatorname{mod}\;L}}\left( {\mathcal{T},{h}^{t - 1}}\right) & \text{ otherwise } \end{matrix}\right. \tag{11}
129
+ $$
130
+
131
+ Implementation-wise, such a training dynamic amounts to using an external memory for storing historical node states ${h}^{t - 1}$ , akin to the procedure introduced in [21]. The memory can then be queried to compute ${h}^{t}$ using Equation (10). Under this perspective, we periodically clear the node-state cache every $L$ full batches to force the model to predict based on on-the-fly $L$ -layer message-passing. After training, we obtain ${\psi }^{ * }$ and perform inference by running $L$ -layer message-passing with ${\psi }^{ * }$ .
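+ A schematic sketch of one possible reading of this training dynamic (Equation (11)); refactor_layer, nll_loss and sample_batch are placeholders, and previous node states are cached as constants so that gradients reach $\psi$ only through the current step:
+ ```python
+ import torch
+
+ def train_relations(X, psi, graph, refactor_layer, nll_loss, sample_batch,
+                     optimiser, num_steps, L):
+     h = X.clone()
+     for t in range(num_steps):
+         if t % L == 0:
+             h = X.clone()                         # clear the node-state cache
+         else:
+             h = refactor_layer(graph, h, psi)     # one ReFactor message-passing step
+         loss = nll_loss(h, psi, sample_batch(graph))
+         loss.backward()                           # backpropagate through the current step only
+         optimiser.step()
+         optimiser.zero_grad()
+         h = h.detach()                            # store as historical (constant) node states
+     return psi
+ ```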
132
+
133
+ Due to page limits, we defer the empirical study of the proposed REFACTOR GNNs to the appendix. In general, we observe that REFACTOR GNNs achieve state-of-the-art inductive performance.
134
+
135
+ ${}^{1}$ Typically until convergence, possibly by partially warm-starting $\theta$ .
papers/LOG/LOG 2022/LOG 2022 Conference/UiBiLRXR0G/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,330 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Effective Higher-order Link Prediction and Reconstruction from Simplicial Complex Embeddings
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Methods that learn graph topological representations are becoming the usual choice to extract features to help solving machine learning tasks on graphs. In particular, low-dimensional encoding of graph nodes can be exploited in tasks such as link prediction and network reconstruction, where pairwise node embedding similarity is interpreted as the likelihood of an edge incidence. The presence of polyadic interactions in many real-world complex systems is leading to the emergence of representation learning techniques able to describe systems that include such polyadic relations. Despite this, their application on estimating the likelihood of tuple-wise edges is still underexplored.
12
+
13
+ Here we focus on the reconstruction and prediction of simplices (higher-order links) in the form of classification tasks, where the likelihood of interacting groups is computed from the embedding features of a simplicial complex. Using similarity scores based on geometric properties of the learned metric space, we show how the resulting node-level and group-level feature embeddings are beneficial to predict unseen simplices, as well as to reconstruct the topology of the original simplicial structure, even when training data contain only records of lower-order simplices.
14
+
15
+ ## 1 Introduction
16
+
17
+ Network science provides the dominant paradigm for the study of structure and dynamics of complex systems, thanks to its focus on their underlying relational properties. In data mining applications, topological node embeddings of networks are standard representation learning methods that help solving downstream tasks, such as network reconstruction, link prediction, and node classification [1]. Complex interacting systems have been usually represented as graphs. This representation however suffers from the obvious limitation that it can only capture pairwise relations among nodes, while many systems are characterized by group interactions [2]. Indeed, simplicial complexes are generalized graphs that encode group-wise edges as sets of nodes, or simplices, with the additional requirement that any subset of nodes forming a simplex must also itself form a simplex belonging to the complex. Unlike alternative high-order representations, e.g. hypergraphs, which also overcome the dyadic limitation of the graph formalism [3], the simplicial downward closure constraint works particularly well when studying systems with subset dependencies, such as brain networks and social networks (e.g. people interacting as a group also engage in pairwise interactions)
18
+
19
+ Due to the increased interest in studying complex systems as generalized graph structures, topological representation learning techniques on simplicial complexes are also emerging as tools to solve learning tasks on systems with polyadic relations. In particular, here we focus on tasks based on reconstruction and prediction of higher-order edges. While for standard graphs these problems have been extensively studied with traditional machine learning approaches $\left\lbrack {4,5}\right\rbrack$ and representation learning [6,7], the literature for their higher-order counterparts is more limited. In fact, reconstruction and prediction of higher-order interactions have been investigated mainly starting from pairwise data $\left\lbrack {8,9}\right\rbrack$ or time series $\left\lbrack {{10},{11}}\right\rbrack$ , without particular attention to representation learning methods. Here we study low-dimensional embeddings of simplicial complexes for link prediction and reconstruction in higher-order networks. Our main contributions are: (i) we introduce an embedding framework to compute low-rank representations of simplicial complexes; (ii) we formalize network reconstruction and link prediction tasks for polyadic graph structures; and (iii) we show that simplicial similarities computed from embedding representations outperform classical network-based reconstruction and link prediction methods. Since the problems of link prediction and network reconstruction are not yet well defined in the literature for the higher-order case, none of available state-of-the-art methods were previously evaluated in terms of both these tasks. In this paper we properly delineate the formal steps to perform higher-order link prediction and reconstruction, and we make a comprehensive evaluation of different methods adding many variations such as the use of multi-node proximities and simplicial weighted random walks.
20
+
21
+ ## 2 Related Work
22
+
23
+ Representation Learning beyond Graphs. Representation learning for graphs [1] allows to obtain node feature vectors that convey information useful for solving machine learning tasks. Most methods fit in one of two categories: node embeddings and graph neural networks (GNNs). Node embeddings methods explicitly learn low-dimensional representations of nodes, typically from a self-supervised task, while GNN methods implicitly generate vector representations of nodes by combining information about node neighborhoods via message passing operations, e.g. graph convolutions and graph attention networks [12]. In hypergraph settings, node embedding methods typically leverage hyperedge relations similarly to what is done for standard graph edges: for example, spectral decomposition [13], random walk sampling [14, 15], autoencoders [16]. Recently, Maleki et al. [17] proposed a hierarchical approach for scalable node embedding in hypergraphs. In simplicial complexes, random walks over simplices are exploited to compute embeddings of interacting groups with uniform or mixed sizes [18, 19], extending hypergraph methods that compute only node representations. Extensions of GNNs have been proposed to generalize convolution and attention mechanisms to hypergraphs [20-22] and simplicial complexes [23-25].
24
+
25
+ Link Prediction and Network Reconstruction beyond Graphs. The link prediction [4] task predicts the presence of unobserved links in a graph by estimating their occurrence likelihood, while network reconstruction consists in the inference of a graph structure based on indirect data [26], missing or noisy observations [27]. In this work, we will use latent embedding variables to assess the reconstruction and prediction of a given edge, relying on similarity indices. In higher-order systems, link prediction has been investigated primarily for hypergraphs, in particular with methods based on matrix factorization [28, 29], resource allocation metric [30], loop structure [31], and representation learning [32,33]. The higher-order link prediction problem was introduced in a temporal setting by Benson et al. [9] (reformulating the term simplicial closure [34]), while Liu et al. [35] studied the prediction of several higher-order patterns with neural networks. Yoon et al. [36] investigated the use of opportune $k$ -order projected graphs to represent group interactions, and Patil et al. [37] analyzed the problem of finding relevant candidate hyperlinks as negative examples. Despite this early results, reconstruction of higher-order interactions is an ongoing challenge: for example, Young et al. [8] proposed a Bayesian inference method to distinguish between hyperedges and combinations of low-order edges in pairwise data, while Musciotto et al. [38] developed a filtering approach to detect statistically significant hyperlinks in hypergraph data. In addition, some works studied approaches for the inference of higher-order structures from time series data $\left\lbrack {{10},{11}}\right\rbrack$ .
26
+
27
+ ## 3 Methods and Tasks Description
28
+
29
+ ### 3.1 Higher-order Systems and Simplicial Complexes
30
+
31
+ Simplicial complexes can be considered as generalized graphs that include higher-order interactions. Given a set of nodes $\mathcal{V}$ , a simplicial complex $\mathcal{K}$ is a collection of subsets of $\mathcal{V}$ , called simplices, satisfying downward closure: for any simplex $\sigma \in \mathcal{K}$ , any other simplex $\tau$ which is a subset of $\sigma$ belongs to the simplicial complex $\mathcal{K}$ (for any $\sigma \in \mathcal{K}$ and $\tau \subset \sigma$ , we also have $\tau \in \mathcal{K}$ ). This constraint makes simplicial complexes different from hypergraphs, for which there is no prescribed relation between hyper-edges. A simplex $\sigma$ is called a $k$ -simplex if $\left| \sigma \right| = k + 1$ , where $k$ is its dimension (or order). A simplex $\sigma$ is a co-face of $\tau$ (or equivalently, $\tau$ is a face of $\sigma$ ) if $\tau \subset \sigma$ and $\dim \left( \tau \right) = \dim \left( \sigma \right) - 1$ . We denote with ${n}_{k}$ the number of $k$ -simplices in $\mathcal{K}$ .
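+ As a small illustration of the downward-closure constraint, the following sketch (our own helper, not taken from any cited library) expands a list of observed interactions into a full simplicial complex:
+ ```python
+ from itertools import combinations
+
+ def downward_closure(interactions):
+     # Every non-empty subset (face) of an observed interaction is itself a simplex.
+     K = set()
+     for s in interactions:
+         s = frozenset(s)
+         for k in range(1, len(s) + 1):
+             K.update(frozenset(face) for face in combinations(s, k))
+     return K
+
+ # downward_closure([(1, 2, 3)]) contains {1}, {2}, {3}, {1,2}, {1,3}, {2,3} and {1,2,3}
+ ```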
32
+
33
+ Each simplicial complex can be unfolded in its canonical graph of inclusions, called Hasse Diagram (HD): formally, the Hasse Diagram $\mathcal{H}\left( \mathcal{K}\right)$ of complex $\mathcal{K}$ is the multipartite graph $\mathcal{H}\left( \mathcal{K}\right) = \left( {{\mathcal{V}}_{\mathcal{H}},{\mathcal{E}}_{\mathcal{H}}}\right)$ , such that each node ${v}_{\sigma } \in {\mathcal{V}}_{\mathcal{H}}$ corresponds to a simplex $\sigma \in \mathcal{K}$ , and two simplices $\sigma ,\tau \in \mathcal{K}$ are connected by the undirected edge $\left( {{v}_{\sigma },{v}_{\tau }}\right) \in {\mathcal{E}}_{\mathcal{H}}$ iff $\sigma$ is a co-face of $\tau$ . In other words, each simplicial order corresponds to a graph layer in $\mathcal{H}\left( \mathcal{K}\right)$ , and two simplices in different layers are linked if they are (upper/lower) adjacent in the original simplicial complex.
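+ A minimal sketch of the Hasse-diagram construction, assuming the complex is given as a set of frozensets (e.g. the output of the closure sketch above); networkx is used only for convenience:
+ ```python
+ from itertools import combinations
+ import networkx as nx
+
+ def hasse_diagram(K):
+     # One vertex per simplex; an edge links each simplex to its co-dimension-1 faces.
+     H = nx.Graph()
+     H.add_nodes_from(K)
+     for sigma in K:
+         if len(sigma) > 1:
+             for face in combinations(sigma, len(sigma) - 1):
+                 tau = frozenset(face)
+                 if tau in K:
+                     H.add_edge(sigma, tau)
+     return H
+ ```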
34
+
35
+ ### 3.2 Simplicial Complex Embedding
36
+
37
+ Given a complex $\mathcal{K}$ , we want to learn a mapping function $f : \mathcal{K} \rightarrow {\mathbb{R}}^{d}$ from elements of $\mathcal{K}$ to a $d$ -dimensional low-rank feature space $\left( {d \ll \left| \mathcal{K}\right| }\right)$ . The mapping $f$ must preserve topological information incorporated in the simplicial complex, in such a way that adjacency relations are preserved into geometric distances between data points of the embedding space. Here we propose that representations of simplices can be obtained by random-walking over the inclusions hierarchy of $\mathcal{K}$ and learning the embeddings space according to the simplex proximity observed through such walks, preserving high-order information about the topological structure of the complex itself. We will discuss later in the Experimental Setup (section 4) the procedure of random walk sampling that we adopted, while here we focus on the optimization problem.
38
+
39
+ Inspired by language models such as WORD2VEC [39], we start from a corpus $\mathcal{W} = \left\{ {{\sigma }_{1},\ldots ,{\sigma }_{\left| \mathcal{W}\right| }}\right\}$ of simplicial random walks, and we aim to maximize the log-likelihood of a simplex ${\sigma }_{i}$ given the multi-set ${\mathcal{C}}_{T}\left( {\sigma }_{i}\right)$ of context simplices within a distance $T$ , i.e. ${\mathcal{C}}_{T}\left( {\sigma }_{i}\right) = \left\{ {{\sigma }_{i - T},\ldots ,{\sigma }_{i - 1},{\sigma }_{i + 1},\ldots ,{\sigma }_{i + T}}\right\}$ . The objective function is as follows:
40
+
41
+ $$
42
+ \mathop{\max }\limits_{f}\mathop{\sum }\limits_{{i = 1}}^{\left| \mathcal{W}\right| }\log \Pr \left( {{\sigma }_{i} \mid \left\{ {f\left( \tau \right) : \tau \in {\mathcal{C}}_{T}\left( {\sigma }_{i}\right) }\right\} }\right) \tag{1}
43
+ $$
44
+
45
+ where the probability is the soft-max $\Pr \left( {\sigma \mid \{ f\left( \tau \right) ,\ldots \} }\right) \propto \exp \left\lbrack {\mathop{\sum }\limits_{{\tau \in {\mathcal{C}}_{T}\left( \sigma \right) }}f\left( \sigma \right) \cdot f\left( \tau \right) }\right\rbrack$ , normalized via the standard partition function ${Z}_{i} = \mathop{\sum }\limits_{{\kappa \in \mathcal{K}}}\exp \left\lbrack {\mathop{\sum }\limits_{{\tau \in {\mathcal{C}}_{T}\left( {\sigma }_{i}\right) }}f\left( \kappa \right) \cdot f\left( \tau \right) }\right\rbrack$ , and it represents the likelihood of observing simplex $\sigma$ given context simplices in ${\mathcal{C}}_{T}\left( \sigma \right)$ . This leads to the maximization of the function:
46
+
47
+ $$
48
+ \mathop{\max }\limits_{f}\mathop{\sum }\limits_{{i = 1}}^{\left| \mathcal{W}\right| }\left\lbrack {-\log {Z}_{i} + \mathop{\sum }\limits_{{\tau \in {\mathcal{C}}_{T}\left( {\sigma }_{i}\right) }}f\left( {\sigma }_{i}\right) \cdot f\left( \tau \right) }\right\rbrack \tag{2}
49
+ $$
50
+
51
+ Our method of choice - SIMPLEX2VEC [19] - is implemented by sampling random walks from $\mathcal{H}\left( \mathcal{K}\right)$ and learning simplicial embeddings with the continuous-bag-of-words (CBOW) model [39]. To overcome the expensive computation of ${Z}_{i}$ , we train CBOW with negative sampling. While SIMPLEX2VEC is conceptually similar to $k$ -SIMPLEX2VEC [18], there are important differences: (i) by fixing $k$ as the simplex dimension, $k$ -SIMPLEX2VEC uses exclusively upper connections through $\left( {k + 1}\right)$ -cofaces and lower connections through $\left( {k - 1}\right)$ -faces to compute random walk transitions; (ii) random walks focus on a fixed dimension, allowing the embedding computation only for $k$ -simplices. SIMPLEX2VEC instead computes embedding representations for all simplex orders simultaneously, because the random walks are sampled from the entire Hasse Diagram.
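+ A minimal sketch of this embedding step, assuming walks is a list of random walks in which each simplex is encoded as a string identifier; the window and epoch values follow those reported in Section 4, while the vector size is only illustrative:
+ ```python
+ from gensim.models import Word2Vec
+
+ model = Word2Vec(
+     sentences=walks,   # random walks sampled from the Hasse Diagram
+     vector_size=128,   # embedding dimension d (illustrative)
+     window=10,         # context distance T
+     sg=0,              # CBOW
+     negative=5,        # negative sampling instead of the full partition function
+     min_count=0,
+     epochs=5,
+ )
+ embeddings = {sigma: model.wv[sigma] for sigma in model.wv.index_to_key}
+ ```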
52
+
53
+ ### 3.3 Reconstruction and prediction of higher-order interactions
54
+
55
+ Network reconstruction [6] and link prediction [7] are common tasks performed to assess the quality of node embeddings. In standard graphs, most popular methods are based on similarity metrics, used to rank pairs of nodes (candidate links), and can be obtained from local (e.g. number of common neighbors) or global indices (e.g. number of connecting paths). Here we formalize these tasks as binary classification problems for simplicial complexes, based on latent embedding indices that we define in section 4.
56
+
57
+ Given a complex $\mathcal{K}$ and feature representations learned from it, by reconstruction of higher-order interactions we mean the task of correctly classifying whether a group of $k + 1$ nodes $s = \left( {{i}_{0},{i}_{1},\ldots ,{i}_{k}}\right)$ is a $k$ -simplex of $\mathcal{K}$ or not. This task is intended to study if the information encoded in the embedding space can preserve the original training structure. More specifically, we consider $\mathcal{S} = \{ s \in \mathcal{K} : \left| s\right| > 1\}$ as the set of interactions (simplices with order greater than 0 ) that belongs to the simplicial complex $\mathcal{K}$ . Given any group $s = \left( {{i}_{0},{i}_{1},\ldots ,{i}_{k}}\right)$ , with the reconstruction task we aim to discern if the elements in $s$ interact within the same simplex, and so $s \in \mathcal{S}$ , or $s$ is a group of lower-order simplices, i.e. $s \notin \mathcal{S}$ (but subsets of $s$ may be existing simplices). When group $s$ interacts within a simplex, we say that $s$ is closed, conversely it is open. By higher-order interaction prediction we mean instead the task of predicting whether an interaction ${\mathcal{S}}^{ * }$ that has not been observed at a certain time (i.e. the simplex has not been added to the complex yet) will appear in the future. This task is intended to study the generalization ability of embedding methods to predict unseen interactions that are likely to occur. Given any open configuration $\bar{s} \in {\mathcal{U}}_{\mathcal{S}}$ coming from the set of unobserved interactions ${\mathcal{U}}_{\mathcal{S}} = \left\{ {s \in {2}^{\mathcal{V}} : \left| s\right| > 1, s \notin \mathcal{S}}\right\}$ , i.e. the complement ${}^{1}$ of $\mathcal{S}$ , the prediction task is to classify which groups will give rise to a simplicial closure in the future $\left( {\bar{s} \in {\mathcal{S}}^{ * }}\right)$ versus those that will remain open $\left( {\bar{s} \in {\mathcal{U}}_{\mathcal{S}} \smallsetminus {\mathcal{S}}^{ * }}\right)$ .
58
+
59
+ ![01963f0c-f110-7e99-b85a-c00e06b0f6ca_3_362_205_1120_396_0.jpg](images/01963f0c-f110-7e99-b85a-c00e06b0f6ca_3_362_205_1120_396_0.jpg)
60
+
61
+ Figure 1: (Left) Schematic view of SIMPLEX2VEC: starting from simplicial sequential data (a), we construct a simplicial complex on whose Hasse Diagram we sample random walks (b) with different weighting (c), from which we construct the embedding space (d). (Right) Schematic description of classification tasks (reconstruction and prediction) in the case of 3-node group interactions.
62
+
63
+ ## 4 Experimental Setup
64
+
65
+ Here we describe the experimental setup used to quantify the accuracy of SIMPLEX2VEC in reconstructing and predicting higher-order interactions. In the next paragraphs we illustrate which datasets we use, how we sample non-existing hyperlinks, and how we use them in downstream tasks.
66
+
67
+ ### 4.1 Feature Learning Pipeline
68
+
69
+ Consider a collection $\mathcal{D}$ of time-stamped interactions ${\left\{ \left( {s}_{i},{t}_{i}\right) ,{s}_{i} \in \mathcal{F},{t}_{i} \in \mathcal{T}\right\} }_{i = 1\ldots N}$ , where each ${s}_{i} = \left( {{i}_{0},{i}_{1},\ldots ,{i}_{k}}\right)$ is a $k$ -simplex of the node set $\mathcal{V},\mathcal{F}$ is the set of distinct simplices and $\mathcal{T}$ is the set of time-stamps at which interactions occur. We split $\mathcal{D}$ into two subsets, ${\mathcal{D}}^{\text{train }}$ and ${\mathcal{D}}^{\text{test }}$ , corresponding to the 80th percentile ${t}^{\left( {80}\right) }$ of time-stamps, i.e. ${\mathcal{D}}^{\text{train }} = \left\{ {\left( {{s}_{i},{t}_{i}}\right) \in \mathcal{D},{t}^{\left( 0\right) } \leq {t}_{i} \leq {t}^{\left( {80}\right) }}\right\}$ and ${\mathcal{D}}^{\text{test }} = \left\{ {\left( {{s}_{i},{t}_{i}}\right) \in \mathcal{D},{t}^{\left( {80}\right) } < {t}_{i} \leq {t}^{\left( {100}\right) }}\right\}$ , where ${t}^{\left( 0\right) }$ and ${t}^{\left( {100}\right) }$ are the 0th and the 100th percentiles of the set $\mathcal{T}$ . We build from ${\mathcal{D}}^{\text{train }}$ a simplicial complex ${\mathcal{K}}_{\mathcal{D}}{}^{\text{train }}$ disregarding time-stamps. In all the experiments we train SIMPLEX2VEC ${}^{2}$ on the Hasse Diagram $\mathcal{H}\left( {\mathcal{K}}_{\mathcal{D}}{}^{\text{train }}\right)$ , to obtain $d$ -dimensional feature representations ${\mathbf{v}}_{\sigma } \in {\mathbb{R}}^{d}$ of every simplex $\sigma \in {\mathcal{K}}_{\mathcal{D}}{}^{\text{train }}$ . Due to the combinatorial explosion of the number of simplicial vertices in the HD, we constrain the maximum order of the interactions to $M \in \{ 1,2,3\}$ in a reduced Hasse diagram ${\mathcal{H}}_{M}\left( {\mathcal{K}}_{\mathcal{D}}\right)$ referred to simply as ${\mathcal{H}}_{M}$ . Consequently, every simplex with dimension larger than $m = \max M$ is represented in ${\mathcal{H}}_{M}$ by node combinations of size up to $m$ . In Figure 1 (Left) we show the feature learning process described above. We consider several weighting schemes ${}^{3}$ [19] to bias the random walks between the vertices $\left\{ {v}_{\tau }\right\}$ of the HD (a minimal sketch of a count-weighted walk step is given after the list):
70
+
71
+ - Unweighted The jump to a given ${v}_{\tau }$ is made by a uniform sampling among the set of neighbors ${\mathcal{N}}_{\sigma } = {\mathcal{N}}_{\sigma }^{ \downarrow } \cup {\mathcal{N}}_{\sigma }^{ \uparrow }$ of the node ${v}_{\sigma }$ in the HD (corresponding to faces ${\mathcal{N}}_{\sigma }^{ \downarrow }$ and co-faces ${\mathcal{N}}_{\sigma }^{ \uparrow }$ of the simplex $\sigma$ in the simplicial complex).
72
+
73
+ ---
74
+
75
+ ${}^{1}$ Here we used ${2}^{\mathcal{V}}$ to identify the power set of the vertices.
76
+
77
+ ${}^{2}$ We used the word2vec implementation from Gensim (https://radimrehurek.com/gensim/) and ran the CBOW model with window $T = {10}$ and 5 epochs.
78
+
79
+ ${}^{3}$ For every case, we sample 10 random walks of length 80 per simplex as input to SIMPLEX2VEC.
80
+
81
+ ---
82
+
83
+ - Counts. To every node ${v}_{\tau }$ of the HD is attached an empirical weight ${\omega }_{\tau }$ , i.e. the number of times that $\tau$ appears in the data $\mathcal{D}$ . The probability to jump from $\sigma$ to $\tau$ is given by ${p}_{\sigma \tau } = \frac{{\omega }_{\tau }}{\mathop{\sum }\limits_{{r \in {\mathcal{N}}_{\sigma }}}{\omega }_{r}}$ .
84
+
85
+ - LObias. With the definition of transition probability as before, the weight ${\omega }_{\tau }$ is defined to introduce a bias for the random walker towards low-order simplices: as explained in [19], every time a $n$ -simplex $\sigma$ appears in the data its weight is increased by 1, and the weight of any subface of dimension $n - k$ is increased by ${\left( \frac{\left( {n - k + 1}\right) !}{\left( {n + 1}\right) !}\right) }^{-1}$ . There is an equivalent scheme for biasing towards high-order simplices, but we empirically observed that the performance of the first one is slightly better.
86
+
87
+ - EQbias. Starting from the weight set $\left\{ {\omega }_{\sigma }\right\}$ computed with empirical counts, we attach additional weights $\left\{ {\omega }_{\sigma \tau }\right\}$ to the Hasse diagram’s edges in order to have equal probability of choosing neighbors from ${\mathcal{N}}_{\sigma }^{ \downarrow }$ or ${\mathcal{N}}_{\sigma }^{ \uparrow }$ . Transition weights for the downward (upward) step $\left( {\sigma ,\tau }\right)$ are defined by normalizing ${\omega }_{\tau }$ with respect to all the downward (upward) weights ${\omega }_{\sigma \tau } \propto \frac{{\omega }_{\tau }}{\mathop{\sum }\limits_{{r \in {\mathcal{N}}_{\sigma }^{ \downarrow \left( \uparrow \right) }}}{\omega }_{r}}$ , with the probability of the step given by ${p}_{\sigma \tau } = \frac{{\omega }_{\sigma \tau }}{\mathop{\sum }\limits_{{r \in {\mathcal{N}}_{\sigma }^{ \downarrow } \cup {\mathcal{N}}_{\sigma }^{ \uparrow }}}{\omega }_{\sigma r}}$
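+ A minimal sketch of a count-weighted random-walk step on the Hasse diagram (the Counts scheme above); H is a graph such as the one built in the earlier Hasse-diagram sketch, and counts maps each simplex to its empirical weight $\omega$ :
+ ```python
+ import random
+
+ def counts_step(H, sigma, counts):
+     # Jump from sigma to a neighbour tau with probability proportional to counts[tau].
+     neighbours = list(H.neighbors(sigma))
+     weights = [counts[tau] for tau in neighbours]
+     return random.choices(neighbours, weights=weights, k=1)[0]
+
+ def random_walk(H, start, counts, length=80):
+     walk = [start]
+     for _ in range(length - 1):
+         walk.append(counts_step(H, walk[-1], counts))
+     return walk
+ ```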
88
+
89
+ ### 4.2 Data
90
+
91
+ We use time-stamped data, indicated above with the collection $\mathcal{D}$ , consisting of sequences of interactions in empirical systems from different domains [9]: face-to-face proximity (contact-high-school and contact-primary-school), email exchange (email-Eu and email-Enron), online tags (tags-math-sx), US congress bills (congress-bills), coauthorships (coauth-MAG-History and coauth-MAG-Geology). When the datasets came in pairwise format, we associated simplices to cliques obtained by integrating edge information over short time intervals [9]. We considered, for any dataset, only nodes in the largest connected component of the projected graph (two nodes of the projected graph are connected if they appear in at least one simplex of $\mathcal{D}$ ). In addition, to lighten the embedding computations, for the congress, tags and coauth datasets we apply a filtering approach in order to reduce their sizes: similarly to [40] with the Core set, here we selected the nodes incident in at least 5 cliques in every temporal quartile (except in coauth-MAG-History where we applied a threshold of 1 clique per temporal quartile).
92
+
93
+ ### 4.3 Similarity Scores and Baseline Metrics
94
+
95
+ Using the learned simplicial embeddings we assign to each higher-order link candidate $\delta$ a likelihood score based on the average pairwise inner product among 0 -simplex embeddings of nodes $\left\{ {{\mathbf{v}}_{i}, i \in \delta }\right\}$ or any high-order $k$ -simplices $\left\{ {{\mathbf{v}}_{\sigma },\sigma \subset \delta }\right\}$ :
96
+
97
+ $$
98
+ {s}_{k}\left( \delta \right) = \frac{1}{\left| \left( \begin{matrix} \delta \\ 2 \end{matrix}\right) \right| }\mathop{\sum }\limits_{{\left( {\sigma ,\tau }\right) \in \left( \begin{matrix} \delta \\ 2 \end{matrix}\right) }}{\mathbf{v}}_{\sigma } \cdot {\mathbf{v}}_{\tau } \tag{3}
99
+ $$
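+ A direct sketch of the score in Eq. 3, with emb mapping each node (for ${s}_{0}$ ) or $k$ -simplex (for higher orders) to its embedding vector:
+ ```python
+ import numpy as np
+ from itertools import combinations
+
+ def group_score(emb, group):
+     # Average pairwise inner product over the embeddings of the elements of the candidate group.
+     pairs = list(combinations(group, 2))
+     return float(np.mean([np.dot(emb[a], emb[b]) for a, b in pairs]))
+ ```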
100
+
101
+ To assess the reconstruction and prediction performances of the embedding model, we compare likelihood scores defined in Eq. 3 with other baseline metrics:
102
+
103
+ - Projected metrics. Local and global node-level features computed from the projected graph. The projected graph is defined as ${\mathcal{G}}_{\mathcal{D}}^{\text{train }} = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V}$ is the set of 0 -simplices of the complex ${\mathcal{K}}_{\mathcal{D}}^{\text{train }}$ and $\mathcal{E} = \left\{ {s \in {\mathcal{K}}_{\mathcal{D}}^{\text{train }} : \left| s\right| = 2}\right\}$ is the set of links between training nodes that interacted in at least one simplex of ${\mathcal{D}}^{\text{train }}$ . Moreover, edges $\left( {i,j}\right)$ may be accompanied by weights equal to the number of simplices of $\mathcal{D}$ containing both $i$ and $j$ . For triangle-related tasks we considered several 3-way metrics computed with the code ${}^{4}$ released by [9] (we show the best performing ones, i.e. Harmonic mean, Geometric mean, Katz, PPR, Logistic Regression). We also exploited the pair-wise random walk measure ${\mathrm{{PPMI}}}_{T}$ [41], for tetrahedra-related tasks where 4-way implementations of the above listed scores are not available. PPMI is widely used as a similarity function for node embeddings, and variations of the window size $T$ allow taking into account both local and global information.
104
+
105
+ ---
106
+
107
+ ${}^{4}$ https://github.com/arbenson/ScHoLP-Tutorial
108
+
109
+ ---
110
+
111
+ Table 1: Number of unobserved configurations obtained with the sampling approach in different datasets.
112
+
113
+ <table><tr><td rowspan="2">Dataset</td><td colspan="4">Unseen configurations sampled from ${\mathcal{U}}_{\Delta }$ ${n}_{\mathcal{E}}\left( {\times {10}^{3}}\right)$</td></tr><tr><td>0</td><td>1</td><td>2</td><td>3</td></tr><tr><td>contact-high-school</td><td>3,476</td><td>1,150</td><td>107</td><td>25</td></tr><tr><td>email-Eu</td><td>8,096</td><td>1,392</td><td>1,654</td><td>186</td></tr><tr><td>tags-math-sx</td><td>6,229</td><td>2,473</td><td>5,467</td><td>1,725</td></tr><tr><td>coauth-MAG-History</td><td>9,958</td><td>30</td><td>60</td><td>2</td></tr></table>
114
+
115
+ <table><tr><td rowspan="2">Dataset</td><td colspan="5">Unseen configurations sampled from ${\mathcal{U}}_{\Theta }$ ${n}_{\Delta }\left( {\times {10}^{3}}\right)$</td></tr><tr><td>0</td><td>1</td><td>2</td><td>3</td><td>4</td></tr><tr><td>contact-primary-school</td><td>17,683</td><td>396</td><td>19</td><td>2</td><td>< 1</td></tr><tr><td>email-Enron</td><td>7.048</td><td>400</td><td>28</td><td>2</td><td>< 1</td></tr><tr><td>congress-bills</td><td>1,462</td><td>1.264</td><td>325</td><td>149</td><td>80</td></tr><tr><td>coauth-MAG-Geology</td><td>15,473</td><td>593</td><td>30</td><td>3</td><td>< 1</td></tr></table>
116
+
117
+ - Spectral embedding. Features from the spectral decomposition of the combinatorial $k$ - Laplacian [42]. Given the set of boundary matrices $\left\{ {\mathbf{B}}_{k}\right\}$ , which incorporate incidence relationships between $k$ -simplices and their co-faces ${}^{5}$ , the unweighted $k$ -Laplacian is ${\mathbf{L}}_{k} =$ ${\mathbf{B}}_{k}^{\mathrm{T}}{\mathbf{B}}_{k} + {\mathbf{B}}_{k + 1}{\mathbf{B}}_{k + 1}^{\mathrm{T}}$ . We consider also the weighted $k$ -Laplacian [43], calculated with the substitutions ${\mathbf{B}}_{k} \rightarrow {\mathbf{W}}_{k - 1}^{-1/2}{\mathbf{B}}_{k}{\mathbf{W}}_{k}^{1/2}$ , where every ${\mathbf{W}}_{k}$ is a diagonal matrix containing empirical counts of any $k$ -simplex ${}^{6}$ . Following the same procedure used in graph spectral embeddings [44], we compute the eigenvectors matrix ${\mathbf{Q}}_{k} \in {\mathbb{R}}^{{n}_{k} \times d}$ corresponding to the first $d$ smallest nonzero eigenvalues of ${\mathbf{L}}_{k}$ and we use the rows of ${\mathbf{Q}}_{k}$ as $d$ -dimensional spectral embeddings for $k$ -simplices.
118
+
119
+ - k-SIMPLEX2VEC embedding. Features learned with an extension of NODE2VEC [18] that samples random walks from higher-order transition probabilities ${}^{7}$ (e.g. edge-to-edge occurrences) in a single simplicial dimension. This model is based on sampling from a uniform structure without taking into account simplicial weights.
120
+
121
+ Likelihood scores of candidate higher-order links are assigned for the embedding models with the same inner product metric of Eq. 3 used for SIMPLEX2VEC embeddings. In $k$ -SIMPLEX2VEC, we sample the same number of random walks per simplex, with the same length, of the ones used for SIMPLEX2VEC.
122
+
123
+ ### 4.4 Downstream Tasks and Open Configurations Sampling
124
+
125
+ Similarly to the standard graph case, non-existing links are usually the majority class and this imbalance is even more pronounced in the higher-order case [28] (in graphs we have $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{2}\right)$ potential links, but the number of potential hyperlinks/simplices is $\mathcal{O}\left( {2}^{\left| \mathcal{V}\right| }\right)$ in higher-order structures).
126
+
127
+ To compensate, we focus the work on 3-node and 4-node groups, reducing the number of potential hyperedges to $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{3}\right)$ and $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{4}\right)$ respectively. For a concise presentation, in the next paragraphs we describe mainly the 3-way case. Hence, we restrict the set of possible interactions $\mathcal{S}$ to be exclusively closed triangles $\Delta$ and the corresponding 3-node complementary set ${\mathcal{U}}_{\Delta }$ :
128
+
129
+ $$
130
+ \Delta = \left\{ {s \in {\mathcal{K}}_{\mathcal{D}}^{\text{train }} : \left| s\right| = 3}\right\} ,\;{\mathcal{U}}_{\Delta } = \left( \begin{matrix} \mathcal{V} \\ 3 \end{matrix}\right) \smallsetminus \Delta \tag{4}
131
+ $$
132
+
133
+ where we used $\left( \begin{array}{l} \mathcal{V} \\ 3 \end{array}\right)$ as the set of 3-node combinations of elements from $\mathcal{V}$ (we will instead denote $\Theta$ and ${\mathcal{U}}_{\Theta }$ respectively the observed and unobserved tetrahedra). In Figure 1 (Right) we sketch the task’s formulation based on 2-simplices (3-node configurations). Positive examples for reconstruction and prediction tasks, respectively in $\Delta$ and ${\Delta }^{ * }$ , are trivial to find from empirical data. For negative examples instead, enumerating all the unseen 3-node configurations in ${\mathcal{U}}_{\Delta }$ is typically unfeasible. Thus, we perform sampling of fixed-size groups of nodes to collect unseen instances for the classification tasks. In practice we sample stars, cliques and other network motifs [37] from the projected graph to collect group configurations with distinct densities of lower-order interactions. We independently sample nodes to obtain (more likely) groups with unconnected units. For each sampled 3-node group $\delta$ we count the number of training edges ${n}_{\mathcal{E}}\left( \delta \right)$ involved in it, and we analyse tasks performances for open configurations characterized by fixing ${n}_{\mathcal{E}}\left( \delta \right) \in \{ 0,1,2,3\}$ . For 4-node configurations, instead of ${n}_{\mathcal{E}}\left( \delta \right)$ , we consider the number of training triangles ${n}_{\Delta }\left( \delta \right) \in \{ 0,1,2,3,4\}$ to differentiate open groups. We claim that quantities ${n}_{\mathcal{E}}\left( \delta \right)$ and ${n}_{\Delta }\left( \delta \right)$ are related to the concept of hardness of non-hyperlinks [37], i.e. the propensity of open groups to be misclassified as closed interaction, and they influence the difficulty of downstream classification tasks.
134
+
135
+ ---
136
+
137
+ ${}^{5}$ Boundary matrix ${\mathbf{B}}_{k} \in \{ 0, \pm 1{\} }^{{n}_{k - 1} \times {n}_{k}}$ requires the definition of oriented simplices, see [2] for additional details.
138
+
139
+ ${}^{6}$ Weights matrices satisfy the consistency relations ${\mathbf{W}}_{k} = \left| {\mathbf{B}}_{k + 1}\right| {\mathbf{W}}_{k + 1}$ , see [43] for further details.
140
+
141
+ ${}^{7}$ https://github.com/celiahacker/k-simplex2vec
142
+
143
+ ---
144
+
145
+ ![01963f0c-f110-7e99-b85a-c00e06b0f6ca_6_336_199_1121_450_0.jpg](images/01963f0c-f110-7e99-b85a-c00e06b0f6ca_6_336_199_1121_450_0.jpg)
146
+
147
+ Figure 2: Performance on 3-way link reconstruction (a, c) and prediction (b, d) for SIMPLEX2VEC and $k$ -SIMPLEX2VEC with: (a, b) similarity score ${s}_{0}$ varying the parameter ${n}_{\mathcal{E}}$ ; (c, d) score ${s}_{k}$ (with $k$ in $\{ 0,1\}$ ) on highly edge-dense open configurations $\left( {{n}_{\mathcal{E}} = 3}\right)$ . Metrics are computed in unweighted representations. The label unbalancing in each sample is uniformly drawn between 1:1 and 1:5000. A schematic view of positive and negative examples is reported for each classification task.
148
+
149
+ In Table 1 we report the number of open configurations randomly selected from ${\mathcal{U}}_{\Delta }$ and ${\mathcal{U}}_{\Theta }$ . We sampled open configurations with ${10}^{7}$ extractions for each pattern (stars, cliques, motifs and independent node groups). The reported numbers refer to the exact number of negative examples available for reconstruction tasks. For prediction tasks, instead, they may also contain future closed interactions (which are anyway unobserved in the training interval) that must be subtracted to obtain negative examples.
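+
+ A small sketch of this bookkeeping, assuming the sampled open groups and the set of future closures ${\Delta }^{ * }$ are available as Python sets of frozensets (names are illustrative):
+
+ ```python
+ def split_negatives(sampled_open, future_triangles):
+     """Sampled open groups are valid negatives for reconstruction as they are;
+     for prediction, groups that close in the test window (Delta*) are positives
+     there and must be subtracted from the negative pool."""
+     reconstruction_negatives = set(sampled_open)
+     prediction_negatives = reconstruction_negatives - set(future_triangles)
+     prediction_positives = set(future_triangles)
+     return reconstruction_negatives, prediction_negatives, prediction_positives
+ ```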
150
+
151
+ ## 5 Results and Discussion
152
+
153
+ With the previously described setup, we conducted experiments with 3-node configurations on the datasets contact-high-school, email-Eu, tags-math-sx, coauth-MAG-History and with 4-node configurations on the remaining ones. Due to the limited space available, we only report 3-way results here, leaving the 4-way analysis to the Appendix.
154
+
155
+ We highlight the classification performance when using different embedding similarities ${s}_{k}\left( \delta \right)$ on open configurations with different ${n}_{\mathcal{E}}\left( \delta \right)$ (in the case of triangles, or ${n}_{\Delta }\left( \delta \right)$ for tetrahedra). For each case, i.e. triangle and tetrahedron classification, we examine: (i) the comparison with $k$ -SIMPLEX2VEC embeddings in the unweighted scenario, to study how different embedding models learn statistical patterns from the simplicial structure; (ii) the comparison with classical metrics in the weighted scenario, to study how the addition of empirical weights influences the embedding performance with respect to traditional weighted approaches.
156
+
157
+ Results are presented in terms of average binary classification scores, where test sets are generated from randomly chosen open and closed groups. Contrary to previous work $\left\lbrack {9,{33}}\right\rbrack$ , we evaluate models without a fixed class imbalance because we cannot access the entire negative classes (e.g. ${\mathcal{U}}_{\Delta }$ and ${\mathcal{U}}_{\Delta } \smallsetminus {\Delta }^{ * }$ in 3-way reconstruction and prediction, respectively). Instead, in every test set we uniformly sample the cardinality of the two classes between 1 and the number of samples available for the task. We report calibrated AUC-PR scores [45] to account for the differences in class imbalance induced by this sampling choice ${}^{8}$ . In Figure 2, for a fair comparison with the other projected and embedding metrics, we report the similarity ${s}_{k}$ obtained by training SIMPLEX2VEC on ${\mathcal{H}}_{k + 1}$ . Best average scores for embedding models are chosen with a grid search over vector sizes in $\{ 8,{16},{32},{64},{128},{256},{512},{1024}\}$ .
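+
+ To make the evaluation protocol concrete, the sketch below computes a calibrated AUC-PR under one common reading of the calibration idea in [45], namely that the calibrated precision replaces the empirical class prior with the reference prior ${\pi }_{0}$ and can therefore be written in terms of TPR and FPR only. It is a minimal, assumption-laden illustration, not the exact evaluation code behind the reported scores.
+
+ ```python
+ import numpy as np
+
+ def calibrated_auc_pr(y_true, scores, pi0=0.5):
+     """AUC of a calibrated precision-recall curve: precision is rewritten with a
+     reference positive-class prior pi0 so that it no longer depends on the
+     test-set class imbalance."""
+     y = np.asarray(y_true, dtype=bool)[np.argsort(-np.asarray(scores, dtype=float))]
+     tp, fp = np.cumsum(y), np.cumsum(~y)          # counts at each threshold
+     tpr = tp / max(int(y.sum()), 1)
+     fpr = fp / max(int((~y).sum()), 1)
+     denom = pi0 * tpr + (1.0 - pi0) * fpr
+     prec_cal = np.where(denom > 0, pi0 * tpr / np.where(denom > 0, denom, 1.0), 1.0)
+     # step-wise integration of the calibrated PR curve
+     return float(np.sum(np.diff(np.concatenate(([0.0], tpr))) * prec_cal))
+
+ # per test set, the class cardinalities are drawn uniformly at random before scoring,
+ # e.g. n_pos ~ U{1, |positives|}, n_neg ~ U{1, |negatives|}.
+ ```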
158
+
159
+ ---
160
+
161
+ ${}^{8}$ For this purpose we fix the reference class ratio ${\pi }_{0} = {0.5}$ . See [45] for additional details. We also tested the AUC-ROC metric with similar findings.
162
+
163
+ ---
164
+
165
+ Table 2: Balanced AUC-PR scores for higher-order link reconstruction (Top) and prediction (Bottom) on 3-node groups, with the hardest class of negative configurations $\left( {{n}_{\mathcal{E}} = 3}\right)$ . Best scores for the different methods are reported in boldface; among these, the best overall score is blue shaded and the second best is grey shaded.
166
+
167
+ <table><tr><td colspan="3" rowspan="2">Features Type</td><td colspan="8">Dataset contact-high-schoolemail-Eutags-math-sxcoauth-MAG-History</td></tr><tr><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td></tr><tr><td rowspan="7">Embedding</td><td rowspan="3">Hasse diagram ${\mathcal{H}}_{1}$</td><td>Unweighted</td><td>${57.5} \pm {1.9}$</td><td>${51.4} \pm {1.2}$</td><td>${72.0} \pm {0.3}$</td><td>${64.0} \pm {0.2}$</td><td>${66.7} \pm {0.2}$</td><td>${57.1} \pm {0.1}$</td><td>${41.1} \pm {0.9}$</td><td>${75.5} \pm {1.1}$</td></tr><tr><td>Counts</td><td>${79.5} \pm {1.0}$</td><td>${84.4} \pm {0.9}$</td><td>${76.3} \pm {0.4}$</td><td>${73.3} \pm {0.2}$</td><td>${80.5} \pm {0.1}$</td><td>${87.8} \pm {0.1}$</td><td>${41.6} \pm {1.0}$</td><td>${76.0} \pm {1.1}$</td></tr><tr><td>LObias</td><td>${81.6} \pm {2.4}$</td><td>${89.5} \pm {0.8}$</td><td>${76.1} \pm {0.3}$</td><td>${71.2} \pm {0.2}$</td><td>${76.9} \pm {0.1}$</td><td>${83.7} \pm {0.1}$</td><td>${41.7} \pm {0.7}$</td><td>${57.7} \pm {1.2}$</td></tr><tr><td rowspan="4">Hasse diagram ${\mathcal{H}}_{2}$</td><td>Unweighted</td><td>${55.5} \pm {3.0}$</td><td>99.5±0.1</td><td>${61.0} \pm {0.4}$</td><td>$\mathbf{{97.9} \pm {0.0}}$</td><td>${66.7} \pm {0.1}$</td><td>95.1±0.0</td><td>${40.0} \pm {0.5}$</td><td>${83.1} \pm {1.3}$</td></tr><tr><td>Counts</td><td>${57.0} \pm {1.3}$</td><td>${91.2} \pm {0.9}$</td><td>${54.5} \pm {0.2}$</td><td>${92.6} \pm {0.1}$</td><td>${66.2} \pm {0.1}$</td><td>${89.4} \pm {0.1}$</td><td>${35.3} \pm {0.4}$</td><td>${82.1} \pm {1.3}$</td></tr><tr><td>LObias</td><td>${84.7} \pm {2.2}$</td><td>${91.9} \pm {0.8}$</td><td>${80.6} \pm {0.3}$</td><td>${81.6} \pm {0.2}$</td><td>${77.9} \pm {0.1}$</td><td>${84.3} \pm {0.1}$</td><td>${57.3} \pm {1.0}$</td><td>${70.4} \pm {1.4}$</td></tr><tr><td>EQbias</td><td>${72.7} \pm {1.1}$</td><td>${89.2} \pm {0.7}$</td><td>${71.8} \pm {0.3}$</td><td>${75.0} \pm {0.2}$</td><td>${78.2} \pm {0.2}$</td><td>${88.0} \pm {0.1}$</td><td>${39.3} \pm {0.7}$</td><td>$\mathbf{{87.3} \pm {1.1}}$</td></tr><tr><td rowspan="3">Spectral Embedding</td><td rowspan="2">Combinatorial Laplacians</td><td>Unweighted</td><td>${52.4} \pm {3.7}$</td><td>${77.0} \pm {1.3}$</td><td>${67.3} \pm {0.3}$</td><td>${65.3} \pm {0.2}$</td><td>${58.4} \pm {0.2}$</td><td>${50.7} \pm {0.1}$</td><td>${72.1} \pm {1.1}$</td><td>${63.5} \pm {1.4}$</td></tr><tr><td>Weighted</td><td>${70.4} \pm {1.6}$</td><td>${75.3} \pm {1.6}$</td><td>$\mathbf{{79.4} \pm {0.2}}$</td><td>${76.4} \pm {0.1}$</td><td>${79.9} \pm {0.1}$</td><td>${50.4} \pm {0.1}$</td><td>${82.3} \pm {1.0}$</td><td>${68.4} \pm {1.2}$</td></tr><tr><td>Harm. mean</td><td rowspan="4">Weighted</td><td colspan="2">${85.5} \pm {1.5}$</td><td colspan="2">${74.0} \pm {0.2}$</td><td colspan="2">${83.1} \pm {0.1}$</td><td colspan="2">${53.3} \pm {1.1}$</td></tr><tr><td rowspan="3">Projected Metrics</td><td>Geom. 
mean</td><td colspan="2">${85.8} \pm {1.1}$</td><td colspan="2">${72.5} \pm {0.2}$</td><td colspan="2">${86.8} \pm {0.1}$</td><td colspan="2">${52.9} \pm {1.3}$</td></tr><tr><td>Katz</td><td colspan="2">${78.6} \pm {1.1}$</td><td colspan="2">${65.6} \pm {0.2}$</td><td colspan="2">${81.8} \pm {0.1}$</td><td>${49.2} \pm {1.5}$</td><td/></tr><tr><td>PPR</td><td colspan="2">${76.9} \pm {1.4}$</td><td colspan="2">${70.7} \pm {0.2}$</td><td colspan="2">${81.8} \pm {0.1}$</td><td colspan="2">${74.8} \pm {1.3}$</td></tr><tr><td rowspan="3"/><td rowspan="3">Features Type</td><td rowspan="3"/><td colspan="8">Dataset</td></tr><tr><td colspan="2">contact-high-school</td><td colspan="2">email-Eu</td><td colspan="2">tags-math-sx</td><td colspan="2">coauth-MAG-History</td></tr><tr><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td></tr><tr><td rowspan="7">Neural Embedding</td><td rowspan="3">Hasse diagram ${\mathcal{H}}_{1}$</td><td>Unweighted</td><td>${62.9} \pm {5.2}$</td><td>${50.6} \pm {4.7}$</td><td>${68.5} \pm {0.7}$</td><td>${57.6} \pm {0.5}$</td><td>${63.2} \pm {0.3}$</td><td>${54.0} \pm {0.5}$</td><td>${69.5} \pm {8.2}$</td><td>${63.2} \pm {6.6}$</td></tr><tr><td>Counts</td><td>${74.2} \pm {3.0}$</td><td>${73.0} \pm {3.4}$</td><td>${74.3} \pm {0.8}$</td><td>${67.3} \pm {0.7}$</td><td>${74.3} \pm {0.4}$</td><td>$\mathbf{{84.0} \pm {0.3}}$</td><td>${68.7} \pm {8.4}$</td><td>${66.6} \pm {8.6}$</td></tr><tr><td>LObias</td><td>${70.6} \pm {2.8}$</td><td>${65.6} \pm {5.3}$</td><td>${70.5} \pm {0.6}$</td><td>${64.5} \pm {0.8}$</td><td>${71.3} \pm {0.5}$</td><td>${79.1} \pm {0.5}$</td><td>${68.8} \pm {8.7}$</td><td>${66.5} \pm {8.7}$</td></tr><tr><td rowspan="4">Hasse diagram ${\mathcal{H}}_{2}$</td><td>Unweighted</td><td>${62.5} \pm {6.3}$</td><td>${69.5} \pm {4.9}$</td><td>${66.2} \pm {0.7}$</td><td>${67.8} \pm {0.6}$</td><td>${62.5} \pm {0.2}$</td><td>${83.1} \pm {0.2}$</td><td>${65.9} \pm {8.5}$</td><td>${55.6} \pm {8.0}$</td></tr><tr><td>Counts</td><td>${64.3} \pm {3.6}$</td><td>${72.8} \pm {3.6}$</td><td>${61.8} \pm {0.7}$</td><td>${69.1} \pm {0.6}$</td><td>${62.9} \pm {0.3}$</td><td>${82.3} \pm {0.3}$</td><td>${67.3} \pm {8.2}$</td><td>${61.0} \pm {9.6}$</td></tr><tr><td>LObias</td><td>${69.7} \pm {3.5}$</td><td>${65.4} \pm {5.1}$</td><td>${69.0} \pm {0.6}$</td><td>${60.3} \pm {0.6}$</td><td>${71.2} \pm {0.7}$</td><td>${79.2} \pm {0.4}$</td><td>${67.3} \pm {7.9}$</td><td>${64.2} \pm {9.6}$</td></tr><tr><td>EQbias</td><td>${72.4} \pm {3.6}$</td><td>${73.5} \pm {3.5}$</td><td>${71.3} \pm {0.6}$</td><td>${66.1} \pm {0.6}$</td><td>${71.2} \pm {0.4}$</td><td>${82.3} \pm {0.3}$</td><td>${67.8} \pm {8.6}$</td><td>${65.7} \pm {9.3}$</td></tr><tr><td rowspan="7">Spectral Embedding Projected Metrics</td><td rowspan="2">Combinatorial Laplacians</td><td>Unweighted</td><td>${56.4} \pm {3.6}$</td><td>${56.7} \pm {6.8}$</td><td>${63.8} \pm {0.6}$</td><td>${53.5} \pm {0.7}$</td><td>${55.1} \pm {0.2}$</td><td>${50.4} \pm {0.2}$</td><td>${57.8} \pm {6.0}$</td><td>${56.4} \pm {5.7}$</td></tr><tr><td>Weighted</td><td>${66.5} \pm {5.3}$</td><td>${56.1} \pm {6.5}$</td><td>${65.2} \pm {0.8}$</td><td>${55.6} \pm {0.7}$</td><td>${72.8} \pm {0.4}$</td><td>${50.3} \pm {0.3}$</td><td>${70.1} \pm {8.3}$</td><td>${53.5} \pm {6.8}$</td></tr><tr><td>Harm. 
mean</td><td rowspan="4">Weighted</td><td colspan="2">${71.4} \pm {4.3}$</td><td colspan="2">${64.5} \pm {0.8}$</td><td colspan="2">${79.0} \pm {0.2}$</td><td colspan="2">${61.6} \pm {8.2}$</td></tr><tr><td>Geom. mean</td><td colspan="2">73.1±3.8</td><td colspan="2">${66.7} \pm {0.8}$</td><td colspan="2">$\mathbf{{83.3} \pm {0.2}}$</td><td colspan="2">${62.4} \pm {7.7}$</td></tr><tr><td>Katz</td><td/><td>${69.3} \pm {3.7}$</td><td colspan="2">${63.2} \pm {0.6}$</td><td colspan="2">${77.8} \pm {0.3}$</td><td colspan="2">${62.4} \pm {7.0}$</td></tr><tr><td>PPR</td><td colspan="2">${69.8} \pm {3.9}$</td><td colspan="2">${68.8} \pm {0.5}$</td><td colspan="2">${75.7} \pm {0.4}$</td><td colspan="2">${57.7} \pm {4.6}$</td></tr><tr><td>Logistic Regression</td><td>Unweighted</td><td colspan="2">${68.7} \pm {3.1}$</td><td colspan="2">${68.1} \pm {0.7}$</td><td colspan="2">${81.2} \pm {0.2}$</td><td colspan="2">${65.4} \pm {6.9}$</td></tr></table>
168
+
169
+ ### 5.1 Reconstruction and prediction of 3-way interactions: the unweighted scenario and $k$ -SIMPLEX2VEC
170
+
171
+ #### 5.1.1 Comparison of pairwise node proximities
172
+
173
+ In Figure 2(a, b) we show evaluation metrics for higher-order link classification (reconstruction and prediction) of 3-way interactions, computed with unweighted node-level information from different models, varying the quantity ${n}_{\mathcal{E}}\left( \delta \right)$ of the open configurations. We recall that in this case $k$ -SIMPLEX2VEC is equivalent to the standard embedding of the projected graph. Scores ${s}_{0}\left( \delta \right)$ computed with SIMPLEX2VEC on the Hasse diagram ${\mathcal{H}}_{1}$ perform better than the proximities of the projected graph (i.e. $k$ -SIMPLEX2VEC scores) in almost all cases, meaning that the information given by the pairwise structures is enriched by considering multiple layers of interactions, even without leveraging interaction weights (both in ${\mathcal{G}}_{\mathcal{D}}^{\text{train }}$ and ${\mathcal{K}}_{\mathcal{D}}^{\text{train }}$ ).
174
+
175
+ Generally, we observe the expected decrease in performance for every model as the parameter ${n}_{\mathcal{E}}$ grows. However, a few datasets show less sensitivity of the prediction performance to variations of ${n}_{\mathcal{E}}\left( \delta \right)$ (e.g., email-Eu); we ascribe this difference to domain-specific effects and peculiarities of those datasets. The embedding similarity ${s}_{0}\left( \delta \right)$ from the ${\mathcal{H}}_{1}$ diagram outperforms $k$ -SIMPLEX2VEC proximities in almost every reconstruction task, except for coauth-MAG-History on open configurations with ${n}_{\mathcal{E}} = 3$ . In prediction tasks, we observe the same advantage of SIMPLEX2VEC with respect to $k$ -SIMPLEX2VEC, except in contact-high-school where the models perform similarly for ${n}_{\mathcal{E}} < 2$ .
176
+
177
+ #### 5.1.2 Comparison of higher-order edge proximities
178
+
179
+ In the previous sections the metric ${s}_{0}\left( \delta \right)$ was computed from feature representations of 0-simplices. Here we analyse instead how performance changes when we use embedding representations of 1-simplices (edge representations) to compute ${s}_{1}\left( \delta \right)$ . Intuitively, group representations such as 1-simplex embeddings should convey higher-order information useful to improve classification with respect to node-level features.
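+
+ Since the exact definition of ${s}_{k}\left( \delta \right)$ is introduced earlier in the paper (Section 4) and not repeated here, the sketch below only fixes ideas with one plausible instantiation: the average pairwise cosine similarity among the embeddings of the group's $k$ -dimensional faces. The dictionary `emb`, keyed by frozensets of nodes, is an assumed interface, not necessarily the score actually used.
+
+ ```python
+ import itertools
+ import numpy as np
+
+ def cosine(u, v):
+     return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
+
+ def s_k(delta, emb, k):
+     """Average pairwise cosine similarity among embeddings of the k-faces of the
+     candidate group delta; k=0 uses node vectors, k=1 uses edge vectors."""
+     faces = [frozenset(c) for c in itertools.combinations(sorted(delta), k + 1)]
+     vecs = [emb[f] for f in faces]            # assumes every k-face was embedded
+     pairs = list(itertools.combinations(vecs, 2))
+     if not pairs:                             # e.g. k = 2 on a 3-node group
+         raise ValueError("group has a single k-face")
+     return sum(cosine(u, v) for u, v in pairs) / len(pairs)
+ ```
+
+ Under this reading, for a 3-node group ${s}_{0}\left( \delta \right)$ averages three node-pair similarities, while ${s}_{1}\left( \delta \right)$ averages the three similarities among its edge embeddings, which is why the latter is evaluated on the fully connected open configurations discussed below.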
182
+
183
+ In Figure 2(c, d) we show evaluation metrics for higher-order link classification of 3-way interactions, comparing unweighted node-level and edge-level information from different models, fixing the quantity ${n}_{\mathcal{E}}\left( \delta \right) = 3$ for the open configurations. We consider fully connected triangle configurations because, besides being the hardest configurations to classify, they contain exactly the set of links necessary to compute ${s}_{1}\left( \delta \right)$ .
184
+
185
+ Generally, we notice an increase in classification scores when using the ${s}_{1}\left( \delta \right)$ similarity rather than ${s}_{0}\left( \delta \right)$ with SIMPLEX2VEC embeddings. The performance gain is quite large (between 30% and 100%) in all reconstruction tasks; for prediction tasks it is noticeable on contact-high-school and tags-math-sx, while it is even negative on coauth-MAG-History. The same holds for $k$ -SIMPLEX2VEC in the majority of datasets, but with a reduced gain.
186
+
187
+ ### 5.2 Reconstruction and prediction of 3-way interactions: the role of simplicial weights
188
+
189
+ Previously we showed that feature representations learned through the hierarchical organization of the HD enhance the classification accuracy of closed triangles when considering unweighted complexes. We now complement these results by studying the effect of introducing weights. In particular, we analyze the importance of weighted interactions in our framework, focusing on the case where fully connected open triangles are the negative examples for the downstream tasks.
190
+
191
+ In Table 2 (Top) we show higher-order link reconstruction results: the simplicial similarity ${s}_{1}\left( \delta \right)$ on the unweighted HD ${\mathcal{H}}_{2}$ outperforms all other methods, in particular weighted metrics based on Laplacian similarity and the projected graph geometric mean, allowing almost perfect reconstruction in 3 out of 4 datasets. Compared with projected graph metrics this was expected, since 3-way information is incorporated in ${\mathcal{H}}_{2}$ , and the optimal scores reflect the goodness of fit of the embedding algorithm. The weighting schemes Counts and EQbias also obtain excellent scores with the ${s}_{1}\left( \delta \right)$ metric, while the ${s}_{0}\left( \delta \right)$ metric benefits from the use of ${LO}$ bias weights. Moreover, even the simplicial similarity ${s}_{1}\left( \delta \right)$ on the Hasse diagram ${\mathcal{H}}_{1}$ outperforms the baseline scores in half of the datasets (with weighting schemes Counts and LObias), showing the feasibility of reconstructing 2-order interactions from weighted lower-order simplices (vertices in ${\mathcal{H}}_{1}$ are simplices of dimension 0 and 1), similarly to previous work on hypergraph reconstruction [8].
192
+
193
+ In Table 2 (Bottom) we show higher-order link prediction results. Overall, SIMPLEX2VEC embeddings trained on ${\mathcal{H}}_{1}$ with Counts and EQbias weights give the better results: in contact-high-school and email-Eu with the ${s}_{0}\left( \delta \right)$ metric, in tags-math-sx with the ${s}_{1}\left( \delta \right)$ metric. On coauth-MAG-History the unweighted ${s}_{0}\left( \delta \right)$ score is outperformed only by the weighted ${\mathbf{L}}_{0}$ embedding, with the weighted simplicial counterparts yielding similar performance. Among the projected graph scores, good results are obtained with the geometric mean and logistic regression, which were among the best metrics in one of the seminal works on higher-order link prediction [9].
194
+
195
+ Finally, we observe that weighting schemes for neural simplicial embeddings contribute positively to the classification tasks overall, both for reconstruction and prediction.
196
+
197
+ ## 6 Conclusions and Future Work
198
+
199
+ In this paper, we introduced SIMPLEX2VEC for representation learning on simplicial complexes. In particular, we focused on formalizing reconstruction and link prediction tasks for higher-order structures, and we tested the proposed model on these downstream tasks. We showed that SIMPLEX2VEC-based representations are more effective for classification than traditional approaches and previous higher-order embedding methods. In particular, we showed the feasibility of using simplicial embeddings of Hasse diagrams to reconstruct a system's polyadic interactions from lower-order edges, as well as to adequately predict future simplicial closures. SIMPLEX2VEC enables the investigation of the impact of different topological features, and we showed that weighted and unweighted models have different predictive power. Future work should focus on understanding these differences through the analysis of link predictability $\left\lbrack {{46},{47}}\right\rbrack$ with higher-order edges as a function of dataset peculiarities, and on algorithmic approaches to tame the scalability limits set by the combinatorial structure of the Hasse diagram, which could for example be tackled via different optimization frameworks [48, 49] and hierarchical approaches [17, 50].
200
+
201
+ ## References
202
+
203
+ [1] William L Hamilton. Graph representation learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 14(3):1-159, 2020. 1, 2
204
+
205
+ [2] Federico Battiston, Giulia Cencetti, Iacopo Iacopini, Vito Latora, Maxime Lucas, Alice Patania, Jean-Gabriel Young, and Giovanni Petri. Networks beyond pairwise interactions: structure and dynamics. Physics Reports, 874:1-92, August 2020. 1, 6
206
+
207
+ [3] Leo Torres, Ann S. Blevins, Danielle Bassett, and Tina Eliassi-Rad. The Why, How, and When of Representations for Complex Systems. SIAM Review, 63(3):435-485, January 2021. Publisher: Society for Industrial and Applied Mathematics. 1
208
+
209
+ [4] Linyuan Lü and Tao Zhou. Link prediction in complex networks: A survey. Physica A: statistical mechanics and its applications, 390(6):1150-1170, 2011. 1, 2
210
+
211
+ [5] Giulio Cimini, Rossana Mastrandrea, and Tiziano Squartini. Reconstructing networks. Cambridge University Press, 2021. 1
212
+
213
+ [6] Sudhanshu Chanpuriya, Cameron Musco, Konstantinos Sotiropoulos, and Charalampos Tsourakakis. Deepwalking backwards: From embeddings back to graphs. In International Conference on Machine Learning, pages 1473-1483. PMLR, 2021. 1, 3
214
+
215
+ [7] Alexandru Cristian Mara, Jefrey Lijffijt, and Tijl De Bie. Benchmarking network embedding models for link prediction: are we making progress? In 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA), pages 138-147. IEEE, 2020. 1, 3
216
+
217
+ [8] Jean-Gabriel Young, Giovanni Petri, and Tiago P Peixoto. Hypergraph reconstruction from network data. Communications Physics, 4(1):1-11, 2021. 1, 2, 9, 13
218
+
219
+ [9] Austin R. Benson, Rediet Abebe, Michael T. Schaub, Ali Jadbabaie, and Jon Kleinberg. Simplicial Closure and higher-order link prediction. Proceedings of the National Academy of Sciences, 115(48):E11221-E11230, November 2018. 1, 2, 5, 7, 9
220
+
221
+ [10] Huan Wang, Chuang Ma, Han-Shuang Chen, Ying-Cheng Lai, and Hai-Feng Zhang. Full reconstruction of simplicial complexes from binary contagion and ising data. Nature Communications, 13(1):1-10, 2022. 1, 2
222
+
223
+ [11] Andrea Santoro, Federico Battiston, Giovanni Petri, and Enrico Amico. Unveiling the higher-order organization of multivariate time series. arXiv:2203.10702, 2022. 1, 2
224
+
225
+ [12] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018. 2
226
+
227
+ [13] Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf. Learning with hypergraphs: Clustering, classification, and embedding. Advances in neural information processing systems, 19, 2006. 2
228
+
229
+ [14] Jie Huang, Chuan Chen, Fanghua Ye, Jiajing Wu, Zibin Zheng, and Guohui Ling. Hyper2vec: Biased Random Walk for Hyper-network Embedding. In Guoliang Li, Jun Yang, Joao Gama, Juggapong Natwichai, and Yongxin Tong, editors, Database Systems for Advanced Applications, pages 273-277. Springer International Publishing, 2019. 2
230
+
231
+ [15] Jie Huang, Xin Liu, and Yangqiu Song. Hyper-path-based representation learning for hyper-networks. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 449-458, 2019. 2
232
+
233
+ [16] Ke Tu, Peng Cui, Xiao Wang, Fei Wang, and Wenwu Zhu. Structural deep embedding for hyper-networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. 2
234
+
235
+ [17] Sepideh Maleki, Donya Saless, Dennis P Wall, and Keshav Pingali. Hypernetvec: Fast and scalable hierarchical embedding for hypergraphs. In International Conference on Network Science, pages 169-183. Springer, 2022. 2, 9
236
+
237
+ [18] Celia Hacker. k-simplex2vec: a simplicial extension of node2vec. In NeurIPS 2020 Workshop on Topological Data Analysis and Beyond, 2020. 2, 3, 6
238
+
239
+ [19] Jacob Charles Wright Billings, Mirko Hu, Giulia Lerda, Alexey N Medvedev, Francesco Mottes, Adrian Onicas, Andrea Santoro, and Giovanni Petri. Simplex2vec embeddings for community detection in simplicial complexes. arXiv:1906.09068, 2019. 2, 3, 4, 5
240
+
241
+ [20] Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, and Yue Gao. Hypergraph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3558-3565, 2019. 2
242
+
243
+ [21] Ruochi Zhang, Yuesong Zou, and Jian Ma. Hyper-sagnn: a self-attention based graph neural network for hypergraphs. In International Conference on Learning Representations, 2020. 2
244
+
245
+ [22] Song Bai, Feihu Zhang, and Philip HS Torr. Hypergraph convolution and hypergraph attention. Pattern Recognition, 110:107637, 2021. 2
246
+
247
+ [23] Stefania Ebli, Michaël Defferrard, and Gard Spreemann. Simplicial neural networks. In NeurIPS 2020 Workshop on Topological Data Analysis and Beyond, 2020. 2
248
+
249
+ [24] Cristian Bodnar, Fabrizio Frasca, Yuguang Wang, Nina Otter, Guido F Montufar, Pietro Lio, and Michael Bronstein. Weisfeiler and lehman go topological: Message passing simplicial networks. In International Conference on Machine Learning, pages 1026-1037. PMLR, 2021. 2
250
+
251
+ [25] Christopher Wei Jin Goh, Cristian Bodnar, and Pietro Lio. Simplicial attention networks. In ICLR 2022 Workshop on Geometrical and Topological Representation Learning, 2022. 2
252
+
253
+ [26] Tiago P Peixoto. Network reconstruction and community detection from dynamics. Physical review letters, 123(12):128301, 2019. 2
254
+
255
+ [27] Mark EJ Newman. Network structure from rich but noisy data. Nature Physics, 14(6):542-545, 2018. 2
256
+
257
+ [28] Muhan Zhang, Zhicheng Cui, Shali Jiang, and Yixin Chen. Beyond link prediction: Predicting hyperlinks in adjacency space. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. 2, 6
258
+
259
+ [29] Govind Sharma, Prasanna Patil, and M. Narasimha Murty. C3MM: Clique-Closure based Hyperlink Prediction. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pages 3364-3370, July 2020. 2
260
+
261
+ [30] Tarun Kumar, K Darwin, Srinivasan Parthasarathy, and Balaraman Ravindran. Hpra: Hyperedge prediction using resource allocation. In 12th ACM conference on web science, pages 135-143, 2020. 2
262
+
263
+ [31] Liming Pan, Hui-Juan Shang, Peiyan Li, Haixing Dai, Wei Wang, and Lixin Tian. Predicting hyperlinks via hypernetwork loop structure. EPL (Europhysics Letters), 135(4):48005, 2021. 2
264
+
265
+ [32] Naganand Yadati, Vikram Nitin, Madhav Nimishakavi, Prateek Yadav, Anand Louis, and Partha Talukdar. NHP: Neural Hypergraph Link Prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 1705-1714. ACM, October 2020. 2
266
+
267
+ [33] Neeraj Chavan and Katerina Potika. Higher-order Link Prediction Using Triangle Embeddings. In 2020 IEEE International Conference on Big Data (Big Data), pages 4535-4544, December 2020. 2, 7
268
+
269
+ [34] Alice Patania, Giovanni Petri, and Francesco Vaccarino. The shape of collaborations. EPJ Data Science, 6:1-16, 2017. 2
270
+
271
+ [35] Yunyu Liu, Jianzhu Ma, and Pan Li. Neural predicting higher-order patterns in temporal networks. In Proceedings of the ACM Web Conference 2022, pages 1340-1351, 2022. 2
272
+
273
+ [36] Se-eun Yoon, Hyungseok Song, Kijung Shin, and Yung Yi. How Much and When Do We Need Higher-order Information in Hypergraphs? A Case Study on Hyperedge Prediction. In Proceedings of The Web Conference 2020, pages 2627-2633, Taipei Taiwan, April 2020. ACM. 2
274
+
275
+ [37] Prasanna Patil, Govind Sharma, and M. Narasimha Murty. Negative Sampling for Hyperlink Prediction in Networks. In Hady W. Lauw, Raymond Chi-Wing Wong, Alexandros Ntoulas, Ee-Peng Lim, See-Kiong Ng, and Sinno Jialin Pan, editors, Advances in Knowledge Discovery and Data Mining, pages 607-619. Springer International Publishing, 2020. 2, 6, 7
276
+
277
+ [38] Federico Musciotto, Federico Battiston, and Rosario N. Mantegna. Detecting informative higher-order interactions in statistically validated hypergraphs. arXiv:2103.16484 [physics, stat], March 2021. 2
278
+
279
+ [39] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv:1301.3781, 2013. 3
280
+
281
+ [40] David Liben-Nowell and Jon Kleinberg. The link-prediction problem for social networks. Journal of the American society for information science and technology, 58(7):1019-1031, 2007. 5
282
+
283
+ [41] Sudhanshu Chanpuriya and Cameron Musco. Infinitewalk: Deep network embeddings as laplacian embeddings with a nonlinearity. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1325-1333, 2020. 5
284
+
285
+ [42] Timothy E Goldberg. Combinatorial laplacians of simplicial complexes. Senior Thesis, Bard College, 2002. 6
286
+
287
+ [43] Yu-Chia Chen and Marina Meila. The decomposition of the higher-order homology embedding constructed from the $k$ -laplacian. Advances in Neural Information Processing Systems, 34, 2021. 6
288
+
289
+ [44] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373-1396, 2003. 6
290
+
291
+ [45] Wissam Siblini, Jordan Fréry, Liyun He-Guelton, Frédéric Oblé, and Yi-Qing Wang. Master your metrics with calibration. In International Symposium on Intelligent Data Analysis, pages 457-469. Springer, 2020. 7
292
+
293
+ [46] Linyuan Lü, Liming Pan, Tao Zhou, Yi-Cheng Zhang, and H Eugene Stanley. Toward link predictability of complex networks. Proceedings of the National Academy of Sciences, 112(8):2325- 2330, 2015. 9
294
+
295
+ [47] Jiachen Sun, Ling Feng, Jiarong Xie, Xiao Ma, Dashun Wang, and Yanqing Hu. Revealing the predictability of intrinsic structure in complex networks. Nature communications, 11(1):1-10, 2020. 9
296
+
297
+ [48] Jie Zhang, Yuxiao Dong, Yan Wang, Jie Tang, and Ming Ding. Prone: Fast and scalable network representation learning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, 2019. 9
298
+
299
+ [49] Hao Zhu and Piotr Koniusz. Refine: Random range finder for network embedding. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 3682-3686, 2021. 9
300
+
301
+ [50] Ayan Kumar Bhowmick, Koushik Meneni, Maximilien Danisch, Jean-Loup Guillaume, and Bivas Mitra. Louvainne: Hierarchical louvain method for high quality and scalable network embedding. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 43-51, 2020. 9
302
+
303
+ ## A Appendix
304
+
305
+ Table A1: Summary statistics of the empirical datasets, referring to the largest connected component of the projected graph. In order: total number of time-stamped simplices $\left| \mathcal{D}\right|$ ; number of unique simplices $\left| \mathcal{F}\right|$ ; number of training nodes $\left| \mathcal{V}\right|$ and edges $\left| \mathcal{E}\right|$ in the first ${80}\%$ of $\mathcal{D}$ ; number of triangles $\left| \Delta \right|$ in the first ${80}\%$ / new triangles $\left| {\Delta }^{ * }\right|$ in the last ${20}\%$ ; number of training tetrahedra $\left| \Theta \right|$ in the first ${80}\%$ / new tetrahedra $\left| {\Theta }^{ * }\right|$ in the last ${20}\%$ .
306
+
307
+ <table><tr><td>Dataset</td><td>$\left| \mathcal{D}\right|$</td><td>$\left| \mathcal{F}\right|$</td><td>$\left| \mathcal{V}\right|$</td><td>$\left| \mathcal{E}\right|$</td><td>$\left| \Delta \right| /\left| {\Delta }^{ * }\right|$</td><td>$\left| \Theta \right| /\left| {\Theta }^{ * }\right|$</td></tr><tr><td>contact-high-school</td><td>172,035</td><td>7,818</td><td>327</td><td>5,225</td><td>2,050 / 320</td><td>218 / 20</td></tr><tr><td>contact-primary-school</td><td>106,879</td><td>12,704</td><td>242</td><td>7,575</td><td>4,259 / 880</td><td>310 / 71</td></tr><tr><td>email-Eu</td><td>234,559</td><td>25,008</td><td>952</td><td>26,582</td><td>143,280 / 17,325</td><td>631,590 / 82,945</td></tr><tr><td>email-Enron</td><td>10,883</td><td>1,512</td><td>140</td><td>1,607</td><td>5,517 / 1,061</td><td>14,902 / 3,547</td></tr><tr><td>tags-math-sx</td><td>819,546</td><td>150,346</td><td>893</td><td>60,258</td><td>167,306 / 34,801</td><td>101,649 / 26,344</td></tr><tr><td>congress-bills</td><td>103,758</td><td>18,626</td><td>97</td><td>3,207</td><td>32.692 / 371</td><td>90,316 / 3,309</td></tr><tr><td>coauth-MAG-History</td><td>114,447</td><td>11,072</td><td>4,034</td><td>9,255</td><td>4,714 / 1,297</td><td>3,966 / 1,008</td></tr><tr><td>coauth-MAG-Geology</td><td>275,565</td><td>29,414</td><td>3,835</td><td>27,950</td><td>17,946 / 3,852</td><td>12,072 / 3,168</td></tr></table>
308
+
309
+ ### A.1 Beyond 3-way interactions: the case of tetrahedra
310
+
311
+ #### A.1.1 Unweighted analysis
312
+
313
+ In Figure A1(a) we show node-level evaluation metrics for 4-way higher-order reconstruction. The metric ${s}_{0}\left( \delta \right)$ of SIMPLEX2VEC computed on ${\mathcal{H}}_{1}$ shows overall slightly better performance with respect to $k$ -SIMPLEX2VEC similarities, especially when the density of triangles is low $\left( {{n}_{\Delta } < 3}\right)$ . In coauth-MAG-Geology we also observe a remarkable increase of the $k$ -SIMPLEX2VEC reconstruction scores for negative examples with increasing ${n}_{\Delta }\left( \delta \right)$ , and this is also observable in email-Enron. In Figure A1(b) we report node-level evaluation metrics for 4-way higher-order prediction. The node-level SIMPLEX2VEC embedding performs better than $k$ -SIMPLEX2VEC on contact-primary-school and, to a lesser extent, on coauth-MAG-Geology. In email-Enron and congress-bills the SIMPLEX2VEC performance increases when the density of triangles is low $\left( {{n}_{\Delta } \leq 2}\right)$ .
314
+
315
+ Higher-order similarity measures from $k$ -SIMPLEX2VEC in Figure A1(c, d) are outperformed by the SIMPLEX2VEC ones in many cases, especially by the ${s}_{2}\left( \delta \right)$ metric for contact-primary-school, email-Enron and congress-bills in reconstruction tasks. In prediction tasks on email-Enron and coauth-MAG-Geology, SIMPLEX2VEC obtains good results, exceeding the simplicial baseline. These results generally confirm our previous findings on 3-way tasks, which displayed an increasing classification capability when using higher-order proximities ${s}_{k}\left( {k > 0}\right)$ for SIMPLEX2VEC.
316
+
317
+ #### A.1.2 Weighted analysis
318
+
319
+ In Table A2 (Top) we show reconstruction scores for tetrahedra, when simplicial embeddings are trained on the Hasse diagram ${\mathcal{H}}_{2}$ and negative examples are given by open 4-way configurations with four triangular faces. Due to the characteristics of ${\mathcal{H}}_{2}$ , features learned from the simplicial complex are not aware of tetrahedral structures, and the task amounts to reconstructing 4-node groups from training data containing at most triadic structures. Previous work analyzed the problem of higher-order edge reconstruction from pairwise data [8], but here we focus on a previously unstudied task based on triadic data. From the comparison with spectral embeddings and PPMI proximities, we notice that the SIMPLEX2VEC weighted ${s}_{2}\left( \delta \right)$ similarity (LObias and EQbias) is the best on half of the datasets in classifying closed tetrahedra against triangle-rich open groups. In email-Enron the weighted ${\mathbf{L}}_{1}$ embedding outperforms the unweighted (and weighted) ${s}_{0}\left( \delta \right)$ simplicial metric, while in coauth-MAG-Geology the best score is given by the unweighted ${\mathrm{{PPMI}}}_{1}$ (which is also the best projected metric on the other 3 datasets).
320
+
321
+ In Table A2 (Bottom) we report classification scores for the prediction of simplicial closures on tetrahedra, when neural embeddings are trained on the Hasse diagram ${\mathcal{H}}_{3}$ (we empirically observed better results than with ${\mathcal{H}}_{2}$ ). We compare these results with spectral embeddings and PPMI projected metrics in predicting which of the mostly triangle-dense configurations will close into a tetrahedron in the last 20% of the data. Unusually, the best scores obtained with SIMPLEX2VEC come from the unweighted setting in email-Enron and congress-bills, with the ${s}_{1}\left( \delta \right)$ and ${s}_{2}\left( \delta \right)$ metrics respectively. There is no unique best metric, as was also observed in the 3-way prediction results of Table 2 (Bottom). Spectral embedding outperforms the neural methods for contact-primary-school (unweighted ${s}_{2}$ ) and coauth-MAG-Geology (weighted ${s}_{0}$ ).
322
+
323
+ ![01963f0c-f110-7e99-b85a-c00e06b0f6ca_13_336_223_1118_443_0.jpg](images/01963f0c-f110-7e99-b85a-c00e06b0f6ca_13_336_223_1118_443_0.jpg)
324
+
325
+ Figure A1: Performance on 4-way link reconstruction (a, c) and prediction (b, d) for SIMPLEX2VEC and $k$ -SIMPLEX2VEC with: (a, b) similarity score ${s}_{0}$ varying the parameter ${n}_{\Delta }$ ; (c, d) score ${s}_{k}$ (with $k$ in $\{ 0,1,2\}$ ) on highly triangle-dense open configurations $\left( {{n}_{\Delta } = 4}\right)$ . Metrics are computed on unweighted representations. The label imbalance in each sample is uniformly drawn between 1:1 and 1:5000. A schematic view of positive and negative examples is reported for each classification task.
326
+
327
+ Table A2: Balanced AUC-PR scores for higher-order link reconstruction (Top) and prediction (Bottom) on 4-node groups, with the hardest class of negative configurations $\left( {{n}_{\Delta } = 4}\right)$ . Best scores for the different methods are reported in boldface; among these, the best overall score is blue shaded and the second best is grey shaded.
328
+
329
+ <table><tr><td rowspan="2">Dataset</td><td colspan="4">Neural Embedding (Hasse Diagram ${\mathcal{H}}_{2}$ )</td><td colspan="4">Spectral Embedding (Combinatorial Laplacians)</td><td colspan="3">Projected Graph PPMI Metric</td></tr><tr><td/><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{2}\left( \delta \right)$</td><td/><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{2}\left( \delta \right)$</td><td>$T = 1$</td><td>$T = {10}$</td><td>$T = \infty$</td></tr><tr><td rowspan="4">contact-primary-school</td><td>Unweighted</td><td>${52.9} \pm {3.3}$</td><td>${45.2} \pm {2.7}$</td><td>${64.5} \pm {2.8}$</td><td rowspan="2">Unweighted</td><td rowspan="2">${52.1} \pm {3.8}$</td><td>${58.2} \pm {2.0}$</td><td>${53.4} \pm {3.0}$</td><td>${51.5} \pm {3.1}$</td><td>${50.2} \pm {3.0}$</td><td>${50.2} \pm {3.0}$</td></tr><tr><td>Counts</td><td>${48.4} \pm {3.0}$</td><td>${46.2} \pm {2.8}$</td><td>${59.1} \pm {3.3}$</td><td/><td/><td/><td/><td/></tr><tr><td>LObias</td><td>${50.6} \pm {3.2}$</td><td>${61.6} \pm {3.3}$</td><td>$\mathbf{{70.7} \pm {3.9}}$</td><td rowspan="2">Weighted</td><td rowspan="2">${54.0} \pm {2.8}$</td><td>${55.9} \pm {2.8}$</td><td>${53.4} \pm {2.1}$</td><td>${47.9} \pm {3.1}$</td><td>${47.0} \pm {2.7}$</td><td>${48.5} \pm {2.5}$</td></tr><tr><td>EQbias</td><td>${45.2} \pm {3.6}$</td><td>${47.0} \pm {3.0}$</td><td>${58.5} \pm {3.3}$</td><td/><td/><td/><td/><td/></tr><tr><td rowspan="4">email-Enron</td><td>Unweighted</td><td>${69.0} \pm {0.4}$</td><td>${56.0} \pm {0.4}$</td><td>${58.2} \pm {0.3}$</td><td rowspan="2">Unweighted</td><td rowspan="2">${69.0} \pm {0.5}$</td><td>${68.0} \pm {0.4}$</td><td>${55.5} \pm {0.3}$</td><td>${68.5} \pm {0.4}$</td><td>${66.7} \pm {0.5}$</td><td>${66.9} \pm {0.4}$</td></tr><tr><td>Counts</td><td>${60.6} \pm {0.5}$</td><td>${61.3} \pm {0.5}$</td><td>${54.0} \pm {0.4}$</td><td/><td/><td/><td/><td/></tr><tr><td>LObias</td><td>${68.0} \pm {0.5}$</td><td>${46.5} \pm {0.5}$</td><td>${57.4} \pm {0.5}$</td><td rowspan="2">Weighted</td><td rowspan="2">${71.1} \pm {0.4}$</td><td>$\mathbf{{79.0} \pm {0.3}}$</td><td>${76.9} \pm {0.2}$</td><td>${58.3} \pm {0.4}$</td><td>${57.9} \pm {0.5}$</td><td>${62.0} \pm {0.5}$</td></tr><tr><td>EQbias</td><td>${62.1} \pm {0.7}$</td><td>${44.4} \pm {0.3}$</td><td>${53.1} \pm {0.4}$</td><td/><td/><td/><td/><td/></tr><tr><td rowspan="4">congress-bills</td><td>Unweighted</td><td>${63.1} \pm {0.2}$</td><td>${64.4} \pm {0.1}$</td><td>${51.8} \pm {0.2}$</td><td rowspan="2">Unweighted</td><td rowspan="2">${56.1} \pm {0.2}$</td><td>${58.4} \pm {0.1}$</td><td>${49.8} \pm {0.1}$</td><td>${65.9} \pm {0.1}$</td><td>${66.0} \pm {0.1}$</td><td>${65.9} \pm {0.1}$</td></tr><tr><td>Counts</td><td>${43.1} \pm {0.1}$</td><td>${70.4} \pm {0.1}$</td><td>${72.5} \pm {0.1}$</td><td/><td/><td/><td/><td/></tr><tr><td>LObias</td><td>${49.0} \pm {0.1}$</td><td>${74.2} \pm {0.1}$</td><td>${60.6} \pm {0.2}$</td><td rowspan="2">Weighted</td><td rowspan="2">${55.0} \pm {0.1}$</td><td>${62.8} \pm {0.2}$</td><td>${55.3} \pm {0.2}$</td><td>${49.1} \pm {0.1}$</td><td>${47.8} \pm {0.1}$</td><td>${47.3} \pm {0.1}$</td></tr><tr><td>EQbias</td><td>${65.7} \pm {0.2}$</td><td>${69.0} \pm {0.1}$</td><td>${74.2} \pm {0.1}$</td><td/><td/><td/><td/><td/></tr><tr><td rowspan="4">coauth-MAG-Geology</td><td>Unweighted</td><td>${71.6} \pm {0.5}$</td><td>${34.6} \pm {0.3}$</td><td>${84.2} \pm {0.7}$</td><td rowspan="2">Unweighted</td><td>${62.6} \pm {0.6}$</td><td>${61.7} \pm 
{0.9}$</td><td>${49.3} \pm {0.9}$</td><td>${86.0} \pm {0.4}$</td><td>${77.8} \pm {0.4}$</td><td>${75.5} \pm {0.5}$</td></tr><tr><td>Counts</td><td>${40.5} \pm {0.3}$</td><td>${36.2} \pm {0.4}$</td><td>${74.1} \pm {0.3}$</td><td/><td/><td/><td/><td/><td/></tr><tr><td>LObias</td><td>${64.1} \pm {0.5}$</td><td>${34.4} \pm {0.3}$</td><td>${73.3} \pm {0.5}$</td><td rowspan="2">Weighted</td><td>${85.8} \pm {0.7}$</td><td>${65.7} \pm {0.5}$</td><td>${44.9} \pm {0.7}$</td><td>${76.3} \pm {0.6}$</td><td>${71.9} \pm {0.5}$</td><td>${70.6} \pm {0.6}$</td></tr><tr><td>EQbias</td><td>${36.7} \pm {0.3}$</td><td>${37.5} \pm {0.2}$</td><td>${79.2} \pm {0.4}$</td><td/><td/><td/><td/><td/><td/></tr><tr><td rowspan="2">Dataset</td><td colspan="4">Neural Embedding (Hasse Diagram ${\mathcal{H}}_{3}$ )</td><td colspan="4">Spectral Embedding (Combinatorial Laplacians)</td><td colspan="3">Projected Graph PPMI Metric</td></tr><tr><td/><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{2}\left( \delta \right)$</td><td colspan="4">${s}_{0}\left( \delta \right)$${s}_{1}\left( \delta \right)$${s}_{2}\left( \delta \right)$</td><td>$T = 1$</td><td>$T = {10}$</td><td>$T = \infty$</td></tr><tr><td rowspan="4">contact-primary-school</td><td>Unweighted</td><td>${56.4} \pm {1.8}$</td><td>${58.6} \pm {2.3}$</td><td>${66.8} \pm {2.4}$</td><td rowspan="2">Unweighted</td><td rowspan="2">${82.1} \pm {4.0}$</td><td>${85.4} \pm {1.7}$</td><td>${85.9} \pm {3.1}$</td><td>${49.3} \pm {2.2}$</td><td>${45.8} \pm {1.6}$</td><td>${45.7} \pm {1.7}$</td></tr><tr><td>Counts</td><td>${63.0} \pm {2.7}$</td><td>${67.8} \pm {0.7}$</td><td>${72.2} \pm {1.6}$</td><td/><td/><td/><td/><td/></tr><tr><td>LObias</td><td>${60.4} \pm {1.6}$</td><td>${61.2} \pm {2.2}$</td><td>${62.4} \pm {2.6}$</td><td rowspan="2">Weighted</td><td rowspan="2">${57.8} \pm {2.4}$</td><td>${81.3} \pm {4.4}$</td><td>${70.6} \pm {1.5}$</td><td>${61.1} \pm {2.3}$</td><td>${47.4} \pm {1.6}$</td><td>${48.6} \pm {1.6}$</td></tr><tr><td>EObias</td><td>${62.7} \pm {2.0}$</td><td>${65.6} \pm {1.2}$</td><td>${68.3} \pm {2.2}$</td><td/><td/><td/><td/><td/></tr><tr><td rowspan="4">email-Enron</td><td>Unweighted</td><td>${88.3} \pm {6.6}$</td><td>98.0±2.1</td><td>${96.9} \pm {2.3}$</td><td rowspan="2">Unweighted</td><td rowspan="2">${92.7} \pm {2.9}$</td><td>${67.6} \pm {5.7}$</td><td>$\mathbf{{97.1} \pm {1.8}}$</td><td>${50.3} \pm {0.2}$</td><td>${50.9} \pm {0.5}$</td><td>${50.8} \pm {0.5}$</td></tr><tr><td>Counts</td><td>${77.0} \pm {5.6}$</td><td>${88.7} \pm {4.0}$</td><td>${83.5} \pm {4.5}$</td><td/><td/><td/><td/><td/></tr><tr><td>LObias</td><td>${60.5} \pm {3.1}$</td><td>${73.7} \pm {5.4}$</td><td>${88.4} \pm {4.0}$</td><td rowspan="2">Weighted</td><td rowspan="2">${84.8} \pm {5.6}$</td><td>${88.7} \pm {3.7}$</td><td>${95.8} \pm {2.4}$</td><td>${55.8} \pm {2.2}$</td><td>${53.3} \pm {1.3}$</td><td>${54.7} \pm {1.5}$</td></tr><tr><td>EObias</td><td>${57.9} \pm {2.5}$</td><td>${84.9} \pm {3.6}$</td><td>${80.4} \pm {5.6}$</td><td/><td/><td/><td/><td/></tr><tr><td rowspan="4">congress-bills</td><td>Unweighted</td><td>${47.9} \pm {0.1}$</td><td>${34.0} \pm {0.0}$</td><td>$\mathbf{{77.7} \pm {0.3}}$</td><td rowspan="2">Unweighted</td><td rowspan="2">${60.8} \pm {0.2}$</td><td>${64.3} \pm {0.3}$</td><td>${48.8} \pm {0.2}$</td><td>${74.7} \pm {0.2}$</td><td>${74.7} \pm {0.2}$</td><td>${74.7} \pm {0.2}$</td></tr><tr><td>Counts</td><td>${49.9} \pm {0.2}$</td><td>${37.4} \pm {0.1}$</td><td>${74.6} \pm 
{0.3}$</td><td/><td/><td/><td/><td/></tr><tr><td>LObias</td><td>${40.2} \pm {0.2}$</td><td>${76.9} \pm {0.3}$</td><td>${74.0} \pm {0.3}$</td><td rowspan="2">Weighted</td><td rowspan="2">${40.2} \pm {0.1}$</td><td>${53.1} \pm {0.3}$</td><td>${50.8} \pm {0.2}$</td><td>${40.2} \pm {0.1}$</td><td>${40.8} \pm {0.1}$</td><td>${40.2} \pm {0.1}$</td></tr><tr><td>EObias</td><td>${64.2} \pm {0.2}$</td><td>${58.4} \pm {0.3}$</td><td>${71.4} \pm {0.2}$</td><td/><td/><td/><td/><td/></tr><tr><td rowspan="4">coauth-MAG-Geology</td><td>Unweighted</td><td>${55.1} \pm {7.7}$</td><td>${60.1} \pm {7.2}$</td><td>${74.8} \pm {4.8}$</td><td rowspan="2">Unweighted</td><td rowspan="2">${57.0} \pm {6.9}$</td><td>${48.1} \pm {7.8}$</td><td>${52.1} \pm {7.3}$</td><td>${50.7} \pm {3.5}$</td><td>${54.6} \pm {6.3}$</td><td>${55.3} \pm {7.4}$</td></tr><tr><td>Counts</td><td>${54.0} \pm {5.9}$</td><td>${74.1} \pm {3.6}$</td><td>${78.6} \pm {4.4}$</td><td/><td/><td/><td/><td/></tr><tr><td>LObias</td><td>${75.9} \pm {5.0}$</td><td>${84.2} \pm {2.9}$</td><td>${73.9} \pm {4.3}$</td><td rowspan="2">Weighted</td><td rowspan="2">${88.5} \pm {3.2}$</td><td>${52.0} \pm {7.7}$</td><td>${52.7} \pm {7.3}$</td><td>${54.9} \pm {4.5}$</td><td>${56.1} \pm {5.9}$</td><td>${55.3} \pm {4.8}$</td></tr><tr><td>EObias</td><td>${51.3} \pm {4.7}$</td><td>${76.1} \pm {4.3}$</td><td>${72.8} \pm {6.1}$</td><td/><td/><td/><td/><td/></tr></table>
330
+
papers/LOG/LOG 2022/LOG 2022 Conference/UiBiLRXR0G/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,324 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § EFFECTIVE HIGHER-ORDER LINK PREDICTION AND RECONSTRUCTION FROM SIMPLICIAL COMPLEX EMBEDDINGS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Methods that learn graph topological representations have become the usual choice for extracting features that help solve machine learning tasks on graphs. In particular, low-dimensional encodings of graph nodes can be exploited in tasks such as link prediction and network reconstruction, where pairwise node embedding similarity is interpreted as the likelihood of an edge incidence. The presence of polyadic interactions in many real-world complex systems is leading to the emergence of representation learning techniques able to describe systems that include such polyadic relations. Despite this, their application to estimating the likelihood of tuple-wise edges is still underexplored.
12
+
13
+ Here we focus on the reconstruction and prediction of simplices (higher-order links) in the form of classification tasks, where the likelihood of interacting groups is computed from the embedding features of a simplicial complex. Using similarity scores based on geometric properties of the learned metric space, we show how the resulting node-level and group-level feature embeddings are beneficial to predict unseen simplices, as well as to reconstruct the topology of the original simplicial structure, even when training data contain only records of lower-order simplices.
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Network science provides the dominant paradigm for the study of the structure and dynamics of complex systems, thanks to its focus on their underlying relational properties. In data mining applications, topological node embeddings of networks are standard representation learning methods that help solve downstream tasks such as network reconstruction, link prediction, and node classification [1]. Complex interacting systems have usually been represented as graphs. This representation, however, suffers from the obvious limitation that it can only capture pairwise relations among nodes, while many systems are characterized by group interactions [2]. Indeed, simplicial complexes are generalized graphs that encode group-wise edges as sets of nodes, or simplices, with the additional requirement that any subset of nodes forming a simplex must also itself form a simplex belonging to the complex. Unlike alternative higher-order representations, e.g. hypergraphs, which also overcome the dyadic limitation of the graph formalism [3], the simplicial downward closure constraint works particularly well when studying systems with subset dependencies, such as brain networks and social networks (e.g. people interacting as a group also engage in pairwise interactions).
18
+
19
+ Due to the increased interest in studying complex systems as generalized graph structures, topological representation learning techniques on simplicial complexes are also emerging as tools to solve learning tasks on systems with polyadic relations. In particular, here we focus on tasks based on the reconstruction and prediction of higher-order edges. While for standard graphs these problems have been extensively studied with traditional machine learning approaches $\left\lbrack {4,5}\right\rbrack$ and representation learning [6,7], the literature for their higher-order counterparts is more limited. In fact, reconstruction and prediction of higher-order interactions have been investigated mainly starting from pairwise data $\left\lbrack {8,9}\right\rbrack$ or time series $\left\lbrack {{10},{11}}\right\rbrack$ , without particular attention to representation learning methods. Here we study low-dimensional embeddings of simplicial complexes for link prediction and reconstruction in higher-order networks. Our main contributions are: (i) we introduce an embedding framework to compute low-rank representations of simplicial complexes; (ii) we formalize network reconstruction and link prediction tasks for polyadic graph structures; and (iii) we show that simplicial similarities computed from embedding representations outperform classical network-based reconstruction and link prediction methods. Since the problems of link prediction and network reconstruction are not yet well defined in the literature for the higher-order case, none of the available state-of-the-art methods has previously been evaluated on both these tasks. In this paper we properly delineate the formal steps to perform higher-order link prediction and reconstruction, and we provide a comprehensive evaluation of different methods, adding several variations such as the use of multi-node proximities and simplicial weighted random walks.
20
+
21
+ § 2 RELATED WORK
22
+
23
+ Representation Learning beyond Graphs. Representation learning for graphs [1] makes it possible to obtain node feature vectors that convey information useful for solving machine learning tasks. Most methods fit in one of two categories: node embeddings and graph neural networks (GNNs). Node embedding methods explicitly learn low-dimensional representations of nodes, typically from a self-supervised task, while GNN methods implicitly generate vector representations of nodes by combining information about node neighborhoods via message passing operations, e.g. graph convolutions and graph attention networks [12]. In hypergraph settings, node embedding methods typically leverage hyperedge relations similarly to what is done for standard graph edges: for example, spectral decomposition [13], random walk sampling [14, 15], autoencoders [16]. Recently, Maleki et al. [17] proposed a hierarchical approach for scalable node embedding in hypergraphs. In simplicial complexes, random walks over simplices are exploited to compute embeddings of interacting groups with uniform or mixed sizes [18, 19], extending hypergraph methods that compute only node representations. Extensions of GNNs have been proposed to generalize convolution and attention mechanisms to hypergraphs [20-22] and simplicial complexes [23-25].
24
+
25
+ Link Prediction and Network Reconstruction beyond Graphs. The link prediction task [4] predicts the presence of unobserved links in a graph by estimating their occurrence likelihood, while network reconstruction consists in inferring a graph structure from indirect data [26] or from missing or noisy observations [27]. In this work, we will use latent embedding variables to assess the reconstruction and prediction of a given edge, relying on similarity indices. In higher-order systems, link prediction has been investigated primarily for hypergraphs, in particular with methods based on matrix factorization [28, 29], the resource allocation metric [30], loop structure [31], and representation learning [32,33]. The higher-order link prediction problem was introduced in a temporal setting by Benson et al. [9] (reformulating the term simplicial closure [34]), while Liu et al. [35] studied the prediction of several higher-order patterns with neural networks. Yoon et al. [36] investigated the use of opportune $k$ -order projected graphs to represent group interactions, and Patil et al. [37] analyzed the problem of finding relevant candidate hyperlinks as negative examples. Despite these early results, the reconstruction of higher-order interactions is an ongoing challenge: for example, Young et al. [8] proposed a Bayesian inference method to distinguish between hyperedges and combinations of low-order edges in pairwise data, while Musciotto et al. [38] developed a filtering approach to detect statistically significant hyperlinks in hypergraph data. In addition, some works studied approaches for the inference of higher-order structures from time series data $\left\lbrack {{10},{11}}\right\rbrack$ .
26
+
27
+ § 3 METHODS AND TASKS DESCRIPTION
28
+
29
+ § 3.1 HIGHER-ORDER SYSTEMS AND SIMPLICIAL COMPLEXES
30
+
31
+ Simplicial complexes can be considered as generalized graphs that include higher-order interactions. Given a set of nodes $\mathcal{V}$ , a simplicial complex $\mathcal{K}$ is a collection of subsets of $\mathcal{V}$ , called simplices, satisfying downward closure: for any simplex $\sigma \in \mathcal{K}$ , any other simplex $\tau$ which is a subset of $\sigma$ belongs to the simplicial complex $\mathcal{K}$ (for any $\sigma \in \mathcal{K}$ and $\tau \subset \sigma$ , we also have $\tau \in \mathcal{K}$ ). This constraint makes simplicial complexes different from hypergraphs, for which there is no prescribed relation between hyper-edges. A simplex $\sigma$ is called a $k$ -simplex if $\left| \sigma \right| = k + 1$ , where $k$ is its dimension (or order). A simplex $\sigma$ is a co-face of $\tau$ (or equivalently, $\tau$ is a face of $\sigma$ ) if $\tau \subset \sigma$ and $\dim \left( \tau \right) = \dim \left( \sigma \right) - 1$ . We denote with ${n}_{k}$ the number of $k$ -simplices in $\mathcal{K}$ .
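+
+ As a small, self-contained illustration (not taken from the paper's code), the downward-closure constraint can be checked directly on a collection of candidate simplices:
+
+ ```python
+ from itertools import combinations
+
+ def is_simplicial_complex(simplices):
+     """Check downward closure: every proper non-empty subset of a simplex must
+     itself belong to the collection."""
+     K = {frozenset(s) for s in simplices}
+     for sigma in K:
+         for r in range(1, len(sigma)):
+             if any(frozenset(tau) not in K for tau in combinations(sigma, r)):
+                 return False
+     return True
+
+ # the 2-simplex {a, b, c} requires all of its edges and vertices to be present
+ assert is_simplicial_complex([("a",), ("b",), ("c",),
+                               ("a", "b"), ("a", "c"), ("b", "c"),
+                               ("a", "b", "c")])
+ assert not is_simplicial_complex([("a", "b", "c")])
+ ```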
32
+
33
+ Each simplicial complex can be unfolded in its canonical graph of inclusions, called Hasse Diagram (HD): formally, the Hasse Diagram $\mathcal{H}\left( \mathcal{K}\right)$ of complex $\mathcal{K}$ is the multipartite graph $\mathcal{H}\left( \mathcal{K}\right) = \left( {{\mathcal{V}}_{\mathcal{H}},{\mathcal{E}}_{\mathcal{H}}}\right)$ , such that each node ${v}_{\sigma } \in {\mathcal{V}}_{\mathcal{H}}$ corresponds to a simplex $\sigma \in \mathcal{K}$ , and two simplices $\sigma ,\tau \in \mathcal{K}$ are connected by the undirected edge $\left( {{v}_{\sigma },{v}_{\tau }}\right) \in {\mathcal{E}}_{\mathcal{H}}$ iff $\sigma$ is a co-face of $\tau$ . In other words, each simplicial order corresponds to a graph layer in $\mathcal{H}\left( \mathcal{K}\right)$ , and two simplices in different layers are linked if they are (upper/lower) adjacent in the original simplicial complex.
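+
+ A minimal sketch of this construction, assuming the complex is given as a downward-closed collection of simplices and using networkx for the resulting multipartite inclusion graph (illustrative, not the reference implementation):
+
+ ```python
+ from itertools import combinations
+ import networkx as nx
+
+ def hasse_diagram(simplices):
+     """Build H(K): one node per simplex, and an undirected edge between sigma
+     and tau whenever tau is a face of sigma with dim(tau) = dim(sigma) - 1."""
+     K = {frozenset(s) for s in simplices}
+     H = nx.Graph()
+     for sigma in K:
+         H.add_node(sigma, dim=len(sigma) - 1)
+         for tau in combinations(sigma, len(sigma) - 1):   # codimension-1 faces
+             if frozenset(tau) in K:
+                 H.add_edge(sigma, frozenset(tau))
+     return H
+ ```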
34
+
35
+ § 3.2 SIMPLICIAL COMPLEX EMBEDDING
36
+
37
+ Given a complex $\mathcal{K}$ , we want to learn a mapping function $f : \mathcal{K} \rightarrow {\mathbb{R}}^{d}$ from elements of $\mathcal{K}$ to a $d$ -dimensional low-rank feature space $\left( {d \ll \left| \mathcal{K}\right| }\right)$ . The mapping $f$ must preserve the topological information incorporated in the simplicial complex, in such a way that adjacency relations are translated into geometric distances between data points of the embedding space. Here we propose that representations of simplices can be obtained by random-walking over the inclusion hierarchy of $\mathcal{K}$ and learning the embedding space according to the simplex proximity observed through such walks, preserving higher-order information about the topological structure of the complex itself. We discuss the adopted random walk sampling procedure later, in the Experimental Setup (Section 4), while here we focus on the optimization problem.
38
+
39
+ Inspired by language models such as WORD2VEC [39], we start from a corpus $\mathcal{W} = \left\{ {{\sigma }_{1},\ldots ,{\sigma }_{\left| \mathcal{W}\right| }}\right\}$ of simplicial random walks, and we aim to maximize the log-likelihood of a simplex ${\sigma }_{i}$ given the multi-set ${\mathcal{C}}_{T}\left( {\sigma }_{i}\right)$ of context simplices within a distance $T$ , i.e. ${\mathcal{C}}_{T}\left( {\sigma }_{i}\right) = \left\{ {{\sigma }_{i - T}\ldots {\sigma }_{i + 1},{\sigma }_{i + 1}\ldots {\sigma }_{i + T}}\right\}$ . The objective function is as follows:
40
+
41
+ $$
42
+ \mathop{\max }\limits_{f}\mathop{\sum }\limits_{{i = 1}}^{\left| \mathcal{W}\right| }\log \Pr \left( {{\sigma }_{i} \mid \left\{ {f\left( \tau \right) : \tau \in {\mathcal{C}}_{T}\left( {\sigma }_{i}\right) }\right\} }\right) \tag{1}
43
+ $$
44
+
45
+ where the probability is the soft-max $\Pr \left( {\sigma \mid \{ f\left( \tau \right) ,\ldots \} }\right) \propto \exp \left\lbrack {\mathop{\sum }\limits_{{\tau \in {\mathcal{C}}_{T}\left( \sigma \right) }}f\left( \sigma \right) \cdot f\left( \tau \right) }\right\rbrack$ , normalized via the standard partition function ${Z}_{i} = \mathop{\sum }\limits_{{\kappa \in \mathcal{K}}}\exp \left\lbrack {\mathop{\sum }\limits_{{\tau \in {\mathcal{C}}_{T}\left( {\sigma }_{i}\right) }}f\left( \kappa \right) \cdot f\left( \tau \right) }\right\rbrack$ , and it represents the likelihood of observing simplex $\sigma$ given context simplices in ${\mathcal{C}}_{T}\left( \sigma \right)$ . This leads to the maximization of the function:
46
+
47
+ $$
48
+ \mathop{\max }\limits_{f}\mathop{\sum }\limits_{{i = 1}}^{\left| \mathcal{W}\right| }\left\lbrack {-\log {Z}_{i} + \mathop{\sum }\limits_{{\tau \in {\mathcal{C}}_{T}\left( {\sigma }_{i}\right) }}f\left( {\sigma }_{i}\right) \cdot f\left( \tau \right) }\right\rbrack \tag{2}
49
+ $$
50
+
51
+ Our method of choice -SIMPLEX2VEC [19]- is implemented by sampling random walks from $\mathcal{H}\left( \mathcal{K}\right)$ and learning simplicial embeddings with the continuous-bag-of-words (CBOW) model [39]. To overcome the expensive computation of ${Z}_{i}$ , we train CBOW with negative sampling. While SIMPLEX2VEC is conceptually similar to $k$ -SIMPLEX2VEC [18], there are important differences: (i) by fixing $k$ as the simplex dimension, $k$ -SIMPLEX2VEC uses exclusively upper connections through $\left( {k + 1}\right)$ -cofaces and lower connections through $\left( {k - 1}\right)$ -faces to compute random walk transitions; (ii) random walks focus on a fixed dimension, allowing the embedding computation only for $k$ -simplices. SIMPLEX2VEC instead computes embedding representations for all simplex orders simultaneously, because the random walks are sampled from the entire Hasse Diagram.
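+ In practice, once the walks on the Hasse Diagram are available as sequences of simplex identifiers, the CBOW objective with negative sampling can be fitted with an off-the-shelf implementation. The sketch below is illustrative and assumes Gensim 4 or newer; it uses the window $T = 10$, 5 epochs and negative sampling mentioned in Section 4, on a toy corpus:
+
+ ```python
+ from gensim.models import Word2Vec
+
+ # Toy corpus: each walk is a sequence of simplex identifiers (node tuples encoded as strings).
+ walks = [["0", "0|1", "1", "1|2", "1|2|3"], ["2", "1|2", "1", "0|1", "0"]]
+
+ model = Word2Vec(
+     sentences=walks,
+     vector_size=16,   # embedding dimension d (toy value; the paper grid-searches 8..1024)
+     window=10,        # context size T
+     sg=0,             # CBOW objective of Eq. (1)-(2)
+     negative=5,       # negative sampling replaces the exact partition function Z_i
+     min_count=0,
+     epochs=5,
+ )
+ emb = model.wv["1|2"]  # learned representation of the 1-simplex (1, 2)
+ ```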
52
+
53
+ § 3.3 RECONSTRUCTION AND PREDICTION OF HIGHER-ORDER INTERACTIONS
54
+
55
+ Network reconstruction [6] and link prediction [7] are common tasks performed to assess the quality of node embeddings. In standard graphs, most popular methods are based on similarity metrics, used to rank pairs of nodes (candidate links), and can be obtained from local (e.g. number of common neighbors) or global indices (e.g. number of connecting paths). Here we formalize these tasks as binary classification problems for simplicial complexes, based on latent embedding indices that we define in section 4.
56
+
57
+ Given a complex $\mathcal{K}$ and feature representations learned from it, by reconstruction of higher-order interactions we mean the task of correctly classifying whether a group of $k + 1$ nodes $s = \left( {{i}_{0},{i}_{1},\ldots ,{i}_{k}}\right)$ is a $k$ -simplex of $\mathcal{K}$ or not. This task is intended to study whether the information encoded in the embedding space preserves the original training structure. More specifically, we consider $\mathcal{S} = \{ s \in \mathcal{K} : \left| s\right| > 1\}$ as the set of interactions (simplices with order greater than 0) that belong to the simplicial complex $\mathcal{K}$ . Given any group $s = \left( {{i}_{0},{i}_{1},\ldots ,{i}_{k}}\right)$ , in the reconstruction task we aim to discern whether the elements in $s$ interact within the same simplex, so that $s \in \mathcal{S}$ , or $s$ is a group of lower-order simplices, i.e. $s \notin \mathcal{S}$ (but subsets of $s$ may be existing simplices). When group $s$ interacts within a simplex, we say that $s$ is closed; otherwise it is open. By higher-order interaction prediction we mean instead the task of predicting whether an interaction that has not been observed at a certain time (i.e. the simplex has not yet been added to the complex) will appear in the future; we denote by ${\mathcal{S}}^{ * }$ the set of such future interactions. This task is intended to study the generalization ability of embedding methods in predicting unseen interactions that are likely to occur. Given any open configuration $\bar{s} \in {\mathcal{U}}_{\mathcal{S}}$ coming from the set of unobserved interactions ${\mathcal{U}}_{\mathcal{S}} = \left\{ {s \in {2}^{\mathcal{V}} : \left| s\right| > 1,s \notin \mathcal{S}}\right\}$ , i.e. the complement ${}^{1}$ of $\mathcal{S}$ , the prediction task is to classify which groups will give rise to a simplicial closure in the future $\left( {\bar{s} \in {\mathcal{S}}^{ * }}\right)$ versus those that will remain open $\left( {\bar{s} \in {\mathcal{U}}_{\mathcal{S}} \smallsetminus {\mathcal{S}}^{ * }}\right)$ .
58
+
59
+ <graphics>
60
+
61
+ Figure 1: (Left) Schematic view of SIMPLEX2VEC: starting from simplicial sequential data (a), we construct a simplicial complex on whose Hasse Diagram we sample random walks (b) with different weighting (c), from which we construct the embedding space (d). (Right) Schematic description of classification tasks (reconstruction and prediction) in the case of 3-node group interactions.
62
+
63
+ § 4 EXPERIMENTAL SETUP
64
+
65
+ Here we describe the experimental setup used to quantify the accuracy of SIMPLEX2VEC in reconstructing and predicting higher-order interactions. In the next paragraphs we illustrate which datasets we use, how we sample non-existing hyperlinks, and how we use them in downstream tasks.
66
+
67
+ § 4.1 FEATURE LEARNING PIPELINE
68
+
69
+ Consider a collection $\mathcal{D}$ of time-stamped interactions ${\left\{ \left( {s}_{i},{t}_{i}\right) ,{s}_{i} \in \mathcal{F},{t}_{i} \in \mathcal{T}\right\} }_{i = 1\ldots N}$ , where each ${s}_{i} = \left( {{i}_{0},{i}_{1},\ldots ,{i}_{k}}\right)$ is a $k$ -simplex on the node set $\mathcal{V}$ , $\mathcal{F}$ is the set of distinct simplices and $\mathcal{T}$ is the set of time-stamps at which interactions occur. We split $\mathcal{D}$ into two subsets, ${\mathcal{D}}^{\text{ train }}$ and ${\mathcal{D}}^{\text{ test }}$ , according to the 80th percentile ${t}^{\left( {80}\right) }$ of time-stamps, i.e. ${\mathcal{D}}^{\text{ train }} = \left\{ {\left( {{s}_{i},{t}_{i}}\right) \in \mathcal{D},{t}^{\left( 0\right) } \leq {t}_{i} \leq {t}^{\left( {80}\right) }}\right\}$ and ${\mathcal{D}}^{\text{ test }} = \left\{ {\left( {{s}_{i},{t}_{i}}\right) \in \mathcal{D},{t}^{\left( {80}\right) } < {t}_{i} \leq {t}^{\left( {100}\right) }}\right\}$ , where ${t}^{\left( 0\right) }$ and ${t}^{\left( {100}\right) }$ are the 0th and the 100th percentiles of the set $\mathcal{T}$ . We build from ${\mathcal{D}}^{\text{ train }}$ a simplicial complex ${\mathcal{K}}_{\mathcal{D}}^{\text{ train }}$ disregarding time-stamps. In all the experiments we train SIMPLEX2VEC ${}^{2}$ on the Hasse Diagram $\mathcal{H}\left( {\mathcal{K}}_{\mathcal{D}}^{\text{ train }}\right)$ , to obtain $d$ -dimensional feature representations ${\mathbf{v}}_{\sigma } \in {\mathbb{R}}^{d}$ of every simplex $\sigma \in {\mathcal{K}}_{\mathcal{D}}^{\text{ train }}$ . Due to the combinatorial explosion of the number of simplicial vertices in the HD, we constrain the maximum order of the interactions to $M \in \{ 1,2,3\}$ in a reduced Hasse diagram ${\mathcal{H}}_{M}\left( {\mathcal{K}}_{\mathcal{D}}\right)$ , referred to simply as ${\mathcal{H}}_{M}$ . Consequently, every simplex with dimension larger than $m = \max M$ is represented in ${\mathcal{H}}_{M}$ by node combinations of size up to $m$ . In Figure 1 (Left) we show the feature learning process explained above. We consider several weighting schemes ${}^{3}$ [19] to bias the random walks between the vertices $\left\{ {v}_{\tau }\right\}$ of the HD:
70
+
71
+ * Unweighted. The jump to a given ${v}_{\tau }$ is made by uniform sampling among the set of neighbors ${\mathcal{N}}_{\sigma } = {\mathcal{N}}_{\sigma }^{ \downarrow } \cup {\mathcal{N}}_{\sigma }^{ \uparrow }$ of the node ${v}_{\sigma }$ in the HD (corresponding to faces ${\mathcal{N}}_{\sigma }^{ \downarrow }$ and co-faces ${\mathcal{N}}_{\sigma }^{ \uparrow }$ of the simplex $\sigma$ in the simplicial complex).
72
+
73
+ ${}^{1}$ Here we used ${2}^{\mathcal{V}}$ to identify the power set of the vertices.
74
+
75
+ ${}^{2}$ We used the word2vec implementation from Gensim (https://radimrehurek.com/gensim/) and ran the CBOW model with window $T = {10}$ and 5 epochs.
76
+
77
+ ${}^{3}$ For every case, we sample 10 random walks of length 80 per simplex as input to SIMPLEX2VEC.
78
+
79
+ * Counts. To every node ${v}_{\tau }$ of the HD we attach an empirical weight ${\omega }_{\tau }$ , i.e. the number of times that $\tau$ appears in the data $\mathcal{D}$ . The probability to jump from $\sigma$ to $\tau$ is given by ${p}_{\sigma \tau } = \frac{{\omega }_{\tau }}{\mathop{\sum }\limits_{{r \in {\mathcal{N}}_{\sigma }}}{\omega }_{r}}$ (an illustrative sampling sketch based on these counts follows this list).
80
+
81
+ * LObias. With the same definition of transition probability as before, the weight ${\omega }_{\tau }$ is defined so as to bias the random walker towards low-order simplices: as explained in [19], every time an $n$ -simplex $\sigma$ appears in the data its weight is increased by 1, and the weight of any subface of dimension $n - k$ is increased by ${\left( \frac{\left( {n - k + 1}\right) !}{\left( {n + 1}\right) !}\right) }^{-1}$ . There is an equivalent scheme for biasing towards high-order simplices, but we empirically observed that the performance of the former is slightly better.
82
+
83
+ * EQbias. Starting from the weight set $\left\{ {\omega }_{\sigma }\right\}$ computed with empirical counts, we attach additional weights $\left\{ {\omega }_{\sigma \tau }\right\}$ to the Hasse diagram’s edges in order to have equal probability of choosing neighbors from ${\mathcal{N}}_{\sigma }^{ \downarrow }$ or ${\mathcal{N}}_{\sigma }^{ \uparrow }$ . Transition weights for the downward (upward) step $\left( {\sigma ,\tau }\right)$ are defined by normalizing ${\omega }_{\tau }$ with respect to all the downward (upward) weights, ${\omega }_{\sigma \tau } \propto \frac{{\omega }_{\tau }}{\mathop{\sum }\limits_{{r \in {\mathcal{N}}_{\sigma }^{ \downarrow \left( \uparrow \right) }}}{\omega }_{r}}$ , with the probability of the step given by ${p}_{\sigma \tau } = \frac{{\omega }_{\sigma \tau }}{\mathop{\sum }\limits_{{r \in {\mathcal{N}}_{\sigma }^{ \downarrow } \cup {\mathcal{N}}_{\sigma }^{ \uparrow }}}{\omega }_{\sigma r}}$ .
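+ The following minimal sketch (an illustrative reimplementation, not the released SIMPLEX2VEC code) shows how walks biased by the Counts scheme can be sampled from a toy Hasse Diagram, with each simplex carrying an empirical count stored as the node attribute `w`:
+
+ ```python
+ import itertools
+ import random
+ import networkx as nx
+
+ # Tiny Hasse Diagram of the complex {(0,1,2)} with all its faces, unit counts on every simplex.
+ simplices = [c for k in range(1, 4) for c in itertools.combinations((0, 1, 2), k)]
+ H = nx.Graph()
+ H.add_nodes_from(simplices, w=1)
+ for s in simplices:
+     for f in itertools.combinations(s, len(s) - 1):
+         if f:
+             H.add_edge(f, s)
+
+ def sample_walk(H, start, length=80):
+     # "Counts" scheme: jump to a neighbor with probability proportional to its empirical weight w.
+     walk = [start]
+     for _ in range(length - 1):
+         nbrs = list(H.neighbors(walk[-1]))
+         weights = [H.nodes[v]["w"] for v in nbrs]
+         walk.append(random.choices(nbrs, weights=weights, k=1)[0])
+     return walk
+
+ # 10 walks of length 80 per simplex, as in the experimental setup.
+ walks = [sample_walk(H, v) for v in H.nodes for _ in range(10)]
+ ```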
84
+
85
+ § 4.2 DATA
86
+
87
+ We use time-stamped data, indicated above by the collection $\mathcal{D}$ , consisting of sequences of interactions in empirical systems from different domains [9]: face-to-face proximity (contact-high-school and contact-primary-school), email exchange (email-Eu and email-Enron), online tags (tags-math-sx), US congress bills (congress-bills), coauthorships (coauth-MAG-History and coauth-MAG-Geology). When the datasets came in pairwise format, we associated simplices to cliques obtained by integrating edge information over short time intervals [9]. We considered, for any dataset, only nodes in the largest connected component of the projected graph (two nodes of the projected graph are connected if they appear in at least one simplex of $\mathcal{D}$ ). In addition, to lighten the embedding computations, for the congress, tags and coauth datasets we apply a filtering approach in order to reduce their sizes: similarly to the Core set of [40], here we selected the nodes incident to at least 5 cliques in every temporal quartile (except in coauth-MAG-History, where we applied a threshold of 1 clique per temporal quartile). In the Appendix we report a table with statistics for every dataset after the described pre-processing steps.
88
+
89
+ § 4.3 SIMILARITY SCORES AND BASELINE METRICS
90
+
91
+ Using the learned simplicial embeddings, we assign to each higher-order link candidate $\delta$ a likelihood score based on the average pairwise inner product among the 0-simplex embeddings of its nodes $\left\{ {{\mathbf{v}}_{i},i \in \delta }\right\}$ or among higher-order $k$ -simplices $\left\{ {{\mathbf{v}}_{\sigma },\sigma \subset \delta }\right\}$ :
92
+
93
+ $$
94
+ {s}_{k}\left( \delta \right) = \frac{1}{\left| \binom{\delta }{2} \right| }\mathop{\sum }\limits_{{\left( {\sigma ,\tau }\right) \in \binom{\delta }{2} }}{\mathbf{v}}_{\sigma } \cdot {\mathbf{v}}_{\tau } \tag{3}
95
+ $$
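+ Operationally, Eq. 3 is the mean inner product over all unordered pairs of embeddings associated with the candidate group; a short sketch with illustrative names:
+
+ ```python
+ from itertools import combinations
+ import numpy as np
+
+ def similarity_score(vectors):
+     # vectors: embeddings of the k-simplices contained in the candidate group delta.
+     pairs = list(combinations(vectors, 2))
+     return float(np.mean([np.dot(u, v) for u, v in pairs]))
+
+ # Example: a 3-node candidate scored with node (0-simplex) embeddings.
+ delta_vectors = [np.random.rand(16) for _ in range(3)]
+ s0 = similarity_score(delta_vectors)
+ ```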
96
+
97
+ To assess the reconstruction and prediction performances of the embedding model, we compare likelihood scores defined in Eq. 3 with other baseline metrics:
98
+
99
+ * Projected metrics. Local and global node-level features computed from the projected graph. The projected graph is defined as ${\mathcal{G}}_{\mathcal{D}}^{\text{ train }} = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V}$ is the set of 0-simplices of the complex ${\mathcal{K}}_{\mathcal{D}}^{\text{ train }}$ and $\mathcal{E} = \left\{ {s \in {\mathcal{K}}_{\mathcal{D}}^{\text{ train }} : \left| s\right| = 2}\right\}$ is the set of links between training nodes that interacted in at least one simplex of ${\mathcal{D}}^{\text{ train }}$ . Moreover, edges $\left( {i,j}\right)$ may be accompanied by weights equal to the number of simplices of $\mathcal{D}$ containing both $i$ and $j$ . For triangle-related tasks we considered several 3-way metrics computed with the code ${}^{4}$ released by [9] (we report the best performing ones, i.e. Harmonic mean, Geometric mean, Katz, PPR, Logistic Regression). We also exploited the pair-wise random walk measure ${\mathrm{{PPMI}}}_{T}$ [41] for tetrahedra-related tasks, where 4-way implementations of the above listed scores are not available. PPMI is widely used as a similarity function for node embeddings, and varying the window size $T$ allows taking into account both local and global information.
100
+
101
+ ${}^{4}$ https://github.com/arbenson/ScHoLP-Tutorial
102
+
103
+ Table 1: Number of unobserved configurations obtained with the sampling approach in different datasets.
104
+
105
+ Unseen 3-node configurations sampled from ${\mathcal{U}}_{\Delta }$ , grouped by ${n}_{\mathcal{E}}$ (counts $\times {10}^{3}$ ):
+
+ <table><tr><td>Dataset</td><td>${n}_{\mathcal{E}} = 0$</td><td>${n}_{\mathcal{E}} = 1$</td><td>${n}_{\mathcal{E}} = 2$</td><td>${n}_{\mathcal{E}} = 3$</td></tr><tr><td>contact-high-school</td><td>3,476</td><td>1,150</td><td>107</td><td>25</td></tr><tr><td>email-Eu</td><td>8,096</td><td>1,392</td><td>1,654</td><td>186</td></tr><tr><td>tags-math-sx</td><td>6,229</td><td>2,473</td><td>5,467</td><td>1,725</td></tr><tr><td>coauth-MAG-History</td><td>9,958</td><td>30</td><td>60</td><td>2</td></tr></table>
+
+ Unseen 4-node configurations sampled from ${\mathcal{U}}_{\Theta }$ , grouped by ${n}_{\Delta }$ (counts $\times {10}^{3}$ ):
+
+ <table><tr><td>Dataset</td><td>${n}_{\Delta } = 0$</td><td>${n}_{\Delta } = 1$</td><td>${n}_{\Delta } = 2$</td><td>${n}_{\Delta } = 3$</td><td>${n}_{\Delta } = 4$</td></tr><tr><td>contact-primary-school</td><td>17,683</td><td>396</td><td>19</td><td>2</td><td>&lt; 1</td></tr><tr><td>email-Enron</td><td>7,048</td><td>400</td><td>28</td><td>2</td><td>&lt; 1</td></tr><tr><td>congress-bills</td><td>1,462</td><td>1,264</td><td>325</td><td>149</td><td>80</td></tr><tr><td>coauth-MAG-Geology</td><td>15,473</td><td>593</td><td>30</td><td>3</td><td>&lt; 1</td></tr></table>
146
+
147
+ * Spectral embedding. Features from the spectral decomposition of the combinatorial $k$ -Laplacian [42] (an illustrative sketch of this construction is given after this list). Given the set of boundary matrices $\left\{ {\mathbf{B}}_{k}\right\}$ , which incorporate incidence relationships between $k$ -simplices and their co-faces ${}^{5}$ , the unweighted $k$ -Laplacian is ${\mathbf{L}}_{k} = {\mathbf{B}}_{k}^{\mathrm{T}}{\mathbf{B}}_{k} + {\mathbf{B}}_{k + 1}{\mathbf{B}}_{k + 1}^{\mathrm{T}}$ . We also consider the weighted $k$ -Laplacian [43], calculated with the substitutions ${\mathbf{B}}_{k} \rightarrow {\mathbf{W}}_{k - 1}^{-1/2}{\mathbf{B}}_{k}{\mathbf{W}}_{k}^{1/2}$ , where every ${\mathbf{W}}_{k}$ is a diagonal matrix containing the empirical counts of the $k$ -simplices ${}^{6}$ . Following the same procedure used in graph spectral embeddings [44], we compute the eigenvector matrix ${\mathbf{Q}}_{k} \in {\mathbb{R}}^{{n}_{k} \times d}$ corresponding to the $d$ smallest nonzero eigenvalues of ${\mathbf{L}}_{k}$ and we use the rows of ${\mathbf{Q}}_{k}$ as $d$ -dimensional spectral embeddings for the $k$ -simplices.
148
+
149
+ * k-SIMPLEX2VEC embedding. Features learned with an extension of NODE2VEC [18] that samples random walks from higher-order transition probabilities ${}^{7}$ (e.g. edge-to-edge occurrences) in a single simplicial dimension. This model is based on sampling from a uniform structure without taking into account simplicial weights.
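+ As a concrete reading of the spectral baseline above (unweighted case), the following sketch builds ${\mathbf{L}}_{k}$ from toy boundary matrices and keeps the eigenvectors of its $d$ smallest non-zero eigenvalues; the sign conventions and helper names are illustrative, not taken from any released code:
+
+ ```python
+ import numpy as np
+
+ def k_laplacian(B_k, B_k1):
+     # Unweighted combinatorial k-Laplacian: L_k = B_k^T B_k + B_{k+1} B_{k+1}^T
+     return B_k.T @ B_k + B_k1 @ B_k1.T
+
+ def spectral_embedding(L_k, d, tol=1e-8):
+     vals, vecs = np.linalg.eigh(L_k)          # eigenvalues in ascending order
+     nonzero = np.where(vals > tol)[0]         # skip the (near-)zero eigenvalues
+     return vecs[:, nonzero[:d]]               # rows are d-dimensional k-simplex embeddings
+
+ # Filled triangle on nodes {0,1,2}: B1 is the node-edge incidence, B2 the edge-triangle incidence.
+ B1 = np.array([[-1, -1, 0], [1, 0, -1], [0, 1, 1]])
+ B2 = np.array([[1], [-1], [1]])
+ Q1 = spectral_embedding(k_laplacian(B1, B2), d=2)  # embeddings of the three edges
+ ```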
150
+
151
+ Likelihood scores of candidate higher-order links are assigned for the embedding models with the same inner product metric of Eq. 3 used for SIMPLEX2VEC embeddings. In $k$ -SIMPLEX2VEC, we sample the same number of random walks per simplex, with the same length, as for SIMPLEX2VEC.
152
+
153
+ § 4.4 DOWNSTREAM TASKS AND OPEN CONFIGURATIONS SAMPLING
154
+
155
+ Similarly to the standard graph case, non-existing links are usually the majority class and this imbalance is even more pronounced in the higher-order case [28] (in graphs we have $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{2}\right)$ potential links, but the number of potential hyperlinks/simplices is $\mathcal{O}\left( {2}^{\left| \mathcal{V}\right| }\right)$ in higher-order structures).
156
+
157
+ To compensate, we focus on 3-node and 4-node groups, reducing the number of potential hyperedges to $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{3}\right)$ and $\mathcal{O}\left( {\left| \mathcal{V}\right| }^{4}\right)$ respectively. For a concise presentation, in the next paragraphs we mainly describe the 3-way case. Hence, we restrict the set of possible interactions $\mathcal{S}$ to be exclusively the closed triangles $\Delta$ and the corresponding 3-node complementary set ${\mathcal{U}}_{\Delta }$ :
158
+
159
+ $$
160
+ \Delta = \left\{ {s \in {\mathcal{K}}_{\mathcal{D}}^{\text{ train }} : \left| s\right| = 3}\right\} ,\;{\mathcal{U}}_{\Delta } = \binom{\mathcal{V}}{3} \smallsetminus \Delta \tag{4}
161
+ $$
162
+
163
+ where we used $\binom{\mathcal{V}}{3}$ to denote the set of 3-node combinations of elements from $\mathcal{V}$ (we will instead denote by $\Theta$ and ${\mathcal{U}}_{\Theta }$ the observed and unobserved tetrahedra, respectively). In Figure 1 (Right) we sketch the task formulation based on 2-simplices (3-node configurations). Positive examples for the reconstruction and prediction tasks, respectively in $\Delta$ and ${\Delta }^{ * }$ , are trivial to find from empirical data. For negative examples, instead, enumerating all the unseen 3-node configurations in ${\mathcal{U}}_{\Delta }$ is typically unfeasible. Thus, we sample fixed-size groups of nodes to collect unseen instances for the classification tasks. In practice we sample stars, cliques and other network motifs [37] from the projected graph to collect group configurations with distinct densities of lower-order interactions. We also sample nodes independently to obtain (more likely) groups with unconnected units. For each sampled 3-node group $\delta$ we count the number of training edges ${n}_{\mathcal{E}}\left( \delta \right)$ it involves, and we analyse task performance for open configurations characterized by a fixed ${n}_{\mathcal{E}}\left( \delta \right) \in \{ 0,1,2,3\}$ . For 4-node configurations, instead of ${n}_{\mathcal{E}}\left( \delta \right)$ , we consider the number of training triangles ${n}_{\Delta }\left( \delta \right) \in \{ 0,1,2,3,4\}$ to differentiate open groups. We claim that the quantities ${n}_{\mathcal{E}}\left( \delta \right)$ and ${n}_{\Delta }\left( \delta \right)$ are related to the concept of hardness of non-hyperlinks [37], i.e. the propensity of open groups to be misclassified as closed interactions, and that they influence the difficulty of downstream classification tasks.
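+ A minimal sketch (with illustrative names) of how the hardness label ${n}_{\mathcal{E}}\left( \delta \right)$ of a sampled 3-node group can be computed from the projected training graph:
+
+ ```python
+ import random
+ import networkx as nx
+
+ def n_edges(G, delta):
+     # Number of training edges among the nodes of the candidate group delta.
+     return G.subgraph(delta).number_of_edges()
+
+ # Toy projected graph and one independently sampled 3-node group.
+ G = nx.erdos_renyi_graph(20, 0.2)
+ delta = random.sample(list(G.nodes), 3)
+ hardness = n_edges(G, delta)  # value in {0, 1, 2, 3}
+ ```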
164
+
165
+ ${}^{5}$ Boundary matrix ${\mathbf{B}}_{k} \in \{ 0, \pm 1{\} }^{{n}_{k - 1} \times {n}_{k}}$ requires the definition of oriented simplices, see [2] for additional details.
166
+
167
+ ${}^{6}$ Weights matrices satisfy the consistency relations ${\mathbf{W}}_{k} = \left| {\mathbf{B}}_{k + 1}\right| {\mathbf{W}}_{k + 1}$ , see [43] for further details.
168
+
169
+ ${}^{7}$ https://github.com/celiahacker/k-simplex2vec
170
+
171
+ <graphics>
172
+
173
+ Figure 2: Performance on 3-way link reconstruction (a, c) and prediction (b, d) for SIMPLEX2VEC and $k$ -SIMPLEX2VEC with: (a, b) similarity score ${s}_{0}$ varying the parameter ${n}_{\mathcal{E}}$ ; (c, d) score ${s}_{k}$ (with $k$ in $\{ 0,1\}$ ) on highly edge-dense open configurations $\left( {{n}_{\mathcal{E}} = 3}\right)$ . Metrics are computed on unweighted representations. The label imbalance in each sample is uniformly drawn between 1:1 and 1:5000. A schematic view of positive and negative examples is reported for each classification task.
174
+
175
+ In Table 1 we report the number of open configurations randomly selected from ${\mathcal{U}}_{\Delta }$ and ${\mathcal{U}}_{\Theta }$ . We sampled open configurations with ${10}^{7}$ extractions for each pattern (stars, cliques, motifs and independent node groups). The reported numbers refer to the exact number of negative examples available for the reconstruction tasks. For prediction tasks, instead, they may also contain future closed interactions (which are anyway unobserved in the training interval) that must be subtracted to obtain negative examples.
176
+
177
+ § 5 RESULTS AND DISCUSSION
178
+
179
+ With the previously described setup, we conducted experiments with 3-node configurations on the datasets contact-high-school, email-Eu, tags-math-sx, coauth-MAG-History and with 4-node configurations on the remaining ones. Due to the limited space available, we only report 3-way results, leaving the 4-way analysis to the Appendix.
180
+
181
+ We highlight the classification performance when using different embedding similarities ${s}_{k}\left( \delta \right)$ on open configurations with different ${n}_{\mathcal{E}}\left( \delta \right)$ (in the case of triangles, or ${n}_{\Delta }\left( \delta \right)$ for tetrahedra). For each case, i.e. triangle and tetrahedron classification, we examine: (i) the comparison with $k$ -SIMPLEX2VEC embeddings in the unweighted scenario, to study how different embedding models learn statistical patterns from the simplicial structure; (ii) the comparison with classical metrics in the weighted scenario, to study how the addition of empirical weights influences the embedding performance with respect to traditional weighted approaches.
182
+
183
+ Results are presented in terms of average binary classification scores, where test sets are generated from randomly chosen open and closed groups. Contrary to previous work $\left\lbrack {9,{33}}\right\rbrack$ , we evaluate models without a fixed class imbalance because we cannot access the entire negative classes (e.g. ${\mathcal{U}}_{\Delta }$ and ${\mathcal{U}}_{\Delta } \smallsetminus {\Delta }^{ * }$ in 3-way reconstruction and prediction, respectively). Instead, in every test set we uniformly sample the cardinality of the two classes to be between 1 and the number of available samples according to the task. We report calibrated AUC-PR scores [45] to account for the differences in class imbalance induced by this sampling choice ${}^{8}$ . In Figure 2, for a fair comparison with the other projected and embedding metrics, we report the similarity ${s}_{k}$ obtained by training SIMPLEX2VEC on ${\mathcal{H}}_{k + 1}$ . Best average scores are chosen for embedding models with a grid search on vector sizes in the list $\{ 8,{16},{32},{64},{128},{256},{512},{1024}\}$ .
184
+
185
+ ${}^{8}$ For this purpose we fix the reference class ratio ${\pi }_{0} = {0.5}$ . See [45] for additional details. We also tested the AUC-ROC metric with similar findings.
186
+
187
+ Table 2: Balanced AUC-PR scores for higher-order link reconstruction (Top) and prediction (Bottom) on 3-node groups, with the hardest class of negative configurations $\left( {{n}_{\mathcal{E}} = 3}\right)$ . Best scores for the different methods are reported in boldface; among these, the best overall score is blue shaded and the second best score is grey shaded.
188
+
189
+ (Top) Reconstruction:
+
+ <table>
+ <tr><td rowspan="2">Features</td><td rowspan="2">Type</td><td colspan="2">contact-high-school</td><td colspan="2">email-Eu</td><td colspan="2">tags-math-sx</td><td colspan="2">coauth-MAG-History</td></tr>
+ <tr><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td></tr>
+ <tr><td rowspan="3">Embedding, Hasse diagram ${\mathcal{H}}_{1}$</td><td>Unweighted</td><td>$57.5 \pm 1.9$</td><td>$51.4 \pm 1.2$</td><td>$72.0 \pm 0.3$</td><td>$64.0 \pm 0.2$</td><td>$66.7 \pm 0.2$</td><td>$57.1 \pm 0.1$</td><td>$41.1 \pm 0.9$</td><td>$75.5 \pm 1.1$</td></tr>
+ <tr><td>Counts</td><td>$79.5 \pm 1.0$</td><td>$84.4 \pm 0.9$</td><td>$76.3 \pm 0.4$</td><td>$73.3 \pm 0.2$</td><td>$80.5 \pm 0.1$</td><td>$87.8 \pm 0.1$</td><td>$41.6 \pm 1.0$</td><td>$76.0 \pm 1.1$</td></tr>
+ <tr><td>LObias</td><td>$81.6 \pm 2.4$</td><td>$89.5 \pm 0.8$</td><td>$76.1 \pm 0.3$</td><td>$71.2 \pm 0.2$</td><td>$76.9 \pm 0.1$</td><td>$83.7 \pm 0.1$</td><td>$41.7 \pm 0.7$</td><td>$57.7 \pm 1.2$</td></tr>
+ <tr><td rowspan="4">Embedding, Hasse diagram ${\mathcal{H}}_{2}$</td><td>Unweighted</td><td>$55.5 \pm 3.0$</td><td>$99.5 \pm 0.1$</td><td>$61.0 \pm 0.4$</td><td>$\mathbf{97.9 \pm 0.0}$</td><td>$66.7 \pm 0.1$</td><td>$95.1 \pm 0.0$</td><td>$40.0 \pm 0.5$</td><td>$83.1 \pm 1.3$</td></tr>
+ <tr><td>Counts</td><td>$57.0 \pm 1.3$</td><td>$91.2 \pm 0.9$</td><td>$54.5 \pm 0.2$</td><td>$92.6 \pm 0.1$</td><td>$66.2 \pm 0.1$</td><td>$89.4 \pm 0.1$</td><td>$35.3 \pm 0.4$</td><td>$82.1 \pm 1.3$</td></tr>
+ <tr><td>LObias</td><td>$84.7 \pm 2.2$</td><td>$91.9 \pm 0.8$</td><td>$80.6 \pm 0.3$</td><td>$81.6 \pm 0.2$</td><td>$77.9 \pm 0.1$</td><td>$84.3 \pm 0.1$</td><td>$57.3 \pm 1.0$</td><td>$70.4 \pm 1.4$</td></tr>
+ <tr><td>EQbias</td><td>$72.7 \pm 1.1$</td><td>$89.2 \pm 0.7$</td><td>$71.8 \pm 0.3$</td><td>$75.0 \pm 0.2$</td><td>$78.2 \pm 0.2$</td><td>$88.0 \pm 0.1$</td><td>$39.3 \pm 0.7$</td><td>$\mathbf{87.3 \pm 1.1}$</td></tr>
+ <tr><td rowspan="2">Spectral Embedding, Combinatorial Laplacians</td><td>Unweighted</td><td>$52.4 \pm 3.7$</td><td>$77.0 \pm 1.3$</td><td>$67.3 \pm 0.3$</td><td>$65.3 \pm 0.2$</td><td>$58.4 \pm 0.2$</td><td>$50.7 \pm 0.1$</td><td>$72.1 \pm 1.1$</td><td>$63.5 \pm 1.4$</td></tr>
+ <tr><td>Weighted</td><td>$70.4 \pm 1.6$</td><td>$75.3 \pm 1.6$</td><td>$\mathbf{79.4 \pm 0.2}$</td><td>$76.4 \pm 0.1$</td><td>$79.9 \pm 0.1$</td><td>$50.4 \pm 0.1$</td><td>$82.3 \pm 1.0$</td><td>$68.4 \pm 1.2$</td></tr>
+ <tr><td>Projected Metrics, Harm. mean</td><td>Weighted</td><td colspan="2">$85.5 \pm 1.5$</td><td colspan="2">$74.0 \pm 0.2$</td><td colspan="2">$83.1 \pm 0.1$</td><td colspan="2">$53.3 \pm 1.1$</td></tr>
+ <tr><td>Projected Metrics, Geom. mean</td><td>Weighted</td><td colspan="2">$85.8 \pm 1.1$</td><td colspan="2">$72.5 \pm 0.2$</td><td colspan="2">$86.8 \pm 0.1$</td><td colspan="2">$52.9 \pm 1.3$</td></tr>
+ <tr><td>Projected Metrics, Katz</td><td>Weighted</td><td colspan="2">$78.6 \pm 1.1$</td><td colspan="2">$65.6 \pm 0.2$</td><td colspan="2">$81.8 \pm 0.1$</td><td colspan="2">$49.2 \pm 1.5$</td></tr>
+ <tr><td>Projected Metrics, PPR</td><td>Weighted</td><td colspan="2">$76.9 \pm 1.4$</td><td colspan="2">$70.7 \pm 0.2$</td><td colspan="2">$81.8 \pm 0.1$</td><td colspan="2">$74.8 \pm 1.3$</td></tr>
+ </table>
+
+ (Bottom) Prediction:
+
+ <table>
+ <tr><td rowspan="2">Features</td><td rowspan="2">Type</td><td colspan="2">contact-high-school</td><td colspan="2">email-Eu</td><td colspan="2">tags-math-sx</td><td colspan="2">coauth-MAG-History</td></tr>
+ <tr><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td><td>${s}_{0}\left( \delta \right)$</td><td>${s}_{1}\left( \delta \right)$</td></tr>
+ <tr><td rowspan="3">Neural Embedding, Hasse diagram ${\mathcal{H}}_{1}$</td><td>Unweighted</td><td>$62.9 \pm 5.2$</td><td>$50.6 \pm 4.7$</td><td>$68.5 \pm 0.7$</td><td>$57.6 \pm 0.5$</td><td>$63.2 \pm 0.3$</td><td>$54.0 \pm 0.5$</td><td>$69.5 \pm 8.2$</td><td>$63.2 \pm 6.6$</td></tr>
+ <tr><td>Counts</td><td>$74.2 \pm 3.0$</td><td>$73.0 \pm 3.4$</td><td>$74.3 \pm 0.8$</td><td>$67.3 \pm 0.7$</td><td>$74.3 \pm 0.4$</td><td>$\mathbf{84.0 \pm 0.3}$</td><td>$68.7 \pm 8.4$</td><td>$66.6 \pm 8.6$</td></tr>
+ <tr><td>LObias</td><td>$70.6 \pm 2.8$</td><td>$65.6 \pm 5.3$</td><td>$70.5 \pm 0.6$</td><td>$64.5 \pm 0.8$</td><td>$71.3 \pm 0.5$</td><td>$79.1 \pm 0.5$</td><td>$68.8 \pm 8.7$</td><td>$66.5 \pm 8.7$</td></tr>
+ <tr><td rowspan="4">Neural Embedding, Hasse diagram ${\mathcal{H}}_{2}$</td><td>Unweighted</td><td>$62.5 \pm 6.3$</td><td>$69.5 \pm 4.9$</td><td>$66.2 \pm 0.7$</td><td>$67.8 \pm 0.6$</td><td>$62.5 \pm 0.2$</td><td>$83.1 \pm 0.2$</td><td>$65.9 \pm 8.5$</td><td>$55.6 \pm 8.0$</td></tr>
+ <tr><td>Counts</td><td>$64.3 \pm 3.6$</td><td>$72.8 \pm 3.6$</td><td>$61.8 \pm 0.7$</td><td>$69.1 \pm 0.6$</td><td>$62.9 \pm 0.3$</td><td>$82.3 \pm 0.3$</td><td>$67.3 \pm 8.2$</td><td>$61.0 \pm 9.6$</td></tr>
+ <tr><td>LObias</td><td>$69.7 \pm 3.5$</td><td>$65.4 \pm 5.1$</td><td>$69.0 \pm 0.6$</td><td>$60.3 \pm 0.6$</td><td>$71.2 \pm 0.7$</td><td>$79.2 \pm 0.4$</td><td>$67.3 \pm 7.9$</td><td>$64.2 \pm 9.6$</td></tr>
+ <tr><td>EQbias</td><td>$72.4 \pm 3.6$</td><td>$73.5 \pm 3.5$</td><td>$71.3 \pm 0.6$</td><td>$66.1 \pm 0.6$</td><td>$71.2 \pm 0.4$</td><td>$82.3 \pm 0.3$</td><td>$67.8 \pm 8.6$</td><td>$65.7 \pm 9.3$</td></tr>
+ <tr><td rowspan="2">Spectral Embedding, Combinatorial Laplacians</td><td>Unweighted</td><td>$56.4 \pm 3.6$</td><td>$56.7 \pm 6.8$</td><td>$63.8 \pm 0.6$</td><td>$53.5 \pm 0.7$</td><td>$55.1 \pm 0.2$</td><td>$50.4 \pm 0.2$</td><td>$57.8 \pm 6.0$</td><td>$56.4 \pm 5.7$</td></tr>
+ <tr><td>Weighted</td><td>$66.5 \pm 5.3$</td><td>$56.1 \pm 6.5$</td><td>$65.2 \pm 0.8$</td><td>$55.6 \pm 0.7$</td><td>$72.8 \pm 0.4$</td><td>$50.3 \pm 0.3$</td><td>$70.1 \pm 8.3$</td><td>$53.5 \pm 6.8$</td></tr>
+ <tr><td>Projected Metrics, Harm. mean</td><td>Weighted</td><td colspan="2">$71.4 \pm 4.3$</td><td colspan="2">$64.5 \pm 0.8$</td><td colspan="2">$79.0 \pm 0.2$</td><td colspan="2">$61.6 \pm 8.2$</td></tr>
+ <tr><td>Projected Metrics, Geom. mean</td><td>Weighted</td><td colspan="2">$73.1 \pm 3.8$</td><td colspan="2">$66.7 \pm 0.8$</td><td colspan="2">$\mathbf{83.3 \pm 0.2}$</td><td colspan="2">$62.4 \pm 7.7$</td></tr>
+ <tr><td>Projected Metrics, Katz</td><td>Weighted</td><td colspan="2">$69.3 \pm 3.7$</td><td colspan="2">$63.2 \pm 0.6$</td><td colspan="2">$77.8 \pm 0.3$</td><td colspan="2">$62.4 \pm 7.0$</td></tr>
+ <tr><td>Projected Metrics, PPR</td><td>Weighted</td><td colspan="2">$69.8 \pm 3.9$</td><td colspan="2">$68.8 \pm 0.5$</td><td colspan="2">$75.7 \pm 0.4$</td><td colspan="2">$57.7 \pm 4.6$</td></tr>
+ <tr><td>Projected Metrics, Logistic Regression</td><td>Unweighted</td><td colspan="2">$68.7 \pm 3.1$</td><td colspan="2">$68.1 \pm 0.7$</td><td colspan="2">$81.2 \pm 0.2$</td><td colspan="2">$65.4 \pm 6.9$</td></tr>
+ </table>
293
+
294
+ § 5.1 RECONSTRUCTION AND PREDICTION OF 3-WAY INTERACTIONS: THE UNWEIGHTED SCENARIO AND $K$ -SIMPLEX2VEC
295
+
296
+ § 5.1.1 COMPARISON OF PAIRWISE NODE PROXIMITIES
297
+
298
+ In Figure 2(a, b) we show evaluation metrics on higher-order link classification (reconstruction and prediction) for 3-way interactions, computed with unweighted node-level information from different models, varying the quantity ${n}_{\mathcal{E}}\left( \delta \right)$ of the open configurations. We recall that in this case $k$ -SIMPLEX2VEC is equivalent to the standard embedding of the projected graph. Hasse diagram ${\mathcal{H}}_{1}$ scores ${s}_{0}\left( \delta \right)$ computed with SIMPLEX2VEC perform overall better than proximities of the projected graph (i.e. $k$ -SIMPLEX2VEC scores) in almost all cases, meaning that the information given by the pairwise structures is enriched by considering multiple layers of interactions, even without leveraging interaction weights (both in ${\mathcal{G}}_{\mathcal{D}}^{\text{ train }}$ and ${\mathcal{K}}_{\mathcal{D}}^{\text{ train }}$ ).
299
+
300
+ Generally, we observe an expected decrease in performance for every model as the parameter ${n}_{\mathcal{E}}$ increases. Nevertheless, a few datasets show less sensitivity of the prediction-task performance to variations of ${n}_{\mathcal{E}}\left( \delta \right)$ (e.g., email-Eu). We ascribe this difference to domain-specific effects and peculiarities of those datasets. The embedding similarity ${s}_{0}\left( \delta \right)$ from the ${\mathcal{H}}_{1}$ diagram outperforms $k$ -SIMPLEX2VEC proximities in almost every reconstruction task, except for coauth-MAG-History on open configurations with ${n}_{\mathcal{E}} = 3$ . In prediction tasks, we observe the same advantage of SIMPLEX2VEC with respect to $k$ -SIMPLEX2VEC, except in contact-high-school where the models perform similarly for ${n}_{\mathcal{E}} < 2$ .
301
+
302
+ § 5.1.2 COMPARISON OF HIGHER-ORDER EDGE PROXIMITIES
303
+
304
+ In the previous sections the metric ${s}_{0}\left( \delta \right)$ was computed from feature representations of 0-simplices. Here we analyse instead how performance changes when we use embedding representations of 1-simplices (edge representations) to compute ${s}_{1}\left( \delta \right)$ . Intuitively, group representations like 1-simplex embeddings should convey higher-order information useful to improve classification with respect to node-level features.
307
+
308
+ In Figure 2(c, d) we show evaluation metrics on higher-order link classification for 3-way interactions, comparing unweighted node-level and edge-level information from different models, with the quantity ${n}_{\mathcal{E}}\left( \delta \right) = 3$ fixed for the open configurations. We consider fully connected triangle configurations because, besides being the hardest configurations to classify, they contain exactly the set of links necessary to compute ${s}_{1}\left( \delta \right)$ .
309
+
310
+ Generally we notice an increase in classification scores when using the ${s}_{1}\left( \delta \right)$ similarity rather than ${s}_{0}\left( \delta \right)$ with SIMPLEX2VEC embeddings. The performance gain is quite large (between 30% and 100%) in all reconstruction tasks; for prediction tasks it is noticeable on contact-high-school and tags-math-sx, while it is even negative on coauth-MAG-History. This also holds for $k$ -SIMPLEX2VEC in the majority of datasets, but with a reduced gain.
311
+
312
+ § 5.2 RECONSTRUCTION AND PREDICTION OF 3-WAY INTERACTIONS: THE ROLE OF SIMPLICIAL WEIGHTS
313
+
314
+ Previously we showed that feature representations learned through the hierarchical organization of the HD enhance the classification accuracy of closed triangles when considering unweighted complexes. We now complement these results by studying the effect of introducing weights. In particular, we analyze the importance of weighted interactions in our framework, focusing on the case where fully connected open triangles are the negative examples for the downstream tasks.
315
+
316
+ In Table 2 (Top) we show higher-order link reconstruction results: the simplicial similarity ${s}_{1}\left( \delta \right)$ on the unweighted HD ${\mathcal{H}}_{2}$ outperforms all other methods, in particular weighted metrics based on Laplacian similarity and the projected graph geometric mean, allowing almost perfect reconstruction in 3 out of 4 datasets. Compared with projected graph metrics this was expected, since 3-way information is incorporated in ${\mathcal{H}}_{2}$ , and the optimal scores reflect the goodness of fit of the embedding algorithm. The weighting schemes Counts and EQbias also obtain excellent scores with the ${s}_{1}\left( \delta \right)$ metric, while the ${s}_{0}\left( \delta \right)$ metric benefits from the use of LObias weights. Moreover, even the simplicial similarity ${s}_{1}\left( \delta \right)$ on the Hasse diagram ${\mathcal{H}}_{1}$ outperforms baseline scores in half of the datasets (with weighting schemes Counts and LObias), showing the feasibility of reconstructing 2-order interactions from weighted lower-order simplices (vertices in ${\mathcal{H}}_{1}$ are simplices of dimension 0 and 1), similarly to previous work on hypergraph reconstruction [8].
317
+
318
+ In Table 2 (Bottom) we show higher-order link prediction results. Overall, SIMPLEX2VEC embeddings trained on ${\mathcal{H}}_{1}$ with Counts and EQbias weights give better results: in contact-high-school and email-Eu with the ${s}_{0}\left( \delta \right)$ metric, in tags-math-sx with the ${s}_{1}\left( \delta \right)$ metric. In the coauth-MAG-History dataset the unweighted ${s}_{0}\left( \delta \right)$ score is outperformed only by the weighted ${\mathbf{L}}_{0}$ embedding, with the weighted simplicial counterparts achieving similar performance. Among the projected graph scores, good results are obtained with the geometric mean and logistic regression, which were among the best metrics in one of the seminal works on higher-order link prediction [9].
319
+
320
+ Finally, we observe that the weighting schemes for neural simplicial embeddings overall contribute positively to both reconstruction and prediction tasks.
321
+
322
+ § 6 CONCLUSIONS AND FUTURE WORK
323
+
324
+ In this paper, we introduced SIMPLEX2VEC for representation learning on simplicial complexes. In particular, we focused on formalizing reconstruction and link prediction tasks for higher-order structures, and we tested the proposed model on such downstream tasks. We showed that SIMPLEX2VEC-based representations are more effective for classification than traditional approaches and previous higher-order embedding methods. In particular, we demonstrated the feasibility of using simplicial embeddings of Hasse diagrams to reconstruct a system's polyadic interactions from lower-order edges, as well as to adequately predict future simplicial closures. SIMPLEX2VEC enables the investigation of the impact of different topological features, and we showed that weighted and unweighted models have different predictive power. Future work should focus on understanding these differences through the analysis of link predictability $\left\lbrack {{46},{47}}\right\rbrack$ with higher-order edges as a function of datasets' peculiarities, and on algorithmic approaches to tame the scalability limits set by the combinatorial structure of the Hasse diagram, which could for example be tackled via different optimization frameworks [48, 49] and hierarchical approaches [17, 50].
papers/LOG/LOG 2022/LOG 2022 Conference/hM5UIWqZ7d/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,255 @@
1
+ # PyTorch-Geometric Edge – a library for learning representations of graph edges
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Machine learning on graphs (GraphML) has been successfully deployed in a wide variety of problem areas, as many real-world datasets are inherently relational. However, both research and industrial applications require a solid, robust, and well-designed code base. In recent years, frameworks and libraries, such as PyTorch-Geometric (PyG) or Deep Graph Library (DGL), have been developed and become first-choice solutions for implementing and evaluating GraphML models. These frameworks are designed so that one can solve any graph-related task, including node- and graph-centric approaches (e.g., node classification, graph regression). However, there are no edge-centric models implemented, and edge-based tasks are often limited to link prediction. In this extended abstract, we introduce PyTorch-Geometric Edge (PyGE), a deep learning library that focuses on models for learning vector representations of edges. As the name suggests, it is built upon the PyG library and implements edge-oriented ML models, including simple baselines and graph neural networks, as well as corresponding datasets, data transformations, and evaluation mechanisms. The main goal of the presented library is to make edge representation learning more accessible for both researchers and industrial applications, simultaneously accelerating the development of the aforementioned methods, datasets and benchmarks.
12
+
13
+ ## 1 Introduction
14
+
15
+ Nowadays, one of the most prominent research areas in machine learning is representation learning. Solving classification, regression, or clustering tasks by means of popular machine learning models, like decision trees, SVMs, logistic regression, linear regression, or feed-forward neural networks, requires the presence of object features in the form of real-valued number vectors (also called embeddings, or representation vectors). Representation learning aims at finding algorithms and models that can extract such numeric features from arbitrary objects (images, texts, or graphs) in an automated and reliable way. In terms of machine learning on graphs (GraphML), these models / algorithms are called graph representation learning (GRL) methods. In recent years, GRL methods have been successfully deployed in a wide variety of domains, including social networks, financial networks, and computational chemistry [1-4].
16
+
17
+ This wide adoption of graph-based models led to the creation of publicly available implementations, often in the form of frameworks or libraries with standardized APIs, which describe data formats, model building blocks, and scalable parameter optimization techniques. First-choice solutions are currently frameworks like PyTorch-Geometric (PyG) [5] or the Deep Graph Library (DGL) [6]. They include most of the existing graph neural networks and some traditional models, as well as datasets, preprocessing transformations, and basic evaluation mechanisms. This simplifies both production-ready model development and conducting GraphML research.
18
+
19
+ The implemented design choices allow solving any graph-related task (e.g., node classification, graph regression). Nevertheless, the main focus in these libraries is on node- and graph-centric models and tasks, whereas edge-based tasks are often limited to link prediction.
20
+
21
+ Present work. We aim to fill the gap for edge-centric GRL models and tasks. In this extended abstract, we introduce PyTorch-Geometric Edge (PyGE), a deep learning library focused on models for learning vector representations of graph edges. We build upon the PyTorch-Geometric (PyG) library and provide implementations of: (1) edge-centric models, including simple baselines and graph neural networks, (2) edge-based GNN layers, (3) datasets and corresponding preprocessing functions (in a PyTorch- and PyG-compliant format), and (4) evaluation mechanisms for edge tasks. PyGE should make edge representation learning more accessible for both researchers and industrial applications, simultaneously accelerating the development of edge-centric methods, datasets and benchmarks. Disclaimer: Please note that the introduced library is still under active development. We provide a summary of our planned work in Section 4.
22
+
23
+ Contributions. We summarize our contributions as follows: (C1) We publicly release ${}^{1}$ PyTorch-Geometric Edge, the first deep learning library for edge representation learning. (C2) We implement a subset of available edge-based models, graph neural network layers, datasets, and corresponding data transformations.
24
+
25
+ ## 2 Preliminaries
26
+
27
+ We start by introducing definitions for basic concepts covered in our presented library and explore the current state of node and edge embedding approaches, as well as GraphML software.
28
+
29
+ Graph. A graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ describes a set of nodes $\mathcal{V}$ that are connected (pairwise) by a set of edges $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ . An attributed graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E},\mathbf{X},{\mathbf{X}}^{\text{edge }}}\right)$ extends this definition by a set of node attributes: $\mathbf{X} \in {\mathbb{R}}^{\left| \mathcal{V}\right| \times {d}_{\text{node }}}$ , and optionally also edge attributes: ${\mathbf{X}}^{\text{edge }} \in {\mathbb{R}}^{\left| \mathcal{E}\right| \times {d}_{\text{edge }}}$ .
30
+
31
+ Edge representation learning. The goal is to find a function ${f}_{\theta } : \mathcal{E} \rightarrow {\mathbb{R}}^{{d}_{\text{edge }}}$ that maps an edge ${e}_{\left( u, v\right) } \in \mathcal{E}$ into a low-dimensional $\left( {{d}_{\text{edge }} \ll \dim \left( \mathcal{E}\right) }\right)$ vector representation (embedding) ${\mathbf{z}}_{uv}$ that preserves selected properties of the edge (e.g., features or local structural neighborhood information).
32
+
33
+ Edge-based tasks. Evaluation tasks for edge embeddings include: (1) link prediction - a binary classification problem on the existence (future appearance) of an edge; (2) edge classification - label/type prediction for an existing edge (e.g., kind of social network relation); (3) edge regression - prediction of a numerical edge feature (e.g., bond strength in a molecule).
34
+
35
+ Node representation learning methods. Early approaches were built around the transductive setting with an enormous trainable lookup-embedding matrix, whose rows denote representation vectors for each node. The optimization process would preserve structural node information. For instance, DeepWalk [7], and its successor Node2vec [8] use the Skipgram [9] objective to model random walk-based co-occurrence probabilities. TADW [10] extended this approach to attributed graphs and reformulated the model as a matrix factorization problem. Other early approaches include: LINE [11], SDNE [12], or FSCNMF [13]. Recent methods are based on Graph Neural Networks (GNNs) - trainable functions that transform feature vectors of a node and its neighbors to a new embedding vector (inductive setting). These functions can be stacked to create a deep (graph) neural network. The most popular ideas include: a graph reformulation of the convolution operator (GCN [14]), neighborhood sampling and aggregation of sampled features (GraphSAGE [15]), attention mechanism over graph structure (GAT [16]) or modeling injective functions (GIN [17]).
36
+
37
+ Edge representation learning methods. This area is still underdeveloped, i.e., only a handful of proposed models and algorithms exist. Most early approaches are node-based transformations, i.e., the edge embedding ${\mathbf{z}}_{uv}$ is computed from two node embeddings ${\mathbf{z}}_{u}$ and ${\mathbf{z}}_{v}$ . There are simple non-trainable binary operators [8], such as the average $\left( {{\mathbf{z}}_{uv} = \frac{{\mathbf{z}}_{u} + {\mathbf{z}}_{v}}{2}}\right)$ , the Hadamard product $\left( {{\mathbf{z}}_{uv} = {\mathbf{z}}_{u} * {\mathbf{z}}_{v}}\right)$ , or the weighted L1 $\left( {{\mathbf{z}}_{uv} = \left| {{\mathbf{z}}_{u} - {\mathbf{z}}_{v}}\right| }\right)$ and L2 $\left( {{\mathbf{z}}_{uv} = {\left| {\mathbf{z}}_{u} - {\mathbf{z}}_{v}\right| }^{2}}\right)$ operators. NRIM [18] proposes trainable transformations as two kinds of neural network layers: node2edge $\left( {{\mathbf{z}}_{uv} = {f}_{\theta }\left( \left\lbrack {{\mathbf{z}}_{u},{\mathbf{z}}_{v},{\mathbf{x}}_{uv}^{\text{edge }}}\right\rbrack \right) }\right)$ and edge2node $\left( {{\mathbf{z}}_{u} = {f}_{\omega }\left( \left\lbrack {\mathop{\sum }\limits_{{v \in \mathcal{N}\left( u\right) }}{\mathbf{z}}_{uv},{\mathbf{x}}_{u}}\right\rbrack \right) }\right)$ . A second group of edge embedding methods learns the edge embeddings directly, i.e., without an intermediate node embedding step. Line2vec [19] utilizes a line graph transformation (converting nodes into edges and vice versa), applies a custom edge weighting method and runs Node2vec on the line graph. The loss function extends the Skipgram loss with a so-called collective homophily loss (to ensure closeness of neighboring edges in the embedding space). This method is inherently transductive (due to Node2vec) and completely ignores any attributes. Those problems are addressed by AttrE2vec [20]. It samples a fixed number of uniform random walks from the two edge neighborhoods $(\mathcal{N}\left( u\right)$ , $\mathcal{N}\left( v\right)$ ) and aggregates feature vectors of encountered edges (using averaging, exponential decay, or recurrent neural networks) into summary vectors ${\mathbf{S}}_{u},{\mathbf{S}}_{v}$ , respectively. An MLP encoder network with a self-attention-like mechanism transforms the summary vectors and the edge features into the final edge embedding. AttrE2vec is trained using a contrastive cosine learning objective and a feature reconstruction loss. PairE [21] utilizes two kinds of edge feature aggregations: (1) concatenated node features (self features), (2) concatenation of averaged neighbor features for both nodes (agg features). An MLP encoder with skip-connections transforms these two vectors into the edge embedding. Two shallow decoders reconstruct the feature probability distribution. The resulting PairE autoencoder is trained using the sum of the KL-divergences of the self and agg features. Other methods include: EGNN [22], ConPI [23] or Edge2vec [24].
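+ As an illustration of the non-trainable node-pair operators listed above, a short PyTorch sketch follows; the function name is ours and not part of the PyGE API:
+
+ ```python
+ import torch
+
+ def edge_embedding(z_u, z_v, op="hadamard"):
+     # Combine two node embeddings into an edge embedding with a fixed binary operator.
+     if op == "average":
+         return (z_u + z_v) / 2
+     if op == "hadamard":
+         return z_u * z_v
+     if op == "l1":
+         return (z_u - z_v).abs()
+     if op == "l2":
+         return (z_u - z_v) ** 2
+     raise ValueError(f"unknown operator: {op}")
+
+ z_u, z_v = torch.rand(64), torch.rand(64)
+ z_uv = edge_embedding(z_u, z_v, op="l2")
+ ```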
38
+
39
+ ---
40
+
41
+ ${}^{1}$ The link to the repository will be included in the final version and is now omitted due to double-blind policy. We include an anonymized version of our library in the attachments on OpenReview.
42
+
43
+ ---
44
+
45
+ GraphML software. The backbone of all modern deep learning frameworks is formed by tools for automatic differentiation, such as Tensorflow [25] or PyTorch [26]. GraphML libraries are mostly built upon these tools, e.g., PyG uses PyTorch, GEM [27] and DynGEM [28] use Tensorflow, DGL can be used both with Tensorflow and PyTorch, whereas some, like KarateClub [29], use a custom backend. All of these libraries are focused on node- and graph-centric models. Our proposed PyTorch-Geometric Edge library is the first one that focuses on edge-centric models and layers. It adapts the PyG library API and uses PyTorch as its backend.
46
+
47
+ ## 3 PyTorch-Geometric Edge
48
+
49
+ Relation to PyG. Our proposed PyGE library re-uses the API and data format implemented in PyTorch-Geometric. The graph is stored as a Data() object with edges in the form of a sparse COO matrix (edge_index). Other fields include: x (node attributes), edge_attr (edge attributes), y (node/edge labels). We also keep a similar layout of the library package structure, i.e., we have modules for datasets, models, neural network layers (nn), data transformations (transforms) and data samplers (samplers). The forward() method in all implemented models/layers accepts two parameters: x (node or edge features) and edge_index (adjacency matrix). Hence, the implemented models/layers can be integrated with other PyG models/layers and vice versa (we show that in the examples/ folder in the repository). The same applies to the datasets.
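+ For reference, a minimal sketch of this shared data format; the MyEdgeModel class in the final comment is a hypothetical placeholder, not an actual PyGE class:
+
+ ```python
+ import torch
+ from torch_geometric.data import Data
+
+ # 3 nodes with 4-dimensional features, 2 undirected edges stored in COO format.
+ x = torch.rand(3, 4)
+ edge_index = torch.tensor([[0, 1, 1, 2],
+                            [1, 0, 2, 1]], dtype=torch.long)
+ edge_attr = torch.rand(4, 2)    # one attribute row per directed edge
+ y = torch.tensor([0, 1, 0, 1])  # edge labels
+ data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr, y=y)
+
+ # Any PyGE/PyG layer follows the same call convention:
+ # z_edges = MyEdgeModel(...).forward(data.x, data.edge_index)
+ ```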
50
+
51
+ ### 3.1 Current state of implementation
52
+
53
+ We now show the current state of the library and what is already implemented. Please refer to Section 4 where we explain our future plans.
54
+
55
+ Datasets. We currently include 5 datasets (Cora, PubMed, KarateClub, Dolphin and Cuneiform) that were originally used for the evaluation of the implemented methods. We summarize their statistics in Table 1. Note that most of them also require preprocessing steps (see AttrE2vec [20] for details) for the edge classification evaluation - we implement the appropriate data transformations.
56
+
57
+ Table 1: Summary of included datasets. The $*$ symbol denotes the number of edge classes after applying an appropriate data transformation.
58
+
59
+ <table><tr><td>Name</td><td>$\left| \mathcal{V}\right|$</td><td>$\left| \mathcal{E}\right|$</td><td>${d}_{\text{node }}$</td><td>${d}_{\text{edge }}$</td><td>classes</td></tr><tr><td>KarateClub [30]</td><td>34</td><td>156</td><td>-</td><td>-</td><td>4*</td></tr><tr><td>Dolphin [31]</td><td>62</td><td>318</td><td>-</td><td>-</td><td>5*</td></tr><tr><td>Cora [32]</td><td>2 708</td><td>10 556</td><td>1 433</td><td>-</td><td>8*</td></tr><tr><td>PubMed [33]</td><td>19 717</td><td>88 648</td><td>500</td><td>-</td><td>4*</td></tr><tr><td>Cuneiform [34]</td><td>5 680</td><td>23 922</td><td>3</td><td>2</td><td>2</td></tr></table>
60
+
61
+ Models and layers. We implement most of the edge representation learning methods discussed in Section 2 in our proposed PyGE library (see Table 2). More of them will be implemented in future versions.
62
+
63
+ Table 2: Models and layers implemented in PyGE.
64
+
65
+ <table><tr><td>$\mathbf{{Method}}$</td><td>Type</td><td>Inductive</td><td>Attributed</td><td>Characteristics</td></tr><tr><td>Node pair operator [8]</td><td>layer</td><td>✓</td><td>✘</td><td>non-trainable</td></tr><tr><td>node2edge [18]</td><td>layer</td><td>✓</td><td>✓</td><td>trainable</td></tr><tr><td>Line2vec [19]</td><td>model</td><td>✘</td><td>✘</td><td>line graph, random-walk</td></tr><tr><td>AttrE2vec [20]</td><td>model</td><td>✓</td><td>✓</td><td>contrastive, AE, random-walk</td></tr><tr><td>PairE [21]</td><td>model</td><td>✓</td><td>✓</td><td>AE, KL-div</td></tr></table>
66
+
67
+ Embedding evaluation. We implement a ready-to-use edge classification evaluator class, which takes edge embeddings and edge labels, applies a logistic regression classifier and returns typical classification metrics, like ROC-AUC, F1 or accuracy. This is a widely adopted technique in unsupervised learning, called the linear evaluation protocol [35].
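+ A sketch of what such a linear evaluation amounts to, using scikit-learn on toy data (illustrative; the actual evaluator class in PyGE may differ):
+
+ ```python
+ import numpy as np
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.metrics import accuracy_score, f1_score
+ from sklearn.model_selection import train_test_split
+
+ # z: frozen edge embeddings, y: edge labels (random toy data here).
+ z, y = np.random.rand(500, 64), np.random.randint(0, 4, 500)
+ z_tr, z_te, y_tr, y_te = train_test_split(z, y, test_size=0.2, random_state=0)
+
+ clf = LogisticRegression(max_iter=1000).fit(z_tr, y_tr)
+ pred = clf.predict(z_te)
+ print(accuracy_score(y_te, pred), f1_score(y_te, pred, average="macro"))
+ ```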
68
+
69
+ Example usage. In the repository, we provide an end-to-end script showing the usage of a given model/layer. Every script: (1) loads a dataset and applies the required data transformations (preprocessing), (2) prepares the data split of edges into train and test sets, (3) builds a model, (4) trains the model for a certain number of epochs, (5) evaluates the learned edge embeddings. We also provide an example script in this extended abstract - see Section A.
70
+
71
+ ### 3.2 Maintenance
72
+
73
+ An open-source library requires continuous maintenance. We host our code base on GitHub, which allows us to track all development progress and user-generated issues. We will build library releases, announce them on GitHub, and host them on the Python Package Index (PyPI), so that users can install our library simply by running a pip install torch-geometric-edge command. We use the MIT license to give potential users, researchers, and industrial adopters a good user experience without worrying about the rights to use or modify our code base. Another aspect of software development and maintenance is Continuous Integration. We use the GitHub Actions module to automatically execute code quality checks and unit tests with every pull request to our library. This ensures that no change will break existing functionality or lower our assumed code quality.
74
+
75
+ ## 4 Summary and roadmap
76
+
77
+ In this extended abstract, we presented an initial version of PyTorch-Geometric Edge, the first deep learning library that focuses on representation learning for graph edges. We provided information about the currently implemented models/layers and datasets. Our roadmap is extensive and includes: (I) preparation of complete documentation (right now, we rely on code quality checks and example scripts showing how to use particular models/layers), (II) addition of more datasets (e.g., the Enron Email Dataset ${}^{2}$ and FF-TW-YT ${}^{3}$ , among others), (III) implementation of the other mentioned edge-centric models (and a continuous extension of the literature review to find new methods), (IV) addition of more edge evaluation schemes, (V) in the full paper, an extensive benchmark of all implemented models comparing them on different downstream tasks; moreover, we want to provide the entire reproducible experimental pipeline and pretrained models. With such an amount of upcoming work, we want to encourage readers interested in edge representation learning to contact the authors and contribute to our library. We are convinced that edge representation learning can be widely adopted in networked tasks, like message classification in social networks or connection/attack classification in cybersecurity applications, to name only a few.
78
+
79
+ ---
80
+
81
+ ${}^{2}$ https://www.cs.cmu.edu/~enron/
82
+
83
+ ${}^{3}$ http://multilayer.it.uu.se/datasets.html
84
+
85
+ ---
86
+
87
+ References
88
+
89
+ [1] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, Jure Leskovec, Regina Barzilay, Peter Battaglia, Yoshua Bengio, Michael Bronstein, Stephan Günnemann, Will Hamilton, Tommi Jaakkola, Stefanie Jegelka, Maximilian Nickel, Chris Re, Le Song, Jian Tang, Max Welling, and Rich Zemel. Open graph benchmark: Datasets for machine learning on graphs, may 2020. URL http://arxiv.org/abs/2005.00687.1
90
+
91
+ [2] Daokun Zhang, Jie Yin, Xingquan Zhu, and Chengqi Zhang. Network Representation Learning: A Survey. IEEE Transactions on Big Data, 6(1):3-28, 2018. doi: 10.1109/tbdata.2018.2850013.
92
+
93
+ [3] Bentian Li and Dechang Pi. Network representation learning: a systematic literature review. Neural Computing and Applications, 32(21):16647-16679, nov 2020. ISSN 0941-0643. doi: 10.1007/s00521-020-04908-5.
94
+
95
+ [4] Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, and Kevin Murphy. Machine Learning on Graphs: A Model and Comprehensive Taxonomy, 2020. URL http://arxiv.org/abs/2005.03675.1
96
+
97
+ [5] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. 1
98
+
99
+ [6] Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, and Zheng Zhang. Deep graph library: A graph-centric, highly-performant package for graph neural networks. arXiv preprint arXiv:1909.01315, 2019. 1
100
+
101
+ [7] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online Learning of Social Representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '14, pages 701-710, New York, New York, USA, 2014. ACM Press. ISBN 9781450329569. doi: 10.1145/2623330.2623732. URL http://dl.acm.org/citation.cfm?doid=2623330.2623732.2
102
+
103
+ [8] Aditya Grover and Jure Leskovec. Node2vec: Scalable feature learning for networks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, volume 13-17-Augu, pages 855-864, 2016. ISBN 9781450342322. doi: 10.1145/ 2939672.2939754. 2, 4
104
+
105
+ [9] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119, USA, 2013. Curran Associates Inc. URL http://dl.acm.org/citation.cfm?id=2999792.2999959.2
106
+
107
+ [10] Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Y. Chang. Network representation learning with rich text information. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pages 2111-2117. AAAI Press, 2015. ISBN 978-1-57735-738-4. URL http://dl.acm.org/citation.cfm?id=2832415.2832542.2
108
+
109
+ [11] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale information network embedding. In WWW 2015 - Proceedings of the 24th International Conference on World Wide Web, pages 1067-1077, 2015. ISBN 9781450334693. doi: 10.1145/ 2736277.2741093.2
110
+
111
+ [12] Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, volume 13-17-Augu, pages 1225-1234, 2016. ISBN 9781450342322. doi: 10.1145/2939672.2939753. 2
112
+
113
+ [13] Sambaran Bandyopadhyay, Harsh Kara, Aswin Kannan, and M N Murty. FSCNMF: Fusing structure and content via non-negative matrix factorization for embedding information networks, 2018. 2
114
+
115
+ [14] Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR, 2017. 2
116
+
117
+ [15] William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive Representation Learning on Large Graphs. In NIPS, pages 1024-1034, 2017. 2
118
+
119
+ [16] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. In ICLR, 2018. 2
120
+
121
+ [17] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? CoRR, abs/1810.00826, 2018. URL http://arxiv.org/abs/1810.00826.2
122
+
123
+ [18] Thomas Kipf, Ethan Fetaya, Kuan Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for Interacting systems. In 35th International Conference on Machine Learning, ICML 2018, volume 6, pages 4209-4225, 2018. ISBN 9781510867963. 2, 4
124
+
125
+ [19] Sambaran Bandyopadhyay, Anirban Biswas, Narasimha Murty, and Ramasuri Narayanam. Beyond node embedding: A direct unsupervised edge representation framework for homogeneous networks, 2019. 3, 4
126
+
127
+ [20] Piotr Bielak, Tomasz Kajdanowicz, and Nitesh V. Chawla. Attre2vec: Unsupervised attributed edge representation learning. Information Sciences, 592:82-96, 2022. ISSN 0020-0255. doi: https://doi.org/10.1016/j.ins.2022.01.048. URL https://www.sciencedirect.com/science/article/pii/S0020025522000779. 3, 4
128
+
129
+ [21] You Li, Bei Lin, Binli Luo, and Ning Gui. Graph representation learning beyond node and homophily. IEEE Transactions on Knowledge and Data Engineering, pages 1-1, 2022. doi: 10.1109/tkde.2022.3146270. URL https://doi.org/10.1109%2Ftkde.2022.3146270.3, 4
130
+
131
+ [22] Liyu Gong and Qiang Cheng. Adaptive edge features guided graph attention networks. CoRR, abs/1809.02709, 2018. URL http://arxiv.org/abs/1809.02709.3
132
+
133
+ [23] Zhen Wang, Bo Zong, and Huan Sun. Modeling context pair interaction for pairwise tasks on graphs. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, WSDM '21, page 851-859, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450382977. doi: 10.1145/3437963.3441744. URL https://doi.org/10.1145/3437963.3441744. 3
134
+
135
+ [24] Changping Wang, Chaokun Wang, Zheng Wang, Xiaojun Ye, and Philip S. Yu. Edge2vec: Edge-based social network embedding. ACM Trans. Knowl. Discov. Data, 14(4), may 2020. ISSN 1556-4681. doi: 10.1145/3391298. URL https://doi.org/10.1145/3391298.3
136
+
137
+ [25] TensorFlow Developers. TensorFlow, May 2022. URL https://doi.org/10.5281/zenodo.6574269. 3
138
+
139
+ [26] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf. 3
140
+
141
+ [27] Palash Goyal and Emilio Ferrara. Gem: A python package for graph embedding methods. Journal of Open Source Software, 3(29):876. 3
142
+
143
+ [28] Palash Goyal, Nitin Kamra, Xinran He, and Yan Liu. Dyngem: Deep embedding method for dynamic graphs. CoRR, abs/1805.11273, 2018. 3
144
+
145
+ [29] Benedek Rozemberczki, Oliver Kiss, and Rik Sarkar. Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20), page 3125-3132. ACM, 2020. 3
146
+
147
+ [30] Wayne W. Zachary. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, 33(4):452-473, 1977. ISSN 00917710. URL http://www.jstor.org/stable/3629752. 3
148
+
149
+ [31] D Lusseau, K Schneider, O J Boisseau, P Haase, E Slooten, and S M Dawson. The bottlenose dolphin community of doubtful sound features a large proportion of long-lasting associations - can geographic isolation explain this unique trait? Behavioral Ecology and Sociobiology, 54:396-405, 2003. ISSN 0340-5443. doi: 10.1007/s00265-003-0651-y. 3
152
+
153
+ [32] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, Sep. 2008. doi: 10.1609/aimag.v29i3.2157. URL https://ojs.aaai.org/index.php/aimagazine/ article/view/2157. 3
154
+
155
+ [33] Galileo Namata, Ben London, Lise Getoor, and Bert Huang. Query-driven Active Surveying for Collective Classification. In Proceedings of the Workshop on Mining and Learning with Graphs, pages 1-8, Edinburgh, Scotland, UK, 2012. 3
156
+
157
+ [34] Nils M. Kriege, Matthias Fey, Denis Fisseler, Petra Mutzel, and Frank Weichert. Recognizing cuneiform signs using graph based methods. CoRR, abs/1802.05908, 2018. URL http://arxiv.org/abs/1802.05908. 3
158
+
159
+ [35] Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R Devon Hjelm. Deep Graph Infomax. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rklz9iAcKQ.4
160
+
161
+ ## A Example: PairE model
162
+
163
+ Let's explore how to use PyGE in practice. We will use the PairE model to classify the citation type between academic papers (citation within a research area or a cross-area citation; if both papers share a research area, the label is that area). We start by loading the Cora dataset and extracting the target edge labels using our implemented MatchingNodeLabelsTransform() (if the two node labels match, that label is used; otherwise the special label -1 is assigned):
164
+
165
+ from torch_geometric_edge.datasets import Cora
+ from torch_geometric_edge.transforms import MatchingNodeLabelsTransform
+
+ data = Cora("/tmp/pyge/", transform=MatchingNodeLabelsTransform())[0]
170
+
171
+ Next, we split the edges into train and test sets:
172
+
173
+ ---
+
+ import torch
+ from sklearn.model_selection import train_test_split
+
+ train_mask, test_mask = train_test_split(
+     torch.arange(data.num_edges),
+     stratify=data.y,
+     test_size=0.8,
+ )
188
+
189
+ Now, let's create the PairE model:
190
+
191
+ from torch_geometric_edge.models import PairE
+
+ model = PairE(
+     num_nodes=data.num_nodes,
+     node_feature_dim=data.num_node_features,
+     emb_dim=128,
+ )
202
+
203
+ We can train our model using standard PyTorch training-loop boilerplate code. Note that we only use the training edges (data.edge_index[:, train_mask]).
206
+
207
+ optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
+
+ model.train()
+ for _ in range(100):
+     optimizer.zero_grad()
+
+     x_self, x_aggr = model.extract_self_aggr(data.x, data.edge_index[:, train_mask])
+     h_edge = model(data.x, data.edge_index[:, train_mask])
+     x_self_rec, x_aggr_rec = model.decode(h_edge)
+
+     loss = model.loss(x_self, x_aggr, x_self_rec, x_aggr_rec)
+     loss.backward()
+     optimizer.step()
+
+ ---
228
+
229
+ Finally, we can evaluate our model's edge embeddings on the edge classification task using the LogisticRegressionEvaluator. The returned metrics are prefixed to indicate the train/test split. Note that we now use all edges during inference:
230
+
231
+ ---
+
+ from torch_geometric_edge.evaluation import LogisticRegressionEvaluator
+
+ model.eval()
+ with torch.no_grad():
+     Z = model(data.x, data.edge_index)
+
+ metrics = LogisticRegressionEvaluator(["auc"]).evaluate(
+     Z=Z,
+     Y=data.y,
+     train_mask=train_mask,
+     test_mask=test_mask,
+ )
+ print(metrics)
+
+ ---
papers/LOG/LOG 2022/LOG 2022 Conference/hZ3b8CskgC/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,263 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Anonymous Author(s)
2
+
3
+ Anonymous Affiliation
4
+
5
+ Anonymous Email
6
+
7
+ ## Abstract
8
+
9
+ One of the most critical operations in graph neural networks (GNNs) is the aggregation operation, which aims to extract information from the neighbors of a target node. Several convolution methods have been proposed, such as standard graph convolution (GCN), graph attention (GAT), and message passing (MPNN). In this study, we propose an aggregation method called Multi-Mask Aggregators (MMA), where the model learns a weighted mask for each aggregator before collecting neighboring messages. MMA draws similarities with GAT and MPNN but has some theoretical and practical advantages. Intuitively, our framework is not limited by the number of heads as GAT is, and it is more discriminative than an MPNN. The performance of MMA was compared with well-known baseline methods on both node classification and graph regression tasks on widely-used benchmarking datasets, and it has shown improved performance.
10
+
11
+ ## 1 Introduction
12
+
13
+ Graph Neural Networks (GNNs) have attracted great interest in recent years due to their performance and their ability to extract complex information [1-4]. One of the most important operations in graph neural networks is the aggregation operation, whose aim is to iteratively exploit information from the neighbours of a target node to update its latent representation [2, 5]. Several different aggregators have been used, such as mean, sum, max, min, and long short-term memory (LSTM), to extract more meaningful information from the neighbors of a particular node [4], [5]. According to [6], an ideal learnable and flexible aggregation should satisfy the following conditions: 1) permutation invariance [7]; 2) adaptivity to various neighborhood information [3] [8]; 3) learned representations that are explainable in relation to the predictions and robust to noise [9]; 4) discriminative power with respect to graph structures [5].
14
+
15
+ In recent years, several methods have been proposed in the graph neural network area that use different types of aggregators. For example, the Graph Attention Network (GAT) borrows the idea of attention mechanisms and performs aggregation by assigning different weights to different neighbors [3]. However, it is not adaptive to various neighborhood information at the feature level, since all individual features are considered equally [3] [8]. The learnable graph convolutional layer (LGCL) method applies a convolution operation in the aggregation process by assigning different weights to different features [8]. LGCL can deal with different neighborhood information; however, information may be lost during the selection step, since selecting the d-largest feature values from the neighboring nodes breaks the original correspondence between node features [3] [8]. Dehmamy et al. [10] empirically showed that using multiple aggregators (i.e., mean, max, and normalized mean) improves the performance of GNNs on the task of learning graph moments. The Principal Neighbourhood Aggregation (PNA) method theoretically formalized this observation, and the authors demonstrated that using a single type of aggregator is insufficient to extract enough information from neighboring nodes, which limits learning ability and expressive power [11].
16
+
17
+ A mask aggregator uses an auxiliary model such as a multi-layer perceptron (MLP), which has no requirement on the size or order of the input [12] [13]. To satisfy the four conditions mentioned above, the Learnable Aggregator for GCN (LA-GCN) was proposed, which filters the neighborhood information with a mask aggregator before the aggregation process [6]. LA-GCN learns a specific mask for each neighbor of a given node, allowing both node-level and feature-level attention by the
18
+
19
+ # Multi-Mask Aggregators for Graph Neural Networks
20
+
21
+ ![01963ef8-58f9-7a88-81fa-f7d3b0f88f4d_1_319_199_1158_257_0.jpg](images/01963ef8-58f9-7a88-81fa-f7d3b0f88f4d_1_319_199_1158_257_0.jpg)
22
+
23
+ Figure 1: Architecture of MMA with the different aggregators: a) training the auxiliary model with a given node's and its neighbors' feature vectors; b) obtaining a mask for each neighbor from the auxiliary model and combining the node features with the learned mask via the Hadamard product; c) aggregating the (masked) neighbors to obtain the central node's new representation; d) combining the final aggregators with scalers.
24
+
25
+ auxiliary model. This mechanism assigns different weights to nodes and features, which provides interpretable results and increases model robustness. However, LA-GCN is based only on a sum aggregator, which loses stability as the average degree of a graph increases [11], and other types of aggregators are overlooked.
26
+
27
+ In this study, we propose Multi-Mask Aggregators (MMA), a novel graph neural network method that combines trainable auxiliary models with different or same types of aggregators. MMA utilizes a given node and its neighbors to train auxiliary models with the aim of extracting information in a graph where different neighborhood information is learned using different masks. We use multiple types of aggregators (i.e., mean, max, and min) and create a mask for each neighbor and aggregator. MMA can learn high-level rules (e.g., focusing on the important neighbors and features for node representation learning) to guide the aggregators for better utilization of the neighbourhood information. It is a flexible method where different or the same kinds of multiple learnable aggregators can be used. We evaluated MMA on well-known benchmark datasets and compared its performance with the well-known baseline graph neural network methods. The datasets, source code, experimental settings, and user instructions are available publicly at https://github.com/mmalogcanonnym/mmalogc.
28
+
29
+ The main contributions can be summarized as follows: 1) it provides flexible multi-aggregators with mask aggregation; 2) it removes the limitation on the number of heads; 3) it extracts local information with local parameters instead of global parameters as in MPNNs/PNA; 4) it behaves in-between GAT and MPNN/PNA; 5) it improves performance on node classification and graph regression benchmarks.
30
+
31
+ ## 2 Multi-Mask Aggregators
32
+
33
+ The proposed Multi-Mask Aggregators method leverages the increased expressivity from the multi-aggregators models such as PNA [11] and the learnable masks from LA-GCN [6]. The Hadamard product is performed to multiply the neighbor's feature vector with the corresponding learned masks in the aggregation process, allowing each heuristic aggregator (e.g., min, mean etc.) to learn different features coming from the neighbors. Finally, the resulting aggregators are combined with scalers [11]. MMA architecture is given in Figure 1.
34
+
35
+ ### 2.1 Motivation
36
+
37
+ Several methods have been proposed in the graph neural network area. Most of them work by aggregating neighbouring node features using a permutation invariant function (PMI). One of the most popular frameworks is graph convolution, which uses a PMI to aggregate features from neighbouring nodes ${n}_{j}$ into a given node ${n}_{i}$ (see Appendix B). Another is message-passing, which generates a message from each pair of nodes $\left\{ {{n}_{i},{n}_{j}}\right\}$ and aggregates the messages via a PMI. Furthermore, graph attention computes an attention weight between $\left\{ {{n}_{i},{n}_{j}}\right\}$ and aggregates the neighbouring features ${n}_{j}$ via a sum weighted by the attention weights.
38
+
39
+ In this work, we propose a different framework called multi-masked aggregators (MMA), where the network learns multiple weighted masks from pairs $\{ u, v\}$ and aggregates them via a weighted PMI. Hence, the aggregation mechanism lies in-between graph attention, which learns multiple masks, and message-passing, which uses invariant functions. Similarly to PNA [11], it benefits from the increased expressivity of having multiple independent aggregators, and contrary to GAT [3], it is not limited to a fixed number of heads during masking.
40
+
41
+ ### 2.2 Flexible multi-aggregators
42
+
43
+ In recent work, it was demonstrated that the use of multiple uncorrelated aggregators during the message-passing increased the expressiveness while avoiding the exponential growth of the parameter space [11]. Their work proposed to use the mean, max, min and std operators to extract rich statistical features.
44
+
45
+ In this work, we build on this idea by using multiple learned aggregators that can also exhibit high-frequency filtering. We further combine the mean, max, and min aggregators with multiple learned masks to provide a more expressive framework.
46
+
47
+ Learning the Mask. The first step is to learn the mask ${m}_{ij}^{l}$ , with a unique value for each layer $l$ and pair of neighbouring nodes $\{ i, j\}$ . To do so, we employ an MLP on the pair of node features ${h}_{i},{h}_{j}$ and optionally the edge features ${e}_{ij}$ in a similar fashion to the MPNN. However, this does not constitute the message, but rather the weights that will multiply the aggregated neighbouring features. The equation is formalized in (1), with $\sigma$ being the activation function, ${W}_{m}^{l + 1}$ a learned matrix for the $l$ -th layer, and || the column-wise concatenation.
48
+
49
+ $$
50
+ {m}_{j}^{l} = {ML}{P}^{l}\left( {\parallel {h}_{i}^{l},{h}_{j}^{l},{e}_{ij}^{l}}\right) = \sigma \left( {{W}_{m}^{l + 1}\left( {\parallel {h}_{i}^{l},{h}_{j}^{l},{e}_{ij}^{l}}\right) }\right) \tag{1}
51
+ $$
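+ To make Eq. (1) concrete, the following is a minimal PyTorch sketch of the mask computation for a single neighbouring pair. It is illustrative only (not the authors' implementation); the feature dimensions, the two-layer MLP, and the sigmoid activation are assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ # Sketch of Eq. (1): the mask for neighbour j of node i is an MLP applied to the
+ # concatenated node features h_i, h_j and (optional) edge features e_ij.
+ node_dim, edge_dim, hidden_dim = 8, 4, 16
+
+ mask_mlp = nn.Sequential(
+     nn.Linear(2 * node_dim + edge_dim, hidden_dim),
+     nn.ReLU(),
+     nn.Linear(hidden_dim, node_dim),
+     nn.Sigmoid(),  # sigma in Eq. (1); keeps every mask entry in (0, 1)
+ )
+
+ h_i = torch.randn(node_dim)    # features of the central node i
+ h_j = torch.randn(node_dim)    # features of one neighbour j
+ e_ij = torch.randn(edge_dim)   # features of the edge (i, j)
+
+ m_j = mask_mlp(torch.cat([h_i, h_j, e_ij]))   # one mask weight per feature of h_j
+ print(m_j.shape)  # torch.Size([8])
+ ```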
52
+
53
+ Masked Max/Min Aggregators. Max/min aggregators have been shown to be effective for discrete tasks and for domains where credit assignment and extrapolation to unseen distributions of graphs are important [14]. In this study, we extend the max/min aggregators by adding a learned mask ${m}_{j}^{l}$ as follows. This allows the network to learn to ignore certain "undesired" nodes when propagating information.
54
+
55
+ $$
56
+ \mathop{\max }\limits_{i}^{l} = \mathop{\max }\limits_{{j \in {N}_{i}}}\left( {{X}_{j}^{l} * {m}_{j}^{l}}\right) \;\mathop{\min }\limits_{i}^{l} = \mathop{\min }\limits_{{j \in {N}_{i}}}\left( {{X}_{j}^{l} * {m}_{j}^{l}}\right) \tag{2}
57
+ $$
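+ As a quick toy illustration of Eq. (2) (made-up values, not taken from the paper), a zero mask entry suppresses the corresponding neighbour feature before the max/min is taken:
+
+ ```python
+ import torch
+
+ # Node i has three neighbours with 2-dimensional features; masks are hard 0/1 here
+ # only for readability (in MMA they are learned, continuous values).
+ X_j = torch.tensor([[1.0, 4.0],
+                     [3.0, 2.0],
+                     [5.0, 6.0]])
+ m_j = torch.tensor([[1.0, 1.0],
+                     [1.0, 0.0],
+                     [0.0, 1.0]])
+
+ masked = X_j * m_j                    # Hadamard product
+ print(masked.max(dim=0).values)       # tensor([3., 6.]): the 5.0 is masked out
+ print(masked.min(dim=0).values)       # tensor([0., 0.])
+ ```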
58
+
59
+ Masked Mean Aggregator. One of the most widely used aggregators in the literature is the mean aggregator, in which each node computes a weighted average or sum of its incoming messages. Using a degree scaler, it was also shown that the sum aggregation can be recovered from the mean [11]. In this work, we first apply the same operation as in LA-GCN [6] and then divide by the node's degree:
60
+
61
+ $$
62
+ {\mu }_{i}\left( {X}^{l}\right) = \frac{1}{{d}_{i}}\mathop{\sum }\limits_{{j \in {N}_{i}}}{X}_{j}^{l} * {m}_{j}^{l} \tag{3}
63
+ $$
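+ A loop-based sketch of the masked mean in Eq. (3) on a toy graph is given below. It is illustrative only; an efficient implementation would use scatter operations, and the per-edge masks are assumed to come from the mask MLP of Eq. (1).
+
+ ```python
+ import torch
+
+ edge_index = torch.tensor([[1, 2, 0, 2],    # source nodes j
+                            [0, 0, 1, 1]])   # target nodes i
+ X = torch.randn(3, 4)                        # 3 nodes, 4 features each
+ m = torch.rand(edge_index.size(1), 4)        # one learned mask per edge (assumed given)
+
+ mean_agg = torch.zeros_like(X)
+ for i in range(X.size(0)):
+     incoming = edge_index[1] == i            # edges pointing into node i
+     if incoming.any():
+         d_i = incoming.sum()                 # node degree d_i
+         mean_agg[i] = (X[edge_index[0, incoming]] * m[incoming]).sum(dim=0) / d_i
+ print(mean_agg.shape)  # torch.Size([3, 4]); node 2 has no incoming edges and stays zero
+ ```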
64
+
65
+ Degree Scalers. In MMA, we further make use of degree scalers, motivated by their ability to amplify and attenuate signals using the node degrees and to increase expressivity [11]. The general equation is given below, with $S$ being the scaling factor, $d$ the node degree, $\alpha$ the amplification factor, and $\delta$ the average degree in the training set. In our study, we use $\alpha = \{ - 1,0,1\}$ , corresponding respectively to attenuation, no change, and amplification of the signal based on its degree.
66
+
67
+ $$
68
+ S\left( {d,\alpha }\right) = {\left( \frac{\log \left( {d + 1}\right) }{\delta }\right) }^{\alpha }, d > 0, - 1 \leq \alpha \leq 1 \tag{4}
69
+ $$
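+ For intuition, the snippet below evaluates Eq. (4) for a few degrees, assuming (for illustration only) a training-set average degree of $\delta = 4$:
+
+ ```python
+ import math
+
+ delta = 4.0  # assumed training-set average degree (illustrative)
+ for d in (1, 5, 25):
+     s_amp = (math.log(d + 1) / delta) ** 1    # alpha = +1 (amplification)
+     s_att = (math.log(d + 1) / delta) ** -1   # alpha = -1 (attenuation)
+     print(f"d={d}: S(d, 1)={s_amp:.3f}, S(d, -1)={s_att:.3f}")  # alpha = 0 gives S = 1
+ ```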
70
+
71
+ Combining Aggregators. We further combine multiple aggregators and degree scalers to increase the expressivity of the network, following the equation below. Here, $\otimes$ denotes the tensor product and ${ \oplus }_{\text{mask }}$ the general aggregation function of the proposed MMA framework.
72
+
73
+ $$
74
+ { \oplus }_{\text{mask }} = \left( \begin{matrix} I \\ S\left( {D,\alpha = 1}\right) \\ S\left( {D,\alpha = - 1}\right) \end{matrix}\right) \otimes \left( \begin{matrix} \text{Masked Max} \\ \text{Masked Min} \\ \text{Masked Mean} \end{matrix}\right) \tag{5}
75
+ $$
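+ A minimal sketch of the combination in Eq. (5): each scaler is applied to each masked aggregator and the results are concatenated per node. The shapes and the value of $\delta$ are illustrative assumptions.
+
+ ```python
+ import torch
+
+ num_nodes, feat = 3, 4
+ # Outputs of the masked max/min/mean aggregators from Eqs. (2)-(3), one row per node.
+ max_agg, min_agg, mean_agg = (torch.randn(num_nodes, feat) for _ in range(3))
+
+ degrees = torch.tensor([2.0, 3.0, 1.0])
+ log_deg = torch.log(degrees + 1).unsqueeze(-1)   # log(d + 1), shape (num_nodes, 1)
+ delta = 1.1                                      # assumed training-set average
+
+ # Scalers for alpha = 0 (identity), +1 (amplification) and -1 (attenuation).
+ scalers = [torch.ones_like(log_deg), log_deg / delta, delta / log_deg]
+
+ out = torch.cat([s * a for s in scalers for a in (max_agg, min_agg, mean_agg)], dim=-1)
+ print(out.shape)  # torch.Size([3, 36]) = 3 scalers x 3 aggregators x 4 features
+ ```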
76
+
77
+ ## 3 Experiments
78
+
79
+ We first evaluated the performance of MMA models on four widely-used benchmarking datasets (see Appendix A.1) over two tasks, using combinations of different masked aggregators. Subsequently, we investigated the performance of the models when the same type of masked aggregator(s) is used. Finally, the results of MMA were also compared with well-known baseline methods in the field: Message Passing Neural Networks (MPNN) [2], Graph Convolutional Networks (GCN) [1], GAT [3], LGCL [8], Graph-BERT [15], PNA [11], LA-GCN [6], and the Adaptive Kernel Graph Neural Network (AKGNN) [16].
80
+
81
+ ### 3.1 Results
82
+
83
+ We trained several models using multiple aggregator(s) in two different settings: in Setting 1 (see Table 3), we measured the performance of MMA using combinations of different kinds of aggregators; in Setting 2 (see Table 4), the same type of aggregator(s) was used, where each aggregator has a different trained mask. We used the same training/validation/test settings for a fair comparison with other methods. In Appendix A.2, we also present ablation studies.
84
+
85
+ Finally, we compared our best-performing results with well-known baseline methods in the literature. The results are given in Table 1. Our results show improved performance over the compared methods in the majority of cases.
86
+
87
+ Table 1: Benchmarking MMA on Pubmed, Citeseer, Cora and ZINC datasets.
88
+
89
+ <table><tr><td>Models</td><td>Pubmed</td><td>Citeseer</td><td>$\mathbf{{Cora}}$</td><td>ZINC</td></tr><tr><td>MPNN [2]</td><td>75.60</td><td>64.00</td><td>78.00</td><td>0.288</td></tr><tr><td>GCN [1]</td><td>79.00</td><td>70.30</td><td>81.50</td><td>-</td></tr><tr><td>GAT [3]</td><td>79.00</td><td>72.50</td><td>83.00</td><td>-</td></tr><tr><td>LGCL [8]</td><td>79.50</td><td>73.00</td><td>83.30</td><td>-</td></tr><tr><td>GRAPH-BERT [15]</td><td>79.30</td><td>71.20</td><td>84.30</td><td>-</td></tr><tr><td>PNA [11]</td><td>-</td><td>-</td><td>-</td><td>0.188</td></tr><tr><td>LA-GCN [6]</td><td>-</td><td>-</td><td>81.50</td><td>-</td></tr><tr><td>AKGNN [16]</td><td>80.40</td><td>73.50</td><td>84.80</td><td>-</td></tr><tr><td>MMA (ours)</td><td>86.00</td><td>76.30</td><td>85.80</td><td>0.1562</td></tr></table>
90
+
91
+ ## 4 Discussion and Conclusion
92
+
93
+ In this study, we propose Multi-Mask Aggregators for graph representation learning, with the aim of utilizing different or the same aggregators within one learning mechanism. MMA provides a flexible learning method by allowing the integration of different or same types of aggregator(s), each with its own learnable parameters. Our contributions can be summarized as follows: 1) it provides flexible multi-aggregators with mask aggregation; 2) it removes the limitation on the number of heads; 3) it extracts local information with local parameters instead of global parameters as in MPNNs/PNA; 4) it behaves in-between GAT and MPNN/PNA; 5) it improves performance on node classification and graph regression benchmarks.
94
+
95
+ Finally, as shown in the ablation studies, there is no definite consensus on how many and which aggregators should be used. We therefore expect aggregator design for graph neural networks to remain an open research direction.
96
+
97
+ References
98
+
99
+ [1] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 1, 4
100
+
101
+ [2] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pages 1263-1272. PMLR, 2017. 1, 4
102
+
103
+ [3] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017. 1, 3, 4
104
+
105
+ [4] William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 1025-1035, 2017. 1
106
+
107
+ [5] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018. 1, 9
108
+
109
+ [6] Li Zhang and Haiping Lu. A feature-importance-aware and robust aggregator for gcn. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 1813-1822, 2020. 1, 2, 3, 4, 7, 8
110
+
111
+ [7] Ryan L Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. arXiv preprint arXiv:1811.01900, 2018. 1
112
+
113
+ [8] Hongyang Gao, Zhengyang Wang, and Shuiwang Ji. Large-scale learnable graph convolutional networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1416-1424, 2018. 1, 4
114
+
115
+ [9] Rex Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. Gnn explainer: A tool for post-hoc explanation of graph neural networks. arXiv preprint arXiv:1903.03894, 2019. 1
116
+
117
+ [10] Nima Dehmamy, Albert-László Barabási, and Rose Yu. Understanding the representation power of graph neural networks in learning graph topology. arXiv preprint arXiv:1907.05008, 2019. 1
118
+
119
+ [11] Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. Principal neighbourhood aggregation for graph nets. arXiv preprint arXiv:2004.05718, 2020. 1, 2, 3, 4, 9
120
+
121
+ [12] Sebastian Thrun. Lifelong learning algorithms. In Learning to learn, pages 181-209. Springer, 1998.1
122
+
123
+ [13] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359-366, 1989. 1
124
+
125
+ [14] Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. arXiv preprint arXiv:1910.10593, 2019. 3
126
+
127
+ [15] Jiawei Zhang, Haopeng Zhang, Congying Xia, and Li Sun. Graph-bert: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140, 2020. 4, 8
128
+
129
+ [16] Mingxuan Ju, Shifu Hou, Yujie Fan, Jianan Zhao, Liang Zhao, and Yanfang Ye. Adaptive kernel graph neural network. arXiv preprint arXiv:2112.04575, 2021. 4
130
+
131
+ [17] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93-93, 2008. 6
132
+
133
+ [18] John J Irwin, Teague Sterling, Michael M Mysinger, Erin S Bolstad, and Ryan G Coleman. Zinc: a free tool to discover chemistry for biology. Journal of chemical information and modeling, 52 (7):1757-1768,2012.6
134
+
135
+ [19] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. Advances in neural information processing systems, 30, 2017.7,8
136
+
137
+ [20] Edward Wagstaff, Fabian Fuchs, Martin Engelcke, Ingmar Posner, and Michael A Osborne. On the limitations of representing functions on sets. In International Conference on Machine Learning, pages 6487-6494. PMLR, 2019. 9
138
+
139
+ [21] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251-257, 1991. 9
140
+
141
+ ## A Experiment Details
142
+
143
+ ### A.1 Datasets
144
+
145
+ We trained our method on four different datasets: Cora, Citeseer, PubMed, and ZINC. The dataset statistics and the training, validation, and test settings are given in Table 2. The benchmarking datasets are explained below:
146
+
147
+ - Cora [17] - Cora is a citation graph dataset where each node represents a scientific publication classified into one of 7 classes. The dataset consists of 2,708 nodes and 5,429 edges, where there is an edge between two nodes if one of them cites the other. Nodes are represented by binary feature vectors, where each dimension indicates the absence or presence of a word from a dictionary of 1,433 unique words.
148
+
149
+ - Citeseer [17] - Citeseer is also a citation graph dataset for the node classification task, where the nodes represent publications classified into 6 classes. Nodes are represented as binary feature vectors, similar to the Cora dataset. The Citeseer dataset contains 3,327 nodes and 4,732 edges.
150
+
151
+ - Pubmed [17] - Pubmed is another citation graph dataset, in which each node represents a paper related to diabetes. Pubmed is also a node classification dataset, where one of three classes is assigned to each node. The dataset consists of 19,717 nodes and 44,338 edges. Each node is represented by a TF/IDF weighted word vector over a dictionary of 500 unique words.
152
+
153
+ - ZINC [18] - ZINC is a graph regression dataset for constrained solubility prediction of chemical compounds. Each compound is represented by a graph where nodes represent atoms and edges represent the bonds between atoms. The ZINC dataset consists of 12,000 molecules, with the number of atoms per molecule varying from 9 to 37.
154
+
155
+ The dataset statistics are summarized in Table 2.
156
+
157
+ Table 2: Summary of the datasets used in benchmarking
158
+
159
+ <table><tr><td>Domain & Construction</td><td>Dataset</td><td>#Nodes</td><td>Total #Nodes</td><td>Edges</td><td>Features</td><td>Classes</td><td>Train/Val./Test</td><td>Task</td></tr><tr><td>Social Networks: Real-world citation graphs</td><td>Cora</td><td>2,708</td><td>2,708</td><td>5,429</td><td>1,433</td><td>7</td><td>1,208/500/1,000</td><td>Node Classification</td></tr><tr><td>Social Networks: Real-world citation graphs</td><td>Citeseer</td><td>3,327</td><td>3,327</td><td>4,732</td><td>3,703</td><td>6</td><td>1,827/500/1,000</td><td>Node Classification</td></tr><tr><td>Social Networks: Real-world citation graphs</td><td>Pubmed</td><td>19,717</td><td>19,717</td><td>44,338</td><td>500</td><td>3</td><td>18,217/500/1,000</td><td>Node Classification</td></tr><tr><td>Chemistry: Real-world molecular graphs</td><td>ZINC</td><td>9-37</td><td>277,864</td><td>-</td><td>-</td><td>-</td><td>10,000/1,000/1,000</td><td>Graph Regression</td></tr></table>
160
+
161
+ ### A.2 Ablation Studies
162
+
163
+ Table 3 shows the performance results of the top four best-performing models trained using different combinations of multi-mask aggregators. As can be observed from Table 3, there is no consensus on the types of aggregators across datasets, so dataset-specific aggregators should be determined empirically. For example, the Masked Mean-Mean2 aggregators performed best on the Cora dataset, whereas the Masked Min-Min2-Min3 aggregators worked best on the Citeseer dataset. Similarly, the Min-Min2-Min3-Min4 and Min-Max sets of aggregators gave the best results on the Pubmed and ZINC datasets, respectively.
164
+
165
+ We also evaluated the performance of MMA using the same multi-masked aggregators. Here, MMA models were trained with one or several copies of the same aggregator type to investigate how performance changes with the same type of masked aggregator(s). The results are given in Table 4; we trained MMA models with up to four copies of the same aggregator, each with a different learnable mask. The results show that specific aggregators work better than the others on different datasets. For example, on the Cora dataset, mean aggregators almost consistently worked better than the min and max aggregators. On the Citeseer dataset, max aggregators perform worse than the min and mean aggregators.
166
+
167
+ ## B Theoretical Background
168
+
169
+ Permutation invariance is an essential property of aggregation functions, due to the lack of node ordering in most real graphs. While aggregating the representations of a node's neighbors, the neighborhood aggregation
170
+
171
+ Table 3: Performance of MMA using different aggregators on independent test sets (Setting-1)
172
+
173
+ <table><tr><td>$\mathbf{{Dataset}}$</td><td>Masked Aggregators</td><td>Learning Rate</td><td>Weight Decay</td><td>Hidden Units</td><td>Epoch</td><td>Accuracy/MAE</td></tr><tr><td rowspan="4">$\mathbf{{Cora}}$</td><td>Mean-Min</td><td>0.001</td><td>5e-4</td><td>128</td><td>200</td><td>85.10</td></tr><tr><td>Mean-Max-Min</td><td>0.001</td><td>3e-4</td><td>128</td><td>1000</td><td>84.30</td></tr><tr><td>Mean-Max</td><td>0.001</td><td>1e-4</td><td>128</td><td>1000</td><td>84.10</td></tr><tr><td>Max-Min</td><td>0.01</td><td>3e-4</td><td>64</td><td>1000</td><td>83.60</td></tr><tr><td rowspan="4">Citeseer</td><td>Mean-Max</td><td>0.01</td><td>5e-4</td><td>128</td><td>500</td><td>75.90</td></tr><tr><td>Mean-Min</td><td>0.01</td><td>3e-4</td><td>64</td><td>500</td><td>75.50</td></tr><tr><td>Max-Min</td><td>0.001</td><td>1e-4</td><td>64</td><td>1000</td><td>75.30</td></tr><tr><td>Mean-Max-Min</td><td>0.01</td><td>5e-4</td><td>64</td><td>500</td><td>75.30</td></tr><tr><td rowspan="4">Pubmed</td><td>Mean-Min</td><td>0.01</td><td>1e-4</td><td>64</td><td>200</td><td>85.90</td></tr><tr><td>Mean-Max-Min</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Mean-Max</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Max-Min</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td rowspan="4">ZINC</td><td>Mean-Min</td><td>0.0001</td><td>3e-4</td><td/><td>10000</td><td>0.1585</td></tr><tr><td>Mean-Max-Min</td><td>0.00001</td><td>3e-4</td><td>-</td><td>10000</td><td>0.1606</td></tr><tr><td>Mean-Max</td><td>0.0001</td><td>3e-4</td><td>-</td><td>10000</td><td>0.1585</td></tr><tr><td>Min-Max</td><td>0.0001</td><td>$3\mathrm{e} - 4$</td><td>-</td><td>10000</td><td>0.1562</td></tr></table>
174
+
175
+ Table 4: Performance results of same multi-masked aggregators on independents test sets (Setting-2)
176
+
177
+ <table><tr><td>Masked Aggregators</td><td>Cora</td><td>Citeseer</td><td>Pubmed</td><td>ZINC</td></tr><tr><td>Mean</td><td>85.60</td><td>76.00</td><td>-</td><td>0.1631</td></tr><tr><td>Mean-Mean2</td><td>85.80</td><td>76.10</td><td>-</td><td>0.1763</td></tr><tr><td>Mean-Mean2-Mean3</td><td>84.60</td><td>74.60</td><td>-</td><td>0.1940</td></tr><tr><td>Mean-Mean2-Mean3-Mean4</td><td>84.80</td><td>75.20</td><td>-</td><td>0.1886</td></tr><tr><td>Min</td><td>83.90</td><td>76.10</td><td>85.80</td><td>0.1535</td></tr><tr><td>Min-Min2</td><td>84.20</td><td>75.40</td><td>85.30</td><td>0.1571</td></tr><tr><td>Min-Min2-Min3</td><td>84.00</td><td>76.30</td><td>85.70</td><td>0.1591</td></tr><tr><td>Min-Min2-Min3-Min4</td><td>84.00</td><td>75.70</td><td>86.00</td><td>-</td></tr><tr><td>Max</td><td>83.60</td><td>75.40</td><td>85.50</td><td>0.1519</td></tr><tr><td>Max-Max2</td><td>83.00</td><td>75.00</td><td>84.30</td><td>0.1653</td></tr><tr><td>Max-Max2-Max3</td><td>83.00</td><td>75.00</td><td>83.30</td><td>0.1717</td></tr><tr><td>Max-Max2-Max3-Max4</td><td>83.60</td><td>75.00</td><td>81.90</td><td>0.1604</td></tr></table>
178
+
179
+ scheme iteratively updates the representation of a node [6]. This intuition about the aggregation process can be formalized as follows:
180
+
181
+ $$
182
+ {s}_{i}^{\left( k - 1\right) } = {f}_{ag}^{\left( k\right) }\left( {{h}_{j}^{\left( k - 1\right) }, j \in {N}_{i}}\right) \tag{6}
183
+ $$
184
+
185
+ where ${f}_{ag}^{\left( k\right) }$ is the aggregator in the $k$ -th layer. The aggregation function ${f}_{ag}^{\left( k\right) }$ should be a permutation invariant function on the multiset. According to [19], the definition of a permutation invariant function is as follows:
186
+
187
+ Definition 1: A function $f$ is permutation-invariant if
188
+
189
+ $$
190
+ f\left( {{h}_{1},{h}_{2},\ldots ,{h}_{\left| {N}_{i}\right| }}\right) = f\left( {{h}_{{\pi }_{\left( 1\right) }},{h}_{{\pi }_{\left( 2\right) }},\ldots ,{h}_{{\pi }_{\left( \left| {N}_{i}\right| \right) }}}\right) \tag{7}
191
+ $$
192
+
193
+ for any permutation $\pi$ and $\left| {N}_{i}\right|$ is the length of the sequence. ${\Pi }_{\left| {N}_{i}\right| }$ denotes the multiset of all permutations of the integers 1 to $\left| {N}_{i}\right|$ and ${h}_{\pi },\pi \in {\Pi }_{\left| {N}_{i}\right| }$ , denotes a reordering of the multiset according to $\pi$ . Relation between set and permutation invariant function can be shown in the following theorem in [19]:
194
+
195
+ Theorem 1: A function operating on a multiset ${h}_{1},{h}_{2},\ldots ,{h}_{\left( \left| {N}_{i}\right| \right) }$ having elements from a countable universe, is a valid set function. It is invariant to the permutation of instances in the multiset, if it can be decomposed in the form $\rho \left( {\mathop{\sum }\limits_{{\pi \in {\Pi }_{\left| {N}_{i}\right| }}}\phi \left( {h}_{\pi }\right) }\right)$ for suitable transformation $\phi$ and $\rho$ .
196
+
197
+ From Theorem 1, it can be inferred that all the representations are summed and a nonlinear transformation is then applied.
198
+
199
+ The mean and sum aggregation functions, as well as the aggregators in GCN and GAT, can be represented in this form. As shown in Eq. (8) and Eq. (9), respectively, GCN and GAT sum over the whole neighborhood with fixed or learnable parameters.
200
+
201
+ $$
202
+ {s}_{i}^{\left( k - 1\right) } = {f}_{agg}^{\left( k\right) }\left( {h}_{j}^{\left( k - 1\right) }\right) = \mathop{\sum }\limits_{{j \in {N}_{i}}}{h}_{j}^{\left( k - 1\right) }/\sqrt{{d}_{i}{d}_{j}} \tag{8}
203
+ $$
204
+
205
+ where ${d}_{i}$ and ${d}_{j}$ are the node degrees of node ${v}_{i}$ and node ${v}_{j}$ respectively.
206
+
207
+ $$
208
+ {s}_{i}^{\left( k - 1\right) } = {f}_{aga}^{\left( k\right) }\left( {h}_{j}^{\left( k - 1\right) }\right) = \mathop{\sum }\limits_{{j \in {N}_{i}}}{\alpha }_{ij}{h}_{j}^{\left( k - 1\right) } \tag{9}
209
+ $$
210
+
211
+ where ${\alpha }_{ij}$ is a learnable attention coefficient that indicates the importance of ${v}_{j}$ to ${v}_{i}$ .
212
+
213
+ ### B.1 Mask Aggregator
214
+
215
+ [6] used a mask aggregator with the sum aggregation function in order to assign different importance to different neighbors' features. In this aggregation process, the mask aggregator assigns different weights to different neighbors' features, which are then aggregated by the sum aggregation function:
216
+
217
+ $$
218
+ {s}_{i}^{\left( k - 1\right) } = {f}_{agm}^{\left( k\right) }\left( {h}_{j}^{\left( k - 1\right) }\right) = \mathop{\sum }\limits_{{j \in {N}_{i}}}{h}_{j}^{\left( k - 1\right) } * {m}_{j}^{\left( k - 1\right) } \tag{10}
219
+ $$
220
+
221
+ where ${h}_{j}^{\left( k - 1\right) } \in {\mathbb{R}}^{{d}_{k - 1}}$ and ${m}_{j}^{\left( k - 1\right) } \in {\mathbb{R}}^{{d}_{k - 1}}$ is a specific mask for each neighbor, produced by the auxiliary model. They showed that the mask aggregator is permutation invariant, as stated in the following theorem, which is proven in [15]:
222
+
223
+ Theorem 2: ${f}_{agm}^{\left( k\right) }$ is a permutation-invariant function acting on finite but arbitrary length sequence ${h}_{j}^{\left( k - 1\right) }, j \in {N}_{i}$ .
224
+
225
+ Proof 2: Given a finite multiset $H = {h}_{1}^{\left( k - 1\right) },{h}_{2}^{\left( k - 1\right) },\ldots ,{h}_{\left( \left| {N}_{i}\right| \right) }^{\left( k - 1\right) }$ with ${h}_{j}^{\left( k - 1\right) } \in {\mathbb{R}}^{{d}_{k - 1}}$ , the mask aggregator combines it into a fixed output ${s}_{i}^{\left( k - 1\right) } \in {\mathbb{R}}^{{d}_{k - 1}}$ as follows:
226
+
227
+ $$
228
+ {s}_{i}^{\left( k - 1\right) } = {f}_{agm}^{\left( k\right) }\left( {h}_{j}^{\left( k - 1\right) }\right) = \mathop{\sum }\limits_{{j \in {N}_{i}}}{h}_{j}^{\left( k - 1\right) } * {m}_{j}^{\left( k - 1\right) } \tag{11}
229
+ $$
230
+
231
+ where ${m}_{j}^{\left( k - 1\right) } \in {\mathbb{R}}^{{d}_{k - 1}}$ is a specific mask for each neighbor, produced by the auxiliary model. First, the mask ${m}_{j}^{\left( k - 1\right) }$ is obtained for each node ${h}_{j}^{\left( k - 1\right) }$ from the auxiliary model, given the graph information.
232
+
233
+ There exists a mapping function $\phi : {\mathbb{R}}^{{d}_{k - 1}} \rightarrow {\mathbb{R}}^{{d}_{k - 1}}$ that can formulate ${h}_{j}^{\left( k - 1\right) } * {m}_{j}^{\left( k - 1\right) }$ to $\phi \left( {h}_{j}^{\left( k - 1\right) }\right)$ , and (11) can be written as:
234
+
235
+ $$
236
+ {s}_{i}^{\left( k - 1\right) } = {f}_{agm}^{\left( k\right) }\left( {h}_{j}^{\left( k - 1\right) }\right) = \mathop{\sum }\limits_{{j \in {N}_{i}}}\phi \left( {h}_{j}^{\left( k - 1\right) }\right) \tag{12}
237
+ $$
238
+
239
+ and $\rho$ can be seen as $\rho = 1$ . (8) can be seen as a permutation of $\mathrm{H}$ , according to [19].
240
+
241
+ Next, they prove there exist an injective mapping function $\phi$ , so that ${f}_{agm}^{\left( k\right) }\left( {h}_{j}^{\left( k - 1\right) }\right)$ is unique for each finite multiset $\mathrm{H}$ .
242
+
243
+ Since $\mathrm{H}$ is countable, each $\left( {h}_{j}^{\left( k - 1\right) }\right) \in H$ can be mapped to a unique element to prime numbers $p\left( H\right) : {\mathbb{R}}^{M} \rightarrow \mathbb{P}$ by a function $p\left( {h}_{j}^{\left( k - 1\right) }\right) .\phi \left( {h}_{j}^{\left( k - 1\right) }\right)$ can be represented as $- \operatorname{logp}\left( {h}_{j}^{\left( k - 1\right) }\right)$ . Thus,
244
+
245
+ $$
246
+ {f}_{agm}^{\left( k\right) }\left( {h}_{j}^{\left( k - 1\right) }\right) = \mathop{\sum }\limits_{{j \in {N}_{i}}}\phi \left( {h}_{j}^{\left( k - 1\right) }\right) = \operatorname{logp}\left( {h}_{j}^{\left( k - 1\right) }\right) \tag{13}
247
+ $$
248
+
249
+ takes a unique value for each distinct $\mathrm{H}$ .
250
+
251
+ Besides, the dimension ${d}_{k - 1}$ of the latent space should be at least as large as the maximum number of input elements $\left| {N}_{i}\right|$ , which is both necessary and sufficient for continuous permutation-invariant functions [20].
252
+
253
+ Since any continuous function can be approximated by a neural network, according to the universal approximation theorem [21], MLPs can be used as the auxiliary model to learn $\phi$ with $\rho = 1$ .
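+ As a small sanity check of the permutation invariance claimed in Theorem 2 (a toy example, not part of the original proof), permuting the neighbours together with their masks leaves the masked sum of Eq. (10) unchanged:
+
+ ```python
+ import torch
+
+ torch.manual_seed(0)
+ h = torch.randn(5, 4)        # neighbour representations h_j
+ m = torch.rand(5, 4)         # their masks m_j (produced by the auxiliary model)
+
+ perm = torch.randperm(5)     # reorder the neighbourhood
+ s = (h * m).sum(dim=0)
+ s_perm = (h[perm] * m[perm]).sum(dim=0)
+ print(torch.allclose(s, s_perm))  # True
+ ```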
254
+
255
+ ### B.2 Multi Aggregator
256
+
257
+ According to Theorem 2, the masked sum is a permutation-invariant function on the multiset, and the mask aggregator can adapt to which features or neighbors are more important and filter out noisy information.
258
+
259
+ However, [11] showed that sum aggregation alone cannot discriminate between certain graphs, and they proposed multi-aggregation in order to tackle this problem. They showed that multi-aggregation can discriminate between such graphs, as stated in the following theorem:
260
+
261
+ Theorem 3: In order to discriminate between multisets of size $\mathrm{n}$ whose underlying set is $R$ , at least $n$ aggregators are needed.
262
+
263
+ Unlike [5], [11] considers a continuous input feature space; this better represents many real-world tasks where the observed values have uncertainty, and better models the latent node features within a neural network's representations. Continuous features make the space uncountable and void the injectivity proof of the sum aggregation presented by Xu et al. [5]. Hence, they redefine aggregators as continuous functions of multisets which compute a statistic on the neighbouring nodes, such as mean, max, or standard deviation. Continuity is important with continuous input spaces, as small variations in the input should result in small variations of the aggregators' output.
papers/LOG/LOG 2022/LOG 2022 Conference/hZ3b8CskgC/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,126 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Anonymous Author(s)
2
+
3
+ Anonymous Affiliation
4
+
5
+ Anonymous Email
6
+
7
+ § ABSTRACT
8
+
9
+ One of the most critical operations in graph neural networks (GNNs) is the aggregation operation, which aims to extract information from the neighbors of a target node. Several convolution methods have been proposed, such as standard graph convolution (GCN), graph attention (GAT), and message passing (MPNN). In this study, we propose an aggregation method called Multi-Mask Aggregators (MMA), where the model learns a weighted mask for each aggregator before collecting neighboring messages. MMA draws similarities with GAT and MPNN but has some theoretical and practical advantages. Intuitively, our framework is not limited by the number of heads as GAT is, and it is more discriminative than an MPNN. The performance of MMA was compared with well-known baseline methods on both node classification and graph regression tasks on widely-used benchmarking datasets, and it has shown improved performance.
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ Graph Neural Networks (GNNs) have attracted great interest in recent years due to their performance and their ability to extract complex information [1-4]. One of the most important operations in graph neural networks is the aggregation operation, whose aim is to iteratively exploit information from the neighbours of a target node to update its latent representation [2, 5]. Several different aggregators have been used, such as mean, sum, max, min, and long short-term memory (LSTM), to extract more meaningful information from the neighbors of a particular node [4], [5]. According to [6], an ideal learnable and flexible aggregation should satisfy the following conditions: 1) permutation invariance [7]; 2) adaptivity to various neighborhood information [3] [8]; 3) learned representations that are explainable in relation to the predictions and robust to noise [9]; 4) discriminative power with respect to graph structures [5].
14
+
15
+ In recent years, several methods have been proposed in the graph neural network area that use different types of aggregators. For example, the Graph Attention Network (GAT) borrows the idea of attention mechanisms and performs aggregation by assigning different weights to different neighbors [3]. However, it is not adaptive to various neighborhood information at the feature level, since all individual features are considered equally [3] [8]. The learnable graph convolutional layer (LGCL) method applies a convolution operation in the aggregation process by assigning different weights to different features [8]. LGCL can deal with different neighborhood information; however, information may be lost during the selection step, since selecting the d-largest feature values from the neighboring nodes breaks the original correspondence between node features [3] [8]. Dehmamy et al. [10] empirically showed that using multiple aggregators (i.e., mean, max, and normalized mean) improves the performance of GNNs on the task of learning graph moments. The Principal Neighbourhood Aggregation (PNA) method theoretically formalized this observation, and the authors demonstrated that using a single type of aggregator is insufficient to extract enough information from neighboring nodes, which limits learning ability and expressive power [11].
16
+
17
+ A mask aggregator uses an auxiliary model such as a multi-layer perceptron (MLP), which has no requirement on the size or order of the input [12] [13]. To satisfy the four conditions mentioned above, the Learnable Aggregator for GCN (LA-GCN) was proposed, which filters the neighborhood information with a mask aggregator before the aggregation process [6]. LA-GCN learns a specific mask for each neighbor of a given node, allowing both node-level and feature-level attention by the
18
+
19
+ § MULTI-MASK AGGREGATORS FOR GRAPH NEURAL NETWORKS
20
+
21
+
22
+
23
+ Figure 1: Architecture of MMA with the different aggregators: a) training the auxiliary model with a given node's and its neighbors' feature vectors; b) obtaining a mask for each neighbor from the auxiliary model and combining the node features with the learned mask via the Hadamard product; c) aggregating the (masked) neighbors to obtain the central node's new representation; d) combining the final aggregators with scalers.
24
+
25
+ auxiliary model. This mechanism assigns different weights to nodes and features, which provides interpretable results and increases model robustness. However, LA-GCN is based only on a sum aggregator, which loses stability as the average degree of a graph increases [11], and other types of aggregators are overlooked.
26
+
27
+ In this study, we propose Multi-Mask Aggregators (MMA), a novel graph neural network method that combines trainable auxiliary models with different or same types of aggregators. MMA utilizes a given node and its neighbors to train auxiliary models with the aim of extracting information in a graph where different neighborhood information is learned using different masks. We use multiple types of aggregators (i.e., mean, max, and min) and create a mask for each neighbor and aggregator. MMA can learn high-level rules (e.g., focusing on the important neighbors and features for node representation learning) to guide the aggregators for better utilization of the neighbourhood information. It is a flexible method where different or the same kinds of multiple learnable aggregators can be used. We evaluated MMA on well-known benchmark datasets and compared its performance with the well-known baseline graph neural network methods. The datasets, source code, experimental settings, and user instructions are available publicly at https://github.com/mmalogcanonnym/mmalogc.
28
+
29
+ The main contributions can be summarized as follows: 1) it provides flexible multi-aggregators with mask aggregation; 2) it removes the limitation on the number of heads; 3) it extracts local information with local parameters instead of global parameters as in MPNNs/PNA; 4) it behaves in-between GAT and MPNN/PNA; 5) it improves performance on node classification and graph regression benchmarks.
30
+
31
+ § 2 MULTI-MASK AGGREGATORS
32
+
33
+ The proposed Multi-Mask Aggregators method leverages the increased expressivity from the multi-aggregators models such as PNA [11] and the learnable masks from LA-GCN [6]. The Hadamard product is performed to multiply the neighbor's feature vector with the corresponding learned masks in the aggregation process, allowing each heuristic aggregator (e.g., min, mean etc.) to learn different features coming from the neighbors. Finally, the resulting aggregators are combined with scalers [11]. MMA architecture is given in Figure 1.
34
+
35
+ § 2.1 MOTIVATION
36
+
37
+ Several methods have been proposed in the graph neural network area. Most of them work by aggregating neighbouring node features using a permutation invariant function (PMI). One of the most popular frameworks is graph convolution, which uses a PMI to aggregate features from neighbouring nodes ${n}_{j}$ into a given node ${n}_{i}$ (see Appendix B). Another is message-passing, which generates a message from each pair of nodes $\left\{ {{n}_{i},{n}_{j}}\right\}$ and aggregates the messages via a PMI. Furthermore, graph attention computes an attention weight between $\left\{ {{n}_{i},{n}_{j}}\right\}$ and aggregates the neighbouring features ${n}_{j}$ via a sum weighted by the attention weights.
38
+
39
+ In this work, we propose a different framework called multi-masked aggregators (MMA), where the network learns multiple weighted masks from pairs $\{ u, v\}$ and aggregates them via a weighted PMI. Hence, the aggregation mechanism lies in-between graph attention, which learns multiple masks, and message-passing, which uses invariant functions. Similarly to PNA [11], it benefits from the increased expressivity of having multiple independent aggregators, and contrary to GAT [3], it is not limited to a fixed number of heads during masking.
40
+
41
+ § 2.2 FLEXIBLE MULTI-AGGREGATORS
42
+
43
+ In recent work, it was demonstrated that the use of multiple uncorrelated aggregators during the message-passing increased the expressiveness while avoiding the exponential growth of the parameter space [11]. Their work proposed to use the mean, max, min and std operators to extract rich statistical features.
44
+
45
+ In this work, we build on this idea by using multiple learned aggregators that can also exhibit high-frequency filtering. We further combine the mean, max, and min aggregators with multiple learned masks to provide a more expressive framework.
46
+
47
+ Learning the Mask. The first step is to learn the mask ${m}_{ij}^{l}$ , with a unique value for each layer $l$ and pair of neighbouring nodes $\{ i,j\}$ . To do so, we employ an MLP on the pair of node features ${h}_{i},{h}_{j}$ and optionally the edge features ${e}_{ij}$ in a similar fashion to the MPNN. However, this does not constitute the message, but rather the weights that will multiply the aggregated neighbouring features. The equation is formalized in (1), with $\sigma$ being the activation function, ${W}_{m}^{l + 1}$ a learned matrix for the $l$ -th layer, and || the column-wise concatenation.
48
+
49
+ $$
50
+ {m}_{j}^{l} = \mathrm{MLP}^{l}\left( {h}_{i}^{l} \,\|\, {h}_{j}^{l} \,\|\, {e}_{ij}^{l} \right) = \sigma \left( {W}_{m}^{l + 1}\left( {h}_{i}^{l} \,\|\, {h}_{j}^{l} \,\|\, {e}_{ij}^{l} \right) \right) \tag{1}
51
+ $$
52
+
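For concreteness, the mask computation in (1) amounts to a small per-edge MLP over the concatenated endpoint (and optional edge) features. Below is a minimal PyTorch sketch of this step; the module name `EdgeMaskMLP`, the single linear layer, and the sigmoid activation are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class EdgeMaskMLP(nn.Module):
    """Sketch of Eq. (1): m_j^l = sigma(W_m [h_i || h_j || e_ij])."""

    def __init__(self, node_dim, edge_dim, mask_dim):
        super().__init__()
        self.lin = nn.Linear(2 * node_dim + edge_dim, mask_dim)

    def forward(self, h_i, h_j, e_ij):
        # h_i, h_j: [num_edges, node_dim]; e_ij: [num_edges, edge_dim]
        z = torch.cat([h_i, h_j, e_ij], dim=-1)   # column-wise concatenation
        return torch.sigmoid(self.lin(z))         # one mask vector per edge

# Toy usage: 5 edges, 8-dim node features, 4-dim edge features.
mask_net = EdgeMaskMLP(node_dim=8, edge_dim=4, mask_dim=8)
m = mask_net(torch.randn(5, 8), torch.randn(5, 8), torch.randn(5, 4))
print(m.shape)  # torch.Size([5, 8])
```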
53
+ Masked Max/Min Aggregators. Max/min aggregators have been shown to be effective for discrete tasks and for domains where credit assignment and extrapolation to unseen distributions of graphs are important [14]. In this study, we extend the max/min aggregators by adding a learned mask ${m}_{j}^{l}$ as follows. This allows the network to learn to ignore certain "undesired" nodes when propagating information.
54
+
55
+ $$
56
+ \operatorname{max}_{i}^{l} = \max_{j \in {N}_{i}}\left( {X}_{j}^{l} * {m}_{j}^{l} \right), \qquad \operatorname{min}_{i}^{l} = \min_{j \in {N}_{i}}\left( {X}_{j}^{l} * {m}_{j}^{l} \right) \tag{2}
57
+ $$
58
+
59
+ Masked Mean Aggregator. One of the most widely used aggregators in the literature is the mean aggregator, in which each node computes a weighted average or sum of its incoming messages. It was also shown that, using a degree-scaler, the sum aggregation can be recovered from the mean [11]. In this work, we first apply the same operation as in LA-GCN [6] and then divide by the node's degree:
60
+
61
+ $$
62
+ {\mu }_{i}\left( {X}^{l}\right) = \frac{1}{{d}_{i}}\mathop{\sum }\limits_{{j \in {N}_{i}}}{X}_{j}^{l} * {m}_{j}^{l} \tag{3}
63
+ $$
64
+
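As a concrete illustration of (2) and (3), the sketch below multiplies each neighbour's features by its learned mask (Hadamard product) and then applies max, min, and degree-normalized sum per target node. It uses a plain per-node loop for clarity instead of optimized scatter kernels; all names are ours.

```python
import torch

def masked_aggregate(x, masks, edge_index):
    """Sketch of Eqs. (2)-(3). x: [num_nodes, d] node features;
    masks: [num_edges, d] learned masks (e.g. from Eq. (1));
    edge_index: [2, num_edges] with rows (source j, target i)."""
    src, dst = edge_index
    num_nodes, d = x.shape
    msg = x[src] * masks                            # Hadamard product X_j * m_j
    agg_max = torch.zeros(num_nodes, d)
    agg_min = torch.zeros(num_nodes, d)
    agg_mean = torch.zeros(num_nodes, d)
    for i in range(num_nodes):                      # plain loop for clarity
        nbr = msg[dst == i]                         # masked messages arriving at node i
        if nbr.numel() == 0:
            continue
        agg_max[i] = nbr.max(dim=0).values          # Eq. (2), max
        agg_min[i] = nbr.min(dim=0).values          # Eq. (2), min
        agg_mean[i] = nbr.sum(dim=0) / nbr.size(0)  # Eq. (3), (1/d_i) * sum_j X_j * m_j
    return agg_max, agg_min, agg_mean

# Toy usage: 4 nodes, 5 directed edges, 8-dim features.
edge_index = torch.tensor([[0, 1, 2, 3, 1], [1, 0, 1, 2, 2]])
mx, mn, mu = masked_aggregate(torch.randn(4, 8), torch.rand(5, 8), edge_index)
```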
65
+ Degree Scalers. In MMA, we further make use of degree scalers, motivated by their ability to amplify or attenuate signals based on node degrees and thereby increase expressivity [11]. The general equation is given below, with $S$ being the scaling factor, $d$ the node degree, $\alpha$ the amplification factor, and $\delta$ the average degree in the training set. In our study, we use $\alpha \in \{ -1, 0, 1\}$, corresponding respectively to attenuation, no change, and amplification of the signal according to its degree.
66
+
67
+ $$
68
+ S\left( {d,\alpha }\right) = {\left( \frac{\log \left( {d + 1}\right) }{\delta }\right) }^{\alpha }, \quad d > 0, \; -1 \leq \alpha \leq 1 \tag{4}
69
+ $$
70
+
71
+ Combining Aggregators. We further combine multiple aggregators and degree scalers to increase the expressivity of the network, following the equation below. Here, $\otimes$ denotes the tensor product and ${ \oplus }_{\text{ mask }}$ the general aggregation function of the proposed MMA framework.
72
+
73
+ $$
74
+ { \oplus }_{\text{ mask }} = \begin{pmatrix} I \\ S\left( {D,\alpha = 1}\right) \\ S\left( {D,\alpha = - 1}\right) \end{pmatrix} \otimes \begin{pmatrix} \text{Masked Max} \\ \text{Masked Min} \\ \text{Masked Mean} \end{pmatrix} \tag{5}
75
+ $$
76
+
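A compact sketch of how (4) and (5) could be realized in practice: each masked aggregation is rescaled by the identity, amplification (α = 1), and attenuation (α = -1) scalers, and the nine resulting channels are concatenated before the next linear layer. The function names, the concatenation order, and the toy value of δ are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def degree_scaler(deg, alpha, delta):
    """Eq. (4): S(d, alpha) = (log(d + 1) / delta) ** alpha, evaluated per node (d > 0)."""
    return (torch.log(deg + 1.0) / delta) ** alpha

def combine_aggregators(aggs, deg, delta):
    """Eq. (5): tensor product of the {identity, amplify, attenuate} scalers with the
    masked {max, min, mean} aggregations; aggs is a list of three [N, d] tensors."""
    outs = []
    for alpha in (0.0, 1.0, -1.0):                          # no change / amplification / attenuation
        s = degree_scaler(deg, alpha, delta).unsqueeze(-1)  # [N, 1] per-node scaling factor
        outs.extend(s * a for a in aggs)
    return torch.cat(outs, dim=-1)                          # [N, 9 * d]

# Toy usage with three masked aggregations (e.g. the outputs of the previous sketch).
N, d = 4, 8
aggs = [torch.randn(N, d) for _ in range(3)]                # masked max / min / mean
deg = torch.tensor([2.0, 3.0, 1.0, 2.0])                    # node degrees (all > 0)
delta = 1.2                                                 # training-set degree statistic of Eq. (4)
h = combine_aggregators(aggs, deg, delta)
print(h.shape)  # torch.Size([4, 72])
```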
77
+ § 3 EXPERIMENTS
78
+
79
+ We first evaluated the performance of MMA models on four widely used benchmarking datasets (see Appendix A.1) over two tasks using a combination of different masked aggregators. Subsequently, we investigated the performance of the models when the same type of masked aggregator(s) is used. Finally, the results of MMA were also compared with well-known baseline methods in the field: Message Passing Neural Networks (MPNN) [2], Graph Convolutional Networks (GCN) [1], GAT [3], LGCL [8], Graph-BERT [15], PNA [11], LA-GCN [6], and the Adaptive Kernel Graph Neural Network (AKGNN) [16].
80
+
81
+ § 3.1 RESULTS
82
+
83
+ We trained several models using multiple aggregator(s) under two different settings. In Setting 1 (see Table 3), we measured the performance of MMA using a combination of different kinds of aggregators. In Setting 2 (see Table 4), the same type of aggregator(s) was used, with each aggregator having a different trained mask. We used the same training/validation/test settings for a fair performance comparison with other methods. In Appendix A.2, we also present ablation studies.
84
+
85
+ Finally, we compared our best-performing results with well-known baseline methods in the literature. The results are given in Table 1. Our results show improved performance over the compared methods in the majority of cases.
86
+
87
+ Table 1: Benchmarking MMA on Pubmed, Citeseer, Cora and ZINC datasets.
88
+
89
+ | Models | Pubmed | Citeseer | Cora | ZINC |
+ |---|---|---|---|---|
+ | MPNN [2] | 75.60 | 64.00 | 78.00 | 0.288 |
+ | GCN [1] | 79.00 | 70.30 | 81.50 | - |
+ | GAT [3] | 79.00 | 72.50 | 83.00 | - |
+ | LGCL [8] | 79.50 | 73.00 | 83.30 | - |
+ | GRAPH-BERT [15] | 79.30 | 71.20 | 84.30 | - |
+ | PNA [11] | - | - | - | 0.188 |
+ | LA-GCN [6] | - | - | 81.50 | - |
+ | AKGNN [16] | 80.40 | 73.50 | 84.80 | - |
+ | MMA (ours) | 86.00 | 76.30 | 85.80 | 0.1562 |
121
+
122
+ § 4 DISCUSSION AND CONCLUSION
123
+
124
+ In this study, we propose Multi-Mask Aggregators for graph representation learning, with the aim of utilizing aggregators of different or identical types within one learning mechanism. MMA provides a flexible learning method by allowing the integration of different or identical types of aggregator(s), each with its own learnable parameters. Our contributions can be summarized as follows: 1) it provides flexible multi-aggregators through mask-based aggregation; 2) it removes the limitation on the number of heads; 3) it extracts local information with local parameters instead of the global parameters used in MPNNs/PNA; 4) it behaves in-between GAT and MPNN/PNA; 5) it improves performance on node classification and graph regression benchmarks.
125
+
126
+ Finally, as shown in the ablation studies, there is no definite consensus on how many and which aggregators should be used. We therefore expect aggregator design to remain an open direction for further development in graph neural networks.
papers/LOG/LOG 2022/LOG 2022 Conference/kQsniwmGgF5/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,431 @@
1
+ # Well-conditioned Spectral Transforms for Dynamic Graph Representation
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ This work establishes a fully-spectral framework to capture informative long-range temporal interactions in a dynamic system. We connect the spectral transform to the low-rank self-attention mechanisms and investigate its energy-balancing effect and computational efficiency. Based on the observations, we leverage the adaptive power method SVD and global graph framelet convolution to encode time-dependent features and graph structure for continuous-time dynamic graph representation learning. The former serves as an efficient high-order linear self-attention with determined propagation rules, and the latter establishes scalable and transferable geometric characterization for property prediction. Empirically, the proposed model learns well-conditioned hidden representations on a variety of online learning tasks, and it achieves top performance with a reduced number of learnable parameters and faster propagation speed.
12
+
13
+ ## 1 Introduction
14
+
15
+ Dynamic graphs appear in many scenarios, such as pandemic spread [1, 2], social media [3, 4], physics simulations $\left\lbrack {5,6}\right\rbrack$ , and computational biology $\left\lbrack {7,8}\right\rbrack$ . Learning dynamic graph properties, however, is a challenging task when both node attributes and graph structures evolve over time.
16
+
17
+ Many existing dynamic graph representation learning methods start from embedding the sequence of non-Euclidean graph topology to feed into recurrent networks [9-12]. Such a straightforward design assumes a discrete nature of input graphs. Graph snapshots are sliced at a sequence of fixed time steps, leaving the evolution of events on nodes and/or edges unobserved. Later, the memory module $\left\lbrack {4,{13}}\right\rbrack$ establishes a natural generalization of the learning procedure to continuous-time dynamic graphs (CTDGs), which encodes previous states for an event to its latest states. Consequently, a graph slice describes the past dynamics with implicitly encoded long-short term memory on node attributes.
18
+
19
+ Nevertheless, the memory module, e.g., recurrent neural networks [14] or gated recurrent units [15], has trouble tracking the full picture of graph evolution, as it preserves long-term interactions in a highly implicit way. Accessing the encoded message inside the black box becomes extremely hard. Alternatively, the TRANSFORMER [16] enhances long-range memory for sequential data, and it has received tremendous success in language understanding [17, 18] and image processing [19, 20]. In particular, the self-attention mechanism learns pair-wise event similarity scores over the entire range of interest. It retrieves a contextual matrix of full-landscape relationships to preserve the long-term dependency of tokens (or events). However, at the cost of comprehensiveness, the rapid growth of the sequence length can easily escalate the complexity of computation and memory. While the attention operation can be efficiently approximated by some low-rank representation [21-24], it loses expressivity at the same time.
20
+
21
+ This work provides a fully spectral-based solution for learning the representations of long-range CTDGs. First, an efficient spectral transform enhances the memory encoding of continuous events by extracting pairwise nonlinear relationships in the time and feature dimensions. A global spectral graph convolution with fast framelet transforms [25] then characterizes node-wise interactions in a sequence of graphs. The proposed design tackles the two identified problems in learning CTDGs. In particular, we show that the power method singular value decomposition (SVD) is an efficient and effective implementation of the low-rank self-attention scheme. It not only quickly captures the long-term evolving flow of the input events, but also preserves a more even energy distribution in the extracted pivotal components of the temporal observations. Such a design prevents ill-conditioned graph hidden representations, which results in an easier-to-fit smooth decision boundary for network training. In the final layer, the undecimated framelet-based spectral graph transform provides sufficient scalability for graph representation learning via its multi-level representation of the structured data.
22
+
23
+ ![01963ee6-403c-7136-889f-6809797d24a2_1_317_342_1016_523_0.jpg](images/01963ee6-403c-7136-889f-6809797d24a2_1_317_342_1016_523_0.jpg)
24
+
25
+ Figure 1: Illustrative SPEDGNN for learning continuous-time dynamic graphs (CTDGs). (a) A spectral transform with adaptive power method SVD processes the long-range time-dependency of the input to the spectral domain. (b) The continuous embedding is then divided in a message-memory module with enhanced short-term interactions. (c) Finally, a global framelet graph convolution with multi-scale operators forms well-conditioned graph representations for prediction tasks.
26
+
27
+ We investigate the relationship between spectral transforms and feed-forward propagation, and design the Spectral Dynamic Graph Neural Network (SPEDGNN) for efficient and effective dynamic graph representation. The designed network architecture captures temporal features and graph structure in CTDGs in the spectral domain. Through efficient spectral self-attention and multi-scale graph convolution, expressive hidden representations of batched events are embedded with linear complexity (proportional to the number of events). The well-conditioned final embeddings are separable by a smooth decision boundary with less loss of principal information.
28
+
29
+ ## 2 Spectral Transform for Long-range Sequence
30
+
31
+ This section introduces the notion of spectral transform and discusses how it fixes the ill-conditioning problem and its connection to the self-attention mechanism.
32
+
33
+ Definition 1. A spectral transform projects sample observations $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ from an unknown function space to a spectral domain with a (set of) orthonormal basis $\Phi : {X}^{\prime } \mathrel{\text{:=}} {X\Phi }$ . The new representation ${\mathbf{X}}^{\prime }$ summarizes the prior knowledge of observations $\mathbf{X}$ for perfect reconstruction.
34
+
35
+ ### 2.1 Choices of the Spectral Basis
36
+
37
+ The properties of a formulated spectral transform are determined by $\mathbf{\Phi }$ . For instance, the singular vector matrix of $\mathbf{X}$ from QR decomposition or singular value decomposition (SVD) extracts principal components of the function space. Fourier transforms process time-domain signals to the frequency domain to distill local-global oscillations. When $\mathbf{X}$ is a sequence, such a transform summarizes the observations in a new coordinate system to reflect the ground truth of specific properties, such as sparsity or noise separation.
38
+
39
+ To better understand how a spectral transform benefits inferring the true function space, consider an example unitary transform by the orthonormal bases of SVD. Denote the raw signal input as a matrix $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ . It can be factorized as $\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}{\mathbf{V}}^{\top }$ with two orthonormal bases $\mathbf{U} \in {\mathbb{R}}^{N \times N}$ and $\mathbf{V} \in {\mathbb{R}}^{d \times d}$ . The two orthonormal bases span the row and column spaces of $\mathbf{X}$ , and both can project $\mathbf{X}$ to a spectral domain. For instance, the spectral coefficients ${\mathbf{X}}^{\prime } = \mathbf{X}\mathbf{V}$ are the projection of aggregated features under the basis $\mathbf{V}$ .
40
+
41
+ ![01963ee6-403c-7136-889f-6809797d24a2_2_324_345_1009_248_0.jpg](images/01963ee6-403c-7136-889f-6809797d24a2_2_324_345_1009_248_0.jpg)
42
+
43
+ Figure 2: A toy example of binary classification. The 2-dimensional data are sampled from $\mathcal{N}\left( {\mu ,\sigma }\right)$ . The direction and length of $\left\{ {{v}_{1},{v}_{2}}\right\}$ illustrate two eigenvectors and eigenvalues of the feature. Artificial labels are created by a decision boundary with large curvature. Both (b) batch normalized spatial representations and (c) unnormalized spectral coefficients fail to flatten the boundary. In contrast, (d) normalized spectral representations have a closer-to-1 condition number, creating a smooth decision boundary that is easier for a classifier to fit.
44
+
45
+ ### 2.2 Transforming towards Balanced Energy
46
+
47
+ A key motivation for performing the spectral transform on a time-dependent long-range sequence is to amend the highly imbalanced energy distribution of the original feature space. We pay special attention to the cases where the expressivity of latent representations is hurt, i.e., detailed information with small energy in the original feature space is smoothed out. In practice, such small-energy details can be pivotal for distinguishing different entities, and ignoring them not only removes local noise but also eliminates potentially useful messages. For instance, tumor cells generally occupy a small area and carry considerably small energy in a medical image. Smoothing out these features could cause problems in pathology diagnosis.
48
+
49
+ Amending the energy distribution, however, can be tricky to conduct in the features' original domain. Figure 2 demonstrates a two-dimensional toy example. The sample distribution in Figure 2(a) concentrates most of the variance in a certain direction, with eigenvalues of the sample variance $\sigma = \{ 9,2\}$ . An instant normalization in the same domain, such as BatchNorm [26] in Figure 2(b), reshapes the sample distribution. However, the energy is still centralized in ${\mathbf{v}}_{1}$ 's direction. As a result, the two classes (colored in red and green) can only be divided by a decision boundary with a large curvature. Such a boundary is difficult to fit for classifiers such as an MLP-based model, which tends to fit smooth flat curves. Meanwhile, Figure 2(c) illustrates the unnormalized spectral representation of the original data ${\mathbf{X}}^{\prime } \mathrel{\text{:=}} \mathbf{{XV}} = \mathbf{U}\boldsymbol{\Sigma}$ , which projects $\mathbf{X}$ to a new coordinate set by the transformation $\mathbf{V}$ . As shown in Figure 2(d), normalizing the new coordinates in the same spectral domain by $\widetilde{\mathbf{X}} \mathrel{\text{:=}} {\mathbf{X}}^{\prime }\operatorname{diag}\left( {{c}_{1},{c}_{2}}\right) {\mathbf{V}}^{T}$ results in an easy-to-fit flat decision boundary.
50
+
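The effect illustrated in Figure 2 can be reproduced in a few lines: projecting anisotropic samples onto their right singular vectors and rebalancing the energy of each spectral coordinate brings the condition number of the features close to 1. The snippet below is only a schematic of this idea (the choice of the rescaling constants $c_1, c_2$ as the mean singular value is our own), not the paper's exact experiment.

```python
import torch

torch.manual_seed(0)
# Anisotropic 2-d samples: most of the energy lies along the first coordinate.
X = torch.randn(1000, 2, dtype=torch.float64) * torch.tensor([3.0, 1.0], dtype=torch.float64)

def cond(Z):
    """Condition number of the (centered) sample matrix: sigma_max / sigma_min."""
    s = torch.linalg.svdvals(Z - Z.mean(0))
    return (s.max() / s.min()).item()

U, S, Vh = torch.linalg.svd(X, full_matrices=False)
X_spec = X @ Vh.T                  # spectral coefficients X' = X V = U Sigma
X_bal = X_spec / S * S.mean()      # rebalance the energy of the two coordinates
X_tilde = X_bal @ Vh               # back to the original axes: X' diag(c1, c2) V^T

print(cond(X), cond(X_tilde))      # roughly 3 vs. close to 1
```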
51
+ The spectral transform allows balancing the features' energy and, if necessary, truncating local noise at the same time. To circumvent an exact singular value decomposition, we consider an efficient approximation that relies on matrix products of $\mathbf{X}$ . We can regard $\mathbf{U}$ (from $\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}{\mathbf{V}}^{T}$ ) as close to the orthonormal basis $\mathbf{Q}$ (from the QR decomposition $\mathbf{X} = \mathbf{{QR}}$ ). Power method SVD [27] suggests a better approximation to $\mathbf{U}$ , namely the orthonormal basis $\widetilde{\mathbf{Q}}$ from $\mathbf{X}{\left( {\mathbf{X}}^{\top }\mathbf{X}\right) }^{q} = \widetilde{\mathbf{Q}}\widetilde{\mathbf{R}}$ , i.e.,
52
+
53
+ $$
54
+ \mathbf{U} \approx \widetilde{\mathbf{Q}} = \mathbf{X}{\left( {\mathbf{X}}^{\top }\mathbf{X}\right) }^{q}{\widetilde{\mathbf{R}}}^{-1}. \tag{1}
55
+ $$
56
+
57
+ The normalized features $\widetilde{\mathbf{X}}$ are therefore approximated by including a proper diagonal matrix $\mathbf{C}$ in (1), i.e., $\widetilde{\mathbf{X}} \approx \mathbf{X}{\left( {\mathbf{X}}^{\top }\mathbf{X}\right) }^{q}{\widetilde{\mathbf{R}}}^{-1}\mathbf{C}$ . However, it is computationally expensive to directly involve the factor $\widetilde{\mathbf{R}}$ in a neural network, as the QR decomposition requires a Gram-Schmidt algorithm. To reduce the cost, we replace $\widetilde{\mathbf{R}}$ and $\mathbf{C}$ with a learnable scheme. We let
58
+
59
+ $$
60
+ \widetilde{\mathbf{X}} \approx \mathbf{X}{\left( {\mathbf{X}}^{\top }\mathbf{X}\right) }^{q}\mathbf{W}, \tag{2}
61
+ $$
62
+
63
+ where $\mathbf{W}$ is a learnable upper triangular matrix. The resulting approximation to $\widetilde{\mathbf{X}}$ involves only matrix computations and can be propagated through neural networks.
64
+
65
+ Compared to the conventional truncated SVD that ranks the orthonormal basis by singular values, the learnable spectral transform by (2) conducts a data-driven principal component distillation and normalization at the same time. The projection by $\mathbf{W}$ summarizes the principal vectors that describe entity features to a set of spectral coefficients and ranks them adaptively by their importance to the specific application. Such a learning scheme prevents important rare patterns from being removed due to their small energy.
66
+
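A minimal PyTorch sketch of the learnable transform in (2): the power term $(\mathbf{X}^\top\mathbf{X})^q$ is only a $d \times d$ matrix, and the trainable upper-triangular $\mathbf{W}$ is obtained by masking a full weight matrix. The module name, the initialization, and the covariance rescaling by $1/N$ are our own assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class AdaptivePowerSVD(nn.Module):
    """Sketch of Eq. (2): X_tilde ~= X (X^T X)^q W, with W a learnable upper-triangular matrix."""

    def __init__(self, dim, q=1):
        super().__init__()
        self.q = q
        self.weight = nn.Parameter(torch.eye(dim) + 0.01 * torch.randn(dim, dim))
        self.register_buffer("triu_mask", torch.triu(torch.ones(dim, dim)))

    def forward(self, x):
        # x: [N, d] event sequence; the covariance X^T X is only d x d.
        cov = x.T @ x / x.size(0)                  # rescaled by N to keep the powers well-behaved
        powered = torch.linalg.matrix_power(cov, self.q)
        w = self.weight * self.triu_mask           # keep W upper triangular
        return x @ powered @ w                     # cost O(N d^2): linear in the sequence length

# Toy usage: 10,000 events with 16-dimensional features.
layer = AdaptivePowerSVD(dim=16, q=2)
out = layer(torch.randn(10_000, 16))
print(out.shape)  # torch.Size([10000, 16])
```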
67
+ ### 2.3 Connecting Adaptive Spectral Transform to Self-Attention
68
+
69
+ Aside from preserving small-energy rare patterns as other spectral transforms, the adaptive SVD-based spectral transform is also closely connected to linear self-attention mechanisms [21, 23, 28]. For $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ , a self-attention layer reads
70
+
71
+ $$
72
+ {\mathbf{X}}_{\text{attn }} \mathrel{\text{:=}} \left( {{\mathbf{Q}}_{a}{\mathbf{K}}_{a}^{\top }{\mathbf{V}}_{a}}\right) /\sqrt{{d}_{k}}, \tag{3}
73
+ $$
74
+
75
+ $$
76
+ \text{where } {\mathbf{Q}}_{a} \mathrel{\text{:=}} \mathbf{X}{\mathbf{W}}_{Q},\;{\mathbf{K}}_{a} \mathrel{\text{:=}} \mathbf{X}{\mathbf{W}}_{K},\;{\mathbf{V}}_{a} \mathrel{\text{:=}} \mathbf{X}{\mathbf{W}}_{V}.
77
+ $$
78
+
79
+ The three matrices ${\mathbf{Q}}_{a}$ (query), ${\mathbf{K}}_{a}$ (key) and ${\mathbf{V}}_{a}$ (value) are learned projections of identical size $N \times d$ . The learning cost drops significantly when $N \gg d$ , as a smaller number of parameters needs to be approximated. Conventional self-attention restricts the order of the operations strictly from left to right, as the context mapping matrix comes from the normalized and activated softmax $\left( {{\mathbf{Q}}_{a}{\mathbf{K}}_{a}^{\top }/\sqrt{{d}_{k}}}\right) \in {\mathbb{R}}^{N \times N}$ . Instead, linear self-attention removes the activation for efficient computation.
80
+
81
+ To understand the intrinsic connection between linear self-attention in (3) and power method SVD, rewrite ${\mathbf{X}}_{\text{attn }}$ as a function of $\mathbf{X}$ , i.e.,
82
+
83
+ $$
84
+ {\mathbf{X}}_{\text{attn }} = {\mathbf{{XW}}}_{Q}{\mathbf{W}}_{K}^{\top }{\mathbf{X}}^{\top }\mathbf{X}{\mathbf{W}}_{V} = {\mathbf{{XW}}}_{1}{\mathbf{X}}^{\top }\mathbf{X}{\mathbf{W}}_{2} \tag{4}
85
+ $$
86
+
87
+ with ${\mathbf{W}}_{Q}{\mathbf{W}}_{K}^{\top } = {\mathbf{W}}_{1}$ and ${\mathbf{W}}_{V} = {\mathbf{W}}_{2}$ . Compared to (2), a linear self-attention step in (4) is a special implementation that approximates a 1-iteration QR approximation of the SVD basis. To match a $q$ -iteration adaptive power method SVD, a stack of $q$ linear self-attention layers is required. Moreover, both (2) and (4) aggregate row-wise variation and summarize a low-rank covariance matrix of $\mathbf{X}$ with ${\mathbf{X}}^{\top }\mathbf{X}$ . However, (2) provides an efficient concentration on large-mode tokens while truncating out noise. For an extremely long input sequence ${\mathbf{X}}_{N} \in {\mathbb{R}}^{N \times d}\left( {N \gg d}\right)$ , ${\mathbf{X}}_{N}^{\top }{\mathbf{X}}_{N} \in {\mathbb{R}}^{d \times d}$ in (2) completes the main calculation at a significantly smaller cost. This cost-efficient technique is important for scalable learning tasks such as time-series learning, where the length of an input sequence can easily explode.
88
+
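The identity in (4) also dictates the efficient evaluation order: contracting $\mathbf{X}^\top\mathbf{X}$ first keeps every intermediate at most $d \times d$, so the $N \times N$ attention matrix is never materialized. A small numerical check of this regrouping (variable names are ours):

```python
import torch

torch.manual_seed(0)
N, d = 2000, 32
X = torch.randn(N, d, dtype=torch.float64)
W_q, W_k, W_v = (torch.randn(d, d, dtype=torch.float64) for _ in range(3))

# Linear self-attention, Eq. (3) without the softmax: (X W_q)(X W_k)^T (X W_v) / sqrt(d).
naive = (X @ W_q) @ (X @ W_k).T @ (X @ W_v) / d ** 0.5       # materializes an N x N matrix

# Regrouped as in Eq. (4): X W_1 (X^T X) W_2 with W_1 = W_q W_k^T and W_2 = W_v.
W_1, W_2 = W_q @ W_k.T, W_v
efficient = X @ (W_1 @ (X.T @ X) @ W_2) / d ** 0.5           # intermediates are only d x d

print(torch.allclose(naive, efficient))                      # True: same map, no N x N product
```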
89
+ ## 3 Spectral Transforms for Dynamic Graphs
90
+
91
+ In this section, we expand the long-range sequence of interest to an additional dimension of topology and practice the spectral transform on dynamic graphs. We validate the efficiency and effectiveness of the spectral transform framework by dynamic graph representation learning.
92
+
93
+ ### 3.1 Problem Formulation
94
+
95
+ A static undirected graph is denoted by ${\mathcal{G}}_{p} = \left( {{\mathbb{V}}_{p},{\mathbb{E}}_{p},{\mathbf{X}}_{p}}\right)$ with $n = \left| {\mathbb{V}}_{p}\right|$ nodes, where its edge connection is described by an adjacency matrix ${\mathbf{A}}_{p} \in {\mathbb{R}}^{n \times n}$ and the $d$ -dimensional node feature is stored in ${\mathbf{X}}_{p} \in {\mathbb{R}}^{n \times d}$ . A graph convolution finds a hidden representation ${\mathbf{H}}_{p}$ that embeds information about the structure ${\mathbf{A}}_{p}$ and the node feature ${\mathbf{X}}_{p}$ . When ${\mathbf{A}}_{p}$ and/or ${\mathbf{X}}_{p}$ changes with time, ${\mathcal{G}}_{p}$ is called a dynamic graph.
96
+
97
+ Definition 2. Given a sequence of graphs $\mathbb{G} = {\left\{ {\mathcal{G}}_{p}\right\} }_{p = 1}^{P}$ where each ${\mathcal{G}}_{p} = \left( {{\mathbb{V}}_{p},{\mathbb{E}}_{p},{\mathbf{X}}_{p}}\right)$ , dynamic graph representation learning finds the hidden representation ${\mathbf{H}}_{p}$ of each ${\mathcal{G}}_{p}$ where the $i$ th row of ${\mathbf{H}}_{p}$ corresponds to the $i$ th node of ${\mathcal{G}}_{p}$ .
98
+
99
+ Depending on the particular prediction task, ${\mathbf{H}}_{p}$ can be processed for label assignments. For example, link prediction forecasts the pair-wise connection of nodes in a graph, and node classification labels unlabeled nodes. Continuous-time dynamic graphs (CTDGs) are a general and complicated class of dynamic graphs. An arbitrary observation of a CTDG is recorded as a tuple of (event, event type, timestamp). The event recorded at a specific timestamp is described by a feature vector, and the event type can be one of edge addition/deletion or node addition/deletion. Training an adequate model for CTDGs can be challenging. Compared to a static graph, the complete structure of a CTDG is revealed sequentially during training. A powerful design for graph embedding is thus required to interpret the connection between the next graph and historical graph snapshots. In comparison to discrete-time dynamic graphs, the consecutive activity-recording behavior allows CTDGs to capture the event flow of the entire graph so that information loss is minimized.
100
+
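For concreteness, an observation of a CTDG as described above can be stored as a simple record of (event, event type, timestamp); the field names and types below are an illustrative assumption rather than a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional

class EventType(Enum):
    EDGE_ADD = auto()
    EDGE_DELETE = auto()
    NODE_ADD = auto()
    NODE_DELETE = auto()

@dataclass
class CTDGEvent:
    src: int                    # node initiating the event
    dst: Optional[int]          # counterpart node (None for node-level events)
    t: float                    # continuous timestamp
    feature: List[float]        # event feature vector
    kind: EventType             # edge/node addition or deletion

# Example: a user-item interaction edge observed at t = 17.3.
e = CTDGEvent(src=4, dst=129, t=17.3, feature=[0.2, 0.7, 0.1], kind=EventType.EDGE_ADD)
```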
101
+ To this end, we propose to first employ an adaptive temporal spectral transform (as introduced in Section 2) to encode the long-range evolution of the graph dynamics into normalized spectral coefficients $\widetilde{\mathbf{X}}$ with minimum loss of energy. The short-term interactions are also enhanced by employing a message-memory module [4, 13] (see Appendix A). Next, the encoded event sequence is partitioned evenly into a sequence of subgraphs of interacting nodes, where each node's attributes encode its present and recent status. Finally, the graph topology is embedded by another spectral-based graph network, i.e., a global spectral graph convolution, to find a well-conditioned hidden representation for the final prediction task. We now explain the two spectral-based transforms in detail.
102
+
103
+ ### 3.2 Adaptive Temporal Spectral Transform
104
+
105
+ In the first step, we embed the long-range time-dependency with adaptive power method SVD as a particular implementation of the temporal spectral transform. As briefed in Section 2, it takes a similar role as the traditional self-attention in feature extraction but is equipped with additional scalability and reliability. We focus on the transform and ignore the adaptive normalization for conciseness.
106
+
107
+ Consider a sequence of events $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ . We look for its low-dimensional projection ${\mathbf{X}}^{\prime }$ in the spectral domain that i) summarizes the principal patterns of $\mathbf{X}$ , and ii) is immune to minor disturbances to preserve expressivity in the projected representation. Analogous to self-attentions, the spectral encoder assigns a matrix of similarity scores to $\mathbf{X}$ . However, the latter follows an explicit update rule to establish a traceable learning process. The main patterns from both event attributes and time dimensions are summarized in spectral coefficients ${\mathbf{X}}^{\prime }$ (cyan box in Figure 1). Below we explain the two interpretations of such transforms.
108
+
109
+ Interpretation 1. Spectral coefficients ${X}^{\prime } \approx {XV}$ extract information in the feature dimension. By the definition $\mathbf{X} \mathrel{\text{:=}} \mathbf{U}\boldsymbol{\Sigma}{\mathbf{V}}^{\top }$ , SVD stores the factorized features (in columns of $\mathbf{V}$ ) and temporal shifts (in rows of $\mathbf{U}$ ). For low-rank or noisy input, truncated SVD [29] extracts the stable main patterns by ${\mathbf{X}}^{\prime } \approx \mathbf{{XV}} \in {\mathbb{R}}^{N \times {d}^{\prime }}\left( {d > {d}^{\prime }}\right)$ . Specifically, the transformed ${\mathbf{X}}^{\prime }$ is projected by $\mathbf{V}$ to a new space with the most effective feature representation. For instance, the $j$ th feature of the $i$ th transformed event, ${\mathbf{X}}_{ij}^{\prime } = {\mathbf{X}}_{i, : }{\mathbf{V}}_{ : , j}$ , condenses ${\mathbf{X}}_{i}$ into a coefficient along the $j$ th factorized feature. Similar mappings by the $1,\ldots ,{d}^{\prime }$ th factorizations of $\mathbf{V}$ send the raw ${\mathbf{X}}_{i}$ to a new space that is the most representative for the features.
110
+
111
+ Interpretation 2. Spectral coefficients ${\mathbf{X}}^{\prime } \approx \mathbf{X}\left( {{\mathbf{X}}^{\top }\mathbf{X}}\right) {\mathbf{R}}^{-1}$ aggregate information along the time dimension. We focus on the simplest case of iteration $q = 1$ for illustration purposes, i.e., ${\mathbf{X}}^{\top }\mathbf{X}{\mathbf{R}}^{-1}$ is a one-step approximation of $\mathbf{V}$ . For instance, the $j$ th element in the $j$ th row of ${\mathbf{X}}^{\top }\mathbf{X}$ is computed from ${\mathbf{X}}_{ : , j}$ , which covers the whole time interval. The resulting ${\mathbf{X}}^{\top }\mathbf{X} \in {\mathbb{R}}^{d \times d}$ is a covariance matrix that summarizes the column-wise linear relationships of $\mathbf{X}$ , i.e., the change of attributes over time. Transforming $\mathbf{X}$ by this similarity matrix thus establishes a new representation carrying the all-time temporal correlation of attributes. The same interpretation extends easily to $q > 1$ , where the approximation takes linear adjustments via ${\mathbf{R}}^{-1}$ and concentrates high energies on more expressive modes. However, the fundamental form of the covariance matrix ${\mathbf{X}}^{\top }\mathbf{X}$ stays unchanged.
112
+
113
+ ### 3.3 Global Framelet Graph Transforms
114
+
115
+ To allow scalable graph representation learning, a global spectral graph convolution is employed for extracting multi-level and multi-scale features. The vanilla framelet graph convolution (UFGCONV; Zheng et al. [25]) leverages fast framelet decomposition and reconstruction for efficient static graph topology embedding (see Appendix B). Working in the framelet domain has been proven robust to local perturbations and circumvents over-smoothing through Dirichlet energy preservation [30, 31]. We hereby propose a global version of framelet transforms to perform multi-scale robust graph representation learning for CTDGs. Formally, the graph framelet convolution is defined, in a similar manner to any typical spectral graph convolution layer, as ${\mathbf{\theta }}_{p} \star {\mathbf{X}}_{p} = {\mathcal{V}}_{p}\operatorname{diag}\left( {\mathbf{\theta }}_{p}\right) {\mathcal{W}}_{p}{\mathbf{X}}_{p}^{\flat }$ , where ${\mathbf{X}}_{p}^{\flat }$ denotes the embedded input and ${\mathbf{\theta }}_{p}$ are the learnable parameters with respect to ${\mathbf{X}}_{p}^{\flat }$ . Here ${\mathcal{W}}_{p}$ and ${\mathcal{V}}_{p}$ are the decomposition and reconstruction operators that transform the input graph signal ${\mathbf{X}}_{p}^{\flat }$ from and to the vertex domain.
116
+
117
+ Algorithm 1: SPEDGNN: Spectral Dynamic Graph Neural Network
118
+
119
+ ---
120
+
121
+ Input : raw sequential data $\mathbf{X}$
122
+
123
+ Output:label prediction $Y$
124
+
125
+ Initialization: global $\Theta$
126
+
127
+ Adaptive power method SVD $\widetilde{\mathbf{X}} \leftarrow \mathbf{X}{\left( {\mathbf{X}}^{\top }\mathbf{X}\right) }^{q}{\mathbf{R}}^{-1}$ ;
128
+
129
+ for batch $p \leftarrow 1$ to $M - 2$ do
130
+
131
+ ${\mathbf{h}}_{p}\left\lbrack t\right\rbrack \leftarrow \mathbf{{msg}}\left( {{e}_{i}\left\lbrack t\right\rbrack }\right) \parallel \mathbf{{mem}}\left( {{e}_{i}\left\lbrack { : t}\right\rbrack }\right) ;$
132
+
133
+ ${\mathcal{G}}_{p} \leftarrow \left( {{\mathbf{A}}_{p},{\mathbf{X}}_{p} \leftarrow \mathbf{{FC}}\left( {{\mathbf{h}}_{p}\left\lbrack t\right\rbrack }\right) }\right) ;$
134
+
135
+ ${H}_{p} \leftarrow \operatorname{UFGConv}\left( {{\mathbf{A}}_{p},{\mathbf{X}}_{p},{\theta }_{p}}\right)$ ;
136
+
137
+ ${\mathbf{Y}}_{p} \leftarrow \operatorname{Predictor}\left( {H}_{p}\right)$ ;
138
+
139
+ ${\Theta }_{p} \leftarrow {\theta }_{p};$
140
+
141
+ ${\mathbf{Y}}_{\mathrm{p},\text{val }},{\mathbf{Y}}_{\mathrm{p},\text{test }} \leftarrow \operatorname{Predictor}\left( {H}_{p + 1}\right) ,\operatorname{Predictor}\left( {H}_{p + 2}\right)$ ;
142
+
143
+ Update: score $\left( {\mathbf{Y}}_{\text{val }}\right)$ , score $\left( {\mathbf{Y}}_{\text{test }}\right)$ .
144
+
145
+ end
146
+
147
+ ---
148
+
149
+ Different from a set of independent static graphs, dynamic graphs are captured on an evolving timeline. Therefore, we preserve the intra-connections of the graph sequence in a global learnable variable $\Theta$ . At time $t$ , we initialize the framelet coefficients $\theta {\left\lbrack t\right\rbrack }^{\left( 0\right) }$ with their most recent best estimates before $t$ , i.e., $\theta {\left\lbrack : t\right\rbrack }^{\left( n\right) }$ , from $\Theta$ . For instance, the initial $\theta$ with respect to node $p$ reads ${\theta }_{p}{\left\lbrack t\right\rbrack }^{\left( 0\right) } = {\Theta }_{p}$ . Figure 1 demonstrates the update procedure of the global framelet transform with a sample global graph of 5 nodes. Suppose the subgraph at batch $t$ contains the first 3 of the 5 nodes. A UFGCONV trains $\theta \left\lbrack t\right\rbrack$ to represent these three nodes. The parameter values are initialized with the best estimates of $\left\{ {{\Theta }_{1},{\Theta }_{2},{\Theta }_{3}}\right\}$ recorded before $t$ . The optimized model is deployed for further prediction tasks. Meanwhile, $\Theta$ updates its first three parameters with $\left\{ {{\theta }_{1}{\left\lbrack t\right\rbrack }^{\left( n\right) },{\theta }_{2}{\left\lbrack t\right\rbrack }^{\left( n\right) },{\theta }_{3}{\left\lbrack t\right\rbrack }^{\left( n\right) }}\right\}$ .
150
+
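The bookkeeping of the global variable Θ described above can be sketched as a small parameter store: before fitting batch t, the framelet coefficients of the nodes present in the batch are initialized from their latest estimates in Θ, and the optimized values are written back afterwards. The class below is a schematic of this update rule under our own naming, not the released code.

```python
import torch

class GlobalThetaStore:
    """Keeps the most recent per-node framelet coefficients Theta across batches."""

    def __init__(self, num_nodes, dim):
        self.theta = torch.zeros(num_nodes, dim)      # one coefficient vector per global node

    def init_batch(self, node_ids):
        # theta[t]^(0): start from the best estimates recorded before time t.
        return self.theta[node_ids].clone().requires_grad_(True)

    def update(self, node_ids, theta_batch):
        # Write the optimized theta[t]^(n) of the batch nodes back into Theta.
        with torch.no_grad():
            self.theta[node_ids] = theta_batch.detach()

# Example: a global graph with 5 nodes; the current batch covers nodes {0, 1, 2}.
store = GlobalThetaStore(num_nodes=5, dim=8)
batch_nodes = torch.tensor([0, 1, 2])
theta_t = store.init_batch(batch_nodes)               # initial coefficients fed to UFGConv
# ... optimize theta_t on the batch subgraph ...
store.update(batch_nodes, theta_t)
```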
151
+ The workflow of the fully spectral learning scheme is outlined in Figure 1. A temporal spectral transform first processes the raw input data into the spectral domain, encoding long-range time dependency. With the adaptive power method SVD, a group of stable principal patterns can be extracted, which functions similarly to a stack of efficient linear self-attention layers. Next, a message-memory module enhances the short-term interactions of events and generates a comprehensive node representation of batched subgraphs (Appendix A). The topology of the subgraphs records the interactions among node entities, which we learn with a global graph framelet convolution.
152
+
153
+ ### 3.4 Main Algorithm
154
+
155
+ The workflow of SPEDGNN is summarized in Algorithm 1. Based on this, we estimate that the main algorithm has a computational complexity of $\mathcal{O}\left( {{Nd}\log \left( d\right) }\right)$ , linear in the number of events $N$ , which shows that our method is efficient in both time and space (see Appendix C).
156
+
157
+ ## 4 Numerical Examples
158
+
159
+ We carry out experiments on three bipartite graph datasets (Wikipedia, Reddit, and MOOC)[13, 32] for link prediction and node classification tasks [33]. Both transductive and inductive settings are examined in link predictions. We leave implementation details in Appendix E.
160
+
161
+ A fair comparison is made with JODIE [13], DYREP [34], and TGN [4]. Classic methods (e.g., TGAT and DEEPWALK) that significantly underperform the baseline methods are excluded. For model training and evaluation, we assume the interactions of a graph are given until the last timestamp in batch $t$ and make predictions on the timestamps of batches $t + 1$ and $t + 2$ , where predictions on the former provide the validation scores and the latter the test scores. Note that our training follows PyTorch Geometric [35] and adopts a stricter data-access criterion than TGN, where the latter has access to all previous data when loading node neighbors, including those in the same
162
+
163
+ Table 1: Performance of link prediction over 10 repetitions.
164
+
165
+ <table><tr><td rowspan="2" colspan="2">Model</td><td rowspan="2">#parameters</td><td colspan="2">Wikipedia</td><td colspan="2">Reddit</td><td colspan="2">MOOC</td></tr><tr><td>precision</td><td>ROC-AUC</td><td>precision</td><td>ROC-AUC</td><td>precision</td><td>ROC-AUC</td></tr><tr><td rowspan="6">transductive</td><td>DyREP</td><td>${920} \times {10}^{3}$</td><td>${94.67} \pm {0.25}$</td><td>${94.26} \pm {0.24}$</td><td>${96.51} \pm {0.59}$</td><td>${96.64} \pm {0.48}$</td><td>${79.84} \pm {0.38}$</td><td>${81.92} \pm {0.21}$</td></tr><tr><td>JODIE-RNN</td><td>${209} \times {10}^{3}$</td><td>93.94±2.50</td><td>${94.44} \pm {1.42}$</td><td>97.12±0.57</td><td>${}_{{97.59} \pm {0.27}}$</td><td>${76.68} \pm {0.02}$</td><td>${81.40} \pm {0.02}$</td></tr><tr><td>JODIE-GRU</td><td>${324} \times {10}^{3}$</td><td>96.38±0.50</td><td>96.75±0.19</td><td>${96.84} \pm {0.39}$</td><td>97.33±0.25</td><td>${80.29} \pm {0.09}$</td><td>${84.88} \pm {0.30}$</td></tr><tr><td>TGN-GRU</td><td>${1.217} \times {10}^{3}$</td><td>${96.73} \pm {0.09}$</td><td>${96.45} \pm {0.11}$</td><td>${98.63} \pm {0.06}$</td><td>${98.61} \pm {0.03}$</td><td>${}_{{83.18} \pm {0.10}}$</td><td>${}_{{83.20} \pm {0.35}}$</td></tr><tr><td>SPEDGNN-MLP (ours)</td><td>${170} \times {10}^{3}$</td><td>97.02±0.06</td><td>96.51 $\pm {0.08}$</td><td>${98.19} \pm {0.05}$</td><td>${98.15} \pm {0.06}$</td><td>${82.40} \pm {0.24}$</td><td>${}_{{85.55} \pm {0.17}}$</td></tr><tr><td>SPEDGNN-GRU (ours)</td><td>${376} \times {10}^{3}$</td><td>${97.44} \pm {0.05}$</td><td>${}_{{97.15} \pm {0.06}}$</td><td>${98.69} \pm {0.09}$</td><td>98.66±0.12</td><td>${84.50} \pm {0.10}$</td><td>${86.88} \pm {0.09}$</td></tr><tr><td rowspan="6">inductive</td><td>DYREP</td><td>${920} \times {10}^{3}$</td><td>${92.09} \pm {0.28}$</td><td>${91.22} \pm {0.26}$</td><td>${96.07} \pm {0.34}$</td><td>${96.03} \pm {0.28}$</td><td>79.64±0.12</td><td>${82.34} \pm {0.32}$</td></tr><tr><td>JODIE-RNN</td><td>${209} \times {10}^{3}$</td><td>${92.92} \pm {1.07}$</td><td>${92.56} \pm {0.87}$</td><td>93.94±1.53</td><td>95.08±0.70</td><td>${77.17} \pm {0.02}$</td><td>${81.77} \pm {0.01}$</td></tr><tr><td>JODIE-GRU</td><td>${324} \times {10}^{3}$</td><td>${94.93} \pm {0.15}$</td><td>95.08±0.70</td><td>${92.90} \pm {0.03}$</td><td>${95.14} \pm {0.07}$</td><td>77.82±0.17</td><td>${}_{{82.90} \pm {0.60}}$</td></tr><tr><td>TGN-GRU</td><td>${1.217} \times {10}^{3}$</td><td>${94.37} \pm {0.23}$</td><td>93.83±0.27</td><td>97.38±0.07</td><td>97.33±0.11</td><td>81.75±0.24</td><td>${82.83} \pm {0.18}$</td></tr><tr><td>SPEDGNN-MLP (ours)</td><td>${170} \times {10}^{3}$</td><td>${94.27} \pm {0.05}$</td><td>${93.28} \pm {0.05}$</td><td>${97.49} \pm {0.01}$</td><td>97.34±0.02</td><td>${82.54} \pm {0.08}$</td><td>${85.23} \pm {0.09}$</td></tr><tr><td>SPEDGNN-GRU (ours)</td><td>${376} \times {10}^{3}$</td><td>${96.60} \pm {0.01}$</td><td>${95.70}_{\pm {0.02}}$</td><td>${97.47} \pm {0.05}$</td><td>${97.10} \pm {0.09}$</td><td>${82.35} \pm {0.06}$</td><td>${83.67} \pm {0.06}$</td></tr></table>
166
+
167
+ $\dagger$ The top three are highlighted by First, Second, Third.
168
+
169
+ test batch. Such an operation reveals the true connections to predict, and the test scores of TGN reported by Rossi et al. [4] are higher than in this paper.
170
+
171
+ Prediction Performance. Table 1 reports the performance on the link prediction tasks. SPEDGNN consistently outperforms JODIE and DYREP with RNNs, and achieves at least comparable performance to JODIE and TGN-GRU with small volatility. It is noteworthy that JODIE-GRU outperforms the original JODIE-RNN by Kumar et al. [13]. The performance gain of GRU over RNN explains to some extent the occasional outperformance of TGN over SPEDGNN-MLP, not to mention that an MLP is simpler than any recurrent unit. This statement is confirmed by the performance of SPEDGNN-GRU. When the GRU module is employed in the memory layer, the highest performance score is almost always observed across different datasets, learning tasks, and evaluation metrics.
172
+
173
+ In addition to the transductive and inductive link prediction tasks, we also conduct node classification. The model performance is evaluated by the average ROC-AUC scores, which better fit the extremely imbalanced nature of node classes. The results reported in Table 2 confirm that SPEDGNN outperforms all baselines, especially with the GRU module.
174
+
175
+ Table 2: ROC-AUC of node classification
176
+
177
+ <table><tr><td>Model</td><td>Wikipedia</td><td>Reddit</td><td>MOOC</td></tr><tr><td>DyREP</td><td>${84.59} \pm {2.21}$</td><td>${62.91} \pm {2.40}$</td><td>${69.86} \pm {0.02}$</td></tr><tr><td>JODIE-RNN</td><td>${}_{{85.38} \pm {0.08}}$</td><td>${61.68} \pm {0.01}$</td><td>${66.82} \pm {0.05}$</td></tr><tr><td>JODIE-GRU</td><td>${}_{{87.90} \pm {0.09}}$</td><td>${64.30} \pm {0.21}$</td><td>${70.23} \pm {0.09}$</td></tr><tr><td>TGN-GRU</td><td>${88.95} \pm {0.07}$</td><td>${61.49} \pm {0.01}$</td><td>${}_{{70.32} \pm {0.13}}$</td></tr><tr><td>SPEDGNN-MLP (ours)</td><td>${88.37} \pm {0.03}$</td><td>${64.94} \pm {0.07}$</td><td>${69.52} \pm {0.08}$</td></tr><tr><td>SPEDGNN-GRU (ours)</td><td>${90.32} \pm {0.05}$</td><td>${}_{{65.28} \pm {0.05}}$</td><td>${71.08} \pm {0.02}$</td></tr></table>
178
+
179
+ Table 3: Training speed for link prediction
180
+
181
+ <table><tr><td>Model</td><td>Wikipedia</td><td>Reddit</td><td>MOOC</td></tr><tr><td>DyREP</td><td>${20.1}\mathrm{\;s} \pm {0.6}\mathrm{\;s}$</td><td>139.3s $\pm {0.1}$ s</td><td>${78.34}\mathrm{\;s} \pm {0.6}\mathrm{\;s}$</td></tr><tr><td>JODIE-RNN</td><td>${17.4s} \pm {2.0s}$</td><td>${121.8}\mathrm{\;s} \pm {0.3}\mathrm{\;s}$</td><td>${62.64s} \pm {0.1s}$</td></tr><tr><td>JODIE-GRU</td><td>${16.9}\mathrm{\;s} \pm {1.1}\mathrm{\;s}$</td><td>${131.6}\mathrm{\;s} \pm {1.5}\mathrm{\;s}$</td><td>${58.82}\mathrm{\;s} \pm {2.2}\mathrm{\;s}$</td></tr><tr><td>TGN-GRU</td><td>${24.9}\mathrm{\;s} \pm {0.3}\mathrm{\;s}$</td><td>${128.1s} \pm {2.2s}$</td><td>${78.11}\mathrm{\;s} \pm {0.7}\mathrm{\;s}$</td></tr><tr><td>SPEDGNN-MLP (ours)</td><td>${9.87s} \pm {0.1s}$</td><td>${63.3}\mathrm{\;s} \pm {1.1}\mathrm{\;s}$</td><td>$\mathbf{{38.41s}} \pm {0.5}\mathrm{\;s}$</td></tr><tr><td>SPEDGNN-GRU (ours)</td><td>${12.5}\mathrm{\;s} \pm {0.3}\mathrm{\;s}$</td><td>${83.6}\mathrm{\;s} \pm {0.1}\mathrm{\;s}$</td><td>${49.20}\mathrm{\;s} \pm {0.1}\mathrm{\;s}$</td></tr></table>
182
+
183
+ Computational Efficiency. Table 3 evaluates model efficiency by the training time per epoch. Compared to the baseline models, the per-epoch training time of SPEDGNN is shorter, about 50% faster on the largest dataset, Reddit. This confirms SPEDGNN's computational advantage on long sequences analysed in Section 2. In contrast, the comparable performance of TGN-GRU is achieved at the cost of doubling the training time to fit a model with seven times more learnable parameters. On the other hand, both variants of JODIE require a significantly longer training time than SPEDGNN, not to mention that their performance does not consistently stay at the top tier.
184
+
185
+ Well-conditioned Spectral Node Embedding. We validate the energy balancing effect of the proposed SPEDGNN by investigating the distribution of hidden embedding's eigenvalues of different models. Recall that in Section 2 we demonstrated with a 2-dimensional toy example that the normalized spectral transform projects input features to well-conditioned representations, which requires a smoother decision boundary that is easier to fit by a classifier. For a higher dimensional feature representation, we describe the smoothness of the decision boundary by the decay of the associated condition number ${\lambda }_{i}/{\lambda }_{\min }$ , or the eigenvalues ${\lambda }_{i}$ .
186
+
187
+ We made the comparison on the optimized hidden representations of the test samples in Wikipedia. The distribution and total variance of the condition numbers are visualized in Figure 3 and Figure 4, respectively. According to Figure 3, JODIE and TGN concentrate most of the variance in the first few directions. The eigenvalues of the associated hidden representations decrease drastically after the first 3 or 4 epochs. Such a fast reduction of condition numbers indicates that the analyzed hidden features are highly correlated, which gives rise to the concentration of the variance of the feature space on the first few principal components. Such concentrated feature energy is not favored, as it forces the classifier to fit a rough decision boundary. In contrast, SPEDGNN finds a more separable hidden representation of the test samples with slowly decaying singular values. As shown in Figure 4, the embedding by SPEDGNN consistently disperses the total variation across a larger number of vectors, while JODIE and TGN pick a few features to carry most of the variation after the first few epochs.
188
+
189
+ ![01963ee6-403c-7136-889f-6809797d24a2_7_332_357_335_193_0.jpg](images/01963ee6-403c-7136-889f-6809797d24a2_7_332_357_335_193_0.jpg)
190
+
191
+ Figure 3: The distribution of the largest 25 singular values of the hidden representation by SPEDGNN, TGN, and JODIE.
192
+
193
+ ![01963ee6-403c-7136-889f-6809797d24a2_7_698_354_616_195_0.jpg](images/01963ee6-403c-7136-889f-6809797d24a2_7_698_354_616_195_0.jpg)
194
+
195
+ Figure 4: The number of singular vectors that provides ${50}\%$ (left) or ${90}\%$ (right) of total variance by SPEDGNN, TGN, and JODIE. SPEDGNN constantly includes more vectors to achieve the same level of total variance.
196
+
197
+ ## 5 Related work
198
+
199
+ ### 5.1 Efficient Self-Attention
200
+
201
+ The transformer is well-known for its powerful learning ability [16, 36-38]. However, the self-attention mechanism at the core of a transformer framework requires quadratic time and memory complexity, which hinders the model's scalability. A handful of recent works discuss potential improvements in the efficiency of model memory or computational cost when the input dimension is of a fixed size that is considerably large.
202
+
203
+ The prominent efficient transformer methods fall into three directions. First, prior knowledge compresses or distills the self-attention architecture into a sparse attention matrix by pre-defining strided convolutions $\left\lbrack {{39},{40}}\right\rbrack$ or assuming patchwise patterns $\left\lbrack {{19},{41}}\right\rbrack$ . Some recent studies also consider replacing fixed patterns with a learnable scheme that efficiently identifies chunks or clusters [42-44]. The data-driven learning procedure introduces extra flexibility to the division of the patches, blocks, or receptive fields, but the core idea of attention localization remains.
204
+
205
+ The second approach simultaneously accesses multiple tokens through a global memory module. The target is to distill the input sequence with a limited number of inducing points (or memory) $\left\lbrack {{45},{46}}\right\rbrack$ . Compared to the first approach of patching input tokens, inducing points break down the strict concept of token entities and make parameterizations on the global memory of token mixers.
206
+
207
+ The third emerging technique avoids explicitly computing the full contextual matrix of the self-attention mechanism through kernelization [28, 47] or low-rank approximation [21-23]. The projection is usually conducted on the lengthy sequence dimension, ignoring the chronological order of the sequence when computing attention scores. However, the global view obtained by compression helps the attention mechanism manage the overall picture of the sequence on top of token-wise correlations. As investigated by a recent study [48], substituting matrix decomposition for the self-attention mechanism is critical for learning the global context.
208
+
209
+ ### 5.2 Graph Structure Embedding
210
+
211
+ GNNs have seen a surge in interest and popularity for dealing with irregular graph-structured data that traditional deep learning methods such as CNNs fail to manage. Common to most GNNs and their variants is graph embedding through the aggregation of neighbor nodes in a message-passing fashion [49-51]. As a key ingredient for topology embedding, graph convolutions fall into spatial methods and spectral methods, operating on the node space $\left\lbrack {{52},{53}}\right\rbrack$ or on a pseudo-coordinate system that is mapped from nodes through some transform (typically Fourier) [54].
212
+
213
+ Due to the intuitive characteristics of spatial-based methods, which directly generalize CNNs to graph data with convolution on neighbors, most GNNs fall into the category of spatial methods $\left\lbrack {{53},{55} - {62}}\right\rbrack$ . Many other spatial methods broadly follow the message passing scheme with different neighborhood aggregation strategies, but they inherently lack expressivity $\left\lbrack {{60},{63},{64}}\right\rbrack$ .
214
+
215
+ In contrast, spectral-based graph convolutions [25, 54, 65-72] convert the raw signal or features in the vertex domain into the frequency domain. Spectral-based methods have been proved to have a solid mathematical foundation in graph signal processing [73], and their built-in multi-scale or multi-resolution views push them towards a more scalable solution for graph embedding. Versatile Fourier $\left\lbrack {{65},{66},{74}}\right\rbrack$ , wavelet $\left\lbrack {68}\right\rbrack$ and framelet $\left\lbrack {25}\right\rbrack$ transforms have also shown their capabilities in graph representation learning. Of these transforms, the Fourier transform is one of the most popular, and the work in [75] gives a detailed review of how Fourier transforms enhance neural networks. In addition, with fast transforms available, a big concern related to efficiency is well resolved.
216
+
217
+ ### 5.3 Temporal Encoding of Dynamic Graphs
218
+
219
+ Recurrent neural networks (RNNs) are considered exceptionally successful for sequential data modelling, such as text, video, and speech [76-78]. In particular, Long Short Term Memory (LSTM) [79] and the Gated Recurrent Unit (GRU) [15] have gained great popularity in applications. Compared to the vanilla RNN, they leverage a gate system to extract memory information, so that memorizing long-range dependencies of sequential data becomes possible. Later, the Transformer network [16] designed an encoder-decoder architecture with the self-attention mechanism, allowing parallel processing of sequential tokens. The self-attention mechanism has achieved state-of-the-art performance across a wide range of NLP tasks [16, 33] and even some image tasks [20, 80].
220
+
221
+ For dynamic GNNs, it is critical to consolidate the features along the temporal dimension. Dynamic graphs come in two types, discrete and continuous, according to whether exact temporal information is available [81]. Recent advances and successes on static graphs encourage researchers and enable further exploration in the direction of dynamic graphs. Nevertheless, only recently were several approaches $\left\lbrack {{34},{82} - {84}}\right\rbrack$ proposed, owing to the challenges of modeling the temporal dynamics. In general, a dynamic graph neural network can be thought of as a combination of static GNNs and time series models, which typically come in the form of an RNN [85-87]. The first DGNN was introduced by Seo et al. [85] as a discrete DGNN, and Know-Evolve [88] was the first continuous model. JODIE [13] employed a coupled RNN model to learn user/item embeddings. The work in [89] learns node representations through two joint self-attentions along both the graph neighborhood and temporal dynamics dimensions. The work in [90] was the first to use an RNN to regulate the GCN model, that is, to adapt the GCN model along the temporal dimension at every time step rather than feeding the node embeddings learned from GCNs into an RNN. TGAT [91] is notable as the first to consider time-feature interactions. Then Rossi et al. [4] presented a more generic framework for any dynamic graph represented as a sequence of timed events, adding a memory module compared to [91] to enable short-term memory enhancement.
222
+
223
+ ## 6 Discussion
224
+
225
+ This work analyzes the versatile spectral transform for capturing the evolution of long-range time series as well as graph topology. We investigate a particular dynamic system, continuous-time dynamic graphs (CTDGs), to find robust representations. In particular, we implement iterative SVD approximations to encode the long-range feature evolution of the dynamic graph events, which plays a similar role to multiple layers of a low-rank self-attention mechanism. The proposed transform has linear complexity $\mathcal{O}\left( {{Nd}\log \left( d\right) }\right)$ for a CTDG with $N$ events of $d$ dimensions. The short-term memory in learning is enhanced for the dynamic events by a learnable scheme, such as an MLP or GRU. A multi-level and multi-scale fast transform of global spectral graph convolution is then employed for topological embedding, which allows sufficient scalability and transferability in learning dynamic graph representations. The final event embeddings are well-conditioned and the algorithm requires fewer computational resources. The proposed SPEDGNN shows competitive performance on real dynamic graph prediction tasks.
226
+
227
+ ## References
228
+
229
230
+
231
+ [1] George Panagopoulos, Giannis Nikolentzos, and Michalis Vazirgiannis. Transfer graph neural networks for pandemic forecasting. In AAAI Conference on Artificial Intelligence, 2021. 1
232
+
233
+ [2] Cornelius Fritz, Emilio Dorigatti, and David Rügamer. Combining graph neural networks and spatio-temporal disease models to improve the prediction of weekly covid-19 cases in germany. Scientific Reports, 12(1):1-18, 2022. 1
234
+
235
+ [3] Songgaojun Deng, Huzefa Rangwala, and Yue Ning. Learning dynamic context graphs for predicting social events. In the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1007-1016, 2019. 1
236
+
237
+ [4] Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael Bronstein. Temporal graph networks for deep learning on dynamic graphs. In ICML 2020 Workshop on Graph Representation Learning, 2020. 1, 5, 6, 7, 9, 15, 17
238
+
239
+ [5] Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter W. Battaglia. Learning to simulate complex physics with graph networks. In International Conference on Machine Learning, 2020. 1
240
+
241
+ [6] Robin Walters, Jinxi Li, and Rose Yu. Trajectory prediction using equivariant continuous convolution. In International Conference on Learning Representations, 2020. 1
242
+
243
+ [7] P Gainza, F Sverrisson, F Monti, E Rodolà, MM Bronstein, and BE Correia. Deciphering interaction fingerprints from protein molecular surfaces. Nature Methods, 17:184-192, 2019. 1
244
+
245
+ [8] Jingxuan Zhu, Juexin Wang, Weiwei Han, and Dong Xu. Neural relational inference to learn long-range allosteric interactions in proteins from molecular dynamics simulations. Nature Communications, 13(1):1-16, 2022. 1
246
+
247
+ [9] SHI Xingjian, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems, pages 802-810, 2015. 1
248
+
249
+ [10] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations, 2016.
250
+
251
+ [11] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International Conference on Machine Learning, 2016.
252
+
253
+ [12] Arman Hasanzadeh, Ehsan Hajiramezanali, Krishna Narayanan, Nick Duffield, Mingyuan Zhou, and Xiaoning Qian. Variational graph recurrent neural networks. In Advances in Neural Information Processing Systems, 2019. 1
254
+
255
+ [13] Srijan Kumar, Xikun Zhang, and Jure Leskovec. Predicting dynamic embedding trajectory in temporal interaction networks. In KDD, pages 1269-1278, 2019. 1, 5, 6, 7, 9, 15, 16, 17
256
+
257
+ [14] Jeffrey L Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990. 1
258
+
259
+ [15] Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. In SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, 2014. 1,9
260
+
261
+ [16] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008, 2017. 1, 8, 9
262
+
263
+ [17] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (1), 2019. 1
264
+
265
+ [18] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67, 2020. 1
266
+
267
+ [19] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In International Conference on Machine Learning, pages 4055-4064. PMLR, 2018. 1, 8
268
+
270
+
271
+ [20] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. 1, 9
272
+
273
+ [21] Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv:2006.04768, 2020. 1, 4, 8
274
+
275
+ [22] Zhuoran Shen, Mingyuan Zhang, Haiyu Zhao, Shuai Yi, and Hongsheng Li. Efficient attention: Attention with linear complexities. In the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3531-3539, 2021.
276
+
277
+ [23] Shuhao Cao. Choose a transformer: Fourier or galerkin. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021. 4, 8
278
+
279
+ [24] Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, and Lingpeng Kong. Random feature attention. In International Conference on Learning Representations, 2021. 1
280
+
281
+ [25] Xuebin Zheng, Bingxin Zhou, Junbin Gao, Yu Guang Wang, Pietro Lio, Ming Li, and Guido Montúfar. How framelets enhance graph neural networks. International Conference on Machine Learning, 2021. 1, 5, 9, 15, 16
282
+
283
+ [26] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448-456. PMLR, 2015. 3
284
+
285
+ [27] Gene H Golub and Charles F Van Loan. Matrix computations. JHU press, 2013. 3
286
+
287
+ [28] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are RNNs: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pages 5156-5165. PMLR, 2020. 4, 8
288
+
289
+ [29] Per Christian Hansen. The truncated SVD as a method for regularization. BIT Numerical Mathematics, 27(4):534-553, 1987. 5
290
+
291
+ [30] Bingxin Zhou, Ruikun Li, Xuebin Zheng, Yu Guang Wang, and Junbin Gao. Graph denoising with framelet regularizer. arXiv:2111.03264, 2021. 5
292
+
293
+ [31] Bingxin Zhou, Yuanhong Jiang, Yu Guang Wang, Jingwei Liang, Junbin Gao, Shirui Pan, and Xiaoqun Zhang. Graph neural network for local corruption recovery. 2022. 5
294
+
295
+ [32] Tharindu Rekha Liyanagunawardena, Andrew Alexandar Adams, and Shirley Ann Williams. Moocs: A systematic study of the published literature 2008-2012. International Review of Research in Open and Distributed Learning, 14(3):202-227, 2013. 6, 16
296
+
297
+ [33] Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning, pages 1378-1387, 2016. 6, 9
298
+
299
+ [34] Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, and Hongyuan Zha. Dyrep: Learning representations over dynamic graphs. In International Conference on Learning Representations, 2019.6, 9, 17
300
+
301
+ [35] Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. 6
302
+
303
+ [36] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth ${16} \times {16}$ words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2020. 8
304
+
305
+ [37] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In the IEEE/CVF International Conference on Computer Vision, pages 10012-10022, 2021.
306
+
307
+ [38] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. Nature, 596(7873):583-589, 2021. 8
308
+
310
+
311
+ [39] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv:1904.10509, 2019. 8
312
+
313
+ [40] Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv:2004.05150, 2020. 8
314
+
315
+ [41] Jiezhong Qiu, Hao Ma, Omer Levy, Wen-tau Yih, Sinong Wang, and Jie Tang. Blockwise self-attention for long document understanding. In the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2555-2565, 2020. 8
316
+
317
+ [42] Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In International Conference on Learning Representations, 2020. 8
318
+
319
+ [43] Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, and Da-Cheng Juan. Sparse sinkhorn attention. In International Conference on Machine Learning, pages 9438-9447. PMLR, 2020.
320
+
321
+ [44] Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics, 9:53-68, 2021. 8
322
+
323
+ [45] Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In International Conference on Machine Learning, pages 3744-3753. PMLR, 2019. 8
324
+
325
+ [46] Joshua Ainslie, Santiago Ontañón, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. Etc: Encoding long and structured data in transformers. In the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), 2020. 8
326
+
327
+ [47] Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, David Belanger, Lucy Colwell, et al. Masked language modeling for proteins via linearly scalable long-context transformers. arXiv:2006.03555, 2020.8
328
+
329
+ [48] Zhengyang Geng, Meng-Hao Guo, Hongxu Chen, Xia Li, Ke Wei, and Zhouchen Lin. Is attention better than matrix decomposition? In International Conference on Learning Representations, 2021. 8
330
+
331
+ [49] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pages 1263-1272, 2017. 8
332
+
333
+ [50] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.
334
+
335
+ [51] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4-24, 2020. 8
336
+
337
+ [52] William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1025-1035, 2017. 9
338
+
339
+ [53] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2018. 9
340
+
341
+ [54] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv:1312.6203, 2013. 9
342
+
343
+ [55] James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1993-2001, 2016. 9
344
+
345
+ [56] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv:1710.10903, 2017.
346
+
347
+ [57] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In CVPR, pages 5115-5124, 2017.
348
+
349
+ [58] Wenbing Huang, Tong Zhang, Yu Rong, and Junzhou Huang. Adaptive sampling towards fast graph representation learning. In Advances in Neural Information Processing Systems, 2018.
350
+
352
+
353
+ [59] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In AAAI Conference on Artificial Intelligence, 2018.
354
+
355
+ [60] Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In AAAI conference on artificial intelligence, 2018. 9
356
+
357
+ [61] Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. Advances in Neural Information Processing Systems, 2018.
358
+
359
+ [62] Ziqi Liu, Chaochao Chen, Longfei Li, Jun Zhou, Xiaolong Li, Le Song, and Yuan Qi. GeniePath: Graph neural networks with adaptive receptive paths. In AAAI Conference on Artificial Intelligence, pages 4424-4431, 2019. 9
360
+
361
+ [63] Hoang Nt and Takanori Maehara. Revisiting graph neural networks: All we have is low-pass filters. arXiv:1905.09550, 2019. 9
362
+
363
+ [64] Devin Kreuzer, Dominique Beaini, Will Hamilton, Vincent Létourneau, and Prudencio Tossou. Rethinking graph transformers with spectral attention. In Advances in Neural Information Processing Systems, volume 34, pages 21618-21629, 2021. 9
364
+
365
+ [65] Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv:1506.05163, 2015. 9
366
+
367
+ [66] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, volume 29, pages 3844-3852, 2016. 9
368
+
369
+ [67] Ron Levie, Federico Monti, Xavier Bresson, and Michael M Bronstein. Cayleynets: Graph convolutional neural networks with complex rational spectral filters. IEEE Transactions on Signal Processing, 67(1):97-109, 2018.
370
+
371
+ [68] Yunxiang Zhao, Jianzhong Qi, Qingwei Liu, and Rui Zhang. Wgcn: Graph convolutional networks with weighted structural features. arXiv:2104.14060, 2021. 9
372
+
373
+ [69] Bingbing Xu, Huawei Shen, Qi Cao, Yunqi Qiu, and Xueqi Cheng. Graph wavelet neural network. In International Conference on Learning Representations, 2018.
374
+
375
+ [70] Ming Li, Zheng Ma, Yu Guang Wang, and Xiaosheng Zhuang. Fast haar transforms for graph neural networks. Neural Networks, 128:188-198, 2020.
376
+
377
+ [71] Xuebin Zheng, Bingxin Zhou, Yu Guang Wang, and Xiaosheng Zhuang. Decimated framelet system on graphs and fast g-framelet transforms. Journal of Machine Learning Research, 23 (18):1-68, 2022.
378
+
379
+ [72] Xuebin Zheng, Bingxin Zhou, Ming Li, Yu Guang Wang, and Junbin Gao. Mathnet: Haar-like wavelet multiresolution-analysis for graph representation and learning. arXiv:2007.11202, 2020. 9
380
+
381
+ [73] David I Shuman, Sunil K Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83-98, 2013. 9
382
+
383
+ [74] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017. 9
384
+
385
+ [75] James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, and Santiago Ontanon. Fnet: Mixing tokens with fourier transforms. arXiv:2105.03824, 2021. 9
386
+
387
+ [76] Alex Graves. Sequence transduction with recurrent neural networks. In ICML 2012 Workshop on Representation Learning, 2012. 9
388
+
389
+ [77] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6645-6649. Ieee, 2013.
390
+
391
+ [78] Atefeh Shahroudnejad. A survey on understanding, visualizations, and explanation of deep neural networks. arXiv:2102.01792, 2021. 9
392
+
393
+ [79] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9 (8):1735-1780, 1997. 9
394
+
396
+
397
+ [80] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. In the IEEE/CVF Computer Vision and Pattern Recognition Conference, pages 21-29, 2016. 9
400
+
401
+ [81] Joakim Skardinga, Bogdan Gabrys, and Katarzyna Musial. Foundations and modelling of dynamic networks using dynamic graph neural networks: A survey. IEEE Access, 2021. 9
402
+
403
+ [82] Giang Hoang Nguyen, John Boaz Lee, Ryan A Rossi, Nesreen K Ahmed, Eunyee Koh, and Sungchul Kim. Continuous-time dynamic network embeddings. In the Web Conference 2018, pages 969-976, 2018. 9
404
+
405
+ [83] Taisong Li, Jiawei Zhang, S Yu Philip, Yan Zhang, and Yonghong Yan. Deep dynamic network embedding for link prediction. IEEE Access, 6:29219-29230, 2018.
406
+
407
+ [84] Palash Goyal, Nitin Kamra, Xinran He, and Yan Liu. DynGEM: Deep embedding method for dynamic graphs. In 3rd International Workshop on Representation Learning for Graphs (ReLiG), IJCAI, 2017. 9
408
+
409
+ [85] Youngjoo Seo, Michaël Defferrard, Pierre Vandergheynst, and Xavier Bresson. Structured sequence modeling with graph convolutional recurrent networks. In International Conference on Neural Information Processing, pages 362-373. Springer, 2018. 9
410
+
411
+ [86] Franco Manessi, Alessandro Rozza, and Mario Manzo. Dynamic graph convolutional networks. Pattern Recognition, 97:107000, 2020.
412
+
413
+ [87] Apurva Narayan and Peter H. O'N. Roe. Learning graph dynamics using deep neural networks. IFAC-PapersOnLine, 51(2):433-438, 2018. 9
414
+
415
+ [88] Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In International Conference on Machine Learning, pages 3462-3471, 2017. 9
416
+
417
+ [89] Aravind Sankar, Yanhong Wu, Liang Gou, Wei Zhang, and Hao Yang. Dysat: Deep neural representation learning on dynamic graphs via self-attention networks. In the 13th International Conference on Web Search and Data Mining, pages 519-527, 2020. 9
418
+
419
+ [90] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao Schardl, and Charles Leiserson. Evolvegcn: Evolving graph convolutional networks for dynamic graphs. In AAAI Conference on Artificial Intelligence, pages 5363-5370, 2020. 9
420
+
421
+ [91] Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Inductive representation learning on temporal graph. In International Conference on Learning Representations, 2020.9
422
+
423
+ [92] Bin Dong. Sparse representation on graphs by tight wavelet frames and applications. Applied and Computational Harmonic Analysis, 42(3):452-479, 2017. 15
424
+
425
+ [93] Nathan Halko, Per-Gunnar Martinsson, and Joel A Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53 (2):217-288,2011. 16
426
+
427
+ [94] James W Pennebaker, Martha E Francis, and Roger J Booth. Linguistic inquiry and word count: Liwc 2001. Mahway: Lawrence Erlbaum Associates, 71(2001):2001, 2001. 16
428
+
429
+ [95] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. 17
430
+
papers/LOG/LOG 2022/LOG 2022 Conference/kQsniwmGgF5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,308 @@
1
+ § WELL-CONDITIONED SPECTRAL TRANSFORMS FOR DYNAMIC GRAPH REPRESENTATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ This work establishes a fully-spectral framework to capture informative long-range temporal interactions in a dynamic system. We connect the spectral transform to the low-rank self-attention mechanisms and investigate its energy-balancing effect and computational efficiency. Based on the observations, we leverage the adaptive power method SVD and global graph framelet convolution to encode time-dependent features and graph structure for continuous-time dynamic graph representation learning. The former serves as an efficient high-order linear self-attention with determined propagation rules, and the latter establishes scalable and transferable geometric characterization for property prediction. Empirically, the proposed model learns well-conditioned hidden representations on a variety of online learning tasks, and it achieves top performance with a reduced number of learnable parameters and faster propagation speed.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Dynamic graphs appear in many scenarios, such as pandemic spread [1, 2], social media [3, 4], physics simulations $\left\lbrack {5,6}\right\rbrack$ , and computational biology $\left\lbrack {7,8}\right\rbrack$ . Learning dynamic graph properties, however, is a challenging task when both node attributes and graph structures evolve over time.
16
+
17
+ Many existing dynamic graph representation learning methods start from embedding the sequence of non-Euclidean graph topology to feed into recurrent networks [9-12]. Such a straightforward design assumes a discrete nature of input graphs. Graph snapshots are sliced at a sequence of fixed time steps, leaving the evolution of events on nodes and/or edges unobserved. Later, the memory module $\left\lbrack {4,{13}}\right\rbrack$ establishes a natural generalization of the learning procedure to continuous-time dynamic graphs (CTDGs), which encodes previous states for an event to its latest states. Consequently, a graph slice describes the past dynamics with implicitly encoded long-short term memory on node attributes.
18
+
19
+ Nevertheless, the memory module, e.g., recurrent neural networks [14] or gated recurrent units [15], has trouble tracking the full picture of graph evolution, as it preserves long-term interactions in a highly implicit way. Accessing the encoded message inside the black box becomes extremely hard. Alternatively, TRANSFORMER [16] enhances the long-range memory for sequential data, and it has received tremendous success in language understanding [17, 18] and image processing [19, 20]. In particular, the self-attention mechanism learns pair-wise event similarity scores in the entire range of interest. It retrieves a contextual matrix of full-landscape relationships to preserve the long-term dependency of tokens (or events). However, at the cost of comprehensiveness, the rapid growth of the sequence length can easily escalate the complexity of computation and memory. While the attention operation can be efficiently approximated by some low-rank representation [21-24], it loses expressivity at the same time.
20
+
21
+ This work provides a fully spectral-based solution for learning the representations of long-range CTDGs. First, an efficient spectral transform enhances the memory encoding of continuous events by extracting pairwise nonlinear relationships in the time and feature dimensions. A global spectral graph convolution with fast framelet transforms [25] then characterizes node-wise interactions in a sequence of graphs. The proposed design tackles the two identified problems in learning CTDGs. In particular, we show that the power method singular value decomposition (SVD) is an efficient and effective implementation of the low-rank self-attention scheme. It not only quickly captures the long-term evolving flow of the input events, but also preserves a more even energy distribution across the extracted pivotal components of the temporal observations. Such a design prevents ill-conditioned graph hidden representations, which results in an easier-to-fit smooth decision boundary for network training. In the final layer, the undecimated framelet-based spectral graph transform provides sufficient scalability for graph representation learning via its multi-level representation of the structured data.
22
+
24
+
25
+ Figure 1: Illustrative SPEDGNN for learning continuous-time dynamic graphs (CTDGs). (a) A spectral transform with adaptive power method SVD processes the long-range time-dependency of the input to the spectral domain. (b) The continuous embedding is then divided in a message-memory module with enhanced short-term interactions. (c) Finally, a global framelet graph convolution with multi-scale operators forms well-conditioned graph representations for prediction tasks.
26
+
27
+ We investigate the relationship between spectral transforms and feed-forward propagation, and design the Spectral Dynamic Graph Neural Network (SPEDGNN) for efficient and effective dynamic graph representation. The designed network architecture captures temporal features and graph structure of CTDGs in the spectral domain. Through efficient spectral self-attention and multi-scale graph convolution, expressive hidden representations of batched events are embedded in linear complexity (proportional to the number of events). The well-conditioned final embeddings are separable by a smooth decision boundary with less loss of the main information.
28
+
29
+ § 2 SPECTRAL TRANSFORM FOR LONG-RANGE SEQUENCE
30
+
31
+ This section introduces the notion of spectral transform and discusses how it fixes the ill-conditioning problem and its connection to the self-attention mechanism.
32
+
33
+ Definition 1. A spectral transform projects sample observations $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ from an unknown function space to a spectral domain with a (set of) orthonormal basis $\mathbf{\Phi}$: $\mathbf{X}^{\prime} \mathrel{\text{ := }} \mathbf{X}\mathbf{\Phi}$. The new representation ${\mathbf{X}}^{\prime }$ summarizes the prior knowledge of the observations $\mathbf{X}$ and allows perfect reconstruction.
34
+
35
+ § 2.1 CHOICES OF THE SPECTRAL BASIS
36
+
37
+ The properties of a formulated spectral transform are determined by $\mathbf{\Phi }$ . For instance, the orthonormal basis of $\mathbf{X}$ from QR decomposition or the singular vector matrix from singular value decomposition (SVD) extracts principal components of the function space. Fourier transforms map time-domain signals to the frequency domain to distill local-global oscillations. When $\mathbf{X}$ is a sequence, such a transform summarizes the observations in a new coordinate system that reflects specific ground-truth properties, such as sparsity or noise separation.
38
+
39
+ To better understand how the spectral transform benefits inferring the true function space, consider an example unitary transform by the orthonormal bases of SVD. Denote the raw signal input as a matrix $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ . It can be factorized by $\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}{\mathbf{V}}^{\top }$ with two orthonormal bases $\mathbf{U} \in {\mathbb{R}}^{N \times N}$ and $\mathbf{V} \in {\mathbb{R}}^{d \times d}$ . The two orthonormal bases span the row and column spaces of $\mathbf{X}$ , and both can project $\mathbf{X}$ to a spectral domain. For instance, the spectral coefficients ${\mathbf{X}}^{\prime } = \mathbf{X}\mathbf{V}$ are the projection of the aggregated features under the basis $\mathbf{V}$ .
40
+
42
+
43
+ Figure 2: A toy example of binary classification. The 2-dimensional data are sampled from $\mathcal{N}\left( {\mu ,\sigma }\right)$ . The direction and length of $\left\{ {{v}_{1},{v}_{2}}\right\}$ illustrate two eigenvectors and eigenvalues of the feature. Artificial labels are created by a decision boundary with large curvature. Both (b) batch normalized spatial representations and (c) unnormalized spectral coefficients fail to flatten the boundary. In contrast, (d) normalized spectral representations have a closer-to-1 condition number, creating a smooth decision boundary that is easier for a classifier to fit.
44
+
45
+ § 2.2 TRANSFORMING TOWARDS BALANCED ENERGY
46
+
47
+ A key motivation to perform the spectral transform on a time-dependent long-range sequence is to amend the highly-imbalanced energy distribution of the original feature space. We pay special attention to the cases where the expressivity of latent representations is hurt, i.e., where detailed information with small energy in the original feature space is smoothed out. In practice, such small-energy details can be pivotal for distinguishing different entities, and ignoring them not only removes local noise but also eliminates potentially useful messages. For instance, tumor cells generally occupy a small area and thus carry considerably small energy in a medical image. Smoothing out these features could cause problems in pathology diagnosis.
48
+
49
+ Amending the energy distribution, however, can be tricky to conduct in the features' original domain. Figure 2 demonstrates a two-dimensional toy example. The sample distribution in Figure 2(a) concentrates most of the variance in a certain direction, with eigenvalues of the sample variance $\sigma = \{ 9,2\}$ . An instant normalization in the same domain, such as BatchNorm [26] in Figure 2(b), reshapes the sample distribution. However, the energy is still centralized in ${\mathbf{v}}_{1}$ ’s direction. As a result, the two classes (colored in red and green) can only be divided by a decision boundary that has a large curvature. Such a boundary is difficult to fit for classifiers such as an MLP-based model, which tends to fit smooth flat curves. Meanwhile, Figure 2(c) illustrates the unnormalized spectral representation of the original data ${\mathbf{X}}^{\prime } \mathrel{\text{ := }} \mathbf{{XV}} = \mathbf{U}\boldsymbol{\Sigma}$ , which projects $\mathbf{X}$ to a new coordinate set by the transformation $\mathbf{V}$ . As shown in Figure 2(d), normalizing the new coordinates in the same spectral domain by $\widetilde{\mathbf{X}} \mathrel{\text{ := }} {\mathbf{X}}^{\prime }\operatorname{diag}\left( {{c}_{1},{c}_{2}}\right) {\mathbf{V}}^{\top}$ results in an easy-to-fit flat decision boundary.
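To make the Figure 2 intuition concrete, the following minimal NumPy sketch (our illustration, not the paper's code) normalizes a 2-dimensional toy sample in the spectral domain via $\mathbf{X}' = \mathbf{X}\mathbf{V}$ and $\widetilde{\mathbf{X}} = \mathbf{X}'\operatorname{diag}(c_1, c_2)\mathbf{V}^{\top}$; the synthetic data and the choice $c_i = 1/\sigma_i$ are assumptions made purely for illustration.

```python
# Minimal sketch of spectral-domain normalization (Section 2.2), not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
# Anisotropic toy samples with variances close to {9, 2}, as in Figure 2(a).
X = rng.normal(size=(500, 2)) @ np.diag([3.0, np.sqrt(2.0)])

# SVD supplies the spectral basis V and the per-direction energies (singular values).
U, s, Vt = np.linalg.svd(X, full_matrices=False)

Xp = X @ Vt.T                     # spectral coefficients X' = X V
c = 1.0 / s                       # illustrative rescaling that balances the energy
X_tilde = Xp @ np.diag(c) @ Vt    # normalized representation mapped back to the original coordinates

def cond(A):
    """Condition number of a data matrix: largest / smallest singular value."""
    sv = np.linalg.svd(A, compute_uv=False)
    return sv[0] / sv[-1]

print(cond(X), cond(X_tilde))     # the normalized features are much closer to condition number 1
```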
50
+
51
+ The spectral transform allows balancing the features' energy and, if necessary, truncating local noise simultaneously. To circumvent a full singular value decomposition, we consider an efficient approximation that relies on matrix products of $\mathbf{X}$ . We can regard $\mathbf{U}$ (from $\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}{\mathbf{V}}^{\top}$ ) as being close to the orthonormal basis $\mathbf{Q}$ (from the QR decomposition $\mathbf{X} = \mathbf{{QR}}$ ). Power method SVD [27] suggests a better approximation to $\mathbf{U}$ , namely the orthonormal basis $\widetilde{\mathbf{Q}}$ from $\mathbf{X}{\left( {\mathbf{X}}^{\top }\mathbf{X}\right) }^{q} = \widetilde{\mathbf{Q}}\widetilde{\mathbf{R}}$ , i.e.,
52
+
53
+ $$
54
+ \mathbf{U} \approx \widetilde{\mathbf{Q}} = \mathbf{X}{\left( {\mathbf{X}}^{\top }\mathbf{X}\right) }^{q}{\widetilde{\mathbf{R}}}^{-1}. \tag{1}
55
+ $$
56
+
57
+ The normalized features $\widetilde{\mathbf{X}}$ are therefore approximated by including a proper diagonal matrix $\mathbf{C}$ in (1), i.e., $\widetilde{\mathbf{X}} \approx \mathbf{X}{\left( {\mathbf{X}}^{\top }\mathbf{X}\right) }^{q}{\widetilde{\mathbf{R}}}^{-1}\mathbf{C}$ . However, it is computationally expensive to let the upper triangular factor $\widetilde{\mathbf{R}}$ participate directly in a neural network, as the QR decomposition involves a Gram-Schmidt procedure. To reduce the cost, we replace $\widetilde{\mathbf{R}}$ and $\mathbf{C}$ with a learnable scheme. We let
58
+
59
+ $$
60
+ \widetilde{\mathbf{X}} \approx \mathbf{X}{\left( {\mathbf{X}}^{\top }\mathbf{X}\right) }^{q}\mathbf{W}, \tag{2}
61
+ $$
62
+
63
+ where $\mathbf{W}$ is a learnable upper triangular matrix. The consequent approximation to $\widetilde{\mathbf{X}}$ supports matrix computations and it can be propagated by neural networks.
64
+
65
+ Compared to the conventional truncated SVD that ranks the orthonormal basis by singular values, the learnable spectral transform by (2) conducts a data-driven principal component distillation and normalization at the same time. The projection by $\mathbf{W}$ summarizes the principal vectors that describe entity features to a set of spectral coefficients and ranks them adaptively by their importance to the specific application. Such a learning scheme prevents important rare patterns from being removed due to their small energy.
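A minimal PyTorch sketch of the learnable transform in (2) could look as follows; the module name, the masking trick used to keep $\mathbf{W}$ upper triangular, and the layer sizes are our assumptions rather than the authors' released implementation.

```python
# Sketch of the adaptive temporal spectral transform in Eq. (2) (assumed details).
import torch
import torch.nn as nn

class AdaptivePowerSVD(nn.Module):
    def __init__(self, d: int, q: int = 1):
        super().__init__()
        self.q = q
        self.weight = nn.Parameter(torch.eye(d))              # plays the role of R^{-1} C in Eq. (2)
        self.register_buffer("triu_mask", torch.triu(torch.ones(d, d)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, d) sequence of events; each power iteration costs O(N d^2) since X^T X is d x d.
        w = self.weight * self.triu_mask                       # keep W upper triangular
        cov = x.transpose(-2, -1) @ x                          # d x d covariance-like matrix X^T X
        for _ in range(self.q):
            x = x @ cov                                        # accumulates X (X^T X)^q iteratively
        # Note: for large q the product may need rescaling for numerical stability.
        return x @ w

x = torch.randn(1024, 64)
layer = AdaptivePowerSVD(d=64, q=2)
print(layer(x).shape)   # torch.Size([1024, 64])
```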
66
+
67
+ § 2.3 CONNECTING ADAPTIVE SPECTRAL TRANSFORM TO SELF-ATTENTION
68
+
69
+ Aside from preserving small-energy rare patterns like other spectral transforms, the adaptive SVD-based spectral transform is also closely connected to linear self-attention mechanisms [21, 23, 28]. For $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ , a self-attention layer reads
70
+
71
+ $$
72
+ {\mathbf{X}}_{\text{ attn }} \mathrel{\text{ := }} \left( {{\mathbf{Q}}_{a}{\mathbf{K}}_{a}^{\top }{\mathbf{V}}_{a}}\right) /\sqrt{{d}_{k}}, \tag{3}
73
+ $$
74
+
75
+ $$
76
+ \text{ where }{\mathbf{Q}}_{a} \mathrel{\text{ := }} \mathbf{X}{\mathbf{W}}_{Q},{\mathbf{K}}_{a} \mathrel{\text{ := }} \mathbf{X}{\mathbf{W}}_{K},{\mathbf{V}}_{a} \mathrel{\text{ := }} \mathbf{X}{\mathbf{W}}_{V}\text{ . }
77
+ $$
78
+
79
+ The three matrices ${\mathbf{Q}}_{a}$ (query), ${\mathbf{K}}_{a}$ (key) and ${\mathbf{V}}_{a}$ (value) learn basis functions of an identical size $N \times d$ . The learning cost drops significantly when $N \gg d$ , as fewer parameters are required for the approximation. Conventional self-attention restricts the order of the operations strictly from left to right, as the context mapping matrix comes from the normalized and activated softmax $\left( {{\mathbf{Q}}_{a}{\mathbf{K}}_{a}^{\top }/\sqrt{{d}_{k}}}\right) \in {\mathbb{R}}^{N \times N}$ . Instead, linear self-attention removes the activation for efficient computation.
80
+
81
+ To understand the intrinsic connection between linear self-attention in (3) and power method SVD, rewrite ${\mathbf{X}}_{\text{ attn }}$ as a function of $\mathbf{X}$ , i.e.,
82
+
83
+ $$
84
+ {\mathbf{X}}_{\text{ attn }} = {\mathbf{{XW}}}_{Q}{\mathbf{W}}_{K}^{\top }{\mathbf{X}}^{\top }\mathbf{X}{\mathbf{W}}_{V} = {\mathbf{{XW}}}_{1}{\mathbf{X}}^{\top }\mathbf{X}{\mathbf{W}}_{2} \tag{4}
85
+ $$
86
+
87
+ with ${\mathbf{W}}_{Q}{\mathbf{W}}_{K}^{\top } = {\mathbf{W}}_{1}$ and ${\mathbf{W}}_{V} = {\mathbf{W}}_{2}$ . Compared to (2), a linear self-attention step in (4) is a special implementation that corresponds to a 1-iteration QR approximation of the SVD basis. To match a $q$-iteration adaptive power method SVD, a stack of $q$ linear self-attention layers is required. Moreover, both (2) and (4) aggregate row-wise variation and summarize a low-rank covariance matrix of $\mathbf{X}$ with ${\mathbf{X}}^{\top }\mathbf{X}$ . However, (2) provides an efficient concentration on large-mode tokens while truncating out noise. For an extremely long input sequence ${\mathbf{X}}_{N} \in {\mathbb{R}}^{N \times d}\left( {N \gg d}\right)$ , ${\mathbf{X}}_{N}^{\top }{\mathbf{X}}_{N} \in {\mathbb{R}}^{d \times d}$ in (2) completes the main calculation at a significantly smaller cost. This cost-efficient technique is important for scalable learning tasks such as time-series learning, where the length of an input sequence can easily explode.
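The reordering behind (4) can be checked numerically; the sketch below (illustrative only, unnormalized and without softmax) confirms that computing $\mathbf{X}\mathbf{W}_{1}\mathbf{X}^{\top}\mathbf{X}\mathbf{W}_{2}$ never requires the $N \times N$ context matrix.

```python
# Illustrative check of Eq. (4): the same linear self-attention computed two ways.
import torch

N, d = 2048, 64
X = torch.randn(N, d, dtype=torch.float64)
W_Q, W_K, W_V = (torch.randn(d, d, dtype=torch.float64) for _ in range(3))

# Left-to-right order materializes an N x N context matrix: O(N^2 d) time, O(N^2) memory.
quadratic = (X @ W_Q) @ (X @ W_K).T @ (X @ W_V)

# Reordered as in Eq. (4): only d x d intermediates, O(N d^2) time, O(d^2) extra memory.
linear = X @ (W_Q @ W_K.T) @ (X.T @ X) @ W_V

print(torch.allclose(quadratic, linear))  # True up to floating-point round-off
```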
88
+
89
+ § 3 SPECTRAL TRANSFORMS FOR DYNAMIC GRAPHS
90
+
91
+ In this section, we expand the long-range sequence of interest to an additional dimension of topology and practice the spectral transform on dynamic graphs. We validate the efficiency and effectiveness of the spectral transform framework by dynamic graph representation learning.
92
+
93
+ § 3.1 PROBLEM FORMULATION
94
+
95
+ A static undirected graph is denoted by ${\mathcal{G}}_{p} = \left( {{\mathbb{V}}_{p},{\mathbb{E}}_{p},{\mathbf{X}}_{p}}\right)$ with $n = \left| {\mathbb{V}}_{p}\right|$ nodes, where its edge connection is described by an adjacency matrix ${\mathbf{A}}_{p} \in {\mathbb{R}}^{n \times n}$ and the $d$ -dimensional node feature is stored in ${\mathbf{X}}_{p} \in {\mathbb{R}}^{n \times d}$ . A graph convolution finds a hidden representation ${\mathbf{H}}_{p}$ that embeds information about the structure ${\mathbf{A}}_{p}$ and the node feature ${\mathbf{X}}_{p}$ . When ${\mathbf{A}}_{p}$ and/or ${\mathbf{X}}_{p}$ changes with time, ${\mathcal{G}}_{p}$ is called a dynamic graph.
96
+
97
+ Definition 2. Given a sequence of graphs $\mathbb{G} = {\left\{ {\mathcal{G}}_{p}\right\} }_{p = 1}^{P}$ where each ${\mathcal{G}}_{p} = \left( {{\mathbb{V}}_{p},{\mathbb{E}}_{p},{\mathbf{X}}_{p}}\right)$ , dynamic graph representation learning finds the hidden representation ${\mathbf{H}}_{p}$ of each ${\mathcal{G}}_{p}$ where the $i$ th row of ${\mathbf{H}}_{p}$ corresponds to the $i$ th node of ${\mathcal{G}}_{p}$ .
98
+
99
+ Depending on the particular prediction task, ${\mathbf{H}}_{p}$ can be processed for label assignments. For example, link prediction forecasts the pair-wise connection of nodes in a graph, and node classification assigns labels to unlabeled nodes. Continuous-time dynamic graphs (CTDGs) are a general and complicated class of dynamic graphs. An arbitrary observation of a CTDG is recorded as a tuple of (event, event type, timestamp). The event recorded at a specific timestamp is described by a feature vector, and the event type can be one of edge addition/deletion or node addition/deletion. Training an adequate model for CTDGs can be challenging. Compared to a static graph, the complete architecture of a CTDG is revealed sequentially during training. A powerful graph embedding design is thus required to interpret the connection between the next graph and historical graph snapshots. In comparison to discrete-time dynamic graphs, the continuous recording of activities allows CTDGs to capture the event flow of the entire graph so that information loss is minimized.
100
+
101
+ To this end, we propose to first employ an adaptive temporal spectral transform (as introduced in Section 2) to encode the long-range evolution of the graph dynamics into normalized spectral coefficients $\widetilde{\mathbf{X}}$ with a minimum loss of energy. The short-term interaction is also enhanced by a message-memory module [4, 13] (see Appendix A). The encoded event sequence is then partitioned evenly into a sequence of subgraphs of interacting nodes, where the node attributes encode their present and recent status. Finally, the graph topology is embedded by another spectral-based graph network, i.e., a global spectral graph convolution, to find a well-conditioned hidden representation for the final prediction task. We now explain the two spectral-based transforms in detail.
102
+
103
+ § 3.2 ADAPTIVE TEMPORAL SPECTRAL TRANSFORM
104
+
105
+ In the first step, we embed the long-range time-dependency with adaptive power method SVD as a particular implementation of the temporal spectral transform. As briefed in Section 2, it plays a role similar to traditional self-attention in feature extraction but offers additional scalability and reliability. We focus on the transform and omit the adaptive normalization for conciseness.
106
+
107
+ Consider a sequence of events $\mathbf{X} \in {\mathbb{R}}^{N \times d}$ . We look for its low-dimensional projection ${\mathbf{X}}^{\prime }$ in the spectral domain that i) summarizes the principal patterns of $\mathbf{X}$ , and ii) is immune to minor disturbances so as to preserve expressivity in the projected representation. Analogous to self-attention, the spectral encoder assigns a matrix of similarity scores to $\mathbf{X}$ ; unlike self-attention, however, it follows an explicit update rule that establishes a traceable learning process. The main patterns from both the event-attribute and time dimensions are summarized in the spectral coefficients ${\mathbf{X}}^{\prime }$ (cyan box in Figure 1). Below we explain two interpretations of such transforms.
108
+
109
+ Interpretation 1. Spectral coefficients ${\mathbf{X}}^{\prime } \approx \mathbf{X}\mathbf{V}$ extract information along the feature dimension. By the definition $\mathbf{X} \mathrel{\text{ := }} \mathbf{U}\boldsymbol{\Sigma}{\mathbf{V}}^{\top }$ , SVD stores the factorized features (in the columns of $\mathbf{V}$ ) and the temporal shifts (in the rows of $\mathbf{U}$ ). For low-rank or noisy input, truncated SVD [29] extracts the stable main patterns by ${\mathbf{X}}^{\prime } \approx \mathbf{{XV}} \in {\mathbb{R}}^{N \times {d}^{\prime }}\left( {d > {d}^{\prime }}\right)$ . Specifically, the transformed ${\mathbf{X}}^{\prime }$ is projected by $\mathbf{V}$ to a new space of the most effective feature representation. For instance, the $j$ th feature of the $i$ th transformed event, ${\mathbf{X}}_{ij}^{\prime } = {\mathbf{X}}_{i, : }{\mathbf{V}}_{ : ,j}$ , condenses ${\mathbf{X}}_{i}$ into a coefficient along the projection of the $j$ th factorized feature. Similar mappings by the $1,\ldots ,{d}^{\prime }$ th columns of $\mathbf{V}$ send the raw ${\mathbf{X}}_{i}$ to a new space that is the most representative of the features.
110
+
111
+ Interpretation 2. Spectral coefficients ${\mathbf{X}}^{\prime } \approx \mathbf{X}\left( {{\mathbf{X}}^{\top }\mathbf{X}}\right) {\mathbf{R}}^{-1}$ aggregate information along the time dimension. We focus on the simplest case of iteration $q = 1$ for illustration purposes, i.e., ${\mathbf{X}}^{\top }\mathbf{X}{\mathbf{R}}^{-1}$ is a one-step approximation of $\mathbf{V}$ . For instance, the $j$ th element in the $j$ th row of ${\mathbf{X}}^{\top }\mathbf{X}$ is computed from ${\mathbf{X}}_{ : ,j}$ , which covers the whole time interval. The resulting ${\mathbf{X}}^{\top }\mathbf{X} \in {\mathbb{R}}^{d \times d}$ is a covariance matrix that summarizes the column-wise linear relationships of $\mathbf{X}$ , i.e., the change of attributes over time. Transforming $\mathbf{X}$ by this similarity matrix thus establishes a new representation that carries the all-time temporal correlation of the attributes. The same interpretation extends easily to $q > 1$ , where the approximation takes linear adjustments via ${\mathbf{R}}^{-1}$ and concentrates high energies on more expressive modes. However, the fundamental form of the covariance matrix ${\mathbf{X}}^{\top }\mathbf{X}$ stays unchanged.
112
+
113
+ § 3.3 GLOBAL FRAMELET GRAPH TRANSFORMS
114
+
115
+ To allow scalable graph representation learning, a global spectral graph convolution is employed to extract multi-level and multi-scale features. The vanilla framelet graph convolution (UFGCONV; Zheng et al. [25]) leverages fast framelet decomposition and reconstruction for efficient static graph topology embedding (see Appendix B). Working in the framelet domain has been proven robust to local perturbations and circumvents over-smoothing through Dirichlet energy preservation [30, 31]. We hereby propose a global version of framelet transforms to perform multi-scale robust graph representation learning for CTDGs. Formally, the graph framelet convolution is defined in a similar manner to any typical spectral graph convolution layer, ${\mathbf{\theta }}_{p} \star {\mathbf{X}}_{p} = {\mathcal{V}}_{p}\operatorname{diag}\left( {\mathbf{\theta }}_{p}\right) {\mathcal{W}}_{p}{\mathbf{X}}_{p}^{\flat}$ , where ${\mathbf{X}}_{p}^{\flat}$ denotes the embedded input and ${\mathbf{\theta }}_{p}$ are the learnable parameters with respect to ${\mathbf{X}}_{p}^{\flat }$ . The ${\mathcal{W}}_{p}$ and ${\mathcal{V}}_{p}$ are the decomposition and reconstruction operators that transform the input graph signal ${\mathbf{X}}_{p}^{\flat}$ from and to the vertex domain.
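As a simplified stand-in for the formula above (our illustration only; the actual UFGCONV of Zheng et al. [25] uses multi-scale framelet filter banks with fast decomposition and reconstruction transforms), a single-scale spectral graph convolution with the Laplacian eigenbasis playing the role of $\mathcal{W}_p$ and $\mathcal{V}_p$ can be sketched as:

```python
# Simplified single-scale spectral graph convolution of the form theta * X = V diag(theta) W X.
# This is an assumed illustration, not the multi-scale framelet transform used in the paper.
import numpy as np

def spectral_conv(A, X, theta):
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # normalized Laplacian
    lam, U = np.linalg.eigh(L)          # lam: graph frequencies (unused in this sanity check)
    coeffs = U.T @ X                    # decomposition: project node signals onto the spectral basis
    return U @ (theta[:, None] * coeffs)  # filter with theta, then reconstruct in the vertex domain

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(1).normal(size=(4, 8))
theta = np.ones(4)                                  # identity filter should reproduce X
print(np.allclose(spectral_conv(A, X, theta), X))   # True
```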
116
+
117
+ Algorithm 1: SPEDGNN: Spectral Dynamic Graph Neural Network
+
+ Input: raw sequential data $\mathbf{X}$
+ Output: label prediction $\mathbf{Y}$
+ Initialization: global $\Theta$
+
+ Adaptive power method SVD: $\widetilde{\mathbf{X}} \leftarrow \mathbf{X}{\left( {\mathbf{X}}^{\top }\mathbf{X}\right) }^{q}{\mathbf{R}}^{-1}$;
+ for batch $p \leftarrow 1$ to $M - 2$ do
+     ${\mathbf{h}}_{p}\left\lbrack t\right\rbrack \leftarrow \mathbf{{msg}}\left( {{e}_{i}\left\lbrack t\right\rbrack }\right) \parallel \mathbf{{mem}}\left( {{e}_{i}\left\lbrack { : t}\right\rbrack }\right)$;
+     ${\mathcal{G}}_{p} \leftarrow \left( {{\mathbf{A}}_{p},{\mathbf{X}}_{p} \leftarrow \mathbf{{FC}}\left( {{\mathbf{h}}_{p}\left\lbrack t\right\rbrack }\right) }\right)$;
+     ${\mathbf{H}}_{p} \leftarrow \operatorname{UFGConv}\left( {{\mathbf{A}}_{p},{\mathbf{X}}_{p},{\theta }_{p}}\right)$;
+     ${\mathbf{Y}}_{p} \leftarrow \operatorname{Predictor}\left( {\mathbf{H}}_{p}\right)$;
+     ${\Theta }_{p} \leftarrow {\theta }_{p}$;
+     ${\mathbf{Y}}_{p,\text{val}},{\mathbf{Y}}_{p,\text{test}} \leftarrow \operatorname{Predictor}\left( {\mathbf{H}}_{p + 1}\right) ,\operatorname{Predictor}\left( {\mathbf{H}}_{p + 2}\right)$;
+     Update: score$\left( {\mathbf{Y}}_{\text{val}}\right)$, score$\left( {\mathbf{Y}}_{\text{test}}\right)$.
+ end
144
+
145
+ Different from a set of independent static graphs, dynamic graphs are captured on an evolving timeline. Therefore, we preserve the intra-connections of the graph sequence in a global learnable variable $\Theta$ . At time $t$ , we initialize the framelet coefficients $\theta {\left\lbrack t\right\rbrack }^{\left( 0\right) }$ with their most recent best estimates before $t$ , i.e., $\theta {\left\lbrack : t\right\rbrack }^{\left( n\right) }$ , from $\Theta$ . For instance, the initial $\theta$ with respect to node $p$ reads ${\theta }_{p}{\left\lbrack t\right\rbrack }^{\left( 0\right) } = {\Theta }_{p}$ . Figure 1 demonstrates the update procedure of the global framelet transform with a sample global graph of 5 nodes. Suppose the subgraph at batch $t$ contains the first 3 of the 5 nodes. A UFGCONV trains $\theta \left\lbrack t\right\rbrack$ to represent these three nodes. The parameter values are initialized with the best estimates $\left\{ {{\Theta }_{1},{\Theta }_{2},{\Theta }_{3}}\right\}$ recorded before $t$ . The optimized model is deployed for further prediction tasks. Meanwhile, $\Theta$ updates the first three parameters by $\left\{ {{\theta }_{1}{\left\lbrack t\right\rbrack }^{\left( n\right) },{\theta }_{2}{\left\lbrack t\right\rbrack }^{\left( n\right) },{\theta }_{3}{\left\lbrack t\right\rbrack }^{\left( n\right) }}\right\}$ .
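A schematic Python sketch of Algorithm 1's outer loop and the global store $\Theta$ is given below; every component name (power_svd, build_graph, ufgconv, predictor, score) is a placeholder for a module described in the text, not an actual API.

```python
# Schematic outer loop of Algorithm 1 with the global parameter store Theta
# (placeholder callables passed in by the caller; not the authors' implementation).
def embed(batch, x_tilde, build_graph, ufgconv, global_theta):
    graph = build_graph(batch, x_tilde)                    # subgraph A_p with message/memory node features X_p
    theta = {v: global_theta.get(v) for v in graph.nodes}  # warm-start filters from the global store
    h, theta = ufgconv(graph, theta)                       # global framelet graph convolution
    return h, theta

def train_spedgnn(events, batches, power_svd, build_graph, ufgconv, predictor, score, global_theta):
    x_tilde = power_svd(events)                            # long-range temporal spectral transform, Eq. (2)
    for p in range(len(batches) - 2):
        h_p, theta_p = embed(batches[p], x_tilde, build_graph, ufgconv, global_theta)
        predictor(h_p)                                     # training prediction on batch p (optimization step omitted)
        global_theta.update(theta_p)                       # write the optimized filters back to Theta
        h_val, _ = embed(batches[p + 1], x_tilde, build_graph, ufgconv, global_theta)
        h_test, _ = embed(batches[p + 2], x_tilde, build_graph, ufgconv, global_theta)
        score(predictor(h_val))                            # validation score for this step
        score(predictor(h_test))                           # test score for this step
```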
146
+
147
+ The workflow of the fully spectral learning scheme is summarized in Figure 1. A temporal spectral transform first processes the raw input data into a spectral domain that encodes long-range time dependency. With the adaptive power method SVD, a group of stable principal patterns can be extracted, which functions similarly to a stack of efficient linear self-attention layers. Next, a message-memory module enhances the short-term interactions of events and generates a comprehensive node representation for the batched subgraphs (Appendix A). The topology of the subgraphs records the interactions among node entities, which we learn with a global graph framelet convolution.
148
+
149
+ § 3.4 MAIN ALGORITHM
150
+
151
+ The workflow of SPEDGNN is summarized in Algorithm 1. Based on this, we estimate that the main algorithm has a computational complexity of $\mathcal{O}\left( {{Nd}\log \left( d\right) }\right)$ , linear in the number of events $N$ , which shows that our method is efficient in both time and space (see Appendix C).
152
+
153
+ § 4 NUMERICAL EXAMPLES
154
+
155
+ We carry out experiments on three bipartite graph datasets (Wikipedia, Reddit, and MOOC)[13, 32] for link prediction and node classification tasks [33]. Both transductive and inductive settings are examined in link predictions. We leave implementation details in Appendix E.
156
+
157
+ A fair comparison is made with JODIE [13], DYREP [34], and TGN [4]. Classic methods (e.g., TGAT and DEEPWALK) that significantly underperform the baseline methods are excluded. For model training and evaluation, we assume the interactions of a graph are given until the last timestamp in batch $t$ and make predictions on the timestamps of batches $t + 1$ and $t + 2$ , where predictions on the former set provide the validation scores, and the latter guide the test scores. Note that our training follows PyTorch Geometric [35] and imposes a stricter data acquisition criterion than TGN, as the latter has access to all previous data when loading node neighbors, including those in the same
158
+
159
+ Table 1: Performance of link prediction over 10 repetitions.
160
+
161
+ | Setting | Model | #parameters | Wikipedia precision | Wikipedia ROC-AUC | Reddit precision | Reddit ROC-AUC | MOOC precision | MOOC ROC-AUC |
+ |---|---|---|---|---|---|---|---|---|
+ | transductive | DyREP | 920 ×10³ | 94.67 ± 0.25 | 94.26 ± 0.24 | 96.51 ± 0.59 | 96.64 ± 0.48 | 79.84 ± 0.38 | 81.92 ± 0.21 |
+ | transductive | JODIE-RNN | 209 ×10³ | 93.94 ± 2.50 | 94.44 ± 1.42 | 97.12 ± 0.57 | 97.59 ± 0.27 | 76.68 ± 0.02 | 81.40 ± 0.02 |
+ | transductive | JODIE-GRU | 324 ×10³ | 96.38 ± 0.50 | 96.75 ± 0.19 | 96.84 ± 0.39 | 97.33 ± 0.25 | 80.29 ± 0.09 | 84.88 ± 0.30 |
+ | transductive | TGN-GRU | 1,217 ×10³ | 96.73 ± 0.09 | 96.45 ± 0.11 | 98.63 ± 0.06 | 98.61 ± 0.03 | 83.18 ± 0.10 | 83.20 ± 0.35 |
+ | transductive | SPEDGNN-MLP (ours) | 170 ×10³ | 97.02 ± 0.06 | 96.51 ± 0.08 | 98.19 ± 0.05 | 98.15 ± 0.06 | 82.40 ± 0.24 | 85.55 ± 0.17 |
+ | transductive | SPEDGNN-GRU (ours) | 376 ×10³ | 97.44 ± 0.05 | 97.15 ± 0.06 | 98.69 ± 0.09 | 98.66 ± 0.12 | 84.50 ± 0.10 | 86.88 ± 0.09 |
+ | inductive | DyREP | 920 ×10³ | 92.09 ± 0.28 | 91.22 ± 0.26 | 96.07 ± 0.34 | 96.03 ± 0.28 | 79.64 ± 0.12 | 82.34 ± 0.32 |
+ | inductive | JODIE-RNN | 209 ×10³ | 92.92 ± 1.07 | 92.56 ± 0.87 | 93.94 ± 1.53 | 95.08 ± 0.70 | 77.17 ± 0.02 | 81.77 ± 0.01 |
+ | inductive | JODIE-GRU | 324 ×10³ | 94.93 ± 0.15 | 95.08 ± 0.70 | 92.90 ± 0.03 | 95.14 ± 0.07 | 77.82 ± 0.17 | 82.90 ± 0.60 |
+ | inductive | TGN-GRU | 1,217 ×10³ | 94.37 ± 0.23 | 93.83 ± 0.27 | 97.38 ± 0.07 | 97.33 ± 0.11 | 81.75 ± 0.24 | 82.83 ± 0.18 |
+ | inductive | SPEDGNN-MLP (ours) | 170 ×10³ | 94.27 ± 0.05 | 93.28 ± 0.05 | 97.49 ± 0.01 | 97.34 ± 0.02 | 82.54 ± 0.08 | 85.23 ± 0.09 |
+ | inductive | SPEDGNN-GRU (ours) | 376 ×10³ | 96.60 ± 0.01 | 95.70 ± 0.02 | 97.47 ± 0.05 | 97.10 ± 0.09 | 82.35 ± 0.06 | 83.67 ± 0.06 |
+
+ $\dagger$ The top three are highlighted by First, Second, Third.
207
+
208
+ test batch. Such an operation reveals the true connections to predict, and the test scores of TGN reported by Rossi et al. [4] are higher than in this paper.
209
+
210
+ Prediction Performance. Table 1 reports the performance on link prediction tasks. SPEDGNN consistently outperforms the RNN-based JODIE and DYREP, and achieves at least comparable performance to JODIE-GRU and TGN-GRU with small volatility. It is noteworthy that JODIE-GRU outperforms the original JODIE-RNN of Kumar et al. [13]. The performance gain of GRU over RNN explains to some extent the occasional outperformance of TGN over SPEDGNN-MLP, not to mention that an MLP is simpler than any recurrent unit. This statement is confirmed by the performance of SPEDGNN-GRU: when the GRU module is employed in the memory layer, the highest performance score is almost always observed across different datasets, learning tasks, and evaluation metrics.
211
+
212
+ In addition to the transductive and inductive link prediction tasks, we also conduct node classification. The model performance is evaluated by the average ROC-AUC scores, which better fit the extremely imbalanced nature of node classes. The results reported in Table 2 confirm that SPEDGNN outperforms all baselines, especially with the GRU module.
213
+
214
+ Table 2: ROC-AUC of node classification
215
+
216
+ | Model | Wikipedia | Reddit | MOOC |
+ |---|---|---|---|
+ | DyREP | 84.59 ± 2.21 | 62.91 ± 2.40 | 69.86 ± 0.02 |
+ | JODIE-RNN | 85.38 ± 0.08 | 61.68 ± 0.01 | 66.82 ± 0.05 |
+ | JODIE-GRU | 87.90 ± 0.09 | 64.30 ± 0.21 | 70.23 ± 0.09 |
+ | TGN-GRU | 88.95 ± 0.07 | 61.49 ± 0.01 | 70.32 ± 0.13 |
+ | SPEDGNN-MLP (ours) | 88.37 ± 0.03 | 64.94 ± 0.07 | 69.52 ± 0.08 |
+ | SPEDGNN-GRU (ours) | 90.32 ± 0.05 | 65.28 ± 0.05 | 71.08 ± 0.02 |
239
+
240
+ Table 3: Training speed for link prediction
241
+
242
+ | Model | Wikipedia | Reddit | MOOC |
+ |---|---|---|---|
+ | DyREP | 20.1s ± 0.6s | 139.3s ± 0.1s | 78.34s ± 0.6s |
+ | JODIE-RNN | 17.4s ± 2.0s | 121.8s ± 0.3s | 62.64s ± 0.1s |
+ | JODIE-GRU | 16.9s ± 1.1s | 131.6s ± 1.5s | 58.82s ± 2.2s |
+ | TGN-GRU | 24.9s ± 0.3s | 128.1s ± 2.2s | 78.11s ± 0.7s |
+ | SPEDGNN-MLP (ours) | 9.87s ± 0.1s | 63.3s ± 1.1s | 38.41s ± 0.5s |
+ | SPEDGNN-GRU (ours) | 12.5s ± 0.3s | 83.6s ± 0.1s | 49.20s ± 0.1s |
265
+
266
+ Computational Efficiency. Table 3 evaluates model efficiency by the training time per epoch. Compared to the baseline models, SPEDGNN trains faster, about 50% ahead on the largest dataset, Reddit. This confirms SPEDGNN's computational advantage on long sequences analysed in Section 2. In contrast, the comparable performance of TGN-GRU is achieved at the cost of doubling the training time to fit a model with seven times more learnable parameters. On the other hand, both variants of JODIE require significantly longer training time than SPEDGNN, not to mention that their performance cannot consistently stay in the top tier.
267
+
268
+ Well-conditioned Spectral Node Embedding. We validate the energy-balancing effect of the proposed SPEDGNN by investigating the eigenvalue distribution of the hidden embeddings of different models. Recall that in Section 2 we demonstrated with a 2-dimensional toy example that the normalized spectral transform projects input features to well-conditioned representations, which admit a smoother decision boundary that is easier for a classifier to fit. For a higher-dimensional feature representation, we describe the smoothness of the decision boundary by the decay of the associated condition numbers ${\lambda }_{i}/{\lambda }_{\min }$ , or the eigenvalues ${\lambda }_{i}$ .
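The diagnostics reported in Figures 3 and 4 can be computed from any hidden embedding matrix along the following lines (a sketch of how we read these metrics, with centering as an assumed preprocessing step):

```python
# Sketch of the embedding diagnostics behind Figures 3-4 (our reconstruction of the metrics).
import numpy as np

def spectrum_stats(H, levels=(0.5, 0.9), top_k=25):
    """H: (num_nodes, dim) hidden representation from one training epoch."""
    s = np.linalg.svd(H - H.mean(axis=0), compute_uv=False)   # singular values of the centered embedding
    var = s ** 2
    cum = np.cumsum(var) / var.sum()
    # Number of singular vectors needed to reach each level of the total variance (Figure 4).
    n_vectors = {lvl: int(np.searchsorted(cum, lvl) + 1) for lvl in levels}
    # Top singular values (Figure 3) and condition numbers lambda_i / lambda_min.
    return s[:top_k], s / s[-1], n_vectors

H = np.random.default_rng(2).normal(size=(2000, 100))
top, cond_numbers, counts = spectrum_stats(H)
print(counts)   # well-conditioned embeddings need many vectors to reach 50% / 90% of the variance
```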
269
+
270
+ We made the comparison on the optimized hidden representation of the test samples in Wikipedia. The distribution and total variance of the condition numbers are visualized in Figure 3 and Figure 4, respectively. According to Figure 3, JODIE and TGN concentrate most of the variance in the first few directions. Eigenvalues of the associated hidden representations decrease drastically after the first 3 or 4 epochs. Such a fast reduction of condition numbers indicates that the analyzed hidden features are highly correlated, which gives rise to the concentration of the variance of the feature space on the first few principal components. Since this challenges the classifier to find an optimal model that fits a rough decision boundary, such a circumstance with concentrated feature energy is not favored. In contrast, SPEDGNN finds a more separable hidden representation of the test samples with slowly decaying singular values. As shown in Figure 4, the embedding by SPEDGNN consistently disperses the total variation over a larger number of vectors, while JODIE and TGN pick a few features to undertake most of the variation after the first few epochs.
271
+
273
+
274
+ Figure 3: The distribution of the largest 25 singular values of the hidden representation by SPEDGNN, TGN, and JODIE.
275
+
277
+
278
+ Figure 4: The number of singular vectors that provides ${50}\%$ (left) or ${90}\%$ (right) of total variance by SPEDGNN, TGN, and JODIE. SPEDGNN constantly includes more vectors to achieve the same level of total variance.
279
+
280
+ § 5 RELATED WORK
281
+
282
+ § 5.1 EFFICIENT SELF-ATTENTION
283
+
284
+ The transformer is well-known for its powerful learning ability [16, 36-38]. However, the self-attention mechanism at the core of a transformer framework requires quadratic time and memory complexity, which hinders the model's scalability. A handful of recent works discuss potential improvements in the efficiency of model memory or computational cost when the input dimension is of a fixed size that is considerably large.
285
+
286
+ The prominent efficient transformer methods fall into three directions. First, prior knowledge is used to compress or distill the self-attention architecture into a sparse attention matrix by pre-defining strided convolutions $\left\lbrack {{39},{40}}\right\rbrack$ or assuming patch-wise patterns $\left\lbrack {{19},{41}}\right\rbrack$ . Some recent studies also consider replacing fixed patterns with a learnable scheme that efficiently identifies chunks or clusters [42-44]. The data-driven learning procedure introduces extra flexibility to the division of the patches, blocks, or receptive fields, but the core idea of attention localization remains.
287
+
288
+ The second approach simultaneously accesses multiple tokens through a global memory module. The target is to distill the input sequence with a limited number of inducing points (or memory) $\left\lbrack {{45},{46}}\right\rbrack$ . Compared to the first approach of patching input tokens, inducing points break down the strict concept of token entities and make parameterizations on the global memory of token mixers.
289
+
290
+ The third emerging technique avoids explicitly computing the full contextual matrix of the self-attention mechanism through kernelization [28, 47] or low-rank approximation [21-23]. The projection is usually applied along the lengthy sequence dimension, which ignores the chronological order of the sequence when computing attention scores. However, the global view introduced by the compression helps the attention mechanism capture the overall picture of the sequence on top of token-wise correlations. As investigated by a recent study [48], substituting matrix decomposition for the self-attention mechanism is critical for learning the global context.
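+
+ To make the low-rank direction concrete, the sketch below shows a generic projection of keys and values along the sequence dimension before computing attention. It is only an illustration of the idea, not a reproduction of any specific cited model; the random projection `E` and the landmark count `k` are placeholder assumptions.
+
+ ```python
+ import numpy as np
+
+ def softmax(z, axis=-1):
+     z = z - z.max(axis=axis, keepdims=True)
+     e = np.exp(z)
+     return e / e.sum(axis=axis, keepdims=True)
+
+ def low_rank_attention(Q, K, V, k=32, seed=0):
+     """Attention with keys/values compressed along the sequence dimension (n -> k)."""
+     n, d = K.shape
+     E = np.random.default_rng(seed).normal(size=(k, n)) / np.sqrt(n)  # illustrative projection
+     K_low, V_low = E @ K, E @ V            # (k, d): compressed landmarks
+     A = softmax(Q @ K_low.T / np.sqrt(d))  # (n, k) scores instead of (n, n)
+     return A @ V_low                       # (n, d)
+
+ n, d = 4096, 64
+ rng = np.random.default_rng(1)
+ Q, K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d))
+ print(low_rank_attention(Q, K, V).shape)   # (4096, 64), without forming a 4096 x 4096 matrix
+ ```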
291
+
292
+ § 5.2 GRAPH STRUCTURE EMBEDDING
293
+
294
+ GNNs have seen a surge in interest and popularity for dealing with irregular graph-structured data that traditional deep learning methods such as CNNs fail to manage. Common to most GNNs and their variants is graph embedding through the aggregation of neighbor nodes in a message-passing fashion [49-51]. As a key ingredient of topology embedding, graph convolutions fall into spatial methods and spectral methods, which operate on the node space [52, 53] or on a pseudo-coordinate system mapped from nodes through some transform (typically Fourier) [54].
295
+
296
+ Due to the intuitive characteristics of spatial-based methods, which directly generalize CNNs to graph data with convolutions over neighbors, most GNNs fall into the category of spatial methods [53, 55-62]. Many other spatial methods broadly follow the message-passing scheme with different neighborhood aggregation strategies, but they inherently lack expressivity [60, 63, 64].
297
+
298
+ In contrast, spectral-based graph convolutions [25, 54, 65-72] convert the raw signal or features from the vertex domain into the frequency domain. Spectral-based methods rest on a solid mathematical foundation in graph signal processing [73], and their multi-scale or multi-resolution views make them a more scalable solution for graph embedding. Versatile Fourier transforms [65, 66, 74], wavelet transforms [68], and framelets [25] have all shown their capabilities in graph representation learning. Of these transforms, the Fourier transform is among the most popular, and the work in [75] gives a detailed review of how the Fourier transform enhances neural networks. In addition, the availability of fast transform algorithms largely resolves concerns about efficiency.
299
+
300
+ § 5.3 TEMPORAL ENCODING OF DYNAMIC GRAPHS
301
+
302
+ Recurrent neural networks (RNNs) are considered exceptionally successful for sequential data modelling, such as text, video, and speech [76-78]. In particular, Long Short-Term Memory (LSTM) [79] and the Gated Recurrent Unit (GRU) [15] have gained great popularity in applications. Compared to the vanilla RNN, they leverage a gating system to extract memory information, so that memorizing long-range dependencies in sequential data becomes possible. Later, the Transformer network [16] introduced an encoder-decoder architecture built on the self-attention mechanism, allowing parallel processing of sequential tokens. The self-attention mechanism has achieved state-of-the-art performance across many NLP tasks [16, 33] and even some image tasks [20, 80].
303
+
304
+ For dynamic GNNs, it is critical to consolidate the features along the temporal dimension. Dynamic graphs come in two types, discrete and continuous, depending on whether exact temporal information is available [81]. Recent advances and success on static graphs encourage researchers and enable further exploration in the direction of dynamic graphs. Nevertheless, only recently were several approaches [34, 82-84] proposed, owing to the challenges of modeling temporal dynamics. In general, a dynamic graph neural network can be thought of as a combination of a static GNN and a time series model, which typically comes in the form of an RNN [85-87]. The first DGNN was introduced by Seo et al. [85] as a discrete DGNN, and Know-Evolve [88] was the first continuous model. JODIE [13] employed coupled RNNs to learn user/item embeddings. The work in [89] learns node representations through joint self-attention along both the graph-neighborhood and temporal dimensions. The work in [90] was the first to use an RNN to regulate the GCN model, i.e., it adapts the GCN parameters along the temporal dimension at every time step rather than feeding node embeddings learned by GCNs into an RNN. TGAT [91] is notable as the first to consider time-feature interactions. Rossi et al. [4] then presented a more generic framework for any dynamic graph represented as a sequence of timed events, adding a memory module in comparison to [91] to enable short-term memory enhancement.
305
+
306
+ § 6 DISCUSSION
307
+
308
+ This work analyzes the versatile spectral transform for capturing the evolution of long-range time series as well as graph topology. We investigate a particular dynamic system, continuous-time dynamic graphs (CTDGs), and seek a robust representation for it. In particular, we implement iterative SVD approximations to encode the long-range feature evolution of the dynamic graph events, which plays a role similar to multiple layers of a low-rank self-attention mechanism. The proposed transform has linear complexity of $\mathcal{O}\left( {{Nd}\log \left( d\right) }\right)$ for a CTDG with $N$ events of $d$ dimensions. Short-term memory during learning is enhanced for the dynamic events by a learnable scheme, such as an MLP or a GRU. A multi-level and multi-scale fast transform of global spectral graph convolution is then employed for topological embedding, which allows sufficient scalability and transferability in learning dynamic graph representations. The final event embeddings are well-conditioned, and the algorithm requires fewer computational resources. The proposed SPEDGNN shows competitive performance on real dynamic graph prediction tasks.
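+
+ For readers unfamiliar with iterative low-rank approximation, the sketch below shows a generic randomized truncated SVD of an event-feature matrix. It is included only to illustrate the kind of operation referred to above; it is not the SPEDGNN update itself, and the rank `r` and the number of power iterations are illustrative choices.
+
+ ```python
+ import numpy as np
+
+ def randomized_svd(X, r=16, n_iter=2, seed=0):
+     """Rank-r SVD approximation of X (N events x d features) via random projection."""
+     Omega = np.random.default_rng(seed).normal(size=(X.shape[1], r))
+     Y = X @ Omega                           # project onto a random r-dimensional range
+     for _ in range(n_iter):                 # a few power iterations sharpen the subspace
+         Y = X @ (X.T @ Y)
+     Q, _ = np.linalg.qr(Y)                  # orthonormal basis of the approximate range
+     B = Q.T @ X                             # small r x d matrix
+     U_small, S, Vt = np.linalg.svd(B, full_matrices=False)
+     return Q @ U_small, S, Vt               # approximate U (N x r), singular values, V^T
+
+ X = np.random.default_rng(1).normal(size=(5000, 128))
+ U, S, Vt = randomized_svd(X)
+ print(U.shape, S.shape, Vt.shape)           # (5000, 16) (16,) (16, 128)
+ ```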
papers/LOG/LOG 2022/LOG 2022 Conference/kXe4Y0c4VqT/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,427 @@
1
+ # On the Expressive Power of Geometric Graph Neural Networks
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ We propose a geometric version of the Weisfeiler-Leman graph isomorphism test (GWL) for discriminating geometric graphs while respecting the underlying symmetries such as permutation, rotation, and translation. We use GWL to characterise the expressive power of Graph Neural Networks (GNNs) for geometric graphs and provide formal results for the following: (1) What geometric graphs can and cannot be distinguished by GNNs invariant or equivariant to spatial symmetries; (2) Equivariant GNNs are strictly more powerful than their invariant counterparts.
12
+
13
+ ## 1 Preliminaries
14
+
15
+ Geometric graphs. Geometric graphs are attributed graphs embedded in Euclidean space, and are used to model structures in biochemistry [1], material science [2], multi-agent robotics [3], and spatial networks [4]. Formally, we consider a geometric graph $\mathcal{G} = \left( {\mathbf{A},\mathbf{S},\overrightarrow{\mathbf{V}},\overrightarrow{\mathbf{X}}}\right)$ as a set $\mathcal{V}$ of $n$ nodes decorated with geometric attributes. $\mathbf{A}$ is an $n \times n$ adjacency matrix, $\mathbf{S} \in {\mathbb{R}}^{n \times {f}_{s}}$ a matrix of scalar features, $\overrightarrow{\mathbf{V}} \in {\mathbb{R}}^{n \times {f}_{v} \times d}$ a tensor of vector features, and $\overrightarrow{\mathbf{X}} \in {\mathbb{R}}^{n \times d}$ a matrix of coordinates. The geometric attributes transform equivariantly under global symmetries: (1) Permutations ${S}_{n}$ , which act on $\mathcal{G}$ in the natural way; (2) Rotations/orthogonal transformations $\mathfrak{G} = {SO}\left( d\right) /O\left( d\right)$ , which act non-trivially on $\overrightarrow{\mathbf{V}},\overrightarrow{\mathbf{X}}$ ; and (3) Translations $T\left( d\right)$ , which act on $\overrightarrow{\mathbf{X}}$ .
16
+
17
+ Geometric Graph Neural Networks. Graph Neural Networks (GNNs) with spatial symmetries 'baked in' have emerged as the architecture of choice for representation learning on geometric graphs. GNNs for geometric graphs follow the message passing paradigm [5]: scalar features (and, optionally, vector features) at each node are updated from iteration $t$ to $t + 1$ using the neighbourhood features via learnable aggregate and update functions, $\psi$ and $\phi$ , respectively:
18
+
19
+ $$
20
+ {\mathbf{s}}_{i}^{\left( t + 1\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t + 1\right) } = \phi \left( {\left( {{\mathbf{s}}_{i}^{\left( t\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t\right) }}\right) ,\psi \left( \left\{ \left\{ {\left( {{\mathbf{s}}_{j}^{\left( t\right) },{\overrightarrow{\mathbf{v}}}_{j}^{\left( t\right) },{\overrightarrow{\mathbf{x}}}_{i} - {\overrightarrow{\mathbf{x}}}_{j}}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} \right) }\right) . \tag{1}
21
+ $$
22
+
23
+ The scalar features ${\mathbf{S}}^{\left( T\right) }$ at the final iteration $T$ are mapped to graph-level predictions via a permutation-invariant readout $f : {\mathbb{R}}^{n \times {f}_{s}} \rightarrow \mathbb{R}$ .
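+
+ To make the abstract update in Eq. (1) concrete, the sketch below gives one plausible instantiation of the aggregate and update maps in the spirit of $\mathfrak{G}$ -equivariant message passing (e.g. EGNN [9]). The particular message forms and the weight matrices `Ws`, `Wv` are illustrative placeholders, not a prescribed architecture.
+
+ ```python
+ import numpy as np
+
+ def geometric_mp_layer(S, V, X, A, Ws, Wv):
+     """One illustrative round of Eq. (1): S (n, fs) scalars, V (n, fv, d) vectors,
+     X (n, d) coordinates, A (n, n) adjacency; Ws, Wv are placeholder weights."""
+     n, fs = S.shape
+     S_new, V_new = S.copy(), V.copy()
+     for i in range(n):
+         nbrs = np.nonzero(A[i])[0]
+         if len(nbrs) == 0:
+             continue
+         x_ij = X[i] - X[nbrs]                                   # relative positions (|N_i|, d)
+         dist2 = (x_ij ** 2).sum(-1, keepdims=True)              # invariant scalar per edge
+         # psi: permutation-invariant aggregation over the neighbourhood multiset
+         m_s = np.concatenate([S[nbrs], dist2], axis=-1).sum(0)  # (fs + 1,) invariant message
+         m_v = (x_ij[:, None, :] * S[nbrs][..., None]).sum(0)    # (fs, d) equivariant message
+         # phi: scalars are updated invariantly, vectors equivariantly
+         S_new[i] = np.tanh(Ws @ np.concatenate([S[i], m_s]))
+         V_new[i] = V[i] + np.einsum('vf,fd->vd', Wv, m_v)
+     return S_new, V_new
+
+ rng = np.random.default_rng(0)
+ n, fs, fv, d = 6, 4, 3, 3
+ S, V, X = rng.normal(size=(n, fs)), rng.normal(size=(n, fv, d)), rng.normal(size=(n, d))
+ A = (np.linalg.norm(X[:, None] - X[None], axis=-1) < 2.0).astype(float)
+ np.fill_diagonal(A, 0)
+ Ws, Wv = rng.normal(size=(fs, 2 * fs + 1)), rng.normal(size=(fv, fs))
+ S1, V1 = geometric_mp_layer(S, V, X, A, Ws, Wv)
+ print(S1.shape, V1.shape)   # (6, 4) (6, 3, 3)
+ ```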
24
+
25
+ We consider two broad classes of GNNs for geometric graphs: (1) $\mathfrak{G}$ -equivariant models, where the intermediate features are $\mathfrak{G}$ -equivariant and $T\left( d\right)$ -invariant [6-10]; and (2) $\mathfrak{G}$ -invariant models, which only update scalar features in a $\mathfrak{G} \rtimes T\left( d\right)$ -invariant manner [11-13]. Invariant GNNs have shown strong performance for protein design [14, 15] and electrocatalysis [16, 17], while equivariant GNNs are being used within learnt interatomic potentials for molecular dynamics [18-20]. Despite promising empirical performance, key theoretical questions remain unanswered:
26
+
27
+ 1. How to characterise the representational or expressive power of geometric GNNs?
+
+ 2. What is the tradeoff between $\mathfrak{G}$ -equivariant and $\mathfrak{G}$ -invariant GNNs? Do we need equivariance?
28
+
29
+ One successful approach to studying the expressive power of (non-geometric) GNNs is through the lens of the Weisfeiler-Leman test for deciding whether two graphs are isomorphic [21, 22]. Any message passing GNN can be at most as powerful as 1-WL in distinguishing non-isomorphic graphs, and GNNs have the same expressive power as 1-WL if equipped with injective aggregate, update, and readout functions [23, 24]. WL is an attractive abstract tool for studying existing architectures while removing implementation details that vary from one model to another. Two geometric graphs $\mathcal{G}$ and $\mathcal{H}$ are isomorphic (denoted $\mathcal{G} \simeq \mathcal{H}$ ) if there exists an edge preserving bijection $b : \mathcal{V}\left( \mathcal{G}\right) \rightarrow \mathcal{V}\left( \mathcal{H}\right)$ and their attributes are equivalent, up to global group actions.
30
+
31
+ ![01963f10-d36e-775a-8334-f9d4ade2aeeb_1_305_197_1186_387_0.jpg](images/01963f10-d36e-775a-8334-f9d4ade2aeeb_1_305_197_1186_387_0.jpg)
32
+
33
+ Figure 1: Geometric Weisfeiler-Leman Test. (Left) GWL distinguishes non-isomorphic geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ by injectively assigning colours to distinct neighbourhood patterns, up to global symmetries. Each iteration expands the neighbourhood from which geometric information can be gathered (shaded for node $i$ ). (Right) Intuitively, GWL colours geometric computation trees.
34
+
35
+ Contributions. In this work, we study the expressive power of geometric GNNs from the perspective of discriminating geometric graphs while respecting the underlying global symmetries. We propose a geometric version of the Weisfeiler-Leman graph isomorphism test (GWL) and use it to formally characterise classes of graphs that can and cannot be distinguished by $\mathfrak{G}$ -invariant and $\mathfrak{G}$ -equivariant GNNs. These results enable us to show that $\mathfrak{G}$ -equivariant GNNs are strictly more powerful than $\mathfrak{G}$ -invariant models. WL has been a major driver of progress for more expressive non-geometric GNNs [25-28]. We hope GWL will provide a roadmap for the same for geometric graphs.
36
+
37
+ ## 2 The Geometric Weisfeiler-Leman Test
38
+
39
+ Preliminaries. Given a group $\mathfrak{G}$ acting on a set $X$ , the $\mathfrak{G}$ -orbit of $x \in X$ is ${\mathcal{O}}_{\mathfrak{G}}\left( x\right) = \{ \mathfrak{g}x \mid \mathfrak{g} \in \mathfrak{G}\} \subseteq X$ . When $\mathfrak{G}$ is understood from the context, we simply refer to it as the orbit. A function $f : X \rightarrow Y$ is $\mathfrak{G}$ -orbit space injective if we have $f\left( {x}_{1}\right) = f\left( {x}_{2}\right)$ if and only if ${x}_{1} \in {\mathcal{O}}_{\mathfrak{G}}\left( {x}_{2}\right)$ for any ${x}_{1},{x}_{2} \in X$ . Necessarily, such a function is $\mathfrak{G}$ -invariant, since $f\left( {\mathfrak{g} \cdot x}\right) = f\left( x\right)$ . We work with $\mathfrak{G} = {SO}\left( d\right) /O\left( d\right)$ and translation invariance is handled trivially using relative positions ${\overrightarrow{\mathbf{x}}}_{ij} = {\overrightarrow{\mathbf{x}}}_{i} - {\overrightarrow{\mathbf{x}}}_{j}$ . Analogously to the WL test, we assume that all geometric vectors and scalar features come from a countable subset of ${\mathbb{R}}^{d}$ and ${\mathbb{R}}^{f}$ , respectively.
40
+
41
+ A geometric version of the WL test must satisfy a series of requirements imposed by the global symmetries of geometric graphs. As with WL, we iteratively assign node colours which will be unique for every distinct geometric neighbourhood pattern, up to global symmetries. In other words, the exact position or angle of rotation of the (sub-)graph in space should not influence the colouring of the nodes. Therefore, we would like the colouring function to be $\mathfrak{G}$ -orbit space injective, which also makes it $\mathfrak{G}$ -invariant. The colouring function must use an aggregation mechanism to capture geometric information around local neighbourhoods. To avoid any loss of information, the local aggregator must be permutation-invariant and injective. Since a $\mathfrak{G}$ -invariant function cannot be injective by construction, this aggregator must be $\mathfrak{G}$ -equivariant.
42
+
43
+ These intuitions motivate the following definition of the Geometric Weisfeiler-Leman (GWL) test. First, at initialisation time, we assign to each node $i \in \mathcal{V}$ a scalar node colour ${c}_{i} \in \mathbb{R}$ and a (nested) multiset of geometric information ${\mathbf{g}}_{i}$ :
44
+
45
+ $$
46
+ {c}_{i}^{\left( 0\right) } = {\mathbf{s}}_{i},\;{\mathbf{g}}_{i}^{\left( 0\right) } = \left( {\left( {{\mathbf{s}}_{i},{\overrightarrow{\mathbf{v}}}_{i}}\right) ,\varnothing }\right) , \tag{2}
47
+ $$
48
+
49
+ Then the test proceeds inductively and updates the node colour ${c}_{i}^{\left( t\right) }$ and the geometric multiset ${\mathbf{g}}_{i}^{\left( t\right) }$ at iteration $t$ for all $i \in \mathcal{V}$ :
50
+
51
+ $$
52
+ {c}_{i}^{\left( t\right) } = {\operatorname{I-HASH}}^{\left( t\right) }\left( {\left( {{c}_{i}^{\left( t - 1\right) },{\mathbf{g}}_{i}^{\left( t - 1\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t - 1\right) },{\mathbf{g}}_{j}^{\left( t - 1\right) },{\overrightarrow{\mathbf{x}}}_{ij}}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) , \tag{3}
53
+ $$
54
+
55
+ $$
56
+ {\mathbf{g}}_{i}^{\left( t\right) } = \left( {\left( {{c}_{i}^{\left( t - 1\right) },{\mathbf{g}}_{i}^{\left( t - 1\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t - 1\right) },{\mathbf{g}}_{j}^{\left( t - 1\right) },{\overrightarrow{\mathbf{x}}}_{ij}}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) , \tag{4}
57
+ $$
58
+
59
+ With each iteration, ${\mathbf{g}}_{i}^{\left( t\right) }$ aggregates geometric information in progressively larger neighbourhoods around the node $i$ . Thus, $k$ iterations of GWL enable each node to access the $k$ -hop subgraph ${\mathcal{N}}_{i}^{\left( k\right) }$ around itself via ${\mathbf{g}}_{i}^{\left( k\right) }$ . The node colours are $\mathfrak{G}$ -invariant scalars computed via I-HASH, a $\mathfrak{G}$ -orbit space injective map encoding distinct geometric neighbourhood patterns, up to global symmetries.
60
+
61
+ Given two geometric graphs $\mathcal{G}$ and $\mathcal{H}$ , if there exists some iteration $t$ for which $\left\{ \left\{ {{c}_{i}^{\left( t\right) } \mid i \in \mathcal{V}\left( \mathcal{G}\right) }\right\} \right\} \neq \left\{ \left\{ {{c}_{i}^{\left( t\right) } \mid i \in \mathcal{V}\left( \mathcal{H}\right) }\right\} \right\}$ , then GWL deems the two graphs as being geometrically non-isomorphic. Otherwise, the test terminates and cannot distinguish the two geometric graphs when the number of colours in iterations $t$ and $t - 1$ is the same.
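+
+ The refinement-and-comparison logic above can be written down generically. The sketch below (our illustration) runs a joint colour refinement over two geometric graphs, using a sorted-distance hash in place of I-HASH. Such a hash is $\mathfrak{G}$ -invariant but not $\mathfrak{G}$ -orbit space injective, so this is closer to the IGWL variant introduced below than to the full test; it is meant only to show the bookkeeping and the termination criterion.
+
+ ```python
+ import numpy as np
+
+ def gwl_like_colours(graphs, n_iter=10, decimals=6):
+     """Joint WL-style refinement of geometric graphs given as (X, A) pairs, with a
+     distance-based invariant hash standing in for I-HASH (IGWL-like, not orbit-injective)."""
+     cols = [[0] * X.shape[0] for X, _ in graphs]             # uniform initial colours
+     for _ in range(n_iter):
+         sigs = []
+         for (X, A), c in zip(graphs, cols):
+             graph_sigs = []
+             for i in range(X.shape[0]):
+                 nbrs = np.nonzero(A[i])[0]
+                 dists = tuple(sorted(round(float(np.linalg.norm(X[i] - X[j])), decimals)
+                                      for j in nbrs))
+                 graph_sigs.append((c[i], tuple(sorted(c[j] for j in nbrs)), dists))
+             sigs.append(graph_sigs)
+         relabel = {s: k for k, s in enumerate(sorted({s for gs in sigs for s in gs}))}
+         new_cols = [[relabel[s] for s in gs] for gs in sigs]
+         stable = len({x for g in new_cols for x in g}) == len({x for g in cols for x in g})
+         cols = new_cols
+         if stable:                                            # colour count stopped growing
+             break
+     return cols
+
+ # toy usage: two triangles that differ only by a scale factor get different colour histograms
+ X1 = np.array([[0., 0.], [1., 0.], [0., 1.]])
+ X2 = 2.0 * X1
+ A = np.ones((3, 3)) - np.eye(3)
+ c1, c2 = gwl_like_colours([(X1, A), (X2, A)])
+ print(sorted(c1) != sorted(c2))   # True: deemed geometrically non-isomorphic
+ ```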
62
+
63
+ Global symmetries. Denote by ${\mathbf{Q}}_{\mathfrak{g}}$ the standard matrix representation of $\mathfrak{g} \in \mathfrak{G}$ . Then we have:
64
+
65
+ $$
66
+ \mathfrak{g} \cdot {\mathbf{g}}_{i}^{\left( 0\right) } = \left( {\left( {{\mathbf{s}}_{i},{\mathbf{Q}}_{\mathfrak{g}}{\overrightarrow{\mathbf{v}}}_{i}}\right) ,\varnothing }\right) \tag{5}
67
+ $$
68
+
69
+ $$
70
+ \mathfrak{g} \cdot {\mathbf{g}}_{i}^{\left( t\right) } = \left( {\left( {{c}_{i}^{\left( t - 1\right) },\mathfrak{g} \cdot {\mathbf{g}}_{i}^{\left( t - 1\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t - 1\right) },\mathfrak{g} \cdot {\mathbf{g}}_{j}^{\left( t - 1\right) },{\mathbf{Q}}_{\mathfrak{g}}{\overrightarrow{\mathbf{x}}}_{ij}}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) \tag{6}
71
+ $$
72
+
73
+ Thus the assignment of ${\mathbf{g}}_{i}$ from Eq 4 is $\mathfrak{G}$ -equivariant by construction. Furthermore, we require the I-HASH function to be $\mathfrak{G}$ -orbit space injective. That is, for any geometric neighbourhoods $\mathbf{g},{\mathbf{g}}^{\prime }$ , $\operatorname{I-HASH}\left( \mathbf{g}\right) = \operatorname{I-HASH}\left( {\mathbf{g}}^{\prime }\right)$ if and only if there exists $\mathfrak{g} \in \mathfrak{G}$ such that $\mathbf{g} = \mathfrak{g} \cdot {\mathbf{g}}^{\prime }$ .
74
+
75
+ Invariant GWL. Since we are interested in understanding the role of $\mathfrak{G}$ -equivariance, we also consider a more restrictive version of GWL that only updates node colours using the $\mathfrak{G}$ -orbit space injective and $\mathfrak{G}$ -invariant I-HASH function, without the geometric multiset. We term this the Invariant GWL (IGWL) test, which differs from GWL only in its update rule:
76
+
77
+ $$
78
+ {c}_{i}^{\left( t\right) } = {\operatorname{I-HASH}}^{\left( t\right) }\left( {\left( {{c}_{i}^{\left( t - 1\right) },{\overrightarrow{\mathbf{v}}}_{i}}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t - 1\right) },{\overrightarrow{\mathbf{v}}}_{j},{\overrightarrow{\mathbf{x}}}_{ij}}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) \tag{7}
79
+ $$
80
+
81
+ ### 2.1 Characterising the Expressive Power of GNNs for Geometric Graphs
82
+
83
+ We would like to characterise the maximum expressive power of geometric GNNs based on the GWL test. Firstly, we show that any possible GNN can be at most as powerful as GWL in distinguishing non-isomorphic geometric (sub-)graphs.
84
+
85
+ Theorem 1. Geometric GNNs can be at most as powerful as GWL.
86
+
87
+ With a sufficient number of iterations, the output of a GNN can be equivalent to GWL if certain conditions are met regarding the aggregate, update, and readout functions.
88
+
89
+ Theorem 2. GNNs have the same expressive power as GWL if the following conditions hold:
90
+
91
+ 1. The aggregation $\psi$ is injective, $\mathfrak{G}$ -equivariant, and permutation-invariant.
92
+
93
+ 2. The scalar update ${\phi }_{s}$ is $\mathfrak{G}$ -orbit space injective, $\mathfrak{G}$ -invariant, and permutation-invariant.
94
+
95
+ 3. The vector update ${\phi }_{v}$ is injective, $\mathfrak{G}$ -equivariant, and permutation-invariant.
96
+
97
+ 4. The graph-level readout $f$ is injective and permutation-invariant.
98
+
99
+ ### 2.2 What Geometric Graphs can GWL Distinguish?
100
+
101
+ Let us consider what geometric graphs can and cannot be distinguished by GWL and IGWL, as well as the class of GNNs bounded by the respective tests.
102
+
103
+ GWL identifies the $\mathfrak{G}$ -orbits of geometric (sub-)graphs centred around each node, and distinguishes geometric (sub-)graphs via comparing $\mathfrak{G}$ -orbits. Given two distinct neighbourhoods ${\mathcal{N}}_{1}$ and ${\mathcal{N}}_{2}$ , the $\mathfrak{G}$ -orbits of the corresponding geometric multisets ${\mathbf{g}}_{1}$ and ${\mathbf{g}}_{2}$ are mutually exclusive, i.e. ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{1}\right) \cap$ ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{2}\right) \equiv \varnothing \Rightarrow {c}_{1} \neq {c}_{2}$ . If ${\mathcal{N}}_{1}$ and ${\mathcal{N}}_{2}$ were identical up to group actions, their $\mathfrak{G}$ -orbits would overlap, i.e. ${\mathbf{g}}_{1} = \mathfrak{g}{\mathbf{g}}_{2}$ for some $\mathfrak{g} \in \mathfrak{G}$ and ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{1}\right) = {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{2}\right) \Rightarrow {c}_{1} = {c}_{2}$ .
104
+
105
+ Case 1: Underlying graphs are isomorphic. Let ${\mathcal{G}}_{1} = \left( {{\mathbf{A}}_{1},{\mathbf{S}}_{1},{\overrightarrow{\mathbf{V}}}_{1},{\overrightarrow{\mathbf{X}}}_{1}}\right)$ and ${\mathcal{G}}_{2} = \left( {{\mathbf{A}}_{2},{\mathbf{S}}_{2},{\overrightarrow{\mathbf{V}}}_{2},{\overrightarrow{\mathbf{X}}}_{2}}\right)$ be two geometric graphs such that the underlying attributed graphs $\left( {{\mathbf{A}}_{1},{\mathbf{S}}_{1}}\right)$ and $\left( {{\mathbf{A}}_{2},{\mathbf{S}}_{2}}\right)$ are isomorphic, i.e. there exists a set of edge preserving bijections $\left\{ {b : {\mathcal{V}}_{1} \rightarrow {\mathcal{V}}_{2}}\right\}$ such that ${s}_{i}^{\left( {\mathcal{G}}_{1}\right) } = {s}_{b\left( i\right) }^{\left( {\mathcal{G}}_{2}\right) } =$ constant for all $b$ and all $i \in {\mathcal{V}}_{1}$ . We term ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ to be $k$ -hop distinct if for all graph isomorphisms $b$ , there is some node $i \in {\mathcal{V}}_{1}, b\left( i\right) \in {\mathcal{V}}_{2}$ such that the corresponding $k$ -hop subgraphs ${\mathcal{N}}_{i}^{\left( k\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( k\right) }$ are distinct. Otherwise, we say ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ are $k$ -hop identical if all ${\mathcal{N}}_{i}^{\left( k\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( k\right) }$ are identical up to group actions.
108
+
109
+ We can now formalise what geometric graphs can and cannot be distinguished by GWL.
110
+
111
+ Proposition 3. GWL can distinguish any $k$ -hop distinct geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic, and $k$ iterations are sufficient.
112
+
113
+ Proposition 4. Up to $k$ iterations of GWL cannot distinguish any $k$ -hop identical geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic.
114
+
115
+ Additionally, we can state the following results about the more constrained IGWL.
116
+
117
+ Proposition 5. IGWL can distinguish any 1-hop distinct geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic, and 1 iteration is sufficient.
118
+
119
+ Proposition 6. Any number of iterations of IGWL cannot distinguish any 1-hop identical geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic.
120
+
121
+ Case 2: Underlying graphs are non-isomorphic. Let ${\mathcal{G}}_{1} = \left( {{\mathbf{A}}_{1},{\mathbf{S}}_{1},{\overrightarrow{\mathbf{V}}}_{1},{\overrightarrow{\mathbf{X}}}_{1}}\right)$ and ${\mathcal{G}}_{2} =$ $\left( {{\mathbf{A}}_{2},{\mathbf{S}}_{2},{\overrightarrow{\mathbf{V}}}_{2},{\overrightarrow{\mathbf{X}}}_{2}}\right)$ such that the underlying attributed graphs $\left( {{\mathbf{A}}_{1},{\mathbf{S}}_{1}}\right)$ and $\left( {{\mathbf{A}}_{2},{\mathbf{S}}_{2}}\right)$ are non-isomorphic and constructed under realistic assumptions.
122
+
123
+ Proposition 7. Assuming geometric graphs are constructed from point clouds using radial cutoffs, GWL can distinguish any geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are non-isomorphic. At most ${k}_{Max}$ iterations are sufficient, where ${k}_{Max}$ is the maximum graph diameter among ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ .
124
+
125
+ ### 2.3 Comparing GWL and IGWL
126
+
127
+ These results enable us to compare GWL and IGWL, and precisely characterise the expressive power of the class of GNNs bounded by the respective tests.
128
+
129
+ Theorem 8. GWL is strictly more powerful than IGWL.
130
+
131
+ Corollary 9. IGWL has the same expressive power as GWL for fully connected geometric graphs.
132
+
133
+ These results may not seem surprising as invariance is a special case of equivariance, and learning $\mathfrak{G}$ -invariant tasks may require networks to learn $\mathfrak{G}$ -equivariant sub-tasks [29]. However, to the best of our knowledge, this is the first formal proof supporting the use of $\mathfrak{G}$ -equivariant intermediate layers as prescribed in the Geometric Deep Learning blueprint [30].
134
+
135
+ ## 3 Discussion
136
+
137
+ Practical Implications. Propositions 3 and 6 suggest that when message passing is restricted to local radial neighbourhoods around each node, as is conventional for large macromolecules with thousands of nodes, $\mathfrak{G}$ -equivariant GNNs should be preferred to $\mathfrak{G}$ -invariant GNNs. Stacking multiple $\mathfrak{G}$ -equivariant layers enables the computation of compositional geometric features, which may be key to developing 'foundation models' [31] for macromolecule structures. On the other hand, Corollary 9 suggests that $\mathfrak{G}$ -equivariant and $\mathfrak{G}$ -invariant GNNs are equally powerful when working on fully connected geometric graphs with all-to-all message passing. These results support the empirical success of recent $\mathfrak{G}$ -invariant 'Graph Transformer' architectures [17, 32] for small molecules with tens of nodes, where working with full graphs is tractable. Additionally, Proposition 5 suggests that using wider cutoff radii when building geometric graphs can be effective for improving $\mathfrak{G}$ -invariant GNNs, as larger neighbourhoods may be easier to distinguish via scalars.
138
+
139
+ Future Work. It is non-trivial to define maximally powerful GNNs that satisfy the conditions of Theorem 2. GWL relies on injective multiset functions that are $\mathfrak{G}$ -invariant or $\mathfrak{G}$ -equivariant. But can such functions be universally approximated via neural networks?
140
+
141
+ We suspect that higher-order tensors are required for building functions with the desired properties and, consequently, for building provably powerful geometric GNNs. No existing geometric GNNs are as powerful as GWL, with the recent Multi-ACE model [20] being a possible exception under appropriate hyperparameters - infinite tensor order, and correlation order equivalent to the maximum cardinality of local neighbourhoods - both of which are intractable in practice. Future work will explore building provably powerful and practical geometric GNNs, with applications in biochemistry, material science, and multi-agent robotics.
142
+
143
+ References
144
+
145
+ [1] Arian R Jamasb, Ramon Viñas, Eric J Ma, Charlie Harris, Kexin Huang, Dominic Hall, Pietro Lió, and Tom L Blundell. Graphein-a python library for geometric deep learning and network analysis on protein structures and interaction networks. 1
146
+
147
+ [2] Lowik Chanussot, Abhishek Das, Siddharth Goyal, Thibaut Lavril, Muhammed Shuaibi, Mor-gane Riviere, Kevin Tran, Javier Heras-Domingo, Caleb Ho, Weihua Hu, Aini Palizhati, Anuroop Sriram, Brandon Wood, Junwoong Yoon, Devi Parikh, C. Lawrence Zitnick, and Zachary Ulissi. Open catalyst 2020 (oc20) dataset and community challenges. ACS Catalysis, 2021. doi: 10.1021/acscatal.0c04525. 1
148
+
149
+ [3] Qingbiao Li, Fernando Gama, Alejandro Ribeiro, and Amanda Prorok. Graph neural networks for decentralized multi-robot path planning. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 11785-11792. IEEE, 2020. 1
150
+
151
+ [4] Marc Barthélemy. Spatial networks. Physics reports, 2011. 1
152
+
153
+ [5] Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zam-baldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint, 2018. 1
154
+
155
+ [6] Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation-and translation-equivariant neural networks for $3\mathrm{\;d}$ point clouds. arXiv preprint arXiv:1802.08219, 2018. 1
156
+
157
+ [7] Brandon Anderson, Truong Son Hy, and Risi Kondor. Cormorant: Covariant molecular neural networks. Advances in neural information processing systems, 32, 2019.
158
+
159
+ [8] Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael John Lamarre Townshend, and Ron Dror. Learning from protein structure with geometric vector perceptrons. In International Conference on Learning Representations, 2020.
160
+
161
+ [9] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) equivariant graph neural networks. In International Conference on Machine Learning, pages 9323-9332. PMLR, 2021.
162
+
163
+ [10] Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J Bekkers, and Max Welling. Geometric and physical quantities improve e(3) equivariant message passing. In International Conference on Learning Representations, 2022. 1
164
+
165
+ [11] Kristof T Schütt, Huziel E Sauceda, P-J Kindermans, Alexandre Tkatchenko, and K-R Müller. Schnet-a deep learning architecture for molecules and materials. The Journal of Chemical Physics, 148(24):241722, 2018. 1
166
+
167
+ [12] Tian Xie and Jeffrey C. Grossman. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys. Rev. Lett., 120:145301, Apr 2018. doi: 10.1103/PhysRevLett. 120.145301.
168
+
169
+ [13] Johannes Gasteiger, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. arXiv preprint arXiv:2003.03123, 2020. 1
170
+
171
+ [14] Zuobai Zhang, Minghao Xu, Arian Jamasb, Vijil Chenthamarakshan, Aurelie Lozano, Payel Das, and Jian Tang. Protein representation learning by geometric structure pretraining. arXiv preprint arXiv:2203.06125, 2022. 1
172
+
173
+ [15] Justas Dauparas, Ivan Anishchenko, Nathaniel Bennett, Hua Bai, Robert J Ragotte, Lukas F Milles, Basile IM Wicky, Alexis Courbet, Rob J de Haas, Neville Bethel, et al. Robust deep learning based protein sequence design using proteinmpnn. bioRxiv, 2022. 1
174
+
175
+ [16] Johannes Gasteiger, Florian Becker, and Stephan Günnemann. Gemnet: Universal directional graph neural networks for molecules. In Advances in Neural Information Processing Systems, 2021. 1
176
+
177
+ [17] Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, and Tie-Yan Liu. Benchmarking graphormer on large-scale molecular modeling datasets. arXiv preprint arXiv:2203.04810, 2022. 1, 4
178
+
179
+ [18] Kristof Schütt, Oliver Unke, and Michael Gastegger. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In International Conference on Machine Learning, pages 9377-9388. PMLR, 2021. 1, 7, 12
180
+
181
+ [19] Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature communications, 13(1): 1-11, 2022.
182
+
183
+ [20] Ilyes Batatia, Dávid Péter Kovács, Gregor NC Simm, Christoph Ortner, and Gábor Csányi. Mace: Higher order equivariant message passing neural networks for fast and accurate force fields. arXiv preprint arXiv:2206.07697, 2022. 1, 4
184
+
185
+ [21] Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. NTI, Series, 2(9):12-16, 1968. 1, 7
186
+
187
+ [22] Ronald C Read and Derek G Corneil. The graph isomorphism disease. Journal of graph theory, 1(4):339-363, 1977. 1, 7
188
+
189
+ [23] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. 1, 7, 8
190
+
191
+ [24] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, 2019. 1, 7
192
+
193
+ [25] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. Advances in neural information processing systems, 32, 2019. 2
194
+
195
+ [26] Cristian Bodnar, Fabrizio Frasca, Yuguang Wang, Nina Otter, Guido F Montufar, Pietro Lio, and Michael Bronstein. Weisfeiler and lehman go topological: Message passing simplicial networks. In International Conference on Machine Learning, pages 1026-1037. PMLR, 2021.
196
+
197
+ [27] Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Lio, Guido F Montufar, and Michael Bronstein. Weisfeiler and lehman go cellular: Cw networks. Advances in Neural Information Processing Systems, 34:2625-2640, 2021.
198
+
199
+ [28] Christopher Morris, Yaron Lipman, Haggai Maron, Bastian Rieck, Nils M Kriege, Martin Grohe, Matthias Fey, and Karsten Borgwardt. Weisfeiler and leman go machine learning: The story so far. arXiv preprint arXiv:2112.09992, 2021. 2
200
+
201
+ [29] Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. Transforming auto-encoders. In International conference on artificial neural networks, 2011. 4
202
+
203
+ [30] Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021. 4
204
+
205
+ [31] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. 4
206
+
207
+ [32] Chaitanya Joshi. Transformers are graph neural networks. The Gradient, 2020. 4
208
+
209
+ [33] Alexander V Shapeev. Moment tensor potentials: A class of systematically improvable interatomic potentials. Multiscale Modeling & Simulation, 14(3):1153-1173, 2016. 7
210
+
211
+ [34] Ralf Drautz. Atomic cluster expansion for accurate and transferable interatomic potentials. Physical Review B, 2019.
212
+
213
+ [35] Genevieve Dusson, Markus Bachmayr, Gabor Csanyi, Ralf Drautz, Simon Etter, Cas van der Oord, and Christoph Ortner. Atomic cluster expansion: Completeness, efficiency and stability. arXiv preprint arXiv:1911.03550, 2019.
214
+
215
+ [36] Sergey N Pozdnyakov, Michael J Willatt, Albert P Bartók, Christoph Ortner, Gábor Csányi, and Michele Ceriotti. Incompleteness of atomic structure representations. Physical Review Letters, 125(16):166001, 2020. 7
216
+
217
+ [37] Sergey N Pozdnyakov and Michele Ceriotti. Incompleteness of graph convolutional neural networks for points clouds in three dimensions. arXiv preprint arXiv:2201.07136, 2022. 7
218
+
219
+ ## A Related Work
220
+
221
+ Weisfeiler-Leman graph isomorphism test. The 1-dimensional Weisfeiler-Leman test (1-WL) and its higher-order variants are algorithms for deciding whether two (non-geometric) graphs are isomorphic [21, 22]. 1-WL injectively assigns 'colours' to distinct neighbourhood patterns, and compares graphs by comparing their multisets of colours. One successful approach to studying the expressive power of (non-geometric) GNNs is through the lens of the WL hierarchy. Any message passing GNN can be at most as powerful as 1-WL in distinguishing non-isomorphic graphs, and GNNs have the same expressive power as 1-WL if equipped with injective aggregate, update, and readout functions [23, 24]. WL is an attractive abstract tool for studying existing architectures while removing implementation details that vary from one model to another.
222
+
223
+ Completeness of local representations. Recent work on the completeness of atom-centred interatomic potentials has focused on perfectly distinguishing 1-hop local neighbourhoods centred around atoms by building a spanning basis for continuous, $\mathfrak{G}$ -equivariant multiset functions [33-36]. Subsequent works have informally implied that there may exist classes of geometric graphs (beyond 1-hop sub-graphs) that cannot be distinguished by simple $\mathfrak{G}$ -invariant scalar quantities such as distances or angles, and may require higher body-order $\mathfrak{G}$ -invariants and/or the propagation of $\mathfrak{G}$ -equivariant geometric information beyond local neighbourhoods [18, 37].
224
+
225
+ ## B Definitions and Notations
226
+
227
+ In this section, we provide definitions and notations used throughout the paper.
228
+
229
+ ### B.1 Geometric Graphs
230
+
231
+ Definition 10 (Geometric Graphs). A geometric graph $\mathcal{G} = \left( {\mathbf{A},\mathbf{S},\overrightarrow{\mathbf{V}},\overrightarrow{\mathbf{X}}}\right)$ is a set $\mathcal{V}$ of $n$ nodes embedded in $d$ -dimensional Euclidean space and decorated with geometric attributes. $\mathbf{A}$ is an $n \times n$ adjacency matrix, $\mathbf{S} \in {\mathbb{R}}^{n \times {f}_{s}}$ is a matrix of ${f}_{s}$ -dimensional scalar features per node, $\overrightarrow{\mathbf{V}} \in {\mathbb{R}}^{n \times {f}_{v} \times d}$ is a tensor of ${f}_{v} \times d$ -dimensional vector features per node ${}^{1}$ , and $\overrightarrow{\mathbf{X}} \in {\mathbb{R}}^{n \times d}$ is a matrix of $d$ -dimensional coordinates per node. The geometric attributes transform in a well-defined manner under global symmetries, as described in Table ${1}^{2}$ .
232
+
233
+ <table><tr><td>Attribute</td><td>$A$</td><td>$S = \left( {{s}_{1}\ldots {s}_{n}}\right)$</td><td>$\overrightarrow{V} = \left( {{\overrightarrow{v}}_{1}\ldots {\overrightarrow{v}}_{n}}\right)$</td><td>$\overset{\rightarrow }{\mathbf{X}} = \left( {{\overset{\rightarrow }{\mathbf{x}}}_{1}\ldots {\overset{\rightarrow }{\mathbf{x}}}_{n}}\right)$</td></tr><tr><td>Permutation ${S}_{n}$ $\{ \sigma : \left\lbrack n\right\rbrack \rightarrow \left\lbrack n\right\rbrack \}$</td><td>${P}_{\sigma }A{P}_{\sigma }^{\top }$</td><td>$\left( {{s}_{\sigma \left( 1\right) }\ldots {s}_{\sigma \left( n\right) }}\right)$</td><td>$\left( {{\overrightarrow{v}}_{\sigma \left( 1\right) }\ldots {\overrightarrow{v}}_{\sigma \left( n\right) }}\right)$</td><td>$({\overset{\rightarrow }{x}}_{\sigma (1)}\ldots {\overset{\rightarrow }{x}}_{\sigma (n)})$</td></tr><tr><td>Rotation/Reflection $O\left( d\right)$ $\{ {Q}_{\mathfrak{g}} \in {\mathbb{R}}^{d \times d}:{Q}_{\mathfrak{g}}^{\top }{Q}_{\mathfrak{g}} = {Q}_{\mathfrak{g}}{Q}_{\mathfrak{g}}^{\top } = {I}_{d}\}$</td><td>$\mathbf{\mathit{A}}$</td><td>$\left( {{s}_{1}\ldots {s}_{n}}\right)$</td><td>$({Q}_{\mathfrak{g}}{\overset{\rightarrow }{v}}_{1}\ldots {Q}_{\mathfrak{g}}{\overset{\rightarrow }{v}}_{n})$</td><td>$\left( {{Q}_{\mathfrak{g}}{\overset{\rightarrow }{x}}_{1}\ldots {Q}_{\mathfrak{g}}{\overset{\rightarrow }{x}}_{n}}\right)$</td></tr><tr><td>Translation $T\left( d\right)$ $\{ \overrightarrow{t} \in {\mathbb{R}}^{d}\}$</td><td>$\mathbf{\mathit{A}}$</td><td>$\left( {{s}_{1}\ldots {s}_{n}}\right)$</td><td>$\left( {{\overrightarrow{v}}_{1}\ldots {\overrightarrow{v}}_{n}}\right)$</td><td>$({\overset{\rightarrow }{x}}_{1} + \overset{\rightarrow }{t}\ldots {\overset{\rightarrow }{x}}_{n} + \overset{\rightarrow }{t})$</td></tr></table>
234
+
235
+ Table 1: Symmetry groups and their actions on geometric graph attributes.
236
+
237
+ We assume that a geometric graph $\mathcal{G}$ is constructed from a point cloud $\left( {\mathbf{S},\overrightarrow{\mathbf{V}},\overrightarrow{\mathbf{X}}}\right)$ using a predetermined radial cutoff $r$ . Thus, the adjacency matrix is defined as ${a}_{ij} = 1$ if ${\begin{Vmatrix}{\overrightarrow{\mathbf{x}}}_{i} - {\overrightarrow{\mathbf{x}}}_{j}\end{Vmatrix}}_{2} \leq r$ , or 0 otherwise, for all ${a}_{ij} \in \mathbf{A}$ . Such construction procedures are conventional for geometric graphs in biochemistry and material science.
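+
+ A minimal numpy sketch of this radial-cutoff construction (the cutoff $r$ and the point cloud below are placeholder values):
+
+ ```python
+ import numpy as np
+
+ def radius_graph(X, r):
+     """Adjacency a_ij = 1 iff ||x_i - x_j||_2 <= r, with no self-loops."""
+     D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
+     A = (D <= r).astype(int)
+     np.fill_diagonal(A, 0)
+     return A
+
+ X = np.random.default_rng(0).normal(size=(5, 3))                 # 5 points in R^3
+ print(radius_graph(X, r=2.0))
+ ```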
238
+
239
+ Definition 11 (Geometric Graph Isomorphism). Two graphs $\mathcal{G}$ and $\mathcal{H}$ are isomorphic (denoted $\mathcal{G} \simeq \mathcal{H}$ ) if there exists an edge preserving bijection $b : \mathcal{V}\left( \mathcal{G}\right) \rightarrow \mathcal{V}\left( \mathcal{H}\right)$ , i.e. ${a}_{ij} = 1$ if and only if ${a}_{b\left( i\right) b\left( j\right) } = 1$ . In the case of geometric graph isomorphism, we additionally require that the attributes are equivalent, up to global group actions. Denote by ${\mathbf{Q}}_{\mathfrak{g}} \in {\mathbb{R}}^{d \times d}$ the standard representation of $\mathfrak{g} \in \mathfrak{G}$ and $\overrightarrow{t} \in {\mathbb{R}}^{d}$ a translation vector. Then $\mathcal{G} \simeq \mathcal{H}$ implies
240
+
241
+ $$
242
+ \left( {{\mathbf{s}}_{i}^{\left( \mathcal{G}\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( \mathcal{G}\right) },{\overrightarrow{\mathbf{x}}}_{i}^{\left( \mathcal{G}\right) }}\right) = \left( {{\mathbf{s}}_{b\left( i\right) }^{\left( \mathcal{H}\right) },{\mathbf{Q}}_{\mathfrak{g}}{\overrightarrow{\mathbf{v}}}_{b\left( i\right) }^{\left( \mathcal{H}\right) },{\mathbf{Q}}_{\mathfrak{g}}\left( {{\overrightarrow{\mathbf{x}}}_{b\left( i\right) }^{\left( \mathcal{H}\right) } + \overrightarrow{\mathbf{t}}}\right) }\right) \;\text{ for all }i \in \mathcal{V}\left( \mathcal{G}\right) . \tag{8}
243
+ $$
244
+
245
+ ---
246
+
247
+ ${}^{1}$ We may alternatively consider higher order tensor features beyond vectors.
248
+
249
+ ${}^{2}$ We may alternatively consider $\mathfrak{G} = {SO}\left( d\right)$ instead of $O\left( d\right)$ , in which case we only consider $\det \left( {\mathbf{Q}}_{\mathfrak{g}}\right) = 1$
250
+
251
+ ---
252
+
253
+ ### B.2 Groups and Group Orbits
254
+
255
+ Let $\mathfrak{G}$ be a group acting on a set $X$ .
256
+
257
+ Definition 12 ( $\mathfrak{G}$ -orbit). The $\mathfrak{G}$ -orbit of $x \in X$ is ${\mathcal{O}}_{\mathfrak{G}}\left( x\right) = \{ \mathfrak{g}x \mid \mathfrak{g} \in \mathfrak{G}\} \subseteq X$ .
258
+
259
+ Definition 13 ( $\mathfrak{G}$ -orbit of a set). The $\mathfrak{G}$ -orbit of the set $X$ is ${\mathcal{O}}_{\mathfrak{G}}\left( X\right) = \{ \{ \mathfrak{g}x \mid x \in X\} \mid \mathfrak{g} \in \mathfrak{G}\}$ .
260
+
261
+ Definition 14 ( $\mathfrak{G}$ -invariant function.). A function $f : X \rightarrow Y$ is $\mathfrak{G}$ -invariant if we have $f\left( {\mathfrak{g}x}\right) = f\left( x\right)$ for all $\mathfrak{g} \in \mathfrak{G}$ .
262
+
263
+ Definition 15 (G-equivariant function.). A function $f : X \rightarrow Y$ is $\mathfrak{G}$ -equivariant if we have $f\left( {\mathfrak{g}x}\right) = \mathfrak{g}f\left( x\right)$ for all $\mathfrak{g} \in \mathfrak{G}$ .
264
+
265
+ Definition 16 ( $\mathfrak{G}$ -orbit space injective function.). A function $f : X \rightarrow Y$ is $\mathfrak{G}$ -orbit space injective if we have $f\left( {x}_{1}\right) = f\left( {x}_{2}\right)$ if and only if ${x}_{1} \in {\mathcal{O}}_{\mathfrak{G}}\left( {x}_{2}\right)$ for any ${x}_{1},{x}_{2} \in X$ . Necessarily, such a function is $\mathfrak{G}$ -invariant, since $f\left( {\mathfrak{g}x}\right) = f\left( x\right)$ .
266
+
267
+ Definition 17 (Injective function.). A function $f : X \rightarrow Y$ is injective if we have $f\left( {x}_{1}\right) = f\left( {x}_{2}\right)$ if and only if ${x}_{1} = {x}_{2}$ for any ${x}_{1},{x}_{2} \in X$ .
268
+
269
+ Definition 18 (Equivalence class). The equivalence class of $x \in X$ is $\left\lbrack x\right\rbrack = {\mathcal{O}}_{\mathfrak{G}}\left( x\right)$ .
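+
+ To make Definitions 14 and 15 concrete, the following small numerical check (our own example, with an arbitrary random orthogonal matrix) verifies that the pairwise distance is $O\left( d\right)$ -invariant while the relative position is $O\left( d\right)$ -equivariant:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix (a group element)
+ x1, x2 = rng.normal(size=3), rng.normal(size=3)
+
+ # f(x1, x2) = ||x1 - x2|| is G-invariant: f(Q x1, Q x2) = f(x1, x2)
+ print(np.isclose(np.linalg.norm(Q @ x1 - Q @ x2), np.linalg.norm(x1 - x2)))
+
+ # g(x1, x2) = x1 - x2 is G-equivariant: g(Q x1, Q x2) = Q g(x1, x2)
+ print(np.allclose(Q @ x1 - Q @ x2, Q @ (x1 - x2)))
+ ```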
270
+
271
+ ## C Proofs for equivalence between GWL and Geometric GNNs
272
+
273
+ Our proofs largely follow the techniques used in [23] for connecting 1-WL with GNNs. Note that we omit the relative position vectors ${\overrightarrow{\mathbf{x}}}_{ij} = {\overrightarrow{\mathbf{x}}}_{i} - {\overrightarrow{\mathbf{x}}}_{j}$ from the GWL and GNN updates for brevity, as relative position vectors can be merged into the vector features.
274
+
275
+ Theorem 1. Geometric GNNs can be at most as powerful as GWL.
276
+
277
+ Proof of Theorem 1. The theorem implies that if a GNN graph-level readout outputs $f\left( \mathcal{G}\right) \neq f\left( \mathcal{H}\right)$ , then the GWL test will always determine $\mathcal{G}$ and $\mathcal{H}$ to be non-isomorphic, i.e. $\mathcal{G} \neq \mathcal{H}$ .
278
+
279
+ We will prove by contradiction. Suppose after $T$ iterations, a GNN graph-level readout outputs $f\left( \mathcal{G}\right) \neq f\left( \mathcal{H}\right)$ , but the GWL test cannot decide $\mathcal{G}$ and $\mathcal{H}$ are non-isomorphic, i.e. $\mathcal{G}$ and $\mathcal{H}$ always have the same collection of node colours for iterations 0 to $T$ . Thus, for iterations $t$ and $t + 1$ , for any $t = 0,\ldots ,T - 1$ , $\mathcal{G}$ and $\mathcal{H}$ have the same collection of node colours $\left\{ {c}_{i}^{\left( t\right) }\right\}$ as well as the same collection of neighbourhood geometric multisets $\left\{ {\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) ,\left\{ {\left( {{c}_{j}^{\left( t\right) },{\mathbf{g}}_{j}^{\left( t\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} }\right\}$ up to group actions. Otherwise, the GWL test would have produced different node colours at iteration $t + 1$ for $\mathcal{G}$ and $\mathcal{H}$ as different geometric multisets get unique new colours.
280
+
281
+ We will show that on the same graph for nodes $i$ and $k$ , if $\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) = \left( {{c}_{k}^{\left( t\right) },\mathfrak{g} \cdot {\mathbf{g}}_{k}^{\left( t\right) }}\right)$ , we always have GNN features $\left( {{\mathbf{s}}_{i}^{\left( t\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t\right) }}\right) = \left( {{\mathbf{s}}_{k}^{\left( t\right) },{\mathbf{Q}}_{\mathfrak{g}}{\overrightarrow{\mathbf{v}}}_{k}^{\left( t\right) }}\right)$ for any iteration $t$ . This holds for $t = 0$ because GWL and the GNN start with the same initialisation. Suppose this holds for iteration $t$ . At iteration $t + 1$ , if for any $i$ and $k,\left( {{c}_{i}^{\left( t + 1\right) },{\mathbf{g}}_{i}^{\left( t + 1\right) }}\right) = \left( {{c}_{k}^{\left( t + 1\right) },\mathfrak{g} \cdot {\mathbf{g}}_{k}^{\left( t + 1\right) }}\right)$ , then
282
+
283
+ $$
284
+ \left\{ {\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t\right) },{\mathbf{g}}_{j}^{\left( t\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right\} = \left\{ {\left( {{c}_{k}^{\left( t\right) },\mathfrak{g} \cdot {\mathbf{g}}_{k}^{\left( t\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t\right) },\mathfrak{g} \cdot {\mathbf{g}}_{j}^{\left( t\right) }}\right) \mid j \in {\mathcal{N}}_{k}}\right\} \right\} }\right\} \tag{9}
285
+ $$
286
+
287
+ By our assumption on iteration $t$ ,
288
+
289
+ $$
290
+ \left\{ {\left( {{\mathbf{s}}_{i}^{\left( t\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t\right) }}\right) ,\left\{ \left\{ {\left( {{\mathbf{s}}_{j}^{\left( t\right) },{\overrightarrow{\mathbf{v}}}_{j}^{\left( t\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right\} = \left\{ {\left( {{\mathbf{s}}_{k}^{\left( t\right) },{\mathbf{Q}}_{\mathfrak{g}}{\overrightarrow{\mathbf{v}}}_{k}^{\left( t\right) }}\right) ,\left\{ \left\{ {\left( {{\mathbf{s}}_{j}^{\left( t\right) },{\mathbf{Q}}_{\mathfrak{g}}{\overrightarrow{\mathbf{v}}}_{j}^{\left( t\right) }}\right) \mid j \in {\mathcal{N}}_{k}}\right\} \right\} }\right\} \tag{10}
291
+ $$
292
+
293
+ As the same aggregate and update operations are applied at each node within the GNN, the same inputs, i.e. neighbourhood features, are mapped to the same output. Thus, $\left( {{\mathbf{s}}_{i}^{\left( t + 1\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t + 1\right) }}\right) =$ $\left( {{\mathbf{s}}_{k}^{\left( t + 1\right) },{\mathbf{Q}}_{\mathfrak{g}}{\overrightarrow{\mathbf{v}}}_{k}^{\left( t + 1\right) }}\right)$ . By induction, if $\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) = \left( {{c}_{k}^{\left( t\right) },\mathfrak{g} \cdot {\mathbf{g}}_{k}^{\left( t\right) }}\right)$ , we always have GNN node features $\left( {{\mathbf{s}}_{i}^{\left( t\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t\right) }}\right) = \left( {{\mathbf{s}}_{k}^{\left( t\right) },{\mathbf{Q}}_{\mathfrak{g}}{\overrightarrow{\mathbf{v}}}_{k}^{\left( t\right) }}\right)$ for any iteration $t$ . This creates valid mappings ${\phi }_{s},{\phi }_{v}$ such that ${\mathbf{s}}_{i}^{\left( t\right) } = {\phi }_{s}\left( {c}_{i}^{\left( t\right) }\right)$ and ${\overrightarrow{\mathbf{v}}}_{i}^{\left( t\right) } = {\phi }_{v}\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right)$ for any $i \in \mathcal{V}$ .
294
+
295
+ Thus, if $\mathcal{G}$ and $\mathcal{H}$ have the same collection of node colours and geometric multisets, then $\mathcal{G}$ and $\mathcal{H}$ also have the same collection of GNN neighbourhood features
296
+
297
+ $$
298
+ \left\{ {\left( {{\mathbf{s}}_{i}^{\left( t\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t\right) }}\right) ,\left\{ \left\{ {\left( {{\mathbf{s}}_{j}^{\left( t\right) },{\overrightarrow{\mathbf{v}}}_{j}^{\left( t\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right\} = \left\{ {\left( {{\phi }_{s}\left( {c}_{i}^{\left( t\right) }\right) ,{\phi }_{v}\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) }\right) ,\left\{ \left\{ {\left( {{\phi }_{s}\left( {c}_{j}^{\left( t\right) }\right) ,{\phi }_{v}\left( {{c}_{j}^{\left( t\right) },{\mathbf{g}}_{j}^{\left( t\right) }}\right) }\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right\} \tag{11}
+ $$
302
+
303
+ Thus, the GNN will output the same collection of node scalar features $\left\{ {\mathbf{s}}_{i}^{\left( T\right) }\right\}$ for $\mathcal{G}$ and $\mathcal{H}$ , and the permutation-invariant graph-level readout will output $f\left( \mathcal{G}\right) = f\left( \mathcal{H}\right)$ . This is a contradiction.
306
+
307
+ Theorem 2. GNNs have the same expressive power as GWL if the following conditions hold:
308
+
309
+ 1. The aggregation $\psi$ is injective, $\mathfrak{G}$ -equivariant, and permutation-invariant.
310
+
311
+ 2. The scalar update ${\phi }_{s}$ is $\mathfrak{G}$ -orbit space injective, $\mathfrak{G}$ -invariant, and permutation-invariant.
312
+
313
+ 3. The vector update ${\phi }_{v}$ is injective, $\mathfrak{G}$ -equivariant, and permutation-invariant.
314
+
315
+ 4. The graph-level readout $f$ is injective and permutation-invariant.
316
+
317
+ Proof of Theorem 2. Consider a GNN where the conditions hold. We will show that, with a sufficient number of iterations $t$ , the output of this GNN is equivalent to GWL, i.e. ${\mathbf{s}}^{\left( t\right) } \equiv {c}^{\left( t\right) }$ .
318
+
319
+ Let $\mathcal{G}$ and $\mathcal{H}$ be any geometric graphs which the GWL test decides as non-isomorphic at iteration $T$ . Because the graph-level readout function is injective, i.e. it maps distinct multiset of node scalar features into unique embeddings, it suffices to show that the GNN's neighbourhood aggregation process, with sufficient iterations, embeds $\mathcal{G}$ and $\mathcal{H}$ into different multisets of node features.
320
+
321
+ For the purpose of this proof, we replace $\mathfrak{G}$ -orbit space injective functions with injective functions over the equivalence class generated by the actions of $\mathfrak{G}$ . Thus, all elements belonging to the same $\mathfrak{G}$ -orbit will first be mapped to the same equivalence class, followed by an injective map. The result is $\mathfrak{G}$ -orbit space injective.
322
+
323
+ Let us assume the GNN updates node scalar and vector features as
324
+
325
+ $$
326
+ {\mathbf{s}}_{i}^{\left( t\right) } = {\phi }_{s}\left( \left\lbrack {\left( {{\mathbf{s}}_{i}^{\left( t - 1\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t - 1\right) }}\right) ,\psi \left( \left\{ \left\{ {\left( {{\mathbf{s}}_{j}^{\left( t - 1\right) },{\overrightarrow{\mathbf{v}}}_{j}^{\left( t - 1\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} \right) }\right\rbrack \right) \tag{12}
327
+ $$
328
+
329
+ $$
330
+ {\overrightarrow{\mathbf{v}}}_{i}^{\left( t\right) } = {\phi }_{v}\left( {\left( {{\mathbf{s}}_{i}^{\left( t - 1\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t - 1\right) }}\right) ,\psi \left( \left\{ \left\{ {\left( {{\mathbf{s}}_{j}^{\left( t - 1\right) },{\overrightarrow{\mathbf{v}}}_{j}^{\left( t - 1\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} \right) }\right) \tag{13}
331
+ $$
332
+
333
+ with the aggregation function $\psi$ being $\mathfrak{G}$ -equivariant and injective, the scalar update function ${\phi }_{s}$ being $\mathfrak{G}$ -invariant and injective, and the vector update function ${\phi }_{v}$ being $\mathfrak{G}$ -equivariant and injective.
334
+
335
+ The GWL test updates the node colours ${c}_{i}^{\left( t\right) }$ and geometric multiset ${\mathbf{g}}_{i}^{\left( t\right) }$ as
336
+
337
+ $$
338
+ {c}_{i}^{\left( t\right) } = {h}_{s}\left( \left\lbrack {\left( {{c}_{i}^{\left( t - 1\right) },{\mathbf{g}}_{i}^{\left( t - 1\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t - 1\right) },{\mathbf{g}}_{j}^{\left( t - 1\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right\rbrack \right) , \tag{14}
339
+ $$
340
+
341
+ $$
342
+ {\mathbf{g}}_{i}^{\left( t\right) } = {h}_{v}\left( {\left( {{c}_{i}^{\left( t - 1\right) },{\mathbf{g}}_{i}^{\left( t - 1\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t - 1\right) },{\mathbf{g}}_{j}^{\left( t - 1\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) \tag{15}
343
+ $$
344
+
345
+ where ${h}_{s}$ is $\mathfrak{G}$ -invariant and injective, and ${h}_{v}$ is $\mathfrak{G}$ -equivariant and injective.
346
+
347
+ We will show by induction that at any iteration $t$ , there always exist injective functions ${\varphi }_{s}$ and ${\varphi }_{v}$ such that ${\mathbf{s}}_{i}^{\left( t\right) } = {\varphi }_{s}\left( {c}_{i}^{\left( t\right) }\right)$ and ${\overrightarrow{\mathbf{v}}}_{i}^{\left( t\right) } = {\varphi }_{v}\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right)$ . This holds for $t = 0$ because the initial node features are the same for GWL and the GNN, ${c}_{i}^{\left( 0\right) } = {\mathbf{s}}_{i}^{\left( 0\right) }$ and ${\mathbf{g}}_{i}^{\left( 0\right) } = \left( {\left( {{\mathbf{s}}_{i}^{\left( 0\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( 0\right) }}\right) ,\varnothing }\right)$ for all $i \in \mathcal{V}\left( \mathcal{G}\right) ,\mathcal{V}\left( \mathcal{H}\right)$ . Suppose this holds for iteration $t$ . At iteration $t + 1$ , substituting ${\mathbf{s}}_{i}^{\left( t\right) }$ with ${\varphi }_{s}\left( {c}_{i}^{\left( t\right) }\right)$ and ${\overrightarrow{\mathbf{v}}}_{i}^{\left( t\right) }$ with ${\varphi }_{v}\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right)$ gives us
348
+
349
+ $$
350
+ {\mathbf{s}}_{i}^{\left( t + 1\right) } = {\phi }_{s}\left( \left\lbrack {\left( {{\varphi }_{s}\left( {c}_{i}^{\left( t\right) }\right) ,{\varphi }_{v}\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) }\right) ,\psi \left( \left\{ \left\{ {\left( {{\varphi }_{s}\left( {c}_{j}^{\left( t\right) }\right) ,{\varphi }_{v}\left( {{c}_{j}^{\left( t\right) },{\mathbf{g}}_{j}^{\left( t\right) }}\right) }\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} \right) }\right\rbrack \right) \tag{16}
351
+ $$
352
+
353
+ $$
354
+ {\overrightarrow{\mathbf{v}}}_{i}^{\left( t + 1\right) } = {\phi }_{v}\left( {\left( {{\varphi }_{s}\left( {c}_{i}^{\left( t\right) }\right) ,{\varphi }_{v}\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) }\right) ,\psi \left( \left\{ \left\{ {\left( {{\varphi }_{s}\left( {c}_{j}^{\left( t\right) }\right) ,{\varphi }_{v}\left( {{c}_{j}^{\left( t\right) },{\mathbf{g}}_{j}^{\left( t\right) }}\right) }\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} \right) }\right) \tag{17}
355
+ $$
356
+
357
+ The composition of multiple injective functions is injective. Therefore, there exist some injective functions ${g}_{s}$ and ${g}_{v}$ such that
358
+
359
+ $$
360
+ {\mathbf{s}}_{i}^{\left( t + 1\right) } = {g}_{s}\left( \left\lbrack {\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t\right) },{\mathbf{g}}_{j}^{\left( t\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right\rbrack \right) , \tag{18}
361
+ $$
362
+
363
+ $$
364
+ {\overrightarrow{\mathbf{v}}}_{i}^{\left( t + 1\right) } = {g}_{v}\left( {\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t\right) },{\mathbf{g}}_{j}^{\left( t\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) , \tag{19}
365
+ $$
366
+
367
+ We can then consider
368
+
369
+ $$
370
+ {\mathbf{s}}_{i}^{\left( t + 1\right) } = {g}_{s} \circ {h}_{s}^{-1}{h}_{s}\left( \left\lbrack {\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t\right) },{\mathbf{g}}_{j}^{\left( t\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right\rbrack \right) , \tag{20}
371
+ $$
372
+
373
+ $$
374
+ {\overrightarrow{\mathbf{v}}}_{i}^{\left( t + 1\right) } = {g}_{v} \circ {h}_{v}^{-1}{h}_{v}\left( {\left( {{c}_{i}^{\left( t\right) },{\mathbf{g}}_{i}^{\left( t\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t\right) },{\mathbf{g}}_{j}^{\left( t\right) }}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) , \tag{21}
375
+ $$
376
+
377
+ Then, we can denote ${\varphi }_{s} = {g}_{s} \circ {h}_{s}^{-1}$ and ${\varphi }_{v} = {g}_{v} \circ {h}_{v}^{-1}$ as injective functions because the composition of injective functions is injective. Hence, for any iteration $t + 1$ , there exist injective functions ${\varphi }_{s}$ and ${\varphi }_{v}$ such that ${\mathbf{s}}_{i}^{\left( t + 1\right) } = {\varphi }_{s}\left( {c}_{i}^{\left( t + 1\right) }\right)$ and ${\overrightarrow{\mathbf{v}}}_{i}^{\left( t + 1\right) } = {\varphi }_{v}\left( {{c}_{i}^{\left( t + 1\right) },{\mathbf{g}}_{i}^{\left( t + 1\right) }}\right)$ .
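+
+ For completeness (this spelled-out step is not in the original text): since ${h}_{s}$ and ${h}_{v}$ are injective, they are bijections onto their images, so ${h}_{s}^{-1}$ and ${h}_{v}^{-1}$ are well defined there, and the composition of injective maps is again injective because
+
+ $$
+ \left( {u \circ w}\right) \left( x\right) = \left( {u \circ w}\right) \left( y\right) \Rightarrow u\left( {w\left( x\right) }\right) = u\left( {w\left( y\right) }\right) \Rightarrow w\left( x\right) = w\left( y\right) \Rightarrow x = y
+ $$
+
+ for any injective $u$ and $w$ .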
378
+
379
+ At the $T$ -th iteration, the GWL test decides that $\mathcal{G}$ and $\mathcal{H}$ are non-isomorphic, which means the multisets of node colours $\left\{ {c}_{i}^{\left( T\right) }\right\}$ are different for $\mathcal{G}$ and $\mathcal{H}$ . The GNN’s node scalar features $\left\{ {\mathbf{s}}_{i}^{\left( T\right) }\right\} = \left\{ {{\varphi }_{s}\left( {c}_{i}^{\left( T\right) }\right) }\right\}$ must also be different for $\mathcal{G}$ and $\mathcal{H}$ because of the injectivity of ${\varphi }_{s}$ .
380
+
381
+ ## D Proofs for what GWL can and cannot distinguish
382
+
383
+ The following results are a consequence of the construction of GWL as well as the definitions of $k$ -hop distinct and $k$ -hop identical geometric graphs. Note that $k$ -hop distinct geometric graphs are also $\left( {k + 1}\right)$ -hop distinct. Similarly, $k$ -hop identical geometric graphs are also $\left( {k - 1}\right)$ -hop identical, but not necessarily $\left( {k + 1}\right)$ -hop identical.
384
+
385
+ Proposition 3. GWL can distinguish any $k$ -hop distinct geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic, and $k$ iterations are sufficient.
386
+
387
+ Proof of Proposition 3. The $k$ -th iteration of GWL identifies the $\mathfrak{G}$ -orbit of the $k$ -hop subgraph ${\mathcal{N}}_{i}^{\left( k\right) }$ at each node $i$ via the geometric multiset ${\mathbf{g}}_{i}^{\left( k\right) }$ . ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ being $k$ -hop distinct implies that there exists some bijection $b$ and some node $i \in {\mathcal{V}}_{1}, b\left( i\right) \in {\mathcal{V}}_{2}$ such that the corresponding $k$ -hop subgraphs ${\mathcal{N}}_{i}^{\left( k\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( k\right) }$ are distinct. Thus, the $\mathfrak{G}$ -orbits of the corresponding geometric multisets ${\mathbf{g}}_{i}^{\left( k\right) }$ and ${\mathbf{g}}_{b\left( i\right) }^{\left( k\right) }$ are mutually exclusive, i.e. ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( k\right) }\right) \cap {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{b\left( i\right) }^{\left( k\right) }\right) \equiv \varnothing \Rightarrow {c}_{i}^{\left( k\right) } \neq {c}_{b\left( i\right) }^{\left( k\right) }$ . Thus, $k$ iterations of GWL are sufficient to distinguish ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ .
388
+
389
+ Proposition 4. Up to $k$ iterations of GWL cannot distinguish any $k$ -hop identical geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic.
390
+
391
+ Proof of Proposition 4. The $k$ -th iteration of GWL identifies the $\mathfrak{G}$ -orbit of the $k$ -hop subgraph ${\mathcal{N}}_{i}^{\left( k\right) }$ at each node $i$ via the geometric multiset ${\mathbf{g}}_{i}^{\left( k\right) }$ . ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ being $k$ -hop identical implies that for all bijections $b$ and all nodes $i \in {\mathcal{V}}_{1}, b\left( i\right) \in {\mathcal{V}}_{2}$ , the corresponding $k$ -hop subgraphs ${\mathcal{N}}_{i}^{\left( k\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( k\right) }$ are identical up to group actions. Thus, the $\mathfrak{G}$ -orbits of the corresponding geometric multisets ${\mathbf{g}}_{i}^{\left( k\right) }$ and ${\mathbf{g}}_{b\left( i\right) }^{\left( k\right) }$ overlap, i.e. ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( k\right) }\right) = {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{b\left( i\right) }^{\left( k\right) }\right) \Rightarrow {c}_{i}^{\left( k\right) } = {c}_{b\left( i\right) }^{\left( k\right) }$ . Thus, up to $k$ iterations of GWL cannot distinguish ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ .
392
+
393
+ Proposition 5. IGWL can distinguish any 1-hop distinct geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic, and 1 iteration is sufficient.
394
+
395
+ Proof of Proposition 5. Each iteration of IGWL identifies the $\mathfrak{G}$ -orbit of the 1-hop local neighbourhood ${\mathcal{N}}_{i}^{\left( k = 1\right) }$ at each node $i$ . ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ being 1-hop distinct implies that there exists some bijection $b$ and some node $i \in {\mathcal{V}}_{1}, b\left( i\right) \in {\mathcal{V}}_{2}$ such that the corresponding 1-hop local neighbourhoods ${\mathcal{N}}_{i}^{\left( 1\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( 1\right) }$ are distinct. Thus, the $\mathfrak{G}$ -orbits of the corresponding geometric multisets ${\mathbf{g}}_{i}^{\left( 1\right) }$ and ${\mathbf{g}}_{b\left( i\right) }^{\left( 1\right) }$ are mutually exclusive, i.e. ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( 1\right) }\right) \cap {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{b\left( i\right) }^{\left( 1\right) }\right) \equiv \varnothing \Rightarrow {c}_{i}^{\left( 1\right) } \neq {c}_{b\left( i\right) }^{\left( 1\right) }$ . Thus, 1 iteration of IGWL is sufficient to distinguish ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ .
396
+
397
+ Proposition 6. Any number of iterations of IGWL cannot distinguish any 1-hop identical geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic.
398
+
399
+ Proof of Proposition 6. Each iteration of IGWL identifies the $\mathfrak{G}$ -orbit of the 1-hop local neighbourhood ${\mathcal{N}}_{i}^{\left( k = 1\right) }$ at each node $i$ , but cannot identify $\mathfrak{G}$ -orbits beyond 1-hop by the construction of IGWL as no geometric information is propagated. ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ being 1-hop identical implies that for all bijections $b$ and all nodes $i \in {\mathcal{V}}_{1}, b\left( i\right) \in {\mathcal{V}}_{2}$ , the corresponding 1-hop local neighbourhoods ${\mathcal{N}}_{i}^{\left( 1\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( 1\right) }$ are identical up to group actions. Thus, the $\mathfrak{G}$ -orbits of the corresponding geometric multisets ${\mathbf{g}}_{i}^{\left( 1\right) }$ and ${\mathbf{g}}_{b\left( i\right) }^{\left( 1\right) }$ overlap, i.e. ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( 1\right) }\right) = {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{b\left( i\right) }^{\left( 1\right) }\right) \Rightarrow {c}_{i}^{\left( k\right) } = {c}_{b\left( i\right) }^{\left( k\right) }$ for any iteration $k$ . Thus, any number of IGWL iterations cannot distinguish ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ .
400
+
401
+ ![01963f10-d36e-775a-8334-f9d4ade2aeeb_10_421_196_956_401_0.jpg](images/01963f10-d36e-775a-8334-f9d4ade2aeeb_10_421_196_956_401_0.jpg)
402
+
403
+ Figure 2: Invariant GWL Test. (Left) IGWL cannot distinguish ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ as they are 1-hop identical - the $\mathfrak{G}$ -orbit of the 1-hop local neighbourhood around each node is the same, and IGWL cannot propagate geometric information beyond 1-hop.
404
+
405
+ Proposition 7. Assuming geometric graphs are constructed from point clouds using radial cutoffs, GWL can distinguish any geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are non-isomorphic. At most ${k}_{Max}$ iterations are sufficient, where ${k}_{Max}$ is the maximum graph diameter among ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ .
406
+
407
+ Proof of Proposition 7. We assume that a geometric graph $\mathcal{G} = \left( {\mathbf{A},\mathbf{S},\overrightarrow{\mathbf{V}},\overrightarrow{\mathbf{X}}}\right)$ is constructed from a point cloud $\left( {\mathbf{S},\overrightarrow{\mathbf{V}},\overrightarrow{\mathbf{X}}}\right)$ using a predetermined radial cutoff $r$ . Thus, the adjacency matrix is defined as ${a}_{ij} = 1$ if ${\begin{Vmatrix}{\overrightarrow{\mathbf{x}}}_{i} - {\overrightarrow{\mathbf{x}}}_{j}\end{Vmatrix}}_{2} \leq r$ , or 0 otherwise, for all ${a}_{ij} \in \mathbf{A}$ . Such construction procedures are conventional for geometric graphs in biochemistry and material science.
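+
+ As an aside, the radial-cutoff construction assumed here is simple to realise in code. The following numpy sketch (illustrative only; the function name is hypothetical) builds the adjacency matrix $\mathbf{A}$ from coordinates $\overrightarrow{\mathbf{X}}$ and a cutoff $r$ :
+
+ ```python
+ import numpy as np
+
+ def radius_graph_adjacency(X: np.ndarray, r: float) -> np.ndarray:
+     """Adjacency with a_ij = 1 iff ||x_i - x_j||_2 <= r and i != j.
+
+     X: (n, d) array of node coordinates, r: radial cutoff.
+     """
+     diff = X[:, None, :] - X[None, :, :]      # (n, n, d) pairwise offsets
+     dist = np.linalg.norm(diff, axis=-1)      # (n, n) pairwise distances
+     A = (dist <= r).astype(np.int64)
+     np.fill_diagonal(A, 0)                    # no self-loops
+     return A
+
+ # Example: four points in 3D with a cutoff of 1.5
+ X = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
+               [0.0, 1.2, 0.0], [3.0, 3.0, 3.0]])
+ A = radius_graph_adjacency(X, r=1.5)
+ ```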
408
+
409
+ Given geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are non-isomorphic, identify ${k}_{\text{Max }}$ , the maximum of the graph diameters of ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ , and choose arbitrary nodes $i \in {\mathcal{V}}_{1}, j \in {\mathcal{V}}_{2}$ . We can define the ${k}_{\text{Max }}$ -hop subgraphs ${\mathcal{N}}_{i}^{\left( {k}_{\text{Max }}\right) }$ and ${\mathcal{N}}_{j}^{\left( {k}_{\text{Max }}\right) }$ at $i$ and $j$ , respectively. Thus, ${\mathcal{N}}_{i}^{\left( {k}_{\operatorname{Max}}\right) } = {\mathcal{V}}_{1}$ for all $i \in {\mathcal{V}}_{1}$ , and ${\mathcal{N}}_{j}^{\left( {k}_{\operatorname{Max}}\right) } = {\mathcal{V}}_{2}$ for all $j \in {\mathcal{V}}_{2}$ . Due to the assumed construction procedure of geometric graphs, ${\mathcal{N}}_{i}^{\left( {k}_{\text{Max }}\right) }$ and ${\mathcal{N}}_{j}^{\left( {k}_{\text{Max }}\right) }$ must be distinct. Otherwise, if ${\mathcal{N}}_{i}^{\left( {k}_{\text{Max }}\right) }$ and ${\mathcal{N}}_{j}^{\left( {k}_{\mathrm{{Max}}}\right) }$ were identical up to group actions, the sets $\left( {{\mathbf{S}}_{1},{\overrightarrow{\mathbf{V}}}_{1},{\overrightarrow{\mathbf{X}}}_{1}}\right)$ and $\left( {{\mathbf{S}}_{2},{\overrightarrow{\mathbf{V}}}_{2},{\overrightarrow{\mathbf{X}}}_{2}}\right)$ would have yielded isomorphic graphs.
410
+
411
+ The ${k}_{\mathrm{{Max}}}$ -th iteration of GWL identifies the $\mathfrak{G}$ -orbit of the ${k}_{\mathrm{{Max}}}$ -hop subgraph ${\mathcal{N}}_{i}^{\left( {k}_{\mathrm{{Max}}}\right) }$ at each node $i$ via the geometric multiset ${\mathbf{g}}_{i}^{\left( {k}_{\mathrm{{Max}}}\right) }$ . As ${\mathcal{N}}_{i}^{\left( {k}_{\mathrm{{Max}}}\right) }$ and ${\mathcal{N}}_{j}^{\left( {k}_{\mathrm{{Max}}}\right) }$ are distinct for any arbitrary nodes $i \in {\mathcal{V}}_{1}, j \in {\mathcal{V}}_{2}$ , the $\mathfrak{G}$ -orbits of the corresponding geometric multisets ${\mathbf{g}}_{i}^{\left( {k}_{\text{Max }}\right) }$ and ${\mathbf{g}}_{j}^{\left( {k}_{\text{Max }}\right) }$ are mutually exclusive, i.e. ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( {k}_{\operatorname{Max}}\right) }\right) \cap {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{j}^{\left( {k}_{\operatorname{Max}}\right) }\right) \equiv \varnothing \Rightarrow {c}_{i}^{\left( {k}_{\operatorname{Max}}\right) } \neq {c}_{j}^{\left( {k}_{\operatorname{Max}}\right) }$ . Thus, ${k}_{\operatorname{Max}}$ iterations of GWL are sufficient to distinguish ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ .
412
+
413
+ ## E Proofs for comparing GWL and IGWL
414
+
415
+ Theorem 8. GWL is strictly more powerful than IGWL.
416
+
417
+ Proof of Theorem 8. Firstly, we can show that GWL contains IGWL as a special case: GWL can use the identity map when updating ${\mathbf{g}}_{i}$ for all $i \in \mathcal{V}$ , i.e. ${\mathbf{g}}_{i}^{\left( t\right) } = {\mathbf{g}}_{i}^{\left( t - 1\right) } = {\mathbf{g}}_{i}^{\left( 0\right) } = \left( {\left( {{\mathbf{s}}_{i},{\overrightarrow{\mathbf{v}}}_{i}}\right) ,\varnothing }\right)$ . Thus, GWL is at least as powerful as IGWL, which does not update ${\mathbf{g}}_{i}$ .
418
+
419
+ Secondly, to show that GWL is strictly more powerful than IGWL, it suffices to show that there exists a pair of geometric graphs that can be distinguished by GWL but not by IGWL. We may consider any geometric graphs that are 1-hop identical but $k$ -hop distinct for some $k > 1$ , where the underlying attributed graphs are isomorphic. Proposition 3 states that GWL can distinguish any such graphs, while Proposition 6 states that IGWL cannot distinguish any such graphs. An example is the pair of graphs in Fig. 1 and Fig. 2, adapted from [18].
420
+
421
+ Corollary 9. IGWL has the same expressive power as GWL for fully connected geometric graphs.
422
+
423
+ Proof of Corollary 9. We will prove by contradiction. Assume that there exists a pair of fully connected geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ which GWL can distinguish, but IGWL cannot.
424
+
425
+ If the underlying attributed graphs of ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ are isomorphic, by Proposition 3 and Proposition 6, ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ are 1-hop identical but $k$ -hop distinct for some $k > 1$ . For all bijections $b$ and all nodes $i \in {\mathcal{V}}_{1}, b\left( i\right) \in {\mathcal{V}}_{2}$ , the local neighbourhoods ${\mathcal{N}}_{i}^{\left( 1\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( 1\right) }$ are identical up to group actions, and ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( 1\right) }\right) = {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{b\left( i\right) }^{\left( 1\right) }\right) \Rightarrow {c}_{i}^{\left( 1\right) } = {c}_{b\left( i\right) }^{\left( 1\right) }$ . Additionally, there exists some bijection $b$ and some nodes $i \in {\mathcal{V}}_{1}, b\left( i\right) \in {\mathcal{V}}_{2}$ such that the $k$ -hop subgraphs ${\mathcal{N}}_{i}^{\left( k\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( k\right) }$ are distinct, and ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( k\right) }\right) \cap {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{b\left( i\right) }^{\left( k\right) }\right) \equiv \varnothing \Rightarrow {c}_{i}^{\left( k\right) } \neq {c}_{b\left( i\right) }^{\left( k\right) }$ . However, as ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ are fully connected, for any $k$ , ${\mathcal{N}}_{i}^{\left( 1\right) } = {\mathcal{N}}_{i}^{\left( k\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( 1\right) } = {\mathcal{N}}_{b\left( i\right) }^{\left( k\right) }$ are identical up to group actions. Thus, ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( 1\right) }\right) = {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( k\right) }\right) =$ ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{b\left( i\right) }^{\left( 1\right) }\right) = {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{b\left( i\right) }^{\left( k\right) }\right) \Rightarrow {c}_{i}^{\left( 1\right) } = {c}_{i}^{\left( k\right) } = {c}_{b\left( i\right) }^{\left( 1\right) } = {c}_{b\left( i\right) }^{\left( k\right) }$ . This is a contradiction.
426
+
427
+ If ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ are non-isomorphic and fully connected, for any arbitrary $i \in {\mathcal{V}}_{1}, j \in {\mathcal{V}}_{2}$ and any $k$ -hop neighbourhood, we know that ${\mathcal{N}}_{i}^{\left( 1\right) } = {\mathcal{N}}_{i}^{\left( k\right) }$ and ${\mathcal{N}}_{j}^{\left( 1\right) } = {\mathcal{N}}_{j}^{\left( k\right) }$ . Thus, a single iteration of GWL and of IGWL identifies the same $\mathfrak{G}$ -orbits and assigns the same node colours, i.e. ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( 1\right) }\right) = {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{i}^{\left( k\right) }\right) \Rightarrow$ ${c}_{i}^{\left( 1\right) } = {c}_{i}^{\left( k\right) }$ and ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{j}^{\left( 1\right) }\right) = {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{j}^{\left( k\right) }\right) \Rightarrow {c}_{j}^{\left( 1\right) } = {c}_{j}^{\left( k\right) }$ . This is a contradiction.
papers/LOG/LOG 2022/LOG 2022 Conference/kXe4Y0c4VqT/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,141 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § ON THE EXPRESSIVE POWER OF GEOMETRIC GRAPH NEURAL NETWORKS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ We propose a geometric version of the Weisfeiler-Leman graph isomorphism test (GWL) for discriminating geometric graphs while respecting the underlying symmetries such as permutation, rotation, and translation. We use GWL to characterise the expressive power of Graph Neural Networks (GNNs) for geometric graphs and provide formal results for the following: (1) What geometric graphs can and cannot be distinguished by GNNs invariant or equivariant to spatial symmetries; (2) Equivariant GNNs are strictly more powerful than their invariant counterparts.
12
+
13
+ § 1 PRELIMINARIES
14
+
15
+ Geometric graphs. Geometric graphs are attributed graphs embedded in Euclidean space, and are used to model structures in biochemistry [1], material science [2], multi-agent robotics [3], and spatial networks [4]. Formally, we consider a geometric graph $\mathcal{G} = \left( {\mathbf{A},\mathbf{S},\overrightarrow{\mathbf{V}},\overrightarrow{\mathbf{X}}}\right)$ as a set $\mathcal{V}$ of $n$ nodes decorated with geometric attributes. $\mathbf{A}$ is an $n \times n$ adjacency matrix, $\mathbf{S} \in {\mathbb{R}}^{n \times {f}_{s}}$ a matrix of scalar features, $\overrightarrow{\mathbf{V}} \in {\mathbb{R}}^{n \times {f}_{v} \times d}$ a tensor of vector features, and $\overrightarrow{\mathbf{X}} \in {\mathbb{R}}^{n \times d}$ a matrix of coordinates. The geometric attributes transform equivariantly under global symmetries: (1) Permutations ${S}_{n}$ , which act on $\mathcal{G}$ in the natural way; (2) Rotations/orthogonal transformations $\mathfrak{G} = {SO}\left( d\right) /O\left( d\right)$ , which act non-trivially on $\overrightarrow{\mathbf{V}},\overrightarrow{\mathbf{X}}$ ; and (3) Translations $T\left( d\right)$ , which act on $\overrightarrow{\mathbf{X}}$ .
16
+
17
+ Geometric Graph Neural Networks. Graph Neural Networks (GNNs) with spatial symmetries 'baked in' have emerged as the architecture of choice for representation learning on geometric graphs. GNNs for geometric graphs follow the message passing paradigm [5]: scalar features (and, optionally, vector features) at each node are updated from iteration $t$ to $t + 1$ using the neighbourhood features via learnable aggregate and update functions, $\psi$ and $\phi$ , respectively:
18
+
19
+ $$
20
+ {\mathbf{s}}_{i}^{\left( t + 1\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t + 1\right) } = \phi \left( {\left( {{\mathbf{s}}_{i}^{\left( t\right) },{\overrightarrow{\mathbf{v}}}_{i}^{\left( t\right) }}\right) ,\psi \left( \left\{ \left\{ {\left( {{\mathbf{s}}_{j}^{\left( t\right) },{\overrightarrow{\mathbf{v}}}_{j}^{\left( t\right) },{\overrightarrow{\mathbf{x}}}_{i} - {\overrightarrow{\mathbf{x}}}_{j}}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} \right) }\right) . \tag{1}
21
+ $$
22
+
23
+ The scalar features ${\mathbf{S}}^{\left( T\right) }$ at the final iteration $T$ are mapped to graph-level predictions via a permutation-invariant readout $f : {\mathbb{R}}^{n \times {f}_{s}} \rightarrow \mathbb{R}$ .
24
+
25
+ We consider two broad classes of GNNs for geometric graphs: (1) $\mathfrak{G}$ -equivariant models, where the intermediate features are $\mathfrak{G}$ -equivariant and $T\left( d\right)$ -invariant [6-10]; and (2) $\mathfrak{G}$ -invariant models, which only update scalar features in a $\mathfrak{G} \rtimes T\left( d\right)$ -invariant manner [11-13]. Invariant GNNs have shown strong performance for protein design [14, 15] and electrocatalysis [16, 17], while equivariant GNNs are being used within learnt interatomic potentials for molecular dynamics [18-20]. Despite promising empirical performance, key theoretical questions remain unanswered:
26
+
27
+ 1. How to characterise the representational or expressive power of geometric GNNs? 2. What is the tradeoff between $\mathfrak{G}$ -equivariant and $\mathfrak{G}$ -invariant GNNs? Do we need equivariance?
28
+
29
+ One successful approach to studying the expressive power of (non-geometric) GNNs is through the lens of the Weisfeiler-Leman test for deciding whether two graphs are isomorphic [21, 22]. Any message passing GNN can be at most as powerful as 1-WL in distinguishing non-isomorphic graphs, and GNNs have the same expressive power as 1-WL if equipped with injective aggregate, update, and readout functions [23, 24]. WL is an attractive abstract tool for studying existing architectures while removing implementation details that vary from one model to another. Two geometric graphs $\mathcal{G}$ and $\mathcal{H}$ are isomorphic (denoted $\mathcal{G} \simeq \mathcal{H}$ ) if there exists an edge preserving bijection $b : \mathcal{V}\left( \mathcal{G}\right) \rightarrow \mathcal{V}\left( \mathcal{H}\right)$ and their attributes are equivalent, up to global group actions.
30
+
31
+ [Figure 1 graphics]
32
+
33
+ Figure 1: Geometric Weisfeiler-Leman Test. (Left) GWL distinguishes non-isomorphic geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ by injectively assigning colours to distinct neighbourhood patterns, up to global symmetries. Each iteration expands the neighbourhood from which geometric information can be gathered (shaded for node $i$ ). (Right) Intuitively, GWL colours geometric computation trees.
34
+
35
+ Contributions. In this work, we study the expressive power of geometric GNNs from the perspective of discriminating geometric graphs while respecting the underlying global symmetries. We propose a geometric version of the Weisfeiler-Leman graph isomorphism test (GWL) and use it to formally characterise classes of graphs that can and cannot be distinguished by $\mathfrak{G}$ -invariant and $\mathfrak{G}$ -equivariant GNNs. These results enable us to show that $\mathfrak{G}$ -equivariant GNNs are strictly more powerful than $\mathfrak{G}$ -invariant models. WL has been a major driver of progress for more expressive non-geometric GNNs [25-28]. We hope GWL will provide a roadmap for the same for geometric graphs.
36
+
37
+ § 2 THE GEOMETRIC WEISFEILER-LEMAN TEST
38
+
39
+ Preliminaries. Given a group $\mathfrak{G}$ acting on a set $X$ , the $\mathfrak{G}$ -orbit of $x \in X$ is ${\mathcal{O}}_{\mathfrak{G}}\left( x\right) = \left\{ {\mathfrak{g}x \mid \mathfrak{g} \in }\right.$ $\mathfrak{G}\} \subseteq X$ . When $\mathfrak{G}$ is understood from the context, we simply refer to it as the orbit. A function $f : X \rightarrow Y$ is $\mathfrak{G}$ -orbit space injective if we have $f\left( {x}_{1}\right) = f\left( {x}_{2}\right)$ if and only if ${x}_{1} \in {\mathcal{O}}_{\mathfrak{G}}\left( {x}_{2}\right)$ for any ${x}_{1},{x}_{2} \in X$ . Necessarily, such a function is $\mathfrak{G}$ -invariant, since $f\left( {\mathfrak{g} \cdot x}\right) = f\left( x\right)$ . We work with $\mathfrak{G} = {SO}\left( d\right) /O\left( d\right)$ and translation invariance is handled trivially using relative positions ${\overrightarrow{\mathbf{x}}}_{ij} = {\overrightarrow{\mathbf{x}}}_{i} - {\overrightarrow{\mathbf{x}}}_{j}$ . Analogously to the WL test, we assume that all geometric vectors and scalar features come from a countable subset of ${\mathbb{R}}^{d}$ and ${\mathbb{R}}^{f}$ , respectively.
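+
+ As a simple illustration (added here; not part of the original text), take $X = {\mathbb{R}}^{d}$ with $\mathfrak{G} = O\left( d\right)$ acting by matrix multiplication, so that the orbits are spheres of fixed radius. Then
+
+ $$
+ f\left( \overrightarrow{\mathbf{x}}\right) = {\begin{Vmatrix}\overrightarrow{\mathbf{x}}\end{Vmatrix}}_{2}\;\text{ satisfies }\;f\left( {\overrightarrow{\mathbf{x}}}_{1}\right) = f\left( {\overrightarrow{\mathbf{x}}}_{2}\right) \Leftrightarrow {\overrightarrow{\mathbf{x}}}_{2} = \mathbf{Q}{\overrightarrow{\mathbf{x}}}_{1}\text{ for some }\mathbf{Q} \in O\left( d\right) ,
+ $$
+
+ i.e. $f$ separates $O\left( d\right)$ -orbits exactly and is, in particular, $O\left( d\right)$ -invariant.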
40
+
41
+ A geometric version of the WL test must satisfy a series of requirements imposed by the global symmetries of geometric graphs. As with WL, we iteratively assign node colours which will be unique for every distinct geometric neighbourhood pattern, up to global symmetries. In other words, the exact position or angle of rotation of the (sub-)graph in space should not influence the colouring of the nodes. Therefore, we would like the colouring function to be $\mathfrak{G}$ -orbit space injective, which also makes it $\mathfrak{G}$ -invariant. The colouring function must use an aggregation mechanism to capture geometric information around local neighbourhoods. To avoid any loss of information, the local aggregator must be permutation-invariant and injective. Since a $\mathfrak{G}$ -invariant function cannot be injective by construction, this aggregator must be $\mathfrak{G}$ -equivariant.
42
+
43
+ These intuitions motivate the following definition of the Geometric Weisfeiler-Leman (GWL) test. First, at initialisation time, we assign to each node $i \in \mathcal{V}$ a scalar node colour ${c}_{i} \in \mathbb{R}$ and a (nested) multiset of geometric information ${\mathbf{g}}_{i}$ :
44
+
45
+ $$
46
+ {c}_{i}^{\left( 0\right) } = {\mathbf{s}}_{i},\;{\mathbf{g}}_{i}^{\left( 0\right) } = \left( {\left( {{\mathbf{s}}_{i},{\overrightarrow{\mathbf{v}}}_{i}}\right) ,\varnothing }\right) , \tag{2}
47
+ $$
48
+
49
+ Then the test proceeds inductively and updates the node colour ${c}_{i}^{\left( t\right) }$ and the geometric multiset ${\mathbf{g}}_{i}^{\left( t\right) }$ at iteration $t$ for all $i \in \mathcal{V}$ :
50
+
51
+ $$
52
+ {c}_{i}^{\left( t\right) } = {\operatorname{I-HASH}}^{\left( t\right) }\left( {\left( {{c}_{i}^{\left( t - 1\right) },{\mathbf{g}}_{i}^{\left( t - 1\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t - 1\right) },{\mathbf{g}}_{j}^{\left( t - 1\right) },{\overrightarrow{\mathbf{x}}}_{ij}}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) , \tag{3}
53
+ $$
54
+
55
+ $$
56
+ {\mathbf{g}}_{i}^{\left( t\right) } = \left( {\left( {{c}_{i}^{\left( t - 1\right) },{\mathbf{g}}_{i}^{\left( t - 1\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t - 1\right) },{\mathbf{g}}_{j}^{\left( t - 1\right) },{\overrightarrow{\mathbf{x}}}_{ij}}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) , \tag{4}
57
+ $$
58
+
59
+ With each iteration, ${\mathbf{g}}_{i}^{\left( t\right) }$ aggregates geometric information in progressively larger neighbourhoods around the node $i$ . Thus, $k$ iterations of GWL enable each node to access the $k$ -hop subgraph ${\mathcal{N}}_{i}^{\left( k\right) }$ around itself via ${\mathbf{g}}_{i}^{\left( k\right) }$ . The node colours are $\mathfrak{G}$ -invariant scalars computed via I-HASH, a $\mathfrak{G}$ -orbit space injective map encoding distinct geometric neighbourhood patterns, up to global symmetries.
60
+
61
+ Given two geometric graphs $\mathcal{G}$ and $\mathcal{H}$ , if there exists some iteration $t$ for which $\left\{ \left\{ {{c}_{i}^{\left( t\right) } \mid i \in \mathcal{V}\left( \mathcal{G}\right) }\right\} \right\} \neq$ $\left\{ \left\{ {{c}_{i}^{\left( t\right) } \mid i \in \mathcal{V}\left( \mathcal{H}\right) }\right\} \right\}$ , then GWL deems the two graphs as being geometrically non-isomorphic. Otherwise, the test terminates and cannot distinguish the two geometric graphs when the number of colours in iterations $t$ and $\left( {t - 1}\right)$ is the same.
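+
+ As a small illustration (not from the original text), this decision and termination rule amounts to comparing colour multisets, e.g.:
+
+ ```python
+ from collections import Counter
+
+ def gwl_decides_non_isomorphic(colours_G: dict, colours_H: dict) -> bool:
+     """True if the node-colour multisets of G and H differ at this iteration."""
+     return Counter(colours_G.values()) != Counter(colours_H.values())
+
+ def gwl_terminated(prev_colours: dict, colours: dict) -> bool:
+     """The test stops once the number of distinct colours no longer grows."""
+     return len(set(colours.values())) == len(set(prev_colours.values()))
+ ```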
62
+
63
+ Global symmetries. Denote by ${\mathbf{Q}}_{\mathfrak{g}}$ the standard matrix representation of $\mathfrak{g} \in \mathfrak{G}$ . Then we have:
64
+
65
+ $$
66
+ \mathfrak{g} \cdot {\mathbf{g}}_{i}^{\left( 0\right) } = \left( {\left( {{\mathbf{s}}_{i},{\mathbf{Q}}_{\mathfrak{g}}{\overrightarrow{\mathbf{v}}}_{i}}\right) ,\varnothing }\right) \tag{5}
67
+ $$
68
+
69
+ $$
70
+ \mathfrak{g} \cdot {\mathbf{g}}_{i}^{\left( t\right) } = \left( {\left( {{c}_{i}^{\left( t - 1\right) },\mathfrak{g} \cdot {\mathbf{g}}_{i}^{\left( t - 1\right) }}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t - 1\right) },\mathfrak{g} \cdot {\mathbf{g}}_{j}^{\left( t - 1\right) },{\mathbf{Q}}_{\mathfrak{g}}{\overrightarrow{\mathbf{x}}}_{ij}}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) \tag{6}
71
+ $$
72
+
73
+ Thus the assignment of ${\mathbf{g}}_{i}$ from Eq 4 is $\mathfrak{G}$ -equivariant by construction. Furthermore, we require the I-HASH function to be $\mathfrak{G}$ -orbit space injective. That is, for any geometric neighbourhoods $\mathbf{g},{\mathbf{g}}^{\prime }$ , $\operatorname{I-HASH}\left( \mathbf{g}\right) = \operatorname{I-HASH}\left( {\mathbf{g}}^{\prime }\right)$ if and only if there exists $\mathfrak{g} \in \mathfrak{G}$ such that $\mathbf{g} = \mathfrak{g} \cdot {\mathbf{g}}^{\prime }$ .
74
+
75
+ Invariant GWL. Since we are interested in understanding the role of $\mathfrak{G}$ -equivariance, we also consider a more restrictive version of GWL that only updates node colours using the $\mathfrak{G}$ -orbit space injective and $\mathfrak{G}$ -invariant I-HASH function, without the geometric multiset. We term this the Invariant GWL (IGWL) test, which differs from GWL only in its update rule:
76
+
77
+ $$
78
+ {c}_{i}^{\left( t\right) } = {\operatorname{I-HASH}}^{\left( t\right) }\left( {\left( {{c}_{i}^{\left( t - 1\right) },{\overrightarrow{\mathbf{v}}}_{i}}\right) ,\left\{ \left\{ {\left( {{c}_{j}^{\left( t - 1\right) },{\overrightarrow{\mathbf{v}}}_{j},{\overrightarrow{\mathbf{x}}}_{ij}}\right) \mid j \in {\mathcal{N}}_{i}}\right\} \right\} }\right) \tag{7}
79
+ $$
80
+
81
+ § 2.1 CHARACTERISING THE EXPRESSIVE POWER OF GNNS FOR GEOMETRIC GRAPHS
82
+
83
+ We would like to characterise the maximum expressive power of geometric GNNs based on the GWL test. Firstly, we show that any possible GNN can be at most as powerful as GWL in distinguishing non-isomorphic geometric (sub-)graphs.
84
+
85
+ Theorem 1. Geometric GNNs can be at most as powerful as GWL.
86
+
87
+ With a sufficient number of iterations, the output of a GNN can be equivalent to GWL if certain conditions are met regarding the aggregate, update and readout functions.
88
+
89
+ Theorem 2. GNNs have the same expressive power as GWL if the following conditions hold:
90
+
91
+ 1. The aggregation $\psi$ is injective, $\mathfrak{G}$ -equivariant, and permutation-invariant.
92
+
93
+ 2. The scalar update ${\phi }_{s}$ is $\mathfrak{G}$ -orbit space injective, $\mathfrak{G}$ -invariant, and permutation-invariant.
94
+
95
+ 3. The vector update ${\phi }_{v}$ is injective, $\mathfrak{G}$ -equivariant, and permutation-invariant.
96
+
97
+ 4. The graph-level readout $f$ is injective and permutation-invariant.
98
+
99
+ § 2.2 WHAT GEOMETRIC GRAPHS CAN GWL DISTINGUISH?
100
+
101
+ Let us consider what geometric graphs can and cannot be distinguished by GWL and IGWL, as well as the class of GNNs bounded by the respective tests.
102
+
103
+ GWL identifies the $\mathfrak{G}$ -orbits of geometric (sub-)graphs centred around each node, and distinguishes geometric (sub-)graphs via comparing $\mathfrak{G}$ -orbits. Given two distinct neighbourhoods ${\mathcal{N}}_{1}$ and ${\mathcal{N}}_{2}$ , the $\mathfrak{G}$ -orbits of the corresponding geometric multisets ${\mathbf{g}}_{1}$ and ${\mathbf{g}}_{2}$ are mutually exclusive, i.e. ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{1}\right) \cap$ ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{2}\right) \equiv \varnothing \Rightarrow {c}_{1} \neq {c}_{2}$ . If ${\mathcal{N}}_{1}$ and ${\mathcal{N}}_{2}$ were identical up to group actions, their $\mathfrak{G}$ -orbits would overlap, i.e. ${\mathbf{g}}_{1} = \mathfrak{g}{\mathbf{g}}_{2}$ for some $\mathfrak{g} \in \mathfrak{G}$ and ${\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{1}\right) = {\mathcal{O}}_{\mathfrak{G}}\left( {\mathbf{g}}_{2}\right) \Rightarrow {c}_{1} = {c}_{2}$ .
104
+
105
+ Case 1: Underlying graphs are isomorphic. Let ${\mathcal{G}}_{1} = \left( {{\mathbf{A}}_{1},{\mathbf{S}}_{1},{\overrightarrow{\mathbf{V}}}_{1},{\overrightarrow{\mathbf{X}}}_{1}}\right)$ and ${\mathcal{G}}_{2} =$ $\left( {{\mathbf{A}}_{2},{\mathbf{S}}_{2},{\overrightarrow{\mathbf{V}}}_{2},{\overrightarrow{\mathbf{X}}}_{2}}\right)$ be two geometric graphs such that the underlying attributed graphs $\left( {{\mathbf{A}}_{1},{\mathbf{S}}_{1}}\right)$ and $\left( {{\mathbf{A}}_{2},{\mathbf{S}}_{2}}\right)$ are isomorphic, i.e. there exists a set of edge preserving bijections $\left\{ {b : {\mathcal{V}}_{1} \rightarrow {\mathcal{V}}_{2}}\right\}$ such that ${s}_{i}^{\left( {\mathcal{G}}_{1}\right) } = {s}_{b\left( i\right) }^{\left( {\mathcal{G}}_{2}\right) } =$ constant for all $b$ and all $i \in {\mathcal{V}}_{1}$ . We term ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ to be $k$ -hop distinct if for all graph isomorphisms $b$ , there is some node $i \in {\mathcal{V}}_{1},b\left( i\right) \in {\mathcal{V}}_{2}$ such that the corresponding $k$ -hop
106
+
107
+ subgraphs ${\mathcal{N}}_{i}^{\left( k\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( k\right) }$ are distinct. Otherwise, we say ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ are $k$ -hop identical if all ${\mathcal{N}}_{i}^{\left( k\right) }$ and ${\mathcal{N}}_{b\left( i\right) }^{\left( k\right) }$ are identical up to group actions.
108
+
109
+ We can now formalise what geometric graphs can and cannot be distinguished by GWL.
110
+
111
+ Proposition 3. GWL can distinguish any $k$ -hop distinct geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic, and $k$ iterations are sufficient.
112
+
113
+ Proposition 4. Up to $k$ iterations of GWL cannot distinguish any $k$ -hop identical geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic.
114
+
115
+ Additionally, we can state the following results about the more constrained IGWL.
116
+
117
+ Proposition 5. IGWL can distinguish any 1-hop distinct geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic, and 1 iteration is sufficient.
118
+
119
+ Proposition 6. Any number of iterations of IGWL cannot distinguish any 1-hop identical geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are isomorphic.
120
+
121
+ Case 2: Underlying graphs are non-isomorphic. Let ${\mathcal{G}}_{1} = \left( {{\mathbf{A}}_{1},{\mathbf{S}}_{1},{\overrightarrow{\mathbf{V}}}_{1},{\overrightarrow{\mathbf{X}}}_{1}}\right)$ and ${\mathcal{G}}_{2} =$ $\left( {{\mathbf{A}}_{2},{\mathbf{S}}_{2},{\overrightarrow{\mathbf{V}}}_{2},{\overrightarrow{\mathbf{X}}}_{2}}\right)$ such that the underlying attributed graphs $\left( {{\mathbf{A}}_{1},{\mathbf{S}}_{1}}\right)$ and $\left( {{\mathbf{A}}_{2},{\mathbf{S}}_{2}}\right)$ are non-isomorphic and constructed under realistic assumptions.
122
+
123
+ Proposition 7. Assuming geometric graphs are constructed from point clouds using radial cutoffs, GWL can distinguish any geometric graphs ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ where the underlying attributed graphs are non-isomorphic. At most ${k}_{Max}$ iterations are sufficient, where ${k}_{Max}$ is the maximum graph diameter among ${\mathcal{G}}_{1}$ and ${\mathcal{G}}_{2}$ .
124
+
125
+ § 2.3 COMPARING GWL AND IGWL
126
+
127
+ These results enable us to compare GWL and IGWL, and precisely characterise the expressive power of the class of GNNs bounded by the respective tests.
128
+
129
+ Theorem 8. GWL is strictly more powerful than IGWL.
130
+
131
+ Corollary 9. IGWL has the same expressive power as GWL for fully connected geometric graphs.
132
+
133
+ These results may not seem surprising as invariance is a special case of equivariance, and learning $\mathfrak{G}$ -invariant tasks may require networks to learn $\mathfrak{G}$ -equivariant sub-tasks [29]. However, to the best of our knowledge, this is the first formal proof supporting the use of $\mathfrak{G}$ -equivariant intermediate layers as prescribed in the Geometric Deep Learning blueprint [30].
134
+
135
+ § 3 DISCUSSION
136
+
137
+ Practical Implications. Propositions 3 and 6 suggest that when message passing is restricted to local radial neighbourhoods around each node, as is conventional for large macromolecules with thousands of nodes, $\mathfrak{G}$ -equivariant GNNs should be preferred to $\mathfrak{G}$ -invariant GNNs. Stacking multiple $\mathfrak{G}$ -equivariant layers enables the computation of compositional geometric features, which may be key to developing 'foundation models' [31] for macromolecule structures. On the other hand, Corollary 9 suggests that $\mathfrak{G}$ -equivariant and $\mathfrak{G}$ -invariant GNNs are equally powerful when working on fully connected geometric graphs with all-to-all message passing. These results support the empirical success of recent $\mathfrak{G}$ -invariant 'Graph Transformer' architectures [17,32] for small molecules with tens of nodes, where working with full graphs is tractable. Additionally, Proposition 5 suggests that using wider cutoff radii when building geometric graphs can be effective for improving $\mathfrak{G}$ -invariant GNNs, as larger neighbourhoods may be easier to distinguish via scalars.
138
+
139
+ Future Work. It is non-trivial to define maximally powerful GNNs that satisfy the conditions of Theorem 2. GWL relies on injective multiset functions that are $\mathfrak{G}$ -invariant or $\mathfrak{G}$ -equivariant. But can such functions be universally approximated via neural networks?
140
+
141
+ We suspect that higher-order tensors are required for building functions with the desired properties and, consequently, for building provably powerful geometric GNNs. No existing geometric GNNs are as powerful as GWL, with the recent Multi-ACE model [20] being a possible exception under appropriate hyperparameters - infinite tensor order, and correlation order equivalent to the maximum cardinality of local neighbourhoods - both of which are intractable in practice. Future work will explore building provably powerful and practical geometric GNNs, with applications in biochemistry, material science, and multi-agent robotics.
papers/LOG/LOG 2022/LOG 2022 Conference/kv4xUo5Pu6/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,479 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Representation Learning on Biomolecular Structures using Equivariant Graph Attention
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Learning and reasoning about $3\mathrm{D}$ molecular structures of varying size is an emerging and important challenge in machine learning and especially in the development of biotherapeutics. Equivariant Graph Neural Networks (GNNs) can simultaneously leverage the geometric and relational detail of the problem domain and are known to learn expressive representations through the propagation of information between nodes, leveraging higher-order representations in their intermediate layers to faithfully express the geometry of the data, such as directionality. In this work, we propose an equivariant GNN that operates with Cartesian coordinates to incorporate directionality and we implement a novel attention mechanism, acting as a content- and spatial-dependent filter when propagating information between nodes. Our proposed message function processes vector features in a geometrically meaningful way by mixing existing vectors and creating new ones based on cross products. We demonstrate the efficacy of our architecture on accurately predicting properties of large biomolecules and show its computational advantage over recent methods which rely on irreducible representations by means of the spherical harmonics expansion.
12
+
13
+ ## 1 Introduction
14
+
15
+ Predicting molecular properties is of central importance to applications in pharmaceutical research and protein design. Accurate computational methods can significantly accelerate the overall process of finding better molecular candidates in a faster and cost-efficient way. Learning on 3D environments of molecular structures is a rapidly growing area of machine learning with promising applications but also domain-specific challenges. While Deep Learning (DL) has replaced hand-crafted features to a large extent, many advances are crucially determined through inductive biases in deep neural networks. Developed neural models should maintain an efficient and accurate representation of structures with up to thousands of atoms and correctly reason about their 3D geometry independent of orientation and position. A powerful method to restrict a neural network to the functions of interest, such as a molecular property, is to exploit the symmetry of the data by constraining equivariance with respect to transformations from a certain symmetry group [1, 2].
16
+
17
+ 3D Graph Neural Networks (GNNs) have been applied to a wide range of molecular structures, such as in the prediction of quantum chemistry properties of small molecules [3, 4], but also to macromolecular structures like proteins [5-8], due to the natural representation of structures as graphs, with atoms as nodes and edges drawn based on bonding or spatial proximity. These networks generally encode the 3D geometry in terms of rotationally invariant representations, such as pairwise distances when modelling local interactions, which leads to a loss of directional information; the addition of angular information to the network architecture has been shown to be beneficial in representing the local geometry [9-11].
18
+
19
+ Neural models that preserve equivariance when working on point clouds in 3D space have been proposed [12-15], which can be described as Tensorfield Networks. These physics-inspired models leverage higher-order tensor representations and require additional calculations to construct the basis for the transformations of their learnable kernels, which can be expensive to compute. While these models enable the interaction between representations of different order (often referred to as type- $l$ representations), many data types are restricted to scalar values (type-0, e.g., temperature or energy) and 3D vectors (type-1, e.g., velocity or forces). Another way of using more information with the (limited) data at hand and building data-efficient models on point clouds ${}^{1}$ through equivariant functions is to directly operate on Cartesian coordinates [16-19] and explicitly define the (equivariant) transformations, which is conceptually simpler and does not require Clebsch-Gordan tensor products of irreducible representations as commonly used in Tensorfield Network-like architectures.
20
+
21
+ ![01963ed2-5f59-76f7-8b2d-366531bcefc3_1_331_199_1136_634_0.jpg](images/01963ed2-5f59-76f7-8b2d-366531bcefc3_1_331_199_1136_634_0.jpg)
22
+
23
+ Figure 1: (a) Visualization of the local neighbourhood of central carbon atom $i$ . Directed edges illustrate the message flow from neighbour $j$ to central atom $i$ , where scalar and vector features are propagated along the edges. Grey boxes $R$ represent the side-chain atoms of each residue and serve here as a visual compression that includes many more atoms. Here, nodes comprise scalar and vector features with 5 and 2 channels, respectively. (b) Proposed equivariant message function that computes a geometric and content-related feature attention filter for scalar features, while vector messages are created based on a weighted combination of newly constructed vectors.
24
+
25
+ In this work, we introduce Equivariant Graph Attention Networks (EQGAT), which operate on large point clouds such as proteins or protein-ligand complexes, and show their superior performance compared to invariant models as well as their faster training time compared to recent architectures that achieve equivariance through the usage of irreducible representations. Our model implements a novel feature attention mechanism which is invariant to global rotations and translations of inputs and includes spatial but also content-related information, which serves as a powerful edge embedding when propagating information in the Message Passing Neural Networks (MPNNs) [4] framework. Since we define equivariant functions on the original Cartesian space while restricting ourselves to tensor representations of rank 1, i.e., vectors, we aim to capture as much geometric information as possible through a geometrically motivated message function.
26
+
27
+ In summary, we make the following contributions:
28
+
29
+ - We introduce a computationally efficient equivariant Graph Neural Network that leverages geometric information by operating on vector features in Cartesian space.
30
+
31
+ - We implement a novel feature attention mechanism to propagate neighbouring node features and we define equivariant operations to combine vector features in a geometrically meaningful way.
32
+
33
+ - We benchmark our proposed architecture on large molecular systems such as protein complexes and show its efficacy on tasks most relevant to industrial applications.
34
+
35
+ ---
36
+
37
+ ${}^{1}$ An example of more information preservation is when considering relative positions between points in 3D space, where the information of orientation is maintained, as opposed to when only the (scalar; invariant) distances between points are considered.
38
+
39
+ ---
40
+
41
+ ## 2 Background
42
+
43
+ ### 2.1 Message Passing Neural Networks (MPNNs)
44
+
45
+ MPNNs [4] generalize Graph Neural Networks (GNNs) [1, 2, 20] and aim to parameterize a mapping from a graph to a feature space. That feature space can either be defined on the node- or graph level. Formally, a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ contains nodes $i \in \mathcal{V}$ and edges $\left( {j, i}\right) \in \mathcal{E}$ which represent the relationship between nodes $j$ and $i$ . Since MPNNs utilize shared trainable layers among nodes, permutation equivariance is preserved.
46
+
47
+ In this work, we consider graphs representing molecular systems embedded in 3D Euclidean space, where atoms represent nodes and the edges are described through covalent bonds and/or by atom pairs within a certain cutoff distance $c$ as illustrated in Figure 1(a). When working with protein point clouds, a common design choice is the construction of residue graphs, where the nodes are represented through the ${C}_{\alpha }$ -atom of each amino acid residue [5,6,18].
48
+
49
+ We refer to ${x}_{i}^{\left( l\right) } = \left( {{a}_{i},{p}_{i},{s}_{i}^{\left( l\right) },{v}_{i}^{\left( l\right) }}\right)$ as the state of the $i$ -th atom, where ${a}_{i} \in {\mathbb{Z}}_{ + }$ and ${p}_{i} \in {\mathbb{R}}^{3}$ denote atom $i$ 's chemical element and its spatial position, while ${h}_{i}^{\left( l\right) } = \left( {{s}_{i}^{\left( l\right) },{v}_{i}^{\left( l\right) }}\right) \in {\mathbb{R}}^{1 \times {F}_{s}} \times {\mathbb{R}}^{3 \times {F}_{v}}$ are the hidden scalar and vector features that are iteratively refined through $L$ message passing steps. We distinguish between scalar and vector features because scalars can be transformed without functional restrictions, e.g., with standard MLPs, and their domain spans the entire $\mathbb{R}$ , while vector features that reside in ${\mathbb{R}}^{3}$ can only be transformed in certain ways to preserve rotation equivariance. In theory, one could also only rely on vector features (with a number of ${F}_{v}$ channels), and perform a dot product reduction to make that representation invariant. This step, however, restricts the domain of the scalars to ${\mathbb{R}}_{ + }$ only.
50
+
51
+ A general MPNN implements a learnable message and update function denoted as ${M}_{l}\left( \cdot \right)$ and ${U}_{l}\left( \cdot \right)$ to process atom $i$ 's hidden features by considering its local environment $\mathcal{N}\left( i\right)$ through
52
+
53
+ $$
54
+ {m}_{i}^{\left( l + 1\right) } = \mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{M}_{l}\left( {{x}_{i}^{\left( l\right) },{x}_{j}^{\left( l\right) }}\right) , \tag{1}
55
+ $$
56
+
57
+ $$
58
+ {x}_{i}^{\left( l + 1\right) } = \left( {{a}_{i},{p}_{i},{U}_{l}\left( {{x}_{i}^{\left( l\right) },{m}_{i}^{\left( l + 1\right) }}\right) }\right) , \tag{2}
59
+ $$
60
+
61
+ where $\mathcal{N}\left( i\right) = \left\{ {j : {\begin{Vmatrix}{p}_{ij}\end{Vmatrix}}_{2} = {\begin{Vmatrix}{p}_{j} - {p}_{i}\end{Vmatrix}}_{2} = {d}_{ij} < c}\right\}$ denotes central atom $i$ 's neighbour set that is obtained through a distance cutoff $c > 0$ .
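+
+ For illustration only, Eqs. (1)-(2) can be read as the following neighbour-list loop (a minimal sketch; `message_fn` and `update_fn` are hypothetical stand-ins for the learnable ${M}_{l}$ and ${U}_{l}$ ):
+
+ ```python
+ def mpnn_step(states, neighbours, message_fn, update_fn):
+     """One message passing iteration over a precomputed neighbour list.
+
+     states[i]     -- per-node state x_i (e.g. the tuple (a_i, p_i, s_i, v_i))
+     neighbours[i] -- list of node indices j in N(i)
+     message_fn    -- M_l(x_i, x_j), returns a message that supports '+'
+     update_fn     -- U_l(x_i, m_i), returns the updated hidden features
+     """
+     new_states = {}
+     for i, nbrs in neighbours.items():
+         m_i = sum(message_fn(states[i], states[j]) for j in nbrs)   # Eq. (1)
+         # Eq. (2): element types a_i and positions p_i stay fixed; only the
+         # hidden features are refreshed by the update function.
+         new_states[i] = update_fn(states[i], m_i)
+     return new_states
+ ```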
62
+
63
+ For our 3D GNN, we wish to implement simple, yet powerful rotation equivariant transformations in the message and update functions, to accurately describe the local environment of nodes in the point cloud.
64
+
65
+ ### 2.2 Invariance and Equivariance
66
+
67
+ In this work, we consider the special orthogonal group $\mathrm{{SO}}\left( 3\right)$ , i.e. the group of proper rotations in three dimensions. A group element of $\mathrm{{SO}}\left( 3\right)$ is commonly represented as matrix $R \in {\mathbb{R}}^{3 \times 3}$ satisfying ${R}^{\top }R = R{R}^{\top } = I$ and $\det R = 1$ .
68
+
69
+ For a node feature $h = \left( {s, v}\right) \in {\mathbb{R}}^{{F}_{s}} \times {\mathbb{R}}^{3 \times {F}_{v}}$ , an SO(3)-equivariant function $f$ must obey the following equation
70
+
71
+ $$
72
+ {f}_{.R}\left( h\right) = \left( {{Is},{Rv}}\right) = \left( {s,{Rv}}\right) , \tag{3}
73
+ $$
74
+
75
+ where ${f}_{.R}$ in this work means a rotation acting on the input of function $f$ . As shown in (3), invariance can be regarded as a special case of equivariance, where equivariance for a scalar representation $s$ means that a trivial representation, i.e. the identity, acts on the scalar embedding, while vectors $v$ are transformed with $R$ , i.e., a change of basis is performed, where the new basis is determined by the column vectors in $R$ .
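+
+ As a quick sanity check of Eq. (3) (added for illustration; not from the original text), channel mixing of vector features by right-multiplication with a weight matrix acts only on the channel dimension and therefore commutes with a rotation applied from the left, making such linear maps SO(3)-equivariant:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+
+ # A rotation about the z-axis by angle theta (an element of SO(3))
+ theta = 0.7
+ R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
+               [np.sin(theta),  np.cos(theta), 0.0],
+               [0.0,            0.0,           1.0]])
+
+ F_v, F_out = 4, 8
+ v = rng.normal(size=(3, F_v))      # vector features: one 3D vector per channel
+ W = rng.normal(size=(F_v, F_out))  # channel-mixing weights
+
+ lhs = R @ (v @ W)   # rotate the output of the linear map
+ rhs = (R @ v) @ W   # rotate the input first
+ assert np.allclose(lhs, rhs)       # R (v W) == (R v) W by associativity
+ ```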
76
+
77
+ ## 3 Related Work
78
+
79
+ Neural networks that specifically achieve $\mathrm{E}\left( 3\right)$ or $\mathrm{{SE}}\left( 3\right)$ equivariance have been proposed in Ten-sorfield Networks (TFNs) [12] and its variants in the covariant Cormorant [13], NequIP [15] and SE(3)-Transformer [14]. With TFNs, equivariance is achieved through the usage of equivariant function spaces such as spherical harmonics combined with Clebsch-Gordan tensor products in their intermediate layer to allow the multiplication of different ordered representations, while others resort to lifting the spatial space to higher-dimensional spaces such as Lie group spaces [21]. Since no
80
+
81
+ restriction on the order of tensors is imposed on these methods, sufficient expressive power of these models is guaranteed, but at the cost of excessive computation with increased time and memory. Brandstetter et al. [22] recently showed that the non-linear equivariant Graph Neural Networks implemented in their model, which they term Steerable E(3) Equivariant Graph Neural Networks (SEGNN), achieve strong empirical results on small point clouds like the N-Body experiment or the QM9 dataset, but also on larger systems as in the OC20 dataset. One of their insights is that the construction of their (non-linear) SEGNN layer allows the model to better capture the local environment and enables a reduction of the radius cutoff when constructing the neighbour list for each central atom $i$ , since the Clebsch-Gordan tensor products between neighbouring nodes are computationally expensive. To circumvent this expensive computational cost, another line of research proposed to directly implement equivariant operations in the original Cartesian space, providing an efficient approach to preserve equivariance, as introduced in the $\mathrm{E}\left( n\right)$ -GNN [16], GVP [18, 23], PaiNN [17] and ET-Transformer [24] architectures, without relying on irreducible representations of the orthogonal group by means of the spherical harmonics basis as originally introduced in TFN and implemented in the e3nn framework [25].
82
+
83
+ Our proposed model also implements equivariant operations in the original Cartesian space and includes a continuous filter through the self-attention coefficients, which serve as a spatial- and content-based edge embedding in the message propagation, as opposed to the PaiNN model where the filter solely depends on the distance. Additionally, our model constructs vector features from the given point cloud data and leverages geometrical products that are efficient to compute. The $\mathrm{E}\left( n\right)$ -GNN architecture does not learn type-1 vector features with several channels, but only updates given type-1 features ${}^{2}$ through a weighted linear combination of such, where the (learnable) scalar weights are obtained from invariant embeddings. The GVP model, which was initially designed to work on macromolecular structures, includes a complex message function of concatenated node and edge features composed with a series of GVP-blocks that enable information exchange between type-0 and type-1 features through dot product reduction of vectors, with a potential disadvantage of discontinuities through non-smooth components for distances close to the cutoff.
84
+
85
+ ## 4 Proposed Model Architecture
86
+
87
+ ### 4.1 Input Embedding
88
+
89
+ We initially embed atoms of small molecules or proteins based on their element/amino acid type using a trainable look-up table through
90
+
91
+ $$
92
+ {s}_{i}^{\left( 0\right) } = \operatorname{embed}\left( {a}_{i}\right) ,
93
+ $$
94
+
95
+ which provides a starting (invariant) scalar representation of the node prior to the message passing. As in most cases no directional information for atoms is available, we initialize the vector features as a zero tensor ${v}_{i}^{\left( 0\right) } = 0 \in {\mathbb{R}}^{3 \times {F}_{v}}$ .
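+ A minimal PyTorch sketch of this initialization (module and argument names such as `num_atom_types` are our own placeholders, not part of the released code):

```python
import torch
import torch.nn as nn

class NodeEmbedding(nn.Module):
    """Initial node states: learned scalar embedding, zero vector features."""
    def __init__(self, num_atom_types: int, F_s: int, F_v: int):
        super().__init__()
        self.embed = nn.Embedding(num_atom_types, F_s)  # trainable look-up table
        self.F_v = F_v

    def forward(self, a: torch.Tensor):
        # a: (N,) integer atom / residue types
        s0 = self.embed(a)                              # (N, F_s) invariant scalar features
        v0 = torch.zeros(a.shape[0], 3, self.F_v,
                         device=a.device, dtype=s0.dtype)  # (N, 3, F_v) vector features
        return s0, v0
```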
96
+
97
+ ### 4.2 Edge Filter through Feature Attention
98
+
99
+ For the two-body interaction between neighbouring node(s) $j$ and central node $i$ , we implement a non-linear edge filter that depends on content-related information stored in the scalar features $\left( {{s}_{j},{s}_{i}}\right)$ and a radial basis expansion of the Euclidean distance ${d}_{ji} \leq c$ . We choose the (orthonormal) Bessel basis ${G}_{d} : \mathbb{R} \rightarrow {\mathbb{R}}^{K}$ that projects the distance onto $K$ basis values as introduced by Gasteiger et al. [9], together with their polynomial envelope function $\kappa : \left\lbrack {0, c}\right\rbrack \rightarrow (0,1\rbrack$ that smoothly transitions from 1 to 0 as the cutoff value $c$ is approached. The attention edge filter is computed through
100
+
101
+ $$
102
+ {e}_{ji}^{\left( l + 1\right) } = \left\lbrack {{s}_{i}^{\left( l\right) }\left| \right| {s}_{j}^{\left( l\right) }\parallel \kappa \left( {d}_{ji}\right) {G}_{d}\left( {d}_{ji}\right) }\right\rbrack \in {\mathbb{R}}^{2{F}_{s} + K}
103
+ $$
104
+
105
+ $$
106
+ {f}_{ji}^{\left( l + 1\right) } = \operatorname{MLP}\left( {e}_{ji}^{\left( l + 1\right) }\right) \in {\mathbb{R}}^{{F}_{s} + 3{F}_{v}}, \tag{4}
107
+ $$
108
+
109
+ where MLP refers to a 1-layer Multilayer-Perceptron with SiLU activation function [26]. The input to the MLP is a concatenation of scalar features as well as a radial basis expansion of the distance between nodes $j$ and $i$ scaled by $\kappa$ . The $\mathrm{{SO}}\left( 3\right)$ -invariant embedding ${f}_{ji}^{\left( l + 1\right) }$ represents the ${F}_{s} + 3{F}_{v}$ attention logits, which are further split into ${f}_{ji}^{\left( l + 1\right) } = {\left\lbrack {a}_{ji},{b}_{ji}\right\rbrack }^{\left( l + 1\right) }$ to be used as a non-linear filter
110
+
111
+ ---
112
+
113
+ ${}^{2}$ In the $\mathrm{E}\left( n\right)$ -GNN architecture, Cartesian coordinates of particles $p \in {\mathbb{R}}^{3}$ are updated.
114
+
115
+ ---
116
+
117
+ when propagating neighbouring features.
118
+
119
+ A novelty of our approach is that the attention coefficient between two vertices $j$ and $i$ is obtained per feature channel, rather than as a single scalar value for the entire embedding as is commonly done. The feature attention for the scalar embeddings is computed using the standard softmax
120
+
121
+ activation function
122
+
123
+ $$
124
+ {\alpha }_{ji} = \frac{\exp \left( {a}_{ji}\right) }{\mathop{\sum }\limits_{{{j}^{\prime } \in \mathcal{N}\left( i\right) }}\exp \left( {a}_{{j}^{\prime }i}\right) } \in {\left( 0,1\right) }^{{F}_{s}}, \tag{5}
125
+ $$
126
+
127
+ where the normalization in the denominator runs over all neighbours ${j}^{\prime }$ and the exponential function is applied componentwise.
128
+
129
+ The embedding ${b}_{ji} \in {\mathbb{R}}^{3{F}_{v}}$ is processed to create coefficients that serve as weights for a linear combination of vector quantities to compute the vector message from $j$ to $i$ , which we will describe in the following subsection.
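+ The following sketch illustrates Eqs. (4) and (5) in PyTorch. For brevity, a Gaussian radial basis and a cosine cutoff stand in for the Bessel basis and polynomial envelope of [9], and the exact MLP depth is our assumption:

```python
import torch
import torch.nn as nn

class EdgeAttention(nn.Module):
    """Invariant edge filter (Eq. 4) and per-channel softmax attention (Eq. 5) -- a sketch.
    A Gaussian RBF and cosine cutoff stand in for the Bessel basis / polynomial envelope of [9]."""
    def __init__(self, F_s: int, F_v: int, K: int, cutoff: float):
        super().__init__()
        self.F_s, self.F_v, self.cutoff = F_s, F_v, cutoff
        self.register_buffer("centers", torch.linspace(0.0, cutoff, K))
        self.gamma = K / cutoff
        self.mlp = nn.Sequential(nn.Linear(2 * F_s + K, F_s + 3 * F_v), nn.SiLU())

    def forward(self, s, d, edge_index):
        j, i = edge_index                                                   # messages flow j -> i
        rbf = torch.exp(-self.gamma * (d.unsqueeze(-1) - self.centers) ** 2)    # (E, K)
        env = 0.5 * (torch.cos(torch.pi * d / self.cutoff) + 1.0).unsqueeze(-1)  # smooth cutoff kappa
        f = self.mlp(torch.cat([s[i], s[j], env * rbf], dim=-1))            # (E, F_s + 3*F_v) logits
        a, b = f.split([self.F_s, 3 * self.F_v], dim=-1)
        # Eq. (5): channel-wise softmax over all neighbours j' of every target node i
        num = (a - a.max()).exp()                                           # shift logits for stability
        den = torch.zeros(s.shape[0], self.F_s, device=s.device).index_add_(0, i, num)
        alpha = num / den[i].clamp_min(1e-16)                               # (E, F_s)
        return alpha, b                                                     # b later gates the vector messages
```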
130
+
131
+ ### 4.3 Equivariant Message Propagation
132
+
133
+ We follow the idea of standard convolution, which is a linear transformation of the input, and compute the scalar feature message for central node $i$ as
134
+
135
+ $$
136
+ {m}_{i, s}^{\left( l + 1\right) } = \mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{\alpha }_{ji}^{\left( l + 1\right) } \odot {W}_{n}^{\left( l + 1\right) }{s}_{j}^{\left( l\right) }, \tag{6}
137
+ $$
138
+
139
+ where ${W}_{n}^{\left( l + 1\right) } \in {\mathbb{R}}^{{F}_{s} \times {F}_{s}}$ is a trainable weight matrix shared among all nodes and ${\alpha }_{ji}^{\left( l + 1\right) }$ the non-linear attention filter obtained in (5). In the context of atomistic neural network potentials (NNPs), the filter ${\alpha }_{ji}^{\left( l + 1\right) }$ is commonly implemented as an MLP that only takes the distance ${d}_{ji}$ (by means of a radial basis expansion) as input, as in SchNet [3], PaiNN [17] and NequIP [15], while recent NNPs such as Allegro [27] and BOTNet [28] implement edge filters that depend on the distance as well as the node content, e.g., the chemical elements, unifying the idea of MPNNs in the context of machine learning force fields. The recent work by Brandstetter et al. [22] analyzes modern 3D equivariant GNNs with the insight that non-linear message and non-linear update functions combined with their proposed steerable feature space lead to an improved model, which they term SEGNN. The SEGNN, in a similar spirit to Tensorfield Networks, can leverage higher-order geometric representations up to a maximal rotation order ${l}_{\max }$ through the spherical harmonics expansion of relative positions, which they take as steerable feature basis. Their proposed model implements steerable MLPs ${}^{3}$ in the message and update functions to leverage non-linearity and geometrically covariant information of the steerable features beyond $l = 0$ , i.e., scalar features, while our architecture is restricted to scalar information in these functions, albeit vector information is still processed in the layers and then reduced to a scalar by a dot product operation. Our proposed message function for scalar features in Eq. (6) can also be formulated as a linear transformation where the weight matrix depends on distances as well as hidden scalar information. To see this, we rewrite ${\alpha }_{ji}^{\left( l + 1\right) } \in {\left( 0,1\right) }^{{F}_{s}}$ as a matrix using the diagonal operator ${A}_{ji}^{\left( l + 1\right) } = \operatorname{diag}\left( {\alpha }_{ji}^{\left( l + 1\right) }\right) \in {\left( 0,1\right) }^{{F}_{s} \times {F}_{s}}$ and observe that the filter scales the (independent) weight matrix ${W}_{n}^{\left( l + 1\right) }$ , leading to the message propagation
140
+
141
+ $$
142
+ {m}_{i, s}^{\left( l + 1\right) } = \mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{A}_{ji}^{\left( l + 1\right) }{W}_{n}^{\left( l + 1\right) }{s}_{j}^{\left( l\right) } = \mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{W}_{{ji}, n}^{\left( l + 1\right) }{s}_{j}^{\left( l\right) },
143
+ $$
144
+
145
+ where ${W}_{{ji}, n}^{\left( l + 1\right) }$ defines the linear transformation matrix whose content depends on $\mathrm{{SO}}\left( 3\right)$ -invariant information through $\left( {{s}_{i}^{\left( l\right) },{s}_{j}^{\left( l\right) },{d}_{ji}}\right)$ , which can however still be interpreted as a non-linear convolution, as the ${A}_{ji}^{\left( l + 1\right) }$ weight matrix is obtained through an MLP and a softmax activation function. Similar to Brandstetter et al. [22], we call this a pseudo-linear transformation.
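+ In code, the pseudo-linear scalar message of Eq. (6) reduces to a channel-wise gated linear layer followed by a scatter-sum over incoming edges (a sketch; `W_n` is an ordinary `nn.Linear` and `alpha` the output of Eq. (5)):

```python
import torch
import torch.nn as nn

def scalar_message(s, alpha, edge_index, W_n: nn.Linear):
    """Eq. (6): m_{i,s} = sum_{j in N(i)} alpha_ji * (W_n s_j), evaluated for all nodes i at once."""
    j, i = edge_index
    msg = alpha * W_n(s)[j]                              # (E, F_s) channel-wise gated linear messages
    return torch.zeros_like(s).index_add_(0, i, msg)     # sum over the neighbours of every node i
```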
146
+
147
+ Building Equivariant Features. In many cases, no initial vector features are provided in raw point cloud data. However, when working with the backbone of proteins and representing a node as a residue through the sequence of atoms ${\left( {C}_{\alpha },{C}_{\beta }, O, N\right) }_{i}$ , initial vectorial (node) features that describe the local environment of each residue can be pre-computed as described by Ingraham et al. [6] and Jing et al. [18]. In a full-atom model, initial vector features for a node $i$ can be obtained by averaging over relative position vectors ${v}_{i,0} = \frac{1}{\left| \mathcal{N}\left( i\right) \right| }\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{p}_{ji} \in {\mathbb{R}}^{3}$ , which satisfies Eq. (3) due to linearity.
148
+
149
+ ---
150
+
151
+ ${}^{3}$ The weight matrices in steerable MLPs depend on geometric information, such as relative positions.
152
+
153
+ ---
154
+
155
+ In our work, we initialize the vectors as a zero tensor as described in Subsection 4.1, and in the first layer, equivariant features are obtained by utilizing normalized relative positions ${p}_{{ji}, n}$ to compute the interaction between central node $i$ and its neighbour $j$ . In the subsequent layers, we extend the set of vectors by (1) constructing vectors based on normalized relative positions again, (2) mixing existing vector channels from the previous iteration and (3) creating new vector quantities by making use of the cross product. We describe the three options in the following paragraphs.
156
+
157
+ (1) Utilizing normalized relative positions: we create equivariant vector features based on normalized relative position ${p}_{{ji}, n} = \frac{1}{{d}_{ji}}\left( {{p}_{i} - {p}_{j}}\right)$ as those provide directional information. Since we explicitly model scalar and vector features, each equipped with ${F}_{s}$ and ${F}_{v}$ channels, respectively, the tensor product offers a natural way to obtain an $l = 1$ feature, by combining an $l = 1$ and $l = 0$ feature, wherein we utilize the relative position ${p}_{{ji}, n}$ vector that obeys the rule stated in Eq. (3). Equivariant interactions between node $j$ and $i$ are computed through
158
+
159
+ $$
160
+ {v}_{{ji},0}^{\left( l + 1\right) } = {p}_{{ji}, n} \otimes {b}_{{ji},0}^{\left( l + 1\right) } = {p}_{{ji}, n}{b}_{{ji},0}^{\left( {l + 1}\right) \top } \in {\mathbb{R}}^{3 \times {F}_{v}}, \tag{7}
161
+ $$
162
+
163
+ which preserves $\mathrm{{SO}}\left( 3\right)$ equivariance due to the linearity of the tensor product. We note that the creation of 'initial' equivariant features in such a manner is also performed in architectures like $\left\lbrack {{12},{13},{15},{22}}\right\rbrack$ , just to name a few, that make use of irreducible representations of the $\mathrm{{SO}}\left( 3\right)$ group by means of the spherical harmonics and implement the Clebsch-Gordan tensor product $\left( { \otimes }_{cg}\right)$ , which allows the mixing of possibly higher-order embedding representations of type $l > 1$ . The $l = 1$ representation in Eq. (7) can be interpreted as ${F}_{v}$ scaled versions of the relative position ${p}_{{ji}, n}$ .
164
+
165
+ (2) In a similar fashion to the (independent) linear transformation of scalar channels, we mix the vector channels using a weight matrix ${W}_{v}^{\left( l + 1\right) } \in {\mathbb{R}}^{{F}_{v} \times {F}_{v}}$ , which preserves $\mathrm{{SO}}\left( 3\right)$ equivariance due to linearity,
166
+
167
+ $$
168
+ {v}_{n}^{\left( l + 1\right) } = {v}^{\left( l\right) }{W}_{v}^{\left( l + 1\right) },
169
+ $$
170
+
171
+ and is shared among all nodes. For a particular neighbouring node $j$ , we scale the linearly transformed vectors
172
+
173
+ $$
174
+ {v}_{{ji},1}^{\left( l + 1\right) } = {b}_{{ji},1}^{\left( l + 1\right) } \odot {v}_{n, j}^{\left( l + 1\right) }, \tag{8}
175
+ $$
176
+
177
+ which can be interpreted as a (pseudo-linear) gating of previously mixed vectors. (3) To capture more geometric information while restricting the representation to be of type $l = 1$ , we utilize the vector cross product $c = \left( {a \times b}\right) \in {\mathbb{R}}^{3}$ between two $l = 1$ representations $a$ and $b$ , which satisfies the following property under rotations
178
+
179
+ $$
180
+ {Ra} \times {Rb} = R\left( {a \times b}\right) .
181
+ $$
182
+
183
+ The output of the cross product $a \times b$ defines a vector $c$ that is perpendicular to the plane spanned by $a$ and $b$ . In our network architecture, we utilize this by computing the cross product on the same channels of the previous layer's vector features of nodes $i$ and $j$ as
184
+
185
+ $$
186
+ {\widetilde{v}}_{{ji},2}^{\left( l + 1\right) } = \left( {{v}_{i}^{\left( l\right) } \times {v}_{j}^{\left( l\right) }}\right) \in {\mathbb{R}}^{3 \times {F}_{v}},
187
+ $$
188
+
189
+ to reduce the computational complexity. As we wish to implement an efficient architecture that can operate on large (protein) point clouds, a cross product that mixes all possible vectors for each channel would result in a representation $\in {\mathbb{R}}^{3 \times {F}_{v} \times {F}_{v}}$ , which would need to be reduced, e.g., by computing the average along the last axis.
190
+
191
+ We highlight that recent equivariant GNNs which operate in the original Cartesian space, such as GVP, PaiNN or the ET-Transformer, do not include the cross product in their architecture and are restricted in the creation of vector features that could span the entire ${\mathbb{R}}^{3}$ . These architectures make use of steps (1) and (2) only. For example, when all atoms are placed in the ${xy}$ -plane, using steps (1) and (2) would always create vectors in the ${xy}$ -plane, while the coordinate on the $z$ axis is always 0. By leveraging the cross product, vectors in the $z$ direction can be computed without increasing the rank order ${}^{4}$ , as the worked example below illustrates.
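+ As a concrete check, take two in-plane unit vectors $a = {\left( 1,0,0\right) }^{\top }$ and $b = {\left( 0,1,0\right) }^{\top }$ . Steps (1) and (2) can only return linear combinations of such in-plane vectors, whereas
+
+ $$
+ a \times b = {\left( 0,0,1\right) }^{\top }
+ $$
+
+ points out of the plane along the $z$ axis.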
192
+
193
+ ---
194
+
195
+ ${}^{4}$ Two rank-1 Cartesian tensors, i.e., two geometric vectors, could also be combined by computing their tensor product, which would result in a rank-2 Cartesian tensor with 9 matrix elements. This rank-2 Cartesian tensor contains the elements of the cross product in its antisymmetric part.
196
+
197
+ ---
198
+
199
+ In a similar fashion to Eqs. (7) and (8), each channel of the representation ${\widetilde{v}}_{{ji},2}^{\left( l + 1\right) }$ is weighted by the $\mathrm{{SO}}\left( 3\right)$ -invariant non-linear filter ${b}_{{ji},2}^{\left( l + 1\right) } \in {\mathbb{R}}^{{F}_{v}}$ to obtain
200
+
201
+ $$
202
+ {v}_{{ji},2}^{\left( l + 1\right) } = {b}_{{ji},2}^{\left( l + 1\right) } \odot {\widetilde{v}}_{{ji},2}^{\left( l + 1\right) }, \tag{9}
203
+ $$
204
+
205
+ Finally, we define the vector message from node $j$ to central node $i$ as the sum of the three components in (7) to (9) and aggregate it across all neighbouring nodes $j \in \mathcal{N}\left( i\right)$ to obtain the vector message
206
+
207
+ $$
208
+ {m}_{i, v}^{\left( l + 1\right) } = \frac{1}{\left| \mathcal{N}\left( i\right) \right| }\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}\left( {{v}_{{ji},0}^{\left( l + 1\right) } + {v}_{{ji},1}^{\left( l + 1\right) } + {v}_{{ji},2}^{\left( l + 1\right) }}\right) , \tag{10}
209
+ $$
210
+
211
+ which results in new weighted geometric vectors by utilizing the (static) relative positions as well as neighbouring vector features and, lastly, normal vectors obtained through the cross product. We provide the full proof of $\mathrm{{SO}}\left( 3\right)$ equivariance of Eq. (10) in Appendix B.
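+ A sketch of Eqs. (7)-(10) in PyTorch (argument names are ours; `W_v` is the channel-mixing linear layer and `b` contains the invariant gates from Eq. (4)):

```python
import torch
import torch.nn as nn

def vector_message(v, b, p_n, edge_index, W_v: nn.Linear, F_v: int, n_nodes: int):
    """Eqs. (7)-(10): equivariant vector message, averaged over the neighbourhood N(i).
    v: (N, 3, F_v) vectors, b: (E, 3*F_v) invariant gates, p_n: (E, 3) normalised relative positions."""
    j, i = edge_index
    b0, b1, b2 = b.split(F_v, dim=-1)                          # (E, F_v) each
    v0 = p_n.unsqueeze(-1) * b0.unsqueeze(1)                   # Eq. (7): outer product p_n (x) b0
    v1 = b1.unsqueeze(1) * W_v(v)[j]                           # Eq. (8): gated channel mixing
    v2 = b2.unsqueeze(1) * torch.cross(v[i], v[j], dim=1)      # Eq. (9): gated cross product
    msg = v0 + v1 + v2                                         # (E, 3, F_v)
    m_v = torch.zeros(n_nodes, 3, F_v, dtype=v.dtype, device=v.device).index_add_(0, i, msg)
    deg = torch.zeros(n_nodes, dtype=v.dtype, device=v.device).index_add_(
        0, i, torch.ones(i.shape[0], dtype=v.dtype, device=v.device))
    return m_v / deg.clamp_min(1.0).view(-1, 1, 1)             # Eq. (10): mean over neighbours
```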
212
+
213
+ Equivariant Update Function. After obtaining the aggregated message for central node $i$ in the representation ${m}^{\left( l + 1\right) } \in {\mathbb{R}}^{{F}_{s}} \times {\mathbb{R}}^{3 \times {F}_{v}}$ as (pseudo-)linear transformations of neighbouring node features, we deploy a residual connection as an intermediate update step
214
+
215
+ $$
216
+ {\widetilde{{s}_{i}}}^{\left( l + 1\right) } = {s}_{i}^{\left( l\right) } + {m}_{i, s}^{\left( l + 1\right) },\text{ and }{\widetilde{{v}_{i}}}^{\left( l + 1\right) } = {v}_{i}^{\left( l\right) } + {m}_{i, v}^{\left( l + 1\right) }
217
+ $$
218
+
219
+ ![01963ed2-5f59-76f7-8b2d-366531bcefc3_6_1031_749_440_511_0.jpg](images/01963ed2-5f59-76f7-8b2d-366531bcefc3_6_1031_749_440_511_0.jpg)
220
+
221
+ Figure 2: A gated equivariant MLP that transforms scalar and vector features into a new representation. We use this block as the update function ${U}_{l}\left( \cdot \right)$ .
222
+
223
+ and in the update layer, we implement an equivariant non-linear transformation inspired by the gated non-linearities proposed by [29] and used in [17], with minor modifications as shown in Figure 2. Notably, the scalar features receive geometric information by concatenating the norms of linearly transformed vector features, while a 1-layer scalar MLP transforms the combined embeddings to update the scalar states and to produce non-linear weights that are used to reweight the vector features. We apply these weights by element-wise multiplication with linearly transformed vector features, as shown on the right, which can also be interpreted as a variant of the Gated Linear Unit [30, 31], followed by a linear layer to implement an equivariant MLP for vector features.
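+ A sketch of this gated equivariant update in PyTorch (the exact layer widths of Figure 2 are our assumption):

```python
import torch
import torch.nn as nn

class GatedEquivariantUpdate(nn.Module):
    """Gated equivariant update U_l, in the spirit of Fig. 2 (a sketch; details may differ)."""
    def __init__(self, F_s: int, F_v: int):
        super().__init__()
        self.v_down = nn.Linear(F_v, F_v, bias=False)   # norms of these vectors feed the scalar track
        self.v_up = nn.Linear(F_v, F_v, bias=False)     # vectors that the learned gates re-weight
        self.v_out = nn.Linear(F_v, F_v, bias=False)
        self.mlp = nn.Sequential(nn.Linear(F_s + F_v, F_s + F_v), nn.SiLU())

    def forward(self, s, v):
        # s: (N, F_s) invariant scalars, v: (N, 3, F_v) equivariant vectors
        norms = torch.linalg.norm(self.v_down(v), dim=1)          # (N, F_v), SO(3)-invariant
        s_new, gate = self.mlp(torch.cat([s, norms], dim=-1)).split(
            [s.shape[-1], norms.shape[-1]], dim=-1)
        v_new = self.v_out(gate.unsqueeze(1) * self.v_up(v))      # gating preserves equivariance
        return s_new, v_new
```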
224
+
225
+ ## 5 Experiments and Results
226
+
227
+ We test the efficacy of our proposed EQGAT model on three
228
+
229
+ publicly available molecular benchmark datasets which pose significant challenges for the development of efficient and accurate prediction models in protein design.
230
+
231
+ ### 5.1 ATOM3D
232
+
233
+ The ATOM3D benchmark [32] provides datasets for representation learning on atomic-level 3D molecular structures of different kinds, i.e., proteins, RNAs, small molecules and complexes. Since proteins perform specific biological functions essential for all living organisms and hence play a key role when investigating the most fundamental questions in the life sciences, we focus our experiments on learning problems often encountered in structural biology, which come with varying difficulties due to data scarcity and varying structural sizes. We use the provided training, validation and test splits from ATOM3D and refer the interested reader to the original work of Townshend et al. [32] for more details. For all benchmarks, we compare against the Baseline CNN and GNN models provided by Townshend et al. [32] from ATOM3D and the GVP-GNN reported in [23], and we run experiments for SchNet [3], an $\mathrm{{SO}}\left( 3\right)$ invariant $\mathrm{{GNN}}$ architecture that has shown strong performance on small molecule prediction tasks, PaiNN [17], SchNet's improved SO(3) equivariant successor, and the recently proposed SEGNN [22], which leverages higher-order representations by means of irreducible representations and Clebsch-Gordan tensor products, using their official code base.
234
+
235
+ Table 1: Benchmark results on three ATOM3D tasks. We report the results for the Baseline models from [32] and GVP-GNN [23]. We run our own experiments with the SchNet, PaiNN, SEGNN and our EQGAT model and report metrics averaged over 3 runs. For the SEGNN model we only report the results of a single run due to the longer training time on the PSR and RSR datasets.
236
+
237
+ ${R}_{S}$ stands for Spearman Rank Correlation, and RMSE abbreviates Root Mean Square Error.
238
+
239
+ <table><tr><td rowspan="2">Tasks Metric</td><td colspan="2">PSR (↑)</td><td colspan="2">RSR (↑)</td><td>LBA (↓)</td></tr><tr><td>Mean ${R}_{S}$</td><td>Global ${R}_{S}$</td><td>Mean ${R}_{S}$</td><td>Global ${R}_{S}$</td><td>RMSE</td></tr><tr><td>CNN</td><td>${0.431} \pm {0.013}$</td><td>${0.789} \pm {0.017}$</td><td>${0.264} \pm {0.046}$</td><td>${0.372} \pm {0.027}$</td><td>${1.416} \pm {0.021}$</td></tr><tr><td>GNN</td><td>${0.515} \pm {0.010}$</td><td>${0.755} \pm {0.004}$</td><td>${0.234} \pm {0.006}$</td><td>${0.512} \pm {0.049}$</td><td>${1.570} \pm {0.025}$</td></tr><tr><td>GVP-GNN</td><td>${0.511} \pm {0.010}$</td><td>${0.845} \pm {0.008}$</td><td>${0.211} \pm {0.142}$</td><td>${0.330} \pm {0.054}$</td><td>${1.594} \pm {0.073}$</td></tr><tr><td>SchNet</td><td>${0.448} \pm {0.016}$</td><td>${0.784} \pm {0.013}$</td><td>${0.247} \pm {0.029}$</td><td>${0.273} \pm {0.017}$</td><td>${1.522} \pm {0.015}$</td></tr><tr><td>PaiNN</td><td>${0.462} \pm {0.015}$</td><td>${0.809} \pm {0.003}$</td><td>${0.270} \pm {0.062}$</td><td>${0.462} \pm {0.064}$</td><td>${1.507} \pm {0.033}$</td></tr><tr><td>SEGNN</td><td>0.474</td><td>0.833</td><td>-0.099</td><td>0.252</td><td>${1.450} \pm {0.011}$</td></tr><tr><td>EQGAT</td><td>${0.491} \pm {0.008}$</td><td>${0.847} \pm {0.006}$</td><td>${0.316} \pm {0.029}$</td><td>${0.404} \pm {0.096}$</td><td>${1.440} \pm {0.027}$</td></tr></table>
240
+
241
+ For SchNet, PaiNN and our proposed EQGAT architecture, we implement a 5-layer GNN with ${F}_{s} = {100}$ scalar channels and ${F}_{v} = {16}$ vector channels for the PSR and RSR benchmarks, as these benchmarks consist of more training samples and comprise larger biomolecules. For the Ligand Binding Affinity (LBA) task, we utilize a 3-layer GNN with the same number of scalar and vector channels. For the SEGNN architecture, we implement a 3-layer GNN with (100, 16, 8) channels for the embeddings of type $l = \left( {0,1,2}\right)$ , which transform according to the irreducible representations of the corresponding order, preserving $\mathrm{{SO}}\left( 3\right)$ equivariance. All architectures apply a linear layer on the GNN encoder's scalar embeddings, followed by mean pooling and a 1-layer MLP with SiLU activation function and a final linear layer to predict the output target. The edges in the point clouds are constructed based on a radius cutoff of 4.5 Å. All graphs are considered as full-atom graphs, i.e., the initial node feature is determined by the chemical element.
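+ The shared prediction head described above can be sketched as follows (a hedged re-implementation; the hidden width is our assumption):

```python
import torch
import torch.nn as nn

class GraphReadout(nn.Module):
    """Linear layer on scalar embeddings -> mean pooling -> 1-layer SiLU MLP -> linear output."""
    def __init__(self, F_s: int = 100, hidden: int = 100, out_dim: int = 1):
        super().__init__()
        self.pre = nn.Linear(F_s, hidden)
        self.post = nn.Sequential(nn.Linear(hidden, hidden), nn.SiLU(), nn.Linear(hidden, out_dim))

    def forward(self, s, batch, num_graphs):
        # s: (N, F_s) node scalars, batch: (N,) graph index of every node
        h = self.pre(s)
        pooled = torch.zeros(num_graphs, h.shape[-1], device=s.device,
                             dtype=s.dtype).index_add_(0, batch, h)
        counts = torch.bincount(batch, minlength=num_graphs).clamp_min(1).unsqueeze(-1)
        return self.post(pooled / counts)                     # mean pooling, then MLP head
```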
242
+
243
+ The Protein and RNA Structure Ranking tasks (PSR / RSR) in ATOM3D are both regression tasks with the objective to predict the quality score in terms of the Global Distance Test (GDT_TS) or the Root-Mean-Square Deviation (RMSD) of generated protein and RNA models with respect to their experimentally determined ground-truth structures. The ability to reliably rank a biopolymer structure requires a model to accurately learn the atomic environments such that discrepancies between a ground-truth state and its corrupted version can be distinguished. We evaluated our model on the biopolymer ranking tasks and obtained good results on the current benchmark, as reported in Table 1 in terms of Spearman rank correlation. Our proposed model performs particularly well on the PSR task, outperforming the GVP-GNN [23] on the Global Rank Spearman correlation on the test set, while our model is more parameter efficient (383K vs. 640K). We believe our model could be further improved by additional hyperparameter tuning, e.g., by increasing the number of scalar or vector channels, which we did not do in this study in order to keep the comparison against the baseline models fair.
244
+
245
+ We noticed that the RSR benchmark was particularly difficult to validate, as only a few dozen experimentally determined RNA structures exist to date, and the structural models generated in the ATOM3D framework are labeled with the RMSD to their native structure, which is known to be sensitive to outlier regions, for example caused by inadequate modelling of loop regions [33], while the GDT_TS metric might be a better-suited target for predicting a ranking of generated RNA structures, as in the PSR benchmark.
246
+
247
+ Another challenging and important task for drug discovery projects is estimating the binding strength (affinity) of a candidate drug's atomistic interaction with a target protein. We use the ligand binding affinity (LBA) dataset and find that among the GNN architectures, our proposed model obtains the best results, while also being computationally cheap and fast to train. The best performing model in the LBA task is a 3D CNN model which works on the joint protein-ligand representation using voxel space and enforcing equivariance through data augmentation. The inferior performance of all equivariant GNNs might be caused by the need for larger filters to better capture the locality and many-body effects, where 3D CNNs have an advantage when using voxel representations, while GNNs commonly capture 2-body effects. Furthermore, as all GNN models jointly represent ligand and protein as one graph by connecting vertices through a distance cutoff of 4.5 Å, we believe that such a union leads to a loss of information for distinguishing whether an atom belongs to the ligand or the protein. A promising direction to investigate is to use separate ligand and protein GNN encoders and merge the two embeddings prior to the binding affinity prediction, similar to Graph Matching Networks [34] and recently realized by Stärk et al. [35] in a slightly different context.
248
+
249
+ Notably, our proposed EQGAT architecture performs on par with the SEGNN, which implements geometric tensors of higher order, i.e., of rotation order $l = 2$ , transforming as a rank-2 Cartesian tensor. We believe that including the cross product in our vector message in (10) allows the model to capture more of the geometric detail of a possible protein-ligand binding pose, which is needed to accurately predict the binding affinity.
250
+
251
+ Model Efficiency. We assess the model efficiency of EQGAT in terms of computation time as well as trainable parameters and compare against SchNet, PaiNN and SEGNN on the LBA, PSR and RSR benchmarks. These datasets have on average 408, 1624, and 2390 nodes per graph with 9180, 26756 and 44233 directed edges, respectively, for the training sets of LBA, PSR and RSR.
252
+
253
+ Table 2: Comparison on model efficiency when passing a batch of 10 macromolecular structures.
254
+
255
+ <table><tr><td>Dataset</td><td>Model (# Param.)</td><td>Inference Time [ms]</td></tr><tr><td rowspan="4">LBA</td><td>EQGAT (238K)</td><td>11.94</td></tr><tr><td>SchNet (240K)</td><td>8.25</td></tr><tr><td>PaiNN (379K)</td><td>10.66</td></tr><tr><td>SEGNN (238K)</td><td>89.53</td></tr><tr><td rowspan="4">PSR</td><td>EQGAT (383K)</td><td>49.96</td></tr><tr><td>SchNet (240K)</td><td>18.36</td></tr><tr><td>PaiNN (379K)</td><td>18.58</td></tr><tr><td>SEGNN (238K)</td><td>255.44</td></tr><tr><td rowspan="4">RSR</td><td>EQGAT (383K)</td><td>75.45</td></tr><tr><td>SchNet (240K)</td><td>27.27</td></tr><tr><td>PaiNN (379K)</td><td>26.98</td></tr><tr><td>SEGNN (238K)</td><td>390.69</td></tr></table>
256
+
257
+ As these datasets consist of graphs with up to thousands of atoms, computationally and memory-efficient models are preferred, such that batches of graphs can be stored in GPU memory and processed quickly during training. We measure the inference time of a random batch comprising 10 macromolecular structures on an NVIDIA V100 GPU. As shown in Table 2, SchNet and PaiNN are both parameter efficient and achieve the fastest inference time on a forward pass, while our proposed EQGAT is slower, mainly due to the softmax attention normalization in the denominator of Eq. (5), which could be improved if the softmax attention with its normalization were replaced by a sigmoid activation function to obtain soft attention weights. This step, however, results in an edge filter ${\alpha }_{ji}$ that does not sum to 1 when iterating over all neighbours $j$ . The SEGNN model has the longest runtime on the forward pass across the 3 datasets. This is mostly attributed to the Clebsch-Gordan tensor products, which can be very expensive in learning tasks that involve proteins, as the CG product is always performed on edges.
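+ The timing protocol can be reproduced with a few lines of PyTorch; the exact warm-up and synchronisation scheme below is our assumption rather than the one used for Table 2:

```python
import time
import torch

@torch.no_grad()
def forward_time_ms(model, batch, device="cuda"):
    """Wall-clock time of one forward pass on a batch of graphs, in milliseconds."""
    model = model.to(device).eval()
    batch = batch.to(device)
    for _ in range(3):                     # warm-up passes (kernel selection, caching)
        model(batch)
    torch.cuda.synchronize()
    start = time.perf_counter()
    model(batch)
    torch.cuda.synchronize()               # wait until all GPU work has finished
    return (time.perf_counter() - start) * 1e3
```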
258
+
259
+ ## 6 Conclusion, Limitations and Future Work
260
+
261
+ In this work, we introduce a novel attention-based equivariant graph neural network for the prediction of properties of large biomolecules. Our proposed architecture makes use of rotationally equivariant features in its intermediate layers to faithfully represent the geometry of the data, while being computationally efficient, as all equivariant functions are directly implemented in the original Cartesian space. Currently, our proposed model requires more FLOPs and is therefore slower than recent equivariant GNNs that operate in Cartesian space, such as PaiNN, which might be a limitation but can be further investigated and possibly improved if the standard softmax normalization is modified and model performance is not harmed by such a change. As our proposed model operates on Cartesian tensors and we restrict the representation to be of rank 1 only, a generally promising future direction of investigation is the implementation of Cartesian equivariant GNNs that leverage higher-rank tensors in their layers. As it is to date unclear how much improvement higher-order Cartesian tensors bring for learning tasks that involve large biomolecular systems, we hope that our work and open-source code will be useful for the graph learning and computational biology community.
262
+
263
+ ## Code Availability
264
+
265
+ We provide the implementation of our model and experiments at https://anonymous.4open.science/r/eqgat-3A3C/README.md. We use PyTorch [36] as the Deep Learning framework and PyTorch Geometric [37] to implement our GNNs.
266
+
267
+ ## References
268
+
269
+ [1] Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinícius Flores Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Çaglar Gülçehre, H. Francis Song, Andrew J. Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey R. Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matthew M. Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. Relational inductive biases, deep learning, and graph networks. CoRR, abs/1806.01261, 2018. URL http://arxiv.org/abs/1806.01261.1, 3
270
+
271
+ [2] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges, 2021. 1, 3
272
+
273
+ [3] Kristof Schütt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert Müller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/303ed4c69846ab36c2904d3ba8573050-Paper.pdf.1, 5, 7
274
+
275
+ [4] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1263-1272. PMLR, 06-11 Aug 2017. URL https://proceedings.mlr.press/v70/gilmer17a.html.1, 2, 3
276
+
277
+ [5] Alex Fout, Jonathon Byrd, Basir Shariat, and Asa Ben-Hur. Protein interface prediction using graph convolutional networks. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/f507783927f2ec2737ba40afbd17efb5-Paper.pdf.1, 3
278
+
279
+ [6] John Ingraham, Vikas Garg, Regina Barzilay, and Tommi Jaakkola. Generative models for graph-based protein design. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/ file/f3a4ff4839c56a5f460c88cce3666a2b-Paper.pdf. 3, 5
280
+
281
+ [7] Federico Baldassarre, David Menéndez Hurtado, Arne Elofsson, and Hossein Azizpour. GraphQA: protein model quality assessment using graph convolutional networks. Bioinformat-ics, 37(3):360-366, 08 2020. ISSN 1367-4803. doi: 10.1093/bioinformatics/btaa714. URL https://doi.org/10.1093/bioinformatics/btaa714.
282
+
283
+ [8] Pedro Hermosilla, Marco Schäfer, Matej Lang, Gloria Fackelmann, Pere-Pau Vázquez, Barbora Kozlikova, Michael Krone, Tobias Ritschel, and Timo Ropinski. Intrinsic-extrinsic convolution and pooling for learning on $3\mathrm{\;d}$ protein structures. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=10mSUROpwY.1
284
+
285
+ [9] Johannes Gasteiger, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=B1eWbxStPH.1,4
286
+
287
+ [10] Johannes Gasteiger, Florian Becker, and Stephan Günnemann. Gemnet: Universal directional graph neural networks for molecules. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 6790-6802. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/35cf8659cfcb13224cbd47863a34fc58-Paper.pdf.
288
+
289
+ [11] Yi Liu, Limei Wang, Meng Liu, Yuchao Lin, Xuan Zhang, Bora Oztekin, and Shuiwang Ji. Spherical message passing for $3\mathrm{\;d}$ molecular graphs. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=givsRXsOt9r.1
290
+
291
+ [12] Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for $3\mathrm{\;d}$ point clouds,2018.1,3,6
292
+
293
+ [13] Brandon Anderson, Truong Son Hy, and Risi Kondor. Cormorant: Covariant molecular neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox,
294
+
295
+ and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/ file/03573b32b2746e6e8ca98b9123f2249b-Paper.pdf. 3, 6
296
+
297
+ [14] Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. Se(3)-transformers: 3d roto-translation equivariant attention networks. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1970-1981. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-Paper.pdf.3
298
+
299
+ [15] Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E. Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature Communications, 13 (1):2453, 2022. doi: 10.1038/s41467-022-29939-5. URL https://doi.org/10.1038/ s41467-022-29939-5.1,3,5,6
300
+
301
+ [16] Víctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 9323-9332. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/ satorras21a.html. 2,4
302
+
303
+ [17] Kristof Schütt, Oliver Unke, and Michael Gastegger. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 9377-9388. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/schutt21a.html.4, 5, 7
304
+
305
+ [18] Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael John Lamarre Townshend, and Ron Dror. Learning from protein structure with geometric vector perceptrons. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum? id=1YLJDvSx6J4. 3, 4, 5
306
+
307
+ [19] C. Deng, O. Litany, Y. Duan, A. Poulenard, A. Tagliasacchi, and L. Guibas. Vector neurons: A general framework for so(3)-equivariant networks. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 12180-12189, Los Alamitos, CA, USA, oct 2021. IEEE Computer Society. doi: 10.1109/ICCV48922.2021.01198. URL https://doi.ieeecomputersociety.org/10.1109/ICCV48922.2021.01198.2
308
+
309
+ [20] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009. doi: 10.1109/TNN.2008.2005605. 3
310
+
311
+ [21] Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3165- 3176. PMLR, 13-18 Jul 2020. URL https://proceedings.mlr.press/v119/finzi20a.html.3
312
+
313
+ [22] Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J Bekkers, and Max Welling. Geometric and physical quantities improve e(3) equivariant message passing. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum? id=_xwr8g0BeV1.4,5,6,7
314
+
315
+ [23] Bowen Jing, Stephan Eismann, Pratham N. Soni, and Ron O. Dror. Equivariant graph neural networks for $3\mathrm{\;d}$ macromolecular structure,2021.4,7,8
316
+
317
+ [24] Philipp Thölke and Gianni De Fabritiis. Equivariant transformers for neural network based molecular potentials. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=zNHzqZ9wrRB.4
318
+
319
+ [25] Mario Geiger, Tess Smidt, Alby M., Benjamin Kurt Miller, Wouter Boomsma, Bradley Dice, Kostiantyn Lapchevskyi, Maurice Weiler, Michał Tyszkiewicz, Simon Batzner, Dylan Madisetti, Martin Uhrin, Jes Frellsen, Nuri Jung, Sophia Sanborn, Mingjian Wen, Josh Rackers, Marcel Rød, and Michael Bailey. Euclidean neural networks: e3nn, April 2022. URL https://doi.org/10.5281/zenodo.6459381.4
320
+
321
+ [26] Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Searching for activation functions, 2017. URL https://arxiv.org/abs/1710.05941.4
322
+
323
+ [27] Albert Musaelian, Simon Batzner, Anders Johansson, Lixin Sun, Cameron J. Owen, Mordechai Kornbluth, and Boris Kozinsky. Learning local equivariant representations for large-scale atomistic dynamics, 2022. URL https://arxiv.org/abs/2204.05249.5
324
+
325
+ [28] Ilyes Batatia, Simon Batzner, Dávid Péter Kovács, Albert Musaelian, Gregor N. C. Simm, Ralf Drautz, Christoph Ortner, Boris Kozinsky, and Gábor Csányi. The design space of e(3)- equivariant atom-centered interatomic potentials, 2022. URL https://arxiv.org/abs/ 2205.06643.5
326
+
327
+ [29] Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ 488e4104520c6aab692863cc1dba45af-Paper.pdf. 7
328
+
329
+ [30] Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 933-941. PMLR, 06-11 Aug 2017. URL https://proceedings.mlr.press/v70/dauphin17a.html.7
330
+
331
+ [31] Noam Shazeer. Glu variants improve transformer, 2020. URL https://arxiv.org/abs/ 2002.05202.7
332
+
333
+ [32] Raphael John Lamarre Townshend, Martin Vögele, Patricia Adriana Suriana, Alexander Derry, Alexander Powers, Yianni Laloudakis, Sidhika Balachandar, Bowen Jing, Brandon M. Anderson, Stephan Eismann, Risi Kondor, Russ Altman, and Ron O. Dror. ATOM3d: Tasks on molecules in three dimensions. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021. URL https://openreview.net/forum? id=FkDZLpK1M12.7,8
334
+
335
+ [33] Adam Zemla. Lga - a method for finding 3d similarities in protein structures. Nucleic acids research, 31:3370-4, 08 2003. doi: 10.1093/nar/gkg571. 8
336
+
337
+ [34] Yujia Li, Chenjie Gu, Thomas Dullien, Oriol Vinyals, and Pushmeet Kohli. Graph matching networks for learning the similarity of graph structured objects, 2019. 9
338
+
339
+ [35] Hannes Stärk, Octavian Ganea, Lagnajit Pattanaik, Dr.Regina Barzilay, and Tommi Jaakkola. EquiBind: Geometric deep learning for drug binding structure prediction. In Kamalika Chaud-huri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 20503-20521. PMLR, 17-23 Jul 2022. URL https://proceedings.mlr.press/v162/stark22b.html.9
340
+
341
+ [36] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché- Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/ 2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf. 9
342
+
343
+ [37] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. 9
344
+
345
+ [38] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.13
346
+
347
+ ## A Appendix
348
+
349
+ ## Full Model Details and Hyperparameters
350
+
351
+ All EQGAT models in this paper were trained on a single Nvidia Tesla V100 GPU.
352
+
353
+ Table 3: Description of architectural parameters on the ATOM3D benchmarks.
354
+
355
+ <table><tr><td>Parameter</td><td>LBA</td><td>PSR</td><td>RSR</td></tr><tr><td>Learning rate (lr.)</td><td>${10}^{-4}$</td><td>${10}^{-4}$</td><td>${10}^{-4}$</td></tr><tr><td>Maximum epochs</td><td>20</td><td>30</td><td>30</td></tr><tr><td>Lr. patience</td><td>10</td><td>10</td><td>10</td></tr><tr><td>Lr. decay factor</td><td>0.75</td><td>0.75</td><td>0.75</td></tr><tr><td>Batch size</td><td>16</td><td>16</td><td>16</td></tr><tr><td>Num. layers</td><td>3</td><td>5</td><td>5</td></tr><tr><td>Num. RBFs</td><td>32</td><td>32</td><td>32</td></tr><tr><td>Cutoff [A]</td><td>4.5</td><td>4.5</td><td>4.5</td></tr><tr><td>Scalar channels ${F}_{s}$</td><td>100</td><td>100</td><td>100</td></tr><tr><td>Vector channels ${F}_{v}$</td><td>16</td><td>16</td><td>16</td></tr><tr><td>Num. parameters</td><td>238k</td><td>383k</td><td>383k</td></tr></table>
356
+
357
358
+
359
+ We used the ADAM optimizer [38]; apart from the learning rate defined above, we kept all other standard hyperparameter settings from the PyTorch library.
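+ The corresponding optimizer setup is straightforward; we assume the learning-rate patience and decay factor of Table 3 refer to a `ReduceLROnPlateau` schedule on the validation loss:

```python
import torch

model = torch.nn.Linear(8, 1)   # stand-in for any of the trained GNN models
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)          # all other Adam defaults kept
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min",
                                                       factor=0.75, patience=10)
```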
360
+
361
+ ## B Proof of Equivariance
362
+
363
+ We prove the rotation equivariance of Eq. (10), which consists of the sum of three vector components and is displayed here again
364
+
365
+ $$
366
+ {m}_{i, v}^{\left( l + 1\right) } = \frac{1}{\left| \mathcal{N}\left( i\right) \right| }\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}\left( {{v}_{{ji},0}^{\left( l + 1\right) } + {v}_{{ji},1}^{\left( l + 1\right) } + {v}_{{ji},2}^{\left( l + 1\right) }}\right) .
367
+ $$
368
+
369
+ As the sum is a linear function, it suffices to show that each summand $\left( {{v}_{{ji},0},{v}_{{ji},1},{v}_{{ji},2}}\right)$ is equivariant. For brevity, we omit all top indices. The first term is computed as the tensor product of an $l = 1$ representation and an $l = 0$ representation through
370
+
371
+ $$
372
+ {v}_{{ji},0} = {p}_{{ji}, n} \otimes {b}_{{ji},0} = {p}_{{ji}, n}{b}_{{ji},0}^{\top } \in {\mathbb{R}}^{3 \times {F}_{v}},
373
+ $$
374
+
375
+ where ${b}_{{ji},0} \in {\mathbb{R}}^{{F}_{v}}$ is an $\mathrm{{SO}}\left( 3\right)$ -invariant representation, i.e. a scalar representation with ${F}_{v}$ channels, and ${p}_{{ji}, n} \in {S}_{2} \subset {\mathbb{R}}^{3}$ a normalized relative vector, which lies on the 2-dimensional sphere.
376
+
377
+ If the point cloud is rotated, as defined in Eq. (3), (relative) position as well as vector features change
378
+
379
+ to
380
+
381
+ $$
382
+ p\overset{R}{ \rightarrow }{Rp}
383
+ $$
384
+
385
+ $$
386
+ v\overset{R}{ \rightarrow }{Rv}
387
+ $$
388
+
389
+ while the cross product between two vector features ${v}_{0},{v}_{1}$ commutes with rotation, resulting in the
390
+
391
+ property
392
+
393
+ $$
394
+ \left( {R{v}_{0} \times R{v}_{1}}\right) = R\left( {{v}_{0} \times {v}_{1}}\right) .
395
+ $$
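+ This identity is easy to verify numerically for a random proper rotation (a small self-contained check, not part of the original manuscript code):

```python
import torch

torch.manual_seed(0)
Q, _ = torch.linalg.qr(torch.randn(3, 3, dtype=torch.float64))
R = Q if torch.det(Q) > 0 else -Q          # force det(R) = +1, i.e. a proper rotation
a, b = torch.randn(3, dtype=torch.float64), torch.randn(3, dtype=torch.float64)
assert torch.allclose(torch.cross(R @ a, R @ b, dim=0), R @ torch.cross(a, b, dim=0))
```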
396
+
397
+ In case a rotation is acting on the system, from Eq. (3) we know how vector and scalar quantities transform, resulting in:
398
+
399
+ $$
400
+ R.{v}_{{ji},0} \rightarrow R{p}_{{ji}, n} \otimes {b}_{{ji},0} = R\left( {{p}_{{ji}, n} \otimes {b}_{{ji},0}}\right) = R{v}_{{ji},0}.
401
+ $$
402
+
403
+ due to the linearity of the tensor product which proves $\mathrm{{SO}}\left( 3\right)$ equivariance for the first term. For the second term, we calculate
404
+
405
+ $$
406
+ {v}_{{ji},1} = {b}_{{ji},1} \odot \left( {{v}_{i} \times {v}_{j}}\right) ,
407
+ $$
408
+
409
+ where ${b}_{{ji},1} \in {\mathbb{R}}^{{F}_{v}}$ is an $\mathrm{{SO}}\left( 3\right)$ -invariant representation and the output of the cross product is a vector representation $\in {\mathbb{R}}^{3 \times {F}_{v}}$ . To be precise, the elementwise multiplication from the left with ${b}_{{ji},1}$ has to be rewritten to match the shapes, i.e., a new dimension is unsqueezed so that each of the ${F}_{v}$ vectors is scaled by its scalar value, resulting in:
410
+
411
+ $$
412
+ {v}_{{ji},1} = \left( {1 \otimes {b}_{{ji},1}}\right) \odot \left( {{v}_{i} \times {v}_{j}}\right) ,
413
+ $$
414
+
415
+ where 1 is the one-vector in 3 dimensions. For a rotation acting on the system, we conclude that
416
+
417
+ $$
418
+ R.{v}_{{ji},1} \rightarrow \left( {1 \otimes {b}_{{ji},1}}\right) \odot \left( {R{v}_{i} \times R{v}_{j}}\right)
419
+ $$
420
+
421
+ $$
422
+ = \left( {1 \otimes {b}_{{ji},1}}\right) \odot R\left( {{v}_{i} \times {v}_{j}}\right) = R\left( {1 \otimes {b}_{{ji},1}}\right) \odot \left( {{v}_{i} \times {v}_{j}}\right)
423
+ $$
424
+
425
+ $$
426
+ = R{v}_{{ji},1}\text{,}
427
+ $$
428
+
429
+ which proves $\mathrm{{SO}}\left( 3\right)$ equivariance for the second term.
430
+
431
+ The third term is obtained through
432
+
433
+ $$
434
+ {v}_{{ji},2} = \left( {1 \otimes {b}_{{ji},2}}\right) \odot \left( {{v}_{j}{W}_{n}}\right) ,
435
+ $$
436
+
437
+ where ${b}_{{ji},2} \in {\mathbb{R}}^{{F}_{v}}$ is a scalar representation with ${F}_{v}$ channels and ${W}_{n}$ a linear transformation of shape $\left( {{F}_{v} \times {F}_{v}}\right)$ . Due to linearity, we can see that
438
+
439
+ $$
440
+ R{v}_{j}{W}_{n} = \left( {R{v}_{j}}\right) {W}_{n} = R\left( {{v}_{j}{W}_{n}}\right)
441
+ $$
442
+
443
+ is $\mathrm{{SO}}\left( 3\right)$ equivariant. As we elementwise multiply with an unsqueezed/expanded scalar representation, we conclude $\mathrm{{SO}}\left( 3\right)$ equivariance for the last term
444
+
445
+ $$
446
+ R.{v}_{{ji},2} \rightarrow \left( {1 \otimes {b}_{{ji},2}}\right) \odot \left( {R{v}_{j}}\right) {W}_{n}
447
+ $$
448
+
449
+ $$
450
+ = \left( {1 \otimes {b}_{{ji},2}}\right) \odot R\left( {{v}_{j}{W}_{n}}\right) = R\left( {1 \otimes {b}_{{ji},2}}\right) \odot \left( {{v}_{j}{W}_{n}}\right)
451
+ $$
452
+
453
+ $$
454
+ = R{v}_{{ji},2}\text{.}
455
+ $$
456
+
457
+ Since all three components in the sum are $\mathrm{{SO}}\left( 3\right)$ equivariant, we conclude that the final sum is also SO(3) equivariant.
458
+
459
+ As the reader might have noticed, we build equivariant features based on linear functions and by weighting $l = 1$ representations through $l = 0$ representations. This typical scaling is achieved through the tensor product $\otimes$ . Our architecture, however, also performs a multiplication between two $l = 1$ representations through the cross product, which has the pleasant property under rotations that we can exploit to prove $\mathrm{{SO}}\left( 3\right)$ equivariance when scaling the output with an $l = 0$ representation.
460
+
461
+ A Note on Translation Equivariance. Our proposed model is translation invariant, as all vector features are initially created by means of a tensor product with the (normalized) relative positions ${p}_{{ji}, n}$ . To see this, for any translation vector $t \in {\mathbb{R}}^{3}$ , the calculation of such relative vectors ${}^{5}$ , ${p}_{ji} = {p}_{j} - {p}_{i}$ , is inherently translation invariant due to
462
+
463
+ $$
464
+ \text{t.}{p}_{ji} \rightarrow \left( {{p}_{j} + t}\right) - \left( {{p}_{i} + t}\right) = {p}_{j} - {p}_{i} + t - t = {p}_{j} - {p}_{i} = {p}_{ji}\text{.}
465
+ $$
466
+
467
+ Since we do not model absolute Cartesian coordinates, e.g., by updating the spatial coordinates through our layers, our model is not SE(3)-equivariant, i.e., it is rotation equivariant but not additionally translation equivariant. We note that translation equivariance can, however, be achieved through a simple operation such as the addition of an $\mathrm{{SE}}\left( 3\right)$ representation and an $\mathrm{{SO}}\left( 3\right)$ representation, e.g.
468
+
469
+ $$
470
+ {p}_{i} = {p}_{i} + {p}_{{ji}, n} \otimes s,
471
+ $$
472
+
473
+ where $s \in \mathbb{R}$ , which is reminiscent of the coordinate update in the $\mathrm{E}\left( n\right)$ -GNN architecture, albeit the authors do not use the tensor product notation.
474
+
475
+ ---
476
+
477
+ ${}^{5}$ We omit the normalization to unit vectors for brevity.
478
+
479
+ ---
papers/LOG/LOG 2022/LOG 2022 Conference/kv4xUo5Pu6/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,317 @@
1
+ § REPRESENTATION LEARNING ON BIOMOLECULAR STRUCTURES USING EQUIVARIANT GRAPH ATTENTION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Learning and reasoning about $3\mathrm{D}$ molecular structures of varying size is an emerging and important challenge in machine learning and especially in the development of biotherapeutics. Equivariant Graph Neural Networks (GNNs) can simultaneously leverage the geometric and relational detail of the problem domain and are known to learn expressive representations by propagating information between nodes and by leveraging higher-order representations, such as directionality, in their intermediate layers to faithfully express the geometry of the data. In this work, we propose an equivariant GNN that operates with Cartesian coordinates to incorporate directionality, and we implement a novel attention mechanism acting as a content- and spatially-dependent filter when propagating information between nodes. Our proposed message function processes vector features in a geometrically meaningful way by mixing existing vectors and creating new ones based on cross products. We demonstrate the efficacy of our architecture on accurately predicting properties of large biomolecules and show its computational advantage over recent methods which rely on irreducible representations by means of the spherical harmonics expansion.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Predicting molecular properties is of central importance to applications in pharmaceutical research and protein design. Accurate computational methods can significantly accelerate the overall process of finding better molecular candidates in a faster and cost-efficient way. Learning on 3D environments of molecular structures is a rapidly growing area of machine learning with promising applications but also domain-specific challenges. While Deep Learning (DL) has replaced hand-crafted features to a large extent, many advances are crucially determined through inductive biases in deep neural networks. Developed neural models should maintain an efficient and accurate representation of structures with up to thousands of atoms and correctly reason about their 3D geometry independent of orientation and position. A powerful method to restrict a neural network to the functions of interest, such as a molecular property, is to exploit the symmetry of the data by constraining the network to be equivariant with respect to transformations from a certain symmetry group $\left\lbrack {1,2}\right\rbrack$ .
16
+
17
+ 3D Graph Neural Networks (GNNs) have been applied to a wide variety of molecular structures, such as in the prediction of quantum chemistry properties of small molecules $\left\lbrack {3,4}\right\rbrack$ but also on macromolecular structures like proteins [5-8], due to the natural representation of structures as graphs, with atoms as nodes and edges drawn based on bonding or spatial proximity. These networks generally encode the 3D geometry in terms of rotationally invariant representations, such as pairwise distances when modelling local interactions, which leads to a loss of directional information, while the addition of angular information to the network architecture has been shown to be beneficial in representing the local geometry [9-11].
18
+
19
+ Neural models that preserve equivariance when working on point clouds in 3D space have been proposed [12-15], which can be described as Tensorfield Networks. These physics-inspired models leverage higher-order tensor representations and require additional calculations to construct the basis for the transformations of their learnable kernels, which can be expensive to compute. While these models enable the interaction between different-order representations (often referred to as type- $l$ representations), many data types are often restricted to scalar values (type-0, e.g., temperature or energy) and 3D vectors (type-1, e.g., velocity or forces). Another choice for using more of the information in the (limited) data at hand and building data-efficient models on point clouds ${}^{1}$ through equivariant functions is to operate directly on Cartesian coordinates [16-19] and explicitly define the (equivariant) transformations, which is conceptually simpler and does not require Clebsch-Gordan tensor products of irreducible representations as commonly used in Tensorfield-Network-like architectures.
20
+
21
+ [Figure 1 graphics]
22
+
23
+ Figure 1: (a) Visualization of the local neighbourhood of central carbon atom $i$ . Directed edges illustrate the message flow from neighbour $j$ to central atom $i$ , where scalar and vector features are propagated along the edges. Grey boxes $R$ represent the side-chain atoms of each residue and serve here as a visual compression of many more atoms. Here, nodes comprise scalar and vector features with 5 and 2 channels, respectively. (b) Proposed equivariant message function that computes a geometric and content-related feature attention filter for scalar features, while vector messages are created based on a weighted combination of newly constructed vectors.
24
+
25
+ In this work, we introduce the Equivariant Graph Attention Network (EQGAT), which operates on large point clouds such as proteins or protein-ligand complexes, and show its superior performance compared to invariant models as well as its faster training time compared to recent architectures that achieve equivariance through the usage of irreducible representations. Our model implements a novel feature attention mechanism which is invariant to global rotations and translations of the inputs and includes spatial as well as content-related information, serving as a powerful edge embedding when propagating information in the Message Passing Neural Networks (MPNNs) [4] framework. Since we define equivariant functions on the original Cartesian space while restricting ourselves to tensor representations of rank 1, i.e., vectors, we aim to capture as much geometrical information as possible through a geometrically motivated message function.
26
+
27
+ In summary, we make the following contributions:
28
+
29
+ * We introduce a computationally efficient equivariant Graph Neural Network that leverages geometric information by operating on vector features in Cartesian space.
30
+
31
+ * We implement a novel feature attention mechanism to propagate neighbouring node features and we define equivariant operations to combine vector features in a geometrically meaningful way.
32
+
33
+ * We benchmark our proposed architecture on large molecular systems such as protein complexes and show its efficacy on tasks relevant to industrial applications.
34
+
35
+ ${}^{1}$ An example of preserving more information is the use of relative positions between points in 3D space, where orientation information is maintained, as opposed to when only the (scalar, invariant) distances between points are considered.
36
+
37
+ § 2 BACKGROUND
38
+
39
+ § 2.1 MESSAGE PASSING NEURAL NETWORKS (MPNNS)
40
+
41
+ MPNNs [4] generalize Graph Neural Networks (GNNs) [1, 2, 20] and aim to parameterize a mapping from a graph to a feature space. That feature space can either be defined on the node- or graph level. Formally, a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ contains nodes $i \in \mathcal{V}$ and edges $\left( {j,i}\right) \in \mathcal{E}$ which represent the relationship between nodes $j$ and $i$ . Since MPNNs utilize shared trainable layers among nodes, permutation equivariance is preserved.
42
+
43
+ In this work, we consider graphs representing molecular systems embedded in 3D Euclidean space, where atoms represent nodes and the edges are described through covalent bonds and/or by atom pairs within a certain cutoff distance $c$ as illustrated in Figure 1(a). When working with protein point clouds, a common design choice is the construction of residue graphs, where the nodes are represented through the ${C}_{\alpha }$ -atom of each amino acid residue [5,6,18].
44
+
45
+ We refer to ${x}_{i}^{\left( l\right) } = \left( {{a}_{i},{p}_{i},{s}_{i}^{\left( l\right) },{v}_{i}^{\left( l\right) }}\right)$ as the state of the $i$ -th atom, where ${a}_{i} \in {\mathbb{Z}}_{ + }$ and ${p}_{i} \in {\mathbb{R}}^{3}$ denote atom $i$ ’s chemical element and its spatial position, while ${h}_{i}^{\left( l\right) } = \left( {{s}_{i}^{\left( l\right) },{v}_{i}^{\left( l\right) }}\right) \in {\mathbb{R}}^{1 \times {F}_{s}} \times {\mathbb{R}}^{3 \times {F}_{v}}$ are the hidden scalar and vector features that are iteratively refined through $L$ message passing steps. We distinguish between scalar and vector features because scalars can be transformed without functional restrictions, e.g., with standard MLPs, and their domain spans the entire $\mathbb{R}$ , while vector features that reside in ${\mathbb{R}}^{3}$ can only be transformed in certain ways to preserve rotation equivariance. In theory, one could also rely solely on vector features (with ${F}_{v}$ channels) and perform a dot product reduction to make that representation invariant. This step, however, restricts the domain of the resulting scalars to ${\mathbb{R}}_{ + }$ only.
46
+
47
+ A general MPNN implements a learnable message and update function denoted as ${M}_{l}\left( \cdot \right)$ and ${U}_{l}\left( \cdot \right)$ to process atom $i$ -th’s hidden feature by considering its local environment $\mathcal{N}\left( i\right)$ through
48
+
49
+ $$
50
+ {m}_{i}^{\left( l + 1\right) } = \mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{M}_{l}\left( {{x}_{i}^{\left( l\right) },{x}_{j}^{\left( l\right) }}\right) , \tag{1}
51
+ $$
52
+
53
+ $$
54
+ {x}_{i}^{\left( l + 1\right) } = \left( {{a}_{i},{p}_{i},{U}_{l}\left( {{x}_{i}^{\left( l\right) },{m}_{i}^{\left( l + 1\right) }}\right) }\right) , \tag{2}
55
+ $$
56
+
57
+ where $\mathcal{N}\left( i\right) = \left\{ {j : {\begin{Vmatrix}{p}_{ij}\end{Vmatrix}}_{2} = {\begin{Vmatrix}{p}_{j} - {p}_{i}\end{Vmatrix}}_{2} = {d}_{ij} < c}\right\}$ denotes central atom’s $i$ -th neighbour set that is obtained through a distance cutoff $c > 0$ .
58
+
59
+ For our 3D GNN, we wish to implement simple, yet powerful rotation equivariant transformations in the message and update functions, to accurately describe the local environment of nodes in the point cloud.
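
To make Eqs. (1) and (2) concrete, below is a minimal sketch (not the authors' implementation) of one message-passing step over a radius graph in PyTorch; `message_fn` and `update_fn` are hypothetical stand-ins for the learnable functions $M_l$ and $U_l$.

```python
import torch

def radius_neighbours(pos: torch.Tensor, cutoff: float):
    """Directed edges (j, i) for all atom pairs with ||p_j - p_i|| < cutoff (no self-loops)."""
    dist = torch.cdist(pos, pos)                                     # (N, N) pairwise distances
    mask = (dist < cutoff) & ~torch.eye(pos.size(0), dtype=torch.bool)
    j, i = mask.nonzero(as_tuple=True)                               # neighbour j -> centre i
    return j, i, dist[j, i]

def mpnn_step(s, v, pos, cutoff, message_fn, update_fn):
    """One generic step of Eqs. (1)-(2) for scalar (N, Fs) and vector (N, 3, Fv) features."""
    j, i, d_ji = radius_neighbours(pos, cutoff)
    m_s, m_v = message_fn(s[j], s[i], v[j], v[i], pos[j] - pos[i], d_ji)
    agg_s = torch.zeros_like(s).index_add_(0, i, m_s)                # sum messages at each centre
    agg_v = torch.zeros_like(v).index_add_(0, i, m_v)
    return update_fn(s, v, agg_s, agg_v)
```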
60
+
61
+ § 2.2 INVARIANCE AND EQUIVARIANCE
62
+
63
+ In this work, we consider the special orthogonal group $\mathrm{{SO}}\left( 3\right)$ , i.e. the group of proper rotations in three dimensions. A group element of $\mathrm{{SO}}\left( 3\right)$ is commonly represented as matrix $R \in {\mathbb{R}}^{3 \times 3}$ satisfying ${R}^{\top }R = R{R}^{\top } = I$ and $\det R = 1$ .
64
+
65
+ For a node feature $h = \left( {s,v}\right) \in {\mathbb{R}}^{{F}_{s}} \times {\mathbb{R}}^{3 \times {F}_{v}}$ , an SO(3)-equivariant function $f$ must obey the following equation
66
+
67
+ $$
68
+ {f}_{.R}\left( h\right) = \left( {{Is},{Rv}}\right) = \left( {s,{Rv}}\right) , \tag{3}
69
+ $$
70
+
71
+ where ${f}_{.R}$ in this work means a rotation acting on the input of function $f$ . As shown in (3), invariance can be regarded as a special case of equivariance: for a scalar representation $s$ , equivariance means that the trivial representation, i.e. the identity, acts on the scalar embedding, while vectors $v$ are transformed with $R$ , i.e., a change of basis is performed, where the new basis is determined by the column vectors of $R$ .
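
The following toy check illustrates Eq. (3) numerically (a sketch, not the proposed model): scalars may be transformed by an arbitrary non-linearity, while vectors are only mixed linearly across channels, so rotating the input commutes with the layer.

```python
import torch

def random_rotation() -> torch.Tensor:
    """Sample a proper rotation R with R^T R = I and det R = 1."""
    q, _ = torch.linalg.qr(torch.randn(3, 3))
    if torch.det(q) < 0:                      # flip one column to enforce det R = +1
        q[:, 0] = -q[:, 0]
    return q

Fs, Fv = 8, 4
Ws, Wv = torch.randn(Fs, Fs), torch.randn(Fv, Fv)
f = lambda s, v: (torch.tanh(s @ Ws), v @ Wv)      # scalars: any non-linearity; vectors: linear channel mixing

s, v = torch.randn(5, Fs), torch.randn(5, 3, Fv)   # 5 nodes
R = random_rotation()
rot = lambda x: torch.einsum('ij,njc->nic', R, x)  # rotate the 3D axis of (N, 3, Fv) tensors

s_rot, v_rot = f(s, rot(v))                        # rotate the input, then apply f
s_out, v_out = f(s, v)                             # apply f, then rotate the output
assert torch.allclose(s_rot, s_out)                # scalar output is invariant
assert torch.allclose(v_rot, rot(v_out), atol=1e-5)  # vector output is equivariant
```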
72
+
73
+ § 3 RELATED WORK
74
+
75
+ Neural networks that specifically achieve $\mathrm{E}\left( 3\right)$ or $\mathrm{{SE}}\left( 3\right)$ equivariance have been proposed in Tensorfield Networks (TFNs) [12] and its variants, the covariant Cormorant [13], NequIP [15] and the SE(3)-Transformer [14]. In TFNs, equivariance is achieved through the usage of equivariant function spaces such as spherical harmonics combined with Clebsch-Gordan tensor products in the intermediate layers to allow the multiplication of representations of different order, while others resort to lifting the spatial space to higher-dimensional spaces such as Lie group spaces [21]. Since no
76
+
77
+ restriction on the order of tensors is imposed by these methods, sufficient expressive power of these models is guaranteed, but at the cost of excessive computation with increased time and memory. Brandstetter et al. [22] recently showed that the implementation of non-linear equivariant Graph Neural Networks in their model, which they term Steerable E(3) Equivariant Graph Neural Networks (SEGNN), achieves strong empirical results on small point clouds like the N-Body experiment or the QM9 dataset, but also on larger systems as in the OC20 dataset. One of their insights is that the construction of their (non-linear) SEGNN layer allows the model to better capture the local environment and enables a reduction of the radius cutoff when constructing the neighbour list for each central atom $i$ , since the Clebsch-Gordan tensor products between neighbouring nodes are computationally expensive. To circumvent the expensive computational cost, another line of research proposes to directly implement equivariant operations in the original Cartesian space, providing an efficient approach to preserve equivariance as introduced in the $\mathrm{E}\left( n\right)$ -GNN [16], GVP [18,23], PaiNN [17] and ET-Transformer [24] architectures, without relying on irreducible representations of the orthogonal group by means of the spherical harmonics basis as originally introduced in TFN and implemented in the e3nn framework [25].
78
+
79
+ Our proposed model also implements equivariant operations in the original Cartesian space and includes a continuous filter through the self-attention coefficients, which serve as a spatial- and content-based edge embedding in the message propagation, as opposed to the PaiNN model where the filter depends solely on the distance. Additionally, our model constructs vector features from the given point cloud data and leverages geometrical products that are efficient to compute. The $\mathrm{E}\left( n\right)$ -GNN architecture does not learn type-1 vector features with several channels, but only updates given type-1 features ${}^{2}$ through a weighted linear combination of them, where the (learnable) scalar weights are obtained from invariant embeddings. The GVP model, which was initially designed to work on macromolecular structures, includes a complex message function of concatenated node and edge features composed with a series of GVP blocks that enable information exchange between type-0 and type-1 features through a dot product reduction of vectors, with the potential disadvantage of discontinuities through non-smooth components for distances close to the cutoff.
80
+
81
+ § 4 PROPOSED MODEL ARCHITECTURE
82
+
83
+ § 4.1 INPUT EMBEDDING
84
+
85
+ We initially embed atoms of small molecules or proteins based on their element/amino acid type using a trainable look-up table through
86
+
87
+ $$
88
+ {s}_{i}^{\left( 0\right) } = \operatorname{embed}\left( {a}_{i}\right) ,
89
+ $$
90
+
91
+ which provides a starting (invariant) scalar representation of the node prior to the message passing. As in most cases no directional information for atoms is available, we initialize the vector features as a zero tensor ${v}_{i}^{\left( 0\right) } = 0 \in {\mathbb{R}}^{3 \times {F}_{v}}$ .
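
A minimal sketch of this initialization in PyTorch; the vocabulary size is arbitrary and the channel sizes follow the experimental settings of Section 5.

```python
import torch
import torch.nn as nn

num_atom_types, Fs, Fv = 100, 100, 16        # illustrative sizes (Fs, Fv as in Section 5)

embed = nn.Embedding(num_atom_types, Fs)     # trainable look-up table over element types
a = torch.tensor([6, 7, 8, 6])               # e.g. four atoms encoded by their element index
s0 = embed(a)                                # (N, Fs) initial scalar features
v0 = torch.zeros(a.size(0), 3, Fv)           # (N, 3, Fv) vector features initialized to zero
```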
92
+
93
+ § 4.2 EDGE FILTER THROUGH FEATURE ATTENTION
94
+
95
+ For the two-body interaction from neighbouring node(s) $j$ to central node $i$ , we implement a non-linear edge filter that depends on content-related information stored in the scalar features $\left( {{s}_{j},{s}_{i}}\right)$ and a radial basis expansion of the Euclidean distance ${d}_{ji} \leq c$ . We choose the (orthonormal) Bessel basis ${G}_{d} : \mathbb{R} \rightarrow {\mathbb{R}}^{K}$ that projects the distance onto $K$ basis values as introduced by Gasteiger et al. [9], together with their polynomial envelope function $\kappa : \left\lbrack {0,c}\right\rbrack \rightarrow (0,1\rbrack$ that smoothly transitions from 1 to 0 as the cutoff value $c$ is approached. The attention edge filter is computed as
96
+
97
+ $$
98
+ {e}_{ji}^{\left( l + 1\right) } = \left\lbrack\, {s}_{i}^{\left( l\right) } \parallel {s}_{j}^{\left( l\right) } \parallel \kappa \left( {d}_{ji}\right) {G}_{d}\left( {d}_{ji}\right) \,\right\rbrack \in {\mathbb{R}}^{2{F}_{s} + K}
99
+ $$
100
+
101
+ $$
102
+ {f}_{ji}^{\left( l + 1\right) } = \operatorname{MLP}\left( {e}_{ji}^{\left( l + 1\right) }\right) \in {\mathbb{R}}^{{F}_{s} + 3{F}_{v}}, \tag{4}
103
+ $$
104
+
105
+ where MLP refers to a 1-layer Multilayer Perceptron with SiLU activation function [26]. The input to the MLP is a concatenation of the scalar features and the radial basis expansion of the distance between nodes $j$ and $i$ , scaled by $\kappa$ . The $\mathrm{{SO}}\left( 3\right)$ -invariant embedding ${f}_{ji}^{\left( l + 1\right) }$ represents the ${F}_{s} + 3{F}_{v}$ attention logits, which are further split into ${f}_{ji}^{\left( l + 1\right) } = {\left\lbrack {a}_{ji},{b}_{ji}\right\rbrack }^{\left( l + 1\right) }$ to be used as a non-linear filter
106
+
107
+ ${}^{2}$ In the $\mathrm{E}\left( n\right)$ -GNN architecture, Cartesian coordinates of particles $p \in {\mathbb{R}}^{3}$ are updated.
108
+
109
+ when propagating neighbouring features.
110
+
111
+ A novelty of our approach is that the attention coefficient between two vertices $j$ and $i$ is obtained per feature channel, instead of through a single scalar value for the entire embedding as is common. The feature attention for the scalar embeddings is computed using the standard softmax activation function
114
+
115
+ $$
116
+ {\alpha }_{ji} = \frac{\exp \left( {a}_{ji}\right) }{\mathop{\sum }\limits_{{{j}^{\prime } \in \mathcal{N}\left( i\right) }}\exp \left( {a}_{{j}^{\prime }i}\right) } \in {\left( 0,1\right) }^{{F}_{s}}, \tag{5}
117
+ $$
118
+
119
+ where the normalization in the denominator runs over all neighbours ${j}^{\prime }$ and the exponential function is applied componentwise.
120
+
121
+ The embedding ${b}_{ji} \in {\mathbb{R}}^{3{F}_{v}}$ is processed to create coefficients that serve as weights for a linear combination of vector quantities to compute the vector message from $j$ to $i$ , which we will describe in the following subsection.
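
A sketch of the edge filter of Eq. (4) and the per-channel softmax of Eq. (5), assuming PyTorch and PyTorch Geometric (which the paper uses); the cosine cutoff stands in for the polynomial envelope of [9], and the exact MLP depth is illustrative rather than the authors' precise choice.

```python
import math
import torch
import torch.nn as nn
from torch_geometric.utils import softmax      # group-wise softmax over the incoming edges of each centre node

Fs, Fv, K, cutoff = 100, 16, 20, 4.5

def bessel_rbf(d: torch.Tensor, num_basis: int = K, c: float = cutoff) -> torch.Tensor:
    """Orthonormal Bessel basis of [9]: sqrt(2/c) * sin(n*pi*d/c) / d for n = 1..K."""
    n = torch.arange(1, num_basis + 1, dtype=d.dtype)
    return math.sqrt(2.0 / c) * torch.sin(n * math.pi * d.unsqueeze(-1) / c) / d.unsqueeze(-1)

def envelope(d: torch.Tensor, c: float = cutoff) -> torch.Tensor:
    """Smooth transition from 1 to 0 at the cutoff (stand-in for the polynomial envelope kappa)."""
    return 0.5 * (torch.cos(math.pi * d / c) + 1.0)

filter_mlp = nn.Sequential(nn.Linear(2 * Fs + K, Fs + 3 * Fv), nn.SiLU())

def attention_filter(s: torch.Tensor, edge_index: torch.Tensor, d: torch.Tensor):
    j, i = edge_index                                   # messages flow from j to i
    e = torch.cat([s[i], s[j], envelope(d).unsqueeze(-1) * bessel_rbf(d)], dim=-1)
    f = filter_mlp(e)                                   # (E, Fs + 3*Fv) invariant logits
    a, b = f.split([Fs, 3 * Fv], dim=-1)
    alpha = softmax(a, index=i)                         # Eq. (5): per-channel softmax over neighbours of i
    return alpha, b                                     # b later weights the three vector components
```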
122
+
123
+ § 4.3 EQUIVARIANT MESSAGE PROPAGATION
124
+
125
+ We follow the idea of standard convolution, which is a linear transformation of the input, and compute the scalar features message for central node $i$ as
126
+
127
+ $$
128
+ {m}_{i,s}^{\left( l + 1\right) } = \mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{\alpha }_{ji}^{\left( l + 1\right) } \odot {W}_{n}^{\left( l + 1\right) }{s}_{j}^{\left( l\right) }, \tag{6}
129
+ $$
130
+
131
+ where ${W}_{n}^{\left( l + 1\right) } \in {\mathbb{R}}^{{F}_{s} \times {F}_{s}}$ is a trainable weight matrix shared among all nodes and ${\alpha }_{ji}^{\left( l + 1\right) }$ is the non-linear attention filter obtained in (5). In the context of atomistic neural network potentials (NNPs), the filter ${\alpha }_{ji}^{\left( l + 1\right) }$ is commonly implemented as an MLP that only takes the distance ${d}_{ji}$ as input (by means of a radial basis expansion), as in SchNet [3], PaiNN [17] and NequIP [15], while recent NNPs such as Allegro [27] and BOTNet [28] implement edge filters that depend on the distance as well as on node content, e.g., the chemical elements, unifying the idea of MPNNs in the context of machine learning force fields. The recent work by Brandstetter et al. [22] analyzes modern 3D equivariant GNNs with the insight that non-linear message and update functions combined with their proposed steerable feature space lead to an improved model, which they term SEGNN. The SEGNN, in similar spirit to Tensorfield Networks, can leverage higher-order geometric representations up to a maximal rotation order ${l}_{\max }$ through the spherical harmonics expansion of relative positions, which they take as the steerable feature basis. Their proposed model implements steerable MLPs ${}^{3}$ in the message and update functions to leverage non-linearity and geometrically covariant information of the steerable features beyond $l = 0$ , i.e., scalar features, while our architecture restricts non-linear transformations to scalar information, albeit vector information is still processed in the layers and then reduced to scalars by a dot product operation. Our proposed message function for scalar features in Eq. (6) can also be formulated as a linear transformation whose weight matrix depends on distances as well as on hidden scalar information. To see this, we rewrite ${\alpha }_{ji}^{\left( l + 1\right) } \in {\left( 0,1\right) }^{{F}_{s}}$ as a matrix using the diagonal operator ${A}_{ji}^{\left( l + 1\right) } = \operatorname{diag}\left( {\alpha }_{ji}^{\left( l + 1\right) }\right) \in {\left( 0,1\right) }^{{F}_{s} \times {F}_{s}}$ and observe that the filter scales the (independent) weight matrix ${W}_{n}^{\left( l + 1\right) }$ , leading to the message propagation
132
+
133
+ $$
134
+ {m}_{i,s}^{\left( l + 1\right) } = \mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{A}_{ji}^{\left( l + 1\right) }{W}_{n}^{\left( l + 1\right) }{s}_{j}^{\left( l\right) } = \mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{W}_{{ji},n}^{\left( l + 1\right) }{s}_{j}^{\left( l\right) },
135
+ $$
136
+
137
+ where ${W}_{{ji},n}^{\left( l + 1\right) }$ defines the linear transformation matrix whose content depends on $\mathrm{{SO}}\left( 3\right)$ -invariant information through $\left( {{s}_{i}^{\left( l\right) },{s}_{j}^{\left( l\right) },{d}_{ji}}\right)$ , which can however still be interpreted as non-linear convolution as the ${A}_{ji}^{\left( l + 1\right) }$ weight matrix is obtained through an MLP and softmax activation function. Similar to Brandstetter et al. [22], we call this a pseudo-linear transformation.
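
Given the per-channel filter `alpha` from the previous sketch, the aggregation of Eq. (6) reduces to a few lines (illustrative names):

```python
import torch
import torch.nn as nn

Fs = 100
W_n = nn.Linear(Fs, Fs, bias=False)                     # shared weight matrix W_n

def scalar_message(s, alpha, edge_index):
    """Eq. (6): channel-wise gated (pseudo-linear) convolution, summed over neighbours."""
    j, i = edge_index
    m = alpha * W_n(s[j])                               # (E, Fs) per-edge scalar messages
    return torch.zeros_like(s).index_add_(0, i, m)      # sum over j in N(i)
```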
138
+
139
+ Building Equivariant Features. In many cases, no initial vector features are provided in raw point cloud data. However, when working with the backbone of proteins and representing a node as residue through the sequence of atoms ${\left( {C}_{\alpha },{C}_{\beta },O,N\right) }_{i}$ , initial vectorial (node) features that describe the local environment of each residue can be pre-computed as described by Ingraham et al. [6] and Jing et al. [18]. In a full-atom model, initial vector features for a node $i$ can be obtained by averaging over relative position vectors ${v}_{i,0} = \frac{1}{\left| \mathcal{N}\left( i\right) \right| }\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{p}_{ji} \in {\mathbb{R}}^{3}$ which satisfies Eq. (3) due to linearity.
140
+
141
+ ${}^{3}$ The weight matrices in steerable MLPs depend on geometric information, such as relative positions.
142
+
143
+ In our work, we initialize the vectors as a zero tensor as described in Subsection 4.1, and in the first layer, equivariant features are obtained by utilizing normalized relative positions ${p}_{{ji},n}$ to compute the interaction between central node $i$ and its neighbour $j$ . In the subsequent layers, we extend the set of vectors by (1) constructing vectors based on normalized relative positions again, (2) mixing existing vector channels from the previous iteration and (3) creating new vector quantities by making use of the cross product. We describe the three options in the following paragraphs.
144
+
145
+ (1) Utilizing normalized relative positions: we create equivariant vector features based on normalized relative position ${p}_{{ji},n} = \frac{1}{{d}_{ji}}\left( {{p}_{i} - {p}_{j}}\right)$ as those provide directional information. Since we explicitly model scalar and vector features, each equipped with ${F}_{s}$ and ${F}_{v}$ channels, respectively, the tensor product offers a natural way to obtain an $l = 1$ feature, by combining an $l = 1$ and $l = 0$ feature, wherein we utilize the relative position ${p}_{{ji},n}$ vector that obeys the rule stated in Eq. (3). Equivariant interactions between node $j$ and $i$ are computed through
146
+
147
+ $$
148
+ {v}_{{ji},0}^{\left( l + 1\right) } = {p}_{{ji},n} \otimes {b}_{{ji},0}^{\left( l + 1\right) } = {p}_{{ji},n}{b}_{{ji},0}^{\left( {l + 1}\right) \top } \in {\mathbb{R}}^{3 \times {F}_{v}}, \tag{7}
149
+ $$
150
+
151
+ which preserves $\mathrm{{SO}}\left( 3\right)$ equivariance due to the linearity of the tensor product. We note that the creation of 'initial' equivariant features in such a manner is also performed in architectures like $\left\lbrack {{12},{13},{15},{22}}\right\rbrack$ , just to name a few, that make use of irreducible representations of the $\mathrm{{SO}}\left( 3\right)$ group by means of the spherical harmonics and implement the Clebsch-Gordan tensor product $\left( { \otimes }_{cg}\right)$ , which allows the mixing of possibly higher-order embedding representations of type $l > 1$ . The $l = 1$ representation in Eq. (7) can be interpreted as ${F}_{v}$ scaled versions of the relative position ${p}_{{ji},n}$ .
152
+
153
+ (2) In similar fashion to the (independent) linear transformation of the scalar channels, we mix the vector channels using a weight matrix ${W}_{v}^{\left( l\right) } \in {\mathbb{R}}^{{F}_{v} \times {F}_{v}}$ that is shared among all nodes and preserves $\mathrm{{SO}}\left( 3\right)$ equivariance due to linearity:
154
+
155
+ $$
156
+ {v}_{n}^{\left( l + 1\right) } = {v}^{\left( l\right) }{W}_{v}^{\left( l + 1\right) },
157
+ $$
158
+
159
+ For a particular neighbouring node $j$ , we scale the linearly transformed vectors
160
+
161
+ $$
162
+ {v}_{{ji},1}^{\left( l + 1\right) } = {b}_{{ji},1}^{\left( l + 1\right) } \odot {v}_{n,j}^{\left( l + 1\right) }, \tag{8}
163
+ $$
164
+
165
+ which can be interpreted as a (pseudo-linear) gating of the previously mixed vectors. (3) To capture more geometric information while restricting the representation to be of type $l = 1$ , we utilize the vector cross product $c = \left( {a \times b}\right) \in {\mathbb{R}}^{3}$ between two $l = 1$ representations $a$ and $b$ , which satisfies the following rotation equivariance property
166
+
167
+ $$
168
+ {Ra} \times {Rb} = R\left( {a \times b}\right) .
169
+ $$
170
+
171
+ The output of the cross product $a \times b$ is a vector $c$ that is perpendicular to the plane spanned by $a$ and $b$ . In our network architecture, we utilize this by computing the channel-wise cross product of the previous layer’s vector features of nodes $i$ and $j$ as
172
+
173
+ $$
174
+ {\widetilde{v}}_{{ji},2}^{\left( l + 1\right) } = \left( {{v}_{i}^{\left( l\right) } \times {v}_{j}^{\left( l\right) }}\right) \in {\mathbb{R}}^{3 \times {F}_{v}},
175
+ $$
176
+
177
+ to reduce the computational complexity. As we wish to implement an efficient architecture that can operate on large (protein) point clouds, a cross product that mixes all possible vector channels would result in a representation in ${\mathbb{R}}^{3 \times {F}_{v} \times {F}_{v}}$ , which would need to be reduced, e.g., by computing the average along the last axis.
178
+
179
+ We highlight that recent equivariant GNNs which operate on the original Cartesian space, such as GVP, PaiNN or the ET-Transformer, do not include the cross product in their architecture and are therefore restricted in creating vector features that span the entire ${\mathbb{R}}^{3}$ . These architectures make use of steps (1) and (2) only. For example, when all atoms are placed on the ${xy}$ -plane, using steps (1) and (2) would always create vectors in the ${xy}$ -plane, with the $z$ coordinate always 0 . By leveraging the cross product, vectors in the $z$ direction can be computed without increasing the rank order ${}^{4}$ , as illustrated in the snippet below.
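
A short numerical illustration of the cross-product equivariance stated above and of the $z$-direction argument (`random_rotation` as in the earlier snippet):

```python
import torch

def random_rotation():
    q, _ = torch.linalg.qr(torch.randn(3, 3))
    if torch.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

R = random_rotation()
a, b = torch.randn(3), torch.randn(3)
# Equivariance under proper rotations: (Ra) x (Rb) = R (a x b)
assert torch.allclose(torch.linalg.cross(R @ a, R @ b), R @ torch.linalg.cross(a, b), atol=1e-5)

# Two vectors confined to the xy-plane: their cross product points along the z-axis,
# i.e. the cross product creates directions that steps (1) and (2) alone can never produce.
u = torch.tensor([1.0, 2.0, 0.0])
w = torch.tensor([-3.0, 0.5, 0.0])
print(torch.linalg.cross(u, w))   # tensor([0.0000, 0.0000, 6.5000]) -- only the z-component is non-zero
```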
180
+
181
+ ${}^{4}$ Two rank 1 Cartesian tensors, i.e., two geometric vectors, could also be combined by computing their tensor product, which would result in a rank 2 Cartesian tensor with 9 elements. This rank 2 Cartesian tensor contains the elements of the cross product in its antisymmetric part.
182
+
183
+ In similar fashion to Eqs. (7) and (8), each channel of the representation ${\widetilde{v}}_{{ji},2}^{\left( l\right) }$ is weighted by the $\mathrm{{SO}}\left( 3\right)$ -invariant non-linear filter ${b}_{{ji},2}^{\left( l\right) } \in {\mathbb{R}}^{{F}_{v}}$ to obtain
184
+
185
+ $$
186
+ {v}_{{ji},2}^{\left( l + 1\right) } = {b}_{{ji},2}^{\left( l + 1\right) } \odot {\widetilde{v}}_{{ji},2}^{\left( l + 1\right) }, \tag{9}
187
+ $$
188
+
189
+ Finally, we define the vector message from node $j$ to central node $i$ as the sum of the three components in (7) to (9) and aggregate it across all neighbouring nodes $j \in \mathcal{N}\left( i\right)$ to obtain the vector message
190
+
191
+ $$
192
+ {m}_{i,v}^{\left( l + 1\right) } = \frac{1}{\left| \mathcal{N}\left( i\right) \right| }\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}\left( {{v}_{{ji},0}^{\left( l + 1\right) } + {v}_{{ji},1}^{\left( l + 1\right) } + {v}_{{ji},2}^{\left( l + 1\right) }}\right) , \tag{10}
193
+ $$
194
+
195
+ which results in new weighted geometric vectors by utilizing the (static) relative positions, the neighbouring vector features and, lastly, the normal vectors obtained through the cross product. We provide the full proof of $\mathrm{{SO}}\left( 3\right)$ equivariance of Eq. (10) in Appendix B.
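
A condensed sketch of Eqs. (7)-(10) in PyTorch (not the reference implementation); `b` denotes the $3F_v$ invariant filter logits from Eq. (4), split into the per-channel weights $b_{ji,0}$, $b_{ji,1}$ and $b_{ji,2}$.

```python
import torch
import torch.nn as nn

Fv = 16
W_v = nn.Linear(Fv, Fv, bias=False)                        # channel-mixing weight matrix W_v

def vector_message(v, pos, b, edge_index):
    """Eqs. (7)-(10): build and aggregate the equivariant vector message m_{i,v}."""
    j, i = edge_index
    p_ji = pos[i] - pos[j]
    p_ji = p_ji / p_ji.norm(dim=-1, keepdim=True)          # normalized relative positions
    b0, b1, b2 = b.split(Fv, dim=-1)                       # invariant per-channel weights

    v_mixed = W_v(v)                                       # linear mixing of vector channels
    v0 = p_ji.unsqueeze(-1) * b0.unsqueeze(1)              # Eq. (7): p_ji (x) b0
    v1 = b1.unsqueeze(1) * v_mixed[j]                      # Eq. (8): gate mixed neighbour vectors
    v2 = b2.unsqueeze(1) * torch.linalg.cross(v[i], v[j], dim=1)   # Eq. (9): channel-wise cross product

    m = v0 + v1 + v2                                       # (E, 3, Fv) per-edge vector messages
    out = torch.zeros_like(v).index_add_(0, i, m)
    deg = torch.zeros(v.size(0)).index_add_(0, i, torch.ones(i.size(0)))
    return out / deg.clamp(min=1).view(-1, 1, 1)           # Eq. (10): average over |N(i)| neighbours
```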
196
+
197
+ Equivariant Update Function. After obtaining the aggregated message for central node $i$ in the representation ${m}^{\left( l + 1\right) } \in {\mathbb{R}}^{{F}_{s}} \times {\mathbb{R}}^{3 \times {F}_{v}}$ as (pseudo)-linear transformations of neighbouring node features, we deploy a residual connection as intermediate update step
198
+
199
+ $$
200
+ {\widetilde{{s}_{i}}}^{\left( l + 1\right) } = {s}_{i}^{\left( l\right) } + {m}_{i,s}^{\left( l + 1\right) },\text{ and }{\widetilde{{v}_{i}}}^{\left( l + 1\right) } = {v}_{i}^{\left( l\right) } + {m}_{i,v}^{\left( l + 1\right) }
201
+ $$
202
+
203
204
+
205
+ Figure 2: A gated equivariant MLP that transforms scalar and vector features into a new representation. We use this block as the update function ${U}_{l}\left( \cdot \right)$ .
206
+
207
+ and in the update layer, we implement an equivariant non-linear transformation inspired by the gated non-linearities proposed in [29] and used in [17], with minor modifications as shown in Figure 2. Notably, the scalar features receive geometric information through the concatenated norms of linearly transformed vector features, while the 1-layer scalar MLP transforms the combined embeddings to update the scalar states and to retrieve non-linear weights that are used to reweight the vector features. We apply these weights by element-wise multiplication with linearly transformed vector features, as shown on the right, which can also be interpreted as a variant of the Gated Linear Unit [30, 31], followed by a linear layer to implement an equivariant MLP for vector features.
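
A sketch of the gated update block of Figure 2, following the gated non-linearities of [29, 17]; the layer widths and the exact number of linear maps are illustrative, not the authors' precise choices.

```python
import torch
import torch.nn as nn

class GatedEquivariantUpdate(nn.Module):
    """Update function U_l: scalars see invariant vector norms, vectors are gated by learned scalar weights."""
    def __init__(self, Fs: int, Fv: int):
        super().__init__()
        self.vec_norm = nn.Linear(Fv, Fv, bias=False)      # linear map whose norms feed the scalar MLP
        self.vec_gate = nn.Linear(Fv, Fv, bias=False)      # linear map that gets gated
        self.vec_out = nn.Linear(Fv, Fv, bias=False)       # final linear layer on vector channels
        self.scalar_mlp = nn.Sequential(nn.Linear(Fs + Fv, Fs + Fv), nn.SiLU())
        self.Fs, self.Fv = Fs, Fv

    def forward(self, s: torch.Tensor, v: torch.Tensor):
        norms = self.vec_norm(v).norm(dim=1)               # (N, Fv) invariant vector norms
        mixed = self.scalar_mlp(torch.cat([s, norms], dim=-1))
        s_new, gate = mixed.split([self.Fs, self.Fv], dim=-1)
        v_new = self.vec_out(gate.unsqueeze(1) * self.vec_gate(v))   # gated-linear-unit style update
        return s_new, v_new
```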
208
+
209
+ § 5 EXPERIMENTS AND RESULTS
210
+
211
+ We test the efficacy of our proposed EQGAT model on three publicly available molecular benchmark datasets which pose significant challenges for the development of efficient and accurate prediction models in protein design.
214
+
215
+ § 5.1 ATOM3D
216
+
217
+ The ATOM3D benchmark [32] provides datasets for representation learning on atomic-level 3D molecular structures of different kinds, i.e., proteins, RNAs, small molecules and complexes. Since proteins perform specific biological functions essential for all living organisms and hence, play a key role when investigating the most fundamental questions in the life sciences, we focus our experiments on the learning problems often encountered in structural biology with different difficulties due to data scarcity and varying structural sizes. We use provided training, validation and test splits from ATOM3D and refer the interested reader to the original work of Townshend et al. [32] for more details. For all benchmarks, we compare against the Baseline CNN and GNN models provided by Townshend et al. [32] from ATOM3D, GVP-GNN reported in [23] and we run experiments for SchNet [3], an $\mathrm{{SO}}\left( 3\right)$ invariant $\mathrm{{GNN}}$ architecture that has shown strong performance on small molecule prediction tasks, PaiNN [17] as SchNet's improved SO(3) equivariant architecture and the recently proposed SEGNN [22] that leverages higher-order representations by means of the irreducible representations and Clebsch-Gordan tensor products using their official code base.
218
+
219
+ Table 1: Benchmark results on three ATOM3D tasks. We report the results for the baseline models from [32] and GVP-GNN [23]. We run our own experiments with the SchNet, PaiNN, SEGNN and our EQGAT model and report metrics averaged over 3 runs. For the SEGNN model we only report the results of a single run due to the longer training time on the PSR and RSR datasets.
220
+
221
+ ${R}_{S}$ stands for Spearman Rank Correlation, and RMSE abbreviates Root Mean Square Deviation.
222
+
223
+ | Model | PSR (↑) Mean ${R}_{S}$ | PSR (↑) Global ${R}_{S}$ | RSR (↑) Mean ${R}_{S}$ | RSR (↑) Global ${R}_{S}$ | LBA (↓) RMSE |
+ | --- | --- | --- | --- | --- | --- |
+ | CNN | 0.431 ± 0.013 | 0.789 ± 0.017 | 0.264 ± 0.046 | 0.372 ± 0.027 | 1.416 ± 0.021 |
+ | GNN | 0.515 ± 0.010 | 0.755 ± 0.004 | 0.234 ± 0.006 | 0.512 ± 0.049 | 1.570 ± 0.025 |
+ | GVP-GNN | 0.511 ± 0.010 | 0.845 ± 0.008 | 0.211 ± 0.142 | 0.330 ± 0.054 | 1.594 ± 0.073 |
+ | SchNet | 0.448 ± 0.016 | 0.784 ± 0.013 | 0.247 ± 0.029 | 0.273 ± 0.017 | 1.522 ± 0.015 |
+ | PaiNN | 0.462 ± 0.015 | 0.809 ± 0.003 | 0.270 ± 0.062 | 0.462 ± 0.064 | 1.507 ± 0.033 |
+ | SEGNN | 0.474 | 0.833 | -0.099 | 0.252 | 1.450 ± 0.011 |
+ | EQGAT | 0.491 ± 0.008 | 0.847 ± 0.006 | 0.316 ± 0.029 | 0.404 ± 0.096 | 1.440 ± 0.027 |
252
+
253
+ For SchNet, PaiNN and our proposed EQGAT architecture, we implement a 5-layer GNN with ${F}_{s} = {100}$ scalar channels and ${F}_{v} = {16}$ vector channels for the PSR and RSR benchmarks, as these benchmarks consist of more training samples and comprise larger biomolecules. For the Ligand Binding Affinity (LBA) task, we utilize a 3-layer GNN with the same number of scalar and vector channels. For the SEGNN architecture, we implement a 3-layer GNN with (100, 16, 8) channels for the embeddings of type $l = \left( {0,1,2}\right)$ , which transform according to the irreducible representations of that order, preserving $\mathrm{{SO}}\left( 3\right)$ equivariance. All architectures apply a linear layer on the GNN encoder’s scalar embeddings, followed by mean pooling and a 1-layer MLP with SiLU activation function and a final linear layer to predict the output target. The edges in the point clouds are constructed based on a radius cutoff of 4.5 Å. All graphs are considered as full-atom graphs, i.e., the initial node feature is determined by the chemical element.
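
For reference, a sketch of this shared prediction head (PyTorch); `batch` assigns each node to its graph, and the hidden width is illustrative.

```python
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """Linear layer on scalar embeddings -> mean pooling per graph -> 1-layer MLP -> linear output."""
    def __init__(self, Fs: int = 100, hidden: int = 100, out_dim: int = 1):
        super().__init__()
        self.pre = nn.Linear(Fs, hidden)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.SiLU(), nn.Linear(hidden, out_dim))

    def forward(self, s: torch.Tensor, batch: torch.Tensor) -> torch.Tensor:
        h = self.pre(s)                                                   # (N, hidden)
        num_graphs = int(batch.max()) + 1
        summed = torch.zeros(num_graphs, h.size(-1)).index_add_(0, batch, h)
        counts = torch.zeros(num_graphs).index_add_(0, batch, torch.ones(batch.size(0)))
        pooled = summed / counts.clamp(min=1).unsqueeze(-1)               # mean pooling per graph
        return self.mlp(pooled)
```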
254
+
255
+ The Protein and RNA Structure Ranking tasks (PSR / RSR) in ATOM3D are both regression tasks with the objective to predict the quality score in terms of the Global Distance Test (GDT_TS) or the Root-Mean-Square Deviation (RMSD) of generated protein and RNA models with respect to their experimentally determined ground-truth structures. The ability to reliably rank a biopolymer structure requires a model to accurately learn the atomic environments such that discrepancies between a ground-truth state and its corrupted version can be distinguished. We evaluated our model on the biopolymer ranking and obtained good results on the current benchmark, as reported in Table 1 in terms of Spearman rank correlation. Our proposed model performs particularly well on the PSR task, outperforming the GVP-GNN [23] on the global Spearman rank correlation on the test set, while our model is more parameter efficient (383K vs. 640K). We believe our model could be further improved by additional hyperparameter tuning, e.g., by increasing the number of scalar or vector channels, which we did not do in our study in order to compare against the baseline models.
256
+
257
+ We noticed that the RSR benchmark was particularly difficult to validate, as only a few dozen experimentally determined RNA structures exist to date, and the structural models generated in the ATOM3D framework are labeled with the RMSD to their native structure, which is known to be sensitive to outlier regions, for example due to inadequate modelling of loop regions [33]; the GDT_TS metric might be a better suited target for predicting a ranking of generated RNA structures, as in the PSR benchmark.
258
+
259
+ Another challenging and important task for drug discovery projects is estimating the binding strength (affinity) of a candidate drug’s atomistic interaction with a target protein. We use the ligand binding affinity (LBA) dataset and find that among the GNN architectures, our proposed model obtains the best results, while also being computationally cheap and fast to train. The best performing model on the LBA task is a 3D CNN that works on a joint protein-ligand representation using a voxel grid and enforces equivariance through data augmentation. The inferior performance of all equivariant GNNs might be caused by the need for larger filters to better capture locality and many-body effects, where 3D CNNs have an advantage when using voxel representations, while GNNs commonly capture 2-body effects. Furthermore, as all GNN models jointly represent ligand and protein as one graph by connecting vertices through a distance cutoff of 4.5 Å, we believe that such a union loses the information needed to distinguish ligand atoms from protein atoms. A promising direction to investigate is to incorporate separate ligand and protein GNN encoders and merge the two embeddings prior to the binding affinity prediction, similar to Graph Matching Networks [34] and recently realized by Stärk et al. [35] in a slightly different context.
260
+
261
+ Notably, our proposed EQGAT architecture performs on par with the SEGNN, which implements geometric tensors of higher order, i.e., of rotation order $l = 2$ , that transform as rank 2 Cartesian tensors. We believe that including the cross product in our vector message in (10) allows the model to capture more of the geometric detail of a possible protein-ligand binding pose for accurately predicting the binding affinity.
262
+
263
+ Model Efficiency. We assess the model efficiency of EQGAT in terms of computation time as well as trainable parameters and compare against SchNet, PaiNN and SEGNN on the LBA, PSR and RSR benchmarks. These datasets have on average 408, 1624, and 2390 nodes per graph with 9180, 26756 and 44233 directed edges, respectively, for the training sets of LBA, PSR and RSR.
264
+
265
+ Table 2: Comparison on model efficiency when passing a batch of 10 macromolecular structures.
266
+
267
+ | Dataset | Model (# Param.) | Inference Time [ms] |
+ | --- | --- | --- |
+ | LBA | EQGAT (238K) | 11.94 |
+ | LBA | SchNet (240K) | 8.25 |
+ | LBA | PaiNN (379K) | 10.66 |
+ | LBA | SEGNN (238K) | 89.53 |
+ | PSR | EQGAT (383K) | 49.96 |
+ | PSR | SchNet (240K) | 18.36 |
+ | PSR | PaiNN (379K) | 18.58 |
+ | PSR | SEGNN (238K) | 255.44 |
+ | RSR | EQGAT (383K) | 75.45 |
+ | RSR | SchNet (240K) | 27.27 |
+ | RSR | PaiNN (379K) | 26.98 |
+ | RSR | SEGNN (238K) | 390.69 |
308
+
309
+ As these datasets consist of graphs with up to thousands of atoms, computationally and memory-efficient models are preferred such that batches of graphs can be stored in GPU memory and processed quickly during training. We measure the inference time of a random batch comprising 10 macromolecular structures on an NVIDIA V100 GPU. As shown in Table 2, SchNet and PaiNN are both parameter efficient and achieve the fastest inference time on a forward pass, while our proposed EQGAT is slower, mainly due to the softmax attention normalization in the denominator of Eq. (5). This could be improved if the softmax attention with its normalization were replaced by a sigmoid activation function to obtain soft attention weights; this step, however, results in an edge filter ${\alpha }_{ji}$ that does not sum to 1 when iterating over all neighbours $j$ . The SEGNN model has the longest runtime on the forward pass across the 3 datasets. This is mostly attributed to the Clebsch-Gordan tensor products, which can be very expensive in learning tasks that involve proteins, as the CG product is always performed on edges.
310
+
311
+ § 6 CONCLUSION, LIMITATIONS AND FUTURE WORK
312
+
313
+ In this work, we introduce a novel attention-based equivariant graph neural network for the prediction of properties of large biomolecules. Our proposed architecture makes use of rotationally equivariant features in its intermediate layers to faithfully represent the geometry of the data, while being computationally efficient, as all equivariant functions are directly implemented in the original Cartesian space. Currently, our proposed model requires more FLOPs and is therefore slower than recent equivariant GNNs that operate in Cartesian space, such as PaiNN, which might be a limitation but can be further investigated and possibly improved if the standard softmax normalization is modified and model performance is not harmed by such a change. As our proposed model operates on Cartesian tensors and we restrict the representation to rank 1 only, a promising general direction for future investigation is the implementation of Cartesian equivariant GNNs that leverage higher-rank tensors in their layers. As it is to date not clear how much benefit higher-rank Cartesian tensors provide for learning tasks that involve large biomolecular systems, we hope that our work and open-source code will be useful for the graph learning and computational biology communities.
314
+
315
+ § CODE AVAILABILITY
316
+
317
+ We provide the implementation of our model and experiments at https://anonymous.4open.science/r/eqgat-3A3C/README.md. We use PyTorch [36] as our deep learning framework and PyTorch Geometric [37] to implement our GNNs.
papers/LOG/LOG 2022/LOG 2022 Conference/kvwWjYQtmw/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,539 @@
1
+ # GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Graph neural networks (GNNs) have been increasingly deployed in various applications that involve learning on non-Euclidean data. However, recent studies show that GNNs are vulnerable to graph adversarial attacks. Although there are several defense methods to improve GNN robustness by eliminating adversarial components, they may also impair the underlying clean graph structure that contributes to GNN training. In addition, few of those defense models can scale to large graphs due to their high computational complexity and memory usage. In this paper, we propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models. GARNET first leverages weighted spectral embedding to construct a base graph, which is not only resistant to adversarial attacks but also contains critical (clean) graph structure for GNN training. Next, GARNET further refines the base graph by pruning additional uncritical edges based on probabilistic graphical model. GARNET has been evaluated on various datasets, including a large graph with millions of nodes. Our extensive experiment results show that GARNET achieves adversarial accuracy improvement and runtime speedup over state-of-the-art GNN (defense) models by up to ${10.23}\%$ and ${14.7} \times$ , respectively.
12
+
13
+ ## 1 Introduction
14
+
15
+ Recent years have witnessed a surge of interest in graph neural networks (GNNs), which incorporate both graph structure and node attributes to produce low-dimensional embedding vectors that maximally preserve graph structural information [1]. GNNs have achieved promising results in various real-world applications, such as recommendation systems [2], self-driving cars [3], and chip placement [4]. However, recent studies have shown that adversarial attacks on the graph structure, accomplished by inserting, deleting, or rewiring edges in an unnoticeable way, can easily fool GNN models and drastically degrade their accuracy in downstream tasks (e.g., node classification) [5, 6].
16
+
17
+ In the literature, one of the most effective ways to defend GNNs is to purify the graph by removing adversarial graph structures. Entezari et al. [7] observe that adversarial attacks mainly affect high-rank graph properties; thus they propose to first construct a low-rank graph by performing truncated singular value decomposition (TSVD) on the graph adjacency matrix, which can then be exploited for training a robust GNN model. Later, Jin et al. [8] propose Pro-GNN to jointly learn a new graph and a robust GNN model with low-rank constraints imposed on the graph structure. While prior methods using low-rank approximation largely eliminate adversarial components in the graph spectrum, they involve dense adjacency matrices during GNN training, leading to much higher time/space complexity and prohibiting their application in large-scale graph learning tasks.
18
+
19
+ In addition, due to the high computational cost of TSVD, existing low-rank based methods can only preserve top $r$ singular components (e.g., $r = {50}$ ). Consequently, as shown in Figure 1(a), these methods may lose a wide range of clean graph spectrum that corresponds to important structures of the clean graph in the spatial domain. This is confirmed in Figure 1(c), where the clean accuracy of the TSVD-based method largely increases when preserving more spectral information via increasing the graph rank $r$ . In other words, prior low-rank approximation methods eliminate high-rank adversarial
20
+
21
+ ![01963eee-3740-734d-92be-6621ae5a3aca_1_308_223_1167_310_0.jpg](images/01963eee-3740-734d-92be-6621ae5a3aca_1_308_223_1167_310_0.jpg)
22
+
23
+ Figure 1: "TSVD AdvGraph" and "GARNET AdvGraph" denote adversarial graphs purified by TSVD and GARNET, respectively. (a) Graph rank comparison on Cora under Metattack with different perturbation ratio. (b) Singular value comparison of different normalized graph adjacency matrices on Cora. (c) Accuracy $\pm$ std. of GCN-TSVD on Cora with different $r$ -rank approximation via TSVD.
24
+
25
+ components at the cost of inevitably impairing the important (clean) graph structure, which degrades the overall quality of the reconstructed graph and therefore limits the performance of GNN training.
26
+
27
+ In this work, we propose GARNET, a novel spectral approach to learning the underlying clean graph topology of an adversarial graph via combining spectral embedding with probabilistic graphical model (PGM), where the learned graph structure encodes the conditional dependence among low-dimensional node representations (spectral embedding vectors) [9]. More concretely, given an adversarial graph, GARNET first constructs a base graph topology by leveraging weighted spectral embeddings that are resistant to adversarial attacks, which is followed by an effective and efficient graph refinement scheme for pruning noncritical edges in the base graph by exploiting PGM.
28
+
29
+ By recovering the clean graph structure, Figures 1(a) and 1(b) show that the adversarial graph purified by GARNET largely restores the rank of the underlying clean graph. Thus, GARNET can be viewed as a reduced-rank topology learning approach that slightly reduces the rank of the input adversarial graph, which is fundamentally different from the prior low-rank based defense methods (e.g., TSVD and ProGNN). Moreover, GARNET scales comfortably to large graphs due to its nearly-linear algorithm complexity, and produces a sparse yet high-quality graph that improves GNN robustness without involving any dense adjacency matrices during GNN training. As a byproduct, unlike existing defense methods (e.g., ProGNN) that assume graphs to be homophilic, i.e, adjacent nodes in a graph tend to have similar attributes [10], GARNET does not have such an assumption and thus can protect GNNs against adversarial attacks on both homophilic and heterophilic graphs.
30
+
31
+ We evaluate GARNET on both homophilic and heterophilic datasets under strong graph adversarial attacks such as Nettack [5] and Metattack [6]. Moreover, we further show the nearly-linear scalability of our approach on the ogbn-products dataset that consists of millions of nodes [11]. Our experimental results indicate that GARNET largely improves both clean and adversarial accuracy over baselines in most cases. Our main technical contributions are summarized as follows:
32
+
33
+ - To our knowledge, we are the first to exploit spectral graph embedding and probabilistic graphical model for improving robustness of GNN models, which is achieved by learning a reduced-rank graph topology for recovering the underlying clean graph structure from the input adversarial graph.
34
+
35
+ - By recovering the critical edges that contribute to maximum likelihood estimation in PGM while ignoring adversarial components, GARNET produces a high-quality graph on which existing GNN models can be trained to achieve high accuracy. Our experimental results show that GARNET gains up to 10.23% adversarial accuracy improvement over state-of-the-art defense baselines.
36
+
37
+ - Our proposed reduced-rank topology learning method has a nearly-linear complexity in time/space and produces a sparse graph structure for scalable GNN training. This allows GARNET to run up to ${14.7} \times$ faster than prior defense methods on popular data sets such as Cora and Squirrel. In addition, GARNET scales comfortably to very large graph data sets with millions of nodes, while prior defense methods run out of memory even on a graph with ${20}\mathrm{k}$ nodes.
38
+
39
+ ## 2 Background
40
+
41
+ ### 2.1 Undirected Probabilistic Graphical Models
42
+
43
+ Consider an $n$ -dimensional random vector $x$ that follows a multivariate Gaussian distribution $x \sim N\left( {0,\Sigma }\right)$ , where $\Sigma = \mathbb{E}\left\lbrack {x{x}^{\top }}\right\rbrack \succ 0$ represents the covariance matrix, and $\Theta = {\Sigma }^{-1}$ represents the
44
+
45
+ precision matrix (inverse covariance matrix). Given a data matrix $X \in {R}^{n \times d}$ that includes $d$ i.i.d. (independent and identically distributed) samples $X = \left\lbrack {{x}_{1},\ldots ,{x}_{d}}\right\rbrack$ , where ${x}_{i} \sim N\left( {0,\Sigma }\right)$ has an $n$ -dimensional Gaussian distribution with zero mean, the goal of probabilistic graphical models (PGM) is to learn a precision matrix $\Theta$ that corresponds to an undirected graph structure $\mathcal{G}$ for encoding the conditional dependence between variables of the observations on columns of $X$ [12, 13]. Specifically, the classical graphical Lasso method aims at estimating a sparse $\Theta$ through maximum likelihood estimation (MLE) of $f\left( x\right)$ leveraging convex optimization [13]. In this work, we focus on one increasingly popular type of Gaussian graphical models, which is also known as attractive Gaussian Markov random fields (GMRFs). Attractive GMRFs restrict the precision matrix to be a Laplacian-like matrix $\Theta = L + \frac{I}{{\sigma }^{2}}$ , where $L = D - A$ denotes the set of valid graph Laplacian matrices with $D$ and $A$ representing the diagonal degree matrix and adjacency matrix of the underlying undirected graph, respectively, $I$ denotes the identity matrix, and ${\sigma }^{2}$ is a constant denoting prior data variance. Similar to the graphical Lasso method [13], recent methods for estimating attractive GMRFs leverage emerging graph signal processing (GSP) techniques to solve the following convex problem [9, 14-17]:
46
+
47
+ $$
48
+ \mathop{\max }\limits_{\Theta }\log \det \Theta - \frac{1}{d}\operatorname{tr}\left( {X{X}^{T}\Theta }\right) - \alpha \parallel \Theta {\parallel }_{1} \tag{1}
49
+ $$
50
+
51
+ where $\det \left( \cdot \right)$ and $\operatorname{tr}\left( \cdot \right)$ denote the determinant and trace operators, respectively, and $\alpha$ is a hyperparameter controlling the regularization term. The first two terms together can be interpreted as the log-likelihood under a GMRF. The last ${\ell }_{1}$ regularization term enforces $\Theta$ (and the corresponding graph) to be sparse. If $X$ is non-Gaussian, Equation 1 can be regarded as Laplacian estimation based on minimizing the Bregman divergence between positive definite matrices induced by the function $\Theta \mapsto - \log \det \left( \Theta \right)$ [18].
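
For illustration, a small NumPy sketch that evaluates the objective of Equation 1 for a candidate precision matrix (this is only the scoring function being maximized, not GARNET's solver):

```python
import numpy as np

def gmrf_objective(Theta: np.ndarray, X: np.ndarray, alpha: float) -> float:
    """Equation 1: log det(Theta) - tr(X X^T Theta)/d - alpha * ||Theta||_1."""
    d = X.shape[1]
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    trace_term = np.sum(X * (Theta @ X)) / d          # tr(X X^T Theta) without forming X X^T
    return logdet - trace_term - alpha * np.abs(Theta).sum()
```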
52
+
53
+ ### 2.2 Graph Adversarial Attacks
54
+
55
+ Most existing graph adversarial attacks aim at degrading the accuracy of GNN models by inserting/deleting edges in an unnoticeable way (e.g., maintaining node degree distribution) [19]. The most popular graph adversarial attacks fall into the following two categories: (1) targeted attack, (2) non-targeted attack. The targeted attacks attempt to mislead a GNN model to produce a wrong prediction on a target sample (e.g., node), while the non-targeted attacks strive to degrade the overall accuracy of a GNN model for the whole graph data set. Dai et al. [20] first formulate the targeted attack as a combinatorial optimization problem and leverages reinforcement learning to insert/delete edges such that the target node is misclassified. Zügner et al. [5] propose another targeted attack called Nettack, which produces an adversarial graph by maximizing the training loss of GNNs. Zügner and Günnemann [6] further introduce Metattack, a non-targeted attack that treats the graph as a hyperparameter and uses meta-gradients to perturb the graph structure. It is worth noting that graph adversarial attacks have two different settings: poison (perturb a graph prior to GNN training) and evasion (perturb a graph after GNN training). As shown by Zhu et al. [21], the poison setting is typically more challenging to defend, as it changes the graph structure that fools GNN training. Thus, we aim to improve model robustness against attacks under the poison setting.
56
+
57
+ ### 2.3 Graph Adversarial Defenses
58
+
59
+ To defend GNN against adversarial attacks, Entezari et al. [7] first observe that Nettack, a strong targeted attack, only changes the high-rank information of the adjacency matrix. Thus, they propose to construct a low-rank graph by performing truncated SVD to undermine the effects of adversarial attacks. Later, Jin et al. [8] propose Pro-GNN that adopts a similar idea yet jointly learns the low-rank graph and GNN model. Although those low-rank approximation based methods achieve state-of-the-art results on several datasets, they produce dense adjacency matrices that correspond to complete graphs, which would limit their applications for large graphs. Moreover, they only preserve a small region of the graph spectrum and thus may lose too much important information corresponding to the clean graph structure in the spatial domain, which limits the performance of GNN training. Recently, [22] exploit Laplacian eigenpairs to guide GNN training, which produces a robust model with quadratic time complexity and is thus not scalable to large graphs. In addition to the aforementioned spectral-based defense methods, GCNJaccard [23] and RS-GNN [24] purify the adversarial graph by connecting nodes with similar attributes or same labels. However, those defense methods explicitly (or implicitly) assume the underlying graph to be homophilic, which results in rather poor performance when defending GNN models on heterophilic graphs. In contrast to the prior arts, GARNET achieves highly robust yet scalable performance on both homophilic and heterophilic graphs under adversarial attacks by leveraging a novel graph purification scheme based on spectral embedding and graphical model.
60
+
61
+ ## 3 The GARNET Approach
62
+
63
+ ![01963eee-3740-734d-92be-6621ae5a3aca_3_431_273_938_472_0.jpg](images/01963eee-3740-734d-92be-6621ae5a3aca_3_431_273_938_472_0.jpg)
64
+
65
+ Figure 2: An overview of the three major phases of GARNET.
66
+
67
+ Recently, Entezari et al. [7] and Jin et al. [8] have shown that well-known graph adversarial attacks (e.g., Nettack and Metattack) are essentially high-rank attacks, which increase the graph rank by enlarging the smallest singular values of the adjacency matrix when perturbing the graph structure, while the rest of the graph spectrum remains almost the same. Consequently, a natural way to improve GNN robustness is to purify an adversarial graph by eliminating the high-rank components of its spectrum.
68
+
69
+ Low-rank topology learning (prior work). Given an adversarial adjacency matrix ${A}_{\text{adv }} \in {R}^{n \times n}$ , Entezari et al. [7] propose to reconstruct a low-rank approximated adjacency matrix via TSVD: $\widehat{A} = U\Sigma {V}^{T}$ , where $\Sigma \in {R}^{r \times r}$ is a diagonal matrix consisting of the $r$ largest singular values of ${A}_{\text{adv }}$ , and $U \in {R}^{n \times r}$ and $V \in {R}^{n \times r}$ contain the corresponding left and right singular vectors, respectively. As the largest singular values are hardly affected by graph adversarial attacks, the reconstructed low-rank adjacency matrix $\widehat{A}$ is resistant to adversarial attacks.
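
A minimal sketch of this prior TSVD-based purification, assuming SciPy and $r = 50$ as in the text:

```python
import numpy as np
from scipy.sparse.linalg import svds

def tsvd_purify(A_adv, r: int = 50) -> np.ndarray:
    """Prior work [7]: reconstruct a rank-r adjacency matrix from the top-r singular triplets."""
    U, S, Vt = svds(A_adv.astype(np.float64), k=r)   # r largest singular values/vectors
    return (U * S) @ Vt                              # dense n x n low-rank reconstruction
```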
70
+
71
+ However, due to the high computational cost of TSVD, $\widehat{A}$ is typically computed using only the top $r$ largest singular values and their corresponding singular vectors, where $r$ is a relatively small number (e.g., $r = {50}$ ). Consequently, the rank of $\widehat{A}$ is only $r = {50}$ , which is two orders of magnitude smaller than the rank of the clean graph, as shown in Figure 1(a). Since these low-rank methods are overly aggressive in reducing the graph rank, $\widehat{A}$ may lose too much important spectral information corresponding to the clean graph structure. As shown in Figure 1(c), the clean accuracy of the TSVD-based method is largely improved by increasing the graph rank $r$ , which indicates that the low-rank graph obtained with a small $r$ loses key graph structure contributing to GNN training. Note that the adversarial and clean graphs share most of the graph structure, as adversarial attacks perturb the clean graph in an unnoticeable way. Consequently, losing those important clean graph structures will also limit the performance of GNNs on the adversarial graph.
72
+
73
+ Reduced-rank topology learning (this work). Given the adversarial graph ${\mathcal{G}}_{\text{adv }}$ and its adjacency matrix ${A}_{adv}$ , our goal is to learn a reduced-rank graph, which slightly reduces the rank of ${\mathcal{G}}_{adv}$ to mitigate the effects of adversarial attacks, while retaining most of the important graph spectrum corresponding to the clean graph structure. As adversarial attacks mainly affect the least dominant singular components of ${A}_{\text{adv }}$ [7], one straightforward way for constructing such a reduced-rank graph is to utilize all the singular components except those least dominant ones via TSVD. Nonetheless, computing such a large number of singular components is computationally expensive [25], and is thus not scalable to large graphs.
74
+
75
+ To learn the reduced-rank graph in a scalable way, in this work, we leverage only the top few (e.g., 50) dominant singular components of ${A}_{adv}$ to restore its important graph spectrum, via recovering the corresponding clean graph structure with the aid of PGM. Figure 2 gives an overview of our proposed approach, GARNET, which consists of three major phases. The first phase constructs a base graph by exploiting spectral embedding and a scalable nearest-neighbor graph algorithm. The second phase further refines the base graph by pruning noncritical edges based on PGM. The last phase trains existing GNN models on the refined base graph to improve their robustness. Next, we will first describe our notion of clean graph recovery via PGM as well as the scalability issue of prior PGM-based work in Section 3.1, which motivates us to develop scalable GARNET kernels described in Sections 3.2 and 3.3. We further provide the overall complexity of GARNET in Section 3.4.
76
+
77
+ ### 3.1 Graph Recovery via Graphical Model
78
+
79
+ A general philosophy behind PGM is that there exists an underlying graph $G$ , whose structure determines the joint probability distribution of the observations on the data entities, i.e., columns of a data matrix $X \in {R}^{n \times d}$ , where $n$ is the number of data points, $d$ the dimension per data point. To recover the underlying graph structure from the data matrix $X$ , one common way is to leverage MLE by solving Equation 1 in Section 2.1. As the top few dominant singular components of the adjacency matrix capture the corresponding graph structure, we can naturally construct the data matrix $X$ based on those dominant singular components, and then adopt PGM to recover an underlying graph via MLE. To this end, we define a weighted spectral embedding matrix as follows:
80
+
81
+ Definition 3.1. Given the top $r$ smallest eigenvalues ${\lambda }_{1},{\lambda }_{2},\ldots ,{\lambda }_{r}$ and their corresponding eigenvectors ${v}_{1},{v}_{2},\ldots ,{v}_{r}$ of normalized graph Laplacian matrix ${L}_{\text{norm }} = I - {D}^{-\frac{1}{2}}A{D}^{-\frac{1}{2}}$ , where $I$ and $A$ are the identity matrix and graph adjacency matrix, respectively, and $D$ is a diagonal matrix of node degrees, the weighted spectral embedding matrix is defined as $V\overset{\text{ def }}{ = }$ $\left\lbrack {\sqrt{\left| 1 - {\lambda }_{1}\right| }{v}_{1},\ldots ,\sqrt{\left| 1 - {\lambda }_{r}\right| }{v}_{r}}\right\rbrack$ , whose $i$ -th row ${V}_{i, : }$ is the weighted spectral embedding of the corresponding $i$ -th node in the graph.
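
A sketch of Definition 3.1 with SciPy's sparse eigensolver; for large graphs a shift-invert or otherwise more scalable eigensolver would be preferable, but the computed quantity is the same.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def weighted_spectral_embedding(A: sp.spmatrix, r: int = 50) -> np.ndarray:
    """V = [sqrt(|1 - lambda_k|) * v_k] for the r smallest eigenpairs of L_norm = I - D^{-1/2} A D^{-1/2}."""
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L_norm = sp.eye(A.shape[0]) - d_inv_sqrt @ A @ d_inv_sqrt
    lam, vec = eigsh(L_norm, k=r, which='SM')        # r smallest eigenvalues/eigenvectors
    return vec * np.sqrt(np.abs(1.0 - lam))          # scale each eigenvector; rows are node embeddings
```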
82
+
83
+ Proposition 3.2. Given a normalized graph adjacency matrix ${A}_{\text{norm }} = {D}^{-\frac{1}{2}}A{D}^{-\frac{1}{2}}$ and weighted spectral embedding matrix $V$ of an undirected graph, let $\widehat{A}$ be the rank-r approximation of ${A}_{\text{norm }}$ via TSVD. If the top $r$ dominant eigenvalues of ${A}_{\text{norm }}$ are non-negative, then we have $\widehat{A} = V{V}^{T}$ .
84
+
85
+ Our proof for Proposition 3.2 is available in Appendix A. Proposition 3.2 shows the connection between weighted spectral embedding and the low-rank adjacency matrix $\widehat{A}$ obtained by TSVD. Specifically, the weighted spectral embedding matrix $V$ can be viewed as an eigensubspace matrix consisting of a few dominant singular components of the corresponding adjacency matrix. Thus, we can use $V$ to recover the underlying clean graph via PGM. However, obtaining $V$ requires the knowledge of the clean graph structure, which seems to create a chicken and egg problem.
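+
+ As a sanity check, the equality in Proposition 3.2 can be verified numerically on a toy graph. The sketch below is our own construction (not from the paper): two 15-node cliques joined by a single edge, so that the top-2 dominant eigenvalues of ${A}_{\text{norm }}$ are non-negative and the rank-2 TSVD of ${A}_{\text{norm }}$ should coincide with $V{V}^{T}$.
+
+ ```python
+ import numpy as np
+
+ n, m, r = 30, 15, 2
+ A = np.zeros((n, n))
+ A[:m, :m] = 1.0; A[m:, m:] = 1.0
+ np.fill_diagonal(A, 0.0)
+ A[0, m] = A[m, 0] = 1.0                        # bridge edge between the two cliques
+ d = A.sum(1)
+ A_norm = A / np.sqrt(np.outer(d, d))           # D^{-1/2} A D^{-1/2}
+
+ # rank-r approximation of A_norm via TSVD
+ U, s, Vt = np.linalg.svd(A_norm)
+ A_hat = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
+
+ # weighted spectral embedding from the r smallest eigenpairs of L_norm = I - A_norm
+ lam, vec = np.linalg.eigh(np.eye(n) - A_norm)  # eigenvalues in ascending order
+ V = vec[:, :r] * np.sqrt(np.abs(1.0 - lam[:r]))
+ print(np.allclose(A_hat, V @ V.T))             # expected: True
+ ```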
86
+
87
+ Fortunately, since the dominant singular components are hardly affected by adversarial attacks [7], the weighted spectral embedding is therefore also resistant to adversarial attacks, indicating that the underlying clean graph ${\mathcal{G}}_{\text{clean }}$ and its corresponding adversarial graph ${\mathcal{G}}_{\text{adv }}$ share almost the same weighted spectral embeddings. As a result, we can exploit the weighted spectral embedding matrix $V$ of ${\mathcal{G}}_{\text{adv }}$ to represent that of ${\mathcal{G}}_{\text{clean }}$ . By replacing the data matrix $X$ with $V$ in Equation 1, we have the following objective function:
88
+
89
+ $$
90
+ \mathop{\max }\limits_{\Theta } : F = \log \det \Theta - \frac{1}{r}\operatorname{tr}\left( {V{V}^{T}\Theta }\right) - \alpha \parallel \Theta {\parallel }_{1} \tag{2}
91
+ $$
92
+
93
+ By finding the optimizer ${\Theta }^{ * }$ in Equation 2, we can recover the underlying graph that maximizes the likelihood given the observation on the weighted spectral embedding $V$ . However, solving Equation 2 requires at least $O\left( {n}^{2}\right)$ time/space complexity per iteration even with the most efficient algorithms, which thus cannot scale to large graphs [13, 26, 27].
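+
+ For reference, the objective in Equation 2 is straightforward to evaluate for a given candidate precision matrix; the helper below is a minimal sketch of ours (reading $\parallel \Theta {\parallel }_{1}$ as the elementwise $\ell_1$ norm), not an optimized solver:
+
+ ```python
+ import numpy as np
+
+ def log_likelihood_F(theta, V, alpha=0.0):
+     """Evaluate F = log det(Theta) - tr(V V^T Theta)/r - alpha * ||Theta||_1."""
+     sign, logdet = np.linalg.slogdet(theta)
+     assert sign > 0, "Theta is expected to be positive definite (L + I/sigma^2)"
+     r = V.shape[1]
+     trace_term = np.trace(V @ (V.T @ theta)) / r
+     return logdet - trace_term - alpha * np.abs(theta).sum()
+ ```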
94
+
95
+ As $\Theta$ is constrained to be a Laplacian-like matrix, finding the optimizer ${\Theta }^{ * }$ in Equation 2 is equivalent to searching for critical edges in a complete graph, which would involve all possible (i.e., $O\left( {n}^{2}\right)$ ) edges. Here we say an edge is critical (noncritical) if including it in the graph significantly increases (decreases) $F$ in Equation 2. Hence, we can recover the underlying graph by pruning noncritical edges from the complete graph. However, storing a complete graph is still expensive. To obtain a near-linear algorithm for clean graph recovery, instead of searching the complete graph, we limit our search to an initial base graph ${\mathcal{G}}_{\text{base }}$ that is much sparser but contains sufficient information for identifying the candidate edges critical to recovering the clean graph. Subsequently, the final graphical model (graph Laplacian) can be obtained by further pruning noncritical edges from ${\mathcal{G}}_{\text{base }}$ .
96
+
97
+ ### 3.2 Base Graph Construction
98
+
99
+ During the first phase of GARNET (shown in Figure 2), our goal is to build a base graph ${\mathcal{G}}_{\text{base }}$ , which greatly reduces the search space by not constructing a complete graph while preserving the critical candidate edges that are key to clean graph recovery. To this end, we give the following theorem:
100
+
101
+ Theorem 3.3. Given a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ and its normalized Laplacian matrix ${L}_{\mathcal{G}}$ , let ${V}_{i}$ denote the weighted spectral embedding of node $i$ by using top $r$ eigenpairs of ${L}_{\mathcal{G}}$ . Suppose ${V}_{i}$ is normalized, i.e., ${\begin{Vmatrix}{V}_{i}\end{Vmatrix}}_{2} = 1$ , and a relatively small $r$ is picked such that ${\lambda }_{r} \leq 1$ , where ${\lambda }_{r}$ is the $r$ -th smallest eigenvalue of ${L}_{\mathcal{G}}$ , then we have $\mathop{\sum }\limits_{{\left( {i, j}\right) \in \mathcal{E}}}{\begin{Vmatrix}{V}_{i} - {V}_{j}\end{Vmatrix}}_{2}^{2} \leq {0.25r}$ .
102
+
103
+ Our proof for Theorem 3.3 is available in Appendix B. Note that $r$ is a small constant, which is independent of the graph size. Thus, Theorem 3.3 indicates that, if an edge connects nodes $i$ and $j$ in the clean graph, then the Euclidean distance between the weighted spectral embeddings of these two nodes will be small, which motivates us to build a k-nearest neighbor (kNN) graph as ${\mathcal{G}}_{\text{base }}$ to incorporate those clean edges.
104
+
105
+ Concretely, we first obtain the weighted spectral embedding matrix $V$ of the input adversarial graph ${\mathcal{G}}_{\text{adv }}$ to represent that of the underlying clean graph ${\mathcal{G}}_{\text{clean }}$ , as $V$ consists of dominant singular components that are shared by ${\mathcal{G}}_{\text{adv }}$ and ${\mathcal{G}}_{\text{clean }}$ [7]. We then leverage $V$ to construct a kNN graph, where each node is connected to its $k$ most similar nodes based on the Euclidean distance between their spectral embeddings. In this work, we exploit an approximate kNN algorithm for constructing the graph, which has $O\left( {\left| \mathcal{V}\right| \log \left| \mathcal{V}\right| }\right)$ complexity and thus can scale to very large graphs [28]. By choosing a proper $k$ (e.g., $k = {50}$ ), ${\mathcal{G}}_{\text{base }}$ is likely to cover edges in the underlying clean graph. Thus, ${\mathcal{G}}_{\text{base }}$ can serve as a reasonable search space for identifying critical edges in the next step.
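+
+ A minimal sketch of this step is shown below. It is our own illustration that uses scikit-learn's exact kNN search for readability, whereas the paper adopts an approximate (HNSW-style) kNN algorithm [28] to reach $O\left( {\left| \mathcal{V}\right| \log \left| \mathcal{V}\right| }\right)$ complexity on large graphs:
+
+ ```python
+ import scipy.sparse as sp
+ from sklearn.neighbors import kneighbors_graph
+
+ def build_base_graph(V, k=50):
+     """Connect each node to its k nearest neighbors in weighted-spectral-embedding space."""
+     knn = kneighbors_graph(V, n_neighbors=k, mode='connectivity', include_self=False)
+     knn = knn.maximum(knn.T)   # symmetrize so the base graph is undirected
+     return sp.csr_matrix(knn)
+ ```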
106
+
107
+ ### 3.3 Graph Refinement via Edge Pruning
108
+
109
+ For the second phase of GARNET shown in Figure 2, we refine ${\mathcal{G}}_{\text{base }}$ by aggressively pruning noncritical edges from it, such that the refined graph preserves only the edges that contribute most to the log-likelihood $F$ in Equation 2.
110
+
111
+ To identify critical (noncritical) edges that can most effectively increase (decrease) $F$ , we exploit the update of $\Theta$ based on gradient ascent: $\Theta \leftarrow \Theta + \eta \frac{\partial F}{\partial \Theta }$ , where $\eta$ is the step size. As mentioned in Section 2.1, $\Theta$ is constrained to be $L + \frac{I}{{\sigma }^{2}}$ , which means the off-diagonal elements in $\Theta$ correspond to the negatives of the edge weights in the underlying graph, i.e., ${\Theta }_{i, j} = - {w}_{i, j}$ . Thus, the update of ${\Theta }_{i, j}$ during gradient ascent can be viewed as:
112
+
113
+ $$
114
+ {\Theta }_{i, j} \leftarrow {\Theta }_{i, j} + \eta {\left( \frac{\partial F}{\partial \Theta }\right) }_{i, j} = {\Theta }_{i, j} - \eta \frac{\partial F}{\partial {w}_{i, j}} \tag{3}
115
+ $$
116
+
117
+ Equation 3 means that, if $\frac{\partial F}{\partial {w}_{i, j}}$ is large and positive, ${\Theta }_{i, j}$ will become more negative, which corresponds to increasing the edge weight in the underlying graph. Similarly, if $\frac{\partial F}{\partial {w}_{i, j}}$ is small and negative, ${\Theta }_{i, j}$ will be less negative, corresponding to decreasing the edge weight. In other words, the edge weight ${w}_{i, j}$ with a large (small) $\frac{\partial F}{\partial {w}_{i, j}}$ should be increased (decreased) to maximize the log-likelihood $F$ , meaning the corresponding edge is critical (noncritical). Thus, we can identify the critical edges once we know $\frac{\partial F}{\partial {w}_{i, j}}$ . By setting $\alpha = 0$ in Equation 2 (as GARNET naturally produces a sparse graph) and taking the partial derivative with respect to an edge weight ${w}_{i, j}$ , we have:
118
+
119
+ $$
120
+ \frac{\partial F}{\partial {w}_{i, j}} = \mathop{\sum }\limits_{{k = 1}}^{n}\frac{1}{{\lambda }_{k} + 1/{\sigma }^{2}}\frac{\partial {\lambda }_{k}}{\partial {w}_{i, j}} - \frac{{\begin{Vmatrix}{V}^{T}{e}_{i, j}\end{Vmatrix}}_{2}^{2}}{r} \tag{4}
121
+ $$
122
+
123
+ where ${\lambda }_{k},\forall k = 1,2,\ldots , n$ are the Laplacian eigenvalues of a given graph (e.g., ${\mathcal{G}}_{\text{base }}$ ), ${e}_{i, j} = {e}_{i} - {e}_{j}$ , and ${e}_{i}$ denotes the vector with all zero entries except for the $i$ -th entry being 1 .
124
+
125
+ Theorem 3.4 (Feng [17]). Let ${\lambda }_{k}$ and ${u}_{k}$ be the $k$ -th eigenvalue and the corresponding eigenvector of the Laplacian matrix, respectively. The spectral perturbation $\delta {\lambda }_{k}$ due to the increase of an edge weight ${w}_{i, j}$ can be estimated by $\delta {\lambda }_{k} = \delta {w}_{i, j}{\left( {u}_{k}^{T}{e}_{i, j}\right) }^{2}$ .
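+
+ Theorem 3.4 is a first-order eigenvalue perturbation estimate and can be sanity-checked numerically; the toy example below is our own (using the combinatorial Laplacian of a 5-node path graph) and compares the actual change of an eigenvalue with the estimate $\delta {w}_{i, j}{\left( {u}_{k}^{T}{e}_{i, j}\right) }^{2}$:
+
+ ```python
+ import numpy as np
+
+ def laplacian(w01):
+     """Combinatorial Laplacian of a 5-node path with a tunable weight on edge (0, 1)."""
+     W = np.zeros((5, 5))
+     W[0, 1] = W[1, 0] = w01
+     for i in range(1, 4):
+         W[i, i + 1] = W[i + 1, i] = 1.0
+     return np.diag(W.sum(1)) - W
+
+ k, dw = 2, 1e-4
+ lam, U = np.linalg.eigh(laplacian(1.0))
+ e01 = np.zeros(5); e01[0], e01[1] = 1.0, -1.0
+ estimate = dw * (U[:, k] @ e01) ** 2
+ actual = np.linalg.eigh(laplacian(1.0 + dw))[0][k] - lam[k]
+ print(estimate, actual)   # the two values should agree to first order in dw
+ ```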
126
+
127
+ The proof for Theorem 3.4 is available in Feng [17]. According to Theorem 3.4 and Equation 4, we can estimate $\frac{\partial F}{\partial {w}_{i, j}} \approx {\begin{Vmatrix}{U}^{T}{e}_{i, j}\end{Vmatrix}}_{2}^{2} - \frac{1}{r}{\begin{Vmatrix}{V}^{T}{e}_{i, j}\end{Vmatrix}}_{2}^{2}$ , where $U = \left\lbrack {\frac{{u}_{1}}{\sqrt{{\lambda }_{1} + 1/{\sigma }^{2}}},\ldots ,\frac{{u}_{r}}{\sqrt{{\lambda }_{r} + 1/{\sigma }^{2}}}}\right\rbrack$ , ${\lambda }_{i}$ is the $i$ -th smallest Laplacian eigenvalue of ${\mathcal{G}}_{\text{base }}$ , and ${u}_{i}$ is the corresponding eigenvector. Consequently, an edge $\left( {i, j}\right)$ is critical if ${\begin{Vmatrix}{U}^{T}{e}_{i, j}\end{Vmatrix}}_{2}^{2} \gg \frac{1}{r}{\begin{Vmatrix}{V}^{T}{e}_{i, j}\end{Vmatrix}}_{2}^{2}$ . As $V$ and $U$ are the spectral embeddings on the input adversarial graph and the base graph, respectively, we define the spectral embedding distortion ${s}_{i, j} = \frac{{\begin{Vmatrix}{U}^{T}{e}_{i, j}\end{Vmatrix}}_{2}^{2}}{{\begin{Vmatrix}{V}^{T}{e}_{i, j}\end{Vmatrix}}_{2}^{2}}$ to measure the edge importance. Accordingly, we prune edges in the base graph ${\mathcal{G}}_{\text{base }}$ that have small spectral embedding distortion, i.e., ${s}_{i, j} < \gamma$ , where $\gamma$ is a hyperparameter to control the sparsity of the refined graph. Hence, the refined base graph ${\mathcal{G}}_{\text{base }}^{\prime }$ largely recovers the underlying clean graph structure from the input adversarial graph. Since ${\mathcal{G}}_{\text{base }}^{\prime }$ is constructed by leveraging only the top few dominant singular components of ${\mathcal{G}}_{\text{adv }}$ , it ignores the high-rank adversarial components and is thus robust to adversarial attacks. As a result, we can train a given GNN model on ${\mathcal{G}}_{\text{base }}^{\prime }$ to improve its robustness, which is the last phase of GARNET.
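+
+ A minimal sketch of this pruning rule is given below (our own helper, not the authors' code). It assumes `base_adj` is the sparse adjacency matrix of ${\mathcal{G}}_{\text{base }}$ , `V` is the weighted spectral embedding of the adversarial graph, and `U` is built from the top $r$ Laplacian eigenpairs of ${\mathcal{G}}_{\text{base }}$ as defined above; note that ${\begin{Vmatrix}{U}^{T}{e}_{i, j}\end{Vmatrix}}_{2}^{2} = {\begin{Vmatrix}{U}_{i} - {U}_{j}\end{Vmatrix}}_{2}^{2}$ since ${e}_{i, j} = {e}_{i} - {e}_{j}$:
+
+ ```python
+ import numpy as np
+ import scipy.sparse as sp
+
+ def prune_noncritical_edges(base_adj, U, V, gamma=0.003, eps=1e-12):
+     """Keep only edges whose spectral embedding distortion s_ij exceeds gamma."""
+     coo = sp.triu(base_adj, k=1).tocoo()        # visit each undirected edge once
+     i, j = coo.row, coo.col
+     num = np.sum((U[i] - U[j]) ** 2, axis=1)    # ||U^T e_ij||^2
+     den = np.sum((V[i] - V[j]) ** 2, axis=1) + eps
+     keep = (num / den) >= gamma
+     refined = sp.coo_matrix((np.ones(keep.sum()), (i[keep], j[keep])), shape=base_adj.shape)
+     return (refined + refined.T).tocsr()        # symmetric adjacency of the refined graph
+ ```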
128
+
129
+ ### 3.4 Complexity of GARNET
130
+
131
+ The first phase of GARNET requires $O\left( {r\left| \mathcal{E}\right| }\right)$ time for computing top $r$ Laplacian eigenpairs [25], and $O\left( {\left| \mathcal{V}\right| \log \left| \mathcal{V}\right| }\right)$ time for $\mathrm{{kNN}}$ graph construction [28]. The second phase involves $O\left( {{rk}\left| \mathcal{V}\right| }\right)$ time for computing spectral embeddings and edge pruning on the kNN graph. Thus, the overall time complexity for graph purification is $O\left( {r\left( {\left| \mathcal{E}\right| + k\left| \mathcal{V}\right| }\right) + \left| \mathcal{V}\right| \log \left| \mathcal{V}\right| }\right)$ , where $\left| \mathcal{V}\right| \left( \left| \mathcal{E}\right| \right)$ denotes the number of nodes (edges) in the adversarial graph, and $k$ is the averaged node degree in the kNN graph. Our systematic approach of choosing $r$ and the space complexity analysis are in Appendix F.
132
+
133
+ ## 4 Experiments
134
+
135
+ We have conducted a comparative evaluation of GARNET against state-of-the-art defense GNN models under a targeted attack (Nettack) [5] and a non-targeted attack (Metattack) [6] on both homophilic and heterophilic datasets. Besides, we also evaluate GARNET's robustness against adaptive attacks. In addition, we further show the scalability of GARNET by comparing its run time with prior defense methods and evaluating GARNET on ogbn-products, which consists of more than 2 million nodes [11]. Finally, we conduct ablation studies to understand the effectiveness of GARNET kernels.
136
+
137
+ Experimental Setup. The details of the datasets used in our experiments are available in Appendix C. We choose as baselines two state-of-the-art defense methods based on graph purification: TSVD [7] and ProGNN [8]. Besides, we evaluate the training-based defense methods GCN-LFR [22] and GNNGuard [29] on homophilic and heterophilic graphs, respectively. Moreover, we use GCN [30] and GPRGNN [31] as the backbone GNN models for defense on homophilic datasets (i.e., Cora and Pubmed). As GCN performs poorly on heterophilic datasets [10, 32], we choose GPRGNN as the backbone model (as well as the surrogate model for attacking) on the Chameleon and Squirrel datasets. Due to the space limit, we provide defense results with H2GCN [10] as the backbone model in Appendix J. For all baselines, we tune their hyperparameters against adversarial attacks with a small perturbation, and keep the same hyperparameters for larger adversarial perturbations. Detailed hyperparameter settings of the baselines and GARNET are available in Appendix D. Our hardware information is provided in Appendix E.
138
+
139
+ ### 4.1 Robustness of GARNET
140
+
141
+ Defense on homophilic graphs. We first evaluate the model robustness on homophilic graphs against the targeted attack (Nettack) and the non-targeted attack (Metattack). Specifically, Nettack aims to fool a GNN model to misclassify some target nodes with a few structure (edge) perturbations. The goal of Metattack is to drop the overall accuracy of the whole test set with a given perturbation ratio budget (i.e., the number of adversarial edges over the number of total edges). Due to the space limit, we only show defense results under Nettack and Metattack with 5 perturbed edges per target node and 20% perturbation ratio, respectively. Results with other perturbation budgets are in Appendix I.
142
+
143
+ Table 1 reports the average accuracy over 10 runs on Cora and Pubmed. It shows that GARNET, with either a backbone GNN model (GCN or GPRGNN), outperforms defense baselines in terms of both clean and adversarial accuracy in most cases. We attribute the large accuracy improvement to GARNET's strengths in recovering key structures of the clean graph while ignoring the high-rank adversarial components during graph purification. Moreover, as both TSVD and ProGNN involve dense matrices during GNN training, they run out of GPU memory even on Pubmed, a graph with only ${20}\mathrm{k}$ nodes. In contrast, GARNET is not only robust to adversarial attacks, but also scalable to large graphs, as empirically shown in Section 4.2.
144
+
145
+ Defense on heterophilic graphs. We report the averaged accuracy over 10 runs on heterophilic graphs in Table 2, which shows that all defense baselines fail to defend GPRGNN on heterophilic graphs and even degrade the accuracy of the vanilla GPRGNN by a large margin. The reason why ProGNN performs poorly is that it follows the graph homophily assumption for improving GNN robustness, which contradicts the property of heterophilic graphs. For the TSVD-based defense method, the low-rank graph generated by TSVD contains negative edge weights, which degrade the performance of GPRGNN when adapting its graph filter on heterophilic graphs [31]. Although [29] has shown that GNNGuard can improve model robustness on synthetic heterophilic graphs, our results indicate that it fails to defend GNN models on realistic heterophilic graphs. We attribute this to the quality of the graphlet degree vectors used in GNNGuard being degraded by the structural perturbations induced by adversarial attacks. In contrast, GARNET largely recovers the clean graph structure based on Theorem 3.3, without any assumption on whether adjacent nodes have similar attributes. In other words, GARNET will produce a heterophilic graph if the underlying clean graph is heterophilic, which is further confirmed in Appendix O. Consequently, GARNET improves accuracy over defense baselines by up to ${10.23}\%$ (i.e., ${43.64}\% - {33.41}\%$ on Squirrel under Nettack) on heterophilic graphs.
146
+
147
+ Table 1: Averaged node classification accuracy $\left( \% \right) \pm$ std under targeted attack (Nettack) and non-targeted attack (Metattack) on homophilic graphs - We bold and underline the first and second highest accuracy of each backbone GNN model, respectively. OOM means out of memory.
148
+
149
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Cora (Nettack)</td><td colspan="2">Cora (Metattack)</td><td colspan="2">Pubmed (Nettack)</td><td colspan="2">Pubmed (Metattack)</td></tr><tr><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td></tr><tr><td>GCN-Vanilla</td><td>$\underline{80.96} \pm {0.95}$</td><td>${55.66} \pm {1.95}$</td><td>${81.35} \pm {0.66}$</td><td>${56.28} \pm {1.19}$</td><td>${87.26} \pm {0.51}$</td><td>${66.67} \pm {1.34}$</td><td>${87.16} \pm {0.09}$</td><td>${77.20} \pm {0.27}$</td></tr><tr><td>GCN-TSVD</td><td>${72.65} \pm {2.29}$</td><td>${60.30} \pm {2.25}$</td><td>${73.86} \pm {0.53}$</td><td>${62.44} \pm {1.16}$</td><td>${87.03} \pm {0.48}$</td><td>$\underline{79.56} \pm {0.48}$</td><td>${84.53} \pm {0.08}$</td><td>$\underline{84.30} \pm {0.08}$</td></tr><tr><td>GCN-ProGNN</td><td>${80.54} \pm {1.21}$</td><td>$\underline{65.38} \pm {1.65}$</td><td>${78.56} \pm {0.36}$</td><td>$\underline{72.28} \pm {1.67}$</td><td>${88.14} \pm {1.44}$</td><td>${71.89} \pm {1.56}$</td><td>${84.62} \pm {0.11}$</td><td>${83.89} \pm {0.32}$</td></tr><tr><td>GCN-LFR</td><td>${80.07} \pm {0.95}$</td><td>${53.73} \pm {2.17}$</td><td>${77.23} \pm {2.61}$</td><td>${65.38} \pm {3.71}$</td><td>${87.20} \pm {1.24}$</td><td>${68.49} \pm {2.44}$</td><td>${81.91} \pm {0.26}$</td><td>${78.32} \pm {0.69}$</td></tr><tr><td>GCN-GARNET</td><td>${81.08} \pm {2.05}$</td><td>${67.04} \pm {2.05}$</td><td>$\underline{79.64} \pm {0.75}$</td><td>${73.89} \pm {0.91}$</td><td>$\underline{87.96} \pm {0.58}$</td><td>${86.12} \pm {0.86}$</td><td>$\underline{85.37} \pm {0.20}$</td><td>${85.14} \pm {0.23}$</td></tr><tr><td>GPR-Vanilla</td><td>${83.04} \pm {2.05}$</td><td>${62.89} \pm {1.95}$</td><td>${83.05} \pm {0.42}$</td><td>${74.27} \pm {2.11}$</td><td>${90.05} \pm {0.73}$</td><td>${76.99} \pm {1.16}$</td><td>${87.35} \pm {0.13}$</td><td>${84.18} \pm {0.15}$</td></tr><tr><td>GPR-TSVD</td><td>${81.68} \pm {1.78}$</td><td>${63.52} \pm {3.27}$</td><td>${81.61} \pm {0.54}$</td><td>${78.50} \pm {1.20}$</td><td>OOM</td><td>OOM</td><td>OOM</td><td>OOM</td></tr><tr><td>GPR-ProGNN</td><td>${82.04} \pm {1.33}$</td><td>${63.74} \pm {2.57}$</td><td>${82.04} \pm {0.90}$</td><td>${76.29} \pm {1.46}$</td><td>OOM</td><td>OOM</td><td>OOM</td><td>${OOM}$</td></tr><tr><td>GPR-GARNET</td><td>${82.77} \pm {1.89}$</td><td>${71.45} \pm {2.73}$</td><td>${82.67} \pm {1.89}$</td><td>${81.34} \pm {0.79}$</td><td>$\mathbf{{90.99}} \pm {0.52}$</td><td>${89.52} \pm {0.45}$</td><td>$\underline{86.86} \pm {0.57}$</td><td>${85.69} \pm {0.26}$</td></tr></table>
150
+
151
+ Table 2: Averaged node classification accuracy $\left( \% \right) \pm$ std on heterophilic graphs — We bold and underline the first and second highest accuracy, respectively. The backbone GNN model is GPRGNN.
152
+
153
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Chameleon (Nettack)</td><td colspan="2">Chameleon (Metattack)</td><td colspan="2">Squirrel (Nettack)</td><td colspan="2">Squirrel (Metattack)</td></tr><tr><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td></tr><tr><td>Vanilla</td><td>$\underline{71.46} \pm {1.92}$</td><td>$\underline{66.26} \pm {1.71}$</td><td>${61.36} \pm {1.00}$</td><td>$\underline{{53.20} \pm {0.88}}$</td><td>${41.36} \pm {2.87}$</td><td>$\underline{{39.45} \pm {2.36}}$</td><td>$\underline{39.51} \pm {1.64}$</td><td>$\underline{35.22} \pm {1.20}$</td></tr><tr><td>TSVD</td><td>${62.12} \pm {3.04}$</td><td>${60.37} \pm {2.86}$</td><td>${47.29} \pm {1.63}$</td><td>${45.12} \pm {1.34}$</td><td>${32.98} \pm {2.36}$</td><td>${31.20} \pm {1.84}$</td><td>${31.36} \pm {1.87}$</td><td>${23.91} \pm {1.40}$</td></tr><tr><td>ProGNN</td><td>${58.80} \pm {1.72}$</td><td>${57.07} \pm {1.82}$</td><td>${48.39} \pm {0.68}$</td><td>${46.69} \pm {0.61}$</td><td>${31.81} \pm {1.72}$</td><td>${27.27} \pm {1.87}$</td><td>${31.64} \pm {2.87}$</td><td>${29.36} \pm {3.61}$</td></tr><tr><td>GNNGuard</td><td>${64.87} \pm {2.62}$</td><td>${62.21} \pm {1.94}$</td><td>${58.01} \pm {1.57}$</td><td>${49.89} \pm {1.34}$</td><td>${34.17} \pm {2.33}$</td><td>${33.41} \pm {1.82}$</td><td>${37.46} \pm {0.56}$</td><td>${32.69} \pm {0.59}$</td></tr><tr><td>GARNET</td><td>${72.89} \pm {2.65}$</td><td>${71.83} \pm {2.11}$</td><td>${61.11} \pm {2.46}$</td><td>$\mathbf{{59.96}} \pm {0.84}$</td><td>${44.91} \pm {1.53}$</td><td>${43.64} \pm {1.53}$</td><td>${43.43} \pm {1.14}$</td><td>${41.97} \pm {1.02}$</td></tr></table>
154
+
155
+ Defense against adaptive attacks. As GARNET is non-differentiable during kNN graph construction, it is difficult to optimize a specific loss function for an adaptive attack. Instead, we adopt an attack called LowBlow from [7], which deliberately perturbs low-rank singular components in the graph spectrum, yet violates the unnoticeable condition (i.e., preserving the node degree distribution after the attack). Since LowBlow has cubic complexity for computing the full set of adjacency eigenpairs, we only show results on the small graph Cora in Table 3, which indicates that GARNET still achieves the highest adversarial accuracy under LowBlow, while all low-rank defense baselines perform even worse than the vanilla GPRGNN model. The reason is that the kNN graph (with a relatively large $k$ ) in GARNET is less vulnerable to the perturbations of weighted spectral embeddings (i.e., low-rank components of the clean graph) [33], compared to prior low-rank defense methods.
156
+
157
+ ### 4.2 Scalability of GARNET
158
+
159
+ To demonstrate the scalability of GARNET, we first compare the run time of GARNET with that of prior low-rank defense methods, using GPRGNN as the backbone GNN model. As shown in Figure 3, the TSVD defense method is slower than GARNET since it produces a dense adjacency matrix that slows down the GNN training. Moreover, ProGNN is extremely slow as it jointly learns the low-rank graph structure and the robust GNN model, which requires performing TSVD in every epoch. In contrast, GARNET can efficiently produce a sparse graph for downstream GNN training, leading to an end-to-end runtime speedup over prior methods by up to ${14.7} \times$ . In addition, we further evaluate the robustness of GARNET on two large datasets, ogbn-arxiv and ogbn-products, under the powerful and scalable attacks proposed by [34]. As we run out of GPU memory when performing the PR-BCD attack, we choose the more scalable version GR-BCD, which has lower memory usage. We use GCN as the backbone model since it outperforms GPRGNN on large graphs. As TSVD and ProGNN run out of memory on these two datasets, we choose GNNGuard, GCNJaccard [23], and Soft Median GDC [34] as baselines. Table 4 shows that GARNET achieves clean accuracy comparable to GCN, and drastically improves the adversarial accuracy over defense baselines by up to 16.13%.
160
+
161
+ Table 3: Averaged accuracy $\left( \% \right) \pm$ std on Cora under Metattack and LowBlow with ${20}\%$ perturbation ratio. We use GPRGNN as the backbone GNN model.
162
+
163
+ <table><tr><td>Model</td><td>Metattack</td><td>LowBlow</td></tr><tr><td>Vanilla</td><td>${74.27} \pm {2.11}$</td><td>${74.77} \pm {0.71}$</td></tr><tr><td>TSVD</td><td>${78.50} \pm {1.20}$</td><td>${26.03} \pm {2.76}$</td></tr><tr><td>ProGNN</td><td>${76.29} \pm {1.46}$</td><td>${69.88} \pm {1.61}$</td></tr><tr><td>GARNET</td><td>${81.34} \pm {0.79}$</td><td>${77.71} \pm {0.95}$</td></tr></table>
164
+
165
+ ![01963eee-3740-734d-92be-6621ae5a3aca_7_981_1807_433_297_0.jpg](images/01963eee-3740-734d-92be-6621ae5a3aca_7_981_1807_433_297_0.jpg)
166
+
167
+ Figure 3: End-to-end runtime comparison of GARNET and prior defense methods.
168
+
169
+ Table 4: Averaged accuracy $\left( \% \right) \pm$ std under GR-BCD attack.
170
+
171
+ <table><tr><td rowspan="2">Model</td><td colspan="3">ogbn-arxiv</td><td colspan="3">ogbn-products</td></tr><tr><td>Clean</td><td>25% Ptb.</td><td>${50}\%$ Ptb.</td><td>Clean</td><td>25% Ptb.</td><td>50% Ptb.</td></tr><tr><td>GCN</td><td>$\mathbf{{70.74}} \pm {0.26}$</td><td>${45.18} \pm {0.25}$</td><td>${39.12} \pm {0.27}$</td><td>$\underline{75.68} \pm {0.20}$</td><td>${64.70} \pm {0.43}$</td><td>${62.71} \pm {0.44}$</td></tr><tr><td>GNNGuard</td><td>${68.78} \pm {0.32}$</td><td>$\underline{47.46} \pm {0.11}$</td><td>$\underline{{41.18} \pm {0.12}}$</td><td>${74.82} \pm {0.11}$</td><td>${66.76} \pm {0.23}$</td><td>$\underline{63.22} \pm {0.26}$</td></tr><tr><td>GCNJaccard</td><td>${67.77} \pm {0.18}$</td><td>${46.27} \pm {0.11}$</td><td>${40.84} \pm {0.19}$</td><td>${72.95} \pm {0.08}$</td><td>${60.90} \pm {0.18}$</td><td>${58.84} \pm {0.20}$</td></tr><tr><td>Soft Median GDC</td><td>${69.75} \pm {0.03}$</td><td>${45.31} \pm {0.06}$</td><td>${40.11} \pm {0.06}$</td><td>${66.31} \pm {0.03}$</td><td>${60.59} \pm {0.05}$</td><td>${59.73} \pm {0.05}$</td></tr><tr><td>GARNET</td><td>$\underline{69.91} \pm {0.29}$</td><td>$\mathbf{{61.32}} \pm {0.20}$</td><td>$\mathbf{{60.88}} \pm {0.13}$</td><td>$\mathbf{{76.05}} \pm {0.19}$</td><td>$\mathbf{{75.03}} \pm {0.14}$</td><td>${74.97} \pm {0.24}$</td></tr></table>
172
+
173
+ ![01963eee-3740-734d-92be-6621ae5a3aca_8_417_466_959_310_0.jpg](images/01963eee-3740-734d-92be-6621ae5a3aca_8_417_466_959_310_0.jpg)
174
+
175
+ Figure 4: Ablation study of GARNET on graph refinement.
176
+
177
+ ### 4.3 Ablation Analysis of GARNET
178
+
179
+ Figure 4 compares GARNET results with and without graph refinement. When only constructing the base graph, GARNET achieves better adversarial accuracy than the vanilla GNN model, which confirms Theorem 3.3 that the base graph construction can successfully recover clean graph edges. The graph refinement step further improves GARNET accuracy ( $\sim 2\%$ increase) since some noncritical or even harmful edges are removed based on PGM. Due to space limitations, the ablation studies of GARNET on the kNN graph and edge pruning are available in Appendix G.
180
+
181
+ ### 4.4 Visualization
182
+
183
+ ![01963eee-3740-734d-92be-6621ae5a3aca_8_334_1240_1117_314_0.jpg](images/01963eee-3740-734d-92be-6621ae5a3aca_8_334_1240_1117_314_0.jpg)
184
+
185
+ Figure 5: Visualizations on the same target node (marked in blue) as well as its 1-hop and 2-hop neighbors. Neighbor nodes are marked in green if they have the same label as the target node, and red otherwise. (a) clean graph. (b) adversarial graph. (c) adversarial graph purified by GARNET.
186
+
187
+ We visualize the local structure (within 2-hop neighbors) of a randomly picked target node on Cora in Figure 5. By comparing Figures 5(b) and 5(c), it is clear that GARNET effectively removes most of the adversarial edges induced by Nettack that connect nodes with different labels [8]. As a result, it is trivial for the backbone GNN model to correctly predict the target node since the surrounding nodes share the same label as the target node in the GARNET graph. This explains why GARNET substantially improves the adversarial accuracy of GNN models. More visualizations are available in Appendix N.
188
+
189
+ ## 5 Conclusions
190
+
191
+ This work introduces GARNET, a spectral approach to robust and scalable graph neural networks that combines spectral embedding with a probabilistic graphical model. GARNET first uses weighted spectral embedding to construct a base graph, which is then refined by pruning noncritical edges based on the graphical model. Results show that GARNET not only outperforms state-of-the-art defense models, but also scales to large graphs with millions of nodes. An interesting direction for future work is to incorporate node feature information to further boost model robustness.
192
+
193
+ ## References
194
+
195
+ [1] William L Hamilton. Graph representation learning. Synthesis Lectures on Artifical Intelligence and Machine Learning, 14(3):1-159, 2020. 1
196
+
197
+ [2] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 974-983, 2018. 1
198
+
199
+ [3] Sergio Casas, Cole Gulino, Renjie Liao, and Raquel Urtasun. Spagnn: Spatially-aware graph neural networks for relational behavior forecasting from sensor data. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 9491-9497. IEEE, 2020. 1
200
+
201
+ [4] Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, Joe Wenjie Jiang, Ebrahim Songhori, Shen Wang, Young-Joon Lee, Eric Johnson, Omkar Pathak, Azade Nazi, et al. A graph placement methodology for fast chip design. Nature, 594(7862):207-212, 2021. 1
202
+
203
+ [5] Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2847-2856, 2018. 1, 2, 3, 7, 19
204
+
205
+ [6] Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. arXiv preprint arXiv:1902.08412, 2019. 1, 2, 3, 7
206
+
207
+ [7] Negin Entezari, Saba A Al-Sayouri, Amirali Darvishzadeh, and Evangelos E Papalexakis. All you need is low (rank): Defending against adversarial attacks on graphs. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 169-177, 2020. 1, 3, 4, 5, 6, 7, 8, 19
208
+
209
+ [8] Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, and Jiliang Tang. Graph structure learning for robust graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 66-74, 2020. 1, 3, 4, 7, 9, 14
210
+
211
+ [9] Xiaowen Dong, Dorina Thanou, Michael Rabbat, and Pascal Frossard. Learning graphs from data: A signal representation perspective. IEEE Signal Processing Magazine, 36(3):44-63, 2019. 2, 3
212
+
213
+ [10] Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. arXiv preprint arXiv:2006.11468, 2020. 2, 7, 14, 17, 21
214
+
215
+ [11] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020. 2, 7, 14
216
+
217
+ [12] Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate gaussian or binary data. The Journal of Machine Learning Research, 9:485-516, 2008. 3
218
+
219
+ [13] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441, 2008. 3, 5
220
+
221
+ [14] Xiaowen Dong, Dorina Thanou, Pascal Frossard, and Pierre Vandergheynst. Learning laplacian matrix in smooth graph signal representations. IEEE Transactions on Signal Processing, 64 (23):6160-6173, 2016. 3
222
+
223
+ [15] Hilmi E Egilmez, Eduardo Pavez, and Antonio Ortega. Graph learning from data under laplacian and structural constraints. IEEE Journal of Selected Topics in Signal Processing, 11(6):825-841, 2017.
224
+
225
+ [16] Vassilis Kalofolias and Nathanaël Perraudin. Large scale graph learning from smooth signals. International Conference on Learning Representations (ICLR 2019), 2019.
226
+
227
+ [17] Zhuo Feng. Sgl: Spectral graph learning from measurements. arXiv preprint arXiv:2104.07867, 2021. 3, 6
228
+
229
+ [18] Martin Slawski and Matthias Hein. Estimation of positive definite m-matrices and structure learning for attractive gaussian markov random fields. Linear Algebra and its Applications, 473: 145-179, 2015. 3
230
+
231
+ [19] Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Philip S Yu, Lifang He, and Bo Li. Adversarial attack and defense on graph data: A survey. arXiv preprint arXiv:1812.10528, 2018. 3
232
+
233
+ [20] Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. Adversarial attack on graph structured data. In International conference on machine learning, pages 1115-1124. PMLR, 2018. 3
234
+
235
+ [21] Jiong Zhu, Junchen Jin, Michael T Schaub, and Danai Koutra. Improving robustness of graph neural networks with heterophily-inspired designs. arXiv preprint arXiv:2106.07767, 2021. 3
236
+
237
+ [22] Heng Chang, Yu Rong, Tingyang Xu, Yatao Bian, Shiji Zhou, Xin Wang, Junzhou Huang, and Wenwu Zhu. Not all low-pass filters are robust in graph convolutional networks. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=bDdfxLQITtu. 3, 7
238
+
239
+ [23] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial examples on graph data: Deep insights into attack and defense. arXiv preprint arXiv:1903.01610, 2019. 3, 9
240
+
241
+ [24] Enyan Dai, Wei Jin, Hui Liu, and Suhang Wang. Towards robust graph neural networks for noisy graphs with sparse labels. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, pages 181-191, 2022. 3
242
+
243
+ [25] James Baglama and Lothar Reichel. Augmented implicitly restarted lanczos bidiagonalization methods. SIAM Journal on Scientific Computing, 27(1):19-42, 2005. 4, 7
244
+
245
+ [26] Cho-Jui Hsieh, Matyas A Sustik, Inderjit S Dhillon, and Pradeep Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. arXiv preprint arXiv:1306.3212, 2013. 5
246
+
247
+ [27] Cho-Jui Hsieh, Mátyás A Sustik, Inderjit S Dhillon, Pradeep Ravikumar, et al. Quic: quadratic approximation for sparse inverse covariance estimation. J. Mach. Learn. Res., 15(1):2911-2947, 2014. 5
248
+
249
+ [28] Yu A Malkov and Dmitry A Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence, 42(4):824-836, 2018. 6, 7, 15
250
+
251
+ [29] Xiang Zhang and Marinka Zitnik. Gnnguard: Defending graph neural networks against adversarial attacks. arXiv preprint arXiv:2006.08149, 2020. 7
252
+
253
+ [30] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 7, 17
254
+
255
+ [31] Eli Chien, Jianhao Peng, Pan Li, and Olgica Milenkovic. Adaptive universal generalized pagerank graph neural network. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=n6jl7fLxrP. 7, 14, 17
256
+
257
+ [32] Derek Lim, Felix Matthew Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Prasad Bhalerao, and Ser-Nam Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. In Advances in Neural Information Processing Systems, 2021. 7
258
+
259
+ [33] Yizhen Wang, Somesh Jha, and Kamalika Chaudhuri. Analyzing the robustness of nearest neighbors to adversarial examples. In International Conference on Machine Learning, pages 5133-5142. PMLR, 2018. 8
260
+
261
+ [34] Simon Geisler, Tobias Schmidt, Hakan Sirin, Daniel Zügner, Aleksandar Bojchevski, and Stephan Günnemann. Robustness of graph neural networks at scale. Advances in Neural Information Processing Systems, 34, 2021. 8, 9, 14
262
+
263
+ [35] Gene H Golub and Charles F Van Loan. Matrix computations, volume 3. JHU press, 2013. 13
264
+
265
+ [36] Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In International conference on machine learning, pages 40-48. PMLR, 2016. 14
266
+
267
+ [37] Benedek Rozemberczki, Carl Allen, and Rik Sarkar. Multi-scale attributed node embedding. Journal of Complex Networks, 9(2):cnab014, 2021. 14
268
+
269
+ [38] Yaxin Li, Wei Jin, Han Xu, and Jiliang Tang. Deeprobust: A pytorch library for adversarial attacks and defenses. arXiv preprint arXiv:2005.06149, 2020. 14
270
+
271
+ [39] Ulrike Von Luxburg. A tutorial on spectral clustering. Statistics and computing, 17(4):395-416, 2007. 15
272
+
273
+ [40] Chenhui Deng, Zhiqiang Zhao, Yongyu Wang, Zhiru Zhang, and Zhuo Feng. Graphzoom: A multi-level spectral approach for accurate and scalable graph embedding. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=r1lGOOEKDH. 19
274
+
275
+ [41] Wuxinlin Cheng, Chenhui Deng, Zhiqiang Zhao, Yaohui Cai, Zhiru Zhang, and Zhuo Feng. Spade: A spectral method for black-box adversarial robustness evaluation. In International Conference on Machine Learning, pages 1814-1824. PMLR, 2021. 19
276
+
277
+ ## A Proof for Proposition 3.2
278
+
279
+ Proof. As the graph is undirected, we can perform eigendecomposition on both ${A}_{\text{norm }}$ and ${L}_{\text{norm }}$ to obtain their real eigenvalues and the corresponding eigenvectors. Let ${\lambda }_{i}$ , ${\widehat{\lambda }}_{i}$ , and ${\sigma }_{i}$ , $i = 1,2,\ldots , r$ , denote the $r$ smallest eigenvalues of ${L}_{\text{norm }}$ , the $r$ largest eigenvalues of ${A}_{\text{norm }}$ , and the $r$ largest singular values of ${A}_{\text{norm }}$ , respectively. Since ${A}_{\text{norm }} = I - {L}_{\text{norm }}$ , ${A}_{\text{norm }}$ and ${L}_{\text{norm }}$ share the same set of eigenvectors while their eigenvalues satisfy ${\widehat{\lambda }}_{i} = 1 - {\lambda }_{i}, i = 1,2,\ldots , r$ . Moreover, since we assume that the $r$ largest-magnitude eigenvalues of ${A}_{\text{norm }}$ are non-negative, we have ${\sigma }_{i} = \left| {\widehat{\lambda }}_{i}\right| = {\widehat{\lambda }}_{i}, i = 1,2,\ldots , r$ . Thus, we have:
280
+
281
+ $$
282
+ V{V}^{T} = \left\lbrack {{v}_{1},\ldots ,{v}_{r}}\right\rbrack \left\lbrack \begin{array}{lll} \left| {1 - {\lambda }_{1}}\right| & & \\ & \ddots & \\ & & \left| {1 - {\lambda }_{r}}\right| \end{array}\right\rbrack {\left\lbrack {v}_{1},\ldots ,{v}_{r}\right\rbrack }^{T}
283
+ $$
284
+
285
+ $$
286
+ = \left\lbrack {{v}_{1},\ldots ,{v}_{r}}\right\rbrack \left\lbrack \begin{matrix} \left| \widehat{{\lambda }_{1}}\right| & & \\ & \ddots & \\ & & \left| \widehat{{\lambda }_{r}}\right| \end{matrix}\right\rbrack {\left\lbrack {v}_{1},\ldots ,{v}_{r}\right\rbrack }^{T}
287
+ $$
288
+
289
+ $$
290
+ = \left\lbrack {{v}_{1},\ldots ,{v}_{r}}\right\rbrack \left\lbrack \begin{array}{lll} {\sigma }_{1} & & \\ & \ddots & \\ & & {\sigma }_{r} \end{array}\right\rbrack {\left\lbrack {v}_{1},\ldots ,{v}_{r}\right\rbrack }^{T}
291
+ $$
292
+
293
+ $$
294
+ = \widehat{A}
295
+ $$
296
+
297
298
+
299
+ ## B Proof for Theorem 3.3
300
+
301
+ Proof. Since the weighted spectral embedding matrix $V$ is defined as $V \overset{\text{ def }}{ = } \left\lbrack {\sqrt{\left| 1 - {\lambda }_{1}\right| }{v}_{1},\ldots ,\sqrt{\left| 1 - {\lambda }_{r}\right| }{v}_{r}}\right\rbrack$ , where ${\lambda }_{1},{\lambda }_{2},\ldots ,{\lambda }_{r}$ and ${v}_{1},{v}_{2},\ldots ,{v}_{r}$ are the top $r$ smallest eigenvalues and the corresponding eigenvectors of the normalized graph Laplacian matrix ${L}_{\text{norm }} = I - {D}^{-\frac{1}{2}}A{D}^{-\frac{1}{2}}$ , we have:
302
+
303
+ $$
304
+ \mathop{\sum }\limits_{{\left( {i, j}\right) \in \mathcal{E}}}{\begin{Vmatrix}{V}_{i} - {V}_{j}\end{Vmatrix}}_{2}^{2} = \mathop{\sum }\limits_{{k = 1}}^{r}\mathop{\sum }\limits_{{\left( {i, j}\right) \in \mathcal{E}}}\left| {1 - {\lambda }_{k}}\right| {\left( {v}_{k, i} - {v}_{k, j}\right) }^{2}
305
+ $$
306
+
307
+ $$
308
+ = \mathop{\sum }\limits_{{k = 1}}^{r}\left| {1 - {\lambda }_{k}}\right| \mathop{\sum }\limits_{{\left( {i, j}\right) \in \mathcal{E}}}{\left( {v}_{k, i} - {v}_{k, j}\right) }^{2}
309
+ $$
310
+
311
+ $$
312
+ = \mathop{\sum }\limits_{{k = 1}}^{r}\left| {1 - {\lambda }_{k}}\right| {v}_{k}^{T}{L}_{\text{norm }}{v}_{k}
313
+ $$
314
+
315
+ $$
316
+ = \mathop{\sum }\limits_{{k = 1}}^{r}\left( {1 - {\lambda }_{k}}\right) {\lambda }_{k}
317
+ $$
318
+
319
+ $$
320
+ \leq \mathop{\sum }\limits_{{k = 1}}^{r}{0.25}
321
+ $$
322
+
323
+ $$
324
+ = {0.25r}
325
+ $$
326
+
327
328
+
329
+ The fourth equation above is based on the Courant-Fischer theorem [35] with the assumption that ${\lambda }_{r} \leq 1$ and ${\begin{Vmatrix}{v}_{k}\end{Vmatrix}}_{2} = 1,\forall k = 1,\ldots , r$ . The inequality follows from the arithmetic mean-geometric mean (AM-GM) inequality: since $0 \leq {\lambda }_{k} \leq 1$ , we have $\left( {1 - {\lambda }_{k}}\right) {\lambda }_{k} \leq {\left( \frac{\left( {1 - {\lambda }_{k}}\right) + {\lambda }_{k}}{2}\right) }^{2} = {0.25}$ .
330
+
331
+ Table 5: Statistics of datasets used in our experiments.
332
+
333
+ <table><tr><td>Dataset</td><td>Type</td><td>Homophily Score</td><td>Nodes</td><td>Edges</td><td>Classes</td><td>Features</td></tr><tr><td>Cora</td><td>Homophily</td><td>0.80</td><td>2,485</td><td>5, 069</td><td>7</td><td>1,433</td></tr><tr><td>Pubmed</td><td>Homophily</td><td>0.80</td><td>19,717</td><td>44,324</td><td>3</td><td>500</td></tr><tr><td>Chameleon</td><td>Heterophily</td><td>0.23</td><td>2,277</td><td>62,792</td><td>5</td><td>2,325</td></tr><tr><td>Squirrel</td><td>Heterophily</td><td>0.22</td><td>5, 201</td><td>396,846</td><td>5</td><td>2,089</td></tr><tr><td>ogbn-arxiv</td><td>Homophily</td><td>0.66</td><td>169,343</td><td>1,166,243</td><td>40</td><td>128</td></tr><tr><td>ogbn-products</td><td>Homophily</td><td>0.81</td><td>2,449,029</td><td>61,859,140</td><td>47</td><td>100</td></tr></table>
334
+
335
+ ## C Dataset Details
336
+
337
+ Table 5 shows the statistics of the datasets used in our experiments. We follow Zhu et al. [10] to compute the homophily score per dataset (lower score means more heterophilic). As in Jin et al. [8], we extract the largest connected components of the original Cora and Pubmed datasets [36] for the adversarial evaluation, with the same train/validation/test split. For Chameleon and Squirrel [37], we keep the same split setting as Chien et al. [31]. Finally, we follow the split setting of Open Graph Benchmark (OGB) [11] on ogbn-arxiv and ogbn-products. Note that all data used in our experiments do not contain personally identifiable information or offensive content.
338
+
339
+ In addition, we follow Jin et al. [8] for the selection of target nodes on Cora and Pubmed under Nettack. For the Chameleon and Squirrel datasets under Nettack, we choose target nodes that have degrees within the range of $\left\lbrack {{20},{50}}\right\rbrack$ and $\left\lbrack {{20},{140}}\right\rbrack$ , respectively. In regard to non-targeted attacks (i.e., Metattack), we choose nodes in the test set as target nodes for all datasets. We implement all the adversarial attacks based on the DeepRobust library [38].
340
+
341
+ ## D Hyperparameters Settings
342
+
343
+ ### D.1 Backbone GNN Models
344
+
345
+ GCN. We choose the GCN hyperparameters based on the DeepRobust library [38].
346
+
347
+ GPRGNN. We follow the hyperparameter settings provided at github.com/jianhao2016/GPRGNN with slightly different dropout rates (chosen from 0.3, 0.5, 0.7) and learning rates (chosen from 0.01, 0.05, 0.1). Specifically, we provide the complete choices of dropout rates and learning rates across all datasets and attack settings below:
348
+
349
+ - Cora-Nettack: dropout of 0.5 and learning rate of 0.01 .
350
+
351
+ - Cora-Metattack: dropout of 0.5 and learning rate of 0.01 .
352
+
353
+ - Pubmed-Nettack: dropout of 0.5 and learning rate of 0.01 .
354
+
355
+ - Pubmed-Metattack: dropout of 0.5 and learning rate of 0.01 .
356
+
357
+ - Chameleon-Nettack: dropout of 0.5 and learning rate of 0.05 .
358
+
359
+ - Chameleon-Metattack: dropout of 0.3 and learning rate of 0.05 .
360
+
361
+ - Squirrel-Nettack: dropout of 0.5 and learning rate of 0.1 .
362
+
363
+ - Squirrel-Metattack: dropout of 0.5 and learning rate of 0.1 .
364
+
365
+ H2GCN. We train a three-layer model in full batch, with a learning rate of 0.01, dropout of 0.5 , hidden dimension of 64, and 300 epochs for both Chameleon and Squirrel datasets.
366
+
367
+ ### D.2 Defense Baselines
368
+
369
+ TSVD. We use the same $r$ eigenvectors in TSVD as those used in GARNET, as shown in Table 6.
370
+
371
+ GCNJaccard. We choose the GCNJaccard hyperparameters based on the DeepRobust library [38].
372
+
373
+ GNNGuard. We set the edge pruning threshold (the only hyperparameter in GNNGuard) to ${P}_{0} = {0.1}$ .
374
+
375
+ Soft Median GDC. We strictly follow the hyperparameter setting suggested by Geisler et al. [34]. In particular, we choose temperature $T$ of 5.0 for soft median, $\alpha$ of 0.1 (0.15) and $k$ of 64 (32) for GDC on ogbn-arxiv (ogbn-products).
376
+
377
+ ProGNN. We find that its performance is very sensitive to hyperparameters. Thus, we strictly follow the tuned hyperparameters available at github.com/ChandlerBang/Pro-GNN/scripts. As GCN-ProGNN training is very slow on Pubmed (the estimated time is 30 days for 10 runs), we follow the suggestion from the ProGNN authors to replace "svd" with "truncated svd" in the ProGNN implementation.
378
+
379
+ ### D.3 GARNET
380
+
381
+ Table 6: Summary of hyperparameters in GARNET - We denote the number of eigenpairs for spectral embedding, the number of nearest neighbors for base graph construction, and the threshold for edge pruning by $r, k$ , and $\gamma$ , respectively.
382
+
383
+ <table><tr><td>Dataset</td><td>$r$</td><td>$k$</td><td>$\gamma$</td></tr><tr><td>Cora-Nettack</td><td>50</td><td>30</td><td>0.003</td></tr><tr><td>Cora-Metattack</td><td>50</td><td>30</td><td>0.003</td></tr><tr><td>Pubmed-Nettack</td><td>50</td><td>50</td><td>0.003</td></tr><tr><td>Pubmed-Metattack</td><td>50</td><td>50</td><td>0.003</td></tr><tr><td>Chameleon-Nettack</td><td>50</td><td>50</td><td>0.003</td></tr><tr><td>Chameleon-Metattack</td><td>50</td><td>50</td><td>0.003</td></tr><tr><td>Squirrel-Nettack</td><td>50</td><td>50</td><td>0.003</td></tr><tr><td>Squirrel-Metattack</td><td>50</td><td>50</td><td>0.003</td></tr><tr><td>ogbn-arxiv-GRBCD</td><td>500</td><td>50</td><td>0.003</td></tr><tr><td>ogbn-products-GRBCD</td><td>500</td><td>50</td><td>0.003</td></tr></table>
384
+
385
+ We show the hyperparameters of GARNET on different datasets under Nettack (1 perturbation per node), Metattack (10% perturbation ratio), and GR-BCD (25% perturbation ratio) in Table 6. Note that we provide our strategy of choosing $r$ in Appendix F, which avoids conducting hyperparameter tuning on $r$ per dataset. Besides, we set the prior data variance ${\sigma }^{2}$ to be 1000 for all graphs. In addition, we run all GNN training in full batch.
386
+
387
+ ## E Hardware Information
388
+
389
+ We conduct all experiments on a Linux machine with an Intel Xeon Gold 5218 CPU (8 cores @ 2.30GHz), 8 NVIDIA RTX 2080 Ti GPUs (11 GB memory per GPU), and 1 NVIDIA RTX A6000 GPU (48 GB memory).
390
+
391
+ ## F Complexity Analysis of GARNET
392
+
393
+ ### F.1 Time Complexity - Choice of $r$
394
+
395
+ We choose $r$ based on the number of classes per dataset, which depends on the downstream task rather than the number of nodes in the graph. Specifically, supposing ${\lambda }_{r}$ is the $r$ -th largest eigenvalue, an appropriate $r$ is chosen when there is a large gap between ${\lambda }_{r}$ and ${\lambda }_{r + 1}$ (i.e., a large eigengap) in the graph spectrum. According to [39], the eigengap is highly related to the number of clusters in the graph. In this work, we approximate $r$ by $r \approx {10c}$ to cover the large eigengap, where $c$ denotes the number of classes/clusters. As shown in Tables 5 and 6, the number of classes in small (large) graphs is around 5 (50), so we use $r = {50}$ ( $r = {500}$ ) in our experiments. As a result, GARNET has the near-linear time complexity $O\left( {r\left( {\left| E\right| + k\left| V\right| }\right) + \left| V\right| \log \left| V\right| }\right) = O\left( {c\left( {\left| E\right| + k\left| V\right| }\right) + \left| V\right| \log \left| V\right| }\right)$ .
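+
+ A tiny sketch of this heuristic (our own helper, not from the paper): start from $r = {10c}$ and report the local eigengap, assuming `lams` holds the leading eigenvalues in the relevant order.
+
+ ```python
+ def pick_r(num_classes, lams):
+     """Return r = 10 * c together with the eigengap between lambda_r and lambda_{r+1}."""
+     r = 10 * num_classes
+     gap = lams[r] - lams[r - 1] if len(lams) > r else float('nan')
+     return r, gap
+ ```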
396
+
397
+ ### F.2 Space Complexity
398
+
399
+ GARNET involves forming a sparse kNN graph by building hierarchical navigable small world (HNSW) graphs [28] that contain $O\left( {\left| V\right| \log \left| V\right| }\right)$ nodes in total and each node connects to a fixed number of neighbors. Thus, the space complexity of storing the HNSW graphs is $O\left( {\left| V\right| \log \left| V\right| }\right)$ . In addition, GARNET also needs to store the input adversarial graph and the produced kNN graph. As a result, the total space complexity of GARNET is $O\left( {\left| V\right| \left( {\log \left| V\right| + k}\right) + \left| \mathcal{E}\right| }\right)$ , where $\left| \mathcal{V}\right|$ and $\left| \mathcal{E}\right|$ denote the number of nodes and edges in the adversarial graph, respectively, and $k$ is the averaged node degree in the kNN graph.
400
+
401
+ Apart from the complexity analysis, we further provide the algorithms of GARNET and TSVD below for comparison.
402
+
403
+ Algorithm 1: GARNET based adversarial defense (this work)
404
+
405
+ Input: Adversarial graph ${\mathcal{G}}_{\text{adv }}$ ; node feature matrix $X \in {R}^{n \times d}$ ; prior data variance ${\sigma }^{2}$ ; truncated SVD rank $r$ ; number of nearest neighbors $k$ ; threshold for edge pruning $\gamma$ ; a GNN model for defense.
+
+ Output: Node embedding matrix $Z \in {R}^{n \times c}$ .
+
+ 1. $M, S = \operatorname{eigs}\left( {{\mathcal{G}}_{adv}, r}\right)$ ;
+ 2. $V = M\sqrt{\left| I - S\right| }$ ;
+ 3. ${\mathcal{G}}_{\text{base }} = \operatorname{kNN\_graph}\left( {V, k}\right)$ ;
+ 4. ${M}^{\prime },{S}^{\prime } = \operatorname{eigs}\left( {{\mathcal{G}}_{\text{base }}, r}\right)$ ;
+ 5. $U = {M}^{\prime }/\sqrt{{S}^{\prime } + I/{\sigma }^{2}}$ ;
+ 6. for each edge ${e}_{i, j} \in {\mathcal{G}}_{\text{base }}$ : if $\frac{{\begin{Vmatrix}{U}_{i} - {U}_{j}\end{Vmatrix}}_{2}^{2}}{{\begin{Vmatrix}{V}_{i} - {V}_{j}\end{Vmatrix}}_{2}^{2}} < \gamma$ , prune ${e}_{i, j}$ from ${\mathcal{G}}_{\text{base }}$ ;
+ 7. $Z = \operatorname{GNN}\left( {{\mathcal{G}}_{\text{base }}^{\prime }, X}\right)$ .
438
+
439
+ Algorithm 2: Truncated SVD based adversarial defense (prior work)
440
+
441
+ Input: Adversarial graph ${\mathcal{G}}_{\text{adv }}$ ; node feature matrix $X \in {R}^{n \times d}$ ; truncated SVD rank $r$ ; a GNN model for defense.
+
+ Output: Node embedding matrix $Z \in {R}^{n \times c}$ .
+
+ 1. $U, S, V = \operatorname{TSVD}\left( {{\mathcal{G}}_{\text{adv }}, r}\right)$ ;
+ 2. ${A}_{{\mathcal{G}}_{\text{tsvd }}} = {US}{V}^{T}$ ;
+ 3. $Z = \operatorname{GNN}\left( {{\mathcal{G}}_{\text{tsvd }}, X}\right)$ .
456
+
457
+ ## G Ablation Study
458
+
459
+ ### G.1 Choice of $k$ for kNN Graph Construction
460
+
461
+ ![01963eee-3740-734d-92be-6621ae5a3aca_15_365_1465_1022_494_0.jpg](images/01963eee-3740-734d-92be-6621ae5a3aca_15_365_1465_1022_494_0.jpg)
462
+
463
+ Figure 6: Ablation study of GARNET on $k$ for kNN graph construction.
464
+
465
+ To evaluate the sensitivity of GARNET to k-nearest-neighbor (kNN) graph construction, we measure the adversarial accuracy of GARNET with different $k$ values for constructing kNN graphs. Figure 6 shows that the accuracy of GARNET does not change much when varying the $k$ value within the range of $\left\lbrack {{10},{100}}\right\rbrack$ , indicating that a relatively large $k$ (e.g., $k \geq {10}$ ) enables the kNN graph to incorporate most of the edges in the underlying clean graph. Consequently, the performance of GARNET is relatively robust to the choice of $k$ for kNN graph construction. As the peak performance is typically achieved in $\left\lbrack {{30},{80}}\right\rbrack$ , we recommend choosing $k = {30} \sim {80}$ for building the kNN graph in practice.
466
+
467
+ ### G.2 Choice of $\gamma$ for Edge Pruning
468
+
469
+ ![01963eee-3740-734d-92be-6621ae5a3aca_16_318_510_1161_378_0.jpg](images/01963eee-3740-734d-92be-6621ae5a3aca_16_318_510_1161_378_0.jpg)
470
+
471
+ Figure 7: Ablation study of GARNET on threshold $\gamma$ for edge pruning.
472
+
473
+ Apart from the choice of $k$ for kNN graph construction, another critical hyperparameter of GARNET is the threshold $\gamma$ that determines whether an edge should be pruned from the base graph. Thus, we further evaluate the effect of $\gamma$ on the performance of GARNET. Specifically, we pick $\gamma$ from the set $\{ {0.001},{0.005},{0.01},{0.05},{0.1}\}$ and evaluate the corresponding adversarial accuracy of GARNET under Nettack with 1 perturbation per target node. As shown in Figure 7, an improper choice of $\gamma$ may degrade the adversarial accuracy of GARNET by 3%. However, picking $\gamma$ around 0.01 always achieves a reasonable adversarial accuracy on different datasets with different backbone GNN models. As a result, we recommend choosing $\gamma$ in the range of $\left\lbrack {{0.005},{0.05}}\right\rbrack$ in practice. In addition, Figure 7 also shows that GPRGNN-GARNET is more robust to changes of $\gamma$ than GCN-GARNET, which is because the adaptive graph filter in GPRGNN can adapt to the graph structure after edge pruning.
474
+
475
+ ## H Backbone GNN Models for Defense
476
+
477
+ As GARNET can be integrated with any existing GNN model to improve its adversarial accuracy, we choose two popular GNN models as the backbone models in our experiments: GCN and GPRGNN [30, 31]. As the GCN model implicitly assumes the underlying graph is homophilic, it performs poorly on heterophilic graphs [10]. In contrast, GPRGNN can work on both homophilic and heterophilic datasets, due to its learned graph filter that can adapt to the homophily/heterophily property of the underlying graph. Thus, we choose GPRGNN as the backbone model for evaluation on heterophilic datasets. In addition, we also show the defense results with H2GCN [10] as the backbone model in Appendix J.
478
+
479
+ ## I Defense Results with Various Perturbation Budgets
480
+
481
+ We provide additional defense results under Nettack and Metattack with various perturbations in Tables 7 and 8 respectively. The results indicate that GARNET outperforms prior defense methods in most cases.
482
+
483
+ ## J Defense on H2GCN
484
+
485
+ We provide the results of combining GARNET with H2GCN [10] on heterophilic graphs in Table 9, which shows that GARNET achieves the highest accuracy in most cases and improves the accuracy on all perturbed graphs by a large margin compared to the vanilla H2GCN as well as H2GCN-TSVD.
486
+
487
+ Table 7: Averaged node classification accuracy $\left( \% \right) \pm$ std under targeted attack (Nettack) with different perturbation ratio - We denote the evaluated dataset by its name with the number of perturbations (e.g., Cora-0 means the clean Cora graph and Cora-1 denotes there is 1 adversarial edge perturbation per target node). As GCN is not designed for heterophilic graphs, we only show results of defense methods with GPRGNN as the backbone model on Chameleon and Squirrel. We bold and underline the first and second highest accuracy of each backbone GNN model, respectively. OOM means out of memory.
488
+
489
+ <table><tr><td rowspan="2">Dataset</td><td colspan="4">GCN</td><td colspan="4">GPRGNN</td></tr><tr><td>Vanilla</td><td>TSVD</td><td>ProGNN</td><td>GARNET</td><td>Vanilla</td><td>TSVD</td><td>ProGNN</td><td>GARNET</td></tr><tr><td>Cora-0</td><td>${80.96} \pm {0.95}$</td><td>${72.65} \pm {2.29}$</td><td>${80.54} \pm {1.21}$</td><td>${81.08} \pm {2.05}$</td><td>$\mathbf{{83.04}} \pm {2.05}$</td><td>${81.68} \pm {1.78}$</td><td>${82.04} \pm {1.33}$</td><td>$\underline{{82.77} \pm {1.89}}$</td></tr><tr><td>Cora-1</td><td>${70.06} \pm {0.81}$</td><td>${71.36} \pm {1.63}$</td><td>${81.65} \pm {0.59}$</td><td>$\underline{79.75} \pm {2.35}$</td><td>$\underline{81.68} \pm {2.18}$</td><td>${79.36} \pm {2.23}$</td><td>${80.56} \pm {1.71}$</td><td>${82.17} \pm {1.95}$</td></tr><tr><td>Cora-2</td><td>${68.60} \pm {1.81}$</td><td>${70.66} \pm {2.76}$</td><td>${79.83} \pm {1.10}$</td><td>${79.69} \pm {1.50}$</td><td>${74.34} \pm {2.41}$</td><td>${76.26} \pm {2.34}$</td><td>${76.12} \pm {2.43}$</td><td>${78.55} \pm {2.11}$</td></tr><tr><td>Cora-3</td><td>${65.04} \pm {3.31}$</td><td>${68.20} \pm {1.93}$</td><td>$\underline{72.08} \pm {1.20}$</td><td>${74.42} \pm {2.06}$</td><td>${70.96} \pm {2.00}$</td><td>${70.90} \pm {3.89}$</td><td>$\underline{73.74} \pm {2.73}$</td><td>${79.40} \pm {1.35}$</td></tr><tr><td>Cora-4</td><td>${61.69} \pm {1.48}$</td><td>${65.34} \pm {3.46}$</td><td>${67.83} \pm {1.87}$</td><td>$\mathbf{{69.60}} \pm {2.67}$</td><td>${65.90} \pm {1.61}$</td><td>${65.51} \pm {3.27}$</td><td>${68.94} \pm {3.25}$</td><td>${72.77} \pm {2.16}$</td></tr><tr><td>Cora-5</td><td>${55.66} \pm {1.95}$</td><td>${60.30} \pm {2.25}$</td><td>${65.38} \pm {1.65}$</td><td>${67.04} \pm {2.05}$</td><td>${62.89} \pm {1.95}$</td><td>${63.52} \pm {3.27}$</td><td>$\underline{63.74} \pm {2.57}$</td><td>${71.45} \pm {2.73}$</td></tr><tr><td>Pubmed-0</td><td>${87.26} \pm {0.51}$</td><td>${87.03} \pm {0.48}$</td><td>${88.14} \pm {1.44}$</td><td>${87.96} \pm {0.58}$</td><td>${90.05} \pm {0.73}$</td><td>${OOM}$</td><td>${OOM}$</td><td>${90.99} \pm {0.52}$</td></tr><tr><td>Pubmed-1</td><td>${84.29} \pm {0.68}$</td><td>${86.46} \pm {0.28}$</td><td>${85.75} \pm {1.23}$</td><td>$\mathbf{{87.03} \pm {0.68}}$</td><td>${89.30} \pm {0.54}$</td><td>OOM</td><td>OOM</td><td>${90.91} \pm {0.47}$</td></tr><tr><td>Pubmed-2</td><td>${82.17} \pm {0.67}$</td><td>${83.68} \pm {0.46}$</td><td>${81.23} \pm {1.21}$</td><td>${86.92} \pm {0.45}$</td><td>${87.42} \pm {0.28}$</td><td>OOM</td><td>${OOM}$</td><td>${90.75} \pm {0.55}$</td></tr><tr><td>Pubmed-3</td><td>${81.13} \pm {0.53}$</td><td>${81.34} \pm {0.68}$</td><td>${80.65} \pm {1.39}$</td><td>${86.50} \pm {0.45}$</td><td>${84.46} \pm {0.53}$</td><td>${OOM}$</td><td>${OOM}$</td><td>${90.70} \pm {0.37}$</td></tr><tr><td>Pubmed-4</td><td>${75.48} \pm {0.52}$</td><td>${82.41} \pm {0.54}$</td><td>${78.46} \pm {1.11}$</td><td>${86.44} \pm {0.64}$</td><td>${81.72} \pm {0.72}$</td><td>OOM</td><td>OOM</td><td>${90.11} \pm {0.57}$</td></tr><tr><td>Pubmed-5</td><td>${66.67} \pm {1.34}$</td><td>${79.56} \pm {0.48}$</td><td>${71.89} \pm {1.56}$</td><td>${86.12} \pm {0.86}$</td><td>${76.99} \pm {1.16}$</td><td>${OOM}$</td><td>${OOM}$</td><td>${89.52} \pm {0.45}$</td></tr><tr><td>Chameleon-0</td><td/><td/><td/><td/><td>$\underline{71.46} \pm {1.92}$</td><td>${62.12} \pm {3.04}$</td><td>${58.80} \pm {1.72}$</td><td>${72.89} \pm {2.65}$</td></tr><tr><td>Chameleon-1</td><td/><td/><td/><td/><td>${71.02} \pm {1.57}$</td><td>${61.34} \pm {2.93}$</td><td>${58.05} \pm {1.90}$</td><td>${72.68} \pm 
{1.89}$</td></tr><tr><td>Chameleon-2</td><td/><td/><td/><td/><td>${70.71} \pm {1.12}$</td><td>${61.09} \pm {2.80}$</td><td>${57.44} \pm {1.67}$</td><td>${72.20} \pm {2.31}$</td></tr><tr><td>Chameleon-3</td><td/><td/><td/><td/><td>${70.30} \pm {1.28}$</td><td>${60.98} \pm {2.82}$</td><td>${57.19} \pm {1.83}$</td><td>${72.17} \pm {2.07}$</td></tr><tr><td>Chameleon-4</td><td/><td/><td/><td/><td>${69.87} \pm {1.29}$</td><td>${60.85} \pm {3.31}$</td><td>${57.44} \pm {1.63}$</td><td>${72.06} \pm {2.94}$</td></tr><tr><td>Chameleon-5</td><td/><td/><td/><td/><td>${66.26} \pm {1.71}$</td><td>${60.37} \pm {2.86}$</td><td>${57.07} \pm {1.82}$</td><td>${71.83} \pm {2.11}$</td></tr><tr><td>Squirrel-0</td><td/><td/><td/><td/><td>${41.36} \pm {2.87}$</td><td>${32.98} \pm {2.36}$</td><td>${31.81} \pm {1.72}$</td><td>${44.91} \pm {1.53}$</td></tr><tr><td>Squirrel-1</td><td/><td/><td/><td/><td>${41.27} \pm {3.16}$</td><td>${32.63} \pm {0.87}$</td><td>${30.54} \pm {2.45}$</td><td>${43.55} \pm {1.79}$</td></tr><tr><td>Squirrel-2</td><td/><td/><td/><td/><td>${41.09} \pm {2.14}$</td><td>${32.05} \pm {1.05}$</td><td>${30.73} \pm {2.13}$</td><td>${44.09} \pm {2.35}$</td></tr><tr><td>Squirrel-3</td><td/><td/><td/><td/><td>${40.98} \pm {2.72}$</td><td>${32.00} \pm {1.66}$</td><td>${30.25} \pm {1.98}$</td><td>${44.18} \pm {2.26}$</td></tr><tr><td>Squirrel-4</td><td/><td/><td/><td/><td>${40.25} \pm {2.82}$</td><td>${31.45} \pm {1.38}$</td><td>${29.09} \pm {2.33}$</td><td>${43.73} \pm {1.62}$</td></tr><tr><td>Squirrel-5</td><td/><td/><td/><td/><td>${39.45} \pm {2.36}$</td><td>${31.20} \pm {1.84}$</td><td>${27.27} \pm {1.87}$</td><td>${43.64} \pm {1.53}$</td></tr></table>
490
+
491
+ Table 8: Averaged node classification accuracy $\left( \% \right) \pm$ std under non-targeted attack (Metattack) with different perturbation ratios - We denote the evaluated dataset by its name with the perturbation ratio (e.g., Cora-0 denotes the clean Cora graph and Cora-10 denotes the graph with 10% adversarial edges). As GCN is not designed for heterophilic graphs, we only show results of defense methods with GPRGNN as the backbone model on Chameleon and Squirrel. We bold and underline the first and second highest accuracy of each backbone GNN model, respectively. OOM means out of memory.
492
+
493
+ <table><tr><td rowspan="2">Dataset</td><td colspan="4">GCN</td><td colspan="4">GPRGNN</td></tr><tr><td>Vanilla</td><td>TSVD</td><td>ProGNN</td><td>GARNET</td><td>Vanilla</td><td>TSVD</td><td>ProGNN</td><td>GARNET</td></tr><tr><td>Cora-0</td><td>${81.35} \pm {0.66}$</td><td>${73.86} \pm {0.53}$</td><td>${78.56} \pm {0.36}$</td><td>${79.64} \pm {0.75}$</td><td>${83.05} \pm {0.42}$</td><td>${81.61} \pm {0.54}$</td><td>${82.04} \pm {0.90}$</td><td>${82.67} \pm {1.89}$</td></tr><tr><td>Cora-10</td><td>${69.50} \pm {1.46}$</td><td>${69.45} \pm {0.69}$</td><td>${77.90} \pm {0.69}$</td><td>${77.78} \pm {0.53}$</td><td>${80.37} \pm {0.65}$</td><td>${81.08} \pm {0.52}$</td><td>${80.31} \pm {1.23}$</td><td>${82.17} \pm {0.69}$</td></tr><tr><td>Cora-20</td><td>${56.28} \pm {1.19}$</td><td>${62.44} \pm {1.16}$</td><td>${72.28} \pm {1.67}$</td><td>${73.89} \pm {0.91}$</td><td>${74.27} \pm {2.11}$</td><td>$\underline{78.50} \pm {1.20}$</td><td>${76.29} \pm {1.46}$</td><td>${81.34} \pm {0.79}$</td></tr><tr><td>Pubmed-0</td><td>${87.16} \pm {0.09}$</td><td>${84.53} \pm {0.08}$</td><td>${84.62} \pm {0.11}$</td><td>${85.37} \pm {0.20}$</td><td>${87.35} \pm {0.13}$</td><td>${OOM}$</td><td>${OOM}$</td><td>${86.86} \pm {0.57}$</td></tr><tr><td>Pubmed-10</td><td>${81.16} \pm {0.13}$</td><td>${84.56} \pm {0.10}$</td><td>${84.09} \pm {0.12}$</td><td>${85.22} \pm {0.13}$</td><td>${85.52} \pm {0.14}$</td><td>${OOM}$</td><td>${OOM}$</td><td>${86.24} \pm {0.20}$</td></tr><tr><td>Pubmed-20</td><td>${77.20} \pm {0.27}$</td><td>$\underline{{84.30} \pm {0.08}}$</td><td>${83.89} \pm {0.32}$</td><td>$\mathbf{{85.14}} \pm {0.23}$</td><td>$\underline{84.18} \pm {0.15}$</td><td>${OOM}$</td><td>${OOM}$</td><td>${85.69} \pm {0.26}$</td></tr><tr><td>Chameleon-0</td><td rowspan="3" colspan="4"/><td>${61.36} \pm {1.00}$</td><td>${47.29} \pm {1.63}$</td><td>${48.39} \pm {0.68}$</td><td>${61.11} \pm {2.46}$</td></tr><tr><td>Chameleon-10</td><td>${57.55} \pm {1.26}$</td><td>${47.07} \pm {1.21}$</td><td>${47.80} \pm {0.91}$</td><td>${60.96} \pm {1.22}$</td></tr><tr><td>Chameleon-20</td><td>${53.20} \pm {0.88}$</td><td>${45.12} \pm {1.34}$</td><td>${46.69} \pm {0.61}$</td><td>$\mathbf{{59.96}} \pm {0.84}$</td></tr><tr><td>Squirrel-0</td><td rowspan="3" colspan="4"/><td>${39.51} \pm {1.64}$</td><td>${31.36} \pm {1.87}$</td><td>${31.64} \pm {2.87}$</td><td>${43.43} \pm {1.14}$</td></tr><tr><td>Squirrel-10</td><td>${38.27} \pm {0.83}$</td><td>${28.25} \pm {1.66}$</td><td>${30.33} \pm {3.29}$</td><td>${42.62} \pm {1.09}$</td></tr><tr><td>Squirrel-20</td><td>$\underline{35.22} \pm {1.20}$</td><td>${23.91} \pm {1.40}$</td><td>${29.36} \pm {3.61}$</td><td>${41.97} \pm {1.02}$</td></tr></table>
494
+
495
+ Table 9: Averaged node classification accuracy $\left( \% \right) \pm$ std on heterophilic graphs - We bold and underline the first and second highest accuracy, respectively. The backbone GNN model is H2GCN.
496
+
497
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Chameleon (Nettack)</td><td colspan="2">Chameleon (Metattack)</td><td colspan="2">Squirrel (Nettack)</td><td colspan="2">Squirrel (Metattack)</td></tr><tr><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td></tr><tr><td>Vanilla</td><td>$\underline{78.43} \pm {2.09}$</td><td>${62.20} \pm {1.99}$</td><td>${68.45} \pm {0.57}$</td><td>${52.73} \pm {1.72}$</td><td>${55.36} \pm {2.91}$</td><td>${29.55} \pm {3.09}$</td><td>${61.23} \pm {0.71}$</td><td>$\underline{44.84} \pm {0.89}$</td></tr><tr><td>TSVD</td><td>${67.07} \pm {1.15}$</td><td>$\underline{63.17} \pm {1.61}$</td><td>${61.75} \pm {1.09}$</td><td>$\underline{{54.06} \pm {1.66}}$</td><td>${32.45} \pm {1.87}$</td><td>$\underline{31.64} \pm {2.09}$</td><td>${46.66} \pm {1.71}$</td><td>${40.56} \pm {1.41}$</td></tr><tr><td>GARNET</td><td>${78.78} \pm {1.84}$</td><td>${76.10} \pm {1.92}$</td><td>$\underline{66.63} \pm {1.05}$</td><td>${61.12} \pm {0.59}$</td><td>$\underline{{54.09} \pm {1.73}}$</td><td>${53.27} \pm {1.50}$</td><td>$\underline{{59.67} \pm {0.83}}$</td><td>$\mathbf{{50.08}} \pm {1.92}$</td></tr></table>
498
+
499
+ The results further confirm that GARNET is able to improve the robustness of different backbone GNN models.
500
+
501
+ ## K Broader Impact
502
+
503
+ Zügner et al. [5] have shown that graph adversarial attacks can drastically degrade the performance of GNN models in downstream applications. For instance, an attacker can attack a GNN-based recommender system on the Facebook social network or the Amazon co-purchasing network by creating a fake account and making connections to other users or items. Those connections can be viewed as adversarial edges in the graph. As a result, the attacker can deliberately force a GNN model to recommend irrelevant or even harmful content to other users. Thus, improving the adversarial robustness of GNN models has the potential for positive societal benefit.
504
+
505
+ We hope that this paper provides insight into the robustness and scalability limitations of prior defense methods. Moreover, we believe that the proposed GARNET can largely overcome these two limitations and produce a GNN model that is robust against adversarial attacks on large-scale graph datasets. Nevertheless, we have to admit that GARNET may potentially provide attackers with hints for developing an even more powerful and scalable adversarial attack than all existing attacks, which is a possible negative consequence.
506
+
507
+ ## L Discussion on Node Features
508
+
509
+ ### L.1 Graph Construction from Node Features
510
+
511
+ Table 10: Averaged node classification accuracy (%) ± std under targeted attack (Nettack) and non-targeted attack (Metattack) on Cora and Pubmed - We bold and underline the first and second highest accuracy, respectively. "NodeFeat" denotes the graph constructed from node features is used for GNN training.
512
+
513
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Cora (Nettack)</td><td colspan="2">Cora (Metattack)</td><td colspan="2">Pubmed (Nettack)</td><td colspan="2">Pubmed (Metattack)</td></tr><tr><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td><td>Clean</td><td>Adversarial</td></tr><tr><td>GCN-Vanilla</td><td>${80.96} \pm {0.95}$</td><td>${55.66} \pm {1.95}$</td><td>${81.35} \pm {0.66}$</td><td>${56.28} \pm {1.19}$</td><td>$\underline{87.26} \pm {0.51}$</td><td>${66.67} \pm {1.34}$</td><td>${87.16} \pm {0.09}$</td><td>${77.20} \pm {0.27}$</td></tr><tr><td>GCN-NodeFeat</td><td>${52.65} \pm {2.69}$</td><td>${52.65} \pm {2.69}$</td><td>${56.44} \pm {1.04}$</td><td>$\underline{{56.44} \pm {1.04}}$</td><td>${83.01} \pm {0.99}$</td><td>$\underline{83.01} \pm {0.99}$</td><td>${78.66} \pm {0.15}$</td><td>$\underline{78.66} \pm {0.15}$</td></tr><tr><td>GCN-GARNET</td><td>${81.08} \pm {2.05}$</td><td>${67.04} \pm {2.05}$</td><td>$\underline{79.64} \pm {0.75}$</td><td>${73.89} \pm {0.91}$</td><td>${87.96} \pm {0.58}$</td><td>$\mathbf{{86.12}} \pm {0.86}$</td><td>$\underline{85.37} \pm {0.20}$</td><td>${85.14} \pm {0.23}$</td></tr></table>
514
+
515
+ As GARNET purifies the adversarial graph by building a kNN graph based on dominant singular components, a natural question is whether a kNN graph constructed from node features can achieve similar performance. We answer this question by comparing the results of the GARNET graph and the node feature graph in Table 10. Note that the clean and adversarial accuracy are the same on the graph constructed from node features, since node features are unchanged by the graph adversarial attack. Besides, we only show results on homophilic graphs, as the kNN graph constructed from node features naturally falls into this category. Table 10 shows that the node feature graph performs much worse than the GARNET graph, which further confirms that the method proposed in this work is critical to improving the robustness of GNN models.
516
+
517
+ ### L.2 Defense Against Node Feature Attack
518
+
519
+ GARNET can be extended to handle node feature attacks, although this paper mainly focuses on defending against graph structure attacks, which we believe are more challenging to defend against than node feature attacks due to the discrete nature of the graph structure. Specifically, we can perform TSVD to obtain a low-rank approximation of the node feature matrix, which removes high-rank adversarial components in node features [7]. The low-rank feature matrix is then concatenated with the weighted spectral embeddings to produce the kNN base graph. In this way, the downstream GNN model will be able to aggregate neighbors whose features are less perturbed during message passing.
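+ The following is a minimal, illustrative sketch of this extension in Python (not the authors' released code); the helper name, the feature rank `r_feat`, and the neighbor count `k` are our own assumptions:
+
+ ```python
+ import numpy as np
+ from scipy.sparse.linalg import svds
+ from sklearn.neighbors import kneighbors_graph
+
+ def base_graph_with_features(X_feat, V_spec, r_feat=64, k=50):
+     """Combine low-rank node features with weighted spectral embeddings
+     before building the kNN base graph (illustrative sketch).
+
+     X_feat : (n, d) node feature matrix (possibly perturbed)
+     V_spec : (n, r) weighted spectral embedding of the adversarial graph
+     """
+     # Truncated SVD of the feature matrix (r_feat must be < min(n, d));
+     # keeping only dominant components suppresses high-rank adversarial features.
+     U, S, _ = svds(np.asarray(X_feat, dtype=float), k=r_feat)
+     X_lowrank = U * S
+
+     # Normalize both views so neither dominates the Euclidean distance.
+     X_lowrank = X_lowrank / (np.linalg.norm(X_lowrank, axis=1, keepdims=True) + 1e-12)
+     V_spec = V_spec / (np.linalg.norm(V_spec, axis=1, keepdims=True) + 1e-12)
+
+     # kNN base graph built on the concatenated embedding.
+     Z = np.concatenate([V_spec, X_lowrank], axis=1)
+     return kneighbors_graph(Z, n_neighbors=k, mode="connectivity")
+ ```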
520
+
521
+ ## M Run Time on OGB Graphs
522
+
523
+ Apart from the runtime comparison of GARNET and defense baselines on small graphs in Figure 3, we further evaluate the run time of GARNET on large (OGB) graphs. Concretely, the end-to-end run time of GARNET is 18 minutes and 2 hours on ogbn-arxiv and ogbn-products, respectively, which is $3 \times$ faster than the most competitive baseline, GNNGuard, which takes more than 1 hour on ogbn-arxiv and 8 hours on ogbn-products. One way to further accelerate GARNET is to leverage prior work on accelerating spectral embedding [40, 41]. We leave this to our future work.
524
+
525
+ ## N Additional Visualization Results
526
+
527
+ ![01963eee-3740-734d-92be-6621ae5a3aca_19_339_339_1118_1395_0.jpg](images/01963eee-3740-734d-92be-6621ae5a3aca_19_339_339_1118_1395_0.jpg)
528
+
529
+ Figure 8: Cora visualizations on a target node (marked in blue) as well as its 1-hop and 2-hop neighbors. Neighbor nodes are marked in green if they have the same label as the target node, and red otherwise. Note that the three graphs in the same row share the same target node (randomly picked), while graphs in different rows focus on different target nodes. Left: clean graph. Middle: adversarial graph. Right: adversarial graph purified by GARNET.
530
+
531
+ We visualize more target nodes and their local structures in Figure 8, which reveals that GARNET consistently improves the quality of the adversarial graph by removing adversarial edges that connect nodes with different labels. As a result, the adversarial accuracy of backbone GNN models can be largely improved once they are trained on the GARNET graph.
532
+
533
+ ## O Homophily Score of GARNET Graph
534
+
535
+ Table 11: Graph homophily score.
536
+
537
+ <table><tr><td rowspan="2">Dataset</td><td colspan="2">Homophilic graphs</td><td colspan="2">Heterophilic graphs</td></tr><tr><td>Cora</td><td>Pubmed</td><td>Chameleon</td><td>Squirrel</td></tr><tr><td>Clean graph</td><td>0.80</td><td>0.80</td><td>0.23</td><td>0.22</td></tr><tr><td>GARNET graph</td><td>0.75</td><td>0.72</td><td>0.25</td><td>0.26</td></tr></table>
538
+
539
+ We follow Zhu et al. [10] to compute the homophily score per dataset (lower score means more heterophilic). As shown in Table 11, the GARNET graph is homophilic (heterophilic) if the corresponding clean graph is homophilic (heterophilic), which further confirms Theorem 3.3 that our approach can effectively recover the clean graph structure. As a result, GARNET supports both homophilic and heterophilic graphs.
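+ For reference, the homophily score used here (following Zhu et al. [10]) is the fraction of edges whose endpoints share the same label; a minimal sketch of its computation (our own illustration):
+
+ ```python
+ import numpy as np
+
+ def edge_homophily(edge_index, labels):
+     """Fraction of edges whose two endpoints share the same label.
+
+     edge_index : (2, E) array of source/target node indices
+     labels     : (n,) array of node labels
+     """
+     src, dst = edge_index
+     return float(np.mean(labels[src] == labels[dst]))
+
+ # Toy example: 4 nodes, 3 edges, two of which connect same-label nodes.
+ edge_index = np.array([[0, 1, 2], [1, 2, 3]])
+ labels = np.array([0, 0, 0, 1])
+ print(edge_homophily(edge_index, labels))  # ~0.67
+ ```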
papers/LOG/LOG 2022/LOG 2022 Conference/kvwWjYQtmw/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,285 @@
1
+ § GARNET: REDUCED-RANK TOPOLOGY LEARNING FOR ROBUST AND SCALABLE GRAPH NEURAL NETWORKS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Graph neural networks (GNNs) have been increasingly deployed in various applications that involve learning on non-Euclidean data. However, recent studies show that GNNs are vulnerable to graph adversarial attacks. Although there are several defense methods to improve GNN robustness by eliminating adversarial components, they may also impair the underlying clean graph structure that contributes to GNN training. In addition, few of those defense models can scale to large graphs due to their high computational complexity and memory usage. In this paper, we propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models. GARNET first leverages weighted spectral embedding to construct a base graph, which is not only resistant to adversarial attacks but also contains critical (clean) graph structure for GNN training. Next, GARNET further refines the base graph by pruning additional uncritical edges based on probabilistic graphical model. GARNET has been evaluated on various datasets, including a large graph with millions of nodes. Our extensive experiment results show that GARNET achieves adversarial accuracy improvement and runtime speedup over state-of-the-art GNN (defense) models by up to ${10.23}\%$ and ${14.7} \times$ , respectively.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Recent years have witnessed a surge of interest in graph neural networks (GNNs), which incorporate both graph structure and node attributes to produce low-dimensional embedding vectors that maximally preserve graph structural information [1]. GNNs have achieved promising results in various real-world applications, such as recommendation systems [2], self-driving car [3], and chip placements [4]. However, recent studies have shown that adversarial attacks on graph structure accomplished by inserting, deleting, or rewiring edges in an unnoticeable way, can easily fool the GNN models and drastically degrade their accuracy in downstream tasks (e.g., node classification) [5, 6].
16
+
17
+ In the literature, one of the most effective ways to defend GNNs is to purify the graph by removing adversarial graph structures. Entezari et al. [7] observe that adversarial attacks mainly affect high-rank graph properties; thus they propose to first construct a low-rank graph by performing truncated singular value decomposition (TSVD) on the graph adjacency matrix, which can then be exploited for training a robust GNN model. Later, Jin et al. [8] propose Pro-GNN to jointly learn a new graph and a robust GNN model with low-rank constraints imposed on the graph structure. While prior methods using low-rank approximation largely eliminate adversarial components in the graph spectrum, they involve dense adjacency matrices during GNN training, leading to a much higher time/space complexity and prohibiting their application in large-scale graph learning tasks.
18
+
19
+ In addition, due to the high computational cost of TSVD, existing low-rank based methods can only preserve top $r$ singular components (e.g., $r = {50}$ ). Consequently, as shown in Figure 1(a), these methods may lose a wide range of clean graph spectrum that corresponds to important structures of the clean graph in the spatial domain. This is confirmed in Figure 1(c), where the clean accuracy of the TSVD-based method largely increases when preserving more spectral information via increasing the graph rank $r$ . In other words, prior low-rank approximation methods eliminate high-rank adversarial
20
+
21
+ [graphics]
22
+
23
+ Figure 1: "TSVD AdvGraph" and "GARNET AdvGraph" denote adversarial graphs purified by TSVD and GARNET, respectively. (a) Graph rank comparison on Cora under Metattack with different perturbation ratio. (b) Singular value comparison of different normalized graph adjacency matrices on Cora. (c) Accuracy $\pm$ std. of GCN-TSVD on Cora with different $r$ -rank approximation via TSVD.
24
+
25
+ components at the cost of inevitably impairing the important (clean) graph structure, which degrades the overall quality of the reconstructed graph and therefore limits the performance of GNN training.
26
+
27
+ In this work, we propose GARNET, a novel spectral approach to learning the underlying clean graph topology of an adversarial graph via combining spectral embedding with probabilistic graphical model (PGM), where the learned graph structure encodes the conditional dependence among low-dimensional node representations (spectral embedding vectors) [9]. More concretely, given an adversarial graph, GARNET first constructs a base graph topology by leveraging weighted spectral embeddings that are resistant to adversarial attacks, which is followed by an effective and efficient graph refinement scheme for pruning noncritical edges in the base graph by exploiting PGM.
28
+
29
+ By recovering the clean graph structure, Figures 1(a) and 1(b) show that the adversarial graph purified by GARNET largely restores the rank of the underlying clean graph. Thus, GARNET can be viewed as a reduced-rank topology learning approach that slightly reduces the rank of the input adversarial graph, which is fundamentally different from the prior low-rank based defense methods (e.g., TSVD and ProGNN). Moreover, GARNET scales comfortably to large graphs due to its nearly-linear algorithm complexity, and produces a sparse yet high-quality graph that improves GNN robustness without involving any dense adjacency matrices during GNN training. As a byproduct, unlike existing defense methods (e.g., ProGNN) that assume graphs to be homophilic, i.e, adjacent nodes in a graph tend to have similar attributes [10], GARNET does not have such an assumption and thus can protect GNNs against adversarial attacks on both homophilic and heterophilic graphs.
30
+
31
+ We evaluate GARNET on both homophilic and heterophilic datasets under strong graph adversarial attacks such as Nettack [5] and Metattack [6]. Moreover, we further show the nearly-linear scalability of our approach on the ogbn-products dataset that consists of millions of nodes [11]. Our experimental results indicate that GARNET largely improves both clean and adversarial accuracy over baselines in most cases. Our main technical contributions are summarized as follows:
32
+
33
+ * To our knowledge, we are the first to exploit spectral graph embedding and probabilistic graphical model for improving robustness of GNN models, which is achieved by learning a reduced-rank graph topology for recovering the underlying clean graph structure from the input adversarial graph.
34
+
35
+ * By recovering the critical edges that contribute to maximum likelihood estimation in PGM while ignoring adversarial components, GARNET produces a high-quality graph on which existing GNN models can be trained to achieve high accuracy. Our experimental results show that GARNET gains up to 10.23% adversarial accuracy improvement over state-of-the-art defense baselines.
36
+
37
+ * Our proposed reduced-rank topology learning method has a nearly-linear complexity in time/space and produces a sparse graph structure for scalable GNN training. This allows GARNET to run up to ${14.7} \times$ faster than prior defense methods on popular data sets such as Cora and Squirrel. In addition, GARNET scales comfortably to very large graph data sets with millions of nodes, while prior defense methods run out of memory even on a graph with ${20}\mathrm{k}$ nodes.
38
+
39
+ § 2 BACKGROUND
40
+
41
+ § 2.1 UNDIRECTED PROBABILISTIC GRAPHICAL MODELS
42
+
43
+ Consider an $n$-dimensional random vector $x$ that follows a multivariate Gaussian distribution $x \sim N\left( {0,\Sigma }\right)$, where $\Sigma = \mathbb{E}\left\lbrack {x{x}^{\top }}\right\rbrack \succ 0$ represents the covariance matrix, and $\Theta = {\Sigma }^{-1}$ represents the
44
+
45
+ precision matrix (inverse covariance matrix). Given a data matrix $X \in {R}^{n \times d}$ that includes $d$ i.i.d. (independent and identically distributed) samples $X = \left\lbrack {{x}_{1},\ldots ,{x}_{d}}\right\rbrack$, where ${x}_{i} \sim N\left( {0,\Sigma }\right)$ has an $n$-dimensional Gaussian distribution with zero mean, the goal of probabilistic graphical models (PGM) is to learn a precision matrix $\Theta$ that corresponds to an undirected graph structure $\mathcal{G}$ for encoding the conditional dependence between variables of the observations on columns of $X$ [12, 13]. Specifically, the classical graphical Lasso method aims at estimating a sparse $\Theta$ through maximum likelihood estimation (MLE) of $f\left( x\right)$ leveraging convex optimization [13]. In this work, we focus on one increasingly popular type of Gaussian graphical model, also known as attractive Gaussian Markov random fields (GMRFs). Attractive GMRFs restrict the precision matrix to be a Laplacian-like matrix $\Theta = L + \frac{I}{{\sigma }^{2}}$, where $L = D - A$ denotes the set of valid graph Laplacian matrices with $D$ and $A$ representing the diagonal degree matrix and adjacency matrix of the underlying undirected graph, respectively, $I$ denotes the identity matrix, and ${\sigma }^{2}$ is a constant denoting prior data variance. Similar to the graphical Lasso method [13], recent methods for estimating attractive GMRFs leverage emerging graph signal processing (GSP) techniques to solve the following convex problem [9, 14-17]:
46
+
47
+ $$
48
+ \mathop{\max }\limits_{\Theta }\log \det \Theta - \frac{1}{d}\operatorname{tr}\left( {X{X}^{T}\Theta }\right) - \alpha \parallel \Theta {\parallel }_{1} \tag{1}
49
+ $$
50
+
51
+ where $\det \left( \cdot \right)$ and $\operatorname{tr}\left( \cdot \right)$ denote the determinant and trace operators, respectively, and $\alpha$ is a hyperparameter controlling the regularization term. The first two terms together can be interpreted as the log-likelihood under a GMRF. The last ${\ell }_{1}$ regularization term enforces $\Theta$ (and the corresponding graph) to be sparse. If $X$ is non-Gaussian, Equation 1 can be regarded as Laplacian estimation based on minimizing the Bregman divergence between positive definite matrices induced by the function $\Theta \mapsto - \log \det \left( \Theta \right)$ [18].
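+ To make the objective concrete, the following toy example (our own illustration, not from the paper) evaluates the three terms of Equation 1 for a Laplacian-like precision matrix on a small graph:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+
+ # 4-node path graph; Laplacian-like precision matrix Theta = L + I / sigma^2.
+ A = np.array([[0, 1, 0, 0],
+               [1, 0, 1, 0],
+               [0, 1, 0, 1],
+               [0, 0, 1, 0]], dtype=float)
+ L = np.diag(A.sum(axis=1)) - A
+ sigma2 = 1.0
+ Theta = L + np.eye(4) / sigma2
+
+ # d i.i.d. samples drawn from N(0, Theta^{-1}), stored as columns of X.
+ d = 200
+ X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(Theta), size=d).T
+
+ alpha = 0.01
+ _, logdet = np.linalg.slogdet(Theta)
+ # log-det term minus data-fit term minus l1 penalty, as in Equation 1.
+ objective = logdet - np.trace(X @ X.T @ Theta) / d - alpha * np.abs(Theta).sum()
+ print(objective)
+ ```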
52
+
53
+ § 2.2 GRAPH ADVERSARIAL ATTACKS
54
+
55
+ Most existing graph adversarial attacks aim at degrading the accuracy of GNN models by inserting/deleting edges in an unnoticeable way (e.g., maintaining the node degree distribution) [19]. The most popular graph adversarial attacks fall into the following two categories: (1) targeted attacks and (2) non-targeted attacks. The targeted attacks attempt to mislead a GNN model to produce a wrong prediction on a target sample (e.g., a node), while the non-targeted attacks strive to degrade the overall accuracy of a GNN model on the whole graph dataset. Dai et al. [20] first formulate the targeted attack as a combinatorial optimization problem and leverage reinforcement learning to insert/delete edges such that the target node is misclassified. Zügner et al. [5] propose another targeted attack called Nettack, which produces an adversarial graph by maximizing the training loss of GNNs. Zügner and Günnemann [6] further introduce Metattack, a non-targeted attack that treats the graph as a hyperparameter and uses meta-gradients to perturb the graph structure. It is worth noting that graph adversarial attacks have two different settings: poison (perturb a graph prior to GNN training) and evasion (perturb a graph after GNN training). As shown by Zhu et al. [21], the poison setting is typically more challenging to defend against, as it changes the graph structure on which the GNN is trained. Thus, we aim to improve model robustness against attacks under the poison setting.
56
+
57
+ § 2.3 GRAPH ADVERSARIAL DEFENSES
58
+
59
+ To defend GNN against adversarial attacks, Entezari et al. [7] first observe that Nettack, a strong targeted attack, only changes the high-rank information of the adjacency matrix. Thus, they propose to construct a low-rank graph by performing truncated SVD to undermine the effects of adversarial attacks. Later, Jin et al. [8] propose Pro-GNN that adopts a similar idea yet jointly learns the low-rank graph and GNN model. Although those low-rank approximation based methods achieve state-of-the-art results on several datasets, they produce dense adjacency matrices that correspond to complete graphs, which would limit their applications for large graphs. Moreover, they only preserve a small region of the graph spectrum and thus may lose too much important information corresponding to the clean graph structure in the spatial domain, which limits the performance of GNN training. Recently, [22] exploit Laplacian eigenpairs to guide GNN training, which produces a robust model with quadratic time complexity and is thus not scalable to large graphs. In addition to the aforementioned spectral-based defense methods, GCNJaccard [23] and RS-GNN [24] purify the adversarial graph by connecting nodes with similar attributes or same labels. However, those defense methods explicitly (or implicitly) assume the underlying graph to be homophilic, which results in rather poor performance when defending GNN models on heterophilic graphs. In contrast to the prior arts, GARNET achieves highly robust yet scalable performance on both homophilic and heterophilic graphs under adversarial attacks by leveraging a novel graph purification scheme based on spectral embedding and graphical model.
60
+
61
+ § 3 THE GARNET APPROACH
62
+
63
+ [graphics]
64
+
65
+ Figure 2: An overview of the three major phases of GARNET.
66
+
67
+ Recently, Entezari et al. [7] and Jin et al. [8] have shown that the well-known graph adversarial attacks (e.g., Nettack and Metattack) are essentially high-rank attacks, which increase graph rank by enlarging the smallest singular values of adjacency matrix when perturbing the graph structure, while rest of the graph spectrum remains almost the same. Consequently, a natural way for improving GNN robustness is to purify an adversarial graph by eliminating the high-rank components of its spectrum.
68
+
69
+ Low-rank topology learning (prior work). Given an adversarial adjacency matrix ${A}_{\text{ adv }} \in {R}^{n \times n}$, Entezari et al. [7] propose to reconstruct a low-rank approximated adjacency matrix via performing TSVD: $\widehat{A} = U\Sigma {V}^{T}$, where $\Sigma \in {R}^{r \times r}$ is a diagonal matrix consisting of the $r$ largest singular values of ${A}_{\text{ adv }}$, and $U \in {R}^{n \times r}$ and $V \in {R}^{n \times r}$ contain the corresponding left and right singular vectors, respectively. As the largest singular values are hardly affected by graph adversarial attacks, the reconstructed low-rank adjacency matrix $\widehat{A}$ is resistant to adversarial attacks.
70
+
71
+ However, due to the high computational cost of TSVD, $\widehat{A}$ is typically computed by only using top $r$ largest singular values and their corresponding singular vectors, where $r$ is a relatively small number (e.g., $r = {50}$ ). Consequently, the rank of $\widehat{A}$ is only $r = {50}$ , which is two orders of magnitude smaller than the rank of the clean graph, as shown in Figure 1(a). Since these low-rank methods are overly aggressive in reducing the graph rank, $\widehat{A}$ may lose too much important spectral information corresponding to the clean graph structure. As shown in Figure 1(c), the clean accuracy of the TSVD-based method is largely improved by increasing the graph rank $r$ , which indicates the low-rank graph obtained with a small $r$ loses the key graph structure contributing to GNN training. Note that the adversarial and clean graphs share most of the graph structure, as adversarial attacks perturb the clean graph in an unnoticeable way. Consequently, losing those important clean graph structures will also limit the performance of GNN on the adversarial graph.
72
+
73
+ Reduced-rank topology learning (this work). Given the adversarial graph ${\mathcal{G}}_{\text{ adv }}$ and its adjacency matrix ${A}_{adv}$ , our goal is to learn a reduced-rank graph, which slightly reduces the rank of ${\mathcal{G}}_{adv}$ to mitigate the effects of adversarial attacks, while retaining most of the important graph spectrum corresponding to the clean graph structure. As adversarial attacks mainly affect the least dominant singular components of ${A}_{\text{ adv }}$ [7], one straightforward way for constructing such a reduced-rank graph is to utilize all the singular components except those least dominant ones via TSVD. Nonetheless, computing such a large number of singular components is computationally expensive [25], and is thus not scalable to large graphs.
74
+
75
+ To learn the reduced-rank graph in a scalable way, in this work, we leverage only the top few (e.g., 50) dominant singular components of ${A}_{adv}$ to restore its important graph spectrum, via recovering the corresponding clean graph structure with the aid of PGM. Figure 2 gives an overview of our proposed approach, GARNET, which consists of three major phases. The first phase constructs a base graph by exploiting spectral embedding and a scalable nearest-neighbor graph algorithm. The second phase further refines the base graph by pruning noncritical edges based on PGM. The last phase trains existing GNN models on the refined base graph to improve their robustness. Next, we will first describe our notion of clean graph recovery via PGM as well as the scalability issue of prior PGM-based work in Section 3.1, which motivates us to develop scalable GARNET kernels described in Sections 3.2 and 3.3. We further provide the overall complexity of GARNET in Section 3.4.
76
+
77
+ § 3.1 GRAPH RECOVERY VIA GRAPHICAL MODEL
78
+
79
+ A general philosophy behind PGM is that there exists an underlying graph $G$ , whose structure determines the joint probability distribution of the observations on the data entities, i.e., columns of a data matrix $X \in {R}^{n \times d}$ , where $n$ is the number of data points, $d$ the dimension per data point. To recover the underlying graph structure from the data matrix $X$ , one common way is to leverage MLE by solving Equation 1 in Section 2.1. As the top few dominant singular components of the adjacency matrix capture the corresponding graph structure, we can naturally construct the data matrix $X$ based on those dominant singular components, and then adopt PGM to recover an underlying graph via MLE. To this end, we define a weighted spectral embedding matrix as follows:
80
+
81
+ Definition 3.1. Given the top $r$ smallest eigenvalues ${\lambda }_{1},{\lambda }_{2},\ldots ,{\lambda }_{r}$ and their corresponding eigenvectors ${v}_{1},{v}_{2},\ldots ,{v}_{r}$ of normalized graph Laplacian matrix ${L}_{\text{ norm }} = I - {D}^{-\frac{1}{2}}A{D}^{-\frac{1}{2}}$ , where $I$ and $A$ are the identity matrix and graph adjacency matrix, respectively, and $D$ is a diagonal matrix of node degrees, the weighted spectral embedding matrix is defined as $V\overset{\text{ def }}{ = }$ $\left\lbrack {\sqrt{\left| 1 - {\lambda }_{1}\right| }{v}_{1},\ldots ,\sqrt{\left| 1 - {\lambda }_{r}\right| }{v}_{r}}\right\rbrack$ , whose $i$ -th row ${V}_{i, : }$ is the weighted spectral embedding of the corresponding $i$ -th node in the graph.
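+ As an illustration of Definition 3.1, the weighted spectral embedding can be computed as follows (a sketch using SciPy's sparse eigensolver; the function name and defaults are our own choices, not the authors' implementation):
+
+ ```python
+ import numpy as np
+ import scipy.sparse as sp
+ from scipy.sparse.linalg import eigsh
+
+ def weighted_spectral_embedding(A, r=50):
+     """Return V = [sqrt(|1 - lam_1|) v_1, ..., sqrt(|1 - lam_r|) v_r] built from
+     the r smallest eigenpairs of the normalized graph Laplacian (sketch).
+
+     A : (n, n) SciPy sparse adjacency matrix of an undirected graph
+     """
+     deg = np.asarray(A.sum(axis=1)).ravel()
+     d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
+     L_norm = sp.eye(A.shape[0]) - d_inv_sqrt @ A @ d_inv_sqrt
+
+     # r smallest (algebraic) eigenvalues and eigenvectors of L_norm.
+     lam, vecs = eigsh(L_norm, k=r, which="SA")
+     return vecs * np.sqrt(np.abs(1.0 - lam))  # scale each eigenvector column
+ ```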
82
+
83
+ Proposition 3.2. Given a normalized graph adjacency matrix ${A}_{\text{ norm }} = {D}^{-\frac{1}{2}}A{D}^{-\frac{1}{2}}$ and weighted spectral embedding matrix $V$ of an undirected graph, let $\widehat{A}$ be the rank-r approximation of ${A}_{\text{ norm }}$ via TSVD. If the top $r$ dominant eigenvalues of ${A}_{\text{ norm }}$ are non-negative, then we have $\widehat{A} = V{V}^{T}$ .
84
+
85
+ Our proof for Proposition 3.2 is available in Appendix A. Proposition 3.2 shows the connection between weighted spectral embedding and the low-rank adjacency matrix $\widehat{A}$ obtained by TSVD. Specifically, the weighted spectral embedding matrix $V$ can be viewed as an eigensubspace matrix consisting of a few dominant singular components of the corresponding adjacency matrix. Thus, we can use $V$ to recover the underlying clean graph via PGM. However, obtaining $V$ requires the knowledge of the clean graph structure, which seems to create a chicken and egg problem.
86
+
87
+ Fortunately, since the dominant singular components are hardly affected by adversarial attacks [7], the weighted spectral embedding is therefore also resistant to adversarial attacks, indicating that the underlying clean graph ${\mathcal{G}}_{\text{ clean }}$ and its corresponding adversarial graph ${\mathcal{G}}_{\text{ adv }}$ share almost the same weighted spectral embeddings. As a result, we can exploit the weighted spectral embedding matrix $V$ of ${\mathcal{G}}_{\text{ adv }}$ to represent that of ${\mathcal{G}}_{\text{ clean }}$ . By replacing the data matrix $X$ with $V$ in Equation 1, we have the following objective function:
88
+
89
+ $$
90
+ \mathop{\max }\limits_{\Theta } : F = \log \det \Theta - \frac{1}{r}\operatorname{tr}\left( {V{V}^{T}\Theta }\right) - \alpha \parallel \Theta {\parallel }_{1} \tag{2}
91
+ $$
92
+
93
+ By finding the optimizer ${\Theta }^{ * }$ in Equation 2, we can recover the underlying graph that maximizes the likelihood given the observation on the weighted spectral embedding $V$ . However, solving Equation 2 requires at least $O\left( {n}^{2}\right)$ time/space complexity per iteration even with the most efficient algorithms, which thus cannot scale to large graphs [13, 26, 27].
94
+
95
+ As $\Theta$ is constrained to be a Laplacian-like matrix, finding the optimizer ${\Theta }^{ * }$ in Equation 2 is equivalent to searching for critical edges from a complete graph, which would involve all possible (i.e., $O\left( {n}^{2}\right)$ ) edges. Here we say an edge is critical (noncritical) if including it in the graph significantly increases (decreases) $F$ in Equation 2. Hence we can recover the underlying graph by pruning noncritical edges from the complete graph. However, storing a complete graph is still expensive. To obtain a nearly-linear algorithm for clean graph recovery, instead of searching in the complete graph, we limit our search to an initial base graph ${\mathcal{G}}_{\text{ base }}$ that is much sparser but contains sufficient information for identifying the candidate edges critical to recovering the clean graph. Subsequently, the final graphical model (graph Laplacian) can be obtained by further pruning noncritical edges from ${\mathcal{G}}_{\text{ base }}$ .
96
+
97
+ § 3.2 BASE GRAPH CONSTRUCTION
98
+
99
+ During the first phase of GARNET (shown in Figure 2), our goal is to build a base graph ${\mathcal{G}}_{\text{ base }}$ , which greatly reduces the search space by not constructing a complete graph while preserving the critical candidate edges that are key to clean graph recovery. To this end, we give the following theorem:
100
+
101
+ Theorem 3.3. Given a graph $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ and its normalized Laplacian matrix ${L}_{\mathcal{G}}$ , let ${V}_{i}$ denote the weighted spectral embedding of node $i$ by using top $r$ eigenpairs of ${L}_{\mathcal{G}}$ . Suppose ${V}_{i}$ is normalized, i.e., ${\begin{Vmatrix}{V}_{i}\end{Vmatrix}}_{2} = 1$ , and a relatively small $r$ is picked such that ${\lambda }_{r} \leq 1$ , where ${\lambda }_{r}$ is the $r$ -th smallest eigenvalue of ${L}_{\mathcal{G}}$ , then we have $\mathop{\sum }\limits_{{\left( {i,j}\right) \in \mathcal{E}}}{\begin{Vmatrix}{V}_{i} - {V}_{j}\end{Vmatrix}}_{2}^{2} \leq {0.25r}$ .
102
+
103
+ Our proof for Theorem 3.3 is available in Appendix B. Note that $r$ is a small constant, which is independent of the graph size. Thus, Theorem 3.3 indicates that, if an edge connects nodes $i$ and $j$ in the clean graph, then the Euclidean distance between the weighted spectral embeddings of these two nodes will be small, which motivates us to build a k-nearest neighbor (kNN) graph as ${\mathcal{G}}_{\text{ base }}$ to incorporate those clean edges.
104
+
105
+ Concretely, we first obtain the weighted spectral embedding matrix $V$ of the input adversarial graph ${\mathcal{G}}_{\text{ adv }}$ to represent that of the underlying clean graph ${\mathcal{G}}_{\text{ clean }}$ , as $V$ consists of dominant singular components that are shared by ${\mathcal{G}}_{\text{ adv }}$ and ${\mathcal{G}}_{\text{ clean }}$ [7]. We then leverage $V$ to construct a kNN graph, where each node is connected to its $k$ most similar nodes based on the Euclidean distance between their spectral embeddings. In this work, we exploit an approximate kNN algorithm for constructing the graph, which has $O\left( {\left| \mathcal{V}\right| \log \left| \mathcal{V}\right| }\right)$ complexity and thus can scale to very large graphs [28]. By choosing a proper $k$ (e.g., $k = {50}$ ), ${\mathcal{G}}_{\text{ base }}$ is likely to cover edges in the underlying clean graph. Thus, ${\mathcal{G}}_{\text{ base }}$ can serve as a reasonable search space for identifying critical edges in the next step.
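+ A minimal sketch of this base graph construction step is given below (it uses scikit-learn's exact kNN for simplicity, whereas GARNET relies on an approximate kNN algorithm [28]; the function name and defaults are our own choices):
+
+ ```python
+ import scipy.sparse as sp
+ from sklearn.neighbors import kneighbors_graph
+
+ def build_base_graph(V, k=50):
+     """kNN base graph on the weighted spectral embeddings V (sketch).
+
+     V : (n, r) weighted spectral embedding of the adversarial graph
+     """
+     G = kneighbors_graph(V, n_neighbors=k, mode="connectivity")
+     # Symmetrize so the base graph is undirected.
+     G = G.maximum(G.T)
+     return sp.csr_matrix(G)
+ ```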
106
+
107
+ § 3.3 GRAPH REFINEMENT VIA EDGE PRUNING
108
+
109
+ For the second phase of GARNET shown in Figure 2, we refine ${\mathcal{G}}_{\text{ base }}$ by aggressively pruning noncritical edges from ${\mathcal{G}}_{\text{ base }}$ , such that the refined graph only preserves the most important edges that contribute most to the log-likelihood $F$ in Equation 2.
110
+
111
+ To identify critical (noncritical) edges that can most effectively increase (decrease) $F$ , we exploit the update of $\Theta$ based on gradient ascent: $\Theta \leftarrow \Theta + \eta \frac{\partial F}{\partial \Theta }$ , where $\eta$ is the step size. As mentioned in Section 2.1, $\Theta$ is constrained to be $L + \frac{I}{{\sigma }^{2}}$ , which means the off-diagonal elements in $\Theta$ correspond to the negatives of the edge weights in the underlying graph, i.e., ${\Theta }_{i,j} = - {w}_{i,j}$ . Thus, the update of ${\Theta }_{i,j}$ during gradient ascent can be viewed as:
112
+
113
+ $$
114
+ {\Theta }_{i,j} \leftarrow {\Theta }_{i,j} + \eta {\left( \frac{\partial F}{\partial \Theta }\right) }_{i,j} = {\Theta }_{i,j} - \eta \frac{\partial F}{\partial {w}_{i,j}} \tag{3}
115
+ $$
116
+
117
+ Equation 3 means that, if $\frac{\partial F}{\partial {w}_{i,j}}$ is large and positive, ${\Theta }_{i,j}$ will become more negative, which corresponds to increasing the edge weight in the underlying graph. Similarly, if $\frac{\partial F}{\partial {w}_{i,j}}$ is small and negative, ${\Theta }_{i,j}$ will be less negative, corresponding to decreasing the edge weight. In other words, the edge weight ${w}_{i,j}$ with a large (small) $\frac{\partial F}{\partial {w}_{i,j}}$ should be increased (decreased) to maximize the log-likelihood $F$ , meaning the corresponding edge is critical (noncritical). Thus, we can identify the critical edges once we know $\frac{\partial F}{\partial {w}_{i,j}}$ . By setting $\alpha = 0$ in Equation 2 (as GARNET naturally produces a sparse graph) and taking the partial derivative with respect to an edge weight ${w}_{i,j}$ , we have:
118
+
119
+ $$
120
+ \frac{\partial F}{\partial {w}_{i,j}} = \mathop{\sum }\limits_{{k = 1}}^{n}\frac{1}{{\lambda }_{k} + 1/{\sigma }^{2}}\frac{\partial {\lambda }_{k}}{\partial {w}_{i,j}} - \frac{{\begin{Vmatrix}{V}^{T}{e}_{i,j}\end{Vmatrix}}_{2}^{2}}{r} \tag{4}
121
+ $$
122
+
123
+ where ${\lambda }_{k},\forall k = 1,2,\ldots ,n$ are the Laplacian eigenvalues of a given graph (e.g., ${\mathcal{G}}_{\text{ base }}$ ), ${e}_{i,j} = {e}_{i} - {e}_{j}$ , and ${e}_{i}$ denotes the vector with all zero entries except for the $i$ -th entry being 1 .
124
+
125
+ Theorem 3.4 (Feng [17]). Let ${\lambda }_{k}$ and ${u}_{k}$ be the $k$ -th eigenvalue and the corresponding eigenvector of the Laplacian matrix, respectively. The spectral perturbation $\delta {\lambda }_{k}$ due to the increase of an edge weight ${w}_{i,j}$ can be estimated by $\delta {\lambda }_{k} = \delta {w}_{i,j}{\left( {u}_{k}^{T}{e}_{i,j}\right) }^{2}$ .
126
+
127
+ The proof for Theorem 3.4 is available in Feng [17]. According to Theorem 3.4 and Equation 4, we can estimate $\frac{\partial F}{\partial {w}_{i,j}} \approx {\begin{Vmatrix}{U}^{T}{e}_{i,j}\end{Vmatrix}}_{2}^{2} - \frac{1}{r}{\begin{Vmatrix}{V}^{T}{e}_{i,j}\end{Vmatrix}}_{2}^{2}$ , where $U = \left\lbrack {\frac{{u}_{1}}{\sqrt{{\lambda }_{1} + 1/{\sigma }^{2}}},\ldots ,\frac{{u}_{r}}{\sqrt{{\lambda }_{r} + 1/{\sigma }^{2}}}}\right\rbrack$ , ${\lambda }_{i}$ is the $i$ -th smallest Laplacian eigenvalue of ${\mathcal{G}}_{\text{ base }}$ , and ${u}_{i}$ is the corresponding eigenvector. Consequently, an edge $(i, j)$ is critical if ${\begin{Vmatrix}{U}^{T}{e}_{i,j}\end{Vmatrix}}_{2}^{2} \gg \frac{1}{r}{\begin{Vmatrix}{V}^{T}{e}_{i,j}\end{Vmatrix}}_{2}^{2}$ . As $V$ and $U$ are the spectral embeddings on the input adversarial graph and the base graph, respectively, we define the spectral embedding distortion ${s}_{i,j} = \frac{{\begin{Vmatrix}{U}^{T}{e}_{i,j}\end{Vmatrix}}_{2}^{2}}{{\begin{Vmatrix}{V}^{T}{e}_{i,j}\end{Vmatrix}}_{2}^{2}}$ to measure the edge importance. Accordingly, we prune edges in the base graph ${\mathcal{G}}_{\text{ base }}$ that have small spectral embedding distortion, i.e., ${s}_{i,j} < \gamma$ , where $\gamma$ is a hyperparameter to control the sparsity of the refined graph. Hence, the refined base graph ${\mathcal{G}}_{\text{ base }}^{\prime }$ largely recovers the underlying clean graph structure from the input adversarial graph. Since ${\mathcal{G}}_{\text{ base }}^{\prime }$ is constructed by only leveraging the top few dominant singular components of ${\mathcal{G}}_{\text{ adv }}$ , it ignores the high-rank adversarial components and is thus robust to adversarial attacks. As a result, we can train a given GNN model on ${\mathcal{G}}_{\text{ base }}^{\prime }$ to improve its robustness, which is the last phase of GARNET.
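+ The pruning rule above can be sketched as follows (our own illustration; the edge-list loop, the threshold default, and the function name are assumptions rather than the authors' implementation):
+
+ ```python
+ import numpy as np
+
+ def prune_base_graph(edges, U, V, gamma=1.0):
+     """Keep base-graph edges with large spectral embedding distortion (sketch).
+
+     edges : iterable of (i, j) node pairs from the kNN base graph
+     U     : (n, r) embedding of the base graph whose k-th column is u_k / sqrt(lambda_k + 1/sigma^2)
+     V     : (n, r) weighted spectral embedding of the input adversarial graph
+     gamma : threshold on the distortion s_ij controlling sparsity
+     """
+     kept = []
+     for i, j in edges:
+         du = U[i] - U[j]  # equals U^T e_ij
+         dv = V[i] - V[j]  # equals V^T e_ij
+         s_ij = np.dot(du, du) / (np.dot(dv, dv) + 1e-12)
+         if s_ij >= gamma:  # edges with s_ij < gamma are pruned
+             kept.append((i, j))
+     return kept
+ ```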
128
+
129
+ § 3.4 COMPLEXITY OF GARNET
130
+
131
+ The first phase of GARNET requires $O\left( {r\left| \mathcal{E}\right| }\right)$ time for computing top $r$ Laplacian eigenpairs [25], and $O\left( {\left| \mathcal{V}\right| \log \left| \mathcal{V}\right| }\right)$ time for $\mathrm{{kNN}}$ graph construction [28]. The second phase involves $O\left( {{rk}\left| \mathcal{V}\right| }\right)$ time for computing spectral embeddings and edge pruning on the kNN graph. Thus, the overall time complexity for graph purification is $O\left( {r\left( {\left| \mathcal{E}\right| + k\left| \mathcal{V}\right| }\right) + \left| \mathcal{V}\right| \log \left| \mathcal{V}\right| }\right)$ , where $\left| \mathcal{V}\right| \left( \left| \mathcal{E}\right| \right)$ denotes the number of nodes (edges) in the adversarial graph, and $k$ is the averaged node degree in the kNN graph. Our systematic approach of choosing $r$ and the space complexity analysis are in Appendix F.
132
+
133
+ § 4 EXPERIMENTS
134
+
135
+ We have conducted comparative evaluation of GARNET against state-of-the-art defense GNN models under targeted attack (Nettack) [5] and non-targeted attack (Metattack) [6] on both homophilic and heterophilic datasets. Besides, we also evaluate GARNET robustness against adaptive attacks. In addition, we further show the scalability of GARNET by comparing its run time with prior defense methods and evaluating GARNET on ogbn-products, which consists of more than 2 million nodes [11]. Finally, we conduct ablation studies to understand the effectiveness of GARNET kernels.
136
+
137
+ Experimental Setup. The details of the datasets used in our experiments are available in Appendix C. We choose as baselines two state-of-the-art defense methods based on graph purification: TSVD [7] and Pro-GNN [8]. Besides, we evaluate the training-based defense methods GCN-LFR [22] and GNNGuard [29] on homophilic and heterophilic graphs, respectively. Moreover, we use GCN [30] and GPRGNN [31] as the backbone GNN models for defense on homophilic datasets (i.e., Cora and Pubmed). As GCN performs poorly on heterophilic datasets [10, 32], we choose GPRGNN as the backbone model (as well as the surrogate model for attacking) on the Chameleon and Squirrel datasets. Due to the space limit, we provide defense results with H2GCN [10] as the backbone model in Appendix J. For all baselines, we tune their hyperparameters against adversarial attacks with a small perturbation, and keep the same hyperparameters for larger adversarial perturbations. Detailed hyperparameter settings of baselines and GARNET are available in Appendix D. Our hardware information is provided in Appendix E.
138
+
139
+ § 4.1 ROBUSTNESS OF GARNET
140
+
141
+ Defense on homophilic graphs. We first evaluate the model robustness on homophilic graphs against the targeted attack (Nettack) and the non-targeted attack (Metattack). Specifically, Nettack aims to fool a GNN model to misclassify some target nodes with a few structure (edge) perturbations. The goal of Metattack is to drop the overall accuracy of the whole test set with a given perturbation ratio budget (i.e., the number of adversarial edges over the number of total edges). Due to the space limit, we only show defense results under Nettack and Metattack with 5 perturbed edges per target node and 20% perturbation ratio, respectively. Results with other perturbation budgets are in Appendix I.
142
+
143
+ Table 1 reports the average accuracy over 10 runs on Cora and Pubmed. It shows that GARNET, with either a backbone GNN model (GCN or GPRGNN), outperforms defense baselines in terms of both clean and adversarial accuracy in most cases. We attribute the large accuracy improvement to GARNET's strengths in recovering key structures of the clean graph while ignoring the high-rank adversarial components during graph purification. Moreover, as both TSVD and ProGNN involve dense matrices during GNN training, they run out of GPU memory even on Pubmed, a graph with only ${20}\mathrm{k}$ nodes. In contrast, GARNET is not only robust to adversarial attacks, but also scalable to large graphs, as empirically shown in Section 4.2.
144
+
145
+ Defense on heterophilic graphs. We report the averaged accuracy over 10 runs on heterophilic graphs in Table 2, which shows that all defense baselines fail to defend GPRGNN on heterophilic graphs and even degrade the accuracy of the vanilla GPRGNN by a large margin. The reason why ProGNN performs poorly is that it follows the graph homophily assumption for improving GNN robustness, which contradicts the property of heterophilic graphs. For the TSVD-based defense method, the low-rank graph generated by TSVD contains negative edge weights, which degrade the performance of GPRGNN when adapting its graph filter on heterophilic graphs [31]. Although [29] have shown that GNNGuard can improve model robustness on synthetic heterophilic graphs, our results indicate that it fails to defend GNN models on realistic heterophilic graphs. We attribute this to the quality of the graphlet degree vectors used in GNNGuard being degraded by the structural perturbations induced by adversarial attacks. In contrast, GARNET largely recovers the clean graph structure based on Theorem 3.3 without any assumption on whether adjacent nodes have similar attributes. In other words, GARNET will produce a heterophilic graph if the underlying clean graph is heterophilic, which is further confirmed in Appendix O. Consequently, GARNET improves accuracy over defense baselines by up to ${10.23}\%$ (i.e., ${43.64}\% - {33.41}\%$ on Squirrel under Nettack) on heterophilic graphs.
146
+
147
+ Table 1: Averaged node classification accuracy $\left( \% \right) \pm$ std under targeted attack (Nettack) and non-targeted attack (Metattack) on homophilic graphs - We bold and underline the first and second highest accuracy of each backbone GNN model, respectively. OOM means out of memory.
148
+
149
+
150
+
151
+ Model | Cora (Nettack) | Cora (Metattack) | Pubmed (Nettack) | Pubmed (Metattack)
152
+
153
+ 2-9
154
+ Clean Adversarial Clean Adversarial Clean Adversarial Clean Adversarial
155
+
156
+ 1-9
157
+ GCN-Vanilla $\underline{80.96} \pm {0.95}$ ${55.66} \pm {1.95}$ ${81.35} \pm {0.66}$ ${56.28} \pm {1.19}$ ${87.26} \pm {0.51}$ ${66.67} \pm {1.34}$ ${87.16} \pm {0.09}$ ${77.20} \pm {0.27}$
158
+
159
+ 1-9
160
+ GCN-TSVD ${72.65} \pm {2.29}$ ${60.30} \pm {2.25}$ ${73.86} \pm {0.53}$ ${62.44} \pm {1.16}$ ${87.03} \pm {0.48}$ $\underline{79.56} \pm {0.48}$ ${84.53} \pm {0.08}$ $\underline{84.30} \pm {0.08}$
161
+
162
+ 1-9
163
+ GCN-ProGNN ${80.54} \pm {1.21}$ $\underline{65.38} \pm {1.65}$ ${78.56} \pm {0.36}$ $\underline{72.28} \pm {1.67}$ ${88.14} \pm {1.44}$ ${71.89} \pm {1.56}$ ${84.62} \pm {0.11}$ ${83.89} \pm {0.32}$
164
+
165
+ 1-9
166
+ GCN-LFR ${80.07} \pm {0.95}$ ${53.73} \pm {2.17}$ ${77.23} \pm {2.61}$ ${65.38} \pm {3.71}$ ${87.20} \pm {1.24}$ ${68.49} \pm {2.44}$ ${81.91} \pm {0.26}$ ${78.32} \pm {0.69}$
167
+
168
+ 1-9
169
+ GCN-GARNET ${81.08} \pm {2.05}$ ${67.04} \pm {2.05}$ $\underline{79.64} \pm {0.75}$ ${73.89} \pm {0.91}$ $\underline{87.96} \pm {0.58}$ ${86.12} \pm {0.86}$ $\underline{85.37} \pm {0.20}$ ${85.14} \pm {0.23}$
170
+
171
+ 1-9
172
+ GPR-Vanilla ${83.04} \pm {2.05}$ ${62.89} \pm {1.95}$ ${83.05} \pm {0.42}$ ${74.27} \pm {2.11}$ ${90.05} \pm {0.73}$ ${76.99} \pm {1.16}$ ${87.35} \pm {0.13}$ ${84.18} \pm {0.15}$
173
+
174
+ 1-9
175
+ GPR-TSVD ${81.68} \pm {1.78}$ ${63.52} \pm {3.27}$ ${81.61} \pm {0.54}$ ${78.50} \pm {1.20}$ OOM OOM OOM OOM
176
+
177
+ 1-9
178
+ GPR-ProGNN ${82.04} \pm {1.33}$ ${63.74} \pm {2.57}$ ${82.04} \pm {0.90}$ ${76.29} \pm {1.46}$ OOM OOM OOM ${OOM}$
179
+
180
+ 1-9
181
+ GPR-GARNET ${82.77} \pm {1.89}$ ${71.45} \pm {2.73}$ ${82.67} \pm {1.89}$ ${81.34} \pm {0.79}$ $\mathbf{{90.99}} \pm {0.52}$ ${89.52} \pm {0.45}$ $\underline{86.86} \pm {0.57}$ ${85.69} \pm {0.26}$
182
+
183
+ 1-9
184
+
185
+ Table 2: Averaged node classification accuracy $\left( \% \right) \pm$ std on heterophilic graphs — We bold and underline the first and second highest accuracy, respectively. The backbone GNN model is GPRGNN.
186
+
187
+
188
+
189
+ Model | Chameleon (Nettack) | Chameleon (Metattack) | Squirrel (Nettack) | Squirrel (Metattack)
190
+
191
+ 2-9
192
+ Clean Adversarial Clean Adversarial Clean Adversarial Clean Adversarial
193
+
194
+ 1-9
195
+ Vanilla $\underline{71.46} \pm {1.92}$ $\underline{66.26} \pm {1.71}$ ${61.36} \pm {1.00}$ $\underline{{53.20} \pm {0.88}}$ ${41.36} \pm {2.87}$ $\underline{{39.45} \pm {2.36}}$ $\underline{39.51} \pm {1.64}$ $\underline{35.22} \pm {1.20}$
196
+
197
+ 1-9
198
+ TSVD ${62.12} \pm {3.04}$ ${60.37} \pm {2.86}$ ${47.29} \pm {1.63}$ ${45.12} \pm {1.34}$ ${32.98} \pm {2.36}$ ${31.20} \pm {1.84}$ ${31.36} \pm {1.87}$ ${23.91} \pm {1.40}$
199
+
200
+ 1-9
201
+ ProGNN ${58.80} \pm {1.72}$ ${57.07} \pm {1.82}$ ${48.39} \pm {0.68}$ ${46.69} \pm {0.61}$ ${31.81} \pm {1.72}$ ${27.27} \pm {1.87}$ ${31.64} \pm {2.87}$ ${29.36} \pm {3.61}$
202
+
203
+ 1-9
204
+ GNNGuard ${64.87} \pm {2.62}$ ${62.21} \pm {1.94}$ ${58.01} \pm {1.57}$ ${49.89} \pm {1.34}$ ${34.17} \pm {2.33}$ ${33.41} \pm {1.82}$ ${37.46} \pm {0.56}$ ${32.69} \pm {0.59}$
205
+
206
+ 1-9
207
+ GARNET ${72.89} \pm {2.65}$ ${71.83} \pm {2.11}$ ${61.11} \pm {2.46}$ $\mathbf{{59.96}} \pm {0.84}$ ${44.91} \pm {1.53}$ ${43.64} \pm {1.53}$ ${43.43} \pm {1.14}$ ${41.97} \pm {1.02}$
208
+
209
+ 1-9
210
+
211
+ Defense against adaptive attacks. As GARNET is non-differentiable during kNN graph construction, it is difficult to optimize a specific loss function for an adaptive attack. Instead, we adopt an attack called LowBlow from [7], which deliberately perturbs low-rank singular components in the graph spectrum, yet violates the unnoticeable condition (i.e., preserving the node degree distribution after attacking). Since LowBlow has cubic complexity for computing the full set of adjacency eigenpairs, we only show results on the small graph Cora in Table 3, which indicates that GARNET still achieves the highest adversarial accuracy under LowBlow, while all low-rank defense baselines perform even worse than the vanilla GPRGNN model. The reason is that the kNN graph (with a relatively large $k$ ) in GARNET is less vulnerable to the perturbations of weighted spectral embeddings (i.e., low-rank components of the clean graph) [33], compared to prior low-rank defense methods.
212
+
213
+ § 4.2 SCALABILITY OF GARNET
214
+
215
+ To demonstrate the scalability of GARNET, we first compare the run time of GARNET with prior low-rank defense methods with GPRGNN as the backbone GNN model. As shown in Figure 3, the TSVD defense method is slower than GARNET since it produces a dense adjacency matrix that slows down the GNN training. Moreover, ProGNN is extremely slow as it jointly learns the low-rank graph structure and the robust GNN model, which requires performing TSVD for every epoch. In contrast, GARNET can efficiently produce a sparse graph for downstream GNN training, leading to end-to-end runtime speedup over prior methods by up to ${14.7} \times$ . In addition, we further evaluate the robustness of GARNET on two large datasets: ogbn-arxiv and ogbn-products, under powerful and scalable attacks proposed by [34]. As we run out of GPU memory when performing the PR-BCD attack, we choose the more scalable version GR-BCD that has less memory usage. We use GCN as the backbone model since it outperforms GPRGNN on large graphs. As TSVD and ProGNN run out of memory on these two datasets, we choose GNNGuard, GCNJaccard [23], and Soft Median GDC [34] as baselines. Table 4 shows GARNET achieves comparable clean accuracy compared to GCN, and drastically improves the adversarial accuracy over defense baselines by up to 16.13%.
216
+
217
+ Table 3: Averaged accuracy $\left( \% \right) \pm$ std on Cora under Metattack and LowBlow with ${20}\%$ perturbation ratio. We use GPRGNN as the backbone GNN model.
218
+
219
+
220
+
221
+ Model Metattack LowBlow
222
+
223
+ 1-3
224
+ Vanilla ${74.27} \pm {2.11}$ ${74.77} \pm {0.71}$
225
+
226
+ 1-3
227
+ TSVD ${78.50} \pm {1.20}$ ${26.03} \pm {2.76}$
228
+
229
+ 1-3
230
+ ProGNN ${76.29} \pm {1.46}$ ${69.88} \pm {1.61}$
231
+
232
+ 1-3
233
+ GARNET ${81.34} \pm {0.79}$ ${77.71} \pm {0.95}$
234
+
235
+ 1-3
236
+
237
+ [graphics]
238
+
239
+ Figure 3: End-to-end runtime comparison of GARNET and prior defense methods.
240
+
241
+ Table 4: Averaged accuracy $\left( \% \right) \pm$ std under GR-BCD attack.
242
+
243
+ | Model | ogbn-arxiv Clean | ogbn-arxiv 25% Ptb. | ogbn-arxiv 50% Ptb. | ogbn-products Clean | ogbn-products 25% Ptb. | ogbn-products 50% Ptb. |
+ |---|---|---|---|---|---|---|
+ | GCN | $\mathbf{{70.74}} \pm {0.26}$ | ${45.18} \pm {0.25}$ | ${39.12} \pm {0.27}$ | $\underline{75.68} \pm {0.20}$ | ${64.70} \pm {0.43}$ | ${62.71} \pm {0.44}$ |
+ | GNNGuard | ${68.78} \pm {0.32}$ | $\underline{47.46} \pm {0.11}$ | $\underline{{41.18} \pm {0.12}}$ | ${74.82} \pm {0.11}$ | ${66.76} \pm {0.23}$ | $\underline{63.22} \pm {0.26}$ |
+ | GCNJaccard | ${67.77} \pm {0.18}$ | ${46.27} \pm {0.11}$ | ${40.84} \pm {0.19}$ | ${72.95} \pm {0.08}$ | ${60.90} \pm {0.18}$ | ${58.84} \pm {0.20}$ |
+ | Soft Median GDC | ${69.75} \pm {0.03}$ | ${45.31} \pm {0.06}$ | ${40.11} \pm {0.06}$ | ${66.31} \pm {0.03}$ | ${60.59} \pm {0.05}$ | ${59.73} \pm {0.05}$ |
+ | GARNET | $\underline{69.91} \pm {0.29}$ | $\mathbf{{61.32}} \pm {0.20}$ | $\mathbf{{60.88}} \pm {0.13}$ | $\mathbf{{76.05}} \pm {0.19}$ | $\mathbf{{75.03}} \pm {0.14}$ | ${74.97} \pm {0.24}$ |
266
+
267
268
+
269
+ Figure 4: Ablation study of GARNET on graph refinement.
270
+
271
+ § 4.3 ABLATION ANALYSIS OF GARNET
272
+
273
+ Figure 4 shows the comparison of GARNET results with and without graph refinement. When only constructing the base graph, GARNET already achieves better adversarial accuracy than the vanilla GNN model, which confirms our Theorem 3.3 that the base graph construction can successfully recover clean graph edges. The graph refinement step further improves GARNET accuracy ( $\sim 2\%$ increase), since some noncritical or even harmful edges are removed based on the PGM. Due to space limitations, the ablation studies of GARNET on the kNN graph and edge pruning are available in Appendix G.
274
+
275
+ § 4.4 VISUALIZATION
276
+
277
278
+
279
+ Figure 5: Visualizations on the same target node (marked in blue) as well as its 1-hop and 2-hop neighbors. Neighbor nodes are marked in green if they have the same label as the target node, and red otherwise. (a) clean graph. (b) adversarial graph. (c) adversarial graph purified by GARNET.
280
+
281
+ We visualize the local structure (within 2-hop neighbors) of a randomly picked target node on Cora in Figure 5. By comparing Figures 5(b) and 5(c), it is clear that GARNET effectively removes most of the adversarial edges induced by Nettack that connect nodes with different labels [8]. As a result, the backbone GNN model can easily predict the target node correctly, since the surrounding nodes in the GARNET graph share the same label as the target node. This explains why GARNET substantially improves the adversarial accuracy of GNN models. More visualizations are available in Appendix N.
282
+
283
+ § 5 CONCLUSIONS
284
+
285
+ This work introduces GARNET, a spectral approach that improves the robustness and scalability of graph neural networks by combining weighted spectral embedding with a probabilistic graphical model. GARNET first uses the weighted spectral embedding to construct a base graph, which is then refined by pruning noncritical edges based on the graphical model. Results show that GARNET not only outperforms state-of-the-art defense models, but also scales to large graphs with millions of nodes. An interesting direction for future work is to incorporate node feature information to further boost model robustness.
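As a rough illustration of the pipeline summarized above, the sketch below builds a base graph from an eigenvalue-weighted spectral embedding of the (possibly perturbed) normalized adjacency followed by a kNN search; the normalization, the weighting, and the choices of $r$ and $k$ are our simplifying assumptions, and the PGM-based refinement step is omitted.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

def base_graph_from_spectral_embedding(adj, r=50, k=30):
    """Sketch of a spectral-embedding-based base graph (our assumptions,
    not the authors' exact construction).  `adj`: scipy sparse adjacency."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    adj_norm = d_inv_sqrt @ adj @ d_inv_sqrt            # symmetric normalization
    # Top-r eigenpairs of the normalized adjacency (low-rank spectral components).
    vals, vecs = eigsh(adj_norm.asfptype(), k=r, which="LA")
    emb = vecs * vals                                    # eigenvalue-weighted embedding
    # kNN graph over the weighted spectral embedding, symmetrized.
    knn = kneighbors_graph(emb, n_neighbors=k, mode="connectivity")
    return knn.maximum(knn.T)
```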
papers/LOG/LOG 2022/LOG 2022 Conference/lwx5gi4MIh/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,445 @@
1
+ # Learning Distributed Geometric Koopman Operator for Sparse Networked Dynamical Systems
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ The Koopman operator theory provides an alternative way to study nonlinear networked dynamical systems (NDS) by mapping the state space to an abstract higher-dimensional space where the system evolution is linear. Recent works apply graph neural networks (GNNs) to learn state-to-object-centric embeddings and achieve centralized block-wise computation of the Koopman operator (KO) under additional assumptions on the underlying node properties and constraints on the KO structure. However, the computational complexity of learning the Koopman operator increases for large NDS and grows combinatorially with the number of nodes. The learning challenge is further amplified for sparse networks by two factors: 1) sample sparsity for learning the Koopman operator in the non-linear space, and 2) the dissimilarity in the dynamics of individual nodes or from one subgraph to another. Our work aims to address these challenges by formulating the representation learning of NDS in a multi-agent paradigm and learning the Koopman operator in a distributed manner. Our theoretical results show that the proposed distributed computation of the geometric Koopman operator is beneficial for sparse NDS, whereas for fully connected systems this approach coincides with the centralized one. Empirical studies on a rope system, a network of oscillators, and a power grid show comparable or superior performance, along with computational benefits, relative to state-of-the-art methods.
12
+
13
+ ## 1 Introduction
14
+
15
+ NDS represents an important class of dynamic networks where the state of the network is defined by a vector of node-level properties in a geometrical manifold, and whose evolution is governed by a set of differential equations. Data-driven modeling of both spatio-temporal dependencies and evolution dynamics is essential to predict the response of the NDS to an external perturbation. Unsurprisingly, machine-learning approaches that explicitly recognize the interconnection structure of such systems, or that model the dynamical-system-driven evolution of the network, outperform earlier deep learning approaches based on recurrent neural networks and their variants [1-5]. Deep learning approaches such as GNNs fit into this paradigm by learning non-linear functions for each of the encoder, system model, and decoder components [6-10]. Discovering the underlying physics of dynamical systems has intrigued control theory researchers for decades, resulting in multiple subspace-based system identification works [11-14]. Koopman operator theory [15, 16] is an approach for such model discovery where the core idea is to transform the observed state-space variables to the space of square-integrable functions, where a linear operator provides an exact representation of the underlying dynamical system and the spectrum of the operator encodes all the non-linear behaviors. However, for computational purposes, finding a finite-dimensional approximation of the Koopman operator is challenging. The key to computing the finite-dimensional Koopman operator is fixing the lifting functions (observables); existing approaches such as classical or extended dynamic mode decomposition [17, 18] use an a priori choice of basis functions for lifting, but this choice usually fails to generalize to more complex environments. Instead, learning these transformations from the system trajectories themselves using deep neural networks (DNNs) has been shown to yield much richer invariant subspaces [19, 20].
16
+
17
+ Continuing the idea of lifting the non-linear state space into another space to learn linear transition dynamics, [21] proposed the use of a graph neural network as the encoder-decoder function. While graph neural networks (GNNs) [22] appear to be a natural approach for modeling the physics of networked systems, their ability to discover dynamic evolution models of large-scale networked systems is a nascent area of research $\left\lbrack {6,7,9,{23}}\right\rbrack$ . For NDS, where the number of system states increases with the number of nodes, the computational complexity of learning the Koopman operator also increases. Existing studies typically do not exploit the topology of the network or its sparsity when learning observable functions or the Koopman operator.
18
+
19
+ In this work, we address the challenge of learning dissimilar dynamics in sparse networks by formulating the representation learning of networked dynamical systems in a multi-agent paradigm. We refer to this approach as Distributed Koopman-GNN (DKGNN). DKGNN is more suitable for sparse and large networked dynamical systems, as the proposed distributed learning method yields superior computational efficiency compared to traditional methods. We apply GNNs to capture the distributed nature of the dynamical system behavior and transform the original state space into the Koopman observable space, and we subsequently use the network sparsity patterns to constrain the Koopman operator construction into a block-structured distributed representation, along with theoretical guarantees. Information-theoretic network clustering strategies are utilized for specific dynamical systems to capture the joint evolution of the clusters in a coarse-grained fashion, resulting in further computational benefits. Please see Figure 1 for an illustration of the approach.
20
+
21
+ ![01963ec4-05f0-7e1a-b267-9a6714d3960e_1_329_878_1148_624_0.jpg](images/01963ec4-05f0-7e1a-b267-9a6714d3960e_1_329_878_1148_624_0.jpg)
22
+
23
+ Figure 1: Overview of the proposed approach (best viewed in color): (Top row) the sparse networked dynamical system (NDS) is partitioned into clusters using dynamic spatio-temporal data, resulting in an agent representation (each color represents an agent). The time-series associated with each node is also color-coded by agent. (Bottom row) the re-arranged multi-dimensional spatio-temporal data is fed to the GNN along with the agent network structure to learn the nonlinear observables. Learning the distributed geometric Koopman operator by exploiting the sparsity of the multi-agent system is shown in the lower right, with colors in the Koopman matrix capturing the distributed connections in the agent topology (white blocks correspond to no edges between the agents and hence are all zeros).
24
+
25
+ Contributions. The main contributions of this paper are summarized as follows:
26
+
27
+ - We develop methods for learning distributed Koopman operator for large-scale networks, using system topology and network sparsity properties for NDS. We present a system theoretic learning approach that can exploit the network connectivity structure via GNNs.
28
+
29
+ - We introduce information theoretic-based clustering strategies for sparse NDS to learn a coarsened structure and model the system using a hierarchical multi-agent paradigm.
30
+
31
+ - We present theoretical results on bounding the performance of the distributed geometric Koop-man operator with respect to its centralized counterpart.
32
+
33
+ - We demonstrate that DKGNN yields two benefits. It improves the scalability of learning, and for sparse NDS with divergent dynamics across different parts of the network, it outperforms centralized approaches in prediction performance.
34
+
35
+ ### 1.1 Related Work
36
+
37
+ Koopman operator theory. The infinite-dimensional Koopman operator is computationally intractable. Several methods for identifying approximations of the infinite-dimensional Koopman operator on a finite-dimensional space have recently been developed. The most notable works include dynamic mode decomposition (DMD) [17, 24], extended DMD (EDMD) [18, 25], Hankel DMD [26], naturally structured DMD (NS-DMD) [27], and deep learning based DMD (deepDMD) [20, 28]. These methods are data-driven, and one or more of them have been successfully applied to system identification [17, 29], including system identification from noisy data [30], data-driven observability/controllability gramians for nonlinear systems [31, 32], control design [33-35], data-driven causal inference in dynamical systems [36], and identifying persistence of excitation conditions for nonlinear systems [37]. [38] discusses a distributed design of the Koopman operator without control, using dictionary lifting functions.
38
+
39
+ Graph Neural Networks. GNNs [22, 39] have found widespread use in applications involving non-Euclidean data [40]. Extending GNNs to model physics-driven processes gives rise to a new class of physics-inspired neural networks (PINN) [8-10]. A common theme is to model many-body interactions via a nearest-neighbor graph and then model the evolution of that graph $\left\lbrack {6,7,9}\right\rbrack$ . However, addressing issues around compositionality $\left\lbrack {7,{23}}\right\rbrack$ and scalability becomes important as the foundation for PI(G)NN matures and we seek to model larger, multi-scale spatio-temporal interactions. Moreover, applications such as molecular biology [20] and the power grid [41] motivate the modeling of NDS where the graph structure is distinct from k-nearest-neighbor graphs, with sparsity and connectivity that resemble small-world networks. Recent works such as [21] provide a bridge that seeks to integrate GNNs and Koopman operators to improve generalization ability and obtain simpler linear transition dynamics. However, their approach for learning Koopman state transitions and GNN embeddings results in performance and scalability bottlenecks as the system size increases.
40
+
41
+ ## 2 Methodology
42
+
43
+ ### 2.1 Networked Dynamical Systems and the Koopman Operator
44
+
45
+ Problem Statement: Consider a networked dynamical system (NDS) evolving over a network, $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ . Let the number of nodes and edges be ${n}_{v}$ and ${n}_{e}$ respectively and the governing equation for the NDS on $\mathcal{G}$ is given by,
46
+
47
+ $$
48
+ {x}_{t + 1} = F\left( {x}_{t}\right) , \tag{1}
49
+ $$
50
+
51
+ where ${x}_{t} \in \mathcal{M} \subseteq {\mathbb{R}}^{n}$ is the concatenated system state at time $t$ and $F : \mathcal{M} \rightarrow \mathcal{M}$ is the discrete-time nonlinear transition mapping. Our goal is to learn the system dynamics as expressed in equation (1) in a distributed approach combining the Koopman operator theory, graph neural networks and by leveraging on network sparsity properties.
52
+
53
+ Exploiting the network structure, the ${n}_{v}$ nodes can be grouped to form ${n}_{a}$ agents, where ${n}_{a} \leq {n}_{v}$ , resulting in a network denoted by ${\mathcal{G}}_{a} = \left( {{\mathcal{V}}_{a},{\mathcal{E}}_{a}}\right)$ with the state at any time $t$ partitioned as ${x}_{t} = {\left\lbrack {x}_{t,1}^{\top },\ldots ,{x}_{t,{n}_{a}}^{\top }\right\rbrack }^{\top }$ where, for every $\alpha \in \left\{ {1,\ldots ,{n}_{a}}\right\}$ , the states ${x}_{t,\alpha }$ belong to agent $\alpha$ . For completeness, we note that the number of nodes in ${\mathcal{V}}_{a}$ is equal to ${n}_{a}$ . The motivation for exploiting the network structure is to develop models that possess certain advantages over centrally learned models. A method to identify ${\mathcal{G}}_{a}$ from $\mathcal{G}$ for practical dynamical systems is discussed later in the paper. Associated with the system (1) is a linear operator, namely the Koopman operator $\mathbb{U}$ [42], which is defined as follows.
54
+
55
+ Definition 1 (Koopman Operator (KO) [42]). Given any $h \in {L}^{2}\left( \mathcal{M}\right)$ , the Koopman operator $\mathbb{U} : {L}^{2}\left( \mathcal{M}\right) \rightarrow {L}^{2}\left( \mathcal{M}\right)$ for the system (1) is defined as $\left\lbrack {\mathbb{U}h}\right\rbrack \left( x\right) = h\left( {F\left( x\right) }\right)$ , where ${L}^{2}\left( \mathcal{M}\right)$ is the space of square integrable functions on $\mathcal{M}$ .
56
+
57
+ Originally developed for autonomous systems, the Koopman framework has recently been extended to systems with control $\left\lbrack {{43},{44}}\right\rbrack$ . In this paper we consider a controlled dynamical system of the form:
58
+
59
+ $$
60
+ {x}_{t + 1} = F\left( {x}_{t}\right) + G\left( {x}_{t}\right) {u}_{t}, \tag{2}
61
+ $$
62
+
63
+ where $G : \mathcal{M} \rightarrow {\mathbb{R}}^{n \times q}$ is the input vector field and ${u}_{t} \in {\mathbb{R}}^{q}$ denote the control input to the system at time $t$ . The Koopman operator associated with (2) is defined on an extended state-space obtained as the product of the original state-space and the space of all control sequences, resulting in a control-affine dynamical system on the extended state-space $\left\lbrack {{43},{44}}\right\rbrack$ . In general, the Koopman operator is an infinite-dimensional operator, but for computation purposes, a finite-dimensional approximation of the operator is constructed from the obtained time-series data as discussed below.
64
+
65
+ Consider the time-series data from a networked dynamical system as $X = \left\lbrack \begin{array}{llll} {x}_{1} & {x}_{2} & \ldots & {x}_{k} \end{array}\right\rbrack \in$ ${\mathbb{R}}^{n \times k}$ , and the corresponding control inputs $U = \left\lbrack \begin{array}{llll} {u}_{1} & {u}_{2} & \ldots & {u}_{k} \end{array}\right\rbrack \in {\mathbb{R}}^{q \times k}$ . Define one time-step separated datasets, ${X}_{p}$ and ${X}_{f}$ from $X$ as ${X}_{p} = \left\lbrack {{x}_{1},{x}_{2},\ldots ,{x}_{k - 1}}\right\rbrack ,{X}_{f} = \left\lbrack {{x}_{2},{x}_{3},\ldots ,{x}_{k}}\right\rbrack$ and let $\mathcal{S} = \left\{ {{\Psi }_{1},\ldots ,{\Psi }_{m}}\right\}$ be the choice of non-linear functions or observables where ${\Psi }_{i} \in {L}^{2}\left( {{\mathbb{R}}^{n},\mathcal{B},\mu }\right)$ (where $\mathcal{B}$ is the Borel $\sigma$ algebra and $\mu$ denote the measure [42]) and ${\Psi }_{i} : {\mathbb{R}}^{n} \rightarrow \mathbb{C}$ . Define a vector valued observable function $\Psi : {\mathbb{R}}^{n} \rightarrow {\mathbb{C}}^{m}$ as, $\Psi \left( x\right) \mathrel{\text{:=}} {\left\lbrack \begin{array}{llll} {\Psi }_{1}\left( x\right) & {\Psi }_{2}\left( x\right) & \cdots & {\Psi }_{m}\left( x\right) \end{array}\right\rbrack }^{\top }$ . Then the following optimization problem which minimizes the least-squares cost yields the Koopman operator and the input matrix.
66
+
67
+ $$
68
+ \mathop{\min }\limits_{{A, B}}{\begin{Vmatrix}{Y}_{f} - A{Y}_{p} - BU\end{Vmatrix}}_{F}^{2} \tag{3}
69
+ $$
70
+
71
+ where ${Y}_{p} = \Psi \left( {X}_{p}\right) = \left\lbrack {\Psi \left( {x}_{1}\right) ,\cdots ,\Psi \left( {x}_{k - 1}\right) }\right\rbrack ,{Y}_{f} = \Psi \left( {X}_{f}\right) = \left\lbrack {\Psi \left( {x}_{2}\right) ,\cdots ,\Psi \left( {x}_{k}\right) }\right\rbrack , A \in {\mathbb{R}}^{m \times m}$ is the finite dimensional approximation of the Koopman operator defined on the space of observables and the matrix $B \in {\mathbb{R}}^{m \times q}$ is the input matrix. The optimization problem (3) can be solved analytically and the approximate Koopman operator and the input matrix are given by $\left\lbrack \begin{array}{ll} A & B \end{array}\right\rbrack = {Y}_{f}{\left\lbrack \begin{array}{ll} {Y}_{p} & U \end{array}\right\rbrack }^{ \dagger }$ [43], where ${\left( \cdot \right) }^{ \dagger }$ is the Moore-Penrose pseudo-inverse of a matrix. Identifying the observable functions such that $\mathcal{S}$ is invariant under the action of the Koopman operator is challenging. In this work, graph neural network-based mappings are used to construct the non-linear observable functions that satisfy the invariance by simultaneously learning the observables and the Koopman operator.
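As a concrete illustration of this closed-form solution, the following minimal numpy sketch stacks the lifted snapshots and inputs and applies the Moore-Penrose pseudo-inverse; the variable shapes are our assumptions, consistent with the definitions above.

```python
import numpy as np

def edmd_with_control(Y_p, Y_f, U):
    """Least-squares Koopman approximation with control, as in Eq. (3).

    Y_p, Y_f : (m, k-1) lifted snapshot matrices Psi(X_p), Psi(X_f)
    U        : (q, k-1) control inputs
    Returns A (m, m) and B (m, q) minimizing ||Y_f - A Y_p - B U||_F^2.
    """
    Z = np.vstack([Y_p, U])          # stack lifted states and inputs: (m+q, k-1)
    AB = Y_f @ np.linalg.pinv(Z)     # closed-form solution via the pseudo-inverse
    m = Y_p.shape[0]
    return AB[:, :m], AB[:, m:]      # A = first m columns, B = remaining q columns
```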
72
+
73
+ ### 2.2 Graph Neural Network based Koopman Observables
74
+
75
+ Consider the network $\mathcal{G}$ with ${n}_{v}$ nodes, where the time-series data at each node is supplemented with a node attribute capturing the nature of the node, denoted by the vector ${x}_{{v}_{i}}$ with $i \in \left\{ {1,2,\ldots ,{n}_{v}}\right\}$ . For instance, we can characterize the generators in an electric power grid network by their inertia values. Similarly, the designer can embed knowledge about the interaction between the agents using edge attributes, denoted as ${x}_{{e}_{ij}}$ for the edge connecting nodes $i$ and $j$ . We consider a graph neural network embedding to transition from the actual state space to the lifted state space using multiple compositional neural operations. At the ${t}^{th}$ time-step, the node and edge attributes are combined with the state vectors of the agents, which is compactly written as,
76
+
77
+ $$
78
+ {x}_{t, i}^{k} = {f}_{v}^{k}\left( {{x}_{t, i}^{k - 1},\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{f}_{e}^{k}\left( {{x}_{t, i}^{k - 1},{x}_{{v}_{i}}^{k - 1},{x}_{t, j}^{k - 1},{x}_{{v}_{j}}^{k - 1},{x}_{{e}_{ij}}^{k - 1}}\right) }\right) \tag{4}
79
+ $$
80
+
81
+ where the superscript $k$ denotes the ${k}^{th}$ layer of the GNN, and functions ${f}_{e}\left( \cdot \right)$ , and ${f}_{v}\left( \cdot \right)$ are edge and node-level aggregation functions in a GNN architecture. We use $\phi \left( \cdot \right)$ to denote the multi-layer GNN operation in a compact form.
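A minimal PyTorch sketch of one message-passing layer of the form in Eq. (4) is shown below, using sum aggregation over incoming edges; the MLP sizes, the concatenation order, and treating the node attributes as static are our assumptions, not necessarily the authors' exact architecture.

```python
import torch
import torch.nn as nn

class KoopmanGNNLayer(nn.Module):
    """One layer of Eq. (4): edge messages f_e over (receiver, sender, attributes),
    summed over neighbors, followed by a node update f_v."""
    def __init__(self, d_node, d_node_attr, d_edge_attr, d_hidden):
        super().__init__()
        self.f_e = nn.Sequential(
            nn.Linear(2 * (d_node + d_node_attr) + d_edge_attr, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden))
        self.f_v = nn.Sequential(
            nn.Linear(d_node + d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_node))

    def forward(self, x, x_v, x_e, edge_index):
        # x: (n_v, d_node) node states, x_v: (n_v, d_node_attr) node attributes,
        # x_e: (n_e, d_edge_attr) edge attributes, edge_index: (2, n_e) [src, dst].
        src, dst = edge_index
        msg = self.f_e(torch.cat([x[dst], x_v[dst], x[src], x_v[src], x_e], dim=-1))
        agg = torch.zeros(x.size(0), msg.size(-1), device=x.device)
        agg.index_add_(0, dst, msg)              # sum messages over incoming edges
        return self.f_v(torch.cat([x, agg], dim=-1))
```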
82
+
83
+ ## 3 Distributed Geometric Koopman Operator with Control Inputs
84
+
85
+ This section formally presents the computation of the distributed geometric Koopman operator with control. The (centralized) Koopman operator with control input for the system (2) is obtained by solving (3). For the ${n}_{a}$ -agent NDS, the resultant KO can be represented as ${n}_{a}^{2}$ block matrices:
86
+
87
+ $$
+ A = \left\lbrack \begin{matrix} {A}_{1} \\ {A}_{2} \\ \vdots \\ {A}_{{n}_{a}} \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} {A}_{11} & {A}_{12} & \cdots & {A}_{1{n}_{a}} \\ {A}_{21} & {A}_{22} & \cdots & {A}_{2{n}_{a}} \\ \vdots & \vdots & \ddots & \vdots \\ {A}_{{n}_{a}1} & {A}_{{n}_{a}2} & \cdots & {A}_{{n}_{a}{n}_{a}} \end{matrix}\right\rbrack \tag{5}
+ $$
88
+
89
+ The dynamics of the ${\alpha }^{th}$ agent have dimension ${m}_{\alpha }$ , such that $\mathop{\sum }\limits_{{\alpha = 1}}^{{n}_{a}}{m}_{\alpha } = m$ . It follows that the block matrix ${A}_{\alpha \beta } \in {\mathbb{R}}^{{m}_{\alpha } \times {m}_{\beta }}$ denotes the transition of agent $\alpha$ with respect to agent $\beta$ , and the transition mapping for agent $\alpha$ is given by ${A}_{\alpha }$ . Similarly, the control input matrix is partitioned as $B =$ blkdiag $\left( {{B}_{1},{B}_{2},\ldots ,{B}_{{n}_{a}}}\right)$ , where the matrix ${B}_{\alpha }$ corresponds to the input matrix of agent $\alpha \in \left\{ {1,2,\ldots ,{n}_{a}}\right\}$ . The objective of distributed learning is to compute these block matrices in a distributed manner and assemble the geometric Koopman operator and the control input matrix for the complete NDS, as opposed to directly solving the centralized optimization problem in Eq. (3), without sacrificing performance. There are two major advantages to this approach. First, if there is a change in the local agent behavior, one can simply update the transition mapping corresponding to that agent and the agents dependent on it to learn the full system evolution. Second, computational advantages can be obtained by learning each agent's transition mapping in parallel, which makes this approach particularly appropriate for sparse networks.
90
+
91
+ By exploiting the topology of the network, the KO and the control input matrices are computed in a distributed manner. As a consequence, if agent $i$ is not a neighbor of agent $j$ , that is, the dynamics of agent $i$ are not affected by the dynamics of agent $j$ , we set ${A}_{ij} = 0$ . Therefore, for every $\alpha \in \left\{ {1,2,\cdots ,{n}_{a}}\right\}$ , let ${\widehat{A}}_{\alpha }$ be the transition mapping corresponding to agent $\alpha$ ; then the distributed Koopman operator is given by ${A}_{D} = {\left\lbrack \begin{array}{llll} {\widehat{A}}_{1}^{\top } & {\widehat{A}}_{2}^{\top } & \cdots & {\widehat{A}}_{{n}_{a}}^{\top } \end{array}\right\rbrack }^{\top }$ . For a sparse network, the distributed Koopman operator will be a sparse matrix irrespective of whether the centralized Koopman operator is sparse or full (Figure 1). Let ${X}_{p}$ and ${X}_{f}$ be the one time-step separated time-series data on the state space, and let $\phi$ be the GNN embedding that maps the state-space data into an embedded space. Then the time-series data on the embedded space for every agent can be expressed in terms of the neighbor and non-neighbor agents.
92
+
93
+ Remark 2. The one time-step forwarded time-series data corresponding to agent $\alpha$ is given by ${A}_{\alpha }{Y}_{p} = \left\lbrack \begin{array}{ll} {A}_{\mathcal{N}\left( \alpha \right) } & {A}_{\overline{\mathcal{N}\left( \alpha \right) }} \end{array}\right\rbrack \left\lbrack \begin{array}{l} {Y}_{p,\mathcal{N}\left( \alpha \right) } \\ {Y}_{p,\overline{\mathcal{N}\left( \alpha \right) }} \end{array}\right\rbrack$ , where $\mathcal{N}\left( \alpha \right)$ is the set of agents containing the neighbors of agent $\alpha$ and itself, $\overline{\mathcal{N}\left( \alpha \right) }$ is the set of agents that are non-neighbors of agent $\alpha$ , and the (rectangular) matrices ${A}_{\mathcal{N}\left( \alpha \right) }$ and ${A}_{\overline{\mathcal{N}\left( \alpha \right) }}$ are the transition mappings associated with agent $\alpha$ .
94
+
95
+ Let ${R}_{p,\alpha },{R}_{f,\alpha }$ , and ${R}_{u,\alpha }$ be transformation matrices defined such that, when pre-multiplied to a matrix $D$ , they remove the zero rows of $D$ . If the matrix $D$ has no zero rows, then the transformation matrices are the identity.
96
+
97
+ Theorem 3. The centralized Koopman $\left( {A, B}\right)$ learning problem described in Eq. (3) can be expressed as a distributed Koopman $\left( {{A}_{D},{B}_{D}}\right)$ learning problem such that there exist matrices ${\widehat{A}}_{1},{\widehat{A}}_{2},\ldots ,{\widehat{A}}_{{n}_{a}},{\widehat{B}}_{1},{\widehat{B}}_{2},\ldots ,{\widehat{B}}_{{n}_{a}}$ and the distributed Koopman operator is given by ${A}_{D} = {\left\lbrack \begin{array}{llll} {\widehat{A}}_{1}^{\top } & {\widehat{A}}_{2}^{\top } & \cdots & {\widehat{A}}_{{n}_{a}}^{\top } \end{array}\right\rbrack }^{\top }$ , the input matrix is ${B}_{D} =$ blkdiag $\left( {{\widehat{B}}_{1},{\widehat{B}}_{2},\ldots ,{\widehat{B}}_{{n}_{a}}}\right)$ , where for $\alpha \in \left\{ {1,2,\ldots ,{n}_{a}}\right\} ,{\widehat{A}}_{\alpha } = {A}_{\mathcal{N}\left( \alpha \right) }{R}_{p,\alpha },{\widehat{B}}_{\alpha } = {B}_{\alpha }{R}_{u,\alpha }$ and ${A}_{\mathcal{N}\left( \alpha \right) },{B}_{\alpha }$ are obtained as a solution to the optimization problem $\mathop{\min }\limits_{{{A}_{\mathcal{N}\left( \alpha \right) },{B}_{\alpha }}}{\begin{Vmatrix}{Y}_{f,\alpha } - {A}_{\mathcal{N}\left( \alpha \right) }{Y}_{p,\mathcal{N}\left( \alpha \right) } - {B}_{\alpha }{U}_{\alpha }\end{Vmatrix}}_{F}^{2}$ .
98
+
99
+ From Theorem 3, with ${g}_{t} = \phi \left( {x}_{t}\right) ,\phi$ being the GNN encoder, the distributed geometric Koopman operator system with control input is given by ${g}_{t + 1} = {A}_{D}{g}_{t} + {B}_{D}{u}_{t}$ .
100
+
101
+ Corollary 4. The distributed learning problem and the centralized learning problem yield the same Koopman operator for a fully connected network.
102
+
103
+ The proofs for Theorem 3 and Corollary 4 are included in the appendix.
104
+
105
+ ### 3.1 Training Distributed Geometric Koopman Model
106
+
107
+ The state-space data is mapped to the GNN-embedded space using the GNN encoder $\phi$ . To retrieve the actual state-space data from the GNN-embedded space, we use a decoding GNN operator such that ${\widehat{x}}_{t} = \varphi \left( {g}_{t}\right)$ . The decoder $\varphi \left( \cdot \right)$ follows a GNN architecture similar to that of the encoder; however, it maps from the
108
+
109
+ ![01963ec4-05f0-7e1a-b267-9a6714d3960e_4_655_1711_822_333_0.jpg](images/01963ec4-05f0-7e1a-b267-9a6714d3960e_4_655_1711_822_333_0.jpg)
110
+
111
+ Figure 2: Distributed geometric Koopman architecture
112
+
113
+ lifted Koopman space to the original state space. Looking into the agent-wise architectural detail, both the encoder and decoder functions can be represented for the ${i}^{th}$ agent as ${\phi }_{i}\left( \cdot \right) ,{\varphi }_{i}\left( \cdot \right)$ as shown in Figure 2, with the understanding that all the GNN functionalities take the neighboring agent states and attributes as additional inputs. This facilitates the computation of the Koopman matrices in a distributed manner. This architecture leads us to compute an auto-encoding loss and a prediction loss over the time-steps $t = 1,2,\ldots , k - 1$ , which are given as follows:
114
+
115
+ $$
116
+ {\mathcal{L}}_{ae} = \frac{1}{k}\mathop{\sum }\limits_{{t = 1}}^{{k - 1}}\mathop{\sum }\limits_{{i = 1}}^{{n}_{a}}{\begin{Vmatrix}{\varphi }_{i}\left( {{\phi }_{i}\left( {x}_{t, i}\right) }\right) - {x}_{t, i}\end{Vmatrix}}^{2},\;{\mathcal{L}}_{p} = \frac{1}{k}\mathop{\sum }\limits_{{t = 1}}^{{k - 1}}\mathop{\sum }\limits_{{i = 1}}^{{n}_{a}}{\begin{Vmatrix}{\varphi }_{i}\left( {g}_{t + 1, i}\right) - {x}_{t + 1, i}\end{Vmatrix}}^{2},
117
+ $$
118
+
119
+ with a total loss of $\mathcal{L} = {\mathcal{L}}_{ae} + {\mathcal{L}}_{p}$ . The algorithm consists of two main update steps performed sequentially: one updates the Koopman and control input matrices in a distributed manner for a fixed set of GNN encoder and decoder parameters, and the other updates the GNN weights given the learned distributed geometric Koopman representation. Algorithm 1 shows the computational steps, where the function DistributedKoopmanMatrices $\left( \cdot \right)$ returns the distributed Koopman state and input matrices ${A}_{D},{B}_{D}$ . Thereafter, the Main(.) function runs the update of the distributed Koopman matrices and the GNN parameters sequentially for each epoch, as shown in steps 15 and 16. For simplicity of representation in the algorithm, we use the compact notations $\phi \left( \cdot \right)$ and $\varphi \left( \cdot \right)$ instead of the agent-wise representation as in Figure 2.
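A minimal sketch of this alternating loop is given below, assuming schematic snapshot-matrix shapes and a `koopman_fn` wrapper around the distributed least-squares solve; all names and shapes are ours, not the authors' implementation.

```python
import torch

def train_epoch(phi, varphi, X_p, X_f, U, optimizer, koopman_fn):
    """One epoch of the alternating update (sketch of Main() in Algorithm 1).

    Shapes are schematic: X_p, X_f are (n, k-1) state snapshots, U is (q, k-1)
    inputs, phi/varphi map column-wise between state and lifted spaces, and
    koopman_fn wraps the distributed least-squares solve (e.g. the routine
    sketched after Algorithm 1, with the agent partition baked in).
    """
    # Koopman update: fit A_D, B_D on the current embeddings (no gradient tracking).
    with torch.no_grad():
        A_np, B_np = koopman_fn(phi(X_p).cpu().numpy(),
                                phi(X_f).cpu().numpy(),
                                U.cpu().numpy())
    A_D = torch.as_tensor(A_np, dtype=X_p.dtype, device=X_p.device)
    B_D = torch.as_tensor(B_np, dtype=X_p.dtype, device=X_p.device)
    # GNN update: auto-encoding loss plus one-step prediction loss through (A_D, B_D).
    Y_p = phi(X_p)
    g_next = A_D @ Y_p + B_D @ U                   # g_{t+1} = A_D g_t + B_D u_t, column-wise
    loss = ((varphi(Y_p) - X_p) ** 2).mean() + ((varphi(g_next) - X_f) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    loss_value = loss.item()
    optimizer.step()
    return loss_value
```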
120
+
121
+ Algorithm 1 Distributed Geometric Koopman Operator with Control Computation
122
+
123
+ ---
124
+
125
+ 1: function DISTRIBUTEDKOOPMANMATRICES $\left( {{X}_{p},{X}_{f}, U,\phi }\right)$
126
+
127
+ Map the time-series data to the GNN-embedded space using $\phi$ as follows:
128
+
129
+ $$
130
+ {Y}_{p} = \phi \left( {X}_{p}\right) = \left\lbrack {\phi \left( {x}_{1}\right) ,\phi \left( {x}_{2}\right) ,\cdots ,\phi \left( {x}_{k - 1}\right) }\right\rbrack ,\;{Y}_{f} = \phi \left( {X}_{f}\right) = \left\lbrack {\phi \left( {x}_{2}\right) ,\phi \left( {x}_{3}\right) ,\cdots ,\phi \left( {x}_{k}\right) }\right\rbrack
131
+ $$
132
+
133
+ for $\alpha = 1,2,\ldots ,{n}_{a}$ do
134
+
135
+ Define the transformation matrices for agent $\alpha$ as:
136
+
137
+ $$
138
+ {T}_{p,\alpha } \mathrel{\text{:=}} {blkdiag}\left( {a{e}_{1},\ldots , a{e}_{{n}_{a}}}\right) ,{T}_{f,\alpha } \mathrel{\text{:=}} {blkdiag}\left( {e{e}_{1},\ldots , e{e}_{{n}_{a}}}\right) ,
139
+ $$
140
+
141
+ ${T}_{u,\alpha } \mathrel{\text{:=}} {blkdiag}\left( {e{u}_{1},\ldots , e{u}_{{n}_{a}}}\right)$ , where
142
+
143
+ $a{e}_{i} = {\left( {a}_{\alpha } + {e}_{\alpha }\right) }_{i} \otimes {I}_{{m}_{i}}, e{e}_{i} = {\left( {e}_{\alpha }\right) }_{i} \otimes {I}_{{m}_{i}}, e{u}_{i} = {\left( {e}_{\alpha }\right) }_{i} \otimes {I}_{{q}_{i}},$
144
+
145
+ where $\otimes$ is the Kronecker product.
146
+
147
+ Compute ${Y}_{f,\alpha },{Y}_{p,\mathcal{N}\left( \alpha \right) },{U}_{\alpha }$ associated with agent $\alpha$ as
148
+
149
+ ${Y}_{f,\alpha } = {R}_{f,\alpha }{T}_{f,\alpha }{Y}_{f},{Y}_{p,\mathcal{N}\left( \alpha \right) } = {R}_{p,\alpha }{T}_{p,\alpha }{Y}_{p},{U}_{\alpha } = {R}_{u,\alpha }{T}_{u,\alpha }U$
150
+
151
+ Solve the optimization problem: $\mathop{\min }\limits_{{{A}_{\mathcal{N}\left( \alpha \right) },{B}_{\alpha }}}{\begin{Vmatrix}{Y}_{f,\alpha } - {A}_{\mathcal{N}\left( \alpha \right) }{Y}_{p,\mathcal{N}\left( \alpha \right) } - {B}_{\alpha }{U}_{\alpha }\end{Vmatrix}}_{F}^{2}$
152
+
153
+ Compute ${\widehat{A}}_{\alpha } = {A}_{\mathcal{N}\left( \alpha \right) }{R}_{p,\alpha }$ and ${\widehat{B}}_{\alpha } = {B}_{\alpha }{R}_{u,\alpha }$
154
+
155
+ end for
156
+
157
+ return: ${A}_{D} = {\left\lbrack \begin{array}{llll} {\widehat{A}}_{1}^{\top } & {\widehat{A}}_{2}^{\top } & \cdots & {\widehat{A}}_{{n}_{a}}^{\top } \end{array}\right\rbrack }^{\top },{B}_{D} =$ blkdiag $\left( {{\widehat{B}}_{1},{\widehat{B}}_{2},\ldots ,{\widehat{B}}_{{n}_{a}}}\right)$ .
158
+
159
+ end function
160
+
161
+ function MAIN( )
162
+
163
+ Given state $\left( {{X}_{p},{X}_{f}}\right)$ and input $\left( U\right)$ time-series data from an ${n}_{a}$ -agent network
164
+
165
+ Initialize the GNN-based encoder $\left( \phi \right)$ and decoder $\left( \varphi \right)$ network
166
+
167
+ for epochs $= 1,2,\ldots ,{N}_{\text{epoch }}$ do
168
+
169
+ Koopman Update: Run $\left( {{A}_{D},{B}_{D}}\right) =$ DistributedKoopmanMatrices $\left( {{X}_{p},{X}_{f}, U,\phi }\right)$
170
+
171
+ GNN Update: Compute $\mathcal{L} = {\mathcal{L}}_{p} + {\mathcal{L}}_{ae}$ and backpropagate $\mathcal{L}$ to update the $\phi ,\varphi$ parameters.
172
+
173
+ end for
174
+
175
+ return: Updated ${A}_{D},{B}_{D},\phi$ , and, $\varphi$ .
176
+
177
+ end function
178
+
179
+ ---
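The following numpy sketch mirrors the DistributedKoopmanMatrices routine, working directly with index sets instead of the explicit selection matrices $T$ and $R$; the agent partition and neighbor sets are assumed given, and all names are ours.

```python
import numpy as np

def distributed_koopman_matrices(Y_p, Y_f, U, agent_dims, input_dims, neighbors):
    """Sketch of the per-agent least squares in Theorem 3 / Algorithm 1.

    Y_p, Y_f   : (m, k-1) lifted snapshots;  U : (q, k-1) control inputs
    agent_dims : list of m_alpha with sum m; input_dims : list of q_alpha with sum q
    neighbors  : neighbors[a] = agents in N(a), i.e. the neighbors of a plus a itself
    Returns the block-sparse A_D (m, m) and block-diagonal B_D (m, q).
    """
    m, q = Y_p.shape[0], U.shape[0]
    row_ofs = np.concatenate([[0], np.cumsum(agent_dims)])
    inp_ofs = np.concatenate([[0], np.cumsum(input_dims)])
    A_D, B_D = np.zeros((m, m)), np.zeros((m, q))
    for a, nbrs in enumerate(neighbors):
        rows = slice(row_ofs[a], row_ofs[a + 1])                  # lifted coordinates of agent a
        cols = np.concatenate([np.arange(row_ofs[b], row_ofs[b + 1]) for b in nbrs])
        u_rows = slice(inp_ofs[a], inp_ofs[a + 1])
        # Per-agent least squares: min || Y_{f,a} - A_{N(a)} Y_{p,N(a)} - B_a U_a ||_F^2
        Z = np.vstack([Y_p[cols], U[u_rows]])
        AB = Y_f[rows] @ np.linalg.pinv(Z)
        A_D[rows, cols] = AB[:, :cols.size]                       # non-neighbor blocks stay zero
        B_D[rows, u_rows] = AB[:, cols.size:]
    return A_D, B_D
```

Note that the per-agent solves in the loop are independent of one another, so they could be dispatched in parallel, which is the source of the computational advantage discussed above.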
180
+
181
+ ### 3.2 Multi-Agent Network Construction via Information Transfer-based Clustering
182
+
183
+ Mapping nodes in an NDS to nodes in an agent network is a core aspect of our proposed method. We use an information-theoretic clustering method [45] that exploits both the adjacency matrix structure and the dynamical properties of the network for this task. For a dynamical system, the definition of information transfer [46] from a dynamical state ${x}_{t, i}$ to another state ${x}_{t, j}$ is based on the intuition that the total entropy of a dynamical state ${x}_{t, j}$ is equal to the sum of the entropy of ${x}_{t, j}$ when another state ${x}_{t, i}$ is not present in the dynamics and the amount of entropy transferred from ${x}_{t, i}$ to ${x}_{t, j}$ . In particular, for a discrete-time dynamical system ${x}_{t + 1} = F\left( {x}_{t}\right)$ , where ${x}_{t} = {\left\lbrack \begin{array}{ll} {x}_{t,1}^{\top } & {x}_{t,2}^{\top } \end{array}\right\rbrack }^{\top }$ and $F = {\left\lbrack \begin{array}{ll} {f}_{1}^{\top } & {f}_{2}^{\top } \end{array}\right\rbrack }^{\top }$ , the one-step information transfer from ${x}_{t,1}$ to ${x}_{t,2}$ , as the system evolves from time-step $t$ to $t + 1$ , is ${\left\lbrack {T}_{{x}_{t,1} \rightarrow {x}_{t,2}}\right\rbrack }_{t}^{t + 1} = H\left( {{x}_{t + 1,2} \mid {x}_{t,2}}\right) - {H}_{{\mathcal{H}}_{t,1}}\left( {{x}_{t + 1,2} \mid {x}_{t,2}}\right)$ . Here, $H\left( {{x}_{t + 1,2} \mid {x}_{t,2}}\right)$ is the conditional Shannon entropy of ${x}_{t + 1,2}$ for the original system and ${H}_{{\mathcal{H}}_{t,1}}\left( {{x}_{t + 1,2} \mid {x}_{t,2}}\right)$ is the conditional entropy of ${x}_{t + 1,2}$ for the system where ${x}_{t,1}$ has been held frozen at time $t$ . Note that the information transfer is in general asymmetric and characterizes the influence of one state on any other state. Furthermore, for a stable dynamical system the information transfer between the states always settles to a steady-state value.
184
+
185
+ ![01963ec4-05f0-7e1a-b267-9a6714d3960e_6_386_189_1004_427_0.jpg](images/01963ec4-05f0-7e1a-b267-9a6714d3960e_6_386_189_1004_427_0.jpg)
186
+
187
+ Figure 3: Illustration of divergent cluster dynamics from a power network. Top plot shows that the transient frequency trajectories from three different clusters behave differently. Phase-space plots in the bottom row illustrate the temporal evolution of the nodal attributes in each cluster, initial time-points are marked larger and lighter, while later time-points are marked thinner and darker.
188
+
189
+ We use this information transfer measure to define an influence graph for the NDS studied in this paper. We form a directed weighted graph with the states as the nodes and introduce an edge from ${x}_{t,1}$ to ${x}_{t,2}$ iff the information transfer from ${x}_{t,1}$ to ${x}_{t,2}$ is non-zero. Moreover, the edge-weight for the edge ${x}_{t,1} \rightarrow {x}_{t,2}$ is $\exp \left( {-\left| {T}_{{x}_{t,1} \rightarrow {x}_{t,2}}\right| /\beta }\right)$ [45], where $\left| {T}_{{x}_{t,1} \rightarrow {x}_{t,2}}\right|$ is the steady-state information transfer from ${x}_{t,1}$ to ${x}_{t,2}$ (we assume stable dynamics) and $\beta > 0$ is a parameter similar to temperature in a Gibbs' distribution. Applying this to a dynamical system, a directed weighted graph is computed based on the information transfer and is clustered accordingly to obtain a multi-agent network. Figure 3 uses a power network example to illustrate how the nodal attributes from different clusters demonstrate different transient evolution trajectories.
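As an illustrative sketch (not the clustering routine of [45]), the code below forms the weights $w_{ij} = \exp(-|T_{i\to j}|/\beta)$ from a steady-state information-transfer matrix, symmetrizes them, and groups the states by hierarchical clustering, treating small weights (strong influence) as small distances.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def influence_based_agents(T, beta=1.0, n_agents=5):
    """Cluster nodes into agents from a steady-state information-transfer matrix.

    T[i, j] = steady-state transfer from state i to state j (stable dynamics
    assumed).  Edge weights follow the text, w_ij = exp(-|T_ij| / beta), so
    strongly coupled pairs get small weights; the symmetrized weights are
    treated as distances.  The exact clustering in [45] may differ.
    """
    W = np.exp(-np.abs(T) / beta)
    np.fill_diagonal(W, 0.0)
    D = 0.5 * (W + W.T)                        # symmetrize the directed weights
    Z = linkage(squareform(D, checks=False), method="average")
    labels = fcluster(Z, t=n_agents, criterion="maxclust")
    return labels - 1                          # agent index for every node, 0..n_agents-1
```

With `n_agents=5`, this would mirror the 5-agent partitioning used for the IEEE 68-bus experiments reported below.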
190
+
191
+ ## 4 Numerical Experiments
192
+
193
+ In this section, we aim to answer the following research questions through our experiments: (RQ1) How does the distributed GNN-based Koopman (DKGNN) model's performance compare with state-of-the-art approaches such as the centralized GNN-based Koopman (CKGNN) [21] and graph neural network approaches for modeling multi-body interactions [47]? (RQ2) How do various dynamical system properties such as sparsity, spatio-temporal correlation, and damping influence the performance boost from the distributed algorithm? (RQ3) What is the potential for distributed approaches to scale to larger NDS in the future?
194
+
195
+ <table><tr><td>Power Grid</td><td>CKGNN</td><td>DKGNN area-wise clustering</td><td>DKGNN IT-based clustering</td><td>PN</td></tr><tr><td>Disturbance Location</td><td>MSE</td><td>MSE</td><td>MSE</td><td>MSE</td></tr><tr><td>One hop</td><td>0.0352</td><td>0.0064</td><td>0.0079</td><td>0.1123</td></tr><tr><td>Two hops</td><td>0.0212</td><td>0.0044</td><td>0.0049</td><td>0.2498</td></tr><tr><td>Three hops</td><td>0.029</td><td>0.0075</td><td>0.0098</td><td>0.0468</td></tr><tr><td>High degree</td><td>0.0055</td><td>0.0013</td><td>8.538E-04</td><td>0.0246</td></tr><tr><td>Low degree</td><td>0.0195</td><td>0.005</td><td>0.006</td><td>0.0427</td></tr></table>
196
+
197
+ <table><tr><td>Oscillator</td><td>CKGNN</td><td>DKGNN</td><td>PN</td></tr><tr><td>Disturbance Location</td><td>MSE</td><td>MSE</td><td>MSE</td></tr><tr><td>High damping</td><td>1.938E-04</td><td>1.026E-04</td><td>3.6E-04</td></tr><tr><td>Low Damping</td><td>1.404E-04</td><td>1.185E-04</td><td>5.205E-04</td></tr><tr><td>Random</td><td>4.86E-05</td><td>4.059E-04</td><td>4.7328</td></tr><tr><td>Rope</td><td>0.2301</td><td>0.2008</td><td>0.3091</td></tr></table>
198
+
199
+ Table 1: Prediction performance of the proposed distributed geometric Koopman approach with other baselines in terms of mean square errors (MSE) averaged over all time-steps and states for test trajectories.
200
+
201
+ ![01963ec4-05f0-7e1a-b267-9a6714d3960e_7_337_202_1116_547_0.jpg](images/01963ec4-05f0-7e1a-b267-9a6714d3960e_7_337_202_1116_547_0.jpg)
202
+
203
+ Figure 4: DKGNN outperforms the baselines. Columns A, B, C correspond to the rope, oscillator (high damping), and grid (high degree) examples, respectively. Prediction error over time-steps (darker lines representing the median) is shown in Row A, and MSE-based box plots are shown in Row B.
204
+
205
+ Environments. We perform numerical experiments on three different datasets. The baseline rope system introduced in [7] consists of a set of six objects connected via a line graph. The second dataset is a network of oscillators, a prototype used for modelling various real-world systems in biology $\left\lbrack {{48},{49}}\right\rbrack$ , synchronization of fireflies $\left\lbrack {50}\right\rbrack$ , superconducting Josephson junctions $\left\lbrack {51}\right\rbrack$ , etc. We consider a sparse network of oscillators consisting of 50 agents, where the dynamics at each oscillator are governed by a second-order swing differential equation. The network density for this network is 0.0408. Our third dataset studies a significantly larger and more complex system of immense practical importance. Power grid networks [52] are complex infrastructures that are essential for every aspect of modern life. The ability to predict transient behavior in such networks is key to the prevention of cascading failures and effective integration of renewable energy sources [53, 54]. We consider the IEEE 68-bus power grid model [55] that represents the interconnections of the New England Test System and the New York Power System. This is a sparse, heterogeneous network with a network density of 0.0378; the network has 68 nodes, with 16 nodes representing generators and the rest being loads. The ability to accurately predict changes in voltage and frequency is key to capturing the tendency of a power grid to move towards undesired oscillatory regions. Transient stability simulations for the grid datasets are performed with the Power Systems Toolbox [56]. We provide more details on the datasets and experiments in the supplementary material.
206
+
207
+ Baselines and Implementation Details. We baseline our method against the centralized GNN-based Koopman (CKGNN) [21] and the propagation network (PN), a GNN-based approach [47], using their available implementations. We evaluate all models on the trajectory prediction task, where we predict the node-level time-series measurements for each of these environments, representing the velocity of objects (rope), angles and frequencies of oscillators, and frequency measurements for the power grid. For PN, we slightly modify the prediction workflow from the author-provided implementation to make sure that, for consecutive time-step prediction, the PN is always fed the predicted signal values except for the initial signal input. Our methods are implemented in the PyTorch framework [57] and run on an NVIDIA A100 GPU. For the PN baseline, we use 3 propagation steps with a batch size of 32. The hidden-layer dimensions of the relation encoder, object encoder, and propagation effect are set as follows: for PN we use 150, 100, and 100, respectively, in all cases; for the rope system with CKGNN and DKGNN we use 120, 100, and 100; and for the oscillator and grid examples with both CKGNN and DKGNN, these are all set to 60. The models are trained with an Adam-based stochastic gradient descent optimizer with a learning rate of ${10}^{-5}$ . For the rope, we consider 10,000 episodes with 100 time-steps, a training-testing split of ${90}\% - {10}\%$ , and a batch size of 8 episodes. For the oscillator, we consider node state trajectories of 100 time-steps, train with a total of 9000 time-steps and a batch size of 10 trajectories, and test with three different configurations, predicting each trajectory of 100 time-steps. For the power grid network, we consider 50 time-steps to capture the initial fast transients, train the model with 1100 time-steps and a batch size of 5 episodes, and test in five different scenarios with multiple testing trajectories of 50 time-steps each.
208
+
209
+ RQ1: Prediction performance analysis. Figure 4 shows that DKGNN outperforms the other baselines in trajectory prediction. Table 1 reports the mean square error (MSE) averaged over all time-steps and over all states in the test trajectory dataset. The rope system is minimally sparse, with a network density of 0.33, and DKGNN still yields improved predictive performance compared to CKGNN. The improvements are significantly more pronounced for the sparser and larger network models of oscillators (network density 0.0408) and the power grid (network density 0.0378). The superior prediction performance over considerable trajectory time-steps (as demonstrated in the second row of Figure 4) substantiates the applicability of DKGNN for sparse NDS.
210
+
211
+ RQ2.1 Performance with respect to varying NDS properties. The damping parameter provides us with a way to systematically study the response of an NDS to an input. Lower damping implies that the system will take longer to converge to a steady state. We hypothesize that input perturbations of the same magnitude introduced at different nodes will evoke different responses depending on the connectivity structure around those nodes. For the oscillator network, we consider testing scenarios with disturbances created at high-damping nodes ( $> {13}$ in appropriate units) and low-damping nodes $\left( { < 1}\right)$ . We consider five different configurations for the power grid network. Three of the scenarios are based on perturbations in loads that are, respectively, one, two, and three hops away from the generator buses, and two other load disturbance scenarios are considered at locations with high and low degrees of connectivity. We observe that the DKGNN approach produces better prediction performance in all of these scenarios (81, 79, 74, 76, 74% improvements with area-wise partitioning, and 77, 76, 66, 84, 69% improvements with information-theoretic partitioning, for the five cases listed in Table 1 from top to bottom, with respect to the centralized approach). The second and third columns of Figure 4 correspond to the oscillator (where the disturbance is at high-damping nodes) and the power grid (where the disturbance is at high-degree buses), respectively; both show the superior performance of DKGNN over the baselines.
212
+
213
+ RQ2.2 Performance with respect to sparse NDS clustering. This subsection validates the effectiveness of information-theoretic-clustering-based agent structure discovery using the power grid network. The IEEE 68-bus grid network specifications also include an area-wise partitioning that is based on eigenvalue separation and extensive application of domain knowledge [58]. Both the expert-driven partitioning and our clustering-driven partitioning divide the grid into 5 clusters, yielding a 5-agent network to use for training. Table 1 and Figure 4 show that DKGNN exploits the localized dominant dynamics and yields superior predictive performance compared to centralized approaches such as CKGNN and PN.
214
+
215
+ RQ3. Computational Scalability. We compare the runtime of our DKGNN with that of its centralized counterpart, CKGNN. Let $\tau \left( {\phi + {A}_{D} + \varphi }\right)$ denote the combined computation time for the GNN encoder $\left( \phi \right)$ , the distributed Koopman operator $\left( {A}_{D}\right)$ , and the GNN decoder $\left( \varphi \right)$ . Similarly, $\tau \left( {\phi + A + \varphi }\right)$ denotes the computation time for CKGNN, where the centralized Koopman operator $\left( A\right)$ is obtained. The reduction in total runtime $\left( \% \right)$ is computed as $\frac{\tau \left( {CKGNN}\right) - \tau \left( {DKGNN}\right) }{\tau \left( {CKGNN}\right) } \times {100}$ . From Figure 5, it is clear that there is a significant reduction in runtime for the larger and sparser networks of the oscillator (50 nodes with network density 0.041) and the power grid (68 nodes, 5 clustered areas, and network density 0.038). These examples see a considerable performance boost (45.63% for the oscillator and 32% for the power grid) owing to capturing the dominant localized dynamic behavior. The rope, which is a smaller system with 6 nodes and a single excitation (at the top), shows only a slight improvement in runtime (5%), owing to its high network density (0.33).
216
+
217
+ ![01963ec4-05f0-7e1a-b267-9a6714d3960e_8_1027_1386_450_313_0.jpg](images/01963ec4-05f0-7e1a-b267-9a6714d3960e_8_1027_1386_450_313_0.jpg)
218
+
219
+ Figure 5: Scalability of DKGNN compared to KGNN model.
220
+
221
+ ## 5 Conclusions
222
+
223
+ We present a geometric deep learning based distributed Koopman operator (DKGNN) framework that exploits dynamical system sparsity to improve computational scalability. Our results on bounding the DKGNN performance with respect to its centralized counterpart provide a rigorous theoretical foundation. Extensive empirical studies on large NDS of oscillators and practical power grid models show the effectiveness of the DKGNN design with respect to varying degrees of NDS dynamical properties and sparsity patterns. Future research will look into incorporating attention capabilities into the distributed design, investigating robustness in the presence of physical or adversarial faults, and performing control design on the learned distributed dynamical model.
224
+
225
+ References
226
+
227
+ [1] Coryn AL Bailer-Jones, David JC MacKay, and Philip J Withers. A recurrent neural network for modelling dynamical systems. Network: Computation in Neural Systems, 9(4):531-547, 1998. 1
228
+
229
+ [2] Min Han, Zhiwei Shi, and Wei Wang. Modeling dynamic system by recurrent neural network with state variables. In International Symposium on Neural Networks, pages 200-205. Springer, 2004.
230
+
231
+ [3] Olalekan Ogunmolu, Xuejun Gu, Steve Jiang, and Nicholas Gans. Nonlinear systems identification using deep dynamic neural networks. arXiv preprint arXiv:1610.01439, 2016.
232
+
233
+ [4] Yu Wang. A new concept using LSTM neural networks for dynamic system identification. In 2017 American Control Conference (ACC), pages 5324-5329. IEEE, 2017.
234
+
235
+ [5] FA Gers, J Schmidhuber, and F Cummins. Learning to forget: Continual prediction with lstm. Neural computation, 12(10):2451-2471, 2000. 1
236
+
237
+ [6] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in neural information processing systems, pages 4502-4510, 2016. 1, 2, 3
238
+
239
+ [7] Yunzhu Li, Jiajun Wu, Russ Tedrake, Joshua B Tenenbaum, and Antonio Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. arXiv preprint arXiv:1810.01566, 2018. 2, 3, 8
240
+
241
+ [8] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 2019. 3
242
+
243
+ [9] Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter W Battaglia. Learning to simulate complex physics with graph networks. arXiv preprint arXiv:2002.09405, 2020. 2, 3
244
+
245
+ [10] Martin JA Schuetz, J Kyle Brubaker, and Helmut G Katzgraber. Combinatorial optimization with physics-inspired graph neural networks. Nature Machine Intelligence, 4(4):367-377, 2022. 1,3
246
+
247
+ [11] Tohru Katayama. Subspace methods for system identification. Springer Science & Business Media, 2006. 1
248
+
249
+ [12] Michel Verhaegen and Vincent Verdult. Filtering and system identification: a least squares approach. Cambridge University Press, 2007.
250
+
251
+ [13] Rik Pintelon and Johan Schoukens. System identification: a frequency domain approach. John Wiley & Sons, 2012.
252
+
253
+ [14] Zoltán Szabó, Peter SC Heuberger, József Bokor, and Paul MJ Van den Hof. Extended Ho-Kalman algorithm for systems represented in generalized orthonormal bases. Automatica, 36(12):1809-1818, 2000. 1
254
+
255
+ [15] Bernard O Koopman. Hamiltonian systems and transformation in Hilbert space. Proceedings of the National Academy of Sciences of the United States of America, 17(5):315, 1931. 1
256
+
257
+ [16] Igor Mezić. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynamics, 41(1):309-325, 2005. 1
258
+
259
+ [17] Clarence W Rowley, Igor Mezic, Shervin Bagheri, Philipp Schlatter, Dans Henningson, et al. Spectral analysis of nonlinear flows. Journal of Fluid Mechanics, 641(1):115-127, 2009. 1, 3
260
+
261
+ [18] Matthew O Williams, Ioannis G Kevrekidis, and Clarence W Rowley. A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science, 25(6):1307-1346, 2015. 1, 3
262
+
263
+ [19] Bethany Lusch, J Nathan Kutz, and Steven L Brunton. Deep learning for universal linear embeddings of nonlinear dynamics. Nature communications, 9(1):1-10, 2018. 1
264
+
265
+ [20] Enoch Yeung, Soumya Kundu, and Nathan Hodas. Learning deep neural network representations for koopman operators of nonlinear dynamical systems. In 2019 American Control Conference (ACC), pages 4832-4839. IEEE, 2019. 1, 3
266
+
267
+ [21] Yunzhu Li, Hao He, Jiajun Wu, Dina Katabi, and Antonio Torralba. Learning compositional koopman operators for model-based control. arXiv preprint arXiv:1910.08264, 2019. 2, 3, 7, 8, 14
268
+
269
+ [22] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2008. 2,3
270
+
271
+ [23] Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B Tenenbaum, and Daniel LK Yamins. Flexible neural representation for physics prediction. arXiv preprint arXiv:1806.08047, 2018. 2, 3
272
+
273
+ [24] J Nathan Kutz, Steven L Brunton, Bingni W Brunton, and Joshua L Proctor. Dynamic mode decomposition: data-driven modeling of complex systems. SIAM, 2016. 3
274
+
275
+ [25] Qianxiao Li, Felix Dietrich, Erik M Bollt, and Ioannis G Kevrekidis. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator. Chaos: An Interdisciplinary Journal of Nonlinear Science, 27(10):103111, 2017. 3
276
+
277
+ [26] Hassan Arbabi and Igor Mezic. Ergodic theory, dynamic mode decomposition, and computation of spectral properties of the Koopman operator. SIAM Journal on Applied Dynamical Systems, 16(4):2096-2126, 2017. 3
278
+
279
+ [27] Bowen Huang and Umesh Vaidya. Data-driven approximation of transfer operators: Naturally structured dynamic mode decomposition. In 2018 Annual American Control Conference (ACC), pages 5659-5664. IEEE, 2018. 3
280
+
281
+ [28] Naoya Takeishi, Yoshinobu Kawahara, and Takehisa Yairi. Learning Koopman invariant subspaces for dynamic mode decomposition. In Advances in Neural Information Processing Systems, pages 1130-1140, 2017. 3
282
+
283
+ [29] Peter J Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics, 656:5-28, 2010. 3
284
+
285
+ [30] Subhrajit Sinha, Bowen Huang, and Umesh Vaidya. On robust computation of koopman operator and prediction in random dynamical systems. Journal of Nonlinear Science, pages 1-34, 2019. 3
286
+
287
+ [31] Umesh Vaidya. Observability gramian for nonlinear systems. In Decision and Control, 2007 46th IEEE Conference on, pages 3357-3362. IEEE, 2007. 3
288
+
289
+ [32] Amit Surana and Andrzej Banaszuk. Linear observer synthesis for nonlinear systems using Koopman operator framework. IFAC-PapersOnLine, 49(18):716-723, 2016. 3
290
+
291
+ [33] Steven L Brunton, Bingni W Brunton, Joshua L Proctor, and J Nathan Kutz. Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. PloS one, 11(2):e0150171, 2016. 3
292
+
293
+ [34] Bowen Huang, Xu Ma, and Umesh Vaidya. Feedback stabilization using koopman operator. In 2018 IEEE Conference on Decision and Control (CDC), pages 6434-6439. IEEE, 2018.
294
+
295
+ [35] Milan Korda and Igor Mezić. Koopman model predictive control of nonlinear dynamical systems. In The Koopman Operator in Systems and Control, pages 235-255. Springer, 2020. 3
296
+
297
+ [36] S Sinha and U Vaidya. On data-driven computation of information transfer for causal inference in discrete-time dynamical systems. Journal of Nonlinear Science, pages 1-26, 2020. 3
298
+
299
+ [37] Nibodh Boddupalli, Aqib Hasnain, Sai Pushpak Nandanoori, and Enoch Yeung. Koopman operators for generalized persistence of excitation conditions for nonlinear systems. In 2019 IEEE 58th Conference on Decision and Control (CDC), pages 8106-8111. IEEE, 2019. 3
300
+
301
+ [38] Sai Pushpak Nandanoori, Seemita Pal, Subhrajit Sinha, Soumya Kundu, Khushbu Agarwal, and Sutanay Choudhury. Data-driven distributed learning of multi-agent systems: A koopman operator approach. In 2021 60th IEEE Conference on Decision and Control (CDC), pages 5059-5066. IEEE, 2021. 3
302
+
303
+ [39] Mostafa Haghir Chehreghani. Half a decade of graph convolutional networks. Nature Machine Intelligence, 4(3):192-193, 2022. 3
304
+
305
+ [40] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 2017. 3
306
+
307
+ [41] Wenlong Liao, Birgitte Bak-Jensen, Jayakrishnan Radhakrishna Pillai, Yuelong Wang, and Yusen Wang. A review of graph neural networks and their applications in power systems. Journal of Modern Power Systems and Clean Energy, 2021. 3
308
+
309
+ [42] Andrzej Lasota and Michael C Mackey. Chaos, fractals, and noise: stochastic aspects of dynamics, volume 97. Springer Science & Business Media, 2013. 3, 4
310
+
311
+ [43] Milan Korda and Igor Mezić. Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control. Automatica, 93:149-160, 2018. 3, 4
312
+
313
+ [44] Joshua L Proctor, Steven L Brunton, and J Nathan Kutz. Dynamic mode decomposition with control. SIAM Journal on Applied Dynamical Systems, 15(1):142-161, 2016. 3, 4
314
+
315
+ [45] Subhrajit Sinha. Data-driven influence based clustering of dynamical systems. arXiv:2204.02373, accepted for publication in European Control Conference, 2022. 6, 7
316
+
317
+ [46] Subhrajit Sinha and Umesh Vaidya. Causality preserving information transfer measure for control dynamical system. In 2016 IEEE 55th Conference on Decision and Control (CDC), pages 7329-7334. IEEE, 2016. 6
318
+
319
+ [47] Yunzhu Li, Jiajun Wu, Jun-Yan Zhu, Joshua B Tenenbaum, Antonio Torralba, and Russ Tedrake. Propagation networks for model-based control under partial observation. pages 1205-1211, 2019. 7, 8
320
+
321
+ [48] Charles S Peskin. Mathematical aspects of heart physiology. Courant Inst. Math, 1975. 8
322
+
323
+ [49] Michael B Elowitz and Stanislas Leibler. A synthetic oscillatory network of transcriptional regulators. Nature, 403(6767):335-338, 2000. 8
324
+
325
+ [50] John Buck. Synchronous rhythmic flashing of fireflies. II. The Quarterly Review of Biology, 63(3):265-289, 1988. 8
326
+
327
+ [51] Kurt Wiesenfeld, Pere Colet, and Steven H Strogatz. Synchronization transitions in a disordered josephson series array. Physical review letters, 76(3):404, 1996. 8
328
+
329
+ [52] Florian Dörfler, Michael Chertkov, and Francesco Bullo. Synchronization in complex oscillator networks and smart grids. Proceedings of the National Academy of Sciences, 110(6):2005-2010, 2013. 8
330
+
331
+ [53] Benjamin Schäfer, Dirk Witthaut, Marc Timme, and Vito Latora. Dynamically induced cascading failures in power grids. Nature communications, pages 1-13, 2018. 8
332
+
333
+ [54] Joshua W Busby, Kyri Baker, Morgan D Bazilian, Alex Q Gilbert, Emily Grubert, Varun Rai, Joshua D Rhodes, Sarang Shidore, Caitlin A Smith, and Michael E Webber. Cascading risks: Understanding the 2021 winter blackout in texas. Energy Research & Social Science, 2021. 8
334
+
335
+ [55] Wei Yao, Lin Jiang, Jinyu Wen, QH Wu, and Shijie Cheng. Wide-area damping controller of facts devices for inter-area oscillations considering communication time delays. IEEE Transactions on Power Systems, 29(1):318-329, 2013. 8
336
+
337
+ [56] Joe H Chow and Kwok W Cheung. A toolbox for power system dynamics and control engineering education and research. IEEE transactions on Power Systems, 7(4):1559-1564, 1992. 8, 15
338
+
339
+ [57] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019. 8
340
+
341
+ [58] Joe H Chow. Power system coherency and model reduction, volume 84. Springer, 2013. 9
342
+
343
+ ## 6 Appendix
344
+
345
+ Notations. The vectors ${a}_{j}$ and ${e}_{j}$ respectively denote the ${j}^{\text{th}}$ column of the adjacency matrix ${Adj}$ and the ${j}^{\text{th}}$ standard basis vector in ${\mathbb{R}}^{m}$. The notation ${\left( {a}_{j}\right) }_{i}$ denotes the ${i}^{\text{th}}$ entry of the column vector ${a}_{j}$. ${I}_{m}$ denotes the identity matrix of size $m$. The Kronecker product is denoted by $\otimes$. The block diagonal matrix with blocks ${M}_{1},{M}_{2},\ldots ,{M}_{\ell }$ is denoted by blkdiag$\left( {{M}_{1},{M}_{2},\ldots ,{M}_{\ell }}\right)$.
346
+
347
+ Suppose $D = {\left\lbrack \begin{array}{llll} {D}_{1}^{\top } & {D}_{2}^{\top } & \cdots & {D}_{\ell }^{\top } \end{array}\right\rbrack }^{\top }$ where ${D}_{1},{D}_{2},\ldots ,{D}_{\ell }$ are wide rectangular matrices. Then from the definition of the Frobenius norm, we have,
348
+
349
+ $$
350
+ \parallel D{\parallel }_{F}^{2} = \mathop{\sum }\limits_{{i = 1}}^{\ell }{\begin{Vmatrix}{D}_{i}\end{Vmatrix}}_{F}^{2}. \tag{S1}
351
+ $$
352
+
353
+ If ${D}_{1}$ and ${D}_{2}$ are two matrices, then
354
+
355
+ $$
356
+ {\begin{Vmatrix}{D}_{1} - {D}_{2}\end{Vmatrix}}^{2} = {\begin{Vmatrix}{D}_{1}\end{Vmatrix}}^{2} + {\begin{Vmatrix}{D}_{2}\end{Vmatrix}}^{2} - 2\operatorname{trace}\left( {{D}_{1}^{\top }{D}_{2}}\right) \tag{S2}
357
+ $$
358
+
359
+ where trace$\left( {{D}_{1}^{\top }{D}_{2}}\right)$ denotes the Frobenius inner product of the matrices ${D}_{1}$ and ${D}_{2}$. For any matrix $D$, the Moore-Penrose inverse is denoted by ${D}^{ \dagger }$.
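+
+ As a quick numerical sanity check of the two identities above (a minimal NumPy sketch with arbitrarily chosen matrix sizes, not part of the original derivation):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+
+ # Eq. (S1): the squared Frobenius norm of a row-stacked matrix equals the sum
+ # of the squared Frobenius norms of its blocks.
+ D1, D2 = rng.standard_normal((3, 10)), rng.standard_normal((5, 10))
+ D = np.vstack([D1, D2])
+ assert np.isclose(np.linalg.norm(D, "fro") ** 2,
+                   np.linalg.norm(D1, "fro") ** 2 + np.linalg.norm(D2, "fro") ** 2)
+
+ # Eq. (S2): ||E1 - E2||_F^2 = ||E1||_F^2 + ||E2||_F^2 - 2 trace(E1^T E2),
+ # for two matrices of the same shape.
+ E1, E2 = rng.standard_normal((4, 6)), rng.standard_normal((4, 6))
+ assert np.isclose(np.linalg.norm(E1 - E2, "fro") ** 2,
+                   np.linalg.norm(E1, "fro") ** 2 + np.linalg.norm(E2, "fro") ** 2
+                   - 2 * np.trace(E1.T @ E2))
+ ```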
360
+
361
+ ### 6.1 Proof of Theorem 3
362
+
363
+ Proof. Consider the centralized learning problem described in Eq. M3 (where the notation 'M' indicates the equation is from the main manuscript). This problem is now rewritten with respect to each agent as follows.
364
+
365
+ $$
366
+ {\begin{Vmatrix}{Y}_{f} - A{Y}_{p} - BU\end{Vmatrix}}_{F}^{2} = {\begin{Vmatrix}\left\lbrack \begin{matrix} {Y}_{f,1} - {A}_{1}{Y}_{p} - {B}_{1}{U}_{1} \\ {Y}_{f,2} - {A}_{2}{Y}_{p} - {B}_{2}{U}_{2} \\ \vdots \\ {Y}_{f,{n}_{a}} - {A}_{{n}_{a}}{Y}_{p} - {B}_{{n}_{a}}{U}_{{n}_{a}} \end{matrix}\right\rbrack \end{Vmatrix}}_{F}^{2} = \mathop{\sum }\limits_{{\alpha = 1}}^{{n}_{a}}{\begin{Vmatrix}{Y}_{f,\alpha } - {A}_{\alpha }{Y}_{p} - {B}_{\alpha }{U}_{\alpha }\end{Vmatrix}}_{F}^{2}\text{ (from Eq. S1) }
367
+ $$
368
+
369
370
+
371
+ $$
372
+ = \mathop{\sum }\limits_{{\alpha = 1}}^{{n}_{a}}{\begin{Vmatrix}{Y}_{f,\alpha } - {A}_{\mathcal{N}\left( \alpha \right) }{Y}_{p,\mathcal{N}\left( \alpha \right) } - {B}_{\alpha }{U}_{\alpha } - {A}_{\overline{\mathcal{N}\left( \alpha \right) }}{Y}_{p,\overline{\mathcal{N}\left( \alpha \right) }}\end{Vmatrix}}_{F}^{2}\text{(from Remark 2)}
373
+ $$
374
+
375
+ $$
376
+ = \mathop{\sum }\limits_{{\alpha = 1}}^{{n}_{a}}{\begin{Vmatrix}{Y}_{f,\alpha } - {A}_{\mathcal{N}\left( \alpha \right) }{Y}_{p,\mathcal{N}\left( \alpha \right) } - {B}_{\alpha }{U}_{\alpha }\end{Vmatrix}}_{F}^{2} + {\begin{Vmatrix}{A}_{\overline{\mathcal{N}\left( \alpha \right) }}{Y}_{p,\overline{\mathcal{N}\left( \alpha \right) }}\end{Vmatrix}}_{F}^{2}
377
+ $$
378
+
379
+ $$
380
+ - 2\operatorname{trace}\left( {{\left( {Y}_{f,\alpha } - {A}_{\mathcal{N}\left( \alpha \right) }{Y}_{p,\mathcal{N}\left( \alpha \right) } - {B}_{\alpha }{U}_{\alpha }\right) }^{\top }{A}_{\overline{\mathcal{N}\left( \alpha \right) }}{Y}_{p,\overline{\mathcal{N}\left( \alpha \right) }}}\right) \text{ (from Eq. S2) }
381
+ $$
382
+
383
+ $$
384
+ = \mathop{\sum }\limits_{{\alpha = 1}}^{{n}_{a}}{\begin{Vmatrix}{Y}_{f,\alpha } - {A}_{\mathcal{N}\left( \alpha \right) }{Y}_{p,\mathcal{N}\left( \alpha \right) } - {B}_{\alpha }{U}_{\alpha }\end{Vmatrix}}_{F}^{2}
385
+ $$
386
+
387
+ where the last step follows by noticing that ${A}_{\overline{\mathcal{N}\left( \alpha \right) }} = 0$ since the agent $\alpha$ is not connected to the
388
+
389
+ agents in $\overline{\mathcal{N}\left( \alpha \right) }$ . In the above steps, the computation of ${Y}_{f,\alpha }$ and ${Y}_{p,\mathcal{N}\left( \alpha \right) }$ involves computing the transformation matrices ${T}_{f,\alpha },{R}_{f,\alpha },{T}_{p,\alpha },{R}_{p,\alpha }$ which are computed under the knowledge of the network topology.
390
+
391
+ Finally, we obtain,
392
+
393
+ $$
394
+ \mathop{\min }\limits_{{A, B}}{\begin{Vmatrix}{Y}_{f} - A{Y}_{p} - BU\end{Vmatrix}}_{F}^{2} = \mathop{\min }\limits_{\substack{{{A}_{\mathcal{N}\left( \alpha \right) },{B}_{\alpha }} \\ {\alpha \in \left\{ {1,2,\ldots ,{n}_{a}}\right\} } }}\mathop{\sum }\limits_{{\alpha = 1}}^{{n}_{a}}{\begin{Vmatrix}{Y}_{f,\alpha } - {A}_{\mathcal{N}\left( \alpha \right) }{Y}_{p,\mathcal{N}\left( \alpha \right) } - {B}_{\alpha }{U}_{\alpha }\end{Vmatrix}}_{F}^{2}
395
+ $$
396
+
397
+ $$
398
+ = \mathop{\sum }\limits_{{\alpha = 1}}^{{n}_{a}}\mathop{\min }\limits_{{{A}_{\mathcal{N}\left( \alpha \right) },{B}_{\alpha }}}{\begin{Vmatrix}{Y}_{f,\alpha } - {A}_{\mathcal{N}\left( \alpha \right) }{Y}_{p,\mathcal{N}\left( \alpha \right) } - {B}_{\alpha }{U}_{\alpha }\end{Vmatrix}}_{F}^{2}
399
+ $$
400
+
401
+ For every $\alpha \in \left\{ {1,2,\ldots ,{n}_{a}}\right\}$, ${A}_{\mathcal{N}\left( \alpha \right) }$ and ${B}_{\alpha }$ can now be obtained analytically as $\left\lbrack \begin{array}{ll} {A}_{\mathcal{N}\left( \alpha \right) } & {B}_{\alpha } \end{array}\right\rbrack = {Y}_{f,\alpha }{\left\lbrack \begin{array}{ll} {Y}_{p,\mathcal{N}\left( \alpha \right) } & {U}_{\alpha } \end{array}\right\rbrack }^{ \dagger }$ and the transition mapping corresponding to the agent $\alpha$ is given by
402
+
403
+ $$
404
+ {\widehat{A}}_{\alpha } = {A}_{\mathcal{N}\left( \alpha \right) }{R}_{p,\alpha },\;\text{ for }\alpha \in \left\{ {1,2,\ldots ,{n}_{a}}\right\} .
405
+ $$
406
+
407
+ Finally, the distributed Koopman operator is given by ${A}_{D} = {\left\lbrack \begin{array}{llll} {\widehat{A}}_{1}^{\top } & {\widehat{A}}_{2}^{\top } & \cdots & {\widehat{A}}_{{n}_{a}}^{\top } \end{array}\right\rbrack }^{\top }$ and the input matrix by ${B}_{D} = {blkdiag}\left( {{B}_{1},{B}_{2},\ldots ,{B}_{{n}_{a}}}\right)$. Hence the proof.
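+
+ As an illustration of the resulting sparsity (our example, not taken from the manuscript), consider a three-agent path graph $1 - 2 - 3$, so that $\mathcal{N}\left( 1\right) = \left\{ {1,2}\right\}$, $\mathcal{N}\left( 2\right) = \left\{ {1,2,3}\right\}$, and $\mathcal{N}\left( 3\right) = \left\{ {2,3}\right\}$. The distributed Koopman operator then has the block sparsity pattern
+
+ $$
+ {A}_{D} = \left\lbrack \begin{matrix} {A}_{11} & {A}_{12} & 0 \\ {A}_{21} & {A}_{22} & {A}_{23} \\ 0 & {A}_{32} & {A}_{33} \end{matrix}\right\rbrack ,
+ $$
+
+ where only the blocks associated with an agent and its neighbors are estimated, while the remaining blocks are fixed to zero.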
408
+
409
+ ![01963ec4-05f0-7e1a-b267-9a6714d3960e_13_395_255_985_711_0.jpg](images/01963ec4-05f0-7e1a-b267-9a6714d3960e_13_395_255_985_711_0.jpg)
410
+
411
+ Figure 6: Rope prediction performance under random control inputs (the solid lines indicate the actual state trajectories and the dotted line shows the corresponding predictions.)
412
+
413
+ ### 6.2 Proof of Corollary 4
414
+
415
+ Proof. The proof follows by noticing that in a fully connected network, for any agent $\alpha$, the corresponding set of non-neighbors, $\overline{\mathcal{N}\left( \alpha \right) }$, is empty.
416
+
417
+ ### 6.3 More Details on the Numerical Studies
418
+
419
+ The network topologies of the three example systems are given in Figure 7. We use the physics simulation engine provided by [21] to generate the data for the rope example, and adopt its baseline node and edge attributes for the objects and connecting edges.
420
+
421
+ For the oscillator network, the dynamics of each individual node follow a second-order differential equation. The overall dynamics are represented as:
422
+
423
+ $$
424
+ \left\lbrack \begin{matrix} \dot{\theta } \\ \ddot{\theta } \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} {0}_{{n}_{v}} & {I}_{{n}_{v}} \\ - \beta {M}^{-1}\mathcal{L} & {M}^{-1}D \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} \theta \\ \dot{\theta } \end{array}\right\rbrack , \tag{6}
425
+ $$
426
+
427
+ where $\theta \in {\mathbb{R}}^{{n}_{v}}$ and $\dot{\theta } \in {\mathbb{R}}^{{n}_{v}}$ are the angles and frequencies of the oscillators. The diagonal matrices $M$ and $D$ contain the inertia and damping of the nodes. The coupling of the nodes is captured by the Laplacian $\mathcal{L}$, with the coupling strength represented by $\beta$. ${0}_{{n}_{v}}$ and ${I}_{{n}_{v}}$ denote the zero and identity matrices of size ${n}_{v}$. For the oscillators we have created one-hot vectors for the node attributes. We have divided the nodes into low inertia ($< 3$ in appropriate units), medium inertia ($> 3$ but $< 8$), and considerably high inertia ($> 8$), thereby creating 3-dimensional node attribute vectors. The edge attributes are also one-hot vectors with 6 different types. Based
428
+
429
+ ![01963ec4-05f0-7e1a-b267-9a6714d3960e_13_895_1810_477_241_0.jpg](images/01963ec4-05f0-7e1a-b267-9a6714d3960e_13_895_1810_477_241_0.jpg)
430
+
431
+ ![01963ec4-05f0-7e1a-b267-9a6714d3960e_14_449_246_859_652_0.jpg](images/01963ec4-05f0-7e1a-b267-9a6714d3960e_14_449_246_859_652_0.jpg)
432
+
433
+ Figure 8: Oscillator prediction performance with random perturbations where the model is distributed according to the adjacency matrix of the underlying network (the solid lines indicate the actual state trajectories and the dotted line shows the corresponding predictions.)
434
+
435
+ ![01963ec4-05f0-7e1a-b267-9a6714d3960e_14_431_1066_868_675_0.jpg](images/01963ec4-05f0-7e1a-b267-9a6714d3960e_14_431_1066_868_675_0.jpg)
436
+
437
+ Figure 9: Powergrid prediction performance with information theoretic clustering and two-hop load perturbations (the solid lines indicate the actual state trajectories and the dotted line shows the corresponding predictions.)
438
+
439
+ on inertia, these types are: low-low, high-high, medium-medium, and un-directed low-medium, medium-high, and low-high.
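+
+ The following sketch makes this setup concrete; the coupling graph, parameter ranges, and the ordering of the one-hot slots are our own illustrative assumptions rather than the exact configuration used in the experiments.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+ n_v = 50
+ # Hypothetical sparse undirected coupling graph (the actual topology is not reproduced here).
+ Adj = np.triu((rng.random((n_v, n_v)) < 0.04).astype(float), 1)
+ Adj = Adj + Adj.T
+ Lap = np.diag(Adj.sum(1)) - Adj                         # graph Laplacian
+
+ M = np.diag(rng.uniform(0.5, 10.0, n_v))                # nodal inertia (illustrative range)
+ D = np.diag(rng.uniform(0.1, 15.0, n_v))                # nodal damping (illustrative range)
+ beta = 1.0
+ # System matrix of Eq. (6), written exactly as printed above.
+ A_sys = np.block([[np.zeros((n_v, n_v)), np.eye(n_v)],
+                   [-beta * np.linalg.inv(M) @ Lap, np.linalg.inv(M) @ D]])
+
+ # One-hot node attributes from inertia: low (< 3), medium (3 to 8), high (> 8).
+ node_class = np.digitize(np.diag(M), bins=[3.0, 8.0])   # 0, 1, or 2
+ node_attr = np.eye(3)[node_class]                       # shape (n_v, 3)
+
+ # One-hot edge attributes: six undirected pairings of the inertia classes
+ # (the slot ordering is our choice).
+ pair_type = {(0, 0): 0, (1, 1): 1, (2, 2): 2, (0, 1): 3, (1, 2): 4, (0, 2): 5}
+ edges = np.argwhere(np.triu(Adj, 1) > 0)
+ edge_attr = np.eye(6)[[pair_type[tuple(sorted((int(node_class[i]), int(node_class[j]))))]
+                        for i, j in edges]]              # shape (n_e, 6)
+ ```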
440
+
441
+ The IEEE benchmark 68-bus powergrid model is simulated with detailed dynamics using the power system toolbox [56]. Our prediction models are learned for the fast transient frequencies. The node attributes are designed similarly to those in the oscillator studies; however, here the nodes are physical powergrid buses, which are classified as generator buses, load buses, and buses without any loads or generators (call them none), thereby creating 3-dimensional one-hot vectors. The edges connecting buses represent powergrid transmission lines; accordingly, the edge attributes are characterized as generator-load, load-none, and generator-none connections, resulting in 3-dimensional one-hot vectors.
442
+
443
+ The network density of the systems is used to characterize sparsity; it is defined as the ratio of the number of edges to the maximum number of possible edges.
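+
+ For example, an undirected graph with ${n}_{v}$ nodes and ${n}_{e}$ edges has density ${n}_{e}/\left( {{n}_{v}\left( {{n}_{v} - 1}\right) /2}\right)$; the reported values are consistent with the 5 edges of the 6-node rope ($5/15 \approx {0.33}$), roughly 50 edges for the 50-node oscillator network ($50/1225 \approx {0.0408}$), and roughly 86 lines for the 68-bus grid ($86/2278 \approx {0.0378}$).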
444
+
445
+ We present a few examples of the prediction performance of the DKGNN model for predicting the dynamic system behaviours over time steps. Figure 6 shows the positions and velocities of rope objects 2, 4, and 6, comparing the predictions against the actual physics simulations. Figure 8 shows an example of prediction performance for the network of oscillators at nodes 12, 23, and 45. Prediction performance for the powergrid example is shown in Figure 9, with information-theoretic clustering used for the DKGNN, for buses 20, 54, and 60. These figures show satisfactory performance of the distributed geometric Koopman models for all the networked dynamic system examples.
papers/LOG/LOG 2022/LOG 2022 Conference/lwx5gi4MIh/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,270 @@
 
1
+ § LEARNING DISTRIBUTED GEOMETRIC KOOPMAN OPERATOR FOR SPARSE NETWORKED DYNAMICAL SYSTEMS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ The Koopman operator theory provides an alternative to studying nonlinear networked dynamical systems (NDS) by mapping the state space to an abstract higher dimensional space where the system evolution is linear. The recent works show the application of graph neural networks (GNNs) to learn state to object-centric embedding and achieve centralized block-wise computation of the Koopman operator(KO)under additional assumptions on the underlying node properties and constraints on the KO structure. However, the computational complexity of learning the Koopman operator increases for large NDS. Moreover, the computational complexity increases in a combinatorial fashion with the increase in number of nodes. The learning challenge is further amplified for sparse networks by two factors: 1) sample sparsity for learning the Koopman operator in the non-linear space, and 2) the dissimilarity in the dynamics of individual nodes or from one subgraph to another. Our work aims to address these challenges by formulating the representation learning of NDS into a multi-agent paradigm and learning the Koopman operator in a distributive manner. Our theoretical results show that the proposed distributed computation of the geometric Koopman operator is beneficial for sparse NDS, whereas for the fully connected systems this approach coincides with the centralized one. The empirical study on a rope system, a network of oscillators, and a power grid show comparable and superior performance along with computational benefits with the state-of-the-art methods.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ NDS represent an important class of dynamic networks where the state of the network is defined by a vector of node-level properties in a geometrical manifold, and their evolution is governed by a set of differential equations. Data-driven modeling of both spatio-temporal dependencies and evolution dynamics is essential to predict the response of the NDS to an external perturbation. Naturally, machine-learning approaches that explicitly recognize the interconnection structure of such systems or model the dynamical system-driven evolution of the network outperform initial deep learning approaches based on recurrent neural networks and their variants [1-5]. Deep learning approaches such as GNNs fit into this paradigm by learning non-linear functions for each of the encoder-system model-decoder components [6-10]. Discovering the underlying physics of dynamical systems has intrigued control theory researchers for decades, resulting in multiple subspace-based system identification works [11-14]. Koopman operator theory [15, 16] is an approach for such model discovery where the core idea is to transform the observed state-space variables to the space of square-integrable functions, where a linear operator provides an exact representation of the underlying dynamical system and the spectrum of the operator encodes all the non-linear behaviors. However, for computational purposes, finding a finite-dimensional approximation of the Koopman operator is challenging. The key to computing the finite-dimensional Koopman operator is fixing the lifting functions (observables); existing approaches such as classical or extended dynamic mode decomposition [17, 18] use an a-priori choice of basis functions for lifting, but this choice usually fails to generalize to more complex environments. Instead, learning these transformations from the system trajectories themselves using deep neural networks (DNNs) has been shown to yield much richer invariant subspaces [19, 20].
16
+
17
+ Continuing the idea of lifting the non-linear state space into another space to learn linear transition dynamics, [21] proposed the use of a graph neural network as the encoder-decoder function. While graph neural networks (GNN) [22] appears to be a natural approach for modeling the physics of networked systems, their ability to discover dynamic evolution models of large-scale networked systems is a nascent area of research $\left\lbrack {6,7,9,{23}}\right\rbrack$ . For NDS, where the number of system states increases with the number of nodes, the computational complexity of learning the Koopman operator also increases. The topology of the network or its sparsity are typically not taken advantage of in the existing studies when learning observable functions or the Koopman operator.
18
+
19
+ In this work, we address the challenge of learning dissimilar dynamics in sparse networks by formulating the representation learning of networked dynamical systems into a multi-agent paradigm. We refer to this approach as Distributed Koopman-GNN (DKGNN). DKGNN is more suitable for sparse and large networked dynamical systems as the proposed distributed learning method yields superior computational efficiency compared to traditional methods. We applied the GNNs to capture the distributed nature of the dynamical system behavior, transform the original state-space into the Koopman observable space, and subsequently use the network sparsity patterns to constrain the Koopman operator construction into a block-structured distributed representation along with theoretical guarantees. Information-theoretic network clustering strategies were utilized for specific dynamic systems to capture the joint evolution of the clusters in a coarse-grained fashion resulting in further computational benefits. Please see Figure 1 for an illustration of the approach.
20
+
21
+ < g r a p h i c s >
22
+
23
+ Figure 1: Overview of the proposed approach (best viewed with colors) : (Top row) the sparse networked dynamical system (NDS) is partitioned into clusters using dynamic spatio-temporal data resulting into an agent representation (each color represents an agent). The time-series associated with each node is also color coded by the agents. (Bottom row) the re-arranged multi-dimensional spatio-temporal data is fed to the GNN along with the agent network structure to learn the nonlinear observables. Learning the distributed geometric Koopman operator by exploiting the sparsity of the multi agent system is shown in lower right, with colors in the Koopman matrix capturing the distributed connection in the agent topology (white blocks correspond to no edges between the agents and hence they are all zeros).
24
+
25
+ Contributions. The main contributions of this paper are summarized as follows:
26
+
27
+ * We develop methods for learning distributed Koopman operator for large-scale networks, using system topology and network sparsity properties for NDS. We present a system theoretic learning approach that can exploit the network connectivity structure via GNNs.
28
+
29
+ * We introduce information theoretic-based clustering strategies for sparse NDS to learn a coarsened structure and model the system using a hierarchical multi-agent paradigm.
30
+
31
+ * We present theoretical results on bounding the performance of the distributed geometric Koop-man operator with respect to its centralized counterpart.
32
+
33
+ * We demonstrate that DKGNN yields two benefits. It improves the scalability of learning, and for sparse NDS with divergent dynamics across different parts of the network, it outperforms prediction performance of centralized approaches.
34
+
35
+ § 1.1 RELATED WORK
36
+
37
+ Koopman operator theory The infinite-dimensional Koopman operator is computationally intractable. Several methods for identifying approximations of the infinite-dimensional Koopman operator on a finite-dimensional space have recently been developed. Most notable works include dynamic mode decomposition (DMD) [17, 24], extended DMD (EDMD) [18, 25], Hankel DMD [26], naturally structured DMD (NS-DMD) [27] and deep learning based DMD (deepDMD) [20, 28]. These methods are data-driven and one or more of these methods have been successfully applied for system identification [17, 29] including system identification from noisy data [30], data-driven observability/controllability gramians for nonlinear systems [31, 32], control design [33-35], data-driven causal inference in dynamical systems [36] and to identify persistence of excitation conditions for nonlinear systems [37]. [38] discusses distributed design of Koopman without control and using dictionary lifting functions.
38
+
39
+ Graph Neural Networks GNNs [22, 39] have found widespread use into every application involving non-Euclidean data [40]. Extending GNNs to model physics-driven processes gives rise to a new class of physics-inspired neural networks (PINN) [8-10]. A common theme is to model many-body interactions via a nearest-neighbor graph and then model the evolution of that graph $\left\lbrack {6,7,9}\right\rbrack$ . However, addressing issues around compositionality $\left\lbrack {7,{23}}\right\rbrack$ and scalability becomes important as the foundation for PI(G)NN matures and we seek to model larger, multi-scale spatio-temporal interactions. Moreover, applications such as molecular biology [20] and power grid [41] motivate the modeling of NDS where the graph structure is distinct from k-nearest neighbor graphs, with sparsity and connectivity that resemble small-world networks. Recent works such as [21] provides a bridge that seeks to integrate GNNs and Koopman operators to improve generalization ability and result in simpler linear transition dynamics. However, their approach for learning Koopman state transitions and GNN embedding results in performance and scalability bottlenecks when system size increases.
40
+
41
+ § 2 METHODOLOGY
42
+
43
+ § 2.1 NETWORKED DYNAMICAL SYSTEMS AND THE KOOPMAN OPERATOR
44
+
45
+ Problem Statement: Consider a networked dynamical system (NDS) evolving over a network, $\mathcal{G} = \left( {\mathcal{V},\mathcal{E}}\right)$ . Let the number of nodes and edges be ${n}_{v}$ and ${n}_{e}$ respectively and the governing equation for the NDS on $\mathcal{G}$ is given by,
46
+
47
+ $$
48
+ {x}_{t + 1} = F\left( {x}_{t}\right) , \tag{1}
49
+ $$
50
+
51
+ where ${x}_{t} \in \mathcal{M} \subseteq {\mathbb{R}}^{n}$ is the concatenated system state at time $t$ and $F : \mathcal{M} \rightarrow \mathcal{M}$ is the discrete-time nonlinear transition mapping. Our goal is to learn the system dynamics as expressed in equation (1) in a distributed approach combining the Koopman operator theory, graph neural networks and by leveraging on network sparsity properties.
52
+
53
+ Exploring the network structure, the ${n}_{v}$ nodes can be grouped to form ${n}_{a}$ agents, where ${n}_{a} \leq {n}_{v}$, resulting in a network denoted by ${\mathcal{G}}_{a} = \left( {{\mathcal{V}}_{a},{\mathcal{E}}_{a}}\right)$, with the state at any time $t$ partitioned as ${x}_{t} = {\left\lbrack {x}_{t,1}^{\top },\ldots ,{x}_{t,{n}_{a}}^{\top }\right\rbrack }^{\top }$ where, for every $\alpha \in \left\{ {1,\ldots ,{n}_{a}}\right\}$, the states ${x}_{t,\alpha }$ belong to agent $\alpha$. For completeness, we mention that the number of nodes in ${\mathcal{V}}_{a}$ is equal to ${n}_{a}$. The motivation behind exploring the network structure is to develop models that possess certain advantages when compared to the centrally learned models. A method to identify ${\mathcal{G}}_{a}$ from $\mathcal{G}$ for practical dynamical systems is discussed later in the paper. Associated with the system (1) is a linear operator, namely the Koopman operator $\mathbb{U}$ [42], which is defined as follows.
54
+
55
+ Definition 1 (Koopman Operator (KO) [42]). Given any $h \in {L}^{2}\left( \mathcal{M}\right)$ , the Koopman operator $\mathbb{U} : {L}^{2}\left( \mathcal{M}\right) \rightarrow {L}^{2}\left( \mathcal{M}\right)$ for the system (1) is defined as $\left\lbrack {\mathbb{U}h}\right\rbrack \left( x\right) = h\left( {F\left( x\right) }\right)$ , where ${L}^{2}\left( \mathcal{M}\right)$ is the space of square integrable functions on $\mathcal{M}$ .
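+
+ As a simple illustration (not one of the systems studied later), take the scalar map $F\left( x\right) = {\lambda x}$ and the observable $h\left( x\right) = {x}^{2}$. Then
+
+ $$
+ \left\lbrack {\mathbb{U}h}\right\rbrack \left( x\right) = h\left( {F\left( x\right) }\right) = {\left( \lambda x\right) }^{2} = {\lambda }^{2}h\left( x\right) ,
+ $$
+
+ so $h$ is an eigenfunction of $\mathbb{U}$ with eigenvalue ${\lambda }^{2}$; the operator always acts linearly on observables, even when the underlying map $F$ is nonlinear.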
56
+
57
+ Originally developed for autonomous systems, recently Koopman framework has been extended to systems with control $\left\lbrack {{43},{44}}\right\rbrack$ . In this paper we consider a controlled dynamical system of the form:
58
+
59
+ $$
60
+ {x}_{t + 1} = F\left( {x}_{t}\right) + G\left( {x}_{t}\right) {u}_{t}, \tag{2}
61
+ $$
62
+
63
+ where $G : \mathcal{M} \rightarrow {\mathbb{R}}^{n \times q}$ is the input vector field and ${u}_{t} \in {\mathbb{R}}^{q}$ denote the control input to the system at time $t$ . The Koopman operator associated with (2) is defined on an extended state-space obtained as the product of the original state-space and the space of all control sequences, resulting in a control-affine dynamical system on the extended state-space $\left\lbrack {{43},{44}}\right\rbrack$ . In general, the Koopman operator is an infinite-dimensional operator, but for computation purposes, a finite-dimensional approximation of the operator is constructed from the obtained time-series data as discussed below.
64
+
65
+ Consider the time-series data from a networked dynamical system as $X = \left\lbrack \begin{array}{llll} {x}_{1} & {x}_{2} & \ldots & {x}_{k} \end{array}\right\rbrack \in$ ${\mathbb{R}}^{n \times k}$ , and the corresponding control inputs $U = \left\lbrack \begin{array}{llll} {u}_{1} & {u}_{2} & \ldots & {u}_{k} \end{array}\right\rbrack \in {\mathbb{R}}^{q \times k}$ . Define one time-step separated datasets, ${X}_{p}$ and ${X}_{f}$ from $X$ as ${X}_{p} = \left\lbrack {{x}_{1},{x}_{2},\ldots ,{x}_{k - 1}}\right\rbrack ,{X}_{f} = \left\lbrack {{x}_{2},{x}_{3},\ldots ,{x}_{k}}\right\rbrack$ and let $\mathcal{S} = \left\{ {{\Psi }_{1},\ldots ,{\Psi }_{m}}\right\}$ be the choice of non-linear functions or observables where ${\Psi }_{i} \in {L}^{2}\left( {{\mathbb{R}}^{n},\mathcal{B},\mu }\right)$ (where $\mathcal{B}$ is the Borel $\sigma$ algebra and $\mu$ denote the measure [42]) and ${\Psi }_{i} : {\mathbb{R}}^{n} \rightarrow \mathbb{C}$ . Define a vector valued observable function $\Psi : {\mathbb{R}}^{n} \rightarrow {\mathbb{C}}^{m}$ as, $\Psi \left( x\right) \mathrel{\text{ := }} {\left\lbrack \begin{array}{llll} {\Psi }_{1}\left( x\right) & {\Psi }_{2}\left( x\right) & \cdots & {\Psi }_{m}\left( x\right) \end{array}\right\rbrack }^{\top }$ . Then the following optimization problem which minimizes the least-squares cost yields the Koopman operator and the input matrix.
66
+
67
+ $$
68
+ \mathop{\min }\limits_{{A,B}}{\begin{Vmatrix}{Y}_{f} - A{Y}_{p} - BU\end{Vmatrix}}_{F}^{2} \tag{3}
69
+ $$
70
+
71
+ where ${Y}_{p} = \Psi \left( {X}_{p}\right) = \left\lbrack {\Psi \left( {x}_{1}\right) ,\cdots ,\Psi \left( {x}_{k - 1}\right) }\right\rbrack ,{Y}_{f} = \Psi \left( {X}_{f}\right) = \left\lbrack {\Psi \left( {x}_{2}\right) ,\cdots ,\Psi \left( {x}_{k}\right) }\right\rbrack ,A \in {\mathbb{R}}^{m \times m}$ is the finite dimensional approximation of the Koopman operator defined on the space of observables and the matrix $B \in {\mathbb{R}}^{m \times q}$ is the input matrix. The optimization problem (3) can be solved analytically and the approximate Koopman operator and the input matrix are given by $\left\lbrack \begin{array}{ll} A & B \end{array}\right\rbrack = {Y}_{f}{\left\lbrack \begin{array}{ll} {Y}_{p} & U \end{array}\right\rbrack }^{ \dagger }$ [43], where ${\left( \cdot \right) }^{ \dagger }$ is the Moore-Penrose pseudo-inverse of a matrix. Identifying the observable functions such that $\mathcal{S}$ is invariant under the action of the Koopman operator is challenging. In this work, graph neural network-based mappings are used to construct the non-linear observable functions that satisfy the invariance by simultaneously learning the observables and the Koopman operator.
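+
+ A minimal NumPy sketch of this pseudo-inverse computation is given below for a hand-picked dictionary $\Psi$; the toy system and dictionary are our own illustrative assumptions (in this paper the observables are instead learned with a GNN).
+
+ ```python
+ import numpy as np
+
+ def koopman_with_control(X, U, psi):
+     """Solve Eq. (3): given states X (n x k), inputs U (q x k) and a lifting map
+     psi from an (n,) state to an (m,) observable vector, return A and B."""
+     Yp = np.column_stack([psi(x) for x in X[:, :-1].T])   # m x (k-1)
+     Yf = np.column_stack([psi(x) for x in X[:, 1:].T])    # m x (k-1)
+     AB = Yf @ np.linalg.pinv(np.vstack([Yp, U[:, :-1]]))  # [A B] = Yf [Yp; U]^+
+     m = Yp.shape[0]
+     return AB[:, :m], AB[:, m:]
+
+ # Toy controlled system whose lifted dynamics are exactly linear in the
+ # dictionary (x1, x2, x1^2), so the recovered (A, B) reproduce it.
+ rng = np.random.default_rng(0)
+ k, U = 200, rng.standard_normal((1, 200))
+ X = np.zeros((2, k)); X[:, 0] = [1.0, 0.5]
+ for t in range(k - 1):
+     x1, x2 = X[:, t]
+     X[:, t + 1] = [0.9 * x1, 0.5 * x2 + 0.2 * x1 ** 2 + 0.1 * U[0, t]]
+
+ A, B = koopman_with_control(X, U, psi=lambda x: np.array([x[0], x[1], x[0] ** 2]))
+ ```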
72
+
73
+ § 2.2 GRAPH NEURAL NETWORK BASED KOOPMAN OBSERVABLES
74
+
75
+ Consider the network $\mathcal{G}$ with ${n}_{v}$ nodes where the time-series data at each node is supplemented with the node attribute capturing the nature of the node, denoted by the vector ${x}_{{v}_{i}}$ where $i = \left\{ {1,2,\ldots ,{n}_{v}}\right\}$ . For instance, we can characterize the generators in a electric power grid network with their inertia values. Similarly, the designer can embed knowledge about the interaction between the agents using edge attributes, denoted as ${x}_{{e}_{ij}}$ for the edge connecting nodes $i$ and $j$ . We consider a graph neural network embedding to transition from the actual state-space to the lifted state-space using multiple compositional neural operations. At the ${t}^{th}$ time-step, the node, and edge attributes are combined along with the state vectors of the agents which are compactly written as,
76
+
77
+ $$
78
+ {x}_{t,i}^{k} = {f}_{v}^{k}\left( {{x}_{t,i}^{k - 1},\mathop{\sum }\limits_{{j \in \mathcal{N}\left( i\right) }}{f}_{e}^{k}\left( {{x}_{t,i}^{k - 1},{x}_{{v}_{i}}^{k - 1},{x}_{t,j}^{k - 1},{x}_{{v}_{j}}^{k - 1},{x}_{{e}_{ij}}^{k - 1}}\right) }\right) \tag{4}
79
+ $$
80
+
81
+ where the superscript $k$ denotes the ${k}^{th}$ layer of the GNN, and functions ${f}_{e}\left( \cdot \right)$ , and ${f}_{v}\left( \cdot \right)$ are edge and node-level aggregation functions in a GNN architecture. We use $\phi \left( \cdot \right)$ to denote the multi-layer GNN operation in a compact form.
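+
+ A minimal PyTorch sketch of one such layer is shown below; the MLP widths, activations, and interface are illustrative assumptions and do not correspond to the released architecture.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class MessagePassingLayer(nn.Module):
+     """One layer in the spirit of Eq. (4): an edge function f_e builds messages
+     from the two endpoint states/attributes and the edge attribute, and a node
+     function f_v combines the summed messages with the node's own state."""
+     def __init__(self, d_state, d_node_attr, d_edge_attr, d_hidden):
+         super().__init__()
+         self.f_e = nn.Sequential(
+             nn.Linear(2 * (d_state + d_node_attr) + d_edge_attr, d_hidden),
+             nn.ReLU(), nn.Linear(d_hidden, d_hidden))
+         self.f_v = nn.Sequential(
+             nn.Linear(d_state + d_hidden, d_hidden),
+             nn.ReLU(), nn.Linear(d_hidden, d_state))
+
+     def forward(self, x, node_attr, edge_index, edge_attr):
+         # x: (n_v, d_state), node_attr: (n_v, d_node_attr),
+         # edge_index: (2, n_e) holding (source j, target i), edge_attr: (n_e, d_edge_attr)
+         src, dst = edge_index
+         msg = self.f_e(torch.cat(
+             [x[dst], node_attr[dst], x[src], node_attr[src], edge_attr], dim=-1))
+         agg = torch.zeros(x.size(0), msg.size(1), dtype=x.dtype, device=x.device)
+         agg.index_add_(0, dst, msg)        # sum the messages over the neighbors of each node
+         return self.f_v(torch.cat([x, agg], dim=-1))
+ ```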
82
+
83
+ § 3 DISTRIBUTED GEOMETRIC KOOPMAN OPERATOR WITH CONTROL INPUTS
84
+
85
+ This section formally presents the computation of distributed geometric Koopman operator with control. The (centralized) Koopman operator with control input for the system (2) is obtained by solving (3). For the ${n}_{a}$ agent NDS, the resultant KO can be represented as ${n}_{a}^{2}$ block matrices:
+
+ $$
+ A = \left\lbrack \begin{matrix} {A}_{1} \\ {A}_{2} \\ \vdots \\ {A}_{{n}_{a}} \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} {A}_{11} & {A}_{12} & \cdots & {A}_{1{n}_{a}} \\ {A}_{21} & {A}_{22} & \cdots & {A}_{2{n}_{a}} \\ \vdots & \vdots & \ddots & \vdots \\ {A}_{{n}_{a}1} & {A}_{{n}_{a}2} & \cdots & {A}_{{n}_{a}{n}_{a}} \end{matrix}\right\rbrack \tag{5}
+ $$
98
+
99
+ The dynamics of the ${\alpha }^{th}$ agent have the dimension, ${m}_{\alpha }$ , such that, $\mathop{\sum }\limits_{{\alpha = 1}}^{{n}_{a}}{m}_{\alpha } = m$ . It now follows that the block matrix ${A}_{\alpha \beta } \in {\mathbb{R}}^{{m}_{\alpha } \times {m}_{\beta }}$ denotes the transition of agent $\alpha$ with respect to $\beta$ and the transition mapping for agent $\alpha$ is given by ${A}_{\alpha }$ . Similarly, the control input matrix is partitioned as $B =$ blkdiag $\left( {{B}_{1},{B}_{2},\ldots ,{B}_{{n}_{a}}}\right)$ , where the matrix ${B}_{\alpha }$ corresponds to input matrix of agent $\alpha \in \left\{ {1,2,\ldots ,{n}_{a}}\right\}$ . The objective of the distributed learning is to compute these block matrices in a distributed manner and form the geometric Koopman operator and the control input matrix for the complete NDS as opposed to directly solving the centralized optimization problem in Eq. (3) without sacrificing the performance with the distributed method. There are two major advantages to this approach. Firstly, if there is change in the local agent behavior, one can simply update the transition mapping corresponding to that agent and the agents dependent on it to learn the full system evolution. Secondly, computational advantages can be obtained by incorporating parallel learning of each agent transition mapping and this approach is more appropriate for the sparse networks.
100
+
101
+ By exploiting the topology of the network, the KO and the control input matrices are computed in a distributed manner. As a consequence, if agent $i$ is not a neighbor of agent $j$, that is, the dynamics of agent $i$ is not affected by the dynamics of agent $j$, we make ${A}_{ij} = 0$. Therefore, for every $\alpha \in \left\{ {1,2,\cdots ,{n}_{a}}\right\}$, let ${\widehat{A}}_{\alpha }$ be the transition mapping corresponding to the agent $\alpha$; then the distributed Koopman is given by ${A}_{D} = {\left\lbrack \begin{array}{llll} {\widehat{A}}_{1}^{\top } & {\widehat{A}}_{2}^{\top } & \cdots & {\widehat{A}}_{{n}_{a}}^{\top } \end{array}\right\rbrack }^{\top }$. For a sparse network, the distributed Koopman will be a sparse matrix irrespective of the centralized Koopman being either sparse or full (Figure 1). Let ${X}_{p}$ and ${X}_{f}$ be the one time-step separated time-series data on the state space, and let $\phi$ be the GNN embedding that maps the state space data into an embedded space. Then the time-series data on the embedded space for every agent can be expressed in terms of the neighbor and non-neighbor agents.
102
+
103
+ Remark 2. The one time-step forwarded time-series data corresponding to agent $\alpha$ is given by ${A}_{\alpha }{Y}_{p} = \left\lbrack \begin{array}{ll} {A}_{\mathcal{N}\left( \alpha \right) } & {A}_{\overline{\mathcal{N}\left( \alpha \right) }} \end{array}\right\rbrack \left\lbrack \begin{array}{l} {Y}_{p,\mathcal{N}\left( \alpha \right) } \\ {Y}_{p,\overline{\mathcal{N}\left( \alpha \right) }} \end{array}\right\rbrack$ , where $\mathcal{N}\left( \alpha \right)$ is the set of agents containing the neighbors of agent $\alpha$ and itself, $\overline{\mathcal{N}\left( \alpha \right) }$ is the set of agents who are non-neighbors of agent $\alpha$ and the (rectangular) matrices, ${A}_{\mathcal{N}\left( \alpha \right) }$ and ${A}_{\overline{\mathcal{N}\left( \alpha \right) }}$ are the transition mappings associated with the agent $\alpha$ .
104
+
105
+ Let ${R}_{p,\alpha },{R}_{f,\alpha }$, and ${R}_{u,\alpha }$ be transformation matrices defined in such a way that they remove the zero rows of a matrix $D$ when pre-multiplied with $D$. If the matrix $D$ has no zero rows, then the transformation matrices are the identity.
106
+
107
+ Theorem 3. The centralized Koopman(A, B)learning problem described in Eq. (3) can be expressed as a distributed Koopman $\left( {{A}_{D},{B}_{D}}\right)$ learning problem such that there exists matrices, ${\widehat{A}}_{1},{\widehat{A}}_{2},\ldots ,{\widehat{A}}_{{n}_{a}},{\widehat{B}}_{1},{\widehat{B}}_{2},\ldots ,{\widehat{B}}_{{n}_{a}}$ and the distributed Koopman operator is given by ${A}_{D} = {\left\lbrack \begin{array}{llll} {\widehat{A}}_{1}^{\top } & {\widehat{A}}_{2}^{\top } & \cdots & {\widehat{A}}_{{n}_{a}}^{\top } \end{array}\right\rbrack }^{\top }$ , input matrix is ${B}_{D} =$ blkdiag $\left( {{\widehat{B}}_{1},{\widehat{B}}_{2},\ldots ,{\widehat{B}}_{{n}_{a}}}\right)$ where for $\alpha \in \left\{ {1,2,\ldots ,{n}_{a}}\right\} ,{\widehat{A}}_{\alpha } = {A}_{\mathcal{N}\left( \alpha \right) }{R}_{p,\alpha },{\widehat{B}}_{\alpha } = {B}_{\alpha }{R}_{u,\alpha }$ and ${A}_{\mathcal{N}\left( \alpha \right) },{B}_{\alpha }$ are obtained as a solution to the optimization problem $\mathop{\min }\limits_{{{A}_{\mathcal{N}\left( \alpha \right) },{B}_{\alpha }}}{\begin{Vmatrix}{Y}_{f,\alpha } - {A}_{\mathcal{N}\left( \alpha \right) }{Y}_{p,\mathcal{N}\left( \alpha \right) } - {B}_{\alpha }{U}_{\alpha }\end{Vmatrix}}_{F}^{2}$ .
108
+
109
+ From Theorem 3, with ${g}_{t} = \phi \left( {x}_{t}\right) ,\phi$ being the GNN encoder, the distributed geometric Koopman operator system with control input is given by ${g}_{t + 1} = {A}_{D}{g}_{t} + {B}_{D}{u}_{t}$ .
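+
+ In other words, once ${A}_{D}$, ${B}_{D}$, and the GNN encoder/decoder are available, multi-step prediction is a linear rollout in the lifted space; a small sketch (function names and interfaces are ours, with encode/decode standing in for the GNN encoder and the decoder introduced in Section 3.1):
+
+ ```python
+ import numpy as np
+
+ def rollout(x0, A_D, B_D, U, encode, decode):
+     """Predict a trajectory by rolling the lifted linear model forward in time."""
+     g, preds = encode(x0), []
+     for u in U.T:                      # U holds one column of inputs per time step
+         g = A_D @ g + B_D @ u
+         preds.append(decode(g))
+     return np.stack(preds)
+ ```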
110
+
111
+ Corollary 4. The distributed learning problem and the centralized learning problem yield the same Koopman operator for a fully connected network.
112
+
113
+ The proofs for Theorem 3 and Corollary 4 are included in the appendix.
114
+
115
+ § 3.1 TRAINING DISTRIBUTED GEOMETRIC KOOPMAN MODEL
116
+
117
+ The state space data is mapped to the GNN-embedded space using the GNN encoder $\phi$. To retrieve the actual state space data from the GNN-embedded space, we use a decoding GNN operator such that ${\widehat{x}}_{t} = \varphi \left( {g}_{t}\right)$. The decoder $\varphi \left( \cdot \right)$ follows a similar GNN architecture to the encoder; however, it maps from the
118
+
119
+ < g r a p h i c s >
120
+
121
+ Figure 2: Distributed geometric Koopman architecture
122
+
123
+ lifted Koopman space to the original state space. Looking into the agent-wise architectural detail, both encoder and decoder functions can be represented for the ${i}^{th}$ agent as ${\phi }_{i}\left( \cdot \right) ,{\varphi }_{i}\left( \cdot \right)$ as shown in Figure 2 with the understanding that all the GNN functionalities take the neighbouring agent states and attributes as additional inputs. This facilitates the computation of Koopman matrices in a distributed manner. This architecture leads us to compute an auto-encoding loss, and a prediction loss over the time-steps $t = 1,2,\ldots ,k - 1$ , and are given as follows:
124
+
125
+ $$
126
+ {\mathcal{L}}_{ae} = \frac{1}{k}\mathop{\sum }\limits_{{t = 1}}^{{k - 1}}\mathop{\sum }\limits_{{i = 1}}^{{n}_{a}}{\begin{Vmatrix}{\varphi }_{i}\left( {{\phi }_{i}\left( {x}_{t,i}\right) }\right) - {x}_{t,i}\end{Vmatrix}},\;{\mathcal{L}}_{p} = \frac{1}{k}\mathop{\sum }\limits_{{t = 1}}^{{k - 1}}\mathop{\sum }\limits_{{i = 1}}^{{n}_{a}}{\begin{Vmatrix}{\varphi }_{i}\left( {g}_{t + 1,i}\right) - {x}_{t + 1,i}\end{Vmatrix}},
127
+ $$
128
+
129
+ with total loss of $\mathcal{L} = {\mathcal{L}}_{ae} + {\mathcal{L}}_{p}$ . The algorithm will consist of two main update steps sequentially, one to update the Koopman and the control input matrix in a distributed manner for a fixed set of GNN encoder and decoder parameters, and another to update GNN weights with a learned distributed geometric Koopman representation. Algorithm 1 shows the computational steps where the function DistributedKoopmanMatrices $\left( \cdot \right)$ presents the distributed Koopman state and input matrices, ${A}_{D},{B}_{D}$ . Thereafter the Main(.) function runs the update of the distributed Koopman matrices and the GNN parameters sequentially for each epoch as shown in steps 15 and 16. For simplicity of representation in the algorithm, we use the compact notations $\phi \left( \cdot \right)$ and $\varphi \left( \cdot \right)$ instead of agent-wise representation as in Figure 2.
130
+
131
+ Algorithm 1 Distributed Geometric Koopman Operator with Control Computation
132
+
133
+ 1: function DISTRIBUTEDKOOPMANMATRICES $\left( {{X}_{p},{X}_{f},U,\phi }\right)$
134
+
135
+ Map the time-series data to the GNN-embedded space using $\phi$ as follows:
136
+
137
+ $$
138
+ {Y}_{p} = \phi \left( {X}_{p}\right) = \left\lbrack {\phi \left( {x}_{1}\right) ,\phi \left( {x}_{2}\right) ,\cdots ,\phi \left( {x}_{k - 1}\right) }\right\rbrack ,\;{Y}_{f} = \phi \left( {X}_{f}\right) = \left\lbrack {\phi \left( {x}_{2}\right) ,\phi \left( {x}_{3}\right) ,\cdots ,\phi \left( {x}_{k}\right) }\right\rbrack
139
+ $$
140
+
141
+ for $\alpha = 1,2,\ldots ,{n}_{a}$ do
142
+
143
+ Define the transformation matrices for agent $\alpha$ as:
144
+
145
+ $$
146
+ {T}_{p,\alpha } \mathrel{\text{ := }} {blkdiag}\left( {a{e}_{1},\ldots ,a{e}_{{n}_{a}}}\right) ,{T}_{f,\alpha } \mathrel{\text{ := }} {blkdiag}\left( {e{e}_{1},\ldots ,e{e}_{{n}_{a}}}\right) ,
147
+ $$
148
+
149
+ ${T}_{u,\alpha } \mathrel{\text{ := }} {blkdiag}\left( {e{u}_{1},\ldots ,e{u}_{{n}_{a}}}\right)$ , where
150
+
151
+ $a{e}_{i} = {\left( {a}_{\alpha } + {e}_{\alpha }\right) }_{i} \otimes {I}_{{m}_{i}},e{e}_{i} = {\left( {e}_{\alpha }\right) }_{i} \otimes {I}_{{m}_{i}},e{u}_{i} = {\left( {e}_{\alpha }\right) }_{i} \otimes {I}_{{q}_{i}},$
152
+
153
+ where $\otimes$ is the Kronecker product.
154
+
155
+ Compute ${Y}_{f,\alpha },{Y}_{p,\mathcal{N}\left( \alpha \right) },{U}_{\alpha }$ associated with agent $\alpha$ as
156
+
157
+ ${Y}_{f,\alpha } = {R}_{f,\alpha }{T}_{f,\alpha }{Y}_{f},{Y}_{p,\mathcal{N}\left( \alpha \right) } = {R}_{p,\alpha }{T}_{p,\alpha }{Y}_{p},{U}_{\alpha } = {R}_{u,\alpha }{T}_{u,\alpha }U$
158
+
159
+ Solve the optimization problem: $\mathop{\min }\limits_{{{A}_{\mathcal{N}\left( \alpha \right) },{B}_{\alpha }}}{\begin{Vmatrix}{Y}_{f,\alpha } - {A}_{\mathcal{N}\left( \alpha \right) }{Y}_{p,\mathcal{N}\left( \alpha \right) } - {B}_{\alpha }{U}_{\alpha }\end{Vmatrix}}_{F}^{2}$
160
+
161
+ Compute ${\widehat{A}}_{\alpha } = {A}_{\mathcal{N}\left( \alpha \right) }{R}_{p,\alpha }$ and ${\widehat{B}}_{\alpha } = {B}_{\alpha }{R}_{u,\alpha }$
162
+
163
+ end for
164
+
165
+ return: ${A}_{D} = {\left\lbrack \begin{array}{llll} {\widehat{A}}_{1}^{\top } & {\widehat{A}}_{2}^{\top } & \cdots & {\widehat{A}}_{n}^{\top } \end{array}\right\rbrack }^{\top },{B}_{D} =$ blkdiag $\left( {{\widehat{B}}_{1},{\widehat{B}}_{2},\ldots ,{\widehat{B}}_{n}}\right)$ .
166
+
167
+ end function
168
+
169
+ function MAIN( )
170
+
171
+ Given state $\left( {{X}_{p},{X}_{f}}\right)$ and input(U)time-series data from a ${N}_{a}$ agent network
172
+
173
+ Initialize the GNN-based encoder $\left( \phi \right)$ and decoder $\left( \varphi \right)$ network
174
+
175
+ for epochs $= 1,2,\ldots ,{N}_{\text{ epoch }}$ do
176
+
177
+ Koopman Update: Run $\left( {{A}_{D},{B}_{D}}\right) =$ DistributedKoopmanMatrices $\left( {{X}_{p},{X}_{f},U,\phi }\right)$
178
+
179
+ GNN Update: Compute , $\mathcal{L} = {\mathcal{L}}_{p} + {\mathcal{L}}_{ae}$ , and backpropagate $\mathcal{L}$ to update $\phi ,\varphi$ parameters.
180
+
181
+ end for
182
+
183
+ return: Updated ${A}_{D},{B}_{D},\phi$ , and, $\varphi$ .
184
+
185
+ end function
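+
+ A compact NumPy sketch of the per-agent solves in Algorithm 1 is given below; the selection matrices ${T}_{ \cdot ,\alpha }$ and ${R}_{ \cdot ,\alpha }$ are realized by row indexing, and the interface (agent dimensions and agent adjacency passed in explicitly) is an illustrative assumption rather than the released implementation.
+
+ ```python
+ import numpy as np
+
+ def distributed_koopman_matrices(Yp, Yf, U, m_dims, q_dims, Adj):
+     """Per-agent least squares (sketch of Algorithm 1). Yp, Yf: lifted snapshots
+     (m x (k-1)); U: inputs (q x (k-1)); m_dims[a], q_dims[a]: lifted-state and
+     input dimensions of agent a; Adj: symmetric agent adjacency matrix."""
+     n_a = len(m_dims)
+     r_ofs = np.concatenate([[0], np.cumsum(m_dims)])   # row offsets of the agent blocks
+     u_ofs = np.concatenate([[0], np.cumsum(q_dims)])
+     m = int(r_ofs[-1])
+     A_rows, B_blocks = [], []
+     for a in range(n_a):
+         nbrs = [b for b in range(n_a) if b == a or Adj[a, b]]        # N(a)
+         rows_nb = np.concatenate([np.arange(r_ofs[b], r_ofs[b + 1]) for b in nbrs])
+         rows_a = np.arange(r_ofs[a], r_ofs[a + 1])
+         U_a = U[u_ofs[a]:u_ofs[a + 1]]
+         # [A_N(a)  B_a] = Y_{f,a} [Y_{p,N(a)}; U_a]^+
+         AB = Yf[rows_a] @ np.linalg.pinv(np.vstack([Yp[rows_nb], U_a]))
+         A_hat = np.zeros((len(rows_a), m))
+         A_hat[:, rows_nb] = AB[:, :len(rows_nb)]        # zero columns for non-neighbors
+         A_rows.append(A_hat)
+         B_blocks.append(AB[:, len(rows_nb):])
+     A_D = np.vstack(A_rows)
+     B_D = np.zeros((m, int(u_ofs[-1])))
+     for a, B_a in enumerate(B_blocks):                  # block-diagonal input matrix
+         B_D[r_ofs[a]:r_ofs[a + 1], u_ofs[a]:u_ofs[a + 1]] = B_a
+     return A_D, B_D
+ ```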
186
+
187
+ § 3.2 MULTI-AGENT NETWORK CONSTRUCTION VIA INFORMATION TRANSFER-BASED CLUSTERING
188
+
189
+ Mapping of nodes in an NDS to nodes in an agent network is a core aspect of our proposed method. We use an information-theoretic clustering method [45] that exploits both the adjacency matrix structure as well as dynamical properties of the network for this task. For a dynamical system, the definition of information transfer [46] from a dynamical state ${x}_{t,i}$ to another state ${x}_{t,j}$ is based on the intuition that the total entropy of a dynamical state ${x}_{t,j}$ is equal to the sum of the entropy of ${x}_{t,j}$ when another state ${x}_{t,i}$ is not present in the dynamics and the amount of entropy transferred from ${x}_{t,i}$ to ${x}_{t,j}$. In particular, for a discrete-time dynamical system ${x}_{t + 1} = F\left( {x}_{t}\right)$, where ${x}_{t} = {\left\lbrack \begin{array}{ll} {x}_{t,1}^{\top } & {x}_{t,2}^{\top } \end{array}\right\rbrack }^{\top }$ and $F = {\left\lbrack \begin{array}{ll} {f}_{1}^{\top } & {f}_{2}^{\top } \end{array}\right\rbrack }^{\top }$, the one-step information transfer from ${x}_{t,1}$ to ${x}_{t,2}$, as the system evolves from time-step $t$ to $t + 1$, is ${\left\lbrack {T}_{{x}_{t,1} \rightarrow {x}_{t,2}}\right\rbrack }_{t}^{t + 1} = H\left( {{x}_{t + 1,2} \mid {x}_{t,2}}\right) - {H}_{{\mathcal{H}}_{t,1}}\left( {{x}_{t + 1,2} \mid {x}_{t,2}}\right)$. Here, $H\left( {{x}_{t + 1,2} \mid {x}_{t,2}}\right)$ is the conditional Shannon entropy of ${x}_{t + 1,2}$ given ${x}_{t,2}$ for the original system and ${H}_{{\mathcal{H}}_{t,1}}\left( {{x}_{t + 1,2} \mid {x}_{t,2}}\right)$ is the corresponding conditional entropy for the system where ${x}_{t,1}$ has been held frozen at time $t$. Note that the information transfer is in general asymmetric and characterizes the influence of one state on any other state. Furthermore, for stable dynamical systems the information transfer between the states always settles to a steady-state value.
190
+
191
+ < g r a p h i c s >
192
+
193
+ Figure 3: Illustration of divergent cluster dynamics from a power network. Top plot shows that the transient frequency trajectories from three different clusters behave differently. Phase-space plots in the bottom row illustrate the temporal evolution of the nodal attributes in each cluster, initial time-points are marked larger and lighter, while later time-points are marked thinner and darker.
194
+
195
+ We use this information transfer measure to define an influence graph for the NDS studied in this paper. We form a directed weighted graph with the states as the nodes and introduce an edge from ${x}_{t,1}$ to ${x}_{t,2}$ iff the information transfer from ${x}_{t,1}$ to ${x}_{t,2}$ is non-zero. Moreover, the edge-weight for the edge ${x}_{t,1} \rightarrow {x}_{t,2}$ is $\exp \left( {-\left| {T}_{{x}_{t,1} \rightarrow {x}_{t,2}}\right| /\beta }\right)$ [45], where $\left| {T}_{{x}_{t,1} \rightarrow {x}_{t,2}}\right|$ is the steady-state information transfer from ${x}_{t,1}$ to ${x}_{t,2}$ (we assume stable dynamics) and $\beta > 0$ is a parameter similar to temperature in a Gibbs' distribution. Applying this to a dynamical system, a directed weighted graph is computed based on the information transfer and is clustered accordingly to obtain a multi-agent network. Figure 3 uses a power network example to illustrate how the nodal attributes from different clusters demonstrate different transient evolution trajectories.
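+
+ A possible sketch of this construction is shown below; the information-transfer magnitudes are assumed to be precomputed, and the use of scikit-learn's spectral clustering on the symmetrized weight matrix is our illustrative stand-in for the clustering step of [45], whose exact treatment of the weights we do not reproduce here.
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import SpectralClustering
+
+ def agents_from_information_transfer(T, beta, n_agents):
+     """T[i, j]: steady-state information transfer from state i to state j
+     (computed elsewhere). Build the weighted influence graph with edge weights
+     exp(-|T|/beta) and group its nodes into n_agents clusters."""
+     W = np.exp(-np.abs(T) / beta)
+     W[T == 0] = 0.0                    # no edge where there is no transfer
+     np.fill_diagonal(W, 0.0)
+     W = 0.5 * (W + W.T)                # transfer is asymmetric; symmetrize for clustering
+     labels = SpectralClustering(n_clusters=n_agents,
+                                 affinity="precomputed").fit_predict(W)
+     return labels                      # labels[i] = agent index of node i
+ ```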
196
+
197
+ § 4 NUMERICAL EXPERIMENTS
198
+
199
+ In this section, we aim to answer the following research questions through our experiments: (RQ1) How does the distributed GNN-based Koopman (DKGNN) model's performance compare with other state-of-the-art approaches such as centralized GNN-based Koopman (CKGNN) [21] and graph neural network approaches for modeling multi-body interactions [47] (RQ2) How do various dynamical system properties such as sparsity, spatio-temporal correlation, and damping properties influence the performance boost from the distributed algorithm? (RQ3) What is the potential for distributed approaches for scaling to larger NDS in the future?
200
+
201
+ | Power Grid: Disturbance Location | CKGNN MSE | DKGNN (area-wise clustering) MSE | DKGNN (IT-based clustering) MSE | PN MSE |
+ |---|---|---|---|---|
+ | One hop | 0.0352 | 0.0064 | 0.0079 | 0.1123 |
+ | Two hops | 0.0212 | 0.0044 | 0.0049 | 0.2498 |
+ | Three hops | 0.029 | 0.0075 | 0.0098 | 0.0468 |
+ | High degree | 0.0055 | 0.0013 | 8.538E-04 | 0.0246 |
+ | Low degree | 0.0195 | 0.005 | 0.006 | 0.0427 |
+
+ | Oscillator: Disturbance Location | CKGNN MSE | DKGNN MSE | PN MSE |
+ |---|---|---|---|
+ | High damping | 1.938E-04 | 1.026E-04 | 3.6E-04 |
+ | Low Damping | 1.404E-04 | 1.185E-04 | 5.205E-04 |
+ | Random | 4.86E-05 | 4.059E-04 | 4.7328 |
+ | Rope | 0.2301 | 0.2008 | 0.3091 |
245
+
246
+ Table 1: Prediction performance of the proposed distributed geometric Koopman approach with other baselines in terms of mean square errors (MSE) averaged over all time-steps and states for test trajectories.
247
+
248
+ < g r a p h i c s >
249
+
250
+ Figure 4: DKGNN out-performs the baselines. Columns A, B, C represent for rope, oscillator (high damping) and grid (high degree) examples, respectively. Prediction error over time-steps with darker lines representing median are shown in Row A, and MSE-based box plots are shown in Row B.
251
+
252
+ Environments. We perform numerical experiments on three different data-sets. The baseline rope system introduced in [7] consists of a set of six objects connected via a line graph. The second dataset is a network of oscillators which is a prototype used for modelling various real-world systems in biology $\left\lbrack {{48},{49}}\right\rbrack$ , synchronization of fireflies $\left\lbrack {50}\right\rbrack$ , superconducting Josephson junctions $\left\lbrack {51}\right\rbrack$ etc. We consider a sparse network of oscillators consisting of 50 agents where the dynamics at each oscillator are governed by a second-order swing differential equation. The network density for this network is 0.0408 . Our third dataset studies a significantly larger and complex system of immense practical importance. Power grid networks [52] are complex infrastructures that are essential for every aspect of modern life. The ability to predict transient behavior in such networks is key to the prevention of cascading failures and effective integration of renewable energy sources [53, 54]. We consider the IEEE 68-bus power grid model [55] that represents the interconnections of New England Test System and New York Power System. This is a sparse, heterogeneous network with a network density 0.0378 ; the network has 68-nodes, with 16-nodes representing generators and the rest being loads. The ability to accurately predict changes in voltage and frequency is key to capturing the tendency of a power grid to move towards undesired oscillatory regions. Transient stability simulations for the grid datasets are performed by the Power Systems Toolbox [56]. We provide more details on the datasets and experiments in the supplementary.
253
+
254
+ Baselines and Implementation Details. We baseline our method against centralized GNN-based Koopman (CKGNN) [21] and propagation network (PN), a GNN-based approach [47] using their available implementations. We evaluate all models on the trajectory prediction task where we predict the node-level time-series measurements for each of these environments that represent velocity of objects (rope), angles and frequencies of oscillators, and frequency measurements for the power grid. For PN, we slightly modify the prediction workflow from the author-provided implementation to make sure that we always feed the PN with the predicted signal values except for the initial signal input for the prediction task in consecutive time steps. Our methods are implemented on the Pytorch framework [57] and run on an NVIDIA A100 GPU. For PN baseline, we used 3 propagation steps with 32 as the batch size. The dimensions of the hidden layer of relation encoder, object encoder, and propagation effect are set as follows, for PN we use 150, 100, and 100, respectively for all the cases, for the rope system with CKGNN and DKGNN, we have used 120, 100, and 100, and for the oscillator and grid example with both CKGNN and DKGNN, these are set to be 60. The models are trained with Adam based stochastic gradient descent optimizer with a learning rate of ${10}^{-5}$ . For rope, we consider10,000episodes with 100 time-steps and training-testing division of ${90}\% - {10}\%$ , and batch size of 8 episodes. We consider the oscillator node state trajectories of 100 time-steps and trained with a total of 9000 time-steps, batch size of 10 trajectories, and have tested with three different testing configurations and predicting for each trajectory of 100 time-steps. Considering the power grid network, we have considered 50 time-steps to capture the initial fast transients and train the model with 1100 time steps with a batch size of 5 episodes along with testing in five different scenarios with multiple testing trajectories each with 50 time steps.
255
+
256
+ RQ1: Prediction performance analysis. Figure 4 shows that DKGNN outperforms other baselines in trajectory prediction. Table 1 reports the mean square error (MSE) averaged over all time-steps and over all states in the test trajectory dataset. The rope system is minimally sparse with a network density of 0.33, thereby resulting in improved predictive performance for the DKGNN when compared to KGNN. The improvements are significantly pronounced for sparser and larger network models of oscillators (network density 0.0408) and power grid (network density 0.0378). The superior prediction performance over considerable trajectory time-steps (as demonstrated in Figure 4 second row) substantiates the applicability of DKGNN for sparse NDS.
257
+
258
+ RQ2.1 Performance with respect to varying NDS properties. The damping parameter provides us with a way to systematically study the response of an NDS to an input. Lower damping implies that the system will take longer to converge to a steady state. We hypothesize that the introduction of input perturbations of the same magnitude at different nodes will evoke different responses depending on the connectivity structure around these nodes. For the oscillator network, we consider testing scenarios with disturbances created at high damping nodes ( $> {13}$ in appropriate units) and low damping nodes $\left( { < 1}\right)$ . We consider five different configurations for the power grid network. Three of the scenarios are based on the perturbations in the loads which are respectively one-hop, two-hop, and three-hop away from the generator buses, and two other load disturbance scenarios are considered at locations with high and low degrees of connectivity. We observe DKGNN approach is able to produce better prediction performance with all of these scenarios $({81},{79},{74},{76},{74}\%$ improvements with area-wise partitioning, and 77, 76, 66, 84, 69% improvements with information-theoretic partitioning for the five cases listed in Table I from top to bottom with respect to the centralized approach). The second and third columns of Figure 4 correspond to the oscillator (where the disturbance is at high-damping nodes), and the power grid (where the disturbance is at high-degree buses), respectively, where both of them show the superior performance of DKGNN to the baselines.
259
+
260
+ RQ2.2 Performance with respect to sparse NDS clustering. This subsection validates the effectiveness of the information-theoretic clustering based agent structure discovery using the power grid network. The IEEE-68 bus grid network specifications also include an area-wise partitioning based on eigenvalue separation and extensive application of domain knowledge [58]. Both the expert-driven partitioning and our clustering-driven partitioning divide the grid into 5 clusters and yield a 5-agent network to use for training. Table 1 and Figure 4 show that DKGNN exploits the localized dominant dynamics and yields superior predictive performance compared to centralized approaches such as CKGNN and PN.
261
+
262
+ RQ3. Computational Scalability. We compare the run time of our DKGNN with that of its centralized counterpart, KGNN. Let $\tau \left( {\phi + {A}_{D} + \varphi }\right)$ denote the combined computation time for the GNN encoder $\left( \phi \right)$ , the distributed Koopman operator $\left( {A}_{D}\right)$ , and the GNN decoder $\left( \varphi \right)$ . Similarly, $\tau \left( {\phi + A + \varphi }\right)$ denotes the computation time for KGNN, where the centralized Koopman operator (A) is obtained. The reduction in total run time $\left( \% \right)$ is computed as $\frac{\tau \left( {KGNN}\right) - \tau \left( {DKGNN}\right) }{\tau \left( {KGNN}\right) } \times {100}$ . From Figure 5, it is clear that there is a significant reduction in run time for the larger and sparser networks of oscillators (50 nodes with network density $= {0.041}$ ) and the power grid (68 nodes, 5 clustered areas, and network density of 0.038). These examples see a considerable performance boost (45.63% for the oscillator and ${32}\%$ for the power grid) owing to capturing the dominant localized dynamic behavior. The rope, a smaller system with 6 nodes and a single excitation (at the top), shows only a slight improvement in run time (5%), owing to its high network density (0.33).
263
+
264
265
+
266
+ Figure 5: Scalability of DKGNN compared to KGNN model.
267
+
268
+ § 5 CONCLUSIONS
269
+
270
+ We present a geometric deep learning based distributed Koopman operator (DKGNN) framework that exploits dynamical system sparsity to improve computational scalability. Our results bounding the DKGNN performance with respect to its centralized counterpart provide a rigorous theoretical foundation. Extensive empirical studies on large NDS of oscillators and practical power grid models show the effectiveness of the DKGNN design with respect to varying degrees of NDS dynamical properties and sparsity patterns. Future research will look into incorporating attention capability into the distributed design, investigating robustness in the presence of physical or adversarial faults, and performing control design on the learned distributed dynamical model.
papers/LOG/LOG 2022/LOG 2022 Conference/m3aVA7ykn67/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,383 @@
1
+ # Sparse and Local Hypergraph Reasoning Networks
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Reasoning about the relationships between entities from input facts (e.g., whether Ari is a grandparent of Charlie) generally requires explicit consideration of other entities that are not mentioned in the query (e.g., the parents of Charlie). In this paper, we present an approach for learning inference rules that solve problems of this kind in large, real-world domains, using sparse and local hypergraph reasoning networks (SpaLoc). SpaLoc is motivated by two observations from traditional logic-based reasoning: relational inferences usually apply locally (i.e., involve only a small number of individuals), and relations are usually sparse (i.e., only hold for a small percentage of tuples in a domain). We exploit these properties to make learning and inference efficient in very large domains by (1) using a sparse tensor representation for hypergraph neural networks, (2) applying a sparsification loss during training to encourage sparse representations, and (3) subsampling graphs during training via a novel information sufficiency-based sampling process. SpaLoc achieves state-of-the-art performance on several real-world, large-scale knowledge graph reasoning benchmarks, and is the first framework for applying hypergraph neural networks on real-world knowledge graphs with more than ${10}\mathrm{k}$ nodes.
12
+
13
+ ## 1 Introduction
14
+
15
+ Performing graph reasoning in large domains, such as predicting the relationship between two entities based on facts given as input, is an important practical problem that arises in reasoning about many domains, including molecular modeling, knowledge networks, and collections of objects in the physical world [Schlichtkrull et al., 2018b, Veličković et al., 2020, Battaglia et al., 2016]. This paper focuses on an inductive learning-based approach that learns inference rules from data and uses them to make predictions in novel problem instances. Consider the problem of learning a rule that explains the grandparent relationship. Given a dataset of labeled family relationship graphs, we aim to build machine-learning algorithms that learn to predict a specific relationship (e.g., grandparent) based on other input relationships, such as father and mother. A crucial feature of such reasoning tasks is that: in order to predict the relationship between two entities (e.g., whether Ari is a grandparent of Charlie), we need to jointly consider other entities (e.g., the father and mother of Charlie).
16
+
17
+ A natural approach to this problem is to use hypergraph neural networks to represent and reason about higher-order relations: in a hypergraph, a hyper-edge may connect more than two nodes. As an example, Neural Logic Machines [NLM; Dong et al., 2019] present a method for solving graph reasoning tasks by maintaining hyperedge representations for all tuples consisting of up to $B$ entities, where $B$ is a hyperparameter. Thus, they can infer more complex finitely-quantified logical relations than standard graph neural networks that only consider binary relationships between entities [Morris et al., 2019b, Barceló et al., 2020]. However, there are two disadvantages of such a dense hypergraph representation. First, the training and inference require considering all entities in a domain simultaneously, such as all of the $N$ people in a family relationship database. Second, the time and space requirements scale exponentially with respect to the number of entities considered in a single type of relationship: inferring the grandparent relationship between all pairs of entities requires $O\left( {N}^{3}\right)$ time and space complexity. In practice, for large graphs, these limitations make the training and inference intractable and hinder the application of methods such as NLMs in large-scale real-world domains. To address these two challenges, we draw two inspirations from traditional logic-based reasoning: logical rules (e.g., my parent's parent is my grandparent) usually apply locally (e.g., only three people are involved in a grandparent rule), and sparsely (e.g., the grandparent relationship is sparse across all pairs of people in the world). Thus, during training and inference, we do not need to keep track of the representations of all hyperedges, but only of the hyperedges that are related to our prediction tasks.
18
+
19
+ Inspired by these observations, we develop the Sparse and Local Hypergraph Reasoning Network (SpaLoc) for inducing sparse relational rules from data in large domains. First, we present a sparse tensor-based representation for encoding hyperedge relationships among entities and extend hypergraph reasoning networks to this representation. Instead of storing a dense representation for all hyperedges, it only keeps track of edges related to the prediction task, which exploits the inherent sparsity of hypergraph reasoning. Second, since we do not know the underlying sparsity structures a priori, we propose a training paradigm to recover the underlying sparse relational structure among objects by regularizing the graph sparsity. During training, the graph sparsity measurement is used as a soft constraint, while during inference, SpaLoc uses it to explicitly prune out irrelevant edges to accelerate the inference. Third, during both training and inference, SpaLoc focuses on a local induced subgraph of the input graph, instead of considering all entities and their relations. This is achieved by a novel sub-graph sampling technique motivated by information sufficiency (IS). IS quantifies the amount of information in a sub-graph for predictions about a specific hyperedge. Since the information in a sub-sampled graph may be insufficient for predicting the relationship between a pair of entities, we also propose to use the information sufficiency measure to adjust training labels.
20
+
21
+ We study the learning and generalization properties of SpaLoc on a domain of relational reasoning in family-tree datasets and evaluate its performance on real-world knowledge-graph reasoning benchmarks. First, we show that, with our sparsity regularization, the computational complexity for inference can be reduced to the same order as that of the human expert-developed inference method, which significantly outperforms the baseline models. Second, we show that training via sub-graph sampling and label adjustment enables us to learn relational rules in real-world knowledge graphs with more than ${10}\mathrm{\;K}$ nodes, whereas other hypergraph neural networks can barely be applied to graphs with more than 100 nodes. SpaLoc achieves state-of-the-art performance on several real-world knowledge graph reasoning benchmarks, surpassing several existing binary-edge-based graph neural networks. Finally, we show the generality of SpaLoc by applying it to different hypergraph neural networks.
22
+
23
+ ## 2 Related Work
24
+
25
+ (Hyper-)Graph representation learning. (Hyper-)Graph representation learning methods, including message passing neural networks [Shervashidze et al., 2011, Kipf and Welling, 2017, Velickovic et al., 2018, Hamilton et al., 2017] and embedding-based methods [Bordes et al., 2013, Yang et al., 2015, Toutanova et al., 2015, Dettmers et al., 2018], have been widely used for knowledge discovery. Since these methods treat relations (edges) as fixed indices for node feature propagation, their computational complexity is usually small (e.g., $O\left( {NE}\right)$ ), and they can be applied to large datasets. However, the fixed relation representation and low complexity restrict the expressive power of these methods [Xu et al., 2019, 2020, Luo et al., 2021], preventing them from solving general complex problems like inducing rules that involve more than three entities. Moreover, some widely used methods, such as knowledge embeddings, are inherently transductive and cannot learn lifted rules that generalize to unseen domains. By contrast, the learned rules from SpaLoc are inductive and can be applied to completely novel domains with an entire collection of new entities, as long as the underlying patterns of relational inference remain the same.
26
+
27
+ Inductive rule learning. In addition to graph learning frameworks, many previous approaches have studied how to learn generalized rules from data, i.e., inductive logic programming (ILP) [Muggleton, 1991, Friedman et al., 1999], with recent work integrating neural networks into ILP systems to combat noisy and ambiguous inputs [Dong et al., 2019, Evans and Grefenstette, 2018, Sukhbaatar et al., 2015]. However, due to the large search space of target rules, the computational and memory complexities of these models are too high to scale up to many large real-world domains. SpaLoc addresses this scalability problem by leveraging the sparsity and locality of real-world rules and thus can induce knowledge with local computations.
28
+
29
+ Efficient training and inference methods. There is a rich literature on efficient training and inference of neural networks. Two directions that are relevant to us are model sparsification and sampling training. Han et al. [2016] proposed to prune and compress the weights of neural networks for efficiency, and Yang et al. [2020] adopted Hoyer-Square regularization to sparsify models. SpaLoc
30
+
31
+ ![01963ef5-532a-79b8-8738-8c39d9763872_2_329_203_1139_362_0.jpg](images/01963ef5-532a-79b8-8738-8c39d9763872_2_329_203_1139_362_0.jpg)
32
+
33
+ Figure 1: The overall training pipeline of SpaLoc, including sub-graph sampling with label adjustment (Sec. 3.3), sparse hypergraph reasoning networks (Sec. 3.1), and sparsity regularizations (Sec. 3.2). $I$ and $V$ denote the index tensor and value tensor, respectively.
34
+
35
+ extends this sparsification idea by adding regularization at intermediate sparse tensor groundings to encourage concise induction. Chiang et al. [2019] and Zeng et al. [2020] proposed to sample sub-graphs for GNN training and Teru et al. [2020] proposed to construct sub-graphs for link prediction. SpaLoc generalizes these sampling methods to hypergraphs and proposes the information sufficiency-based adjustment method to remedy the information loss introduced by sub-sampling.
36
+
37
+ ## 3 SpaLoc Hypergraph Reasoning Networks
38
+
39
+ This section develops a training and inference framework for hypergraph reasoning networks. As illustrated in Fig. 1, we make hypergraph networks practical for large domains by using sparse tensors (Sec. 3.1). To encourage models to discover sparse interconnections, we add sparsity regularization to intermediate tensors (Sec. 3.2). We exploit the locality of the task by sampling subgraphs and compensate for information loss due to sampling through a novel label adjustment process (Sec. 3.3).
40
+
41
+ The fundamental structures used for both training and inference are hypergraphs $\mathcal{H} = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V}$ is a set of vertices and $\mathcal{E}$ is a set of hyperedges. Each hyperedge $e = \left( {{x}_{1},{x}_{2},\cdots ,{x}_{r}}\right)$ is an ordered tuple of $r$ elements ( $r$ is called the arity of the edge), where ${x}_{i} \in \mathcal{V}$ . We use $f : \mathcal{E} \rightarrow \mathcal{S}$ to denote a hyperedge representation function, which maps hyperedge $e$ to a feature in $\mathcal{S}$ . Domains $\mathcal{S}$ can be of various forms, including discrete labels, numbers, and vectors. For simplicity, we describe features associated with arity-1 edges as "node features" and features associated with the whole graph as "nullary" or "global" features.
42
+
43
+ A graph-reasoning task can be formulated as follows: given $\mathcal{H}$ and the input hyperedge representation functions $f$ associated with all hyperedges in $\mathcal{E}$ , such as node types and pairwise relationships (e.g., parent), our goal is to infer a target representation function ${f}^{\prime }$ for one or more hyperedges, i.e. ${f}^{\prime }\left( e\right)$ for some $e \in \mathcal{E}$ , such as predicting a new relationship (e.g., grandparent(Kim, Skye)). We consider two problem settings in this paper. The first one is to predict a target relation over all edges in the graph. The second one is to predict the relation on one single edge.
44
+
45
+ ### 3.1 Sparse Hypergraph Reasoning Networks
46
+
47
+ SpaLoc is a general formulation that can be applied to a range of hypergraph reasoning frameworks. We will primarily develop our method based on the Neural Logic Machine [NLM; Dong et al., 2019], a state-of-the-art inductive hypergraph reasoning network. We choose an NLM as the backbone network in SpaLoc because its tensor representation naturally generalizes to sparse cases. In Sec. 4.1, we also integrate SpaLoc with other hypergraph neural networks like k-GNNs [Morris et al., 2019a].
48
+
49
+ In SpaLoc, hypergraph features such as node features and edge features are represented as sparse tensors. For example, as shown in Fig. 1, at the input level, the parental relationship can be represented as a list of indices and values. In this case, each index (x, y) is an ordered pair of integers, and the corresponding value is 1 if node $x$ is a parent of node $y$ . To leverage the sparsity in relations, we treat values for indices not in the list as 0. This convention also extends to vector representations of nodes and hyperedges. In general, vector representations $f\left( {{x}_{1},{x}_{2},\cdots ,{x}_{r}}\right)$ associated with all hyperedges of arity $r$ are represented as coordinate-list (COO) format sparse tensors [Fey and Lenssen, 2019]. That is, each tensor is represented as two tensors $\mathcal{F} = \left( {\mathbf{I},\mathbf{V}}\right)$ , each with $M$ entries. The first tensor $\mathbf{I}$ is an index tensor, of shape $M \times r$ , in which each row denotes a tuple $\left( {{x}_{1},{x}_{2},\cdots ,{x}_{r}}\right)$ . The second tensor $\mathbf{V}$ is a value tensor, of shape $M \times D$ , where $D$ is the length of $f\left( {{x}_{1},{x}_{2},\cdots ,{x}_{r}}\right)$ . Each row
50
+
51
+ ![01963ef5-532a-79b8-8738-8c39d9763872_3_309_177_1176_429_0.jpg](images/01963ef5-532a-79b8-8738-8c39d9763872_3_309_177_1176_429_0.jpg)
52
+
53
+ Figure 2: A running example of a single-layer SpaLoc: inferring the binary relationship of $\operatorname{son}\left( {x, y}\right) \mathrel{\text{:=}} \operatorname{male}\left( x\right) \land \operatorname{parent}\left( {y, x}\right)$ from the attribute male and the binary relationship parent. The model first expands the unary tensor (containing the male information) into a binary relation, indicating whether the first entity in the pair is a male. Then, the permutation operation fuses the information for (x, y) and (y, x). For each pair (x, y), we now have four predicates: whether $\mathrm{x}$ is a parent of $\mathrm{y}$ , whether $\mathrm{y}$ is a parent of $\mathrm{x}$ , whether $\mathrm{x}$ is a male, and whether $\mathrm{y}$ is a male. Finally, a neural network predicts the target relationship son for each pair (x, y). Blue entries denote values that are reduced from high-arity tensors. Green entries are expanded from low-arity tensors. Yellow entries are created by the "permutation" operation. Gray entries are zero paddings.
54
+
55
+ $\mathbf{V}\left\lbrack i\right\rbrack$ denotes the vector representation associated with the tuple $\mathbf{I}\left\lbrack i\right\rbrack$ . For all tuples that are not recorded in I, their representations are treated as all-zero vectors.
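+
+ To make this concrete, here is a minimal sketch (ours, not the released implementation) of the $\left( {\mathbf{I},\mathbf{V}}\right)$ convention for a small, hypothetical parent relation, including the all-zero treatment of unrecorded tuples.
+
+ ```python
+ import torch
+
+ # The binary relation parent(x, y) over a hypothetical 5-node graph, stored as an
+ # index tensor I of shape M x r and a value tensor V of shape M x D (r = 2, D = 1, M = 3).
+ I = torch.tensor([[0, 2],   # parent(0, 2)
+                   [1, 2],   # parent(1, 2)
+                   [2, 4]])  # parent(2, 4)
+ V = torch.ones(I.shape[0], 1)
+
+ def lookup(I, V, edge):
+     """Return the stored feature for `edge`; tuples not recorded in I are all-zero."""
+     mask = (I == torch.tensor(edge)).all(dim=1)
+     return V[mask][0] if mask.any() else torch.zeros(V.shape[1])
+
+ print(lookup(I, V, (0, 2)))  # tensor([1.]) -- a stored hyperedge
+ print(lookup(I, V, (3, 0)))  # tensor([0.]) -- not stored, treated as zero
+ ```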
56
+
57
+ Based on the sparse feature representations, a sparse hypergraph reasoning network is composed of multiple relational reasoning layers (RRLs) that operate on hyperedge representations. Fig. 1 shows the detailed computation graph of a SpaLoc model with ternary relations. The input to the first RRL is the input information (e.g., demographic information and parental relationships in a person database). Each RRL computes a set of new hyperedge features as inputs to the next layer. The output of the last layer is the final prediction of the task (e.g., the "son" relationship). At training time, we supervise the network with ground-truth labels for the final predictions.
58
+
59
+ Next, we describe the computation of individual RRLs. The descriptions will be brief and focus on differences from the original NLM layers. The input to and output of each RRL are both $R + 1$ sparse tensors of different arities, where $R$ is the maximum arity of the network. Let ${\mathcal{F}}^{\left( i - 1, r\right) }$ denote the arity- $r$ input of layer $i$ ; the output of this layer, ${\mathcal{F}}^{\left( i, r\right) }$ , is computed as follows:
60
+
61
+ $$
62
+ {\mathcal{F}}^{\left( i, r\right) } = {\mathrm{{NN}}}^{\left( i, r\right) }\left( {\operatorname{PERMUTE}\left( {\operatorname{CONCAT}\left( {{\mathcal{F}}^{\left( i - 1, r\right) },\operatorname{EXPAND}\left( {\mathcal{F}}^{\left( i - 1, r - 1\right) }\right) ,\operatorname{REDUCE}\left( {\mathcal{F}}^{\left( i - 1, r + 1\right) }\right) }\right) }\right) }\right)
63
+ $$
64
+
65
+ In a nutshell, the EXPAND operation propagates representations from lower-arity tensors to a higher-arity form (e.g., from each node to the edges connected to it). The REDUCE operation aggregates higher-arity representations into a lower-arity form (e.g., aggregating the information from all edges connected to a node into that node). The PERMUTE operation fuses the representations of hyperedges that share the same set of entities but in different orders, such as(A, B)and(B, A). Finally, NN is a linear layer with nonlinear activation that computes the representation for the next layer. Fig. 2 gives a concrete running example of a single RRL.
66
+
67
+ Formally, the EXPAND operation takes a sparse tensor $\mathcal{F}$ of arity $r$ and creates a new sparse tensor ${\mathcal{F}}^{\prime }$ with arity $r + 1$ . This is implemented by duplicating each entry $f\left( {{x}_{1},\cdots ,{x}_{r}}\right)$ in $\mathcal{F}$ by $N$ times, creating the $N$ new vector representations for $\left( {{x}_{1},\cdots ,{x}_{r},{o}_{i}}\right)$ for all $i \in \{ 1,2,\cdots , N\}$ , where $N$ is the number of nodes in the hypergraph.
68
+
69
+ The REDUCE operation takes a sparse tensor $\mathcal{F} = \left( {\mathbf{I},\mathbf{V}}\right)$ of arity $r$ and creates a new sparse tensor ${\mathcal{F}}^{\prime }$ with arity $r - 1$ : it aggregates all information associated with all $r$ -tuples: $\left( {{x}_{1},{x}_{2},\cdots ,{x}_{r - 1},?}\right)$ with the same $r - 1$ prefix. In SpaLoc, the aggregation function is chosen to be max. Thus,
70
+
71
+ $$
72
+ {f}^{\prime }\left( {{x}_{1},\cdots ,{x}_{r - 1}}\right) = \mathop{\max }\limits_{{z : \left( {{x}_{1},\cdots ,{x}_{r - 1}, z}\right) \in \mathbf{I}}}f\left( {{x}_{1},\cdots ,{x}_{r - 1}, z}\right) .
73
+ $$
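+
+ The following sketch illustrates this max-aggregation REDUCE directly on the sparse $\left( {\mathbf{I},\mathbf{V}}\right)$ representation; the grouping-by-prefix helper is our own illustrative code rather than the actual implementation.
+
+ ```python
+ import torch
+
+ def reduce_max(I, V):
+     """REDUCE an arity-r sparse tensor (I, V) to arity r-1 by taking a max over the
+     last index, grouping rows of I that share the same (r-1)-prefix."""
+     prefixes, inverse = torch.unique(I[:, :-1], dim=0, return_inverse=True)
+     out = torch.full((prefixes.shape[0], V.shape[1]), float("-inf"))
+     for row in range(V.shape[0]):
+         g = inverse[row]                      # group id of this hyperedge's prefix
+         out[g] = torch.maximum(out[g], V[row])
+     return prefixes, out
+ ```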
74
+
75
+ The CONCAT operation concatenates the input hyperedge representations along the channel dimension (i.e., the dimension corresponding to different relational features). Specifically, it first adds missing entries with all-zero values to the input hyperedge representations so that they have exactly the same set of indices $\mathbf{I}$ . It then concatenates the $\mathbf{V}$ ’s of inputs along the channel dimension.
76
+
77
+ The PERMUTE operation takes a sparse tensor $\mathcal{F}$ of arity $r$ and creates a new sparse tensor ${\mathcal{F}}^{\prime }$ of the same arity. However, the length of the vector representation will grow from $D$ to ${D}^{\prime } = r! \times D$ . It fuses the representation of hyperedges that share the same set of entities. Mathematically,
78
+
79
+ $$
80
+ {f}^{\prime }\left( {{x}_{1},\cdots ,{x}_{r}}\right) = \mathop{\operatorname{CONCAT}}\limits_{{\left( {{x}_{1}^{\prime },\cdots ,{x}_{r}^{\prime }}\right) \text{ is a permutation of }\left( {{x}_{1},\cdots ,{x}_{r}}\right) }}\left\lbrack {f\left( {{x}_{1}^{\prime },\cdots ,{x}_{r}^{\prime }}\right) }\right\rbrack .
81
+ $$
82
+
83
+ If a permutation of $\left( {{x}_{1},\cdots ,{x}_{r}}\right)$ does not exist in $\mathcal{F}$ , it will be treated as an all-zero vector. Thus, the number of entries $M$ may increase or remain unchanged.
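+
+ A simplified sketch of PERMUTE is shown below; for brevity it only produces output rows for tuples already stored in $\mathbf{I}$ (so it does not add new permuted entries), and treats missing orderings as zeros, as described above.
+
+ ```python
+ import torch
+ from itertools import permutations
+
+ def permute_fuse(I, V):
+     """PERMUTE (simplified): for every tuple stored in I, concatenate the features of
+     all r! orderings of its entities; orderings not stored in I contribute zeros."""
+     D = V.shape[1]
+     table = {tuple(idx.tolist()): feat for idx, feat in zip(I, V)}
+     zero = torch.zeros(D)
+     fused = [torch.cat([table.get(tuple(p), zero) for p in permutations(idx.tolist())])
+              for idx in I]
+     return I, torch.stack(fused)   # values now have length r! * D
+ ```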
84
+
85
+ Finally, the $i$ -th sparse relational reasoning layer has $R + 1$ linear layers ${L}^{\left( i,0\right) },{L}^{\left( i,1\right) },\cdots ,{L}^{\left( i, R\right) }$ with nonlinear activations (e.g., ReLU) as submodules with arities 0 through $R$ . For each arity $r$ , we will concatenate the feature tensors expanded from arity $r - 1$ , those reduced from arity $r + 1$ , and the output from the previous layer, apply a permutation, and apply ${L}^{\left( i, r\right) }$ on the derived tensor.
86
+
87
+ To make the intermediate features ${\mathcal{F}}^{\left( i, r\right) }$ sparse, SpaLoc uses a gating mechanism. In SpaLoc, for each linear layer ${L}^{\left( i, r\right) }$ , we add a linear gating layer, ${L}_{g}^{\left( i, r\right) }$ , which has sigmoid activation and outputs a scalar value in range $\left\lbrack {0,1}\right\rbrack$ that can be interpreted as the importance score for each hyperedge. During training, we modulate the output of ${L}^{\left( i, r\right) }$ with this importance value. Specifically, the output of layer $i$ arity $r$ is ${\mathcal{F}}^{\left( i, r\right) } = {L}^{\left( i, r\right) }\left( \mathcal{F}\right) \odot {L}_{g}^{\left( i, r\right) }\left( \mathcal{F}\right)$ , where $\mathcal{F}$ is the input sparse tensor, and $\odot$ is the element-wise multiplication operation. Note that we are using the same gate value to modulate each channel dimension of ${L}^{\left( i, r\right) }\left( \mathcal{F}\right)$ . During inference, we can prune out edges with small importance scores ${L}_{g}^{\left( i, r\right) } < \epsilon$ , where $\epsilon$ is a scalar hyperparameter. We use $\epsilon = {0.05}$ in our experiments.
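+
+ A minimal sketch of this gating and pruning is given below; the module name and the ReLU activation are illustrative choices on our part, not the exact released layer.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class GatedLinear(nn.Module):
+     """Sketch of the gated linear layer described above: the linear output is modulated
+     by a single sigmoid importance score per hyperedge (shared across channels)."""
+
+     def __init__(self, in_dim, out_dim):
+         super().__init__()
+         self.linear = nn.Linear(in_dim, out_dim)
+         self.gate = nn.Linear(in_dim, 1)   # scalar importance score in [0, 1]
+
+     def forward(self, V):
+         return torch.relu(self.linear(V)) * torch.sigmoid(self.gate(V))
+
+ def prune(I, V, gate_scores, eps=0.05):
+     """At inference time, drop hyperedges whose importance score is below eps."""
+     keep = gate_scores.squeeze(-1) >= eps
+     return I[keep], V[keep]
+ ```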
88
+
89
+ We have described the computation of a sparsified Neural Logic Machine. However, we do not know a priori the sparse structures of intermediate layer outputs at training time, nor at inference time before we actually compute the output. Thus, we have to start from the assumption of a fully-connected dense graph. In the following sections, we will show how to impose regularization to encourage learning sparse features. Furthermore, we will present a subsampling technique to learn efficiently from large input graphs.
90
+
91
+ Remark. Even when the inputs have only unary and binary relations, allowing intermediate tensor representations of higher arity to be associated with hyperedges increases the expressiveness of NLMs [Dong et al.,2019], and Luo et al. [2021] proves that NLMs with max arity $k + 1$ are as expressive as $k$ -GNN hypergraph models (note that the regular GNN is 1-GNN). An intuitive example is that, in order to determine the grandparent relationship, we need to consider all 3-tuples of entities, even though the input relations are only binary. Despite their expressiveness, hyperedge-based NLMs cannot be directly applied to large-scale graphs. For a graph with more than 10,000 nodes, such as Freebase [Bollacker et al., 2008], it is almost impossible to store vector representations for all of the ${N}^{3}$ tuples of arity 3 . Our key observation to improve the efficiency of NLMs is that relational rules are usually applied sparsely (Sec. 3.2) and locally (Sec. 3.3).
92
+
93
+ ### 3.2 Sparsification through Hoyer Regularization
94
+
95
+ We use a regularization loss to encourage hyperedge sparsity, based on the Hoyer sparsity measure [Hoyer, 2004]. Let $x$ (in our case, the vector of edge gates $g\left( {{x}_{1},\cdots ,{x}_{r}}\right)$ ) be a vector of length $n$ . Then
96
+
97
+ $$
98
+ \operatorname{Hoyer}\left( x\right) = \frac{\left( {\mathop{\sum }\limits_{i}^{n}\left| {x}_{i}\right| }\right) /\sqrt{\mathop{\sum }\limits_{i}^{n}{x}_{i}^{2}} - 1}{\sqrt{n} - 1}.
99
+ $$
100
+
101
+ The Hoyer measure takes values from 0 to 1 . The larger the Hoyer measure of a tensor, the denser the tensor is. In order to assign weights to different tensors based on their size, we use the Hoyer-Square
102
+
103
+ measure [Yang et al., 2020],
104
+
105
+ $$
106
+ {H}_{S}\left( x\right) = \frac{{\left( \mathop{\sum }\limits_{i}^{n}\left| {x}_{i}\right| \right) }^{2}}{\mathop{\sum }\limits_{i}^{n}{x}_{i}^{2}},
107
+ $$
108
+
109
+ which ranges from 1 (sparsest) to $n$ (densest). Intuitively, the Hoyer-Square measure is more suitable than ${L}_{1}$ or ${L}_{2}$ regularizers for graph sparsification since it encourages large values to be close to 1 and others to be zero, i.e., extremity. It has been widely used in sparse neural network training and has shown better performance than other sparse measures [Hurley and Rickard, 2009]. We empirically compare ${H}_{S}$ with other sparsity measures in Appendix C.
110
+
111
+ The overall training objective of SpaLoc is the task objective plus the sparsification loss, $\mathcal{L} = {\mathcal{L}}_{\text{task }} + \lambda {\mathcal{L}}_{\text{density }}$ , where ${\mathcal{L}}_{\text{density }}$ is the sum of the ${H}_{S}$ measures of the intermediate gate tensors, divided by the sum of the sizes of these tensors.
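+
+ The sketch below computes the two sparsity measures and the resulting sparsification loss exactly as defined above (assuming the gate tensors are not all-zero); it is an illustration of the formulas, not the released training code.
+
+ ```python
+ import torch
+
+ def hoyer(x):
+     """Hoyer measure of a flat tensor x as defined above (0 = sparsest, 1 = densest)."""
+     n = x.numel()
+     return (x.abs().sum() / x.pow(2).sum().sqrt() - 1) / (n ** 0.5 - 1)
+
+ def hoyer_square(x):
+     """Hoyer-Square measure H_S(x), ranging from 1 (sparsest) to n (densest)."""
+     return x.abs().sum().pow(2) / x.pow(2).sum()
+
+ def density_loss(gate_tensors):
+     """L_density: sum of H_S over the gate tensors divided by the sum of their sizes."""
+     return sum(hoyer_square(g) for g in gate_tensors) / sum(g.numel() for g in gate_tensors)
+ ```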
112
+
113
+ ### 3.3 Subgraph Training
114
+
115
+ Regularization enables us to learn a sparse model that will be efficient at inference time, but does not address the problem of training on large graphs. We describe a novel strategy that substantially reduces training complexity. It is based on the observation that an inferred relation among a set of entities generally depends only on a small set of other entities that are "related to" the target entities in the hypergraph, in the sense that they are connected via short paths of relevant relations.
116
+
117
+ Specifically, we employ a sub-graph sampling and label adjustment procedure. Here, we first present a measure to quantify the sufficiency of information in a sub-sampled graph for determining the relationship between two entities, namely, information sufficiency. Next, we present a sub-graph sampling procedure designed to maximize the information sufficiency for training. We further show that sub-graph sampling can also be employed at inference time. Finally, since information loss is inevitable during sampling, we further propose a training label adjustment process based on the information sufficiency.
118
+
119
+ Information sufficiency. Let ${\mathcal{H}}_{S} = \left( {{\mathcal{V}}_{S},{\mathcal{E}}_{S}}\right)$ be a sub-hypergraph of hypergraph $\mathcal{H} = \left( {\mathcal{V},\mathcal{E}}\right)$ , and ${e}^{ * } = \left( {{y}_{1},\ldots ,{y}_{r}}\right)$ be a target hyperedge in ${\mathcal{H}}_{S}$ , where ${y}_{1},\cdots ,{y}_{r} \in {\mathcal{V}}_{S} \subset \mathcal{V}$ . Intuitively, in order to determine the label for this hyperedge, we need to consider all "paths" that connect the nodes $\left\{ {{y}_{1},\ldots ,{y}_{r}}\right\}$ . More formally, we say a sequence of $K$ hyperedges $\left( {{e}_{1},\ldots ,{e}_{K}}\right)$ , represented as
120
+
121
+ $$
122
+ \underbrace{\left( {x}_{1}^{1},\cdots ,{x}_{{r}_{1}}^{1}\right) }_{{e}_{1}},\underbrace{\left( {x}_{1}^{2},\cdots ,{x}_{{r}_{2}}^{2}\right) }_{{e}_{2}},\cdots ,\underbrace{\left( {x}_{1}^{K},\cdots ,{x}_{{r}_{K}}^{K}\right) }_{{e}_{K}},
123
+ $$
124
+
125
+ is a hyperpath for nodes $\left\{ {{y}_{1},\cdots ,{y}_{r}}\right\}$ if and only if $\left\{ {{y}_{1},\cdots ,{y}_{r}}\right\} \subset \mathop{\bigcup }\limits_{{j = 1}}^{K}{e}_{j}$ and ${e}_{j} \cap {e}_{j + 1} \neq \varnothing$ for all $j$ . In a graph with only binary edges, this is equivalent to the existence of a path from one node ${y}_{1}$ to another node ${y}_{2}$ . We define the information sufficiency measure for a hyperedge ${e}^{ * }$ in subgraph ${\mathcal{H}}_{S}$ as follows (where $\frac{0}{0}$ is defined as 1):
126
+
127
+ $$
128
+ \operatorname{IS}\left( {\left( {{y}_{1},\cdots ,{y}_{r}}\right) \mid {\mathcal{H}}_{S},\mathcal{H}}\right) \mathrel{\text{:=}} \frac{\# \text{ Paths connecting }\left( {{y}_{1},\cdots ,{y}_{r}}\right) \text{ in }{\mathcal{H}}_{S}}{\# \text{ Paths connecting }\left( {{y}_{1},\cdots ,{y}_{r}}\right) \text{ in }\mathcal{H}}.
129
+ $$
130
+
131
+ In practice, we approximate IS by only counting the number of paths whose length is less than a task-dependent threshold $\tau$ for efficiency. The number of paths in a large graph can be precomputed and cached before training, and the overhead of counting paths in a sampled graph is small, so this computation does not add much overhead to training and inference. When input graphs have maximum arity 2 , paths can be counted efficiently by taking powers of the graph adjacency matrix.
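+
+ For the binary-edge case, the following sketch shows this truncated path-count approximation of IS using adjacency matrix powers (strictly, the powers count walks, which serve as the path-count proxy here); it assumes the subgraph adjacency matrix uses the same node indexing as the full graph.
+
+ ```python
+ import numpy as np
+
+ def count_walks(adj, u, v, tau):
+     """Number of walks of length <= tau from u to v, from powers of the adjacency matrix."""
+     total, power = 0, np.eye(adj.shape[0], dtype=np.int64)
+     for _ in range(tau):
+         power = power @ adj
+         total += power[u, v]
+     return int(total)
+
+ def information_sufficiency(adj_sub, adj_full, u, v, tau):
+     """Approximate IS for the pair (u, v): ratio of truncated path counts in the
+     sampled subgraph and the full graph, with 0/0 defined as 1."""
+     sub, full = count_walks(adj_sub, u, v, tau), count_walks(adj_full, u, v, tau)
+     return 1.0 if full == 0 else sub / full
+ ```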
132
+
133
+ ![01963ef5-532a-79b8-8738-8c39d9763872_5_774_1148_700_255_0.jpg](images/01963ef5-532a-79b8-8738-8c39d9763872_5_774_1148_700_255_0.jpg)
134
+
135
+ Figure 3: Subgraph training contains two steps. First, we sample a subset of nodes from the whole graph. Next, we adjust labels for edges in the sub-sampled graph. $I{S}_{1} = 0$ because no paths connecting the two nodes are sampled, while $I{S}_{2} = 1$ because all paths connecting the two nodes are sampled.
136
+
137
+ Subgraph sampling. During training, each data point is a tuple $\left( {\mathcal{H}, f,{f}^{\prime }}\right)$ where $\mathcal{H}$ is the input graph, $f$ is the input representation, and ${f}^{\prime }$ contains the desired output labels. We sample a subgraph ${\mathcal{H}}^{\prime } \subset \mathcal{H}$ , and train models to predict the value of ${f}^{\prime }$ on ${\mathcal{H}}^{\prime }$ given $f$ . For example, we train models to predict the grandparent relationship between all pairs of entities in ${\mathcal{H}}^{\prime }$ based on the parent relationship between entities in ${\mathcal{H}}^{\prime }$ . Thus, our goal is to find a subgraph that retains most of the paths connecting nodes in this subgraph. We achieve this using a neighbor expansion sampler that uniformly samples a few nodes from $\mathcal{V}$ as the seed nodes. It then adds to the sampled graph new nodes that are connected to one of the already-sampled nodes, and runs this "expansion" procedure for multiple iterations to obtain ${\mathcal{V}}_{S}$ . Finally, we include all edges that connect nodes in ${\mathcal{V}}_{S}$ to form the final subsampled hypergraph.
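+
+ A small sketch of such a neighbor expansion sampler on a binary graph is given below; the function name, the per-iteration sampling budget, and the adjacency-list input format are our own illustrative assumptions.
+
+ ```python
+ import random
+
+ def neighbor_expansion_sample(adj_list, num_seeds, num_iters):
+     """Start from uniformly chosen seed nodes and repeatedly add sampled neighbors of
+     already-sampled nodes; adj_list maps each node id to the set of its neighbors."""
+     nodes = list(adj_list.keys())
+     sampled = set(random.sample(nodes, num_seeds))
+     for _ in range(num_iters):
+         frontier = {nbr for v in sampled for nbr in adj_list[v]} - sampled
+         if not frontier:
+             break
+         sampled |= set(random.sample(sorted(frontier), min(len(frontier), num_seeds)))
+     # Induce the subgraph: keep all edges whose endpoints are both sampled.
+     edges = [(u, v) for u in sampled for v in adj_list[u] if v in sampled]
+     return sampled, edges
+ ```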
138
+
139
+ When the task is to infer the relations between a single pair of entities ${f}^{\prime }\left( {{y}_{1},{y}_{2}}\right)$ given the input representation $f$ , a similar sub-sampling idea can also be used at inference time to further speed it up. Specifically, we use a path sampler, which samples paths connecting ${y}_{1}$ and ${y}_{2}$ and induces a subgraph from these paths. We provide ablation studies on different sampling strategies in Sec. 4.1. The implementation details of our information sufficiency and samplers are in Appendices B and F.
140
+
141
+ Training label adjustment with IS. Due to the information loss caused by graph subsampling, the information contained in the subgraph may not be sufficient to make predictions about a target
142
+
143
+ Table 1: Results (Per-class Accuracy) on family tree reasoning benchmarks. Models are trained on domains with 20 to 2000 entities, and tested on domains with 100 entities. Minus mark means the model runs out of memory or cannot handle ternary predicates. All experiments are conducted on a single NVIDIA 3090 GPU with 24GB memory. The standard errors are computed based on three random seeds.
144
+
145
+ <table><tr><td>Family Tree</td><td colspan="2">MemNN</td><td colspan="2">$\partial$ ILP</td><td colspan="2">NLM</td><td colspan="2">Grall (R-GCN)</td><td colspan="2">SpaLoc (Ours)</td></tr><tr><td>${N}_{\text{train }}$</td><td>20</td><td>2,000</td><td>20</td><td>2,000</td><td>20</td><td>2,000</td><td>20</td><td>2,000</td><td>20</td><td>2,000</td></tr><tr><td>HasFather</td><td>65.24</td><td>-</td><td>100</td><td>-</td><td>100</td><td>-</td><td>100</td><td>100</td><td>${100}_{\pm {0.00}}$</td><td>${100} \pm {0.00}$</td></tr><tr><td>HasSister</td><td>66.21</td><td>-</td><td>100</td><td>-</td><td>100</td><td>-</td><td>97.05</td><td>97.95</td><td>${100} \pm {0.00}$</td><td>${98.01}_{\pm {0.04}}$</td></tr><tr><td>Grandparent</td><td>64.57</td><td>-</td><td>100</td><td>-</td><td>100</td><td>-</td><td>99.95</td><td>98.08</td><td>${100}_{\pm {0.00}}$</td><td>${100}_{\pm {0.00}}$</td></tr><tr><td>Uncle</td><td>64.82</td><td>-</td><td>100</td><td>-</td><td>100</td><td>-</td><td>97.87</td><td>96.50</td><td>${100}_{\pm {0.00}}$</td><td>${100}_{\pm {0.00}}$</td></tr><tr><td>MGUncle</td><td>80.93</td><td>-</td><td>100</td><td>-</td><td>100</td><td>-</td><td>54.67</td><td>71.29</td><td>${100} \pm {0.00}$</td><td>${100} \pm {0.00}$</td></tr><tr><td>FamilyOfThree</td><td>-</td><td>-</td><td>-</td><td>-</td><td>100</td><td>-</td><td>-</td><td>-</td><td>${100} \pm {0.00}$</td><td>${100} \pm {0.00}$</td></tr><tr><td>ThreeGenerations</td><td>-</td><td>-</td><td>-</td><td>-</td><td>100</td><td>-</td><td>-</td><td>-</td><td>${100} \pm {0.00}$</td><td>${100} \pm {0.00}$</td></tr></table>
146
+
147
+ ![01963ef5-532a-79b8-8738-8c39d9763872_6_334_645_1133_294_0.jpg](images/01963ef5-532a-79b8-8738-8c39d9763872_6_334_645_1133_294_0.jpg)
148
+
149
+ Figure 4: The memory usage and the inference time of each sample vs. the number of objects in the evaluation domains. SpaLoc reduces the memory complexity from $O\left( {n}^{3}\right)$ to $O\left( {n}^{2}\right)$ and achieves significant runtime speedup.
150
+
151
+ relationship. For example, in a family relationship graph, removing a subset of nodes may cause the system to be unable to conclude whether a specific person $x$ has a sibling.
152
+
153
+ Thus, we propose to adjust the model training by assigning each example ${f}^{\prime }\left( {{y}_{1},\cdots ,{y}_{r}}\right)$ a soft label, as illustrated in Fig. 3. Consider a binary classification task ${f}^{\prime }$ . That is, function ${f}^{\prime }$ is a mapping from a hyperedge tuple of arity $r$ to $\{ 0,1\}$ . Denote the model prediction as ${\widehat{f}}^{\prime }$ . Typically, we train the SpaLoc model with a binary cross-entropy loss between ${\widehat{f}}^{\prime }$ and the ground truth ${f}^{\prime }$ . In our subgraph training, we instead compute a binary cross-entropy loss between ${\widehat{f}}^{\prime }$ and ${f}_{{\mathcal{H}}_{S}}^{\prime } \odot {IS}$ , where ${\mathcal{H}}_{S}$ is the sub-sampled graph. Mathematically,
154
+
155
+ $$
156
+ \left( {{f}_{{\mathcal{H}}_{S}}^{\prime } \odot {IS}}\right) \left( {{y}_{1},\cdots ,{y}_{r}}\right) \triangleq {f}_{{\mathcal{H}}_{S}}^{\prime }\left( {{y}_{1},\cdots ,{y}_{r}}\right) \cdot {IS}\left( {\left( {{y}_{1},\cdots ,{y}_{r}}\right) \mid {\mathcal{H}}_{S},\mathcal{H}}\right) .
157
+ $$
158
+
159
+ We empirically compare IS with other label smoothing methods in Appendix E.
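+
+ In code, this label adjustment amounts to scaling the subgraph labels by their IS scores before the usual binary cross-entropy; the sketch below illustrates the idea with hypothetical tensor arguments.
+
+ ```python
+ import torch.nn.functional as F
+
+ def adjusted_bce_loss(pred, label, is_score):
+     """IS-adjusted training loss: binary cross-entropy between the model prediction and
+     the subgraph label scaled by its information sufficiency (f'_{H_S} * IS).
+     All arguments are tensors over the supervised hyperedges, with values in [0, 1]."""
+     soft_target = label * is_score
+     return F.binary_cross_entropy(pred, soft_target)
+ ```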
160
+
161
+ ## 4 Experiments
162
+
163
+ In this section, we compare SpaLoc with other methods in two aspects: accuracy and efficiency on large domains. We first compare SpaLoc with other baseline models on a synthetic family tree reasoning benchmark. Since we know the underlying relational rules of the task and have fine-grained control over training/testing distributions, we use this benchmark for ablation studies about the space and time complexity of our model and two design choices (different sampling techniques and different label adjustment techniques). We further extend the results to several real-world knowledge-graph reasoning benchmarks.
164
+
165
+ ### 4.1 Family Tree Reasoning
166
+
167
+ We first evaluate SpaLoc on a synthetic family-tree reasoning benchmark for inductive logic programming. The goal is to induce target family relationships or member properties in the test domains based on four input relations: Son, Daughter, Father, and Mother. Details are defined in Appendix G.
168
+
169
+ Baseline. We compare SpaLoc against four baselines. The first three are Memory Networks [MemNNs; Sukhbaatar et al., 2015], ∂ILP [Evans and Grefenstette, 2018], and Neural Logic Machines [NLMs; Dong et al., 2019], which are state-of-the-art models for relational rule learning tasks. For these models, we follow the configuration and setup in Dong et al. [2019]. The fourth baseline is an inductive link prediction method based on graph neural networks, GraIL [Teru et al., 2020]. Since GraIL can only be used for link prediction, we use the full-batch R-GCN [Schlichtkrull et al., 2018b], the backbone network of GraIL, for node property predictions.
170
+
171
+ Table 2: Per-class Accuracy, per-sample inference time (ms), and memory usage (MB) when applying SpaLoc on 2-GNNs. Recall that 1-GNN is the standard GNN with only binary edge message passing. Models are tested on domains with 200 entities.
172
+
173
+ <table><tr><td rowspan="2"/><td colspan="3">Uncle</td><td colspan="3">Grandparent</td></tr><tr><td>Acc.</td><td>Time</td><td>Mem.</td><td>Acc.</td><td>Time</td><td>Mem.</td></tr><tr><td>NLM</td><td>100</td><td>133.8</td><td>3,846</td><td>100</td><td>135.0</td><td>3,846</td></tr><tr><td>SpaLoc + NLM</td><td>100</td><td>37.2</td><td>214</td><td>100</td><td>23.9</td><td>181</td></tr><tr><td>2-GNN</td><td>100</td><td>145.1</td><td>5,126</td><td>100</td><td>145.5</td><td>5,126</td></tr><tr><td>SpaLoc + 2-GNN</td><td>100</td><td>23.7</td><td>645</td><td>100</td><td>19.2</td><td>519</td></tr></table>
174
+
175
+ Table 3: Comparison of different samplers. The first column shows the size of the subsampled graph during training $\left( {N}_{s}\right)$ and that of the full training graph $\left( N\right)$ . Models are tested on domains with 100 entities.
176
+
177
+ <table><tr><td rowspan="2">${N}_{s}/N$</td><td colspan="2">Node</td><td colspan="2">Walk</td><td colspan="2">Neighbor</td></tr><tr><td>Acc</td><td>MIS</td><td>Acc</td><td>MIS</td><td>Acc</td><td>MIS</td></tr><tr><td>20 /50</td><td>100</td><td>54.82</td><td>100</td><td>85.14</td><td>100</td><td>89.78</td></tr><tr><td>20 / 200</td><td>100</td><td>33.05</td><td>100</td><td>71.51</td><td>100</td><td>80.60</td></tr><tr><td>20 / 500</td><td>58.18</td><td>27.27</td><td>100</td><td>78.22</td><td>100</td><td>78.70</td></tr><tr><td>20 / 1,000</td><td>1.84</td><td>24.49</td><td>100</td><td>77.18</td><td>100</td><td>78.38</td></tr><tr><td>20 / 2,000</td><td>0</td><td>19.66</td><td>100</td><td>79.69</td><td>100</td><td>78.53</td></tr></table>
178
+
179
+ Table 4: Results (AUC-PR) on real-world knowledge graph inductive reasoning datasets from GraIL.
180
+
181
+ <table><tr><td rowspan="2">Model</td><td colspan="4">WN18RR</td><td colspan="4">FB15k-237</td><td colspan="4">NELL-995</td></tr><tr><td>v1</td><td>v2</td><td>v3</td><td>v4</td><td>v1</td><td>v2</td><td>v3</td><td>v4</td><td>v1</td><td>v2</td><td>v3</td><td>v4</td></tr><tr><td>Neural-LP</td><td>86.02</td><td>83.78</td><td>62.90</td><td>82.06</td><td>69.64</td><td>76.55</td><td>73.95</td><td>75.74</td><td>64.66</td><td>83.61</td><td>87.58</td><td>85.69</td></tr><tr><td>DRUM</td><td>86.02</td><td>84.05</td><td>63.20</td><td>82.06</td><td>69.71</td><td>76.44</td><td>74.03</td><td>76.20</td><td>59.86</td><td>83.99</td><td>87.71</td><td>85.94</td></tr><tr><td>RuleN</td><td>90.26</td><td>89.01</td><td>76.46</td><td>85.75</td><td>75.24</td><td>88.70</td><td>91.24</td><td>91.79</td><td>84.99</td><td>88.40</td><td>87.20</td><td>80.52</td></tr><tr><td>Grall</td><td>94.32</td><td>94.18</td><td>85.80</td><td>92.72</td><td>84.69</td><td>90.57</td><td>91.68</td><td>94.46</td><td>86.05</td><td>92.62</td><td>93.34</td><td>87.50</td></tr><tr><td>TACT</td><td>96.15</td><td>97.95</td><td>90.58</td><td>96.15</td><td>88.73</td><td>94.20</td><td>97.10</td><td>98.30</td><td>94.87</td><td>96.58</td><td>95.70</td><td>96.12</td></tr><tr><td>SpaLoc</td><td>98.18</td><td>99.83</td><td>96.66</td><td>99.30</td><td>99.73</td><td>99.38</td><td>99.53</td><td>99.39</td><td>100</td><td>98.27</td><td>96.19</td><td>97.37</td></tr></table>
182
+
183
+ Accuracy & Scalability. Table 1 summarizes the result. Overall, SpaLoc achieves near-perfect performance across all prediction tasks, on par with the inductive logic programming-based method $\partial$ ILP and the baseline model NLM. This suggests that our sparsity regularizations and sub-graph sampling do not affect model accuracy. Importantly, our SpaLoc framework has drastically increased the scalability of the method: SpaLoc can be trained on graphs with 2000 nodes, which is infeasible for the baseline NLM model due to memory issues.
184
+
185
+ Another essential comparison is between GraIL and SpaLoc. GraIL is a graph neural network-based approach that only considers relationships between binary pairs of entities. This is sufficient for simple tasks such as HasFather, but not for more complex tasks such as Maternal Great Uncle (MGUncle). By contrast, SpaLoc explicitly reasons about hyperedges and solves more complex tasks.
186
+
187
+ Runtime & Memory. We study the time and memory complexity of SpaLoc against NLM on the HasSister and Grandparent tasks. Results are shown in Fig. 4, where we plot the average memory consumption and inference time as a function of the input graph size. We fit a cubic polynomial to the data points to approximate the learned inference complexity of SpaLoc. The experimental results show that our method can reduce the space complexity from the original $O\left( {n}^{3}\right)$ complexity of NLM to approximately $O\left( {n}^{2}\right)$ . Note that this learned network has the same complexity as the optimal relational rule that can be designed to solve both tasks. The inference time is also significantly improved.
188
+
189
+ Application to other hypergraph neural networks. SpaLoc is a general framework for scaling up hypergraph neural networks rather than a method that can only be used on NLMs. Here we apply SpaLoc to another backbone, k-GNN [Morris et al., 2019a], on the family tree benchmark. Specifically, we use a fully-connected k-hypergraph. The edge embeddings are initialized as a one-hot encoding of the input relationship. As shown in Table 2, we see consistent improvements in terms of inference speed and memory cost for both k-GNNs and NLMs.
190
+
191
+ Ablation: Subgraph sampling. We compare our neighbor expansion sampler with two other sub-graph samplers, proposed in Zeng et al. [2020]: random node (Node) and random walk (Walk) samplers. We compare these samplers with two metrics: the final accuracy of the model and the average information sufficiency of all pairs of nodes in the sub-sampled graphs (MIS). Table 3 shows the result on the Grandparent task. The Node sampler does not leverage locality, so both model performance and MIS drop as the domain size grows larger. The SpaLoc models trained with the Walk and Neighbor samplers perform similarly well in terms of test accuracy. Note that the accuracy results are consistent with the MIS results: comparing the Node sampler with the others, we see that a higher MIS score leads to higher test accuracy. This supports the effectiveness of our proposed information sufficiency measure.
192
+
193
+ ### 4.2 Real-World Knowledge Graph Reasoning
194
+
195
+ To further demonstrate the scalability of SpaLoc, we apply it to complete, real-world knowledge graphs. We test SpaLoc on both inductive and transductive relation prediction tasks, following GraIL [Schlichtkrull et al., 2018a]. In this setting, the test-time task is to infer the relationship on a given edge, so test-time graph subsampling is used. We evaluate the models with a classification metric, the area under the precision-recall curve (AUC-PR).
196
+
197
+ In the inductive setting, the training and evaluation graphs are disjoint sub-graphs extracted from WN18RR [Dettmers et al., 2018], FB15k-237 [Toutanova et al., 2015], and NELL-995 [Xiong et al., 2017]. For each knowledge graph, there are four versions with increasing sizes. In the transductive setting, we use the standard WN18RR, FB15k-237, and NELL-995 benchmarks. For WN18RR and FB15k-237, we use the original splits; for NELL-995, we use the split provided in GraIL. We also include the Hit@10 metric used by knowledge graph embedding methods. Following the setting of GraIL, we rank each test triplet among 50 randomly sampled negative triplets.
198
+
199
+ Baseline. We compare SpaLoc with several state-of-the-art models, including Neural LP [Yang et al., 2017], DRUM [Sadeghian et al., 2019], RuleN [Meilicke et al., 2018], GraIL, and TACT [Chen et al., 2021]. For transductive learning tasks, we compare SpaLoc with four representative knowledge graph embedding methods: TransE [Bordes et al., 2013], DistMult [Yang et al., 2015], ComplEx [Trouillon et al., 2017], and RotatE [Sun et al., 2019].
200
+
201
+ Results. Table 4 and Table 5 show the inductive and transductive relation prediction results, respectively. In the inductive setting, SpaLoc significantly outperforms all baselines on all datasets. This demonstrates the scalability of SpaLoc on large-scale real-world data. SpaLoc is the only model that explicitly uses hyperedge representations, while none of the existing hypergraph neural networks are directly applicable to such large graphs due to memory and time complexities. In the transductive setting, SpaLoc outperforms all knowledge embedding (KE) methods and GraIL on WN18RR and NELL-995. SpaLoc also has comparable performance with KE methods on the FB15K-237 dataset, outperforming GraIL by a large margin.
202
+
203
+ Comparing SpaLoc with node embedding-based methods (TransE) and GNN-based methods (GraIL) that only consider binary edges, we see that our hyperedge-based model enables better prediction of relations that require reasoning about other entities. The necessity of hyperedges is further supported by Appendix D, where we show that setting the maximum arity of SpaLoc to 2 (i.e., removing hyperedges) significantly degrades the performance. Notably, in contrast to other methods for the transductive setting that store entity embeddings for all knowledge graph nodes, SpaLoc directly uses the inductive learning setting. That is, SpaLoc does not store identity information about each knowledge graph node. We leave better adaptations of SpaLoc to transductive learning settings as future work.
204
+
205
+ Table 5: Results of transductive link prediction on real-world knowledge graphs. We also include Hit@10 as an additional metric, following the knowledge graph embedding literature and GraIL [Schlichtkrull et al., 2018a].
206
+
207
+ <table><tr><td rowspan="2"/><td colspan="2">WN18RR</td><td colspan="2">NELL-995</td><td colspan="2">FB15K-237</td></tr><tr><td>AUC-PR</td><td>H@10</td><td>AUC-PR</td><td>H@10</td><td>AUC-PR</td><td>H@10</td></tr><tr><td>TransE</td><td>93.73</td><td>88.74</td><td>98.73</td><td>98.50</td><td>98.54</td><td>98.87</td></tr><tr><td>DistMult</td><td>93.08</td><td>85.35</td><td>97.73</td><td>95.68</td><td>97.63</td><td>98.67</td></tr><tr><td>ComplEx</td><td>92.45</td><td>83.98</td><td>97.66</td><td>95.43</td><td>97.99</td><td>98.88</td></tr><tr><td>RotatE</td><td>93.55</td><td>88.85</td><td>98.54</td><td>98.09</td><td>98.53</td><td>98.81</td></tr><tr><td>Grall</td><td>90.91</td><td>73.12</td><td>97.79</td><td>94.54</td><td>92.06</td><td>75.87</td></tr><tr><td>SpaLoc</td><td>96.76</td><td>99.97</td><td>99.27</td><td>98.90</td><td>99.61</td><td>96.97</td></tr></table>
208
+
209
+ ## 5 Conclusion
210
+
211
+ We present SpaLoc, a framework for efficient training and inference of hypergraph reasoning networks. SpaLoc leverages sparsity and locality to train and infer efficiently. Through regularizing intermediate representation by a sparsification loss, SpaLoc achieves the same inference complexities on family tree tasks as algorithms designed by human experts. SpaLoc samples sub-graphs for training and inference, calibrates training labels with the information sufficiency measure to alleviate information loss, and therefore generalizes well on large-scale relational reasoning benchmarks.
212
+
213
+ Limitations. The locality assumption applies to many benchmark datasets, but we admit that it is not a completely general solution. It may lead to problems on datasets where the property of interest depends on distant nodes, i.e., SpaLoc may not perform well on problems that require long chains of inference (e.g., detecting that A is a 5th cousin of B). Nevertheless, the locality assumption is good enough for many real-world relational inference tasks. Meanwhile, it is hard to directly apply SpaLoc to extremely high-arity hypergraph datasets such as WD50K [Galkin et al., 2020], where the maximum arity is 67, because the permutation operation in SpaLoc has $O\left( {r!}\right)$ complexity, where $r$ is the arity. We leave the application of SpaLoc to extremely high-arity hypergraphs as future work.
214
+
215
+ References
216
+
217
+ Pablo Barceló, Egor V Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In International Conference on Learning Representations (ICLR), 2020. 1
218
+
219
+ Peter W Battaglia, Razvan Pascanu, Matthew Lai, Danilo Rezende, and Koray Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems (NeurIPS), 2016. 1
220
+
221
+ Kurt D. Bollacker, Colin Evans, Praveen K. Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Jason Tsong-Li Wang, editor, ACM SIGMOD Conference on Management of Data (SIGMOD), 2008. 5
222
+
223
+ Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems (NeurIPS), 2013. 2, 9
224
+
225
+ Jiajun Chen, Huarui He, Feng Wu, and Jie Wang. Topology-aware correlations between relations for inductive link prediction in knowledge graphs. In AAAI Conference on Artificial Intelligence (AAAI), 2021. 9
226
+
227
+ Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-GCN: An efficient algorithm for training deep and large graph convolutional networks. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2019. 3
228
+
229
+ Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2d knowledge graph embeddings. In AAAI Conference on Artificial Intelligence (AAAI), 2018. 2, 9
230
+
231
+ Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, and Denny Zhou. Neural logic machines. In International Conference on Learning Representations (ICLR), 2019. 1, 2, 3, 5, 7, 15
232
+
233
+ Richard Evans and Edward Grefenstette. Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61:1-64, 2018. 2, 7
234
+
235
+ Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. 3
236
+
237
+ Nir Friedman, Lise Getoor, Daphne Koller, and Avi Pfeffer. Learning probabilistic relational models. In International Joint Conference on Artificial Intelligence (IJCAI), 1999. 2
238
+
239
+ Mikhail Galkin, Priyansh Trivedi, Gaurav Maheshwari, Ricardo Usbeck, and Jens Lehmann. Message passing for hyper-relational knowledge graphs. In Empirical Methods in Natural Language Processing (EMNLP), 2020. 9
240
+
241
+ Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 2
242
+
243
+ Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. In Yoshua Bengio and Yann LeCun, editors, International Conference on Learning Representations (ICLR), 2016. 2
244
+
245
+ Patrik O. Hoyer. Non-negative matrix factorization with sparseness constraints. Journal of Machine Learning Research, 5:1457-1469, 2004. 5
246
+
247
+ Niall P. Hurley and Scott T. Rickard. Comparing measures of sparsity. IEEE Transactions on Information Theory, 55(10):4723-4741, 2009. 5
248
+
249
+ Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015. 13
250
+
251
+ Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017. 2
252
+
253
+ Zhezheng Luo, Jiayuan Mao, Joshua B. Tenenbaum, and Leslie Pack Kaelbling. On the expressiveness and learning of relational neural networks on hypergraphs. https://openreview.net/forum?id=HRF6T1SsyDn, 2021. 2, 5
254
+
255
+ Christian Meilicke, Manuel Fink, Yanjie Wang, Daniel Ruffinelli, Rainer Gemulla, and Heiner Stuckenschmidt. Fine-grained evaluation of rule- and embedding-based systems for knowledge graph completion. In International Semantic Web Conference (ISWC), 2018. 9
256
+
257
+ Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In AAAI Conference on Artificial Intelligence (AAAI), 2019a. 3, 8
258
+
259
+ Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In AAAI Conference on Artificial Intelligence (AAAI), 2019b. 1
260
+
261
+ Stephen Muggleton. Inductive logic programming. New Gener. Comput., 8(4):295-318, 1991. 2
262
+
263
+ Ali Sadeghian, Mohammadreza Armandpour, Patrick Ding, and Daisy Zhe Wang. DRUM: end-to-end differentiable rule mining on knowledge graphs. In Advances in Neural Information Processing Systems (NeurIPS), 2019. 9
264
+
265
+ Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In Extended Semantic Web Conference (ESWC), 2018a. 9
266
+
267
+ Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In Extended Semantic Web Conference (ESWC), 2018b. 1, 8
268
+
269
+ Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M. Borgwardt. Weisfeiler-lehman graph kernels. Journal of Machine Learning Research, 12:2539- 2561, 2011. 2
270
+
271
+ Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In Advances in Neural Information Processing Systems (NeurIPS), 2015. 2, 7
272
+
273
+ Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. Rotate: Knowledge graph embedding by relational rotation in complex space. In International Conference on Learning Representations (ICLR), 2019. 9
274
+
275
+ Komal Teru, Etienne Denis, and Will Hamilton. Inductive relation prediction by subgraph reasoning. In International Conference on Machine Learning (ICML), 2020. 3, 7
276
+
277
+ Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing text for joint embedding of text and knowledge bases. In Luís Màrquez, Chris Callison-Burch, Jian Su, Daniele Pighin, and Yuval Marton, editors, Empirical Methods in Natural Language Processing (EMNLP), 2015. 2, 9
278
+
279
+ Théo Trouillon, Christopher R. Dance, Éric Gaussier, Johannes Welbl, Sebastian Riedel, and Guillaume Bouchard. Knowledge graph completion via complex tensor factorization. Journal of Machine Learning Research, 2017. 9
280
+
281
+ Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations (ICLR), 2018. 2
282
+
283
+ Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. In International Conference on Learning Representations (ICLR), 2020. 1
284
+
285
+ Wenhan Xiong, Thien Hoang, and William Yang Wang. Deeppath: A reinforcement learning method for knowledge graph reasoning. In Martha Palmer, Rebecca Hwa, and Sebastian Riedel, editors, Empirical Methods in Natural Language Processing (EMNLP), pages 564-573. Association for Computational Linguistics, 2017. 9
286
+
287
+ Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations (ICLR), 2019. 2
288
+
289
+ Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? International Conference on Learning Representations (ICLR), 2020. 2
290
+
291
+ Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In International Conference on Learning Representations (ICLR), 2015. 2, 9
292
+
293
+ Fan Yang, Zhilin Yang, and William W. Cohen. Differentiable learning of logical rules for knowledge base reasoning. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 9
294
+
295
+ Huanrui Yang, Wei Wen, and Hai Li. Deephoyer: Learning sparser neural network with differentiable scale-invariant sparsity measures. In International Conference on Learning Representations (ICLR), 2020. 2,5
296
+
297
+ Hanqing Zeng, Hongkuan Zhou, Ajitesh Srivastava, Rajgopal Kannan, and Viktor K. Prasanna. Graphsaint: Graph sampling based inductive learning method. In International Conference on Learning Representations (ICLR), 2020. 3, 8
298
+
299
+ 494
300
+
301
+ ## SUPPLEMENTARY MATERIAL
302
+
303
+ The supplementary material is organized as follows. First, we provide the experimental configurations and hyperparameters in Appendix A. Second, in Appendix B, we provide implementation details of the subgraph samplers we used. Next, we provide the ablation study on sparsification loss and SpaLoc's maximum arity in Appendix C, Appendix D, and Appendix E, respectively. Besides, we elaborate the calculation of information sufficiency in Appendix F. Finally, we define relations in the Family Tree reasoning benchmark in Appendix G.
304
+
305
+ ## A Experimental configuration
306
+
307
+ Table 6: Hyper-parameters for SpaLoc.
308
+
309
+ <table><tr><td/><td>Tasks</td><td>Depth</td><td>Breadth</td><td>Hidden Dims</td><td>Batch size</td><td>Subgraph size</td><td>$\tau$</td></tr><tr><td rowspan="7">Family Tree</td><td>HasFather</td><td>5</td><td>3</td><td>8</td><td>8</td><td>20</td><td>1</td></tr><tr><td>HasSister</td><td>5</td><td>3</td><td>8</td><td>8</td><td>20</td><td>2</td></tr><tr><td>Grandparent</td><td>5</td><td>3</td><td>8</td><td>8</td><td>20</td><td>2</td></tr><tr><td>Uncle</td><td>5</td><td>3</td><td>8</td><td>8</td><td>20</td><td>2</td></tr><tr><td>MGUncle</td><td>5</td><td>3</td><td>8</td><td>8</td><td>20</td><td>2</td></tr><tr><td>Family-of-three</td><td>5</td><td>3</td><td>8</td><td>8</td><td>20</td><td>2</td></tr><tr><td>Three-generations</td><td>5</td><td>3</td><td>8</td><td>8</td><td>20</td><td>2</td></tr><tr><td rowspan="3">Inductive KG</td><td>WN18RR</td><td>6</td><td>3</td><td>64</td><td>128</td><td>10</td><td>3</td></tr><tr><td>FB15K-237</td><td>6</td><td>3</td><td>64</td><td>64</td><td>20</td><td>3</td></tr><tr><td>NELL-995</td><td>6</td><td>3</td><td>64</td><td>128</td><td>10</td><td>3</td></tr><tr><td rowspan="3">Transductive KG</td><td>WN18RR</td><td>6</td><td>3</td><td>64</td><td>64</td><td>20</td><td>3</td></tr><tr><td>FB15K-237</td><td>6</td><td>3</td><td>64</td><td>64</td><td>20</td><td>3</td></tr><tr><td>NELL-995</td><td>6</td><td>3</td><td>64</td><td>64</td><td>20</td><td>3</td></tr></table>
310
+
311
+ We optimize all models with Adam [Kingma and Ba, 2015] and use an initial learning rate of 0.005. All experiments are under the supervised learning setup; we use Softmax-Cross-Entropy as the loss function.
312
+
313
+ Table A shows hyper-parameters used by SpaLoc. For all MLP inside SpaLoc, we use no hidden layer and the sigmoid activation. Across all experiments in this paper, the maximum arity of intermediate predicates (i.e., the "breadth") is set to 3 as a hyperparameter, which allows SpaLoc to realize all first-order logic (FOL) formulas with at most three variables, such as a "transitive relation rule." We set the specification threshold $\epsilon$ to 0.05 in our experiments. This value is chosen based on the validation accuracy of our model. In practice, we observe that any values between 0.01 and 0.1 do not significantly impact the performance of our method and the inference complexity. We also set the multiplier of the sparsification loss $\lambda$ to 0.01 in all SpaLoc’s experiments.
314
+
315
+ ## B Implementation of subgraph samplers
316
+
317
+ Both the neighbor expansion and the path sampler sample subgraphs by inducing from selected node sets. To deal with input hypergraphs with any arities, the samplers simplify the input hypergraphs into binary graphs. We define two nodes in the hypergraph are connected if they are covered by a hyperedge. Therefore, the neighbor expansion and path-finding algorithms used by the samplers can be applied to any hypergraphs for finding node sets. After enough nodes are collected, the sampler will induce a sub-hypergraph from the original hypergraph by preserving all of the hyperedges lying in the set.
318
+
319
+ ## C Ablation study on sparsification loss
320
+
321
+ We compare our Hoyer-Square sparsification loss against the ${L}_{1}$ and ${L}_{2}$ regularizers on the family tree datasets. In Table 7, we show the performance of SpaLoc trained with different sparsification losses. All models are tested on domains with 100 objects.
322
+
323
+ "Density" is the percentage of non-zero elements (NNZ) in the intermediate groundings. The lower the density, the better the sparsification and the lower the memory complexity. We can see that,
324
+
325
+ Table 7: Comparison of different sparsification losses.
326
+
327
+ <table><tr><td rowspan="2"/><td colspan="2">HasSister</td><td colspan="2">Grandparent</td><td colspan="2">Uncle</td></tr><tr><td>Accuracy</td><td>Density</td><td>Accuracy</td><td>Density</td><td>Accurcay</td><td>Density</td></tr><tr><td>${L}_{1}$</td><td>91.81</td><td>0.48%</td><td>99.8%</td><td>0.99%</td><td>74.69%</td><td>0.68%</td></tr><tr><td>${L}_{2}$</td><td>100</td><td>0.75%</td><td>100%</td><td>0.61%</td><td>94.46%</td><td>2.44%</td></tr><tr><td>${H}_{S}$</td><td>100</td><td>0.51%</td><td>100%</td><td>0.48%</td><td>100%</td><td>0.87%</td></tr></table>
328
+
329
+ Table 8: Comparison (Per-class Accuracy) of SpaLoc with different max arities on family tree reasoning benchmarks.
330
+
331
+ <table><tr><td/><td>HasSister</td><td>Grandparent</td><td>Uncle</td></tr><tr><td>Max Arity $= 2$</td><td>87.66</td><td>86.74</td><td>50.00</td></tr><tr><td>Max Arity $= 3$</td><td>100</td><td>100</td><td>100</td></tr></table>
332
+
333
+ compared with ${L}_{1}$ and ${L}_{2}$ regularizers, the Hoyer-Square loss yields a higher or comparable sparsity while maintaining nearly perfect accuracy.
334
+
335
+ ## D Ablation study on SpaLoc's maximum arity
336
+
337
+ In this section, we compare two different SpaLoc models with different maximum arity, to validate the necessity and effectiveness of hyperedges in inductive reasoning. Shown in Table 8 and Table 9, we compare SpaLoc models with max arity 2 and 3 . Note that, even if the input and output relations are binary, adding ternary edges in the intermediate representations significantly improves the result.
338
+
339
+ ## E Ablation Study on Label Adjustment
340
+
341
+ In this section, we compare our information sufficiency-based label adjustment method (IS) against two simple baselines: no adjustment ("NC"), and label smoothing (LS). In "LS", we multiply all positive labels with a constant $\alpha = {0.9}$ .
342
+
343
+ Results are shown in Table 10. Overall, our method (IS) outperforms both baselines, even when the average information sufficiency of training graphs is very low (e.g., when using the Node sampler). Especially on the HasFather task, using constant label smoothing only has a close-to-chance accuracy (50%). Combining our IS-based label adjustment with the Neighbor sampler yields the best result.
344
+
345
+ ## F Calculation of the information sufficiency
346
+
347
+ The crucial part in the computation of the information sufficiency is to count the number of $k$ -hop hyperpaths connecting a given set of nodes $\left\{ {{v}_{1},\ldots ,{v}_{r}}\right\}$ in a hypergraph. We use the incidence matrix to calculate this. Firstly, we use a $n \times m$ incidence matrix $B$ to represent the hypergraph $\mathcal{H} = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $n = \left| \mathcal{V}\right|$ and $m = \left| \mathcal{E}\right|$ , such that ${B}_{ij} = 1$ if the vertex ${v}_{i}$ and edge ${e}_{j}$ are incident and 0 otherwise. Next, ${B}^{\left( k\right) } \mathrel{\text{:=}} {\left( B{B}^{T}\right) }^{k - 1}B$ is the $k$ -hop incidence matrix of the hypergraph, i.e., ${B}_{ij}^{\left( k\right) }$ is the number of $k$ -hop paths that the vertex ${v}_{i}$ and edge ${e}_{j}$ are incident.
348
+
349
+ For example, when $r = 2$ , there are ${B}_{i}^{\left( k\right) }{B}_{j}^{T}k$ -hop paths connecting vertex ${v}_{i}$ and ${v}_{j}$ . When $r = 1$ , there are $\mathop{\sum }\limits_{j}{B}_{ij}^{\left( k\right) }k$ -hop paths connecting to vertex ${v}_{i}$ .
350
+
351
+ ## G Definition of Relations in Family Tree
352
+
353
+ The inputs predicates are: Father(x, y), Mother(x, y), Son(x, y), Daughter(x, y).
354
+
355
+ The target predicates are:
356
+
357
+ - HasFather $\left( x\right) \mathrel{\text{:=}} \exists a$ Father(x, a)
358
+
359
+ Table 9: Comparison (AUC-PR) of SpaLoc with different max arities on real-world knowledge graph inductive relation prediction task.
360
+
361
+ <table><tr><td/><td>WN18RR-v1</td><td>FB15k-237-v1</td><td>NELL-995-v1</td></tr><tr><td>Max Arity $= 2$</td><td>97.65</td><td>90.71</td><td>94.58</td></tr><tr><td>Max Arity $= 3$</td><td>98.18</td><td>99.73</td><td>100</td></tr></table>
362
+
363
+ Table 10: Comparison (per-class accuracy) for different label calibration methods.
364
+
365
+ <table><tr><td rowspan="2">Sampler</td><td colspan="3">HasFather</td><td colspan="3">HasSister</td></tr><tr><td>NC</td><td>LS</td><td>IS</td><td>NC</td><td>LS</td><td>IS</td></tr><tr><td>Node</td><td>50.00</td><td>50.00</td><td>50.00</td><td>50.00</td><td>50.00</td><td>80.72</td></tr><tr><td>Walk</td><td>50.00</td><td>52.41</td><td>100</td><td>59.90</td><td>75.13</td><td>93.16</td></tr><tr><td>Neighbor</td><td>50.00</td><td>51.63</td><td>100</td><td>75.29</td><td>78.06</td><td>98.01</td></tr></table>
366
+
367
+ - HasSister $\left( x\right) \mathrel{\text{:=}} \exists a\exists b$ Father $\left( {x, a}\right) \land$ Daughter $\left( {a, b}\right) \land \neg \left( {b = x}\right)$
368
+
369
+ - Grandparent $\left( {x, y}\right) \mathrel{\text{:=}} \exists a$ parent $\left( {x, a}\right) \land$ parent(a, y)
370
+
371
+ $\operatorname{parent}\left( {x, y}\right) \mathrel{\text{:=}}$ Father $\left( {x, y}\right) \vee$ Mother(x, y)
372
+
373
+ - Uncle $\left( {x, y}\right) \mathrel{\text{:=}} \exists a$ Grandparent $\left( {x, a}\right) \land \operatorname{Son}\left( {a, y}\right) \land \neg$ Father(x, y)
374
+
375
+ - MGUncle $\left( {x, y}\right) \mathrel{\text{:=}} \exists a\exists b$ Grandmother $\left( {x, a}\right) \land$ Mother $\left( {a, b}\right) \land \operatorname{Son}\left( {b, y}\right)$
376
+
377
+ $\texttt{Grandmother}(x, y) = {\exists a}\;\texttt{Parent}(x, a) \land \texttt{Mother}(a, y)$
378
+
379
+ - Family-of-three $\left( {x, y, z}\right) =$ Father $\left( {x, y}\right) \land$ Mother(x, z)
380
+
381
+ - Three-generations $\left( {x, y, z}\right) = \operatorname{Parent}\left( {x, y}\right) \land \operatorname{Parent}\left( {y, z}\right)$
382
+
383
+ We follow the dataset generation algorithm presented in Dong et al. [2019]. In detail, we simulate the growth of families to generate examples. For each new family member, we randomly assign gender and a pair of parents (can be none, which means it is the oldest person in the family tree) for it. After generating the family tree, we label the relationships according to the definitions above.
papers/LOG/LOG 2022/LOG 2022 Conference/m3aVA7ykn67/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,332 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § SPARSE AND LOCAL HYPERGRAPH REASONING NETWORKS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Reasoning about the relationships between entities from input facts (e.g., whether Ari is a grandparent of Charlie) generally requires explicit consideration of other entities that are not mentioned in the query (e.g., the parents of Charlie). In this paper, we present an approach for learning inference rules that solve problems of this kind in large, real-world domains, using sparse and local hypergraph reasoning networks (SpaLoc). SpaLoc is motivated by two observations from traditional logic-based reasoning: relational inferences usually apply locally (i.e., involve only a small number of individuals), and relations are usually sparse (i.e., only hold for a small percentage of tuples in a domain). We exploit these properties to make learning and inference efficient in very large domains by (1) using a sparse tensor representation for hypergraph neural networks, (2) applying a sparsification loss during training to encourage sparse representations, and (3) subsampling based on a novel information sufficiency-based sampling process during training. SpaLoc achieves state-of-the-art performance on several real-world, large-scale knowledge graph reasoning benchmarks, and is the first framework for applying hypergraph neural networks on real-world knowledge graphs with more than ${10}\mathrm{k}$ nodes.
12
+
13
+ § 18 1 INTRODUCTION
14
+
15
+ Performing graph reasoning in large domains, such as predicting the relationship between two entities based on facts given as input, is an important practical problem that arises in reasoning about many domains, including molecular modeling, knowledge networks, and collections of objects in the physical world [Schlichtkrull et al., 2018b, Veličković et al., 2020, Battaglia et al., 2016]. This paper focuses on an inductive learning-based approach that learns inference rules from data and uses them to make predictions in novel problem instances. Consider the problem of learning a rule that explains the grandparent relationship. Given a dataset of labeled family relationship graphs, we aim to build machine-learning algorithms that learn to predict a specific relationship (e.g., grandparent) based on other input relationships, such as father and mother. A crucial feature of such reasoning tasks is that: in order to predict the relationship between two entities (e.g., whether Ari is a grandparent of Charlie), we need to jointly consider other entities (e.g., the father and mother of Charlie).
16
+
17
+ A natural approach to this problem is to use hypergraph neural networks to represent and reason about higher-order relations: in a hypergraph, a hyper-edge may connect more than two nodes. As an example, Neural Logic Machines [NLM; Dong et al., 2019] present a method for solving graph reasoning tasks by maintaining hyperedge representations for all tuples consisting of up to $B$ entities, where $B$ is a hyperparameter. Thus, they can infer more complex finitely-quantified logical relations than standard graph neural networks that only consider binary relationships between entities [Morris et al., 2019b, Barceló et al., 2020]. However, there are two disadvantages of such a dense hypergraph representation. First, the training and inference require considering all entities in a domain simultaneously, such as all of the $N$ people in a family relationship database. Second, ehed, scale exponentially with respect to the number of entities considered in a single type of relationship: inferring the grandparent relationship between all pairs of entities requires $O\left( {N}^{3}\right)$ time and space complexity. In practice, for large graphs, these limitations make the training and inference intractable and hinder the application of methods such as NLMs in large-scale real-world domains. To address these two challenges, we draw two inspirations from traditional logic-based reasoning: logical rules (e.g., my parent's parent is my grandparent) usually apply locally (e.g., only three people are involved in a grandparent rule), and sparsely (e.g., the grandparent relationship is sparse across all pairs of people in the world). Thus, during training and inference, we don't need to keep track of the representation of all hyperedges but only the hyperedges that are related to our prediction tasks.
18
+
19
+ Inspired by these observations, we develop the Sparse and Local Hypergraph Reasoning Network (SpaLoc) for inducing sparse relational rules from data in large domains. First, we present a sparse tensor-based representation for encoding hyperedge relationships among entities and extend hypergraph reasoning networks to this representation. Instead of storing a dense representation for all hyperedges, it only keeps track of edges related to the prediction task, which exploits the inherent sparsity of hypergraph reasoning. Second, since we do not know the underlying sparsity structures $a$ priori, we propose a training paradigm to recover the underlying sparse relational structure among objects by regularizing the graph sparsity. During training, the graph sparsity measurement is used as a soft constraint, while during inference, SpaLoc uses it to explicitly prune out irrelevant edges to accelerate the inference. Third, during both training and inference, SpaLoc focuses on a local induced subgraph of the input graph, instead of considering all entities and their relations. This is achieved by a novel sub-graph sampling technique motivated by information sufficiency (IS). IS quantifies the amount of information in a sub-graph for predictions about a specific hyperedge. Since the information in a sub-sampled graph may be insufficient for predicting the relationship between a pair of entities, we also propose to use the information sufficiency measure to adjust training labels.
20
+
21
+ We study the learning and generalization properties of SpaLoc on a domain of relational reasoning in family-tree datasets and evaluate its performance on real-world knowledge-graph reasoning benchmarks. First, we show that, with our sparsity regularization, the computational complexity for inference can be reduced to the same order as the human expert-developed inference method, which significantly outperforms the baseline models. Second, we show that training via sub-graph sampling and label adjustment enables us to learn relational rules in real-world knowledge graphs with more than ${10}\mathrm{\;K}$ nodes, whereas other hypergraph neural networks can be barely applied to graphs with more than 100 nodes. SpaLoc achieves state-of-the-art performance on several real-world knowledge graph reasoning benchmarks, surpassing several existing binary-edge-based graph neural networks. Finally, we show the generality of SpaLoc by applying it to different hypergraph neural networks.
22
+
23
+ § 2 RELATED WORK
24
+
25
+ (Hyper-)Graph representation learning. (Hyper-)Graph representation learning methods, including message passing neural networks [Shervashidze et al., 2011, Kipf and Welling, 2017, Velickovic et al., 2018, Hamilton et al., 2017] and embedding-based methods [Bordes et al., 2013, Yang et al., 2015, Toutanova et al., 2015, Dettmers et al., 2018], have been widely used for knowledge discovery. Since these methods treat relations (edges) as fixed indices for node feature propagation, their computational complexity is usually small (e.g., $O\left( {NE}\right)$ ), and they can be applied to large datasets. However, the fixed relation representation and low complexity restrict the expressive power of these methods [Xu et al., 2019, 2020, Luo et al., 2021], preventing them from solving general complex problems like inducing rules that involve more than three entities. Moreover, some widely used methods, such as knowledge embeddings, are inherently transductive and cannot learn lifted rules that generalize to unseen domains. By contrast, the learned rules from SpaLoc are inductive and can be applied to completely novel domains with an entire collection of new entities, as long as the underlying patterns of relational inference remain the same.
26
+
27
+ Inductive rule learning. In addition to graph learning frameworks, many previous approaches have studied how to learn generalized rules from data, i.e., inductive logic programming (ILP) [Muggleton, 1991, Friedman et al., 1999], with recent work integrating neural networks into ILP systems to combat noisy and ambiguous inputs [Dong et al., 2019, Evans and Grefenstette, 2018, Sukhbaatar et al., 2015]. However, due to the large search space of target rules, the computational and memory complexities of these models are too high to scale up to many large real-world domains. SpaLoc addresses this scalability problem by leveraging the sparsity and locality of real-world rules and thus can induce knowledge with local computations.
28
+
29
+ Efficient training and inference methods. There is a rich literature on efficient training and inference of neural networks. Two directions that are relevant to us are model sparsification and sampling training. Han et al. [2016] proposed to prune and compress the weights of neural networks for efficiency, and Yang et al. [2020] adopted Hoyer-Square regularization to sparsify models. SpaLoc
30
+
31
+ < g r a p h i c s >
32
+
33
+ Figure 1: The overall training pipeline of SpaLoc, including sub-graph sampling with label adjustment (Sec. 3.3), sparse hypergraph reasoning networks (Sec. 3.1), and sparsity regularizations (Sec. 3.2). $I$ and $V$ denote the index tensor and value tensor, respectively.
34
+
35
+ extends this sparsification idea by adding regularization at intermediate sparse tensor groundings to encourage concise induction. Chiang et al. [2019] and Zeng et al. [2020] proposed to sample sub-graphs for GNN training and Teru et al. [2020] proposed to construct sub-graphs for link prediction. SpaLoc generalizes these sampling methods to hypergraphs and proposes the information sufficiency-based adjustment method to remedy the information loss introduced by sub-sampling.
36
+
37
+ § 3 SPALOC HYPERGRAPH REASONING NETWORKS
38
+
39
+ This section develops a training and inference framework for hypergraph reasoning networks. As illustrated in Fig. 1, we make hypergraph networks practical for large domains by using sparse tensors (Sec. 3.1). To encourage models to discover sparse interconnections, we add sparsity regularization to intermediate tensors (Sec. 3.2). We exploit the locality of the task by sampling subgraphs and compensate for information loss due to sampling through a novel label adjustment process (Sec. 3.3).
40
+
41
+ The fundamental structures used for both training and inference are hypergraphs $\mathcal{H} = \left( {\mathcal{V},\mathcal{E}}\right)$ , where $\mathcal{V}$ is a set of vertices and $\mathcal{E}$ is a set of hyperedges. Each hyperedge $e = \left( {{x}_{1},{x}_{2},\cdots ,{x}_{r}}\right)$ is an ordered tuple of $r$ elements ( $r$ is called the arity of the edge), where ${x}_{i} \in \mathcal{V}$ . We use $f : \mathcal{E} \rightarrow \mathcal{S}$ to denote a hyperedge representation function, which maps hyperedge $e$ to a feature in $\mathcal{S}$ . Domains $\mathcal{S}$ can be of various forms, including discrete labels, numbers, and vectors. For simplicity, we describe features associated with arity-1 edges as "node features" and features associated with the whole graph as "nullary" or "global" features.
42
+
43
+ A graph-reasoning task can be formulated as follows: given $\mathcal{H}$ and the input hyperedge representation functions $f$ associated with all hyperedges in $\mathcal{E}$ , such as node types and pairwise relationships (e.g., parent), our goal is to infer a target representation function ${f}^{\prime }$ for one or more hyperedges, i.e. ${f}^{\prime }\left( e\right)$ for some $e \in \mathcal{E}$ , such as predicting a new relationship (e.g., grandparent(Kim, Skye)). We consider two problem settings in this paper. The first one is to predict a target relation over all edges in the graph. The second one is to predict the relation on one single edge.
44
+
45
+ § 3.1 SPARSE HYPERGRAPH REASONING NETWORKS
46
+
47
+ SpaLoc is a general formulation that can be applied to a range of hypergraph reasoning frameworks. We will primarily develop our method based on the Neural Logic Machine [NLM; Dong et al., 2019], a state-of-the-art inductive hypergraph reasoning network. We choose an NLM as the backbone network in SpaLoc because its tensor representation naturally generalizes to sparse cases. In Sec. 4.1, we also integrate SpaLoc with other hypergraph neural networks like k-GNNs [Morris et al., 2019a].
48
+
49
+ In SpaLoc, hypergraph features such as node features and edge features are represented as sparse tensors. For example, as shown in Fig. 1, at the input level, the parental relationship can be represented as a list of indices and values. In this case, each index(x, y)is an ordered pair of integers, and the corresponding value is 1 if node $x$ is a parent of node $y$ . To leverage the sparsity in relations, we treat values for indices not in the list as 0 . This convention also extends to vector representations of nodes and hyperedges. In general, vector representations $f\left( {{x}_{1},{x}_{2},\cdots ,{x}_{r}}\right)$ associated with all hyperedges of arity $r$ are represented as coordinate-list (COO) format sparse tensors [Fey and Lenssen,2019]. That is, each tensor is represented as two tensors $\mathcal{F} = \left( {\mathbf{I},\mathbf{V}}\right)$ , each with $M$ entries. The first tensor is an index tensor, of shape $M \times r$ , in which each row denotes a tuple $\left( {{x}_{1},{x}_{2},\cdots ,{x}_{r}}\right)$ . The second tensor $\mathbf{V}$ is a value tensor, of shape $M \times D$ , where $D$ is the length of $f\left( {{x}_{1},{x}_{2},\cdots ,{x}_{r}}\right)$ . Each row
50
+
51
+ < g r a p h i c s >
52
+
53
+ Figure 2: A running example of a single layer SpaLoc: inferring the binary relationship of $\operatorname{son}\left( {x,y}\right) \mathrel{\text{ := }} \operatorname{male}\left( x\right) \land \operatorname{parent}\left( {y,x}\right)$ from the attribute male and the binary relationship parent. The model first expands the unary tensor (containing the male information) into a binary relation, indicating whether the first entity in the pair is a male. Then, the permutation operation fuses the information for(x, y)and(y, x). For each pair(x, y), we now have four predicates: whether $\mathrm{x}$ is a parent of $\mathrm{y}$ , whether $\mathrm{y}$ is a parent of $\mathrm{x}$ , whether $\mathrm{x}$ is a male, and whether $\mathrm{y}$ is a male. Finally, a neural network predicts the target relationship son for each pair(x, y). Blue entries denote values that are reduced from high-arity tensors. Green entries are expanded from low-arity tensors. Yellow entries are created by the "permutation" operation. Gray entries are zero paddings.
54
+
55
+ $\mathbf{V}\left\lbrack i\right\rbrack$ denotes the vector representation associated with the tuple $\mathbf{I}\left\lbrack i\right\rbrack$ . For all tuples that are not recorded in I, their representations are treated as all-zero vectors.
56
+
57
+ Based on the sparse feature representations, a sparse hypergraph reasoning network is composed of multiple relational reasoning layers (RRLs) that operate on hyperedge representations. Fig. 1 shows the detailed computation graph of an SpaLoc model with ternary relations. The input to first RRL is the input information (e.g., demographic information and parental relationships in a person database). Each RRL computes a set of new hyperedge features as inputs to the next layer. The last layer output will be the final prediction of the task (e.g., the "son" relationship). During training time, we will supervise the network with ground-truth labels for final predictions.
58
+
59
+ Next, we describe the computation of individual RRLs. The descriptions will be brief and focus on differences from the original NLM layers. The input to and output of each RRL are both $R + 1$ sparse tensors of different arities, where $R$ is the maximum arity of the network. Let ${\mathcal{F}}^{\left( i - 1,r\right) }$ denote the input of arity $r$ of layer $i$ , the output of this layer ${\mathcal{F}}^{\left( i,r\right) }$ is computed as the following:
60
+
61
+ $$
62
+ {\mathcal{F}}^{\left( i,r\right) } = {\mathrm{{NN}}}^{\left( i,r\right) }\left( {\operatorname{PERMUTE}\left( {\operatorname{CONCAT}\left( {{\mathcal{F}}^{\left( i - 1,r\right) },\operatorname{EXPAND}\left( {\mathcal{F}}^{\left( i - 1,r - 1\right) }\right) ,\operatorname{REDUCE}\left( {\mathcal{F}}^{\left( i - 1,r + 1\right) }\right) }\right) }\right) }\right)
63
+ $$
64
+
65
+ In a nutshell, the EXPAND operation propagates representations from lower-arity tensors to a higher-arity form (e.g., from each node to the edges connected to it). The REDUCE operation aggregates higher-arity representations into a lower-arity form (e.g., aggregating the information from all edges connected to a node into that node). The PERMUTE operation fuses the representations of hyperedges that share the same set of entities but in different orders, such as(A, B)and(B, A). Finally, NN is a linear layer with nonlinear activation that computes the representation for the next layer. Fig. 2 gives a concrete running example of a single RRL.
66
+
67
+ Formally, the EXPAND operation takes a sparse tensor $\mathcal{F}$ of arity $r$ and creates a new sparse tensor ${\mathcal{F}}^{\prime }$ with arity $r + 1$ . This is implemented by duplicating each entry $f\left( {{x}_{1},\cdots ,{x}_{r}}\right)$ in $\mathcal{F}$ by $N$ times, creating the $N$ new vector representations for $\left( {{x}_{1},\cdots ,{x}_{r},{o}_{i}}\right)$ for all $i \in \{ 1,2,\cdots ,N\}$ , where $N$ is the number of nodes in the hypergraph.
68
+
69
+ The REDUCE operation takes a sparse tensor $\mathcal{F} = \left( {\mathbf{I},\mathbf{V}}\right)$ of arity $r$ and creates a new sparse tensor ${\mathcal{F}}^{\prime }$ with arity $r - 1$ : it aggregates all information associated with all $r$ -tuples: $\left( {{x}_{1},{x}_{2},\cdots ,{x}_{r - 1},?}\right)$ with the same $r - 1$ prefix. In SpaLoc, the aggregation function is chosen to be max. Thus,
70
+
71
+ $$
72
+ {f}^{\prime }\left( {{x}_{1},\cdots ,{x}_{r - 1}}\right) = \mathop{\max }\limits_{{z : \left( {{x}_{1},\cdots ,{x}_{r - 1},z}\right) \in \mathbf{I}}}f\left( {{x}_{1},\cdots ,{x}_{r - 1},z}\right) .
73
+ $$
74
+
75
+ The CONCAT operation concatenates the input hyperedge representations along the channel dimension (i.e., the dimension corresponding to different relational features). Specifically, it first adds missing entries with all-zero values to the input hyperedge representations so that they have exactly the same set of indices $\mathbf{I}$ . It then concatenates the $\mathbf{V}$ ’s of inputs along the channel dimension.
76
+
77
+ The PERMUTE operation takes a sparse tensor $\mathcal{F}$ of arity $r$ and creates a new sparse tensor ${\mathcal{F}}^{\prime }$ of the same arity. However, the length of the vector representation will grow from $D$ to ${D}^{\prime } = r! \times D$ . It fuses the representation of hyperedges that share the same set of entities. Mathematically,
78
+
79
+ $$
80
+ {f}^{\prime }\left( {{x}_{1},\cdots ,{x}_{r}}\right) = \mathop{\operatorname{CONCAT}}\limits_{{\left( {{x}_{1}^{\prime },\cdots ,{x}_{r}^{\prime }}\right) \text{ is a permutation of }\left( {{x}_{1},\cdots ,{x}_{r}}\right) }}\left\lbrack {f\left( {{x}_{1}^{\prime },\cdots ,{x}_{r}^{\prime }}\right) }\right\rbrack .
81
+ $$
82
+
83
+ If a permutation of $\left( {{x}_{1},\cdots ,{x}_{r}}\right)$ does not exist in $\mathcal{F}$ , it will be treated as an all-zero vector. Thus, the number of entries $M$ may increase or remain unchanged.
84
+
85
+ Finally, the $i$ -th sparse relational reasoning layer has $R + 1$ linear layers ${L}^{\left( i,0\right) },{L}^{\left( i,1\right) },\cdots ,{L}^{\left( i,R\right) }$ with nonlinear activations (e.g., ReLU) as submodules with arities 0 through $R$ . For each arity $r$ , we will concatenate the feature tensors expanded from arity $r - 1$ , those reduced from arity $r + 1$ , and the output from the previous layer, apply a permutation, and apply ${L}^{\left( i,r\right) }$ on the derived tensor.
86
+
87
+ To make the intermediate features ${\mathcal{F}}^{\left( i,r\right) }$ sparse, SpaLoc uses a gating mechanism. In SpaLoc, for each linear layer ${L}^{\left( i,r\right) }$ , we add a linear gating layer, ${L}_{g}^{\left( i,r\right) }$ , which has sigmoid activation and outputs a scalar value in range $\left\lbrack {0,1}\right\rbrack$ that can be interpreted as the importance score for each hyperedge. During training, we modulate the output of ${L}^{\left( i,r\right) }$ with this importance value. Specifically, the output of layer $i$ arity $r$ is ${\mathcal{F}}^{\left( i,r\right) } = {L}^{\left( i,r\right) }\left( \mathcal{F}\right) \odot {L}_{g}^{\left( i,r\right) }\left( \mathcal{F}\right)$ , where $\mathcal{F}$ is the input sparse tensor, and $\odot$ is the element-wise multiplication operation. Note that we are using the same gate value to modulate each channel dimension of ${L}^{\left( i,r\right) }\left( \mathcal{F}\right)$ . During inference, we can prune out edges with small importance scores ${L}_{g}^{\left( i,r\right) } < \epsilon$ , where $\epsilon$ is a scalar hyperparameter. We use $\epsilon = {0.05}$ in our experiments.
88
+
89
+ We have described the computation of a sparsified Neural Logic Machine. However, we do not know a priori the sparse structures of intermediate layer outputs at training time, nor at inference time before we actually compute the output. Thus, we have to start from the assumption of a fully-connected dense graph. In the following sections, we will show how to impose regularization to encourage learning sparse features. Furthermore, we will present a subsampling technique to learn efficiently from large input graphs.
90
+
91
+ Remark. Even when the inputs have only unary and binary relations, allowing intermediate tensor representations of higher arity to be associated with hyperedges increases the expressiveness of NLMs [Dong et al.,2019], and Luo et al. [2021] proves that NLMs with max arity $k + 1$ are as expressive as $k$ -GNN hypergraph models (note that the regular GNN is 1-GNN). An intuitive example is that, in order to determine the grandparent relationship, we need to consider all 3-tuples of entities, even though the input relations are only binary. Despite their expressiveness, hyperedge-based NLMs cannot be directly applied to large-scale graphs. For a graph with more than 10,000 nodes, such as Freebase [Bollacker et al., 2008], it is almost impossible to store vector representations for all of the ${N}^{3}$ tuples of arity 3 . Our key observation to improve the efficiency of NLMs is that relational rules are usually applied sparsely (Sec. 3.2) and locally (Sec. 3.3).
92
+
93
+ § 3.2 SPARSIFICATION THROUGH HOYER REGULARIZATION
94
+
95
+ We use a regularization loss to encourage hyperedge sparsity, which is based on the Hoyer sparsity measure (2004). Let $x$ (in our case, the edge gate $g\left( {{x}_{1},\cdots ,{x}_{r}}\right)$ ) be a vector of length $n$ . Then
96
+
97
+ $$
98
+ \operatorname{Hoyer}\left( x\right) = \frac{\left( {\mathop{\sum }\limits_{i}^{n}\left| {x}_{i}\right| }\right) /\sqrt{\mathop{\sum }\limits_{i}^{n}{x}_{i}^{2}} - 1}{\sqrt{n} - 1}.
99
+ $$
100
+
101
+ The Hoyer measure takes values from 0 to 1 . The larger the Hoyer measure of a tensor, the denser the tensor is. In order to assign weights to different tensors based on their size, we use the Hoyer-Square
102
+
103
+ measure [Yang et al., 2020],
104
+
105
+ $$
106
+ {H}_{S}\left( x\right) = \frac{{\left( \mathop{\sum }\limits_{i}^{n}\left| {x}_{i}\right| \right) }^{2}}{\mathop{\sum }\limits_{i}^{n}{x}_{i}^{2}},
107
+ $$
108
+
109
+ which ranges from 1 (sparsest) to $n$ (densest). Intuitively, the Hoyer-Square measure is more suitable than ${L}_{1}$ or ${L}_{2}$ regularizers for graph sparsification since it encourages large values to be close to 1 and others to be zero, i.e., extremity. It has been widely used in sparse neural network training and has shown better performance than other sparse measures [Hurley and Rickard, 2009]. We empirically compare ${H}_{S}$ with other sparsity measures in Appendix C.
110
+
111
+ The overall training objective of SpaLoc is the task objective plus the sparsification loss, $\mathcal{L} =$ ${\mathcal{L}}_{\text{ task }} + \lambda {\mathcal{L}}_{\text{ density }}$ , where ${\mathcal{L}}_{\text{ density }}$ is the sum of the ${H}_{S}$ , divided by the sum of the sizes of these tensors.
112
+
113
+ § 3.3 SUBGRAPH TRAINING
114
+
115
+ Regularization enables us to learn a sparse model that will be efficient at inference time, but does not address the problem of training on large graphs. We describe a novel strategy that substantially reduces training complexity. It is based on the observation that an inferred relation among a set of entities generally depends only on a small set of other entities that are "related to" the target entities in the hypergraph, in the sense that they are connected via short paths of relevant relations.
116
+
117
+ Specifically, we employ a sub-graph sampling and label adjustment procedure. Here, we first present a measure to quantify the sufficiency of information in a sub-sampled graph for determining the relationship between two entities, namely, information sufficiency. Next, we present a sub-graph sampling procedure designed to maximize the information sufficiency for training. We further show that sub-graph sampling can also be employed at inference time. Finally, since information loss is inevitable during sampling, we further propose a training label adjustment process based on the information sufficiency.
118
+
119
+ Information sufficiency. Let ${\mathcal{H}}_{S} = \left( {{\mathcal{V}}_{S},{\mathcal{E}}_{S}}\right)$ be a sub-hypergraph of hypergraph $\mathcal{H} = \left( {\mathcal{V},\mathcal{E}}\right)$ , and ${e}^{ * } = \left( {{y}_{1},\ldots ,{y}_{r}}\right)$ be a target hyperedge in ${\mathcal{H}}_{S}$ , where ${y}_{1},\cdots ,{y}_{r} \in {\mathcal{V}}_{S} \subset \mathcal{V}$ . Intuitively, in order to determine the label for this hyperedge, we need to consider all "paths" that connect the nodes $\left\{ {{y}_{1},\ldots ,{y}_{r}}\right\}$ . More formally, we say a sequence of $K$ hyperedges $\left( {{e}_{1},\ldots ,{e}_{K}}\right)$ , represented as
120
+
121
+ $$
122
+ \frac{\left( {x}_{1}^{1},\cdots ,{x}_{{r}_{1}}^{1}\right) }{{e}_{1}},\frac{\left( {x}_{1}^{2},\cdots ,{x}_{{r}_{2}}^{2}\right) }{{e}_{2}},\cdots ,\frac{\left( {x}_{1}^{K},\cdots ,{x}_{{r}_{k}}^{K}\right) }{{e}_{K}},
123
+ $$
124
+
125
+ is a hyperpath for nodes $\left\{ {{y}_{1},\cdots {y}_{r}}\right\}$ if and only if $\left\{ {{y}_{1},\cdots {y}_{r}}\right\} \subset \mathop{\bigcup }\limits_{{j = 1}}^{K}{e}_{j}$ and ${e}_{j} \cap {e}_{j + 1} \neq \varnothing$ for all $j$ . In a graph with only binary edges, this is equivalent to the existence of a path from one node ${y}_{1}$ to another node ${y}_{2}$ . We define the information sufficiency measure for a hyperedge ${e}^{ * }$ in subgraph ${\mathcal{H}}_{S}$ as $\left( \frac{0}{0}\right.$ is defined as 1 .)
126
+
127
+ $$
128
+ \operatorname{IS}\left( {\left( {{y}_{1},\cdots ,{y}_{r}}\right) \mid {\mathcal{H}}_{S},\mathcal{H}}\right) \mathrel{\text{ := }} \frac{\# \text{ Paths connecting }\left( {{y}_{1},\cdots ,{y}_{r}}\right) \text{ in }{\mathcal{H}}_{S}}{\# \text{ Paths connecting }\left( {{y}_{1},\cdots ,{y}_{r}}\right) \text{ in }\mathcal{H}}.
129
+ $$
130
+
131
+ In practice, we approximate IS by only counting the number of paths whose length is less than a task-dependent threshold $\tau$ for efficiency. The number of paths in a large graph can be precomputed and cached before training, and the overhead of counting paths in a sampled graph is small, so this computation does not add much overhead to training and inference. When input graphs have maximum arity 2, paths can be counted efficiently by taking powers of the graph adjacency matrix.
132
+
133
+ < g r a p h i c s >
134
+
135
+ Figure 3: Subgraph training contains two steps. First, we sample a subset of nodes from the whole graph. Next, we adjust labels for edges in the sub-sampled graph. $I{S}_{1} = 0$ because no paths connecting two nodes are sampled, while $I{S}_{2} = 1$ because all paths connecting two nodes are sampled.
136
+
137
+ Subgraph sampling. During training, each data point is a tuple $\left( {\mathcal{H},f,{f}^{\prime }}\right)$ where $\mathcal{H}$ is the input graph, $f$ is the input representation, and ${f}^{\prime }$ is the desired output labels. We sample a subgraph ${\mathcal{H}}^{\prime } \subset \mathcal{H}$ , and train models to predict the value of ${f}^{\prime }$ on ${\mathcal{H}}^{\prime }$ given $f$ . For example, we train models to predict the grandparent relationship between all pairs of entities in ${\mathcal{H}}^{\prime }$ based on the parent relationship between entities in ${\mathcal{H}}^{\prime }$ . Thus, our goal is to find a subgraph that retains most of the paths connecting nodes in this subgraph. We achieve this using a neighbor expansion sampler that uniformly samples a few nodes from $\mathcal{V}$ as the seed nodes. It then samples new nodes connected with one of the nodes in the graph into the sampled graph and runs this "expansion" procedure for multiple iterations to get ${\mathcal{V}}_{S}$ . Finally, we include all edges that connect nodes in ${\mathcal{V}}_{S}$ to form the final subsampled hypergraph.
138
+
139
+ When the task is to infer the relations between a single pair of entities ${f}^{\prime }\left( {{y}_{1},{y}_{2}}\right)$ given the input representation $f$ , a similar sub-sampling idea can also be used at inference time to further speed it up. Specifically, we use a path sampler, which samples paths connecting ${y}_{1}$ and ${y}_{2}$ and induces a subgraph from these paths. We provide ablation studies on different sampling strategies in Sec. 4.1. The implementation details of our information sufficiency and samplers are in Appendices B and F.
140
+
141
+ Training label adjustment with IS. Due to the information loss caused by graph subsampling, the information contained in the subgraph may not be sufficient to make predictions about a target
142
+
143
+ Table 1: Results (Per-class Accuracy) on family tree reasoning benchmarks. Models are trained on domains with 20 to 2000 entities, and tested on domains with 100 entities. Minus mark means the model runs out of memory or cannot handle ternary predicates. All experiments are conducted on a single NVIDIA 3090 GPU with 24GB memory. The standard errors are computed based on three random seeds.
144
+
145
+ max width=
146
+
147
+ Family Tree 2|c|MemNN 2|c|$\partial$ ILP 2|c|NLM 2|c|Grall (R-GCN) 2|c|SpaLoc (Ours)
148
+
149
+ 1-11
150
+ ${N}_{\text{ train }}$ 20 2,000 20 2,000 20 2,000 20 2,000 20 2,000
151
+
152
+ 1-11
153
+ HasFather 65.24 - 100 - 100 - 100 100 ${100}_{\pm {0.00}}$ ${100} \pm {0.00}$
154
+
155
+ 1-11
156
+ HasSister 66.21 - 100 - 100 - 97.05 97.95 ${100} \pm {0.00}$ ${98.01}_{\pm {0.04}}$
157
+
158
+ 1-11
159
+ Grandparent 64.57 - 100 - 100 - 99.95 98.08 ${100}_{\pm {0.00}}$ ${100}_{\pm {0.00}}$
160
+
161
+ 1-11
162
+ Uncle 64.82 - 100 - 100 - 97.87 96.50 ${100}_{\pm {0.00}}$ ${100}_{\pm {0.00}}$
163
+
164
+ 1-11
165
+ MGUncle 80.93 - 100 - 100 - 54.67 71.29 ${100} \pm {0.00}$ ${100} \pm {0.00}$
166
+
167
+ 1-11
168
+ FamilyOfThree - - - - 100 - - - ${100} \pm {0.00}$ ${100} \pm {0.00}$
169
+
170
+ 1-11
171
+ ThreeGenerations - - - - 100 - - - ${100} \pm {0.00}$ ${100} \pm {0.00}$
172
+
173
+ 1-11
174
+
175
+ < g r a p h i c s >
176
+
177
+ Figure 4: The memory usage and the inference time of each sample vs. the number of objects in the evaluation domains. SpaLoc reduces the memory complexity from $O\left( {n}^{3}\right)$ to $O\left( {n}^{2}\right)$ and achieves significant runtime speedup.
178
+
179
+ relationship. For example, in a family relationship graph, removing a subset of nodes may cause the system to be unable to conclude whether a specific person $x$ has a sibling.
180
+
181
+ Thus, we propose to adjust the model training by assigning each example ${f}^{\prime }\left( {{y}_{1},\cdots ,{y}_{r}}\right)$ with a soft label, as illustrated in Fig. 3. Consider a binary classification task ${f}^{\prime }$ . That is, function ${f}^{\prime }$ is a mapping from a hyperedge tuple of arity $r$ to $\{ 0,1\}$ . Denote the model prediction as ${\widehat{f}}^{\prime }$ . Typically, we train the SpaLoc model with a binary cross-entropy loss between $\widehat{{f}^{\prime }}$ and the ground truth ${f}^{\prime }$ . In our subgraph training, we instead compute a binary cross-entropy loss between ${\widehat{f}}^{\prime }$ and ${f}_{{\mathcal{H}}_{S}}^{\prime } \odot {IS}$ , where ${\mathcal{H}}_{S}$ is the sub-sampled graph. Mathematically,
182
+
183
+ $$
184
+ \left( {{f}_{{\mathcal{H}}_{S}}^{\prime } \odot {IS}}\right) \left( {{y}_{1},\cdots ,{y}_{r}}\right) \triangleq {f}_{{\mathcal{H}}_{S}}^{\prime }\left( {{y}_{1},\cdots ,{y}_{r}}\right) \cdot {IS}\left( {\left( {{y}_{1},\cdots ,{y}_{r}}\right) \mid {\mathcal{H}}_{S},\mathcal{H}}\right) .
185
+ $$
186
+
187
+ We empirically compare IS with other label smoothing methods in Appendix E.
188
+
189
+ § 4 EXPERIMENTS
190
+
191
+ In this section, we compare SpaLoc with other methods in two aspects: accuracy and efficiency on large domains. We first compare SpaLoc with other baseline models on a synthetic family tree reasoning benchmark. Since we know the underlying relational rules of the task and have fine-grained control over training/testing distributions, we use this benchmark for ablation studies about the space and time complexity of our model and two design choices (different sampling techniques and different label adjustment techniques). We further extend the results to several real-world knowledge-graph reasoning benchmarks.
192
+
193
+ § 4.1 FAMILY TREE REASONING
194
+
195
+ We first evaluate SpaLoc on a synthetic family-tree reasoning benchmark for inductive logic programming. The goal is to induce target family relationships or member properties in the test domains based on four input relations: Son, Daughter, Father, and Mother. Details are defined in Appendix G.
196
+
197
+ Baseline. We compare SpaLoc against four baselines. The first three are Memory Networks [MemNNs; Sukhbaatar et al., 2015], ∂ILP [Evans and Grefenstette, 2018], and Neural Logic Machines [NLMs; Dong et al., 2019], which are state-of-the-art models for relational rule learning tasks. For these models, we follow the configuration and setup in Dong et al. [2019]. The fourth baseline is an inductive link prediction method based on graph neural networks, GraIL [Teru et al., 2020]. Since GraIL can only be used for link prediction, we use the full-batch R-GCN [Schlichtkrull et al., 2018b], the backbone network of GraIL, for node property predictions.
198
+
199
+ Table 2: Per-class Accuracy, per-sample inference time (ms), and memory usage (MB) when applying SpaLoc on 2-GNNs. Recall that 1-GNN is the standard GNN with only binary edge message passing. Models are tested on domains with 200 entities.
200
+
201
+ max width=
202
+
203
+ 2*X 3|c|Uncle 3|c|Grandparent
204
+
205
+ 2-7
206
+ Acc. Time Mem. Acc. Time Mem.
207
+
208
+ 1-7
209
+ NLM 100 133.8 3,846 100 135.0 3,846
210
+
211
+ 1-7
212
+ SpaLoc + NLM 100 37.2 214 100 23.9 181
213
+
214
+ 1-7
215
+ 2-GNN 100 145.1 5,126 100 145.5 5,126
216
+
217
+ 1-7
218
+ SpaLoc + 2-GNN 100 23.7 645 100 19.2 519
219
+
220
+ 1-7
221
+
222
+ Table 3: Comparison of different samplers. The first column shows the size of the subsampled graph during training $\left( {N}_{s}\right)$ and the full training graph(N). Models are tested on domains with 100 entities.
223
+
224
+ max width=
225
+
226
+ 2*${N}_{s}/N$ 2|c|Node 2|c|Walk 2|c|Neighbor
227
+
228
+ 2-7
229
+ Acc MIS Acc MIS Acc MIS
230
+
231
+ 1-7
232
+ 20 /50 100 54.82 100 85.14 100 89.78
233
+
234
+ 1-7
235
+ 20 / 200 100 33.05 100 71.51 100 80.60
236
+
237
+ 1-7
238
+ 20 / 500 58.18 27.27 100 78.22 100 78.70
239
+
240
+ 1-7
241
+ 20 / 1,000 1.84 24.49 100 77.18 100 78.38
242
+
243
+ 1-7
244
+ 20 / 2,000 0 19.66 100 79.69 100 78.53
245
+
246
+ 1-7
247
+
248
+ Table 4: Results (AUC-PR) on real-world knowledge graph inductive reasoning datasets from GraIL.
249
+
250
+ max width=
251
+
252
+ 2*Model 4|c|WN18RR 4|c|FB15k-237 4|c|NELL-995
253
+
254
+ 2-13
255
+ v1 v2 v3 v4 v1 v2 v3 v4 v1 v2 v3 v4
256
+
257
+ 1-13
258
+ Neural-LP 86.02 83.78 62.90 82.06 69.64 76.55 73.95 75.74 64.66 83.61 87.58 85.69
259
+
260
+ 1-13
261
+ DRUM 86.02 84.05 63.20 82.06 69.71 76.44 74.03 76.20 59.86 83.99 87.71 85.94
262
+
263
+ 1-13
264
+ RuleN 90.26 89.01 76.46 85.75 75.24 88.70 91.24 91.79 84.99 88.40 87.20 80.52
265
+
266
+ 1-13
267
+ Grall 94.32 94.18 85.80 92.72 84.69 90.57 91.68 94.46 86.05 92.62 93.34 87.50
268
+
269
+ 1-13
270
+ TACT 96.15 97.95 90.58 96.15 88.73 94.20 97.10 98.30 94.87 96.58 95.70 96.12
271
+
272
+ 1-13
273
+ SpaLoc 98.18 99.83 96.66 99.30 99.73 99.38 99.53 99.39 100 98.27 96.19 97.37
274
+
275
+ 1-13
276
+
277
+ Accuracy & Scalability. Table 1 summarizes the result. Overall, SpaLoc achieves near-perfect performance across all prediction tasks, on par with the inductive logic programming-based method $\partial$ ILP and the baseline model NLM. This suggests that our sparsity regularizations and sub-graph sampling do not affect model accuracy. Importantly, our SpaLoc framework has drastically increased the scalability of the method: SpaLoc can be trained on graphs with 2000 nodes, which is infeasible for the baseline NLM model due to memory issues.
278
+
279
+ Another essential comparison is between GraIL and SpaLoc. GraIL is a graph neural network-based approach that only considers relationships between binary pairs of entities. This is sufficient for simple tasks such as HasFather, but not for more complex tasks such as Maternal Great Uncle (MGUncle). By contrast, SpaLoc explicitly reasons about hyperedges and solves more complex tasks.
280
+
281
+ Runtime & Memory. We study the time and memory complexity of SpaLoc against NLM on the HasSister and Grandparent tasks. Results are shown in Fig. 4, where we plot the curve of average memory consumption and inference time as a function of the input graph size. We fit a cubic polynomial equation to the data points to approximate the learned inference complexity of SpaLoc. The experimental results show that our method can reduce the space complexity from the original $O\left( {n}^{3}\right)$ complexity of NLM to approximately $O\left( {n}^{2}\right)$ . Note that this learned network has the same complexity as the optimal relational rule that can be designed to solve both tasks. The inference time also gets significantly improved.
282
+
283
+ Application to other hypergraph neural networks. SpaLoc is a general framework for scaling up hypergraph neural networks rather than a method that can only be used on NLMs. Here we apply our framework SpaLoc to a new method, k-GNN [Morris et al., 2019a] on the family tree benchmark. Specifically, we use a fully-connected k-hypergraph. The edge embeddings are initialized as a one-hot encoding of the input relationship. Shown in Table 2, we see consistent improvements in terms of inference speed and memory cost for k-GNNs and NLMs.
284
+
285
+ Ablation: Subgraph sampling. We compare our neighbor expansion sampler with two other sub-graph samplers, proposed in Zeng et al. [2020]: random node (Node) and random walk (Walk) samplers. We compare these samplers with two metrics: the final accuracy of the model and the average information sufficiency of all pairs of nodes in the sub-sampled graphs (MIS). Table 3 shows the result on the Grandparent task. The Node sampler does not leverage locality, so the performance of models and MIS drop as the domain size grows larger. The SpaLocs trained with Walk and Neighbor samplers perform similarly well in terms of test accuracy. Note that the accuracy results are consistent with the MIS results: comparing the Node sampler and others, we see that a higher MIS score leads to higher test accuracy. This supports the effectiveness of our proposed information sufficiency measure.
286
+
287
+ § 4.2 REAL-WORLD KNOWLEDGE GRAPH REASONING
288
+
289
+ To further demonstrate the scalability of SpaLoc, we apply it to complete real knowledge graphs. We test SpaLoc on both inductive and transductive relation prediction tasks, following GraIL [Schlichtkrull et al., 2018a]. In this setting, the test-task time is to infer the relationship on a given edge, so test-time graph subsampling is used. We evaluate the models with a classification metric, the area under the precision-recall curve (AUC-PR).
290
+
291
+ In the inductive setting, the training and evaluation graphs are disjoint sub-graphs extracted from WN18RR [Dettmers et al., 2018], FB15k-237 [Toutanova et al., 2015], and NELL-995 [Xiong et al., 2017]. For each knowledge graph, there are four versions with increasing sizes. In the transductive setting, we use the standard WN18RR, FB15k-237, and NELL-995 benchmarks. For WN18RR and FB15k-237, we use the original splits; for NELL-995, we use the split provided in GraIL. We also include the Hit@10 metric used by knowledge graph embedding methods. Following the setting of GraIL, we rank each test triplet among 50 randomly sampled negative triplets.
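+ The Hit@10 protocol just described can be sketched as follows, assuming a hypothetical `score(h, r, t)` function exposed by a trained model (illustrative code; the real evaluation may filter negatives and corrupt heads as well as tails): each true triplet is ranked against 50 corrupted triplets, and a hit is counted when it lands in the top 10.

```python
import random

def hits_at_10(test_triplets, all_entities, score, num_negatives=50):
    """Rank each true triplet against sampled negatives and report Hit@10.

    all_entities: list of entity ids; score(h, r, t) returns a plausibility score.
    Negatives are drawn by corrupting the tail (a simplification: no filtering).
    """
    hits = 0
    for h, r, t in test_triplets:
        negatives = random.sample(all_entities, num_negatives)
        true_score = score(h, r, t)
        rank = 1 + sum(score(h, r, n) >= true_score for n in negatives)
        hits += rank <= 10
    return hits / len(test_triplets)
```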
292
+
293
+ Baselines. We compare SpaLoc with several state-of-the-art models, including Neural LP [Yang et al., 2017], DRUM [Sadeghian et al., 2019], RuleN [Meilicke et al., 2018], GraIL, and TACT [Chen et al., 2021]. For the transductive learning tasks, we compare SpaLoc with four representative knowledge graph embedding methods: TransE [Bordes et al., 2013], DistMult [Yang et al., 2015], ComplEx [Trouillon et al., 2017], and RotatE [Sun et al., 2019].
294
+
295
+ Results. Table 4 and Table 5 show the inductive and transductive relation prediction results, respectively. In the inductive setting, SpaLoc significantly outperforms all baselines on all datasets. This demonstrates the scalability of SpaLoc on large-scale real-world data. SpaLoc is the only model that explicitly uses hyperedge representations, while none of the existing hypergraph neural networks are directly applicable to such large graphs due to their memory and time complexities. In the transductive setting, SpaLoc outperforms all knowledge embedding (KE) methods and GraIL on WN18RR and NELL-995. SpaLoc also has comparable performance with KE methods on the FB15K-237 dataset, outperforming GraIL by a large margin.
296
+
297
+ Comparing SpaLoc with node embedding-based methods (TransE) and GNN-based methods (GraIL) that only consider binary edges, we see that our hyperedge-based model enables better relation prediction that requires reasoning about other entities. The necessity of hyperedges is further supported by Appendix D, where we show that setting the maximum arity of SpaLoc to 2 (i.e., removing hyperedges) significantly degrades the performance. Notably, in contrast to other methods for the transductive setting that store entity embeddings for all knowledge graph nodes, SpaLoc directly uses the inductive learning setting. That is, SpaLoc does not store identity information about each knowledge graph node. We leave better adaptations of SpaLoc to transductive learning settings as future work.
298
+
299
+ Table 5: Results of transductive link prediction on real-world knowledge graphs. We also include Hit@10 as an additional metric, following the knowledge graph embedding literature and GraIL [Schlichtkrull et al., 2018a].
300
+
301
+ | Model | WN18RR AUC-PR | WN18RR H@10 | NELL-995 AUC-PR | NELL-995 H@10 | FB15K-237 AUC-PR | FB15K-237 H@10 |
+ | --- | --- | --- | --- | --- | --- | --- |
+ | TransE | 93.73 | 88.74 | 98.73 | 98.50 | 98.54 | 98.87 |
+ | DistMult | 93.08 | 85.35 | 97.73 | 95.68 | 97.63 | 98.67 |
+ | ComplEx | 92.45 | 83.98 | 97.66 | 95.43 | 97.99 | 98.88 |
+ | RotatE | 93.55 | 88.85 | 98.54 | 98.09 | 98.53 | 98.81 |
+ | GraIL | 90.91 | 73.12 | 97.79 | 94.54 | 92.06 | 75.87 |
+ | SpaLoc | 96.76 | 99.97 | 99.27 | 98.90 | 99.61 | 96.97 |
327
+
328
+ § 5 CONCLUSION
329
+
330
+ We present SpaLoc, a framework for efficient training and inference of hypergraph reasoning networks. SpaLoc leverages the sparsity and locality of relational data to train and run inference efficiently. By regularizing intermediate representations with a sparsification loss, SpaLoc achieves the same inference complexities on family tree tasks as algorithms designed by human experts. SpaLoc samples sub-graphs for training and inference, calibrates training labels with the information sufficiency measure to alleviate information loss, and therefore generalizes well on large-scale relational reasoning benchmarks.
331
+
332
+ Limitations. The locality assumption applies to many benchmark datasets, but we admit that it is not a completely general solution. It may lead to problems on datasets where the property of interest depends on distant nodes, i.e., SpaLoc may not perform well on problems that require long chains of inference (e.g., detecting that A is a 5th cousin of B). Nevertheless, the locality assumption is good enough for many real-world relational inference tasks. Meanwhile, it is hard to directly apply SpaLoc to extremely high-arity hypergraph datasets such as WD50K [Galkin et al., 2020], where the maximum arity is 67, because the permutation operation in SpaLoc has an $O\left( {r!}\right)$ complexity, where $r$ is the arity. We leave the application of SpaLoc to extremely high-arity hypergraphs as future work.
papers/LOG/LOG 2022/LOG 2022 Conference/mWzWvMxuFg1/Initial_manuscript_md/Initial_manuscript.md ADDED
The diff for this file is too large to render. See raw diff
 
papers/LOG/LOG 2022/LOG 2022 Conference/mWzWvMxuFg1/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,304 @@
1
+ § SHORTEST PATH NETWORKS FOR GRAPH PROPERTY PREDICTION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ § ABSTRACT
10
+
11
+ Most graph neural network models rely on a particular message passing paradigm, where the idea is to iteratively propagate node representations of a graph to each node in the direct neighborhood. While very prominent, this paradigm leads to information propagation bottlenecks, as information is repeatedly compressed at intermediary node representations, which causes loss of information, making it practically impossible to gather meaningful signals from distant nodes. To address this issue, we propose shortest path message passing neural networks, where the node representations of a graph are propagated to each node in the shortest path neighborhoods. In this setting, nodes can directly communicate between each other even if they are not neighbors, breaking the information bottleneck and hence leading to more adequately learned representations. Theoretically, our framework generalizes message passing neural networks, resulting in provably more expressive models, and we show that some recent state-of-the-art models are special instances of this framework. Empirically, we verify the capacity of a basic model of this framework on dedicated synthetic experiments, and on real-world graph classification and regression benchmarks, and obtain state-of-the-art results.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Graphs provide a powerful abstraction for relational data in a wide range of domains, ranging from systems in life-sciences (e.g., physical [1, 2], chemical [3, 4], and biological systems [5, 6]) to social networks [7], which sparked interest in machine learning over graphs. Graph neural networks (GNNs) $\left\lbrack {8,9}\right\rbrack$ have become prominent models for graph machine learning, owing to their adaptability to different graphs, and their capacity to explicitly encode desirable relational inductive biases [10], such as permutation invariance (resp., equivariance) relative to graph nodes.
16
+
17
+ The vast majority of GNNs [11-13] are instances of message passing neural networks (MPNNs) [14], since they follow a specific message passing paradigm, where each node iteratively updates its state by aggregating messages from its direct neighborhood. This mode of operation, however, is known to lead to information propagation bottlenecks when the learning task requires interactions between distant nodes of a graph [15]. In order to exchange information between nodes which are $k$ hops away from each other in a graph, at least $k$ message passing iterations (or, equivalently, $k$ network layers) are needed. For most non-trivial graphs, however, the number of nodes in each node's receptive field can grow exponentially in $k$ . Eventually, the information from this exponentially-growing receptive field is compressed into fixed-length node state vectors, which leads to a phenomenon referred to as over-squashing [15], causing a severe loss of information as $k$ increases.
18
+
19
+ In parallel to standard MPNNs, several message passing techniques have been proposed to allow more global communication between nodes. For instance, multi-hop neighborhoods [16, 17], based on powers of the graph adjacency matrix, and transformer-based models [18-20] employing full pairwise node attention, look beyond direct neighborhoods for message passing, but both suffer from noise and scalability limitations. More recently, several approaches have refined message passing using shortest paths between pairs of nodes, such that nodes interact differently based on the minimum distance between them [21-23]. Models in this category, such as Graphormer [23], have in fact achieved state-of-the-art results. However, the theoretical study of this message passing paradigm remains incomplete, with its expressiveness and propagation properties left unknown. In this paper, we introduce the shortest path message passing neural networks (SP-MPNNs) framework to alleviate over-squashing. The core idea behind this framework is to update node states by aggregating messages from shortest path neighborhoods instead of the direct neighborhood. Specifically, for each node $u$ in a graph $G$ , we define its $i$ -hop shortest path neighborhood as the set of nodes in $G$ reachable from $u$ through a shortest path of length $i$ . Then, the state of $u$ is updated by separately aggregating messages from each $i$ -hop neighborhood for $1 \leq i \leq k$ , for some choice of $k$ . This corresponds to a single iteration (i.e., layer) of SP-MPNNs, and we can use multiple layers as in MPNNs. For example, consider the graph shown in Figure 1, where 1-hop, 2-hop and 3- hop shortest path neighborhoods of the white node are represented by different colors. SP-MPNNs first separately aggregate representations from each neighborhood, and then combine all hop-level aggregates with the white node embedding to yield the new node state.
20
+
21
+ < g r a p h i c s >
22
+
23
+ Figure 1: SP-MPNNs update the state of the white node, by aggregating from its different shortest path neighborhoods, which are color-coded.
24
+
25
+ Our framework builds on a line of work on GNNs using multi-hop aggregation [16, 17, 24, 25], but distinguishes itself with key choices, as discussed in detail in Section 5. Most importantly, the choice of aggregating over shortest path neighborhoods ensures distinct neighborhoods, and thus avoids redundancies, i.e., nodes are not repeated over different hops. SP-MPNNs enable a direct communication between nodes in different hops, which in turn, enables more holistic node state updates. Our contributions can be summarized as follows:
26
+
27
+ * We propose SP-MPNNs, which strictly generalize MPNNs, and enable direct message passing between nodes and their shortest path neighbors. Similarly to MPNNs, our framework can be instantiated in many different ways, and encapsulates several recent models, including the state-of-the-art Graphormer [23].
28
+
29
+ * We show that SP-MPNNs can discern any pair of graphs which can be discerned either by the 1-WL graph isomorphism test, or by the shortest path graph kernel, making SP-MPNNs strictly more expressive than MPNNs which are upper bounded by the 1-WL test [12, 26].
30
+
31
+ * We present a logical characterization of SP-MPNNs, based on the characterization given for MPNNs [27], and show that SP-MPNNs can capture a larger class of functions than MPNNs.
32
+
33
+ * In our empirical analysis, we focus on a basic, simple model, called shortest path networks. We show that shortest path networks alleviate over-squashing, and propose carefully designed synthetic datasets through which we validate this claim empirically.
34
+
35
+ * We conduct a comprehensive empirical evaluation using real-world graph classification and regression benchmarks, and show that shortest path networks achieve state-of-the-art performance.
36
+
37
+ All proofs for formal statements, as well as further experimental details, can be found in the appendix.
38
+
39
+ § 2 MESSAGE PASSING NEURAL NETWORKS
40
+
41
+ Graph neural networks (GNNs) [8, 9] have become very prominent in graph machine learning [11-13], as they encode desirable relational inductive biases [10]. Message-passing neural networks (MPNNs) [14] are an effective class of GNNs, where each node $u$ is assigned an initial state vector ${\mathbf{h}}_{u}^{\left( 0\right) }$ , which is iteratively updated based on the state of its neighbors $\mathcal{N}\left( u\right)$ and its own state, as:
42
+
43
+ $$
44
+ {\mathbf{h}}_{u}^{\left( t + 1\right) } = \operatorname{COM}\left( {{\mathbf{h}}_{u}^{\left( t\right) },\operatorname{AGG}\left( {{\mathbf{h}}_{u}^{\left( t\right) },\left\{ \left\{ {{\mathbf{h}}_{v}^{\left( t\right) } \mid v \in \mathcal{N}\left( u\right) }\right\} \right\} }\right) }\right) ,
45
+ $$
46
+
47
+ where $\{\{ \cdot \}\}$ denotes a multiset, and COM and AGG are differentiable combination and aggregation functions, respectively. An MPNN is homogeneous if each of its layers uses the same COM and AGG functions, and heterogeneous otherwise. The choice of aggregate and combine functions varies across models, e.g., graph convolutional networks (GCNs) [11], graph isomorphism networks (GINs) [12], and graph attention networks (GATs) [13]. Following message passing, the final node embeddings are pooled to form a graph embedding vector to predict properties of entire graphs. The pooling often takes the form of simple averaging, summing, or element-wise maximum.
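+ As a concrete illustration of this update rule, the following minimal sketch implements one message passing layer with sum aggregation and a one-layer ReLU combine (plain NumPy; the parameter names are illustrative and not tied to any specific library).

```python
import numpy as np

def mpnn_layer(h, adj, W_self, W_agg):
    """One sum-aggregation message passing step (COM = linear + ReLU).

    h: (n, d) array of node states; adj: dict mapping a node id to its neighbor ids;
    W_self, W_agg: (d, d) weight matrices (illustrative parameter names).
    """
    new_h = np.zeros_like(h)
    for u in range(h.shape[0]):
        agg = sum((h[v] for v in adj[u]), np.zeros(h.shape[1]))   # AGG: sum over N(u)
        new_h[u] = np.maximum(0.0, h[u] @ W_self + agg @ W_agg)   # COM: combine, then ReLU
    return new_h
```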
48
+
49
+ MPNNs naturally capture the input graph structure and are computationally efficient, but they suffer from several wellknown limitations. MPNNs are limited in expressive power, at most matching the power of the 1-dimensional Weisfeiler Leman graph isomorphism test (1-WL) [12, 26]: graphs cannot be distinguished by MPNNs if 1-WL does not distinguish them, e.g., the pair of graphs in Figure 2 are indistinguishable by MPNNs. Hence, several alternatives, i.e., approaches based on unique node identifiers [28], random node features [29, 30], or higher-order GNN models [26, 31-33], have been proposed to improve on this bound. Two other limitations, known as over-smoothing $\left\lbrack {{34},{35}}\right\rbrack$ and over-squashing $\left\lbrack {15}\right\rbrack$ , are linked to using more message passing layers. Briefly, using more message passing layers leads to increasingly similar node representations, hence to over-smoothing. Concurrently, the receptive field in MPNNs grows exponentially with the number of message passing iterations, but the information from this receptive field is compressed into fixed-length node state vectors. This leads to substantial loss of information, referred to as over-squashing.
50
+
51
+ < g r a p h i c s >
52
+
53
+ Figure 2: ${G}_{1}$ and ${G}_{2}$ are indistinguishable by 1-WL.
54
+
55
+ § 3 SHORTEST PATH MESSAGE PASSING NEURAL NETWORKS
56
+
57
+ We consider simple, undirected, connected ${}^{1}$ graphs $G = \left( {V,E}\right)$ and write $\rho \left( {u,v}\right)$ to denote the length of the shortest path between nodes $u,v \in V$ . The $i$ -hop shortest path neighborhood of $u$ is defined as ${\mathcal{N}}_{i}\left( u\right) = \{ v \in V \mid \rho \left( {u,v}\right) = i\}$ , i.e., the set of nodes reachable from $u$ through a shortest path of length $i$ . In SP-MPNNs, each node $u \in V$ is assigned an initial state vector ${\mathbf{h}}_{u}^{\left( 0\right) }$ , which is iteratively updated based on the node states in the shortest path neighborhoods ${\mathcal{N}}_{1}\left( u\right) ,\ldots ,{\mathcal{N}}_{k}\left( u\right)$ for some choice of $k \geq 1$ , and its own state as:
58
+
59
+ $$
60
+ {\mathbf{h}}_{u}^{\left( t + 1\right) } = \operatorname{COM}\left( {{\mathbf{h}}_{u}^{\left( t\right) },{\mathrm{{AGG}}}_{u,1},\ldots ,{\mathrm{{AGG}}}_{u,k}}\right) ,
61
+ $$
62
+
63
+ where $\operatorname{COM}$ and ${\operatorname{AGG}}_{u,i} = \operatorname{AGG}\left( {\mathbf{h}}_{u}^{\left( t\right) },\left\{ \left\{ {\mathbf{h}}_{v}^{\left( t\right) } \mid v \in {\mathcal{N}}_{i}\left( u\right) \right\} \right\} \right)$ are differentiable combination and aggregation functions, respectively. We write SP-MPNN $\left( {k = j}\right)$ to denote an SP-MPNN model using neighborhoods at distance up to $k = j$ . Importantly, $\mathcal{N}\left( u\right) = {\mathcal{N}}_{1}\left( u\right)$ for simple graphs, and so SP-MPNN $\left( {k = 1}\right)$ is a standard MPNN.
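+ The $i$-hop shortest path neighborhoods for $1 \leq i \leq k$ can be precomputed once per graph with a breadth-first search from every node; the sketch below is illustrative and not taken from the paper's code.

```python
from collections import deque

def shortest_path_neighborhoods(adj, k):
    """For every node u, return [N_1(u), ..., N_k(u)] computed by breadth-first search."""
    hoods = {}
    for u in adj:
        dist = {u: 0}
        queue = deque([u])
        layers = [set() for _ in range(k)]
        while queue:
            x = queue.popleft()
            if dist[x] >= k:      # no need to expand beyond distance k
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    layers[dist[y] - 1].add(y)
                    queue.append(y)
        hoods[u] = layers
    return hoods

# Example on the path graph 0-1-2-3: node 0 has N_1 = {1} and N_2 = {2}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(shortest_path_neighborhoods(adj, k=2)[0])
```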
64
+
65
+ Similarly to MPNNs, different choices for instantiating AGG and COM lead to different SP-MPNN models. Moreover, graph pooling approaches [36], and related notions directly translate to SP-MPNNs, and so do, e.g., sub-graph sampling approaches [37, 38] for scaling to large graphs. Similarly to MPNNs, we can incorporate a global readout component to the layers to define SP-MPNNs with global readout:
66
+
67
+ $$
68
+ {\mathbf{h}}_{u}^{\left( t + 1\right) } = \operatorname{COM}\left( {{\mathbf{h}}_{u}^{\left( t\right) },{\operatorname{AGG}}_{u,1},\ldots ,{\operatorname{AGG}}_{u,k},\operatorname{READ}\left( {{\mathbf{h}}_{u}^{\left( t\right) },\left\{ \left\{ {{\mathbf{h}}_{v}^{\left( t\right) } \mid v \in G}\right\} \right\} }\right) }\right)
69
+ $$
70
+
71
+ where READ is a permutation-invariant readout function.
72
+
73
+ To make our study concrete, we define a basic, simple, instance of SP-MPNNs, called shortest path networks (SPNs) as:
74
+
75
+ $$
76
+ {\mathbf{h}}_{u}^{\left( t + 1\right) } = \operatorname{MLP}\left( {\left( {1 + \epsilon }\right) {\mathbf{h}}_{u}^{\left( t\right) } + \mathop{\sum }\limits_{{i = 1}}^{k}{\alpha }_{i}\mathop{\sum }\limits_{{v \in {\mathcal{N}}_{i}\left( u\right) }}{\mathbf{h}}_{v}^{\left( t\right) }}\right) ,
77
+ $$
78
+
79
+ where $\epsilon \in \mathbb{R}$ , and ${\alpha }_{i} \in \left\lbrack {0,1}\right\rbrack$ are learnable weights, satisfying ${\alpha }_{1} + \ldots + {\alpha }_{k} = 1$ . That is, SPNs use summation to aggregate within hops, weighted summation for aggregation across all $k$ hops, and finally, an MLP as a combine function. Intuitively, SPNs can directly aggregate from different neighborhoods, by weighing their contributions. It is easy to see that SPNs with $k = 1$ are identical to GIN, but observe also that SPNs with arbitrary $k$ are also identical to GIN as long as the weight of the direct neighborhood is learned to be ${\alpha }_{1} = 1$ . We use SPNs throughout this paper as an intentionally simple baseline, as we seek to purely evaluate the impact of our extended message passing paradigm with minimal reliance on tangential model choices, e.g., including attention, residual connections, recurrent units, etc.
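+ A minimal sketch of this SPN update follows (NumPy; a two-layer network stands in for the MLP, and a softmax over hop logits is one simple way to keep the $\alpha_i$ non-negative and summing to one; all names are illustrative).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def spn_layer(h, hoods, eps, alpha_logits, W1, W2):
    """SPN update: MLP((1 + eps) * h_u + sum_i alpha_i * sum_{v in N_i(u)} h_v).

    h: (n, d) node states; hoods[u][i] is the set N_{i+1}(u); W1: (d, d_hidden) and
    W2: (d_hidden, d) are the weights of a two-layer MLP (illustrative names).
    """
    alpha = softmax(alpha_logits)            # hop weights: non-negative, sum to one
    new_h = np.zeros_like(h)
    for u in range(h.shape[0]):
        agg = (1.0 + eps) * h[u]
        for i, layer in enumerate(hoods[u]):
            agg = agg + alpha[i] * sum((h[v] for v in layer), np.zeros(h.shape[1]))
        new_h[u] = np.maximum(0.0, agg @ W1) @ W2
    return new_h
```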
80
+
81
+ ${}^{1}$ We assume connected graphs for ease of presentation; all of our results can be extended to disconnected graphs, see the appendix, for further details.
82
+
83
+ The SP-MPNN framework offers a unifying perspective for several recent models in graph representation learning using shortest path neighborhoods. In particular, SP-MPNN with global readout encapsulates models such as Graphormer ${}^{2}$ [23], the winner of the 2021 PCQM4M-LSC competition in the KDD Cup. Indeed, Graphormer is an instance of SP-MPNNs with global readout over simple, undirected, connected graphs (without edge types), as shown in the following theorem:
84
+
85
+ Theorem 1. A Graphormer with a maximum shortest path length of $M$ is an instance of SP-MPNN $\left( {k = M - 1}\right)$ with global readout.
86
+
87
+ § 3.1 INFORMATION PROPAGATION: ALLEVIATING OVER-SQUASHING
88
+
89
+ Consider a graph $G$ , its adjacency matrix representation $\mathbf{A}$ , and its diagonal degree matrix $\mathbf{D}$ , indicating the number of edges incident to every node in $G$ . We also consider variations of the degree matrix, e.g., $\widetilde{\mathbf{D}} = \mathbf{D} + \mathbf{I}$ , where $\mathbf{I}$ is the identity matrix. In our analysis, we focus on the normalized adjacency matrix $\widehat{\mathbf{A}} = {\widetilde{\mathbf{D}}}^{-{0.5}}\left( {\mathbf{A} + \mathbf{I}}\right) {\widetilde{\mathbf{D}}}^{-{0.5}}$ to align with recent work analyzing over-squashing [39].
90
+
91
+ To study over-squashing, Topping et al. [39] consider the Jacobian of node representations relative to initial node features, i.e., the ratio $\partial {\mathbf{h}}_{u}^{\left( r\right) }/\partial {\mathbf{h}}_{v}^{\left( 0\right) }$ , where $u,v \in V$ are separated by a distance $r \in {\mathbb{N}}^{ + }$ . This Jacobian is highly relevant to over-squashing, as it quantifies the effect of the initial features of distant nodes ($v$) on target node ($u$) representations, when sufficiently many message passing iterations ($r$) occur. In particular, a low Jacobian value indicates that ${\mathbf{h}}_{v}^{\left( 0\right) }$ minimally affects ${\mathbf{h}}_{u}^{\left( r\right) }$ .
92
+
93
+ To standardize this Jacobian, Topping et al. [39] assume the normalized adjacency matrix for AGG, i.e., neighbor messages are weighted by their coefficients in $\widehat{\mathbf{A}}$ and summed. This is a useful assumption, as $\widehat{\mathbf{A}}$ is normalized, thus preventing artificially high gradients. Furthermore, a smoothness assumption is made on the gradient of COM, as well as that of individual MPNN messages, i.e., the terms summed in aggregation. More specifically, these gradients are bounded by quantities $\alpha$ and $\beta$ , respectively. Given these assumptions, it has been shown that $\left| {\partial {\mathbf{h}}_{u}^{\left( r\right) }/\partial {\mathbf{h}}_{v}^{\left( 0\right) }}\right| \leq {\left( \alpha \beta \right) }^{r}{\widehat{\mathbf{A}}}_{uv}^{r}$ , upper-bounding the absolute value of the Jacobian [39]. Observe that the term ${\widehat{\mathbf{A}}}_{uv}^{r}$ typically decays exponentially with $r$ in MPNNs, as node degrees are typically much larger than 1, imposing decay due to $\widetilde{\mathbf{D}}$ . Moreover, this term is zero before iteration $r$ due to under-reaching.
94
+
95
+ Analogously, we also consider normalized adjacency matrices within SP-MPNNs. That is, we use the matrix ${\widehat{\mathbf{A}}}_{i} = {\widetilde{\mathbf{D}}}_{i}^{-{0.5}}\left( {{\mathbf{A}}_{i} + \mathbf{I}}\right) {\widetilde{\mathbf{D}}}_{i}^{-{0.5}}$ within each ${\mathrm{{AGG}}}_{i}$ , where ${\mathbf{A}}_{i}$ is the $i$ -hop $0/1$ adjacency matrix, which verifies ${\left( {\mathbf{A}}_{i}\right) }_{uv} = 1 \Leftrightarrow \rho \left( {u,v}\right) = i$ , and ${\widetilde{\mathbf{D}}}_{i}$ is the corresponding degree matrix. By design, SP-MPNNs span $k$ hops per iteration, and thus let information from $v$ reach $u$ in $q = \lceil r/k\rceil$ iterations. For simplicity, let $r$ be an exact multiple of $k$ . In this scenario, $\partial {\mathbf{h}}_{u}^{\left( q\right) }/\partial {\mathbf{h}}_{v}^{\left( 0\right) }$ is non-zero and depends on ${\left( {\widehat{\mathbf{A}}}_{k}\right) }_{uv}^{q}$ (this holds by simply considering $k$ -hop aggregation as a standard MPNN). Therefore, for larger $k,q \ll r$ , which reduces the adjacency exponent substantially, thus improving gradient flow. In fact, when $r \leq k$ , the Jacobian $\partial {\mathbf{h}}_{u}^{\left( 1\right) }/\partial {\mathbf{h}}_{v}^{\left( 0\right) }$ is only linearly dependent on ${\left( {\widehat{\mathbf{A}}}_{r}\right) }_{uv}$ . Finally, the hop-level neighbor separation of neighbors within SP-MPNN further improves the Jacobian, as node degrees are partitioned across hops. More specifically, the set of all connected nodes to $u$ is partitioned based on distance, leading to smaller degree matrices at every hop, and thus to less severe normalization, and better gradient flow, compared to, e.g, using a fully connected layer across $G$ [15].
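+ The hop-wise normalized adjacency matrices used in this argument can be formed directly from a shortest-path distance matrix; a small illustrative sketch (not from the paper's code) follows.

```python
import numpy as np

def normalized_hop_adjacency(dist, i):
    """Return A_hat_i = D_i^{-1/2} (A_i + I) D_i^{-1/2} from a shortest-path distance matrix."""
    A_i = (dist == i).astype(float)          # (A_i)_{uv} = 1 exactly when rho(u, v) = i
    A_hat = A_i + np.eye(dist.shape[0])
    deg = A_hat.sum(axis=1)                  # diagonal of the hop-wise degree matrix
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```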
96
+
97
+ § 3.2 EXPRESSIVE POWER OF SHORTEST PATH MESSAGE PASSING NETWORKS
98
+
99
+ Shortest path computations within SP-MPNNs introduce a direct correspondence between the model and the shortest path (SP) kernel [40], allowing the model to distinguish any pair of graphs SP distinguishes. At the same time, SP-MPNNs contain MPNNs which can match the expressive power of 1-WL when supplemented with injective aggregate and combine functions [12]. Building on these observations, we show that SP-MPNNs can match the expressive power of both kernels:
100
+
101
+ ${}^{2}$ We follow the authors’ terminology, and refer to the specific model defined using shortest path biases and degree positional embeddings as "Graphormer". This Graphormer model is introduced in detail in the appendix.
102
+
103
+ Theorem 2. Let ${G}_{1},{G}_{2}$ be two non-isomorphic graphs. There exists a ${SP}$ -MPNN $\mathcal{M} : \mathcal{G} \rightarrow \mathbb{R}$ , such that $\mathcal{M}\left( {G}_{1}\right) \neq \mathcal{M}\left( {G}_{2}\right)$ if either 1-WL distinguishes ${G}_{1}$ and ${G}_{2}$ , or SP distinguishes ${G}_{1}$ and ${G}_{2}$ .
104
+
105
+ Since SP distinguishes a different set of graphs than 1-WL (e.g., connected vs disconnected graphs), we conclude that SP-MPNNs strictly improve on the expressive power of MPNNs. For example, SP-MPNNs with $k \geq 2$ can distinguish the graph pair ${G}_{1}$ and ${G}_{2}$ shown in Figure 2. Nonetheless, the power provided by 1-WL and SP also has limitations, as neither kernel can distinguish the graphs ${H}_{1}$ and ${H}_{2}$ shown in Figure 3. It is easy to see that SP-MPNNs cannot discern ${H}_{1}$ and ${H}_{2}$ either.
106
+
107
+ < g r a p h i c s >
108
+
109
+ Figure 3: ${H}_{1}$ and ${H}_{2}$ are indistinguishable by both 1-WL and SP [41].
110
+
111
+ Unsurprisingly, the choice of $k$ affects expressive power. On one hand, $k = n - 1$ allows SP-MPNNs to replicate SP, whereas setting $k = 1$ reduces them to MPNNs. Also note that the expressive power of SP-MPNNs cannot be completely characterized within the WL hierarchy since, e.g., ${H}_{1}$ and ${H}_{2}$ , which cannot be distinguished by SP-MPNNs, can be distinguished by folklore 2-WL.
112
+
113
+ Beyond distinguishing graphs, we study the expressive power of SP-MPNNs in terms of the class of functions that they can capture, following the logical characterization given by Barceló et al. [27]. This characterization is given for node classification and establishes a correspondence between first-order formulas and MPNN classifiers. Briefly, a first-order formula $\phi \left( x\right)$ with one free variable $x$ can be viewed as a logical node classifier, by interpreting the free variable $x$ as a node $u$ from an input graph $G$ , and verifying whether the property $\phi \left( u\right)$ holds in $G$ , i.e., $G \vDash \phi \left( u\right)$ . For instance, the formula $\phi \left( x\right) = \exists {yE}\left( {x,y}\right) \land \operatorname{Red}\left( y\right)$ holds when $x$ is interpreted as a node $u$ in $G$ , if and only if $u$ has a red neighbor in $G$ . An MPNN $\mathcal{M}$ captures a logical node classifier $\phi \left( x\right)$ if $\mathcal{M}$ admits a parametrization such that for all graphs $G$ and nodes $u,\mathcal{M}$ maps(G, u)to true if and only if $G \vDash \phi \left( u\right)$ . Barceló et al. [27] show in their Theorem 5.1 that any ${\mathrm{C}}^{2}$ classifier can be captured by an MPNN with a global readout. ${\mathrm{C}}^{2}$ is the two-variable fragment of the logic $\mathrm{C}$ , which extends first-order logic with counting quantifiers, e.g., ${\exists }^{ \geq m}{x\phi }\left( x\right)$ for $m \in \mathbb{N}$ .
114
+
115
+ It would be interesting to obtain an analogous characterization for SP-MPNNs with global readout, and a promising candidate is to consider an extension of ${\mathrm{C}}^{2}$ which also encodes the $k$ -bounded shortest path neighborhoods. To this end, let us extend the relational vocabulary with a distinct set of binary shortest path predicates ${E}_{i},2 \leq i \leq k$ , such that ${E}_{i}\left( {u,v}\right)$ evaluates to true in $G$ if and only if there is a shortest path of length $i$ between $u$ and $v$ in $G$ . Let us further denote by ${\mathrm{C}}_{k}^{2}$ the extension of ${\mathrm{C}}^{2}$ with such shortest path predicates. Observe that ${\mathrm{C}}^{2} \subsetneq {\mathrm{C}}_{k}^{2}$ : given the graphs ${G}_{1},{G}_{2}$ from Figure 2, the ${\mathrm{C}}_{2}^{2}$ formula $\phi \left( x\right) = {\exists }^{ \geq 2}y{E}_{2}\left( {x,y}\right)$ evaluates to false on all ${G}_{1}$ nodes, and true on all ${G}_{2}$ nodes. By contrast, there is no ${\mathrm{C}}^{2}$ formula which can produce different outputs over the nodes of ${G}_{1},{G}_{2}$ , due to a correspondence between 1-WL and ${C}_{2}$ [42].
116
+
117
+ Through a simple adaptation of Theorem 5.1 of Barceló et al. [27], we obtain the following theorem:
+
+ Theorem 3. Given a $k \in \mathbb{N}$ , each ${\mathrm{C}}_{k}^{2}$ classifier can be captured by an SP-MPNN with global readout.
118
+
119
+ § 4 EMPIRICAL EVALUATION
120
+
121
+ In this section, we evaluate (i) SPNs and a small Graphormer model on dedicated synthetic experiments assessing their information flow contrasting with classical MPNNs; (ii) SPNs on real-world graph classification [43, 44] tasks and (iii) a basic relational variant of SPNs, called R-SPN, on regression benchmarks [45, 46]. In all experiments, SP-MPNN models achieve state-of-the-art results. Further details and additional experiments on MoleculeNet [47, 48] can also be found in the appendix.
122
+
123
+ § 4.1 EXPERIMENT: DO ALL RED NODES HAVE AT MOST TWO BLUE NODES AT $\leq h$ HOPS DISTANCE?
124
+
125
+ In this experiment, we evaluate the ability of SP-MPNNs to handle long-range dependencies, and compare against standard MPNNs. Specifically, we consider classification based on counting within $h$ -hop neighborhoods: given a graph with node colors including, e.g., red and blue, do all red nodes have at most 2 blue nodes within their h-hop neighborhood?
126
+
127
+ Table 1: Results (Accuracy) for SPNs with $k = \{ 1,5\}$ on the $h$ -Proximity benchmarks.
128
+
129
+ | Model | 1-Proximity | 3-Proximity | 5-Proximity | 8-Proximity | 10-Proximity |
+ | --- | --- | --- | --- | --- | --- |
+ | GCN | 65.0 ±3.5 | 50.0 ±0.0 | 50.0 ±0.0 | 50.1 ±0.0 | 49.9 ±0.0 |
+ | GAT | 91.7 ±7.7 | 50.4 ±1.0 | 49.9 ±0.0 | 50.0 ±0.0 | 50.0 ±0.0 |
+ | SPN (k = 1) | 99.4 ±0.6 | 50.5 ±0.7 | 50.2 ±1.0 | 50.0 ±0.9 | 49.8 ±0.8 |
+ | SPN (k = 5, L = 2) | 96.4 ±0.8 | 94.7 ±1.6 | 95.8 ±0.9 | 96.2 ±0.6 | 96.2 ±0.6 |
+ | SPN (k = 5, L = 5) | 96.9 ±0.6 | 95.5 ±1.6 | 96.8 ±0.7 | 96.8 ±0.6 | 96.8 ±0.6 |
+ | Graphormer | 94.1 ±2.3 | 94.7 ±2.7 | 95.1 ±1.8 | 97.3 ±1.4 | 96.8 ±2.1 |
152
+
153
+ This question presents multiple challenges for MPNNs. First, MPNNs must learn to identify the two relevant colors in the input graph. Second, they must count color statistics in their long-range neighborhoods. The latter is especially difficult, as MPNNs must keep track of all their long-range neighbors despite the redundancies stemming from message passing. This setup hence examines whether SP-MPNNs enable better information flow than MPNNs, and alleviate over-squashing.
154
+
155
+ Data generation. We propose the $h$ -Proximity datasets to evaluate long-range information flow in GNNs. In $h$ -Proximity, we use a graph structure based on node levels, where (i) consecutive level nodes are pairwise fully connected, (ii) nodes within a level are pairwise disconnected. As a result, these graphs are fully specified by their level count $l$ and the level width $w$ , i.e., the number of nodes per level. We show a graph pair with $l = 3,w = 3$ in Figure 4.
156
+
157
+ < g r a p h i c s >
158
+
159
+ Figure 4: Graph (a) has one red node with three blue neighbors (classified as false). Graph (b) has one red node with only two blue neighbors (classified as true).
160
+
161
+ Using this structure, we generate pairs of graphs, classified as true and false respectively, differing only by one edge. More specifically, we generate $h$ -Proximity datasets consisting each of 4500 pairs of graphs, for $h = \{ 1,3,5,8,{10}\}$ . Within these datasets, we design every graph pair to be at the decision boundary for our classification task: the positive graph always has all its red nodes connected exactly to 2 blue nodes in its $h$ -hop neighborhood, whereas the negative graph violates the rule by introducing one additional edge to the positive graph. We describe our data generation procedure in detail in Appendix D.
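+ The ground-truth labeling rule behind these datasets can be written down compactly; the following self-contained sketch (illustrative, not the actual generation script) checks whether every red node has at most two blue nodes within $h$ hops.

```python
from collections import deque

def satisfies_proximity_rule(adj, colors, h, max_blue=2):
    """True iff every red node has at most `max_blue` blue nodes within h hops."""
    for u, color in colors.items():
        if color != "red":
            continue
        dist, queue, blue = {u: 0}, deque([u]), 0
        while queue:                       # BFS truncated at depth h
            x = queue.popleft()
            if dist[x] == h:
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    blue += colors[y] == "blue"
                    queue.append(y)
        if blue > max_blue:
            return False
    return True
```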
162
+
163
+ Experimental setup. We use two representative SP-MPNN models: SPN and a small Graphormer model. Following Errica et al. [43], we use SPN with batch normalization [49] and a ReLU nonlinearity following every message passing iteration. We evaluate SPN $\left( {k = \{ 1,5\} }\right)$ and Graphormer (max distance 5) and compare with GCN [11] and GAT [13] on $h$ -Proximity $\left( {h = \{ 1,3,5,8,{10}\} }\right)$ using the risk assessment protocol by Errica et al. [43]: we fix 10 random splits per dataset, run training 3 times per split, and report the average of the best results across the 10 splits. For GCN, GAT and SPN $\left( {k = 1}\right)$ , we experiment with $T = \{ 1,3,5,8,{10}\}$ message passing layers such that $T \geq h$ (so as to eliminate any potential under-reaching), whereas we use $T = \{ 2,\ldots ,5\}$ for SPN $\left( {k = 5}\right)$ and $T = \{ 1,\ldots ,5\}$ for Graphormer. Across all our models, we adopt the same pooling mechanism from Errica et al. [43], based on layer output addition: for $T$ message passing iterations, the pooled representation is given by $\mathop{\sum }\limits_{{i = 1}}^{T}\mathop{\sum }\limits_{{u \in V}}{\mathbf{W}}_{i}{\mathbf{h}}_{u}^{\left( i - 1\right) }$ , where ${\mathbf{W}}_{i}$ are learnable layer-specific linear maps. Furthermore, we represent node colors with learnable embeddings. Finally, we use analogous hyperparameter tuning grids across all models for fairness, and set an identical embedding dimensionality of 64. Further details on hyper-parameter setup can be found in Appendix E.
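+ The layer-sum pooling used here is simple enough to state in a few lines; the sketch below (NumPy, illustrative, with the linear maps applied on the right to row-vector states) sums a layer-specific linear map of every node state over all message passing iterations.

```python
import numpy as np

def layer_sum_pooling(layer_states, layer_weights):
    """Pool T message passing iterations into one graph embedding.

    layer_states:  list of T arrays of shape (n, d), i.e., h^{(0)}, ..., h^{(T-1)};
    layer_weights: list of T matrices of shape (d, d_out), one learnable map per layer.
    """
    # Summing node states first and then applying W_i equals summing W_i-mapped states.
    return sum(H.sum(axis=0) @ W for H, W in zip(layer_states, layer_weights))
```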
164
+
165
+ Results. Experimental results are shown in Table 1. MPNNs all exceed 50% on 1-Proximity, but fail on higher $h$ values, whereas $\operatorname{SPN}\left( {k = 5}\right)$ is strong across all $h$ -Prox datasets, with an average accuracy of ${96.1}\%$ with two layers, and ${96.6}\%$ with 5 layers. Hence, SPN successfully detects higher-hop neighbors, remains strong even when $h > k$ , and improves with more layers. Graphormer also improves as $h$ increases, but is more unstable, as evidenced by its higher standard deviations. Both these findings show that SP-MPNN models relatively struggle to identify the local pattern in 1-Prox given their generality, but ultimately are very successful on higher $h$ -Prox datasets. Conversely, standard MPNNs only perform well on 1-Proximity, where blue nodes are directly accessible, and struggle beyond this. Hence, message passing does not reliably relay long-range information due to over-squashing and the high connectivity of $h$ -Proximity graphs.
166
+
167
+ Table 2: Results (Accuracy) for SPN $\left( {k = \{ 1,5,{10}\} }\right)$ and competing models on chemical graph classification benchmarks. Other model results reported from Errica et al. [43].
168
+
169
+ | Model | D&D | NCI1 | PROTEINS | ENZYMES |
+ | --- | --- | --- | --- | --- |
+ | Baseline | 78.4 ±4.5 | 69.8 ±2.2 | 75.8 ±3.7 | 65.2 ±6.4 |
+ | DGCNN [54] | 76.6 ±4.3 | 76.4 ±1.7 | 72.9 ±3.5 | 38.9 ±5.7 |
+ | DiffPool [55] | 75.0 ±3.5 | 76.9 ±1.9 | 73.7 ±3.5 | 59.5 ±5.6 |
+ | ECC [56] | 72.6 ±4.1 | 76.2 ±1.4 | 72.3 ±3.4 | 29.5 ±8.2 |
+ | GIN [12] | 75.3 ±2.9 | 80.0 ±1.4 | 73.3 ±4.0 | 59.6 ±4.5 |
+ | GraphSAGE [7] | 72.9 ±2.0 | 76.0 ±1.8 | 73.0 ±4.5 | 58.2 ±6.0 |
+ | SPN (k = 1) | 72.7 ±2.6 | 80.0 ±1.5 | 71.0 ±3.7 | 67.5 ±5.5 |
+ | SPN (k = 5) | 77.4 ±3.8 | 78.6 ±1.7 | 74.2 ±2.7 | 69.4 ±6.2 |
+ | SPN (k = 10) | 77.8 ±4.0 | 78.2 ±1.2 | 74.5 ±3.2 | 67.9 ±6.7 |
201
+
202
+ Interestingly, SPN $\left( {k = 1}\right)$ , or equivalently GIN, solves 1-Prox almost perfectly, whereas GAT performs slightly worse (92%), and GCN struggles (65%). This substantial variability stems from model aggregation choices: GIN uses sum aggregation and an MLP, and this offers maximal injective power. However, GAT is less injective, and effectively acts as a maximum function, which drops node cardinality information. Finally, GCN normalizes all messages based on node degrees, and thus effectively averages incoming signal and discards cardinality information.
203
+
204
+ Crucially, the basic SPN model successfully solves $h$ -Prox, and is also more stable and efficient than Graphormer, since it only considers shortest path neighborhoods up to $k$ , whereas Graphormer considers all-pair message passing and uses attention. Hence, SPN runs faster and is less susceptible to noise, while also being a representative SP-MPNN model that does not rely on sophisticated components. For feasibility, we will solely focus on SPNs throughout the remainder of this experimental study.
205
+
206
+ § 4.2 GRAPH CLASSIFICATION
207
+
208
+ In this experiment, we evaluate SPNs on chemical graph classification benchmarks D&D [50], PROTEINS [51], NCI1 [52], and ENZYMES [53].
209
+
210
+ Experimental setup. We evaluate $\operatorname{SPN}\left( {k = \{ 1,5,{10}\} }\right)$ on all four chemical datasets. We also follow the risk assessment protocol [43], and use its provided data splits. When training SPN models, we follow the same hyperparameter tuning grid as GIN [43], but additionally include a learning rate of ${10}^{-4}$ , as original learning rate choices were artificially limiting GIN on ENZYMES.
211
+
212
+ Results. The SPN results on the chemical datasets are shown in Table 2. Here, using $k = 5$ and $k = {10}$ yields significant improvements on D&D and PROTEINS. Furthermore, SPN $\left( {k = \{ 5,{10}\} }\right)$ performs strongly on ENZYMES, surpassing all reported results, and is competitive on NCI1. These results are very encouraging, and reflect the robustness of the model. Indeed, NCI1 and ENZYMES have limited reliance on higher-hop information, whereas D&D and PROTEINS rely heavily on this information, as evidenced by earlier WL and SP results [57, 58]. This aligns well with our findings, and shows that SPNs effectively use shortest paths and perform strongly where the SP kernel is strong. Conversely, on NCI1 and ENZYMES, where 1-WL is strong, these models also maintain strong performance. Hence, SPNs robustly combine the strengths of both SP and 1-WL, even when higher hop information is noisy, e.g., for larger values of $k$ .
213
+
214
+ § 4.3 GRAPH REGRESSION
215
+
216
+ Model setup. We define a multi-relational version of SPNs, namely R-SPN as follows:
217
+
218
+ $$
219
+ {\mathbf{h}}_{u}^{\left( t + 1\right) } = \left( {1 + \epsilon }\right) {\operatorname{MLP}}_{s}\left( {\mathbf{h}}_{u}^{\left( t\right) }\right) + {\alpha }_{1}\mathop{\sum }\limits_{{j = 1}}^{R}\mathop{\sum }\limits_{{{r}_{j}\left( {u,v}\right) }}{\operatorname{MLP}}_{j}\left( {\mathbf{h}}_{v}^{\left( t\right) }\right) + \mathop{\sum }\limits_{{i = 2}}^{k}{\alpha }_{i}\mathop{\sum }\limits_{{v \in {\mathcal{N}}_{i}\left( u\right) }}{\operatorname{MLP}}_{h}\left( {\mathbf{h}}_{v}^{\left( t\right) }\right) ,
220
+ $$
221
+
222
+ where ${r}_{1},\ldots ,{r}_{R}$ are the $R$ relation types, with corresponding relational edges ${r}_{j}\left( {u,v}\right)$ . Essentially, R-SPN introduces multi-layer perceptrons ${\mathrm{{MLP}}}_{1},\ldots ,{\mathrm{{MLP}}}_{R}$ to transform the input with respect to each relation, as well as a self-loop relation ${r}_{s}$ , encoded by ${\mathrm{{MLP}}}_{s}$ , to process the updating node. For higher hop neighbors, R-SPN introduces a relation type ${r}_{h}$ , encoded by ${\mathrm{{MLP}}}_{h}$ . R-SPN emulates the R-GIN model [45] at the first hop level, and treats higher hops as an additional edge type.
223
+
224
+ Table 3: Results (MAE) for R-SPN $\left( {k = \{ 1,5,{10}\} ,T = 8}\right)$ and competing models on QM9. Other model results, along with their fully adjacent (FA) extensions are as previously reported [15]. Average relative improvement by R-SPN versus the best GNN and FA result are shown in the last two rows.
225
+
226
+ | Property | R-GIN base | R-GIN +FA | R-GAT base | R-GAT +FA | GGNN base | GGNN +FA | R-SPN (k = 1) | R-SPN (k = 5) | R-SPN (k = 10) |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | mu | 2.64 ±0.11 | 2.54 ±0.09 | 2.68 ±0.11 | 2.73 ±0.07 | 3.85 ±0.16 | 3.53 ±0.13 | 3.59 ±0.01 | 2.25 ±0.17 | 2.32 ±0.20 |
+ | alpha | 4.67 ±0.52 | 2.28 ±0.04 | 4.65 ±0.44 | 2.32 ±0.16 | 5.22 ±0.86 | 2.72 ±0.12 | 6.74 ±0.15 | 1.86 ±0.06 | 1.82 ±0.02 |
+ | HOMO | 1.42 ±0.01 | 1.26 ±0.02 | 1.48 ±0.03 | 1.43 ±0.02 | 1.67 ±0.07 | 1.45 ±0.04 | 2.00 ±0.01 | 1.27 ±0.03 | 1.32 ±0.07 |
+ | LUMO | 1.50 ±0.09 | 1.34 ±0.04 | 1.53 ±0.07 | 1.41 ±0.03 | 1.74 ±0.06 | 1.63 ±0.06 | 2.11 ±0.02 | 1.23 ±0.03 | 1.26 ±0.06 |
+ | gap | 2.27 ±0.09 | 1.96 ±0.04 | 2.31 ±0.06 | 2.08 ±0.05 | 2.60 ±0.06 | 2.30 ±0.05 | 2.95 ±0.02 | 1.89 ±0.06 | 1.94 ±0.08 |
+ | R2 | 15.63 ±1.40 | 12.61 ±0.37 | 52.39 ±42.5 | 15.76 ±1.17 | 35.94 ±35.7 | 14.33 ±0.47 | 22.41 ±0.64 | 10.80 ±0.60 | 10.82 ±1.30 |
+ | ZPVE | 12.93 ±1.81 | 5.03 ±0.36 | 14.87 ±2.88 | 5.98 ±0.43 | 17.84 ±3.61 | 5.24 ±0.30 | 29.16 ±1.14 | 3.34 ±0.16 | 2.73 ±0.05 |
+ | U0 | 5.88 ±1.01 | 2.21 ±0.12 | 7.61 ±0.46 | 2.19 ±0.25 | 8.65 ±2.46 | 3.35 ±1.68 | 13.39 ±0.37 | 1.15 ±0.05 | 0.96 ±0.02 |
+ | U | 18.71 ±23.36 | 2.32 ±0.18 | 6.86 ±0.53 | 2.11 ±0.10 | 9.24 ±2.26 | 2.49 ±0.34 | 13.61 ±0.73 | 1.32 ±0.04 | 0.96 ±0.04 |
+ | H | 5.62 ±0.81 | 2.26 ±0.19 | 7.64 ±0.92 | 2.27 ±0.29 | 9.35 ±0.96 | 2.31 ±0.15 | 13.65 ±0.63 | 1.20 ±0.05 | 1.02 ±0.06 |
+ | G | 5.38 ±0.75 | 2.04 ±0.24 | 6.54 ±0.36 | 2.07 ±0.07 | 7.14 ±1.15 | 2.17 ±0.29 | 12.22 ±0.71 | 1.06 ±0.07 | 0.94 ±0.03 |
+ | Cv | 3.53 ±0.37 | 1.86 ±0.03 | 4.11 ±0.27 | 2.03 ±0.14 | 8.86 ±9.07 | 2.25 ±0.20 | 5.45 ±0.24 | 1.42 ±0.05 | 1.31 ±0.03 |
+ | Omega | 1.05 ±0.11 | 0.80 ±0.04 | 1.48 ±0.87 | 0.73 ±0.04 | 1.57 ±0.53 | 0.87 ±0.09 | 2.90 ±0.06 | 0.55 ±0.01 | 0.55 ±0.02 |
+ | vs. best GNN | - | - | - | - | - | - | +86.3% | -50.2% | -51.1% |
+ | vs. best FA model | - | - | - | - | - | - | +270% | -24.4% | -28.1% |
279
+
280
+ Experimental setup. We evaluate R-SPN $\left( {k = \{ 1,5,{10}\} }\right)$ on the 13 properties of the QM9 dataset [46] following the splits and protocol (5 reruns per split) of GNN-FiLM [45]. We train using mean squared error (MSE) and report mean absolute error (MAE) on the test set. We compare R-SPN against GNN-FiLM models, as well as their fully adjacent (FA) layer variants [15]. For fairness, we only report results with $T = 8$ layers, a learning rate of 0.001, a batch size of 128 and 128-dimensional embeddings. Moreover, complete results using $T = \{ 4,6\}$ can be found in the appendix. Due to the reported and observed instability of the original R-GIN setup (layer norm, residual connections)[45], we use the simpler pooling and update setup from SPNs with our R-SPNs.
281
+
282
+ Results. The results of R-SPN on all 13 properties of QM9 are shown in Table 3. In these results, R-SPN $\left( {k = 1}\right)$ performs worse than the reported R-GIN, and this is expected given its relative simplicity, e.g., no residual connections, no layer norm. However, R-SPNs with $k = \{ 5,{10}\}$ perform very strongly, comfortably surpassing the best MPNNs and their FA counterparts. In fact, R-SPN $\left( {k = {10}}\right)$ reduces the average MAE across all properties by over ${28}\%$ . Interestingly, improvement varies across QM9 properties. On the first five properties, R-SPN $\left( {k = {10}}\right)$ yields an average relative error reduction of ${8.5}\%$ , whereas this reduction exceeds ${50}\%$ for $\mathrm{U}0,\mathrm{U},\mathrm{H}$ , and $\mathrm{G}$ . This indicates that properties variably rely on higher-hop information, with the latter properties benefiting far more from higher $k$ . All in all, these results highlight that R-SPNs not only effectively alleviate over-squashing, but also provide a strong inductive bias to improve model performance.
283
+
284
+ Analyzing the model. To better understand model behavior, we inspect the average learned hop weights (across 5 training runs) within the first and last layers of R-SPN $\left( {k = {10}}\right) ,T = 8$ on the U0 property. We show the diameter distribution of QM9 graphs in Figure 5(a), and the learned weights in Figure 5(b).
285
+
286
+ < g r a p h i c s >
287
+
288
+ Figure 5: Histograms for R-SPN model analysis.
289
+
290
+ Despite their small size $( \sim {18}$ nodes on average), most QM9 graphs have a diameter of 6 or larger, which confirms the need for long-range information flow. This is further evidenced by the weights ${\alpha }_{1},\ldots ,{\alpha }_{10}$ , which are non-uniform and significant for higher hops, especially within the first layer. Hence, R-SPN learns non-trivial hop aggregations. Interestingly, the weights at layers 1 and 8 are very different, which indicates that R-SPN learns sophisticated node representations, based on distinct layer-wise weighted hop aggregations. Therefore, the learned weights on U0 highlight non-trivial processing of hop neighborhoods within QM9, diverging significantly from FA layers and better exploiting higher hop information.
291
+
292
+ § 5 RELATED WORK
293
+
294
+ The over-squashing phenomenon was first identified by Alon and Yahav [15]: applying message passing on direct node neighborhoods potentially leads to an exponentially growing amount of information being "squashed" into constant-sized embedding vectors, as the number of iterations increases. One approach to alleviate over-squashing is to "rewire" graphs, so as to connect relevant nodes (in a new graph) and shorten propagation distances to minimize bottlenecks. For instance, adding a fully adjacent final layer [45] naïvely connecting all node pairs yields substantial error reductions on QM9 [15]. DIGL [59] performs rewiring based on random walks, so as to establish connections between nodes which have small diffusion distance [60]. More recently, the Stochastic Discrete Ricci Flow [39] algorithm considers Ricci curvature over the input graph, where negative curvature indicates an information bottleneck, and introduces edges at negatively curved locations.
295
+
296
+ Instead of rewiring the input graphs, our study suggests better information flow for models which exploit multi-hop information through a dedicated, more general, message passing framework. We therefore build on a rich line of work that exploits higher-hop information within MPNNs [16, 17, 24, 25, 61-63]. Closely related to SP-MPNNs, the models N-GCN [16] and MixHop [17] use normalized powers of the graph adjacency matrix to access nodes up to $k$ hops away. Differently, however, these hops are not partitioned based on shortest paths as in SP-MPNNs, but rather are computed using powers of the adjacency matrix. Hence, this approach does not shrink the exponential receptive field of MPNNs, and in fact amplifies the signals coming from highly connected and nearer nodes, due to potentially redundant messages. To make this concrete, consider the graph from Figure 1: using $k = 3$ with adjacency matrix powers implies that each orange node has one third of the weight of a green node when aggregating at the white node. Intuitively, this is because the same nodes are repeatedly seen at different hops, which is not the case with shortest-path neighborhoods.
297
+
298
+ Our work closely resembles approaches which aggregate nodes based on shortest path distances. For instance, $k$ -hop GNNs [25] compute the $k$ -hop shortest path sub-graph around each node, and propagate and combine messages inward from hop $k$ nodes to the updating node. However, this message passing still suffers from over-squashing, as, e.g., the signal from orange nodes in Figure 1 is squashed across $k$ iterations, mixing with other messages, before reaching the white node. In contrast, SP-MPNNs enable distant neighbors to communicate directly with the updating node, which alleviates over-squashing significantly. Graphormer [23] builds on transformer approaches over graphs [18-20] and augments their all-pairs attention mechanism with shortest path distance-based bias. Graphormer is an instance of SP-MPNNs, and effectively exploits graph structure, but its attention still imposes a quadratic overhead, limiting its feasibility in practice. Similarly to MPNNs, our framework acts as a unifying framework for models based on shortest path message passing, and allows to precisely characterize their expressiveness and propagation properties (e.g., the theorems in Section 3 immediately apply to Graphormers).
299
+
300
+ Other approaches are proposed in the literature to exploit distant nodes in the graph, such as those based on random walks. For instance, DeepWalk [62] uses sampled random walks to learn node representations that maximize walk co-occurrence probabilities across node pairs in the graph. Similarly, random walk GNNs [61] process input graphs by comparing them with learnable "hidden" graphs using random walk-based similarity metrics [63]. Finally, NGNNs [24], use a nested message passing structure, such that representations are first learned by message passing within a $k$ -hop rooted sub-graph, and the resulting representations are then used for standard graph-level message passing.
301
+
302
+ § 6 SUMMARY AND OUTLOOK
303
+
304
+ We presented the SP-MPNN framework, which enables direct message passing between nodes and their distant hop neighborhoods based on shortest paths, and showed that it improves on MPNN representation power and alleviates over-squashing. We then empirically validated this framework on the synthetic Proximity datasets and on real-world graph classification and regression benchmarks.
papers/LOG/LOG 2022/LOG 2022 Conference/n5tvDCQGloq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,522 @@
1
+ # GEFL: Extended Filtration Learning for Graph Classification
2
+
3
+ Anonymous Author(s)
4
+
5
+ Anonymous Affiliation
6
+
7
+ Anonymous Email
8
+
9
+ ## Abstract
10
+
11
+ Extended persistence is a technique from topological data analysis to obtain global multiscale topological information from a graph. This includes information about connected components and cycles that are captured by the so-called persistence barcodes. We introduce extended persistence into a supervised learning framework for graph classification. Global topological information, in the form of four persistence barcodes and their explicit cycle representatives, is combined into the model by the readout function, which is computed by extended persistence. The entire model is end-to-end differentiable. We use a link-cut tree data structure and parallelism to lower the complexity of computing extended persistence, obtaining a speedup of more than ${60}\mathrm{x}$ over the state-of-the-art. This makes extended persistence feasible for machine learning. We show that, under certain conditions, extended persistence surpasses both the WL[1] graph isomorphism test and 0-dimensional barcodes in terms of expressivity because it adds more global (topological) information. In particular, arbitrarily long cycles can be represented, which is difficult for finite receptive field message passing graph neural networks. Furthermore, we show the effectiveness of our method on real-world datasets compared to many existing recent graph representation learning methods. ${}^{1}$
12
+
13
+ ## 1 Introduction
14
+
15
+ Graph classification is an important task in machine learning. Applications range from classifying social networks to chemical compounds. These applications require global as well as local topological information of a graph to achieve high performance. Message passing graph neural networks (GNNs) are an effective and popular method to achieve this task.
16
+
17
+ These existing methods crucially lack quantifiable information about the relative prominence of cycles and connected components to make predictions. Extended persistence is an unsupervised technique from topological data analysis that provides this information through a generalization of hierarchical clustering on graphs. It obtains both 1- and 0-dimensional multiscale global homological information.
18
+
19
+ Existing end-to-end filtration learning methods [1, 2] that use persistent homology do not compute extended persistence because of its high computational cost at scale. We address this by improving upon the work of [3] and introducing a link-cut tree data structure and parallelism for computation. This allows for $O\left( {\log \left( n\right) }\right)$ update and query operations on a spanning forest with $n$ nodes.
20
+
21
+ We consider the expressiveness of our model in terms of extended persistence barcodes and the cycle representatives. We characterize the barcodes in terms of size, what they measure, and their expressivity in comparison to WL[1] [2]. We show that it is possible to find a filtration where the length of one of its cycles can be measured, as well as a filtration where the size of each connected component can be measured. We also consider the case of barcodes when no learning of the filtration occurs. We consider several simple examples where our model can perfectly distinguish two classes of graphs that no GNN with expressivity at most that of WL[1] (henceforth called a WL[1] bounded GNN) can. Furthermore, we present a case where, experimentally, 0-dimensional standard persistence
22
+
23
+ ---
24
+
25
+ ${}^{1}$ Code to be released if the paper is accepted: https://anonymous.4open.science/r/GraphExtendedFiltrationLearning-34CB
26
+
27
+ ---
28
+
29
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_1_305_201_1186_205_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_1_305_201_1186_205_0.jpg)
30
+
31
+ Figure 1: Lower and upper filtrations for extended persistence and the resulting barcode for a graph. The green bar comes from a pairing of a green edge with a vertex in the lower filtration. Similarly, the blue bar in the upper filtration comes from a vertex-edge pairing in the upper filtration. The two dark blue bars count connected components and come from pairs of two vertices. The two red bars count cycles and come from pairs of edges. Both ${\mathcal{B}}_{0}^{\text{ext }}$ and ${\mathcal{B}}_{1}^{\text{ext }}$ bars cross from the lower filtration to the upper filtration. The multiset of bars forms the barcode.
32
+
33
+ $\left\lbrack {2,4}\right\rbrack$ , the only kind of persistence considered for learning so far, is insufficient for graph classification.
34
+
35
+ Our contributions are as follows:
36
+
37
+ 1. We introduce extended persistence and its cycle representatives into the supervised learning framework in an end-to-end differentiable manner, for graph classification.
38
+
39
+ 2. For a graph with $m$ edges and $n$ vertices, we introduce the link-cut tree data structure into the computation of extended persistence, resulting in an $O\left( {m\log n}\right)$ depth and $O\left( {mn}\right)$ work parallel algorithm, achieving more than ${60}\mathrm{x}$ speedup over the state-of-the-art, making extended persistence amenable for machine learning tasks.
40
+
41
+ 3. We analyze conditions and examples upon which extended persistence can surpass the WL[1] graph isomorphism test [5] and 0-dimensional standard persistence and characterize what extended persistence can measure from additional topological information.
42
+
43
+ 4. We perform experiments to demonstrate the feasibility of our approach against standard baseline models and datasets as well as an ablation study on the readout function for a learned filtration.
44
+
45
+ ## 2 Background
46
+
47
+ ### 2.1 Computational Topology for Graphs
48
+
49
+ Define a graph $G = \left( {V, E}\right) , E \subset V \times V$ as a set of vertices $V,\left| V\right| = n$ along with a set of edges $E$ , $\left| E\right| = m$ . Graphs in our case are undirected and simple, containing at most a single edge between any two vertices. Define on all vertices and edges a filtration function $F : G \rightarrow \mathbb{R}$ where on each vertex and edge there is a value in $\mathbb{R}$ , denoted $F\left( u\right)$ or $F\left( e\right)$ for $u \in V$ or $e \in E$ . Define the corresponding increasing filtration $\varnothing = {G}_{0} \subset {G}_{1} \subset \ldots \subset {G}_{n + m} = G$ as a sequence of subgraphs of $G$ ordered by inclusion s.t. ${G}_{i + 1} \smallsetminus {G}_{i}$ is a single edge or vertex and ${F}_{i} \mathrel{\text{:=}} F\left( {{G}_{i + 1} \smallsetminus {G}_{i}}\right)$ with ${F}_{i} \leq {F}_{i + 1}$ . A corresponding decreasing filtration is defined the same way but with the condition ${F}_{i} \geq {F}_{i + 1}$ . Define a vertex-induced lower filtration for a vertex function ${f}_{G}$ as an increasing filtration where a vertex $v$ has a value $F\left( v\right) \mathrel{\text{:=}} {f}_{G}\left( v\right)$ and any edge $\left( {u, v}\right)$ has the value $F\left( {u, v}\right) \mathrel{\text{:=}} \max \left( {F\left( u\right) , F\left( v\right) }\right)$ . Similarly define an upper filtration for ${f}_{G}$ as a decreasing filtration where $F\left( v\right) \mathrel{\text{:=}} {f}_{G}\left( v\right)$ and the edge $\left( {u, v}\right)$ has value $F\left( {u, v}\right) \mathrel{\text{:=}} \min \left( {{f}_{G}\left( u\right) ,{f}_{G}\left( v\right) }\right)$ .
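As a small illustration (a minimal sketch with hypothetical function names, not code from the paper), the lower and upper filtration values induced by a vertex function can be computed as follows:

```python
# Minimal sketch: lower/upper filtration values induced by a vertex function f_G.
# The graph is given by a dict of vertex values and a list of undirected edges (u, v).

def lower_filtration(vertex_values, edges):
    """F(v) = f_G(v); F(u, v) = max(f_G(u), f_G(v)) -- an increasing filtration."""
    edge_values = {(u, v): max(vertex_values[u], vertex_values[v]) for (u, v) in edges}
    return vertex_values, edge_values

def upper_filtration(vertex_values, edges):
    """F(v) = f_G(v); F(u, v) = min(f_G(u), f_G(v)) -- a decreasing filtration."""
    edge_values = {(u, v): min(vertex_values[u], vertex_values[v]) for (u, v) in edges}
    return vertex_values, edge_values

# Example: the 4-cycle 0-1-2-3-0 with vertex values 0..3.
f_G = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(lower_filtration(f_G, E)[1])  # {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 3.0, (3, 0): 3.0}
print(upper_filtration(f_G, E)[1])  # {(0, 1): 0.0, (1, 2): 1.0, (2, 3): 2.0, (3, 0): 0.0}
```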
50
+
51
+ Persistent homology (PH) applied to graphs computes its homological features in a multiscale manner; see books $\left\lbrack {6,7}\right\rbrack$ . PH pairs vertices and edges together as birth and death pairs for ${H}_{0}$ and ${H}_{1}$ , the zeroth and first homology groups. An ${H}_{0}$ -birth at index $i$ is determined by the filtration value of a vertex that creates a connected component (CC) while an ${H}_{1}$ -birth is determined by the filtration value of an edge that creates a cycle when ${G}_{i}$ changes to ${G}_{i + 1}$ . Death is determined by the merging or trivializing of a homological feature when ${G}_{i}$ changes to ${G}_{i + 1}$ . Each element $\left( {{b}_{i},{d}_{i}}\right)$ , which we will also denote as $\left\lbrack {{b}_{i},{d}_{i}}\right\rbrack$ , in the set of birth death pairs $\mathcal{B} = {\left\{ \left( {b}_{i},{d}_{i}\right) \right\} }_{i}$ given by the persistent homology is called a bar in the barcode $\mathcal{B}$ . Define the persistence of each bar $\left( {{b}_{i},{d}_{i}}\right)$ in a barcode as the quantity $\left| {{d}_{i} - {b}_{i}}\right|$ . Notice that, both in 0- and 1-dimensional persistence, some bars may have infinite persistence since some components (${H}_{0}$ features) and cycles (${H}_{1}$ features) never die. It will be useful when comparing the outputs of persistent homology to view a barcode as a multiset of points in the extended plane ${\left( \mathbb{R}\cup \{ - \infty ,\infty \} \right) }^{2}$ with a diagonal of infinite multiplicity. This geometric view of a barcode is known as a persistence diagram (PD).
52
+
53
+ Extended persistence $\left( {\mathrm{{PH}}}_{\text{ext }}\right)$ takes an extended filtration ${F}_{{f}_{G}}$ as input, which is the concatenation of a lower filtration and an upper filtration induced by a vertex function ${f}_{G}$ , and computes the pairing of birth and death pairs. In extended persistence all features die (bars are finite) because the upper filtration in ${F}_{{f}_{G}}$ actually filters a coned space of the graph $G$ ; see [8] for details. Four different persistence pairings or barcodes result from ${\mathbf{{PH}}}_{\text{ext }}$ . The barcode ${\mathcal{B}}_{0}^{\text{low }}$ results from the vertex-edge pairs within the lower filtration, the barcode ${\mathcal{B}}_{0}^{up}$ results from the vertex-edge pairs within the upper filtration, the barcode ${\mathcal{B}}_{0}^{\text{ext }}$ results from the vertex-vertex pairs that represent the persistence of connected components that are born in the lower filtration and die in the upper filtration, and the barcode ${\mathcal{B}}_{1}^{\text{ext }}$ results from edge-edge pairs that represent the persistence of cycles that are born in the lower filtration and die in the upper filtration. The barcodes ${\mathcal{B}}_{0}^{\text{low }},{\mathcal{B}}_{0}^{up}$ , and ${\mathcal{B}}_{0}^{\text{ext }}$ represent persistence in the 0th homology ${H}_{0}$ . The barcode ${\mathcal{B}}_{1}^{\text{ext }}$ represents persistence in the 1st homology ${H}_{1}$ . In the TDA literature, ${\mathcal{B}}_{0}^{\text{low }},{\mathcal{B}}_{0}^{up},{\mathcal{B}}_{0}^{\text{ext }}$ , and ${\mathcal{B}}_{1}^{\text{ext }}$ also go by the names of ${\operatorname{Ord}}_{0},{\operatorname{Rel}}_{1},{\operatorname{Ext}}_{0},{\operatorname{Ext}}_{1}$ respectively.
54
+
55
+ See Figure 1 for an illustration of the filtration and barcode one obtains for a simple graph with vertices taking on values from 0 ... 6 denoted by the variable $t$ . In particular, at each $t$ , we have the filtration subgraph ${G}_{t}$ of all vertices and edges of filtration function value less than or equal to $t$ . Each line indicates the values 0...6 from bottom to top. Symmetry or repetition in the bars which appear on the right of Figure 1 is highly likely in general because there are only $O\left( n\right)$ filtration values but $O\left( m\right)$ possible bars.
56
+
57
+ ### 2.2 Message Passing Graph Neural Networks (MPGNN)
58
+
59
+ A message passing GNN (MPGNN) convolutional layer takes a vertex embedding ${\mathbf{h}}_{u}$ and an adjacency matrix ${A}_{G}$ and outputs a vertex embedding ${\mathbf{h}}_{u}^{\prime }$ for some $u \in V$ . The $k$ th layer is defined generally as
60
+
61
+ $$
62
+ {\mathbf{h}}_{u}^{k + 1} \leftarrow \operatorname{AGG}\left( {\left\{ {\operatorname{MSG}\left( {\mathbf{h}}_{v}^{k}\right) \mid v \in {N}_{{A}_{G}}\left( u\right) }\right\} ,{\mathbf{h}}_{u}^{k}}\right) , u \in V
63
+ $$
64
+
65
+ where ${N}_{{A}_{G}}\left( u\right)$ is the neighborhood of $u$ . The functions MSG and AGG have different implementations and depend on the type of GNN.
66
+
67
+ Since there is no canonical ordering of the nodes of a graph, a GNN for graph classification should be permutation invariant. To achieve permutation invariance [9], as well as a global view of the graph, there must exist a readout function or pooling layer in the GNN. The readout function is crucial to achieving power for graph classification. With a sufficiently powerful readout function, a simple 1-layer MPGNN with $O\left( \Delta \right)$ attributes [10] can compute any Turing computable function, where $\Delta$ is the maximum degree of the graph. Examples of simple readout functions include aggregating the node embeddings, or taking the element-wise maximum of node embeddings [11]. See Section 3 for various message passing GNNs and readout functions from the literature.
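To make the update and the role of the readout concrete, here is a minimal NumPy sketch (our own illustration, not the architecture used later) of one message-passing layer together with two simple permutation-invariant readouts:

```python
import numpy as np

def mpgnn_layer(h, adj, W_msg, W_agg):
    """One generic message-passing layer: h_u <- AGG({MSG(h_v) : v in N(u)}, h_u).
    Here MSG is a linear map and AGG sums neighbor messages with the self embedding."""
    messages = adj @ (h @ W_msg)            # sum of MSG(h_v) over neighbors v of each u
    return np.tanh(messages + h @ W_agg)    # combine with the node's own embedding

def sum_readout(h):
    """Permutation-invariant graph representation: sum of node embeddings."""
    return h.sum(axis=0)

def max_readout(h):
    """Permutation-invariant graph representation: element-wise max of node embeddings."""
    return h.max(axis=0)

# Toy example: a triangle graph with 4-dimensional node features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
h = rng.normal(size=(3, 4))
h = mpgnn_layer(h, adj, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
print(sum_readout(h).shape, max_readout(h).shape)  # (4,) (4,)
```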
68
+
69
+ ## 3 Related Work
70
+
71
+ Graph Neural Networks (GNNs) have achieved state of the art performance on graph classification tasks in recent years. For a comprehensive introduction to GNNs, see the survey [12]. In terms of the Weisfeiler-Leman (WL) hierarchy, there has been much success and efficiency in GNNs [11, 13, 14] bounded by the WL[1] [15] graph isomorphism test. In recent years, the WL[1] bound has been broken by heterogeneous message passing [16], high order GNNs [17], and by putting message passing into the framework of cellular message passing networks [18]. Furthermore, a sampling based pooling layer is designed in [19]; it has no theoretical guarantees and its code is not publicly available for comparison. Other readout functions include [20], [21], [22]. For a full survey on global pooling, see [23].
72
+
73
+ Topological Data Analysis (TDA) based methods [2, 4] that use learning with persistent homology have achieved favorable performance compared with many conventional GNNs in recent years. All existing methods have been based on 0-dimensional standard persistent homology on separated lower and upper filtrations [4] and have been supervised. We sidestep these known limitations by introducing extended persistence into supervised learning while keeping the computation efficient.
74
+
75
+ A TDA inspired cycle representation learning method in [24] learns the task of knowledge graph completion. It keeps track of cycle bases from shortest path trees and has an $O\left( {\left| V\right| \cdot \left| E\right| \cdot k}\right)$ computational complexity per graph, where $k$ is a constant. This high computational cost is addressed in our method by a more efficient algorithm for keeping track of a cycle basis.
76
+
77
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_3_308_197_1183_305_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_3_308_197_1183_305_0.jpg)
78
+
79
+ Figure 2: The extended persistence architecture (bars+cycles) for graph representation learning. The negative log likelihood (NLL) loss is used for supervised classification. The yellow arrow denotes extended persistence computation, which can compute both barcodes and cycle representatives.
80
+
81
+ On the computational side, fast methods to compute higher dimensional PH using GPUs [25], a necessity for modern deep learning, have been introduced. However, a differentiable and parallel extended persistence computation has not been implemented. Given the expected future use of extended persistence on graph data, a parallel differentiable extended persistence algorithm is an advance on its own.
82
+
83
+ ## 4 Method
84
+
85
+ Our method, as illustrated in Figure 2, introduces extended persistence as the readout function for graph classification. In our method, an upper and lower filtration, represented by a filtration function, coincides with a set of scalar vertex representations from standard message passing GNNs. This filtration function is thus learnable by MPGNN convolutional layers. Learning filtrations was originally introduced in [4] with standard persistence. As we show in Sections 5 and 6, arbitrary cycle lengths are hard to distinguish both by standard GNN readout functions [26] and by standard persistence, due to the lack of explicitly tracking paths or cycles. Extended persistence, on the other hand, explicitly computes learned displacements on cycles of some cycle basis as determined by the filtration function as well as explicit cycle representatives.
86
+
87
+ We represent the map from graphs to learnable filtrations by any message passing GNN layer such as GIN, GCN or GraphSAGE followed by a multi-layer perceptron (MLP) as a Jumping Knowledge (JK) [27] layer. The JK layer with concatenation is used since we want to preserve the higher frequencies from the earlier layers [28]. Our experiments demonstrate that fewer MPGNN layers perform better than more MPGNN layers. This prevents oversmoothing [29, 30], which is exacerbated by the necessity of scalar representations.
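As an illustration only (a minimal sketch, not the exact architecture or hyperparameters of the paper), a learnable filtration of this form could be written with PyTorch Geometric as follows; the sigmoid squashing the output to a scalar per node is our assumption:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv

class FiltrationNet(nn.Module):
    """Sketch of a learnable filtration function f_G: GIN layers, Jumping-Knowledge
    style concatenation of all layer outputs, and an MLP mapping each node to a scalar."""
    def __init__(self, in_dim, hidden_dim=64, num_layers=2):
        super().__init__()
        self.convs = nn.ModuleList()
        dims = [in_dim] + [hidden_dim] * num_layers
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            mlp = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU(), nn.Linear(d_out, d_out))
            self.convs.append(GINConv(mlp))
        self.out = nn.Sequential(nn.Linear(sum(dims[1:]), hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, x, edge_index):
        hs = []
        h = x
        for conv in self.convs:
            h = torch.relu(conv(h, edge_index))
            hs.append(h)
        h_cat = torch.cat(hs, dim=-1)          # JK with concatenation
        return self.out(h_cat).squeeze(-1)     # one filtration value per vertex

# Usage: x is (num_nodes, in_dim) node features, edge_index is (2, num_edges) connectivity;
# FiltrationNet(in_dim=x.size(-1))(x, edge_index) returns one scalar per vertex.
```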
88
+
89
+ The readout function, the function that consolidates a filtration into a global graph representation, is determined by computing four barcodes for the extended persistence on the concatenation of the lower and upper filtrations followed by compositions with four rational hat functions $\widehat{r}$ as used in $\left\lbrack {1,2,4}\right\rbrack$ . To each of the four barcodes $\mathcal{B}$ , we apply the hat function $\widehat{r}$ to obtain a $k$ -dimensional vector. The function $\widehat{r}$ is defined as:
90
+
91
+ $$
92
+ \widehat{r}\left( \mathcal{B}\right) \mathrel{\text{:=}} {\left\{ \mathop{\sum }\limits_{\mathbf{p} \in \mathcal{B}}\frac{1}{1 + {\left\| \mathbf{p} - {\mathbf{c}}_{i}\right\| }_{1}} - \frac{1}{1 + \left| \left| {r}_{i}\right| - {\left\| \mathbf{p} - {\mathbf{c}}_{i}\right\| }_{1}\right| }\right\} }_{i = 1}^{k} \tag{1}
93
+ $$
94
+
95
+ where ${r}_{i} \in \mathbb{R}$ and ${\mathbf{c}}_{\mathbf{i}} \in {\mathbb{R}}^{2}$ are learnable parameters. The intent of Equation 1 is to have controlled gradients; it is derived from a monotonic function, see [1]. This representation is then passed through MLP layers followed by a softmax to obtain a prediction probability vector ${\widehat{p}}_{G}$ for each graph $G$ . The negative log likelihood loss from standard graph classification is then used on these vectors ${\widehat{p}}_{G}$ .
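A minimal PyTorch sketch of this vectorization (our own illustration; the module name and random initialization are assumptions, and in the model one such layer is applied to each of the four barcodes):

```python
import torch
import torch.nn as nn

class RationalHat(nn.Module):
    """Sketch of the rational hat vectorization of Equation 1: a barcode (a set of
    points p in R^2) is mapped to a k-dimensional vector using learnable centers
    c_i in R^2 and radii r_i in R."""
    def __init__(self, k=16):
        super().__init__()
        self.c = nn.Parameter(torch.randn(k, 2))
        self.r = nn.Parameter(torch.randn(k))

    def forward(self, barcode):                        # barcode: tensor of shape (num_bars, 2)
        d = torch.cdist(barcode, self.c, p=1)          # |p - c_i|_1, shape (num_bars, k)
        val = 1.0 / (1.0 + d) - 1.0 / (1.0 + (self.r.abs() - d).abs())
        return val.sum(dim=0)                          # sum over bars -> shape (k,)

bars = torch.tensor([[0.1, 0.9], [0.4, 0.6]])
print(RationalHat(k=8)(bars).shape)                    # torch.Size([8])
```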
96
+
97
+ Observe that the injectivity of the filtration is important to keep the gradients of any persistence computation well defined as noted in [4]. We do this by breaking ties consistently throughout, making our filtrations injective. Furthermore, the barcode representation is permutation invariant, as required, since we sort the filtration stably and the connectivity of the graph is not changing.
98
+
99
+ Cycle Representatives: Because computing extended persistence results in computing a cycle basis, we can explicitly store the cycle representatives, or sequences of filtration scalars, along with the barcode on graph data. This slightly improves the performance in practice and guarantees cycle length classification for arbitrary lengths. After the cycle representatives are stored, we pass each of them through a bidirectional LSTM, aggregate the resulting LSTM representations per graph, and then sum this aggregated cycle representation with the vectorization of the graph barcode by the rational hat function of Equation 1, see Figure 2. The aggregation of the cycle representations is permutation invariant due to the composition of aggregations [9]. In particular, the sum of the barcode vectorization and the mean of cycle representatives, our method's graph representation, must be permutation invariant. What makes keeping track of cycle representatives different from standard message passing GNNs is that a finite receptive field message passing GNN would never be able to obtain such cycle representations, and certainly not from a well formed cycle basis.
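A sketch (our own simplification; the hidden size, the use of the final hidden states, and the per-cycle loop are assumptions) of encoding variable-length cycle representatives with a bidirectional LSTM and averaging them into a graph-level vector:

```python
import torch
import torch.nn as nn

class CycleEncoder(nn.Module):
    """Sketch of encoding explicit cycle representatives: each cycle is a variable-length
    sequence of filtration scalars, passed through a bidirectional LSTM; the per-cycle
    codes are then averaged into a single permutation-invariant graph-level vector."""
    def __init__(self, hidden_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, cycles):                           # cycles: list of 1-D tensors of scalars
        codes = []
        for c in cycles:
            _, (h_n, _) = self.lstm(c.view(1, -1, 1))    # h_n: (2, 1, hidden_dim)
            codes.append(h_n.transpose(0, 1).reshape(1, -1))  # concat both directions
        return torch.cat(codes, dim=0).mean(dim=0)       # mean over cycles (order invariant)

cycles = [torch.rand(5), torch.rand(8)]                  # two cycles of different lengths
print(CycleEncoder(hidden_dim=16)(cycles).shape)         # torch.Size([32])
```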
100
+
101
+ ### 4.1 Efficient Computation of Extended Persistence
102
+
103
+ The computation for extended persistence can be reduced to applying a matrix reduction algorithm to a coned matrix as detailed in [7]. In [3], this computation was found to be equivalent to a graph algorithm, which we improve upon.
104
+
105
+ #### 4.1.1 Algorithm
106
+
107
+ Our algorithm is as follows and written in Algorithm 1. We perform the 0-dimensional persistence algorithm using the union find data structure in $O\left( {{m\alpha }\left( n\right) }\right)$ time and $O\left( n\right)$ memory for the upper and lower filtrations in lines 2 and 3. They generate the vertex-edge pairs for ${\mathcal{B}}_{0}^{\text{low }}$ and ${\mathcal{B}}_{0}^{up}$ . We then measure the minimum lower filtration value and maximum upper filtration value of each vertex in the union find data structure found from the 0-dimensional standard persistence algorithms as in lines 5 and 6. These produce the vertex-vertex pairs in ${\mathcal{B}}_{0}^{\text{ext }}$ .
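For illustration, a sequential sketch of the 0-dimensional union-find step on a lower filtration (our own simplified version of what the UNIONFIND calls of Algorithm 1 return; the function names and the elder-rule tie handling are assumptions):

```python
# Sketch (sequential) of 0-dimensional persistence on a lower filtration with union-find.
# Processing edges in increasing order, an edge that merges two components kills the
# younger component (elder rule) and emits a vertex-edge bar; cycle-creating edges are positive.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def zero_dim_lower_persistence(vertex_values, edges):
    parent = {v: v for v in vertex_values}
    edges = sorted(edges, key=lambda e: max(vertex_values[e[0]], vertex_values[e[1]]))
    bars, pos_edges = [], []
    for (u, v) in edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru == rv:                    # positive edge: it creates a cycle
            pos_edges.append((u, v))
            continue
        # negative edge: the component with the younger (larger-valued) root dies
        young, old = (ru, rv) if vertex_values[ru] > vertex_values[rv] else (rv, ru)
        bars.append((vertex_values[young], max(vertex_values[u], vertex_values[v])))
        parent[young] = old
    return bars, pos_edges              # bars ~ B_0^low; essential classes never die here

f_G = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(zero_dim_lower_persistence(f_G, E))  # ([(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)], [(3, 0)])
```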
108
+
109
+ Algorithm 1 Efficient Computation of ${\mathrm{{PH}}}_{\text{ext }}$
110
+
111
+ ---
112
+
113
+ 1: Input: $G = \left( {V, E}\right) ,{F}_{low}$ : lower filtration function, ${F}_{up}$ : upper filtration function
114
+
115
+ Output: ${\mathcal{B}}_{0}^{\text{low }},{\mathcal{B}}_{0}^{up},{\mathcal{B}}_{0}^{\text{ext }},{\mathcal{B}}_{1}^{\text{ext }},\mathcal{C}$ : cycle reps.
116
+
117
+ ${\mathcal{B}}_{0}^{\text{low }},{E}_{\text{pos }}^{\text{low }},{E}_{\text{neg }}^{\text{low }},{U}_{\text{low }} \leftarrow \operatorname{UNIONFIND}\left( {G,{F}_{\text{low }}}\right)$
118
+
119
+ ${\mathcal{B}}_{0}^{up},{E}_{pos}^{up},{E}_{neg}^{up},{U}_{up} \leftarrow \operatorname{UNIONFIND}\left( {G,{F}_{up}}\right)$
120
+
121
+ roots $\leftarrow \{$ GET_UF_ROOTS $\left( {{U}_{up}, v}\right) , v \in V\}$
122
+
123
+ ${\mathcal{B}}_{0}^{\text{ext }} \leftarrow \{ \min \left( {\text{ roots }\left\lbrack v\right\rbrack }\right) ,\max \left( {\text{ roots }\left\lbrack v\right\rbrack }\right) , v \in V\}$
124
+
125
+ $\mathbf{T} \leftarrow \{ \}$ empty link-cut tree; ${\mathcal{B}}_{1}^{\text{ext }} \leftarrow \{ \} ;\mathcal{C} \leftarrow \{ \}$ empty list of cycle representatives
126
+
127
+ for $e = \left( {u, v}\right) \in {E}_{neg}^{up}$ do
128
+
129
+ $\mathbf{T} \leftarrow \operatorname{LINK}\left( {\mathbf{T}, e, w}\right)$ /* $w \notin \mathbf{T}$ , $w = u$ or $v$ */
130
+
131
+ end for
132
+
133
+ $\operatorname{SORT}\left( {E}_{pos}^{up}\right)$ with respect to ${F}_{up}$ descending
134
+
135
+ for $e = \left( {u, v}\right) \in {E}_{\text{pos }}^{up}$ do
136
+
137
+ ${lca} \leftarrow \operatorname{LCA}\left( {u, v}\right)$ (Get the least common ancestor of $u$ and $v$ to form a cycle)
138
+
139
+ ${P}_{1} \leftarrow \operatorname{LISTRANK}\left( {\operatorname{PATH}\left( {u,{lca}}\right) }\right) ;{P}_{2} \leftarrow \operatorname{LISTRANK}\left( {\operatorname{PATH}\left( {v,{lca}}\right) }\right)$
140
+
141
+ $\mathcal{C} \leftarrow \mathcal{C} \sqcup \left\{ {{F}_{up}\left( {P}_{1}\right) \sqcup {F}_{up}\left( {\operatorname{Reverse}\left( {P}_{2}\right) }\right) }\right\}$ (Keep track of the scalar activations on the cycle)
142
+
143
+ ${v}^{\prime } \leftarrow \operatorname{ARGMAXREDUCE}\left( {{P}_{1} \sqcup {P}_{2}}\right)$ ;
144
+
145
+ ${u}^{\prime } \leftarrow$ predecessor $\left( {v}^{\prime }\right)$ on $\left( {\{ v\} \cup {P}_{1}}\right) \sqcup \left( {\{ u\} \cup {P}_{2}}\right)$
146
+
147
+ ${\mathbf{T}}_{\mathbf{1}},{\mathbf{T}}_{\mathbf{2}} \leftarrow \operatorname{CUT}\left( {\mathbf{T},\left( {{u}^{\prime },{v}^{\prime }}\right) }\right) ;\mathbf{T} \leftarrow \operatorname{LINK}\left( {{\mathbf{T}}_{\mathbf{1}},\left( {u, v}\right) ,{\mathbf{T}}_{\mathbf{2}}}\right)$
148
+
149
+ ${\mathcal{B}}_{1}^{ext} \leftarrow {\mathcal{B}}_{1}^{ext} \cup \{ \left( {{F}_{low}\left( {{u}^{\prime },{v}^{\prime }}\right) ,{F}_{up}\left( {u, v}\right) }\right) \}$
150
+
151
+ end for
152
+
153
+ return $\left( {{\mathcal{B}}_{0}^{\text{low }},{\mathcal{B}}_{0}^{up},{\mathcal{B}}_{0}^{ext},{\mathcal{B}}_{1}^{ext},\mathcal{C}}\right)$
154
+
155
+ ---
156
+
157
+ For computing edge-edge pairs in ${\mathcal{B}}_{1}^{\text{ext }}$ with cycle representatives, we implement the algorithm in [3] with a link-cut tree data structure that facilitates deleting and inserting edges in a spanning tree, and employ a parallel algorithm to determine the edge with the maximum lower filtration value in a cycle. We collect the max spanning forest $T$ of negative edges, edges that join components, from the upper filtration by repeatedly applying the link operation $n - 1$ times and collect the list of the remaining positive edges, which create cycles, in lines 8 and 9. Then, for each positive edge $e = \left( {u, v}\right)$ , in order of the upper filtration (line 12), we find the least common ancestor ${lca}$ of $u$ and $v$ in the spanning forest $T$ we are maintaining as in line 13. Next, we apply the parallel primitive [31] of list ranking twice, once on the path $u$ to ${lca}$ and the other on the path $v$ to ${lca}$ in line 14. List ranking allows a list to populate an array in parallel in logarithmic time. The tensor concatenation of the two arrays is appended to a list of cycle representatives as in line 15. This is so that the cycle maintains order from $u$ to $v$ . Two max-reductions are then applied on the two arrays ${P}_{1},{P}_{2}$ found by list ranking: the vertex ${v}^{\prime }$ attaining the maximum value $\max \left( {{P}_{1} \cup {P}_{2}}\right)$ is found by these two reductions on the two paths in line 16. The edge ${e}^{\prime } = \left( {{u}^{\prime },{v}^{\prime }}\right)$ is obtained by backtracking one index from the maximum element’s index, which is how the predecessor operation is implemented on the arrays $\{ v\} \cup {P}_{1}$ and $\{ u\} \cup {P}_{2}$ , where $v$ comes before all elements of ${P}_{1}$ and $u$ comes before all elements of ${P}_{2}$ . We then cut the spanning forest at the edge $\left( {{u}^{\prime },{v}^{\prime }}\right)$ , forming two forests as in line 18. These two forests are then linked together at $\left( {u, v}\right)$ as in line 18. The bar $\left( {{F}_{\text{low }}\left( {{u}^{\prime },{v}^{\prime }}\right) ,{F}_{up}\left( {u, v}\right) }\right)$ is now found and added to the multiset ${\mathcal{B}}_{1}^{\text{ext }}$ . The final output of the algorithm is four barcodes and a list of cycle representatives: $\left( {\left( {{\mathcal{B}}_{0}^{\text{low }},{\mathcal{B}}_{0}^{up},{\mathcal{B}}_{0}^{\text{ext }},{\mathcal{B}}_{1}^{\text{ext }}}\right) ,\mathcal{C}}\right)$ .
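To make the pairing loop concrete, here is a sequential sketch (our own simplification: an ordinary `networkx` tree stands in for the link-cut tree, so it does not attain the stated parallel complexity; the positive/negative edge split is assumed to come from the union-find step above):

```python
import networkx as nx

def b1_ext_pairs(f, neg_edges_up, pos_edges_up):
    """Sequential sketch of the edge-edge pairing in Algorithm 1. f maps vertices to
    filtration values; neg_edges_up is a spanning forest of the upper filtration and
    pos_edges_up the remaining cycle-creating (positive) edges."""
    F_low = lambda e: max(f[e[0]], f[e[1]])            # lower filtration value of an edge
    F_up = lambda e: min(f[e[0]], f[e[1]])             # upper filtration value of an edge
    T = nx.Graph()
    T.add_nodes_from(f)
    T.add_edges_from(neg_edges_up)                     # LINK the negative spanning forest
    bars, cycles = [], []
    for (u, v) in sorted(pos_edges_up, key=F_up, reverse=True):
        path = nx.shortest_path(T, u, v)               # unique tree path from u to v
        cycle = path + [u]                             # close it with the positive edge
        cycles.append([f[w] for w in cycle])           # cycle representative (scalars)
        e_max = max(zip(cycle[:-1], cycle[1:]), key=F_low)   # edge of max lower value
        T.remove_edge(*e_max)                          # CUT
        T.add_edge(u, v)                               # LINK the positive edge instead
        bars.append((F_low(e_max), F_up((u, v))))      # one bar in B_1^ext
    return bars, cycles

# Example: the 4-cycle with vertex values 0..3; (0, 3) is the only positive edge.
f = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
print(b1_ext_pairs(f, [(0, 1), (1, 2), (2, 3)], [(0, 3)]))
# ([(3.0, 0.0)], [[0.0, 1.0, 2.0, 3.0, 0.0]])
```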
158
+
159
+ #### 4.1.2 Complexity
160
+
161
+ We improve upon the complexity of [3] by obtaining an $O\left( {mn}\right)$ work, $O\left( {m\log n}\right)$ depth algorithm on $O\left( n\right)$ processors using $O\left( n\right)$ memory. Here $m$ and $n$ are the number of edges and vertices in the input graph. We introduce two ingredients for lowering the complexity: the first is the link-cut dynamic connectivity data structure and the second consists of the parallel primitives of list ranking and reduction on paths. The link-cut tree data structure is a dynamic connectivity data structure that can keep track of the spanning forest with $O\left( {\log n}\right)$ time for the operations of least common ancestor, edge deletion (cut), adjoining two trees (link), and find-max along a path. We use these fast operations in our algorithm design. Furthermore, list ranking [32] is an $O\left( {\log n}\right)$ depth and $O\left( n\right)$ work parallel algorithm on $O\left( \frac{n}{\log n}\right)$ processors that determines the distance of each vertex from the start of the path or linked list it is on. In other words, list ranking turns a linked list into an array in parallel. Reduction on an array is an $O\left( n\right)$ work, $O\left( {\log n}\right)$ depth, and $O\left( n\right)$ memory algorithm that applies an associative binary operator such as max to an array to obtain, e.g., the maximum of all elements. Sorting can be performed in parallel using $O\left( {n\log n}\right)$ work and $O\left( {\log n}\right)$ depth.
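The pointer-jumping idea behind list ranking can be sketched as follows (a NumPy simulation of the parallel rounds, our own illustration; on a parallel machine each round's updates happen simultaneously, which is what gives the $O(\log n)$ depth):

```python
import numpy as np

def list_rank(succ):
    """Pointer-jumping sketch of list ranking. succ[i] is the successor of node i in a
    linked list (succ[i] == i marks the tail); returns each node's distance to the tail.
    Each round doubles the jump length, so only O(log n) rounds are needed; here the
    rounds are simulated sequentially with array operations."""
    succ = np.array(succ)
    rank = np.where(succ == np.arange(len(succ)), 0, 1)
    while np.any(succ != succ[succ]):
        rank = rank + rank[succ]      # accumulate the distance covered by the jump
        succ = succ[succ]             # pointer jumping: succ <- succ o succ
    return rank

print(list_rank([1, 2, 3, 4, 4]))     # [4 3 2 1 0]
```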
162
+
163
+ ## 5 Expressivity of Extended Persistence
164
+
165
+ We prove some properties of extended persistence barcodes. We also find a case where extended persistence with supervised learning can give high performance for graph classification while WL[1] bounded GNNs, on the other hand, are guaranteed not to perform well. Certainly, all such results also apply to the explicit cycle representatives since the min and max of the scalar activations on the cycle form the corresponding bar.
166
+
167
+ ### 5.1 Some Properties
168
+
169
+ The following Theorem 5.1 states some properties of extended persistence. This should be compared with the 0- and 1-dimensional persistence barcodes in standard persistence. Every vertex and edge is associated with some bar in standard persistence, though these bars can be either finite or infinite. However, in extended persistence all bars are finite and we form barcodes from an extended filtration of ${2m} + {2n}$ edges and vertices instead of the standard $\left( {m + n}\right)$ -length filtration.
170
+
171
+ Theorem 5.1. (Extended Barcode Properties)
172
+
173
+ ${\mathbf{{PH}}}_{\text{ext }}\left( G\right)$ produces four sets of barcodes: ${\mathcal{B}}_{1}^{\text{ext }},{\mathcal{B}}_{0}^{\text{ext }},{\mathcal{B}}_{0}^{\text{low }},{\mathcal{B}}_{0}^{\text{up }}$ , s.t.
174
+
175
+ $\left| {\mathcal{B}}_{1}^{\text{ext }}\right| = \dim {H}_{1} = m - n + C,$
176
+
177
+ $\left| {\mathcal{B}}_{0}^{ext}\right| = \dim {H}_{0} = C,$
178
+
179
+ $\left| {\mathcal{B}}_{0}^{\text{low }}\right| = \left| {\mathcal{B}}_{0}^{\text{upper }}\right| = n - C,$
180
+
181
+ where there are $C$ connected components and $\dim {H}_{k}$ is the dimension of the $k$ th homology group s.t.:
182
+
183
+ 1. the ${H}_{1}$ barcode comes from a cycle basis of $G$ which also constitutes a basis of its fundamental group,
+
+ 2. $\dim {H}_{1}$ counts the number of chordless cycles when $G$ is outer-planar, and
184
+
185
+ 3. there exists an injective filtration function where the union of the resulting barcodes is strictly more expressive than the histogram produced by the WL[1] graph isomorphism test.
186
+
187
+ The barcodes found by extended persistence thus have more degrees of freedom than those obtained from standard persistence. For example, a cycle is now represented by two filtration values rather than just one. Furthermore, the persistence $\left| {d - b}\right|$ of a bar $\left( {b, d}\right) \in {\mathcal{B}}_{1}^{\text{ext }}$ or ${\mathcal{B}}_{0}^{\text{ext }}$ can measure topological significance of a cycle or a connected component respectively through persistence. Thus, extended persistence encodes more information than standard persistence. In Theorem 5.1, property 1 says that extended persistence actually computes pairs of edges of cycles in a cycle basis. A modification of the extended persistence algorithm could generate all or count certain kinds of important cycles, see [33]. Property 2 characterizes what extended persistence can count.
188
+
189
+ We make some observations on the expressivity of ${\mathrm{{PH}}}_{\text{ext }}$ .
190
+
191
+ Observation 5.2. (Cycle Lengths) For any graph $G$ and a cycle $\mathbf{C} \subset G$ , there exists an injective filtration function where ${\mathbf{{PH}}}_{\text{ext }}$ of that filtration function can measure the number of edges along $\mathbf{C}$ .
192
+
193
+ Such a result cannot hold when the filtration is learned by local message passing from constant node attributes. Thus, for the challenging 2CYCLES graphs dataset in Section B.2, it is a necessity to use the cycle representatives $\mathcal{C}$ for each graph to distinguish pairs of cycles of arbitrary length. This should be compared with top-$K$ methods, where $K$ is a constant hyperparameter, such as in [19, 34]. The constant hyperparameter $K$ prevents learning an arbitrarily long cycle length when the node attributes are all the same. Furthermore, a readout function like SUM is agnostic to graph topology and also struggles with learning when presented with an arbitrarily long cycle. This struggle to distinguish cycles in standard MPGNNs is also reported in [35]. An observation similar to the previous Observation 5.2 can also be made for paths measured by ${\mathcal{B}}_{0}^{\text{ext }}$ .
194
+
195
+ Observation 5.3. (Connected Component Sizes) For any graph $G$ and all connected components $\mathbf{{CC}} \subset G$ , there exists an injective filtration function where ${\mathbf{{PH}}}_{\text{ext }}$ of that filtration can measure the number of vertices in $\mathbf{{CC}}$ .
196
+
197
+ We investigate the case where no learning takes place, namely when the filtration values come from random noise. We observe that even in such a situation some information is still encoded in the extended persistence barcodes with a probability that depends on the graph.
198
+
199
+ Observation 5.4. For any graph $G$ where every edge belongs to some cycle and an extended filtration on it induced by randomly sampled vertex values ${x}_{i} \sim U\left( \left\lbrack {0,1}\right\rbrack \right) ,{\mathbf{{PH}}}_{\text{ext }}$ has a ${H}_{1}$ bar $\left\lbrack {\mathop{\max }\limits_{i}\left( {x}_{i}\right) ,\mathop{\min }\limits_{i}\left( {x}_{i}\right) }\right\rbrack$ with probability $\mathop{\sum }\limits_{{v \in V}}\frac{1}{n}\frac{\deg \left( v\right) }{n - 1}$ .
200
+
201
+ Notice that for a clique, the probability of finding the bar with maximum possible persistence is 1. It becomes lower for sparser graphs.
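This probability is easy to check empirically. The following Monte Carlo sketch (our own illustration) estimates, for a given graph, how often the maximum- and minimum-valued vertices of a random vertex function are adjacent, which is exactly when the bar of Observation 5.4 occurs for a graph whose every edge lies on a cycle:

```python
import random

def max_min_adjacent_probability(n, edges, trials=100_000):
    """Estimate the probability that the vertices attaining the maximum and minimum of
    n i.i.d. uniform values are adjacent in the graph (Observation 5.4)."""
    edge_set = {frozenset(e) for e in edges}
    hits = 0
    for _ in range(trials):
        x = [random.random() for _ in range(n)]
        u, v = max(range(n), key=x.__getitem__), min(range(n), key=x.__getitem__)
        hits += frozenset((u, v)) in edge_set
    return hits / trials

# 4-cycle: closed form sum_v (1/n) * deg(v)/(n-1) = 4 * (1/4) * (2/3) = 2/3.
print(max_min_adjacent_probability(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # ~0.667
```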
202
+
203
+ Corollary 5.5. In Observation 5.4, the expected persistence $\mathbb{E}\left\lbrack \left| {\mathop{\max }\limits_{i}\left( {x}_{i}\right) - \mathop{\min }\limits_{i}\left( {x}_{i}\right) }\right| \right\rbrack$ of the bar $\left\lbrack {\mathop{\max }\limits_{i}\left( {x}_{i}\right) ,\mathop{\min }\limits_{i}\left( {x}_{i}\right) }\right\rbrack$ goes to 1 as $n \rightarrow \infty$ .
204
+
205
+ What Corollary 5.5 implies is that, for certain graphs, even when nothing is learned by the GNN filtration learning layers, the longest ${\mathcal{B}}_{1}^{\text{ext }}$ bar indicates that $n$ is large. This happens for graphs that are randomly initialized with vertex labels from the unit interval and occurs with high probability for dense graphs by Observation 5.4. For large $n$ , the empirical mean of the longest bar will have persistence near 1. Notice that ${\mathcal{B}}_{1}^{\text{ext }}$ can measure this even though the number of ${H}_{1}$ bars, $m - n + C$ , could tell us nothing about $n$ .
206
+
207
+ ## 6 Experiments
208
+
209
+ We perform experiments of our method on standard GNN datasets. We also perform timing experiments for our extended persistence algorithm, showing impressive scaling. Finally, we investigate cases where experimentally our method distinguishes graphs that other methods cannot, demonstrating how our method learns to surpass the WL[1] bound.
210
+
211
+ ### 6.1 Experimental Setup
212
+
213
+ We perform experiments on a 48 core Intel Xeon Gold CPU machine with 1 TB DRAM equipped with a Quadro RTX 6000 NVIDIA GPU with 24 GB of GPU DRAM.
214
+
215
+ Hyperparameter information can be found in Table ??. For all baseline comparisons, the hyperparameters were set to their repository's standard values. In particular, all training runs were stopped at 100 epochs using a learning rate of 0.01 with the Adam optimizer. Vertex attributes were used along with vertex degree information as initial vertex labels if offered by the dataset. We perform a fair performance evaluation by performing standard 10-fold cross validation on our datasets. The lowest validation loss is used to determine a test score on a test partition. An average $\pm$ standard deviation test score over all partitions determines the final evaluation score.
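A sketch of this evaluation protocol (our own illustration; `train_fn` and `eval_fn` stand in for the actual training loop and scoring routine, and the validation split size is an assumption):

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(dataset, train_fn, eval_fn, n_splits=10, seed=0):
    """10-fold cross validation: within each fold, the checkpoint with the lowest
    validation loss (selected inside train_fn) is scored on the fold's test partition;
    the final result is the mean +/- std of the per-fold test scores."""
    scores = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(dataset):
        # hold out a slice of the training indices as a validation set
        val_idx, train_idx = train_idx[: len(train_idx) // 9], train_idx[len(train_idx) // 9:]
        best_model = train_fn(train_idx, val_idx)      # returns lowest-validation-loss model
        scores.append(eval_fn(best_model, test_idx))
    return np.mean(scores), np.std(scores)
```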
216
+
217
+ The neural network for our filtration function ${f}_{G}$ consists of one or two GIN convolutional layers, with the number of layers determined by an ablation study.
218
+
219
+ <table><tr><td colspan="7">Experimental Evaluation</td></tr><tr><td>avg. acc. $\pm$ std.</td><td>DD</td><td>PROTEINS</td><td>IMDB- MULTI</td><td>MUTAG</td><td>PINWHEELS</td><td>2CYCLES</td></tr><tr><td>GFL</td><td>${75.2} \pm {3.5}$</td><td>${73.0} \pm {3.0}$</td><td>${46.7} \pm {5.0}$</td><td>${87.2} \pm {4.6}$</td><td>${100} \pm {0.0}$</td><td>${50.0} \pm {0.0}$</td></tr><tr><td>Ours+Bars</td><td>${75.5} \pm {2.9}$</td><td>${74.9} \pm {4.1}$</td><td>${50.3} \pm {4.7}$</td><td>$\mathbf{{88.3} \pm {7.1}}$</td><td>$\mathbf{{100} \pm {0.0}}$</td><td>${50} \pm {0.0}$</td></tr><tr><td>Ours+Bars+Cycles</td><td>$\mathbf{{75.9} \pm {2.0}}$</td><td>$\mathbf{{75.2}} \pm {4.1}$</td><td>$\mathbf{{51.0} \pm {4.6}}$</td><td>${86.8} \pm {7.1}$</td><td>$\mathbf{{100} \pm {0.0}}$</td><td>$\mathbf{{100} \pm {0.0}}$</td></tr><tr><td>GIN</td><td>${72.6} \pm {4.2}$</td><td>${66.5} \pm {3.8}$</td><td>${49.8} \pm {3.0}$</td><td>${84.6} \pm {7.9}$</td><td>${50.0} \pm {0.0}$</td><td>${50.0} \pm {0.0}$</td></tr><tr><td>GIN0</td><td>${72.3} \pm {3.6}$</td><td>${67.5} \pm {4.7}$</td><td>${48.7} \pm {3.7}$</td><td>${83.5} \pm {7.4}$</td><td>${50.0} \pm {0.0}$</td><td>${50.0} \pm {0.0}$</td></tr><tr><td>GraphSAGE</td><td>${72.6} \pm {3.7}$</td><td>${59.6} \pm {0.2}$</td><td>${50.0} \pm {3.0}$</td><td>${72.4} \pm {8.1}$</td><td>${50.0} \pm {0.0}$</td><td>${50.0} \pm {0.0}$</td></tr><tr><td>GCN</td><td>${72.7} \pm {1.6}$</td><td>${59.6} \pm {0.2}$</td><td>${50.0} \pm {2.0}$</td><td>${73.9} \pm {9.3}$</td><td>${50.0} \pm {0.0}$</td><td>${50.0} \pm {0.0}$</td></tr><tr><td>GraphCL</td><td>${65.4} \pm {12}$</td><td>${62.5} \pm {1.5}$</td><td>49.6± 0.4</td><td>${76.6} \pm {26}$</td><td>${49.0} \pm {8.0}$</td><td>${50.5} \pm {10}$</td></tr><tr><td>InfoGraph</td><td>${61.5} \pm {10}$</td><td>${65.5} \pm {12}$</td><td>${40.0} \pm {8.9}$</td><td>${89.1} \pm {1.0}$</td><td>${50.0} \pm {0.0}$</td><td>${50.0} \pm {0.0}$</td></tr><tr><td>ADGCL</td><td>${74.8} \pm {0.7}$</td><td>${73.2} \pm {0.3}$</td><td>${47.4} \pm {0.8}$</td><td>63.3± 31</td><td>${42.5} \pm {19}$</td><td>${52.5} \pm {21}$</td></tr><tr><td>TOGL</td><td>${74.7} \pm {2.4}$</td><td>${66.5} \pm {2.5}$</td><td>${44.7} \pm {6.5}$</td><td>-</td><td>${47.0} \pm 3$</td><td>${54.4} \pm {5.8}$</td></tr><tr><td>Filt.+SUM</td><td>${75.0} \pm {3.2}$</td><td>${73.5} \pm {2.8}$</td><td>${48.0} \pm {2.9}$</td><td>${86.7} \pm {8.0}$</td><td>${51.0} \pm {11}$</td><td>${50.0} \pm {0.0}$</td></tr><tr><td>Filt.+MAX</td><td>67.6± 3.9</td><td>${68.6} \pm {4.3}$</td><td>${45.5} \pm {3.1}$</td><td>${70.3} \pm {5.4}$</td><td>${48.0} \pm {4.2}$</td><td>${50.0} \pm {0.0}$</td></tr><tr><td>Filt.+AVG</td><td>${69.5} \pm {2.9}$</td><td>${67.2} \pm {4.2}$</td><td>${46.7} \pm {3.8}$</td><td>${81.4} \pm {7.9}$</td><td>${50.0} \pm {13}$</td><td>${50.0} \pm {0.0}$</td></tr><tr><td>Filt.+SORT</td><td>${76.9} \pm {2.6}$</td><td>${72.6} \pm {4.6}$</td><td>${49.0} \pm {3.6}$</td><td>${85.6} \pm {9.2}$</td><td>${51.0} \pm {16}$</td><td>${50.0} \pm {0.0}$</td></tr><tr><td>Filt.+S2S</td><td>${69.0} \pm {3.3}$</td><td>${67.8} \pm {4.6}$</td><td>${48.7} \pm {4.2}$</td><td>${86.8} \pm {7.1}$</td><td>${51.0} \pm {13}$</td><td>${50.0} \pm {0.0}$</td></tr></table>
220
+
221
+ Table 1: Average accuracy $\pm$ std. dev. of our approach (GEFL) with and without explicit cycle representations, Graph Filtration Learning (GFL), GIN0, GIN, GraphSAGE, GCN, ADGCL, GraphCL and TOGL, and a readout ablation study on the four TUDatasets: DD, PROTEINS, IMDB-MULTI, MUTAG, as well as the two synthetic WL[1]-bound and cycle-length distinguishing datasets. Numbers in bold are highest in performance; bold-gray numbers show the second highest. The symbol - denotes that the dataset was not compatible with the software at the time.
222
+
223
+ <table><tr><td colspan="8">10-fold cross validation ablation study on OGBG-MOL datasets by ROC-AUC</td></tr><tr><td>avg. score $\pm$ std.</td><td>Ours+Bars</td><td>Ours+Bars +Cycles</td><td>Filt.+SUM</td><td>Filt.+MAX</td><td>Filt.+AVG</td><td>Filt.+SORT</td><td>Filt.+Set2Set</td></tr><tr><td>molbace</td><td>${80.0} \pm {3.6}$</td><td>$\mathbf{{81.6} \pm {3.9}}$</td><td>${79.7} \pm {4.6}$</td><td>${71.9} \pm {4.8}$</td><td>${78.0} \pm {3.0}$</td><td>${78.4} \pm {3.3}$</td><td>${78.2} \pm {3.6}$</td></tr><tr><td>molbbbp</td><td>${78.0} \pm {4.3}$</td><td>$\mathbf{{81.9} \pm {3.3}}$</td><td>${76.7} \pm {4.9}$</td><td>${69.8} \pm {8.7}$</td><td>${78.5} \pm {4.6}$</td><td>${76.3} \pm {4.3}$</td><td>${78.0} \pm {5.0}$</td></tr></table>
224
+
225
+ Table 2: Ablation study on readout functions. The average ROC-AUC $\pm$ std. dev. on the ogbg-mol datasets is shown for each readout function. Number coloring is as in Table 1.
226
+
227
+ ### 6.2 Performance on Real World and Synthetic Datasets
228
+
229
+ We perform experiments with the TUDatasets [36], a standard GNN benchmark. We compare with WL[1] bounded GNNs (GIN, GIN0, GraphSAGE, GCN) from the PyTorch Geometric [37, 38] benchmark baselines commonly used in practice, as well as GFL [4] and the self-supervised methods ADGCL [39] and InfoGraph [40]. Self-supervised methods are promising but should not surpass the performance of supervised methods since they do not use the label during representation learning. We also compare with the existing topology based methods TOGL [2] and GFL [4]. We also perform an ablation study on the readout function, comparing extended persistence as the readout function with the SUM, AVERAGE, MAX, SORT, and SET2SET [41] readout functions. The hyperparameter $k$ is set to the 10th percentile of all datasets when sorting for the top-$k$ node activations. We do not compare with
230
+
231
+ [19] since its code is not available online. The performance numbers are listed in Table 1. We are able to improve upon other approaches in almost all cases. The real world datasets include DD, MUTAG, PROTEINS and IMDB-MULTI. DD, PROTEINS, and MUTAG are molecular biology datasets, which emphasize cycles, while IMDB-MULTI is a social network dataset, which emphasizes cliques and their connections. We use accuracy as our performance score since it is the standard for the TU datasets.
232
+
233
+ We also verify that our method surpasses the WL[1] bound, a theoretical property which can be proven, and that it can count cycle lengths when the graph is sparse enough, e.g. when the set of cycles is equal to the cycle basis. This is achieved by the two datasets PINWHEELS and 2CYCLES. See Appendix Section B for the related experimental and dataset details. Both datasets are particularly hard to classify since they contain spurious constant node attributes, with the labels depending completely on the graph connectivity. This removal of node attributes simulates the setting of the WL[1] graph isomorphism test, see [5]. Furthermore, doing so is a case considered in [42]. It is known that WL[1], in particular WL[2], cannot determine the existence of cycles of length greater than seven [43, 44].
234
+
235
+ Table 2 shows the ablation study of extended filtration learning on the OGBG datasets [45] OGBG-MOLBACE and OGBG-MOLBBBP. We perform a 10-fold cross validation with the test ROC-AUC score at the lowest validation loss used as the test score. This is performed instead of using the train/val/test split offered by the OGBG dataset in order to keep our evaluation methods consistent with the evaluation of the TUDatasets and synthetic datasets.
236
+
237
+ From Section B, we know that there are special cases where extended persistence can distinguish graphs where WL[1] bounded GNNs cannot. We perform experiments to show that our method can surpass random guessing whereas other methods achieve only $\sim {50}\%$ accuracy on average, which is no better than random guessing. Our high accuracy is guaranteed on PINWHEELS since such graphs are distinguished by counting bars through 0-dimensional standard persistence. Similarly, 2CYCLES is guaranteed high accuracy when keeping track of cycles and comparing the variance of cycle representations, since cycle lengths can be distinguished by an LSTM on cycle inputs of different lengths. Of course, a barcode representation alone will not distinguish cycle lengths.
238
+
239
+ ## 7 Conclusion
240
+
241
+ We introduce extended persistence into the supervised learning framework, bringing crucial global connected component and cycle measurement information into the graph representations. We address a fundamental limitation of MPGNNs, which is their inability to measure cycle lengths. Our method hinges on an efficient algorithm for computing extended persistence. This is a parallel differentiable algorithm with $O\left( {m\log n}\right)$ depth and $O\left( {mn}\right)$ work that scales impressively over the state-of-the-art. The speed with which we can compute extended persistence makes it feasible for machine learning. Our end-to-end model obtains favorable performance on real world datasets. We also construct cases where our method can distinguish graphs that existing methods struggle with.
242
+
243
+ ## References
244
+
245
+ [1] Christoph D Hofer, Roland Kwitt, and Marc Niethammer. Learning representations of persistence barcodes. J. Mach. Learn. Res., 20(126):1-45, 2019. 1, 4, 17
246
+
247
+ [2] Max Horn, Edward De Brouwer, Michael Moor, Yves Moreau, Bastian Rieck, and Karsten Borgwardt. Topological graph neural networks. arXiv preprint arXiv:2102.07835, 2021. 1, 2, 3, 4,8,13,16
248
+
249
+ [3] Zuoyu Yan, Tengfei Ma, Liangcai Gao, Zhi Tang, and Chao Chen. Link prediction with persistent homology: An interactive view. In International Conference on Machine Learning, pages 11659-11669. PMLR, 2021. 1, 5, 6
250
+
251
+ [4] Christoph Hofer, Florian Graf, Bastian Rieck, Marc Niethammer, and Roland Kwitt. Graph filtration learning. In International Conference on Machine Learning, pages 4314-4323. PMLR, 2020.2,3,4,8
252
+
253
+ [5] Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. NTI, Series, 2(9):12-16, 1968. 2, 9
254
+
255
+ [6] Tamal K. Dey and Yusu Wang. Computational Topology for Data Analysis. Cambridge University Press, 2022. https://www.cs.purdue.edu/homes/tamaldey/book/CTDAbook/ CTDAbook.pdf. 2
256
+
257
+ [7] Herbert Edelsbrunner and John Harer. Computational topology: an introduction. American Mathematical Soc., 2010. 2, 5
258
+
259
+ [8] David Cohen-Steiner, Herbert Edelsbrunner, and John Harer. Extending persistence using poincaré and lefschetz duality. Foundations of Computational Mathematics, 9(1):79-103, 2009. 3
260
+
261
+ [9] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. Advances in neural information processing systems, 30, 2017.3,5
262
+
263
+ [10] Andreas Loukas. What graph neural networks cannot learn: depth vs width. arXiv preprint arXiv:1907.03199, 2019. 3
264
+
265
+ [11] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018. 3
266
+
267
+ [12] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems, 32(1):4-24, 2020. 3
268
+
269
+ [13] William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 1025-1035, 2017. 3
270
+
271
+ [14] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016. 3
272
+
273
+ [15] Sandra Kiefer, Neil Immerman, Pascal Schweitzer, and Martin Grohe. Power and limits of the weisfeiler-leman algorithm. Technical report, Fachgruppe Informatik, 2020. 3
274
+
275
+ [16] Jiaxuan You, Jonathan Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. arXiv preprint arXiv:2101.10320, 2021. 3
276
+
277
+ [17] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4602-4609, 2019. 3
278
+
279
+ [18] Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yu Guang Wang, Pietro Liò, Guido F Montufar, and Michael Bronstein. Weisfeiler and lehman go cellular: Cw networks. Advances in Neural Information Processing Systems, 34, 2021. 3
280
+
281
+ [19] Hongyang Gao, Yi Liu, and Shuiwang Ji. Topology-aware graph pooling networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(12):4512-4518, 2021. 3, 7, 9
282
+
283
+ [20] Junhyun Lee, Inyeop Lee, and Jaewoo Kang. Self-attention graph pooling. In International conference on machine learning, pages 3734-3743. PMLR, 2019. 3
284
+
285
+ [21] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015. 3
286
+
287
+ [22] Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. Advances in neural information processing systems, 31, 2018. 3
288
+
289
+ [23] Chuang Liu, Yibing Zhan, Chang Li, Bo Du, Jia Wu, Wenbin Hu, Tongliang Liu, and Dacheng Tao. Graph pooling for graph neural networks: Progress, challenges, and opportunities. arXiv preprint arXiv:2204.07321, 2022. 3
290
+
291
+ [24] Zuoyu Yan, Tengfei Ma, Liangcai Gao, Zhi Tang, and Chao Chen. Cycle representation learning for inductive relation prediction. In ICLR 2022 Workshop on Geometrical and Topological Representation Learning, 2022. 3
292
+
293
+ [25] Simon Zhang, Mengbai Xiao, and Hao Wang. Gpu-accelerated computation of vietoris-rips persistence barcodes. In 36th International Symposium on Computational Geometry (SoCG 2020). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2020. 4
294
+
295
+ [26] Clement Vignac, Andreas Loukas, and Pascal Frossard. Building powerful and equivariant graph neural networks with structural message-passing. Advances in Neural Information Processing Systems, 33:14143-14155, 2020. 4
296
+
297
+ [27] Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In International conference on machine learning, pages 5453-5462. PMLR, 2018. 4
298
+
299
+ [28] Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in Neural Information Processing Systems, 33:7793-7804, 2020. 4
300
+
301
+ [29] Wenbing Huang, Yu Rong, Tingyang Xu, Fuchun Sun, and Junzhou Huang. Tackling over-smoothing for general graph convolutional networks. arXiv preprint arXiv:2008.09864, 2020. 4
302
+
303
+ [30] Kenta Oono and Taiji Suzuki. Optimization and generalization analysis of transduction through gradient boosting and application to multi-scale graph neural networks. Advances in Neural Information Processing Systems, 33:18917-18930, 2020. 4
304
+
305
+ [31] Guy E Blelloch and Bruce M Maggs. Parallel algorithms. In Algorithms and theory of computation handbook: special topics and techniques, pages 25-25. 2010. 6
306
+
307
+ [32] Richard J Anderson and Gary L Miller. Deterministic parallel list ranking. Algorithmica, 6(1): 859-868, 1991. 6
308
+
309
+ [33] Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? arXiv preprint arXiv:2002.04025, 2020. 7
310
+
311
+ [34] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018. 7
312
+
313
+ [35] Vikas Garg, Stefanie Jegelka, and Tommi Jaakkola. Generalization and representational limits of graph neural networks. In International Conference on Machine Learning, pages 3419-3430. PMLR, 2020. 7
314
+
315
+ [36] Christopher Morris, Nils M Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. arXiv preprint arXiv:2007.08663, 2020. 8
316
+
317
+ [37] Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019. 8
318
+
319
+ [38] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019. 8
320
+
321
+ [39] Susheel Suresh, Pan Li, Cong Hao, and Jennifer Neville. Adversarial graph augmentation to improve graph contrastive learning. Advances in Neural Information Processing Systems, 34, 2021.8
322
+
323
+ [40] Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, and Jian Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. arXiv preprint arXiv:1908.01000, 2019. 8
324
+
325
+ [41] Haoji Hu and Xiangnan He. Sets2sets: Learning from sequential sets with neural networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1491-1499, 2019. 8
326
+
327
+ [42] Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems, 33:4465-4478, 2020. 9
328
+
329
+ [43] Vikraman Arvind, Frank Fuhlbrück, Johannes Köbler, and Oleg Verbitsky. On weisfeiler-leman invariance: Subgraph counts and related graph properties. Journal of Computer and System Sciences, 113:42-59, 2020. 9
330
+
331
+ [44] Martin Fürer. On the combinatorial power of the weisfeiler-lehman algorithm. In International Conference on Algorithms and Complexity, pages 260-271. Springer, 2017. 9
332
+
333
+ [45] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020. 9
334
+
335
+ [46] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3438-3445, 2020. 16
336
+
337
+ ## A Proofs
338
+
339
+ Theorem A.1. (Theorem 5.1)
340
+
341
+ ${\mathbf{{PH}}}_{\text{ext }}\left( G\right)$ produces four sets of barcodes: ${\mathcal{B}}_{1}^{\text{ext }},{\mathcal{B}}_{0}^{\text{ext }},{\mathcal{B}}_{0}^{\text{low }},{\mathcal{B}}_{0}^{\text{up }}$ , s.t.
342
+
343
+ $\left| {\mathcal{B}}_{1}^{\text{ext }}\right| = \dim {H}_{1} = m - n + C,$
344
+
345
+ $\left| {\mathcal{B}}_{0}^{ext}\right| = \dim {H}_{0} = C,$
346
+
347
+ $\left| {\mathcal{B}}_{0}^{\text{low }}\right| = \left| {\mathcal{B}}_{0}^{\text{upper }}\right| = n - C,$
348
+
349
+ where there are $C$ connected components and $\dim {H}_{k}$ is the dimension of the $k$ th homology group s.t.:
350
+
351
+ 1. the ${H}_{1}$ barcode comes from a cycle basis of $G$ which also constitutes a basis of its fundamental group,
352
+
353
+ 2. $\dim {H}_{1}$ counts the number of chordless cycles when $G$ is outer-planar; and
354
+
355
+ 3. there exists an injective filtration function where the union of the resulting barcodes is strictly more expressive than the histogram produced by the WL[1] graph isomorphism test.
356
+
357
+ Proof. There are $n$ bars with vertex births since every vertex creates exactly one connected component. The number of these bars which are in ${\mathcal{B}}_{0}^{\text{ext }}$ is $C$ , which counts the number of global connected components. In other words, $\left| {\mathcal{B}}_{0}^{\text{ext }}\right| = \dim \left( {H}_{0}\right) = C$ . Thus, we have $n - C = \left| {\mathcal{B}}_{0}^{\text{low }}\right| = \left| {\mathcal{B}}_{0}^{\text{upper }}\right|$ .
358
+
359
+ Considering all ${2m}$ edges on the extended filtration, every edge gets paired. Furthermore, $n - C$ of the edges in the lower filtration are negative edges paired with vertices that give birth to connected components. Similarly there are $n - C$ edges paired with vertices in the upper filtration. We thus have $\frac{{2m} - 2\left( {n - C}\right) }{2}$ edge-edge pairings in ${\mathcal{B}}_{1}^{\text{ext }}$ because every edge gets paired. Thus, $\left| {\mathcal{B}}_{1}^{\text{ext }}\right| = m - n + C$ . Since each bar in ${\mathcal{B}}_{1}^{\text{ext }}$ counts a birth of a 1-dimensional homological class which together span the 1-dimensional homological classes in ${H}_{1}$ , we have that $\dim {H}_{1} = \left| {\mathcal{B}}_{1}^{\text{ext }}\right|$ .
360
+
361
+ 1. This follows from the discussion above.
362
+
363
+ 2. By Euler’s formula, we have $n - m + F = C + 1$ for planar graphs where $F$ is the number of faces of the planar graph as embedded in ${\mathbb{S}}^{2}$ . For outer planar graphs, since $F - 1$ interior faces lie on one hemisphere of ${\mathbb{S}}^{2}$ and one exterior face covers the opposite hemisphere, each interior face must be a chordless cycle.
364
+
365
+ 3. This follows directly from the result in [2] stating that 0-dimensional barcodes are more expressive than the WL[1] graph isomorphism test. In extended persistence, ${\mathcal{B}}_{0}^{\text{low }}$ and ${\mathcal{B}}_{0}^{\text{ext }}$ are computed. Since all bars in ${\mathcal{B}}_{0}^{\text{ext }}$ correspond to infinite bars denoted ${\mathcal{B}}_{0}^{\infty }$ in the 0-dimensional standard persistence, we have that ${\mathcal{B}}_{0}^{\text{low }}$ and ${\mathcal{B}}_{0}^{\text{ext }}$ carry at least the same amount of information as a 0-dimensional barcode as determined by ${\mathcal{B}}_{0}^{\text{low }}$ and ${\mathcal{B}}_{0}^{\infty }$ .
366
+
367
+ Observation A.2. (Observation 5.2) For any graph $G$ and a cycle $\mathbf{C} \subset G$ , there exists an injective filtration function where ${\mathbf{{PH}}}_{\text{ext }}$ of the induced filtration can measure the number of edges along $\mathbf{C}$ .
368
+
369
+ Proof. Number the vertices of the cycle $\mathbf{C}$ of length $k$ in descending order and counterclockwise as $n - 1\ldots n - k$ . For each vertex $u \in \mathbf{C}$ , set ${f}_{G}\left( u\right)$ to be the index of $u$ . For the other vertices, assign arbitrary different values less than $n - k$ and then apply $\varepsilon$ -perturbation to make the filtration injective. For example, for vertex $u$ and all its incident edges of the same filtration value, one can subtract different $\varepsilon \in {\mathbb{R}}^{ + }$ from each edge to impose injectivity on the induced filtration of ${f}_{G}$ . We then get that every edge on the cycle $\mathbf{C}$ except one, $\left( {n - 1, n - k}\right)$ , becomes negative and thus belongs to the negative spanning forest of the upper filtration. The positive edge of smallest value in the upper filtration is the edge $\left( {n - 1, n - k}\right)$ . The extended persistence algorithm, after computing ${\mathcal{B}}_{0}^{\text{low }}$ and ${\mathcal{B}}_{0}^{up}$ , pairs the edge $e = \left( {n - 1, n - k}\right)$ with the edge having maximum value in the lower filtration in the cycle $\mathbf{C}$ that $e$ forms with the spanning forest. This paired edge is $\left( {n - 1, n - 2}\right)$ and has lower filtration value $n - 1$ . We thus have the bar $\left\lbrack {n - 1, n - k}\right\rbrack$ which encodes the length $k$ of the cycle $\mathbf{C}$ .
370
+
371
372
+
373
+ Observation A.3. (Observation 5.3) For any graph $G$ and all connected components $\mathbf{{CC}} \subset G$ , there exists an injective filtration function where ${\mathbf{{PH}}}_{\text{ext }}$ of that filtration can measure the number of vertices in $\mathbf{{CC}}$ .
374
+
375
+ Proof. For each connected component $\mathbf{{CC}}$ in $G$ , index the vertices in $\mathbf{{CC}}$ in consecutive order, where the indices across different connected components remain distinct. Then define ${f}_{G}\left( u\right)$ equal to the index of $u$ in $G$ . By $\varepsilon$ -perturbation, we can make this an injective filtration function. Since ${\mathcal{B}}_{0}^{ext}$ has each bar $\left\lbrack {\mathop{\min }\limits_{{u \in \mathbf{{CC}}}}{f}_{G}\left( u\right) ,\mathop{\max }\limits_{{u \in \mathbf{{CC}}}}{f}_{G}\left( u\right) }\right\rbrack$ and since all indices are consecutive, each bar’s persistence in ${\mathcal{B}}_{0}^{\text{ext }}$ measures how many vertices are in the connected component it corresponds to.
376
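+
+ The indexing used in this proof can be written down directly; the sketch below is ours (assuming networkx), not the paper's code.
+
+ ```python
+ # Minimal sketch (ours) of the component-wise consecutive indexing of Observation A.3.
+ import networkx as nx
+
+ def component_counting_filtration(G):
+     """Index vertices consecutively within each connected component so that the
+     extended persistence H0 bar of a component has persistence |CC| - 1."""
+     f, next_index = {}, 0
+     for comp in nx.connected_components(G):
+         for u in sorted(comp):
+             f[u] = next_index
+             next_index += 1
+     return f
+
+ # Example: two components with 3 and 5 vertices.
+ G = nx.disjoint_union(nx.path_graph(3), nx.cycle_graph(5))
+ f = component_counting_filtration(G)
+ # The two H0 bars of B_0^ext have persistences 2 and 4, one less than the
+ # component sizes 3 and 5.
+ print(f)
+ ```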
+
377
+ Observation A.4. (Observation 5.4) For any graph $G$ where every edge belongs to some cycle and an extended filtration on it is induced by randomly sampling vertex values ${x}_{i} \sim U\left( \left\lbrack {0,1}\right\rbrack \right)$, ${\mathbf{{PH}}}_{\text{ext}}$ has the ${H}_{1}$ bar $\left\lbrack {\mathop{\max }\limits_{i}\left( {x}_{i}\right) ,\mathop{\min }\limits_{i}\left( {x}_{i}\right) }\right\rbrack$ with probability $\mathop{\sum }\limits_{{v \in V}}\frac{1}{n}\frac{\deg \left( v\right) }{n - 1}$.
378
+
379
+ Proof. Since the probability of a given permutation on $n$ vertices sampled uniformly at random without replacement is equal to the probability of a given order on the vertices whose values are sampled uniformly at random $n$ times, it suffices to find the probability of sampling uniformly at random without replacement two vertices that are connected by an edge in $G$.
380
+
381
+ For a fixed $\sigma \in {S}_{n}$ , a permutation from the group ${S}_{n}$ of permutations on $n$ vertices, we have:
382
+
383
+ $$
384
+ \frac{1}{n!} = P\left( {{x}_{n} < {x}_{n - 1} < \ldots < {x}_{1},{x}_{i} \sim U\left( \left\lbrack {0,1}\right\rbrack \right) }\right)
385
+ $$
386
+
387
+ $$
388
+ = {\int }_{0}^{1}{\int }_{0}^{{x}_{1}}\ldots {\int }_{0}^{{x}_{n - 1}}d{x}_{n}d{x}_{n - 1}\ldots d{x}_{1} = P\left( {\sigma \sim U\left( {S}_{n}\right) }\right)
389
+ $$
390
+
391
+ Let $G = \left( {V, E}\right)$ be the graph with vertex values sampled from a uniform distribution. Let ${G}^{\prime } = \left( {{V}^{\prime },{E}^{\prime }}\right)$ be the same graph with vertex values in $\{ 0,1,\ldots , n - 1\}$ sampled uniformly without replacement. We know that the probability for a given order on these vertices is the same for both graphs. By the law of total probability:
392
+
393
+ $$
394
+ P\left( {\left( {\mathop{\min }\limits_{i}{x}_{i},\mathop{\max }\limits_{i}{x}_{i}}\right) \in E,{x}_{i} \sim U\left( \left\lbrack {0,1}\right\rbrack \right) }\right)
395
+ $$
396
+
397
+ $$
398
+ = \mathop{\sum }\limits_{{v \in V}}\left( {P\left( {v = \mathop{\max }\limits_{i}{x}_{i},{x}_{i} \sim U\left( \left\lbrack {0,1}\right\rbrack \right) }\right) \cdot P\left( {\mathop{\min }\limits_{i}{x}_{i} \in {Nbr}\left( v\right) \mid v = \mathop{\max }\limits_{i}{x}_{i},{x}_{i} \sim U\left( \left\lbrack {0,1}\right\rbrack \right) }\right) }\right)
399
+ $$
400
+
401
+ $$
402
+ = \mathop{\sum }\limits_{{v \in V}}\left( {n - 1}\right) !{\int }_{0}^{1}{\int }_{0}^{{x}_{1}}\ldots {\int }_{0}^{{x}_{n - 1}}d{x}_{n}d{x}_{n - 1}\ldots d{x}_{1} \cdot \deg \left( v\right) \left( {n - 2}\right) !{\int }_{0}^{1}{\int }_{0}^{{x}_{2}}\ldots {\int }_{0}^{{x}_{n - 1}}d{x}_{n}d{x}_{n - 1}\ldots d{x}_{2}
403
+ $$
404
+
405
+ $$
406
+ = P\left( {\left( {n - 1,0}\right) \in {E}^{\prime }}\right) = \mathop{\sum }\limits_{{v \in {V}^{\prime }}}\left( {P\left( {v = n - 1}\right) \cdot P\left( {0 \in {Nbr}\left( v\right) \mid v = n - 1}\right) }\right)
407
+ $$
408
+
409
+ $$
410
+ = \mathop{\sum }\limits_{{v \in {V}^{\prime }}}\frac{1}{n}\frac{\deg \left( v\right) }{n - 1}
411
+ $$
412
+
413
+ We now show that if $\left( {\mathop{\min }\limits_{i}{x}_{i},\mathop{\max }\limits_{i}{x}_{i}}\right)$ occurs as an edge in $G = \left( {V, E}\right)$ , where every edge belongs to some cycle, then the bar $\left\lbrack {\mathop{\max }\limits_{i}{x}_{i},\mathop{\min }\limits_{i}{x}_{i}}\right\rbrack$ is guaranteed to occur.
414
+
415
+ The spanning tree composed of negative edges that begins the computation of ${\mathcal{B}}_{1}^{\text{ext}}$, as in line 8 of Algorithm 1 for the ${H}_{1}$ barcode computation, is a maximum spanning tree. This is because the negative edges are exactly those found by Kruskal's algorithm for 0-dimensional standard persistence applied to the upper filtration. Since $e = \left( {\mathop{\min }\limits_{i}{x}_{i},\mathop{\max }\limits_{i}{x}_{i}}\right)$ has value $\mathop{\min }\limits_{i}{x}_{i}$ in the upper filtration and since every edge belongs to at least one cycle, it cannot be in the maximum spanning tree. Thus $e$ is a positive edge.
416
+
417
+ Since $e$ is positive in the upper filtration, it will be considered at some iteration of the for loop in line 12 of Algorithm 1. When we consider it, it forms a cycle $\mathbf{C}$ with the dynamically maintained spanning forest. To form a persistence ${H}_{1}$ bar for $e$, we pair it with the edge of maximum value in the cycle $\mathbf{C}$ in the lower filtration. Since $e$ is incident to the vertex attaining $\mathop{\max }\limits_{i}{x}_{i}$, the cycle $\mathbf{C}$ contains that vertex, so this maximum lower filtration value is $\mathop{\max }\limits_{i}{x}_{i}$. This forms the bar $\left\lbrack {\mathop{\max }\limits_{i}{x}_{i},\mathop{\min }\limits_{i}{x}_{i}}\right\rbrack$.
418
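+
+ The closed-form probability above can also be checked numerically. The following Monte Carlo sketch is ours and only verifies the event that the minimum- and maximum-valued vertices are adjacent; the graph and trial count are arbitrary choices.
+
+ ```python
+ # Monte Carlo check (ours) of the probability in Observation 5.4.
+ import random
+ import networkx as nx
+
+ def empirical_edge_probability(G, trials=100_000, seed=0):
+     """Estimate P((argmin_i x_i, argmax_i x_i) is an edge) for x_i ~ U([0,1])."""
+     rng = random.Random(seed)
+     nodes = list(G.nodes)
+     hits = 0
+     for _ in range(trials):
+         x = {v: rng.random() for v in nodes}
+         u = min(nodes, key=x.get)  # vertex attaining min_i x_i
+         w = max(nodes, key=x.get)  # vertex attaining max_i x_i
+         hits += G.has_edge(u, w)
+     return hits / trials
+
+ G = nx.cycle_graph(6)  # every edge lies on a cycle
+ n = G.number_of_nodes()
+ closed_form = sum((1 / n) * (G.degree(v) / (n - 1)) for v in G.nodes)
+ print(closed_form, empirical_edge_probability(G))
+ # For the 6-cycle: sum_v (1/6)(2/5) = 0.4; the estimate should agree closely.
+ ```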
+
419
+ Corollary A.5. In Observation 5.4, the expected persistence of the bar $\left\lbrack {\mathop{\max }\limits_{i}\left( {x}_{i}\right) ,\mathop{\min }\limits_{i}\left( {x}_{i}\right) }\right\rbrack$, namely $\mathbb{E}\left\lbrack \left| {\mathop{\max }\limits_{i}\left( {x}_{i}\right) - \mathop{\min }\limits_{i}\left( {x}_{i}\right) }\right| \right\rbrack$, goes to 1 as $n \rightarrow \infty$.
420
+
421
+ Proof. Define the random variable ${X}_{n} = \left| {\mathop{\max }\limits_{i}{x}_{i} - \mathop{\min }\limits_{i}{x}_{i}}\right|$ for $n$ random points drawn uniformly from $\left\lbrack {0,1}\right\rbrack$. We compute $\mathop{\lim }\limits_{{n \rightarrow \infty }}\mathbb{E}\left\lbrack {X}_{n}\right\rbrack$. The following sequence of equations follows by repeated substitution.
422
+
423
+ $$
424
+ \mathbb{E}\left\lbrack {X}_{n}\right\rbrack = n!{\int }_{0}^{1}{\int }_{0}^{{x}_{1}}\ldots {\int }_{0}^{{x}_{n - 1}}\left( {{x}_{1} - {x}_{n}}\right) d{x}_{n}\ldots d{x}_{1}
425
+ $$
426
+
427
+ $$
428
+ = n!{\int }_{0}^{1}\left( {\frac{{x}_{1}^{n}}{\left( {n - 1}\right) !} - \frac{{x}_{1}^{n}}{n!}}\right) d{x}_{1} = \frac{n - 1}{n + 1}
429
+ $$
430
+
431
+ where the factor of $n!$ comes from symmetry: each of the $n!$ orderings of the ${x}_{i}$ is equally likely.
432
+
433
+ Therefore: $\mathop{\lim }\limits_{{n \rightarrow \infty }}\mathbb{E}\left\lbrack {X}_{n}\right\rbrack = 1$ .
434
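+
+ A quick numerical check of this limit (ours, using numpy) compares the empirical mean range of $n$ uniform samples with $\frac{n-1}{n+1}$:
+
+ ```python
+ # Numerical check (ours) that E[X_n] = (n - 1)/(n + 1) and tends to 1.
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ for n in (5, 20, 100):
+     x = rng.random((50_000, n))
+     empirical = (x.max(axis=1) - x.min(axis=1)).mean()
+     print(n, round(empirical, 4), round((n - 1) / (n + 1), 4))
+ ```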
+
435
+ ## B Demonstrating the Expressivity of Learned Extended Persistence
436
+
437
+ We present some cases where the classification performance of our method excels. We look for graphs that cannot be distinguished by WL[1]-bounded GNNs. We find that pinwheeled cycle graphs and varied-length cycle graphs can be perfectly distinguished by learned extended persistence and that, in practice, our model performs much better than random guessing on them. See Section 6 for the empirical results of our method against other methods on this synthetic data.
438
+
439
+ ### B.1 Pinwheeled Cycle Graphs (The PINWHEELS Dataset)
440
+
441
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_14_1020_1272_277_216_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_14_1020_1272_277_216_0.jpg)
442
+
443
+ Figure 4: Class 1: A hexagon with pinwheel at each vertex.
444
+
445
+ Figure 3: Class 0: 2 triangles with pinwheel at each vertex.
446
+
447
+ We consider pinwheeled cycle graphs. To form the base skeleton of these graphs, we take the standard counterexample to the WL[1] test: two triangles versus one hexagon. We then append pinwheels with a constant number of vertices to the vertices of these base skeletons. The node attributes are set to a spurious constant noise vector; they have no effect on the labels.
448
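+
+ A construction sketch for such graphs is given below. It is ours, not the paper's data generator, and it assumes a "pinwheel" is a small gadget of $k$ vertices attached at a base vertex; a star stands in for the gadget drawn in Figures 3 and 4.
+
+ ```python
+ # Construction sketch (ours) for PINWHEELS-style graphs with networkx.
+ import networkx as nx
+
+ def attach_pinwheel(G, base, k):
+     """Attach a k-vertex gadget at `base` (a star here; the paper's gadget may differ)."""
+     start = G.number_of_nodes()  # node labels are consecutive integers
+     for i in range(k):
+         G.add_edge(base, start + i)
+
+ def class0_graph(k):
+     """Two disjoint triangles (two cycles) with a pinwheel at each base vertex."""
+     G = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))
+     for v in list(G.nodes):
+         attach_pinwheel(G, v, k)
+     return G
+
+ def class1_graph(k):
+     """One hexagon (one cycle) with a pinwheel at each base vertex."""
+     G = nx.cycle_graph(6)
+     for v in list(G.nodes):
+         attach_pinwheel(G, v, k)
+     return G
+
+ # WL[1] cannot separate the two base skeletons (two triangles vs. one hexagon),
+ # but dim H1 differs: 2 for Class 0 and 1 for Class 1.
+ print(class0_graph(5).number_of_nodes(), class1_graph(5).number_of_nodes())
+ ```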
+
449
+ It is easy to check that Class 0 and Class 1 graphs are indistinguishable by WL[1]; see Figures 3 and 4. Notice that if there are 6 core vertices and 6 core edges in the base skeleton and the pinwheels have size $k$, then under composed edge and vertex deletions we have a $1 - {\left( \frac{6}{{6k} + 6}\right) }^{2}$ probability of deleting only a pinwheel edge or vertex and thus not affecting ${H}_{1}$. This probability converges to 1 as $k \rightarrow \infty$. According to Theorem 5.1, $\dim {H}_{1}$ measures the number of cycles and $\dim {H}_{0}$ measures the number of connected components. If neither of these counts is affected by training during supervised learning, our method is guaranteed to distinguish the two classes simply by counting, according to Theorem 5.1.
450
+
451
+ Certainly the pinwheeled cycle graphs are distinguishable by counts of bars. We check this experimentally by constructing a dataset of 1000 graphs split evenly between the two classes. Class 0 is as in Figure 3 and consists of two triangles with pinwheels of random sizes. Class 1 is as in Figure 4, a hexagon with pinwheels of random sizes attached. We obtain 100% accuracy on average. This is confirmed experimentally in Table 3. This matches the performance of GFL, since counting bars, or Betti numbers, can also be done through 0-dimensional standard persistence. Interestingly, TOGL does not achieve 100% accuracy on this dataset. We conjecture this is because its layers are not able to ignore the spurious, and in fact misleading, constant node attributes.
452
+
453
+ ### B.2 Regular Varied Length Cycle Graphs (The 2CYCLES dataset)
454
+
455
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_15_454_469_311_232_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_15_454_469_311_232_0.jpg)
456
+
457
+ Figure 5: Class 0: A 15 node cycle and an 85 node cycle.
458
+
459
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_15_1084_466_311_235_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_15_1084_466_311_235_0.jpg)
460
+
461
+ Figure 6: Class 1: A 50 node cycle and a 50 node cycle.
462
+
463
+ We further consider varied-length cycle graphs. These are graphs consisting of two cycles. Class 0 has one short and one long cycle, while Class 1 has two cycles of nearly equal length. The node attributes are all identical and spurious in this dataset. Extended persistence should do well at distinguishing these two classes. We conjecture this based on Observation 5.2, which states that there is some filtration that can measure the length of certain cycles.
464
+
465
+ It is the path length, in the sense of Observation 5.3, that is being measured. The 0-dimensional standard persistence is insufficient for this purpose: its infinite bars are determined only by a birth time. Furthermore, extended persistence without cycle representatives is also insufficient, since on these regular graphs with constant attributes a message passing GNN learns a constant filtration function over the nodes. However, with cycle representatives, i.e. a list of scalar node activations per cycle for each graph, we can easily distinguish the average sequence representation since the pair of sequence lengths differs between the classes. In Class 0, a short cycle and a long cycle are paired, while in Class 1, two cycles of medium length are paired.
466
+
467
+ The 2CYCLES dataset is similar to, but more challenging than, the PINWHEELS dataset. It resembles the necklaces dataset from [2], but with more misleading node attributes and simplified to two cycles. It consists of 400 graphs, each made up of two cycles, split into the two classes shown in Figures 5 and 6.
468
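+
+ A generation sketch for such graphs is shown below; it is ours, and the specific cycle lengths are only those illustrated in Figures 5 and 6, not necessarily the exact sampling used to build the dataset.
+
+ ```python
+ # Construction sketch (ours) for 2CYCLES-style graphs with networkx.
+ import networkx as nx
+
+ def two_cycles(len_a, len_b):
+     """Disjoint union of two cycles; all vertices have degree 2."""
+     return nx.disjoint_union(nx.cycle_graph(len_a), nx.cycle_graph(len_b))
+
+ class0 = two_cycles(15, 85)  # one short and one long cycle (Figure 5)
+ class1 = two_cycles(50, 50)  # two cycles of equal length (Figure 6)
+ # Both graphs are 2-regular with the same number of vertices and edges, so a
+ # WL[1]-bounded GNN cannot tell them apart; the extended persistence H1 bar
+ # lengths (cycle sizes) differ between the classes.
+ print(class0.number_of_nodes(), class1.number_of_nodes())
+ ```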
+
469
+ The experimental performance on 2CYCLES surpasses random guessing, while all other methods do no better than random guessing, as stated in Section B.2. Certainly WL[1]-bounded GNNs cannot distinguish the two classes in 2CYCLES since all the graphs are regular. As discussed, because GFL and TOGL use learned 0-dimensional standard persistence, these approaches do no better than random guessing on this dataset.
470
+
471
+ #### B.2.1 Number of Convolutional Layers Experiment
472
+
473
+ We also perform an experiment to determine the number of layers in the filtration function's MPGNN that gives the highest performance. Due to oversmoothing [46], which is exacerbated by the required scalar-dimensional vertex embeddings, the performance drops as we increase the number of layers of the filtration function. See Figure 7 for an illustration of this phenomenon on the Proteins and Mutag datasets. For these two datasets, two layers perform the best.
474
+
475
+ ## C Timing of Extended Persistence Algorithm (without storing cycle representations)
476
+
477
+ Since the persistence computation, especially the extended persistence computation, is the bottleneck of any machine learning algorithm that uses it, it is imperative to have a fast algorithm to compute it. We perform timing experiments with a C++ torch implementation of our fast extended persistence algorithm. The implementation does not require any parallelism since each graph in the batch is assigned a single thread. Our experiment involves two parameters: the sparsity, or edge probability, $p$ of an Erdos-Renyi graph, and the number of vertices $n$ of such a graph. We plot our speedup over GUDHI, the state of the art software for computing extended persistence, as a function of $p$ with $n$ held fixed. We run GUDHI and our algorithm 5 times and take the average and standard deviation of each run's speedup. Since our algorithm has lower complexity, our speedup is theoretically unbounded. We obtain up to a 62x speedup before surpassing 12 hours of computation time for experimentation. The plot is shown in Figure 8. The speedup is up to 2.8x, 9x, 24x, and 62x for $n = 200, 500, 1000, 2000$ respectively.
478
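+
+ For reference, a minimal sketch of the GUDHI side of such a timing run is given below. It is ours and assumes GUDHI 3.2 or later, which exposes `SimplexTree.extend_filtration()` and `SimplexTree.extended_persistence()`; the paper's C++ torch implementation is not reproduced here.
+
+ ```python
+ # Timing sketch (ours) for GUDHI extended persistence on Erdos-Renyi graphs.
+ import time
+ import networkx as nx
+ import numpy as np
+ import gudhi
+
+ def time_gudhi_extended_persistence(n, p, seed=0):
+     rng = np.random.default_rng(seed)
+     G = nx.gnp_random_graph(n, p, seed=seed)
+     f = rng.random(n)  # random vertex filtration values
+     st = gudhi.SimplexTree()
+     for v in G.nodes:
+         st.insert([v], filtration=float(f[v]))
+     for u, v in G.edges:
+         st.insert([u, v], filtration=float(max(f[u], f[v])))
+     t0 = time.perf_counter()
+     st.extend_filtration()
+     st.extended_persistence()
+     return time.perf_counter() - t0
+
+ for p in (0.01, 0.05, 0.1):
+     print(p, time_gudhi_extended_persistence(n=500, p=p))
+ ```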
+
479
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_16_434_227_922_374_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_16_434_227_922_374_0.jpg)
480
+
481
+ Figure 7: An exhibit of oversmoothing in the filtration convolutional layers. Plot of the average accuracy with std. dev. as a function of the number of convolutional layers before the Jumping Knowledge MLP and the extended persistence readout. The Proteins and Mutag datasets were used in (a) and (b) respectively.
482
+
483
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_16_597_1111_599_458_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_16_597_1111_599_458_0.jpg)
484
+
485
+ Figure 8: Average speedup with std. dev. as a function of sparsity $p$ and number of vertices $n$ on Erdos Renyi graphs.
486
+
487
+ ## D Rational Hat Function Visualization
488
+
489
+ Figure 9 and Figure 10 visualize the rational hat function for a fixed $r$ value and varying $x$ and $y$ values. Notice the boundedness of the plot as $\left( {x, y}\right) \rightarrow \infty$. For the theory behind the rational hat function, see [1].
490
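+
+ The plots can be reproduced with a few lines of code. The sketch below is ours and assumes Equation 1 takes the rational hat form of [1], $\widehat{r}(\mathbf{p}) = \frac{1}{1 + {\left| \mathbf{p} - \mathbf{c} \right|}_{1}} - \frac{1}{1 + \left| \left| r \right| - {\left| \mathbf{p} - \mathbf{c} \right|}_{1} \right|}$; if the paper's Equation 1 differs, substitute it in `rational_hat` below.
+
+ ```python
+ # Plotting sketch (ours) for Figures 9 and 10, under the assumed rational hat form.
+ import numpy as np
+ import matplotlib.pyplot as plt
+
+ def rational_hat(d, r):
+     """Assumed rational hat value as a function of d = ||p - c||_1."""
+     return 1.0 / (1.0 + d) - 1.0 / (1.0 + np.abs(np.abs(r) - d))
+
+ d = np.linspace(0.0, 5.0, 500)
+ for r in (0.5, 1.0):
+     plt.plot(d, rational_hat(d, r), label=f"r = {r}")
+ plt.xlabel("|(x, y)|_1 = |p - c|_1")
+ plt.ylabel("rational hat value")
+ plt.legend()
+ plt.show()
+ # Both terms vanish as d grows, so the function stays bounded as (x, y) -> infinity.
+ ```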
+
491
+ ## E Datasets and Hyperparameter Information
492
+
493
+ Here are the datasets, both synthetic and real world, used in all of our experiments, along with the training hyperparameter information.
494
+
495
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_17_307_203_542_487_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_17_307_203_542_487_0.jpg)
496
+
497
+ Figure 9: The function $\widehat{r}$, with its output sliced at one dimension, as a function of ${\left| \left( x, y\right) \right| }_{1}$ with $r = {0.5}$ from Equation 1. The point $(x, y)$ is given by $\left( {x, y}\right) = \mathbf{p} - \mathbf{c}$.
498
+
499
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_17_926_205_544_485_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_17_926_205_544_485_0.jpg)
500
+
501
+ Figure 10: The function $\widehat{r}$, with its output sliced at one dimension, as a function of ${\left| \left( x, y\right) \right| }_{1}$ with $r = {1.0}$ from Equation 1. The point $(x, y)$ is given by $\left( {x, y}\right) = \mathbf{p} - \mathbf{c}$.
502
+
503
+ **Dataset and Hyperparameter Information**
+
+ | Dataset | Graphs | Classes | Avg. Vertices | Avg. Edges | lr | Node Attrs. (Y/N) | Num. Layers |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
+ | DD | 1178 | 2 | 284.32 | 715.66 | 0.01 | Yes | 2 |
+ | PROTEINS | 1113 | 2 | 39.06 | 72.82 | 0.01 | Yes | 2 |
+ | IMDB-MULTI | 1500 | 3 | 13.00 | 65.94 | 0.01 | No | 2 |
+ | MUTAG | 188 | 2 | 17.93 | 19.79 | 0.01 | Yes | 1 |
+ | PINWHEELS | 100 | 2 | 71.934 | 437.604 | 0.01 | No | 2 |
+ | 2CYCLES | 400 | 2 | 551.26 | 551.26 | 0.01 | No | 2 |
+ | MOLBACE | 1513 | 2 | 34.09 | 36.9 | 0.001 | Yes | 2 |
+ | MOLBBBP | 2039 | 2 | 24.06 | 25.95 | 0.001 | Yes | 2 |
504
+
505
+ Table 3: Dataset statistics and training hyperparameters used for all datasets in the scoring experiments of Table 1 and Table 2.
506
+
507
+ ## F Implementation Dependencies
508
+
509
+ Our experiments have the following dependencies: python 3.9.1, torch 1.10.1, torch_geometric 2.0.5, torch_scatter 2.0.9, torch_sparse 0.6.13, scipy 1.6.3, numpy 1.21.2, CUDA 11.2, GCC 7.5.0.
510
+
511
+ ## G Visualization of Graph Filtrations
512
+
513
+ We visualize the filtration functions ${f}_{G}$ learned on graphs $G$ for the datasets IMDB-MULTI, MUTAG, and REDDIT-BINARY. The value of ${f}_{G}\left( v\right)$ for each $v \in V$ is shown in each figure.
514
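+
+ A visualization of this kind can be produced with networkx and matplotlib; the sketch below is ours, with a toy filtration standing in for the learned ${f}_{G}$.
+
+ ```python
+ # Visualization sketch (ours): color the nodes of G by their filtration values f_G(v).
+ import networkx as nx
+ import matplotlib.pyplot as plt
+
+ def draw_filtration(G, f_G, title=""):
+     """Draw G with node colors and labels given by the filtration f_G."""
+     pos = nx.spring_layout(G, seed=0)
+     values = [f_G[v] for v in G.nodes]
+     nx.draw(G, pos, node_color=values, cmap="viridis", with_labels=False)
+     nx.draw_networkx_labels(G, pos, labels={v: f"{f_G[v]:.2f}" for v in G.nodes}, font_size=6)
+     plt.title(title)
+     plt.show()
+
+ # Toy usage with an arbitrary filtration in place of the learned one.
+ G = nx.cycle_graph(8)
+ draw_filtration(G, {v: v / 8 for v in G.nodes}, title="toy filtration")
+ ```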
+
515
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_18_481_804_833_408_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_18_481_804_833_408_0.jpg)
516
+
517
+ Figure 11: IMDB-MULTI learned filtration function
518
+
519
+ ![01963ef8-f5dd-705b-8380-e1b35a0c9a24_18_484_1409_833_412_0.jpg](images/01963ef8-f5dd-705b-8380-e1b35a0c9a24_18_484_1409_833_412_0.jpg)
520
+
521
+ Figure 12: MUTAG learned filtration function
522
+