# An Effective and Efficient Entity Alignment Decoding Algorithm via Third-Order Tensor Isomorphism

Xin Mao $^{1*}$ , Meirong Ma $^{2}$ , Hao Yuan $^{2}$ , Jianchao Zhu $^{2}$ , Zongyu Wang $^{3}$ , Rui Xie $^{3}$ , Wei Wu $^{3}$ , Man Lan $^{1*}$

$^{1}$ School of Computer Science and Technology, East China Normal University  $^{2}$ Transsion Group,  $^{3}$ Meituan Group

xmao@stu.ecnu.edu.cn, mlan@cs.ecnu.edu.cn

{meirong.ma,hao.yuan,jianchao.zhu}@transsion.com

{wangzongyu02, rui.xie, wuwei30}@meituan.com

# Abstract

Entity alignment (EA) aims to find the equivalent entity pairs between KGs, which is a crucial step for integrating multi-source KGs. For a long time, most researchers have regarded EA as a pure graph representation learning task and focused on improving graph encoders while paying little attention to the decoding process. In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI). Specifically, we derive two sets of isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. By combining these equations, DATTI could effectively utilize the adjacency and inner correlation isomorphisms of KGs to enhance the decoding process of EA. Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even on the most advanced EA methods, while the extra required time is less than 3 seconds.

# 1 Introduction

Knowledge graphs (KGs) illustrate the relations between real-world entities—e.g., objects, situations, or concepts—and usually are stored in the form of triples (subject, relation, object). Over recent years, a large number of KGs have been constructed to provide structural knowledge to facilitate downstream applications, such as recommendation systems (Cao et al., 2019) and question-answering systems (Zhao et al., 2020).

Most KGs are independently extracted from different languages or domains. Thus, these KGs usually hold unique information individually but also share some common parts. Integrating these cross-lingual/cross-domain KGs could provide a broader view for users, especially for minority-language users, who usually suffer from a lack of language resources. As shown in Figure 1, entity alignment (EA) aims to find the equivalent entity pairs between KGs, which is a crucial step for integrating KGs.

![](images/c121a218d0f2bb84d7d0813988680678a64dd0d91a8fdd72b1d844c5cd83bc87.jpg)  
Figure 1: An example of cross-lingual entity alignment.

Existing EA methods are built on the same core premise: equivalent entity pairs between KGs have similar neighborhood structures (i.e., isomorphism). Therefore, most existing EA methods (Wang et al., 2018; Sun et al., 2020b; Mao et al., 2020) could be abstracted into the same architecture (as shown in Figure 2): encoding the structural information of KGs into a low-dimensional vector space by Siamese graph encoders and then mapping equivalent entity pairs into the proximate space by alignment loss functions.

For a long time, most researchers have regarded EA as a graph representation learning task and focused on improving graph encoders. Starting from the simplest graph encoder TransE (Bordes et al., 2013), the newest graph encoding methods are successively introduced into EA and achieve decent improvements. For example, GCN-align (Wang et al., 2018) first proposed to use graph convolutional networks (GCN) (Kipf and Welling, 2017) to encode KGs. RSN (Guo et al., 2019) introduces recurrent neural networks (RNN) (Graves et al., 2008) and biased random walk to exploit the long-term relational dependencies existing in KGs. Dual-AMN (Mao et al., 2021a) proposes the proxy-matching layer and normalized hard samples mining loss to speed up the training process.

In stark contrast to the efforts on graph encoders, few researchers focus on improving EA decoding algorithms (Sun et al., 2020c), which have been shown to significantly improve performance and reliability in other fields, such as dependency parsing (Zmigrod et al., 2020) and machine translation (He et al., 2021). Earlier EA studies (Wang et al., 2018; Sun et al., 2017) simply calculate the similarities of each pair of entities and select the closest one as the alignment result. This naive strategy means that one entity may be aligned to multiple entities simultaneously, which violates the one-to-one constraint of EA $^{1}$. Thus, some recent studies (Xu et al., 2020; Zhu et al., 2021) propose the global alignment strategy, i.e., regarding the decoding process as a one-to-one assignment problem that could be solved by the Hungarian algorithm (Kuhn, 1955). Overall, these studies simply reuse existing decoding algorithms without further exploiting the characteristics of KGs. Similar to graph encoders, we argue that a good EA decoding algorithm should also be capable of exploiting the structural information of KGs.

In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI). Different from recent studies (Fey et al., 2020; Mao et al., 2021b) that regard EA as a matrix (second-order tensor) isomorphism problem, we express the isomorphism of KGs in the form of third-order tensors, which could completely describe the structural information of KGs. Specifically, we derive two sets of tensor isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. By combining these equations, DATTI could effectively utilize the adjacency and inner correlation isomorphisms of KGs to enhance the decoding process of EA, thus significantly improving the performance. Besides, the introduction of third-order tensors will inevitably lead to a quadratic increase in space-time complexity. Therefore, we adopt the randomized truncated singular value decomposition algorithm (RTSVD) (Sarlós, 2006) and Sinkhorn operator (Sinkhorn, 1964) to improve efficiency.

To comprehensively evaluate our proposed method, we apply DATTI to three advanced EA methods with different kinds of graph encoders. Experimental results on two widely used public datasets show that DATTI can deliver significant performance improvements ($3.9\%$ on Hits@1 and $3.2\%$ on MRR) even on the most advanced EA methods. Furthermore, our decoding algorithm is highly efficient: the decoding time is less than 3 seconds, which is almost negligible compared to the time consumption of the training process. The main contributions are summarized as follows:

![](images/f35042fb703772aea3b5e3b61ed3b61cdbe63232c391130eaeb4a30d486d713c.jpg)  
Figure 2: The architecture of existing EA methods.

- We propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI), which consists of two sets of tensor isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations.  
- Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even applied to the SOTA method, while the extra required time is less than 3 seconds.

# 2 Task Definition

A KG could be defined as  $G = (E, R, T)$ , where  $E, R$ , and  $T$  represent the entity set, relation set, and triple set, respectively. Given a source graph  $G_{s} = (E_{s}, R_{s}, T_{s})$  and a target graph  $G_{t} = (E_{t}, R_{t}, T_{t})$ , the goal of EA is to explore the one-to-one entity correspondences  $P_{e}$  between KGs.

# 3 Related Work

# 3.1 Encoders and Enhancement

The core premise of EA methods is that equivalent entity pairs between KGs have similar neighborhood structures. As shown in Figure 2, most of them could be summarized into two steps: (1) Using KG embedding methods (e.g., TransE, GCN, and GAT (Velickovic et al., 2018)) to encode entities and relations into low-dimensional embeddings. (2) Mapping these embeddings into a unified vector space through pre-aligned entity pairs and alignment loss functions. To organize existing EA methods clearly, we categorize them based on the encoders and enhancement strategies in Table 1.

Encoders and Losses. There are mainly two kinds of Encoders: Trans represents TransE (Bordes et al., 2013) and subsequent derivative algorithms. These methods assume that entity and relation embeddings follow the equation  $h + r \approx t$ . Because of the easy implementation, the Trans encoders are widely used in early EA methods. More recently, Graph Neural Networks (GNN) gradually became the mainstream encoder because of their powerful modeling capability on graph structures. Inspired by language models, RSN proposes a biased random walk sampling strategy and uses RNN to encode the sampled sequences. As for alignment losses, the vast majority of EA methods (Wang et al., 2018; Wu et al., 2019; Mao et al., 2020) adopt contrastive losses, e.g., Triplet loss (Schroff et al., 2015). These loss functions share one core idea, attracting positive entity pairs and repulsing negative entity pairs.

Enhancement. Due to the lack of labeled data, several methods (Sun et al., 2018; Mao et al., 2020) adopt iterative strategies to produce semi-supervised aligned entity pairs. Despite significant performance improvements, the time consumption of these methods increases several times over. Some methods (Xu et al., 2019; Yang et al., 2019) introduce textual information (e.g., entity name embeddings) as the initial features of GNNs to provide a multi-aspect view. However, literal information is not always available in real applications. For example, there will be privacy risks when using user-generated content. Therefore, we will discuss these textual-based methods separately in the experiment section.

As mentioned in Section 1, some studies (Xu et al., 2020; Wu et al., 2019) regard the decoding process as a one-to-one assignment problem. The assignment problem is a fundamental combinatorial optimization problem. An intuitive instance is to assign $N$ jobs to $N$ workers; the goal is to find a one-to-one assignment plan such that the total profit is maximized. Formally, it is equivalent to maximizing the following equation:

$$
\underset{P \in \mathbb{P}_{N}}{\operatorname{argmax}} \left\langle P, X \right\rangle_{F} \tag{1}
$$

$X\in \mathbb{R}^{N\times N}$ is the profit matrix. $P$ is a permutation matrix denoting the assignment plan: each row and each column of $P$ contains exactly one entry of 1, with 0s elsewhere. $\mathbb{P}_N$ represents the set of all $N$-dimensional permutation matrices, and $\langle \cdot \rangle_F$ represents the Frobenius inner product.
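As a concrete illustration (not from the paper; the toy profit matrix below is invented), Equation (1) can be solved exactly with SciPy's Hungarian-style solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy profit matrix X: X[i, j] is the profit of assigning worker i to job j.
X = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
])

# linear_sum_assignment minimizes cost by default; maximize=True solves
# argmax_P <P, X>_F over permutation matrices P.
rows, cols = linear_sum_assignment(X, maximize=True)
total_profit = X[rows, cols].sum()  # 4 + 5 + 2 = 11

# Recover the permutation matrix P from the optimal index pairs.
P = np.zeros_like(X)
P[rows, cols] = 1.0
```

The Frobenius inner product $\langle P, X\rangle_F$ then equals `(P * X).sum()`, i.e., the total profit of the chosen assignment.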

| Method | Encoder | Enhancement |
| --- | --- | --- |
| JAPE (Sun et al., 2017) | Trans | |
| GCN-Align (Wang et al., 2018) | GNN | |
| OTEA (Pei et al., 2019) | Trans | |
| RSN (Guo et al., 2019) | RNN | |
| BootEA (Sun et al., 2018) | Trans | Semi |
| TransEdge (Sun et al., 2020a) | Trans | Semi |
| MRAEA (Mao et al., 2020) | GNN | Semi |
| Dual-AMN (Mao et al., 2021a) | GNN | Semi |
| GM-Align (Xu et al., 2019) | GNN | Entity Name |
| RDGCN (Wu et al., 2019) | GNN | Entity Name |
| DGMC (Fey et al., 2020) | GNN | Entity Name |
| AttrGNN (Liu et al., 2020) | GNN | Entity Name |
| CREA (Xu et al., 2020) | GNN | Hungarian |
| RAGA (Zhu et al., 2021) | GNN | Hungarian |

Table 1: Categorization of some popular EA methods.

# 4 The Proposed Method

In the following, we describe our proposed decoding algorithm (DATTI), which consists of two sets of tensor isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. Furthermore, we adopt the randomized truncated singular value decomposition (RTSVD) algorithm and the Sinkhorn operator to speed up the decoding process.

# 4.1 Adjacency Isomorphism

Some recent studies (Fey et al., 2020; Mao et al., 2021b) regard EA as a matrix isomorphism problem. These methods assume that the adjacency matrices  $\mathbf{A}_s \in \mathbb{R}^{|E_s| \times |E_s|}$  of source graph  $G_s$  and  $\mathbf{A}_t \in \mathbb{R}^{|E_t| \times |E_t|}$  of target graph  $G_t$  are isomorphic, i.e.,  $\mathbf{A}_s$  could be transformed into  $\mathbf{A}_t$  according to the entity correspondence matrix  $P_e$ :

$$
P_{e} A_{s} P_{e}^{\top} = A_{t} \tag{2}
$$

$P_{e_{[i,j]}} = 1$ indicates that $e_i$ and $e_j$ are equivalent. However, matrices (second-order tensors) cannot fully describe the adjacency information of KGs, which is stored in the form of triples. Therefore, we express KGs as third-order tensors to avoid the information loss of the matrix form. Let $\mathcal{A}_s\in \mathbb{R}^{|E_s|\times |R_s|\times |E_s|}$ and $\mathcal{A}_t\in \mathbb{R}^{|E_t|\times |R_t|\times |E_t|}$ be the adjacency tensors of $G_{s}$ and $G_{t}$, where $\mathcal{A}_{[h,r,t]} = 1$ indicates that the triple $(h,r,t)$ is in the KG. The matrix isomorphism Equation (2) could be generalized into the third-order form as follows:

$$
\mathcal{A}_{s} \times_{1} P_{e} \times_{2} P_{r} \times_{3} P_{e} = \mathcal{A}_{t} \tag{3}
$$

where  $P_r$  represents the one-to-one relation correspondence matrix between  $G_s$  and  $G_t$  and  $\times_k$  represents the  $k$ -mode tensor-matrix product.

![](images/39412a4cd50dae4faa3e3d88f4b09339a9e855089d44690f539da8296aa102af.jpg)  
Figure 3: The illustration of tensor-matrix product and isomorphic adjacency tensors.

As illustrated in Figure 3, Equation (3) can be interpreted as successively reordering the tensor along three axes. Since the number of triples $|T|$ is usually much smaller than $|E| \times |R| \times |E|$, $\mathcal{A}_s$ and $\mathcal{A}_t$ are extremely sparse. Unfortunately, existing tensor computing frameworks (e.g., Numpy (Harris et al., 2020) and Tensorflow (Abadi et al., 2015)) provide only a few limited operators for third-order sparse tensors. Therefore, we have to re-transform Equation (3) into the matrix form:

$$
\mathcal{A}_{s} \times_{1} P_{e} \times_{2} P_{r} \times_{3} P_{e} = \mathcal{A}_{t}
\;\Longleftrightarrow\;
\begin{cases}
P_{e}\, \mathcal{A}_{s}^{(1)} \left(P_{e} \otimes P_{r}\right)^{\top} = \mathcal{A}_{t}^{(1)} \\
P_{r}\, \mathcal{A}_{s}^{(2)} \left(P_{e} \otimes P_{e}\right)^{\top} = \mathcal{A}_{t}^{(2)} \\
P_{e}\, \mathcal{A}_{s}^{(3)} \left(P_{r} \otimes P_{e}\right)^{\top} = \mathcal{A}_{t}^{(3)}
\end{cases} \tag{4}
$$

here $\otimes$ represents the Kronecker product, with $P_{e}\otimes P_{r}\in \mathbb{P}^{(|E|\cdot |R|)\times (|E|\cdot |R|)}$. $\mathcal{A}^{(k)}$ represents the mode-$k$ unfolding matrix of the tensor $\mathcal{A}$, e.g., $\mathcal{A}^{(1)} = [\mathcal{A}_{[:,:,0]}\| \mathcal{A}_{[:,:,1]}\| \dots \| \mathcal{A}_{[:,:,|E|]}]\in \mathbb{R}^{|E|\times (|E|\cdot |R|)}$, where $\|$ is the concatenation operation. When $\mathcal{A}_s$ and $\mathcal{A}_t$ are second-order adjacency tensors, the above equations degrade to Equation (2):

$$
\mathcal{A}_{s} \times_{1} P_{e} \times_{2} P_{e} = \mathcal{A}_{t}
\;\Longleftrightarrow\;
P_{e}\, \mathcal{A}_{s}^{(1)} P_{e}^{\top} = \mathcal{A}_{t}^{(1)} \tag{5}
$$
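The unfolding identities above can be checked numerically. The sketch below (toy dense tensors and our own `unfold` helper, not the authors' code) verifies the mode-1 equation of Equation (4); note that with a C-order unfolding the Kronecker factors appear in the opposite order to the convention used in the paper's equations:

```python
import numpy as np

rng = np.random.default_rng(0)
E, R = 4, 3  # toy numbers of entities and relations

def unfold(T, mode):
    """Mode-n unfolding, flattening the remaining axes in C order."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def random_perm(n):
    """Random n x n permutation matrix."""
    return np.eye(n)[rng.permutation(n)]

A_s = rng.integers(0, 2, size=(E, R, E)).astype(float)  # toy adjacency tensor
P_e, P_r = random_perm(E), random_perm(R)

# A_t = A_s x1 P_e x2 P_r x3 P_e  (k-mode tensor-matrix products)
A_t = np.einsum('pi,qj,rk,ijk->pqr', P_e, P_r, P_e, A_s)

# Mode-1 identity of Equation (4). With this C-order unfolding, the mode-2
# factor comes first in the Kronecker product: kron(P_r, P_e) here plays the
# role of (P_e ⊗ P_r) under the paper's unfolding convention.
lhs = P_e @ unfold(A_s, 0) @ np.kron(P_r, P_e).T
assert np.allclose(lhs, unfold(A_t, 0))
```

The choice of Kronecker ordering is purely a consequence of how the unfolding enumerates its columns; both conventions express the same isomorphism.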

# 4.2 Gramian Isomorphism

The Gramian matrix $G(A) = AA^{\top}$ reflects the inner correlations between the row vectors of matrix $A$. If we regard the rows of $A$ as random variables, $G(A)$ is equivalent to the uncentered covariance matrix. When $A_s$ and $A_t$ are isomorphic, their Gramian matrices $A_sA_s^\top$ and $A_tA_t^\top$ are isomorphic too:

$$
A_{t} A_{t}^{\top} = \left(P_{e} A_{s} P_{e}^{\top}\right) \left(P_{e} A_{s} P_{e}^{\top}\right)^{\top} = P_{e} A_{s} A_{s}^{\top} P_{e}^{\top} \tag{6}
$$
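A minimal NumPy check of Equation (6) on a toy adjacency matrix (illustrative only, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A_s = rng.integers(0, 2, size=(n, n)).astype(float)  # toy adjacency matrix
P = np.eye(n)[rng.permutation(n)]                    # random permutation matrix

A_t = P @ A_s @ P.T  # an isomorphic copy of A_s

# Equation (6): the Gramians are isomorphic under the same permutation.
assert np.allclose(A_t @ A_t.T, P @ (A_s @ A_s.T) @ P.T)
```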

Similar to adjacency matrices, the Gramian matrix isomorphism equation could also be generalized into the third-order form:

$$
\begin{gathered}
P_{e}\, G\!\left(\mathcal{A}_{s}^{(1)}\right) P_{e}^{\top} = G\!\left(\mathcal{A}_{t}^{(1)}\right) \\
P_{r}\, G\!\left(\mathcal{A}_{s}^{(2)}\right) P_{r}^{\top} = G\!\left(\mathcal{A}_{t}^{(2)}\right) \\
P_{e}\, G\!\left(\mathcal{A}_{s}^{(3)}\right) P_{e}^{\top} = G\!\left(\mathcal{A}_{t}^{(3)}\right)
\end{gathered} \tag{7}
$$

Furthermore, it is easy to prove that the following equations hold for arbitrary depth  $l \in \mathbb{N}$ :

$$
\begin{gathered}
P_{e}\, G\!\left(\mathcal{A}_{s}^{(1)}\right)^{l} P_{e}^{\top} = G\!\left(\mathcal{A}_{t}^{(1)}\right)^{l} \\
P_{r}\, G\!\left(\mathcal{A}_{s}^{(2)}\right)^{l} P_{r}^{\top} = G\!\left(\mathcal{A}_{t}^{(2)}\right)^{l} \\
P_{e}\, G\!\left(\mathcal{A}_{s}^{(3)}\right)^{l} P_{e}^{\top} = G\!\left(\mathcal{A}_{t}^{(3)}\right)^{l}
\end{gathered} \tag{8}
$$

# 4.3 Decoding via Isomorphism

Although we have derived two sets of isomorphic equations, neither of them could be solved directly. These equations are equivalent to the quadratic or cubic assignment problem (Yan et al., 2016), which has been proved to be NP-hard (Lawler, 1963). Fortunately, these isomorphic equations could be used to enhance the decoding process.

Let  $\pmb{H}_s^e \in \mathbb{R}^{|E_s| \times d^e}$  and  $\pmb{H}_s^r \in \mathbb{R}^{|R_s| \times d^r}$  represent the entity and relation embeddings of  $G_s$ .  $\pmb{H}_t^e \in \mathbb{R}^{|E_t| \times d^e}$  and  $\pmb{H}_t^r \in \mathbb{R}^{|R_t| \times d^r}$  represent the embeddings of  $G_t$ . Assume that these embeddings have been approximately aligned by EA methods:

$$
P_{e} H_{s}^{e} \approx H_{t}^{e}, \qquad P_{r} H_{s}^{r} \approx H_{t}^{r} \tag{9}
$$

As mentioned in Section 1, some recent studies (Xu et al., 2020; Sun et al., 2020c) regard the decoding process of  $P_{e}$  as an assignment problem:

$$
\underset{P_{e} \in \mathbb{P}_{|E|}}{\operatorname{argmin}} \left\| P_{e} H_{s}^{e} - H_{t}^{e} \right\|_{F}^{2}
\;\Longleftrightarrow\;
\underset{P_{e} \in \mathbb{P}_{|E|}}{\operatorname{argmax}} \left\langle P_{e},\, H_{s}^{e} H_{t}^{e\top} \right\rangle_{F} \tag{10}
$$

Since this simple decoding strategy does not utilize the structural information of KGs, we propose to introduce the adjacency and Gramian isomorphism equations into the decoding process. By combining Equations (4), (8), and (9), the connection between the 8-tuple $\{\mathcal{A}_s,\mathcal{A}_t,H_s^e,H_t^e,H_s^r,H_t^r,P_e,P_r\}$ could be described as follows, for arbitrary depth $l \in \mathbb{N}$:

$$
P_{e}\, G\!\left(\mathcal{A}_{s}^{(1)}\right)^{l} \mathcal{A}_{s}^{(1)} \left(H_{s}^{e} \otimes H_{s}^{r}\right) \approx G\!\left(\mathcal{A}_{t}^{(1)}\right)^{l} \mathcal{A}_{t}^{(1)} \left(H_{t}^{e} \otimes H_{t}^{r}\right) \tag{11}
$$

$$
P_{r}\, G\!\left(\mathcal{A}_{s}^{(2)}\right)^{l} \mathcal{A}_{s}^{(2)} \left(H_{s}^{e} \otimes H_{s}^{e}\right) \approx G\!\left(\mathcal{A}_{t}^{(2)}\right)^{l} \mathcal{A}_{t}^{(2)} \left(H_{t}^{e} \otimes H_{t}^{e}\right) \tag{12}
$$

$$
P_{e}\, G\!\left(\mathcal{A}_{s}^{(3)}\right)^{l} \mathcal{A}_{s}^{(3)} \left(H_{s}^{r} \otimes H_{s}^{e}\right) \approx G\!\left(\mathcal{A}_{t}^{(3)}\right)^{l} \mathcal{A}_{t}^{(3)} \left(H_{t}^{r} \otimes H_{t}^{e}\right) \tag{13}
$$

Detailed proof is listed in Appendix A. Although they look complex, the above equations essentially have the same form as Equation (9). Take Equation (11) as an example: let $\hat{H}_s^l = G(\mathcal{A}_s^{(1)})^l\mathcal{A}_s^{(1)}(H_s^e\otimes H_s^r)$ and $\hat{H}_t^l = G(\mathcal{A}_t^{(1)})^l\mathcal{A}_t^{(1)}(H_t^e\otimes H_t^r)$; then Equation (11) can be simplified as follows:

$$
\boldsymbol {P} _ {e} \hat {\boldsymbol {H}} _ {s} ^ {l} \approx \hat {\boldsymbol {H}} _ {t} ^ {l} \tag {14}
$$

Therefore, $P_{e}$ could also be solved by maximizing $\underset {P_e\in \mathbb{P}_{|E|}}{\arg \max}\left\langle P_e,\hat{H}_s^l\hat{H}_t^{l\top}\right\rangle_F$. Theoretically, for arbitrary depth $l\in \mathbb{N}$, the resulting $P_{e}$ should be the same. However, the above equations assume the ideal isomorphic situation; in practice, $\mathcal{A}_s$ and $\mathcal{A}_t$ are not always strictly isomorphic. To reduce the impact of this noise, $P_{e}$ should fit various depths $l$:

$$
\underset{P_{e} \in \mathbb{P}_{|E|}}{\operatorname{argmax}} \sum_{l=0}^{L} \left\langle P_{e},\, \hat{H}_{s}^{l} \hat{H}_{t}^{l\top} \right\rangle_{F}
\;\Longleftrightarrow\;
\underset{P_{e} \in \mathbb{P}_{|E|}}{\operatorname{argmax}} \left\langle P_{e},\, \sum_{l=0}^{L} \hat{H}_{s}^{l} \hat{H}_{t}^{l\top} \right\rangle_{F} \tag{15}
$$

By Equation (15), we successfully integrate the adjacency and Gramian isomorphism equations into the decoding process of EA. Similarly, Equation (12) yields the relation alignment result $P_r$. Because Equation (13) is equivalent to Equation (11), solving either of them suffices to obtain the entity alignment result $P_e$. Note that the entity scales $|E_s|$ and $|E_t|$ are usually inconsistent in practice, which is known as the unbalanced assignment problem. Assuming that $|E_s| > |E_t|$, a naive solution is to pad the profit matrix with zeros such that its shape becomes $\mathbb{R}^{|E_s| \times |E_s|}$.
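The combined profit matrix of Equation (15) can be sketched as follows. This is a toy dense version with hypothetical names (`depth_features`, `profit`); a real implementation would use sparse unfoldings and the SVD trick described in Section 4.4:

```python
import numpy as np

def depth_features(A_unfold1, H0, L):
    """Return [H^0, ..., H^L] with H^l = G(A^(1))^l A^(1) H0 (cf. Eq. (11)).

    A_unfold1: mode-1 unfolding of an adjacency tensor (dense toy version).
    H0: initial features, standing in for the Kronecker product H^e ⊗ H^r.
    """
    G = A_unfold1 @ A_unfold1.T          # Gramian of the unfolding
    H = A_unfold1 @ H0                   # depth-0 features
    feats = [H]
    for _ in range(L):
        H = G @ H                        # apply one more Gramian power
        feats.append(H)
    return feats

rng = np.random.default_rng(2)
n, m, d, L = 6, 12, 4, 3                 # toy sizes: |E|, |E||R|, feature dim
A_s1, H_s0 = rng.random((n, m)), rng.random((m, d))
A_t1, H_t0 = rng.random((n, m)), rng.random((m, d))

# Profit matrix of Equation (15): sum over depths of H_s^l (H_t^l)^T.
profit = sum(Hs @ Ht.T for Hs, Ht in
             zip(depth_features(A_s1, H_s0, L), depth_features(A_t1, H_t0, L)))
```

The resulting `profit` matrix is then handed to the assignment solver (Sinkhorn, below) in place of the plain embedding similarity of Equation (10).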

# 4.4 Reducing the Complexity

Randomized truncated SVD. The introduction of third-order tensors enables DATTI to fully describe the structural information of KGs. However, there is no such thing as a free lunch: the space-time complexity also increases quadratically. The main bottleneck is computing $\hat{H}_s^l\in \mathbb{R}^{|E_s|\times (d^e\cdot d^r)}$ and $\hat{H}_t^l \in \mathbb{R}^{|E_t| \times (d^e \cdot d^r)}$. Even with the sparse optimization trick, the complexity is still $O(ld^rd^e |T|)$, which is much worse than that of most GNN encoders, $O(l(d^{e} + d^{r})|T|)$ (Mao et al., 2020).

![](images/89a3cc007f857374a67ba8369f87df981fcffc0fc7080f3b72f243f1df7f2e16.jpg)  
Figure 4: The singular value distribution of  $\hat{H}_s^l$  obtained by TransEdge on DBP15K. The abscissa represents the top  $k\%$  singular values, and the ordinate represents the proportion of these singular values in total.

In Figure 4, we list the singular value distribution of  $\hat{H}_s^l$  obtained by TransEdge (Sun et al., 2020a) on DBP15K. Interestingly, the distribution is highly concentrated in the top  $20\%$ , which means the contained information of  $\hat{H}_s^l$  is sparse and compressible. By dropping the smaller singular values of  $\hat{H}_s^l$  and  $\hat{H}_t^l$ , the space-time complexity could be significantly reduced. This paper adopts randomized truncated SVD (Sarlós, 2006) to decompose matrices approximately and only retains the top  $\phi \%$  of the singular values of  $\hat{H}_s^l$  and  $\hat{H}_t^l$ .
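A randomized truncated SVD can be sketched in a few lines of NumPy (a Halko-style range finder; this is an illustration, not the authors' implementation). The toy matrix has exact rank 50, so a rank-50 truncation reconstructs it almost exactly:

```python
import numpy as np

def randomized_truncated_svd(M, rank, n_oversamples=10, seed=0):
    """Approximate rank-k SVD via a randomized range finder (Halko et al. style)."""
    rng = np.random.default_rng(seed)
    k = rank + n_oversamples
    # Sketch the column space of M with a Gaussian test matrix, then orthonormalize.
    Q, _ = np.linalg.qr(M @ rng.standard_normal((M.shape[1], k)))
    # Exact SVD of the small projected matrix, then lift back to the full space.
    U_small, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(3)
# Low-rank toy matrix: exact rank <= 50, so the truncation loses almost nothing.
M = rng.random((200, 50)) @ rng.random((50, 300))
U, s, Vt = randomized_truncated_svd(M, rank=50)
assert np.allclose((U * s) @ Vt, M)
```

The whole cost is dominated by a few matrix products against a thin sketch, which is what makes the truncation cheap compared to a full SVD of $\hat{H}^l$.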

Sinkhorn operator. The first and most well-known solving algorithm for the assignment problem is the Hungarian algorithm (Kuhn, 1955), which is based on improving a matching along the augmenting paths. The time complexity of the original Hungarian algorithm is  $O(n^4)$ . Then, Jonker and Volgenant (1987) improve the algorithm to achieve an  $O(n^3)$  running time.

Besides the Hungarian algorithm, the assignment problem could also be regarded as a special case of the optimal transport (OT) problem. Based on the Sinkhorn operator (Sinkhorn, 1964), Cuturi (2013) proposes a fast and completely parallelizable algorithm for the OT problem:

$$
\begin{gathered}
S^{0}(X) = \exp(X), \\
S^{k}(X) = \mathcal{N}_{c}\!\left(\mathcal{N}_{r}\!\left(S^{k-1}(X)\right)\right), \\
\operatorname{Sinkhorn}(X) = \lim_{k \rightarrow \infty} S^{k}(X).
\end{gathered} \tag{16}
$$

where $\mathcal{N}_r(X) = X\oslash (X\mathbf{1}_N\mathbf{1}_N^{\top})$ and $\mathcal{N}_c(X) = X\oslash (\mathbf{1}_N\mathbf{1}_N^{\top}X)$ are the row-wise and column-wise normalization operators of a matrix, $\oslash$ represents element-wise division, and $\mathbf{1}_N$ is a column vector of ones.

| Dataset | Language | \|E\| | \|R\| | \|T\| |
| --- | --- | --- | --- | --- |
| DBP<sub>ZH-EN</sub> | Chinese | 19,388 | 1,701 | 70,414 |
| | English | 19,572 | 1,323 | 95,142 |
| DBP<sub>JA-EN</sub> | Japanese | 19,814 | 1,299 | 77,214 |
| | English | 19,780 | 1,153 | 93,484 |
| DBP<sub>FR-EN</sub> | French | 19,661 | 903 | 105,998 |
| | English | 19,993 | 1,208 | 115,722 |
| SRPRS<sub>FR-EN</sub> | French | 15,000 | 177 | 33,532 |
| | English | 15,000 | 221 | 36,508 |
| SRPRS<sub>DE-EN</sub> | German | 15,000 | 120 | 37,377 |
| | English | 15,000 | 222 | 38,363 |

Table 2: Statistical data of DBP15K and SRPRS.

Then, Mena et al. (2018) further prove that the Sinkhorn operator can also solve the assignment problem as a special case of the OT problem:

$$
\underset{P \in \mathbb{P}_{N}}{\operatorname{argmax}} \left\langle P, X \right\rangle_{F} = \lim_{\tau \rightarrow 0^{+}} \operatorname{Sinkhorn}(X / \tau) \tag{17}
$$

The time complexity of the Sinkhorn operator is  $O(kn^2)$ . According to our experimental results, a small  $k$  is enough to achieve decent performance. Compared with the Hungarian algorithm, the Sinkhorn operation is much more efficient. Therefore, this paper adopts the Sinkhorn operator to solve Equation (15).
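Equations (16) and (17) translate directly into NumPy. The sketch below is a minimal, illustrative implementation with a fixed iteration count and temperature (matching the hyper-parameters reported later); the planted-permutation test matrix is invented:

```python
import numpy as np

def sinkhorn(X, tau=0.02, n_iters=15):
    """Approximate argmax_P <P, X>_F via Sinkhorn(X / tau), Eqs. (16)-(17)."""
    S = np.exp(X / tau - (X / tau).max())     # subtract max for numerical stability
    for _ in range(n_iters):
        S = S / S.sum(axis=1, keepdims=True)  # row normalization    N_r
        S = S / S.sum(axis=0, keepdims=True)  # column normalization N_c
    return S

rng = np.random.default_rng(4)
n = 6
perm = rng.permutation(n)
# Profit matrix whose best assignment is a planted permutation plus small noise.
X = np.eye(n)[perm] + 0.1 * rng.random((n, n))
P_soft = sinkhorn(X)

# With a small temperature, the soft assignment recovers the planted permutation.
recovered = P_soft.argmax(axis=1)
assert np.array_equal(recovered, perm)
```

Every step is a dense matrix operation, so the whole procedure parallelizes trivially on a GPU, which is the practical advantage over the Hungarian algorithm.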

# 5 Experiments

Our experiments are conducted on a PC with a GeForce GTX 3090 GPU and a Ryzen ThreadRipper 3970X CPU. The code and datasets are available on GitHub<sup>2</sup>.

# 5.1 Datasets

To comprehensively evaluate the proposed decoding algorithm, we experiment with two widely used public datasets: (1) DBP15K (Sun et al., 2017) consists of three cross-lingual subsets from multilingual DBpedia; each subset contains 15,000 entity pairs. (2) SRPRS (Guo et al., 2019): each subset also contains 15,000 entity pairs but with far fewer triples compared to DBP15K. The statistics of these datasets are summarized in Table 2. To be consistent with previous studies (Wang et al., 2018; Sun et al., 2018), we randomly split $30\%$ of the pre-aligned entity pairs for training and development while using the remaining $70\%$ for testing. All the results are the average of five independent runs.

# 5.2 Baselines

To ensure universality, we evaluate DATTI on three advanced EA methods with different types of graph encoders: Dual-AMN (Mao et al., 2021a) is the SOTA of GNN-based methods; TransEdge (Sun et al., 2020a) is the SOTA of Trans-based methods; RSN (Guo et al., 2019) is the only EA method using an RNN as the encoder. Furthermore, we choose the Hungarian algorithm (Hun.) as the decoding baseline, which recent EA methods (Xu et al., 2020; Zhu et al., 2021) have proven effective.

# 5.3 Settings

Metrics. Following convention, we use Hits@k and Mean Reciprocal Rank (MRR) as the evaluation metrics. The Hits@k score measures the proportion of correctly aligned entities ranked within the top-k candidates. In particular, Hits@1 is equivalent to accuracy.
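For concreteness, both metrics can be computed from a similarity matrix in a few lines (a sketch we added; it assumes the ground-truth counterpart of source entity  $i$  is target entity  $i$ ):

```python
import numpy as np

def evaluate(sim, ks=(1, 10)):
    """Hits@k and MRR for a similarity matrix whose gold match
    for row i is column i."""
    n = sim.shape[0]
    order = np.argsort(-sim, axis=1)                     # descending ranking per row
    ranks = np.where(order == np.arange(n)[:, None])[1]  # 0-based rank of the gold column
    hits = {k: float(np.mean(ranks < k)) for k in ks}
    mrr = float(np.mean(1.0 / (ranks + 1)))
    return hits, mrr

sim = np.array([[0.9, 0.2, 0.1],
                [0.3, 0.1, 0.8],
                [0.2, 0.7, 0.4]])
hits, mrr = evaluate(sim)
print(hits, round(mrr, 3))  # gold columns rank 1st, 3rd, 2nd → Hits@1 = 1/3
```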

Hyper-parameter. For TransEdge, we retain the top  $\phi = 20\%$  of the singular values of  $\hat{H}_s^l$  and  $\hat{H}_t^l$ . Since the output dimensions of Dual-AMN  $(d^{e} = 768, d^{r} = 128)$  and RSN  $(d^{e} = d^{r} = 256)$  are much larger than those of TransEdge  $(d^{e} = d^{r} = 75)$ , we set the retaining ratio to only  $\phi = 2\%$  for them. The other hyper-parameters are kept the same across all datasets and methods: iterations  $k = 15$ ; temperature  $\tau = 0.02$ ; max depth  $L = 3$ .

# 5.4 Main Experiments

We list the main experimental results in Table 3. Among the three EA methods, Dual-AMN beats the other baselines by more than  $5.5\%$  on Hits@1 and  $4.2\%$  on MRR, which indicates the advantage of GNN encoders. On RSN and TransEdge, the Hungarian algorithm yields decent improvements of at least  $3.2\%$  on Hits@1. In contrast, it does not improve Dual-AMN, probably because the bi-directional nearest-neighbor iterative strategy of Dual-AMN already incorporates the core idea of the Hungarian algorithm.

Our proposed DATTI consistently achieves the best performance on all datasets and baselines. On DBP15K, DATTI delivers gains of at least  $2.8\%$  on  $\text{Hits} @ 1$  and  $3.2\%$  on MRR. Especially for the SOTA method Dual-AMN, DATTI further raises the performance ceiling of EA by more than  $3.9\%$  on  $\text{Hits} @ 1$ . On SRPRS, DATTI significantly improves RSN and TransEdge, but the improvement on Dual-AMN is much smaller. One possible explanation is that SRPRS removes too many triples, resulting in a lower performance ceiling.

<table><tr><td rowspan="2">Method</td><td colspan="3">DBPZH-EN</td><td colspan="3">DBPJA-EN</td><td colspan="3">DBPFR-EN</td><td colspan="3">SRPRSFR-EN</td><td colspan="3">SRPRSDE-EN</td></tr><tr><td>H@1</td><td>H@10</td><td>MRR</td><td>H@1</td><td>H@10</td><td>MRR</td><td>H@1</td><td>H@10</td><td>MRR</td><td>H@1</td><td>H@10</td><td>MRR</td><td>H@1</td><td>H@10</td><td>MRR</td></tr><tr><td>RSN</td><td>0.607</td><td>0.829</td><td>0.685</td><td>0.591</td><td>0.815</td><td>0.670</td><td>0.632</td><td>0.864</td><td>0.713</td><td>0.351</td><td>0.638</td><td>0.447</td><td>0.511</td><td>0.744</td><td>0.590</td></tr><tr><td>+ Hun.</td><td>0.661</td><td>-</td><td>-</td><td>0.633</td><td>-</td><td>-</td><td>0.693</td><td>-</td><td>-</td><td>0.374</td><td>-</td><td>-</td><td>0.538</td><td>-</td><td>-</td></tr><tr><td>+ DATTI</td><td>0.721</td><td>0.903</td><td>0.785</td><td>0.686</td><td>0.895</td><td>0.759</td><td>0.720</td><td>0.918</td><td>0.790</td><td>0.407</td><td>0.694</td><td>0.502</td><td>0.559</td><td>0.782</td><td>0.637</td></tr><tr><td>(Imp.%)</td><td>9.1%</td><td>8.9%</td><td>14.6%</td><td>8.4%</td><td>9.8%</td><td>13.3%</td><td>3.9%</td><td>6.3%</td><td>10.8%</td><td>8.8%</td><td>8.8%</td><td>12.3%</td><td>3.9%</td><td>5.1%</td><td>8.0%</td></tr><tr><td>TransEdge</td><td>0.762</td><td>0.921</td><td>0.818</td><td>0.746</td><td>0.929</td><td>0.811</td><td>0.769</td><td>0.940</td><td>0.830</td><td>0.403</td><td>0.675</td><td>0.492</td><td>0.556</td><td>0.753</td><td>0.633</td></tr><tr><td>+ Hun.</td><td>0.787</td><td>-</td><td>-</td><td>0.771</td><td>-</td><td>-</td><td>0.796</td><td>-</td><td>-</td><td>0.427</td><td>-</td><td>-</td><td>0.574</td><td>-</td><td>-</td></tr><tr><td>+ DATTI</td><td>0.814</td><td>0.947</td><td>0.863</td><td>0.804</td><td>0.957</td><td>0.861</td><td>0.818</td><td>0.965</td><td>0.873</td><td>0.441</td><td>0.707</td><td>0.521</td><td>0.593</td><td>0.782</td><td>0.673</td></tr><tr><td>(Imp.%)</td><td>3.4%</td><td>2.8%</td><td>5.5%</td><td>4.3%</td><td>3.0%</td><td>6.2%</td><td>2.8%</td><td>2.7%</td><td>5.2%</td><td>3.3%</td><td>4.7%</td><td>5.9%</td><td>3.5%</td><td>3.8%</td><td>6.3%</td></tr><tr><td>Dual-AMN</td><td>0.804</td><td>0.937</td><td>0.853</td><td>0.803</td><td>0.947</td><td>0.856</td><td>0.834</td><td>0.962</td><td>0.881</td><td>0.483</td><td>0.755</td><td>0.573</td><td>0.612</td><td>0.819</td><td>0.683</td></tr><tr><td>+ Hun.</td><td>0.801</td><td>-</td><td>-</td><td>0.803</td><td>-</td><td>-</td><td>0.839</td><td>-</td><td>-</td><td>0.483</td><td>-</td><td>-</td><td>0.611</td><td>-</td><td>-</td></tr><tr><td>+ DATTI</td><td>0.835</td><td>0.953</td><td>0.880</td><td>0.836</td><td>0.969</td><td>0.884</td><td>0.873</td><td>0.979</td><td>0.913</td><td>0.495</td><td>0.760</td><td>0.583</td><td>0.623</td><td>0.822</td><td>0.691</td></tr><tr><td>(Imp.%)</td><td>3.9%</td><td>1.7%</td><td>3.2%</td><td>4.1%</td><td>2.3%</td><td>3.3%</td><td>4.7%</td><td>1.8%</td><td>3.6%</td><td>2.5%</td><td>0.6%</td><td>1.7%</td><td>1.8%</td><td>0.4%</td><td>1.2%</td></tr></table>

Table 3: Main experimental results on DBP15K and SRPRS. All the results and initial embeddings are obtained by their official code with default hyper-parameters. Imp.% represents the percentage increase of DATTI compared to the suboptimal result. Since the Hungarian algorithm only outputs one aligned entity pair for each entity, instead of a rank list, we can only report Hits@1. All improvements are statistically significant with  $p < 0.01$  on paired  $t$ -test.  

<table><tr><td rowspan="2">Method</td><td colspan="2">DBP15K</td><td colspan="2">SRPRS</td></tr><tr><td>Train</td><td>DATTI</td><td>Train</td><td>DATTI</td></tr><tr><td>RSN</td><td>3,659</td><td>2.4</td><td>1,279</td><td>1.7</td></tr><tr><td>TransEdge</td><td>1,625</td><td>1.3</td><td>907</td><td>1.2</td></tr><tr><td>Dual-AMN</td><td>177</td><td>3.3</td><td>163</td><td>2.6</td></tr></table>

# 5.5 Auxiliary Experiments

To explore the behavior of our proposed decoding algorithm in different situations, we design the following experiments:

Time Efficiency. By adopting RTSVD and the Sinkhorn operator, our proposed decoding algorithm achieves high efficiency. Table 4 lists the time costs of the training and decoding (DATTI) processes of the three EA methods on DBP15K and SRPRS. DATTI requires at most 3.3 seconds to obtain the result, which is negligible even compared to the training time of the fastest method, Dual-AMN.
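The RTSVD step can be sketched as follows (our own NumPy implementation of a standard randomized range finder in the spirit of Sarlós (2006); the function name and parameters are illustrative, not taken from the released code):

```python
import numpy as np

def rtsvd(H, phi=0.02, n_oversample=10, n_iter=4, seed=0):
    """Randomized truncated SVD: keep roughly the top phi
    fraction of singular directions of H."""
    n, d = H.shape
    k = max(1, int(d * phi))
    rng = np.random.default_rng(seed)
    # Random range finder with a few power iterations for accuracy.
    Q = rng.standard_normal((d, k + n_oversample))
    Y = H @ Q
    for _ in range(n_iter):
        Y = H @ (H.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # Exact SVD of the small projected matrix.
    U_small, S, Vt = np.linalg.svd(Q.T @ H, full_matrices=False)
    U = Q @ U_small
    return U[:, :k], S[:k], Vt[:k]

H = np.random.default_rng(1).standard_normal((1000, 100))
U, S, Vt = rtsvd(H, phi=0.1)
print(U.shape, S.shape)  # → (1000, 10) (10,)
```

The cost is dominated by matrix products with the thin sketch, which is why truncating to a small  $\phi$  keeps decoding fast.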

Adjacency and Gramian Isomorphism. The core contribution of DATTI is to introduce the adjacency and Gramian isomorphism equations into the EA decoding process. To demonstrate their effectiveness, we add each of them independently to Dual-AMN. As shown in Table 5, each alone improves performance only slightly (less than  $1.6\%$  on Hits@1). Interestingly, the gain brought by their combination is greater than the sum of their independent gains, which indicates that the two kinds of isomorphism equations capture non-overlapping information.

Table 4: Time costs (seconds) on DBP15K and SRPRS.  

<table><tr><td rowspan="2">Method</td><td colspan="2">DBPZH-EN</td><td colspan="2">DBPJA-EN</td><td colspan="2">DBPFR-EN</td></tr><tr><td>Hits@1</td><td>MRR</td><td>Hits@1</td><td>MRR</td><td>Hits@1</td><td>MRR</td></tr><tr><td>Dual-AMN</td><td>0.804</td><td>0.853</td><td>0.803</td><td>0.856</td><td>0.834</td><td>0.881</td></tr><tr><td>+Adj.</td><td>0.820</td><td>0.866</td><td>0.818</td><td>0.868</td><td>0.859</td><td>0.902</td></tr><tr><td>+Gram.</td><td>0.809</td><td>0.857</td><td>0.812</td><td>0.863</td><td>0.848</td><td>0.895</td></tr><tr><td>+DATTI</td><td>0.835</td><td>0.880</td><td>0.836</td><td>0.884</td><td>0.873</td><td>0.913</td></tr></table>

Table 5: Ablation studies on DBP15K.

![](images/cfec7230d3be6bce03d64157ade1df8c1fe1871d5b6c4c136e5a26ca187b650f.jpg)  
Figure 5: Hits@1 on DBP $_{\mathrm{ZH\text{-}EN}}$  with different  $\tau$ .

Iterations  $k$  and Temperature  $\tau$ . The  $\tau$  in the Sinkhorn operator pushes the output distribution closer to one-hot, similar to the temperature in the softmax operator. We vary  $\tau$  from 0.01 to 0.05 and report the corresponding performance curves of DATTI (Dual-AMN) on  $\mathrm{DBP_{ZH\text{-}EN}}$  in Figure 5. With an appropriate value, the Sinkhorn operator converges quickly to the optimal solution. Although  $\tau$  theoretically needs to be close to zero, an overly small  $\tau$  makes the algorithm numerically unstable because  $\exp(\boldsymbol{X}/\tau)$  produces very large floating-point values, while an overly large  $\tau$  prevents the algorithm from converging.

![](images/cfc87515ced231c14646c39aa3ea52d0a1f39925ef5fef56f23351656ae677fa.jpg)  
Figure 6: Hits@1 on DBP15K with different depths  $L$ .

![](images/bfca7921b9a0cba9ebc23a6ebd159ee05607608271f8f3241990283655ab0457.jpg)  
Figure 7: Hits@1 and time cost (seconds) on  $\mathrm{DBP_{ZH\text{-}EN}}$  with different retaining ratios  $\phi$ .

Depth  $L$ . Figure 6 shows the performance of DATTI (Dual-AMN) with different max depths  $L$ . In particular,  $L = 0$  is equivalent to using only the adjacency isomorphism equations to decode  $P_{e}$ . When the depth  $L$  is less than 3, each additional layer delivers significant performance improvements on all subsets of DBP15K. When stacking more layers, the gains become negligible or even negative, which indicates that over-smoothing (Kipf and Welling, 2017) also exists in DATTI.

Retaining ratio  $\phi$ . To reduce the space-time complexity of DATTI, we only retain the top  $\phi$  of the singular values of  $\hat{H}_s^l$  and  $\hat{H}_t^l$ . In Figure 7, we report the Hits@1 and time cost of DATTI (Dual-AMN) on DBP $_{\mathrm{ZH\text{-}EN}}$  with different retaining ratios  $\phi$ . Once the retaining ratio exceeds  $2\%$ , the growth of Hits@1 becomes very slow, while the time cost keeps growing quadratically. Therefore,  $\phi = 2\%$  is the sweet spot between performance and efficiency in this situation. In practice, the retaining ratio  $\phi$  can be adjusted according to computing resources and data scale.

# 5.6 Unsupervised Entity Alignment

So far, all the experiments are based on pure structural-based EA methods. As mentioned in Section 3.1, some methods (Xu et al., 2020; Wu et al., 2019) introduce textual information (e.g., entity name) to provide a multi-aspect view. Specifically,

<table><tr><td rowspan="2">Method</td><td colspan="2">DBPZH-EN</td><td colspan="2">DBPJA-EN</td><td colspan="2">DBPFR-EN</td></tr><tr><td>Hits@1</td><td>Hits@10</td><td>Hits@1</td><td>Hits@10</td><td>Hits@1</td><td>Hits@10</td></tr><tr><td>GM-Align</td><td>0.679</td><td>0.785</td><td>0.740</td><td>0.872</td><td>0.894</td><td>0.952</td></tr><tr><td>RDGCN</td><td>0.697</td><td>0.842</td><td>0.763</td><td>0.763</td><td>0.873</td><td>0.957</td></tr><tr><td>DGMC</td><td>0.801</td><td>0.875</td><td>0.848</td><td>0.897</td><td>0.933</td><td>0.960</td></tr><tr><td>AttrGNN</td><td>0.796</td><td>0.929</td><td>0.783</td><td>0.920</td><td>0.919</td><td>0.979</td></tr><tr><td>CREA</td><td>0.736</td><td>-</td><td>0.792</td><td>-</td><td>0.924</td><td>-</td></tr><tr><td>RAGA</td><td>0.873</td><td>-</td><td>0.909</td><td>-</td><td>0.966</td><td>-</td></tr><tr><td>Init-Emb</td><td>0.625</td><td>0.756</td><td>0.680</td><td>0.807</td><td>0.848</td><td>0.919</td></tr><tr><td>+Hun.</td><td>0.667</td><td>-</td><td>0.728</td><td>-</td><td>0.893</td><td>-</td></tr><tr><td>+DATTI</td><td>0.890</td><td>0.958</td><td>0.921</td><td>0.971</td><td>0.979</td><td>0.995</td></tr><tr><td>(Imp.%)</td><td>1.9%</td><td>3.1%</td><td>1.3%</td><td>5.5%</td><td>1.3%</td><td>1.6%</td></tr></table>

Table 6: Performances of textual-based EA methods. The results of baselines are collected from the original papers. Init-Emb represents only using the cosine similarity between the averaged name embeddings.

these methods first use machine translation systems or cross-lingual word embeddings to map entity and relation names into a unified semantic space and then average the pre-trained word embeddings to construct the initial features for entities and relations. In our opinion, since the initial features of entities  $H^{e}$  and relations  $H^{r}$  have been pre-mapped, these textual-based EA methods act more like decoding algorithms that eliminate translation noise. In this situation, DATTI can play a similar role even without any pre-aligned entity pairs.

To make fair comparisons with these textual-based EA methods, we use the same entity name translations and pre-trained word embeddings provided by Xu et al. (2019). For DATTI, we retain the top  $10\%$  of the singular values of  $\hat{H}_s^l$  and  $\hat{H}_t^l$ , while keeping the other hyper-parameters the same. Table 6 lists the performances of DATTI and six baselines on DBP15K. Surprisingly, unsupervised DATTI outperforms all the supervised competitors, improving Hits@1 by more than  $1.3\%$ . Besides showing the strong competitiveness of DATTI, this result also indicates that existing textual-based EA methods have considerable redundancy: when the initial features have been pre-mapped, complex neural networks and pre-aligned entity pairs may not be necessary.
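As a concrete picture of the Init-Emb baseline, the construction amounts to averaging word vectors and comparing by cosine similarity (a toy sketch with made-up two-dimensional vectors; the real features use the translations and pre-trained embeddings of Xu et al. (2019)):

```python
import numpy as np

# Hypothetical toy word vectors standing in for the real
# pre-trained cross-lingual embeddings.
word_vec = {"united": [1.0, 0.0], "states": [0.0, 1.0],
            "america": [0.5, 0.5], "france": [0.0, 0.3]}

def name_embedding(name):
    """Average the word embeddings of a (translated) entity name."""
    vecs = [word_vec[w] for w in name.lower().split() if w in word_vec]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def cosine_sim(a, b):
    a = a / (np.linalg.norm(a) + 1e-8)
    b = b / (np.linalg.norm(b) + 1e-8)
    return float(a @ b)

e_s = name_embedding("United States")
e_t = name_embedding("United States America")
print(round(cosine_sim(e_s, e_t), 3))  # → 1.0
```

Init-Emb scores every source-target pair this way; DATTI then decodes from those raw similarities.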

# 6 Conclusion

In this paper, we propose an effective and efficient EA decoding algorithm via third-order tensor isomorphism (DATTI). Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even on the most advanced EA methods, while the extra required time is less than 3 seconds.

# References

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.  
Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787-2795.  
Yixin Cao, Xiang Wang, Xiangnan He, Zikun Hu, and Tat-Seng Chua. 2019. Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 151-161.  
Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2292-2300.  
Matthias Fey, Jan Eric Lenssen, Christopher Morris, Jonathan Masci, and Nils M. Kriege. 2020. Deep graph matching consensus. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.  
Alex Graves, Marcus Liwicki, Santiago Fernández, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber. 2008. A novel connectionist system for unconstrained handwriting recognition. IEEE transactions on pattern analysis and machine intelligence, 31(5):855-868.  
Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learning to exploit long-term relational dependencies in knowledge graphs. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pages 2505-2514.  
Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. 2020. Array programming with NumPy. Nature, 585(7825):357-362.  
Hao He, Qian Wang, Zhipeng Yu, Yang Zhao, Jiajun Zhang, and Chengqing Zong. 2021. Synchronous interactive decoding for multilingual neural machine translation. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 12981-12988. AAAI Press.  
Roy Jonker and A. Volgenant. 1987. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38(4):325-340.  
Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.  
Harold W Kuhn. 1955. The hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2):83-97.  
Eugene L Lawler. 1963. The quadratic assignment problem. Management science, 9(4):586-599.  
Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, and Tat-Seng Chua. 2020. Exploring and evaluating attributes, values, and structures for entity alignment. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 6355-6364. Association for Computational Linguistics.  
Xin Mao, Wenting Wang, Yuanbin Wu, and Man Lan. 2021a. Boosting the speed of entity alignment 10  $\times$ : Dual attention matching network with normalized hard sample mining. In WWW '21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, April 19-23, 2021, pages 821-832. ACM / IW3C2.  
Xin Mao, Wenting Wang, Yuanbin Wu, and Man Lan. 2021b. From alignment to assignment: Frustratingly simple unsupervised entity alignment. CoRR, abs/2109.02363.  
Xin Mao, Wenting Wang, Huimin Xu, Man Lan, and Yuanbin Wu. 2020. MRAEA: an efficient and robust entity alignment approach for cross-lingual knowledge graph. In WSDM '20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020, pages 420-428.

Gonzalo E. Mena, David Belanger, Scott W. Linderman, and Jasper Snoek. 2018. Learning latent permutations with gumbel-sinkhorn networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Open-Review.net.  
Shichao Pei, Lu Yu, and Xiangliang Zhang. 2019. Improving cross-lingual entity alignment via optimal transport. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 3231-3237.  
Tamás Sarlós. 2006. Improved approximation algorithms for large matrices via random projections. In 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2006), 21-24 October 2006, Berkeley, California, USA, Proceedings, pages 143-152. IEEE Computer Society.  
Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 815-823. IEEE Computer Society.  
Richard Sinkhorn. 1964. A relationship between arbitrary positive matrices and doubly stochastic matrices. The annals of mathematical statistics, 35(2):876-879.  
Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attribute-preserving embedding. In *The Semantic Web - ISWC* 2017 - 16th International Semantic Web Conference, Vienna, Austria, October 21-25, 2017, Proceedings, Part I, volume 10587 of Lecture Notes in Computer Science, pages 628-644. Springer.  
Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4396-4402.  
Zequn Sun, Jiacheng Huang, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu. 2020a. Transedge: Translating relation-contextualized embeddings for knowledge graphs. CoRR, abs/2004.13579.  
Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020b. Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 222-229.

Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, and Chengkai Li. 2020c. A benchmarking study of embedding-based entity alignment for knowledge graphs. Proc. VLDB Endow., 13(11):2326-2340.  
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.  
Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 349-357.  
Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019. Relation-aware entity alignment for heterogeneous knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5278-5284.  
Kun Xu, Linfeng Song, Yansong Feng, Yan Song, and Dong Yu. 2020. Coordinated reasoning for crosslingual knowledge graph alignment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9354-9361.  
Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang, and Dong Yu. 2019. Cross-lingual knowledge graph alignment via graph matching neural network. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 3156-3161.  
Junchi Yan, Xu-Cheng Yin, Weiyao Lin, Cheng Deng, Hongyuan Zha, and Xiaokang Yang. 2016. A short survey of recent advances in graph matching. In Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval, ICMR 2016, New York, New York, USA, June 6-9, 2016, pages 167-174. ACM.  
Hsiu-Wei Yang, Yanyan Zou, Peng Shi, Wei Lu, Jimmy Lin, and Xu Sun. 2019. Aligning cross-lingual entities with multi-aspect information. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4430-4440.

Chen Zhao, Chenyan Xiong, Xin Qian, and Jordan L. Boyd-Graber. 2020. Complex factoid question answering with a free-text knowledge graph. In WWW '20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pages 1205-1216.

Renbo Zhu, Meng Ma, and Ping Wang. 2021. RAGA: relation-aware graph attention networks for global entity alignment. In Advances in Knowledge Discovery and Data Mining - 25th Pacific-Asia Conference, PAKDD 2021, Virtual Event, May 11-14, 2021, Proceedings, Part I, volume 12712 of Lecture Notes in Computer Science, pages 501-513. Springer.

Ran Zmigrod, Tim Vieira, and Ryan Cotterell. 2020. Please mind the root: Decoding arborescences for dependency parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 4809-4819. Association for Computational Linguistics.

# A Appendix

Proof: To prove Equation (11), we combine the first sub-equations of Equations (4) and (8):

$$
\left\{ \begin{array}{l} \boldsymbol {P} _ {e} G (\boldsymbol {\mathcal {A}} _ {s} ^ {(1)}) ^ {l} \boldsymbol {P} _ {e} ^ {\top} = G (\boldsymbol {\mathcal {A}} _ {t} ^ {(1)}) ^ {l} \\ \boldsymbol {P} _ {e} \boldsymbol {\mathcal {A}} _ {s} ^ {(1)} (\boldsymbol {P} _ {e} \otimes \boldsymbol {P} _ {r}) ^ {\top} = \boldsymbol {\mathcal {A}} _ {t} ^ {(1)} \end{array} \right.
$$

Since  $\boldsymbol{P}_{e}^{\top} \boldsymbol{P}_{e} = \boldsymbol{E}$ , multiplying the two sub-equations together yields:

$$
\boldsymbol {P} _ {e} G (\boldsymbol {\mathcal {A}} _ {s} ^ {(1)}) ^ {l} \boldsymbol {\mathcal {A}} _ {s} ^ {(1)} (\boldsymbol {P} _ {e} \otimes \boldsymbol {P} _ {r}) ^ {\top} = G (\boldsymbol {\mathcal {A}} _ {t} ^ {(1)}) ^ {l} \boldsymbol {\mathcal {A}} _ {t} ^ {(1)}
$$

According to Equation (9), we obtain:

$$
\boldsymbol {P} _ {e} \boldsymbol {H} _ {s} ^ {e} \otimes \boldsymbol {P} _ {r} \boldsymbol {H} _ {s} ^ {r} \approx \boldsymbol {H} _ {t} ^ {e} \otimes \boldsymbol {H} _ {t} ^ {r} \tag {18}
$$

Finally, because of  $(\pmb{P}_e \otimes \pmb{P}_r)^\top (\pmb{P}_e \pmb{H}_s^e \otimes \pmb{P}_r \pmb{H}_s^r) = \pmb{P}_e^\top \pmb{P}_e \pmb{H}_s^e \otimes \pmb{P}_r^\top \pmb{P}_r \pmb{H}_s^r = \pmb{H}_s^e \otimes \pmb{H}_s^r$ , Equation (11) is proved as follows:

$$
\begin{array}{l} \boldsymbol {P} _ {e} G \left(\boldsymbol {\mathcal {A}} _ {s} ^ {(1)}\right) ^ {l} \boldsymbol {\mathcal {A}} _ {s} ^ {(1)} \left(\boldsymbol {P} _ {e} \otimes \boldsymbol {P} _ {r}\right) ^ {\top} \left(\boldsymbol {P} _ {e} \boldsymbol {H} _ {s} ^ {e} \otimes \boldsymbol {P} _ {r} \boldsymbol {H} _ {s} ^ {r}\right) \\ = \boldsymbol {P} _ {e} G \left(\boldsymbol {\mathcal {A}} _ {s} ^ {(1)}\right) ^ {l} \boldsymbol {\mathcal {A}} _ {s} ^ {(1)} \left(\boldsymbol {H} _ {s} ^ {e} \otimes \boldsymbol {H} _ {s} ^ {r}\right) \\ \approx G \left(\boldsymbol {\mathcal {A}} _ {t} ^ {(1)}\right) ^ {l} \boldsymbol {\mathcal {A}} _ {t} ^ {(1)} \left(\boldsymbol {H} _ {t} ^ {e} \otimes \boldsymbol {H} _ {t} ^ {r}\right) \end{array}
$$

Furthermore, Equations (12) and (13) can be proved in a similar way.
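The key Kronecker-product identity used above,  $(\boldsymbol{P}_e \otimes \boldsymbol{P}_r)^{\top}(\boldsymbol{P}_e \boldsymbol{H}_s^e \otimes \boldsymbol{P}_r \boldsymbol{H}_s^r) = \boldsymbol{H}_s^e \otimes \boldsymbol{H}_s^r$ , can also be checked numerically (a quick sanity check we added, using random permutation matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_perm(n):
    """Random n-by-n permutation matrix."""
    return np.eye(n)[rng.permutation(n)]

P_e, P_r = random_perm(4), random_perm(3)
H_e = rng.standard_normal((4, 2))
H_r = rng.standard_normal((3, 2))

# Mixed-product property: (A ⊗ B)(C ⊗ D) = (A C) ⊗ (B D),
# plus P^T P = I for permutation matrices.
lhs = np.kron(P_e, P_r).T @ np.kron(P_e @ H_e, P_r @ H_r)
rhs = np.kron(H_e, H_r)
print(np.allclose(lhs, rhs))  # → True
```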