# ActiveEA: Active Learning for Neural Entity Alignment

Bing Liu$^{1,2}$, Harrison Scells$^{1}$, Guido Zuccon$^{1}$, Wen Hua$^{1}$, Genghong Zhao$^{2}$
$^{1}$The University of Queensland, Australia
$^{2}$Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., China
{bing.liu, h.scells, g.zuccon, w.hua}@uq.edu.au, zhaogenghong@neusoft.com

# Abstract

Entity Alignment (EA) aims to match equivalent entities across different Knowledge Graphs (KGs) and is an essential step of KG fusion. Current mainstream methods – neural EA models – rely on training with seed alignment, i.e., a set of pre-aligned entity pairs, which is very costly to annotate. In this paper, we devise a novel Active Learning (AL) framework for neural EA, aiming to create a highly informative seed alignment that yields more effective EA models at lower annotation cost. Our framework tackles two main challenges encountered when applying AL to EA: (1) How to exploit dependencies between entities within the AL strategy. Most AL strategies assume that the data instances to sample are independent and identically distributed. However, entities in KGs are related. To address this challenge, we propose a structure-aware uncertainty sampling strategy that measures the uncertainty of each entity as well as its impact on its neighbouring entities in the KG. (2) How to recognise entities that appear in one KG but not in the other (i.e., bachelors). Identifying bachelors saves annotation budget. To address this challenge, we devise a bachelor recognizer that is designed to alleviate the effect of sampling bias. Empirical results show that our proposed AL strategy significantly improves sampling quality and generalizes well across different datasets, EA models, and amounts of bachelors.

# 1 Introduction

Knowledge Graphs (KGs) store entities and their relationships in a graph structure and are used as knowledge drivers in many applications (Ji et al., 2020). Existing KGs are often incomplete but complementary to each other.
A popular approach used to tackle this problem is KG fusion, which attempts to combine several KGs into a single, comprehensive one. Entity Alignment (EA) is an essential step for KG fusion: it identifies equivalent entities across different KGs, supporting the unification of their complementary knowledge.

Figure 1: An example of Entity Alignment.

For example, in Fig. 1, Donald Trump and US in the first KG correspond to D.J. Trump and America respectively in the second KG. By aligning them, the political and business knowledge about Donald Trump can be integrated within one KG. Neural models (Chen et al., 2017, 2018; Wang et al., 2018; Cao et al., 2019) are the current state-of-the-art in EA and are capable of matching entities in an end-to-end manner. Typically, these neural EA models rely on a seed alignment as training data, which is very labour-intensive to annotate. However, previous EA research has assumed the availability of such a seed alignment and ignored the cost involved in its annotation. In this paper, we seek to reduce the cost of annotating seed alignment data by investigating methods capable of selecting the most informative entities for labelling, so as to obtain the best EA model at the least annotation cost: we do so using Active Learning. Active Learning (AL) (Aggarwal et al., 2014) is a Machine Learning (ML) paradigm where the annotation of data and the training of a model are performed iteratively, so that the sampled data is highly informative for training the model. Though many general AL strategies have been proposed (Settles, 2012; Ren et al., 2020), applying AL to EA poses some unique challenges. The first challenge is how to exploit the dependencies between entities. In the EA task, neighbouring entities (context) in the KGs naturally affect each other. For example, in the two KGs of Fig. 1, we can infer that US corresponds to America if we already know that Donald Trump and D.J.
Trump refer to the same person: this is because a single person can only be the president of one country. Therefore, when we estimate the value of annotating an entity, we should consider its impact on its context in the KG. Most AL strategies assume data instances are independent and identically distributed, and thus cannot capture dependencies between entities (Aggarwal et al., 2014). In addition, neural EA models exploit the structure of KGs in different and implicit ways (Sun et al., 2020b), so it is not easy to find a general way of measuring the effect of entities on one another. The second challenge is how to recognize the entities in a KG that do not have a counterpart in the other KG (i.e., bachelors). In the first KG of Fig. 1, Donald Trump and US are matchable entities, while New York City and Republican Party are bachelors. Selecting bachelors to annotate will not lead to any aligned entity pair. The impact of recognizing bachelors is twofold:

1. From the perspective of data annotation, recognizing bachelors saves annotation budget (annotators would otherwise seek a corresponding entity for some time before giving up) and allows annotators to put their effort into labelling matchable entities. This is particularly important for existing neural EA models, which only consider matchable entities for training: selecting bachelors in these cases is a waste of annotation budget.
2. From the perspective of EA, bachelor recognition remedies the limitation of existing EA models that assume all entities to align are matchable, and would enable them to be better used in practice (i.e., on real-life KGs, where bachelors are common).

To address these challenges, we propose a novel AL framework for EA. Our framework follows the typical AL process: entities are sampled iteratively, and in each iteration a batch of entities with the highest acquisition scores is selected.
Our novel acquisition function consists of two components: a structure-aware uncertainty measurement module and a bachelor recognizer. The structure-aware uncertainty reflects the uncertainty of a single entity as well as the influence of that entity on its context in the KG, i.e., how much uncertainty it can help its neighbours eliminate. In addition, we design a bachelor recognizer based on Graph Convolutional Networks (GCNs). Because the bachelor recognizer is trained with the sampled data and used to predict the remaining data, it may suffer from bias (w.r.t. the preference of the sampling strategy) between these two groups of data. We apply model ensembling to alleviate this problem. Our major contributions in this paper are:

1. A novel AL framework for neural EA, which can produce more informative data for training EA models while reducing the labour cost involved in annotation. To our knowledge, this is the first AL framework for neural EA.
2. A structure-aware uncertainty sampling strategy, which models uncertainty sampling and the relations between entities in a single AL strategy.
3. An investigation of bachelor recognition, which can reduce the cost of data annotation and remedy a shortcoming of existing EA models.
4. Extensive experimental results that show our proposed AL strategy can significantly improve the quality of data sampling and has good generality across different datasets, EA models, and bachelor quantities.

# 2 Background

# 2.1 Entity Alignment

Entity alignment is typically performed between two KGs $\mathcal{G}^1$ and $\mathcal{G}^2$, whose entity sets are denoted as $\mathcal{E}^1$ and $\mathcal{E}^2$ respectively. The goal of EA is to find the equivalent entity pairs $\mathcal{A} = \{(e^1, e^2) \in \mathcal{E}^1 \times \mathcal{E}^2 \mid e^1 \sim e^2\}$, where $\sim$ denotes an equivalence relationship and is usually assumed to be a one-to-one mapping.
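The one-to-one assumption can be made concrete with a small sketch: given a matrix of pairwise matching scores, alignment amounts to selecting mutually exclusive best matches. The score matrix and the greedy decoding below are illustrative only, not the procedure of any specific EA model.

```python
import numpy as np

def align(scores: np.ndarray) -> list[tuple[int, int]]:
    """Greedy one-to-one alignment from a pairwise score matrix.

    scores[i, j] plays the role of F(e1_i, e2_j); the greedy decoding
    is one simple, illustrative way to enforce the one-to-one mapping.
    """
    pairs = []
    used_rows, used_cols = set(), set()
    # Visit candidate pairs from highest to lowest score.
    for flat in np.argsort(scores, axis=None)[::-1]:
        i, j = np.unravel_index(flat, scores.shape)
        if i not in used_rows and j not in used_cols:
            pairs.append((int(i), int(j)))
            used_rows.add(i)
            used_cols.add(j)
    return pairs

scores = np.array([[0.9, 0.1],
                   [0.8, 0.7]])
print(align(scores))  # → [(0, 0), (1, 1)]
```

Real EA models obtain the scores from learned embeddings; the point here is only that, once $F$ is fixed, the one-to-one constraint turns alignment into a matching problem.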
In supervised and semi-supervised models, a subset of the alignment $\mathcal{A}^{seed} \subset \mathcal{A}$, called the seed alignment, is annotated manually beforehand and used as training data. The remaining alignment pairs form the test set $\mathcal{A}^{test} = \mathcal{A} \setminus \mathcal{A}^{seed}$. The core of an EA model $F$ is a scoring function $F(e^1, e^2)$, which takes two entities as input and returns a score for how likely they are to match. The effectiveness of an EA model is essentially determined by $\mathcal{A}^{seed}$, and we thus denote it as $m(\mathcal{A}^{seed})$.

# 2.2 Active Learning

An AL framework consists of two components: (1) an oracle (annotation expert), which provides labels for the queries (data instances to label), and (2) a query system, which selects the most informative data instances as queries. In the pool-based scenario, there is a pool of unlabelled data $\mathcal{U}$. Given a budget $B$, some instances $\mathcal{U}_{\pi,B}$ are selected from the pool following a strategy $\pi$ and sent to the experts for annotation, who produce a training set $\mathcal{L}_{\pi,B}$. We train the model on $\mathcal{L}_{\pi,B}$, and the effectiveness $m(\mathcal{L}_{\pi,B})$ of the obtained model reflects how good the strategy $\pi$ is. The goal is to design an optimal strategy $\pi_*$ such that $\pi_* = \operatorname{argmax}_{\pi} m(\mathcal{L}_{\pi,B})$.

Figure 2: Overview of ActiveEA.

# 3 ActiveEA: Active Entity Alignment

# 3.1 Problem Definition

Given two KGs $\mathcal{G}^1, \mathcal{G}^2$ with entity sets $\mathcal{E}^1, \mathcal{E}^2$, an EA model $F$, and a budget $B$, the AL strategy $\pi$ is applied to select a set of entities $\mathcal{U}_{\pi,B}$, for which the annotators label the counterpart entities to obtain the labelled data $\mathcal{L}_{\pi,B}$. $\mathcal{L}_{\pi,B}$ consists of annotations of matchable entities $\mathcal{L}_{\pi,B}^{+}$, which form the seed alignment $\mathcal{A}_{\pi,B}^{seed}$, and bachelors $\mathcal{L}_{\pi,B}^{-}$.
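The pool-based AL process described in Sec. 2.2 can be sketched as a generic loop. All function names (`acquisition`, `annotate`, `train`) are illustrative placeholders, not parts of the ActiveEA codebase.

```python
# Skeleton of a pool-based active-learning loop: score the pool with an
# acquisition function, label the top batch, retrain, repeat until the
# budget is spent. Everything here is a toy sketch.
def active_learning_loop(pool, budget, batch_size, acquisition, annotate, train):
    labelled = []
    model = train(labelled)                    # initial (possibly untrained) model
    while budget > 0 and pool:
        n = min(batch_size, budget, len(pool))
        # Score every unlabelled instance and take the top-n batch.
        ranked = sorted(pool, key=lambda e: acquisition(model, e), reverse=True)
        batch, pool = ranked[:n], ranked[n:]
        labelled += [(e, annotate(e)) for e in batch]   # oracle labels
        model = train(labelled)                # retrain with the updated L
        budget -= n
    return model, labelled

# Toy run: score = entity id, the oracle doubles it, the "model" = |L|.
model, labelled = active_learning_loop(
    pool=[1, 2, 3, 4], budget=3, batch_size=2,
    acquisition=lambda m, e: e, annotate=lambda e: e * 2,
    train=lambda L: len(L))
print(labelled)  # → [(4, 8), (3, 6), (2, 4)]
```

ActiveEA instantiates this loop with its structure-aware acquisition function and an EA model in place of the toy placeholders.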
We measure the effectiveness $m(\mathcal{A}_{\pi,B}^{seed})$ of the AL strategy $\pi$ by training the EA model on $\mathcal{A}_{\pi,B}^{seed}$ and then evaluating it on $\mathcal{A}_{\pi,B}^{test} = \mathcal{A} \setminus \mathcal{A}_{\pi,B}^{seed}$. Our goal is to design an optimal entity sampling strategy $\pi_*$ such that $\pi_* = \operatorname{argmax}_{\pi} m(\mathcal{A}_{\pi,B}^{seed})$. In our annotation setting, we select entities from one KG and then let the annotators identify their counterparts in the other KG. Under this setting, we assume the pool of unlabelled entities is initialized as $\mathcal{U} = \mathcal{E}^1$. The labelled data then takes the form $\mathcal{L}_{\pi,B}^{+} = \{(e^{1} \in \mathcal{E}^{1}, e^{2} \in \mathcal{E}^{2})\}$ and $\mathcal{L}_{\pi,B}^{-} = \{(e^{1} \in \mathcal{E}^{1}, \mathrm{null})\}$.

# 3.2 Framework Overview

The whole annotation process, as shown in Fig. 2, is carried out iteratively. In each iteration, the query system selects $N$ entities from $\mathcal{U}$ and sends them to the annotators. The query system includes (1) a structure-aware uncertainty measurement module $f^{su}$, which combines uncertainty sampling with the structure information of the KGs, and (2) a bachelor recognizer $f^b$, which helps avoid selecting bachelor entities. The final acquisition function $f^{\pi}$ used to select which entities to annotate is obtained by combining the outputs of these two modules. After the annotators assign the ground-truth counterparts to the selected entities, the new annotations are added to the labelled data $\mathcal{L}$. With the updated $\mathcal{L}$, the query system updates the EA model and the bachelor recognizer. This process repeats until no budget remains. To simplify the presentation, we omit the sampling iteration index when explaining the details.

# 3.3 Structure-aware Uncertainty Sampling

We define the influence of an entity on its context as the amount of uncertainty it can help its neighbours remove.
As such, we formulate the structure-aware uncertainty $f^{su}$ as
$$
f^{su}(e_i^1) = \alpha \sum_{e_i^1 \rightarrow e_j^1,\; e_j^1 \in \mathcal{N}_i^{out}} w_{ij} f^{su}(e_j^1) + (1-\alpha) \frac{f^u(e_i^1)}{\sum_{e^1 \in \mathcal{E}^1} f^u(e^1)}, \tag{1}
$$
where $\mathcal{N}_i^{out}$ is the set of outbound neighbours of entity $e_i^1$ (i.e., the entities referred to by $e_i^1$) and $w_{ij}$ measures the extent to which $e_i^1$ can help $e_j^1$ eliminate uncertainty. The parameter $\alpha$ controls the trade-off between the impact of entity $e_i^1$ on its context (first term) and its normalized uncertainty (second term). The function $f^u(e^1)$ is the margin-based uncertainty of an entity. For each entity $e^1$, the EA model can return matching scores $F(e^1, e^2)$ with all unaligned entities $e^2$ in $\mathcal{G}^2$. Since these scores are not probabilities in existing works, we adopt the margin-based uncertainty measure, outlined in Eq. 2:
$$
f^u(e^1) = -\left(F(e^1, e_{*}^2) - F(e^1, e_{**}^2)\right), \tag{2}
$$
where $F(e^1, e_{*}^2)$ and $F(e^1, e_{**}^2)$ are the highest and second-highest matching scores respectively. A large margin represents a small uncertainty. For each entity $e_j^1$, we assume its inbound neighbours can help it clear all uncertainty. Then we have $\sum_{e_i^1 \to e_j^1,\, e_i^1 \in \mathcal{N}_j^{in}} w_{ij} = 1$, where $\mathcal{N}_j^{in}$ is the inbound neighbour set of $e_j^1$. In this work, we assume all inbound neighbours have the same impact on $e_j^1$, in which case $w_{ij} = \frac{1}{\mathrm{degree}(e_j^1)}$, where $\mathrm{degree}(\cdot)$ returns the in-degree of an entity. Using matrix notation, Eq.
1 can be rewritten as
$$
\mathbf{f}^{su} = \alpha \mathbf{W} \mathbf{f}^{su} + (1-\alpha) \frac{\mathbf{f}^u}{|\mathbf{f}^u|},
$$
where $\mathbf{f}^{su}$ is the vector of structure-aware uncertainties, $\mathbf{f}^u$ is the vector of uncertainties, and $\mathbf{W}$ is a matrix encoding the influence between entities, i.e., $w_{ij} > 0$ if $e_i^1$ is linked to $e_j^1$, and $w_{ij} = 0$ otherwise. As $\mathbf{W}$ is a stochastic matrix (Gagniuc, 2017), we solve Eq. 1 iteratively, which can be viewed as the power iteration method (Franceschet, 2011), similar to PageRank (Brin and Page, 1998). Specifically, we initialize the structure-aware uncertainty vector as $\mathbf{f}_0^{su} = \mathbf{f}^u$. Then we update $\mathbf{f}_t^{su}$ iteratively:
$$
\mathbf{f}_t^{su} = \alpha \mathbf{W} \mathbf{f}_{t-1}^{su} + (1-\alpha) \frac{\mathbf{f}^u}{|\mathbf{f}^u|}, \quad t = 1, 2, 3, \ldots
$$
The computation ends when $|\mathbf{f}_t^{su} - \mathbf{f}_{t-1}^{su}| < \epsilon$.

# 3.4 Bachelor Recognizer

The bachelor recognizer is formulated as a binary classifier, which is trained with the labelled data and used to predict the unlabelled data. One challenge faced here is the bias between the labelled data and the unlabelled data caused by the sampling strategy (since it is not random sampling). We alleviate this issue with a model ensemble.

# 3.4.1 Model Structure

We apply two GCNs (Kipf and Welling, 2017; Hamilton et al., 2017) as encoders to obtain the entity embeddings $\mathbf{H}^{1} = \mathrm{GCN}^{1}(\mathcal{G}^{1})$ and $\mathbf{H}^{2} = \mathrm{GCN}^{2}(\mathcal{G}^{2})$, where each row in $\mathbf{H}^1$ or $\mathbf{H}^2$ is the vector representation of a particular entity. The two GCN encoders share the same structure but have separate parameters. Within each GCN encoder, each entity $e_i$ is first assigned a vector representation $\mathbf{h}_i^{(0)}$.
Then contextual features of each entity are extracted:
$$
\mathbf{h}_i^{(l)} = \mathrm{norm}\Big(\sigma\Big(\sum_{j \in \mathcal{N}_i \cup \{i\}} \mathbf{V}^{(l)} \mathbf{h}_j^{(l-1)} + \mathbf{b}^{(l)}\Big)\Big),
$$
where $l$ is the layer index, $\mathcal{N}_i$ is the set of neighbouring entities of $e_i$, $\sigma$ is the activation function, $\mathrm{norm}(\cdot)$ is a normalization function, and $\mathbf{V}^{(l)}, \mathbf{b}^{(l)}$ are the parameters of the $l$-th layer. The representations of each entity $e_i$ obtained in all GCN layers are concatenated into a single representation: $\mathbf{h}_i = \mathrm{concat}(\mathbf{h}_i^{(0)}, \mathbf{h}_i^{(1)}, \dots, \mathbf{h}_i^{(L)})$, where $L$ is the number of GCN layers. After obtaining the entity representations, we compute the similarity of each entity in $\mathcal{E}^1$ with all entities in $\mathcal{E}^2$ ($\mathbf{S} = \mathbf{H}^1 \cdot \mathbf{H}^{2T}$) and take its maximum matching score, $f^{s}(e_{i}^{1}) = \max(\mathbf{S}_{i,:})$. An entity $e_i^1$ whose maximum matching score is greater than a threshold $\gamma$ is considered a matchable entity, i.e., $f^{b}(e_{i}^{1}) = \mathbb{1}_{f^{s}(e_{i}^{1}) > \gamma}$; otherwise it is a bachelor.

# 3.4.2 Learning

In each sampling iteration, we train the bachelor recognizer with the existing annotated data $\mathcal{L}$, containing matchable entities $\mathcal{L}^{+}$ and bachelors $\mathcal{L}^{-}$. Furthermore, $\mathcal{L}$ is divided into a training set $\mathcal{L}^t$ and a validation set $\mathcal{L}^v$. We optimize the parameters, including $\{\mathbf{V}^{(l)},\mathbf{b}^{(l)}\}_{1\leq l\leq L}$ of each GCN encoder and the threshold $\gamma$, in two phases, sharing a similar idea with supervised contrastive learning (Khosla et al., 2020). In the first phase, we optimize the scoring function $f^s$ by minimizing the contrastive loss shown in Eq. 3:
$$
\mathrm{loss} = \sum_{(e_i^1, e_j^2) \in \mathcal{L}^{t,+}} \|\mathbf{h}_i^1 - \mathbf{h}_j^2\| + \beta \sum_{(e_{i'}^1, e_{j'}^2) \in \mathcal{L}^{t,neg}} \left[\lambda - \|\mathbf{h}_{i'}^1 - \mathbf{h}_{j'}^2\|\right]_{+} \tag{3}
$$
Here, $\beta$ is a balance factor, $[\cdot]_{+}$ denotes $\max(0, \cdot)$, and $\mathcal{L}^{t,neg}$ is the set of negative samples generated by negative sampling (Sun et al., 2018): for a given pre-aligned entity pair in $\mathcal{L}^{+}$, each entity in the pair is substituted $N^{neg}$ times. The distance between the entities of a negative sample is expected to exceed the margin $\lambda$. In the second phase, we freeze the trained $f^{s}$ and optimize $\gamma$ for $f^{b}$. It is easy to optimize $\gamma$, e.g., by simple grid search, so that $f^{b}$ achieves the highest performance on $\mathcal{L}^{v}$ (denoted as $q(f^{s}, \gamma, \mathcal{L}^{v})$):
$$
\gamma^{*} = \operatorname{argmax}_{\gamma} q(f^{s}, \gamma, \mathcal{L}^{v}).
$$

# 3.4.3 Model Ensemble for Sampling Bias

The sampled data may be biased, since they have been preferred by the sampling strategy rather than selected randomly. As a result, even if the bachelor recognizer is well trained on the sampled data, it may perform poorly on the data yet to be sampled. We apply a model ensemble to alleviate this problem. Specifically, we divide $\mathcal{L}$ into $K$ subsets evenly. Then we apply $K$-fold cross-validation to train $K$ scoring functions $\{f_1^s,\dots,f_K^s\}$, each time using $K - 1$ subsets as the training set and the left-out portion as the validation set.
Afterwards, we search for an effective threshold $\gamma$:
$$
\gamma^{*} = \operatorname{argmax}_{\gamma} \frac{1}{K} \sum_{1 \leq k \leq K} q(f_{k}^{s}, \gamma, \mathcal{L}_{k}^{v})
$$
At inference, we average the $K$ scoring functions $f_{k}^{s}$ to form the final scoring function $f^{s}$, as in Eq. 4, and base $f^{b}$ on it:
$$
f^{s}(e_{i}^{1}) = \frac{1}{K} \sum_{1 \leq k \leq K} f_{k}^{s}(e_{i}^{1}) \tag{4}
$$

# 3.5 Final Acquisition Function

We combine our structure-aware uncertainty sampling with the bachelor recognizer to form the final acquisition function:
$$
f^{\pi}(e_{i}^{1}) = f^{su}(e_{i}^{1}) \, f^{b}(e_{i}^{1})
$$

# 4 Experimental Setup

# 4.1 Sampling Strategies

We construct several baselines for comparison:

- **rand**: random sampling, as used by existing EA work.
- **degree**: selects entities with high degrees.
- **pagerank** (Brin and Page, 1998): measures the centrality of entities by considering their degrees as well as the importance of their neighbours.
- **betweenness** (Freeman, 1977): scores an entity by the number of shortest paths passing through it.
- **uncertainty**: selects entities that the current EA model cannot predict with confidence; in this work, we measure uncertainty using Eq. 2 for a fair comparison.

degree, pagerank and betweenness are purely topology-based and do not consider the current EA model. On the contrary, uncertainty is fully based on the current EA model and cannot capture the structural information of the KG. We compare both our structure-aware uncertainty sampling (struct_uncert) and the full ActiveEA framework with the baselines listed above. We also examine the effect of the Bayesian Transformation, which aims to make deep neural models represent uncertainty more accurately (Gal et al., 2017).
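As a concrete illustration of the struct_uncert strategy compared here, the sketch below computes margin-based uncertainty (Eq. 2) and propagates it over the KG by power iteration (Sec. 3.3). It is a minimal NumPy sketch under simplifying assumptions (a dense score matrix, a toy edge list, L1 normalization of $\mathbf{f}^u$), not the ActiveEA implementation.

```python
import numpy as np

def margin_uncertainty(scores):
    """Negative margin between the two highest matching scores (Eq. 2)."""
    top2 = np.sort(scores, axis=1)[:, -2:]       # second-best, best
    return -(top2[:, 1] - top2[:, 0])

def struct_uncert(scores, edges, n, alpha=0.1, eps=1e-6):
    """Propagate margin uncertainty over KG edges by power iteration."""
    f_u = margin_uncertainty(scores)
    f_u_norm = f_u / np.abs(f_u).sum()           # normalized uncertainty term
    # W[i, j] = 1 / in-degree(e_j) for each edge i -> j, so each entity's
    # incoming weights sum to one (the uniform w_ij of Sec. 3.3).
    W = np.zeros((n, n))
    in_deg = np.zeros(n)
    for _, j in edges:
        in_deg[j] += 1
    for i, j in edges:
        W[i, j] = 1.0 / in_deg[j]
    f = f_u.copy()                               # f_0 = f_u
    while True:                                  # power iteration
        f_next = alpha * (W @ f) + (1 - alpha) * f_u_norm
        if np.abs(f_next - f).sum() < eps:
            return f_next
        f = f_next

# Toy example: 3 entities in KG1 scored against 3 candidates in KG2.
scores = np.array([[0.9, 0.1, 0.0],   # confident: large margin
                   [0.5, 0.4, 0.2],
                   [0.3, 0.3, 0.1]])  # zero margin: most uncertain
f = struct_uncert(scores, edges=[(0, 1), (0, 2), (2, 1)], n=3)
print(np.argmax(f))  # entity 2 (zero margin) scores highest
```

Since $\alpha < 1$, the iteration is a contraction and converges; entities that the model is unsure about, or that point at uncertain neighbours, receive the largest scores.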
# 4.2 EA Models

We apply our ActiveEA framework to three different EA models, which represent a spread of neural EA models and vary in KG encoding, the information they consider, and their training method (Liu et al., 2020; Sun et al., 2018): BootEA (Sun et al., 2018) encodes the KGs with a translation model (Bordes et al., 2013), exploits the structure of the KGs, and uses self-training. AliNet (Sun et al., 2020a) also exploits the structure of the KGs but with a GCN-based KG encoder, and is trained in a supervised manner. RDGCN (Wu et al., 2019) trains a GCN in a supervised manner, like AliNet, but can also incorporate entities' attributes. Our implementations and parameter settings of the models rely on OpenEA$^1$ (Sun et al., 2020b).

# 4.3 Datasets

We use three different datasets: D-W-15K V1 (DW), EN-DE-15K V1 (ENDE), and EN-FR-100K V1 (ENFR), obtained from OpenEA (Sun et al., 2020b). Each dataset contains two KGs and the equivalent entity pairs between them. The KGs in these datasets were sampled from real KGs, i.e., DBpedia (Lehmann et al., 2015), Wikidata (Vrandecic and Krötzsch, 2014), and YAGO (Rebele et al., 2016), which are widely used in the EA community. These datasets differ in terms of KG sources, languages, sizes, etc. We refer the reader to Sun et al. (2020b) for more details. Existing work on EA assumes all entities in the KGs are matchable, and thus only samples entities with counterparts when producing the datasets. To investigate the influence of bachelors on AL strategies, we synthetically modify the datasets by excluding a portion of entities from the second KG.

# 4.4 Evaluation Metrics

We use Hit@1 as the primary evaluation measure for the EA models. To get an overall evaluation of an AL strategy across different budget sizes, we plot the curve of an EA model's effectiveness with respect to the proportion of annotated entities and calculate the Area Under the Curve (AUC).
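The AUC summary just described can be sketched with a trapezoidal rule; the proportions and Hit@1 values below are illustrative, not results from the paper.

```python
import numpy as np

def auc_at(proportions, hit1, cutoff=0.5):
    """Normalized trapezoidal area under a Hit@1-vs-annotation curve."""
    p = np.asarray(proportions, dtype=float)
    h = np.asarray(hit1, dtype=float)
    keep = p <= cutoff + 1e-12          # e.g. AUC@0.5 keeps proportions <= 0.5
    p, h = p[keep], h[keep]
    area = np.sum((h[1:] + h[:-1]) / 2.0 * np.diff(p))   # trapezoidal rule
    # Normalize by the x-range so a constant curve at Hit@1 = c gives c.
    return area / (p[-1] - p[0])

props = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]   # annotated proportion (toy values)
hits  = [0.0, 0.10, 0.18, 0.24, 0.28, 0.31]
print(auc_at(props, hits))
```

Normalizing by the x-range keeps the summary on the same scale as Hit@1, so AUC@0.5 values such as those in Tab. 1 are directly comparable across strategies.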
# 4.5 Parameter Settings

We set $\alpha = 0.1$ and $\epsilon = 10^{-6}$ for the structure-aware uncertainty. We use $L = 1$ GCN layer for the bachelor recognizer, with 500 input and 400 output dimensions. We set $K = 5$ for its model ensemble and $\lambda = 1.5$, $\beta = 0.1$, $N^{neg} = 10$ for its training. The sampling batch size is set to $N = 100$ for the 15K datasets and $N = 1000$ for the 100K dataset.

# 4.6 Reproducibility Details

Our experiments are run on a GPU cluster. We allocate 50GB of memory and one 32GB NVIDIA Tesla V100 GPU for each job on 15K data, and 100GB of memory for each job on 100K data. The training and evaluation of ActiveEA take approximately 3h with AliNet on 15K data, 10h with BootEA on 15K data, 10h with RDGCN on 15K data, and 48h with AliNet on 100K data. Most baseline strategies take less time than ActiveEA on the same dataset, except betweenness on 100K data, which takes more than 48h. We apply grid search for setting $\alpha$ and $N$ (shown in Sec. 5.4). Hyper-parameters of the bachelor recognizer are chosen by referring to the settings of OpenEA and our manual trials. Code and datasets are available at https://github.com/UQ-Neusoft-Health-Data-Science/ActiveEA.

Figure 3: Hit@1 of sampling strategies for all EA models on DW and ENDE, as the annotation portion increases. Top row shows experiments that do not include bachelors; bottom row shows experiments that include $30\%$ bachelors. ActiveEA is equivalent to struct_uncert in the absence of bachelors, and is thus shown only for the second row.

Figure 4: Hit@1 for all sampling strategies with the AliNet EA model on ENFR. Left shows experiments without bachelors, right shows with $30\%$ bachelors.

# 5 Experimental Results

# 5.1 Comparison with Baselines

Fig.
3 presents the overall performance of each strategy with three EA models on two datasets, each of which we also synthetically modify to include $30\%$ bachelors. We also report the AUC@0.5 values of these curves in Tab. 1. ActiveEA degenerates into struct_uncert when there are no bachelors.

**Random Sampling.** Random sampling usually performs poorly when the annotation proportion is small, while it becomes more competitive as the amount of annotations increases. But for most annotation proportions, random sampling exhibits a large gap in performance compared to the best method. This observation highlights the need to investigate data selection for EA.

**Topology-based Strategies.** The topology-based strategies are effective when few annotations are provided, e.g., $< 20\%$. However, once annotations increase, their effectiveness is often worse than that of random sampling. This may be because these strategies suffer more from the bias between the training set and the test set. Therefore, only considering the structural information of KGs has considerable drawbacks for EA.

**Uncertainty Sampling.** On the contrary, the un
Table 1: AUC@0.5 of each sampling strategy, per EA model and dataset, with $0\%$ and $30\%$ bachelors. "–" indicates that ActiveEA is equivalent to struct_uncert when no bachelors are present.

| Strategy | BootEA DW (0%) | BootEA DW (30%) | BootEA ENDE (0%) | BootEA ENDE (30%) | AliNet DW (0%) | AliNet DW (30%) | AliNet ENDE (0%) | AliNet ENDE (30%) | RDGCN DW (0%) | RDGCN DW (30%) | RDGCN ENDE (0%) | RDGCN ENDE (30%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| rand | 23.5 | 17.0 | 28.1 | 21.3 | 19.4 | 16.7 | 26.0 | 23.7 | 25.8 | 25.0 | 41.3 | 41.0 |
| degree | 19.5 | 16.0 | 24.0 | 20.0 | 17.1 | 15.2 | 22.2 | 20.5 | 23.3 | 22.9 | 39.1 | 39.4 |
| pagerank | 22.3 | 18.3 | 27.6 | 23.0 | 19.9 | 17.3 | 25.8 | 24.1 | 24.5 | 23.9 | 40.5 | 40.6 |
| betweenness | 20.5 | 16.3 | 26.1 | 21.1 | 17.8 | 15.6 | 23.7 | 22.3 | 23.2 | 22.7 | 40.2 | 40.3 |
| uncertainty | 23.9 | 16.1 | 29.8 | 21.2 | 21.6 | 15.4 | 28.2 | 22.2 | 24.7 | 23.9 | 40.9 | 40.5 |
| struct_uncert | 26.3 | 20.8 | 33.6 | 27.4 | 23.1 | 19.1 | 30.6 | 26.8 | 26.5 | 25.6 | 41.9 | 41.0 |
| ActiveEA | – | 26.7 | – | 31.5 | – | 25.7 | – | 32.8 | – | 28.1 | – | 42.3 |