{
"url": "http://arxiv.org/abs/2404.16726v2",
"title": "History repeats Itself: A Baseline for Temporal Knowledge Graph Forecasting",
"abstract": "Temporal Knowledge Graph (TKG) Forecasting aims at predicting links in\nKnowledge Graphs for future timesteps based on a history of Knowledge Graphs.\nTo this day, standardized evaluation protocols and rigorous comparison across\nTKG models are available, but the importance of simple baselines is often\nneglected in the evaluation, which prevents researchers from discerning actual\nand fictitious progress. We propose to close this gap by designing an intuitive\nbaseline for TKG Forecasting based on predicting recurring facts. Compared to\nmost TKG models, it requires little hyperparameter tuning and no iterative\ntraining. Further, it can help to identify failure modes in existing\napproaches. The empirical findings are quite unexpected: compared to 11 methods\non five datasets, our baseline ranks first or third in three of them, painting\na radically different picture of the predictive quality of the state of the\nart.",
"authors": "Julia Gastinger, Christian Meilicke, Federico Errica, Timo Sztyler, Anett Schuelke, Heiner Stuckenschmidt",
"published": "2024-04-25",
"updated": "2024-04-29",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "Knowledge AND Graph",
"gt": "History repeats Itself: A Baseline for Temporal Knowledge Graph Forecasting",
"main_content": "Introduction The lack of experimental rigor is one of the most problematic issues in fast-growing research communities, producing empirical results that are inconsistent or in disagreement with each other. Such ambiguities are often hard to resolve in a short time frame, and they eventually slow down scientific progress. This issue is especially evident in the machine learning field, where missing experimental details, the absence of standardized evaluation protocols, and unfair comparisons make it challenging to discern true advancements from fictitious ones [Lipton and Steinhardt, 2019]. As a result, researchers have spent considerable effort in re-evaluating the performances of various models on different benchmarks, to establish proper comparisons and robustly gauge the benefit of an approach over others. In recent years, this was the case of node and graph classification benchmarks [Shchur et al., 2018; Errica et al., 2020], link prediction on Knowledge Graphs [Sun et al., 2020; Rossi et al., 2021], neural recommender systems [Dacrema et al., 2019], and temporal graph learning [Huang et al., 2023]. Not only does such fast growing literature impact reproducibility and replicability, but it is also characterized by a certain forgetfulness that simple baselines set a threshold above which approaches are actually useful. Oftentimes, these baselines are missing from the empirical evaluations, but when introduced they provide a completely new picture of the state of the art. Examples can be found in the field of Knowledge Graph completion, where simple rule-based systems can outperform embedding-based ones [Meilicke et al., 2018], or in graph-related tasks where structure-agnostic baselines can compete with deep graph networks [Errica et al., 2020; Poursafaei et al., 2022; Errica, 2023]. In the last few years, the field of Temporal Knowledge Graph (TKG) Forecasting has also experienced a fast-paced research activity culminating in a large stream of works and a variety of empirical settings [Liu et al., 2022; Sun et al., 2021; Zhang et al., 2023]. Researchers have already provided a thorough re-assessment of some TKG Forecasting methods to address growing concerns about their reproducibility, laying down a solid foundation for future comparisons [Gastinger et al., 2023]. What is still missing, however, is a comparison with simple baselines to gauge if we are really making progress and to identify pain points of current representation learning approaches for TKGs. Our contribution aims at filling this gap with a novel baseline, which places a strong inductive bias on the re-occurrence of facts over time. Not only does our baseline require tuning of just two hyperparameters, but also no training phase is needed since it is parameter-free. We introduce three variants of the baseline, divided into strict recurrency, relaxed recurrency, and a combination of both. Our empirical results convey an unexpected message: the baseline ranks first and third on three out of five datasets considered, compared to 11 TKG methods. It is a perhaps unsurprising result, given the long history of aforementioned works that propose strong baselines in different communities, but it further highlights the compelling need for considering simple heuristics in the TKG forecasting domain. Finally, by carefully comparing the performance of these baselines with other methods, we provide a failure analysis that highlights where it might be necessary to improve existing models. 
2 Related Work
In this section, we give a concise overview of the plethora of TKG forecasting methods that appeared in recent years.
Deep Graph Networks (DGNs). Several models in this category leverage message-passing architectures [Scarselli et al., 2009; Micheli, 2009] along with sequential approaches to integrate structural and sequential information for TKG forecasting. RE-Net adopts an autoregressive architecture, learning temporal dependencies from a sequence of graphs [Jin et al., 2020]. RE-GCN combines a convolutional DGN with a sequential neural network and introduces a static graph constraint to consider additional information like entity types [Li et al., 2021b]. xERTE employs temporal relational attention mechanisms to extract query-relevant subgraphs [Han et al., 2021a]. TANGO utilizes neural ordinary differential equations and DGNs to model temporal sequences and capture structural information [Han et al., 2021b]. CEN integrates a convolutional neural network capable of handling evolutional patterns in an online setting, adapting to changes over time [Li et al., 2022b]. At last, RETIA generates twin hyperrelation subgraphs and aggregates adjacent entities and relations using a graph convolutional network [Liu et al., 2023a].
Reinforcement Learning (RL). Methods in this category combine reinforcement learning with temporal reasoning for TKG forecasting. CluSTeR employs a two-step process, utilizing an RL agent to induce clue paths and a DGN for temporal reasoning [Li et al., 2021a]. Also, TimeTraveler leverages RL based on temporal paths, using dynamic embeddings of the queries, the path history, and the candidate actions to sample actions, together with a time-shaped reward [Sun et al., 2021].
Rule-based. Rule-based approaches focus on learning temporal logic rules. TLogic learns these rules via temporal random walks [Liu et al., 2022]. TRKG extends TLogic by introducing new rule types, including acyclic rules and rules with relaxed time constraints [Kiran et al., 2023]. ALRE-IR combines embedding-based and logical rule-based methods, capturing deep causal logic by learning rule embeddings [Mei et al., 2022]. LogE-Net combines logical rules with RE-GCN, using them in a preprocessing step to assist reasoning [Liu et al., 2023b]. At last, TECHS incorporates a temporal graph encoder and a logical decoder for differentiable rule learning and reasoning [Lin et al., 2023].
Others. There are additional approaches with mixed contributions that cannot be immediately placed in the above categories. CyGNet predicts future facts based on historical appearances, employing a “copy” and a “generation” mode [Zhu et al., 2021]. TiRGN employs a local encoder for evolutionary representations in adjacent timestamps and a global encoder to collect repeated facts [Li et al., 2022a]. CENET distinguishes historical and non-historical dependencies through contrastive learning and a mask-based inference process [Xu et al., 2023]. Finally, L2TKG utilizes a structural encoder and a latent relation learning module to mine and exploit intra- and inter-time latent relations [Zhang et al., 2023]. 
3 Approach
This section introduces several baselines: we start with the Strict Recurrency Baseline, before moving to its “relaxed” version, the Relaxed Recurrency Baseline, and, ultimately, a combination of the two, the so-called Combined Recurrency Baseline. Before we introduce these baselines, we give a formal definition of the notion of a Temporal Knowledge Graph and provide a running example to illustrate our approach.

Figure 1: A (slightly simplified) listing of the clubs that Marta Vieira da Silva, known as Marta, played for from 2001 to 2009.
(marta, playsFor, vasco-da-gamah, 1)
(marta, playsFor, vasco-da-gamah, 2)
(marta, playsFor, santa-cruz, 3)
(marta, playsFor, santa-cruz, 4)
(marta, playsFor, umea-ik, 5)
(marta, playsFor, umea-ik, 6)
(marta, playsFor, umea-ik, 7)
(marta, playsFor, umea-ik, 8)
(marta, playsFor, los-angeles-sol, 9)

3.1 Preliminaries
A Temporal Knowledge Graph G is a set of quadruples (s, r, o, t) with s, o ∈ E, relation r ∈ R, and timestamp t ∈ T with T = {1, ..., n}, n ∈ N+. More precisely, E is the set of entities, R is the set of possible relations, and T is the set of timesteps. A quadruple's (s, r, o, t) semantic meaning is that s is in relation r to o at t. Alternatively, we may refer to this quadruple as a temporal triple that holds during the timestep t. This allows us to talk about the triple (s, r, o) and its occurrence and recurrence at certain timesteps. In the following, we use a running example G, where G is a TKG in the soccer domain shown in Figure 1. G contains triples from the years 2001 to 2009, which we map to indices 1 to 9.
Temporal Knowledge Graph Forecasting is the task of predicting quadruples for future timesteps t+ given a history of quadruples G, with t+ > n and t+ ∈ N+. In this work we focus on entity forecasting, that is, predicting object or subject entities for queries (s, r, ?, t+) or (?, r, o, t+). Akin to KG completion, TKG forecasting is approached as a ranking task [Han, 2022]. For a given query, e.g., (s, r, ?, t+), methods rank all entities in E using a scoring function, assigning plausibility scores to each quadruple. In the following, we design several variants of a simple scoring function f that assigns a score in R+ to a quadruple at a future timestep t+ given a Temporal Knowledge Graph G, i.e., $f((s, r, o, t^+), G) \mapsto \mathbb{R}^+$. All variants of our scoring function are simple heuristics to solve the TKG forecasting task, based on the principle that something that happened in the past will happen again in the future.
3.2 Strict Recurrency Baseline
The first family of recurrency baselines checks whether the triple that we want to predict at timestep t+ has already been observed before. The simplest baseline of this family is the following scoring function φ₁:

$$\varphi_1((s, r, o, t^+), G) = \begin{cases} 1, & \text{if } \exists k \text{ with } (s, r, o, k) \in G \\ 0, & \text{otherwise.} \end{cases} \quad (1)$$

If we apply φ₁ to the set of triples in Figure 1 to compute the scores for 2010, we get the following outcome (using pf to abbreviate playsFor):

φ₁((marta, pf, vasco-da-gamah, 10), G) = 1
φ₁((marta, pf, santa-cruz, 10), G) = 1
φ₁((marta, pf, umea-ik, 10), G) = 1
φ₁((marta, pf, los-angeles-sol, 10), G) = 1

This scoring function suffers from the problem that it does not take the temporal distance into account, which is highly relevant for the relation of playing for a club. It is far more likely that Marta will continue to play for Los Angeles Sol rather than sign a contract with a previous club. 
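To make the recurrency principle concrete, the following minimal Python sketch (ours, not the authors' released code) scores candidates with φ₁ over a set of quadruples; the names G and phi1 are illustrative.

# Minimal sketch (ours) of the strict recurrency score phi_1 from Eq. (1):
# a candidate triple scores 1 if it was observed at any earlier timestep.
G = {
    ('marta', 'pf', 'vasco-da-gamah', 1), ('marta', 'pf', 'vasco-da-gamah', 2),
    ('marta', 'pf', 'santa-cruz', 3), ('marta', 'pf', 'santa-cruz', 4),
    ('marta', 'pf', 'umea-ik', 5), ('marta', 'pf', 'umea-ik', 6),
    ('marta', 'pf', 'umea-ik', 7), ('marta', 'pf', 'umea-ik', 8),
    ('marta', 'pf', 'los-angeles-sol', 9),
}

def phi1(s, r, o, t_plus, quads):
    # 1.0 if (s, r, o) occurred at any timestep k < t_plus, else 0.0
    return 1.0 if any(q[:3] == (s, r, o) and q[3] < t_plus for q in quads) else 0.0

print(phi1('marta', 'pf', 'umea-ik', 10, G))  # -> 1.0, as in the example above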
To address this problem, we introduce a time-weighting mechanism to assign higher scores to more recent triples. Defining a generic function Δ: N+ × N+ → R that takes the query timestep t+ and a previous timestep k in G and returns the weight of the triple, we can define strict recurrency scoring functions as follows:

$$\varphi_\Delta((s, r, o, t^+), G) = \begin{cases} \Delta(t^+, \max\{k \mid (s, r, o, k) \in G\}), & \text{if } \exists k \text{ with } (s, r, o, k) \in G \\ 0, & \text{otherwise.} \end{cases} \quad (2)$$

For instance, using Δ₀(t+, k) = k/t+, k < t+, produces:

φ_Δ₀((marta, pf, vasco-da-gamah, 10), G) = 0.2
φ_Δ₀((marta, pf, santa-cruz, 10), G) = 0.4
φ_Δ₀((marta, pf, umea-ik, 10), G) = 0.8
φ_Δ₀((marta, pf, los-angeles-sol, 10), G) = 0.9,

which already makes more sense: the latest club that a person played for will always receive the highest score.
Interestingly, we can establish an equivalence class among a subset of the functions φ_Δ, and we will use this fact in our experiments. As long as we solely focus on ranking results, two scoring functions are equivalent if they define the same partial order over all possible temporal predictions.
Definition 1. Two scoring functions φ and φ′ are ranking-equivalent if for any pair of predictions p = (s, r, o, t+) and p′ = (s′, r′, o′, t+) we have that φ(p, G) > φ(p′, G) ⟺ φ′(p, G) > φ′(p′, G).
The next result states that we do not need to search for an optimal time-weighting function Δ(t+, k) if we choose it to be strictly monotonically increasing with respect to k, as all such functions belong to the same equivalence class.
Proposition 1. Scoring functions φ_Δ and φ_Δ′ are ranking-equivalent iff, ∀ k₁, k₂, t+ such that k₁ < k₂ < t+, it holds that Δ(t+, k₁) < Δ(t+, k₂) and Δ′(t+, k₁) < Δ′(t+, k₂).
Proposition 1 follows from the application of Definition 1. Therefore, the set of functions φ_Δ characterized by a Δ that is strictly monotonically increasing in k are ranking-equivalent.
While φ_Δ works well to predict the club that a person will play for, there are relations with different temporal characteristics. An example might be a relation that expresses that a soccer club wins a certain competition. In Figure 2, we extend our TKG with temporal triples using the relation wins.

Figure 2: Clubs winning the Bundesliga from 2001 to 2009.
(fc-bayern-munich, wins, bundesliga, 1)
(borussia-dortmund, wins, bundesliga, 2)
(fc-bayern-munich, wins, bundesliga, 3)
(werder-bremen, wins, bundesliga, 4)
(fc-bayern-munich, wins, bundesliga, 5)
(fc-bayern-munich, wins, bundesliga, 6)
(vfb-stuttgart, wins, bundesliga, 7)
(fc-bayern-munich, wins, bundesliga, 8)
(vfl-wolfsburg, wins, bundesliga, 9)

The relation wins seems to follow a different pattern compared to the previous example. Indeed, applying φ_Δ₀ to predict the 2010 winner of the Bundesliga would not reflect the fact that FC Bayern Munich is the club with the highest ratio of won championships, and year 9 might just have been a lucky one for VfL Wolfsburg. The frequency of wins could be considered a better indicator for a scoring function:

$$\psi_1((s, r, o, t^+), G) = |\{k \mid (s, r, o, k) \in G\}| / t^+ \quad (3)$$

Based on this scoring function, the club that has won the most titles, Bayern Munich, receives the highest score of 0.5 (five wins over ten timesteps), while all other clubs receive a score of 0.1. 
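A minimal sketch (ours) of the time-weighted strict score φ_Δ (Eq. 2) and the frequency score ψ₁ (Eq. 3), using Δ₀(t+, k) = k/t+ as in the text; WINS is an illustrative encoding of Figure 2 with shortened names.

# Sketch (ours) of phi_Delta (Eq. 2) and psi_1 (Eq. 3) on the wins example.
WINS = {('bayern', 'wins', 'bl', k) for k in (1, 3, 5, 6, 8)} | {
    ('dortmund', 'wins', 'bl', 2), ('bremen', 'wins', 'bl', 4),
    ('stuttgart', 'wins', 'bl', 7), ('wolfsburg', 'wins', 'bl', 9)}

def phi_delta(s, r, o, t_plus, quads, delta=lambda t, k: k / t):
    # weight of the most recent occurrence of (s, r, o), or 0 if unseen
    ks = [k for (s2, r2, o2, k) in quads if (s2, r2, o2) == (s, r, o) and k < t_plus]
    return delta(t_plus, max(ks)) if ks else 0.0

def psi1(s, r, o, t_plus, quads):
    # fraction of past timesteps at which (s, r, o) held
    return sum(q[:3] == (s, r, o) for q in quads if q[3] < t_plus) / t_plus

print(phi_delta('wolfsburg', 'wins', 'bl', 10, WINS))  # 0.9: most recent winner
print(psi1('bayern', 'wins', 'bl', 10, WINS))          # 0.5: most frequent winner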
As done earlier, we now generalize the formulation of ψ₁ to ψ_Δ using a weighting function Δ(t+, k) where triples that occurred more recently are weighted higher:

$$\psi_\Delta((s, r, o, t^+), G) = \frac{\sum_{i \in \{k \mid (s, r, o, k) \in G\}} \Delta(t^+, i)}{\sum_{i=1}^{n} \Delta(t^+, i)}. \quad (4)$$

Again, we apply the new scoring function to our example, shortening the names of the clubs and abbreviating bundesliga as bl:

ψ_Δ₀((dortmund, wins, bl, 10), G) = 0.2/4.5 ≈ 0.04
ψ_Δ₀((bremen, wins, bl, 10), G) = 0.4/4.5 ≈ 0.09
ψ_Δ₀((stuttgart, wins, bl, 10), G) = 0.7/4.5 ≈ 0.15
ψ_Δ₀((munich, wins, bl, 10), G) = 2.3/4.5 ≈ 0.51
ψ_Δ₀((wolfsburg, wins, bl, 10), G) = 0.9/4.5 ≈ 0.2

It is worth noting that, for a restricted family of distributions Δ′(t, k), we can achieve ranking equivalence between the scoring functions ψ_Δ′ and φ_Δ with a strictly increasing Δ(t, k). More specifically, if we make Δ′(t, k) parametric, then ψ_Δ′ can generalize the family of scoring functions φ_Δ. Consider the parameterized function $\Delta_\lambda(t^+, k) = 2^{\lambda(k - t^+)}$ with λ ∈ R⁺₀, where λ acts as a decay factor. The higher λ, the stronger the decay effect we achieve. In particular, if we set λ = 1, we can enforce that a time point k always receives a higher weight than the sum of all previous time points 1, ..., k−1. This means ψ_Δ₁ and φ_Δ are ranking-equivalent.
Proposition 2. For λ ≥ 1, $\Delta_\lambda = 2^{\lambda(k - t^+)}$, and any strictly increasing time-weighting function Δ, the scoring functions φ_Δ and ψ_Δλ are ranking-equivalent.
Proposition 2 follows directly from the fact that $\sum_{i=k+1}^{n} \frac{1}{2^i} < \frac{1}{2^k}$ for any n > k ∈ N+. On the contrary, we get ranking equivalence between ψ₁ and ψ_Δλ if we set λ = 0.
Proposition 3. The scoring functions ψ₁ and ψ_Δλ are ranking-equivalent if we set λ = 0.
Proposition 3 follows directly from 2⁰ = 1 and the definition of ψ₁ in Equation 3. Propositions 2 and 3 help us interpret our experimental results, as they indicate that different settings of λ result in a scoring function situated between ψ₁ and φ_Δλ. We treat λ as a relation-specific hyperparameter in our experiments, meaning we select a different λ_r for each relation r. Since relations are independent of each other, each λ_r can be optimized independently.
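A sketch (ours) of ψ_Δ with the exponential decay Δ_λ from the text; it reuses the WINS toy set defined in the previous sketch, and lam stands for the per-relation decay factor λ_r.

# Sketch (ours) of psi_Delta (Eq. 4) with Delta_lambda(t+, k) = 2**(lam*(k - t+)).
def psi_delta(s, r, o, t_plus, n, quads, lam=1.0):
    delta = lambda k: 2.0 ** (lam * (k - t_plus))
    num = sum(delta(k) for (s2, r2, o2, k) in quads
              if (s2, r2, o2) == (s, r, o) and k < t_plus)
    den = sum(delta(i) for i in range(1, n + 1))
    return num / den

# lam = 0 gives every timestep weight 1, recovering psi_1's ranking (Prop. 3);
# lam >= 1 lets the latest occurrence dominate, recovering phi_Delta's ranking (Prop. 2).
print(psi_delta('bayern', 'wins', 'bl', 10, 9, WINS, lam=0.0))  # ~0.56, tops the ranking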
3.3 Relaxed Recurrency Baseline
So far, our scoring functions were based on a strict application of the principle of recurrency. However, this approach fails to score a triple that has never been seen before, and we need to account for queries of this nature: imagine a young player appearing for the first time in a professional club. Thus, we introduce a relaxed variant of the baseline. Instead of looking for exact matches of triples in previous timesteps, which would not work for unseen triples, we are interested in how often parts of the triple have been observed in the data.
When asked to score the query (s, r, ?, t+) with candidate object o, we compute the normalized frequency with which o has been in relation r with any subject s′:

$$\overrightarrow{\xi}((s, r, o, t^+), G) = \frac{|\{(s', k) \mid (s', r, o, k) \in G\}|}{|\{(s', o', k) \mid (s', r, o', k) \in G\}|} \quad (5)$$

Analogously, we denote with $\overleftarrow{\xi}((s, r, o, t^+), G)$ the relaxed baseline used to score queries of the form (?, r, o, t+). In the following, we omit the arrow above ξ and use the directed version depending on the type of query without explicit reference to the direction.
Let us revisit the example of Figure 1 and apply ξ to score a triple never seen before. We can now assign non-zero scores to the clubs that Aitana Bonmati, who never appeared in G, will likely play for in 2010:

ξ((bonmati, pf, vasco-da-gamah, 10), G) = 0.22
ξ((bonmati, pf, santa-cruz, 10), G) = 0.22
ξ((bonmati, pf, umea-ik, 10), G) = 0.44
ξ((bonmati, pf, los-angeles-sol, 10), G) = 0.11

While we also report results for ξ on its own, we are mainly interested in its combination with the Strict Recurrency Baseline, where we expect it to fill gaps and resolve ties. For simplicity, we do not introduce a weighted version of this baseline, avoiding an extra hyperparameter.
3.4 Combined Recurrency Baseline
We conclude the section with a linear combination of the Strict Recurrency Baseline ψ_Δλ and the Relaxed Recurrency Baseline ξ. In particular (omitting λ to keep the notation uncluttered):

$$\psi_{\Delta\xi}((s, r, o, t^+), G) = \alpha \cdot \psi_\Delta((s, r, o, t^+), G) + (1 - \alpha) \cdot \xi((s, r, o, t^+), G), \quad (6)$$

where α ∈ [0, 1] is another hyperparameter. Similar to λ, we select a different α_r for each relation r. In the following, we refer to this baseline as the Combined Recurrency Baseline.
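A sketch (ours) of the relaxed score ξ for object queries (Eq. 5) and the combined score (Eq. 6); it reuses psi_delta from the earlier sketch, and alpha stands for the per-relation mixing weight α_r.

# Sketch (ours) of xi (Eq. 5, object direction) and the combined score (Eq. 6).
def xi_obj(r, o, quads):
    # normalized frequency of object o among all quadruples with relation r
    rel = [q for q in quads if q[1] == r]
    return sum(q[2] == o for q in rel) / len(rel) if rel else 0.0

def combined_score(s, r, o, t_plus, n, quads, lam=1.0, alpha=0.99):
    # alpha close to 1 lets the strict score dominate; xi fills gaps and ties
    return (alpha * psi_delta(s, r, o, t_plus, n, quads, lam)
            + (1 - alpha) * xi_obj(r, o, quads))

On the Figure 1 data, xi_obj('pf', 'umea-ik', G) returns 4/9 ≈ 0.44, matching the Bonmati example above.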
4 Experimental Setup
This section describes our experimental setup and provides information on how to reproduce our experiments [1]. We rely on the unified evaluation protocol of [Gastinger et al., 2023], reporting results for single-step predictions. We report results for the multi-step setting in the supplementary material [2].
4.1 Hyperparameters
We select the best hyperparameters by evaluating performance on the validation set as follows: first, we select λ_r for each r ∈ R from 14 values in total, λ_r ∈ L_r = {0, ..., 1.0001}, for ψ_Δλ. Then, after fixing the best λ_r for each r ∈ R, we select α_r for each r ∈ R from 13 values, α_r ∈ A_r = {0, ..., 1}, leading to a total of 27 evaluated values per relation.
4.2 Methods for Comparison
We compare our baselines to 11 of the 17 methods described in Section 2. Two of these 17 methods run only in the multi-step setting; see comparisons to these in the supplementary material. Further, for four methods we find discrepancies in the evaluation protocol and thus exclude them from our comparisons [3]. Unless otherwise stated, we report the results for these 11 methods based on the evaluation protocol by [Gastinger et al., 2023]. For TiRGN, we report the results of the original paper and do a sanity check of the released code. We do the same for L2TKG, LogE-Net, and TECHS, but we cannot do a sanity check as their code has not been released.
4.3 Dataset Information
We assess the performance of the recurrency baselines on five datasets [Gastinger et al., 2023; Li et al., 2021b], namely WIKI, YAGO, ICEWS14, ICEWS18, and GDELT [4].
Table 1 shows characteristics such as the number of entities and quadruples, and it reports the timestep-based data splitting (short: #Tr/Val/Te TS) that all methods are evaluated against. In addition, we compute the fraction of test temporal triples (s, r, o, t+) for which there exists a k < t+ such that (s, r, o, k) ∈ G, and we refer to this measure as the recurrency degree (Rec). Similarly, we also compute the fraction of test temporal triples (s, r, o, t+) for which it holds that (s, r, o, t+ − 1) ∈ G, which we call the direct recurrency degree (DRec). Note that Rec defines an upper bound on the Strict Recurrency Baseline's performance; DRec, instead, informs about the test triples that have, from our baselines' perspective, a trivial solution. On YAGO and WIKI, both measures are higher than 85%, meaning that the application of the recurrency principle would likely work very well.

[1] https://github.com/nec-research/recurrency_baseline_tkg
[2] Supplementary Material: https://github.com/nec-research/recurrency_baseline_tkg/blob/master/supplementary_material.pdf
[3] CENET, RETIA, and CluSTeR do not report results in the time-aware filter setting. ALRE-IR does not report results on WIKI, YAGO, and GDELT, and uses different dataset versions for ICEWS14 and ICEWS18.
[4] See Supplementary Material for additional dataset information.

Table 1: Statistics of the datasets, the timestep interval, and the specifics of the data splitting, including the recurrency degree (Rec) and the direct recurrency degree (DRec). Please refer to the text for a more detailed description.

Dataset  | #Nodes | #Rels | #Train  | #Valid | #Test  | Time Int. | #Tr/Val/Te TS | DRec [%] | Rec [%]
ICEWS14  | 7128   | 230   | 74845   | 8514   | 7371   | 24 hours  | 304/30/31     | 10.5     | 52.4
ICEWS18  | 23033  | 256   | 373018  | 45995  | 49545  | 24 hours  | 239/30/34     | 10.8     | 50.4
GDELT    | 7691   | 240   | 1734399 | 238765 | 305241 | 15 min.   | 2303/288/384  | 2.2      | 64.9
YAGO     | 10623  | 10    | 161540  | 19523  | 20026  | 1 year    | 177/5/6       | 92.7     | 92.7
WIKI     | 12554  | 24    | 539286  | 67538  | 63110  | 1 year    | 210/11/10     | 85.6     | 87.0

4.4 Evaluation Metrics
As is common in link prediction evaluations, we focus on two metrics: the Mean Reciprocal Rank (MRR), the average of the reciprocals of the ranks of the first relevant item in a list of results, and Hits at 10 (H@10), the proportion of queries for which at least one relevant item is among the top 10 ranked results. Following [Gastinger et al., 2023], we report the time-aware filtered MRR and H@10.
5 Experimental Results
This section reports our quantitative and qualitative results, illustrating how our baselines help to gain a deeper understanding of the field. We list runtimes in the Supplementary Material.
5.1 Global Results
Table 2 (lower area) shows the MRR and H@10 results for the Strict (ψ_Δ), the Relaxed (ξ), and the Combined Recurrency Baseline (ψ_Δξ). For all datasets, with one minor discrepancy, the Combined Recurrency Baseline performs better than the strict and the relaxed variants. However, the Strict Recurrency Baseline is not much worse: its difference to the Combined Recurrency Baseline is never more than one percentage point for either metric. We observe that, while ξ scores an MRR between 5% and 15% on its own, when combined with ψ_Δ (thus obtaining ψ_Δξ) it can grant up to 0.9% of absolute improvement. As described in Section 3, its main role is to fill gaps and resolve ties. The results confirm our intuition. 
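As a reference for the discussion that follows, a sketch (ours) of how the Rec and DRec measures from Section 4.3 could be computed from a set of historical quadruples and a test set.

# Sketch (ours) of the recurrency degree (Rec) and direct recurrency degree (DRec).
def recurrency_degrees(G, test):
    quads = set(G)
    # Rec: fraction of test triples observed at some earlier timestep
    rec = sum(any((s, r, o, k) in quads for k in range(1, t))
              for (s, r, o, t) in test) / len(test)
    # DRec: fraction of test triples observed exactly one timestep earlier
    drec = sum((s, r, o, t - 1) in quads for (s, r, o, t) in test) / len(test)
    return rec, drec

# e.g., recurrency_degrees(train_quads | valid_quads, test_quads)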
Interestingly, the results for ψ_Δξ on all datasets reflect the reported values of the recurrency degree and direct recurrency degree (see Table 1): for both YAGO and WIKI (Rec and DRec > 85%), our baseline yields high MRRs (> 80%), while in the other cases the values are below 40%.
When compared to results from related work (upper area of Table 2), the Combined Recurrency Baseline as well as the Strict Recurrency Baseline yield the highest test scores for two out of five datasets (GDELT and YAGO) and the third-highest test scores for the WIKI dataset. This is an indication that most related-work models seem unable to learn and consistently apply a simple forecasting strategy that yields high gains. In particular, we highlight the significant difference between the Combined Recurrency Baseline and the runner-up methods for GDELT (with a relative change of +12.9%). Results for ICEWS14 and ICEWS18, instead, suggest that more complex dependencies need to be captured on these datasets. While two methods (TRKG and TANGO) perform worse than our baseline, the majority achieves better results.
In summary, none of the methods proposed so far can accomplish the results achieved by a combination of two very naïve baselines for two out of five datasets. This result is rather surprising, and it raises doubts about the predictive quality of current methods.
5.2 Per-Relation Analysis
We conduct a detailed per-relation analysis and focus on two datasets: ICEWS14, where our baseline performed worse, and YAGO, for the opposite reason. We compare the Combined Recurrency Baseline to the four methods that performed best on the respective dataset, considering the seven methods evaluated under the evaluation protocol of [Gastinger et al., 2023] [5]. For clarity, we adopt the following notation to denote a relation and its prediction direction: [relation] (head) signifies predictions in head direction, corresponding to queries of the form (?, r, o, t+); [relation] (tail) denotes predictions in tail direction, i.e., (s, r, ?, t+).
ICEWS14. In Figure 3(a), we focus on the nine most frequent relations. For each relation, one or multiple methods reach MRRs higher than the Combined Recurrency Baseline, with an absolute offset in MRR of approximately 3% to 7% between the best-performing method and our baseline. This indicates that it might be necessary to capture patterns going beyond the simple recurrency principle. However, even for ICEWS14, we see three relations where some methods produce worse results than the Combined Recurrency Baseline. For two of these (Make a visit, Host a visit), RE-GCN and CEN attain the lowest MRR. For the third relation (Arrest, detain, or charge with legal action), TLogic and xERTE have the lowest MRR. This implies that, despite having better aggregated MRRs, the methods display distinct weaknesses and are not learning to model recurrency for all relations.
YAGO. Figure 3(b), instead, shows two distinct categories of relations: the first category contains relations where most methods demonstrate competitive performance (MRR ≥ 85%). In all of them, the Combined Recurrency Baseline attains the highest scores. Thus, the capabilities of related work, like detecting patterns across different relations or multiple hops in the KG, do not seem to be beneficial for these relations, and a simpler inductive bias might be preferred. The second category contains relations where all methods perform poorly (MRR ≤ 20%). 
Due to the dataset's limited information, reliably predicting prize winners or deaths is unfeasible. For these reasons, we expect no significant improvement in future work on YAGO beyond the results of our baseline. However, YAGO still provides value to the research field: it can be used to inspect the methods' capabilities to identify and predict simple recurring facts and, if this is not the case, to pinpoint their deficiencies. Thus, YAGO can also be seen as a dataset for sanity checks. All analysed methods from related work fail this sanity check: none of them can exploit the simple recurrency pattern for all relations.

[5] Since we could compute prediction scores for every query.

Table 2: Experimental results. An entry “†” means that the authors have not released their code, and thus we could not reproduce their results; an empty entry means that the related work does not report results on this dataset; and an entry “d.d.v.” means that it reports results on a different dataset version.

Method          | GDELT       | YAGO        | WIKI        | ICEWS14       | ICEWS18
                | MRR   H@10  | MRR   H@10  | MRR   H@10  | MRR    H@10   | MRR   H@10
L2TKG†          | 20.5  35.8  |             |             | 47.4   71.1   | 33.4  55.0
LogE-Net†       |             |             |             | 43.7   63.7   | 32.7  53.0
TECHS†          |             | 89.2  92.4  | 76.0  82.4  | d.d.v. d.d.v. | 30.9  49.8
TiRGN           | 21.7  37.6  | 88.0  92.9  | 81.7  87.1  | 44.0   63.8   | 33.7  54.2
TRKG            | 21.5  37.3  | 71.5  79.2  | 73.4  76.2  | 27.3   50.8   | 16.7  35.4
RE-GCN          | 19.8  33.9  | 82.2  88.5  | 78.7  84.7  | 42.1   62.7   | 32.6  52.6
xERTE           | 18.9  32.0  | 87.3  91.2  | 74.5  80.1  | 40.9   57.1   | 29.2  46.3
TLogic          | 19.8  35.6  | 76.5  79.2  | 82.3  87.0  | 42.5   60.3   | 29.6  48.1
TANGO           | 19.2  32.8  | 62.4  67.8  | 50.1  52.8  | 36.8   55.1   | 28.4  46.3
Timetraveler    | 20.2  31.2  | 87.7  91.2  | 78.7  83.1  | 40.8   57.6   | 29.1  43.9
CEN             | 20.4  35.0  | 82.7  89.4  | 79.3  84.9  | 41.8   60.9   | 31.5  50.7
Relaxed (ξ)     | 14.2  23.6  | 5.2   10.7  | 14.3  25.4  | 14.4   28.6   | 11.6  22.0
Strict (ψ_Δ)    | 23.7  38.3  | 90.7  92.8  | 81.6  87.0  | 36.3   48.4   | 27.8  41.4
Combined (ψ_Δξ) | 24.5  39.8  | 90.9  93.0  | 81.5  87.1  | 37.2   51.8   | 28.7  43.7

The main disparity in overall MRR between the Combined Recurrency Baseline and related work can be attributed to two specific relations: playsFor (head, tail) and isAffiliatedTo (head). Queries for these relations make up almost 50% of all test queries. More specifically, Timetraveler exhibits limitations with isAffiliatedTo (head) and playsFor (head); xERTE shows its greatest shortcomings for isAffiliatedTo (head); and RE-GCN and CEN exhibit limitations with the relation playsFor in both directions. These findings highlight the specific weaknesses of each method that become visible through comparisons with baselines, thus allowing for targeted improvements.
5.3 Failure Analysis
In the following, we analyse some example queries where the recurrency principle offers an unambiguous solution which, however, is not chosen by a specific method. Following Section 5.2, we focus on YAGO and the same four models. We base our analysis on the insights that YAGO has a very high direct recurrency degree, and that predicting facts based on strict recurrency with steep time decay leads to very high scores. The MRR of φ_Δ is 90.7%. For each model, we count for how many queries the following conditions are fulfilled, given the test query (s, r, ?, t) with correct answer o: (i) (s, r, o, t − 1) ∈ G; (ii) the model proposed o′ ≠ o as top candidate; (iii) there exists no k with (s, r, o′, k) ∈ G. If these are fulfilled, there is strong evidence for o due to recurrency, while (s, r, o′) has never been observed in the past. We conduct the same analysis for head queries (?, r, o, t). 
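A sketch (ours) of the failure-analysis filter just described; top_candidate is a hypothetical callable standing in for whatever interface exposes a model's highest-ranked object for a query.

# Sketch (ours) of the filter for conditions (i)-(iii) from Section 5.3.
def failure_queries(G, test_queries, top_candidate):
    quads = set(G)
    hits = []
    for (s, r, o, t) in test_queries:
        o_prime = top_candidate((s, r, t))
        if ((s, r, o, t - 1) in quads                 # (i) direct recurrence of o
                and o_prime != o                       # (ii) model prefers o_prime
                and not any(q[:3] == (s, r, o_prime) for q in quads)):  # (iii) unseen
            hits.append((s, r, o, o_prime, t))
    return hits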
For each model, we randomly select some of these queries [6] and describe the mistakes made.

[6] Summing up over head and tail queries, we find 34 queries that fulfil all three conditions for Timetraveler, 149 for xERTE, 286 for CEN, and 525 for RE-GCN.

Timetraveler. Surprisingly, Timetraveler sometimes suggests top candidates that are incompatible with the domain and range of the given relation, even when all the above conditions are met. Here are two examples for the playsFor (pf) relation, where the proposed candidates are marked with a question mark:

(?=spain-national-u23, pf, lierse-sk, 10)
(?=baseball-ground, pf, derby-county-fc, 10)

The reasons behind Timetraveler's predictions, despite the availability of reasonable candidates according to the recurrency principle, fall outside the scope of this paper.
xERTE. For xERTE, we detect a very clear pattern that explains the mistakes. In 147 out of 149 cases, xERTE predicts a candidate as subject (object) c when c was given as object (subject). This happens in nearly all cases for the symmetric relation isMarriedTo, resulting in the prediction of triples such as (john, isMarriedTo, john). This error pattern bears a striking resemblance to issues observed in the context of non-temporal KG completion in [Meilicke et al., 2018], where it has already been argued that some models perform surprisingly badly on symmetric relations.
CEN and RE-GCN. Both CEN and RE-GCN exhibit distinct behavior. Errors frequently occur with the playsFor relation, particularly in tail prediction. In all analysed examples, the types (soccer players and soccer clubs) of the incorrectly predicted candidates were correct. Moreover, we cannot find any other systematic error pattern or explanation for the erroneous predictions. It seems that both models are not able to learn that the playsFor relation follows the simple regularity of strict recurrency, even though this regularity dominates the training set.
These examples highlight significant insights into the current weaknesses of each method. Future research can leverage these insights to enhance the affected models.

Figure 3: Test MRRs for each relation and direction (“t” means tail and “h” head, respectively) for (a) ICEWS14 (top) and (b) YAGO (bottom). Colors indicate the number of queries for a relation and its direction in the test set. Panel (a) compares TLogic, CEN, RE-GCN, xERTE, and the Recurrency Baseline on the nine most frequent ICEWS14 relations (Make_statement, Consult, Make_an_appeal_or_request, Express_intent_to_meet_or_negotiate, Make_a_visit, Host_a_visit, Arrest,_detain,_or_charge_with_legal_action, Praise_or_endorse, Criticize_or_denounce); panel (b) compares Timetraveler, CEN, RE-GCN, xERTE, and the Recurrency Baseline on the YAGO relations worksAt, playsFor, hasWonPrize, isMarriedTo, owns, graduatedFrom, diedIn, isAffiliatedTo, and created; the y-axis is MRR (%).

5.4 Parameter Study
In the following, we summarize our findings regarding the influence of hyperparameters on baseline predictions. Detailed results are provided in the Supplementary Material.
Influence of Hyperparameter Values. We analyze the impact of λ and α on the overall MRR. Notably, λ significantly affects the MRR, e.g., with test results ranging from 12.1% to 23.7% on GDELT across different λ values. The optimal λ varies across datasets. 
This underlines the influence of time decay: predicting repetitions of the most recent facts is most beneficial for YAGO and WIKI, while also considering the frequency of previous facts is better for the other datasets. This distinction is also mirrored in the direct recurrency degree, which is notably high for YAGO and WIKI, indicating the importance of the most recent facts. Additionally, setting α to a high value (α ≥ 0.99) yields the best aggregated test results across all datasets, indicating the benefits of emphasizing predictions from the Strict Recurrency Baseline and using the Relaxed Recurrency Baseline to resolve ties and rank unseen triples.
Impact of the Relaxed Recurrency Baseline. Further, to understand the impact of the Relaxed Recurrency Baseline (ξ) on the combined baseline, we compare the MRR of the strict and relaxed baselines on a per-relation basis. We find that, even though the aggregated improvement of ψ_Δξ over ψ_Δ is only marginal (< 1%) for each dataset, for some relations where the strict baseline fails, the impact of the relaxed baseline is meaningful: for example, on the YAGO relation diedIn (tail), the Strict Recurrency Baseline yields a very low MRR of 0.7%, whereas the Relaxed Recurrency Baseline yields an MRR of 17.5%. Overall, this highlights the influence of hyperparameter values, dataset differences, and the advantage of combining baselines on a per-relation basis.
6 Conclusion
We are witnessing a notable growth of scientific output in the field of TKG forecasting. However, a reliable and rigorous comparison with simple baselines, which can help us distinguish real from fictitious progress, has been missing so far. Inspired by real-world examples, this work filled the current gap by designing an intuitive baseline that exploits the straightforward concept of fact recurrency. In summary, despite its inability to grasp complex dependencies in the data, the baseline provides a better or competitive alternative to existing models on three out of five common benchmarks. This result is surprising and raises doubts about the predictive quality of the proposed methods. Once more, it stresses the importance of testing naïve baselines as a key component of any TKG forecasting benchmark: should a model fail where a baseline succeeds, its predictive capability should be subject to critical scrutiny. By conducting critical and detailed analyses, we identified limitations of existing models, such as the prediction of incompatible types. We hope that our work will foster awareness about the necessity of simple baselines in the future evaluation of TKG methods.",
"additional_info": [
{
"url": "http://arxiv.org/abs/2402.03528v1",
"title": "Efficient Generation of Grids and Traversal Graphs in Compositional Spaces towards Exploration and Path Planning Exemplified in Materials",
"abstract": "Many disciplines of science and engineering deal with problems related to\ncompositions, ranging from chemical compositions in materials science to\nportfolio compositions in economics. They exist in non-Euclidean simplex\nspaces, causing many standard tools to be incorrect or inefficient, which is\nsignificant in combinatorically or structurally challenging spaces exemplified\nby Compositionally Complex Materials (CCMs) and Functionally Graded Materials\n(FGMs). Here, we explore them conceptually in terms of problem spaces and\nquantitatively in terms of computational feasibility.\n This work implements several essential methods specific to the compositional\n(simplex) spaces through a high-performance open-source library nimplex. Most\nsignificantly, we derive and implement an algorithm for constructing a novel\nn-dimensional simplex graph data structure, which contains all discretized\ncompositions and all possible neighbor-to-neighbor transitions as pointer\narrays. Critically, no distance or neighborhood calculations are performed,\ninstead leveraging pure combinatorics and the ordering in procedurally\ngenerated simplex grids, keeping the algorithm $\\mathcal{O}(N)$, so that graphs\nwith billions of transitions take seconds to construct on a laptop.\nFurthermore, we demonstrate how such graph representations can be combined to\nexpress path-planning problem spaces and to incorporate prior knowledge while\nkeeping the problem space homogeneous. This allows for efficient deployment of\nexisting high-performance gradient descent, graph traversal search, and other\npath optimization algorithms.",
"authors": "Adam M. Krajewski, Allison M. Beese, Wesley F. Reinhart, Zi-Kui Liu",
"published": "2024-02-05",
"updated": "2024-02-05",
"primary_cat": "cond-mat.mtrl-sci",
"cats": [
"cond-mat.mtrl-sci",
"physics.data-an"
],
"label": "Original Paper",
"paper_cat": "Knowledge AND Graph",
"gt": "Efficient Generation of Grids and Traversal Graphs in Compositional Spaces towards Exploration and Path Planning Exemplified in Materials",
"main_content": "Introduction 1 1.1 Compositional Spaces . . . . . . . . . . . . . 1 1.2 Compositionally Complex Materials . . . . . . 2 1.3 Path Planning in Functionally Graded Materials 3 1.4 Combinatorial Complexities . . . . . . . . . . 5 2 Simplex Uniform Random Sampling 6 2.1 Monte Carlo . . . . . . . . . . . . . . . . . . . 6 2.2 Quasi Monte Carlo . . . . . . . . . . . . . . . 7 3 Simplex Grid 7 3.1 Full . . . . . . . . . . . . . . . . . . . . . . . 7 3.2 Internal . . . . . . . . . . . . . . . . . . . . . 7 4 Simplex Graph 8 4.1 Binary . . . . . . . . . . . . . . . . . . . . . . 8 4.2 Ternary . . . . . . . . . . . . . . . . . . . . . 9 4.3 N-Dimensional . . . . . . . . . . . . . . . . . 9 4.4 Simplex Graph Complexes . . . . . . . . . . . 10 4.5 Discussion of Exploration . . . . . . . . . . . 12 5 Summary and Conclusion 12 A Appendix A 16 1 Introduction 1.1 Compositional Spaces The term composition refers to a way an entity can be split into a set of distinct components, and it plays a critical role in many disciplines of science, engineering, and mathematics. For instance, in combinatorics, the composition will refer to a way a positive integer is split into a sequence of other positive integers. In materials science, chemical composition refers to how a material (or, more generally, matter) is split into distinct components, such as chemical elements, based on considerations such as fraction of atoms, occupied volume, or contributed mass. In economics, portfolio composition may refer to how finite capital is split across assets, such as cash, equity instruments, real estate, and commodities, based on their monetary value. The definition of a composition will typically allow for the definition of a finite space in which such a composition exists. In the typical case of the composition defined in terms of a sequence of d fractions, such space will 1 arXiv:2402.03528v1 [cond-mat.mtrl-sci] 5 Feb 2024 \fbe a standard simplex a (d \u22121)-dimensional polytope of unit length edges defined for points x which satisfy xi > 0 and Pd i=0 xi = 1. Or, in simple terms, the space where all fractions are positive, treated equally, and add up to 1. Some special cases of d=2,3,4, corresponding to 1-simplex, 2-simplex, and 3-simplex, are also known as line segment, triangle, and tetrahedron, respectively. Working within compositional (simplex) spaces requires several additional considerations relative to the more common Euclidean spaces for which most tools were designed. Otherwise, numerous problems can be introduced, ranging from sampling points outside the space, through incorrect density estimates, to incorrect gradient calculations caused by modifying every xj\u0338=i when changing xi assumed to be independent. This work introduces a new high-performance library called nimplex or NIM library for simPLEX spaces, created exclusively for working with such spaces. It was written in low-level Nim language, allowing for careful optimizations, and then compiled with a native Python interface for general use. It provides an efficient implementation of (a) existing methods from literature (see Sec. 2.1 and 3.1), (b) modifications of existing methods (see Sec. 3.2), and (c) entirely new capabilities developed in this paper (see Sec. 4). Neither compositional space nor nimplex is exclusive to any discipline; however, to better showcase its capabilities, two complex, highly-dimensional materials-related problems of high impact are highlighted. 
1.2 Compositionally Complex Materials
An exemplar of how tackling highly-dimensional problems allows researchers to unlock novel solutions is the class of Compositionally Complex Materials (CCMs), which includes several sub-classes, such as Multi Principle Element Alloys (MPEAs), High Entropy Alloys (HEAs), High Entropy Ceramics (HECs), and High Entropy Metallic Glasses (HEMGs). CCMs are materials with at least several elements in significant fractions; the field was initiated by two pioneering 2004 works on HEAs by Yeh et al. [1] and by Cantor et al. [2], who independently proposed that equimolar (equal-fraction) alloys with more than 5 (Yeh) or between 6 and 9 (Cantor) elements could form single solid solutions (SSS) thanks to the configurational entropy stabilizing them. Other notable definitions include all materials with idealized configurational entropy ΔS_conf ≥ R ln 5 ≈ 1.61R [3] (≈ 2.32 bits of information in the composition x) or ΔS_conf ≥ 1R [4] (≈ 1.44 bits).
Regardless of the exact definition, while individual CCMs contain a few components, they always occupy very high-dimensional problem spaces relative to other materials because they are not as restricted in terms of which elements are present. This results in homogeneous datasets occupying over 30-dimensional spaces (or 10-20 for specific problems, like refractory HEAs [4]), which are orders of magnitude larger compared to traditional alloys with one or two primary elements. This introduces opportunities for finding exceptional alloys in little-explored chemical spaces, as demonstrated by some cases of excellent hardness [5], ductility [6], room-temperature strength [7], and refractory strength [8], [9].
In recent years, high-throughput thermodynamics-driven combinatorial studies on CCMs have been successfully performed to generate high-performance materials [10], [11], utilizing CALPHAD thermodynamic databases for CCMs/HEAs (e.g., [12]-[14]). However, they are often limited to coarse sampling (e.g., spaced at 5 or 10 at.%) due to the combinatorial complexity in the number of alloys, and to low-dimensional points (e.g., d = 4) due to the combinatorial complexity of component interactions tracked in CALPHAD calculations, which increases the single-evaluation cost [10], [11]; they are sometimes limited further to particular points such as equimolar alloys [15]. To somewhat alleviate these computational cost challenges, ML models have started to be used as surrogates for thermodynamic calculations and experiments [16], [17] or in the role of candidate selection from an ML latent space [18]. They are extremely fast relative to traditional methods, usually taking microseconds per prediction, and they may seem to work near-instantly when used as a drop-in replacement. However, when one tries to deploy ML models on more complex systems, the combinatorial complexities involved (discussed in Section 1.4) may quickly make ML deployment very expensive, prompting optimization of the approach. While ML inference is typically optimized to the computational limits in state-of-the-art tools like PyTorch [19], the rest of the customized composition-space infrastructure, traditionally designed for thousands of evaluations taking seconds, may become a severe bottleneck when moving to billions of evaluations taking microseconds, as explored throughout this paper. 
In particular, being able to do the following tasks in the nanosecond-to-microsecond region typically becomes critical and needs to be considered:
1. Efficient random sampling from uniform grids and continuous distributions (Monte Carlo in Section 2.1) to facilitate approaches including active learning [18] and generative design [20].
2. Efficient generation of uniform grids in simplex spaces to facilitate complete screenings, quantitatively explored in Sections 3.1 and 3.2.
3. Efficient generation of high-dimensional graph representations with complete connectivity to all adjacent CCM compositions, explored in detail throughout Section 4, to deterministically allocate the problem-space structure and facilitate neighborhood-based exploration. This is particularly beneficial for gradient calculations between neighboring grid points, where one typically has to either (a) naïvely compute all possible compositional changes despite redundancy (e.g., if the gradient +1%B −1%A leads from point 1 to 2 and the gradient +1%C −1%A from point 1 to 3, then at point 2 the gradient +1%C −1%B to point 3 can already be known), at least doubling the number of required evaluations, or (b) keep track of all visited states through an associative array (dictionary). The latter can, in principle, scale well with the number of visited points (O(1) average time for a hash map) but is many times more computationally intensive compared to directly accessing a known memory location through a pointer, as one can do with a graph data structure.
1.3 Path Planning in Functionally Graded Materials
Another class of materials where complex compositional spaces have to be considered, even if the intermediate compositions may not be complex themselves, is the class of Functionally Graded Materials (FGMs), sometimes narrowed to Compositionally Graded Materials (CGMs). In them, a set of compositions is traversed to form a compositional path inside a single physical part in order to spatially leverage combinations of properties that may not be possible or feasible with a homogeneous material [21]. In the simplest binary example, this could mean increasing the porosity fraction as a function of depth from the part surface to achieve a higher performance-to-weight ratio.
This paper focuses on the computational design of advanced FGMs, which enable solutions to otherwise impossible challenges. An example of such is the design of compositional pathways between stainless steel and titanium alloys to allow for additive manufacturing (AM) of aerospace and nuclear components, combining these alloys within a single print [22]. Such a task is highly nontrivial, as simple linear mixing causes several brittle or otherwise detrimental Fe-Ti and Cr-Ti phases to form, regardless of how gradual the compositional gradient is [23]. Formation of such phases in significant quantities is not specific to this alloy pair; thus, all possible ternary systems in the Cr-Fe-Ni-Ti-V space had to be considered and manually arranged together by experts to obtain a pathway navigating through feasible regions [22].

Figure 1: Three available compositions existing in a quaternary (d=4) compositional space forming a ternary (d=3) compositional space which can be attained with them; sampled with a uniform grid with 24 divisions. The hexagonal tiling emerges based on the distance metric in the 2-simplex and would become rhombic dodecahedral in the 3-simplex. 
While in recent years the fabrication of FGMs has become dominated by Directed Energy Deposition AM for various geometries (e.g., radial deposition [24]), several other notable manufacturing techniques allow the deployment of such pathways. These include deposition-based methods for high-precision applications, casting-based methods for high-volume fabrication [21], and, recently, brazing of consecutive metallic foils [25] to create relatively thin compositionally graded interfaces en masse.
In a typical FGM manufacturing scenario, a discrete set of compositions (individual available materials) exists in a compositional (simplex) space formed by a union of all components (usually chemical elements or compounds not affecting later steps), as depicted at the top of Figure 1, which one could call the elemental space. The position in this elemental space is fundamental and is usually the one considered in both mechanistic (e.g., thermodynamic CALPHAD-type models [26]) and predictive (ML/empirical-rule) modeling. However, during FGM design, it is more convenient to consider another compositional space formed by treating the original available compositions as components, as depicted at the bottom of Figure 1, which one could call the attainable compositions space or, more generally, the design space. Within an FGM manufacturing apparatus, it is common for each of the available compositions to be treated equally, e.g., powder hoppers [27], sputtering targets [25], or other flow sources are symmetrically arranged and offer the same degree of control. Thus, as depicted in Figure 1, the attainable compositional space can be treated as a standard simplex for design purposes and partitioned equally across dimensions, reflecting the nature of the problem, even though equidistant points in it may not be equidistant in the original (elemental) space.
The attainable spaces used in the final design tend to be lower-dimensional relative to the corresponding elemental spaces, especially when the available compositions are CCMs/HEAs or the number of flow sources is limited. However, this trend is not fundamentally required, and going against it may be beneficial in many contexts. For instance, one may conceptualize a ternary (d = 3) elemental compositional space where 4 compositions are available, arranged as vertices of some tetragon, thus forming a quaternary (d = 4) attainable compositions space tetrahedron. In such a case, some regions have to overlap in the elemental space, while new regions are guaranteed to be unlocked relative to 3 available compositions if the formed tetragon is strictly convex. This seemingly oversamples; however, it is critical to consider that there is no oversampling in the design space, because the available materials can possess properties that are not a function of the composition alone, such as the CO2 footprint or price.
A clear and industry-significant example of the above happens during FGM design in elemental spaces containing Hf and Zr. The two are very difficult to separate, causing both price and greenhouse emissions to rise sharply as a function of the separation purity requirements. Furthermore, physical form factors available from suppliers tend to be limited, or at lower demand, for pure Zr and pure Hf, furthering the cost. In the case of AM using wires as feedstock (WAAM) [28], as explored in detail in Appendix A, using pure Zr in place of the more industry-common alloy with 4.5%Hf can be somewhere from a few times to over 100 times more expensive. 
In a typical, manual FGM design, a researcher selects one of the two grades based on their expertise. However, by considering the two grades as independent components of the higher-dimensional design space, one can avoid forcing a decision before exploring the space, thus limiting human bias and allowing exploration of both options simultaneously. This allows their combination, where regions of space insensitive to the Hf content utilize the cheaper grade while pure Zr is used when necessary or favorable based on some path heuristic.

Figure 2: A path example which avoids infeasible (red) and undesirable (yellow) regions, or their combination (orange).

With the design space carefully set up, one can start to evaluate different paths across it. Typically, the core considerations deal with meeting specific feasibility (hard) constraints. In the case of metal-alloy FGMs, these can be (1) formation of detrimental phases based on thermodynamic equilibrium [27], (2) formation of detrimental phases based on non-equilibrium predictions of solidification results using the Scheil-Gulliver method, which better describes the as-made material [29], or (3) a combination of the two [22]. In the future, these will likely be extended through (4) precipitation modeling improving metastable material design, thanks to the recent release of the open-source high-performance software Kawin [30], and (5) automated modeling of manufacturing constraints, such as printability in AM [31]. Furthermore, one can also try to meet desirability (soft) constraints, such as the physical appearance of a composite material, which can be broken if needed. These two types of constraints are depicted in Figure 2, alongside an example path navigating through them.
In Figure 2, all infeasible points violating the constraints are annotated for visualization. However, doing so may be unnecessary when path-planning, especially iteratively based on neighbor connectivity, as the insides of the infeasible space could not be reached, thus reducing the total number of evaluations.
In addition to the feasibility and desirability constraints, further considerations are often made as to how the path optimizes the values of a set of properties of interest, either individually or through some heuristic combining them. Usually, this optimization constitutes finding the path that minimizes or maximizes either average or extreme values over the set of visited states, exemplified by the pink path in Figure 3. In the case of metal-alloy FGMs, this can mean, for instance, minimizing the average evaporation rate of the molten metal [32], minimizing the maximum susceptibility to different cracking mechanisms [33], or maximizing the ductility [34].

Figure 3: Two path examples in the attainable compositional space annotated with some virtual property. One (pink/inner) minimizes/maximizes the average property value given a number of fixed path lengths, and another (purple/outer) minimizes the gradient in the property along the path.

The last, fundamentally different property-optimization task has to do with the gradient or, more generally, the character of transitions between intermediate states, which will be critical later in the context of graphs in Section 4. Most commonly, one optimizes the path to minimize value-function gradients, exemplified by the purple path in Figure 3, in order to, for instance, minimize the thermal expansion coefficient mismatch and, by extension, the stresses induced by temperature changes [35]. 
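To illustrate the gradient-minimizing objective just described (the purple path in Figure 3), the following Python sketch (ours, not nimplex's API) runs a Dijkstra-like search on a generic neighbor-list graph, minimizing the largest single-transition property jump; neighbors and prop are assumed inputs, e.g., integer grid indices with a property value per node.

# Sketch (ours): path from src to dst minimizing the worst property gradient
# across any single neighbor-to-neighbor transition.
import heapq

def min_max_gradient_path(neighbors, prop, src, dst):
    best = {src: 0.0}
    heap = [(0.0, src, [src])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        for nxt in neighbors[node]:
            c = max(cost, abs(prop[nxt] - prop[node]))  # path cost = worst edge
            if c < best.get(nxt, float('inf')):
                best[nxt] = c
                heapq.heappush(heap, (c, nxt, path + [nxt]))
    return None

neighbors = {0: [1], 1: [0, 2], 2: [1]}
prop = {0: 1.0, 1: 1.4, 2: 1.5}
print(min_max_gradient_path(neighbors, prop, 0, 2))  # (0.4, [0, 1, 2])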
1.4 Combinatorial Complexities

As alluded to in Sections 1.2 and 1.3, when sampling compositions or partitioning the corresponding spaces, the resulting combinatorial complexities have to be considered to determine whether a method will be computationally feasible. There are two key equations governing these complexities, based on (1) the dimensionality of the space (number of components) $d$ and (2) the number of equal divisions made in each dimension $n_d$, which can be found for every feasible fractional step size (such that it can add to 100%).

The first, very intuitive equation gives the number of samples $N_C$ on a Cartesian grid in $d-1$ dimensions, with the $-1$ term due to one of the components being considered dependent:

$$N_C(d, n_d) = (n_d + 1)^{d-1} \quad (1)$$

The second equation gives the number of ways $n_d$ balls can be arranged in $d$ bins, which is well known to be equivalent to the much simpler problems of choosing $d-1$ moves or $n_d$ placements from $d-1+n_d$ possible options (see [36] or [37]). While these may seem unrelated to compositions, the former problem is precisely equivalent to finding a composition of an integer or distributing $n_d$ compositional fractions $\frac{1}{n_d}$ across components or chemical elements, giving the number $N_S$ of unique discrete compositions in the simplex space:

$$N_S(d, n_d) = \binom{d-1+n_d}{d-1} = \binom{d-1+n_d}{n_d} \quad (2)$$

In terms of factorials, both binomial expressions simplify to the same

$$N_S(d, n_d) = \frac{(d-1+n_d)!}{(d-1)!\,n_d!}$$

Throughout Sections 3 and 4, the interplay between these equations will be utilized to contrast different computational methods, and their direct results will allow computational feasibility evaluation.

2 Simplex Uniform Random Sampling

2.1 Monte Carlo

Performing a uniform random sampling, also known as the Monte Carlo method, over a simplex space is a prevalent task; however, it is also one of the most common sources of inefficiency, bias, or errors when implemented incorrectly. Software (e.g., alchemyst/ternplot in Matlab [38]) and methods dealing with low-dimensional or otherwise small compositional spaces often utilize a naïve approach of sampling uniformly distributed points from a Cartesian space/grid in $d-1$ dimensions and then rejecting the infeasible points ($\sum_i x_i > 1$), as depicted in the left part of Figure 4, which for small $d$ ($d \leq 4$) can be both easiest and computationally fastest. However, this method becomes inefficient for large $d$ because the fraction of rejected points increases with the dimensionality. While this problem is widely noted in the literature [39], to the best of the authors' knowledge, it has yet to be discussed quantitatively, despite being critical to estimating the sampling's computational complexity. Thus, it is derived herein.

One can consider that a grid of $N_S$ simplex-feasible points is a subset of a grid of $N_C$ points distributed uniformly in the Cartesian space, so that a random selection from this grid should have a $\frac{N_S}{N_C}$ probability of falling within the simplex. Thus, as shown below, one can find the acceptance rate by considering an infinitely fine grid ($n_d \to \infty$). Appendix B gives an alternative, intuitive method for finding $f(4)$ using geometry, which agrees with this result.

$$f(d) = \lim_{n_d \to \infty} \frac{N_S}{N_C} = \lim_{n_d \to \infty} \frac{\binom{d-1+n_d}{d-1}}{(n_d + 1)^{d-1}} = \Gamma(d)^{-1} = \frac{1}{(d-1)!} = \frac{d}{d!} \quad (3)$$
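To make these counts concrete, a minimal Python sketch of Equations 1-3 (an illustration, not part of nimplex) is:

from math import comb, factorial

def n_cartesian(d: int, nd: int) -> int:
    # Eq. 1: points on a Cartesian grid in d-1 dimensions.
    return (nd + 1) ** (d - 1)

def n_simplex(d: int, nd: int) -> int:
    # Eq. 2: unique discrete compositions in a d-component simplex.
    return comb(d - 1 + nd, d - 1)

# Acceptance rate of naive rejection sampling approaches 1/(d-1)! (Eq. 3):
print(n_simplex(9, 20) / n_cartesian(9, 20))  # ~8.2e-5 for a 5% grid in 9 components
print(1 / factorial(8))                       # ~2.48e-5 in the nd -> infinity limit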
As one can see in Equation 3, the number of rejected points per accepted one exhibits factorial growth, and while it is not a significant obstacle for low-dimensional cases like a ternary $f(3) = \frac{1}{2}$ or a quaternary $f(4) = \frac{1}{6}$, it will relatively quickly become a problem when compositionally complex materials are considered. For instance, in the case of a nonary chemical space, $f(9) = \frac{1}{40320}$, or only about 0.0025% of points will fall into the feasible space. Such a rejection rate could have a particularly severe effect on ML-driven methods, such as generative CCM design.

To circumvent the rejection problem, one may randomly sample from the N-cube and normalize to 1; however, as shown in the center of Figure 4 and commonly known in the literature [40], this leads to oversampling in the center of each underlying dimension.

Figure 4: (left) Uniform random sampling in a 2-cube (square) filtered to fall onto a 2-simplex (ternary composition), showing a 50% rejection rate; (middle) random sampling in a 3-cube projected onto a 2-simplex by normalizing coordinates, showing oversampling in the center of each dimension; and (right) ideal uniform random sampling of a simplex.

Thus, to achieve a uniform random sampling, nimplex and other carefully designed methods (e.g., [39] and [40]) tend to take the Dirichlet distribution, where one samples points $y$ from Gamma distributions with density $\frac{y_i^{\alpha-1} e^{-y_i}}{\Gamma(\alpha)}$, and consider its special "flat" case, where $\alpha = 1$ simplifies the density to just $e^{-y_i}$. This is equivalent to sampling $z$ from uniform distributions and calculating $y_i = -\log(z_i)$, which can then be normalized to obtain $x$ as $x_i = y_i / \sum y$. The following snippet shows nimplex's implementation of this, which samples $z$ with the high-performance xoroshiro128+ random number generator [41] underlying the randomTensor function from the Arraymancer tensor library [42].

proc simplex_sampling_mc(
    dim: int, samples: int): Tensor[float] =
  let neglograndom =
    randomTensor[float]([samples, dim], 1.0
      ).map(x => -ln(x))
  let sums = neglograndom.sum(axis=1)
  return neglograndom /. sums

An alternative approach worth mentioning, sometimes found in this context, is based on (1) generating a $(d+1)$-length list composed of 0, $d-1$ random numbers, and 1, (2) sorting it, and (3) obtaining the $d$-length list of differences between consecutive elements, which is guaranteed to be uniformly distributed over a simplex, as shown in [43]. While this approach may be easier to conceptualize, it is much more computationally expensive due to the sorting step. On the author's laptop, for $d = 9$, the method implemented in nimplex (involving the calculation of 9 logarithms and normalizing them) takes 3.6 ns, while the above (implemented with merge sort) takes 74.5 ns per iteration, i.e., over 20 times longer while not providing any clear benefit. Furthermore, their complexities are $O(N)$ and $O(N \ln N)$, respectively, so the computational cost difference will also slowly widen with increasing $d$.

2.2 Quasi Monte Carlo

While beyond the current implementation scope of nimplex, it is beneficial to consider quasi-Monte Carlo (QMC) sampling methods, where quasi-random sequences of low discrepancy (having highly uniform coverage of all regions) are used to sample the space deterministically. Such an approach is guaranteed to be very beneficial in low-dimensional ($d \leq 3$) problems and has been implemented in thermodynamic tools, including pycalphad [40], [44], improving the sampling of ternary systems.
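For reference, both uniform-sampling constructions from Section 2.1 can be sketched in a few lines of Python (assuming NumPy; illustrative, not the nimplex implementation):

import numpy as np

rng = np.random.default_rng()

def sample_simplex_log(d: int, n: int) -> np.ndarray:
    # Flat-Dirichlet sampling: negative logs of uniforms, then normalize.
    # 1 - random() lies in (0, 1], avoiding log(0).
    y = -np.log(1.0 - rng.random((n, d)))
    return y / y.sum(axis=1, keepdims=True)

def sample_simplex_sorted(d: int, n: int) -> np.ndarray:
    # Sorted-differences construction: equivalent result, but slower
    # because of the O(d log d) sort per sample.
    z = np.sort(rng.random((n, d - 1)), axis=1)
    padded = np.concatenate([np.zeros((n, 1)), z, np.ones((n, 1))], axis=1)
    return np.diff(padded, axis=1)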
However, QMC can become problematic as one moves to higher-dimensional problems. Firstly, the upper discrepancy bounds for QMC quickly increase with increasing dimensionality, unlike those for MC, which depend only on the number of samples; thus, MC can outperform it (thanks to better guarantees) unless a quickly (often exponentially) growing number of samples is taken (see the discussion on p. 271 in [45]). Because of this, even for quaternary ($d = 4$) spaces, MC may be preferred for a low number of samples, even though QMC, especially with additional scrambling, can outperform it, as shown in [40]. Another significant problem in QMC is the unequal sampling of different dimensions, which can be very severe in high dimensions (see p. 154 in [46]). In addition to causing under-performance in space-filling, such bias, combined with the standard alphabetical ordering of chemical components, can cause systematically worse exploration of, e.g., titanium compared to aluminum in CCMs, just based on their names.

3 Simplex Grid

3.1 Full

Next, one can consider the creation of a grid of uniformly distributed points, which is known to contain $\binom{d-1+n_d}{d-1}$ points, as discussed in Section 1.4. Similar to the random sampling discussed in Section 2, such a compositional grid cannot be constructed by simply projecting a Cartesian grid in the $(d-1)$-cube, as patterns will emerge (explored in detail in [40]), but it can be quickly constructed by rejecting infeasible points, as shown in Figure 5. However, it will suffer from a nearly as bad rejection rate, quantitatively dependent on both $d$ and $n_d$. For instance, if we consider 5%-spaced compositions in 9 components, the fraction of points summing to 100% is $f_{n_d=20}(9) \approx \frac{1}{12169}$, or 0.0082%.

Figure 5: (left) Uniform grid ($n_d = 24$) in a 2-cube (square) filtered to fall onto a 2-simplex (ternary composition), showing a 12/25 = 48% rejection rate; (right) uniform grid in the corresponding simplex.

Fortunately, in their 1978 textbook, Nijenhuis and Wilf [36] explored the problem and gave an efficient algorithm/routine called NEXCOM to procedurally generate these simplex lattice points for arbitrary $d$ and $n_d$, resulting in the grid shown in Figure 5 on the right. In the following years, several authors made various modifications to the algorithm, and the most recent one, by Chasalow and Brand [37], improves performance without sacrificing simplicity. Over the years, it has been implemented in relatively modern languages such as FORTRAN90, C, MATLAB, and Python. Now, it has been implemented in the Nim language as well, with the Nim code snippet shown below.

proc simplex_grid(
    dim: int, ndiv: int): Tensor[int] =
  let N: int = binom(ndiv+dim-1, dim-1)
  result = newTensor[int]([N, dim])
  var x = zeros[int](dim)
  x[dim-1] = ndiv
  for j in 0..dim-1:
    result[0, j] = x[j]
  var h = dim
  for i in 1..N-1:
    h -= 1
    let val = x[h]
    x[h] = 0
    x[dim-1] = val - 1
    x[h-1] += 1
    for j in 0..dim-1:
      result[i, j] = x[j]
    if val != 1:
      h = dim
  return result

As one can deduce from the above, the algorithm proceeds through the simplex space starting from $[0, 0, ..., n_d]$ and redistributes one $\frac{1}{n_d}$ fraction $N_S - 1$ times across dimensions, forming a zig-zag path to $[n_d, 0, ..., 0]$.
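For readers working from Python, a direct transcription of the routine above (a sketch mirroring the Nim implementation, using only the standard library) is:

from math import comb

def simplex_grid(dim: int, ndiv: int) -> list[list[int]]:
    # NEXCOM-style zig-zag traversal: emits all compositions of ndiv units
    # over dim components, in lexicographic order.
    n = comb(ndiv + dim - 1, dim - 1)
    x = [0] * dim
    x[dim - 1] = ndiv
    out = [x.copy()]
    h = dim
    for _ in range(1, n):
        h -= 1
        val = x[h]
        x[h] = 0
        x[dim - 1] = val - 1
        x[h - 1] += 1
        out.append(x.copy())
        if val != 1:
            h = dim
    return out

# simplex_grid(3, 12) yields the 91 ternary grid points used later in Figure 7.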
3.2 Internal

To the best of the authors' knowledge, something that has not been implemented before, but that is significant to the exploration of CCMs (see Sec. 1.2), is an algorithm to obtain only the internal points of the simplex grid, i.e., points with non-zero values in all dimensions, to allow, e.g., generating all 7-component HEAs rather than all alloys in a 7-component space. In principle, one can filter the output of the algorithm presented in Section 3.1; however, this may quickly become inefficient, especially for $n_d$ low enough to approach $d$. The number of points can be found by, again, considering the surrogate problem of ball compositions mentioned in Section 1.4 and noting that if the last ball cannot be removed from any position, there will be $d$ fewer possible options to perform the $d-1$ moves, thus resulting in $N_I$ samples:

$$N_I(d, n_d) = \binom{n_d - 1}{d - 1} \quad (4)$$

This can be quickly double-checked through summation of the internal points of all lower $\delta$-dimensional spaces enclosed in the $d$-space:

$$\sum_{\delta=1}^{d} \left[ \binom{n_d - 1}{\delta - 1} \times \binom{d}{\delta} \right] = \frac{(d-1+n_d)!}{(d-1)!\,n_d!} = N_S(d, n_d)$$

We can now look at the $N_I(d, n_d)$ to $N_S(d, n_d)$ ratio for the aforementioned case of generating all 7-component alloys. For a 5% grid ($n_d = 20$) we get $\approx \frac{1}{8.5}$, and for a 10% grid ($n_d = 10$) we get $\approx \frac{1}{95}$, showing a clear benefit of implementing the new method. This can be done by taking the modified-NEXCOM algorithm [37] from Section 3.1 and:

1. Adjusting the procedure length from $N_S$ to $N_I$.
2. Initializing the first states in x to 1.
3. Adjusting the starting point from [1, 1, ..., ndiv] to [1, 1, ..., ndiv - dim + 1].
4. Jumping to the next dimension one step earlier (val != 2).

These modifications yield the following nimplex snippet.

proc simplex_internal_grid(
    dim: int, ndiv: int): Tensor[int] =
  let N: int = binom(ndiv-1, dim-1)
  result = newTensor[int]([N, dim])
  var x = ones[int](dim)
  x[dim-1] = ndiv+1-dim
  for j in 0..dim-1:
    result[0, j] = x[j]
  var h = dim
  for i in 1..N-1:
    h -= 1
    let val = x[h]
    x[h] = 1
    x[dim-1] = val - 1
    x[h-1] += 1
    for j in 0..dim-1:
      result[i, j] = x[j]
    if val != 2:
      h = dim
  return result

4 Simplex Graph

The simplex grid algorithm presented in Section 3.1 is used commonly; however, it has an important feature that has not been utilized yet and was only briefly noted by its authors [37]: namely, the fact that the generated points are sorted in lexicographic order (forward or reverse, depending on convention), which opens the door to using pure combinatorics to find certain interesting relations between points at near-zero cost compared to other popular methods.

4.1 Binary

In the simplest possible case, which will be expanded upon later, one can look at a binary (d = 2 / 1-simplex) compositional grid and write a straightforward function that will find all neighboring points (transitions to them) to create a graph representation of the binary system like the one presented in Figure 6, without any notion of distance calculations.

Figure 6: 1-simplex graph corresponding to a binary system ($n_d$ = 12) with 13 nodes/compositions and 24 edges/transitions.
Such a function, shown below, can be implemented by setting up a neighbors list of lists ($N_S$ lists of length $\leq 2$) of integer positions and then, at the end of every i-th iteration, populating it with a forward (i+1) transition unless the end point ([1, 0]) has been reached and a backward (i-1) transition unless the start point ([0, 1]) has been reached, with each blocked case corresponding to the lack of some component.

proc neighborsLink2C(i: int, x: Tensor,
    neighbors: var seq[seq[int]]): void =
  if x[1] != 0:
    neighbors[i].add(i+1)
  if x[0] != 0:
    neighbors[i].add(i-1)

While the above is trivial, it clearly demonstrates that the graph can be constructed within the original $O(N)$ computational complexity of the simplex grid algorithm, unlike a similarly trivial distance matrix calculation, which would be $O(N^2)$, thus unlocking efficient generation of even massive graphs of this kind.

4.2 Ternary

With the core of the approach set up in Section 4.1, one can move to the more complex ternary (d = 3 / 2-simplex) case, which can be conceptualized as a series of 13 binary systems (already solved individually in Sec. 4.1) of lengths from 13 to 1, with a simple modification of the positional coordinates, shifted forward by 1 to accommodate the new dimension. The newly allowed neighbor transitions across these binaries can be quickly noticed to depend on which of these binaries is considered; however, they can be intuitively found by considering that each transition in the 3rd dimension (increasing $x_0$) limits the size of the binary simplex by 1 from the original size of $\binom{d-1+n_d}{d-1} = \binom{2-1+n_d}{2-1} = n_d + 1$. Thus, one can define two convenient jump lengths:

$$J_0^{d=3} = 1$$
$$J_1^{d=3}(x_0) = 1 + n_d - x_0$$

Then, one can quickly visualize that (1) unless $x_2 = 0$, a transition by jump $J_1$ should be possible, (2) unless $x_1 = 0$, a transition by jump $J_1$ combined with a backward jump $J_0$ in the target binary should be possible, and (3) unless $x_0 = 0$ (the first traversed binary is considered), transitions by both a backward jump $J_1$ and a backward jump $J_1 + J_0$ (an extra step within the earlier binary) should be possible. Thus, one arrives at the following algorithm, which requires an additional $n_d$ ("ndiv") input on top of the one from Section 4.1 but retains its structure.

proc neighborsLink3C(...,
    ndiv: int): void =
  let jump0 = 1
  let jump1 = 1+ndiv-x[0]
  if x[0] != 0:
    neighbors[i].add(i-jump1)
    neighbors[i].add(i-jump1-jump0)
  if x[1] != 0:
    neighbors[i].add(i-jump0)
    neighbors[i].add(i+jump1-jump0)
  if x[2] != 0:
    neighbors[i].add(i+jump0)
    neighbors[i].add(i+jump1)

Utilizing the above, the result presented in Figure 7 can be quickly obtained for any number of divisions. The numbering of points can help to visualize how the transitions were obtained.

Figure 7: 2-simplex graph corresponding to a ternary system ($n_d$ = 12) with 91 nodes/compositions and 468 edges/transitions.

4.3 N-Dimensional

Moving beyond ternary systems, one has to increase the number of tracked transitions to higher dimensions, which can be counted for every jump length $J_j$ with $\sum_{i=0}^{d-j-2} x_i$, and then utilized to obtain a general equation for all $d-1$ elements of the jump length array $J$ as a function of the current point $x$:

$$J_j(x) = \binom{j + n_d - \sum_{i=0}^{d-j-2} x_i}{j} \quad (5)$$

As expected, for the special case of $d = 3$, the above agrees with $J_0$ and $J_1$ found for the ternary case in Section 4.2. One can also note that $J_0$ always equals 1, as $\binom{a}{0} = 1$ for any $a$.
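A small Python sketch of Equation 5 (illustrative; it mirrors the jump computation used inside nimplex's general neighbor-finding routine shown later) is:

from math import comb

def jump_lengths(x: list[int], ndiv: int) -> list[int]:
    # J_j (Eq. 5) for grid point x in a d-component simplex grid;
    # J_0 = binom(a, 0) = 1 for any a.
    d = len(x)
    jumps = [1] * (d - 1)
    for j in range(1, d - 1):
        jumps[j] = comb(j + ndiv - sum(x[: d - 1 - j]), j)
    return jumps

# Ternary check: for d = 3, jump_lengths(x, ndiv)[1] == 1 + ndiv - x[0].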
With $J$ defined, one can take a quaternary system (d = 4 / 3-simplex) and perform a similar visualization thought exercise in one's head as in Section 4.2, but in 3D, considering the new transitions to 3 neighbors above and 3 neighbors below, in order to set up the neighborsLink4C procedure, which is presented in Appendix C.

Such an approach of visualizing and counting the possible jumps in one's head becomes (a) challenging for the quinary system (d = 5 / 4-simplex) case, where one has to visualize 4 forward and 4 backward jumps to and from points inscribed in every tetrahedron formed by the 3-simplex tetrahedral grids, and (b) near impossible for higher orders, both because of the visualization dimensionality and the growing number of neighbors to track, given by $\sum_{\delta=2}^{d} 2(\delta - 1) = d(d-1)$, or, for d = 6, 7, 8, and 9, corresponding to 30, 42, 56, and 72 neighbors respectively, thus prompting an alternative.

Fortunately, while performing the above thought exercises for increasing $d$, with transition lengths $T$ expressed as compositions of the jump lengths described by $J$, a careful observer can quickly note that for any dimensionality of the simplex grid, the main challenge in finding the higher-dimensional $T$ lies in distributing the $d-1$ new forward ($x_0$ increment) transitions across all previous $x_i = 0$ constraints, while the $d-1$ new backward ($x_0$ decrease) transitions are always possible for $x_0 > 0$ and follow a relatively simple trend of transition lengths $J_{d-2}, \sum_{j=d-3}^{d-2} J_j, \ldots, \sum_{j=0}^{d-2} J_j$. This allows a relatively simple construction of all backward transitions by stacking them together across all $d-2$ considered dimensions. Finally, the simple notion that every backward transition $b \to a$ of grid point $b$ is associated with a forward transition $a \to b$ of point $a$ allows for the complete construction of the simplex graph representation of the compositional space. This is implemented very concisely in the nimplex snippet below, where for every considered dimension $\delta$ from $d$ (highest, at the 0th index of $x$) down to 2 (the $(d-2)$th index), the $\delta - 1$ backward and $\delta - 1$ forward transitions of lengths $t_k$ are found by iteratively summing the jump lengths $J_{\delta-2}, \sum_{j=\delta-3}^{\delta-2} J_j, \ldots, \sum_{j=0}^{\delta-2} J_j$, and then used to assign neighborhood.

proc neighborsLink(...): void =
  var jumps = newSeq[int](dim-1)
  jumps[0] = 1  # binom(a, 0) = 1
  for j in 1..<(dim-1):
    jumps[j] = binom(
      j+ndiv-sum(x[0..(dim-2-j)]), j)
  var trans: int
  for order in 0..(dim-2):
    trans = 0
    if x[order] != 0:
      for dir in 0..(dim-2-order):
        trans += jumps[dim-2-order-dir]
        neighbors[i].add(i - trans)
        neighbors[i - trans].add(i)

The result of running the above algorithm with d = 4 and a relatively low $n_d$ is shown in Figure 8 to help visualize the neighbor-neighbor transitions despite the overlap when printed in 2D.

Figure 8: A quaternary (d = 4 / 3-simplex) simplex graph ($n_d$ = 6) with 84 nodes (compositions) and 672 edges (possible moves). A set of nodes has been manually selected (highlighted in pink) to depict a toy example of infeasible points (similarly to Figure 2), which forces a non-trivial path (highlighted in red) to traverse from the bottom-left corner at 1 to the bottom-right corner at 84.

It is critical to note that the above algorithm is still within the $O(N)$ computational complexity for $N$ grid points, just like the forward/backward jumps discussed in Section 4.1.
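Putting the grid and neighbor-finding routines together, a complete Python sketch of the graph construction (an illustrative transcription of the Nim code above, reusing simplex_grid from the earlier sketch in Section 3.1) is:

from math import comb

def simplex_graph(dim: int, ndiv: int):
    # Returns the grid points plus, for each point, the indices of all
    # compositions reachable by moving one 1/ndiv unit between two components.
    nodes = simplex_grid(dim, ndiv)  # from the earlier Python sketch
    neighbors = [[] for _ in nodes]
    for i, x in enumerate(nodes):
        # Jump lengths J_j (Eq. 5) evaluated at this grid point.
        jumps = [1] + [comb(j + ndiv - sum(x[: dim - 1 - j]), j)
                       for j in range(1, dim - 1)]
        for order in range(dim - 1):
            if x[order] == 0:
                continue
            trans = 0
            for step in range(dim - 1 - order):
                trans += jumps[dim - 2 - order - step]
                neighbors[i].append(i - trans)   # backward transition
                neighbors[i - trans].append(i)   # symmetric forward transition
    return nodes, neighbors

# simplex_graph(3, 12) gives 91 nodes and 468 transitions, matching Figure 7.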
Thus, for instance, the task of constructing a 1%-resolution graph for a 6-component chemical space, containing $N_S(d=6, n_d=100)$ or nearly 100 million unique vertices requiring 2.76 billion edges (possible chemistry changes), takes as little as 23 s on the author's laptop computer. This stands in stark contrast with $O(N^2)$ distance-based graph construction, which, even when well implemented to take around 3 ns per comparison, would take approximately 1 year on the same machine. Furthermore, the method scales excellently with increasing problem dimensionality. For a 12-component chemical space with $n_d = 12$ divisions per dimension, even though up to 132 neighbors have to be considered for all $N_S = 1.35$ million vertices, the 93 million edges are constructed in 950 milliseconds.

4.4 Simplex Graph Complexes

Once able to rapidly set up simplex graphs in arbitrary dimensions, one can also efficiently combine them to construct more complex graphs representing non-trivial problem statements where many different paths are possible to explore, and prior knowledge can be incorporated as assumptions in the problem solution space if needed. At the same time, it allows the dimensionality of the intermediate compositional spaces to be kept within manufacturing feasibility, i.e., the number of material flow sources.

Suppose one tries to connect elemental compositions A and F, but assumes prior knowledge that they cannot be combined directly in any quantity, and also knows that (1) A is compatible with B and C, (2) F is compatible with D and E, but (3) B and E are incompatible in any quantity, and (4) C and D are incompatible in any quantity. Furthermore, (5) G and H are individually compatible with B and D, and (6) I and J are individually compatible with C and E. These rules can be used to set up a problem graph like the one at the top of Figure 9, encoding everything that is known about the system a priori and limiting the solution space from the full $\binom{10-1+12}{12} \approx 300{,}000$ to $2\binom{3-1+12}{12} + 10\binom{2-1+12}{12} = 312$, or by three orders of magnitude.

Figure 9: Graph Complex Example #1 depicting a problem space where 2 ternary systems can be connected through 6 different binary paths.

The space constructed in Figure 9 is kept very minimal in terms of going beyond known assumptions and dimensionality, to illustrate the concept in a plane. However, real examples of this technique can be highly non-trivial and essential in bringing the number of considered points into a computationally feasible regime when tens of available compositions are considered. Furthermore, unlike in Figure 9, where spaces are simply connected through single components, the interfaces between the individual compositional spaces can lie along any subspace (e.g., the ternary face of a quaternary tetrahedron), allowing one to quickly set up search problems where one or more components are unknown, but their relations to others are fixed.

One can quickly demonstrate the benefits of this ability by looking at the SS316 to Ti-6Al-4V problem studied by Bobbio, Bocklund, Simsek, et al. [22]. After idealizing and anonymizing the components, it becomes a problem where one tries to combine composition A with G, which cannot be combined directly in almost any quantity, and also knows that (1) the system ABC is highly feasible across it, but (2) C cannot be combined directly with G in any quantity, and (3) a complete path from pure B to G is not possible.
In this case, a simple problem setup is to look at several BC? and BG? pairs, forming parallel pathways from ABC to G. This is depicted in Figure 10 for 3 candidates D, E, F, forming 6 ternary spaces to consider, but nothing prevents the method from being extended to an arbitrary number of candidates while still retaining its linear complexity.

Figure 10: Graph Complex Example #2 depicting a problem where 3 choices (D/E/F) can be made to traverse from ABC to G through dual ternary systems containing B. Vertices were spread in 3D to depict three possible ABC to G paths, which would exactly overlap in a plane.

In the above examples in Figures 9 and 10, all connections between compositional spaces were directional; however, that is not necessary, and in some problems it may be beneficial to allow bidirectional movement. Suppose one tries to combine composition A with D, which cannot be combined directly in any quantity, and also knows that (1) the system ABC is highly feasible across it, but (2) the system BCD is not traversable on its own. Thus, E can be introduced to set up the intermediate spaces BDE and CDE, allowing obstacles in BCD to be avoided. Furthermore, BCE can also be set up as an alternative, possibly shortening the total path. Figure 11 depicts such a problem setup.

Figure 11: Graph Complex Example #3 depicting the possibility of competing paths, including cycles.

Notably, while the above example in Figure 11 depicts a single 5th component E to help visualize cycling between spaces, these concepts can be extended to many possible intermediate components. At the same time, the maximum dimensionality of the individual compositional spaces is kept constant (d = 3). Thus, it provides a powerful method to keep the problem solvable, even experimentally, while considering many possible pathways formally defined prior to path planning to fit within the feasibility of evaluation and manufacturing.

4.5 Discussion of Exploration

Critically, creating such a homogeneous problem structure through graph representation allows one to deploy the same exploration strategies across many dimensionalities and even combinations of individual spaces shown in Section 4.4. Furthermore, in the described graphs, points are on an equidistant grid; thus, it is easy to set up a heuristic function that can be both consistent and admissible. This, in turn, enables one to harvest many general-purpose graph traversal algorithms, which are actively researched and available through high-performance libraries. For instance, to navigate against constraints, the A* algorithm [47] can be used with such a heuristic and is mathematically guaranteed to find the shortest feasible compositional path while exploring the least number of nodes [48], which can be critical if the shortest path is necessary while each evaluation is relatively expensive. Then, if one tries to find a feasible solution first and then improve on it, modifications of the A* algorithm such as RWA* [49] can be used to first make it more greedy and then gradually move towards A* to obtain the optimal solution if sufficient resources are available. Alternatively, for highly complex problems where exploration needs to proceed quickly towards the goal but optimality guarantees are not needed, one can use a search algorithm based on Monte Carlo tree search (MCTS), which has famously been used in conjunction with an ML model to master the game of Go [50].
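As an illustration of how such traversal plugs into the graphs above, the following Python sketch (assuming the nodes/neighbors structure from the simplex_graph sketch in Section 4.3 and a user-supplied feasibility predicate) runs A* with a heuristic equal to half the L1 distance between compositions, which is admissible and consistent here because each edge moves exactly one unit between two components:

import heapq

def a_star(start, goal, nodes, neighbors, feasible):
    def h(i):  # admissible and consistent on this equidistant grid
        return sum(abs(a - b) for a, b in zip(nodes[i], nodes[goal])) // 2

    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    parent = {start: None}
    while frontier:
        _, g, u = heapq.heappop(frontier)
        if u == goal:  # reconstruct the shortest feasible path
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in neighbors[u]:
            if not feasible(nodes[v]):
                continue  # skip infeasible compositions (cf. Figure 8)
            if g + 1 < best_g.get(v, float("inf")):
                best_g[v] = g + 1
                parent[v] = u
                heapq.heappush(frontier, (g + 1 + h(v), g + 1, v))
    return None  # no feasible path exists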
5 Summary and Conclusion

This work starts by providing an abstract description of compositional spaces applicable to a wide range of disciplines while formalizing several vital concepts. Then, Section 1.2 discusses complex compositional spaces, using Compositionally Complex Materials (CCMs) as a real-world application, and considers the challenges of exploring such spaces using different methods. Section 1.3 uses another real-world application, Functionally Graded Materials (FGMs), to expand on that by discussing compositional spaces formed from compositions in other spaces and when these spaces are preferred for design. It also discusses key concepts related to path planning in relation to types of constraints and property optimizations. Last in the Introduction, Section 1.4 discusses some equations critical to investigating the combinatorial complexities in these problems.

Next, discussions and implementations are given for several methods for efficiently solving compositional problems: random sampling in Section 2, grid-based methods in Section 3, and graph-based methods, including graphs combining multiple compositional spaces, in Section 4. The three most critical contributions introduced in this process are:

1. A novel algorithm for the rapid procedural generation of N-dimensional graph representations of compositional spaces, where uniformly distributed simplex grid points in d dimensions are completely connected to up to d(d-1) neighbors representing all possible component-pair changes. For instance, in economics, this could represent all possible compositions of a financial portfolio of 12 assets and, for each one of them, all 132 transfer choices that can be made to modify it. Critically, this method scales linearly with the number of points and generates graphs with billions of connections between millions of points in just seconds. Furthermore, this algorithm allows deterministic memory allocation during the graph construction, where arrays of pointers to neighboring compositions represent allowed transitions, resulting in a very high-performance data structure.

2. The new free, open-source software (FOSS) package nimplex (nimplex.phaseslab.org), which gives high-performance implementations of both essential existing methods and all new methods introduced within this work, including the simplex graphs.

3. The novel concept of combining many compositional spaces using graph representations to create homogeneous problem spaces, both simplifying the path planning and allowing for efficient incorporation of constraints and assumptions about problem spaces, as demonstrated in Section 4.4.

In addition to the above, three other new contributions are given in this work:

1. Sections 2 and 3 discuss random sampling and grid construction in simplex spaces in the context of the composition of chemical spaces. In the process, several theoretical results critical to the problem, which have not been discussed previously in this context, are presented. For instance, the approach commonly found in software of randomly sampling a (d-1) hypercube and rejecting compositions > 100% to sample a d-component space has a rejection rate exhibiting factorial growth and can severely impact performance when deploying ML models.

2. In Section 3.2, a new algorithm was developed to efficiently create internal (subspace-exclusive) grids in simplex spaces, based on an algorithm from the literature (modified-NEXCOM [37]).
It is beneficial to performance in cases of, for instance, sampling only d-component materials in a d-component chemical space without considering lower-order points.

3. In a few areas, Section 1.3 leverages its general character to go beyond the usual FGM literature introduction. For instance, it contrasts elemental spaces with attainable design spaces and discusses the use of similar compositions (alloy grades) in the design process to reduce cost and greenhouse emissions without making prior assumptions.

Code Availability

The nimplex software described in this work has been published as free open-source software (FOSS) under the MIT license. It can be effortlessly used as a native Nim library, a native Python library, or a Command Line Interface (CLI) tool interfacing with nearly any language through binary data or plain text. All distributions of the source contain (1) the core library, (2) additional utilities, (3) testing procedures, (4) use examples, (5) a quick-start guide using Python/CLI in the form of a Jupyter notebook, (6) a devcontainer.json specification, and (7) documentation. They are available through:

• The documentation page at nimplex.phaseslab.org, which contains (1) installation instructions, (2) usage instructions in Python, Nim, and CLI, and (3) an Application Programming Interface (API) reference. It also links to a public GitHub repository (github.com/amkrajewski/nimplex) hosting the latest code at the time of writing.
• (Selected Major Versions) A public repository archive on Zenodo under DOI: 10.5281/zenodo.10611931.

Contributions

Adam M. Krajewski: Conceptualization, Methodology, Software, Writing - Original Draft, Validation, Visualization. Alison Beese: Funding acquisition, Writing - Review & Editing. Wesley F. Reinhart: Funding acquisition, Writing - Review & Editing. Zi-Kui Liu: Funding acquisition, Supervision, Writing - Review & Editing, Resources.

Acknowledgments

This work has been funded through grants: NSF-POSE FAIN-2229690, ONR N00014-23-2721, and DOE ARPA-E DE-AR0001435. Adam M. Krajewski would like to thank Gonville & Caius College at the University of Cambridge and Dr. Gareth Conduit for generously hosting him as a visiting postgraduate student during the writing of this publication, and Peter and Carol Thrower for sponsoring the fellowship. We would also like to thank Luke Myers and Ricardo Amaral for testing code exercises and proofreading the documentation."
},
{
"url": "http://arxiv.org/abs/2402.14315v2",
"title": "Structure-Based Drug Design via 3D Molecular Generative Pre-training and Sampling",
"abstract": "Structure-based drug design aims at generating high affinity ligands with\nprior knowledge of 3D target structures. Existing methods either use\nconditional generative model to learn the distribution of 3D ligands given\ntarget binding sites, or iteratively modify molecules to optimize a\nstructure-based activity estimator. The former is highly constrained by data\nquantity and quality, which leaves optimization-based approaches more promising\nin practical scenario. However, existing optimization-based approaches choose\nto edit molecules in 2D space, and use molecular docking to estimate the\nactivity using docking predicted 3D target-ligand complexes. The misalignment\nbetween the action space and the objective hinders the performance of these\nmodels, especially for those employ deep learning for acceleration. In this\nwork, we propose MolEdit3D to combine 3D molecular generation with optimization\nframeworks. We develop a novel 3D graph editing model to generate molecules\nusing fragments, and pre-train this model on abundant 3D ligands for learning\ntarget-independent properties. Then we employ a target-guided self-learning\nstrategy to improve target-related properties using self-sampled molecules.\nMolEdit3D achieves state-of-the-art performance on majority of the evaluation\nmetrics, and demonstrate strong capability of capturing both target-dependent\nand -independent properties.",
"authors": "Yuwei Yang, Siqi Ouyang, Xueyu Hu, Mingyue Zheng, Hao Zhou, Lei Li",
"published": "2024-02-22",
"updated": "2024-03-15",
"primary_cat": "q-bio.BM",
"cats": [
"q-bio.BM",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "Knowledge AND Graph",
"gt": "Structure-Based Drug Design via 3D Molecular Generative Pre-training and Sampling",
"main_content": "Introduction Drug molecules exhibit their activities by forming tightly binding 3D complex with disease-related targets. Rooted in this concept, structure-based drug discovery (SBDD) aims to design ligands (drug candidates) using the prior knowledge of the 3D target structure. [Batool et al., 2019] Ideally, identified ligand molecules should be 1) novel to existing database 2) satisfying target-independent properties, such as being easy-to-synthesize, drug-like, and energetically stable 3) satisfying targetdependent properties, particularly, demonstrating high binding affinity to the given target. arXiv:2402.14315v2 [q-bio.BM] 15 Mar 2024 \fIt is challenging to design drugs satisfying the above criteria. Traditional SBDD employs virtual screening to filter molecules from a large database [Bajorath, 2002, Ferreira et al., 2015, Meng et al., 2011], which cannot identify new molecules. Recent work using deep generative models show promise in generating novel ligands. One widely used approach formulates the problem as conditional generation, which use 3D target-ligand complex data to learn the distribution of ligands given a target. These methods face two difficulties: 1) due to the nature of generative model, the generated ligands have similar structures and properties as those in the training data and may not surpass known actives,[Walters and Murcko, 2020] 2) experimental measured 3D target-ligand complexes are scarce, and may not be sufficient to support the learning of both target-dependent and -independent properties.[Liu et al., 2017] Another line of work treats SBDD as an optimization task and employs binding affinity as an objective to reflect the fitness between ligands and targets. Molecular docking is commonly used for structurebased binding affinity estimation.[Meng et al., 2011] It uses global optimization to identify the lowest binding energy (highest binding affinity) and its corresponding binding pose on the system\u2019s energy surface, which is called docking. If a 3D target-ligand complex is given, molecular docking can also directly calculate or conduct local minimization to estimate the binding affinity, which is called scoring or minimization respectively. AutoGrow models [Durrant et al., 2009, Spiegel and Durrant, 2020], use genetic algorithm as the optimization engine and iteratively modifies 2D ligands to achieve better docking score. RGA [Fu et al., 2022] uses reinforcement learning to improve the optimization efficiency of genetic algorithm and reduces its randomness in SBDD task. However, the above-mentioned methods generate 2D molecules and rely on docking, a computationally expensive oracle, to obtain the binding affinity, which greatly impact their efficiency. In addition, when using deep learning to accelerate the optimization procedure for SBDD, the misalignment between the objective and the action space can also influence the model performance. More specifically, the objective, namely docking score, reflects the 3D interactions between the target and the ligand, however, the editing operations are conducted on 2D molecules. To address these challenges in existing SBDD models, we propose a pre-trained 3D graph editing model (MolEdit3D) to combine 3D molecular generation with target-guided optimization. As shown in Figure 1, we designed a novel 3D graph editing model which generates 3D molecules by adding or deleting fragments. 
Using fragments as building blocks can improve the validity of local structures and reduce model complexity compared to atom- and bond-based generation. The model is pre-trained using abundant 3D molecules to capture target-independent properties. We implement the 3D generation model in a Bayesian sampling procedure with simulated annealing to optimize desired target-dependent properties. In addition, we adopt self-learning to fine-tune the model using self-generated samples for the improvement of sampling efficiency and further enhancement of target-awareness. Our contributions are as follows:

• We develop a 3D molecular graph editing model to generate 3D molecules by modifying fragments.
• We develop techniques to pre-train the 3D graph editing model with 3D ligands and further fine-tune it using self-generated samples with target guidance.
• Our experimental results demonstrate that MolEdit3D generates molecules with higher binding affinities compared to the best prior method (Vina score: -10.16 versus -9.77), and improves the success rate by an absolute 13.8% over the previous best. In addition, the generated molecules maintain proper target-independent properties, including energetic stability and drug-like ring compositions.

2 Related Work

In this section, we review previous work focusing on conditional generation and optimization-based approaches for SBDD. Additional related work can be found in the Appendix.

Conditional 3D Drug Design. Generating 3D molecules is a relatively new research area in AI-powered drug discovery due to its intrinsic difficulty. Within this field, the conditional 3D drug design task requires the model to generate novel 3D molecules that are geometrically constrained by the target binding site and also satisfy multiple drug-like requirements. Compared to generating 1D/2D molecules, the additional dimension significantly increases the explorable molecular space and makes this task more challenging. Despite its significance, limited efforts have been made in this field.

Masuda et al. [2020] use variational autoencoders [Kingma and Welling, 2013] to learn the 3D ligand distribution conditioned on the target structures using CrossDocked2020 [Francoeur et al., 2020], a 3D target-ligand complex database. Similarly, Luo et al. [2021] model the probability of atom occurrence within the target binding sites using mask prediction. Liu et al. [2022] encode both protein and ligand, and place new atoms sequentially based on the contextual features while preserving the equivariance property. It should be noted that the above-mentioned methods can only model atoms and rely on a post-processing algorithm to assign bonds between the generated atoms, which can create chemically invalid molecules. To improve validity, Peng et al. [2022] use an equivariant generative network to predict both atoms and bonds conditioned on pocket features. Long et al. [2022] propose to generate molecules conditioned on the pocket shape instead of on the complete target structure. In addition, they use fragments as the base unit to generate ligands and employ a greedy algorithm to connect the generated 3D fragments. Our method also generates 3D molecules. It treats molecules as linked rigid fragments, which improves the chemical validity of local structures. In addition, it predicts the connectivity and the torsion angles between fragments, which lets it learn rational fragment connectivity and energetic stability from the training data.
Optimization-Based Drug Design. Drug design is essentially an optimization task, and previous deep models have used various approaches to optimize the properties, such as Bayesian optimization in latent space [Gómez-Bombarelli et al., 2018, Jin et al., 2018, Winter et al., 2019], reinforcement learning [De Cao and Kipf, 2018, Popova et al., 2018, You et al., 2018, Popova et al., 2019, Shi et al., 2020, Zhou et al., 2019], evolutionary and genetic algorithms [Ahn et al., 2020, Jensen, 2019, Devi et al., 2015, Nigam et al., 2020, A Nicolaou et al., 2012], and sampling-based approaches [Fu et al., 2021, Xie et al., 2021]. The above-mentioned methods are developed for optimization in the ligand-based drug design scenario, which neglects the 3D interactions between the ligands and the targets. Spiegel and Durrant [2020] and Fu et al. [2022] adopt genetic algorithms for SBDD by incorporating a molecular docking objective. Although the objective reflects 3D interactions, their action space is still defined on 2D graphs.

Molecular editing is an important component in optimization-based drug design. For instance, Zhou et al. [2019] edit 2D molecules by adding or deleting atoms and bonds, Xie et al. [2021] and Fu et al. [2021] edit 2D molecules by adding, deleting, or replacing fragments, and Nigam et al. [2020] edit 1D SELFIES strings [Krenn et al., 2020] by inserting or replacing single atoms or phenyl rings. In this work, we further develop editing-based molecular generation for 3D drug design. More specifically, we propose to add and delete rigid fragments in 3D space and use torsion angles to describe the relative spatial location of the newly edited region.

3 The Proposed Method

The SBDD task is formulated as follows: given a 3D target structure, generate diverse molecules satisfying the desired properties. In this section, we describe the procedure of MolEdit3D for solving the SBDD task. As shown in Figure 1, MolEdit3D contains three components: a 3D graph editing model, generative pre-training, and target-guided self-learning. 3D molecules are generated by adding or deleting 3D fragments with the graph editing model, which is first pre-trained with abundant 3D ligand molecules to capture target-independent properties, such as drug-likeness, synthesizability, and stability, and then further fine-tuned using self-generated molecules with improved target-related properties to enhance its capability of generating high binding affinity ligands for a specific target.

3.1 3D Graph Editing Model

The 3D graph editing model is developed to generate 3D molecules within the target binding site. In molecules, single-bond rotation is the major cause of 3D conformation changes; therefore, we represent molecules as a group of linked rigid 3D fragments. We build a rigid fragment library by breaking non-terminal single bonds. The broken bonds are labeled as editable sites for molecular editing, and a hydrogen atom is added to maintain the original valency. More details of the fragment library can be found in the Appendix. MolEdit3D places an initial seed molecule within a target binding site and builds 3D molecules by adding or deleting rigid fragments iteratively, as shown in Figure 1 Left.
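For concreteness, the action space implied by this design can be sketched with two small Python records (an illustration of the data each edit carries, not the paper's code; field names are hypothetical):

from dataclasses import dataclass

@dataclass
class AddFragment:
    # Attach a library fragment to the skeleton molecule.
    skeleton_site: int   # addable site (hydrogen-capped broken bond) on the skeleton
    fragment_id: int     # index into the rigid-fragment library
    fragment_site: int   # addable site on the chosen fragment
    torsion_bin: int     # torsion angle class: one of 0, 10, ..., 350 degrees

@dataclass
class DeleteFragment:
    # Break a deletable bond and re-cap it with hydrogen.
    skeleton_site: int   # deletable site (bond to a non-hydrogen neighbor)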
In order to achieve the adding and deleting operations separately, the editable sites are categorized as addable sites, which correspond to the editable sites with hydrogen atoms attached, and deletable sites, which correspond to those with non-hydrogen atoms attached.

Figure 1: Model Overview. MolEdit3D contains three components. The 3D graph editing model predicts the geometric edits, which either add or delete a rigid fragment from the skeleton molecule. For the add operation, the skeleton molecule is linked with a rigid fragment using the predicted attaching sites and torsion angle (defined by four consecutive atoms, w, v, v', and w'). For the delete operation, the predicted bond is broken. With the editing model, we use generative pre-training to reconstruct 3D ligands for learning target-independent properties. The model is further fine-tuned using a target-guided self-learning strategy, which uses self-generated molecules with improved target-related properties to enhance target-awareness.

In adding operations, the model first selects an addable site in the skeleton molecule (the molecule to edit). Then it chooses a 3D fragment from the fragment library, and an addable site in the fragment to attach to the skeleton molecule. Lastly, it decides the torsion angle between the two connected components. In deleting operations, the model chooses a deletable site in the skeleton molecule to break. A hydrogen atom is added to the broken bond to maintain the correct valency.

Parameterization of Editing Operations. To achieve 3D molecular editing, we represent a 3D molecule using hierarchical graphs, which have an atom layer and a fragment layer. A network $\phi$ is used to parameterize the hierarchical graphs. Let $x$ be the skeleton molecule; we select an edge at the fragment level to edit as follows:

$$s^{\text{atom-node}}, s^{\text{frag-node}}, s^{\text{frag-edge}} = \phi_1(x) \quad (1)$$
$$\text{score}^{\text{skel}}_{j,k} = \text{MLP}_1(s^{\text{frag-edge}}_{j,k}) \in \mathbb{R} \quad (2)$$
$$p_{\text{add}}(r \mid x^{\text{skel}}) = \text{softmax}(\{\text{score}^{\text{skel}}_{j,k}\}_{(j,k) \in E_a}) \quad (3)$$
$$p_{\text{delete}}(r \mid x^{\text{skel}}) = \text{softmax}(\{\text{score}^{\text{skel}}_{j,k}\}_{(j,k) \in E_d}) \quad (4)$$

where $s^{\text{atom-node}}$ is the skeleton molecule's atom-layer node hidden representations, and $s^{\text{frag-node}}$ and $s^{\text{frag-edge}}$ are the fragment-layer node and edge hidden representations, respectively. $(j, k)$ is a directed edge pointing from node $j$ to $k$. $E_a$ and $E_d$ are disjoint sets of editable edges in $x$ for addition and deletion, respectively. MLP stands for multi-layer perceptron.

We sample a directed edge $r = (u^{\text{skel}}, v^{\text{skel}})$ from $\frac{1}{2}p_{\text{add}} + \frac{1}{2}p_{\text{delete}}$ as the predicted site to edit. If the selected edge $r = (u^{\text{skel}}, v^{\text{skel}})$ is deletable, we replace the fragment on the $u^{\text{skel}}$ side with hydrogen. Otherwise, $r$ is addable, which means $u^{\text{skel}}$ is a hydrogen. We remove $u^{\text{skel}}$ and choose a fragment $f$ from the fragment library $H$ to add to $v^{\text{skel}}$:

$$\text{score}^{f} = \text{MLP}_2(s^{\text{frag-edge}}_{r})_f \in \mathbb{R} \quad (5)$$
$$p_{\text{frag}}(f \mid x, r) = \text{softmax}(\{\text{score}^{f}\}_{f \in H}) \quad (6)$$

We sample a fragment $f$ from $p_{\text{frag}}$. Then we need to select an edge $a$ in the fragment $f$ to attach to edge $r$ in the skeleton molecule $x$.
The edge to attach in the fragment is determined jointly by $x$ and $f$:

$$o^{\text{atom-node}}, o^{\text{frag-node}}, o^{\text{frag-edge}} = \phi_2(f) \quad (7)$$
$$\bar{s}^{\text{frag-node}} = \text{MEANPOOL}(s^{\text{frag-node}}) \quad (8)$$
$$\text{score}^{\text{frag}}_{j,k} = \text{MLP}_3(\text{CONCAT}(\bar{s}^{\text{frag-node}}, o^{\text{frag-edge}}_{j,k})) \in \mathbb{R} \quad (9)$$
$$p_{\text{attach}}(a \mid x, r, f) = \text{softmax}(\{\text{score}^{\text{frag}}_{j,k}\}_{(j,k) \in E^{\text{frag}}_a}) \quad (10)$$

where $E^{\text{frag}}_a$ is the set of addable bonds in fragment $f$. Then we sample an edge $a = (u^{\text{frag}}, v^{\text{frag}})$ to attach. Finally, we gather the atom-layer features of the four atoms $w^{\text{skel}}, v^{\text{skel}}, v^{\text{frag}}, w^{\text{frag}}$ around the new bond ($w^{\text{skel}}$ is the neighboring atom of $v^{\text{skel}}$ in the skeleton and $w^{\text{frag}}$ is that of $v^{\text{frag}}$ in the added fragment, as shown in Figure 1 Add Fragment) and determine the torsion angle as a classification task which separates angles into discrete bins $\Gamma = \{0, 10, 20, \cdots, 350\}$:

$$h^{\text{angle}} = \text{CONCAT}(s^{\text{atom-node}}_{w^{\text{skel}}}, s^{\text{atom-node}}_{v^{\text{skel}}}, o^{\text{atom-node}}_{w^{\text{frag}}}, o^{\text{atom-node}}_{v^{\text{frag}}}) \quad (11)$$
$$\text{score}^{\text{angle}}_{\gamma} = \text{MLP}_4(h^{\text{angle}})_{\gamma} \in \mathbb{R} \quad (12)$$
$$p_{\text{angle}}(\gamma \mid x, r, f, a) = \text{softmax}(\{\text{score}^{\text{angle}}_{\gamma}\}_{\gamma \in \Gamma}) \quad (13)$$

and sample an angle $\gamma$ from $p_{\text{angle}}$. We then connect the skeleton molecule and the fragment together by this torsion angle $\gamma$.

Parameterization of Hierarchical Graphs. We develop hierarchical message passing neural networks (HMPNNs) to parameterize the molecular graphs. HMPNNs contain two MPNNs, serving as the atom layer and the fragment layer. An input molecule $m$ is represented as a graph $g = (A, q^{\text{atom-node}}, q^{\text{atom-edge}})$, with $A$ as the adjacency matrix and $q^{\text{atom-node}}$ and $q^{\text{atom-edge}}$ as the feature vectors of atoms and bonds. We pass the graph through the first MPNN to obtain atom-level representations:

$$h^{\text{atom-node}}_u = \text{MPNN}_1(g)_u \in \mathbb{R}^d \quad (14)$$

where $h^{\text{atom-node}}_u$ is the atom-level hidden representation of node $u$. Note that atoms separated by rotatable single bonds are considered as belonging to different fragments. We then regard each fragment as a single node, which induces a fragment-level adjacency matrix $A'$, and obtain the fragment embedding $z^{\text{frag-node}}$ by aggregating the features of the atoms that belong to it using mean pooling. We preserve those edges between fragments, with feature vectors $z^{\text{frag-edge}}$ initialized from the fragment embeddings:

$$z^{\text{frag-node}}_i = \text{MEANPOOL}_{u \in V_i}(h^{\text{atom-node}}_u) \in \mathbb{R}^d \quad (15)$$
$$z^{\text{frag-edge}}_{j,k} = W_1 \cdot \text{CONCAT}(z^{\text{frag-node}}_j, z^{\text{frag-node}}_k) + k_1 \in \mathbb{R}^d \quad (16)$$

where $V_i$ is the set of atoms in fragment $i$, and $j$ and $k$ are adjacent fragments. The new graph $g' = (A', z^{\text{frag-node}}, z^{\text{frag-edge}})$ is then passed to another MPNN to obtain fragment-level representations:

$$h^{\text{frag-node}}_i = \text{MPNN}_2(g')_i \in \mathbb{R}^d \quad (17)$$
$$h^{\text{frag-edge}}_{j,k} = W_2 \cdot \text{CONCAT}(h^{\text{frag-node}}_j, h^{\text{frag-node}}_k) + k_2 \in \mathbb{R}^d \quad (18)$$

where $h^{\text{frag-node}}_i$ is the hidden representation of fragment $i$ and $h^{\text{frag-edge}}_{j,k}$ is the hidden representation of the edge between fragments $j$ and $k$.

3.2 Generative Pre-training for the 3D Graph Editing Model

Due to the scarcity of target-ligand complexes, we propose to pre-train a generative model with 3D molecules to extract information about target-independent properties. We define a pre-training objective for the 3D graph editing model using 3D molecular reconstruction. Notice that in this step, our pre-trained model does not contain information about a specific target; therefore, it will be able to generate valid molecules, but these may not bind tightly with a target. Given a drug-like 3D molecule $x$, we represent it as a fragment graph $g^{\text{frag}}$ with nodes as fragments defined in our fragment library and edges as bonds between fragments.
The pre-training data for each molecule is then generated in an iterative manner. As shown in Figure 1 Top Right, at each step we randomly break an edge connecting a leaf node $f$ and the rest of the graph $g^{\text{frag}}_{\neg f}$, i.e., one of the deletable edges mentioned in Section 3.1. The remaining graph $g^{\text{frag}}_{\neg f}$ forms a new molecule $x_{\neg f}$, and we add $(x_{\neg f}, x)$ to the pre-training data. Then we repeat this operation on $x_{\neg f}$ until there is only a single fragment left. Given a set of pre-training data $D_p = \{(x_{\neg f}, x)\}$, we train the 3D molecular editing model to maximize the likelihood of predicting the operation of adding $f$ on the corresponding edge of molecule $x_{\neg f}$, i.e., the reverse operation of deleting $f$ from the original molecule $x$:

$$\arg\max_{\theta} \frac{1}{|D_p|} \sum_{(x_{\neg f}, x) \in D_p} \log p_{\theta}(x \mid x_{\neg f}) \quad (19)$$

where $p_{\theta}$ is the model with parameters $\theta$. This pre-training stage enables the model to capture the relation between 3D molecular structure and general drug-like properties.

3.3 Target-Guided Bayesian Sampling with Self-Learning

In this stage, we use the pre-trained 3D graph editing model to generate molecules for a given target. The overall idea is to use the 3D graph editing model in a Bayesian sampling framework (e.g., Markov chain Monte Carlo sampling). However, one issue is that the pre-trained 3D graph editing model does not contain target-specific information; therefore, it might not be able to generate molecules tailored for the target. To fix this issue, we use the samples generated during the procedure to further fine-tune the 3D graph editing model.

Given a pre-trained 3D molecular editing model and a target protein, we start with an initial molecule $x_0$ (e.g., methane, CH4) and employ multi-chain annealed Bayesian sampling [Kirkpatrick et al., 1983] with a target-guided objective function to sample desired candidate ligands. For the $i$-th chain at step $t$, the model proposes an editing operation (adding or deleting a fragment) to modify $x^i_t$ to $x'$. The proposed $x'$ can either be accepted, $x^i_{t+1} = x'$, or rejected, $x^i_{t+1} = x^i_t$, as determined by an acceptance probability $A(x', x^i_t) = \min\left(1, \exp\left(\frac{J(x') - J(x^i_t)}{T}\right)\right)$, where $J$ is the target-guided objective function and $T$ is the annealing temperature controlling how greedy the process is. Here we use a linear combination of three scores as the objective function:

$$J(x) = \text{VINA}_{\min}(x) + \alpha \log \text{QED}(x) + \beta \log \text{SASCORE}(x) \quad (20)$$

where $\text{VINA}_{\min}$ provides a target-aware score that measures the binding affinity between the target protein and the 3D ligand structure, while QED [Bickerton et al., 2012] and SASCORE [Ertl and Schuffenhauer, 2009] are two target-independent scores that measure the drug-likeness and synthetic accessibility of the candidate ligand, respectively. Given a 3D target-ligand complex, we use AutoDock Vina [Eberhardt et al., 2021] to conduct a quick local minimization and calculate the binding affinity as $\text{VINA}_{\min}$, which is much faster than the docking procedure used in AutoGrow and RGA. We discuss more details of Vina in the Appendix.

During sampling, a dataset $D_t$ for target-guided self-training is collected on-the-fly. Denote $x$ and $x'$ to be the molecule before and after an edit, respectively. If the objective score of $x'$ is higher than that of $x$, the molecule pair $(x, x')$ is added to the dataset $D_t$.
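The acceptance step of this annealed sampling can be sketched in a few lines of Python (illustrative; the scores are assumed to come from the objective function $J$ in Eq. 20):

import math
import random

def accept(score_new: float, score_old: float, temperature: float) -> bool:
    # Metropolis-style rule: always accept improvements in J; accept worse
    # proposals with probability exp((J(x') - J(x)) / T), which shrinks as
    # the annealing temperature T is lowered.
    if score_new >= score_old:
        return True
    return random.random() < math.exp((score_new - score_old) / temperature)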
We train our model simultaneously with the sampling, using weighted maximum likelihood estimation (WMLE) as follows:

$$\arg\max_{\theta} \frac{1}{|D_t|} \sum_{(x, x') \in D_t} \lambda(x', x) \log p_{\theta}(x' \mid x) \quad (21)$$

where $p_{\theta}$ is the model and $\lambda(x', x)$ is a monotonic function indicating the score difference between $x'$ and $x$. Here we choose $\lambda(x', x) = \min\{J(x') - J(x), 5\}$. WMLE injects more target information into the training signal than direct MLE.

4 Results and Discussions

4.1 Experiments

Model Details. We use ChEMBL [Gaulton et al., 2017], a database of bioactive molecules, for rigid fragment library construction and pre-training. The HMPNN model contains 6 atomic layers and 3 fragment layers. The atomic node features include atomic number, element type, charge, and 3D coordinates, and the atomic edge features include bond type. The hidden-layer node embedding has a size of 64. We experiment with two versions of MolEdit3D, differing in the number of chains employed in Bayesian sampling. The light version, MolEdit3D (L), uses 1000 chains, and the regular version, MolEdit3D, uses 5000 chains. More model details and computational costs can be found in the Appendix.

Table 1: Performance comparison between structure-based drug design methods. 1000 molecules per target are generated by each method, and the average and standard deviation values are reported. Top-1 results are highlighted in bold. MolEdit3D achieves SOTA performance on Validity, Success Rate, High Affinity, and median Vina score, while maintaining adequate Uniqueness and Diversity.

Type | Method | Valid (%, ↑) | Uniq (%, ↑) | Div (↑) | High Aff (%, ↑) | Vina (kcal/mol, ↓) | QED (↑) | SA (↑) | Succ (%, ↑)
Cond. | liGAN | 96.3 ± 1.0 | 99.9 ± 0.1 | 0.889 ± 0.001 | 0.3 ± 0.3 | -5.91 ± 0.43 | 0.41 ± 0.17 | 0.59 ± 0.11 | 1.0 ± 1.1
Cond. | AR | 79.2 ± 19.5 | 44.5 ± 9.1 | 0.838 ± 0.040 | 51.8 ± 23.0 | -9.34 ± 1.47 | 0.54 ± 0.19 | 0.53 ± 0.18 | 16.5 ± 18.6
Cond. | GraphBP | 99.6 ± 0.1 | 100.0 ± 0.0 | 0.924 ± 0.001 | 8.2 ± 8.8 | -6.34 ± 0.97 | 0.41 ± 0.21 | 0.46 ± 0.15 | 1.0 ± 1.0
Cond. | DESERT | 100.0 ± 0.0 | 99.9 ± 0.2 | 0.917 ± 0.007 | 45.2 ± 34.3 | -9.20 ± 1.17 | 0.64 ± 0.17 | 0.65 ± 0.13 | 47.3 ± 18.8
Cond. | Pocket2Mol* | 100.0 ± 0.0 | 100.0 ± 0.0 | 0.902 ± 0.006 | 61.1 ± 23.1 | -9.77 ± 1.42 | 0.64 ± 0.14 | 0.74 ± 0.11 | 68.2 ± 22.2
Opt. | MARS | 99.8 ± 0.0 | 99.5 ± 0.3 | 0.915 ± 0.003 | 13.9 ± 20.0 | -7.63 ± 0.91 | 0.42 ± 0.23 | 0.75 ± 0.09 | 21.5 ± 11.1
Opt. | AutoGrow | 100.0 ± 0.0 | 99.7 ± 0.3 | 0.871 ± 0.025 | 11.8 ± 14.4 | -7.92 ± 0.61 | 0.34 ± 0.15 | 0.59 ± 0.07 | 13.2 ± 8.3
Opt. | RGA | 100.0 ± 0.0 | 100.0 ± 0.0 | 0.923 ± 0.004 | 8.8 ± 8.9 | -6.61 ± 0.52 | 0.49 ± 0.18 | 0.71 ± 0.11 | 16.1 ± 7.3
Opt. | MolEdit3D (L) | 100.0 ± 0.0 | 99.0 ± 2.1 | 0.885 ± 0.006 | 66.0 ± 15.7 | -10.00 ± 1.00 | 0.55 ± 0.19 | 0.77 ± 0.09 | 78.7 ± 14.6
Opt. | MolEdit3D | 100.0 ± 0.0 | 99.2 ± 1.3 | 0.880 ± 0.009 | 70.3 ± 14.2 | -10.16 ± 1.00 | 0.55 ± 0.19 | 0.78 ± 0.08 | 82.0 ± 13.1
*Only 100 ligands per target are sampled for Pocket2Mol due to its high computational cost.

Evaluation. Following Masuda et al. [2020], Liu et al. [2022], and Long et al. [2022], we evaluate our method using 10 targets, and all assessed models generate 1000 ligands for each target.
4 Results and Discussions

4.1 Experiments

Model Details. We use ChEMBL [Gaulton et al., 2017], a database of bioactive molecules, for rigid fragment library construction and pre-training. The HMPNN model contains 6 atomic layers and 3 fragment layers. The atomic node features include atomic number, element type, charge, and 3D coordinates, and the atomic edge features include bond type. The hidden-layer node embeddings have a size of 64. We experiment with two versions of MolEdit3D, differing in the number of chains employed in Bayesian sampling: the light version, MolEdit3D (L), uses 1000 chains, and the regular version, MolEdit3D, uses 5000 chains. More model details and computational costs can be found in the Appendix.

Table 1: Performance comparison between structure-based drug design methods. 1000 molecules per target are generated by each method, and averages and standard deviations are reported (top-1 results are highlighted in bold in the original table). MolEdit3D achieves SOTA performance on Validity, Success Rate, High Affinity, and median Vina score, while maintaining adequate Uniqueness and Diversity.

| Type | Method | Valid (↑, %) | Uniq (↑, %) | Div (↑) | High Aff (↑, %) | Vina (↓, kcal/mol) | QED (↑) | SA (↑) | Succ (↑, %) |
|---|---|---|---|---|---|---|---|---|---|
| Cond. | liGAN | 96.3 ± 1.0 | 99.9 ± 0.1 | 0.889 ± 0.001 | 0.3 ± 0.3 | -5.91 ± 0.43 | 0.41 ± 0.17 | 0.59 ± 0.11 | 1.0 ± 1.1 |
| | AR | 79.2 ± 19.5 | 44.5 ± 9.1 | 0.838 ± 0.040 | 51.8 ± 23.0 | -9.34 ± 1.47 | 0.54 ± 0.19 | 0.53 ± 0.18 | 16.5 ± 18.6 |
| | GraphBP | 99.6 ± 0.1 | 100.0 ± 0.0 | 0.924 ± 0.001 | 8.2 ± 8.8 | -6.34 ± 0.97 | 0.41 ± 0.21 | 0.46 ± 0.15 | 1.0 ± 1.0 |
| | DESERT | 100.0 ± 0.0 | 99.9 ± 0.2 | 0.917 ± 0.007 | 45.2 ± 34.3 | -9.20 ± 1.17 | 0.64 ± 0.17 | 0.65 ± 0.13 | 47.3 ± 18.8 |
| | Pocket2Mol* | 100.0 ± 0.0 | 100.0 ± 0.0 | 0.902 ± 0.006 | 61.1 ± 23.1 | -9.77 ± 1.42 | 0.64 ± 0.14 | 0.74 ± 0.11 | 68.2 ± 22.2 |
| Opt. | MARS | 99.8 ± 0.0 | 99.5 ± 0.3 | 0.915 ± 0.003 | 13.9 ± 20.0 | -7.63 ± 0.91 | 0.42 ± 0.23 | 0.75 ± 0.09 | 21.5 ± 11.1 |
| | AutoGrow | 100.0 ± 0.0 | 99.7 ± 0.3 | 0.871 ± 0.025 | 11.8 ± 14.4 | -7.92 ± 0.61 | 0.34 ± 0.15 | 0.59 ± 0.07 | 13.2 ± 8.3 |
| | RGA | 100.0 ± 0.0 | 100.0 ± 0.0 | 0.923 ± 0.004 | 8.8 ± 8.9 | -6.61 ± 0.52 | 0.49 ± 0.18 | 0.71 ± 0.11 | 16.1 ± 7.3 |
| | MolEdit3D (L) | 100.0 ± 0.0 | 99.0 ± 2.1 | 0.885 ± 0.006 | 66.0 ± 15.7 | -10.00 ± 1.00 | 0.55 ± 0.19 | 0.77 ± 0.09 | 78.7 ± 14.6 |
| | MolEdit3D | 100.0 ± 0.0 | 99.2 ± 1.3 | 0.880 ± 0.009 | 70.3 ± 14.2 | -10.16 ± 1.00 | 0.55 ± 0.19 | 0.78 ± 0.08 | 82.0 ± 13.1 |

*Only 100 ligands per target are sampled for Pocket2Mol due to its high computational cost.

Evaluation. Following Masuda et al. [2020], Liu et al. [2022], and Long et al. [2022], we evaluate our method on 10 targets, and every assessed model generates 1000 ligands for each target. We evaluate them using the following metrics. Validity (Valid) denotes the percentage of molecules that are readable by RDKit and have all atoms in the same connected component. Uniqueness (Uniq) is the percentage of unique molecules among all generated ones. Diversity (Div) measures the internal diversity of the generated molecules. To evaluate the models' capability of generating bioactive molecules, we also use High Affinity (High Aff) and Vina Score (Vina) to quantify binding affinity. High Affinity is defined as the percentage of generated molecules with higher affinity (lower Vina score) than the reference ligand. Unlike the Vina minimization score used in the objective function, Vina Score is computed after a docking procedure for a more accurate binding energy estimation. For target-independent properties, we use QED and SAscore (SA) to quantify drug-likeness and synthesizability. As drug design requires the generated molecules to satisfy multiple requirements simultaneously, we use Success rate (Succ) to evaluate the percentage of generated molecules that pass predefined thresholds for the desired properties. We define a qualified molecule to have QED ≥ 0.25, SAscore ≥ 0.59, and Vina score ≤ -8.18 kcal/mol. The QED and SAscore thresholds are the 10th percentile of approved drugs in DrugCentral [Ursu et al., 2019]; the intuition is to cover the majority of real drugs. The Vina score threshold corresponds to a binding affinity below 1 µM, a widely used value to guarantee moderate bioactivity in medicinal chemistry.

We compare MolEdit3D with two types of SBDD models, namely conditional generation (Cond.) and optimization-based (Opt.) methods. Conditional generation methods learn the 3D ligand distribution conditioned on target binding sites; this category includes liGAN [Masuda et al., 2020], AR [Luo et al., 2021], GraphBP [Liu et al., 2022], DESERT [Long et al., 2022], and Pocket2Mol [Peng et al., 2022]. Optimization-based methods instead use a target-aware objective to guide the optimization process; AutoGrow4 [Spiegel and Durrant, 2020] and RGA [Fu et al., 2022] belong to this category and are designed for the SBDD scenario. Additionally, we convert MARS, an optimization algorithm for ligand-based drug design, to the SBDD setting by substituting its ligand-based affinity predictor with the Vina docking score. A more detailed description of the baselines can be found in the Appendix.

Figure 2: Target binding pose of MolEdit3D generated molecules for 5MKU and 3VRJ proteins. The generated molecules demonstrate high Vina score, QED, and SAscore.

Figure 3: Angular distribution comparison for CCCC (upper panel) and Cccc (lower panel) torsion angles between CrossDocked2020 reference molecules and model-generated molecules. DESERT and MolEdit3D show better overlap with the reference distribution.
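To make the success criterion defined above concrete, here is a minimal sketch that computes the Success rate, assuming molecules arrive as RDKit `Mol` objects and that the (docked) Vina score and the normalized SAscore have already been computed externally:

```python
from rdkit.Chem import QED

def is_qualified(mol, vina_score: float, sa_score: float) -> bool:
    """Thresholds from the paper: QED >= 0.25, normalized SAscore
    >= 0.59 (higher is better here), Vina <= -8.18 kcal/mol."""
    return QED.qed(mol) >= 0.25 and sa_score >= 0.59 and vina_score <= -8.18

def success_rate(mols, vina_scores, sa_scores) -> float:
    hits = sum(is_qualified(m, v, s)
               for m, v, s in zip(mols, vina_scores, sa_scores))
    return 100.0 * hits / max(len(mols), 1)
```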
4.2 Results and Analysis

Main Result. The performances of MolEdit3D and the baseline models are summarized in Table 1; the reported values are averaged over the 10 evaluated targets. Among the baselines, liGAN, AR, and GraphBP generate atom types and positions using a conditional variational autoencoder, an autoregressive model, and a flow model, respectively. Since these models only generate atoms and require a post-processing algorithm to assign bonds, they can produce incomplete molecules (the generated atoms cannot be connected into one molecule) or invalid substructures, as reflected by the Valid metric in Table 1. On the other hand, DESERT and MolEdit3D utilize fragments to build molecules, which guarantees local validity. MolEdit3D predicts the connectivity between fragments to generate complete molecules, and uses pre-training and target-guided self-training to encourage validity. Similarly, DESERT uses pre-training and a greedy approach to link the generated fragments. Pocket2Mol generates bonds together with atoms, which also improves molecular validity.

MolEdit3D achieves the best performance in generating high-binding-affinity molecules, as reflected by High Affinity and Vina Score in Table 1. liGAN, AR, GraphBP, and Pocket2Mol use a supervised approach to learn the distribution of 3D ligands from target-ligand complexes in CrossDocked2020 [Francoeur et al., 2020]; however, these data mix high-affinity and low-affinity molecules, which can hurt model performance. DESERT proposes to use only the shape information of the target to guide drug design; with such incomplete target information, the binding affinities of the generated molecules cannot be guaranteed. AutoGrow, RGA, and MARS edit 2D molecules iteratively to optimize the Vina docking score. Since their actions are defined in 2D space and may not correlate well with Vina score changes, their optimization performance is unsatisfactory and their results are worse than those of some conditional generation models. Our method directly generates 3D molecules and adopts a target-guided self-training approach to accelerate the annealed sampling framework. As an optimization-based method, we demonstrate that defining the action space in 3D is more efficient and effective for the SBDD task.

Additionally, MolEdit3D achieves state-of-the-art performance on SAscore and Success Rate, and the latter simultaneously evaluates binding affinity, drug-likeness, and synthesizability. Unlike binding affinity, drug-likeness and synthesizability are target-independent, and our method can optimize all three metrics well at the same time. In Figure 2, we show the docking poses of MolEdit3D-generated molecules for two targets, 5MKU and 3VRJ. The results indicate that our generated molecules form strong shape complementarity with the target binding site, which is the premise of high binding affinity.

Figure 4: Frequency of different ring sizes for DrugCentral reference molecules and model-generated molecules. MolEdit3D has the best overlap with the reference frequency.

Table 2: The influence of pre-training and target-guided self-training.

| Pretrain | Self-Learning | Uniq (↑, %) | Div (↑) | High Aff (↑, %) | Vina (↓, kcal/mol) | QED (↑) | SA (↑) | Succ (↑, %) |
|---|---|---|---|---|---|---|---|---|
| × | × | 100.0 | 0.895 | 10.7 | -8.83 | 0.68 | 0.67 | 59.5 |
| × | ✓ | 100.0 | 0.895 | 16.3 | -9.07 | 0.68 | 0.68 | 67.0 |
| ✓ | × | 98.7 | 0.883 | 24.1 | -9.35 | 0.70 | 0.83 | 85.6 |
| ✓ | ✓ | 93.5 | 0.871 | 53.6 | -10.30 | 0.64 | 0.84 | 94.1 |
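As a small companion to the validity discussion above, the Valid metric from Section 4.1 can be sketched with RDKit as follows (SMILES input is an assumption; the original evaluation may operate on 3D structures directly):

```python
from rdkit import Chem

def is_valid(smiles: str) -> bool:
    """Valid = parsable by RDKit and all atoms in one connected
    component (a '.'-separated SMILES has multiple fragments)."""
    mol = Chem.MolFromSmiles(smiles)
    return mol is not None and len(Chem.GetMolFrags(mol)) == 1

def validity(smiles_list) -> float:
    return 100.0 * sum(is_valid(s) for s in smiles_list) / max(len(smiles_list), 1)
```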
Additional Target-Independent Properties. We analyze additional target-independent properties of the molecules produced by the 3D molecular generation methods. To examine whether generated molecules have reasonable 2D structures, we compare them with those in the DrugCentral database [Ursu et al., 2019], which are either drugs or pharmaceuticals. We compare the frequency of different ring sizes for molecules generated by the SBDD models against the reference molecules in Figure 4. liGAN, AR, and GraphBP show a strong tendency to generate 3-membered rings, while DESERT and Pocket2Mol tend to generate larger ring systems. Among these models, MolEdit3D shows the best alignment with the reference drug molecules.

In addition to 2D structures, we also evaluate the conformational stability of the generated molecules. Since bond lengths and angles are mostly encoded in the fragment vocabulary for fragment-based molecular generation models such as DESERT and MolEdit3D, we focus on the accuracy of torsion angles to make a fair comparison with atom-based models. We compare the distributions of two representative torsion angles, CCCC and Cccc, in Figure 3. CCCC is a common torsion angle of aliphatic (non-aromatic) carbons, and Cccc is an angle around aromatic carbons. Here, we use the 3D ligands from the CrossDocked2020 database [Francoeur et al., 2020] as reference. Normally, a torsion angle shows high frequency at a few preferred values, which correspond to the energy-stable conformers for the rotatable bond of interest. As shown in Figure 3, DESERT and MolEdit3D achieve the best alignment with the reference distribution, whereas the atom-based approaches, namely liGAN, AR, GraphBP, and Pocket2Mol, are less satisfactory, especially for the CCCC dihedral angle. In MolEdit3D, we explicitly train the model to predict torsion angles between rigid fragments. DESERT also uses fragments as building blocks for molecular generation; although torsion angles are not modeled explicitly in DESERT, its fragments contain rotatable torsion angles, which can benefit the generation of energy-stable conformers.

4.3 Ablation Study

In this section we study the contributions of pre-training and target-guided self-training, using the 5MKU target and the MolEdit3D (L) model as an example; the results are summarized in Table 2. With pre-training, the Success Rate improves from 59.5% to 85.6% in the without-self-learning setting, and from 67.0% to 94.1% in the with-self-learning setting. Pre-training significantly improves SAscore, by 0.16 in both settings, and also boosts the Vina Score. Meanwhile, target-guided self-learning has a major impact on binding affinity to the given target, as reflected by Vina Score and High Affinity: with self-learning, the Vina Score improves in both the with- and without-pretraining settings, and High Affinity increases by about 2-fold and 3-fold in the with- and without-pretraining settings, respectively. It should also be noted that pre-training can slightly decrease the Uniqueness and Diversity of the generated molecules, but the change is relatively small.
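For reference, torsion-angle histograms like those in Figure 3 can be collected with RDKit roughly as sketched below; the SMARTS pattern is an illustrative approximation of the aliphatic CCCC torsion, and molecules are assumed to carry a 3D conformer (e.g., loaded from an SDF file).

```python
from rdkit import Chem
from rdkit.Chem import rdMolTransforms

def torsion_angles(mol, smarts: str = "[CX4]-[CX4]-[CX4]-[CX4]"):
    """Return all dihedral angles (degrees) whose four atoms match the
    SMARTS pattern, measured on the molecule's first 3D conformer."""
    pattern = Chem.MolFromSmarts(smarts)
    conf = mol.GetConformer()
    return [rdMolTransforms.GetDihedralDeg(conf, a, b, c, d)
            for a, b, c, d in mol.GetSubstructMatches(pattern)]
```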
5 Conclusion and Future Work

In this paper, we propose MolEdit3D, which adopts a sampling framework to generate 3D molecules in a target binding site and optimize desired properties. We propose a novel 3D graph editing model and employ generative pre-training and target-guided self-learning to capture target-independent and target-dependent properties, respectively. MolEdit3D achieves SOTA performance on Validity, binding affinity (High Affinity and Vina Score), SAscore, and Success Rate, while maintaining adequate Diversity, Uniqueness, and QED. In addition, MolEdit3D-generated molecules show strong agreement with reference molecules on both 2D and 3D molecular properties. Although MolEdit3D achieves state-of-the-art results on SBDD tasks, there is still room for improvement, both in this method and in the evaluation of SBDD models in general. Practical drug design usually requires the designed molecules to form desired interactions, such as hydrogen bonding with a specific residue, which is a sparser and harder objective for optimization. In addition, current evaluation metrics, such as QED and SAscore, have been questioned regarding their reliability, and better standards should be developed."
},
{
"url": "http://arxiv.org/abs/2402.06861v1",
"title": "UrbanKGent: A Unified Large Language Model Agent Framework for Urban Knowledge Graph Construction",
"abstract": "Urban knowledge graph has recently worked as an emerging building block to\ndistill critical knowledge from multi-sourced urban data for diverse urban\napplication scenarios. Despite its promising benefits, urban knowledge graph\nconstruction (UrbanKGC) still heavily relies on manual effort, hindering its\npotential advancement. This paper presents UrbanKGent, a unified large language\nmodel agent framework, for urban knowledge graph construction. Specifically, we\nfirst construct the knowledgeable instruction set for UrbanKGC tasks (such as\nrelational triplet extraction and knowledge graph completion) via\nheterogeneity-aware and geospatial-infused instruction generation. Moreover, we\npropose a tool-augmented iterative trajectory refinement module to enhance and\nrefine the trajectories distilled from GPT-4. Through hybrid instruction\nfine-tuning with augmented trajectories on Llama-2-13B, we obtain the UrbanKGC\nagent, UrbanKGent-13B. We perform a comprehensive evaluation on two real-world\ndatasets using both human and GPT-4 self-evaluation. The experimental results\ndemonstrate that UrbanKGent-13B not only can significantly outperform 21\nbaselines in UrbanKGC tasks, but also surpass the state-of-the-art LLM, GPT-4,\nby more than 10\\% with approximately 20 times lower cost. We deploy\nUrbanKGent-13B to provide online services, which can construct an UrbanKG with\nthousands of times richer relationships using only one-fifth of the data\ncompared with the existing benchmark. Our data, code, and opensource UrbanKGC\nagent are available at https://github.com/usail-hkust/UrbanKGent.",
"authors": "Yansong Ning, Hao Liu",
"published": "2024-02-10",
"updated": "2024-02-10",
"primary_cat": "cs.AI",
"cats": [
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "Knowledge AND Graph",
"gt": "UrbanKGent: A Unified Large Language Model Agent Framework for Urban Knowledge Graph Construction",
"main_content": "INTRODUCTION Urban Knowledge Graph (UrbanKG) aims to model intricate relationships and semantics within a city by extracting and organizing urban entities (e.g., POIs, road networks, etc.) into a multi-relational heterogeneous graph. As an emerging building block, multi-sourced urban data are widely used to construct an UrbanKG to provide critical knowledge for various knowledge-enhanced urban downstream tasks, such as traffic management, pollution monitoring, and emergency response [5, 7, 35]. UrbanKG has gradually become an essential tool of the modern smart city. In prior literature, many efforts have been devoted to urban knowledge graph construction (UrbanKGC) using massive urban data sources. In particular, one commonly used approach [27, 33, 40] is to extract entities from structured urban data (e.g., geographic data, city sensor data, and traffic data) and define the relationships between obtained urban entities based on manually designed rules or patterns. However, these approaches suffer heavy reliance on a deep understanding of the application domain and are laborintensive. Recently, inspired by the success of the Large Language Models (LLMs) in knowledge graph construction [44, 47, 56], the LLMs have been adopted to improve UrbanKGC. For instance, GeoLM [41] is proposed for spatial grounded entity recognition and arXiv:2402.06861v1 [cs.AI] 10 Feb 2024 \fConference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Yansong Ning and Hao Liu Columbia University (CU) is the oldest institution of higher education in New York, established in 1754 on the grounds of Trinity Church in Manhattan. Relational Triplet Extraction Given two urban entities: Columbia University, lat, lng; Empire State Building, lat, lng; Please complete the geospatial relationship between them. Sorry, It\u2019s hard to decide based on these information. Given the urban text, please extract the urban relational triplet from it. Return the results with <head entity, relation, tail entity> format. (b) Lack of Geospatial Computing Ability You can invoke geospatial tools (e.g., distance calculation, geo-hashing encoding, \u2026) to help determine the relationship. The distance between the \u201cEmpire State Building\u201d and the \u201cColumbia University is 6.85km. Therefore, they are sperate and do not share the same boundary. So, the geospatial relationship between these two entities are disconnected. (a) Lack of Heterogeneous relationship understanding ability <CU, established-in, 1754> <CU, Locate-in, Trinity Church> <CU, Locate-in, New York City> Given the urban text, please extract urban triplet from it. Spatial relation specifies how some object is located in space in relation to some reference object. Return the results with <head entity, relation, tail entity> format. <CU, established-in, 1754 > <CU, Locate-in, Manhattan> <CU, Locate-in, New York City > Urban Knowledge Graph Completion Geospatially disconnected Figure 2: Illustrative example of urban relational triplet extraction and knowledge graph completion. (a) The heterogeneous relationship understanding limitation of LLMs can be addressed by injecting prior urban knowledge into instruction. (b) The geospatial computing limitation of LLMs can be alleviated by invoking external geospatial tools. relation extraction. K2 [10] retrains the Llama-2-7B model on a huge annotated geoscience text corpus for geospatial relation extraction. 
Nevertheless, these works rely on annotated corpora and model retraining, which may discourage researchers from adopting them for their own work and thus limits the application of UrbanKG. The LLM agent [11, 12] has recently emerged and shown remarkable zero-shot capability for autonomous domain-specific task completion. For example, Voyager [39] is an LLM-powered agent for zero-shot game exploration without re-training, and LLMLight [16] is a traffic signal control agent with zero-shot LLM reasoning ability. These studies motivate us to construct tailored LLM agents to address the aforementioned limitations in UrbanKG construction. In fact, constructing an LLM agent compatible with various UrbanKGC tasks is a non-trivial problem due to the following two challenges:

(1) Challenge 1: How to adapt LLMs for UrbanKGC? LLMs may not align well with the specific task due to the gap [15] between the natural language processing corpus used to train LLMs and the domain-specific corpus in the urban domain [24]. For example, urban text data is usually heterogeneous and contains multifaceted urban knowledge (e.g., spatial, temporal, and functional aspects) [10]. As shown in Figure 2(a), the text description of "Columbia University" reflects its geographic location (i.e., a spatial relationship), construction timeline (i.e., a temporal relationship), and how it provides educational services for the city (i.e., a functional relationship). LLMs may require tailored alignment to understand heterogeneous urban relationships in order to extract these urban spatial, temporal, and functional relations accurately.

(2) Challenge 2: How to improve the capacity of LLMs for UrbanKGC? The effectiveness of LLMs for urban knowledge graph construction is restricted by their feeble numerical computation capacity [13, 53], leading to their inability to extract complex geospatial relationships [2, 32]. However, urban geospatial relationships play a vital role in urban semantic modeling [23] and have been widely incorporated in previous UrbanKGs [26, 33]. As can be seen in Figure 2, extracting the "disconnected" relation between the geo-entities "Columbia University" and "Empire State Building" is useful for urban geo-semantic modeling. Accurately extracting such geospatial relationships demands geospatial computing (e.g., utilizing latitude and longitude for distance calculation) and reasoning (i.e., deriving geospatial relations from calculation results) capabilities. It is appealing to improve the geospatial computing and reasoning abilities of LLMs, e.g., by invoking external tools for calculation, to satisfy UrbanKGC task requirements.

To address the aforementioned challenges, in this study we propose UrbanKGent, a unified LLM agent framework for automatic UrbanKG construction. Figure 1 illustrates the overview of UrbanKGent. For a given city, we first generate a knowledgeable instruction set for UrbanKGC tasks (e.g., relational triplet extraction and knowledge graph completion) from urban geographic and text data sources. Through heterogeneity-aware and geospatial-infused instruction generation, as shown in Figure 2(a), various urban spatiotemporal relationship knowledge can be encoded into the instruction, which facilitates alignment between LLMs and the target UrbanKGC tasks. Moreover, we propose a tool-augmented iterative trajectory refinement module to enhance and refine the trajectories derived by distilling GPT-4 with the instructions constructed above.
Based on geospatial tool augmentation and self-refinement, the deficiency of LLMs in geospatial computing and reasoning can be alleviated, and unfaithful trajectories can be filtered out. Finally, we perform hybrid instruction fine-tuning with the enhanced and refined trajectories on Llama-2-7B and Llama-2-13B variants [37] using LoRA [18]. The obtained agent, UrbanKGent-13B, can complete multiple UrbanKGC tasks in a cost-effective way (i.e., with no extra GPT-API cost). We conduct comprehensive experiments on two UrbanKGC tasks in two metropolises (New York City and Chicago) using both human evaluation and GPT-4-based self-evaluation. The empirical results validate the effectiveness of the proposed LLM agent for completing various UrbanKGC tasks. Moreover, we deploy the constructed agent, UrbanKGent-13B, online for UrbanKGC service; it can extract the same scale of triplets and entities as the existing UrbanKG benchmark [33] using only one-fifth of the data, while expanding the types of relations by thousands of times. Our contributions are summarized as follows:

• We propose the first UrbanKGC agent construction framework, UrbanKGent, and release UrbanKGent-13B to provide real-world UrbanKGC service (demo: https://htmlpreview.github.io/?https://raw.githubusercontent.com/usail-hkust/UrbanKGent/main/UrbanKGent%20Demo/index.html), which offers new opportunities to advance UrbanKG studies.
• We propose a knowledgeable instruction generation module and a tool-augmented iterative trajectory refinement method, which align LLMs to UrbanKGC tasks and compensate for their geospatial computing and reasoning inability.
• Extensive experiments on two real-world datasets validate the effectiveness of the proposed framework and uncover its exceptional performance across various UrbanKGC tasks.

2 URBANKGC DATA DESCRIPTION

This section introduces the data collection process and preprocessing details.

2.1 Data Collection

We first acquire urban knowledge for two large cities, New York City and Chicago, from two data sources. Table 1 summarizes the statistics of the raw datasets.

2.1.1 Geographic Data. Geographic data provides critical urban spatial structure information and functional semantics, which have been widely used in previous UrbanKG studies [25, 26, 33, 54].

Area-Of-Interest (AOI) Data. AOI data describes the urban spatial area structure, including urban commercial areas (e.g., shopping centers), residential areas (e.g., communities and neighborhoods), and so on. In this work, we first follow UUKG [33] to acquire the AOI name and geometry value from NYC Gov (https://www.nyc.gov/) and CHI Gov (https://www.chicago.gov/). Next, we use the AOI name to search for its text description on Wikipedia (https://www.wikipedia.org/); if there is no direct match, we retrieve the Wikipedia pages that mention the AOI name and collect the related paragraphs. Each AOI record contains an AOI name, a polygon geometry value, and a text description. For example, ["Jamaica Bay", polygon (-73.86 40.58, ...), "Jamaica Bay is an estuary ..."] is the record of the AOI "Jamaica Bay" with its geometry value and text description.

Road Network Data. Road data describes the urban spatial transportation network, including urban motorways, overpasses, and so on. We first follow [33] to obtain the road name and geometry value from OpenStreetMap (OSM, https://www.openstreetmap.org/).
Then, following the same text acquisition procedure as for the AOI data, we crawl the textual description of each road record from Wikipedia. Each road record contains a road name, a linestring geometry value, a road type, and a text description. For example, ["Central Park Avenue", linestring (-73.87 40.90, ...), primary, "Central Park Avenue is a boulevard in ..."] describes the primary road named "Central Park Avenue" with a linestring geometry value and its textual description.

Point-Of-Interest (POI) Data. POI data represents different urban functions (e.g., residential and commercial), which have been widely adopted in many recent UrbanKG works [26, 33, 40]. We first follow [33] to obtain the POI name and geometry value from OSM; the textual description of each POI record is then crawled from Wikipedia following a similar process. Each POI record contains a POI name, a coordinate geometry, a POI type, and a text description. For example, ["Trump World Tower", coordinate (-73.96 40.75), residential, "Trump World Tower is a residential condominium ..."] is the record of the POI "Trump World Tower".

2.1.2 Text Data. Text data provides rich contextual knowledge of the city space from different perspectives (e.g., the spatial context) [10] and plays an important role in geospatial understanding. In this work, we collect two types of text corpora.

Table 1: The statistics of raw datasets.

| Dataset | Description | New York City | Chicago |
|---|---|---|---|
| Geographic Data | # of AOI | 183 | 80 |
| | # of road | 6,397 | 1,893 |
| | # of POI | 5,369 | 5,658 |
| Text Data | # of review | 16,360 | 13,627 |
| | # of web page | 11,596 | 7,283 |

Review Data. Reviews of urban places provide useful commercial information that citizens use to make business decisions [9], which plays a critical role in urban knowledge distillation. We collect review data from Google Maps (https://www.google.com/maps). Specifically, we first manually split the city into multiple rectangular regions, then utilize the Google Maps API to query the places contained within each region and their reviews. Each review record contains a place name, a coordinate geometry value, a rating, and a text review. For example, ["Lifestyles Academy Inc", coordinate (-87.87 41.65), 4.9, "Very nice organization and ..."] is the review record of the place "Lifestyles Academy Inc".

Web Page Data. Web page data serves as the general text corpus for the city and contains rich geoscience knowledge that has been utilized in recent urban entity and relation extraction studies [10]. We collect web page data from the Google search engine. Specifically, we first input the name of each crawled AOI, road, and POI record into Google, and then concatenate the textual sentences of the top 10 retrieved web pages. Each web page record contains a long urban text description.

2.2 Data Preprocessing

Before constructing the UrbanKGC dataset, we first preprocess the raw datasets. Specifically, we filter out AOIs, roads, POIs, reviews, and web pages whose crawled textual descriptions are null, too short (e.g., fewer than ten words), or meaningless (e.g., merely repeating the POI name). In addition, we remove irrelevant information from the text descriptions, such as non-English characters, non-ASCII gibberish, website addresses, and so on. More details can be found in Appendix A.
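The filtering rules of Section 2.2 can be sketched as follows; the ten-word threshold comes from the text, while the concrete regexes are assumptions:

```python
import re

def keep_record(name: str, description: str, min_words: int = 10) -> bool:
    """Drop records whose description is null, too short, or
    meaningless (merely repeating the record name)."""
    if not description:
        return False
    if len(description.split()) < min_words:
        return False
    return description.strip().lower() != name.strip().lower()

def clean_description(text: str) -> str:
    """Strip website addresses and non-ASCII gibberish."""
    text = re.sub(r"https?://\S+", " ", text)
    text = text.encode("ascii", errors="ignore").decode()
    return re.sub(r"\s+", " ", text).strip()
```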
3 PRELIMINARY

This section presents the UrbanKGC task definition and provides a task analysis.

3.1 Task Definition and Problem Formulation

Before diving into the technical details, we first introduce the definition of the urban knowledge graph (UrbanKG):

Definition 1 (UrbanKG). An UrbanKG is a multi-relational graph $\mathcal{G} = (\mathcal{E}, \mathcal{R}, \mathcal{F})$, where $\mathcal{E}$, $\mathcal{R}$, and $\mathcal{F}$ are the sets of urban entities, relations, and facts, respectively. In particular, facts are defined as $\mathcal{F} = \{\langle h, r, t \rangle \mid h, t \in \mathcal{E},\, r \in \mathcal{R}\}$, where each triplet $\langle h, r, t \rangle$ describes a head entity $h$ connected to a tail entity $t$ via relation $r$.

Figure 3: Quantitative performance analysis of prompting GPT-4 for RTE and KGC tasks. (a) Performance of GPT-4 on heterogeneous relations (spatial, temporal, functional, others) in the RTE task (missed/wrong/correct); (b) performance of GPT-4 on the five geospatial relations (DC, EC, EQ, PO, IN) in the KGC task (wrong/correct). The result is obtained by comparing 50 GPT-4 outputs with human annotations.

The UrbanKG encodes diverse urban semantic knowledge by connecting urban entities into a multi-relational graph. This work aims to construct an UrbanKG from collected unstructured text data. We decompose the UrbanKG construction (UrbanKGC) process into two sequential knowledge graph construction tasks, namely relational triplet extraction [56] and knowledge graph completion [47]. We first define these two subtasks and then introduce the problem formulation of this work.

3.1.1 Task Definition. We give the basic task definitions as follows.

Relational Triplet Extraction (RTE). Given unstructured texts, this task jointly extracts entities and their relations [56] in the form of a triplet $\langle h, r, t \rangle$. For instance, given the urban text sentence "Columbia University is a private Ivy League research university in New York City.", this task aims to identify the two entities "Columbia University" and "New York City" and their relation "locate-in", described as the triplet <Columbia University, locate-in, New York City>.

Knowledge Graph Completion (KGC). Given a head entity $h$ and a tail entity $t$, this task predicts the missing relation between them [47]. For instance, given the head entity "Columbia University" and the tail entity "Empire State Building", this task predicts their missing relation, e.g., "disconnected", described as the triplet <Columbia University, disconnected, Empire State Building>.

3.1.2 Problem Formulation. Given the urban unstructured text data, the desired output is an UrbanKG $\mathcal{G}$. In this paper, the problem is decomposed into two sequential subtasks: (1) Relational Triplet Extraction: the first task extracts relational triplets $\langle h, r, t \rangle$ from the urban text data. The output of the RTE task is $\mathcal{G}_1 = (\mathcal{E}, \mathcal{R}_1, \mathcal{F}_1)$, where $\mathcal{E}$ and $\mathcal{R}_1$ are the sets of extracted entities and relations, and $\mathcal{F}_1$ is the set of extracted triplets. (2) Knowledge Graph Completion: for a given head entity $h$ and tail entity $t$ in $\mathcal{G}_1$, the second task predicts the geospatial relationship between them (following GeoLM [23], we consider five RCC relationship [30] candidates: disconnection (DC), external connection (EC), equality (EQ), partial overlap (PO), and tangential and non-tangential proper parts (IN); details are in Appendix A). The output of this task is $\mathcal{G}_2 = (\mathcal{E}, \mathcal{R}_2, \mathcal{F}_2)$, where $\mathcal{R}_2$ and $\mathcal{F}_2$ are the sets of completed relations and triplets. By sequentially completing the above two tasks, we obtain the constructed UrbanKG $\mathcal{G} = (\mathcal{E}, \mathcal{R}_1 \cup \mathcal{R}_2, \mathcal{F}_1 \cup \mathcal{F}_2)$.
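Since the KGC task reduces to choosing among the five RCC candidates, a simplified decision rule over geometries can be sketched with shapely predicates (this is an illustration, not the paper's exact procedure, which instead lets the LLM reason over tool outputs):

```python
from shapely.geometry import Polygon

def rcc_relation(g1, g2) -> str:
    """Map two geometries to one of the five RCC-style candidates."""
    if g1.equals(g2):
        return "EQ"  # equality
    if not g1.intersects(g2):
        return "DC"  # disconnection
    if g1.touches(g2):
        return "EC"  # external connection: boundaries touch only
    if g1.within(g2) or g2.within(g1):
        return "IN"  # (non-)tangential proper part
    return "PO"      # partial overlap

# Two illustrative AOI footprints (coordinates are made up):
a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
b = Polygon([(3, 0), (4, 0), (4, 1), (3, 1)])
print(rcc_relation(a, b))  # -> "DC"
```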
3.2 Quantitative Task Analysis

As shown in Figure 2, we qualitatively found that LLMs lack the ability to understand urban heterogeneous relationships and experience difficulty in geospatial computing and reasoning when adopted for UrbanKGC tasks. This subsection presents a quantitative analysis of these two challenges.

Heterogeneous Relationship Understanding. The ability to understand heterogeneous relationships is essential for distilling knowledge from the massive urban corpus. For example, the text description in Figure 2 describes a place from the spatial, temporal, and functional aspects, and capturing these heterogeneous semantics is important for urban knowledge distillation. We perform a quantitative analysis by randomly sampling 50 urban text records and prompting GPT-4 to complete relational triplet extraction given only the basic task description. As shown in Figure 3(a), we find that the LLM suffers serious misjudgment (i.e., extracting wrong triplets or missing triplets) on urban spatial, temporal, and functional triplet extraction. This indicates the limited capacity of LLMs to understand heterogeneous relationships.

Geospatial Computing and Reasoning. Geospatial computing and reasoning techniques are widely used in many previous UrbanKG studies [26, 33] for urban geospatial relation extraction. Recent works [31, 32] have also demonstrated that LLMs lack geospatial awareness and reasoning ability [2]. To identify potential limitations, we quantitatively investigate how LLMs perform on geospatial relation completion. Specifically, we construct 100 head-tail entity pairs covering the five geospatial relations in the KGC task, and prompt GPT-4 to predict the relation given the basic task description and the geospatial relation candidates. As shown in Figure 3(b), GPT-4 performs poorly across all five geospatial relations. This further validates the inability of LLMs in geospatial computing and reasoning.

4 URBANKGC AGENT CONSTRUCTION

This section presents the proposed UrbanKGC agent construction framework.

4.1 Overview

The overall pipeline of the UrbanKGent framework is illustrated in Figure 4 and consists of three major components: (1) Knowledgeable Instruction Generation comprises the heterogeneity-aware and geospatial-infused instruction generation modules for aligning LLMs to UrbanKGC tasks. (2) Tool-augmented Iterative Trajectory Refinement proposes geospatial tool interface invocation and iterative self-refinement mechanisms to enhance and refine the generated trajectories. (3) Hybrid Instruction Fine-tuning fine-tunes LLMs on the refined trajectories to cost-effectively complete diverse UrbanKGC tasks.

4.2 Knowledgeable Instruction Generation

We first construct knowledgeable instructions to adapt LLMs to the two UrbanKGC tasks, relational triplet extraction (RTE) and knowledge graph completion (KGC). Figure 4(a) illustrates the instruction construction process for these two tasks.
4.2.1 Heterogeneity-aware Instruction Generation for Relational Triplet Extraction. As discussed in Section 3, urban text contains diverse heterogeneous relationships; we therefore consider multiple views, each with urban entity and relation definitions, for relational triplet extraction. In particular, we construct a multi-view instruction template for urban relational triplet extraction, including a spatial view, a temporal view, and a functional view. Each view is a multi-turn question-answer dialog [43] consisting of entity recognition, relation extraction, and triplet extraction modules. For the spatial view, we devise a two-turn dialog to align LLMs for spatial triplet extraction. In the first turn, we inject spatial entity and relation definitions into the instruction template to guide LLMs to understand spatial characteristics and then extract potential spatial entity types (e.g., University) and relation types (e.g., locate-in). In the second turn, the extracted types are explicitly fed into the instruction template for spatial triplet extraction. Intuitively, the spatial view allocates dedicated urban knowledge for LLMs to extract urban spatial relationships. Similarly, we construct the temporal view and the functional view for temporal and functional triplet extraction, respectively. We provide the detailed instruction template in Appendix B.

4.2.2 Geospatial-infused Instruction Generation for Knowledge Graph Completion. Although heterogeneity-aware instructions enable LLMs to extract urban triplets from various perspectives, the geospatial relationship between geospatial entities cannot be directly extracted. Therefore, we introduce a geospatial-infused instruction generation module to guide LLMs to complete missing geospatial relationships. First, we incorporate the geometry information (i.e., the latitude and longitude) of geo-entities into the instruction, so that LLMs can utilize these geospatial values for relation inference. Second, we add the geospatial relationship definitions to the instruction to guide LLMs in understanding them. Intuitively, LLMs can refer to this geospatial knowledge to derive practical solutions for the knowledge graph completion task. We provide the detailed instruction template in Appendix B.

4.3 Tool-augmented Iterative Trajectory Refinement

Next, we present the trajectory generation used to further augment and refine the constructed UrbanKGC instructions.

4.3.1 Trajectory Generation. With the initial UrbanKGC instructions constructed, the next step is to generate reasoning trajectories [51], which will be used to fine-tune LLMs tailored to UrbanKGC tasks. Specifically, we follow FireAct [4] and use Chain-of-Thought (CoT) [42], a gradient-free technique, to prompt GPT-4 (i.e., adding the prompt trigger "Let's think step by step" at the end of the RTE and KGC instruction templates) to generate reasoning trajectories for UrbanKGC tasks. The generated CoT trajectories provide step-by-step reasoning solutions for UrbanKGC tasks. Nevertheless, complex geospatial relationships cannot be easily extracted, as discussed in Section 3 and in recent geospatial reasoning works [2, 31, 32]. Therefore, we introduce a tool invocation module that guides LLMs to invoke tailored external geospatial tools [3], enhancing their geospatial computing and reasoning capacity for UrbanKGC tasks.
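For illustration, a two-turn spatial-view instruction with the CoT trigger might be templated as below; the wording is a hypothetical paraphrase, since the actual templates are given in the paper's Appendix B.

```python
# Hypothetical two-turn spatial-view template for the RTE task.
TURN_1 = (
    "Given the urban text: {text}\n"
    "Spatial entities are places with a geographic location (e.g., POIs, "
    "roads, AOIs); spatial relations describe how they are located "
    "relative to each other.\n"
    "List the potential spatial entity types and spatial relation types."
)
TURN_2 = (
    "Given the urban text: {text}\n"
    "Using the extracted types: {types}\n"
    "Extract spatial triplets in <head entity, relation, tail entity> "
    "format. Let's think step by step."
)
```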
4.3.2 Tool Invocation for Trajectory Augmentation. We conduct two sequential procedures: tool invocation for geospatial computing support, and trajectory deliberation for reasoning enhancement.

Figure 4: An overview of UrbanKGent construction: (a) Knowledgeable Instruction Generation, (b) Tool-augmented Iterative Trajectory Refinement, and (c) Hybrid Instruction Fine-tuning.

Tool Invocation. First, we construct a geospatial reasoning toolkit (e.g., distance calculation; eight interfaces in total, shown in Table 8) by prompting GPT-4 for self-programming. Then, we construct tailored prompts to guide LLMs to invoke these interfaces. Specifically, the prompt concatenates an illustrative description of each geospatial tool's function with a task instruction (i.e., "Which types of tool interface do you need?"). Intuitively, the external tools provide the necessary calculation results for LLMs to infer missing geospatial relations. The detailed toolkit description can be found in Appendix C.

Trajectory Deliberation. After manipulation with the external tools, we prompt LLMs to refine uncertain reasoning steps based on the obtained manipulation results. Specifically, we construct the prompt by concatenating the manipulation results (e.g., the distance and geohash values of geo-entities) with a task instruction (i.e., "Please refine your reasoning process"). After feeding this prompt into GPT-4, the enhanced trajectory is obtained. Detailed prompt information can be found in Appendix C.
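As an example of what a "distance calculation" interface in such a toolkit might compute, here is a standard haversine implementation (our own sketch, not the GPT-4-generated code); the printed value reproduces the roughly 6.85 km Columbia-University-to-Empire-State-Building distance from Figure 2, with the coordinates taken as approximate assumptions.

```python
import math

def haversine_km(lat1: float, lng1: float, lat2: float, lng2: float) -> float:
    """Great-circle distance between two (lat, lng) points in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

print(round(haversine_km(40.8075, -73.9626, 40.7484, -73.9857), 2))  # ~6.85
```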
4.3.3 Iterative Trajectory Self-refinement. Although tool-augmented deliberation improves the geospatial computing and reasoning ability of LLMs, the enhanced trajectories may not all be faithful [17]. To alleviate potential errors and ensure trajectory quality [19], we refine these trajectories via an iterative self-refinement mechanism [29]. Specifically, we iterate two sequential blocks: (i) a trajectory verifier, which, given the trajectory, provides feedback for refining the reasoning process; and (ii) a trajectory updater, which, given the trajectory and feedback, further refines the current trajectory based on the feedback.

Trajectory Verifier. We construct a tailored prompt to ask LLMs to generate feedback. Specifically, we use a simple but effective trigger ("Judge whether all extracted triplets are correct and provide improvement suggestions") to prompt LLMs to provide feedback. If the trajectory no longer requires modification, we let the LLM respond with "This is a faithful trajectory". Such a verification step lets LLMs reflect and improves the correctness of the trajectory.

Trajectory Updater. The updater utilizes the provided feedback to refine the current trajectory via the prompt trigger "Follow the suggestion to refine the reasoning process". Intuitively, the feedback may address multiple aspects of unfaithful trajectories (e.g., a missed triplet in the RTE task or an unfaithful reasoning step in the KGC task). We iterate the trajectory verifier and updater until a predefined stopping condition is satisfied: either the maximum number of iterations is reached (we set it to three to avoid excessive cost) or the verifier confirms that all trajectories are faithful. Upon meeting the stopping condition, we use the last refined trajectory for further fine-tuning. Detailed prompt information can be found in Appendix C.

4.4 Hybrid Instruction Fine-Tuning

To construct a cost-effective UrbanKGC agent, we further utilize the trajectories (generated by GPT-4) to fine-tune a smaller open-source LLM for faster inference and lower cost (prompting GPT-4 for UrbanKGC is expensive). Specifically, we fine-tune the LLM via a mixed-task instruction-tuning strategy [51]. The goal is to enhance the LLM's capabilities on diverse UrbanKGC tasks.

4.4.1 Mixture Training. Let the base language model be $\mathcal{M}$, and let $P_{\mathcal{M}}(y \mid x)$ denote the probability distribution of response $y$ given instruction $x$. We consider the trajectory sets of the two UrbanKGC tasks, i.e., $\mathcal{D}_{\mathrm{RTE}}$ and $\mathcal{D}_{\mathrm{KGC}}$. Since both the instruction and the target output are formatted in natural language, we can unify the training in an end-to-end sequence-to-sequence manner. Formally, the optimization minimizes the loss of language model $\mathcal{M}$:
$$\mathcal{L} = -\,\mathbb{E}_{(x,y)\sim \mathcal{D}_{\mathrm{RTE}}}\left[\log P_{\mathcal{M}}(y \mid x)\right] - \mathbb{E}_{(x,y)\sim \mathcal{D}_{\mathrm{KGC}}}\left[\log P_{\mathcal{M}}(y \mid x)\right], \tag{1}$$
where $x$ and $y$ denote the instruction input and instruction output in the trajectory, respectively.

4.4.2 Training Setup. We choose the chat versions of the open-source Llama-2-7B/13B [37] as our backbone models and fine-tune them using the Low-Rank Adaptation (LoRA) strategy [18]. With LoRA, training can be done by tuning only about one-thousandth of the full LLM parameters.

4.5 Inference on UrbanKGC Task

Via hybrid instruction fine-tuning, the LLM is trained to follow instructions to finish UrbanKGC tasks. After obtaining UrbanKGent, we can prompt it to complete UrbanKGC tasks by following the pipeline shown in Figure 4.

Table 2: The statistics of the constructed UrbanKGC datasets.

| Records | NYC-Instruct | NYC | CHI-Instruct | CHI |
|---|---|---|---|---|
| RTE | 232 | 2,089 | 122 | 1,102 |
| KGC | 232 | 2,080 | 122 | 1,101 |

For the relational triplet extraction task, we sequentially execute entity recognition, relation extraction, and relational triplet instruction generation; we then perform iterative self-refinement and finally output the extracted triplets. For the knowledge graph completion task, we sequentially execute KGC instruction generation, external tool augmentation, and the iterative self-refinement block, and finally output the completed triplets.
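To ground Eq. (1), here is a minimal sketch of mixed-task training, assuming trajectories are already serialized as (instruction, output) text pairs; `model.log_prob` is a hypothetical interface, and in practice this step is standard causal-LM fine-tuning of Llama-2 with a LoRA adapter.

```python
import random

def mixed_task_batches(d_rte, d_kgc, batch_size=8):
    """Pool RTE and KGC trajectories into one sequence-to-sequence
    training stream, per the mixed-task strategy of Section 4.4.1."""
    pool = list(d_rte) + list(d_kgc)
    random.shuffle(pool)
    for i in range(0, len(pool), batch_size):
        yield pool[i:i + batch_size]

def batch_loss(model, batch):
    """Negative log-likelihood of instruction outputs (Eq. 1).
    `model.log_prob(x, y)` is a hypothetical interface returning
    log P_M(y | x)."""
    return -sum(model.log_prob(x, y) for x, y in batch) / len(batch)
```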
5 EXPERIMENTS

We evaluate the proposed method on two real-world UrbanKGC datasets and aim to answer the following research questions:

• RQ1: How does the constructed UrbanKGent perform compared with existing baselines and LLM prompting paradigms on UrbanKGC tasks?
• RQ2: How do different components (e.g., knowledgeable instruction generation) affect performance?
• RQ3: How can the constructed UrbanKGent provide application services in real-world scenarios?

5.1 Experimental Settings

5.1.1 Dataset. In this work, the two sequential UrbanKGC tasks (i.e., RTE and KGC) follow an open-world setting (i.e., no predefined ontology) [28, 48]. Therefore, for the RTE task, each data record is an urban text without a triplet label; for the KGC task, each data record is a quadruple (head entity name, head entity geometry, tail entity name, tail entity geometry) without a geospatial relation label. We construct the RTE and KGC datasets of NYC and CHI by sampling uniformly from the five raw data sources in Table 1. As shown in Table 2, we first construct two small datasets (NYC-Instruct and CHI-Instruct) for instruction fine-tuning and two mid-sized datasets (NYC and CHI) to validate the performance of the constructed UrbanKGC agent. The remaining data serves as the large-scale UrbanKGC datasets (NYC-Large and CHI-Large) for real-world scenarios, shown in Table 7. The three types of datasets are non-overlapping to prevent data leakage. More dataset construction details are in Appendix A. For each UrbanKGC dataset, we follow Section 4.5 to prompt the constructed UrbanKGent to complete the RTE and KGC tasks.

5.1.2 Baseline Methods. We provide a comprehensive comparison of our method with existing paradigms (e.g., zero-shot reasoning [38], in-context learning [42], and vanilla fine-tuning [47]) on the UrbanKGC tasks.

Pretrained Language Model Baselines. For the RTE task, we utilize RelationPrompt [8], an end-to-end generation model for zero-shot RTE. For the KGC task, we fine-tune KG-BERT [46] and KG-T5 [34] with the QA pairs constructed from the self-instruct dataset.

Table 3: The main results of relational triplet extraction (RTE) and knowledge graph completion (KGC). We report accuracy (acc) and confidence for GPT evaluation on the two datasets, and accuracy (acc) for the human evaluation. The best baseline performance is underlined in the original table.
| Type | Models | NYC GPT RTE (acc/conf) | NYC GPT KGC (acc/conf) | NYC Human RTE (acc) | NYC Human KGC (acc) | CHI GPT RTE (acc/conf) | CHI GPT KGC (acc/conf) | CHI Human RTE (acc) | CHI Human KGC (acc) |
|---|---|---|---|---|---|---|---|---|---|
| Pretrained Language Models | KG-BERT | – | 0.24/3.15 | – | 0.23 | – | 0.19/4.12 | – | 0.24 |
| | KG-T5 | – | 0.21/4.02 | – | 0.21 | – | 0.15/3.98 | – | 0.24 |
| | RelationPrompt | 0.12/3.38 | – | 0.12 | – | 0.21/3.53 | – | 0.18 | – |
| Zero-shot Reasoning | Llama-2-7B | 0.14/1.98 | 0.18/3.75 | 0.16 | 0.18 | 0.26/1.96 | 0.15/2.83 | 0.21 | 0.22 |
| | Llama-2-13B | 0.21/2.07 | 0.28/3.91 | 0.19 | 0.22 | 0.22/2.19 | 0.16/2.47 | 0.22 | 0.24 |
| | Llama-2-70B | 0.25/3.07 | 0.28/3.75 | 0.22 | 0.24 | 0.27/3.55 | 0.16/2.47 | 0.24 | 0.23 |
| | GPT-3.5 | 0.29/4.11 | 0.36/3.47 | 0.31 | 0.23 | 0.31/3.79 | 0.31/3.16 | 0.31 | 0.29 |
| | GPT-4 | 0.38/4.03 | 0.39/3.82 | 0.41 | 0.29 | 0.39/4.08 | 0.32/4.03 | 0.43 | 0.35 |
| In-context Learning | Llama-2-7B (1-shot) | 0.12/2.07 | 0.19/3.91 | 0.15 | 0.16 | 0.21/2.07 | 0.16/3.09 | 0.20 | 0.19 |
| | Llama-2-7B (3-shot) | 0.18/2.15 | 0.21/3.96 | 0.19 | 0.18 | 0.25/2.44 | 0.18/3.27 | 0.23 | 0.20 |
| | Llama-2-13B (1-shot) | 0.22/2.45 | 0.26/3.87 | 0.20 | 0.19 | 0.24/3.09 | 0.15/2.98 | 0.21 | 0.23 |
| | Llama-2-13B (3-shot) | 0.26/3.52 | 0.31/3.28 | 0.23 | 0.24 | 0.28/2.65 | 0.21/2.53 | 0.25 | 0.26 |
| | GPT-3.5 (1-shot) | 0.32/4.25 | 0.38/3.87 | 0.37 | 0.28 | 0.33/3.88 | 0.30/4.17 | 0.36 | 0.34 |
| | GPT-3.5 (3-shot) | 0.41/4.65 | 0.42/4.08 | 0.42 | 0.31 | 0.36/4.24 | 0.36/4.23 | 0.39 | 0.36 |
| Vanilla Fine-tuning | Llama-2-7B | 0.32/4.37 | 0.38/3.65 | 0.32 | 0.27 | 0.29/3.80 | 0.30/3.65 | 0.33 | 0.31 |
| | Llama-2-13B | 0.35/4.26 | 0.41/3.92 | 0.39 | 0.29 | 0.31/4.14 | 0.29/3.87 | 0.37 | 0.35 |
| UrbanKGent Inference | Llama-2-7B | 0.27/3.05 | 0.26/4.12 | 0.28 | 0.24 | 0.27/2.87 | 0.24/3.54 | 0.26 | 0.29 |
| | Llama-2-13B | 0.31/3.87 | 0.32/3.56 | 0.35 | 0.27 | 0.28/3.24 | 0.26/3.28 | 0.31 | 0.32 |
| | Llama-2-70B | 0.33/4.28 | 0.35/4.27 | 0.33 | 0.29 | 0.29/3.80 | 0.28/4.01 | 0.32 | 0.34 |
| | GPT-3.5 | 0.43/4.12 | 0.46/3.88 | 0.43 | 0.34 | 0.40/4.21 | 0.39/3.87 | 0.46 | 0.41 |
| | GPT-4 | 0.45/4.08 | 0.48/4.02 | 0.47 | 0.42 | 0.46/4.17 | 0.41/4.35 | 0.52 | 0.43 |
| | UrbanKGent-7B | 0.46/4.12 | 0.49/3.97 | 0.48 | 0.44 | 0.49/4.28 | 0.43/4.58 | 0.54 | 0.45 |
| | (improv. over GPT-4) | ↑2.22% | ↑2.08% | ↑2.08% | ↑4.76% | ↑6.52% | ↑4.88% | ↑3.84% | ↑4.66% |
| | UrbanKGent-13B | 0.52/4.38 | 0.56/4.13 | 0.54 | 0.47 | 0.53/4.15 | 0.48/4.42 | 0.59 | 0.49 |
| | (improv. over GPT-4) | ↑15.56% | ↑14.29% | ↑14.89% | ↑11.90% | ↑15.22% | ↑17.07% | ↑13.46% | ↑13.95% |

LLMs-based Zero-shot Reasoning Methods. We directly prompt the LLMs with basic task definitions to get the answer, without training.

LLMs-based In-context Learning Methods. We sample few-shot QA pairs from the self-instruct dataset as demonstrations and get the answers from the LLMs without training.

Vanilla Fine-tuning Methods. We directly fine-tune the LLMs using the QA pairs constructed from the self-instruct dataset, and then prompt the LLMs with the basic task definition, without demonstrations.

UrbanKGent Inference Methods. We directly prompt the LLMs using the UrbanKGent inference pipeline in Section 4.5. The prompt templates of the above baseline methods are in Appendix B.

5.1.3 Implementation and Detail Settings. In our experiments, we select Llama-2, GPT-3.5, and GPT-4 as the backbone LLM $\mathcal{M}$. We follow the open-source Llama documentation (https://ai.meta.com/llama/) to deploy Llama-2-7B, Llama-2-13B, and Llama-2-70B on our device. We implement two fine-tuned agent versions (Llama-2-7B and Llama-2-13B), and all experiments are conducted on eight NVIDIA A800 GPUs. For GPT-3.5 and GPT-4, we adopt the gpt-3.5-turbo-16k-0613 and gpt-4-0613 APIs from the GPT service (https://gpt.hkust-gz.edu.cn/) provided by HKUST (GZ).

5.1.4 Evaluation Protocol. Since the UrbanKGC tasks in this work follow an open-world setting where labels are not visible, classical metrics (e.g., F1 and Hits@10) are not applicable.
In this work, we regard evaluation as binary classification, i.e., whether the extracted triplet in the RTE task is correct and whether the completed relation in the KGC task is correct. We follow recent LLM-based KGC works [47] and employ accuracy as the evaluation metric. To comprehensively evaluate the experimental results, we perform the evaluation from two aspects.

Human Evaluation. We employ human annotators to evaluate the results on 200 random samples. For the relational triplet extraction task, we first manually annotate the triplet label for each sample; we then manually evaluate the correctness of each triplet [28] based on the annotation and calculate the accuracy. For the knowledge graph completion task, we follow [47] to manually label each response as correct or wrong and calculate the accuracy. The detailed human evaluation process is in Appendix D.

Table 4: Effect of different blocks.

| Models | GPT RTE (acc/conf) | GPT KGC (acc/conf) | Human RTE (acc) | Human KGC (acc) |
|---|---|---|---|---|
| UrbanKGent-7B♠ | 0.38/4.17 | 0.42/3.98 | 0.37 | 0.34 |
| UrbanKGent-7B★ | 0.34/4.06 | 0.45/4.02 | 0.34 | 0.39 |
| UrbanKGent-7B‡ | 0.45/4.32 | 0.40/3.97 | 0.45 | 0.23 |
| UrbanKGent-7B† | 0.44/4.10 | 0.47/3.85 | 0.46 | 0.43 |

GPT Evaluation. Recently, many studies [38, 57] have adopted LLM-based evaluation for open-domain tasks and empirically demonstrated that GPT-4's evaluation and human evaluation can be consistent [6]. In this work, we also use GPT-4 to evaluate model performance on the full data to avoid intensive labor. Specifically, given an UrbanKGC instruction and its results, we prompt GPT-4 to return a confidence score and a justification (i.e., True/False), which are further used to calculate the accuracy. In this work, GPT-4's evaluation has also been demonstrated to be consistent with the human evaluation; details are in Appendix D.

5.2 Main Result (RQ1)

The performance results are reported in Table 3. As can be seen, the constructed agent outperforms all twenty-one baseline models on the two UrbanKGC datasets. Specifically, UrbanKGent-13B achieves improvements of (15.56%, 14.29%, 14.89%, and 11.90%) over the state-of-the-art GPT-4 with the same inference pipeline on NYC; the improvements on CHI are (15.22%, 17.07%, 13.46%, and 13.95%), respectively. Moreover, UrbanKGent-7B also achieves performance comparable to GPT-4. Such improvements demonstrate the superiority of the constructed UrbanKGC agent. Meanwhile, we observe that zero-shot LLMs perform poorly on the UrbanKGC tasks, even GPT-4. In addition, although the demonstrations provided by in-context learning can incorporate UrbanKGC task information, the performance gain is limited and can even lead to performance degradation for smaller LLMs (e.g., Llama-2-7/13B). Besides, we find that fine-tuning LLMs yields obvious improvements in overall performance: through vanilla fine-tuning, Llama-2-7B and Llama-2-13B achieve performance comparable to GPT-3.5 under the zero-shot reasoning and in-context learning settings. Moreover, although the LLMs (i.e., Llama-2-7B and Llama-2-13B) using the UrbanKGent inference pipeline perform slightly worse than the vanilla fine-tuning method, they obtain better performance than the zero-shot reasoning and in-context learning paradigms.
Such results demonstrate the benefit of the knowledgeable instruction design and external tool invocation, but also indicate a performance bottleneck: the well-designed UrbanKGent prompting pipeline alone cannot beat vanilla fine-tuning. As a deeper exploration, our work fills this gap through hybrid instruction fine-tuning, and the fine-tuned UrbanKGC agents, whether 7B or 13B, achieve state-of-the-art performance on UrbanKGC tasks.

Table 5: Comparison among LLM-based UrbanKGC methods in four ways.

| Method | Extra Knowledge | Requires Fine-tuning | Tool Invocation | Self-Refinement |
|---|---|---|---|---|
| ZSL | × | × | × | × |
| ICL | ✓ | × | × | × |
| VFT | ✓ | ✓ | × | × |
| UrbanKGent Inference | ✓ | × | ✓ | ✓ |
| UrbanKGent | ✓ | ✓ | ✓ | ✓ |

Figure 5: Model latency and cost of the constructed UrbanKGent-13B and GPT-4 on UrbanKGC tasks (latency in minutes vs. cost in dollars). We report the total inference time and cost for 1,000 RTE and KGC tasks.

5.3 In-Depth Analysis (RQ2)

We conduct an in-depth analysis of the proposed instruction generation and tool-augmented iterative trajectory refinement modules on the NYC dataset. Specifically, for the RTE and KGC tasks, we validate the effectiveness of each block by comparing the following variants: (1) UrbanKGent-7B♠ removes the knowledgeable instruction template in the RTE and KGC tasks; (2) UrbanKGent-7B★ removes the multi-view design in the RTE task; (3) UrbanKGent-7B‡ removes tool invocation; (4) UrbanKGent-7B† removes iterative trajectory self-refinement. We summarize the results in Table 4 and make the following observations. First, knowledgeable instruction generation contributes to the overall performance of both the RTE and KGC tasks: we observe a performance degradation when removing the knowledgeable instruction template. Second, the multi-view instruction design provides the largest performance gain, matching our intuition that urban text contains heterogeneous relationships that can be effectively extracted by multi-view prompting. Third, tool invocation is critical for the KGC task, as we observe a significant performance degradation after removing it. In addition, iterative trajectory self-refinement brings consistent performance gains on both the RTE and KGC tasks.

5.4 Complexity and Efficiency Analysis

We compare the four paradigms to demonstrate the advantages of the constructed agent, as shown in Table 5. Compared with zero-shot reasoning (ZSL), in-context learning (ICL), vanilla fine-tuning (VFT), and UrbanKGent Inference, UrbanKGent can incorporate extra urban knowledge, invoke external tools, and iteratively self-refine to better complete UrbanKGC tasks.
Moreover, we provide an efficiency analysis in Figure 5. As can be seen, UrbanKGent-13B reduces inference latency by roughly 20 times compared with GPT-4, at a lower cost, on both the RTE and KGC tasks. The details are in Appendix H.

5.5 Agent Application and Deployment (RQ3) We have deployed a prototype system (https://htmlpreview.github.io/?https://raw.githubusercontent.com/usailhkust/UrbanKGent/main/UrbanKGent%20Demo/index.html) to facilitate real-world UrbanKGC construction. The service is equipped with the UrbanKGent-13B version and is currently optimized for New York City and Chicago. As shown in Table 6, compared to the previous UrbanKG benchmark [33], we use only roughly one-fifth of the data to construct UrbanKGs with the same scale of triplets and entities, while expanding the variety of relation types roughly a thousandfold. Please refer to Appendix F for more details.

6 RELATED WORK Our work is related to domain-oriented agent construction, LLMs for knowledge graph construction, and urban knowledge graphs. We briefly discuss each in this section.

6.1 Domain-Oriented Agent Construction The concept of the language agent [4] has become very popular recently, and a variety of LLM agents targeting different domains have been proposed. For example, Voyager [39] is constructed for automated game exploration, WebGPT [16] is an HTML agent for diverse document understanding tasks, LLMLight [20] constructs a language agent for the transportation domain, and K2 [10], GeoGalactica [24], and GeoLLM [23] propose to re-train language agents for geospatial semantic understanding. In addition, many recent works such as AutoGPT [1] and CAMEL [21] aim to propose autonomous frameworks for agent construction. Nevertheless, there is still no UrbanKGC agent construction framework for the urban computing domain.

6.2 LLMs for Knowledge Graph Construction Knowledge graph construction aims to extract structural information from unstructured text [48]. Recently, the advent of Large Language Models (LLMs) has invigorated the field of Natural Language Processing (NLP), and many studies have begun to explore the potential of LLMs for knowledge graph construction. For example, [22, 43] find that transforming the NER and RE tasks into a multi-turn question-answering dialog can improve the performance of LLMs on KG construction, and [44] explicitly derives syntactic knowledge to guide LLMs' thinking, which can improve NER performance. Although LLM-driven knowledge graph construction methods [28, 49] in general domains have been widely investigated, knowledge graph construction in specific domains remains an open challenge [52]. There is still no promising knowledge graph construction method in the urban computing domain, which hinders progress in this area.

6.3 Urban Knowledge Graph Urban knowledge graphs have proven useful in various urban tasks, such as traffic flow prediction [25, 36, 45, 55], mobility prediction [40], site selection [27], city profiling [58], crime prediction, and so on [33]. The common approach involves manually extracting urban entities and defining urban relations to construct an urban knowledge graph. For example, [40] constructs a dedicated spatio-temporal knowledge graph that treats trajectories and timestamps as entities to improve trajectory prediction, and [27] constructs user check-in relations to help mobility prediction.
Nevertheless, existing UrbanKGs heavily rely on manual design, which leads to high labor costs and constrains them to a limited variety of urban entity and relation categories.

7 CONCLUSION In this work, we proposed UrbanKGent, the first automatic UrbanKG construction agent framework built with Large Language Models (LLMs). We first constructed a knowledgeable instruction set to adapt LLMs to different UrbanKGC tasks. Then, we proposed a tool-augmented iterative trajectory refinement module to facilitate the instruction tuning of various large language models. Extensive experimental results demonstrate the advancement of UrbanKGent in improving UrbanKGC tasks. We release UrbanKGent-13B, an agent based on Llama-13B, with lower latency and cost compared with GPT-4 for UrbanKG construction. We hope the open-source UrbanKGent can foster future urban knowledge graph research and broader smart city applications.

8 LIMITATION AND FUTURE WORK This work is limited in its demonstration of further applications of the constructed UrbanKGs, although the proposed UrbanKGent-13B can construct an UrbanKG with roughly a thousandfold more relation types using only one-fifth of the data. In addition, the evaluation method in this work is cost-intensive, although GPT evaluation and human evaluation have been experimentally demonstrated to be consistent. Despite the above limitations, we hope the open-source UrbanKGC agent can foster more extensive UrbanKG research and broader smart city applications. In the future, we will derive extra image-modality data to further enrich UrbanKGC."
},
{
"url": "http://arxiv.org/abs/2402.11199v1",
"title": "Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs",
"abstract": "Large language models (LLMs) demonstrate strong reasoning abilities when\nprompted to generate chain-of-thought (CoT) explanations alongside answers.\nHowever, previous research on evaluating LLMs has solely focused on answer\naccuracy, neglecting the correctness of the generated CoT. In this paper, we\ndelve deeper into the CoT reasoning capabilities of LLMs in multi-hop question\nanswering by utilizing knowledge graphs (KGs). We propose a novel\ndiscriminative and generative CoT evaluation paradigm to assess LLMs' knowledge\nof reasoning and the accuracy of the generated CoT. Through experiments\nconducted on 5 different families of LLMs across 2 multi-hop question-answering\ndatasets, we find that LLMs possess sufficient knowledge to perform reasoning.\nHowever, there exists a significant disparity between answer accuracy and\nfaithfulness of the CoT reasoning generated by LLMs, indicating that they often\narrive at correct answers through incorrect reasoning.",
"authors": "Minh-Vuong Nguyen, Linhao Luo, Fatemeh Shiri, Dinh Phung, Yuan-Fang Li, Thuy-Trang Vu, Gholamreza Haffari",
"published": "2024-02-17",
"updated": "2024-02-17",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "Knowledge AND Graph",
"gt": "Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs",
"main_content": "Introduction While large language models (LLMs) have shown great potential as general-purpose task solvers, they tend to be unreliable reasoners (Bang et al., 2023). Prior research suggests that LLMs demonstrate reasoning-like behaviors as the number of parameters increases (Wei et al., 2022). Notably, Chainof-Thought (CoT) prompting, where LLMs are explicitly instructed to decompose questions into a sequence of logical steps before generating answers, has achieved impressive performance in various reasoning tasks (Wei et al., 2022; Kojima et al., 2022). However, as LLMs function as black-box models, the mechanism behind their reasoning processes remains largely unknown. Previous research measures the reasoning ability of LLMs by reporting their performance, e.g. \u2217 The first two authors contributed equally to this work. Factual Errors Step 1: Justin Bieber is the child of James Brown. Step 2: James Brown is the father of Teddy Brown. Thus, the brother of Justin Bieber is Teddy Brown. Question:\u00a0Who is the brother of\u00a0Justin Bieber? Reasoning Coherence Step 1: Justin Bieber is the child of Jeremy Bieber. Step 2: Jaxon Bieber was born in Canada.\u00a0 Thus, the brother of Justin Bieber is Jaxon Bieber. Answer Correctness Step 1: Justin Bieber is the child of Jeremy Bieber. Step 2: Jeremy Bieber lives in Canada. Thus, the nationality of Justin Bieber is Canadian. Faithful CoT Step 1: Justin Biber is the child of Jeremy Bieber. Step 2:\u00a0Jeremy Bieber.\u00a0is the father of Jaxon Bieber. Thus, the brother of Justin Bieber is Jaxon Bieber. Grounded by KGs. Knowledge Graph (KGs) Reasoning Path Let's think it step by step. Figure 1: Examples of different CoT reasoning errors and a faithful CoT grounded by KGs. accuracy, on the downstream tasks that require reasoning (Huang and Chang, 2023). This evaluation strategy cannot provide a direct assessment of the reasoning steps. Hence, it remains unclear whether their strong performance is the result of true reasoning ability or simple heuristics. Recent studies on analyzing CoT reasoning introduce perturbations to prompts, including the injection of invalid reasoning paths, incorrect facts, or the addition of arbitrary symbols to the few-shot examples (Madaan et al., 2023; Wang et al., 2023a; Ye et al., 2023). These studies show that various aspects of prompts, such as query relevance, style patterns, and the correct ordering of reasoning steps, are more important than the validity of reasoning in the in-context demonstrations. While revealarXiv:2402.11199v1 [cs.CL] 17 Feb 2024 \fing interesting insights into the reasoning process of LLMs, prompt perturbation-based methods still cannot directly evaluate the correctness of reasoning steps. Automatically verifying CoT reasoning steps is still an open challenge due to the unstructured nature of its freeform rationales. In this paper, we go beyond evaluating only the final answers to directly analyzing the intermediate reasoning steps generated by CoT prompting in multi-hop question-answering (QA) tasks. To tackle the unstructured nature of CoT, we introduce a novel evaluation framework that grounds LLMs\u2019 responses in knowledge graphs (KGs) and verify whether it forms a faithful path to the given KGs. Before evaluating more open-ended generative reasoning skills, we design discriminative tests to assess whether LLMs can identify faithful reasoning paths when perturbed with factual errors, incoherent, and misguided reasoning steps. 
Our discriminative evaluation results reveal that LLMs possess certain knowledge of valid reasoning under sufficient-knowledge conditions. Building on this observation, we further propose the generative evaluation to measure the reasoning ability of LLMs and detect fine-grained reasoning errors (see Figure 1). In the generative evaluation, we instruct LLMs to generate CoT in a structured format, enabling us to parse their responses into a structured reasoning path and validate it against KGs. Our ablation study with human experts shows that our framework achieves good accuracy in reasoning path retrieval and evaluation. We use the proposed evaluation framework to understand the CoT reasoning process of five state-of-the-art LLMs on two complex QA tasks, which require performing multi-step reasoning to answer the questions. Our study reveals that:
• LLMs contain sufficient knowledge to conduct reasoning. However, they are still limited in considering the coherence of the reasoning and prone to hallucinations during CoT generation.
• The correct final answer may not necessarily follow from faithful reasoning. We observe a significant gap between answer accuracy and the faithfulness of the CoT reasoning generated by LLMs. This highlights the necessity of directly evaluating the reasoning steps rather than solely scoring the final answers.
• The performance gap between the final answer and reasoning worsens as the model size increases. As the answer accuracy also increases with the model size, this suggests that bigger models may have knowledge of the final answer without the need to perform reasoning.
• Better prompting strategies, such as self-consistency or instructing LLMs with planning, can further improve both the final answer and reasoning faithfulness.

2 Preliminaries

Chain-of-Thought (CoT) Reasoning. Chain-of-thought (CoT) (Wei et al., 2022) is a reasoning framework that prompts LLMs to generate a step-by-step reasoning process $S = \{s_1, s_2, \dots, s_n\}$ for a question $q$, where $s_i$ is a natural language sentence describing a step in the reasoning process.

Faithful CoT. A faithful CoT should satisfy the following properties (Creswell and Shanahan, 2022): (i) there are no factual errors, (ii) the reasoning process is coherent (i.e., the conclusion of the previous step $s_{i-1}$ should be the prerequisite of the current step $s_i$), and (iii) the reasoning process leads to the correct answers. Examples of violations of these properties are shown in Figure 1.

Knowledge Graphs (KGs). Knowledge graphs (KGs) are structured representations of knowledge that contain abundant facts in the form of triples $\mathcal{G} = \{(e_h, r, e_t) \mid e_h, e_t \in \mathcal{E}, r \in \mathcal{R}\}$, where $e_h$ and $e_t$ are head and tail entities, and $r$ is the relation between them. A path in KGs is a sequence of triples $P = e_0 \xrightarrow{r_1} e_1 \xrightarrow{r_2} \dots \xrightarrow{r_l} e_l$ connecting the entity $e_0$ to the entity $e_l$.

Reasoning Paths. Given a question $q$ and the answer $a$, a valid reasoning path $P^* = e_q \xrightarrow{r_1} e_1 \xrightarrow{r_2} \dots \xrightarrow{r_l} e_a$ is a path that connects the topic entity $e_q$ of $q$ to the answer entity $e_a$ of $a$ in KGs. The reasoning path $P^*$ expresses a valid reasoning process for answering the question according to the KG.

Example 1. Given the question "Who is the brother of Justin Bieber?", we can find a valid reasoning path $P^*$ in KGs as: Justin Bieber $\xrightarrow{\text{child\_of}}$ Jeremy Bieber $\xrightarrow{\text{father\_of}}$ Jaxon Bieber.
It indicates that (i) Justin Bieber is the child of Jeremy Bieber, and (ii) Jeremy Bieber is the father of Jaxon Bieber. Thus, the brother of Justin Bieber is Jaxon Bieber.

Faithful CoT Grounded by KGs. We verify the faithfulness of the LLMs' CoT reasoning by grounding it in KGs. By treating each reasoning step as a triple in KGs, we convert the CoT into a reasoning path. If a reasoning path starting from the question and ending at the answers exists in KGs, we deem the CoT of LLMs faithful. A grounded example is shown at the bottom of Figure 1.

3 Evaluating the CoT Reasoning of LLMs

We propose a framework to evaluate the CoT reasoning process of LLMs with the help of KGs. Specifically, we propose two evaluation modules: discriminative evaluation and generative evaluation. The discriminative evaluation investigates whether LLMs possess enough knowledge to conduct faithful reasoning, and the generative evaluation further analyzes whether LLMs can produce a faithful reasoning process during CoT generation. The overall framework is shown in Figure 2.

3.1 Discriminative Evaluation

The discriminative evaluation aims to analyze whether the LLMs possess enough knowledge to conduct faithful reasoning. We hypothesize that if the LLMs possess sufficient knowledge for faithful reasoning, they should be able to distinguish valid reasoning paths from invalid ones given the question and answer. Following previous studies that evaluate the factual knowledge inside LLMs (Luo et al., 2023b), we feed both valid and invalid reasoning paths to the LLMs and ask them to predict the validity of these paths. This allows us to assess the reasoning knowledge inside LLMs by analyzing their prediction accuracy. We carefully design prompts to describe the task and instruct LLMs to provide the prediction. Below is an example of the zero-shot prompt template.

Zero-shot Discriminative Evaluation Prompt
A reasoning path is a sequence of triples that can be used to derive the answer of a given question. Given this reasoning path, do you think this is a valid path to derive the answer of the given question? If yes, please answer "YES"; otherwise, please answer "NO".
Question: <Question>
Answer: <Answer>
Reasoning path: <Reasoning Path>

Here, <Question> indicates the question, <Answer> denotes the corresponding answer, and <Reasoning Path> denotes the input reasoning path, which is verbalized as a structured sentence. A valid reasoning path is a sequence of triples that can be used to derive the answer of a given question. The valid reasoning paths are extracted from the ground-truth reasoning paths[1] $P^* \in \mathcal{P}^*$. We generate three types of invalid reasoning paths $P'$ by breaking specific properties of a faithful CoT:
• Factual error reasoning path: we construct invalid paths with factual errors by randomly corrupting entities within the valid reasoning path. This introduces factual errors into the reasoning path, making it invalid for answering the question.
• Incoherent reasoning path: we shuffle the triples of valid paths to construct an incoherent reasoning path. Even though the facts within the path are accurate, the overall coherence of the path is compromised.
• Misguided reasoning path: we randomly sample paths starting from other questions in KGs. These paths are factually correct and coherent, but they are not related to the questions and lead to incorrect answers.
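For concreteness, the three perturbations above can be sketched as follows. This is a minimal illustration, not the authors' code: the list-of-triples path representation, the entity_pool/other_paths inputs, and the function names are our own assumptions.

```python
# Minimal sketch of the three invalid-path perturbations.
# A path is a list of (head, relation, tail) triples.
import random

def factual_error_path(path, entity_pool):
    """Corrupt one randomly chosen entity, introducing a factual error."""
    corrupted = [list(t) for t in path]
    step = random.randrange(len(corrupted))
    slot = random.choice([0, 2])              # head or tail entity
    corrupted[step][slot] = random.choice(entity_pool)
    return [tuple(t) for t in corrupted]

def incoherent_path(path):
    """Shuffle the triples: facts stay true, but the ordering breaks.
    (For very short paths one may resample until the order changes.)"""
    shuffled = list(path)
    random.shuffle(shuffled)
    return shuffled

def misguided_path(other_paths):
    """Sample a factually correct path that answers a different question."""
    return random.choice(other_paths)

valid = [("Justin Bieber", "child_of", "Jeremy Bieber"),
         ("Jeremy Bieber", "father_of", "Jaxon Bieber")]
print(factual_error_path(valid, ["James Brown", "Canada"]))
```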
To thoroughly assess the reasoning abilities of LLMs, in addition to the zero-shot prompt, we have also developed few-shot, zero-shot CoT, and few-shot CoT prompts. The details of these prompts are shown in Appendix E.1.

Findings. The results of the discriminative assessment are shown in §5.1. From the results, we can conclude that LLMs possess enough knowledge to identify factual errors as well as reasoning-path relatedness, but have limitations in considering the coherence of reasoning paths and in CoT generation. Therefore, we propose the generative evaluation to further assess the faithfulness of CoT reasoning in LLMs' generation.

3.2 Generative Evaluation

The generative evaluation aims to assess the faithfulness of the CoT reasoning process generated by LLMs. Our main idea is to ground LLMs' CoT in a KG and verify whether it forms a valid path. To address the challenge of evaluating unstructured CoT, we carefully design a prompting strategy to instruct LLMs to output the CoT in a structured format. This enables us to parse the LLM's response into a structured reasoning path, which can then be validated against the KG. The example prompts and structured CoT output are provided in Appendix E.2.

[1] The ground-truth reasoning paths are constructed from the SPARQL queries provided in the datasets. The detailed construction is shown in Appendix A.

[Figure 2: The overall framework for evaluating the CoT reasoning of LLMs, with two evaluation modules: in the discriminative evaluation, the LLM judges ground-truth reasoning paths (YES/NO); in the generative evaluation, the LLM's generated CoT is grounded through triple retrieval and reasoning path construction against the KG. The orange and red rectangles denote the entities mentioned in the question and answer, respectively.]

Specifically, given a generated CoT response $S$ for question $q$, we first construct a reasoning path $\hat{P}$ by retrieving triples from the KGs. Then, we evaluate the validity of the reasoning path by checking whether it coherently connects the question and answer entities in the KGs. The details of these steps are explained in the following subsections.

3.2.1 Reasoning Path Construction

Given a CoT response $S = \{s_1, s_2, \dots, s_n\}$, we first retrieve a triple[2] $T = (e_h, r, e_t)$ for each step $s_i$ in the CoT response. The retrieved triple is the structural representation of the reasoning step and is used to construct the reasoning path for evaluation. Previous works usually retrieve triples by identifying the entities and relations mentioned in the sentence and linking them to the KGs (Lan et al., 2021; Wang et al., 2021). However, this process is not scalable to large KGs. Inspired by a recent fact retrieval method (Baek et al., 2023), we represent the reasoning step $s$ and the triples in a unified embedding space and retrieve the triple $T$ based on their embedding similarity.
For all the triples in a KG $\mathcal{G}$, we verbalize each triple into a sentence by concatenating its entities and relation, $x = $ "$e_h$ $r$ $e_t$.". Then, we use the Sentence-BERT model (Reimers and Gurevych, 2019) to obtain its embedding $h_T = E(x)$. These embeddings are constructed in advance and saved in a vector database for efficient retrieval. Similarly, the embedding of a given reasoning step $s$ is computed as $h_s = E(s)$. We then retrieve the top-$K$ triples from the KG by calculating the embedding similarity between $h_s$ and $h_T$ as
$$\tau_i = f(h_s, h_{T_i}), \quad T_i = (e_h, r, e_t) \in \mathcal{G}, \qquad (1)$$
where $\tau_i$ denotes the similarity score of triple $T_i$, and $f(\cdot, \cdot)$ is a non-parametric scoring function that measures the similarity between two embeddings. We adopt cosine similarity as the scoring function. The embedding-based retrieval method may lead to the omission of entities mentioned in the reasoning step. To solve this problem, we also take into account the presence of the head and tail entities in the reasoning step in the scoring function. The final score for each retrieved triple is calculated as
$$\tilde{\tau}_i = \frac{\tau_i + \epsilon_h + \epsilon_t}{3}, \qquad (2)$$
where $\epsilon_h$ and $\epsilon_t$ represent the fuzzy-match ratios of the head and tail entities in the reasoning step, which range from 0 to 1, where 0 denotes no occurrence and 1 denotes a complete match. The overall retrieval process is presented in Algorithm 1 in Appendix B. Thus, we obtain a set of triples $\mathcal{T} = \{T_1, T_2, \dots, T_n\}$ for the CoT response $S$. Then, we construct the reasoning path $\hat{P}$ by connecting the triples in $\mathcal{T}$.

[2] We noticed that in almost all cases in our experiments, a CoT step corresponds to one KG triple. The extension to multiple triples per CoT step is left for future work.

3.2.2 Reasoning Path Evaluation

By evaluating the validity of constructed reasoning paths, we can assess the faithfulness of the CoT reasoning process generated by LLMs. Specifically, we evaluate the validity of the constructed path $\hat{P}$ from three aspects:
• Factual correctness: $\hat{P}$ contains a factual error if the similarity score $\tilde{\tau}_i$ of any retrieved triple is below a factual threshold $\sigma$.
• Coherence: given a factually correct path, it is incoherent if there exists a step whose premise is not the conclusion of the previous step.
• Final answer correctness: given a factually correct and coherent path, whether the final answer is correct, i.e., matched with the ground truths.

Validity of Reasoning Path. The prerequisite and conclusion at each reasoning step are considered the head and tail entities, respectively. If the reasoning path $\hat{P}$ connects the question and answer entities in the KG, we conclude that it is a valid path. The detailed algorithm is shown in Algorithm 2 in Appendix B.

Fine-grained Assessment. In addition to the binary evaluation, we also report the minimum edit distance between the constructed reasoning path $\hat{P}$ and the ground-truth path $P^*$. This serves as a fine-grained assessment of CoT reasoning capability. We adopt a widely used sequence alignment algorithm, the Needleman–Wunsch algorithm (Needleman and Wunsch, 1970), to obtain continuous alignment scores (i.e., edit distances), which indicate how close the constructed reasoning path is to the ground-truth reasoning paths. If multiple ground-truth paths exist, we report the score against the one with the highest match rate. The detailed algorithm is shown in Algorithm 3 in Appendix B.
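A minimal sketch of the retrieval scoring in Eqs. (1)–(2) follows, using the sentence-transformers library and the thefuzz partial-ratio matcher that the implementation section (§4) cites. The checkpoint name and helper names are our own illustrative assumptions, not the authors' released code.

```python
# Sketch of Eq. (1)-(2): rank candidate triples for one CoT step by
# (cosine similarity + head fuzz ratio + tail fuzz ratio) / 3.
from sentence_transformers import SentenceTransformer, util
from thefuzz import fuzz

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in Sentence-BERT

def score_triples(step: str, triples: list[tuple[str, str, str]]):
    h_s = encoder.encode(step, convert_to_tensor=True)
    sentences = [f"{h} {r} {t}." for h, r, t in triples]   # verbalized triples
    h_T = encoder.encode(sentences, convert_to_tensor=True)
    tau = util.cos_sim(h_s, h_T)[0]                        # Eq. (1)
    scored = []
    for (head, rel, tail), tau_i in zip(triples, tau):
        eps_h = fuzz.partial_ratio(head, step) / 100.0     # in [0, 1]
        eps_t = fuzz.partial_ratio(tail, step) / 100.0
        final = (float(tau_i) + eps_h + eps_t) / 3         # Eq. (2)
        scored.append((final, (head, rel, tail)))
    return sorted(scored, reverse=True)                    # best triple first
```

A production version would pre-encode all KG triples once and store them in a vector index (the paper uses FAISS) rather than re-encoding candidates per step.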
4 Experimental Conditions

We use the proposed evaluations to understand the CoT reasoning process of state-of-the-art LLMs on complex question-answering (QA) tasks, which require performing multi-step reasoning to answer the questions. Through this analysis, we seek to answer the following research questions (RQs):
• RQ1: Do LLMs have the knowledge of faithful reasoning? We leverage the discriminative evaluation to test whether LLMs can identify valid reasoning paths. This evaluation focuses on assessing LLMs' knowledge about faithful reasoning.
• RQ2: Can LLMs express such knowledge to generate faithful reasoning? Utilizing our generative evaluation framework, we assess the capacity of LLMs to produce coherent and correct reasoning. We also investigate various factors, such as model size and prompting strategies, to understand their impact on reasoning capability.

Table 1: Statistics of datasets.
Dataset | #Test | #2-hop | #≥3-hop
CWQ | 1421 | 1386 | 35
GrailQA | 1813 | 1528 | 285

Dataset. We conduct experiments on two QA datasets: Complex WebQuestions (CWQ) (Talmor and Berant, 2018) and GrailQA (Gu et al.), which contain up to 4-hop questions. To evaluate multi-step reasoning capability, we filter out single-hop questions in the test set. Table 1 shows the statistics of the filtered test sets. The generated reasoning paths are validated against Freebase (Bollacker et al., 2008), an open knowledge graph containing around 88M entities, 20K relations, and 126M triples. More details can be found in Appendix C.1.

Large Language Models. We evaluate the reasoning capability of several LLMs with instruction-following capability at different sizes, including Mistral (7B) (Jiang et al., 2023), Qwen (7B, 14B) (Bai et al., 2023), Vicuna (33B) (Chiang et al., 2023), LLaMA2-Chat (70B) (Touvron et al., 2023), and ChatGPT (175B) (OpenAI, 2023). The details of the model versions are available in Appendix C.2. We set the temperature to 0.7 and top-p to 0.9 for generation in all models.

Prompting Strategies. We experiment with multiple CoT prompting strategies, including:
• Few-shot CoT: five examples with a structured CoT followed by the answer are added to the prompt (Figure 11 in Appendix E.2).
• Few-shot CoT with planning (CoT-Plan): we also explore the ability of LLMs to plan and decompose the relations required to reach the answer before verbalizing the CoT reasoning. In particular, we add the ground-truth plan (Luo et al., 2023a) (i.e., a relation path pointing to the answers) to each example. An example prompt is given in Figure 12 in Appendix E.2.
• Few-shot CoT with self-consistency (CoT-SC): beyond conventional CoT prompting, we also experiment with Self-Consistency (Wang et al., 2023c), a more sophisticated method designed to mitigate inconsistencies in CoT reasoning by aggregating the final answer through majority votes. In our evaluation, we sample four outputs and report the maximum performance across all the outputs.

Table 2: Discriminative evaluation results of different LLMs on CWQ, using binary accuracy as the metric.
LLMs | Size | Zero-shot | Zero-shot CoT | Few-shot | Few-shot CoT
Mistral | 7B | 87.59 | 89.88 | 56.91 | 69.98
Qwen | 7B | 74.76 | 76.13 | 79.64 | 73.23
Qwen | 14B | 88.59 | 88.86 | 88.81 | 75.87
Vicuna-1.5 | 33B | 92.79 | 92.88 | 84.91 | 67.05
LLaMA2-Chat | 70B | 77.96 | 80.71 | 56.99 | 47.76
ChatGPT | 175B | 89.86 | 90.17 | 87.09 | 80.15
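To illustrate the CoT-SC bullet above, here is a minimal sketch of self-consistency aggregation by majority vote (the scheme of Wang et al., 2023c); the generate_cot callable and function name are our own assumptions, and note that the paper itself reports the maximum performance across the four sampled outputs rather than this plain vote.

```python
# Hedged sketch of self-consistency: sample several CoT outputs and
# majority-vote the final answer. `generate_cot(question)` is assumed to
# return a (reasoning_steps, answer) pair from any LLM backend.
from collections import Counter

def self_consistency(question: str, generate_cot, n_samples: int = 4):
    outputs = [generate_cot(question) for _ in range(n_samples)]
    answers = Counter(ans for _, ans in outputs)
    best_answer, _ = answers.most_common(1)[0]
    # Keep one CoT that supports the majority answer for later grounding.
    supporting = next(o for o in outputs if o[1] == best_answer)
    return best_answer, supporting
```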
Evaluation Framework Implementation. Given a question from the benchmark, in the discriminative evaluation we construct invalid paths by randomly perturbing the ground-truth paths extracted from SPARQL (Kumar et al., 2019); the implementation details are described in Appendix A. In the generative mode, we use FAISS (Johnson et al., 2019) as the vector database, Sentence-BERT (Reimers and Gurevych, 2019) as the embedding model, and partial-ratio fuzzy matching (https://github.com/seatgeek/thefuzz) as the entity scoring function. We retrieve the top-10 triples and set the factual threshold $\sigma$ to 0.7.

Evaluation Metrics. For the discriminative evaluation, we report the accuracy of detecting valid reasoning paths from invalid ones. For the generative evaluation, we report the CoT reasoning performance of LLMs with the following metrics: (i) final answer accuracy, (ii) faithful reasoning score, and (iii) the minimum edit distance between the generated and ground-truth paths. As different LLMs vary in instruction-following capabilities and guardrail implementations, we may encounter responses with an unstructured format or abstained answers (Luo et al., 2023b). Therefore, we classify LLMs' responses into four groups: abstained (A), unstructured (U), faithful reasoning (FR), and unfaithful reasoning (UR). We use the F1 score to measure the faithfulness of CoT reasoning, where precision and recall are calculated as $P = \frac{FR}{FR+UR}$ and $R = \frac{FR}{FR+UR+A+U}$. Detailed implementations are described in Appendix C.3. Results for precision and recall are presented in Appendices D.2 and D.3.

5 Main Results

5.1 Discriminative Evaluation

Finding 1: LLMs possess knowledge of valid reasoning. The overall discriminative evaluation results are shown in Table 2. Based on the results, it is evident that all LLMs achieve a high level of accuracy in distinguishing valid reasoning paths. This indicates that LLMs, which are pre-trained on large-scale corpora, already possess certain knowledge to perform reasoning tasks effectively. However, when using few-shot prompts, there is a noticeable decrease in performance for Mistral and LLaMA2. This could be attributed to the sensitivity of these particular LLMs to the provided few-shot examples. The detailed results for each perturbation type are illustrated in Appendix D.1, where the accuracy on incoherent paths is lower than on the other types. This can be attributed to the fact that LLMs cannot capture structural information in the context (Guo et al., 2023). Moreover, few-shot CoT fails to improve the accuracy of identifying valid paths. We speculate that LLMs are prone to hallucination during CoT generation, resulting in incorrect predictions. We can conclude that despite having enough reasoning knowledge, LLMs still face limitations in conducting faithful reasoning during CoT generation.

5.2 Generative Evaluation

Table 3 shows the performance of LLMs in the generative evaluation mode. Overall, ChatGPT demonstrates superior performance in terms of both final answer accuracy and faithfulness of the reasoning. Surprisingly, Mistral 7B, despite being the smallest model, exhibits competitive performance comparable to larger models within the <50B range. Furthermore, enhancing prompting strategies with planning (CoT-Plan) and self-consistency (CoT-SC) results in substantial improvements across all LLMs, especially for smaller models.
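The response bucketing above maps to a short metric routine; a minimal sketch follows, with illustrative counts and a function name of our own choosing.

```python
# Minimal sketch of the faithfulness F1 defined above, over response
# buckets: FR (faithful), UR (unfaithful), A (abstained), U (unstructured).
def faithfulness_f1(counts: dict) -> float:
    fr, ur = counts.get("FR", 0), counts.get("UR", 0)
    a, u = counts.get("A", 0), counts.get("U", 0)
    precision = fr / (fr + ur) if fr + ur else 0.0
    recall = fr / (fr + ur + a + u) if fr + ur + a + u else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(faithfulness_f1({"FR": 37, "UR": 48, "A": 5, "U": 10}))  # example counts
```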
Finding 2: The correct final answer may not necessarily result from faithful reasoning. As shown in Table 3, there is a notable discrepancy between the accuracy of the final answer and that of the reasoning process. The average gap is 15.76% for CWQ and 16.44% for GrailQA. While advanced prompting may improve answer and reasoning accuracy, this performance gap mostly stays consistent. Interestingly, Vicuna achieves reasonable answer accuracy but has the lowest reasoning performance of all the models, suggesting its reasoning ability is inferior even compared to small models like Mistral and Qwen-7B. This finding highlights the inadequacy of relying on final answer accuracy as a proxy to gauge reasoning ability.

Table 3: Generative evaluation performance of different LLMs on the CWQ and GrailQA datasets. F1 scores for the final answer and for reasoning faithfulness are reported in the Answer and Reasoning columns, respectively; Gap denotes the difference between Answer and Reasoning, and Edit Dist. denotes the edit distance metric described in Appendix C.3. Values in parentheses denote the improvement over few-shot CoT. Columns per dataset: Answer↑ | Reasoning↑ | Gap↓ | Edit Dist.↓.

Few-shot CoT — CWQ | GrailQA
Mistral 7B | 36.45 | 25.18 | 11.27 | 69.86 | 16.35 | 2.12 | 14.23 | 94.03
Qwen 7B | 32.52 | 19.38 | 13.14 | 76.78 | 13.35 | 1.63 | 11.72 | 94.69
Qwen 14B | 40.39 | 27.38 | 13.01 | 74.49 | 18.83 | 2.13 | 16.70 | 92.90
Vicuna 33B | 44.50 | 15.92 | 28.58 | 74.60 | 18.26 | 0.95 | 17.31 | 95.39
LLaMA2 70B | 49.80 | 33.98 | 15.82 | 62.23 | 22.05 | 2.88 | 19.17 | 92.58
ChatGPT 175B | 49.85 | 37.13 | 12.72 | 57.94 | 23.69 | 4.17 | 19.52 | 90.13

Few-shot CoT-Plan — CWQ | GrailQA
Mistral 7B | 37.14 (+0.69) | 25.69 (+0.51) | 11.45 | 70.01 | 17.30 (+0.95) | 3.36 (+1.24) | 13.94 | 94.46
Qwen 7B | 35.35 (+2.83) | 21.57 (+2.19) | 13.86 | 74.74 | 13.74 (+0.39) | 2.06 (+0.43) | 11.68 | 94.61
Qwen 14B | 40.86 (+0.47) | 27.97 (+0.59) | 12.02 | 73.68 | 19.00 (+0.17) | 2.48 (+0.35) | 15.43 | 92.58
Vicuna 33B | 48.80 (+4.30) | 20.24 (+4.32) | 28.56 | 63.93 | 20.84 (+2.58) | 2.09 (+1.14) | 18.75 | 92.12
LLaMA2 70B | 50.26 (+0.46) | 37.08 (+3.10) | 13.18 | 57.81 | 22.35 (+0.30) | 3.29 (+0.41) | 19.06 | 89.61
ChatGPT 175B | 51.74 (+1.89) | 38.60 (+1.47) | 13.14 | 56.61 | 24.21 (+0.52) | 4.32 (+0.15) | 19.11 | 89.84

Few-shot CoT-SC — CWQ | GrailQA
Mistral 7B | 40.86 (+4.41) | 30.38 (+5.20) | 10.48 | 65.21 | 16.70 (+0.35) | 2.60 (+0.48) | 14.10 | 94.10
Qwen 7B | 34.75 (+2.23) | 23.21 (+3.83) | 15.39 | 74.24 | 14.00 (+0.65) | 2.32 (+0.69) | 11.68 | 94.35
Qwen 14B | 41.01 (+0.62) | 29.26 (+1.88) | 11.75 | 73.21 | 21.00 (+2.17) | 3.24 (+1.11) | 17.76 | 92.50
Vicuna 33B | 45.43 (+0.93) | 21.32 (+5.40) | 25.36 | 66.17 | 18.92 (+0.66) | 1.88 (+0.93) | 17.04 | 94.23
LLaMA2 70B | 50.42 (+0.62) | 37.00 (+3.02) | 13.42 | 58.55 | 22.35 (+0.30) | 3.29 (+0.41) | 19.06 | 91.50
ChatGPT 175B | 51.74 (+1.89) | 40.73 (+3.60) | 11.01 | 52.57 | 24.97 (+1.28) | 4.86 (+0.69) | 20.11 | 89.22

Finding 3: The reasoning gap worsens as the model size increases. It can be seen that reasoning performance increases gradually with model size, demonstrating the reasoning ability of bigger models. However, the gap between answer and reasoning performance also gradually increases with model size and answer correctness. While LLaMA2-70B and ChatGPT rank first in performance, their gaps are also the highest. Meanwhile, the smallest models, Mistral-7B and Qwen-7B, hold the lowest gaps on CWQ and GrailQA, respectively. We speculate that larger LLMs may grasp the question context better or have more knowledge with which to provide the correct answer directly, without performing reasoning.

Finding 4: Better prompting strategies can improve both the answer and reasoning accuracy. The use of enhanced prompting strategies such as CoT-Plan and CoT-SC leads to improvements in both answer and reasoning accuracy across most LLMs.
However, the gap between them remains consistent regardless of the prompting strategy.

5.3 Analysis

Reasoning Errors. We present a detailed breakdown of the reasoning errors in Figure 3. The results reveal that factual errors account for the majority of errors, indicating that LLMs tend to generate incorrect information during reasoning. As GrailQA is a more complex dataset, LLMs have a higher percentage of coherence errors on GrailQA than on CWQ. Interestingly, even when the generated CoT paths are free from factual and coherence errors, LLMs may fail to produce correct answers, as evidenced by a substantial number of answer errors. Error case examples are shown in Appendix D.4.

[Figure 3: The breakdown of reasoning error types (factual errors, coherence errors, and answer errors) for each model on CWQ and GrailQA.]

Table 4: Precision, Recall, and F1 score of the framework on the human-annotated dataset.
Reasoning Type | Precision | Recall | F1
Faithful Reasoning | 95.42 | 83.88 | 89.28
Unfaithful Reasoning | 86.12 | 96.01 | 90.80

Ablation Study. To ensure the effectiveness of our generative evaluation framework, we randomly select 100 CoT responses generated by ChatGPT on the CWQ dataset and ask two human experts to evaluate the constructed reasoning paths. The results presented in Table 4 demonstrate that our method can accurately detect both faithful and unfaithful reasoning paths. This further confirms the efficacy of our approach in evaluating CoT reasoning.

Parsing Errors. While we carefully design prompts to instruct LLMs to generate a structured CoT, there are still corner cases where LLMs produce unstructured or abstention responses due to their unpredictable behaviors. As reported in Appendix D.2, the unstructured and abstention rates are less than 20% on the CWQ dataset and can be mitigated with CoT-Plan and CoT-SC.

6 Related Work

Reasoning with LLMs. While LLMs have proven to offer a variety of reasoning abilities, they still tend to hallucinate facts, making them unreliable and imperfect (Qiao et al., 2022). Several studies have concentrated on improving their reasoning capacity through prompting (Wang et al., 2023c; Ye and Durrett, 2022; Wiegreffe et al., 2022). CoT (Wei et al., 2022) is a prompting approach that has demonstrated notable improvements in reasoning performance. A significant enhancement over CoT, self-consistency (Wang et al., 2023c), is a scheme where multiple CoTs are generated and the most consistent self-generated answer is selected. Recently, self-consistency was extended with Tree of Thoughts (ToT) (Yao et al., 2023), which models the reasoning process with a tree. ToT allows LLMs to interactively backtrack and explore alternative chains of reasoning, avoiding getting stuck on a single line of incorrect reasoning. Ye and Durrett (2022) mitigate the effect of unreliable rationales by calibrating the prediction probability based on the factuality of the CoT. Wiegreffe et al. (2022) train a Seq2Seq model to filter out unacceptable rationales. Liu et al. (2021) utilize GPT-3 (Brown et al., 2020) with few-shot prompting to generate knowledge with which to prompt downstream language models.
Reasoning Evaluation. Evaluation of the reasoning ability of LLMs has been undertaken for two main purposes: to enhance the reasoning ability of LLMs (Lyu et al., 2023; Li et al., 2023; Tyen et al., 2023; Chen et al., 2023) and to quantify the reasoning ability of LLMs (Wang et al., 2023b; Atanasova et al., 2023). For instance, Huang and Chang (2023) gauge the reasoning ability of LLMs by assessing their performance on reasoning benchmarks such as GSM8K and BIG-bench for downstream tasks. However, this evaluation strategy is unable to offer a direct assessment of the reasoning steps. Tyen et al. (2023) release the BIG-Bench Mistake dataset, which includes logical errors in CoT reasoning steps. Using this benchmark, Tyen et al. (2023) and Chen et al. (2023) illustrate the inability of state-of-the-art LLMs to identify mistakes and reasoning errors, even in unequivocal cases.

7 Conclusion

We propose an evaluation framework to understand the CoT reasoning capability of LLMs beyond the sole assessment of final answer accuracy. With the help of a KG and a careful prompting strategy, we can turn the unstructured CoT into a structured format for automatic evaluation. Our framework consists of two evaluation modules: (i) a discriminative module that isolates the effects of different reasoning errors to verify LLMs' knowledge about reasoning, and (ii) a generative module to assess the generated CoT reasoning. While LLMs showcase remarkable capabilities in generating correct answers, our study emphasizes the need for more nuanced evaluations of their reasoning processes. Addressing the gap between final answer and reasoning accuracy remains a critical area for further exploration in enhancing the true reasoning capabilities and interpretability of LLMs.

Limitation

The limitations of our work include:
• Assuming that a CoT step corresponds to one KG triple. LLMs sometimes return a sentence containing more than one relation, so a comprehensive evaluation would need to find and assess all possible candidate triples. This can be tackled by returning the top-K candidates from Algorithm 1 together with a dynamic programming algorithm extended from Algorithm 2.
• Assuming a complete knowledge graph. In this work, we use knowledge graphs (KGs) to retrieve facts and the connections between facts. However, knowledge graphs are often incomplete and may lack facts implicitly required for faithful reasoning. Thus, it can be inadequate to evaluate LLMs with existing KGs. In the future, we plan to incorporate knowledge graph completion methods to improve the comprehensiveness of the retrieval algorithm.
• Checking a single reasoning path for a final answer. In reality, a number of questions have more than one correct answer, each corresponding to a different reasoning path. It is possible that an LLM returns only one correct reasoning path while the others are incorrect. This is an interesting issue to solve in the future."
},
{
"url": "http://arxiv.org/abs/2403.06832v2",
"title": "The Power of Noise: Toward a Unified Multi-modal Knowledge Graph Representation Framework",
"abstract": "The advancement of Multi-modal Pre-training highlights the necessity for a\nrobust Multi-Modal Knowledge Graph (MMKG) representation learning framework.\nThis framework is crucial for integrating structured knowledge into multi-modal\nLarge Language Models (LLMs) at scale, aiming to alleviate issues like\nknowledge misconceptions and multi-modal hallucinations. In this work, to\nevaluate models' ability to accurately embed entities within MMKGs, we focus on\ntwo widely researched tasks: Multi-modal Knowledge Graph Completion (MKGC) and\nMulti-modal Entity Alignment (MMEA). Building on this foundation, we propose a\nnovel SNAG method that utilizes a Transformer-based architecture equipped with\nmodality-level noise masking for the robust integration of multi-modal entity\nfeatures in KGs. By incorporating specific training objectives for both MKGC\nand MMEA, our approach achieves SOTA performance across a total of ten datasets\n(three for MKGC and seven for MEMA), demonstrating its robustness and\nversatility. Besides, SNAG can not only function as a standalone model but also\nenhance other existing methods, providing stable performance improvements. Our\ncode and data are available at: https://github.com/zjukg/SNAG.",
"authors": "Zhuo Chen, Yin Fang, Yichi Zhang, Lingbing Guo, Jiaoyan Chen, Huajun Chen, Wen Zhang",
"published": "2024-03-11",
"updated": "2024-03-20",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "Knowledge AND Graph",
"gt": "The Power of Noise: Toward a Unified Multi-modal Knowledge Graph Representation Framework",
"main_content": "INTRODUCTION The exploration of multi-modal dimensions within Knowledge Graphs (KGs) has become a pivotal force in the semantic web domain, catalyzing advancements in various artificial intelligence applications. With the evolution of Large language Models (LLMs) and Multi-modal Pre-training, the imperative for a robust and comprehensive Multi-Modal Knowledge Graph (MMKG) representation learning framework has become apparent. Such a framework is essential for the effective integration of structured knowledge into multi-modal LLMs at scale, addressing prevalent challenges like knowledge misconceptions and multi-modal hallucination. Current efforts to integrate MMKG with pre-training are scarce. Triple-level methods [38] treat triples as standalone knowledge units, embedding the (head entity, relationship, tail entity) structure \u2020Corresponding author. Figure 1: While existing works design models to refuse and combat noise in MMKGs, our SnAg accepts and deliberately incorporates noise to adapt to the noisy real-world scenarios. into Visual Language Model\u2019s space. On the other hand, Graphlevel methods [18, 26] capitalize on the structural connections among entities in a global MMKG. By selectively gathering multimodal neighbor nodes around each entity featured in the training corpus, they apply techniques such as Graph Neural Networks (GNNs) or concatenation to effectively incorporate knowledge during the pre-training process. However, these approaches predominantly view MMKG from a traditional KG perspective, not fully separating the MMKG representation process from downstream or pre-training tasks. In this work, we revisit MMKG representation learning uniquely from the MMKG perspective itself, employing two tasks: Multimodal Knowledge Graph Completion (MKGC) and Multi-modal Entity Alignment (MMEA) to validate our method. Specifically, we introduce a unified Transformer-based framework (SnAg) that achieves SOTA results across an array of ten datasets by simply aligning our framework with Task-Specific Training targets. SnAg stands out for its lightweight design, efficiency, and adaptability, incorporating components like Entity-Level Modality Interaction that can be seamlessly upgraded with advanced technologies. A key aspect of our method is the Gauss Modality Noise Masking module, whose design sharply contrasts with previous MMKGrelated efforts that primarily focus on designing methods to refuse and combat noise in MMKGs. In contrast, as shown in Figure 1, our SnAg accepts and deliberately incorporates noise, adapting to the noisy real-world scenarios. This strategy can significantly boost performance across various MKGC and MMEA approaches. arXiv:2403.06832v2 [cs.CL] 20 Mar 2024 \fConference\u201917, July 2017, Washington, DC, USA Zhuo Chen et al. Importantly, as the first MMKG effort to concurrently support both MKGC and MMEA tasks, this work demonstrates its adaptability of our strategy, highlighting its potential to interface with more training tasks in the future and paving the way for further research in MMKG Pre-training and Multi-modal Knowledge Injection. 2 RELATED WORK Typically, a KG is considered multi-modal when it contains knowledge symbols expressed across various modalities, including, but not limited to, text, images, sound, or video [12]. Current research primarily concentrates on the visual modality, assuming that other modalities can be processed similarly. 
2.1 MMKG Representation The current mainstream approaches to MMKG representation learning, which focus on integrating entity modalities within MMKGs, can broadly be classified into two distinct categories: (i) Late Fusion methods focus on the interactions and weighting of different modalities, typically employing techniques like Summation, Concatenation, Multi-Layer Perceptrons (MLPs), or Gating Mechanisms to aggregate features just before generating outputs. For example, MKGRL-MS [52] crafts distinct single-modal embeddings, using multi-head self-attention to evaluate the contribution of each modality to the semantic composition and summing the weighted multi-modal features for MMKG entity representation. MMKRL [36] learns cross-modal embeddings in a unified translational semantic space, merging modality embeddings for each entity through concatenation. DuMF [29] adopts a dual-track strategy, utilizing a bilinear layer for feature projection and an attention block for modality preference learning in each track, with a gate network to synthesize these features into a unified representation. (ii) Early Fusion methods integrate multi-modal feature at an initial stage, fostering deeper interaction between modalities that\u2019s essential for complex reasoning. This fosters a unified and potent entity representation, enhancing their compatibility in the process of integrating with other models. For example, CMGNN [16] first normalizes entity modalities into a unified embedding using an MLP, then refines them by contrasting with perturbed negative samples. MMRotatH [56] utilizes a gated encoder to merge textual and structural data, filtering irrelevant information within a rotational dynamics-based KGE framework. Recent studies [8, 23, 31] utilize Pre-trained Language Models (PLMs) like BERT and Vision Transformers like ViT for multi-modal data integration. They format graph structures, text, and images into sequences or dense embeddings compatible with PLMs, thereby utilizing the PLMs\u2019 reasoning capabilities and the knowledge embedded in their parameters to support downstream tasks. In this paper, we propose a Transformer-based method SnAg that introduce fine-grained, entity-level modality preference to enhance entity representation. This strategy combines the benefits of Early Fusion, with its effective modality interaction, while also aligning with the Late Fusion modality integration paradigm. Furthermore, our model is lightweight, boasting a significantly lower parameter count compared to traditional PLM-based methods, which offers increased flexibility and wider applicability. 2.2 Multi-Modal Knowledge Graph Completion Multi-modal Knowledge Graph Completion (MKGC) is crucial for inferring missing triples in existing MMKGs, involving three subtasks: Entity Prediction, Relation Prediction, and Triple Classification. Currently, most research in MKGC focuses on Entity Prediction, also widely recognized as Link Prediction, with two main methods emerging: Embedding-based Approaches build on conventional Knowledge Graph Embedding (KGE) methods [2, 45], adapted to integrate multi-modal data, enhancing entity embeddings. (i) Modality Fusion methods [21, 23, 32, 52, 57] integrate multi-modal and structural embeddings to assess triple plausibility. Early efforts, like IKRL [58], utilize multiple TransE-based scoring functions [2] for modal interaction. RSME [53] employs gates for selective modal information integration. 
OTKGE [3] leverages optimal transport for fusion, while CMGNN [17] implements a multi-modal GNN with cross-modal contrastive learning. (ii) Modality Ensemble methods train distinct models per modality and merge their outputs for prediction. For example, MoSE [67] utilizes structural, textual, and visual data to train three KGC models and employs ensemble strategies for joint predictions. Similarly, IMF [27] proposes an interactive model that achieves modal disentanglement and entanglement to make robust predictions. (iii) Modality-aware Negative Sampling methods boost the differentiation between correct and erroneous triples by incorporating multi-modal context for superior negative sample selection. MMKRL [36] introduces adversarial training to MKGC, adding perturbations to modal embeddings. Following this, VBKGC [66] and MANS [62] develop fine-grained visual negative sampling to better align visual with structural embeddings for more nuanced comparison training. MMRNS [59] enhances this with relation-based sample selection.

Finetune-based Approaches exploit the world understanding capabilities of pre-trained Transformer models like BERT [15] and VisualBERT [25] for MKGC. These approaches reformat MMKG triples as token sequences for PLM processing [30], often framing KGC as a classification task. For example, MKGformer [8] integrates multi-modal fusion at multiple levels, treating MKGC as a Masked Language Modeling (MLM) task, while SGMPT [31] extends this by incorporating structural data and a dual-strategy fusion module.

2.3 Multi-Modal Entity Alignment Entity Alignment (EA) is pivotal for KG integration, aiming to identify identical entities across different KGs by leveraging relational, attributive, and literal (surface) features. Multi-Modal Entity Alignment (MMEA) enhances this process by incorporating visual data, thereby improving alignment accuracy [5, 35]. EVA [34] applies an attention mechanism to modulate the importance of each modality and introduces an unsupervised approach that utilizes visual similarities for alignment, reducing reliance on gold-standard labels. MSNEA [6] leverages visual cues to guide relational feature learning. MCLEA [33] employs KL divergence to mitigate the modality distribution gap between uni-modal and joint embeddings. PathFusion [68] and ASGEA [37] combine information from different modalities using the modality similarity or alignment path as an information carrier. MEAformer [9] adjusts mutual modality preferences dynamically for entity-level modality fusion, addressing inconsistencies in entities' surrounding modalities.

[Figure 2: The overall framework of SnAg.]

Despite nearly five years of development, tasks like MMEA and MKGC have evolved independently within the MMKG community, without a unified representation learning framework to address both. With the advancement of multi-modal LLMs, it is timely to reconsider these challenges from a broader perspective, aiming for a holistic framework that addresses both tasks and delivers meaningful multi-modal entity representations.

3 METHOD 3.1 Preliminaries Drawing on the categorization proposed in [69], we distinguish between two types of MMKGs: A-MMKG and N-MMKG. In A-MMKGs, images are attached to entities as attributes, while in N-MMKGs, images are treated as standalone entities interconnected with others.
A-MMKGs are more prevalent in current research and applications within the semantic web community due to their accessibility and similarity to traditional KGs [12]. Therefore, this paper will focus exclusively on A-MMKG, unless stated otherwise.

Definition 1. Multi-modal Knowledge Graph. A KG is defined as $\mathcal{G} = \{\mathcal{E}, \mathcal{R}, \mathcal{A}, \mathcal{T}, \mathcal{V}\}$, where $\mathcal{T} = \{\mathcal{T}^A, \mathcal{T}^R\}$ with $\mathcal{T}^R = \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ and $\mathcal{T}^A = \mathcal{E} \times \mathcal{A} \times \mathcal{V}$. An MMKG utilizes multi-modal data (e.g., images) as specific attribute values for entities or concepts, with $\mathcal{T}^A = \mathcal{E} \times \mathcal{A} \times (\mathcal{V}^{KG} \cup \mathcal{V}^{MM})$, where $\mathcal{V}^{KG}$ and $\mathcal{V}^{MM}$ are the values of KG and multi-modal data, respectively. For instance, in an MMKG, an attribute triple $(e, a, v) \in \mathcal{T}^A$ might associate an image $v$ with an entity $e$ via an attribute $a$, typically denoted as hasImage.

Definition 2. MMKG Completion. The objective of MKGC is to augment the set of relational triples $\mathcal{T}^R$ within MMKGs by identifying and adding missing relational triples among existing entities and relations, potentially utilizing the attribute triples $\mathcal{T}^A$. Specifically, our focus is on Entity Prediction, which involves determining the missing head or tail entity in queries of the form $(head, r, ?)$ or $(?, r, tail)$.

Definition 3. Multi-modal Entity Alignment. Given two aligned MMKGs $\mathcal{G}_1$ and $\mathcal{G}_2$, the objective of MMEA is to identify entity pairs $(e_i^1, e_i^2)$ from $\mathcal{E}_1$ and $\mathcal{E}_2$, respectively, that correspond to the same real-world entity $e_i$. This process utilizes a set of pre-aligned entity pairs, divided into a training set (seed alignments $\mathcal{S}$) and a testing set $\mathcal{S}_{te}$, following a pre-defined seed alignment ratio $R_{sa} = |\mathcal{S}| / |\mathcal{S} \cup \mathcal{S}_{te}|$. The modalities associated with an entity are denoted by $\mathcal{M} = \{g, r, a, v, s\}$, signifying the graph structure, relation, attribute, vision, and surface (i.e., entity name) modalities, respectively.

3.2 Multi-Modal Knowledge Embedding

3.2.1 Graph Structure Embedding. Let $x_i^g \in \mathbb{R}^d$ denote the graph embedding of entity $e_i$, which is randomly initialized and learnable, with $d$ representing the predetermined hidden dimension. In MKGC, we follow prior work [64] and set $h_i^g = FC_g(W_g, x_i^g)$, where $FC_g$ is a KG-specific fully connected layer applied to $x_i^g$ with weights $W_g$. For MMEA, we follow [9, 10] and utilize the Graph Attention Network (GAT) [50], configured with two attention heads and two layers, to capture the structural information of $\mathcal{G}$. This is facilitated by a diagonal weight matrix [60] $W_g \in \mathbb{R}^{d \times d}$ for linear transformation. The structure embedding is thus defined as $h_i^g = \mathrm{GAT}(W_g, M_g; x_i^g)$, where $M_g$ refers to the graph's adjacency matrix.

3.2.2 Relation and Attribute Embedding.
Our study for MKGC, consistent with domain practices [8, 27, 53, 56, 67], focuses exclusively on relation triples. These are represented by learnable embeddings $x_j^r \in \mathbb{R}^{d/2}$, where $j$ uniquely identifies each relation $r_j$, distinguishing it from entity indices. We exclude attribute triples to maintain consistency with methodological practices in the field. The choice of dimensionality $d/2$ is informed by our use of the RotatE model [45] as the scoring function for assessing triple plausibility: RotatE models relations as rotations in a complex space, requiring the relation embedding's dimension to be half that of the entity embedding to account for the real and imaginary components of complex numbers. For MMEA, following Yang et al. [61], we use bag-of-words features for the relation ($x^r$) and attribute ($x^a$) representations of entities (detailed in § 4.1.3). Separate FC layers, parameterized by $W_m \in \mathbb{R}^{d_m \times d}$, are employed for embedding-space harmonization: $h_i^m = FC_m(W_m, x_i^m)$, where $m \in \{r, a\}$ and $x_i^m \in \mathbb{R}^{d_m}$ represents the input feature of entity $e_i$ for modality $m$.

3.2.3 Visual and Surface Embedding. For visual embeddings, a pre-trained (and thereafter frozen) visual encoder, denoted as $Enc_v$, is used to extract visual features $x_i^v$ for each entity $e_i$ with associated image data. In cases where entities lack corresponding image data, we synthesize random image features adhering to a normal distribution, parameterized by the mean and standard deviation observed across other entities' images [9, 10, 64]. Regarding surface embeddings, we leverage Sentence-BERT [40], a pre-trained textual encoder, to derive textual features from each entity's description; the [CLS] token serves to aggregate the sentence-level textual features $x_i^s$. Consistent with the approach applied to the other modalities, we utilize $FC_m$, parameterized by $W_m \in \mathbb{R}^{d_m \times d}$, to integrate the extracted features $x_i^v$ and $x_i^s$ into the embedding space, yielding the embeddings $h_i^v$ and $h_i^s$.

3.3 Gauss Modality Noise Masking

Recent research on MMKGs [10, 19, 64] suggests that models can tolerate certain noise levels without a noticeable decline in the expressive capability of multi-modal entity representations, a finding echoed across various machine learning domains [4, 22, 43]. Additionally, Cuconasu et al. [13] observe that in the Retrieval-Augmented Generation (RAG) process of LLMs, filling up the retrieved context with irrelevant documents consistently improves model performance in realistic scenarios. Similarly, Chen et al. [11] demonstrate that cross-modal masking and reconstruction can improve a model's cross-modal alignment capabilities.
3.3 Gauss Modality Noise Masking

Recent research on MMKGs [10, 19, 64] suggests that models can tolerate certain noise levels without a noticeable decline in the expressive capability of multi-modal entity representations, a finding echoed across various machine learning domains [4, 22, 43]. Additionally, Cuconasu et al. [13] observe that in the Retrieval-Augmented Generation (RAG) process of LLMs, filling the retrieved context with irrelevant documents consistently improves model performance in realistic scenarios. Similarly, Chen et al. [11] demonstrate that cross-modal masking and reconstruction can improve a model's cross-modal alignment capabilities. Inspired by this evidence of noise resilience, we hypothesize that introducing noise during MMKG modality-fusion training can enhance both the robustness of the modal features and real-world performance.

In light of these observations, we propose a new mechanism termed Gauss Modality Noise Masking (GMNM), aimed at enhancing modality feature representations through controlled noise injection at the training stage. This stochastic mechanism applies a probabilistic transformation to each modality feature $x_i^m$ at the beginning of every training epoch:

$$\hat{x}_i^m = \begin{cases} x_i^m, & \text{if } p > \rho, \\ (1-\epsilon)\, x_i^m + \epsilon\, \tilde{x}_i^m, & \text{otherwise}, \end{cases} \quad (1)$$

where $p \sim U(0, 1)$ is a uniformly distributed random variable that determines whether noise is applied, $\rho$ is the threshold probability for noise application to each $x_i^m$, and $\epsilon$ is the noise (mask) ratio. The noise vector $\tilde{x}_i^m$ is generated as

$$\tilde{x}_i^m = \varphi_m \odot z + \mu_m, \quad z \sim \mathcal{N}(0, I), \quad (2)$$

where $\varphi_m$ and $\mu_m$ are the standard deviation and mean of the non-noisy data of modality $m$, respectively, and $z$ is a sample drawn from a Gaussian distribution with zero mean and identity covariance matrix $I$, ensuring that the introduced noise is statistically coherent with the intrinsic variability of the respective modality. Additionally, the noise intensity $\epsilon$ can be dynamically adjusted to simulate real-world data imperfections. This adaptive noise-injection strategy is designed to foster a model that is resilient to data variability and capable of capturing and representing complex multi-modal interactions with enhanced fidelity in practical applications. Note that after the transformation from $x^m$ to $\hat{x}^m$, the modified features are still processed by $FC_m$ as detailed in § 3.2; this step produces the final modal representation $\hat{h}^m$. For clarity, in the following sections we treat $h^m$ and $h_i^m$ as referring to their final states $\hat{h}^m$ and $\hat{h}_i^m$, unless specified otherwise.
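A minimal sketch of Equations (1)-(2), reflecting our reading rather than the authors' code; tensor shapes are assumptions.

```python
# GMNM sketch: with probability rho, blend a feature with Gaussian noise
# matched to the modality's per-dimension mean and standard deviation.
import torch

def gmnm(x_m: torch.Tensor, rho: float = 0.2, eps: float = 0.7) -> torch.Tensor:
    """x_m: [N, d_m] features of one modality for all entities."""
    mu = x_m.mean(dim=0)                       # mu_m, per-dimension mean
    phi = x_m.std(dim=0)                       # phi_m, per-dimension std
    z = torch.randn_like(x_m)                  # z ~ N(0, I)
    noise = phi * z + mu                       # Eq. (2): distribution-matched noise
    apply = (torch.rand(x_m.size(0), 1, device=x_m.device) <= rho).to(x_m.dtype)
    return (1 - apply) * x_m + apply * ((1 - eps) * x_m + eps * noise)  # Eq. (1)
```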
3.4 Entity-Level Modality Interaction

This phase performs instance-level modality weighting and fusion, enabling dynamic adjustment of training weights based on the signal strength of each modality and its noise-induced uncertainty. We utilize a Transformer architecture [49] for this purpose, noted for its efficacy in modality fusion and its ability to derive confidence-based weightings for modalities, which improves interpretability and adaptability. The Transformer's self-attention mechanism is crucial for ensuring that the model evaluates and prioritizes accurate and relevant modal inputs. Specifically, we adapt the vanilla Transformer by integrating three key components: Multi-Head Cross-Modal Attention (MHCA), Fully Connected Feed-Forward Networks (FFN), and Instance-Level Confidence (ILC).

(i) MHCA operates its attention function across $N_h$ parallel heads. Each head, indexed by $i$, employs shared matrices $W_q^{(i)}, W_k^{(i)}, W_v^{(i)} \in \mathbb{R}^{d \times d_h}$ (where $d_h = d / N_h$) to transform the input $h^m$ into queries $Q_m^{(i)}$, keys $K_m^{(i)}$, and values $V_m^{(i)}$:

$$Q_m^{(i)}, K_m^{(i)}, V_m^{(i)} = h^m W_q^{(i)},\; h^m W_k^{(i)},\; h^m W_v^{(i)}. \quad (3)$$

The output for modality $m$'s feature is then generated by combining the outputs of all heads and applying a linear transformation:

$$MHCA(h^m) = \bigoplus_{i=1}^{N_h} head_i^m \cdot W_0, \quad (4)$$

$$head_i^m = \sum_{j \in \mathcal{M}} \beta_{mj}^{(i)} V_j^{(i)}, \quad (5)$$

where $W_0 \in \mathbb{R}^{d \times d}$ and $\bigoplus$ denotes concatenation. The attention weight $\beta_{mj}$ captures the relevance between modalities $m$ and $j$:

$$\beta_{mj} = \frac{\exp(Q_m^\top K_j / \sqrt{d_h})}{\sum_{i \in \mathcal{M}} \exp(Q_m^\top K_i / \sqrt{d_h})}. \quad (6)$$

Besides, layer normalization (LN) and residual connections (RC) are incorporated to stabilize training:

$$\bar{h}^m = LayerNorm(MHCA(h^m) + h^m). \quad (7)$$

(ii) FFN: This network, consisting of two linear transformations with a ReLU activation in between, further processes the MHCA output:

$$FFN(\bar{h}^m) = ReLU(\bar{h}^m W_1 + b_1) W_2 + b_2, \quad (8)$$

$$\bar{h}^m \leftarrow LayerNorm(FFN(\bar{h}^m) + \bar{h}^m), \quad (9)$$

where $W_1 \in \mathbb{R}^{d \times d_{in}}$ and $W_2 \in \mathbb{R}^{d_{in} \times d}$.
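A compact sketch of MHCA, treating each entity as a short "sequence" of $|\mathcal{M}|$ modality tokens; shapes and class names are assumptions, and the returned attention weights are the $\beta$ values reused for confidence in Equation (10) below.

```python
# Minimal sketch (assumed shapes, not the reference implementation): MHCA as
# multi-head self-attention over the |M| modality tokens of each entity,
# Eqs. (3)-(7), followed by a residual + LayerNorm.
import torch
import torch.nn as nn

class MHCA(nn.Module):
    def __init__(self, d=300, heads=1):
        super().__init__()
        self.h, self.dh = heads, d // heads
        self.q, self.k, self.v = (nn.Linear(d, d, bias=False) for _ in range(3))
        self.w0 = nn.Linear(d, d, bias=False)
        self.ln = nn.LayerNorm(d)

    def forward(self, h_m):                        # h_m: [B, M, d] modality tokens
        B, M, d = h_m.shape
        split = lambda t: t.view(B, M, self.h, self.dh).transpose(1, 2)  # [B, H, M, dh]
        q, k, v = split(self.q(h_m)), split(self.k(h_m)), split(self.v(h_m))
        beta = torch.softmax(q @ k.transpose(-2, -1) / self.dh ** 0.5, dim=-1)
        heads = (beta @ v).transpose(1, 2).reshape(B, M, d)  # Eq. (5) + head concat
        return self.ln(self.w0(heads) + h_m), beta           # Eq. (7); beta: [B, H, M, M]
```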
(iii) ILC: We calculate the confidence $\tilde{w}_m$ for each modality via

$$\tilde{w}_m = \frac{\exp\big(\sum_{j \in \mathcal{M}} \sum_{i=1}^{N_h} \beta_{mj}^{(i)} / \sqrt{|\mathcal{M}| \times N_h}\big)}{\sum_{k \in \mathcal{M}} \exp\big(\sum_{j \in \mathcal{M}} \sum_{i=1}^{N_h} \beta_{kj}^{(i)} / \sqrt{|\mathcal{M}| \times N_h}\big)}, \quad (10)$$

which captures crucial inter-modal interactions and tailors the model's confidence to each entity's modalities.

3.5 Task-Specific Training

Building upon the processes detailed in the previous sections, we have derived multi-modal KG representations denoted as $h^m$ (discussed in § 3.3) and $\bar{h}^m$ (elaborated in § 3.4), along with confidence scores $\tilde{w}_m$ for each modality $m$ within the MMKG (introduced in § 3.4).

3.5.1 MMKG Completion. Within MKGC, we consider two candidate entity representations: (i) $\bar{h}^g$: reflecting insights from previous research [9, 64], the graph structure embedding is crucial for model performance. After being processed by the Transformer layer, $\bar{h}^g$ not only maintains its structural essence but also blends in insights from the other modalities (see Equations (4) and (5)), offering a comprehensive multi-modal entity representation. (ii) $\bar{h}^{avg}$: for an equitable multi-modal representation, we average all modality-specific representations via $\bar{h}^{avg} = \frac{1}{|\mathcal{M}|} \sum_{m \in \mathcal{M}} \bar{h}^m$, where $\mathcal{M}$ is the set of all modalities. This averaging ensures an equal contribution per modality, leveraging the rich, diverse information within MMKGs. For consistency in the following descriptions, we refer to both with the notation $\bar{h}$.

We apply the RotatE model [45] as our score function to assess the plausibility of triples:

$$\mathcal{F}(e_h, r, e_t) = \| \bar{h}_{head} \circ x_r - \bar{h}_{tail} \|, \quad (11)$$

where $\circ$ represents the rotation operation in complex space, which transforms the head entity's embedding by the relation to approximate the tail entity's embedding.
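A minimal sketch of the RotatE scorer in Equation (11), paraphrasing [45]; we use an L2 norm over the stacked real and imaginary residuals, which is one common variant, and all names are illustrative.

```python
# RotatE scoring sketch: relations are unit-modulus rotations in complex
# space, so the relation embedding has d/2 phase parameters for
# d-dimensional entity embeddings (see Sec. 3.2.2).
import torch

def rotate_score(h_head: torch.Tensor, phase_r: torch.Tensor,
                 h_tail: torch.Tensor) -> torch.Tensor:
    """h_head, h_tail: [B, d]; phase_r: [B, d/2] relation phases."""
    re_h, im_h = h_head.chunk(2, dim=-1)       # real / imaginary halves
    re_t, im_t = h_tail.chunk(2, dim=-1)
    re_r, im_r = torch.cos(phase_r), torch.sin(phase_r)   # |r| = 1
    re_rot = re_h * re_r - im_h * im_r         # complex multiplication h ∘ r
    im_rot = re_h * im_r + im_h * re_r
    diff = torch.cat([re_rot - re_t, im_rot - im_t], dim=-1)
    return diff.norm(p=2, dim=-1)              # lower distance = more plausible
```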
To prioritize positive triples with higher plausibility, we optimize the embeddings using a sigmoid-based loss with self-adversarial negative sampling [45]:

$$\mathcal{L}_{kgc} = \frac{1}{|\mathcal{T}^R|} \sum_{(e_h, r, e_t) \in \mathcal{T}^R} \Big( -\log \sigma(\lambda - \mathcal{F}(e_h, r, e_t)) - \sum_{i=1}^{K} \varpi_i \log \sigma(\mathcal{F}(e_h', r', e_t') - \lambda) \Big), \quad (12)$$

where $\sigma$ denotes the sigmoid function, $\lambda$ is the margin, $K$ is the number of negative samples per positive triple, and $\varpi_i$ is the self-adversarial weight of the $i$-th negatively sampled triple $(e_h', r', e_t')$. Concretely, $\varpi_i$ is calculated as

$$\varpi_i = \frac{\exp(\tau_{kgc}\, \mathcal{F}(e_{h,i}', r_i', e_{t,i}'))}{\sum_{j=1}^{K} \exp(\tau_{kgc}\, \mathcal{F}(e_{h,j}', r_j', e_{t,j}'))}, \quad (13)$$

with $\tau_{kgc}$ being the temperature parameter. Our primary objective is to minimize $\mathcal{L}_{kgc}$, thereby refining the embeddings to accurately capture the MMKG's underlying relationships.

3.5.2 Multi-modal Entity Alignment. In MMEA, following [9, 10], we adopt the Global Modality Integration (GMI) multi-modal features as the entity representations. GMI emphasizes global alignment by concatenating and aligning the multi-modal embeddings with learnable global weights, enabling adaptive learning of each modality's quality across the two MMKGs. The GMI joint embedding $h_i^{GMI}$ for entity $e_i$ is defined as:

$$h_i^{GMI} = \bigoplus_{m \in \mathcal{M}} [\, w_m h_i^m \,], \quad (14)$$

where $\bigoplus$ signifies vector concatenation and $w_m$ is the global weight for modality $m$, distinct from the entity-level dynamic modality weights $\tilde{w}_m$ in Equation (10). The distinction between MMEA and MKGC lies in their focus: MMEA emphasizes aligning modal features between entities and distinguishing non-aligned entities, prioritizing the retention of the original features, whereas MKGC emphasizes the inferential benefits of modality fusion across different multi-modal entities. As demonstrated by Chen et al. [10], the modality features are often smoothed by the Transformer layer in MMEA, potentially reducing entity distinction. GMI addresses this by preserving essential information, aiding alignment stability. Moreover, as a unified MMKG representation framework, the modal features extracted earlier are optimized through MMEA-specific training objectives [33].
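A small sketch of Equations (12)-(13) under an assumed tensor layout; treating the adversarial weights as constants (`detach`) follows the usual practice for this loss [45], and the default margin and temperature mirror § 4.1.3.

```python
# Self-adversarial sigmoid loss sketch: margin-based terms for one positive
# and K weighted negatives per positive triple.
import torch
import torch.nn.functional as F

def kgc_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor,
             margin: float = 12.0, tau: float = 2.0) -> torch.Tensor:
    """pos_scores: [B] distances F(e_h, r, e_t); neg_scores: [B, K]."""
    pos_term = -F.logsigmoid(margin - pos_scores)                  # positive part
    w = torch.softmax(tau * neg_scores, dim=-1).detach()           # Eq. (13), no grad
    neg_term = -(w * F.logsigmoid(neg_scores - margin)).sum(-1)    # weighted negatives
    return (pos_term + neg_term).mean()
```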
Specifically, for each aligned entity pair $(e_i^1, e_i^2)$ in the training set (seed alignments $\mathcal{S}$), we define a negative entity set $\mathcal{N}_i^{ng} = \{e_j^1 \mid \forall e_j^1 \in \mathcal{E}_1, j \neq i\} \cup \{e_j^2 \mid \forall e_j^2 \in \mathcal{E}_2, j \neq i\}$ and utilize in-batch ($\mathcal{B}$) negative sampling [7] to enhance efficiency. The alignment probability distribution is:

$$p^m(e_i^1, e_i^2) = \frac{\gamma^m(e_i^1, e_i^2)}{\gamma^m(e_i^1, e_i^2) + \sum_{e_j \in \mathcal{N}_i^{ng}} \gamma^m(e_i^1, e_j)}, \quad (15)$$

where $\gamma^m(e_i, e_j) = \exp(h_i^{m\top} h_j^m / \tau_{ea})$ and $\tau_{ea}$ is the temperature hyper-parameter. We establish a bi-directional alignment objective to account for both MMEA directions:

$$\mathcal{L}_m = -\mathbb{E}_{i \in \mathcal{B}} \log \big[\, p^m(e_i^1, e_i^2) + p^m(e_i^2, e_i^1) \,\big] / 2. \quad (16)$$

(i) We denote the training objective as $\mathcal{L}_{GMI}$ when using the GMI joint embeddings, i.e., $\gamma^{GMI}(e_i, e_j)$ is set to $\exp(h_i^{GMI\top} h_j^{GMI} / \tau_{ea})$. To integrate the dynamic confidences into the training process and enhance multi-modal entity alignment, we adopt two specialized training objectives from UMAEA [10]: (ii) Explicit Confidence-augmented Intra-modal Alignment (ECIA). This objective modifies Equation (16) to incorporate explicit confidence levels within the same modality, defined as $\mathcal{L}_{ECIA} = \sum_{m \in \mathcal{M}} \tilde{\mathcal{L}}_m$, where:

$$\tilde{\mathcal{L}}_m = -\mathbb{E}_{i \in \mathcal{B}} \log \big[\, \phi^m(e_i^1, e_i^2) \cdot (p^m(e_i^1, e_i^2) + p^m(e_i^2, e_i^1)) \,\big] / 2. \quad (17)$$

Here, $\phi^m(e_i^1, e_i^2)$ represents the minimum confidence value between entities $e_i^1$ and $e_i^2$ in modality $m$, i.e., $\phi^m(e_i, e_j) = \mathrm{Min}(\tilde{w}_i^m, \tilde{w}_j^m)$, addressing the issue of aligning high-quality features with potentially lower-quality ones or with noise. (iii) Implicit Inter-modal Refinement (IIR) refines entity-level modality alignment by leveraging the Transformer-layer outputs $\bar{h}^m$, aiming to align the output hidden states directly and adjust the attention scores adaptively. The corresponding loss function is $\mathcal{L}_{IIR} = \sum_{m \in \mathcal{M}} \bar{\mathcal{L}}_m$, where $\bar{\mathcal{L}}_m$ is likewise a variant of $\mathcal{L}_m$ (Equation (16)) with $\bar{\gamma}^m(e_i, e_j) = \exp(\bar{h}_i^{m\top} \bar{h}_j^m / \tau_{ea})$. The comprehensive training objective is formulated as $\mathcal{L}_{ea} = \mathcal{L}_{GMI} + \mathcal{L}_{ECIA} + \mathcal{L}_{IIR}$. Note that our SnAg framework can not only function as a standalone model but also enhance other existing methods, providing stable performance improvements in MMEA, as demonstrated in Table 4 (§ 4.2.2).

4 EXPERIMENTS

4.1 Experiment Setup

In MMKG datasets such as DBP15K_JA-EN, where only 67.58% of entities have images, the image association ratio ($R_{img}$) varies owing to the data collection process [12].

4.1.1 Datasets. MKGC: (i) DB15K [35] is constructed from DBpedia [24], enriched with images obtained via a search engine. (ii) MKG-W and MKG-Y [59] are subsets of Wikidata [51] and YAGO [44], respectively. Text descriptions are aligned with the corresponding entities using the additional sameAs links provided by the OpenEA benchmarks [48]. Detailed statistics are available in the Appendix.

Table 1: MKGC performance on DB15K [35], MKG-W, and MKG-Y [59]. The best results are highlighted in bold, and the third-best results are underlined for each column. (Per dataset: MRR, H@1, H@3, H@10.)

Models | DB15K | MKG-W | MKG-Y
IKRL (IJCAI '17) [58] | .268 .141 .349 .491 | .324 .261 .348 .441 | .332 .304 .343 .383
TBKGC (NAACL '18) [41] | .284 .156 .370 .499 | .315 .253 .340 .432 | .340 .305 .353 .401
TransAE (IJCNN '19) [55] | .281 .213 .312 .412 | .300 .212 .349 .447 | .281 .253 .291 .330
RSME (ACM MM '21) [53] | .298 .242 .321 .403 | .292 .234 .320 .404 | .344 .318 .361 .391
VBKGC (KDD '22) [66] | .306 .198 .372 .494 | .306 .249 .330 .409 | .370 .338 .388 .423
OTKGE (NeurIPS '22) [3] | .239 .185 .259 .342 | .344 .289 .363 .449 | .355 .320 .372 .414
IMF (WWW '23) [27] | .323 .242 .360 .482 | .345 .288 .366 .454 | .358 .330 .371 .406
QEB (ACM MM '23) [54] | .282 .148 .367 .516 | .324 .255 .351 .453 | .344 .295 .370 .423
VISTA (EMNLP '23) [23] | .304 .225 .336 .459 | .329 .261 .354 .456 | .305 .249 .324 .415
MANS (IJCNN '23) [62] | .288 .169 .366 .493 | .309 .249 .336 .418 | .290 .253 .314 .345
MMRNS (ACM MM '22) [59] | .297 .179 .367 .510 | .341 .274 .375 .468 | .359 .306 .391 .455
AdaMF (COLING '24) [64] | .325 .213 .397 .517 | .343 .272 .379 .472 | .381 .335 .404 .455
SnAg (Ours) | .363 .274 .411 .530 | .373 .302 .405 .503 | .395 .354 .411 .471
w/o GMNM | .357 .269 .406 .523 | .365 .296 .398 .490 | .387 .345 .407 .457

Table 2: Statistics for the MKGC datasets; the symbols in the header follow Definition 1.

Dataset | |E| | |R| | |T^R| (Train) | |T^R| (Valid) | |T^R| (Test)
DB15K | 12842 | 279 | 79222 | 9902 | 9904
MKG-W | 15000 | 169 | 34196 | 4276 | 4274
MKG-Y | 15000 | 28 | 21310 | 2665 | 2663

Table 3: Statistics for the MMEA datasets. Each dataset contains 15,000 pre-aligned entity pairs (|S| = 15000). Note that not every entity is paired with associated images or has an equivalent counterpart in the other KG. Abbreviations: DB (DBpedia), WD (Wikidata), ZH (Chinese), JA (Japanese), FR (French), EN (English), DE (German).
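A minimal sketch of the in-batch variant of Equations (15)-(16); function names and the temperature value are assumptions. With in-batch negatives, the loss is analogous to a symmetric InfoNCE objective, with both alignment directions averaged.

```python
# Bi-directional in-batch alignment loss sketch (Eqs. (15)-(16)):
# temperature-scaled similarities, row i of each KG aligned with row i.
import torch
import torch.nn.functional as F

def alignment_loss(h1: torch.Tensor, h2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """h1, h2: [B, d] embeddings of the paired entities in the batch."""
    sim = h1 @ h2.t() / tau                       # gamma^m in log-space
    targets = torch.arange(h1.size(0), device=h1.device)
    loss_12 = F.cross_entropy(sim, targets)       # -log p^m(e_i^1, e_i^2)
    loss_21 = F.cross_entropy(sim.t(), targets)   # -log p^m(e_i^2, e_i^1)
    return (loss_12 + loss_21) / 2
```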
Dataset | G | |E| | |R| | |A| | |T^R| | |T^A| | |V_MM|
DBP15K_ZH-EN | ZH | 19,388 | 1,701 | 8,111 | 70,414 | 248,035 | 15,912
             | EN | 19,572 | 1,323 | 7,173 | 95,142 | 343,218 | 14,125
DBP15K_JA-EN | JA | 19,814 | 1,299 | 5,882 | 77,214 | 248,991 | 12,739
             | EN | 19,780 | 1,153 | 6,066 | 93,484 | 320,616 | 13,741
DBP15K_FR-EN | FR | 19,661 | 903 | 4,547 | 105,998 | 273,825 | 14,174
             | EN | 19,993 | 1,208 | 6,422 | 115,722 | 351,094 | 13,858
OpenEA_EN-FR | EN | 15,000 | 267 | 308 | 47,334 | 73,121 | 15,000
             | FR | 15,000 | 210 | 404 | 40,864 | 67,167 | 15,000
OpenEA_EN-DE | EN | 15,000 | 215 | 286 | 47,676 | 83,755 | 15,000
             | DE | 15,000 | 131 | 194 | 50,419 | 156,150 | 15,000
OpenEA_D-W-V1 | DB | 15,000 | 248 | 342 | 38,265 | 68,258 | 15,000
              | WD | 15,000 | 169 | 649 | 42,746 | 138,246 | 15,000
OpenEA_D-W-V2 | DB | 15,000 | 167 | 175 | 73,983 | 66,813 | 15,000
              | WD | 15,000 | 121 | 457 | 83,365 | 175,686 | 15,000

MMEA: (i) Multi-modal DBP15K [34] extends DBP15K [46] by adding images from DBpedia and Wikipedia [14], covering three bilingual settings (DBP15K_ZH-EN, DBP15K_JA-EN, DBP15K_FR-EN) and featuring around 400K triples and 15K aligned entity pairs per setting. (ii) MMEA-UMVM [10] includes two bilingual datasets (EN-FR-15K, EN-DE-15K) and two monolingual datasets (D-W-15K-V1, D-W-15K-V2) derived from the Multi-OpenEA datasets ($R_{sa} = 0.2$) [28], together with all three bilingual datasets from DBP15K [34]. It offers variability in visual information by randomly removing images, resulting in 97 distinct dataset splits with different $R_{img}$. For this study, we focus on the representative $R_{img}$ values {0.4, 0.6, maximum} to validate our experiments. When $R_{img}$ = maximum, the dataset corresponds to the original Standard dataset (as shown in Table 4); note that for the Multi-modal DBP15K datasets, the "maximum" value is not 1.0.

4.1.2 Iterative Training for MMEA. We employ a probation technique for iterative training, which acts as a buffering mechanism that temporarily stores a cache of mutual nearest entity pairs across KGs from the testing set [33]. Specifically, every $K_e$ (= 5) epochs, the model identifies mutual nearest-neighbor entity pairs across the two KGs and adds them to a candidate list $\mathcal{N}^{cd}$. An entity pair in $\mathcal{N}^{cd}$ is then added to the training set if it remains a mutual nearest-neighbor pair for $K_s$ (= 10) consecutive iterations. This iterative expansion of the training data serves as data augmentation in the EA domain, enabling further evaluation of the model's robustness across various scenarios.
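A minimal sketch of the probation mechanism just described, under our reading of § 4.1.2; the similarity matrix, dictionary-based streak tracking, and function names are assumptions.

```python
# Probation sketch: collect mutual nearest neighbours every K_e epochs;
# promote a pair to the training set once it has persisted for K_s
# consecutive checks.
import numpy as np

def mutual_nn(sim: np.ndarray) -> set[tuple[int, int]]:
    """sim: [N1, N2] similarities between unaligned test entities of KG1/KG2."""
    nn1, nn2 = sim.argmax(axis=1), sim.argmax(axis=0)
    return {(i, j) for i, j in enumerate(nn1) if nn2[j] == i}

def probation_step(sim: np.ndarray, streak: dict, k_s: int = 10) -> set:
    """streak tracks consecutive appearances per pair; returns promoted pairs."""
    current = mutual_nn(sim)
    for pair in list(streak):                 # reset streaks that were broken
        if pair not in current:
            del streak[pair]
    promoted = set()
    for pair in current:
        streak[pair] = streak.get(pair, 0) + 1
        if streak[pair] >= k_s:
            promoted.add(pair)                # stable pair -> add to training set
    return promoted
```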
4.1.3 Implementation Details. MKGC: (i) Following Zhang et al. [64], the vision encoders $Enc_v$ are VGG [42] for DB15K and BEiT [1] for MKG-W and MKG-Y. For entities associated with multiple images, the feature vectors of these images are averaged to obtain a single representation. (ii) The head number $N_h$ in MHCA is set to 2. For entity representation on DB15K, the graph structure embedding $\bar{h}^g$ is used, while for MKG-W and MKG-Y, mean pooling across the modality-specific representations ($\bar{h}^{avg}$) is employed. This distinction is made because DB15K has a denser KG and a greater absence of modality information than MKG-W and MKG-Y. (iii) We simply adopted a set of candidate parameters from AdaMF [64]: the number of negative samples $K$ per positive triple is 32, the hidden dimension $d$ is 256, the training batch size is 1024, the margin $\lambda$ is 12, the temperature $\tau_{kgc}$ is 2.0, and the learning rate is set to 1e-4. No extensive parameter tuning was conducted; theoretically, SnAg could achieve better performance with parameter optimization. (iv) The probability $\rho$ of applying noise in GMNM is set to 0.2, with a noise ratio $\epsilon$ of 0.7.

MMEA: (i) Following Yang et al. [61], Bag-of-Words (BoW) is employed for encoding relations ($x^r$) and attributes ($x^a$) into fixed-length vectors ($d_r = d_a = 1000$). This entails sorting relations and attributes by frequency, followed by truncation or padding to standardize the vector lengths, thus streamlining the representation and prioritizing significant features. For any entity $e_i$, the vector positions correspond to the presence or frequency of the top-ranked attributes and relations, respectively. (ii) Following [5, 33], the vision encoders $Enc_v$ are ResNet-152 [20] for DBP15K and CLIP [39] for Multi-OpenEA. (iii) An alignment editing method is applied to minimize error accumulation [47]. (iv) The head number $N_h$ in MHCA is set to 1. The hidden dimensions $d$ of all networks are unified to 300. The total number of epochs for the baselines is set to 500, with an option for an additional 500 epochs of iterative training [33]. Our training strategy incorporates a cosine warm-up schedule (15% of the steps for LR warm-up), early stopping, and gradient accumulation, using the AdamW optimizer ($\beta_1 = 0.9$, $\beta_2 = 0.999$) with a consistent batch size of 3500. (v) The total number of learnable parameters of our model is comparable to that of the baselines; for instance, on DBP15K_JA-EN, EVA has 13.27M, MCLEA has 13.22M, and our SnAg has 13.82M learnable parameters.

Table 4: Non-iterative MMEA results across three degrees of visual modality missing. Results are underlined when a baseline equipped with the Gauss Modality Noise Masking (GMNM) module surpasses its own original performance, and highlighted in bold when achieving SOTA performance. (Per setting: H@1, H@10, MRR for R_img = 0.4 | R_img = 0.6 | Standard.)

DBP15K_ZH-EN
EVA [34] | .623 .876 .715 | .625 .877 .717 | .683 .906 .762
w/ GMNM | .629 .883 .724 | .625 .881 .717 | .680 .907 .760
MCLEA [33] | .627 .880 .715 | .670 .899 .751 | .732 .926 .801
w/ GMNM | .652 .895 .740 | .699 .912 .775 | .754 .933 .819
MEAformer [9] | .678 .924 .766 | .720 .938 .798 | .776 .953 .840
w/ GMNM | .680 .925 .767 | .719 .939 .798 | .777 .955 .841
SnAg (Ours) | .735 .945 .812 | .757 .953 .830 | .798 .963 .858

DBP15K_JA-EN
EVA [34] | .546 .829 .644 | .552 .829 .647 | .587 .851 .678
w/ GMNM | .618 .876 .709 | .625 .874 .714 | .664 .902 .748
MCLEA [33] | .568 .848 .665 | .639 .882 .723 | .678 .897 .755
w/ GMNM | .659 .901 .745 | .723 .924 .795 | .752 .935 .818
MEAformer [9] | .677 .933 .768 | .736 .953 .815 | .767 .959 .837
w/ GMNM | .678 .937 .770 | .738 .953 .816 | .767 .958 .837
SnAg (Ours) | .735 .952 .814 | .771 .961 .841 | .795 .963 .857

DBP15K_FR-EN
EVA [34] | .622 .895 .719 | .634 .899 .728 | .686 .926 .771
w/ GMNM | .628 .897 .725 | .634 .900 .728 | .686 .929 .772
MCLEA [33] | .622 .892 .722 | .694 .915 .774 | .734 .926 .805
w/ GMNM | .663 .916 .756 | .726 .934 .802 | .759 .942 .827
MEAformer [9] | .676 .944 .774 | .734 .958 .816 | .776 .967 .846
w/ GMNM | .678 .946 .776 | .735 .965 .819 | .779 .969 .849
SnAg (Ours) | .757 .963 .835 | .790 .970 .858 | .814 .974 .875

OpenEA_EN-FR
EVA [34] | .532 .830 .635 | .553 .835 .652 | .784 .931 .836
w/ GMNM | .537 .829 .638 | .554 .833 .652 | .787 .935 .839
MCLEA [33] | .535 .842 .641 | .607 .858 .696 | .821 .945 .866
w/ GMNM | .554 .848 .658 | .624 .873 .714 | .830 .950 .874
MEAformer [9] | .582 .891 .690 | .645 .904 .737 | .846 .862 .889
w/ GMNM | .588 .895 .696 | .647 .905 .738 | .847 .963 .890
SnAg (Ours) | .621 .905 .721 | .667 .922 .757 | .848 .964 .891

OpenEA_EN-DE
EVA [34] | .718 .918 .789 | .734 .921 .800 | .922 .982 .945
w/ GMNM | .728 .919 .794 | .740 .921 .803 | .923 .983 .946
MCLEA [33] | .702 .910 .774 | .748 .912 .805 | .940 .988 .957
w/ GMNM | .711 .912 .782 | .762 .928 .821 | .942 .990 .960
MEAformer [9] | .749 .938 .816 | .789 .951 .847 | .955 .994 .971
w/ GMNM | .753 .939 .817 | .791 .952 .848 | .957 .995 .971
SnAg (Ours) | .776 .948 .837 | .810 .958 .862 | .958 .995 .972

OpenEA_D-W-V1
EVA [34] | .567 .796 .651 | .592 .810 .671 | .859 .945 .890
w/ GMNM | .597 .826 .678 | .611 .826 .688 | .870 .953 .900
MCLEA [33] | .586 .821 .672 | .663 .854 .732 | .882 .955 .909
w/ GMNM | .604 .841 .689 | .678 .869 .748 | .889 .960 .915
MEAformer [9] | .640 .877 .725 | .706 .898 .776 | .902 .969 .927
w/ GMNM | .656 .884 .738 | .718 .905 .786 | .904 .971 .929
SnAg (Ours) | .678 .897 .758 | .728 .915 .796 | .905 .971 .930

OpenEA_D-W-V2
EVA [34] | .774 .949 .838 | .789 .953 .848 | .889 .981 .922
w/ GMNM | .787 .956 .848 | .799 .958 .856 | .892 .983 .924
MCLEA [33] | .751 .941 .822 | .801 .950 .856 | .929 .984 .950
w/ GMNM | .766 .956 .836 | .811 .965 .868 | .938 .990 .957
MEAformer [9] | .807 .976 .869 | .834 .980 .886 | .939 .994 .960
w/ GMNM | .833 .980 .886 | .857 .983 .903 | .942 .995 .962
SnAg (Ours) | .852 .986 .901 | .870 .988 .913 | .946 .996 .965

Table 5: Iterative MMEA results. (Same column layout as Table 4.)

DBP15K_ZH-EN
EVA [34] | .696 .902 .773 | .699 .903 .775 | .749 .914 .810
w/ GMNM | .708 .906 .780 | .705 .911 .778 | .752 .919 .813
MCLEA [33] | .719 .921 .796 | .764 .941 .831 | .818 .956 .871
w/ GMNM | .741 .945 .818 | .782 .954 .846 | .830 .968 .882
MEAformer [9] | .754 .953 .829 | .788 .958 .853 | .843 .966 .890
w/ GMNM | .763 .947 .832 | .799 .959 .860 | .845 .970 .891
SnAg (Ours) | .798 .957 .859 | .821 .963 .876 | .857 .972 .900

DBP15K_JA-EN
EVA [34] | .646 .888 .733 | .657 .892 .743 | .695 .904 .770
w/ GMNM | .696 .910 .773 | .700 .912 .776 | .745 .916 .807
MCLEA [33] | .690 .922 .778 | .756 .948 .828 | .788 .955 .851
w/ GMNM | .739 .937 .815 | .796 .959 .858 | .820 .969 .877
MEAformer [9] | .759 .957 .833 | .808 .969 .868 | .831 .972 .882
w/ GMNM | .769 .953 .838 | .817 .967 .872 | .842 .974 .890
SnAg (Ours) | .808 .959 .864 | .839 .975 .890 | .861 .976 .904

DBP15K_FR-EN
EVA [34] | .710 .931 .792 | .716 .935 .797 | .769 .946 .834
w/ GMNM | .714 .929 .794 | .720 .932 .798 | .777 .950 .841
MCLEA [33] | .731 .943 .814 | .789 .958 .854 | .814 .967 .873
w/ GMNM | .759 .964 .840 | .806 .974 .871 | .837 .980 .893
MEAformer [9] | .763 .963 .842 | .811 .976 .874 | .844 .980 .897
w/ GMNM | .779 .968 .847 | .817 .974 .876 | .852 .981 .899
SnAg (Ours) | .826 .976 .885 | .852 .983 .904 | .875 .987 .919

OpenEA_EN-FR
EVA [34] | .605 .869 .700 | .619 .870 .710 | .848 .973 .896
w/ GMNM | .606 .870 .701 | .621 .874 .713 | .856 .971 .898
MCLEA [33] | .613 .889 .714 | .702 .928 .785 | .893 .983 .928
w/ GMNM | .625 .902 .726 | .707 .934 .790 | .893 .983 .928
MEAformer [9] | .660 .913 .751 | .729 .947 .810 | .895 .984 .930
w/ GMNM | .666 .916 .755 | .741 .943 .815 | .905 .984 .937
SnAg (Ours) | .692 .927 .778 | .743 .945 .817 | .907 .986 .939

OpenEA_EN-DE
EVA [34] | .776 .935 .833 | .784 .937 .839 | .954 .984 .965
w/ GMNM | .779 .936 .837 | .789 .938 .843 | .955 .984 .966
MCLEA [33] | .766 .942 .829 | .821 .956 .871 | .969 .994 .979
w/ GMNM | .779 .948 .840 | .829 .959 .876 | .971 .995 .980
MEAformer [9] | .803 .950 .854 | .835 .958 .878 | .963 .994 .976
w/ GMNM | .807 .949 .856 | .841 .961 .882 | .975 .995 .982
SnAg (Ours) | .826 .962 .874 | .859 .970 .899 | .977 .998 .984

OpenEA_D-W-V1
EVA [34] | .647 .856 .727 | .669 .860 .741 | .916 .984 .943
w/ GMNM | .663 .859 .735 | .673 .862 .743 | .927 .986 .950
MCLEA [33] | .686 .896 .766 | .770 .941 .836 | .947 .991 .965
w/ GMNM | .699 .907 .778 | .776 .946 .840 | .949 .991 .966
MEAformer [9] | .718 .901 .787 | .785 .934 .841 | .943 .990 .962
w/ GMNM | .728 .901 .793 | .803 .942 .855 | .956 .991 .970
SnAg (Ours) | .753 .930 .820 | .808 .953 .864 | .958 .993 .972

OpenEA_D-W-V2
EVA [34] | .854 .980 .904 | .859 .983 .908 | .925 .996 .951
w/ GMNM | .866 .980 .909 | .872 .981 .913 | .948 .997 .969
MCLEA [33] | .841 .984 .899 | .877 .990 .923 | .971 .998 .983
w/ GMNM | .845 .987 .902 | .882 .992 .926 | .973 .999 .984
MEAformer [9] | .886 .990 .926 | .904 .992 .938 | .965 .999 .979
w/ GMNM | .902 .990 .936 | .918 .993 .948 | .975 .999 .985
SnAg (Ours) | .904 .994 .939 | .924 .994 .952 | .980 .999 .988
Table 6: Component analysis for SnAg on the MKGC datasets. ✓ indicates activation of the Gauss Modality Noise Masking (GMNM) module; ✗ denotes its deactivation. By default, GMNM's noise application probability ρ is 0.2 and its noise ratio ε is 0.7. Our Transformer-based structure is the default fusion method for SnAg. Alternatives include: "FC" (concatenating the features of all modalities, followed by a fully connected layer); "WS" (summing the features weighted by a global learnable weight per modality); "AT" (leveraging an attention network for entity-level weighting); "TS" (using a Transformer to obtain the confidence scores w̃_m for weighted summing); "w/ Only h^g" (using only the graph structure embedding, i.e., uni-modal KGC). "Dropout" is an experimental variant in which Equation (1) is replaced with the Dropout function, randomly zeroing modal input features with the given probability. (Per dataset: MRR, H@1, H@10.)

Variants | DB15K [35] | MKG-W [59] | MKG-Y [59]
✓ SnAg (Full) | .363 .274 .530 | .373 .302 .503 | .395 .354 .471
✓ ρ=0.3, ε=0.6 | .361 .272 .528 | .373 .302 .502 | .393 .353 .468
✓ ρ=0.1, ε=0.8 | .360 .272 .525 | .371 .299 .496 | .391 .348 .463
✓ ρ=0.4, ε=0.4 | .358 .268 .526 | .365 .296 .492 | .388 .346 .458
✓ ρ=0.5, ε=0.2 | .360 .270 .528 | .368 .299 .493 | .389 .348 .457
✓ ρ=0.7, ε=0.2 | .359 .270 .526 | .367 .299 .490 | .387 .345 .456
✗ SnAg | .357 .269 .523 | .365 .296 .490 | .387 .345 .457
✗ FC Fusion | .327 .210 .522 | .350 .287 .467 | .378 .340 .442
✗ WS Fusion | .334 .218 .529 | .361 .298 .480 | .384 .345 .449
✗ AT Fusion | .336 .225 .528 | .361 .296 .481 | .379 .343 .445
✗ TS Fusion | .335 .221 .529 | .358 .292 .472 | .378 .344 .437
✗ w/ Only h^g | .293 .179 .497 | .337 .268 .467 | .350 .291 .453
✗ Dropout (0.1) | .349 .252 .527 | .361 .297 .479 | .382 .344 .446
✗ Dropout (0.2) | .346 .249 .526 | .359 .294 .478 | .381 .343 .446
✗ Dropout (0.3) | .343 .242 .524 | .356 .290 .477 | .381 .343 .445
✗ Dropout (0.4) | .341 .238 .521 | .356 .295 .467 | .379 .341 .442

4.2 Overall Results

4.2.1 MKGC Results. As shown in Table 1, SnAg achieves SOTA performance across all metrics on the three MKGC datasets, which is especially notable when compared with recent works such as MANS [62] and MMRNS [59] that refine negative sampling techniques. Our entity-level modality interaction approach to MMKG representation learning not only demonstrates a significant advantage but also benefits from the consistent performance gains provided by our Gauss Modality Noise Masking (GMNM) module, and it maintains superior performance even without that module.

4.2.2 MMEA Results. As illustrated in the third segment of Table 4, SnAg achieves SOTA performance across all metrics on the seven standard MMEA datasets. Notably, on the latter four datasets of the OpenEA series (EN-FR-15K, EN-DE-15K, D-W-15K-V1, D-W-15K-V2) under the Standard setting, where $R_{img} = 1.0$ indicates full image coverage for each entity, our GMNM module maintains or even boosts performance. This suggests that strategic noise integration can be beneficial even when visual data is abundant and complete. It aligns with findings from related work [10, 12], which suggest that image ambiguities and multi-aspect visual information can sometimes misguide the use of MMKGs. Unlike these studies, which typically design models to reject and combat noise, SnAg accepts and intentionally integrates noise to better match the inherently noisy conditions of real-world scenarios. Most importantly, as a versatile MMKG representation learning approach, it is compatible with both MMEA and MKGC tasks, illustrating its robust adaptability in diverse operational contexts.

4.3 Uncertainly Missing Modality

The first two segments of Table 4 present entity alignment performance at $R_{img}$ = 0.4 and 0.6, where 60% and 40% of entities, respectively, lack image data.
These missing images are substituted with random image features following a normal distribution fitted to the observed mean and standard deviation of the other entities' images (details in § 3.2.3), simulating uncertain modality absence in real-world scenarios. Our method outperforms the baselines more significantly as the modality absence grows (i.e., at $R_{img}$ = 0.4), with the GMNM module providing notable benefits. This demonstrates that intentionally introduced noise can increase the training challenge while enhancing model robustness in realistic settings.

4.4 Ablation Studies

In Table 6, we dissect the influence of various components on our model's performance, focusing on three key aspects: (i) Noise parameters: the noise application probability ρ and the noise ratio ε are pivotal. The optimal values ρ = 0.2 and ε = 0.7 were determined empirically, suggesting that the model tolerates up to 20% of entities missing images and that a modality-mask ratio of 0.7 acts as a soft mask. For optimal performance, we recommend empirically adjusting these parameters to the specific scenario; generally, a grid search on a smaller dataset subset can quickly identify suitable parameter combinations. (ii) Entity-level modality interaction: our exploration shows that the absence of image information (w/ Only h^g) markedly reduces performance, underscoring the importance of multi-modal information for MKGC. Weighted-summing methods (WS, AT, TS) surpass the simple FC-based approach, indicating the superiority of nuanced modality integration. Purely using the Transformer modality weights w̃_m for weighting shows no clear advantage over attention-based or globally learnable weighting in MKGC. In contrast, our approach, using h̄^g (for DB15K) and h̄^avg (for MKG-W and MKG-Y), significantly outperforms the others, demonstrating its efficacy. (iii) Modality-mask vs. Dropout: in assessing their differential impacts, we observe that even minimal dropout (0.1) hurts performance, likely because dropout distorts the original modal feature distribution to some extent, thereby hindering optimization toward the alignment objective. Conversely, the noise of our modality mask is inherent: it replicates the feature distribution seen when a modality is absent, and consequently enhances model robustness more effectively (a small sketch contrasting the two operations follows).
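An illustrative sketch (not from the paper's code) contrasting the "Dropout" ablation of Table 6 with GMNM: dropout zeroes feature dimensions and thereby shifts the modal distribution, whereas GMNM blends in noise matched to the modality's observed statistics.

```python
# Dropout vs. GMNM masking sketch; both operate on [N, d_m] modality features.
import torch

def dropout_mask(x: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    keep = (torch.rand_like(x) > p).to(x.dtype)
    return x * keep / (1 - p)                  # zeroed dims distort the distribution

def gmnm_mask(x: torch.Tensor, rho: float = 0.2, eps: float = 0.7) -> torch.Tensor:
    noise = x.std(0) * torch.randn_like(x) + x.mean(0)   # distribution-preserving
    hit = (torch.rand(x.size(0), 1, device=x.device) <= rho).to(x.dtype)
    return (1 - hit) * x + hit * ((1 - eps) * x + eps * noise)
```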
5 CONCLUSION AND FUTURE WORK

In this work, we introduced a unified multi-modal knowledge graph representation framework that accepts and intentionally integrates noise, thereby aligning with the complexities of real-world scenarios. This initiative also stands out as the first in the MMKG domain to support both MKGC and MMEA tasks simultaneously, showcasing the adaptability of our approach. Building on this foundation, we encourage future researchers to adopt a broader perspective on MMKG representation learning that extends beyond the focus on individual sub-tasks. As the field evolves, there is a promising avenue for integrating this unified representation into multi-modal knowledge pre-training, which could facilitate diverse downstream tasks, including but not limited to Multi-modal Knowledge Injection and Multi-modal Retrieval-Augmented Generation (RAG). Such advancements have the potential to make significant contributions to the community, especially given the rapid development of Large Language Models [63, 65]."
}
]
}