| { |
| "paper_id": "P19-1026", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:27:27.467514Z" |
| }, |
| "title": "Relation Embedding with Dihedral Group in Knowledge Graph", |
| "authors": [ |
| { |
| "first": "Canran", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Ruijiang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Link prediction is critical for the application of incomplete knowledge graph (KG) in the downstream tasks. As a family of effective approaches for link predictions, embedding methods try to learn low-rank representations for both entities and relations such that the bilinear form defined therein is a well-behaved scoring function. Despite of their successful performances, existing bilinear forms overlook the modeling of relation compositions, resulting in lacks of interpretability for reasoning on KG. To fulfill this gap, we propose a new model called DihEdral, named after dihedral symmetry group. This new model learns knowledge graph embeddings that can capture relation compositions by nature. Furthermore, our approach models the relation embeddings parametrized by discrete values, thereby decrease the solution space drastically. Our experiments show that DihEdral is able to capture all desired properties such as (skew-) symmetry, inversion and (non-) Abelian composition, and outperforms existing bilinear form based approach and is comparable to or better than deep learning models such as ConvE (Dettmers et al., 2018).", |
| "pdf_parse": { |
| "paper_id": "P19-1026", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Link prediction is critical for the application of incomplete knowledge graph (KG) in the downstream tasks. As a family of effective approaches for link predictions, embedding methods try to learn low-rank representations for both entities and relations such that the bilinear form defined therein is a well-behaved scoring function. Despite of their successful performances, existing bilinear forms overlook the modeling of relation compositions, resulting in lacks of interpretability for reasoning on KG. To fulfill this gap, we propose a new model called DihEdral, named after dihedral symmetry group. This new model learns knowledge graph embeddings that can capture relation compositions by nature. Furthermore, our approach models the relation embeddings parametrized by discrete values, thereby decrease the solution space drastically. Our experiments show that DihEdral is able to capture all desired properties such as (skew-) symmetry, inversion and (non-) Abelian composition, and outperforms existing bilinear form based approach and is comparable to or better than deep learning models such as ConvE (Dettmers et al., 2018).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Large-scale knowledge graph (KG) plays a critical role in the downstream tasks such as semantic search (Berant et al., 2013) , dialogue management (He et al., 2017) and question answering (Bordes et al., 2014) . In most cases, despite of its large scale, KG is not complete due to the difficulty to enumerate all facts in the real world. The capability of predicting the missing links based on existing dataset is one of the most important research topics for years. A common representation of KG is a set of triples (head, relation, tail) , and the problem of link prediction can be viewed as predicting new triples from the existing set. A * Equal contribution. popular approach is KG embeddings, which maps both entities and relations in the KG to a vector space such that the scoring function of entities and relations for ground truth distinguishes from false facts (Socher et al., 2013; Bordes et al., 2013; Yang et al., 2015) . Another family of approaches explicitly models the reasoning process on KG by synthesizing information from paths (Guu et al., 2015) . More recently, researchers are applying deep learning methods to KG embeddings so that non-linear interaction between entities and relations are enabled (Schlichtkrull et al., 2018; Dettmers et al., 2018) .", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 124, |
| "text": "(Berant et al., 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 147, |
| "end": 164, |
| "text": "(He et al., 2017)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 188, |
| "end": 209, |
| "text": "(Bordes et al., 2014)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 517, |
| "end": 539, |
| "text": "(head, relation, tail)", |
| "ref_id": null |
| }, |
| { |
| "start": 871, |
| "end": 892, |
| "text": "(Socher et al., 2013;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 893, |
| "end": 913, |
| "text": "Bordes et al., 2013;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 914, |
| "end": 932, |
| "text": "Yang et al., 2015)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 1049, |
| "end": 1067, |
| "text": "(Guu et al., 2015)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1223, |
| "end": 1251, |
| "text": "(Schlichtkrull et al., 2018;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 1252, |
| "end": 1274, |
| "text": "Dettmers et al., 2018)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The standard task for link prediction is to answer queries (h, r, ?) or (? r, t) . In this context, recent works on KG embedding focusing on bilinear form methods (Trouillon et al., 2016; Nickel et al., 2016; Liu et al., 2017; Kazemi and Poole, 2018) are known to perform reasonably well. The success of this pack of models resides in the fact they are able to model relation (skew-) symmetries. Furthermore, when serving for downstream tasks such as learning first-order logic rule and reasoning over the KG, the learned relation representation is expected to discover relation composition by itself. One key property of relation composition is that in many cases it can be noncommutative. For example, exchanging the order between parent_of and spouse_of will result in completely different relation (parent_of as opposed to parent_in_law_of). We argue that, in order to learn relation composition within the link prediction task, this non-commutative property should be explicitly modeled.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 80, |
| "text": "(? r, t)", |
| "ref_id": null |
| }, |
| { |
| "start": 163, |
| "end": 187, |
| "text": "(Trouillon et al., 2016;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 188, |
| "end": 208, |
| "text": "Nickel et al., 2016;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 209, |
| "end": 226, |
| "text": "Liu et al., 2017;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 227, |
| "end": 250, |
| "text": "Kazemi and Poole, 2018)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we proposed DihEdral to model the relation in KG with the representation of dihedral group. The elements in a dihedral group are constructed by rotation and reflection operations over a 2D symmetric polygon. As the matrix representations of dihedral group can be symmetric or skew-symmetric, and the multiplication of the group elements can be Abelian or non-Abelian, it is a good candidate to model the relations with all the corresponding properties desired.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To the best of our knowledge, this is the first attempt to employ finite non-Abelian group in KG embedding to account for relation compositions. Besides, another merit of using dihedral group is that even the parameters are quantized or even binarized, the performance in link prediction tasks can be improved over state-of-the-arts methods in bilinear form due to the implicit regularization imposed by quantization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of paper is organized as follows: in ( \u00a72) we present the mathematical framework of bilinear form modeling for link prediction task, followed by an introduction to group theory and dihedral group. In ( \u00a73) we formalize a novel model DihEdral to represent relations with fully expressiveness. In ( \u00a74, \u00a75) we develop two efficient ways to parametrize DihEdral and reveal that both approaches outperform existing bilinear form methods. In ( \u00a76) we carried out extensive case studies to demonstrate the enhanced interpretability of relation embedding space by showing that the desired properties of (skew-) symmetry, inversion and relation composition are coherent with the relation embeddings learned from DihEdral.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Let E and R be the set of entities and relations. A triple (h, r, t), where {h, t} \u2208 E are the head and tail entities, and r \u2208 R is a relation corresponding to an edge in the KG.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bilinear From for KB Link Prediction", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In a bilinear form, the entities h, t are represented by vectors h, t \u2208 R M where M \u2208 Z + , and relation r is represented by a matrix R \u2208 R M \u00d7M . The score for the triple is defined as \u03c6(h, r, t) = h Rt. A good representation of the entities and relations are learned such that the scores are high for positive triples and low for negative triples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bilinear From for KB Link Prediction", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Let g i , g j be two elements in a set G, and be a binary operation between any two elements in G . The set G forms a group when the following axioms are satisfied:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Closure For any two element g i , g j \u2208 G, g k = g i g j is also an element in G.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Associativity For any g i , g j , g k \u2208 G, (g i g j ) g k = g i (g j g k ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Identity There exists an identity element e in G such that, for every element g in G, the equation e g = g e = g holds.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Inverse For each element g, there is its inverse element g \u22121 such that g g \u22121 = g \u22121 g = e.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "If the number of group elements is finite, the group is called a finite group. If the group operation is commutative, i.e. g i g j = g j g i for all g i and g j , the group is called Abelian; otherwise the group is non-Abelian.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Moreover, if the group elements can be represented by a matrix, with group operations defined as matrix multiplications, the identity element is represented by the identity matrix and the inverse element is represented as matrix inverse. In the following, we will not distinguish between group element and its corresponding matrix representation when no confusion exists.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "A dihedral group is a finite group that supports symmetric operations of a regular polygon in two dimensional space. Here the symmetric operations refer to the operator preserving the polygon. For a K-side (K \u2208 Z + ) polygon, the corresponding dihedral group is denoted as D K that consists of 2K elements, within which there are K rotation operators and K reflection operators. A rotation operator O k rotates the polygon anti-clockwise around the center by a degree of (2\u03c0m/K), and a reflection operator F k mirrors the rotation O k vertically. The element in the dihedral group D K can be represented as 2D orthogonal matrices 1 : ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "O (m) K = cos 2\u03c0m K \u2212 sin 2\u03c0m K sin 2\u03c0m K cos 2\u03c0m K F (m) K = cos 2\u03c0m K sin 2\u03c0m K sin 2\u03c0m K \u2212 cos 2\u03c0m K (1) where m \u2208 {0, 1, \u2022 \u2022 \u2022 ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "(0) K , O (K/2) K", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "are symmetric. The representation of D 4 is shown in Figure 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 53, |
| "end": 62, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Group and Dihedral Group", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We propose to model the relations by the group elements in D K . Like ComplEx (Trouillon et al., 2016) , we assume an even number of latent dimensions 2L. More specifically, the relation matrix takes a block diagonal form", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 102, |
| "text": "(Trouillon et al., 2016)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Modeling with Dihedral Group and Expressiveness", |
| "sec_num": "3" |
| }, |
| { |
| "text": "R = diag R (1) , R (2) , \u2022 \u2022 \u2022 , R (L) where R (l) \u2208 D K for l \u2208 {1, 2, \u2022 \u2022 \u2022 , L}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Modeling with Dihedral Group and Expressiveness", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The corresponding embedding vectors h \u2208 R 2L and t \u2208 R 2L take the form of", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Modeling with Dihedral Group and Expressiveness", |
| "sec_num": "3" |
| }, |
| { |
| "text": "h (1) , \u2022 \u2022 \u2022 , h (L) and t (1) , \u2022 \u2022 \u2022 , t (L)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Modeling with Dihedral Group and Expressiveness", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where h (l) , t (l) \u2208 R 2 respectively. As a result, the score for a triple (h, r, t) in bilinear form can be written as a sum of these L components h Rt = L l=1 h (l) R (l) t (l) , We name the model DihEdral because each component R (l) is a representation matrix of a dihedral group element.", |
| "cite_spans": [ |
| { |
| "start": 176, |
| "end": 179, |
| "text": "(l)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Modeling with Dihedral Group and Expressiveness", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Lemma 1. The relation matrix R of DihEdral is orthogonal, i.e. RR = R R = I. Lemma 2. The score of (h, r, t) satisfies h Rt = \u2212 1 2 R h \u2212 t 2 2 \u2212 h h \u2212 t t , consequently maximizing score w.r.t. R is equivalent to mini- mizing R h \u2212 t 2 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Modeling with Dihedral Group and Expressiveness", |
| "sec_num": "3" |
| }, |
| { |
| "text": ". Theorem 1. The relations matrices in DihEdral form a group under matrix multiplication.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Modeling with Dihedral Group and Expressiveness", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Though its relation embedding takes discrete values, DihEdral is fully expressive as it is able to model relations with desired properties for each component R l by the corresponding matrices in D K . The properties are summarized in Table 1 , with comparison to DistMult (Yang et al., 2015) , ComplEx (Trouillon et al., 2016) , ANALOGY (Liu et al., 2017) and SimplE (Kazemi and Poole, 2018). 2 The details of expressiveness are described as follows. For notation convenience, we denote T + all the possible true triples, and T \u2212 all the possible false triples.", |
| "cite_spans": [ |
| { |
| "start": 272, |
| "end": 291, |
| "text": "(Yang et al., 2015)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 302, |
| "end": 326, |
| "text": "(Trouillon et al., 2016)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 337, |
| "end": 355, |
| "text": "(Liu et al., 2017)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 393, |
| "end": 394, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 234, |
| "end": 241, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Relation Modeling with Dihedral Group and Expressiveness", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Symmetric A relation r is symmetric iff (h, r, t) \u2208 T + \u21d4 (t, r, h) \u2208 T + . Symmetric relations in the real world include synonym, similar_to.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Modeling with Dihedral Group and Expressiveness", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Note that with DihEdral, the component R l can be a reflection matrix which is symmetric and offdiagonal. This is in contrast to DistMult and Com-plEx where the relation matrix has to be diagonal when it is symmetric at the same time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Modeling with Dihedral Group and Expressiveness", |
| "sec_num": "3" |
| }, |
| { |
| "text": "A relation r is skew-symmetric iff (h, r, t) \u2208 T + \u21d4 (t, r, h) \u2208 T \u2212 . Skew- symmetric relations in the real world include father_of, member_of.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "When K is a multiple of 4, pure skew-symmetric matrices in D 4 can be chosen. As a result, the relation is guaranteed to be skew-symmetric satisfying \u03c6(h, r, t) = \u2212\u03c6(t, r, h).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "Inversion r 2 is the inverse of r 1 iff (h, r 1 , t) \u2208 T + \u21d4 (t, r 2 , h) \u2208 T + . As a real world example, parent_of is the inversion of child_of.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "The inverse of the relation r is represented by R \u22121 in an ideal situation: For two positive triples (h, r 1 , t) and (t, r 2 , h), we have R 1 h \u2248 t and R 2 t \u2248 h in an ideal situation (cf. Lemma 2), With enough occurrences of pair {h, t} we have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "R 2 = R \u22121 1 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "Composition r 3 is composition of r 1 and r 2 , denoted as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "r 3 = r 1 r 2 iff (h, r 1 , m) \u2208 T + \u2227 (m, r 2 , t) \u2208 T + \u21d4 (h, r 3 , t) \u2208 T + .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "Example of composition in the real world includes nationality = born_in_city city_belong_to_nation. Depending on the commutative property, there are two cases of relation compositions: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Abelian r 1 and r 2 are Abelian if (h, r 1 r 2 , t) \u2208 T + \u21d4 (h, r 2 r 1 , t) \u2208 T + . Real world example includes Component Symmetric Skew-Symmetric Composition Abelian Non-Abelian DistMult ri \u2208 R ? * NA \u2020 ComplEx ai \u2212bi bi ai bi = 0 ai = 0 NA \u2020 ANALOGY ai \u2212bi bi ai \u222a {cj} bi = 0 ai, cj = 0 NA \u2020 SimplE 0 ai bi 0 ai = bi ai = \u2212bi NA \u2020 DihEdral DK F (m) K \u222a O (0,K/2) K O (K/4,3K/4) K both in O (m) K either in F (m) K", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "opposite_gender profession = profession opposite_gender. \u2022 Non-Abelian r 1 and r 2 are non-Abelian if (h, r 1 r 2 , t) \u2208 T + (h, r 2 r 1 , t) \u2208 T + .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "Real world example include parent_of spouse_of = spouse_of parent_of.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "In DihEdral, the relation composition operator corresponds to the matrix multiplication of the corresponding representations, i.e. R 3 \u2248 R 1 R 2 . Consider three positive triples (h, r 1 , m), (m, r 2 , t) and (h, r 3 , t). In the ideal situation, we have R 1 h \u2248 m, R 2 m \u2248 t, R 3 h \u2248 t (cf. Lemma 2), and further R 2 R 1 h \u2248 t. With enough occurrences of such {h, t} pairs in the training dataset, we have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "R 3 \u2248 R 1 R 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that although all the rotation matrices form a subgroup to dihedral group, and hence algebraically closedthe rotation subgroup could not model non-Abelian relations. To model non-Abelian relation compositions at least one reflection matrix should be involved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Skew-Symmetric", |
| "sec_num": null |
| }, |
| { |
| "text": "In the standard traing framework for KG embedding models, parameters \u0398 = \u0398 E \u222a \u0398 R , i.e. the union of entity and relation embeddings, are learnt by stochastic optimization methods. For each minibatch of positive triples, a small number of negative triples are sampled by corrupting head or tail for each positive triple, then related parameters in the model are updated by minimizing the binary negative log-likelihood such that positive triples will get higher scores than negative triples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Specifically, the loss function is written as follows,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "min \u0398 (h,r,t)\u2208T + \u222aT \u2212 \u2212 log \u03c3 (y\u03c6(h, r, t))+\u03bb||\u0398 E || 2 ,", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Training", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where \u03bb \u2208 R is the L 2 regularization coefficient for entity embeddings only, T + and T \u2212 are the sets of positive and sampled negative triples in a minibatch, and y equals to 1 if (h, r, t) \u2208 T + otherwise \u22121. \u03c3 is a sigmoid function defined as \u03c3(x) = 1/(1 + exp(\u2212x)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Special treatments of the relation representations R are required as they takes discrete values. In the next subsections we describe a reparametrization method for general K, followed by a simple approach when K takes small integers values. With these treatments, DihEdral could be trained within the standard framework.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Each relation component R (l) can be parametrized with a one-hot variable c (l) ", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 79, |
| "text": "(l)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gumbel-Softmax Approach", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2208 {0, 1} 2K encod- ing 2K choices of matrices in D K : R (l) = 2K k=1 c (l) k D k where {D k , k \u2208 {1, \u2022 \u2022 \u2022 , 2K}} enumerates D K .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gumbel-Softmax Approach", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The number of parameters for each relation is 2LK in this approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gumbel-Softmax Approach", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "One-hot variable c (l) is further parametrized by s (l) \u2208 R 2K by Gumbel trick (Jang et al., 2017) with the following steps: 1) take i.", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 98, |
| "text": "(Jang et al., 2017)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gumbel-Softmax Approach", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "i.d. samples q 1 , q 2 , . . . , q 2K from a Gumbel distribution: q i = \u2212 log(\u2212 log u i ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gumbel-Softmax Approach", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where u i \u223c U(0, 1) are samples from a uniform distribution; 2) use log-softmax form of s (l) to parametrize c (l) \u2208 {0, 1} 2K :", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 93, |
| "text": "(l)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gumbel-Softmax Approach", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "c (l) k = exp (s (l) k + q k )/\u03c4 2K k=1 exp (s (l) k + q k )/\u03c4 (3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gumbel-Softmax Approach", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where \u03c4 is the tunable temperature. During training, we start with high temperature, e.g. \u03c4 0 = 3, to drive the system out of pool local minimums, and gradually cool the system with \u03c4 = max(0.5, \u03c4 0 exp(\u22120.001t)) where t is the number of epochs elapsed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gumbel-Softmax Approach", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Another parametrization technique for D K where K \u2208 {4, 6} is to parametrize each element in the matrix R (l) directly. Specifically we have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reparametrization with Binary Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "R (l) = \u03bb \u2212\u03b1\u03b3 \u03b3 \u03b1\u03bb ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reparametrization with Binary Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reparametrization with Binary Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u03bb = cos(2\u03c0k/K), \u03b3 = sin(2\u03c0k/K), k \u2208 {0, 1, \u2022 \u2022 \u2022 , 2K \u2212 1} and \u03b1 \u2208 {\u22121, 1}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reparametrization with Binary Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "is the reflection indicator . Both \u03bb and \u03b3 can be parametrized by the same set of binary variables {x, y, z}:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reparametrization with Binary Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u03bb = (x + y)/2 K = 4 y(3 \u2212 x)/4 K = 6 , \u03b3 = (x \u2212 y)/2 K = 4 z(x + 1) \u221a 3/4 K = 6 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reparametrization with Binary Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the forward pass, each binary variable b \u2208 {x, y, z} is parametrized by taking a element-wise sign function of a real number: b = sign(b real ) where b real \u2208 R.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reparametrization with Binary Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the backward pass, since the original gradient of sign function is almost zero everywhere such that b real will not be activated, the gradient of loss with respect to the real variable is estimated with the straight-through estimator (STE) (Yin et al., 2019) . The functional form for STE is not unique and worth profound theoretical study. In our experiments, we used identity STE (Bengio et al., 2013) :", |
| "cite_spans": [ |
| { |
| "start": 243, |
| "end": 261, |
| "text": "(Yin et al., 2019)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 385, |
| "end": 406, |
| "text": "(Bengio et al., 2013)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reparametrization with Binary Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2202loss \u2202b real = \u2202loss \u2202b 1,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reparametrization with Binary Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where 1 stands for element-wise identity. For these two approaches, we name the model as DK-Gumbel for Gumbel-Softmax approach and DK-STE for reparametrization using binary variable approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reparametrization with Binary Variables", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "This section presents our experiments and results. We first introduce the benchmark datasets used in our experiments, after that we evaluate our approach in the link prediction task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Result", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Introduced in Bordes et al. (2013) , WN18 and FB15K are popular benchmarks for link prediction tasks. WN18 is a subset of the famous WordNet database that describes relations between words. In WN18 the most frequent types of relations form reversible pairs (e.g., hypernym to hyponym, part_of to has_part). FB15K is a subsampling of Freebase limited to 15k entities, introduced in Bordes et al. (2013) . It contains triples with different characteristics (e.g., one toone relations such as capital_of to many-tomany such as actor_in_film). YAGO3-10 (Dettmers et al., 2018) is a subset of YAGO3 (Suchanek et al., 2007) with each entity contains at least 10 relations.", |
| "cite_spans": [ |
| { |
| "start": 14, |
| "end": 34, |
| "text": "Bordes et al. (2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 381, |
| "end": 401, |
| "text": "Bordes et al. (2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 594, |
| "end": 617, |
| "text": "(Suchanek et al., 2007)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "As noted in Toutanova et al. 2015; Dettmers et al. (2018) , in the original WN18 and FB15k datasets there are a large amount of test triples appear as reciprocal form of the training samples, due to the reversible relation pairs. Therefore, these authors eliminated the inverse relations and constructed corresponding subsets: WN18RR with 11 relations and FB15K-237 with 237 relations, both of which are free from test data leak. All datasets statistics are shown in Table 2 ", |
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 57, |
| "text": "Dettmers et al. (2018)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 467, |
| "end": 474, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We use the popular metrics filtered HITS@1, 3, 10 and mean reciprocal rank (MRR) as our evaluation metrics as in Bordes et al. (2013) .", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 133, |
| "text": "Bordes et al. (2013)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metric", |
| "sec_num": "5.2" |
| }, |
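These metrics are straightforward to compute from the filtered rank of the true entity for each test triple; a small sketch (function name is ours):

```python
import numpy as np

def link_prediction_metrics(ranks, hits_at=(1, 3, 10)):
    # ranks: filtered rank of the correct entity per test triple (1 = best).
    ranks = np.asarray(ranks, dtype=float)
    # MRR: mean of reciprocal ranks.
    metrics = {"MRR": float(np.mean(1.0 / ranks))}
    for n in hits_at:
        # HITS@N: fraction of test triples whose correct entity ranks in the top N.
        metrics[f"HITS@{n}"] = float(np.mean(ranks <= n))
    return metrics

m = link_prediction_metrics([1, 2, 4, 11])
```

The "filtered" qualifier means that, when ranking a candidate entity, all other entities forming a known true triple are removed from the candidate list first.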
| { |
| "text": "We implemented DihEdral in PyTorch (Paszke et al., 2017) . In all our experiments, we selected the hyperparameters of our model in a grid search setting for the best MRR in the validation set. We WN18 FB15K HITS@N MRR HITS@N MRR 1 3 10 1 3 10 TransE \u2020 (Bordes et al., 2013) 8.9 82.3 93.4 45.4 23.1 47.2 64.1 22.1 DistMult \u2020 (Yang et al., 2015) 72.8 91.4 93.6 82.2 54.6 73.3 82.4 65.4 ComplEx \u2020 (Trouillon et al., 2016) 93.6 94.5 94.7 94.1 59.9 75.9 84.0 69.2 HolE (Nickel et al., 2016) 93.0 94.5 94.7 93.8 40.2 61.3 73.9 52.4 ANALOGY (Liu et al., 2017) 93.9 94.4 94.7 94.2 64.6 78.5 85.4 72.5 Single DistMult (Kadlec et al., 2017) --94.6 79.7 --89.3 79.8 SimplE (Kazemi and Poole, 2018) 93.9 94.4 94.7 94.2 66.0 77.3 83.8 72.7 R- GCN (Schlichtkrull et al., 2018) 69.7 92.9 96.4 81.9 60.1 76.0 84.2 69.6 ConvE (Dettmers et al., 2018) 93 Table 3 : Link prediction results on WN18 and FB15K datasets. Results marked by ' \u2020' are taken from (Trouillon et al., 2016) , and the rest of the results are taken from original literatures.", |
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 56, |
| "text": "(Paszke et al., 2017)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 252, |
| "end": 273, |
| "text": "(Bordes et al., 2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 324, |
| "end": 343, |
| "text": "(Yang et al., 2015)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 394, |
| "end": 418, |
| "text": "(Trouillon et al., 2016)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 464, |
| "end": 485, |
| "text": "(Nickel et al., 2016)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 534, |
| "end": 552, |
| "text": "(Liu et al., 2017)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 609, |
| "end": 673, |
| "text": "(Kadlec et al., 2017) --94.6 79.7 --89.3 79.8 SimplE (Kazemi and", |
| "ref_id": null |
| }, |
| { |
| "start": 674, |
| "end": 686, |
| "text": "Poole, 2018)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 730, |
| "end": 762, |
| "text": "GCN (Schlichtkrull et al., 2018)", |
| "ref_id": null |
| }, |
| { |
| "start": 809, |
| "end": 832, |
| "text": "(Dettmers et al., 2018)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 936, |
| "end": 960, |
| "text": "(Trouillon et al., 2016)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 836, |
| "end": 843, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Selection and Hyper-parameters", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "trained DK-Gumbel for K \u2208 {4, 6, 8} and DK-STE for K \u2208 {4, 6} with AdaGrad optimizer (Duchi et al., 2011) , and we didn't notice significant difference in terms of the evaluation metrics when varying K. In the following we only report the result for K = 4.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 105, |
| "text": "(Duchi et al., 2011)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Selection and Hyper-parameters", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "For D4-Gumbel, we performed grid search for the L 2 regularization coefficient \u03bb \u2208 [10 \u22125 , 10 \u22124 , 10 \u22123 ] and learning rate \u2208 [0.5, 1]. For D4-STE, hyperparamter ranges for the grid search were as follows: \u03bb \u2208 [0.001, 0.01, 0.1, 0.2], learning rate \u2208 [0.01, 0.02, 0.03, 0.05, 0.1]. For both settings we performed grid search with batch sizes \u2208 [512, 1024, 2048] and negative sample ratio \u2208 [1, 6, 10]. We used embedding dimension 2L = 1500 for FB15K, 2L = 600 for both FB15K-237 and YAGO3-10, 2L = 200 for WN18 and WN18RR. We used the standard train/valid/test splits provided with these datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Selection and Hyper-parameters", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The results of link predictions are shown in Table 3 and 4, where the results for the baselines are directly taken from original literature. Di-hEdral outperforms almost all models in bilinear form, and even ConvE in FB15K, WN18RR and YAGO3-10. The result demonstrates that even Di-hEdral takes discretized value in relation representations, proper modeling the underlying structure of relations using D K is essential.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Selection and Hyper-parameters", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The learned representation from DihEdral is not only able to reach the state-of-the-art performance in link prediction tasks, but also provides insights with its special properties. In this section, we present the detailed case studies on these properties. In order to achieve better resolutions, we increased the embedding dimension to 2L = 600 for WN18 datasets. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Case Studies", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We show the multiplication of some pairs of inversion relations on WN18 and FB15K in Figure 2 , WN18RR FB15K-237 YAGO3-10 HITS@N MRR HITS@N MRR HITS@N MRR 1 3 10 1 3 10 1 3 Table 4 : Link prediction results on WN18RR and FB15K-237 datasets. Results marked by ' \u2020' are taken from (Dettmers et al., 2018) , and result marked by ' * ' is taken from (Das et al., 2018) . and the result is close to an identity matrix. For the relation pair {_member_of_domain_usage, _synset_domain_usage_of}, the multiplication deviates from ideal identity matrix as the performance for these two relations are poorer compared to the others. We also repeat the same case study for other bilinear embedding methods, however their multiplications are not identity, but close to diagonal matrices with different elements. are skew-symmetric components and others are symmetric.", |
| "cite_spans": [ |
| { |
| "start": 295, |
| "end": 318, |
| "text": "(Dettmers et al., 2018)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 362, |
| "end": 380, |
| "text": "(Das et al., 2018)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 85, |
| "end": 93, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 189, |
| "end": 196, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inversion", |
| "sec_num": "6.1" |
| }, |
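The inversion check above can be reproduced mechanically: in DihEdral a relation is represented by L independent 2x2 orthogonal D_K blocks, so an inverse relation pair should multiply, component by component, to the identity. A sketch under our own toy parametrization (rotation components only):

```python
import numpy as np

def rotation(k, K=4):
    # The k-th rotation element of D_K in its 2x2 orthogonal representation.
    theta = 2.0 * np.pi * k / K
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A toy relation r: L = 5 components, each a random D_4 rotation.
rng = np.random.default_rng(1)
R = np.stack([rotation(k) for k in rng.integers(0, 4, size=5)])
# Its inverse relation: component-wise transpose (valid for orthogonal matrices).
R_inv = np.transpose(R, (0, 2, 1))
# Component-wise products; each should be the 2x2 identity.
prod = np.einsum('lij,ljk->lik', R, R_inv)
```

For a learned model the products are only approximately identity, and the deviation indicates how well the inverse relation pair was captured.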
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "O (0) 4 O (1) 4 O (2) 4 O (3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 25 50 _verb_group O (0) 4 O (1) 4 O (2) 4 O (3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 25 50 _similar_to Symmetric Relations O (0) 4 O (1) 4 O (2) 4 O (3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 50 100 _hyponym O (0) 4 O (1) 4 O (2) 4 O (3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 25 50 75 100 _instance_hypernym O (0) 4 O (1) 4 O (2) 4 O (3) 4 F (0) 4 F (1) 4 F (2) 4 F (3) 4 0 20 40 60 80 _synset_domain_region_of O (0) 4 O (1) 4 O (2) 4 O (3) 4 F (0) 4 F (1) 4 F (2) 4 F", |
| "eq_num": "(" |
| } |
| ], |
| "section": "Inversion", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Since the KB datasets do not contain negative triples explicitly, there is no penalty to model skew-symmetric relations with symmetric matri-ces. This is perhaps the reason why DistMult performs well on FB15K dataset in which a lot of relations are skew-symmetric.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Symmetry and Skew-Symmetry", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "To resolve this ambiguity, for each positive triple (h, r, t) with a definite skew-symmetric relation r, a negative triple (t, r, h) is sampled with probability 0.5. After adding this new negative sampling scheme in D4-Gumbel, the symmetric and skew-symmetric relations can be distinguished on WN18 dataset without reducing performance on link prediction tasks. Figure 3 shows that both symmetric and skew-symmetric relations favor corresponding components in D 4 as expected. Again, due to imperfect performance of _synset_domain_topic_of, its corresponding representation is imperfect as well. We also conduct the same experiment without adding this sampling scheme, the histogram for the symmetric relations are similar, but there is no strong preference for skew-symmetric relations.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 362, |
| "end": 370, |
| "text": "Figure 3", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Symmetry and Skew-Symmetry", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "In FB15K-237 dataset the majority of patterns is relation composition. However, these compositions are Abelian only because all the inverse relations are filtered out on purpose. To justify if non-Abelian relation compositions can be discovered by DihEdral in an ideal situation, we generate a synthetic dataset called FAMILY. Specifically, we first generated two generations of people with equal number of male and females in each generation, and randomly assigned spouse edges within each generation and child and parent edges between the two generations, after which the sibling, parent_in_law and ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Composition", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "[Figure 4 caption] \u2026 diagonal elements in R_1 R_2 R_3^{-1}, where r_3 is treated as the composition of r_1 and r_2. The names of the three relations and the percentage of 1s on the diagonal are shown in the 1st, 2nd, 3rd and 4th lines of each subplot title. The two subplots in the first row show composition for FB15K-237, and the subplots in the second and third rows are used to check composition and the non-Abelian property on FAMILY.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Composition", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "child_in_law edges are connected based on commonsense logic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Composition", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "We trained D4-Gumbel on FAMILY with latent dimension 2L = 400. In addition to the loss in Eq. 2, we add the following regularization term to encourage the score of positive triple to be higher than that of negative triple for each component independently.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Composition", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "\u2212 L l=1 log \u03c3 h (l) R (l) t (l) \u2212 h * (l) R (l) t * (l) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation Composition", |
| "sec_num": "6.3" |
| }, |
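A sketch of this per-component regularizer (the variable names and the block layout are our own assumptions; h and t are entity embeddings split into L 2D components, and R holds the L 2x2 relation blocks):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def component_regularizer(h, R, t, h_neg, t_neg):
    # Per-component bilinear scores h^(l)^T R^(l) t^(l) for the positive
    # and the corresponding negative triple.
    pos = np.einsum('li,lij,lj->l', h, R, t)
    neg = np.einsum('li,lij,lj->l', h_neg, R, t_neg)
    # Encourage each component independently to score the positive triple
    # above the negative one: -sum_l log sigma(pos_l - neg_l).
    return -np.sum(np.log(sigmoid(pos - neg)))

L = 2
h, t = np.ones((L, 2)), np.ones((L, 2))
h_neg, t_neg = np.zeros((L, 2)), np.zeros((L, 2))
R = np.stack([np.eye(2)] * L)
reg = component_regularizer(h, R, t, h_neg, t_neg)
```

The term is always positive and shrinks toward zero as every component's positive score dominates its negative score.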
| { |
| "text": "where (h, r, t) \u2208 T + , and the corresponding negative triple (h * , r, t * ) \u2208 T \u2212 . For each composition r 3 = r 1 r 2 , we compute the histogram of R 1 R 2 R \u22121 3 . The result for relation compositions in FB15K-237 and FAMILY is shown in Figure 4 , from which we could see good composition as matrix multiplication. We also reveal the non-Abelian property in FAMILY by exchanging the order of r 1 and r 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 241, |
| "end": 249, |
| "text": "Figure 4", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Relation Composition", |
| "sec_num": "6.3" |
| }, |
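The non-Abelian property can be illustrated directly with the 2x2 orthogonal representation of D_4 (helper names are our own): a rotation and a reflection do not commute, whereas two rotations do.

```python
import numpy as np

def d4_rotation(m):
    # Rotation by m * 90 degrees.
    theta = np.pi * m / 2.0
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def d4_reflection(m):
    # Reflection element of D_4.
    theta = np.pi * m / 2.0
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [s, -c]])

O1 = d4_rotation(1)    # rotation by 90 degrees
F0 = d4_reflection(0)  # reflection about the x-axis
# Composition is matrix multiplication; for D_4 the order matters.
non_abelian = not np.allclose(O1 @ F0, F0 @ O1)
abelian_rotations = np.allclose(d4_rotation(1) @ d4_rotation(2),
                                d4_rotation(2) @ d4_rotation(1))
```

This is exactly the structure the FAMILY experiment probes: swapping r_1 and r_2 changes R_1 R_2 when at least one component is a reflection.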
| { |
| "text": "In this section we discuss the related works and their connections to our approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "7" |
| }, |
| { |
| "text": "TransE (Bordes et al., 2013) takes relations as a translating operator between head and tail entities. More complicated distance functions (Wang et al., 2014; Lin et al., 2015b,a) are also proposed as extensions to TransE. TorusE (Ebisu and Ichise, 2018) proposed a novel distance function defined over a torus by transform the vector space by an Abelian group onto a n-dimensional torus. ProjE (Shi and Weninger, 2017 ) designs a neural network with a combination layer and a projection layer. R- GCN (Schlichtkrull et al., 2018) employs convolution over multiple entities to capture spectrum of the knowledge graph. ConvE (Dettmers et al., 2018) performs 2D convolution on the concatenation of entity and relation embeddings, thus by nature introduces non-linearity to enhance expressiveness.", |
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 28, |
| "text": "(Bordes et al., 2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 139, |
| "end": 158, |
| "text": "(Wang et al., 2014;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 159, |
| "end": 179, |
| "text": "Lin et al., 2015b,a)", |
| "ref_id": null |
| }, |
| { |
| "start": 230, |
| "end": 254, |
| "text": "(Ebisu and Ichise, 2018)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 395, |
| "end": 418, |
| "text": "(Shi and Weninger, 2017", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 498, |
| "end": 530, |
| "text": "GCN (Schlichtkrull et al., 2018)", |
| "ref_id": null |
| }, |
| { |
| "start": 624, |
| "end": 647, |
| "text": "(Dettmers et al., 2018)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In RESCAL (Nickel et al., 2011) each relation is represented by a full-rank matrix. As a downside, there is a huge number of parameters in RESCAL making the model prone to overfitting. A totally symmetric DistMult (Yang et al., 2015) model simplifies RESCAL by representing each relation with a diagonal matrix. To parametrize skewsymmetric relations, ComplEx (Trouillon et al., 2016) extends DistMult by using complex-valued instead of real-valued vectors for entities and relations. The representation matrix of ComplEx supports both symmetric and skew-symmetric relations while being closed under matrix multiplication. HolE (Nickel et al., 2016 ) models the skewsymmetry with circular correlation between entity embeddings, thus ensures shifts in covariance between embeddings at different dimensions. It was recently showed that HolE is isomophic to Com-plEx (Hayashi and Shimbo, 2017) . ANALOGY (Liu et al., 2017) and SimplE (Kazemi and Poole, 2018) both reformulate the tensor decomposition approach in light of analogical and reversible relations.", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 31, |
| "text": "(Nickel et al., 2011)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 214, |
| "end": 233, |
| "text": "(Yang et al., 2015)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 360, |
| "end": 384, |
| "text": "(Trouillon et al., 2016)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 628, |
| "end": 648, |
| "text": "(Nickel et al., 2016", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 864, |
| "end": 890, |
| "text": "(Hayashi and Shimbo, 2017)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 901, |
| "end": 919, |
| "text": "(Liu et al., 2017)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Though embedding based approach achieves state-of-the-art performance on link prediction task, symbolic relation composition is not explicitly modeled. In contrast, the latter goal is currently popularized by directly modeling the reasoning paths (Neelakantan et al., 2015; Xiong et al., 2017; Das et al., 2018; Lin et al., 2018; Guo et al., 2019) . As paths are consistent with rea-soning logic structure, non-Abelian composition is supported by nature.", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 273, |
| "text": "(Neelakantan et al., 2015;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 274, |
| "end": 293, |
| "text": "Xiong et al., 2017;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 294, |
| "end": 311, |
| "text": "Das et al., 2018;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 312, |
| "end": 329, |
| "text": "Lin et al., 2018;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 330, |
| "end": 347, |
| "text": "Guo et al., 2019)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "7" |
| }, |
| { |
| "text": "DihEdral is more expressive when compared to other bilinear form based embedding methods such as DistMult, ComplEX and ANALOGY. As the relation matrix is restricted to be orthogonal, DihEdral could bridge translation based and bilinear form based approaches as the training objective w.r.t. the relation matrix is similar (cf Lemma 2). Besides, DihEdral is the first embedding method to incorporate non-Abelian relation compositions in terms of matrix multiplications (cf. Theorem 1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "7" |
| }, |
| { |
| "text": "This paper proposed DihEdral for KG relation embedding. By leveraging the desired properties of dihedral group, relation (skew-) symmetry, inversion, and (non-) Abelian compositions are all supported. Our experimental results on benchmark KGs showed that DihEdral outperforms existing bilinear form models and even deep learning methods. Finally, we demonstrated that the above g properties can be learned from DihEdral by extensive case studies, yielding a substantial increase in interpretability from existing models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "There are more than one 2D representations for the dihedral group DK , and we use the orthogonal representation throughout the paper. Check Steinberg 2012 for details.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that the condition listed in the table is sufficient but not necessary for the desired property.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank Vivian Tian, Hua Yang, Steven Li and Xiaoyuan Wu for their supports, and anonymous reviewers for their helpful comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", |
| "authors": [ |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "L\u00e9onard", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [ |
| "C" |
| ], |
| "last": "Courville", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1308.3432" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshua Bengio, Nicholas L\u00e9onard, and Aaron C. Courville. 2013. Estimating or propagating gradi- ents through stochastic neurons for conditional com- putation. arXiv preprint arXiv:1308.3432.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Semantic parsing on Freebase from question-answer pairs", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Berant", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Chou", |
| "suffix": "" |
| }, |
| { |
| "first": "Roy", |
| "middle": [], |
| "last": "Frostig", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Question answering with subgraph embeddings", |
| "authors": [ |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| }, |
| { |
| "first": "Sumit", |
| "middle": [], |
| "last": "Chopra", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embed- dings. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Translating embeddings for modeling multirelational data", |
| "authors": [ |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Usunier", |
| "suffix": "" |
| }, |
| { |
| "first": "Alberto", |
| "middle": [], |
| "last": "Garcia-Duran", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Oksana", |
| "middle": [], |
| "last": "Yakhnenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NeurIPs", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Proceedings of NeurIPs.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning", |
| "authors": [ |
| { |
| "first": "Rajarshi", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Shehzaad", |
| "middle": [], |
| "last": "Dhuliawala", |
| "suffix": "" |
| }, |
| { |
| "first": "Manzil", |
| "middle": [], |
| "last": "Zaheer", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Vilnis", |
| "suffix": "" |
| }, |
| { |
| "first": "Ishan", |
| "middle": [], |
| "last": "Durugkar", |
| "suffix": "" |
| }, |
| { |
| "first": "Akshay", |
| "middle": [], |
| "last": "Krishnamurthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Smola", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings in ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishna- murthy, Alex Smola, and Andrew McCallum. 2018. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. In Proceedings in ICLR.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Convolutional 2d knowledge graph embeddings", |
| "authors": [ |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Dettmers", |
| "suffix": "" |
| }, |
| { |
| "first": "Pasquale", |
| "middle": [], |
| "last": "Minervini", |
| "suffix": "" |
| }, |
| { |
| "first": "Pontus", |
| "middle": [], |
| "last": "Stenetorp", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of AAAI.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Adaptive subgradient methods for online learning and stochastic optimization", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Duchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Elad", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "J. Mach. Learn. Res", |
| "volume": "12", |
| "issue": "", |
| "pages": "2121--2159", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121-2159.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "TorusE: Knowledge graph embedding on a lie group", |
| "authors": [ |
| { |
| "first": "Takuma", |
| "middle": [], |
| "last": "Ebisu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryutaro", |
| "middle": [], |
| "last": "Ichise", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Takuma Ebisu and Ryutaro Ichise. 2018. TorusE: Knowledge graph embedding on a lie group. In Pro- ceedings of AAAI.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Learning to exploit long-term relational dependencies in knowledge graphs", |
| "authors": [ |
| { |
| "first": "Lingbing", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Zequn", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learn- ing to exploit long-term relational dependencies in knowledge graphs. In Proceedings of ICML.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Traversing knowledge graphs in vector space", |
| "authors": [ |
| { |
| "first": "Kelvin", |
| "middle": [], |
| "last": "Guu", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "On the equivalence of holographic and complex embeddings for link prediction", |
| "authors": [ |
| { |
| "first": "Katsuhiko", |
| "middle": [], |
| "last": "Hayashi", |
| "suffix": "" |
| }, |
| { |
| "first": "Masashi", |
| "middle": [], |
| "last": "Shimbo", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katsuhiko Hayashi and Masashi Shimbo. 2017. On the equivalence of holographic and complex embed- dings for link prediction. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings", |
| "authors": [ |
| { |
| "first": "He", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Anusha", |
| "middle": [], |
| "last": "Balakrishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihail", |
| "middle": [], |
| "last": "Eric", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative di- alogue agents with dynamic knowledge graph em- beddings. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Categorical reparameterization with gumbel-softmax", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Jang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shixiang", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Poole", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Jang, Shixiang Gu, and Ben Poole. 2017. Cate- gorical reparameterization with gumbel-softmax. In Proceedings of ICLR.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Knowledge base completion: Baselines strike back", |
| "authors": [ |
| { |
| "first": "Rudolf", |
| "middle": [], |
| "last": "Kadlec", |
| "suffix": "" |
| }, |
| { |
| "first": "Ondrej", |
| "middle": [], |
| "last": "Bajgar", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Rep- resentation Learning for NLP.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "SimplE embedding for link prediction in knowledge graphs", |
| "authors": [ |
| { |
| "first": "Seyed Mehran", |
| "middle": [], |
| "last": "Kazemi", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Poole", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of NeurIPs", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Seyed Mehran Kazemi and David Poole. 2018. SimplE embedding for link prediction in knowledge graphs. In Proceedings of NeurIPs.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Multi-hop knowledge graph reasoning with reward shaping", |
| "authors": [ |
| { |
| "first": "Xi Victoria", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Caiming", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings in EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In Proceedings in EMNLP.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Modeling relation paths for representation learning of knowledge bases", |
| "authors": [ |
| { |
| "first": "Yankai", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiyuan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Huan-Bo", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Maosong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Siwei", |
| "middle": [], |
| "last": "Rao", |
| "suffix": "" |
| }, |
| { |
| "first": "Song", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yankai Lin, Zhiyuan Liu, Huan-Bo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015a. Modeling relation paths for representation learning of knowl- edge bases. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Learning entity and relation embeddings for knowledge graph completion", |
| "authors": [ |
| { |
| "first": "Yankai", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiyuan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Maosong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuan", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning entity and relation em- beddings for knowledge graph completion. In Pro- ceedings of AAAI.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Analogical inference for multi-relational embeddings", |
| "authors": [ |
| { |
| "first": "Hanxiao", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuexin", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yiming", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hanxiao Liu, Yuexin Wu, and Yiming Yang. 2017. Analogical inference for multi-relational embed- dings. In Proceedings of ICML.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Compositional vector space models for knowledge base completion", |
| "authors": [ |
| { |
| "first": "Arvind", |
| "middle": [], |
| "last": "Neelakantan", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mc-Callum", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arvind Neelakantan, Benjamin Roth, and Andrew Mc- Callum. 2015. Compositional vector space models for knowledge base completion. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Holographic embeddings of knowledge graphs", |
| "authors": [ |
| { |
| "first": "Maximilian", |
| "middle": [], |
| "last": "Nickel", |
| "suffix": "" |
| }, |
| { |
| "first": "Lorenzo", |
| "middle": [], |
| "last": "Rosasco", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomaso", |
| "middle": [], |
| "last": "Poggio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowl- edge graphs. In Proceedings of AAAI.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A three-way model for collective learning on multi-relational data", |
| "authors": [ |
| { |
| "first": "Maximilian", |
| "middle": [], |
| "last": "Nickel", |
| "suffix": "" |
| }, |
| { |
| "first": "Volker", |
| "middle": [], |
| "last": "Tresp", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans-Peter", |
| "middle": [], |
| "last": "Kriegel", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of ICML.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Automatic differentiation in PyTorch", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Paszke", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Gross", |
| "suffix": "" |
| }, |
| { |
| "first": "Soumith", |
| "middle": [], |
| "last": "Chintala", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Chanan", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zachary", |
| "middle": [], |
| "last": "Devito", |
| "suffix": "" |
| }, |
| { |
| "first": "Zeming", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Alban", |
| "middle": [], |
| "last": "Desmaison", |
| "suffix": "" |
| }, |
| { |
| "first": "Luca", |
| "middle": [], |
| "last": "Antiga", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Lerer", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of NIPS Autodiff Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gre- gory Chanan, Edward Yang, Zachary DeVito, Zem- ing Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In Proceedings of NIPS Autodiff Workshop.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Modeling relational data with graph convolutional networks", |
| "authors": [ |
| { |
| "first": "Michael Sejr", |
| "middle": [], |
| "last": "Schlichtkrull", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "N" |
| ], |
| "last": "Kipf", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Bloem", |
| "suffix": "" |
| }, |
| { |
| "first": "Rianne", |
| "middle": [], |
| "last": "van den Berg", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "Max", |
| "middle": [], |
| "last": "Welling", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1703.06103" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. arXiv preprint arXiv:1703.06103.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "ProjE: Embedding projection for knowledge graph completion", |
| "authors": [ |
| { |
| "first": "Baoxu", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Weninger", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baoxu Shi and Tim Weninger. 2017. ProjE: Embed- ding projection for knowledge graph completion. In Proceedings of AAAI.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Reasoning with neural tensor networks for knowledge base completion", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NeurIPs", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Proceedings of NeurIPs.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Representation Theory of Finite Groups", |
| "authors": [ |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Steinberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Benjamin Steinberg. 2012. Representation Theory of Finite Groups. Springer-Verlag New York.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "YAGO: A core of semantic knowledge", |
| "authors": [ |
| { |
| "first": "Fabian", |
| "middle": [ |
| "M" |
| ], |
| "last": "Suchanek", |
| "suffix": "" |
| }, |
| { |
| "first": "Gjergji", |
| "middle": [], |
| "last": "Kasneci", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Weikum", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of WWW", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. YAGO: A core of semantic knowl- edge. In Proceedings of WWW.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Representing text for joint embedding of text and knowledge bases", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "Hoifung", |
| "middle": [], |
| "last": "Poon", |
| "suffix": "" |
| }, |
| { |
| "first": "Pallavi", |
| "middle": [], |
| "last": "Choudhury", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Gamon", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoi- fung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Complex embeddings for simple link prediction", |
| "authors": [ |
| { |
| "first": "Th\u00e9o", |
| "middle": [], |
| "last": "Trouillon", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Welbl", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "\u00c9ric", |
| "middle": [], |
| "last": "Gaussier", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Bouchard", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Th\u00e9o Trouillon, Johannes Welbl, Sebastian Riedel,\u00c9ric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceed- ings of ICML.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Knowledge graph embedding by translating on hyperplanes", |
| "authors": [ |
| { |
| "first": "Zhen", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianwen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianlin", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Zheng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by trans- lating on hyperplanes. In Proceedings of AAAI.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "DeepPath: A reinforcement learning method for knowledge graph reasoning", |
| "authors": [ |
| { |
| "first": "Wenhan", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Thien", |
| "middle": [], |
| "last": "Hoang", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [ |
| "Yang" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings in EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenhan Xiong, Thien Hoang, and William Yang Wang. 2017. DeepPath: A reinforcement learning method for knowledge graph reasoning. In Proceed- ings in EMNLP.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Embedding entities and relations for learning and inference in knowledge bases", |
| "authors": [ |
| { |
| "first": "Bishan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of ICLR.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Understanding straight-through estimator in training activation quantized neural nets", |
| "authors": [ |
| { |
| "first": "Penghang", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiancheng", |
| "middle": [], |
| "last": "Lyu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shuai", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Stanley", |
| "middle": [ |
| "J" |
| ], |
| "last": "Osher", |
| "suffix": "" |
| }, |
| { |
| "first": "Yingyong", |
| "middle": [], |
| "last": "Qi", |
| "suffix": "" |
| }, |
| { |
| "first": "Jack", |
| "middle": [], |
| "last": "Xin", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley J. Osher, Yingyong Qi, and Jack Xin. 2019. Under- standing straight-through estimator in training ac- tivation quantized neural nets. In Proceedings of ICLR.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Elements in D 4 . Each subplot represents result after applying the corresponding operator to the square of 'ACL' on the upper left corner (on top of O (0) 4 ). The top row corresponds to the rotation operators and the bottom row corresponds to the reflection operators.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "text": "Relation inversion in WN18 (top) and FB15K (bottom). Each subplot shows the histogram of diagonal elements in R 1 R 2 where r 1 is inverse relation of r 2 . The name of the two relations and the percentage of the 1s in the diagonal are shown in the first, second and third row of the subplot title, respectively.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "text": "Historgram of each component of D 4 for WN18. The top and bottom row corresponds to symmetric and skew-symmetric relations, respectively. Note that O (1,3) 4", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF6": { |
| "text": "Relation composition on FB15K-237 and FAMILY. Each subplot shows the histogram of", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF1": { |
| "text": "Comparison on expressiveness for bilinear KB models. 'NA' stands for 'not available', and ' ' stands for 'always'. * DistMult has no skew-symmetric relation representations but it performs well in benchmark datasets because the entity type of head and tails are different. \u2020 The contents in column 'Composition' are subject to the assumption that relation composition corresponds the multiplication of the relation representation. We are not certain if there are other composition rules with which these properties are satisfied.", |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF2": { |
| "text": ".", |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Dataset</td><td>|E|</td><td colspan=\"4\">|R| Train Valid Test</td></tr><tr><td>WN18</td><td>41k</td><td colspan=\"2\">18 141k</td><td>5k</td><td>5k</td></tr><tr><td>WN18RR</td><td>41k</td><td>11</td><td>87k</td><td>3k</td><td>3k</td></tr><tr><td>FB15K</td><td colspan=\"3\">15k 1.3k 483k</td><td colspan=\"2\">50k 59k</td></tr><tr><td>FB15K-237</td><td>15k</td><td colspan=\"2\">237 273k</td><td colspan=\"2\">18k 20k</td></tr><tr><td>YAGO3-10</td><td>123k</td><td>37</td><td>1M</td><td>5k</td><td>5k</td></tr></table>" |
| }, |
| "TABREF3": { |
| "text": "Statistics of Datasets.", |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |