| { |
| "paper_id": "K16-1026", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:11:23.099986Z" |
| }, |
| "title": "Entity Disambiguation by Knowledge and Text Jointly Embedding", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Fang", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "fangwei7@mail2" |
| }, |
| { |
| "first": "Jianwen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Microsoft", |
| "location": { |
| "settlement": "Redmond", |
| "region": "WA" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Dilin", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "dilin.wang.gr@dartmouth.edu" |
| }, |
| { |
| "first": "Zheng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Microsoft", |
| "location": { |
| "settlement": "Redmond", |
| "region": "WA" |
| } |
| }, |
| "email": "zhengc@microsoft.com" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "liming46@mail.sysu.edu.cn" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "For most entity disambiguation systems, the secret recipes are feature representations for mentions and entities, most of which are based on Bag-of-Words (BoW) representations. Commonly, BoW has several drawbacks: (1) It ignores the intrinsic meaning of words/entities; (2) It often results in high-dimension vector spaces and expensive computation; (3) For different applications, methods of designing handcrafted representations may be quite different, lacking of a general guideline. In this paper, we propose a different approach named EDKate. We first learn low-dimensional continuous vector representations for entities and words by jointly embedding knowledge base and text in the same vector space. Then we utilize these embeddings to design simple but effective features and build a two-layer disambiguation model. Extensive experiments on real-world data sets show that (1) The embedding-based features are very effective. Even a single one embedding-based feature can beat the combination of several BoW-based features. (2) The superiority is even more promising in a difficult set where the mention-entity prior cannot work well. (3) The proposed embedding method is much better than trivial implementations of some off-the-shelf embedding algorithms. (4) We compared our EDKate with existing methods/systems and the results are also positive.", |
| "pdf_parse": { |
| "paper_id": "K16-1026", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "For most entity disambiguation systems, the secret recipes are feature representations for mentions and entities, most of which are based on Bag-of-Words (BoW) representations. Commonly, BoW has several drawbacks: (1) It ignores the intrinsic meaning of words/entities; (2) It often results in high-dimension vector spaces and expensive computation; (3) For different applications, methods of designing handcrafted representations may be quite different, lacking of a general guideline. In this paper, we propose a different approach named EDKate. We first learn low-dimensional continuous vector representations for entities and words by jointly embedding knowledge base and text in the same vector space. Then we utilize these embeddings to design simple but effective features and build a two-layer disambiguation model. Extensive experiments on real-world data sets show that (1) The embedding-based features are very effective. Even a single one embedding-based feature can beat the combination of several BoW-based features. (2) The superiority is even more promising in a difficult set where the mention-entity prior cannot work well. (3) The proposed embedding method is much better than trivial implementations of some off-the-shelf embedding algorithms. (4) We compared our EDKate with existing methods/systems and the results are also positive.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Entity disambiguation is the task of linking entity mentions in unstructured text to the corresponding entities in a knowledge base. For example, in the sentence \"Michael Jordan is newly elected as AAAI fellow\", the mention \"Michael Jordan\" should be linked to \"Michael I. Jordan\" (Berkeley Professor) rather than \"Michael Jordan\" (NBA Player). Formally, given a set of mentions M = {m 1 , m 2 , ..., m k } (specified or detected automatically) in a document d, for each mention m i \u2208 M , the task is to find the correct entity e i in the knowledge base (KB) K to which the mention m i refers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There are various methods proposed for the problem in the past decades. But generally speaking, an entity disambiguation method is commonly composed of three stages/components. (1) Constructing representations for mentions/entities from raw data, often as the form of sparse vectors. (2) Extracting features for disambiguation models based on the representations of mentions and entities constructed in stage (1). (3) Optimizing the disambiguation model by empirically setting or learning weights on the extracted features, e.g., by training a classifier/ranker. There exist few features directly defined by heuristics, skipping the first stage. For example, string similarity or edit distance between a mention surface and an entity's canonical form (Cucerzan, 2011; Cassidy et al., 2011) , and the prior probability of a mention surface being some entity, etc. However, they are the minority as it is difficult for human to design such features.", |
| "cite_spans": [ |
| { |
| "start": 751, |
| "end": 767, |
| "text": "(Cucerzan, 2011;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 768, |
| "end": 789, |
| "text": "Cassidy et al., 2011)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Almost all the existing methods focus on the second or the third stages while the importance of the first stage is often overlooked. The common practice to deal with the first stage of representa-tions is defining handcrafted BoW representations. For example, an entity is often represented by a sparse vector of weights on the n-grams contained in the description text of the entity, i.e., the standard Bag-of-Words (BoW) representation. TF-IDF is often used to set the weights. There are several variants for this way, e.g., using selected key phrases or Wikipedia in-links/out-links instead of all n-grams as the dimensions of the vectors . The problem is more challenging when representing a mention. The common choice is using the n-gram vector of the surrounding text. Obviously the information of the local text window is too limited to well represent a mention. In practice, there is another constraint, the representations of entities and mentions should be in the same space, i.e., the dimensions of the vectors should be shared. This constraint makes the representation design more difficult. How to define such representations and the features based on them almost become the secrete sauce of a disambiguation system. For example, Cucerzan 2007uses Wikipedia anchor surfaces and \"Category\" values as dimensions and designed complex mechanisms to represent words, mentions and entities as sparse vectors on those dimensions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "BoW representations have several intrinsic drawbacks: First, the semantic meaning of a dimension is largely ignored. For example, \"cat\", \"cats\" and \"tree\" are equally distant under onehot BoW representations. Second, BoW representations often introduce high dimension vector spaces and lead to expensive computation. Third, for different applications, methods of designing handcrafted representations may be quite different, lacking of a general guideline. The intuitive questions like \"why using n-grams, Wikipedia links or category values as dimensions\" and \"why using TF-IDF as weights\" are hinting us it is very likely these handcrafted representations are not the best and there should be some better representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we focus on the first stage, the problem of representations. Inspired by the recent works on word embedding (Bengio et al., 2003; Mikolov et al., 2013a; Mikolov et al., 2013b) , knowledge embedding (Bordes et al., 2011; Bordes et al., 2013; Socher et al., 2013; Wang et al., 2014b) and joint embedding KBs and texts (Wang et al., 2014a; Zhong and Zhang, 2015) , we propose to learn representations for entity disambiguation. Specifically, from KBs and texts, we jointly embed entities and words into the same low-dimensional continuous vector space. The embeddings are obtained by optimizing a global objective considering all the information in the KBs and texts thus the intrinsic semantics of words and entities are believed to be preserved during the embedding. Then we design simple but effective features based on embeddings and build a two-layer disambiguation model. We conduct extensive experiments on real-word data sets and exhibit the effectiveness of our words and entities' representation.", |
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 143, |
| "text": "(Bengio et al., 2003;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 144, |
| "end": 166, |
| "text": "Mikolov et al., 2013a;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 167, |
| "end": 189, |
| "text": "Mikolov et al., 2013b)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 212, |
| "end": 233, |
| "text": "(Bordes et al., 2011;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 234, |
| "end": 254, |
| "text": "Bordes et al., 2013;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 255, |
| "end": 275, |
| "text": "Socher et al., 2013;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 276, |
| "end": 295, |
| "text": "Wang et al., 2014b)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 330, |
| "end": 350, |
| "text": "(Wang et al., 2014a;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 351, |
| "end": 373, |
| "text": "Zhong and Zhang, 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Entity Disambiguation Entity disambiguation methods roughly fall into two categories: local approaches and collective approaches. Local approaches disambiguate each mention in a document separately. For example, Bunescu and Pasca (2006) compare the context of each mention with the Wikipedia categories of an entity candidate; Milne and Witten (2008) come up with the concept \"unambiguous link\" and make it convenient to compute entity relatedness. Differently, collective approaches require all entities in a document \"coherent\" in semantic, measured by some objective functions. Cucerzan (2007) proposes a topic representation for document by aggregating topic vectors of all entity candidates in the document. Kulkarni et al. (2009) model pair-wise coherence of entity candidates for two different mentions and use hill-climbing algorithm to get a proximate solution. Hoffart et al. (2011) treat entity disambiguation as the task of finding a dense subgraph which contain all mention nodes and exactly one mention-entity edge for each mention from a large graph.", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 236, |
| "text": "Bunescu and Pasca (2006)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 327, |
| "end": 350, |
| "text": "Milne and Witten (2008)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 713, |
| "end": 735, |
| "text": "Kulkarni et al. (2009)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 871, |
| "end": 892, |
| "text": "Hoffart et al. (2011)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Most methods above design various representations for mentions and entities. For example, based on Wikipedia, Cucerzan (2007) uses anchor surfaces to represent entities in \"context space\" and use items in the category boxes to represent entities in \"topic space\". For mentions, he takes context words among a fixed-size window around the mention as the context vector. Kulkarni et al. (2009) exploit sets of words, sets of word counts and sets of TF-IDFs to represent entities. express entities with extensive in-links and out-links in Wikipedia.", |
| "cite_spans": [ |
| { |
| "start": 369, |
| "end": 391, |
| "text": "Kulkarni et al. (2009)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In recent years, some works are considering how to apply neural network to disambiguate entities from context. For example, He et al. (2013) use feed-forward network to represent context based on BoW input while Sun et al. (2015) turn to convolution network directly based on the original word2vec (Mikolov et al., 2013a) . However, they pay little attention to design effective word and entity representations. In this paper, we focus on learning representative word and entity vectors for disambiguation. Embedding Word embedding aims to learn continuous vector representation for words. Word embeddings are usually learned from unlabeled text corpus by predicting context words surrounded or predicting the current word given context words (Bengio et al., 2003; Mikolov et al., 2013a; Mikolov et al., 2013b) . These embeddings can usually catch syntactic and semantic relations between words. Recently knowledge embedding also becomes popular. The goal is to embed entities and relations of knowledge graphs into a low-dimension continuous vector space while certain properties in the graph are preserved (Bordes et al., 2011; Bordes et al., 2013; Socher et al., 2013; Wang et al., 2014a; Wang et al., 2014b) . To connect word embedding and knowledge embedding, (Wang et al., 2014a) propose to align these two spaces by Wikipedia anchors and names of entities. (Zhong and Zhang, 2015) conduct alignment by entities' description.", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 140, |
| "text": "He et al. (2013)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 212, |
| "end": 229, |
| "text": "Sun et al. (2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 298, |
| "end": 321, |
| "text": "(Mikolov et al., 2013a)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 743, |
| "end": 764, |
| "text": "(Bengio et al., 2003;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 765, |
| "end": 787, |
| "text": "Mikolov et al., 2013a;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 788, |
| "end": 810, |
| "text": "Mikolov et al., 2013b)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1108, |
| "end": 1129, |
| "text": "(Bordes et al., 2011;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1130, |
| "end": 1150, |
| "text": "Bordes et al., 2013;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1151, |
| "end": 1171, |
| "text": "Socher et al., 2013;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1172, |
| "end": 1191, |
| "text": "Wang et al., 2014a;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1192, |
| "end": 1211, |
| "text": "Wang et al., 2014b)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 1265, |
| "end": 1285, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1364, |
| "end": 1387, |
| "text": "(Zhong and Zhang, 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this part, we first refine current joint embedding techniques to train word and entity embeddings from Freebase and Wikipedia texts for disambiguation tasks. Then in section 3.2, we design simple features based on embeddings. Finally in section 3.3, we propose a two-layer disambiguation model to balance mention-entity prior and other features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Disambiguation by Embedding", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We mainly base the joint learning framework on (Wang et al., 2014a )'s joint model and also utilize the alignment technique from (Zhong and Zhang, 2015) to better align word and entity embeddings into a same space. Furthermore, we optimize the embedding for disambiguation from two aspects. First, we add url-anchor (entity-entity) cooccurrence from Wikipedia. Second, we refine the traditional negative sampling part to have entities in candidate list more probable to be sampled, which aims to discriminate entity candidates from each other.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 66, |
| "text": "(Wang et al., 2014a", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 129, |
| "end": 152, |
| "text": "(Zhong and Zhang, 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Embeddings Jointly Learning", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A knowledge base K is usually composed of a set of triplets (h, r, t), where h, t \u2208 E (the set of entities) and r \u2208 R (the set of relations). Here, we follow (Wang et al., 2014a) to use h, r, t to denote the embeddings of h, r, t respectively. And score a triplet in this way:", |
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 178, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Model", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "z(h, r, t) = b \u2212 1 2 \u2225h + r \u2212 t\u2225 2 (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Model", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "where b is a constant for numerical stability in the approximate optimization stage described in 3.1.5. Then normalize z and define:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Model", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "Pr(h|r, t) = exp{z(h, r, t)} \u2211h \u2208E exp{z(h, r, t)}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Model", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "(2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Model", |
| "sec_num": "3.1.1" |
| }, |
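To make the scoring and normalization in Equations (1)–(2) concrete, here is a minimal numpy sketch over toy embeddings. The function names and the value of the constant b are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def score_triplet(h, r, t, b=7.0):
    # Eq. (1): z(h, r, t) = b - 0.5 * ||h + r - t||^2
    return b - 0.5 * float(np.sum((h + r - t) ** 2))

def prob_head(h_idx, r, t, entity_embs, b=7.0):
    # Eq. (2): softmax of z over all candidate head entities
    scores = np.array([score_triplet(e, r, t, b) for e in entity_embs])
    scores -= scores.max()  # shift scores for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return float(probs[h_idx])
```

In practice the sum over all entities in E is intractable, which is why Section 3.1.5 replaces the softmax with a sampled binary classification objective.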
| { |
| "text": "Pr (r|h, t) and Pr (t|h, r) are also defined in a similar way. And the likelihood of observing a triplet is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Model", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L triplet (h, r, t) = log Pr(h|r, t) + log Pr(r|h, t) + log Pr(t|h, r)", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Knowledge Model", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "Then the goal is to maximize the likelihood of all triplets in the whole knowledge graph:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Model", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L K = \u2211 (h,r,t)\u2208K L triplet (h, r, t)", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Knowledge Model", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "In text model, to be compatible with the knowledge model, a pair of co-occurrence words is scored in this way:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Model", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "z(w, v) = b \u2212 1 2 \u2225w \u2212 v\u2225 2", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Text Model", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "where w and v represent co-occurrence of two words in a context window; w and v represent the corresponding embeddings for w and v. Then normalize z(w, v) and give a probability representation:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Model", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "Pr(w|v) = exp{z(w, v)} \u2211w \u2208V exp{z(w, v)} (6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Model", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "where V is our vocabulary. Then the goal of the text model is to maximize the likelihood of all word co-occurrence pairs:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Model", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "L_T = ∑_{(w,v)} [log Pr(w|v) + log Pr(v|w)] (7)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Model", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "Alignment model guarantees the vectors of entities and words are in the same space, i.e., the similarity/distance between an entity vector and an word vector is meaningful. We combine all the three alignment models proposed in (Wang et al., 2014a) and (Zhong and Zhang, 2015) . Alignment by Wikipedia Anchors (Wang et al., 2014a) . Mentions are replaced with the entities they link to and word-word co-occurrence becomes word-entity co-occurrence.", |
| "cite_spans": [ |
| { |
| "start": 227, |
| "end": 247, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 252, |
| "end": 275, |
| "text": "(Zhong and Zhang, 2015)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 309, |
| "end": 329, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "L_AA = ∑_{(w,a), a∈A} [log Pr(w|e_a) + log Pr(e_a|w)]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "(8) where A denotes the set of anchors and e a denotes the entity behind the anchor a.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "Alignment by Names of Entities (Wang et al., 2014a) . For each triplet (h, r, t), h or t are replaced with their corresponding names, so we get (w h , r, t), (h, r, w t ) and (w h , r, w t ), where w h denotes the name of h and w t denotes the name of t.", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 51, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "L AN = \u2211 (h,r,t) [L triplet (w h , r, t) + L triplet (h, r, w t ) + L triplet (w h , r, w t )]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "(9) Alignment by Entities' Description (Zhong and Zhang, 2015) . This alignment utilizes the cooccurrence of Wikipedia url and words in the description of that url page, which is similar to the PV-DBOW model in (Le and Mikolov, 2014) .", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 62, |
| "text": "(Zhong and Zhang, 2015)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 219, |
| "end": 233, |
| "text": "Mikolov, 2014)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "L_AD = ∑_{e∈E} ∑_{w∈D_e} [log Pr(e|w) + log Pr(w|e)]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "(10) where D e denotes the description of an entity e. To clarify again, \"url\" is equivalent with \"entity\" in this paper. Combine these three kinds of alignment techniques, we get the whole alignment model:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L A = L AA + L AN + L AD", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "3.1.4 Url-Anchor Co-occurrence", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "For entity disambiguation, the entity relatedness graph is useful to capture the \"topics\" of an entity in Wikipedia. Thus we also hope to encode such information into our embedding. Specifically we further incorporate \"url-anchor\" co-occurrence to the training objective. \"url\" stands for the url of a Wikipedia page and \"anchor\" stands for the hyperlinks of anchor fields in that page. Considering knowledge model, text model, alignment model and url-anchor co-occurrence all together, we get the overall objective (likelihood) to maximize:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "L U = \u2211 e\u2208E \u2211 a\u2208A", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L = L K + L T + L A + L U", |
| "eq_num": "(13)" |
| } |
| ], |
| "section": "Alignment Model", |
| "sec_num": "3.1.3" |
| }, |
| { |
| "text": "In training phase, to avoid the computation of the normalizer in equation 2and 6, we follow (Mikolov et al., 2013b) to transform the origin softmax-like objective to a simpler binary classification objective, which aims to distinguish observed data from noise. To optimize for entity disambiguation, when using the context words to predict an anchor (entity), i.e., optimizing Pr(e a |w), rather than uniformly sampling negatives from the vocabulary as (Mikolov et al., 2013b) , we conduct our sampling according to the candidates' prior distribution.", |
| "cite_spans": [ |
| { |
| "start": 92, |
| "end": 115, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 453, |
| "end": 476, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Negative Sampling Refinement", |
| "sec_num": "3.1.5" |
| }, |
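The refined negative sampling above can be sketched as follows, assuming a precomputed candidate list with prior counts for the mention; all names here are hypothetical, not from the paper's code.

```python
import random

def sample_negatives(candidates, prior_counts, positive, k=5, rng=None):
    # Draw k negatives from the mention's candidate list (excluding the
    # observed entity), weighted by mention-entity prior counts, so the
    # model learns to separate confusable candidates from each other.
    rng = rng or random.Random()
    pool = [e for e in candidates if e != positive]
    if not pool:
        return []
    weights = [prior_counts.get(e, 1) for e in pool]  # add-one smoothing
    return rng.choices(pool, weights=weights, k=k)
```

Frequent (high-prior) competitors are sampled more often, which is exactly the hard-negative behaviour the refinement aims for.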
| { |
| "text": "With the embeddings we train above, many entity disambiguation methods can directly take them as the words and entities' representation and redefine their features. In this section, we only design some simple features to illustrate the capability of the embeddings in disambiguation. In the section of experiment, we can observe that even a single embedding-based feature can beat the combination of several BoW-based features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Disambiguation Features Design", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "This feature is directly counted from Wikipedia's anchor fields and measures the link probability of an entity e given a mention m. Prior is a strong indicator (Fader et al., 2009) to select the correct entity. However, it is unwise to take prior as a feature all the time because prior usually get a very large weight, which overfits the training data. Later in this paper, we will propose a classifier to tell when to use the prior or not.", |
| "cite_spans": [ |
| { |
| "start": 160, |
| "end": 180, |
| "text": "(Fader et al., 2009)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mention-Entity Prior", |
| "sec_num": "3.2.1" |
| }, |
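The mention-entity prior can be estimated from anchor counts roughly as follows; this is a sketch, and the input format (an iterable of mention-entity pairs harvested from anchor fields) is an assumption.

```python
from collections import Counter, defaultdict

def build_prior(anchor_pairs):
    # anchor_pairs: iterable of (mention_surface, entity) pairs taken from
    # Wikipedia anchor fields. Returns P(e | m) as relative link counts.
    counts = defaultdict(Counter)
    for mention, entity in anchor_pairs:
        counts[mention][entity] += 1
    return {m: {e: c / sum(ctr.values()) for e, c in ctr.items()}
            for m, ctr in counts.items()}
```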
| { |
| "text": "This feature comes from the hypothesis that the true entity of a mention will coincide with the meaning of most of the other words in the same document. So this feature sums up all idfweighted relatedness scores between an entity candidate and each context word, then average them:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Global Context Relatedness (E-GCR)", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2200e \u2208 \u0393(m),E \u2212 GCR(e, d|m) = 1 |d| \u2211 w\u2208d idf(w) \u2022 \u2126(e, w)", |
| "eq_num": "(14)" |
| } |
| ], |
| "section": "Global Context Relatedness (E-GCR)", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "where \u0393(m) denotes the entity candidate set of mention m; d denotes the document containing m; \u2126(e, w) denotes a distance-based relatedness b \u2212 1 2 \u2225e \u2212 w\u2225 2 , which is compatible with the embedding model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Global Context Relatedness (E-GCR)", |
| "sec_num": "3.2.2" |
| }, |
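A sketch of E-GCR (Eq. 14) under assumed inputs — the embedding dictionary and idf table here are hypothetical stand-ins for the trained embeddings:

```python
import numpy as np

def relatedness(e_vec, w_vec, b=7.0):
    # Omega(e, w) = b - 0.5 * ||e - w||^2, compatible with the embedding model;
    # the value of b is illustrative.
    return b - 0.5 * float(np.sum((e_vec - w_vec) ** 2))

def e_gcr(cand_vec, doc_words, word_vecs, idf, b=7.0):
    # Eq. (14): idf-weighted relatedness between an entity candidate and
    # every word in the document, averaged over the document length |d|.
    total = sum(idf.get(w, 0.0) * relatedness(cand_vec, word_vecs[w], b)
                for w in doc_words if w in word_vecs)
    return total / max(len(doc_words), 1)
```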
| { |
| "text": "E-GCR can only coarsely rank topic-related candidates to one of the top positions. But sometimes there is nearly no relation between the true entity and the topic of the document: Ex.1 \"Stefani, a DJ at The Zone 101.5 FM in Phoenix, AZ, sent me an awesome MP3 of the interview...\" In this example, E-GCR will link AZ to AZ (rapper) because the context is all about music although Phoenix should be a strong hint to link AZ to Arizona.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Context Relatedness (E-LCR)", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "To avoid this kind of errors, we design a feature to describe the relatedness between an entity candidate and some important words around the mention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Context Relatedness (E-LCR)", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "To identify these words, we turn to dependency parser provided by Stanford CoreNLP (Manning et al., 2014) . Formulate this feature:", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 105, |
| "text": "(Manning et al., 2014)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Context Relatedness (E-LCR)", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "\u2200e \u2208 \u0393(m), E-LCR(e, d|m) = (1/|S_depend|) \u2211_{w \u2208 S_depend} \u2126(e, w) (15)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Context Relatedness (E-LCR)", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "where S_depend is the set consisting of all words adjacent to m in the dependency graph of the document d.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Context Relatedness (E-LCR)", |
| "sec_num": "3.2.3" |
| }, |
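A minimal sketch of E-LCR (Eq. 15), under the assumption that S_depend has already been extracted from the dependency parse and that embeddings are numpy vectors; `relatedness` and `e_lcr` are hypothetical names for illustration:

```python
import numpy as np

def relatedness(e_vec, w_vec, b=7.0):
    # Omega(e, w) = b - 1/2 * ||e - w||^2, as in the E-GCR feature.
    return b - 0.5 * np.sum((e_vec - w_vec) ** 2)

def e_lcr(e_vec, s_depend, word_vecs):
    # E-LCR(e, d | m): average relatedness over the words adjacent to the
    # mention in the dependency graph.
    return sum(relatedness(e_vec, word_vecs[w]) for w in s_depend) / len(s_depend)

# Toy example: the dependency neighbors of the mention "AZ" might include
# "phoenix" and "fm" (2-dimensional vectors for brevity).
vecs = {"phoenix": np.array([1.0, 0.0]), "fm": np.array([0.0, 1.0])}
s = e_lcr(np.array([1.0, 0.0]), ["phoenix", "fm"], vecs)
```

Restricting the average to dependency neighbors is what lets a single nearby word such as "Phoenix" outweigh a document-level topic signal.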
| { |
| "text": "In practice, many casual mentions are linked to an entity, such as \"w v\" for West Virginia. Ex.2 \"We would like to welcome you to the official website for the city of Chester, w v.\" In this case, \"w v\" should be a strong hint for the disambiguation of \"Chester\". However, \"w\", \"v\" or \"w v\" is too casual to carry useful information if we only consider its lexical expression. So we should take not only the related surface forms but also their entity candidates into consideration. Then the entity \"West Virginia\" becomes quite helpful for linking \"Chester\" to \"Chester, West Virginia\". This feature is similar to previous collective or topic-coherence methods, but our local entity coherence is more accurate because we only consider the related mentions/entities nearby rather than all entities in a document. We formulate this feature as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Entity Coherence (E-LEC)", |
| "sec_num": "3.2.4" |
| }, |
| { |
| "text": "\u2200e \u2208 \u0393(m), E-LEC(e, d|m) = (1/|S_depend|) \u2211_{w \u2208 S_depend} max_{e\u2032 \u2208 \u0393(w), e\u2032 \u2260 e} \u2126(e, e\u2032) (16)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Local Entity Coherence (E-LEC)", |
| "sec_num": "3.2.4" |
| }, |
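A sketch of E-LEC (Eq. 16), assuming entity embeddings in a dict and a candidate table mapping each neighboring mention w to its candidate set \u0393(w); all identifiers here are illustrative, not from the paper:

```python
import numpy as np

def relatedness(a_vec, b_vec, b=7.0):
    # Omega(e, e') = b - 1/2 * ||e - e'||^2, reused between entity pairs.
    return b - 0.5 * np.sum((a_vec - b_vec) ** 2)

def e_lec(e, e_vecs, mention_neighbors, candidates):
    # For each neighboring mention w in the dependency graph, take its
    # best-related candidate e' != e, then average over the neighbors.
    scores = []
    for w in mention_neighbors:
        cands = [c for c in candidates.get(w, []) if c != e]
        if cands:
            scores.append(max(relatedness(e_vecs[e], e_vecs[c]) for c in cands))
    return sum(scores) / len(mention_neighbors)

# Toy example mirroring Ex.2: the neighbor "w v" has candidate entities, and
# "West Virginia" is close to "Chester, West Virginia" in embedding space.
e_vecs = {"Chester,_West_Virginia": np.array([1.0, 0.0]),
          "West_Virginia": np.array([1.0, 0.1]),
          "Volt": np.array([0.0, 5.0])}
score = e_lec("Chester,_West_Virginia", e_vecs, ["w v"],
              {"w v": ["West_Virginia", "Volt"]})
```

Taking the max over \u0393(w) means the neighbor does not itself need to be disambiguated first; its best-matching candidate is enough to support e.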
| { |
| "text": "To balance the usage of the prior and the other features, we propose a two-layer disambiguation model. It includes two steps: (1) Build a binary classifier that gives a probability p_conf denoting the confidence to use the prior only. The features used to construct this classifier are E-GCR, the mention word itself, and the context words in a window of size 4 around the mention. (2) If p_conf reaches a designated threshold \u03be, we adopt the prior alone to select the best candidate; otherwise we only consider the other embedding-based features described in section 3.2. We formulate this model as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Two-layer Disambiguation Model", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2200m, e* = arg max_{e \u2208 \u0393(m)} prior(e|m) if p_conf \u2265 \u03be; arg max_{e \u2208 \u0393(m)} \u2211_{i=1}^{|F|} w_i \u00b7 f_i if p_conf < \u03be", |
| "eq_num": "(17)" |
| } |
| ], |
| "section": "Two-layer Disambiguation Model", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "where e * is the entity we choose for the mention m.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Two-layer Disambiguation Model", |
| "sec_num": "3.3" |
| }, |
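The decision rule in Eq. 17 can be sketched as a small Python function; here `features` is a hypothetical list of per-candidate scoring callables f_i(e), and the weights w_i would come from the trained model:

```python
def disambiguate(candidates, prior, features, weights, p_conf, xi=0.95):
    # Two-layer model: trust the prior alone when the first-layer classifier
    # is confident (p_conf >= xi); otherwise score candidates by the weighted
    # sum of the embedding-based features.
    if p_conf >= xi:
        return max(candidates, key=lambda e: prior[e])
    return max(candidates,
               key=lambda e: sum(w * f(e) for w, f in zip(weights, features)))

# Toy example: with high confidence the prior picks "Arizona"; with low
# confidence a context feature favoring the rapper takes over.
cands = ["Arizona", "AZ (rapper)"]
prior = {"Arizona": 0.6, "AZ (rapper)": 0.4}
feats = [lambda e: 1.0 if e == "AZ (rapper)" else 0.0]
by_prior = disambiguate(cands, prior, feats, [1.0], p_conf=0.99)
by_feats = disambiguate(cands, prior, feats, [1.0], p_conf=0.50)
```

The hard switch (rather than a soft interpolation) is what prevents a strong prior from drowning out the context features on the difficult cases.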
| { |
| "text": "In the experiments, we first compare our embedding-based features with some traditional BoW-based features. Then we illustrate the capability of the two-layer disambiguation model. After that, we compare our embedding technique EDKate on entity disambiguation tasks with some straightforward work-arounds. Finally, we incorporate mention detection and construct a disambiguation system to compare with other existing systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We take Freebase as the KB and the full Wikipedia corpus as the text corpus. For comparison, we also use some small benchmark corpora for testing purposes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We adopt the Wikipedia dump from Feb. 13th, 2015. From the raw HTML, we first filter out non-English and non-entity pages. Then we extract the text and anchor information according to the HTML templates. After these preprocessing procedures, we obtain 4,532,397 pages with 93,299,855 anchors. Furthermore, we split the remaining pages into training, development and testing sets in the proportion 8:1:1. In some experiments, only \"valid entities\" are considered and a \"(filtered)\" tag is added to the name of the dataset. For a statistical summary, please refer to Table 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 563, |
| "end": 570, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Wikipedia", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "In some experiments, we limit our KB entities to those in the Wikipedia training set and, for efficiency, remove entities that are mentioned fewer than 3 times in the Wikipedia training set. We call the remaining entities \"valid entities\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Valid Entities", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "We use the Freebase dump from Feb. 13th, 2015 as our knowledge base. Since we only want to link mentions to Wikipedia entities, we filter out triplets whose head or tail entity is not covered by Wikipedia. This leaves 99,980,159 triplets; if we only consider valid entities, 37,606,158 remain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Base", |
| "sec_num": "4.1.3" |
| }, |
| { |
| "text": "Besides Wikipedia, we also evaluate our embedding-based method on some small benchmark datasets. KBP 2010 comes from the KBP annual tracks held by TAC and contains only one mention per document. AQUAINT was originally collected by (Milne and Witten, 2008) and mimics the structure of Wikipedia. MSNBC is taken from (Cucerzan, 2007) and focuses on newswire text; ACE is collected from the ACE co-reference dataset. For detailed statistics, see Table 1 .", |
| "cite_spans": [ |
| { |
| "start": 235, |
| "end": 259, |
| "text": "(Milne and Witten, 2008)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 319, |
| "end": 335, |
| "text": "(Cucerzan, 2007)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 452, |
| "end": 459, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Small Benchmark Corpus", |
| "sec_num": "4.1.4" |
| }, |
| { |
| "text": "We find that in all the data sets, a large part of the examples can be solved well by the mention-entity prior alone, without considering any context. But there do exist examples where the prior fails, and we think disambiguation should pay more attention to these examples rather than to the part where the prior already works well. Thus, from the testing sets \"Wikipedia:test (filtered)\" and \"KBP 2010 (filtered)\", we collect the cases where the prior cannot rank the correct entity at top 1 and construct a separate \"difficult\" set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Difficult Set", |
| "sec_num": "4.1.5" |
| }, |
| { |
| "text": "We use stochastic gradient descent (SGD) to optimize the objective (see equation 13). We set the dimension of word and entity embeddings to 150 and initialize each element of an embedding with a random number near 0. For the constant b, we empirically set it to 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Embedding Training", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the knowledge model, we use Freebase as our knowledge base. We do not set a fixed epoch number; the knowledge training thread does not terminate until the text training thread stops. Furthermore, we also adapt the learning rate in knowledge training to that in text training. When a triplet (h, r, t) is considered, the numbers of negative samples used to construct the corrupted triplets (h\u2032, r, t), (h, r\u2032, t) and (h, r, t\u2032) are all 10, in which h\u2032 and t\u2032 are uniformly sampled from E while r\u2032 is uniformly sampled from R.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Embedding Training", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the text model, we use the filtered Wikipedia training set as our text corpus. We set the number of epochs to 6 and the initial learning rate to 0.025, which decreases linearly over the course of training. When a word is encountered, we take the words inside a 5-word window as co-occurring words. For each co-occurring word, we sample 30 negatives from the unigram distribution raised to the 3/4 power.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Embedding Training", |
| "sec_num": "4.2" |
| }, |
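The negative-sampling scheme above (unigram distribution raised to the 3/4 power) can be sketched as follows; `build_sampler` and the toy counts are illustrative names, not from the paper:

```python
import random

def build_sampler(counts, power=0.75):
    # Negative-sampling distribution: unigram counts raised to the 3/4 power,
    # which flattens the distribution so rare words are sampled more often
    # than their raw frequency would suggest.
    words = list(counts)
    weights = [counts[w] ** power for w in words]
    def sample(k):
        return random.choices(words, weights=weights, k=k)
    return sample

# 30 negatives per co-occurring word, as in the text-model setup.
sampler = build_sampler({"the": 1000, "phoenix": 10, "mp3": 5})
negatives = sampler(30)
```

In a real implementation the weighted sampling is usually precomputed into an alias or cumulative table, since it runs once per co-occurring word over the whole corpus.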
| { |
| "text": "In the alignment model, \"alignment by Wikipedia anchors\" and \"alignment by entity names\" can be absorbed into the text model and the knowledge model respectively. For \"alignment by entity's description\", we sample 10 negatives in Pr(e|w) and 30 negatives in Pr(w|e).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Embedding Training", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For Pr(e_a|w) in the \"url-anchor co-occurrence\", we sample 20 negatives from the candidate list of the anchor mention and 10 negatives from the whole entity set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Embedding Training", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To balance the training process, we give the knowledge model 10 threads and the text model 20 threads. We adopt a shared-memory scheme like (Bordes et al., 2013) and do not apply locks.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 154, |
| "text": "(Bordes et al., 2013)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Embedding Training", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We set up this experiment to exhibit the expressiveness of our embeddings. We compare E-GCR (global context relatedness) with some traditional BoW-based features. Moreover, in this experiment, we report the results on the \"difficult set\" where the mention-entity prior fails. Following the metric used in (Cucerzan, 2011), we take accuracy to evaluate the disambiguation performance, that is, the fraction of all mentions for which the correct entity is ranked at top 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison between Embedding-based and BoW-based Feature", |
| "sec_num": "4.3" |
| }, |
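The accuracy metric described above is simply top-1 agreement with the gold annotation; a minimal sketch, with `predictions` and `gold` as hypothetical mention-to-entity dicts:

```python
def accuracy(predictions, gold):
    # Fraction of mentions whose top-1 predicted entity equals the gold
    # entity, following the metric used in (Cucerzan, 2011).
    correct = sum(1 for m, e in gold.items() if predictions.get(m) == e)
    return correct / len(gold)
```

For example, with two mentions and one correct top-1 prediction, the accuracy is 0.5.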
| { |
| "text": "For embedding-based features, we only consider E-GCR, which is the most comparable with the BoW-based features B-CS and B-TS used here because they all consider the whole document as context. The BoW-based features include: (1) Mention-Entity Prior;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "(2) BoW Context Similarity (B-CS). This feature is proposed by (Cucerzan, 2011). First, for each entity in Wikipedia, take all surface forms of the anchors on that page as its representation vector. Then compute the scalar product between this representation vector and the context word vector of a given mention;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "(3) BoW Topic Similarity (B-TS). First, construct the topic vector for each entity from the category boxes as in (Cucerzan, 2011). Then compute the scalar product between the topic vector and the context word vector of a given mention.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "From Table 2, we observe: (1) E-GCR can beat the combination of several BoW-based features. This is mainly because embedding training has its own optimization objective and subsumes the information carried by these BoW-based representations. (2) The embedding-based feature appears robust and significantly outperforms the BoW-based features on the difficult set, which indicates that the BoW representations cannot capture the information in these cases while the embeddings remain expressive. (3) Unlike the situation on the difficult set, the overall gap between \"E-GCR\" and \"Prior+B-CS+B-TS\" is not so large, mainly because the \"difficult set\" occupies only a small proportion and the prior covers the drawbacks of the BoW-based features.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 12, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "This experiment suggests paying more attention to the difficult set, which is helpful to improve the overall performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "In this section we evaluate the quality of the two-layer disambiguation model and compare it with the linear disambiguation model (Cucerzan, 2011). Moreover, we also report results on the \"difficult set\" defined above to see whether our two-layer model can balance the prior and the other features. The features we use here are the prior and E-GCR. Accuracy is used as the evaluation metric.", |
| "cite_spans": [ |
| { |
| "start": 130, |
| "end": 146, |
| "text": "(Cucerzan, 2011)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison between Linear and Two-layer Disambiguation Model", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We use logistic regression for both models. For the two-layer model, we first apply the prior classifier and obtain p_conf. Here we set the threshold \u03be to 0.95 according to experiments on the development set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "4.4.1" |
| }, |
| { |
| "text": "From Figure 1, we see that the classifier is quite good at assigning positive cases to the correct class. From Table 3, we observe that our two-layer model achieves promising results on both the overall and the difficult set compared with the linear model. This evidence indicates that the prior classifier works and that the two-layer model can balance the usage of the prior and the other features.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 13, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 114, |
| "end": 121, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.4.2" |
| }, |
| { |
| "text": "This experiment compares our embedding technique (EDKate) with some other methods: (Mikolov et al., 2013b) , (Wang et al., 2014a) and (Zhong and Zhang, 2015) . Here, we only consider the setting where mentions are given, and take accuracy as the evaluation metric.", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 107, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 110, |
| "end": 130, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 135, |
| "end": 158, |
| "text": "(Zhong and Zhang, 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison between Different Embeddings", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "For (Mikolov et al., 2013b) , we directly take their word2vec model and replace each anchor's surface form with the entity symbol in the training corpus. In this way, we get embeddings for both words and entities. For (Wang et al., 2014a) and (Zhong and Zhang, 2015) , we completely follow the models in the original papers. For our method, we use the knowledge model and text model described in (Wang et al., 2014a) and combine the alignment techniques of both (Wang et al., 2014a) and (Zhong and Zhang, 2015) . Moreover, we add the \"url-anchor\" co-occurrence to the training objective and refine the negative sampling method by making entities in the candidate list more likely to be sampled. In this experiment, we only use E-GCR as our feature for simplicity.", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 27, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 201, |
| "end": 221, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 226, |
| "end": 249, |
| "text": "(Zhong and Zhang, 2015)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 377, |
| "end": 397, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 446, |
| "end": 466, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 471, |
| "end": 494, |
| "text": "(Zhong and Zhang, 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "4.5.1" |
| }, |
| { |
| "text": "From Table 4, we observe that (Wang et al., 2014a) outperforms (Mikolov et al., 2013b) , which indicates that introducing structure information such as the knowledge base is quite beneficial. The utilization of the description message for entities improves the performance as well. For our method EDKate, we further take advantage of the \"url-anchor\" co-occurrence and the special sampling method, which make the embeddings more expressive and guarantee the performance. [Table 4, accuracy on Wiki:test (filtered) / KBP 2010 (filtered): (Mikolov et al., 2013b) 0.8062 / 0.7311; (Wang et al., 2014a) 0.8283 / 0.7922; (Zhong and Zhang, 2015) truncated in extraction. Reported KBP 2010 accuracies: (Lehmann et al., 2010) 0.806; (He et al., 2013) 0.809; (Sun et al., 2015) 0.839; (Cucerzan, 2011) 0.873; EDKate 0.889.]", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 51, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 63, |
| "end": 86, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 208, |
| "end": 231, |
| "text": "(Mikolov et al., 2013b)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 246, |
| "end": 266, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 281, |
| "end": 304, |
| "text": "(Zhong and Zhang, 2015)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 307, |
| "end": 329, |
| "text": "(Lehmann et al., 2010)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 336, |
| "end": 353, |
| "text": "(He et al., 2013)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 360, |
| "end": 378, |
| "text": "(Sun et al., 2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 385, |
| "end": 401, |
| "text": "(Cucerzan, 2011)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 12, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.5.2" |
| }, |
| { |
| "text": "In this section, we compare our result on KBP 2010 with other existing reported results, among which Cucerzan (2011) holds the best record on KBP 2010 so far; Lehmann et al. (2010) ranked first in the 2010 competition, while He et al. (2013) and Sun et al. (2015) adopt neural-network-based methods. We still use accuracy as the evaluation metric because KBP 2010 specifies the input mentions. Because some papers only report the accuracy with 3 decimal places, we unify all results to 3 decimal places.", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 182, |
| "text": "Lehmann et al. (2010)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 224, |
| "end": 240, |
| "text": "He et al. (2013)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 245, |
| "end": 262, |
| "text": "Sun et al. (2015)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison with Reported Results on KBP 2010", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "We take the dataset \"Wikipedia:all\" to train the embeddings here and use all the features defined in section 3.2. In this experiment, we adopt the unfiltered version of KBP 2010 as the test corpus. The compared methods are not as effective as ours, which shows the importance of the embedding quality in this disambiguation task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "4.6.1" |
| }, |
| { |
| "text": "In this section, we equip EDKate with mention detection and compare our system with WikipediaMiner (Milne and Witten, 2008) , Wikifier v1 and Wikifier v2 (Cheng and Roth, 2013) . For the evaluation metric, we adopt the Bag-of-Title (BoT) F1 metric, which is used by all the other systems chosen here.", |
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 124, |
| "text": "(Milne and Witten, 2008)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 155, |
| "end": 177, |
| "text": "(Cheng and Roth, 2013)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison with Other Wikification Systems", |
| "sec_num": "4.7" |
| }, |
| { |
| "text": "We first make use of all mentions in the mention-entity table to construct a Trie tree, which is used to detect mentions in the input text. To remove noise, we simply retain mentions that contain at least one noun and filter out mentions that consist completely of stop words. Then we apply our disambiguation technique to the detected mentions. As in experiment 4.6, we make use of all the features described in section 3.2. Table 6 shows that our embedding-based method EDKate is better than two popular systems but cannot outperform Wikifier v2 on these three datasets. It should be mentioned that Wikifier v2 is largely based on Wikifier v1, and its magic is to add relational inference with some handcrafted rules. Embedding methods can actually model relations well (Wang et al., 2014a) , so the idea of introducing relational information into our current framework is promising and is left for future work.", |
| "cite_spans": [ |
| { |
| "start": 787, |
| "end": 807, |
| "text": "(Wang et al., 2014a)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 425, |
| "end": 432, |
| "text": "Table 6", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "4.7.1" |
| }, |
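The trie-based mention detection above can be sketched as follows; the longest-match scan and the token-level trie are a plausible reading of the description, with all names here being illustrative:

```python
def build_trie(mentions):
    # Token-level trie over mention surface forms; "$" marks a complete
    # mention at that node.
    root = {}
    for m in mentions:
        node = root
        for tok in m.split():
            node = node.setdefault(tok, {})
        node["$"] = m
    return root

def detect(tokens, root):
    # Greedy longest-match scan: at each position, follow the trie as far as
    # possible and record the longest complete mention found.
    found, i = [], 0
    while i < len(tokens):
        node, best, j = root, None, i
        while j < len(tokens) and tokens[j] in node:
            node = node[tokens[j]]
            j += 1
            if "$" in node:
                best = (i, j, node["$"])
        if best:
            found.append(best)
            i = best[1]
        else:
            i += 1
    return found

trie = build_trie(["west virginia", "chester"])
spans = detect("welcome to chester west virginia".split(), trie)
```

The noun/stop-word filtering described in the text would then be applied to `spans` before disambiguation.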
| { |
| "text": "In this paper, we refine a knowledge and text joint learning framework for entity disambiguation tasks and learn semantics-rich embeddings for words and entities. We then design some simple embedding-based features and build a two-layer disambiguation model. Extensive experiments show that our embeddings are very expressive and quite helpful in entity disambiguation tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A neural probabilistic language model", |
| "authors": [ |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9jean", |
| "middle": [], |
| "last": "Ducharme", |
| "suffix": "" |
| }, |
| { |
| "first": "Pascal", |
| "middle": [], |
| "last": "Vincent", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Janvin", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "The Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "1137--1155", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. The Journal of Machine Learning Re- search, 3:1137-1155.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Learning structured embeddings of knowledge bases", |
| "authors": [ |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Conference on Artificial Intelligence, number EPFL-CONF-192344", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embed- dings of knowledge bases. In Conference on Artifi- cial Intelligence, number EPFL-CONF-192344.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Translating embeddings for modeling multirelational data", |
| "authors": [ |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Usunier", |
| "suffix": "" |
| }, |
| { |
| "first": "Alberto", |
| "middle": [], |
| "last": "Garcia-Duran", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Oksana", |
| "middle": [], |
| "last": "Yakhnenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "2787--2795", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Advances in Neural Information Processing Systems, pages 2787-2795.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Using encyclopedic knowledge for named entity disambiguation", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Razvan", |
| "suffix": "" |
| }, |
| { |
| "first": "Marius", |
| "middle": [], |
| "last": "Bunescu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pasca", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of European Chapter of the Association for Computational Linguistics", |
| "volume": "6", |
| "issue": "", |
| "pages": "9--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Razvan C Bunescu and Marius Pasca. 2006. Using en- cyclopedic knowledge for named entity disambigua- tion. In Proceedings of European Chapter of the As- sociation for Computational Linguistics, volume 6, pages 9-16.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Cuny-uiuc-sri tac-kbp2011 entity linking system description", |
| "authors": [ |
| { |
| "first": "Taylor", |
| "middle": [], |
| "last": "Cassidy", |
| "suffix": "" |
| }, |
| { |
| "first": "Zheng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Javier", |
| "middle": [], |
| "last": "Artiles", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongbo", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Lev-Arie", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Jing", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiawei", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of Text Analysis Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taylor Cassidy, Zheng Chen, Javier Artiles, Heng Ji, Hongbo Deng, Lev-Arie Ratinov, Jing Zheng, Ji- awei Han, and Dan Roth. 2011. Cuny-uiuc-sri tac- kbp2011 entity linking system description. In Pro- ceedings of Text Analysis Conference.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Relational inference for wikification", |
| "authors": [ |
| { |
| "first": "Xiao", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiao Cheng and Dan Roth. 2013. Relational inference for wikification. In Proceedings of Conference on Empirical Methods in Natural Language Process- ing.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Natural language processing (almost) from scratch", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Karlen", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Kuksa", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "The Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2493--2537", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Re- search, 12:2493-2537.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Large-scale named entity disambiguation based on wikipedia data", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Silviu Cucerzan", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "7", |
| "issue": "", |
| "pages": "708--716", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Silviu Cucerzan. 2007. Large-scale named entity dis- ambiguation based on wikipedia data. In Proceed- ings of the Conference on Empirical Methods in Nat- ural Language Processing, volume 7, pages 708- 716.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Tac entity linking by performing full-document entity extraction and disambiguation", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Silviu Cucerzan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of Text Analysis Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Silviu Cucerzan. 2011. Tac entity linking by perform- ing full-document entity extraction and disambigua- tion. In Proceedings of Text Analysis Conference, volume 2011.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Scaling wikipedia-based named entity disambiguation to arbitrary web text", |
| "authors": [ |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "Turing", |
| "middle": [], |
| "last": "Center", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the International Joint Conferences on Artificial Intelligence Workshop on Usercontributed Knowledge and Artificial Intelligence: An Evolving Synergy", |
| "volume": "", |
| "issue": "", |
| "pages": "21--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anthony Fader, Stephen Soderland, Oren Etzioni, and Turing Center. 2009. Scaling wikipedia-based named entity disambiguation to arbitrary web text. In Proceedings of the International Joint Confer- ences on Artificial Intelligence Workshop on User- contributed Knowledge and Artificial Intelligence: An Evolving Synergy, Pasadena, CA, USA, pages 21-26.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Learning entity representation for entity disambiguation", |
| "authors": [ |
| { |
| "first": "Zhengyan", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Shujie", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Mu", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Longkai", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Houfeng", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "30--34", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Longkai Zhang, and Houfeng Wang. 2013. Learning entity representation for entity disambiguation. In ACL (2), pages 30-34.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Robust disambiguation of named entities in text", |
| "authors": [ |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Hoffart", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohamed", |
| "middle": [ |
| "Amir" |
| ], |
| "last": "Yosef", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilaria", |
| "middle": [], |
| "last": "Bordino", |
| "suffix": "" |
| }, |
| { |
| "first": "Hagen", |
| "middle": [], |
| "last": "F\u00fcrstenau", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Spaniol", |
| "suffix": "" |
| }, |
| { |
| "first": "Bilyana", |
| "middle": [], |
| "last": "Taneva", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Thater", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Weikum", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "782--792", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bor- dino, Hagen F\u00fcrstenau, Manfred Pinkal, Marc Span- iol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing, pages 782-792.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Collective annotation of wikipedia entities in web text", |
| "authors": [ |
| { |
| "first": "Sayali", |
| "middle": [], |
| "last": "Kulkarni", |
| "suffix": "" |
| }, |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Ganesh", |
| "middle": [], |
| "last": "Ramakrishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "Soumen", |
| "middle": [], |
| "last": "Chakrabarti", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Special Interest Group on Knowledge Discovery and data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "457--466", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annota- tion of wikipedia entities in web text. In Proceed- ings of Special Interest Group on Knowledge Dis- covery and data Mining, pages 457-466.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Distributed representations of sentences and documents", |
| "authors": [ |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1405.4053" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Lcc approaches to knowledge base population at tac 2010", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Sean", |
| "middle": [], |
| "last": "Monahan", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Nezda", |
| "suffix": "" |
| }, |
| { |
| "first": "Arnold", |
| "middle": [], |
| "last": "Jung", |
| "suffix": "" |
| }, |
| { |
| "first": "Ying", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of Text Analysis Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Lehmann, Sean Monahan, Luke Nezda, Arnold Jung, and Ying Shi. 2010. Lcc approaches to knowledge base population at tac 2010. In Proceed- ings of Text Analysis Conference.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The stanford corenlp natural language processing toolkit", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenny", |
| "middle": [], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [ |
| "J" |
| ], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "McClosky", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "55--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, and David Mc- Closky. 2014. The stanford corenlp natural lan- guage processing toolkit. In Proceedings of Asso- ciation for Computational Linguistics, pages 55-60.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1301.3781" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed represen- tations of words and phrases and their composition- ality. In Advances in neural information processing systems, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Learning to link with wikipedia", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Milne", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [ |
| "H" |
| ], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of Information and knowledge management", |
| "volume": "", |
| "issue": "", |
| "pages": "509--518", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Milne and Ian H Witten. 2008. Learning to link with wikipedia. In Proceedings of Information and knowledge management, pages 509-518.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Local and global algorithms for disambiguation to wikipedia", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Doug", |
| "middle": [], |
| "last": "Downey", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Anderson", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1375--1384", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Ratinov, Dan Roth, Doug Downey, and Mike An- derson. 2011. Local and global algorithms for dis- ambiguation to wikipedia. In Proceedings of Asso- ciation for Computational Linguistics, pages 1375- 1384.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Reasoning with neural tensor networks for knowledge base completion", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "926--934", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural ten- sor networks for knowledge base completion. In Ad- vances in Neural Information Processing Systems, pages 926-934.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Modeling mention, context and entity with neural networks for entity disambiguation", |
| "authors": [ |
| { |
| "first": "Yaming", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Duyu", |
| "middle": [], |
| "last": "Tang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhenzhou", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaolong", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "1333--1339", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yaming Sun, Lei Lin, Duyu Tang, Nan Yang, Zhen- zhou Ji, and Xiaolong Wang. 2015. Modeling men- tion, context and entity with neural networks for en- tity disambiguation. In Proceedings of the Twenty- Fourth International Joint Conference on Artificial Intelligence (IJCAI), pages 1333-1339.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Knowledge graph and text jointly embedding", |
| "authors": [ |
| { |
| "first": "Zhen", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianwen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianlin", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Zheng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1591--1601", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014a. Knowledge graph and text jointly em- bedding. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Pro- cessing. Association for Computational Linguistics, pages 1591-1601.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Knowledge graph embedding by translating on hyperplanes", |
| "authors": [ |
| { |
| "first": "Zhen", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianwen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianlin", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Zheng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of Association for the Advancement of Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "1112--1119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014b. Knowledge graph embedding by translating on hyperplanes. In Proceedings of As- sociation for the Advancement of Artificial Intelli- gence, pages 1112-1119.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Aligning knowledge and text embeddings by entity descriptions", |
| "authors": [ |
| { |
| "first": "Huaping", |
| "middle": [], |
| "last": "Zhong", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianwen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Huaping Zhong and Jianwen Zhang. 2015. Aligning knowledge and text embeddings by entity descrip- tions. In Proceedings of Conference on Empirical Methods in Natural Language Processing.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Precision-Recall Curves for Prior Classification", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "text": "De[Pr(e|e a ) + Pr(e a |e)](12)where A De stands for all anchors in Wikpedia page D e . Pr(e|e a ) and Pr(e a |e) are defined similarly as equation 6.", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF2": { |
| "text": "Statistics for each corpus prior cannot work well.", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>" |
| }, |
| "TABREF4": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"5\">: Comparison between Embedding-based</td></tr><tr><td colspan=\"3\">Feature and BoW-based Feature</td><td/><td/></tr><tr><td/><td colspan=\"2\">Wiki:test</td><td colspan=\"2\">KBP 2010</td></tr><tr><td>Model Type</td><td colspan=\"2\">(filtered)</td><td colspan=\"2\">(filtered)</td></tr><tr><td/><td colspan=\"4\">overall difficult overall difficult</td></tr><tr><td>Linear</td><td>0.8671</td><td>0.1310</td><td>0.7791</td><td>0.0617</td></tr><tr><td>Two-layer</td><td>0.8931</td><td>0.4795</td><td>0.8474</td><td>0.5140</td></tr></table>" |
| }, |
| "TABREF5": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>: Comparison between Linear Disam-</td></tr><tr><td>biguation Model and Two-layer Model</td></tr></table>" |
| }, |
| "TABREF7": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"2\">: Comparison between Different Embed-</td></tr><tr><td>dings</td><td/></tr><tr><td>Method</td><td>Accuracy on KBP 2010</td></tr></table>" |
| }, |
| "TABREF8": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>: Comparison with other reported results</td></tr><tr><td>on KBP 2010</td></tr></table>" |
| }, |
| "TABREF9": { |
| "text": "shows that EDKate outperforms the current best record (Cucerzan, 2011) in KBP 2010 dataset.Sun et al. (2015) apply convolution neural network and take advantages of(Mikolov et al., 2013a)'s word2vec as input but it seems not so ef-", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>System</td><td>AQUAINT</td><td>ACE</td><td>MSNBC</td></tr><tr><td>WikipediaMiner</td><td>0.8361</td><td>0.7276</td><td>0.6849</td></tr><tr><td>Wikifier v1</td><td>0.8394</td><td>0.7725</td><td>0.7488</td></tr><tr><td>EDKate</td><td>0.8515</td><td>0.8079</td><td>0.7550</td></tr><tr><td>Wikifier v2</td><td>0.8888</td><td>0.8530</td><td>0.8120</td></tr></table>" |
| }, |
| "TABREF10": { |
| "text": "Comparison with other Wikification systems in BoT F1 metric", |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |