| { |
| "paper_id": "C16-1017", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:00:59.512275Z" |
| }, |
| "title": "Label Embedding for Zero-shot Fine-grained Named Entity Typing", |
| "authors": [ |
| { |
| "first": "Yukun", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Nanyang Technological University", |
| "location": { |
| "country": "Singapore" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Cambria", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Nanyang Technological University", |
| "location": { |
| "country": "Singapore" |
| } |
| }, |
| "email": "cambria@ntu.edu.sg" |
| }, |
| { |
| "first": "Sa", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Nanyang Technological University", |
| "location": { |
| "country": "Singapore" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Named entity typing is the task of detecting the types of a named entity in context. For instance, given \"Eric is giving a presentation\", our goal is to infer that 'Eric' is a speaker or a presenter and a person. Existing approaches to named entity typing cannot work with a growing type set and fails to recognize entity mentions of unseen types. In this paper, we present a label embedding method that incorporates prototypical and hierarchical information to learn pre-trained label embeddings. In addition, we adapt a zero-shot framework that can predict both seen and previously unseen entity types. We perform evaluation on three benchmark datasets with two settings: 1) few-shots recognition where all types are covered by the training set; and 2) zero-shot recognition where fine-grained types are assumed absent from training set. Results show that prior knowledge encoded using our label embedding methods can significantly boost the performance of classification for both cases.", |
| "pdf_parse": { |
| "paper_id": "C16-1017", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Named entity typing is the task of detecting the types of a named entity in context. For instance, given \"Eric is giving a presentation\", our goal is to infer that 'Eric' is a speaker or a presenter and a person. Existing approaches to named entity typing cannot work with a growing type set and fails to recognize entity mentions of unseen types. In this paper, we present a label embedding method that incorporates prototypical and hierarchical information to learn pre-trained label embeddings. In addition, we adapt a zero-shot framework that can predict both seen and previously unseen entity types. We perform evaluation on three benchmark datasets with two settings: 1) few-shots recognition where all types are covered by the training set; and 2) zero-shot recognition where fine-grained types are assumed absent from training set. Results show that prior knowledge encoded using our label embedding methods can significantly boost the performance of classification for both cases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Named entity typing (NET) is the task of inferring types of named entity mentions in text. NET is a useful pre-processing step for many natural language processing (NLP) tasks, e.g., auto-categorization and sentiment analysis. Named entity linking, for instance, can use NET to refine entity candidates of a given mention (Ling and Weld, 2012) . Besides, NET is capable of supporting applications based on a deeper understanding of natural language, e.g., knowledge completion (Dong et al., 2014) and question answering (Lin et al., 2012; Fader et al., 2014) . Standard NET approaches or sometime known as named entity recognition (Chinchor and Robinson, 1997; Tjong Kim Sang and De Meulder, 2003; Doddington et al., 2004) are concerned with coarse-grained types (e.g, person, location, organization) that are flat in structure. In comparison, fine-grained named entity typing (FNET) (Ling and Weld, 2012) , which has been studied as an extension of standard NET task, uses a tree-structured taxonomy including not only coarse-grained types but also fine-grained types of named entities. For instance, given \" [Intel] said that over the past decade\", standard NET only classifies Intel as organization, whereas FNET further classifies it as organization/corporation.", |
| "cite_spans": [ |
| { |
| "start": 322, |
| "end": 343, |
| "text": "(Ling and Weld, 2012)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 477, |
| "end": 496, |
| "text": "(Dong et al., 2014)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 520, |
| "end": 538, |
| "text": "(Lin et al., 2012;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 539, |
| "end": 558, |
| "text": "Fader et al., 2014)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 631, |
| "end": 660, |
| "text": "(Chinchor and Robinson, 1997;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 661, |
| "end": 697, |
| "text": "Tjong Kim Sang and De Meulder, 2003;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 698, |
| "end": 722, |
| "text": "Doddington et al., 2004)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 884, |
| "end": 905, |
| "text": "(Ling and Weld, 2012)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1110, |
| "end": 1117, |
| "text": "[Intel]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "FNET is faced with two major challenges: growing type set and label noises. Since the type hierarchy of entities is typically built from knowledge bases such as DBpedia, which is regularly updated with new types (especially fine-grained types) and entities, it is natural to assume that the type hierarchy is growing rather than fixed over time. However, current FNET systems are impeded from handling a growing type set for that information learned from training set cannot be transferred to unseen types. Another problem with FNET is that the weakly supervised tagging process used for automatically generating labeled data inevitably introduces label noises. Current solutions rely on heuristic rules (Gillick et al., 2014) or embedding method (Ren et al., 2016) to remove noises prior to training the multi-label classifier. In order to address these two problems at the same time, we propose a simple yet effective method for learning prototype-driven label embeddings that works for both seen and unseen types and is robust to the label noises. Another contribution of this work is that we combine prototypical and hierarchical information for learning label embeddings.", |
| "cite_spans": [ |
| { |
| "start": 704, |
| "end": 726, |
| "text": "(Gillick et al., 2014)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 747, |
| "end": 765, |
| "text": "(Ren et al., 2016)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The remainder of this paper is organized as follows: Section 2 proposes a survey of prior works related to FNET; Section 3 introduces the embedding-based FNET method and its zero-shot extension; Section 4 describes our label embedding method; Section 5 illustrates experiments and analysis for both few-shot and zero-shot settings; finally, Section 6 concludes the paper and discusses future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There is little related work specifically on zero-shot FNET but several research lines are considered related to this work: fine-grained named entity recognition, prototype-driven learning, and multi-label classification models based on embeddings. As FNET works with a much larger type set as compared with standard NET, it becomes difficult to have a sufficient training set for every type when relying on manual annotation. Instead, training data can be automatically generated from semi-structural data such as Wikipedia pages (Ling and Weld, 2012) . Consequently, a single supervised classifier (Ling and Weld, 2012; Yogatama et al., 2015) or a series of classifiers (Yosef et al., 2012) are trained on this autoannotated training set. This auto-annotating practice has been followed by later works on FNET (Yosef et al., 2012; Yogatama et al., 2015; Ren et al., 2016) . However, since the automated tagging process is not accurate all the time, a number of noisy labels are then propagated to supervised training and affect the performance negatively.", |
| "cite_spans": [ |
| { |
| "start": 531, |
| "end": 552, |
| "text": "(Ling and Weld, 2012)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 600, |
| "end": 621, |
| "text": "(Ling and Weld, 2012;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 622, |
| "end": 644, |
| "text": "Yogatama et al., 2015)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 672, |
| "end": 692, |
| "text": "(Yosef et al., 2012)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 807, |
| "end": 832, |
| "text": "FNET (Yosef et al., 2012;", |
| "ref_id": null |
| }, |
| { |
| "start": 833, |
| "end": 855, |
| "text": "Yogatama et al., 2015;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 856, |
| "end": 873, |
| "text": "Ren et al., 2016)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The starting point of this work is the embedding method, WSABIE (Weston et al., 2011) , adapted by (Yogatama et al., 2015) to FNET. WSABIE maps input features and labels to a joint space, where information is shared among correlated labels. However, the joint embedding method still suffers from label noises which have negative impacts on the learning of joint embeddings. In addition, since the labeled training set is the only source used for learning label embeddings, WSABIE cannot learn label embeddings for unseen types. DeViSE (Frome et al., 2013) is proposed for annotating image with words or phrases. As in such case, labels are natural words, e.g., fruit, that can be found in textual data, Skipgram word embeddings learned from a large text corpus are directly used for representing labels. In addition to label itself, prior works have also tried to learn label embeddings from side information such as attributes (Akata et al., 2013) , manually-written descriptions (Larochelle et al., 2008) , taxonomy of types (Weinberger and Chapelle, 2009; Akata et al., 2013; Akata et al., 2015) , and so on.", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 85, |
| "text": "(Weston et al., 2011)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 99, |
| "end": 122, |
| "text": "(Yogatama et al., 2015)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 535, |
| "end": 555, |
| "text": "(Frome et al., 2013)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 928, |
| "end": 948, |
| "text": "(Akata et al., 2013)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 981, |
| "end": 1006, |
| "text": "(Larochelle et al., 2008)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1027, |
| "end": 1058, |
| "text": "(Weinberger and Chapelle, 2009;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1059, |
| "end": 1078, |
| "text": "Akata et al., 2013;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1079, |
| "end": 1098, |
| "text": "Akata et al., 2015)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Another related line of research is prototype-driven learning. (Haghighi and Klein, 2006) presented a sequence labeling model using prototypes as features and has tested the model on NLP tasks such as part-of-speech (POS) tagging. Prototype-based features (Guo et al., 2014) are then adapted for coarsegrained named entity recognition task. Even though we select prototypes in the same way as (Guo et al., 2014) , we use prototypes in a very different manner: we consider prototypes as the basis for representing labels, whereas prototypes are mainly used as additional features in prior works (Haghighi and Klein, 2006; Guo et al., 2014) . In other words, prototypes are previously used on the input side, while we use them on the label side.", |
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 89, |
| "text": "(Haghighi and Klein, 2006)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 256, |
| "end": 274, |
| "text": "(Guo et al., 2014)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 393, |
| "end": 411, |
| "text": "(Guo et al., 2014)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 594, |
| "end": 620, |
| "text": "(Haghighi and Klein, 2006;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 621, |
| "end": 638, |
| "text": "Guo et al., 2014)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this section, we introduce the embedding method for FNET proposed by (Yogatama et al., 2015) and its extension to zero-shot entity typing.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 95, |
| "text": "(Yogatama et al., 2015)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Embedding Methods for FNET", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Each entity mention m is represented as a feature vector x \u2208 R V ; and each label y \u2208 Y is a one-hot vector, where Y is the set of true labels associated with x.\u0232 denotes the set of false labels of the given entity mention. The bi-linear scoring function for a given pair of x and y is defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Embedding Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "f (x, y, W ) = x W y,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Embedding Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where W \u2208 R M \u00d7N matrix with M the dimension of feature vector and N the number of types.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Embedding Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Instead of using a single compatibility matrix, WSABIE (Weston et al., 2011; Yogatama et al., 2015) considers an alternate low-rank decomposition of W , i.e., W = A B, in order to reduce the number of parameters. WSABIE rewrites the scoring function as", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 76, |
| "text": "(Weston et al., 2011;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 77, |
| "end": 99, |
| "text": "Yogatama et al., 2015)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Embedding Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "f (x, y, A, B) = \u03c6(x, A) \u2022 \u03b8(y, B) = x A By,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Embedding Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "which maps feature vector x and label vector y to a joint space. Note that it actually defines feature embeddings and label embeddings as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Embedding Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u03c6(x, A) : x \u2192 Ax, \u03b8(y, B) : y \u2192 By,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Embedding Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where A \u2208 R D\u00d7M and B \u2208 R D\u00d7N are matrices corresponding to lookup tables of feature embeddings and label embeddings, respectively. The embedding matrices A and B are the only parameters to be learned from supervised training process. In (Weston et al., 2011) , the learning is formulated as a learning-to-rank problem using weighted approximate-rank pairwise (WARP) loss,", |
| "cite_spans": [ |
| { |
| "start": 238, |
| "end": 259, |
| "text": "(Weston et al., 2011)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Embedding Model", |
| "sec_num": "3.1" |
| }, |
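The low-rank scoring function above can be sketched numerically. Below is a minimal NumPy illustration (not the authors' code); the dimensions D, M, N are arbitrary toy values:

```python
import numpy as np

def score(x, y_idx, A, B):
    """Bilinear compatibility f(x, y) = phi(x, A) . theta(y, B) = x^T A^T B y."""
    phi = A @ x          # feature embedding in the joint space, shape (D,)
    theta = B[:, y_idx]  # label embedding: the column of B selected by one-hot y
    return float(phi @ theta)

rng = np.random.default_rng(0)
D, M, N = 4, 10, 6                 # toy joint, feature, and label dimensions
A = rng.normal(size=(D, M))
B = rng.normal(size=(D, N))
x = rng.normal(size=M)

# The low-rank form recovers the full bilinear form with W = A^T B.
W = A.T @ B
assert np.allclose(score(x, 2, A, B), x @ W[:, 2])
```

The decomposition replaces the M-by-N matrix W with D(M + N) parameters, which is the point of choosing a small joint dimension D.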
| { |
| "text": "y\u2208Y y \u2208\u0232 L(rank(x, y)) max(1 \u2212 f (x, y, A, B) + f (x, y , A, B), 0), where the ranking function rank(x, y) = y \u2208\u0232 I(1 + f (x, y , A, B) > f (x, y, A, B)), and L(k) = k i=1 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Embedding Model", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "i which maps the ranking to a floating-point weight.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Embedding Model", |
| "sec_num": "3.1" |
| }, |
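The WARP loss above can be sketched as follows. This is a hedged toy implementation (not the authors' code); `scores` stands in for precomputed values of f(x, y, A, B):

```python
def warp_loss(scores, pos_labels, neg_labels):
    """Weighted approximate-rank pairwise (WARP) loss for one mention.

    scores: dict mapping label -> model score f(x, y, A, B)
    pos_labels: true labels Y; neg_labels: false labels Y-bar
    """
    def L(k):
        # Harmonic rank weight L(k) = sum_{i=1}^{k} 1/i; L(0) = 0.
        return sum(1.0 / i for i in range(1, k + 1))

    total = 0.0
    for y in pos_labels:
        # rank(x, y): number of negatives scoring within a margin of 1 of y.
        rank = sum(1 + scores[yn] > scores[y] for yn in neg_labels)
        total += L(rank) * sum(
            max(1 - scores[y] + scores[yn], 0.0) for yn in neg_labels
        )
    return total

# A well-separated positive incurs zero loss; a violated margin does not.
assert warp_loss({"a": 5.0, "b": 0.0}, ["a"], ["b"]) == 0.0
```

The rank weight L(k) penalizes a positive label more heavily the further it falls behind competing negatives, which is what makes WARP a ranking loss rather than a plain hinge loss.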
| { |
| "text": "A zero-shot extension of above WSABIE method can be done by introducing pre-trained label embeddings into the framework. The pre-trained label embeddings are learned from additional resources, e.g., text corpora, to encode semantic relation and dependency between labels. Similar to (Akata et al., 2013) , we use two different methods for incorporating pre-trained label embeddings. The first one is to fully trust pre-trained label embeddings. Namely, we fix B as the pre-trainedB and only learn A in an iterative process. The second method is to use pre-trained label embedding as prior knowledge while adjusting both A and B according to the labeled data, i.e., adding a regularizer to the WARP loss function,", |
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 303, |
| "text": "(Akata et al., 2013)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Zero-shot FNET Extension", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "y\u2208Y y \u2208\u0232 L(rank(x, y)) max(1 \u2212 f (x, y, A, B) + f (x, y , A, B), 0) + \u03bb||B \u2212B|| 2 F ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Zero-shot FNET Extension", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where || \u2022 || F is the Frobenius norm, and \u03bb is the trade-off parameter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Zero-shot FNET Extension", |
| "sec_num": "3.2" |
| }, |
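The two ways of using pre-trained label embeddings can be sketched as follows (a toy illustration under assumed names; `lam` is the trade-off parameter λ):

```python
import numpy as np

def regularized_loss(warp_value, B, B_pretrained, lam):
    """Second strategy: WARP loss plus lam * ||B - B_bar||_F^2,
    pulling the learned label embeddings toward the pre-trained ones."""
    return warp_value + lam * np.linalg.norm(B - B_pretrained, "fro") ** 2

# First strategy: fully trust the pre-trained embeddings, i.e. fix B = B_bar
# and update only A during training (no regularizer needed in that case).
B_bar = np.ones((3, 4))
assert regularized_loss(2.0, B_bar, B_bar, lam=0.5) == 2.0  # penalty vanishes at B = B_bar
```

With λ → ∞ the second strategy degenerates into the first; small λ lets the labeled data override the prior where they disagree.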
| { |
| "text": "Joint embedding methods such as WSABIE learn label embeddings from the whole training set including noisy labeled instances resulting from weak supervision. It is inevitable that the resulting label embeddings are affected by noisy labels and fail to accurately capture the semantic correlation between types. Another issue is that zero-shot frameworks such as DeViSE are not directly applicable to FNET as conceptually complex types, e.g, GPE (Geo-political Entity) cannot be simply mapped to a single natural word or phrase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Label Embedding", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To address this issue, we propose a simple yet effective solution which is referred to as prototypedriven label embedding (ProtoLE), and henceforth we useB P to denote the label embedding matrix learned by ProtoLE. The first step is to learn a set of prototypes for each type in the type set. ProtoLE does not fully rely on training data to generate label embeddings. Instead, it selects a subset of entity mentions as the prototypes of each type. These prototypes are less ambiguous and noisy compared to the rest of the full set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Label Embedding", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Even though it is already far less labor-intensive to manually select prototypes than annotating entity mentions one by one, we consider an alternative automated process using Normalized Point-wise Mutual Information (NPMI) as the particular criterion for prototype selection. The NPMI between a label and an entity mention is computed as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Label Embedding", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "NPMI(y, m) = PMI(y, m) \u2212 ln p(y, m) ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Label Embedding", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where NPMI (\u2022, \u2022) is the point-wise mutual information computed as follows:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 11, |
| "end": 17, |
| "text": "(\u2022, \u2022)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Prototype-driven Label Embedding", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "PMI(y, m) = log p(y, m) p(y)p(m) ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Label Embedding", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where p(y), p(m) and p(y, m) are the probability of entity mention m, label y and their joint probability. For each label, NPMI is computed for all the entity mentions and only a list of top k mentions are selected as prototypes. Note that NPMI is not applicable to unseen labels. In such case, it is necessary to combine manual selection and NPMI.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Label Embedding", |
| "sec_num": "4.1" |
| }, |
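Prototype selection by NPMI can be sketched from raw (label, mention) co-occurrence counts. This toy version (not the authors' code; type names reused from the paper, mention strings invented for illustration) scores every pair and keeps the top k per label:

```python
import math
from collections import Counter

def npmi_scores(pairs):
    """NPMI(y, m) = PMI(y, m) / (-ln p(y, m)) over (label, mention) pairs."""
    n = len(pairs)
    y_cnt = Counter(y for y, _ in pairs)
    m_cnt = Counter(m for _, m in pairs)
    ym_cnt = Counter(pairs)
    scores = {}
    for (y, m), c in ym_cnt.items():
        p_ym = c / n
        pmi = math.log(p_ym / ((y_cnt[y] / n) * (m_cnt[m] / n)))
        scores[(y, m)] = pmi / (-math.log(p_ym))  # assumes p(y, m) < 1
    return scores

def top_k_prototypes(pairs, label, k):
    """Rank a label's mentions by NPMI and keep the top k as prototypes."""
    s = npmi_scores(pairs)
    mentions = [m for (y, m) in s if y == label]
    return sorted(mentions, key=lambda m: s[(label, m)], reverse=True)[:k]

# "inc." co-occurs with both GPE and ORG, so the unambiguous "u.s." wins for GPE.
pairs = [("GPE", "u.s."), ("GPE", "u.s."), ("GPE", "inc."), ("ORG", "inc.")]
assert top_k_prototypes(pairs, "GPE", 1) == ["u.s."]
```

Normalizing PMI by −ln p(y, m) bounds the score in [−1, 1], so prototype lists of frequent and rare labels are comparable.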
| { |
| "text": "Word embeddings methods such as Skip-gram model are shown capable of learning distributional semantics of words from unlabeled text corpora. To further avoid affected by label noises, we use pre-trained word embeddings as the source to compute prototype-driven label embeddings. For each label y i , we compute its label embedding as the average of pre-trained word embeddings of the head words of prototypes, i.e.,B", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Label Embedding", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "P i = 1 k k j=1 v m ik ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Label Embedding", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where v m ik denotes the word embedding of kth word in the prototype list of label y i . In the case of using phrase embeddings, the full strings of multi-word prototypes could be used directly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Label Embedding", |
| "sec_num": "4.1" |
| }, |
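The averaging step can be sketched directly; `word_vecs` stands in for a pre-trained Skip-gram/CBOW lookup (a toy dictionary here, not a real model):

```python
import numpy as np

def proto_label_embedding(prototype_head_words, word_vecs):
    """ProtoLE vector for one label: mean of its prototypes' head-word embeddings."""
    vecs = [word_vecs[w] for w in prototype_head_words if w in word_vecs]
    return np.mean(vecs, axis=0)

# Toy 2-d "pre-trained" embeddings for illustration only.
word_vecs = {"speaker": np.array([1.0, 0.0]), "presenter": np.array([0.0, 1.0])}
emb = proto_label_embedding(["speaker", "presenter"], word_vecs)
assert np.allclose(emb, [0.5, 0.5])
```

Because the label vector is a mean over several prototypes, a single noisy prototype shifts it only by 1/k of its embedding, which is what makes ProtoLE robust to residual noise in the prototype lists.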
| { |
| "text": "Another side information that is available for generating label embeddings is the label hierarchy. We adapt the Hierarchical Label Embeddings (HLE) (Akata et al., 2013) to FNET task. Unlike (Akata et al., 2013) , which uses the WordNet hierarchy, FNET systems typically have direct access to predefined tree hierarchy of type set. We denote the label embedding matrix resulting from label hierarchy asB H . Each row inB H corresponds to a binary label embedding and has a dimension equal to the size of label set. For each label, the setsB H ij to 1 when y j is the parent of y i or i = j, and 0 to the remainder,", |
| "cite_spans": [ |
| { |
| "start": 148, |
| "end": 168, |
| "text": "(Akata et al., 2013)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 190, |
| "end": 210, |
| "text": "(Akata et al., 2013)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Label Embedding", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "B H ij = 1 if i = j or y j \u2208 P arent(y i ) 0 otherwise .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Label Embedding", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "HLE explicitly encodes the hierarchical dependency between labels by scoring a type y i given m using not only y i but also its parent type P arent(y i ). The underlying intuition is that recognition of a child type should be also based on the recognition of its parent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchical Label Embedding", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "One shortcoming of HLE is that it is too sparse. A natural solution is combining HLE with ProtoLE, which is denoted as Proto-HLE. SinceB H \u2208 R N \u00d7N andB P \u2208 R D\u00d7N , the combined embedding matrix B HP can be obtained by simply multiplyingB H byB P , i.e.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Hierarchical Label Embedding", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "B", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Hierarchical Label Embedding", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "HP =B PBH .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Hierarchical Label Embedding", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Note thatB HP has the same shape asB P , and it is actually representing the child label as a linear combination of the ProtoLE vectors of its parent and itself.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prototype-driven Hierarchical Label Embedding", |
| "sec_num": "4.3" |
| }, |
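The Proto-HLE combination can be sketched with a two-type toy hierarchy. The transpose of the binary hierarchy matrix below is an assumption about matrix orientation in the paper's notation, chosen so that each child column mixes its own ProtoLE vector with its parent's:

```python
import numpy as np

# Toy type hierarchy for illustration: index 0 = /PERSON, 1 = /PERSON/SPEAKER.
labels = ["/PERSON", "/PERSON/SPEAKER"]
parent = {"/PERSON": None, "/PERSON/SPEAKER": "/PERSON"}

N, D = len(labels), 4
B_H = np.zeros((N, N))                 # binary HLE matrix: B_H[i, j] = 1 iff i = j
for i, y in enumerate(labels):         # or y_j is the parent of y_i
    B_H[i, i] = 1.0
    if parent[y] is not None:
        B_H[i, labels.index(parent[y])] = 1.0

rng = np.random.default_rng(0)
B_P = rng.normal(size=(D, N))          # ProtoLE matrix, one column per label

# Proto-HLE: each child column becomes its own ProtoLE vector plus its parent's.
B_HP = B_P @ B_H.T

assert np.allclose(B_HP[:, 1], B_P[:, 1] + B_P[:, 0])  # child = self + parent
```

The root column is untouched, while every child column absorbs its parent's embedding, densifying the sparse binary HLE representation.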
| { |
| "text": "Having computed the scoring function for each label given a feature vector of the mention, we conduct type inference to refine the top k type candidates. In the setting of few-shots FNET, k is typically set to the maximum depth of type hierarchy, while different values for k may be used for a better prediction of unseen labels in zero-shot typing. For top k type candidates, we greedily remove the labels that conflict with others. However, unlike (Yogatama et al., 2015) , we use a relative threshold t to decide whether the selected type should remain in the final results, which is more consistent with the margin-infused objective function than a global threshold. Namely, a type candidate will be passed to type inference only if the difference of score from the 1-best is less than a threshold.", |
| "cite_spans": [ |
| { |
| "start": 450, |
| "end": 473, |
| "text": "(Yogatama et al., 2015)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Type Inference", |
| "sec_num": "4.4" |
| }, |
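The greedy inference step with the relative threshold can be sketched as follows (a toy implementation, not the authors' code; the conflict check here simply keeps candidates on a single ancestor path):

```python
def ancestors(y, parent):
    """Set of ancestors of y under a child -> parent map."""
    out = set()
    while parent.get(y) is not None:
        y = parent[y]
        out.add(y)
    return out

def infer_types(scores, k, t, parent):
    """Keep top-k candidates whose score is within relative threshold t of the
    1-best, greedily dropping candidates that conflict with already-kept types."""
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    best = scores[ranked[0]]
    kept = []
    for y in ranked:
        if best - scores[y] > t:       # relative threshold, not a global one
            continue
        if all(y in ancestors(z, parent) or z in ancestors(y, parent) for z in kept):
            kept.append(y)
    return kept

parent = {"/PERSON": None, "/PERSON/SPEAKER": "/PERSON", "/ORG": None}
scores = {"/PERSON": 2.0, "/PERSON/SPEAKER": 1.8, "/ORG": 0.5}
assert infer_types(scores, k=3, t=1.0, parent=parent) == ["/PERSON", "/PERSON/SPEAKER"]
```

Here /ORG is dropped both because it conflicts with /PERSON and because it falls more than t below the 1-best score; tying the cutoff to the 1-best score adapts it to each mention's score scale.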
| { |
| "text": "Our method uses feature templates similar to what have been used by state-of-the-art FNET methods (Ling and Weld, 2012; Gillick et al., 2014; Yogatama et al., 2015; Xiang Ren, 2015) . Table 1 illustrates the full set of feature templates used in this work. We evaluate the performance of our methods on three benchmark datasets that have been used for the FNET task: BBN dataset (Weischedel and Brunstein, 2005) , OntoNotes dataset (Weischedel et al., 2011) and Wikipedia dataset (Ling and Weld, 2012) . (Xiang Ren, 2015) has pre-processed the training sets of BBN and OntoNotes using DBpedia Spotlight 1 . Entity mentions in the training set are automatically linked to a named entity in Freebase and assigned with the Freebase types of induced named entity. As shown in Table 2 , BBN dataset contains 2.3K news articles of Wall Street Journal, which includes 109K entity mentions belonging to 47 types. OntoNotes contains 13.1K news articles and 223.3K entity mentions belonging to 89 entity types. The size of Wikipedia dataset is much larger than the other two with 2.69M entity mentions of 113 types extracted from 780.5K Wikipedia articles. Each data set has a test set that is manually annotated for purpose of evaluation. To tune parameters such as the type inference threshold t and trade-off parameter \u03bb, we randomly sample 10% instances from each testing set as the development sets and use the rest as evaluation sets.", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 119, |
| "text": "(Ling and Weld, 2012;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 120, |
| "end": 141, |
| "text": "Gillick et al., 2014;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 142, |
| "end": 164, |
| "text": "Yogatama et al., 2015;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 165, |
| "end": 181, |
| "text": "Xiang Ren, 2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 379, |
| "end": 411, |
| "text": "(Weischedel and Brunstein, 2005)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 432, |
| "end": 457, |
| "text": "(Weischedel et al., 2011)", |
| "ref_id": null |
| }, |
| { |
| "start": 480, |
| "end": 501, |
| "text": "(Ling and Weld, 2012)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 184, |
| "end": 191, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 772, |
| "end": 779, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Unigram words in the mentions \"White\", \"House\" Head Head word of the mention \"House\" Cluster Brown Cluster IDs of the head word \"4 1111\", .. ,\"8 11111101\" POS Tag POS tag of the mention \"NNP\" Character Lower-cased character trigrams in the head word \"hou\",\"ous\",\"use\" Word Shape The word shape of words in the mention \"Aa\",\"Aa\" Context Unigram/bigram words in context of the mention \"Bennett\",\"the\", \"Bennett the\" Dependency Dependency relations involving the head word \"gov nn director\" ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Description Example Tokens", |
| "sec_num": null |
| }, |
| { |
| "text": "Our ProtoLE embeddings use Continuous-Bag-of-Words (CBOW) word embedding model trained on Wikipedia dump using a window of 2 words to both directions. We use 300 dimensions for all embedding methods except HLE. Table 3 illustrates examples of prototypes learned for types in BBN dataset. It can be observed that most of the top ranked mentions are correctly linked to types, even though there are still some noises, e.g., north american for /LOCATION/CONTINENT. It also shows that prototypes of related types such as /LOCATION and /GPE are also semantically related. Figure1 visualizes the prototype-driven label embeddings for BBN dataset using -Distributed Stochastic Neighbor Embedding (t-SNE) (Maaten and Hinton, 2008) . It can be easily observed that semantic related types are close to each other in the new space, which proves that prototype-driven label embeddings can capture the semantic correlation between labels. Figure 2 shows the Micro-F1 score of FNET with regard to the number of PMI prototypes used by ProtoLE. It shows that the Micro-F1 score does not change significantly on BBN and Wikipedia dataset, whereas using fewer prototypes per type (\u2264 40) results in a drop of Micro-F1. Since the definitions of several types, especially the coarse-grained types, are actually very general, it may introduce bias into the label embeddings if using too few prototypes. We use K = 60 for all our experiments for that it achieves decent performance on all three datasets. Our pre-trained label embeddings and manuallyselected prototypes (zero-shot typing) are available for download 2 .", |
| "cite_spans": [ |
| { |
| "start": 697, |
| "end": 722, |
| "text": "(Maaten and Hinton, 2008)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 211, |
| "end": 218, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 926, |
| "end": 934, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generating ProtoLE", |
| "sec_num": "5.2" |
| }, |
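The prototype-averaging step described above can be sketched as follows. This is a minimal illustration: random vectors stand in for the 300-dimensional CBOW vectors, and the vocabulary and prototype lists are toy examples, not the learned PMI prototypes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for 300-d CBOW word vectors (the paper trains word2vec CBOW
# on a Wikipedia dump with a window of 2; these are random illustrative vectors).
DIM = 300
vocab = ["california", "texas", "ohio", "hudson", "mississippi", "thompson"]
word_vec = {w: rng.normal(size=DIM) for w in vocab}

def proto_label_embedding(prototypes, word_vec):
    """ProtoLE sketch: average the vectors of a type's prototype mentions."""
    vecs = [word_vec[p] for p in prototypes if p in word_vec]
    return np.mean(vecs, axis=0)

state_province = proto_label_embedding(["california", "texas", "ohio"], word_vec)
river = proto_label_embedding(["hudson", "mississippi", "thompson"], word_vec)
```

Averaging keeps label embeddings in the same space as word embeddings, which is what allows a new type to be placed in that space from a handful of prototype mentions.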
| { |
| "text": "Thirdly, our ProtoLE and its combination with HLE consistently outperform both non-embedding and embedding baselines. Using the prototype information and non-adaptive framework results in absolute 3%-5% improvement with both loose and strict evaluation metrics. Non-adaptive HLE performs poorer than other embedding methods, which is most likely due to its sparsity in representing labels. However, Proto-HLE performs very close to ProtoLE on BBN and Wiki, while it improves all three measures by another absolute \u22482.5% on OntoNotes .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generating ProtoLE", |
| "sec_num": "5.2" |
| }, |
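One plausible way to combine prototypical and hierarchical information into Proto-HLE is to concatenate the dense prototype-driven vector with a sparse indicator vector over a type and its ancestors. This is an assumption for illustration (the type names and the concatenation scheme are ours), not necessarily the paper's exact construction.

```python
import numpy as np

# Hypothetical two-level type set for illustration.
TYPES = ["/LOCATION", "/LOCATION/RIVER", "/GPE", "/GPE/STATE_PROVINCE"]

def hle(t, types=TYPES):
    """Hierarchical label embedding sketch: 1 for the type and its ancestors."""
    return np.array([1.0 if t == u or t.startswith(u + "/") else 0.0
                     for u in types])

def proto_hle(proto_vec, t, types=TYPES):
    """Proto-HLE sketch: concatenate a ProtoLE vector with the HLE indicator."""
    return np.concatenate([proto_vec, hle(t, types)])

v = proto_hle(np.zeros(300), "/LOCATION/RIVER")
```

The sparse indicator part explains the behavior discussed above: it ties a fine-grained type to its ancestors, but on its own it carries no similarity signal between sibling types.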
| { |
| "text": "Adapt Table 4 : Performance of FNET in a few-shots learning on 3 benchmark datasets", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 6, |
| "end": 13, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "BBN OntoNotes Wiki Ma-F1 Mi-F1 Acc. Ma-F1 Mi-F1 Acc. Ma-F1 Mi-F1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section, we evaluate our method's capability recognizing mentions of unseen fine-grained types. We assume that the training set contains only coarse-grained types (i.e., Level-1), and Level-2 types are unseen types to be removed from the training set. Table 5 shows the Micro-Precision for Level-1 and Level-2 types using top k type candidates for type inference. NPMI is computed for Level-1 types. We manually build prototype lists for unseen types by choosing from a randomly sampled list of entity mentions. Level-3 types are ignored for OntoNotes as Level-3 types never show in top-10 list produced by all methods. As the prediction for coarse-grained types are the same with regard to k, we only list the results using k = 3. One interesting finding on all three datasets is that combining hierarchical and prototypical information results in better classification of coarse-grained types. It suggests that embeddings of unseen fine-grained types contains information complementary to the embeddings of coarse-grained types. Since HLE actually produces random prediction on Level-2 types due to its sparse representation, HLE perform poorly on Level-2 types.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 260, |
| "end": 267, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Zero-shot Fine-grained Entity Typing", |
| "sec_num": "5.4" |
| }, |
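Zero-shot inference as evaluated above can be sketched as a nearest-label search: every type embedding, including those of unseen Level-2 types built only from manually chosen prototypes, is scored against the mention representation, and the top-k candidates are returned. The vectors below are random stand-ins, not trained embeddings.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 300

# Hypothetical label embeddings; in the zero-shot setting the Level-2 types
# were never seen in training, but they still have embeddings to score.
label_emb = {t: rng.normal(size=DIM) for t in
             ["/LOCATION", "/LOCATION/RIVER", "/GPE", "/GPE/STATE_PROVINCE"]}

def top_k_types(mention_vec, label_emb, k=3):
    """Rank all types (seen or unseen) by cosine similarity to the mention."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(label_emb, key=lambda t: cos(mention_vec, label_emb[t]),
                    reverse=True)
    return ranked[:k]

# A mention representation close to /LOCATION/RIVER should rank it first.
mention = label_emb["/LOCATION/RIVER"] + 0.1 * rng.normal(size=DIM)
candidates = top_k_types(mention, label_emb, k=3)
```

Because unseen types need only a label embedding, not training examples, the same scoring function covers both the few-shot and zero-shot settings.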
| { |
| "text": "Micro-Precision @k Micro-Precision @k Table 5 : Performance of zero-shot entity typing ProtoLE outperforms HLE by 100%-300% in terms of Micro-Precision. However, again the combination of prototypes and hierarchy achieves similar or better results than ProtoLE on BBN and Wikipedia dataset. The drop of precision of Proto-HLE on OntoNotes is likely due to a different nature of annotation. It is more prevalent in test set of OntoNotes that one entity mention is annotated with multiple Level-1 types, and the presence of fine-grained types are less constrained by the label hierarchy. In such case, hierarchical constrains enforced by Proto-HLE might have negative impacts on type inference.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 38, |
| "end": 45, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Set Method", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we presented a prototype-driven label embedding method for fine-grained named entity typing (FNET). It shows that our method outperforms state-of-the-art embedding-based FNET methods in both few-shots and zero-shots settings. It also shows that combining prototype-driven label embeddings and type hierarchy can improve the prediction on coarse-grained types. In the near future, we plan to integrate our method with other types of side information such as definition sentences as well as label noise reduction framework (Ren et al., 2016) to further boost the robustness of FNET.", |
| "cite_spans": [ |
| { |
| "start": 536, |
| "end": 554, |
| "text": "(Ren et al., 2016)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "http://github.com/fnet-coling/ner-zero/tree/master/label_embedding", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was conducted within the Rolls-Royce@NTU Corp Lab with support from the National Research Foundation Singapore under the Corp Lab@University Scheme.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| }, |
| { |
| "text": "areas connaught earth lane brooklyn /LOCATION/CONTINENT north america europe africa north american asia /LOCATION/LAKE SEA OCEAN big bear lake erie champ lake geneva fujisawa /LOCATION/RIVER hudson thompson mississippi river james river tana /GPE soviet edisto canada china france /GPE/STATE PROVINCE california texas ohio arizona jersey In this section, we compare performances of FNET methods in the setting of few-shots FNET where the training set covers all types. Methods compared in this section are trained using the entire type set. We use evaluation metrics for our experiments: macro-F1, micro-F1 and accuracy. As in section 3.2, we train our label embeddings in two different ways: 1) non-adaptive training where label embeddings are fixed during training; and 2) adaptive training where label embeddings are also updated. Table 4 shows the comparison with state-of-the-art FNET methods: FIGER (Ling and Weld, 2012) , HYENA(Yosef et al., 2012) and WSABIE (Yogatama et al., 2015) . We make several findings from the results. Firstly, embedding methods with WARP loss function consistently outperform non-embedding methods (i.e., FIGER and HYENA) on all three datasets. The performance gaps are huge for BBN and OntoNotes, where the best embedding method achieves 10%-20% absolute improvement over the best non-embedding method (FIGER). However, the gap is much smaller on Wikipedia dataset whose size is significantly larger than the other two.Secondly, non-adaptive embedding methods always outperform their adaptive versions except HLE on Wikipedia dataset. Performance of adaptive label embeddings are all close to WSABIE, which suggests that adaptive label embeddings might suffer from same label noise problem as WSABIE does.", |
| "cite_spans": [ |
| { |
| "start": 905, |
| "end": 926, |
| "text": "(Ling and Weld, 2012)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 929, |
| "end": 954, |
| "text": "HYENA(Yosef et al., 2012)", |
| "ref_id": null |
| }, |
| { |
| "start": 966, |
| "end": 989, |
| "text": "(Yogatama et al., 2015)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 834, |
| "end": 841, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Prototypes /LOCATION", |
| "sec_num": null |
| } |
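The WARP loss referred to above comes from WSABIE (Weston et al., 2011). The following is a simplified sketch of its ranking idea, with illustrative scores rather than the actual model: sample negative labels until one violates the margin, then scale the hinge loss by a rank-dependent weight (the rank being estimated from the number of sampling trials).

```python
import numpy as np

rng = np.random.default_rng(2)

def warp_loss(score_pos, neg_scores, margin=1.0):
    """WARP sketch: sample negatives until the margin is violated, then
    weight the hinge loss by L(rank) = sum_{k=1..rank} 1/k, where rank is
    estimated as n_labels // n_trials (after Weston et al., 2011)."""
    n = len(neg_scores)
    for trials, j in enumerate(rng.permutation(n), start=1):
        if margin + neg_scores[j] > score_pos:      # margin violated
            rank = n // trials                       # estimated rank
            weight = sum(1.0 / k for k in range(1, rank + 1))
            return weight * (margin + neg_scores[j] - score_pos)
    return 0.0                                       # no violating negative
```

With a confidently scored positive, e.g. `warp_loss(5.0, [0.0, 0.1])`, the loss is zero; a violated margin yields a rank-weighted hinge penalty, which is what pushes correct types toward the top of the ranking.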
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Label-embedding for attributebased classification", |
| "authors": [ |
| { |
| "first": "Zeynep", |
| "middle": [], |
| "last": "Akata", |
| "suffix": "" |
| }, |
| { |
| "first": "Florent", |
| "middle": [], |
| "last": "Perronnin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zad", |
| "middle": [], |
| "last": "Harchaoui", |
| "suffix": "" |
| }, |
| { |
| "first": "Cordelia", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "819--826", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeynep Akata, Florent Perronnin, Zad Harchaoui, and Cordelia Schmid. 2013. Label-embedding for attribute- based classification. In Proceedings of CVPR, pages 819-826.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Evaluation of output embeddings for fine-grained image classification", |
| "authors": [ |
| { |
| "first": "Zeynep", |
| "middle": [], |
| "last": "Akata", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Reed", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Walter", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "2927--2936", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, and Bernt Schiele. 2015. Evaluation of output embed- dings for fine-grained image classification. In Proceedings of CVPR, pages 2927-2936.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Muc-7 named entity task definition", |
| "authors": [ |
| { |
| "first": "Nancy", |
| "middle": [], |
| "last": "Chinchor", |
| "suffix": "" |
| }, |
| { |
| "first": "Patricia", |
| "middle": [], |
| "last": "Robinson", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 7th MUC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nancy Chinchor and Patricia Robinson. 1997. Muc-7 named entity task definition. In Proceedings of the 7th MUC, page 29.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The automatic content extraction (ace) program-tasks, data, and evaluation", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "George R Doddington", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mark", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Przybocki", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Lance", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephanie", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [ |
| "M" |
| ], |
| "last": "Strassel", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program-tasks, data, and evaluation. In LREC, page 1.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Knowledge vault: A web-scale approach to probabilistic knowledge fusion", |
| "authors": [ |
| { |
| "first": "Xin", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Geremy", |
| "middle": [], |
| "last": "Heitz", |
| "suffix": "" |
| }, |
| { |
| "first": "Wilko", |
| "middle": [], |
| "last": "Horn", |
| "suffix": "" |
| }, |
| { |
| "first": "Ni", |
| "middle": [], |
| "last": "Lao", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Murphy", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Strohmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaohua", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of SIGKDD", |
| "volume": "", |
| "issue": "", |
| "pages": "601--610", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of SIGKDD, pages 601-610.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Open question answering over curated and extracted knowledge bases", |
| "authors": [ |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of SIGKDD", |
| "volume": "", |
| "issue": "", |
| "pages": "1156--1165", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of SIGKDD, pages 1156-1165.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Devise: A deep visual-semantic embedding model", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Frome", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jon", |
| "middle": [], |
| "last": "Shlens", |
| "suffix": "" |
| }, |
| { |
| "first": "Samy", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "2121--2129", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. 2013. Devise: A deep visual-semantic embedding model. In NIPS, pages 2121-2129.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Context-dependent finegrained entity type tagging", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Gillick", |
| "suffix": "" |
| }, |
| { |
| "first": "Nevena", |
| "middle": [], |
| "last": "Lazic", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Jesse", |
| "middle": [], |
| "last": "Kirchner", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Huynh", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.1820" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Context-dependent fine- grained entity type tagging. arXiv preprint arXiv:1412.1820.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Revisiting embedding features for simple semisupervised learning", |
| "authors": [ |
| { |
| "first": "Jiang", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Wanxiang", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "Haifeng", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "110--120", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Revisiting embedding features for simple semi- supervised learning. In Proc. of EMNLP, pages 110-120.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Prototype-driven learning for sequence models", |
| "authors": [ |
| { |
| "first": "Aria", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "320--327", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of HLT- NAACL, pages 320-327. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Zero-data learning of new tasks", |
| "authors": [ |
| { |
| "first": "Hugo", |
| "middle": [], |
| "last": "Larochelle", |
| "suffix": "" |
| }, |
| { |
| "first": "Dumitru", |
| "middle": [], |
| "last": "Erhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. 2008. Zero-data learning of new tasks. In AAAI, page 3.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "No noun phrase left behind: detecting and typing unlinkable entities", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "893--903", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Lin, Oren Etzioni, et al. 2012. No noun phrase left behind: detecting and typing unlinkable entities. In Proceedings of EMNLP, pages 893-903.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Fine-grained entity recognition", |
| "authors": [ |
| { |
| "first": "Xiao", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proc. of the 26th AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In In Proc. of the 26th AAAI Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Visualizing data using t-sne", |
| "authors": [ |
| { |
| "first": "Laurens", |
| "middle": [], |
| "last": "Van Der Maaten", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "9", |
| "issue": "", |
| "pages": "2579--2605", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579-2605.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "NIPS'13", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representa- tions of words and phrases and their compositionality. In NIPS'13, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Label noise reduction in entity typing by heterogeneous partial-label embedding", |
| "authors": [ |
| { |
| "first": "Xiang", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Wenqi", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Meng", |
| "middle": [], |
| "last": "Qu", |
| "suffix": "" |
| }, |
| { |
| "first": "Clare", |
| "middle": [ |
| "R" |
| ], |
| "last": "Voss", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiawei", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1602.05307" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiang Ren, Wenqi He, Meng Qu, Clare R Voss, Heng Ji, and Jiawei Han. 2016. Label noise reduction in entity typing by heterogeneous partial-label embedding. arXiv preprint arXiv:1602.05307.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition", |
| "authors": [ |
| { |
| "first": "Erik F Tjong Kim", |
| "middle": [], |
| "last": "Sang", |
| "suffix": "" |
| }, |
| { |
| "first": "Fien", |
| "middle": [], |
| "last": "De Meulder", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003", |
| "volume": "", |
| "issue": "", |
| "pages": "142--147", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language- independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003, pages 142-147.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Large margin taxonomy embedding for document categorization", |
| "authors": [ |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Kilian", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Weinberger", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Chapelle", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1737--1744", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kilian Q Weinberger and Olivier Chapelle. 2009. Large margin taxonomy embedding for document categoriza- tion. In Advances in Neural Information Processing Systems, pages 1737-1744.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "BBN pronoun coreference and entity type corpus", |
| "authors": [ |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Ada", |
| "middle": [], |
| "last": "Brunstein", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Linguistic Data Consortium", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ralph Weischedel and Ada Brunstein. 2005. BBN pronoun coreference and entity type corpus. Technical report, Linguistic Data Consortium, Philadelphia.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Wsabie: Scaling up to large vocabulary image annotation", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Samy", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Usunier", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of IJCAI'11", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary image annota- tion. In Proceedings of IJCAI'11.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Clustype: Effective entity recognition and typing by relation phrase-based clustering", |
| "authors": [ |
| { |
| "first": "Xiang", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Ahmed", |
| "middle": [], |
| "last": "El-Kishky", |
| "suffix": "" |
| }, |
| { |
| "first": "Chi", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of SIGKDD'15", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chi Wang Xiang Ren, Ahmed El-Kishky. 2015. Clustype: Effective entity recognition and typing by relation phrase-based clustering. In Proceedings of SIGKDD'15.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Embedding methods for fine grained entity type classification", |
| "authors": [ |
| { |
| "first": "Dani", |
| "middle": [], |
| "last": "Yogatama", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gillick", |
| "suffix": "" |
| }, |
| { |
| "first": "Nevena", |
| "middle": [], |
| "last": "Lazic", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ACL'15", |
| "volume": "", |
| "issue": "", |
| "pages": "291--296", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dani Yogatama, Daniel Gillick, and Nevena Lazic. 2015. Embedding methods for fine grained entity type classi- fication. In Proceedings of ACL'15, pages 291-296.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "HYENA: Hierarchical type classification for entity names", |
| "authors": [ |
| { |
| "first": "Mohamed", |
| "middle": [], |
| "last": "Amir Yosef", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandro", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Hoffart", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Spaniol", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Weikum", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of COLING 2012: Posters", |
| "volume": "", |
| "issue": "", |
| "pages": "1361--1370", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohamed Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. HYENA: Hierarchical type classification for entity names. In Proceedings of COLING 2012: Posters, pages 1361-1370.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "text": ": t-SNE visualization of the prototype-driven label embeddings for BBN dataset Following prior works(Ling and Weld, 2012), we evaluate our methods and baseline systems using both loose and strict metrics, i.e., Macro-F1, Micro-F1, and strict Accuracy (Acc.). Given the evaluation set D, we denote Y m as the ground truth types for entity mention m \u2208 D and Y m as the predicted labels. Strict accuracy (Acc) can be computed as:Acc=1 D m\u2208D \u03c3(Y m = Y m ), where \u03c3(\u2022) is an indicator function. Macro-F1 is based on Macro-Precision (Ma-P) and Micro-Recall (Ma-R), where Ma--F1 is based on Micro-Precision (Mi-P) and Micro-Recall (Mi-R), where Mi-P= m\u2208D |Ym\u2229 Ym| m\u2208D Ym , and Mi-R= m\u2208D |Ym\u2229 Ym| m\u2208D Ym .", |
| "num": null |
| }, |
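The metrics defined above can be written down directly from their formulas; the gold and predicted type sets below are invented examples.

```python
def strict_acc(gold, pred):
    """Acc = (1/|D|) * sum_m [predicted type set equals gold type set]."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def micro_f1(gold, pred):
    """Mi-P and Mi-R pool the intersections over all mentions."""
    inter = sum(len(g & p) for g, p in zip(gold, pred))
    mi_p = inter / sum(len(p) for p in pred)
    mi_r = inter / sum(len(g) for g in gold)
    return 2 * mi_p * mi_r / (mi_p + mi_r)

def macro_f1(gold, pred):
    """Ma-P and Ma-R average per-mention precision and recall."""
    ma_p = sum(len(g & p) / len(p) for g, p in zip(gold, pred)) / len(gold)
    ma_r = sum(len(g & p) / len(g) for g, p in zip(gold, pred)) / len(gold)
    return 2 * ma_p * ma_r / (ma_p + ma_r)

# Invented example: one partially correct mention, one exactly correct.
gold = [{"/GPE", "/GPE/STATE_PROVINCE"}, {"/LOCATION"}]
pred = [{"/GPE"}, {"/LOCATION"}]
```

On this example, strict accuracy is 0.5 while Micro-F1 is 0.8 and Macro-F1 is 6/7, illustrating how the loose metrics credit partial matches that strict accuracy does not.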
| "TABREF0": { |
| "html": null, |
| "content": "<table><tr><td>Dataset</td><td colspan=\"4\">Types Documents Sentences Mentions</td></tr><tr><td>BBN</td><td>train 47 test</td><td>2.3K 459</td><td>48.8K 6.4K</td><td>109K 13.8K</td></tr><tr><td>OntoNotes</td><td>train 89 test</td><td>13.1K 76</td><td>147.7K 1.3K</td><td>223.3K 9.6K</td></tr><tr><td>Wikipedia</td><td>train 113 test</td><td>780.5K -</td><td>1.15M 434</td><td>2.69M 563</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "text": "Features extracted for context \"William Bennet, the [White House] drug-policy director....\"" |
| }, |
| "TABREF1": { |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null, |
| "text": "Statistics of datasets" |
| } |
| } |
| } |
| } |