{
"paper_id": "P18-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:41:47.881706Z"
},
"title": "Hierarchical Losses and New Resources for Fine-grained Entity Typing and Linking",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Murty",
"suffix": "",
"affiliation": {},
"email": "smurty@cs.umass.edu"
},
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Irena",
"middle": [],
"last": "Radovanovic",
"suffix": "",
"affiliation": {},
"email": "iradovanovic@chanzuckerberg.com"
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": "",
"affiliation": {},
"email": "mccallum@cs.umass.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies. Previous attempts to incorporate hierarchical structure have yielded little benefit and are restricted to shallow ontologies. This paper presents new methods using real and complex bilinear mappings for integrating hierarchical information, yielding substantial improvement over flat predictions in entity linking and fine-grained entity typing, and achieving new state-of-the-art results for end-to-end models on the benchmark FIGER dataset. We also present two new human-annotated datasets containing wide and deep hierarchies which we will release to the community to encourage further research in this direction: MedMentions, a collection of PubMed abstracts in which 246k mentions have been mapped to the massive UMLS ontology; and Type-Net, which aligns Freebase types with the WordNet hierarchy to obtain nearly 2k entity types. In experiments on all three datasets we show substantial gains from hierarchy-aware training.",
"pdf_parse": {
"paper_id": "P18-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies. Previous attempts to incorporate hierarchical structure have yielded little benefit and are restricted to shallow ontologies. This paper presents new methods using real and complex bilinear mappings for integrating hierarchical information, yielding substantial improvement over flat predictions in entity linking and fine-grained entity typing, and achieving new state-of-the-art results for end-to-end models on the benchmark FIGER dataset. We also present two new human-annotated datasets containing wide and deep hierarchies which we will release to the community to encourage further research in this direction: MedMentions, a collection of PubMed abstracts in which 246k mentions have been mapped to the massive UMLS ontology; and Type-Net, which aligns Freebase types with the WordNet hierarchy to obtain nearly 2k entity types. In experiments on all three datasets we show substantial gains from hierarchy-aware training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Identifying and understanding entities is a central component in knowledge base construction (Roth et al., 2015) and essential for enhancing downstream tasks such as relation extraction *equal contribution Data and code for experiments: https://github. com/MurtyShikhar/Hierarchical-Typing (Yaghoobzadeh et al., 2017b) , question answering (Das et al., 2017; Welbl et al., 2017) and search (Dalton et al., 2014) . This has led to considerable research in automatically identifying entities in text, predicting their types, and linking them to existing structured knowledge sources.",
"cite_spans": [
{
"start": 93,
"end": 112,
"text": "(Roth et al., 2015)",
"ref_id": "BIBREF34"
},
{
"start": 290,
"end": 318,
"text": "(Yaghoobzadeh et al., 2017b)",
"ref_id": "BIBREF53"
},
{
"start": 340,
"end": 358,
"text": "(Das et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 359,
"end": 378,
"text": "Welbl et al., 2017)",
"ref_id": "BIBREF49"
},
{
"start": 390,
"end": 411,
"text": "(Dalton et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Current state-of-the-art models encode a textual mention with a neural network and classify the mention as being an instance of a fine grained type or entity in a knowledge base. Although in many cases the types and their entities are arranged in a hierarchical ontology, most approaches ignore this structure, and previous attempts to incorporate hierarchical information yielded little improvement in performance (Shimaoka et al., 2017) . Additionally, existing benchmark entity typing datasets only consider small label sets arranged in very shallow hierarchies. For example, FIGER (Ling and Weld, 2012) , the de facto standard fine grained entity type dataset, contains only 113 types in a hierarchy only two levels deep.",
"cite_spans": [
{
"start": 415,
"end": 438,
"text": "(Shimaoka et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 585,
"end": 606,
"text": "(Ling and Weld, 2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we investigate models that explicitly integrate hierarchical information into the embedding space of entities and types, using a hierarchy-aware loss on top of a deep neural network classifier over textual mentions. By using this additional information, we learn a richer, more robust representation, gaining statistical efficiency when predicting similar concepts and aiding the classification of rarer types. We first validate our methods on the narrow, shallow type system of FIGER, out-performing state-of-the-art methods not incorporating hand-crafted features and matching those that do.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate on richer datasets and stimulate further research into hierarchical entity/typing prediction with larger and deeper ontologies, we introduce two new human annotated datasets. The first is MedMentions, a collection of PubMed ab-stracts in which 246k concept mentions have been annotated with links to the Unified Medical Language System (UMLS) ontology (Bodenreider, 2004) , an order of magnitude more annotations than comparable datasets. UMLS contains over 3.5 million concepts in a hierarchy having average depth 14.4. Interestingly, UMLS does not distinguish between types and entities (an approach we heartily endorse), and the technical details of linking to such a massive ontology lead us to refer to our MedMentions experiments as entity linking. Second, we present TypeNet, a curated mapping from the Freebase type system into the WordNet hierarchy. TypeNet contains over 1900 types with an average depth of 7.8.",
"cite_spans": [
{
"start": 364,
"end": 383,
"text": "(Bodenreider, 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In experimental results, we show improvements with a hierarchically-aware training loss on each of the three datasets. In entity-linking MedMentions to UMLS, we observe a 6% relative increase in accuracy over the base model. In experiments on entity-typing from Wikipedia into TypeNet, we show that incorporating the hierarchy of types and including a hierarchical loss provides a dramatic 29% relative increase in MAP. Our models even provide benefits for shallow hierarchies allowing us to match the state-of-art results of Shimaoka et al. (2017) on the FIGER (GOLD) dataset without requiring hand-crafted features.",
"cite_spans": [
{
"start": 526,
"end": 548,
"text": "Shimaoka et al. (2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will publicly release the TypeNet and Med-Mentions datasets to the community to encourage further research in truly fine-grained, hierarchical entity-typing and linking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Over the years researchers have constructed many large knowledge bases in the biomedical domain (Apweiler et al., 2004; Davis et al., 2008; Chatraryamontri et al., 2017) . Many of these knowledge bases are specific to a particular sub-domain encompassing a few particular types such as genes and diseases (Pi\u00f1ero et al., 2017) .",
"cite_spans": [
{
"start": 96,
"end": 119,
"text": "(Apweiler et al., 2004;",
"ref_id": "BIBREF0"
},
{
"start": 120,
"end": 139,
"text": "Davis et al., 2008;",
"ref_id": "BIBREF9"
},
{
"start": 140,
"end": 169,
"text": "Chatraryamontri et al., 2017)",
"ref_id": null
},
{
"start": 305,
"end": 326,
"text": "(Pi\u00f1ero et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MedMentions",
"sec_num": "2.1"
},
{
"text": "UMLS (Bodenreider, 2004) is particularly comprehensive, containing over 3.5 million concepts (UMLS does not distinguish between entities and types) defining their relationships and a curated hierarchical ontology. For example LETM1 Protein IS-A Calcium Binding Protein IS-A Binding Protein IS-A Protein IS-A Genome Encoded Entity. This fact makes UMLS particularly well suited for methods explicitly exploiting hierarchical struc-ture.",
"cite_spans": [
{
"start": 5,
"end": 24,
"text": "(Bodenreider, 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MedMentions",
"sec_num": "2.1"
},
{
"text": "Accurately linking textual biological entity mentions to an existing knowledge base is extremely important but few richly annotated resources are available. Even when resources do exist, they often contain no more than a few thousand annotated entity mentions which is insufficient for training state-of-the-art neural network entity linkers. State-of-the-art methods must instead rely on string matching between entity mentions and canonical entity names (Leaman et al., 2013; . To address this, we constructed MedMentions, a new, large dataset identifying and linking entity mentions in PubMed abstracts to specific UMLS concepts. Professional annotators exhaustively annotated UMLS entity mentions from 3704 PubMed abstracts, resulting in 246,000 linked mention spans. The average depth in the hierarchy of a concept from our annotated set is 14.4 and the maximum depth is 43.",
"cite_spans": [
{
"start": 456,
"end": 477,
"text": "(Leaman et al., 2013;",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MedMentions",
"sec_num": "2.1"
},
{
"text": "MedMentions contains an order of magnitude more annotations than similar biological entity linking PubMed datasets (Dogan et al., 2014; Li et al., 2016) . Additionally, these datasets contain annotations for only one or two entity types (genes or chemicals and disease etc.). MedMentions instead contains annotations for a wide diversity of entities linking to UMLS. Statistics for several other datasets are in Table 1 ",
"cite_spans": [
{
"start": 115,
"end": 135,
"text": "(Dogan et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 136,
"end": 152,
"text": "Li et al., 2016)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "MedMentions",
"sec_num": "2.1"
},
{
"text": "TypeNet is a new dataset of hierarchical entity types for extremely fine-grained entity typing. TypeNet was created by manually aligning Freebase types (Bollacker et al., 2008) to noun synsets from the WordNet hierarchy (Fellbaum, 1998) , naturally producing a hierarchical type set.",
"cite_spans": [
{
"start": 152,
"end": 176,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF2"
},
{
"start": 220,
"end": 236,
"text": "(Fellbaum, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TypeNet",
"sec_num": "2.2"
},
{
"text": "To construct TypeNet, we first consider all Freebase types that were linked to more than 20 entities. This is done to eliminate types that are either very specific or very rare. We also remove all Freebase API types, e.g. the [/freebase, /dataworld, /schema, /atom, /scheme, and /topics] domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TypeNet",
"sec_num": "2.2"
},
{
"text": "For each remaining Freebase type, we generate a list of candidate WordNet synsets through a substring match. An expert annotator then attempted to map the Freebase type to one or more synsets in the candidate list with a parent-of, child-of or equivalence link by comparing the definitions of each synset with example entities of the Freebase type. If no match was found, the annotator manually formulated queries for the online WordNet API until an appropriate synset was found. See Table 9 for an example annotation.",
"cite_spans": [],
"ref_spans": [
{
"start": 484,
"end": 491,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "TypeNet",
"sec_num": "2.2"
},
{
"text": "Two expert annotators independently aligned each Freebase type before meeting to resolve any conflicts. The annotators were conservative with assigning equivalence links resulting in a greater number of child-of links. The final dataset contained 13 parent-of, 727 child-of, and 380 equivalence links. Note that some Freebase types have multiple child-of links to WordNet, making Type-Net, like WordNet, a directed acyclic graph. We then took the union of each of our annotated Freebase types, the synset that they linked to, and any ancestors of that synset. We also added an additional set of 614 FB \u2192 FB links 4. This was done by computing conditional probabilities of Freebase types given other Freebase types from a collection of 5 million randomly chosen Freebase entities. The conditional probability P(t 2 | t 1 ) of a Freebase type t 2 given another Freebase type t 1 was calculated as #(t 1 ,t 2 ) #t 1 . Links with a conditional probability less than or equal to 0.7 were discarded. The remaining links were manually verified by an expert annotator and valid links were added to the final dataset, preserving acyclicity. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TypeNet",
"sec_num": "2.2"
},
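The FB \u2192 FB link-filtering step described above can be sketched as follows. This is a minimal illustration of the counting and thresholding procedure, not the authors' code; the function name `candidate_links` and the toy type names are assumptions.

```python
from collections import Counter
from itertools import permutations

def candidate_links(entity_types, threshold=0.7):
    """Keep directed links t1 -> t2 where P(t2 | t1) = #(t1, t2) / #t1
    exceeds the threshold (links with probability <= 0.7 were discarded
    in the paper; survivors were then manually verified)."""
    single = Counter()
    pair = Counter()
    for types in entity_types:
        single.update(types)
        pair.update(permutations(types, 2))  # ordered (t1, t2) co-occurrences
    return {(t1, t2): pair[(t1, t2)] / single[t1]
            for (t1, t2) in pair
            if pair[(t1, t2)] / single[t1] > threshold}

# toy example: every "senator" entity is also a "politician", but only
# two of three "politician" entities are senators
entities = [{"politician", "senator"}, {"politician", "senator"},
            {"politician"}, {"actor"}]
links = candidate_links(entities)
```

Here P(politician | senator) = 2/2 = 1.0 survives the 0.7 cut, while the reverse direction P(senator | politician) = 2/3 does not, which is exactly the asymmetry that makes these links hierarchical.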
{
"text": "We define a textual mention m as a sentence with an identified entity. The goal is then to classify m with one or more labels. For example, we could take the sentence m = \"Barack Obama is the President of the United States.\" with the identified entity string Barack Obama. In the task of entity linking, we want to map m to a specific entity in a knowledge base such as \"m/02mjmr\" in Freebase. In mention-level typing, we label m with one or more types from our type system T such as t m = {president, leader, politician} (Ling and Weld, 2012; Gillick et al., 2014; Shimaoka et al., 2017) . In entity-level typing, we instead consider a bag of mentions B e which are all linked to the same entity. We label B e with t e , the set of all types expressed in all m \u2208 B e (Yao et al., 2013; Neelakantan and Chang, 2015; Verga et al., 2017; Yaghoobzadeh et al., 2017a) .",
"cite_spans": [
{
"start": 522,
"end": 543,
"text": "(Ling and Weld, 2012;",
"ref_id": "BIBREF25"
},
{
"start": 544,
"end": 565,
"text": "Gillick et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 566,
"end": 588,
"text": "Shimaoka et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 768,
"end": 786,
"text": "(Yao et al., 2013;",
"ref_id": "BIBREF54"
},
{
"start": 787,
"end": 815,
"text": "Neelakantan and Chang, 2015;",
"ref_id": "BIBREF28"
},
{
"start": 816,
"end": 835,
"text": "Verga et al., 2017;",
"ref_id": "BIBREF45"
},
{
"start": 836,
"end": 863,
"text": "Yaghoobzadeh et al., 2017a)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Entity Typing and Linking",
"sec_num": "3.1"
},
{
"text": "Our model converts each mention m to a d dimensional vector. This vector is used to classify the type or entity of the mention. The basic model depicted in Figure 1 concatenates the averaged word embeddings of the mention string with the output of a convolutional neural network (CNN). The word embeddings of the mention string capture global, context independent semantics while the CNN encodes a context dependent representation.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Mention Encoder",
"sec_num": "3.2"
},
{
"text": "Each sentence is made up of s tokens which are mapped to d w dimensional word embeddings. Because sentences may contain mentions of more than one entity, we explicitly encode a distinguished mention in the text using position embeddings which have been shown to be useful in state of the art relation extraction models (dos Santos et al., 2015; Lin et al., 2016) and machine translation (Vaswani et al., 2017) . Each word embedding is concatenated with a d p dimensional learned position embedding encoding the token's relative distance to the target entity. Each token within the distinguished mention span has position 0, tokens to the left have a negative distance from [\u2212s, 0), and tokens to the right of the mention span have a positive distance from (0, s]. We denote the final sequence of token representations as M .",
"cite_spans": [
{
"start": 319,
"end": 344,
"text": "(dos Santos et al., 2015;",
"ref_id": "BIBREF35"
},
{
"start": 345,
"end": 362,
"text": "Lin et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 387,
"end": 409,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Token Representation",
"sec_num": "3.2.1"
},
{
"text": "The embedded sequence M is then fed into our context encoder. Our context encoder is a single layer CNN followed by a tanh non-linearity to produce C. The outputs are max pooled across time to get a final context embedding, m CNN .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "3.2.2"
},
{
"text": "c i = tanh(b + w j=0 W [j]M [i \u2212 w 2 + j]) m CNN = max 0\u2264i\u2264n\u2212w+1 c i Each W [j] \u2208 R d\u00d7d is a CNN filter, the bias b \u2208 R d , M [i] \u2208 R d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "3.2.2"
},
{
"text": "is a token representation, and the max is taken pointwise. In all of our experiments we set w = 5. In addition to the contextually encoded mention, we create a global mention encoding, m G , by averaging the word embeddings of the tokens within the mention span.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "3.2.2"
},
{
"text": "The final mention representation m F is constructed by concatenating m CNN and m G and applying a two layer feed-forward network with tanh non-linearity (see Figure 1 ):",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 166,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "3.2.2"
},
{
"text": "m F = W 2 tanh(W 1 m SFM m CNN + b 1 ) + b 2 4 Training 4.1 Mention-Level Typing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "3.2.2"
},
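The encoder of Sections 3.2.1-3.2.2 can be sketched as below. This is a rough numpy illustration of the description above (width-w CNN with tanh, max pooling over time, span-averaged word vectors, then a two-layer feed-forward net), not the authors' implementation; shapes, padding, and initialization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, w = 8, 5                     # hidden size d; filter width w = 5 as in the paper

def mention_encoder(M, span, W_cnn, b, W1, b1, W2, b2):
    n = M.shape[0]
    pad = np.zeros((w // 2, M.shape[1]))
    Mp = np.vstack([pad, M, pad])          # zero-pad so c_i exists for every i
    # c_i = tanh(b + sum_j W[j] M[i - w/2 + j])
    C = np.stack([np.tanh(b + sum(W_cnn[j] @ Mp[i + j] for j in range(w)))
                  for i in range(n)])
    m_cnn = C.max(axis=0)                  # element-wise max over time
    m_g = M[span[0]:span[1]].mean(axis=0)  # average of span word vectors
    h = np.concatenate([m_g, m_cnn])       # [m_G; m_CNN]
    return W2 @ np.tanh(W1 @ h + b1) + b2  # two-layer feed-forward -> m_F

M = rng.normal(size=(11, d))               # 11 tokens, d-dim representations
W_cnn = 0.1 * rng.normal(size=(w, d, d))
W1 = 0.1 * rng.normal(size=(d, 2 * d))
W2 = 0.1 * rng.normal(size=(d, d))
m_F = mention_encoder(M, (2, 4), W_cnn, np.zeros(d), W1, np.zeros(d),
                      W2, np.zeros(d))
```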
{
"text": "Mention level entity typing is treated as multilabel prediction. Given the sentence vector m F , we compute a score for each type in typeset T as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "3.2.2"
},
{
"text": "y j = t j m F",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "3.2.2"
},
{
"text": "where t j is the embedding for the j th type in T and y j is its corresponding score. The mention is labeled with t m , a binary vector of all types where t m j = 1 if the j th type is in the set of gold types for m and 0 otherwise. We optimize a multi-label binary cross entropy objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "3.2.2"
},
{
"text": "L type (m) = \u2212 j t m j log y j + (1 \u2212 t m j ) log(1 \u2212 y j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation",
"sec_num": "3.2.2"
},
{
"text": "In the absence of mention-level annotations, we instead must rely on distant supervision (Mintz et al., 2009) to noisily label all mentions of entity e with all types belonging to e. This procedure inevitably leads to noise as not all mentions of an entity express each of its known types. To alleviate this noise, we use multi-instance multi-label learning (MIML) (Surdeanu et al., 2012) which operates over bags rather than mentions. A bag of mentions B e = {m 1 , m 2 , . . . , m n } is the set of all mentions belonging to entity e. The bag is labeled with t e , a binary vector of all types where t e j = 1 if the j th type is in the set of gold types for e and 0 otherwise. For every entity, we subsample k mentions from its bag of mentions. Each mention is then encoded independently using the model described in Section 3.2 resulting in a bag of vectors. Each of the k sentence vectors m i F is used to compute a score for each type in t e :",
"cite_spans": [
{
"start": 89,
"end": 109,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF27"
},
{
"start": 365,
"end": 388,
"text": "(Surdeanu et al., 2012)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Level Typing",
"sec_num": "4.2"
},
{
"text": "y i j = t j m i F",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Level Typing",
"sec_num": "4.2"
},
{
"text": "where t j is the embedding for the j th type in t e and y i is a vector of logits corresponding to the i th mention. The final bag predictions are obtained using element-wise LogSumExp pooling across the k logit vectors in the bag to produce entity level logits y:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Level Typing",
"sec_num": "4.2"
},
{
"text": "y = log i exp(y i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Level Typing",
"sec_num": "4.2"
},
{
"text": "We use these final bag level predictions to optimize a multi-label binary cross entropy objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Level Typing",
"sec_num": "4.2"
},
{
"text": "L type (B e ) = \u2212 j t e j log y j + (1 \u2212 t e j ) log(1 \u2212 y j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Level Typing",
"sec_num": "4.2"
},
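The element-wise LogSumExp pooling above can be computed stably by shifting with the per-type maximum before exponentiating. This is a sketch of the pooling step only, with toy logit values:

```python
import numpy as np

def bag_logits(Y):
    """Element-wise LogSumExp over the k per-mention logit vectors in a
    bag: y = log sum_i exp(y^i), shifted by the per-type max for stability."""
    m = Y.max(axis=0)
    return m + np.log(np.exp(Y - m).sum(axis=0))

Y = np.array([[ 2.0, -1.0, 0.0, -3.0],    # logits of mention 1
              [ 0.5, -2.0, 0.1, -3.0],    # logits of mention 2
              [-1.0, -1.5, 0.2, -3.0]])   # logits of mention 3
y = bag_logits(Y)                          # entity-level logits, one per type
```

LogSumExp acts as a smooth maximum over the bag, so a single confident mention is enough to raise the entity-level score for a type.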
{
"text": "Entity linking is similar to mention-level entity typing with a single correct class per mention. Because the set of possible entities is in the millions, linking models typically integrate an alias table mapping entity mentions to a set of possible candidate entities. Given a large corpus of entity linked data, one can compute conditional probabilities from mention strings to entities (Spitkovsky and Chang, 2012). In many scenarios this data is unavailable. However, knowledge bases such as UMLS contain a canonical string name for each of its curated entities. State-of-the-art biological entity linking systems tend to operate on various string edit metrics between the entity mention string and the set of canonical entity strings in the existing structured knowledge base (Leaman et al., 2013; . For each mention in our dataset, we generate 100 candidate entities e c = (e 1 , e 2 , . . . , e 100 ) each with an associated string similarity score csim. See Appendix A.5.1 for more details on candidate generation. We generate the sentence representation m F using our encoder and compute a similarity score between m F and the learned embedding e of each of the candidate entities. This score and string cosine similarity csim are combined via a learned linear combination to generate our final score. The final prediction at test time\u00ea is the maximally similar entity to the mention. ",
"cite_spans": [
{
"start": 781,
"end": 802,
"text": "(Leaman et al., 2013;",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": "4.3"
},
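The candidate-scoring step can be sketched as follows. The linear-combination weights are learned in the paper; here they are fixed toy values, and all names (`link_scores`, the candidate count k = 5 instead of 100) are illustrative assumptions.

```python
import numpy as np

def link_scores(m_F, cand_emb, csim, w):
    """Combine the dot-product similarity between the mention vector and
    each candidate entity embedding with the string-similarity score csim
    via a linear combination w; the argmax candidate is predicted."""
    return w[0] * (cand_emb @ m_F) + w[1] * csim

rng = np.random.default_rng(4)
d, k = 8, 5                          # the paper generates k = 100 candidates
m_F = rng.normal(size=d)             # encoded mention
cand_emb = rng.normal(size=(k, d))   # learned candidate entity embeddings
csim = rng.uniform(size=k)           # string cosine similarity per candidate
scores = link_scores(m_F, cand_emb, csim, np.array([1.0, 2.0]))
pred = int(np.argmax(scores))        # index of the predicted entity \u00ea
```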
{
"text": "Both entity typing and entity linking treat the label space as prediction into a flat set. To explicitly incorporate the structure between types/entities into our training, we add an additional loss. We consider two methods for modeling the hierarchy of the embedding space: real and complex bilinear maps, which are two of the state-of-the-art knowledge graph embedding models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding Hierarchies",
"sec_num": "5"
},
{
"text": "Bilinear: Our standard bilinear model scores a hypernym link between (c 1 , c 2 ) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Structure Models",
"sec_num": "5.1"
},
{
"text": "s(c 1 , c 2 ) = c 1 Ac 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Structure Models",
"sec_num": "5.1"
},
{
"text": "where A \u2208 R d\u00d7d is a learned real-valued nondiagonal matrix and c 1 is the child of c 2 in the hierarchy. This model is equivalent to RESCAL (Nickel et al., 2011 ) with a single IS-A relation type. The type embeddings are the same whether used on the left or right side of the relation. We merge this with the base model by using the parameter A as an additional map before type/entity scoring. Complex Bilinear: We also experiment with a complex bilinear map based on the ComplEx model (Trouillon et al., 2016) , which was shown to have strong performance predicting the hypernym relation in WordNet, suggesting suitability for asymmetric, transitive relations such as those in our type hierarchy. ComplEx uses complex valued vectors for types, and diagonal complex matrices for relations, using Hermitian inner products (taking the complex conjugate of the second argument, equivalent to treating the right-hand-side type embedding to be the complex conjugate of the left hand side), and finally taking the real part of the score 1 . The score of a hypernym link between (c 1 , c 2 ) in the ComplEx model is defined as:",
"cite_spans": [
{
"start": 141,
"end": 161,
"text": "(Nickel et al., 2011",
"ref_id": "BIBREF30"
},
{
"start": 487,
"end": 511,
"text": "(Trouillon et al., 2016)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Structure Models",
"sec_num": "5.1"
},
{
"text": "s(c 1 , c 2 ) = Re(< c 1 , r IS-A , c 2 >) = Re( k c 1k r kc2k ) = Re(c 1 ), Re(r IS-A ), Re(c 2 ) + Re(c 1 ), Im(r IS-A ), Im(c 2 ) + Im(c 1 ), Re(r IS-A ), Im(c 2 ) \u2212 Im(c 1 ), Im(r IS-A ), Re(c 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Structure Models",
"sec_num": "5.1"
},
{
"text": "where c 1 , c 2 and r IS-A are complex valued vectors representing c 1 , c 2 and the IS-A relation respectively. Re(z) represents the real component of z and Im(z) is the imaginary component. As noted in Trouillon et al. (2016) , the above function is antisymmetric when r IS-A is purely imaginary.",
"cite_spans": [
{
"start": 204,
"end": 227,
"text": "Trouillon et al. (2016)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Structure Models",
"sec_num": "5.1"
},
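The antisymmetry property is easy to check numerically. A sketch of the ComplEx score with a purely imaginary relation vector (`complex_score` is an illustrative name, not from the paper):

```python
import numpy as np

def complex_score(c1, r, c2):
    # Re(<c1, r, conj(c2)>) = Re(sum_k c1[k] * r[k] * conj(c2[k]))
    return float(np.real(np.sum(c1 * r * np.conj(c2))))

rng = np.random.default_rng(2)
d = 4
c1 = rng.normal(size=d) + 1j * rng.normal(size=d)
c2 = rng.normal(size=d) + 1j * rng.normal(size=d)
r_is_a = 1j * rng.normal(size=d)     # purely imaginary IS-A relation vector

s12 = complex_score(c1, r_is_a, c2)  # child-of in one direction...
s21 = complex_score(c2, r_is_a, c1)  # ...is the negation of the other
```

With a purely imaginary r_IS-A, s(c_1, c_2) = \u2212s(c_2, c_1), which is exactly the asymmetry needed for a transitive hypernym relation.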
{
"text": "Since entity/type embeddings are complex vectors, in order to combine it with our base model, we also need to represent mentions with complex vectors for scoring. To do this, we pass the output of the mention encoder through two different affine transformations to generate a real and imaginary component:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Structure Models",
"sec_num": "5.1"
},
{
"text": "Re(m F ) = W real m F + b real Im(m F ) = W img m F + b img",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Structure Models",
"sec_num": "5.1"
},
{
"text": "where m F is the output of the mention encoder, and W real , W img \u2208 R d\u00d7d and b real , b img \u2208 R d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Structure Models",
"sec_num": "5.1"
},
{
"text": "Learning a hierarchy is analogous to learning embeddings for nodes of a knowledge graph with a single hypernym/IS-A relation. To train these embeddings, we sample (c 1 , c 2 ) pairs, where each pair is a positive link in our hierarchy. For each positive link, we sample a set N of n negative links. We encourage the model to output high scores for positive links, and low scores for negative links via a binary cross entropy (BCE) loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with Hierarchies",
"sec_num": "5.2"
},
{
"text": "L struct = \u2212 log \u03c3(s(c 1i , c 2i )) + N log(1 \u2212 \u03c3(s(c 1i , c 2i ))) L = L type/link + \u03b3L struct",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with Hierarchies",
"sec_num": "5.2"
},
{
"text": "where s(c 1 , c 2 ) is the score of a link (c 1 , c 2 ), and \u03c3(\u2022) is the logistic sigmoid. The weighting parameter \u03b3 is \u2208 {0.1, 0.5, 0.8, 1, 2.0, 4.0}. The final loss function that we optimize is L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with Hierarchies",
"sec_num": "5.2"
},
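Putting the pieces together, a toy version of the hierarchy loss with a bilinear link scorer. All concept names, dimensions, and values here are hypothetical illustrations of the training scheme, not the authors' code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def struct_loss(score, pos, negs):
    """BCE over hierarchy links: push the score of one positive IS-A link
    up and the scores of its n sampled negative links down."""
    loss = -np.log(sigmoid(score(*pos)))
    for neg in negs:
        loss -= np.log(1.0 - sigmoid(score(*neg)))
    return loss

# toy bilinear scorer s(c1, c2) = c1^T A c2 over hypothetical concepts
rng = np.random.default_rng(3)
d = 6
A = 0.1 * rng.normal(size=(d, d))
emb = {c: rng.normal(size=d) for c in ("protein", "entity", "city")}
score = lambda a, b: float(emb[a] @ A @ emb[b])

L_struct = struct_loss(score, ("protein", "entity"),          # positive link
                       [("entity", "protein"), ("city", "protein")])  # negatives
gamma = 0.5                # one of the paper's candidate weighting values
# final objective: L = L_type_or_link + gamma * L_struct
```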
{
"text": "We perform three sets of experiments: mentionlevel entity typing on the benchmark dataset FIGER, entity-level typing using Wikipedia and TypeNet, and entity linking using MedMentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "CNN: Each mention is encoded using the model described in Section 3.2. The resulting embedding is used for classification into a flat set labels. Specific implementation details can be found in Appendix A.2. CNN+Complex: The CNN+Complex model is equivalent to the CNN model but uses complex embeddings and Hermitian dot products. Transitive: This model does not add an additional hierarchical loss to the training objective (unless otherwise stated). We add additional labels to each entity corresponding to the transitive closure, or the union of all ancestors of its known types. This provides a rich additional learning signal that greatly improves classification of specific types. Hierarchy: These models add an explicit hierarchical loss to the training objective, as described in Section 5, using either complex or real-valued bilinear mappings, and the associated parameter sharing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "6.1"
},
{
"text": "To evaluate the efficacy of our methods we first compare against the current state-of-art models of Shimaoka et al. (2017) . The most widely used type system for fine-grained entity typing is FIGER which consists of 113 types organized in a 2 level hierarchy. For training, we use the publicly available W2M data (Ren et al., 2016) and optimize the mention typing loss function defined in Section-4.1 with the additional hierarchical loss where specified. For evaluation, we use the manually annotated FIGER (GOLD) data by Ling and Weld (2012) . See Appendix A.2 and A.3 for specific implementation details.",
"cite_spans": [
{
"start": 100,
"end": 122,
"text": "Shimaoka et al. (2017)",
"ref_id": "BIBREF36"
},
{
"start": 313,
"end": 331,
"text": "(Ren et al., 2016)",
"ref_id": "BIBREF33"
},
{
"start": 523,
"end": 543,
"text": "Ling and Weld (2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention-Level Typing in FIGER",
"sec_num": "6.2"
},
{
"text": "In Table 5 we see that our base CNN models (CNN and CNN+Complex) match LSTM models of Shimaoka et al. (2017) and Gupta et al. (2017) , the previous state-of-the-art for models without handcrafted features. When incorporating structure into our models, we gain 2.5 points of accuracy in our CNN+Complex model, matching the overall state of the art attentive LSTM that relied on handcrafted features from syntactic parses, topic models, and character n-grams. The structure can help our model predict lower frequency types which is a similar role played by hand-crafted features.",
"cite_spans": [
{
"start": 86,
"end": 108,
"text": "Shimaoka et al. (2017)",
"ref_id": "BIBREF36"
},
{
"start": 113,
"end": 132,
"text": "Gupta et al. (2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2.1"
},
{
"text": "Next we evaluate our models on entity-level typing in TypeNet using Wikipedia. For each entity, we follow the procedure outlined in Section 4.2. We predict labels for each instance in the entity's bag and aggregate them into entity-level predictions using LogSumExp pooling. Each type is assigned a predicted score by the model. We then rank these scores and calculate average precision for each of the types in the test set, and use these scores to calculate mean average precision (MAP). We evaluate using MAP instead of accuracy which is standard in large knowledge base link prediction tasks (Verga et al., 2017; Trouillon et al., 2016) . These scores are calculated only over Freebase types, which tend to be lower in the hierarchy. This is to avoid artificial score inflation caused by trivial predictions such as 'entity.' See Appendix A.4 for more implementation details. Table 6 shows the results for entity level typing on our Wikipedia TypeNet dataset. We see that both the basic CNN and the CNN+Complex models perform similarly with the CNN+Complex model doing slightly better on the full data regime.",
"cite_spans": [
{
"start": 596,
"end": 616,
"text": "(Verga et al., 2017;",
"ref_id": "BIBREF45"
},
{
"start": 617,
"end": 640,
"text": "Trouillon et al., 2016)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 880,
"end": 887,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Entity-Level Typing in TypeNet",
"sec_num": "6.3"
},
{
"text": "We also see that both models get an improvement when adding an explicit hierarchy loss, even before adding in the transitive closure. The transitive closure itself gives an additional increase Normalized scores consider only mentions which contain the gold entity in the candidate set. Mention tfidf is csim from Section 4.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3.1"
},
{
"text": "in performance to both models. In both of these cases, the basic CNN model improves by a greater amount than CNN+Complex. This could be a result of the complex embeddings being more difficult to optimize and therefore more susceptible to variations in hyperparameters. When adding in both the transitive closure and the explicit hierarchy loss, the performance improves further. We observe similar trends when training our models in a lower data regime with~150,000 examples, or about 5% of the total data. In all cases, we note that the baseline models that do not incorporate any hierarchical information (neither the transitive closure nor the hierarchy loss) perform~9 MAP worse, demonstrating the benefits of incorporating structure information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3.1"
},
{
"text": "In addition to entity typing, we evaluate our model's performance on an entity linking task using MedMentions, our new PubMed / UMLS dataset described in Section 2.1. Table 7 shows results for baselines and our proposed variant with additional hierarchical loss. None of these models incorporate transitive clo- Table 8 : Example predictions from MedMentions. Each example shows the sentence with entity mention span in bold. Baseline, shows the predicted entity and its ancestors of a model not incorporating structure. Finally, +hierarchy shows the prediction and ancestors for a model which explicitly incorporates the hierarchical structure information. sure information, due to difficulty incorporating it in our candidate generation, which we leave to future work. The Normalized metric considers performance only on mentions with an alias table hit; all models have 0 accuracy for mentions otherwise. We also report the overall score for comparison in future work with improved candidate generation. We see that incorporating structure information results in a 1.1% reduction in absolute error, corresponding to a~6% reduction in relative error on this large-scale dataset. Table 8 shows qualitative predictions for models with and without hierarchy information incorporated. Each example contains the sentence (with target entity in bold), predictions for the baseline and hierarchy aware models, and the ancestors of the predicted entity. In the first and second example, the baseline model becomes extremely dependent on TFIDF string similarities when the gold candidate is rare (\u2264 10 occurrences). This shows that modeling the structure of the entity hierarchy helps the model disambiguate rare entities. In the third example, structure helps the model understand the hierarchical nature of the labels and prevents it from predicting an entity that is overly specific (e.g predicting Interleukin-27 rather than the correct and more general entity IL2 Gene).",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 174,
"text": "Table 7",
"ref_id": "TABREF11"
},
{
"start": 312,
"end": 319,
"text": "Table 8",
"ref_id": null
},
{
"start": 1181,
"end": 1188,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "MedMentions Entity Linking with UMLS",
"sec_num": "6.4"
},
{
"text": "Note that, in contrast with the previous tasks, the complex hierarchical loss provides a significant boost, while the real-valued bilinear model does not. A possible explanation is that UMLS is a far larger/deeper ontology than even TypeNet, and the additional ability of complex embeddings to model intricate graph structure is key to realizing gains from hierarchical modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.4.1"
},
{
"text": "By directly linking a large set of mentions and typing a large set of entities with respect to a new ontology and corpus, and our incorporation of structural learning between the many entities and types in our ontologies of interest, our work draws on many different but complementary threads of research in information extraction, knowledge base population, and completion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Our structural, hierarchy-aware loss between types and entities draws on research in Knowledge Base Inference such as Jain et al. 2018, Trouillon et al. (2016) and Nickel et al. (2011) . Combining KB completion with hierarchical structure in knowledge bases has been explored in (Dalvi et al., 2015; Xie et al., 2016) . Recently, Wu et al. (2017) proposed a hierarchical loss for text classification.",
"cite_spans": [
{
"start": 136,
"end": 159,
"text": "Trouillon et al. (2016)",
"ref_id": "BIBREF42"
},
{
"start": 164,
"end": 184,
"text": "Nickel et al. (2011)",
"ref_id": "BIBREF30"
},
{
"start": 279,
"end": 299,
"text": "(Dalvi et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 300,
"end": 317,
"text": "Xie et al., 2016)",
"ref_id": "BIBREF51"
},
{
"start": 330,
"end": 346,
"text": "Wu et al. (2017)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Linking mentions to a flat set of entities, often in Freebase or Wikipedia, is a long-standing task in NLP (Bunescu and Pasca, 2006; Cucerzan, 2007; Durrett and Klein, 2014; Francis-Landau et al., 2016) . Typing of mentions at varying levels of granularity, from CoNLL-style named entity recognition (Tjong Kim Sang and De Meulder, 2003) , to the more fine-grained recent approaches (Ling and Weld, 2012; Gillick et al., 2014; Shimaoka et al., 2017) , is also related to our task. A few prior attempts to incorporate a very shallow hierarchy into fine-grained entity typing have not lead to significant or consistent improvements (Gillick et al., 2014; Shimaoka et al., 2017) .",
"cite_spans": [
{
"start": 107,
"end": 132,
"text": "(Bunescu and Pasca, 2006;",
"ref_id": "BIBREF3"
},
{
"start": 133,
"end": 148,
"text": "Cucerzan, 2007;",
"ref_id": "BIBREF5"
},
{
"start": 149,
"end": 173,
"text": "Durrett and Klein, 2014;",
"ref_id": "BIBREF12"
},
{
"start": 174,
"end": 202,
"text": "Francis-Landau et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 311,
"end": 337,
"text": "Sang and De Meulder, 2003)",
"ref_id": "BIBREF41"
},
{
"start": 383,
"end": 404,
"text": "(Ling and Weld, 2012;",
"ref_id": "BIBREF25"
},
{
"start": 405,
"end": 426,
"text": "Gillick et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 427,
"end": 449,
"text": "Shimaoka et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 630,
"end": 652,
"text": "(Gillick et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 653,
"end": 675,
"text": "Shimaoka et al., 2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "The knowledge base Yago (Suchanek et al., 2007) includes integration with WordNet and type hierarchies have been derived from its type system (Yosef et al., 2012) . Del Corro et al. (2015) use manually crafted rules and patterns (Hearst patterns (Hearst, 1992) , appositives, etc) to automati-cally match entity types to Wordnet synsets.",
"cite_spans": [
{
"start": 24,
"end": 47,
"text": "(Suchanek et al., 2007)",
"ref_id": "BIBREF39"
},
{
"start": 142,
"end": 162,
"text": "(Yosef et al., 2012)",
"ref_id": "BIBREF55"
},
{
"start": 246,
"end": 260,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Recent work has moved towards unifying these two highly related tasks by improving entity linking by simultaneously learning a fine grained entity type predictor (Gupta et al., 2017) . Learning hierarchical structures or transitive relations between concepts has been the subject of much recent work (Vilnis and McCallum, 2015; Vendrov et al., 2016; Nickel and Kiela, 2017) We draw inspiration from all of this prior work, and contribute datasets and models to address previous challenges in jointly modeling the structure of large-scale hierarchical ontologies and mapping textual mentions into an extremely fine-grained space of entities and types.",
"cite_spans": [
{
"start": 162,
"end": 182,
"text": "(Gupta et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 300,
"end": 327,
"text": "(Vilnis and McCallum, 2015;",
"ref_id": "BIBREF47"
},
{
"start": 328,
"end": 349,
"text": "Vendrov et al., 2016;",
"ref_id": "BIBREF44"
},
{
"start": 350,
"end": 373,
"text": "Nickel and Kiela, 2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "We demonstrate that explicitly incorporating and modeling hierarchical information leads to increased performance in experiments on entity typing and linking across three challenging datasets. Additionally, we introduce two new humanannotated datasets: MedMentions, a corpus of 246k mentions from PubMed abstracts linked to the UMLS knowledge base, and TypeNet, a new hierarchical fine-grained entity typeset an order of magnitude larger and deeper than previous datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "While this work already demonstrates considerable improvement over non-hierarchical modeling, future work will explore techniques such as Box embeddings (Vilnis et al., 2018) and Poincar\u00e9 embeddings (Nickel and Kiela, 2017) to represent the hierarchical embedding space, as well as methods to improve recall in the candidate generation process for entity linking. Most of all, we are excited to see new techniques from the NLP community using the resources we have presented.",
"cite_spans": [
{
"start": 153,
"end": 174,
"text": "(Vilnis et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 199,
"end": 223,
"text": "(Nickel and Kiela, 2017)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Freebase type: musical chord Example entities: psalms chord, power chord harmonic seventh chord chord.n.01: a straight line connecting two points on a curve chord.n.02: a combination of three or more notes that blend harmoniously when sounded together musical.n.01: a play or film whose action and dialogue is interspersed with singing and dancing Table 9 : Example given to TypeNet annotators. Here, the Freebase type to be linked is musical chord. This type is annotated in Freebase belonging to the entities psalms chord, harmonic seventh chord, and power chord. Below the list of example entities are candidate Word-Net synsets obtained by substring matching between the Freebase type and all WordNet synsets. The correctly aligned synset is chord.n.02 shown in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 355,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.1 TypeNet Construction",
"sec_num": null
},
{
"text": "For all of our experiments, we use pretrained 300 dimensional word vectors from Pennington et al. (2014) . These embeddings are fixed during training. The type vectors and entity vectors are all 300 dimensional vectors initialized using Glorot initialization (Glorot and Bengio, 2010) . The number of negative links for hierarchical training n \u2208 {16, 32, 64, 128, 256}.",
"cite_spans": [
{
"start": 80,
"end": 104,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF31"
},
{
"start": 259,
"end": 284,
"text": "(Glorot and Bengio, 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Model Implementation Details",
"sec_num": null
},
{
"text": "For regularization, we use dropout (Srivastava et al., 2014) with p \u2208 {0.5, 0.75, 0.8} on the sentence encoder output and L2 regularize all learned parameters with \u03bb \u2208 {1e-5, 5e-5, 1e-4}. All our parameters are optimized using Adam (Kingma and Ba, 2014) with a learning rate of 0.001. We tune our hyper-parameters via grid search and early stopping on the development set.",
"cite_spans": [
{
"start": 35,
"end": 60,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Model Implementation Details",
"sec_num": null
},
{
"text": "To train our models, we use the mention typing loss function defined in Section-5. For models with structure training, we additionally add in the hierarchical loss, along with a weight that is obtained by tuning on the dev set. We follow the same inference time procedure as Shimaoka et al. (2017) For each mention, we first assign the type with the largest probability according to the logits, and then assign additional types based on the condition that their corresponding probability be greater than 0.5.",
"cite_spans": [
{
"start": 275,
"end": 297,
"text": "Shimaoka et al. (2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 FIGER Implementation Details",
"sec_num": null
},
{
"text": "At train time, each training example randomly samples an entity bag of 10 mentions. At test time we classify bags of 20 mentions of an entity. The dataset contains a total of 344,246 entities mapped to the 1081 Freebase types from TypeNet. We consider all sentences in Wikipedia between 10 and 50 tokens long. Tokenization and sentence splitting was performed using NLTK (Loper and Bird, 2002) . From these sentences, we considered all entities annotated with a cross-link in Wikipedia that we could link to Freebase and assign types in TypeNet. We then split the data by entities into a 90-5-5 train, dev, test split.",
"cite_spans": [
{
"start": 371,
"end": 393,
"text": "(Loper and Bird, 2002)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.4 Wikipedia Data and Implementation Details",
"sec_num": null
},
{
"text": "Each mention and each canonical entity string in UMLS are mapped to TFIDF character ngram vectors. We pre-process each string by lowercasing and removing stop words. We consider ngrams from size 1 to 5 and keep the top 100,000 features and the final vectors are L2 normalized. For each mention, we calculate the cosine similarity, csim, between the mention string and each canonical entity string. In our experiments we consider the top 100 most similar entities as the candidate set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.5.1 Candidate Generation Details",
"sec_num": null
},
{
"text": "This step makes the scoring function technically not bilinear, as it commutes with addition but not complex multiplication, but we term it bilinear for ease of exposition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Nicholas Monath, Haw-Shiuan Chang and Emma Strubell for helpful comments on early drafts of the paper. Creation of the Med-Mentions corpus is supported and managed by the Meta team at the Chan Zuckerberg Initiative. A pre-release of the dataset is available at http://github.com/chanzuckerberg/ MedMentions. This work was supported in part by the Center for Intelligent Information Retrieval and the Center for Data Science, in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction., and in part by the National Science Foundation under Grant No. IIS-1514053. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "9"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Uniprot: the universal protein knowledgebase",
"authors": [
{
"first": "Rolf",
"middle": [],
"last": "Apweiler",
"suffix": ""
},
{
"first": "Amos",
"middle": [],
"last": "Bairoch",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Cathy",
"suffix": ""
},
{
"first": "Winona",
"middle": [
"C"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Brigitte",
"middle": [],
"last": "Barker",
"suffix": ""
},
{
"first": "Serenella",
"middle": [],
"last": "Boeckmann",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Ferro",
"suffix": ""
},
{
"first": "Hongzhan",
"middle": [],
"last": "Gasteiger",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Magrane",
"suffix": ""
}
],
"year": 2004,
"venue": "Nucleic acids research",
"volume": "32",
"issue": "1",
"pages": "115--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rolf Apweiler, Amos Bairoch, Cathy H Wu, Winona C Barker, Brigitte Boeckmann, Serenella Ferro, Elis- abeth Gasteiger, Hongzhan Huang, Rodrigo Lopez, Michele Magrane, et al. 2004. Uniprot: the univer- sal protein knowledgebase. Nucleic acids research, 32(suppl 1):D115-D119.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The unified medical language system (umls): integrating biomedical terminology",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2004,
"venue": "Nucleic acids research",
"volume": "32",
"issue": "1",
"pages": "267--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Bodenreider. 2004. The unified medical lan- guage system (umls): integrating biomedical termi- nology. Nucleic acids research, 32(suppl 1):D267- D270.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Freebase: a collaboratively created graph database for structuring human knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 ACM SIGMOD international conference on Management of data",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247-1250. AcM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using encyclopedic knowledge for named entity disambiguation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Razvan",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pasca",
"suffix": ""
}
],
"year": 2006,
"venue": "Eacl",
"volume": "6",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan C Bunescu and Marius Pasca. 2006. Using en- cyclopedic knowledge for named entity disambigua- tion. In Eacl, volume 6, pages 9-16.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The biogrid interaction database: 2017 update. Nucleic acids research",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Chatr-Aryamontri",
"suffix": ""
},
{
"first": "Rose",
"middle": [],
"last": "Oughtred",
"suffix": ""
},
{
"first": "Lorrie",
"middle": [],
"last": "Boucher",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Rust",
"suffix": ""
},
{
"first": "Christie",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Nadine",
"middle": [
"K"
],
"last": "Kolas",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Lara",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Donnell",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Oster",
"suffix": ""
},
{
"first": "Adnane",
"middle": [],
"last": "Theesfeld",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sellam",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "45",
"issue": "",
"pages": "369--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Chatr-aryamontri, Rose Oughtred, Lorrie Boucher, Jennifer Rust, Christie Chang, Nadine K Kolas, Lara O'Donnell, Sara Oster, Chandra Theesfeld, Adnane Sellam, et al. 2017. The biogrid interaction database: 2017 update. Nucleic acids re- search, 45(D1):D369-D379.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Large-scale named entity disambiguation based on wikipedia data",
"authors": [
{
"first": "",
"middle": [],
"last": "Silviu Cucerzan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silviu Cucerzan. 2007. Large-scale named entity dis- ambiguation based on wikipedia data. In Proceed- ings of the 2007 joint conference on empirical meth- ods in natural language processing and computa- tional natural language learning (EMNLP-CoNLL).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Entity query feature expansion using knowledge base links",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Dalton",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Dietz",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval",
"volume": "",
"issue": "",
"pages": "365--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Dalton, Laura Dietz, and James Allan. 2014. Entity query feature expansion using knowledge base links. In Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval, pages 365-374. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic gloss finding for a knowledge base using ontological constraints",
"authors": [
{
"first": "Bhavana",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Minkov",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"P"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Eighth ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "369--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhavana Dalvi, Einat Minkov, Partha P Talukdar, and William W Cohen. 2015. Automatic gloss finding for a knowledge base using ontological constraints. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 369-378. ACM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Question answering on knowledge bases and text using universal schema and memory networks",
"authors": [
{
"first": "Rajarshi",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "358--365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajarshi Das, Manzil Zaheer, Siva Reddy, and Andrew McCallum. 2017. Question answering on knowl- edge bases and text using universal schema and memory networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 358- 365, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Comparative toxicogenomics database: a knowledgebase and discovery tool for chemicalgene-disease networks",
"authors": [
{
"first": "Allan",
"middle": [],
"last": "Peter Davis",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [
"G"
],
"last": "Murphy",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [
"A"
],
"last": "Saraceni-Richards",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rosenstein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Wiegers",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mattingly",
"suffix": ""
}
],
"year": 2008,
"venue": "Nucleic acids research",
"volume": "37",
"issue": "1",
"pages": "786--792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allan Peter Davis, Cynthia G Murphy, Cynthia A Saraceni-Richards, Michael C Rosenstein, Thomas C Wiegers, and Carolyn J Mattingly. 2008. Comparative toxicogenomics database: a knowledgebase and discovery tool for chemical- gene-disease networks. Nucleic acids research, 37(suppl 1):D786-D792.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Finet: Context-aware fine-grained named entity typing",
"authors": [
{
"first": "Luciano",
"middle": [],
"last": "Del Corro",
"suffix": ""
},
{
"first": "Abdalghani",
"middle": [],
"last": "Abujabal",
"suffix": ""
},
{
"first": "Rainer",
"middle": [],
"last": "Gemulla",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luciano Del Corro, Abdalghani Abujabal, Rainer Gemulla, and Gerhard Weikum. 2015. Finet: Context-aware fine-grained named entity typing. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ncbi disease corpus: a resource for disease name recognition and concept normalization",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Rezarta Islamaj Dogan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of biomedical informatics",
"volume": "47",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for dis- ease name recognition and concept normalization. Journal of biomedical informatics, 47:1-10.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A joint model for entity analysis: Coreference, typing, and linking",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "477--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics, 2:477-490.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Capturing semantic similarity for entity linking with convolutional neural networks",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Francis-Landau",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1256--1261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Francis-Landau, Greg Durrett, and Dan Klein. 2016. Capturing semantic similarity for en- tity linking with convolutional neural networks. In Proceedings of NAACL-HLT, pages 1256-1261.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Contextdependent fine-grained entity type tagging",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "Nevena",
"middle": [],
"last": "Lazic",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Kirchner",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Huynh",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, and David Huynh. 2014. Context- dependent fine-grained entity type tagging. CoRR, abs/1412.1820.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Understanding the difficulty of training deep feedforward neural networks",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neural networks. In Proceedings of the International Con- ference on Artificial Intelligence and Statistics (AIS- TATS).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Entity linking via joint encoding of types, descriptions, and context",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2671--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Gupta, Sameer Singh, and Dan Roth. 2017. En- tity linking via joint encoding of types, descriptions, and context. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Pro- cessing, pages 2671-2680, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Proceedings of the International Conference on Computational Lin- guistics (COLING).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Mitigating the effect of out-ofvocabulary entity pairs in matrix factorization for knowledge base inference",
"authors": [
{
"first": "Prachi",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Shikhar",
"middle": [],
"last": "Murty",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Soumen",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
}
],
"year": 2018,
"venue": "The 27th International Joint Conference on Artificial Intelligence (IJ-CAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prachi Jain, Shikhar Murty, Mausam, and Soumen Chakrabarti. 2018. Mitigating the effect of out-of- vocabulary entity pairs in matrix factorization for knowledge base inference. In The 27th Interna- tional Joint Conference on Artificial Intelligence (IJ- CAI), Stockholm, Sweden.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dnorm: disease name normalization with pairwise learning to rank",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Rezarta",
"middle": [
"Islamaj"
],
"last": "Dogan",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2013,
"venue": "Bioinformatics",
"volume": "29",
"issue": "22",
"pages": "2909--2917",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman, Rezarta Islamaj Dogan, and Zhiy- ong Lu. 2013. Dnorm: disease name normaliza- tion with pairwise learning to rank. Bioinformatics, 29(22):2909-2917.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Taggerone: joint named entity recognition and normalization with semi-markov models",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "Bioinformatics",
"volume": "32",
"issue": "18",
"pages": "2839--2846",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Leaman and Zhiyong Lu. 2016. Taggerone: joint named entity recognition and normaliza- tion with semi-markov models. Bioinformatics, 32(18):2839-2846.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database",
"authors": [
{
"first": "Jiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yueping",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Robin",
"middle": [
"J"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Sciaky",
"suffix": ""
},
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Davis",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Mattingly",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Wiegers",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Neural relation extraction with selective attention over instances",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2124--2133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 2124-2133, Berlin, Germany. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Fine-grained entity recognition",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Daniel S Weld",
"suffix": ""
}
],
"year": 2012,
"venue": "Twenty-Sixth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ling and Daniel S Weld. 2012. Fine-grained en- tity recognition. In Twenty-Sixth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Nltk: The natural language toolkit",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 Workshop on Effective tools and methodologies for teaching natural language processing and computational linguistics",
"volume": "1",
"issue": "",
"pages": "63--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In Proceedings of the ACL-02 Workshop on Effective tools and methodologies for teaching natural language processing and computa- tional linguistics-Volume 1, pages 63-70. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju- rafsky. 2009. Distant supervision for relation ex- traction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Suntec, Singapore. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "515--525",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arvind Neelakantan and Ming-Wei Chang. 2015. In- ferring missing entity type instances for knowledge base completion: New dataset and methods. In Pro- ceedings of the 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 515-525, Denver, Colorado. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Poincar\\'e embeddings for learning hierarchical representations",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.08039"
]
},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel and Douwe Kiela. 2017. Poincar\\'e embeddings for learning hierarchical representations. arXiv preprint arXiv:1705.08039.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A three-way model for collective learning on multi-relational data",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Tresp",
"suffix": ""
},
{
"first": "Hans-Peter",
"middle": [],
"last": "Kriegel",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Disgenet: a comprehensive platform integrating information on human diseaseassociated genes and variants. Nucleic acids research",
"authors": [
{
"first": "Janet",
"middle": [],
"last": "Pi\u00f1ero",
"suffix": ""
},
{
"first": "\u00c0lex",
"middle": [],
"last": "Bravo",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Queralt-Rosinach",
"suffix": ""
},
{
"first": "Alba",
"middle": [],
"last": "Guti\u00e9rrez-Sacrist\u00e1n",
"suffix": ""
},
{
"first": "Jordi",
"middle": [],
"last": "Deu-Pons",
"suffix": ""
},
{
"first": "Emilio",
"middle": [],
"last": "Centeno",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Garc\u00eda-Garc\u00eda",
"suffix": ""
},
{
"first": "Ferran",
"middle": [],
"last": "Sanz",
"suffix": ""
},
{
"first": "Laura",
"middle": [
"I"
],
"last": "Furlong",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "45",
"issue": "",
"pages": "833--839",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janet Pi\u00f1ero,\u00c0lex Bravo, N\u00faria Queralt-Rosinach, Alba Guti\u00e9rrez-Sacrist\u00e1n, Jordi Deu-Pons, Emilio Centeno, Javier Garc\u00eda-Garc\u00eda, Ferran Sanz, and Laura I Furlong. 2017. Disgenet: a comprehensive platform integrating information on human disease- associated genes and variants. Nucleic acids re- search, 45(D1):D833-D839.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Label noise reduction in entity typing by heterogeneous partial-label embedding",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Wenqi",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Clare",
"middle": [
"R"
],
"last": "Voss",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1825--1834",
"other_ids": {
"DOI": [
"10.1145/2939672.2939822"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, and Jiawei Han. 2016. Label noise reduction in entity typing by heterogeneous partial-label embed- ding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1825-1834.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Building knowledge bases with universal schema: Cold start and slot-filling approaches",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Monath",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Belanger",
"suffix": ""
},
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Roth, Nicholas Monath, David Belanger, Emma Strubell, Patrick Verga, and Andrew McCal- lum. 2015. Building knowledge bases with universal schema: Cold start and slot-filling approaches.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Classifying relations by ranking with convolutional neural networks",
"authors": [
{
"first": "C\u00edcero",
"middle": [],
"last": "Nogueira Dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00edcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of the Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing ACL.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Neural architectures for fine-grained entity type classification",
"authors": [
{
"first": "Sonse",
"middle": [],
"last": "Shimaoka",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1271--1280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017. Neural architectures for fine-grained entity type classification. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1271-1280, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A cross-lingual dictionary for english wikipedia concepts",
"authors": [
{
"first": "Valentin",
"middle": [
"I"
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Angel",
"middle": [
"X"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I Spitkovsky and Angel X Chang. 2012. A cross-lingual dictionary for english wikipedia con- cepts.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdi- nov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Yago: a core of semantic knowledge",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowl- edge. In Proceedings of the International Confer- ence on World Wide Web (WWW).",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Multi-instance multi-label learning for relation extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Pro- ceedings of the 2012 joint conference on empirical methods in natural language processing and compu- tational natural language learning, pages 455-465. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Introduction to the conll-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik F Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003",
"volume": "4",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142-147. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Complex embeddings for simple link prediction",
"authors": [
{
"first": "Th\u00e9o",
"middle": [],
"last": "Trouillon",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Bouchard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Th\u00e9o Trouillon, Johannes Welbl, Sebastian Riedel,\u00c9ric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceed- ings of the International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Advances in Neural Information Processing (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Conference on Advances in Neural In- formation Processing (NIPS).",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Order-embeddings of images and language",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vendrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language. ICLR.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Generalizing to unseen entities and entity pairs with row-less universal schema",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mc-Callum",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "1",
"issue": "",
"pages": "613--622",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Verga, Arvind Neelakantan, and Andrew Mc- Callum. 2017. Generalizing to unseen entities and entity pairs with row-less universal schema. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers, pages 613-622, Valencia, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Probabilistic embedding of knowledge graphs with box lattice measures",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikhar",
"middle": [],
"last": "Murty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2018,
"venue": "The 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Vilnis, Xiang Li, Shikhar Murty, and An- drew McCallum. 2018. Probabilistic embedding of knowledge graphs with box lattice measures. In The 56th Annual Meeting of the Association for Compu- tational Linguistics (ACL), Melbourne, Australia.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Word representations via gaussian embedding. ICLR",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Vilnis and Andrew McCallum. 2015. Word rep- resentations via gaussian embedding. ICLR.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Gnormplus: an integrative approach for tagging genes, gene families, and protein domains",
"authors": [
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hung-Yu",
"middle": [],
"last": "Kao",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Hsuan Wei, Hung-Yu Kao, and Zhiyong Lu. 2015. Gnormplus: an integrative approach for tag- ging genes, gene families, and protein domains. BioMed research international, 2015.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Constructing datasets for multi-hop reading comprehension across documents",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.06481"
]
},
"num": null,
"urls": [],
"raw_text": "Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2017. Constructing datasets for multi-hop reading comprehension across documents. arXiv preprint arXiv:1710.06481.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Hierarchical loss for classification",
"authors": [
{
"first": "Cinna",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Tygert",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cinna Wu, Mark Tygert, and Yann LeCun. 2017. Hierarchical loss for classification. CoRR, abs/1709.01062.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Representation learning of knowledge graphs with hierarchical types",
"authors": [
{
"first": "Ruobing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "2965--2971",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2016. Representation learning of knowledge graphs with hierarchical types. In IJCAI, pages 2965-2971.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Corpus-level fine-grained entity typing",
"authors": [
{
"first": "Yadollah",
"middle": [],
"last": "Yaghoobzadeh",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.02275"
]
},
"num": null,
"urls": [],
"raw_text": "Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Sch\u00fctze. 2017a. Corpus-level fine-grained entity typing. arXiv preprint arXiv:1708.02275.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Noise mitigation for neural entity typing and relation extraction",
"authors": [
{
"first": "Yadollah",
"middle": [],
"last": "Yaghoobzadeh",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1183--1194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Sch\u00fctze. 2017b. Noise mitigation for neural entity typing and relation extraction. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1183-1194, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Universal schema for entity type prediction",
"authors": [
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 workshop on Automated knowledge base construction",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limin Yao, Sebastian Riedel, and Andrew McCallum. 2013. Universal schema for entity type prediction. In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 79-84. ACM.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Hyena: Hierarchical type classification for entity names",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Amir Yosef",
"suffix": ""
},
{
"first": "Sandro",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Hoffart",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Spaniol",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. Hyena: Hierarchical type classification for entity names. In Proceedings of the International Conference on Computational Linguistics (COLING).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Sentence encoder for all our models. The input to the CNN is the concatenation of position embeddings and word embeddings. The CNN output is concatenated with the mean of the mention surface-form embeddings, then passed through a 2-layer MLP.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "\u03c6(m, e) = \u03b1 e\u22a4 m_F + \u03b2 csim(m, e); \u00ea = argmax_{e \u2208 e_c} \u03c6(m, e). We optimize this model by multinomial cross entropy over the set of candidate entities e_c and the correct entity e: L_link(m, e_c) = \u2212\u03c6(m, e) + log \u03a3_{e' \u2208 e_c} exp \u03c6(m, e').",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"content": "<table><tr><td>Statistic</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>#Abstracts</td><td>2,964</td><td>370</td><td>370</td></tr><tr><td>#Sentences</td><td>28,457</td><td>3,497</td><td>3,268</td></tr><tr><td>#Mentions</td><td>199,977</td><td>24,026</td><td>22,141</td></tr><tr><td>#Entities</td><td>22,416</td><td>5,934</td><td>5,521</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Statistics from various biological entity linking data sets from scientific articles. NCBI Disease (Dogan et al., 2014) focuses exclusively on disease entities. BCV-CDR (Li et al., 2016) contains both chemicals and diseases. BCII-GN and NLM (Wei et al., 2015) both contain genes.",
"num": null
},
"TABREF2": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "MedMentions statistics.",
"num": null
},
"TABREF4": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Statistics from various type sets. TypeNet is the largest type hierarchy with a gold mapping to KB entities. *The entire WordNet could be added to TypeNet, increasing the total size to 17k types.",
"num": null
},
"TABREF6": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Stats for the final TypeNet dataset. Child-of, parent-of, and equivalence links are from Freebase types \u2192 WordNet synsets.",
"num": null
},
"TABREF8": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Accuracy and Macro/Micro F1 on FIGER (GOLD). \u2020 denotes an LSTM model; \u2021 denotes an attentive LSTM with additional hand-crafted features.",
"num": null
},
"TABREF10": {
"content": "<table><tr><td>Model</td><td colspan=\"2\">original normalized</td></tr><tr><td>mention tfidf</td><td>61.09</td><td>74.66</td></tr><tr><td>CNN</td><td>67.42</td><td>82.40</td></tr><tr><td>+ hierarchy</td><td>67.73</td><td>82.77</td></tr><tr><td>CNN+Complex</td><td>67.23</td><td>82.17</td></tr><tr><td>+ hierarchy</td><td>68.34</td><td>83.52</td></tr></table>",
"html": null,
"type_str": "table",
"text": "MAP of entity-level typing in Wikipedia data using TypeNet. The second column shows results using 5% of the total data. The last column shows results using the full set of 344,246 entities.",
"num": null
},
"TABREF11": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Accuracy on entity linking in MedMentions. Maximum recall is 81.82% because we use an imperfect alias table to generate candidates.",
"num": null
},
"TABREF12": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "(1) Tips and Pitfalls in Direct Ligation of Large Spontaneous Splenorenal Shunt during Liver Transplantation: Patients with large spontaneous splenorenal shunt . . . baseline: Direct [Direct \u2192 General Modifier \u2192 Qualifier \u2192 Property or Attribute]; +hierarchy: Ligature (correct) [Ligature \u2192 Surgical Procedures \u2192 medical treatment approach]. (2) A novel approach for selective chemical functionalization and localized assembly of one-dimensional nanostructures: baseline: Structure [Structure \u2192 order or structure \u2192 general epistemology]; +hierarchy: Nanomaterials (correct) [Nanomaterials \u2192 Nanoparticle Complex \u2192 Drug or Chemical by Structure]. (3) Gcn5 is recruited onto the il-2 promoter by interacting with the NFAT in T cells upon TCR stimulation: baseline: Interleukin-27 [Interleukin-27 \u2192 IL2 \u2192 Interleukin Gene]; +hierarchy: IL2 Gene (correct) [IL2 Gene \u2192 Interleukin Gene]",
"num": null
}
}
}
}