{
"paper_id": "P19-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:25:48.414607Z"
},
"title": "Neural Relation Extraction for Knowledge Base Enrichment",
"authors": [
{
"first": "Bayu",
"middle": [],
"last": "Distiawan Trisedya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {
"country": "Australia"
}
},
"email": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": "",
"affiliation": {},
"email": "weikum@mpi-inf.mpg.de"
},
{
"first": "Jianzhong",
"middle": [],
"last": "Qi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {
"country": "Australia"
}
},
"email": "jianzhong.qi@"
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {
"country": "Australia"
}
},
"email": "rui.zhang@unimelb.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study relation extraction for knowledge base (KB) enrichment. Specifically, we aim to extract entities and their relationships from sentences in the form of triples and map the elements of the extracted triples to an existing KB in an end-to-end manner. Previous studies focus on the extraction itself and rely on Named Entity Disambiguation (NED) to map triples into the KB space. This way, NED errors may cause extraction errors that affect the overall precision and recall. To address this problem, we propose an end-to-end relation extraction model for KB enrichment based on a neural encoder-decoder model. We collect high-quality training data by distant supervision with co-reference resolution and paraphrase detection. We propose an n-gram based attention model that captures multi-word entity names in a sentence. Our model employs jointly learned word and entity embeddings to support named entity disambiguation. Finally, our model uses a modified beam search and a triple classifier to help generate high-quality triples. Our model outperforms state-of-theart baselines by 15.51% and 8.38% in terms of F1 score on two real-world datasets.",
"pdf_parse": {
"paper_id": "P19-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "We study relation extraction for knowledge base (KB) enrichment. Specifically, we aim to extract entities and their relationships from sentences in the form of triples and map the elements of the extracted triples to an existing KB in an end-to-end manner. Previous studies focus on the extraction itself and rely on Named Entity Disambiguation (NED) to map triples into the KB space. This way, NED errors may cause extraction errors that affect the overall precision and recall. To address this problem, we propose an end-to-end relation extraction model for KB enrichment based on a neural encoder-decoder model. We collect high-quality training data by distant supervision with co-reference resolution and paraphrase detection. We propose an n-gram based attention model that captures multi-word entity names in a sentence. Our model employs jointly learned word and entity embeddings to support named entity disambiguation. Finally, our model uses a modified beam search and a triple classifier to help generate high-quality triples. Our model outperforms state-of-theart baselines by 15.51% and 8.38% in terms of F1 score on two real-world datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Knowledge bases (KBs), often in the form of knowledge graphs (KGs), have become essential resources in many tasks including Q&A systems, recommender system, and natural language generation. Large KBs such as DBpedia (Auer et al., 2007) , Wikidata (Vrandecic and Kr\u00f6tzsch, 2014) and Yago (Suchanek et al., 2007) contain millions of facts about entities, which are represented in the form of subject-predicate-object triples. However, these KBs are far from complete and mandate continuous enrichment and curation. Previous studies work on embedding-based model (Nguyen et al., 2018; and entity alignment model (Chen et al., 2017; Trisedya et al., 2019) to enrich a knowledge base. Following the success of the sequence-to-sequence architecture (Bahdanau et al., 2015) for generating sentences from structured data (Marcheggiani and Perez-Beltrachini, 2018; Trisedya et al., 2018) , we employ this architecture to do the opposite, which is extracting triples from a sentence.",
"cite_spans": [
{
"start": 216,
"end": 235,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 247,
"end": 277,
"text": "(Vrandecic and Kr\u00f6tzsch, 2014)",
"ref_id": "BIBREF51"
},
{
"start": 287,
"end": 310,
"text": "(Suchanek et al., 2007)",
"ref_id": "BIBREF44"
},
{
"start": 560,
"end": 581,
"text": "(Nguyen et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 609,
"end": 628,
"text": "(Chen et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 629,
"end": 651,
"text": "Trisedya et al., 2019)",
"ref_id": "BIBREF48"
},
{
"start": 743,
"end": 766,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 813,
"end": 855,
"text": "(Marcheggiani and Perez-Beltrachini, 2018;",
"ref_id": "BIBREF27"
},
{
"start": 856,
"end": 878,
"text": "Trisedya et al., 2018)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we study how to enrich a KB by relation exaction from textual sources. Specifically, we aim to extract triples in the form of h, r, t , where h is a head entity, t is a tail entity, and r is a relationship between the entities. Importantly, as KBs typically have much better coverage on entities than on relationships, we assume that h and t are existing entities in a KB, r is a predicate that falls in a predefined set of predicates we are interested in, but the relationship h, r, t does not exist in the KB yet. We aim to find more relationships between h and t and add them to the KB. For example, from the first extracted triples in Table 1 we may recognize two entities \"NYU\" (abbreviation of New York University) and \"Private University\", which already exist in the KB; also the predicate \"instance of\" is in the set of predefined predicates we are interested in, but the relationship of NYU, instance of, Private University does not exist in the KB. We aim to add this relationship to our KB. This is the typical situation for KB enrichment (as opposed to constructing a KB from scratch or performing relation extraction for other purposes, such as Q&A or summarization).",
"cite_spans": [],
"ref_spans": [
{
"start": 654,
"end": 661,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "KB enrichment mandates that the entities and relationships of the extracted triples are canonicalized by mapping them to their proper entity and predicate IDs in a KB. Table 1 illustrates an example of triples extracted from a sentence. The entities and predicate of the first extracted triple, including NYU, instance of, and Private University, are mapped to their unique IDs Q49210, P31, and Q902104, respectively, to comply with the semantic space of the KB.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 175,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous studies on relation extraction have employed both unsupervised and supervised approaches. Unsupervised approaches typically start with a small set of manually defined extraction patterns to detect entity names and phrases about relationships in an input text. This paradigm is known as Open Information Extraction (Open IE) (Banko et al., 2007; Corro and Gemulla, 2013; Gashteovski et al., 2017) . In this line of approaches, both entities and predicates are captured in their surface forms without canonicalization. Supervised approaches train statistical and neural models for inferring the relationship between two known entities in a sentence (Mintz et al., 2009; Riedel et al., 2010 Riedel et al., , 2013 Zeng et al., 2015; Lin et al., 2016) . Most of these studies employ a preprocessing step to recognize the entities. Only few studies have fully integrated the mapping of extracted triples onto uniquely identified KB entities by using logical reasoning on the existing KB to disambiguate the extracted entities (e.g., (Suchanek et al., 2009; Sa et al., 2017) ).",
"cite_spans": [
{
"start": 333,
"end": 353,
"text": "(Banko et al., 2007;",
"ref_id": "BIBREF3"
},
{
"start": 354,
"end": 378,
"text": "Corro and Gemulla, 2013;",
"ref_id": "BIBREF11"
},
{
"start": 379,
"end": 404,
"text": "Gashteovski et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 656,
"end": 676,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF32"
},
{
"start": 677,
"end": 696,
"text": "Riedel et al., 2010",
"ref_id": "BIBREF37"
},
{
"start": 697,
"end": 718,
"text": "Riedel et al., , 2013",
"ref_id": "BIBREF38"
},
{
"start": 719,
"end": 737,
"text": "Zeng et al., 2015;",
"ref_id": "BIBREF54"
},
{
"start": 738,
"end": 755,
"text": "Lin et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 1036,
"end": 1059,
"text": "(Suchanek et al., 2009;",
"ref_id": "BIBREF45"
},
{
"start": 1060,
"end": 1076,
"text": "Sa et al., 2017)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Named Entity Disambiguation (NED) (cf. the survey by Shen et al. (2015) ) as a separate processing step. In addition, the mapping of relationship phrases onto KB predicates necessitates another mapping step, typically aided by paraphrase dictionaries. This two-stage architecture is inherently prone to error propagation across its two stages: NED errors may cause extraction errors (and vice versa) that lead to inaccurate relationships being added to the KB.",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "Shen et al. (2015)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Most existing methods thus entail the need for",
"sec_num": null
},
{
"text": "We aim to integrate the extraction and the canonicalization tasks by proposing an endto-end neural learning model to jointly extract triples from sentences and map them into an existing KB. Our method is based on the encoder-decoder framework (Cho et al., 2014) by treating the task as a translation of a sentence into a sequence of elements of triples. For the example in Table 1 , our model aims to translate \"New York University is a private university in Manhattan\" into a sequence of IDs \"Q49210 P31 Q902104 Q49210 P131 Q11299\", from which we can derive two triples to be added to the KB.",
"cite_spans": [
{
"start": 243,
"end": 261,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 373,
"end": 380,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Most existing methods thus entail the need for",
"sec_num": null
},
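Since the decoder emits a flat sequence of KB IDs in which every three consecutive IDs form one triple, the mapping back to triples is mechanical. A minimal sketch in Python (the function name is ours, not from the paper):

```python
def ids_to_triples(id_sequence):
    """Fold a flat decoder output sequence into (head, predicate, tail)
    triples: every three consecutive IDs form one triple."""
    if len(id_sequence) % 3 != 0:
        raise ValueError("ID sequence length must be a multiple of 3")
    return [tuple(id_sequence[i:i + 3]) for i in range(0, len(id_sequence), 3)]

# The example sentence above yields two triples for New York University.
triples = ids_to_triples(["Q49210", "P31", "Q902104", "Q49210", "P131", "Q11299"])
```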
{
"text": "A standard encoder-decoder model with attention (Bahdanau et al., 2015) is, however, unable to capture the multi-word entity names and verbal or noun phrases that denote predicates. To address this problem, we propose a novel form of n-gram based attention that computes the ngram combination of attention weight to capture the verbal or noun phrase context that complements the word level attention of the standard attention model. Our model thus can better capture the multi-word context of entities and relationships. Our model harnesses pre-trained word and entity embeddings that are jointly learned with skip gram (Mikolov et al., 2013) and TransE (Bordes et al., 2013) . The advantages of our jointly learned embeddings are twofold. First, the embeddings capture the relationship between words and entities, which is essential for named entity disambiguation. Second, the entity embeddings preserve the relationships between entities, which help to build a highly accurate classifier to filter the invalid extracted triples. To cope with the lack of fully labeled training data, we adapt distant supervision to generate aligned pairs of sentence and triple as the training data. We augment the process with co-reference resolution (Clark and Manning, 2016) and dictionary-based paraphrase detection (Ganitkevitch et al., 2013; Grycner and Weikum, 2016) . The co-reference resolution helps extract sentences with implicit entity names, which enlarges the set of candidate sentences to be aligned with existing triples in a KB. The paraphrase detection helps filter sentences that do not express any relationships between entities.",
"cite_spans": [
{
"start": 48,
"end": 71,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 620,
"end": 642,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF30"
},
{
"start": 654,
"end": 675,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 1238,
"end": 1263,
"text": "(Clark and Manning, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 1306,
"end": 1333,
"text": "(Ganitkevitch et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 1334,
"end": 1359,
"text": "Grycner and Weikum, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Most existing methods thus entail the need for",
"sec_num": null
},
{
"text": "The main contributions of this paper are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Most existing methods thus entail the need for",
"sec_num": null
},
{
"text": "\u2022 We propose an end-to-end model for extract-ing and canonicalizing triples to enrich a KB. The model reduces error propagation between relation extraction and NED, which existing approaches are prone to. \u2022 We propose an n-gram based attention model to effectively map the multi-word mentions of entities and their relationships into uniquely identified entities and predicates. We propose joint learning of word and entity embeddings to capture the relationship between words and entities for named entity disambiguation. We further propose a modified beam search and a triple classifier to generate high-quality triples. \u2022 We evaluate the proposed model over two real-world datasets. We adapt distant supervision with co-reference resolution and paraphrase detection to obtain high-quality training data. The experimental results show that our model consistently outperforms a strong baseline for neural relation extraction (Lin et al., 2016) coupled with state-of-the-art NED models (Hoffart et al., 2011; Kolitsas et al., 2018) .",
"cite_spans": [
{
"start": 926,
"end": 944,
"text": "(Lin et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 986,
"end": 1008,
"text": "(Hoffart et al., 2011;",
"ref_id": "BIBREF20"
},
{
"start": 1009,
"end": 1031,
"text": "Kolitsas et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Most existing methods thus entail the need for",
"sec_num": null
},
{
"text": "2 Related Work 2.1 Open Information Extraction Banko et al. (2007) introduced the paradigm of Open Information Extraction (Open IE) and proposed a pipeline that consists of three stages: learner, extractor, and assessor. The learner uses dependency-parsing information to learn patterns for extraction, in an unsupervised way. The extractor generates candidate triples by identifying noun phrases as arguments and connecting phrases as predicates. The assessor assigns a probability to each candidate triple based on statistical evidence. This approach was prone to extracting incorrect, verbose and uninformative triples. Various followup studies (Fader et al., 2011; Mausam et al., 2012; Angeli et al., 2015; Mausam, 2016) improved the accuracy of Open IE, by adding handcrafted patterns or by using distant supervision. Corro and Gemulla (2013) developed ClausIE, a method that analyzes the clauses in a sentence and derives triples from this structure. Gashteovski et al. (2017) developed MinIE to advance ClausIE by making the resulting triples more concise. Stanovsky et al. (2018) proposed a supervised learner for Open IE by casting relation extraction into sequence tagging. A bi-LSTM model is trained to predict the label (entity, predicate, or other) of each token of the input. The work most related to ours is Neural Open IE (Cui et al., 2018) , which proposed an encoder-decoder with attention model to extract triples. However, this work is not geared for extracting relations of canonicalized entities. Another line of studies use neural learning for semantic role labeling (He et al., 2018) , but the goal here is to recognize the predicate-argument structure of a single input sentence -as opposed to extracting relations from a corpus.",
"cite_spans": [
{
"start": 47,
"end": 66,
"text": "Banko et al. (2007)",
"ref_id": "BIBREF3"
},
{
"start": 648,
"end": 668,
"text": "(Fader et al., 2011;",
"ref_id": "BIBREF13"
},
{
"start": 669,
"end": 689,
"text": "Mausam et al., 2012;",
"ref_id": "BIBREF29"
},
{
"start": 690,
"end": 710,
"text": "Angeli et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 711,
"end": 724,
"text": "Mausam, 2016)",
"ref_id": "BIBREF28"
},
{
"start": 957,
"end": 982,
"text": "Gashteovski et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 1064,
"end": 1087,
"text": "Stanovsky et al. (2018)",
"ref_id": "BIBREF43"
},
{
"start": 1338,
"end": 1356,
"text": "(Cui et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 1590,
"end": 1607,
"text": "(He et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Most existing methods thus entail the need for",
"sec_num": null
},
{
"text": "All of these methods generate triples where the head and tail entities and the predicate stay in their surface forms. Therefore, different names and phrases for the same entities result in multiple triples, which would pollute the KG if added this way. The only means to map triples to uniquely identified entities in a KG is by post-processing via entity linking (NED) methods (Shen et al., 2015) or by clustering with subsequent mapping (Gal\u00e1rraga et al., 2014) .",
"cite_spans": [
{
"start": 378,
"end": 397,
"text": "(Shen et al., 2015)",
"ref_id": "BIBREF40"
},
{
"start": 439,
"end": 463,
"text": "(Gal\u00e1rraga et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Most existing methods thus entail the need for",
"sec_num": null
},
{
"text": "Inspired by the work of Brin (1998), state-of-theart methods employ distant supervision by leveraging seed facts from an existing KG (Mintz et al., 2009; Suchanek et al., 2009; Carlson et al., 2010) . These methods learn extraction patterns from seed facts, apply the patterns to extract new fact candidates, iterate this principle, and finally use statistical inference (e.g., a classifier) for reducing the false positive rate. Some of these methods hinge on the assumption that the co-occurrence of a seed fact's entities in the same sentence is an indicator of expressing a semantic relationship between the entities. This is a potential source of wrong labeling. Follow-up studies (Hoffmann et al., 2010; Riedel et al., 2010 Riedel et al., , 2013 Surdeanu et al., 2012) overcome this limitation by various means, including the use of relation-specific lexicons and latent factor models. Still, these methods treat entities by their surface forms and disregard their mapping to existing entities in the KG. Suchanek et al. (2009) and Sa et al. (2017) used probabilistic-logical inference to eliminate false positives, based on constraint solving or Monte Carlo sampling over probabilistic graphical models, respectively. These methods integrate entity linking (i.e., NED) into their models. However, both have high computational complexity and rely on modeling constraints and appropriate priors.",
"cite_spans": [
{
"start": 133,
"end": 153,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF32"
},
{
"start": 154,
"end": 176,
"text": "Suchanek et al., 2009;",
"ref_id": "BIBREF45"
},
{
"start": 177,
"end": 198,
"text": "Carlson et al., 2010)",
"ref_id": null
},
{
"start": 686,
"end": 709,
"text": "(Hoffmann et al., 2010;",
"ref_id": "BIBREF21"
},
{
"start": 710,
"end": 729,
"text": "Riedel et al., 2010",
"ref_id": "BIBREF37"
},
{
"start": 730,
"end": 751,
"text": "Riedel et al., , 2013",
"ref_id": "BIBREF38"
},
{
"start": 752,
"end": 774,
"text": "Surdeanu et al., 2012)",
"ref_id": "BIBREF47"
},
{
"start": 1011,
"end": 1033,
"text": "Suchanek et al. (2009)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-aware Relation Extraction",
"sec_num": "2.2"
},
{
"text": "Recent studies employ neural networks to learn the extraction of triples. Nguyen and Grish-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-aware Relation Extraction",
"sec_num": "2.2"
},
{
"text": "Joint None of these neural models is geared for KG enrichment, as the canonicalization of entities is out of their scope.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia article",
"sec_num": null
},
{
"text": "learning skip-gram & TransE Word Embeddings 0.2 0.4 0.1 0.2 0.1 0.1 0.5 0.1 0.1 0.2 0.2 0.3 0.3 0.3 0.2 0.4 0.2 0.2 0.1 0.1 Entity Embeddings 0.1 0.5 0.1 0.4 0.2 0.1 0.5 0.1 0.5 0.1 0.2 0.3 0.3 0.3 0.3 0.2 0.3 0.3 0.3 0.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia article",
"sec_num": null
},
{
"text": "We start with the problem definition. Let G = (E, R) be an existing KG where E and R are the sets of entities and relationships (predicates) in G, respectively. We consider a sentence S = w 1 , w 2 , ..., w i as the input, where w i is a token at position i in the sentence. We aim to extract a set of triples O = {o 1 , o 2 , ..., o j } from the sentence, where o j = h j , r j , t j , h j , t j \u2208 E, and r j \u2208 R. Table 1 illustrates the input and target output of our problem. Figure 1 illustrates the overall solution framework. Our framework consists of three components: data collection module, embedding module, and neural relation extraction module.",
"cite_spans": [],
"ref_spans": [
{
"start": 415,
"end": 422,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 479,
"end": 487,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Proposed Model",
"sec_num": "3"
},
{
"text": "In the data collection module (detailed in Section 3.2), we align known triples in an existing KB with sentences that contain such triples from a text corpus. The aligned pairs of sentences and triples will later be used as the training data in our neural relation extraction module. This alignment is done by distant supervision. To obtain a large number of high-quality alignments, we augment the process with a co-reference resolution to extract sentences with implicit entity names, which enlarges the set of candidate sentences to be aligned. We further use dictionary based paraphrase detection to filter sentences that do not express any relationships between entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Framework",
"sec_num": "3.1"
},
{
"text": "In the embedding module (detailed in Section 3.3), we propose a joint learning of word and entity embeddings by combining skip-gram (Mikolov et al., 2013) to compute the word embeddings and TransE (Bordes et al., 2013) to compute the entity embeddings. The objective of the joint learning is to capture the similarity of words and entities that helps map the entity names into the related entity IDs. Moreover, the resulting entity embeddings are used to train a triple classifier that helps filter invalid triples generated by our neural relation extraction model.",
"cite_spans": [
{
"start": 132,
"end": 154,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF30"
},
{
"start": 197,
"end": 218,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Framework",
"sec_num": "3.1"
},
{
"text": "In the neural relation extraction module (detailed in Section 3.4), we propose an n-gram based attention model by expanding the attention mechanism to the n-gram token of a sentence. The ngram attention computes the n-gram combination of attention weight to capture the verbal or noun phrase context that complements the word level attention of the standard attention model. This expansion helps our model to better capture the multi-word context of entities and relationships.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Framework",
"sec_num": "3.1"
},
{
"text": "The output of the encoder-decoder model is a sequence of the entity and predicate IDs where every three IDs indicate a triple. To generate highquality triples, we propose two strategies. The first strategy uses a modified beam search that computes the lexical similarity of the extracted entities with the surface form of entity names in the input sentence to ensure the correct entity prediction. The second strategy uses a triple classifier that is trained using the entity embeddings from the joint learning to filter the invalid triples. The triple generation process is detailed in Section 3.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Framework",
"sec_num": "3.1"
},
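The lexical-similarity re-ranking in the modified beam search can be approximated with a longest-common-substring score between each candidate entity's canonical name and the input sentence. A rough sketch, where `entity_names` and the scoring function are our illustrative simplifications, not the paper's implementation:

```python
from difflib import SequenceMatcher

def lexical_rerank(candidates, sentence, entity_names):
    """Order candidate entity IDs by how much of their canonical name
    literally appears in the sentence (longest common substring length)."""
    s = sentence.lower()
    def score(entity_id):
        name = entity_names.get(entity_id, "").lower()
        match = SequenceMatcher(None, name, s).find_longest_match(
            0, len(name), 0, len(s))
        return match.size
    return sorted(candidates, key=score, reverse=True)

# Toy entity-name table; the IDs follow the paper's running example.
names = {"Q49210": "New York University", "Q11299": "Manhattan"}
ranked = lexical_rerank(
    ["Q11299", "Q49210"],
    "New York University is a private university in Manhattan",
    names)
```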
{
"text": "We aim to extract triples from a sentence for KB enrichment by proposing a supervised relation extraction model. To train such a model, we need a large volume of fully labeled training data in the form of sentence-triple pairs. Following Sorokin and Gurevych (2017), we use distant supervision (Mintz et al., 2009) to align sentences in Wikipedia 1 with triples in Wikidata 2 (Vrandecic and Kr\u00f6tzsch, 2014) .",
"cite_spans": [
{
"start": 294,
"end": 314,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF32"
},
{
"start": 376,
"end": 406,
"text": "(Vrandecic and Kr\u00f6tzsch, 2014)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Collection",
"sec_num": "3.2"
},
{
"text": "We map an entity mention in a sentence to the corresponding entity entry (i.e., Wikidata ID) in Wikidata via the hyperlink associated to the entity mention, which is recorded in Wikidata as the url property of the entity entry. Each pair may contain one sentence and multiple triples. We sort the order of the triples based on the order of the predicate paraphrases that indicate the relationships between entities in the sentence. We collect sentence-triple pairs by extracting sentences that contain both head and tail entities of Wikidata triples. To generate high-quality sentence-triple pairs, we propose two additional steps: (1) extracting sentences that contain implicit entity names using co-reference resolution, and (2) filtering sentences that do not express any relationships using paraphrase detection. We detail these steps below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Collection",
"sec_num": "3.2"
},
{
"text": "Prior to aligning the sentences with triples, in Step (1), we find the implicit entity names to increase the number of candidate sentences to be aligned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Collection",
"sec_num": "3.2"
},
{
"text": "We apply co-reference resolution (Clark and Manning, 2016) to each paragraph in a Wikipedia article and replace the extracted co-references with the proper entity name.",
"cite_spans": [
{
"start": 33,
"end": 58,
"text": "(Clark and Manning, 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Collection",
"sec_num": "3.2"
},
{
"text": "We observe that the first sentence of a paragraph in a Wikipedia article may contain a pronoun that refers to the main entity. For example, there is a paragraph in the Barack Obama article that starts with a sentence \"He was reelected to the Illinois Senate in 1998\". This may cause the standard co-reference resolution to miss the implicit entity names for the rest of the paragraph. To address this problem, we heuristically replace the pronouns in the first sentence of a paragraph if the main entity name of the Wikipedia page is not mentioned. For the sentence in the previous example, we replace \"He\" with \"Barack Obama\". The intuition is that a Wikipedia article contains content of a single entity of interest, and that the pronouns mentioned in the first sentence of a paragraph mostly relate to the main entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Collection",
"sec_num": "3.2"
},
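The first-sentence heuristic above can be sketched as follows; the pronoun list and the exact-match check are simplifying assumptions, not the paper's implementation:

```python
PRONOUNS = ("He", "She", "It", "They")

def replace_leading_pronoun(first_sentence, main_entity):
    """If the article's main entity is not mentioned in a paragraph's
    first sentence, substitute a leading pronoun with the entity name."""
    if main_entity.lower() in first_sentence.lower():
        return first_sentence  # entity already mentioned explicitly
    for pronoun in PRONOUNS:
        if first_sentence.startswith(pronoun + " "):
            return main_entity + first_sentence[len(pronoun):]
    return first_sentence

fixed = replace_leading_pronoun(
    "He was reelected to the Illinois Senate in 1998", "Barack Obama")
```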
{
"text": "In Step (2), we use a dictionary based paraphrase detection to capture relationships between entities in a sentence. First, we create a dictionary by populating predicate paraphrases from three sources including PATTY (Nakashole et al., 2012) , POLY (Grycner and Weikum, 2016) , and PPDB (Ganitkevitch et al., 2013) ship \"place of birth\" are {born in, was born in, ...}. Then, we use this dictionary to filter sentences that do not express any relationships between entities. We use exact string matching to find verbal or noun phrases in a sentence which is a paraphrases of a predicate of a triple. For example, for the triple Barack Obama, place of birth, Honolulu , the sentence \"Barack Obama was born in 1961 in Honolulu, Hawaii\" will be retained while the sentence \"Barack Obama visited Honolulu in 2010\" will be removed (the sentence may be retained if there is another valid triple Barack Obama, visited, Honolulu ). This helps filter noises for the sentence-triple alignment.",
"cite_spans": [
{
"start": 218,
"end": 242,
"text": "(Nakashole et al., 2012)",
"ref_id": "BIBREF34"
},
{
"start": 250,
"end": 276,
"text": "(Grycner and Weikum, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 288,
"end": 315,
"text": "(Ganitkevitch et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Collection",
"sec_num": "3.2"
},
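The filtering step can be sketched with a toy paraphrase dictionary standing in for PATTY/POLY/PPDB; the exact-string-matching policy follows the description above, everything else is illustrative:

```python
# Toy predicate-paraphrase dictionary (stand-in for PATTY, POLY, and PPDB).
PARAPHRASES = {"place of birth": ("born in", "was born in")}

def expresses_relation(sentence, predicate):
    """Retain a sentence only if it contains a known paraphrase of the
    triple's predicate (exact string matching, as described above)."""
    s = sentence.lower()
    return any(p in s for p in PARAPHRASES.get(predicate, ()))

keep = expresses_relation(
    "Barack Obama was born in 1961 in Honolulu, Hawaii", "place of birth")
drop = expresses_relation(
    "Barack Obama visited Honolulu in 2010", "place of birth")
```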
{
"text": "The collected dataset contains 255,654 sentence-triple pairs. For each pair, the maximum number of triples is four (i.e., a sentence can produce at most four triples). We split the dataset into train set (80%), dev set (10%) and test set (10%) (we call it the WIKI test dataset). For stress testing (to test the proposed model on a different style of text than the training data), we also collect another test dataset outside Wikipedia. We apply the same procedure to the user reviews of a travel website. First, we collect user reviews on 100 popular landmarks in Australia. Then, we apply the adapted distant supervision to the reviews and collect 1,000 sentence-triple pairs (we call it the GEO test dataset). Table 2 summarizes the statistics of our datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 713,
"end": 720,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset Collection",
"sec_num": "3.2"
},
{
"text": "Our relation extraction model is based on the encoder-decoder framework which has been widely used in Neural Machine Translation to translate text from one language to another. In our setup, we aim to translate a sentence into triples, and hence the vocabulary of the source input is a set of English words while the vocabulary of the target output is a set of entity and predicate IDs in an existing KG. To compute the embeddings of the source and target vocabularies, we propose a joint learning of word and entity embeddings that is effective to capture the similarity between words and entities for named entity disambiguation (Yamada et al., 2016) . Note that our method differs from that of Yamada et al. (2016) . We use joint learning by combining skip-gram (Mikolov et al., 2013) to compute the word embeddings and TransE (Bordes et al., 2013) to compute the entity embeddings (including the relationship embeddings), while Yamada et al. (2016) use Wikipedia Link-based Measure (WLM) (Milne and Witten, 2008) that does not consider the relationship embeddings.",
"cite_spans": [
{
"start": 631,
"end": 652,
"text": "(Yamada et al., 2016)",
"ref_id": "BIBREF53"
},
{
"start": 697,
"end": 717,
"text": "Yamada et al. (2016)",
"ref_id": "BIBREF53"
},
{
"start": 765,
"end": 787,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF30"
},
{
"start": 830,
"end": 851,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 932,
"end": 952,
"text": "Yamada et al. (2016)",
"ref_id": "BIBREF53"
},
{
"start": 992,
"end": 1016,
"text": "(Milne and Witten, 2008)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},
{
"text": "Our model learns the entity embeddings by minimizing a margin-based objective function J E :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},
{
"text": "JE = tr \u2208Tr t r \u2208T r max 0, \u03b3 + f (tr) \u2212 f (t r )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},
{
"text": "Tr = { h, r, t | h, r, t \u2208 G} (2) Tr = h , r, t | h \u2208 E \u222a h, r, t | t \u2208 E (3) f (tr) = h + r \u2212 t (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},
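{
"text": "To make the objective concrete, the following is a minimal NumPy sketch of Eqs. 1 and 4 (an illustration with invented toy vectors, not the authors' implementation):

```python
import numpy as np

def f(h, r, t):
    # Plausibility score f(t_r) = ||h + r - t|| (Eq. 4), using the L1-norm;
    # valid triples should score low
    return np.abs(h + r - t).sum()

def margin_loss(valid, corrupted, gamma=1.0):
    # Margin-based objective J_E (Eq. 1): a valid triple should score
    # at least gamma lower than its corrupted counterpart
    return sum(max(0.0, gamma + f(*tr) - f(*tr_c))
               for tr, tr_c in zip(valid, corrupted))

h = np.array([0.1, 0.2]); r = np.array([0.3, 0.1]); t = np.array([0.4, 0.3])
t_bad = np.array([0.2, 0.0])  # corrupted tail: a random entity
loss = margin_loss([(h, r, t)], [(h, r, t_bad)])  # → 0.5
```

With these toy vectors the valid triple scores 0 and the corrupted one 0.5, so the hinge loss is max(0, 1.0 + 0 − 0.5) = 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},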
{
"text": "Here, x is the L1-Norm of vector x, \u03b3 is a margin hyperparameter, T r is the set of valid relationship triples from a KG G, and T r is the set of corrupted relationship triples (recall that E is the set of entities in G). The corrupted triples are used as negative samples, which are created by replacing the head or tail entity of a valid triple in T r with a random entity. We use all triples in Wikidata except those which belong to the testing data to compute the entity embeddings. To establish the interaction between the entity and word embeddings, we follow the Anchor Context Model proposed by Yamada et al. (2016) . First, we generate a text corpus by combining the original text and the modified anchor text of Wikipedia. This is done by replacing the entity names in a sentence with the related entity or predicate IDs. For example, the sentence \"New York University is a private university in Manhattan\" is modified into \"Q49210 is a Q902104 in Q11299\". Then, we use the skip-gram method to compute the word embeddings from the generated corpus (the entity IDs in the modified anchor text are treated as words in the skip-gram model). Given a sequence of n words [w 1 , w 2 , ..., w n ], The model learns the word embeddings, by minimizing the following objective function J W :",
"cite_spans": [
{
"start": 603,
"end": 623,
"text": "Yamada et al. (2016)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},
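{
"text": "The anchor-text modification above can be sketched as follows (a simplified illustration; the paper's pipeline operates on Wikipedia anchor links, while this toy version does plain string replacement on a name-to-ID dictionary):

```python
def anchorize(sentence, anchors):
    # Build the modified anchor text: replace entity mentions with their
    # KB IDs so skip-gram treats the IDs as ordinary tokens.
    # Replace longest names first to avoid partial overlaps.
    for name, kb_id in sorted(anchors.items(), key=lambda kv: -len(kv[0])):
        sentence = sentence.replace(name, kb_id)
    return sentence

anchors = {"New York University": "Q49210",
           "private university": "Q902104",
           "Manhattan": "Q11299"}
modified = anchorize(
    "New York University is a private university in Manhattan", anchors)
# → "Q49210 is a Q902104 in Q11299"
```

The resulting corpus mixes ordinary words and entity IDs, so the skip-gram model places both in the same embedding space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},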
{
"text": "J W = 1 T n t=1 \u2212c\u2264j\u2264c,j =0 log P (w t+j |w t ) (5) P (w t+j |w t ) = exp(v w t+j v wt ) W i=1 (v i v wt ) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},
{
"text": "where c is the size of the context window, w t denotes the target word, and w t+j is the context word; v w and v w are the input and output vector representations of word w, and W is the vocabulary size. The overall objective function of the joint learning of word and entity embeddings is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},
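{
"text": "The softmax in Eq. 6 can be sketched as follows (a toy NumPy illustration with random vectors; real skip-gram training would use negative sampling or hierarchical softmax rather than the full sum):

```python
import numpy as np

def context_prob(v_out, v_in_t, idx):
    # Softmax over the vocabulary (Eq. 6): P(w_{t+j} | w_t) from the
    # output vectors v' (rows of v_out) and the input vector of w_t
    scores = v_out @ v_in_t
    scores -= scores.max()          # shift for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[idx]

rng = np.random.default_rng(0)
v_out = rng.normal(size=(5, 4))    # toy vocabulary of W = 5 words, dim 4
v_in_t = rng.normal(size=4)        # input vector of the target word w_t
p = context_prob(v_out, v_in_t, 2)
```

Summing the returned probability over all five vocabulary indices gives 1, as required of a distribution over context words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},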
{
"text": "J = J E + J W (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning of Word and Entity Embeddings",
"sec_num": "3.3"
},
{
"text": "Our proposed relation extraction model integrates the extraction and canonicalization tasks for KB enrichment in an end-to-end manner. To build such a model, we employ an encoder-decoder model (Cho et al., 2014) to translate a sentence into a sequence of triples. The encoder encodes a sentence into a vector that is used by the decoder as a context to generate a sequence of triples. Because we treat the input and output as a sequence, We use the LSTM networks (Hochreiter and Schmidhuber, 1997) in the encoder and the decoder.",
"cite_spans": [
{
"start": 193,
"end": 211,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF9"
},
{
"start": 463,
"end": 497,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Based Attention Model",
"sec_num": "3.4"
},
{
"text": "The encoder-decoder with attention model (Bahdanau et al., 2015) has been used in machine translation. However, in the relation extraction task, the attention model cannot capture the multiword entity names. In our preliminary investigation, we found that the attention model yields misalignment between the word and the entity.",
"cite_spans": [
{
"start": 41,
"end": 64,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Based Attention Model",
"sec_num": "3.4"
},
{
"text": "The above problem is due to the same words in the names of different entities (e.g., the word University in different university names such as New York University, Washington University, etc.). During training, the model pays more attention to the word University to differentiate different types of entities of a similar name, e.g., New York University, New York Times Building, or New York Life Building, but not the same types of entities of different names (e.g., New York University and Washington University). This may cause errors in entity alignment, especially when predicting the ID of an entity that is not in the training data. Even though we add Entity-name, Entity-ID pairs as training data (see the Training section), the misalignments still take place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Based Attention Model",
"sec_num": "3.4"
},
{
"text": "We address the above problem by proposing an n-gram based attention model. This model computes the attention of all possible n-grams of the sentence input. The attention weights are computed over the n-gram combinations of the word embeddings, and hence the context vector for the decoder is computed as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Based Attention Model",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c d t = \uf8ee \uf8f0 h e ; |N | n=1 W n \uf8eb \uf8ed |X n | i=1 \u03b1 n i x n i \uf8f6 \uf8f8 \uf8f9 \uf8fb (8) \u03b1 n i = exp(h e V n x n i ) |X n | j=1 exp(h e V n x n j )",
"eq_num": "(9)"
}
],
"section": "N-gram Based Attention Model",
"sec_num": "3.4"
},
{
"text": "Here, c d t is the context vector of the decoder at timestep t, h e is the last hidden state of the encoder, the superscript n indicates the n-gram combination, x is the word embeddings of input sentence, |X n | is the total number of n-gram token combination, N indicates the maximum value of n used in the n-gram combinations (N = 3 in our experiments), W and V are learned parameter matrices, and \u03b1 is the attention weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Based Attention Model",
"sec_num": "3.4"
},
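{
"text": "Equations 8 and 9 can be sketched as follows (a toy NumPy illustration with random parameters; composing each n-gram embedding by averaging its word vectors is our simplifying assumption, as is the exact orientation of the bilinear score):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def ngram_context(h_e, words, W, V, N=3):
    # n-gram attention (Eqs. 8-9): for each n, attend over the embeddings
    # of all n-grams, then concatenate the encoder state h^e with the
    # attention-weighted sums
    parts = [h_e]
    for n in range(1, N + 1):
        # one row per n-gram; each n-gram embedding is the mean of
        # its word vectors (a simplification)
        X = np.array([words[i:i + n].mean(axis=0)
                      for i in range(len(words) - n + 1)])
        alpha = softmax(X @ V[n - 1] @ h_e)   # attention weights (Eq. 9)
        parts.append(W[n - 1] @ (alpha @ X))  # weighted sum (Eq. 8)
    return np.concatenate(parts)

d = 4
rng = np.random.default_rng(1)
words = rng.normal(size=(6, d))               # a toy 6-word sentence
W = [rng.normal(size=(d, d)) for _ in range(3)]
V = [rng.normal(size=(d, d)) for _ in range(3)]
c = ngram_context(rng.normal(size=d), words, W, V)
```

The context vector concatenates h^e with one attended summary per n-gram order, so its dimensionality here is (N + 1) · d = 16.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Based Attention Model",
"sec_num": "3.4"
},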
{
"text": "In the training phase, in addition to the sentencetriple pairs collected using distant supervision (see Section 3.2), we also add pairs of Entity-name, Entity-ID of all entities in the KB to the training data, e.g., New York University, Q49210 . This allows the model to learn the mapping between entity names and entity IDs, especially for the unseen entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": null
},
{
"text": "The output of the encoder-decoder model is a sequence of the entity and predicate IDs where every three tokens indicate a triple. Therefore, to extract a triple, we simply group every three tokens of the generated output. However, the greedy approach (i.e., picking the entity with the highest probability of the last softmax layer of the decoder) may lead the model to extract incorrect entities due to the similarity between entity embeddings (e.g., the embeddings of New York City and Chicago may be similar because both are cities in USA). To address this problem, we propose two strategies: re-ranking the predicted entities using a modified beam search and filtering invalid triples using a triple classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Triple Generation",
"sec_num": "3.5"
},
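{
"text": "The grouping step is straightforward (a minimal sketch; the decoded ID sequence below is an invented example, with P131 and P31 used as illustrative Wikidata predicate IDs):

```python
def group_triples(tokens):
    # Group the decoded ID sequence into (head, predicate, tail) triples:
    # every three consecutive tokens form one triple
    assert len(tokens) % 3 == 0, "output length must be a multiple of 3"
    return [tuple(tokens[i:i + 3]) for i in range(0, len(tokens), 3)]

out = ["Q49210", "P131", "Q11299", "Q49210", "P31", "Q902104"]
triples = group_triples(out)
# → [("Q49210", "P131", "Q11299"), ("Q49210", "P31", "Q902104")]
```

The two strategies below then re-rank the entities in each triple and filter out implausible triples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Triple Generation",
"sec_num": "3.5"
},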
{
"text": "The modified beam search re-ranks top-k (k = 10 in our experiments) entity IDs that are predicted by the decoder by computing the edit distance between the entity names (obtained from the KB) and every n-gram token of the input sentence. The intuition is that the entity name should be mentioned in the sentence so that the entity with the highest similarity will be chosen as the output. Our triple classifier is trained with entity embeddings from the joint learning (see Section 3.3). Triple classification is one of the metrics to evaluate the quality of entity embeddings (Socher et al., 2013) . We build a classifier to determine the validity of a triple h, r, t . We train a binary classifier based on the plausibility score (h + r \u2212 t) (the score to compute the entity embeddings). We create negative samples by corrupting the valid triples (i.e., replacing the head or tail entity by a random entity). The triple classifier is effective to filter invalid triple such as New York University, capital of, Manhattan .",
"cite_spans": [
{
"start": 577,
"end": 598,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Triple Generation",
"sec_num": "3.5"
},
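{
"text": "The re-ranking step can be sketched as follows (a simplified illustration: we substitute difflib's string-similarity ratio for the edit distance used above, and the second candidate ID and both names are illustrative):

```python
from difflib import SequenceMatcher

def rerank(candidates, entity_names, sentence, max_n=3):
    # Re-rank the decoder's top-k entity IDs: prefer the candidate whose
    # KB name best matches some n-gram of the input sentence
    tokens = sentence.split()
    ngrams = [" ".join(tokens[i:i + n])
              for n in range(1, max_n + 1)
              for i in range(len(tokens) - n + 1)]

    def best_sim(entity_id):
        name = entity_names[entity_id].lower()
        return max(SequenceMatcher(None, name, g.lower()).ratio()
                   for g in ngrams)

    return max(candidates, key=best_sim)

names = {"Q49210": "New York University",
         "Q1190080": "New York Times Building"}
top = rerank(["Q1190080", "Q49210"], names,
             "New York University is a private university in Manhattan")
# → "Q49210": its name exactly matches a 3-gram of the sentence
```

Even when the decoder ranks the wrong entity first, the surface-form match against the sentence recovers the intended entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Triple Generation",
"sec_num": "3.5"
},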
{
"text": "We evaluate our model on two real datasets including WIKI and GEO test datasets (see Section 3.2). We use precision, recall, and F1 score as the evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We use grid search to find the best hyperparameters for the networks. We use 512 hidden units for both the encoder and the decoder. We use 64 dimensions of pre-trained word and entity embeddings (see Section 3.3). We use a 0.5 dropout rate for regularization on both the encoder and the decoder. We use Adam (Kingma and Ba, 2015) with a learning rate of 0.0002.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": "4.1"
},
{
"text": "We compare our proposed model 3 with three existing models including CNN (the state-of-theart supervised approach by Lin et al. (2016) ), MiniE (the state-of-the-art unsupervised approach by Gashteovski et al. (2017) ), and ClausIE by Corro and Gemulla (2013). To map the extracted entities by these models, we use two state-of-theart NED systems including AIDA (Hoffart et al., 2011) and NeuralEL (Kolitsas et al., 2018) . The precision (tested on our test dataset) of AIDA and NeuralEL are 70% and 61% respectively. To map the extracted predicates (relationships) of the unsupervised approaches output, we use the dictionary based paraphrase detection. We use the same dictionary that is used to collect the dataset (i.e., the combination of three paraphrase dictionaries including PATTY (Nakashole et al., 2012) , POLY (Grycner and Weikum, 2016) , and PPDB (Ganitkevitch et al., 2013)). We replace the extracted predicate with the correct predicate ID if one of the paraphrases of the correct predicate (i.e., the gold standard) appear in the extracted predicate. Otherwise, we replace the extracted predicate with \"NA\" to indicate an unrecognized predicate. We also compare our N-gram Attention model with two encoder-decoder based models including the Single Attention model (Bahdanau et al., 2015) and Transformer model (Vaswani et al., 2017) . Table 3 shows that the end-to-end models outperform the existing model. In particular, our proposed n-gram attention model achieves the best results in terms of precision, recall, and F1 score. Our proposed model outperforms the best existing model (MinIE) by 33.39% and 34.78% in terms of F1 score on the WIKI and GEO test dataset respectively. These results are expected since the existing models are affected by the error propagation of the NED. As expected, the combination of the existing models with AIDA achieves higher F1 scores than the combination with NeuralEL as AIDA achieves a higher precision than NeuralEL.",
"cite_spans": [
{
"start": 117,
"end": 134,
"text": "Lin et al. (2016)",
"ref_id": "BIBREF26"
},
{
"start": 191,
"end": 216,
"text": "Gashteovski et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 362,
"end": 384,
"text": "(Hoffart et al., 2011)",
"ref_id": "BIBREF20"
},
{
"start": 398,
"end": 421,
"text": "(Kolitsas et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 790,
"end": 814,
"text": "(Nakashole et al., 2012)",
"ref_id": "BIBREF34"
},
{
"start": 822,
"end": 848,
"text": "(Grycner and Weikum, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 1280,
"end": 1303,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 1326,
"end": 1348,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 1351,
"end": 1358,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "4.2"
},
{
"text": "To further show the effect of error propagation, we set up an experiment without the canonicalization task (i.e., the objective is predicting a relationship between known entities). We remove the NED pre-processing step by allowing the CNN model to access the correct entities. Meanwhile, we provide the correct entities to the decoder of our proposed model. In this setup, our proposed model achieves 86.34% and 79.11%, while CNN achieves 81.92% and 75.82% in precision over the WIKI and GEO test datasets, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Our proposed n-gram attention model outperforms the end-to-end models by 15.51% and 8.38% in terms of F1 score on the WIKI and GEO test datasets, respectively. The Transformer model also only yields similar performance to that of the Single Attention model, which is worse than ours. These results indicate that our model captures multi-word entity name (in both datasets, 82.9% of the entities have multi-word entity name) in the input sentence better than the other models. Table 3 also shows that the pre-trained embeddings improve the performance of the model in all measures. Moreover, the pre-trained embeddings help the model to converge faster. In our experiments, the models that use the pre-trained embeddings converge in 20 epochs on average, while the models that do not use the pre-trained embeddings converge in 30 \u2212 40 epochs. Our triple classifier combined with the modified beam search boost the performance of the model. The modified beam search provides a high recall by extracting the correct entities based on the surface form in the input sentence while the triple classifier provides a high precision by filtering the invalid triples.",
"cite_spans": [],
"ref_spans": [
{
"start": 476,
"end": 483,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "We further perform manual error analysis. We found that the incorrect output of our model is caused by the same entity name of two different entities (e.g., the name of Michael Jordan that refers to the American basketball player or the English footballer). The modified beam search cannot disambiguate those entities as it only considers the lexical similarity. We consider using context-based similarity as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": null
},
{
"text": "We proposed an end-to-end relation extraction model for KB enrichment that integrates the extraction and canonicalization tasks. Our model thus reduces the error propagation between relation extraction and NED that existing approaches are prone to. To obtain high-quality training data, we adapt distant supervision and augment it with co-reference resolution and paraphrase detection. We propose an n-gram based attention model that better captures the multi-word entity names in a sentence. Moreover, we propose a modified beam search and a triple classification that helps the model to generate high-quality triples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Experimental results show that our proposed model outperforms the existing models by 33.39% and 34.78% in terms of F1 score on the WIKI and GEO test dataset respectively. These results confirm that our model reduces the error propagation between NED and relation extraction. Our proposed n-gram attention model outperforms the other encoder-decoder models by 15.51% and 8.38% in terms of F1 score on the two real-world datasets. These results confirm that our model better captures the multi-word entity names in a sentence. In the future, we plan to explore contextbased similarity to complement the lexical similarity to improve the overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "https://dumps.wikimedia.org/enwiki/latest/enwikilatest-pages-articles.xml.bz22 https://dumps.wikimedia.org/wikidatawiki/entities/latestall.ttl.gz",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The code and the dataset are made available at http://www.ruizhang.info/GKB/gkb.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Bayu Distiawan Trisedya is supported by the Indonesian Endowment Fund for Education (LPDP). This work is done while Bayu Distiawan Trisedya is visiting the Max Planck Institute for Informatics. This work is supported by Australian Research Council (ARC) Discovery Project DP180102050, Google Faculty Research Award, and the National Science Foundation of China (Project No. 61872070 and No. 61402155).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Leveraging linguistic structure for open domain information extraction",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Melvin Jose Johnson",
"middle": [],
"last": "Premkumar",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "344--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguis- tic structure for open domain information extrac- tion. In Proceedings of Association for Computa- tional Linguistics, pages 344-354.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dbpedia: A nucleus for a web of open data",
"authors": [
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"G"
],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "722--735",
"other_ids": {
"DOI": [
"10.1007/978-3-540-76298-0_52"
]
},
"num": null,
"urls": [],
"raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In Proceedings of International Semantic Web Con- ference, pages 722-735.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Open information extraction from the web",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of International Joint Conference on Artifical intelligence",
"volume": "",
"issue": "",
"pages": "2670--2676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko, Michael J. Cafarella, Stephen Soder- land, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Pro- ceedings of International Joint Conference on Artif- ical intelligence, pages 2670-2676.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Translating embeddings for modeling multirelational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Proceedings of International Conference on Neural Information Processing Sys- tems, pages 2787-2795.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Extracting patterns and relations from the world wide web",
"authors": [
{
"first": "",
"middle": [],
"last": "Sergey Brin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of The World Wide Web and Databases International Workshop",
"volume": "",
"issue": "",
"pages": "172--183",
"other_ids": {
"DOI": [
"10.1007/10704656_11"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Brin. 1998. Extracting patterns and relations from the world wide web. In Proceedings of The World Wide Web and Databases International Work- shop, pages 172-183.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Toward an architecture for neverending language learning",
"authors": [
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1306--1313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell. 2010. Toward an architecture for never- ending language learning. In Proceedings of AAAI Conference on Artificial Intelligence, pages 1306- 1313.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multilingual knowledge graph embeddings for cross-lingual knowledge alignment",
"authors": [
{
"first": "Muhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yingtao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Mohan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Zaniolo",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1511--1517",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph em- beddings for cross-lingual knowledge alignment. In Proceedings of International Joint Conference on Artificial Intelligence, pages 1511-1517.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of Empirical Methods in Natural Language Process- ing, pages 1724-1734.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep reinforcement learning for mention-ranking coreference models",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2256--2262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking corefer- ence models. In Proceedings of Empirical Methods in Natural Language Processing, pages 2256-2262.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Clausie: clause-based open information extraction",
"authors": [
{
"first": "Luciano",
"middle": [],
"last": "Del Corro",
"suffix": ""
},
{
"first": "Rainer",
"middle": [],
"last": "Gemulla",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "355--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luciano Del Corro and Rainer Gemulla. 2013. Clausie: clause-based open information extraction. In Pro- ceedings of International Conference on World Wide Web, pages 355-366.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural open information extraction",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "407--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In Proceedings of Associa- tion for Computational Linguistics, pages 407-413.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proceedings of Empirical Methods in Natural Language Processing, pages 1535-1545.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Canonicalizing open knowledge bases",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Gal\u00e1rraga",
"suffix": ""
},
{
"first": "Geremy",
"middle": [],
"last": "Heitz",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1679--1688",
"other_ids": {
"DOI": [
"http://doi.acm.org/10.1145/2661829.2662073"
]
},
"num": null,
"urls": [],
"raw_text": "Luis Gal\u00e1rraga, Geremy Heitz, Kevin Murphy, and Fabian M. Suchanek. 2014. Canonicalizing open knowledge bases. In Proceedings of International Conference on Information and Knowledge Man- agement, pages 1679-1688.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Ppdb: The paraphrase database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proceedings of North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 758-764.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Minie: Minimizing facts in open information extraction",
"authors": [
{
"first": "Kiril",
"middle": [],
"last": "Gashteovski",
"suffix": ""
},
{
"first": "Rainer",
"middle": [],
"last": "Gemulla",
"suffix": ""
},
{
"first": "Luciano",
"middle": [
"Del"
],
"last": "Corro",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2620--2630",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiril Gashteovski, Rainer Gemulla, and Luciano Del Corro. 2017. Minie: Minimizing facts in open in- formation extraction. In Proceedings of Empiri- cal Methods in Natural Language Processing, pages 2620-2630.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Poly: Mining relational paraphrases from multilingual sentences",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Grycner",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2183--2192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Grycner and Gerhard Weikum. 2016. Poly: Mining relational paraphrases from multilingual sentences. In Proceedings of Empirical Methods in Natural Language Processing, pages 2183-2192.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Jointly predicting predicates and arguments in neural semantic role labeling",
"authors": [
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "364--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luheng He, Kenton Lee, Omer Levy, and Luke Zettle- moyer. 2018. Jointly predicting predicates and argu- ments in neural semantic role labeling. In Proceed- ings of Association for Computational Linguistics, pages 364-369.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Robust disambiguation of named entities in text",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Hoffart",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "782--792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Hoffart et al. 2011. Robust disambiguation of named entities in text. In Proceedings of Empiri- cal Methods in Natural Language Processing, pages 782-792.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning 5000 relational extractors",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "286--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, and Daniel S. Weld. 2010. Learning 5000 relational extractors. In Proceedings of Association for Computational Lin- guistics, pages 286-295.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distant supervision for relation extraction with sentence-level attention and entity descriptions",
"authors": [
{
"first": "Guoliang",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3060--3066",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In Proceedings of AAAI Conference on Artificial In- telligence, pages 3060-3066.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceed- ings of International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "End-to-end neural entity linking",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Kolitsas",
"suffix": ""
},
{
"first": "Octavian-Eugen",
"middle": [],
"last": "Ganea",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "519--529",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. 2018. End-to-end neural en- tity linking. In Proceedings of Conference on Computational Natural Language Learning, pages 519-529.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Neural relation extraction with multi-lingual attention",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "34--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2017. Neural relation extraction with multi-lingual atten- tion. In Proceedings of Association for Computa- tional Linguistics, volume 1, pages 34-43.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Neural relation extraction with selective attention over instances",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2124--2133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceed- ings of Association for Computational Linguistics, volume 1, pages 2124-2133.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Deep graph convolutional encoders for structured data to text generation",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep graph convolutional encoders for struc- tured data to text generation. Proceedings of Inter- national Conference on Natural Language Genera- tion, pages 1-9.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Open information extraction systems and downstream applications",
"authors": [
{
"first": "",
"middle": [],
"last": "Mausam",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "4074--4077",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mausam. 2016. Open information extraction systems and downstream applications. In Proceedings of International Joint Conference on Artificial Intelli- gence, pages 4074-4077.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Open language learning for information extraction",
"authors": [
{
"first": "",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bart",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "523--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learn- ing for information extraction. In Proceedings of Empirical Methods in Natural Language Process- ing, pages 523-534.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed rep- resentations of words and phrases and their com- positionality. In Proceedings of International Con- ference on Neural Information Processing Systems, pages 3111-3119.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "An effective, low-cost measure of semantic relatedness obtained from wikipedia links",
"authors": [
{
"first": "David",
"middle": [],
"last": "Milne",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of AAAI Workshop on Wikipedia and Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Milne and Ian H. Witten. 2008. An effective, low-cost measure of semantic relatedness obtained from wikipedia links. In Proceedings of AAAI Work- shop on Wikipedia and Artificial Intelligence, pages 25-30.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of Association for Computational Linguistics and International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju- rafsky. 2009. Distant supervision for relation ex- traction without labeled data. In Proceedings of the Joint Conference of Association for Computational Linguistics and International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "End-to-end relation extraction using lstms on sequences and tree structures",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using lstms on sequences and tree structures. In Proceedings of Association for Com- putational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Patty: A taxonomy of relational patterns with semantic types",
"authors": [
{
"first": "Ndapandula",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
},
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1135--1145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ndapandula Nakashole, Gerhard Weikum, and Fabian M. Suchanek. 2012. Patty: A taxonomy of relational patterns with semantic types. In Proceed- ings of Empirical Methods in Natural Language Processing, pages 1135-1145.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A novel embedding model for knowledge base completion based on convolutional neural network",
"authors": [
{
"first": "Dai",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Tu",
"middle": [
"Dinh"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dat",
"middle": [
"Quoc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dinh",
"middle": [
"Q"
],
"last": "Phung",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "327--333",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Q. Phung. 2018. A novel embed- ding model for knowledge base completion based on convolutional neural network. In Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, volume 2, pages 327-333.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Relation extraction: Perspective from convolutional neural networks",
"authors": [
{
"first": "Thien",
"middle": [
"Huu"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Rela- tion extraction: Perspective from convolutional neu- ral networks. In Proceedings of Workshop on Vector Space Modeling for Natural Language Processing, pages 39-48.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of European Conference on Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In Proceedings of European Con- ference on Machine Learning and Knowledge Dis- covery in Databases, pages 148-163.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Relation extraction with matrix factorization and universal schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"M"
],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "74--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Pro- ceedings of North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 74-84.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Incremental knowledge base construction using deepdive. Very Large Data Bases",
"authors": [
{
"first": "Christopher",
"middle": [
"De"
],
"last": "Sa",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
},
{
"first": "Jaeho",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Feiran",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal",
"volume": "26",
"issue": "1",
"pages": "81--105",
"other_ids": {
"DOI": [
"10.1007/s00778-016-0437-2"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher De Sa, Alexander Ratner, Christopher R\u00e9, Jaeho Shin, Feiran Wang, Sen Wu, and Ce Zhang. 2017. Incremental knowledge base construction using deepdive. Very Large Data Bases Journal, 26(1):81-105.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Entity linking with a knowledge base: Issues, techniques, and solutions",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jianyong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Trans. Knowl. Data Eng",
"volume": "27",
"issue": "2",
"pages": "443--460",
"other_ids": {
"DOI": [
"10.1109/TKDE.2014.2327028"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Shen, Jianyong Wang, and Jiawei Han. 2015. En- tity linking with a knowledge base: Issues, tech- niques, and solutions. IEEE Trans. Knowl. Data Eng., 27(2):443-460.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Proceedings of International Conference on Neural Information Processing Systems, pages 926-934.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Contextaware representations for knowledge base relation extraction",
"authors": [
{
"first": "Daniil",
"middle": [],
"last": "Sorokin",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1784--1789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniil Sorokin and Iryna Gurevych. 2017. Context- aware representations for knowledge base relation extraction. In Proceedings of Empirical Methods in Natural Language Processing, pages 1784-1789.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Supervised open information extraction",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "885--895",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Julian Michael, Ido Dagan, and Luke Zettlemoyer. 2018. Supervised open infor- mation extraction. In Proceedings of North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 885-895.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Yago: a core of semantic knowledge",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "697--706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowl- edge. In Proceedings of International Conference on World Wide Web, pages 697-706.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "SOFIE: a self-organizing framework for information extraction",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Mauro",
"middle": [],
"last": "Sozio",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "631--640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M. Suchanek, Mauro Sozio, and Gerhard Weikum. 2009. SOFIE: a self-organizing frame- work for information extraction. In Proceedings of International Conference on World Wide Web, pages 631-640.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Cross-lingual entity alignment via joint attributepreserving embedding",
"authors": [
{
"first": "Zequn",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Chengkai",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "628--644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attribute- preserving embedding. Proceedings of Interna- tional Semantic Web Conference, pages 628-644.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Multi-instance multi-label learning for relation extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multi-instance multi-label learning for relation extraction. In Pro- ceedings of Empirical Methods in Natural Language Processing, pages 455-465.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Entity alignment between knowledge graphs using attribute embeddings",
"authors": [
{
"first": "Bayu",
"middle": [],
"last": "Distiawan Trisedya",
"suffix": ""
},
{
"first": "Jianzhong",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bayu Distiawan Trisedya, Jianzhong Qi, and Rui Zhang. 2019. Entity alignment between knowledge graphs using attribute embeddings. In Proceedings of AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Gtr-lstm: A triple encoder for sentence generation from rdf data",
"authors": [
{
"first": "Bayu",
"middle": [],
"last": "Distiawan Trisedya",
"suffix": ""
},
{
"first": "Jianzhong",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1627--1637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang, and Wei Wang. 2018. Gtr-lstm: A triple encoder for sentence generation from rdf data. In Proceedings of Association for Computational Linguistics, pages 1627-1637.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of Neural Information Processing Systems, pages 5998-6008.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Wikidata: a free collaborative knowledgebase",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrandecic",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Kr\u00f6tzsch",
"suffix": ""
}
],
"year": 2014,
"venue": "Commun. ACM",
"volume": "57",
"issue": "10",
"pages": "78--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Vrandecic and Markus Kr\u00f6tzsch. 2014. Wiki- data: a free collaborative knowledgebase. Commun. ACM, 57(10):78-85.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Knowledge base completion using embeddings and rules",
"authors": [
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1859--1865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quan Wang, Bin Wang, and Li Guo. 2015. Knowl- edge base completion using embeddings and rules. In Proceedings of International Joint Conference on Artificial Intelligence, pages 1859-1865.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Joint learning of the embedding of words and entities for named entity disambiguation",
"authors": [
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Hideaki",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "Yoshiyasu",
"middle": [],
"last": "Takefuji",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "250--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proceedings of Conference on Computational Natural Language Learning, pages 250-259.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Distant supervision for relation extraction via piecewise convolutional neural networks",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1753--1762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Pro- ceedings of Empirical Methods in Natural Language Processing, pages 1753-1762.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Distant supervision for relation extraction with hierarchical selective attention",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiaming",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Hongyun",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Zhineng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2018,
"venue": "Neural Networks",
"volume": "108",
"issue": "",
"pages": "240--247",
"other_ids": {
"DOI": [
"10.1016/j.neunet.2018.08.016"
]
},
"num": null,
"urls": [],
"raw_text": "Peng Zhou, Jiaming Xu, Zhenyu Qi, Hongyun Bao, Zhineng Chen, and Bo Xu. 2018. Distant supervi- sion for relation extraction with hierarchical selec- tive attention. Neural Networks, 108:240-247.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Sentence input: New York University is a private university in Manhattan. Expected output: Q49210 P31 Q902104 Q49210 P131 Q11299 Sentence input: New York Times Building is a skyscraper in Manhattan Expected output: Q192680 P131 Q11299 ...Sentence input:New York University is a private university in Manhattan. Overview of our proposed solution. man (2015) proposed Convolution Networks with multi-sized window kernel.Zeng et al. (2015) proposed Piecewise Convolution Neural Networks (PCNN).Lin et al. (2016Lin et al. ( , 2017 improved this approach by proposing PCNN with sentence-level attention. This method performed best in experimental studies; hence we choose it as the main baseline against which we compare our approach. Follow-up studies considered further variations: proposed hierarchical attention,Ji et al. (2017) incorporated entity descriptions,Miwa and Bansal (2016) incorporated syntactic features, and Sorokin and Gurevych (2017) used background knowledge for contextualization.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"text": "Relation extraction example.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table/>",
"text": "Statistics of the dataset.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table/>",
"text": "Experiments result.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}