{
"paper_id": "D17-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:14:22.758290Z"
},
"title": "Position-aware Attention and Supervised Data Improve Slot Filling",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {
"postCode": "94305",
"region": "CA"
}
},
"email": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {
"postCode": "94305",
"region": "CA"
}
},
"email": "vzhong@cs.stanford.edu"
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {
"postCode": "94305",
"region": "CA"
}
},
"email": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {
"postCode": "94305",
"region": "CA"
}
},
"email": "angeli@cs.stanford.edu"
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {
"postCode": "94305",
"region": "CA"
}
},
"email": "manning@cs.stanford.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Organized relational knowledge in the form of \"knowledge graphs\" is important for many applications. However, the ability to populate knowledge bases with facts automatically extracted from documents has improved frustratingly slowly. This paper simultaneously addresses two issues that have held back prior work. We first propose an effective new model, which combines an LSTM sequence model with a form of entity position-aware attention that is better suited to relation extraction. Then we build TACRED, a large (119,474 examples) supervised relation extraction dataset, obtained via crowdsourcing and targeted towards TAC KBP relations. The combination of better supervised data and a more appropriate high-capacity model enables much better relation extraction performance. When the model trained on this new dataset replaces the previous relation extraction component of the best TAC KBP 2015 slot filling system, its F 1 score increases markedly from 22.2% to 26.7%.",
"pdf_parse": {
"paper_id": "D17-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "Organized relational knowledge in the form of \"knowledge graphs\" is important for many applications. However, the ability to populate knowledge bases with facts automatically extracted from documents has improved frustratingly slowly. This paper simultaneously addresses two issues that have held back prior work. We first propose an effective new model, which combines an LSTM sequence model with a form of entity position-aware attention that is better suited to relation extraction. Then we build TACRED, a large (119,474 examples) supervised relation extraction dataset, obtained via crowdsourcing and targeted towards TAC KBP relations. The combination of better supervised data and a more appropriate high-capacity model enables much better relation extraction performance. When the model trained on this new dataset replaces the previous relation extraction component of the best TAC KBP 2015 slot filling system, its F 1 score increases markedly from 22.2% to 26.7%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A basic but highly important challenge in natural language understanding is being able to populate a knowledge base with relational facts contained in a piece of text. For the text shown in Figure 1, the system should extract triples, or equivalently, knowledge graph edges, such as ⟨Penner, per:spouse, Lisa Dillman⟩. Combining such extractions, a system can produce a knowledge graph of relational facts between persons, organizations, and locations in the text. This task involves entity recognition, mention coreference and/or entity linking, and relation extraction; we focus on the most challenging \"slot filling\" task of filling in the relations between entities in the text. [Figure 1 example sentence: Penner is survived by his brother, John, a copy editor at the Times, and his former wife, Times sportswriter Lisa Dillman.]",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 196,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Organized relational knowledge in the form of \"knowledge graphs\" has become an important knowledge resource. These graphs are now extensively used by search engine companies, both to provide information to end-users and internally to the system, as a way to understand relationships. However, up until now, automatic knowledge extraction has proven sufficiently difficult that most of the facts in these knowledge graphs have been built up by hand. It is therefore a key challenge to show that NLP technology can effectively contribute to this important problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing work on relation extraction (e.g., Zelenko et al., 2003; Mintz et al., 2009) has been unable to achieve sufficient recall or precision for the results to be usable versus hand-constructed knowledge bases. Supervised training data has been scarce and, while techniques like distant supervision appear to be a promising way to extend knowledge bases at low cost, in practice the training data has often been too noisy for reliable training of relation extraction systems (Angeli et al., 2015). As a result, most systems fail to make correct extractions even in apparently straightforward cases like Figure 1,",
"cite_spans": [
{
"start": 44,
"end": 65,
"text": "Zelenko et al., 2003;",
"ref_id": "BIBREF33"
},
{
"start": 66,
"end": 85,
"text": "Mintz et al., 2009;",
"ref_id": "BIBREF19"
},
{
"start": 478,
"end": 499,
"text": "(Angeli et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 606,
"end": 614,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[Table 1 sample sentence: Carey will succeed Cathleen P. Black, who held the position for 15 years and will take on a new role as chairwoman of Hearst Magazines, the company said.] where the best system at the NIST TAC Knowledge Base Population (TAC KBP) 2015 evaluation failed to recognize the relation between Penner and Dillman. [1] Consequently, most automatic systems continue to make heavy use of hand-written rules or patterns, because it has been hard for machine learning systems to achieve adequate precision or to generalize well across text types. We believe machine learning approaches have suffered from two key problems: (1) the models used have been insufficiently tailored to relation extraction, and (2) there has been insufficient annotated data available to satisfy the training of data-hungry models, such as deep learning models. This work addresses both of these problems. We propose a new, effective neural network sequence model for relation classification. Its architecture is better customized for the slot filling task: the word representations are augmented by extra distributed representations of word position relative to the subject and object of the putative relation. This means that the neural attention model can effectively exploit the combination of semantic similarity-based attention and position-based attention. Secondly, we markedly improve the availability of supervised training data by using Mechanical Turk crowd annotation to produce a large supervised training dataset (Table 1), suitable for the common relations between people, organizations and locations which are used in the TAC KBP evaluations. We name this dataset the TAC Relation Extraction Dataset (TACRED), and will make it available through the Linguistic Data Consortium (LDC) in order to respect copyrights on the underlying text.",
"cite_spans": [],
"ref_spans": [
{
"start": 1489,
"end": 1498,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Example Entity Types & Label",
"sec_num": null
},
{
"text": "Combining these two gives a system with markedly better slot filling performance. ([1] Note: former spouses count as spouses in the ontology.) This is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Entity Types & Label",
"sec_num": null
},
{
"text": "shown not only for a relation classification task on the crowd-annotated data but also for the incorporation of the resulting classifiers into a complete cold start knowledge base population system. On TACRED, our system achieves a relation classification F1 score that is 7.9% higher than that of a strong feature-based classifier, and 3.5% higher than that of the best previous neural architecture that we re-implemented. When this model is used in concert with a pattern-based system on the TAC KBP 2015 Cold Start Slot Filling evaluation data, the system achieves an F1 score of 26.7%, which exceeds the previous state of the art by 4.5% absolute. While this performance certainly does not solve the knowledge base population problem (achieving sufficient recall remains a formidable challenge), it is nevertheless notable progress.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Entity Types & Label",
"sec_num": null
},
{
"text": "Existing work on neural relation extraction (e.g., Zeng et al., 2014; Nguyen and Grishman, 2015; Zhou et al., 2016) has focused on convolutional neural networks (CNNs), recurrent neural networks (RNNs), or their combination. While these models generally work well on the datasets they are tested on, as we will show, they often fail to generalize to the longer sentences that are common in real-world text (such as in TAC KBP).",
"cite_spans": [
{
"start": 51,
"end": 69,
"text": "Zeng et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 70,
"end": 96,
"text": "Nguyen and Grishman, 2015;",
"ref_id": "BIBREF20"
},
{
"start": 97,
"end": 115,
"text": "Zhou et al., 2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "We believe that existing model architectures suffer from two problems: (1) Although modern sequence models such as Long Short-Term Memory (LSTM) networks have gating mechanisms to control the relative influence of each individual word on the final sentence representation (Hochreiter and Schmidhuber, 1997), these controls are not explicitly conditioned on the entire sentence being classified; (2) Most existing work either does not explicitly model the positions of entities (i.e., subject and object) in the sequence, or models the positions only within a local region.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "[Figure 2: Architecture of the position-aware attention model, illustrated on an example sentence with subject Mike and object Lisa: word vectors x_1..x_n feed an LSTM producing hidden states h_1..h_n; together with the subject and object position sequences p^s_i, p^o_i and the summary vector q, these yield attention weights a_1..a_n, which combine the states into the sentence representation z.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "Here, we propose a new neural sequence model with a position-aware attention mechanism over an LSTM network to tackle these challenges. This model can (1) evaluate the relative contribution of each word after seeing the entire sequence, and (2) base this evaluation not only on the semantic information of the sequence, but also on the global positions of the entities within the sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "We formalize the relation extraction task as follows: Let X = [x_1, ..., x_n] denote a sentence, where x_i is the i-th token. A subject entity s and an object entity o are identified in the sentence, corresponding to two non-overlapping consecutive spans:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "X_s = [x_{s_1}, x_{s_1+1}, ..., x_{s_2}] and X_o = [x_{o_1}, x_{o_1+1}, ..., x_{o_2}]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": ". Given the sentence X and the positions of s and o, the goal is to predict a relation r ∈ R (R is the set of relations) that holds between s and o, or no relation otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "Inspired by the position encoding vectors used in Collobert et al. (2011) and Zeng et al. (2014) , we define a position sequence relative to the subject entity",
"cite_spans": [
{
"start": 50,
"end": 73,
"text": "Collobert et al. (2011)",
"ref_id": "BIBREF7"
},
{
"start": 78,
"end": 96,
"text": "Zeng et al. (2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "[p^s_1, ..., p^s_n],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "p^s_i = { i − s_1, if i < s_1; 0, if s_1 ≤ i ≤ s_2; i − s_2, if i > s_2 } (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "Here s_1, s_2 are the starting and ending indices of the subject entity respectively, and p^s_i ∈ ℤ can be viewed as the relative distance of token x_i to the subject entity. Similarly, we obtain a position sequence [p^o_1, ..., p^o_n] relative to the object entity. Let x = [x_1, ..., x_n] be the word embeddings of the sentence, obtained using an embedding matrix E. Similarly, we obtain position embedding vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
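The piecewise position-sequence definition above (Eq. 1) can be sketched as a small helper. This is a minimal illustration, not the authors' code; `s1` and `s2` are assumed to be 0-based inclusive span boundaries as in the text.

```python
def position_sequence(n, s1, s2):
    """Relative distance of each token i to the span [s1, s2] (per Eq. 1):
    i - s1 before the span, 0 inside it, i - s2 after it."""
    seq = []
    for i in range(n):
        if i < s1:
            seq.append(i - s1)   # negative: token precedes the entity
        elif i <= s2:
            seq.append(0)        # token is inside the entity span
        else:
            seq.append(i - s2)   # positive: token follows the entity
    return seq

# 6-token sentence, entity spanning tokens 2..3
print(position_sequence(6, 2, 3))  # [-2, -1, 0, 0, 1, 2]
```

Every value lies in [-(L-1), L-1], which is consistent with the position embedding matrix P having 2L-1 rows.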
{
"text": "p^s = [p^s_1, ..., p^s_n] and p^o = [p^o_1, ..., p^o_n]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "using a shared position embedding matrix P, respectively. Next, as shown in Figure 2, we obtain hidden state representations of the sentence by feeding x into an LSTM:",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 83,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "{h_1, ..., h_n} = LSTM({x_1, ..., x_n})",
"eq_num": "(2)"
}
],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "We define a summary vector q = h_n (i.e., the output state of the LSTM). This summary vector encodes information about the entire sentence. Then for each hidden state h_i, we calculate an attention weight a_i as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u_i = v^T tanh(W_h h_i + W_q q + W_s p^s_i + W_o p^o_i) (3); a_i = exp(u_i) / Σ_{j=1}^{n} exp(u_j)",
"eq_num": "(4)"
}
],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "Here",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "W_h, W_q ∈ ℝ^{d_a×d}, W_s, W_o ∈ ℝ^{d_a×d_p}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "and v ∈ ℝ^{d_a} are learnable parameters of the network, where d is the dimension of the hidden states, d_p is the dimension of the position embeddings, and d_a is the size of the attention layer. Additional parameters of the network include the embedding matrices E ∈ ℝ^{|V|×d} and P ∈ ℝ^{(2L−1)×d_p}, where V is the vocabulary and L is the maximum sentence length. We regard the attention weight a_i as the relative contribution of the specific word to the sentence representation. The final sentence representation z is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "z = Σ_{i=1}^{n} a_i h_i (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
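Given the LSTM states, the attention of Eqs. (3)-(5) can be sketched in NumPy. This is a simplified forward pass with made-up dimensions and random stand-ins for the learned parameters and embeddings; in the real model W_h, W_q, W_s, W_o and v are trained jointly with the LSTM.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, dp, da = 4, 8, 3, 6            # sentence length, hidden, position, attention dims

h = rng.standard_normal((n, d))       # LSTM hidden states h_1..h_n (stand-in)
q = h[-1]                             # summary vector q = h_n
ps = rng.standard_normal((n, dp))     # subject position embeddings p^s_i (stand-in)
po = rng.standard_normal((n, dp))     # object position embeddings p^o_i (stand-in)

W_h, W_q = rng.standard_normal((da, d)), rng.standard_normal((da, d))
W_s, W_o = rng.standard_normal((da, dp)), rng.standard_normal((da, dp))
v = rng.standard_normal(da)

# Eq. (3): u_i = v^T tanh(W_h h_i + W_q q + W_s p^s_i + W_o p^o_i)
u = np.tanh(h @ W_h.T + q @ W_q.T + ps @ W_s.T + po @ W_o.T) @ v
# Eq. (4): softmax over token positions
a = np.exp(u - u.max())
a /= a.sum()
# Eq. (5): z = sum_i a_i h_i
z = a @ h
```

`z` is then what would be fed into the fully-connected and softmax layers for classification.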
{
"text": "z is later fed into a fully-connected layer followed by a softmax layer for relation classification. Note that our model significantly differs from the attention mechanisms in Bahdanau et al. (2015) and Zhou et al. (2016) in our use of the summary vector and position embeddings, and in the way our attention weights are computed. An intuitive way to understand the model is to view the attention calculation as a selection process, where the goal is to select relevant contexts over irrelevant ones. Here the summary vector (q) helps the model base this selection on the semantic information of the entire sentence (rather than on each word only), while the position vectors (p^s_i and p^o_i) provide important spatial information between each word and the two entities.",
"cite_spans": [
{
"start": 175,
"end": 197,
"text": "Bahdanau et al. (2015)",
"ref_id": "BIBREF4"
},
{
"start": 202,
"end": 220,
"text": "Zhou et al. (2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Position-aware Neural Sequence Model Suitable for Relation Extraction",
"sec_num": "2"
},
{
"text": "Previous research has shown that slot filling systems can greatly benefit from supervised data. For example, Angeli et al. (2014b) showed that even a small amount of supervised data can boost the end-to-end F1 score by 3.9% on the TAC KBP tasks. However, existing relation extraction datasets such as the SemEval-2010 Task 8 dataset (Hendrickx et al., 2009) and the Automatic Content Extraction (ACE) (Strassel et al., 2008) dataset are less useful for this purpose. This is mainly because: (1) these datasets are relatively small for effectively training high-capacity models (see Table 2), and (2) they capture very different types of relations. For example, the SemEval dataset focuses on semantic relations (e.g., Cause-Effect, Component-Whole) between two nominals.",
"cite_spans": [
{
"start": 109,
"end": 130,
"text": "Angeli et al. (2014b)",
"ref_id": "BIBREF2"
},
{
"start": 334,
"end": 358,
"text": "(Hendrickx et al., 2009)",
"ref_id": "BIBREF12"
},
{
"start": 402,
"end": 425,
"text": "(Strassel et al., 2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 583,
"end": 590,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "The TAC Relation Extraction Dataset",
"sec_num": "3"
},
{
"text": "One can further argue that it is easy to obtain a large amount of training data using distant supervision (Mintz et al., 2009). In practice, however, due to the large amount of noise in the induced data, training relation extractors that perform well becomes very difficult. For example, Riedel et al. (2010) show that up to 31% of the distantly supervised labels are wrong when creating training data by aligning Freebase to newswire text.",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF19"
},
{
"start": 289,
"end": 309,
"text": "Riedel et al. (2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The TAC Relation Extraction Dataset",
"sec_num": "3"
},
{
"text": "To tackle these challenges, we collect a large supervised dataset TACRED, targeted towards the TAC KBP relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TAC Relation Extraction Dataset",
"sec_num": "3"
},
{
"text": "Data collection. We create TACRED based on query entities and annotated system responses in the yearly TAC KBP evaluations. In each year of the TAC KBP evaluation (2009-2015), 100 entities (people or organizations) are given as queries, for which participating systems should find associated relations and object entities. We make use of Mechanical Turk to annotate each sentence in the source corpus that contains one of these query entities. For each sentence, we ask crowd workers to annotate both the subject and object entity spans and the relation types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The TAC Relation Extraction Dataset",
"sec_num": "3"
},
{
"text": "Dataset stratification. In total we collect 119,474 examples. We stratify TACRED across the years in which the TAC KBP challenge was run, and use examples corresponding to query entities from 2009 to 2012 as the training split, 2013 as the development split, and 2014 as the test split. We reserve the TAC KBP 2015 evaluation data for running slot filling evaluations, as presented in Section 4. Detailed statistics are given in Table 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "The TAC Relation Extraction Dataset",
"sec_num": "3"
},
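The year-based stratification described above can be sketched as follows. This is illustrative only: `examples` with a per-example `year` field is a hypothetical structure, not the released TACRED format.

```python
def stratify_by_year(examples):
    """Split by TAC KBP query-entity year: 2009-2012 train, 2013 dev,
    2014 test; 2015 is held out for the slot filling evaluation."""
    splits = {"train": [], "dev": [], "test": []}
    for ex in examples:
        if 2009 <= ex["year"] <= 2012:
            splits["train"].append(ex)
        elif ex["year"] == 2013:
            splits["dev"].append(ex)
        elif ex["year"] == 2014:
            splits["test"].append(ex)
    return splits

data = [{"year": y} for y in (2009, 2012, 2013, 2014, 2015)]
print({k: len(v) for k, v in stratify_by_year(data).items()})
# {'train': 2, 'dev': 1, 'test': 1}
```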
{
"text": "Discussion. Table 1 presents sampled examples from TACRED. Compared to existing datasets, TACRED has four advantages. First, it contains an order of magnitude more relation instances (Table 2), enabling the training of expressive models. Second, we reuse the entity and relation types of the TAC KBP tasks. We believe these relation types are of more interest to downstream applications. Third, we fully annotate all negative instances that appear in our data collection process, to ensure that models trained on TACRED are not biased towards predicting false positives on real-world text. Lastly, the average sentence length in TACRED is 36.2, compared to 19.1 in the SemEval dataset, reflecting the complexity of the contexts in which relations occur in real-world text. Due to space constraints, we describe the data collection and validation process, system interfaces, and more statistics and examples of TACRED in the supplementary material. We will make TACRED publicly available through the LDC.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The TAC Relation Extraction Dataset",
"sec_num": "3"
},
{
"text": "In this section we evaluate the effectiveness of our proposed model and TACRED on improving slot filling systems. Specifically, we run two sets of experiments: (1) we evaluate model performance on the relation extraction task using TACRED, and (2) we evaluate model performance on the TAC KBP 2015 cold start slot filling task, by training the models on TACRED.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We compare our model against the following baseline models for relation extraction and slot filling:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4.1"
},
{
"text": "TAC KBP 2015 winning system. To judge our proposed model against a strong baseline, we compare against Stanford's top performing system on the TAC KBP 2015 cold start slot filling task (Angeli et al., 2015). At the core of this system are two relation extractors: a pattern-based extractor and a logistic regression (LR) classifier. The pattern-based system uses a total of 4,528 surface patterns and 169 dependency patterns. The logistic regression model was trained on approximately 2 million bootstrapped examples (using a small annotated dataset and high-precision pattern system output) that are carefully tuned for the TAC KBP slot filling evaluation. It uses a comprehensive feature set similar to the MIML-RE system for relation extraction (Surdeanu et al., 2012), including lemmatized n-grams, sequence NER tags and POS tags, positions of entities, and various features over dependency paths, etc.",
"cite_spans": [
{
"start": 185,
"end": 206,
"text": "(Angeli et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 745,
"end": 768,
"text": "(Surdeanu et al., 2012)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4.1"
},
{
"text": "Convolutional neural networks. We follow the 1-dimensional CNN architecture by Nguyen and Grishman (2015) for relation extraction. This model learns a representation of the input sentence, by first running a series of convolutional operations on the sentence with various filters, and then feeding the output into a max-pooling layer to reduce the dimension. The resulting representation is then fed into a fully-connected layer followed by a softmax layer for relation classification. As an extension, positional embeddings are also introduced into this model to better capture the relative position of each word to the subject and object entities and were shown to achieve improved results. We use \"CNN-PE\" to represent the CNN model with positional embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4.1"
},
{
"text": "In dependency-based neural models, shortest dependency paths between entities are often used as input to the neural networks. The intuition is to eliminate tokens that are potentially less relevant to the classification of the relation. For the example in Figure 1, the shortest dependency path between the two entities is: [Penner] ← survived → brother",
"cite_spans": [
{
"start": 325,
"end": 333,
"text": "[Penner]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 256,
"end": 264,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Dependency-based recurrent neural networks.",
"sec_num": null
},
{
"text": "→ wife → [Lisa Dillman]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-based recurrent neural networks.",
"sec_num": null
},
{
"text": "We follow the SDP-LSTM model proposed by Xu et al. (2015b). In this model, each shortest dependency path is divided into two separate sub-paths from the subject entity and the object entity to the lowest common ancestor node. Each sub-path is fed into an LSTM network, and the resulting hidden units at each word position are passed into a max-over-time pooling layer to form the output of this sub-path. Outputs from the two sub-paths are then concatenated to form the final representation.",
"cite_spans": [
{
"start": 41,
"end": 58,
"text": "Xu et al. (2015b)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-based recurrent neural networks.",
"sec_num": null
},
{
"text": "In addition to the above models, we also compare our proposed model against an LSTM sequence model without attention mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency-based recurrent neural networks.",
"sec_num": null
},
{
"text": "We map words that occur fewer than 2 times in the training set to a special <UNK> token. We use the pre-trained GloVe vectors (Pennington et al., 2014) to initialize word embeddings. For all the LSTM layers, we find that 2-layer stacked LSTMs generally work better than one-layer LSTMs. We minimize cross-entropy loss over all 42 relations using AdaGrad (Duchi et al., 2011). We apply Dropout with p = 0.5 to CNNs and LSTMs. During training we also find a word dropout strategy to be very effective: we randomly set a token to be <UNK> with a probability p. We set p to be 0.06 for the SDP-LSTM model and 0.04 for all other models.",
"cite_spans": [
{
"start": 125,
"end": 150,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 353,
"end": 373,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
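The word dropout strategy above can be sketched as a one-line token filter. A minimal illustration, assuming token-level input; the paper applies this only during training, with p = 0.04 or 0.06 depending on the model.

```python
import random

def word_dropout(tokens, p, unk="<UNK>", rng=random):
    """Independently replace each token with <UNK> with probability p."""
    return [unk if rng.random() < p else t for t in tokens]

random.seed(1)
print(word_dropout(["Penner", "is", "survived", "by", "his", "brother"], 0.5))
```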
{
"text": "Entity masking. We replace each subject entity in the original sentence with a special <NER>-SUBJ token where <NER> is the corresponding NER signature of the subject as provided in TAC-RED. We do the same processing for object entities. This processing step helps (1) provide a model with entity type information, and (2) prevent a model from overfitting its predictions to specific entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
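Entity masking can be sketched as below. This is a hypothetical helper under one possible reading (each entity span collapsed to a single `<NER>-SUBJ` / `<NER>-OBJ` token); spans are assumed (start, end) 0-based inclusive, with NER tags as provided in TACRED.

```python
def mask_entities(tokens, subj_span, subj_ner, obj_span, obj_ner):
    """Replace the subject and object spans with <NER>-SUBJ / <NER>-OBJ tokens."""
    out, i = [], 0
    while i < len(tokens):
        if i == subj_span[0]:
            out.append(f"{subj_ner}-SUBJ")   # mask the whole subject span
            i = subj_span[1] + 1
        elif i == obj_span[0]:
            out.append(f"{obj_ner}-OBJ")     # mask the whole object span
            i = obj_span[1] + 1
        else:
            out.append(tokens[i])
            i += 1
    return out

toks = "Mike Penner married Lisa Dillman".split()
print(mask_entities(toks, (0, 1), "PERSON", (3, 4), "PERSON"))
# ['PERSON-SUBJ', 'married', 'PERSON-OBJ']
```

Masking both provides entity-type information and keeps the model from memorizing specific entity strings.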
{
"text": "Multi-channel augmentation. Instead of using only word vectors as input to the network, we augment the input with part-of-speech (POS) and named entity recognition (NER) embeddings. We run Stanford CoreNLP to obtain the POS and NER annotations. We describe our model hyperparameters and training in detail in the supplementary material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
{
"text": "We first evaluate all models on TACRED. We train each model for 5 separate runs with independent random initializations. For each run we perform early stopping using the dev set. We then select the run (among the 5) that achieves the median F1 score on the dev set, and report its test set performance. Table 4 summarizes our results. We observe that all neural models achieve higher F1 scores than the logistic regression and pattern-based systems, which demonstrates the effectiveness of neural models for relation extraction. Although positional embeddings help increase the F1 by around 2% over the plain CNN model, a simple (2-layer) LSTM model surprisingly performs better than the CNN and dependency-based models. Lastly, our proposed position-aware mechanism is very effective and achieves an F1 score of 65.4%, an absolute increase of 3.9% over the best baseline neural model (LSTM) and 7.9% over the baseline logistic regression system. We also run an ensemble of our position-aware attention model which takes majority votes from 5 runs with random initializations, and it further pushes the F1 score up by 1.6%.",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 307,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation on TACRED",
"sec_num": "4.3"
},
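The 5-run majority-vote ensemble can be sketched as below. Illustrative only; ties are broken arbitrarily here (by first-counted label), which the paper does not specify.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-run label lists (one inner list per run).
    Returns the majority label for each example across runs."""
    return [Counter(labels).most_common(1)[0][0] for labels in zip(*predictions)]

runs = [
    ["per:spouse",  "no_relation", "per:title"],
    ["per:spouse",  "per:age",     "per:title"],
    ["no_relation", "no_relation", "per:title"],
    ["per:spouse",  "no_relation", "org:founded_by"],
    ["per:spouse",  "no_relation", "per:title"],
]
print(majority_vote(runs))  # ['per:spouse', 'no_relation', 'per:title']
```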
{
"text": "We find that different neural architectures show a different balance between precision and recall. CNN-based models tend to have higher precision; RNN-based models have better recall. This can be explained by noting that the filters in CNNs are essentially a form of \"fuzzy n-gram patterns\". [Figure 3: An example query and corresponding fillers in the TAC KBP cold start slot filling task.]",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 300,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation on TACRED",
"sec_num": "4.3"
},
{
"text": "Second, we evaluate the slot filling performance of all models using the TAC KBP 2015 cold start slot filling task (Ellis et al., 2015) . In this task, about 50k newswire and Web forum documents are selected as the evaluation corpus. A slot filling system is asked to answer a series of queries with two-hop slots (Figure 3) : The first slot asks about fillers of a relation with the query entity as the subject (Mike Penner), and we term this a hop-0 slot; the second slot asks about fillers with the system's hop-0 output as the subject, and we term this a hop-1 slot. System predictions are then evaluated against gold annotations, and micro-averaged precision, recall and F 1 scores are calculated at the hop-0 and hop-1 levels. Lastly, hop-all scores are calculated by combining hop-0 and hop-1 scores. 2 Evaluating relation extraction systems on slot filling is particularly challenging in that: (1) End-to-end cold start slot filling scores conflate the performance of all modules in the system (i.e., entity recognizer, entity linker and relation extractor). (2) Errors in hop-0 predictions can easily propagate to hop-1 predictions. To fairly evaluate each relation extraction model on this task, we use Stanford's 2015 slot filling system as our basic pipeline. 3 It is a very strong baseline specifically tuned for the TAC KBP evaluation and ranked top in the 2015 evaluation. We then plug in the corresponding relation extractor trained on TACRED, keeping all other modules unchanged. Table 5 presents our results. We find that: (1) by only training our logistic regression model on TACRED (in contrast to the 2 million bootstrapped examples used in the 2015 Stanford system) and combining it with patterns, we obtain a higher hop-0 F 1 score than the 2015 Stanford system, and a similar hop-all F 1 ; (2) our proposed position-aware attention model substantially outperforms the 2015 Stanford system on all hop-0, hop-1 and hop-all F 1 scores. Combining it with the patterns, we achieve a hop-all F 1 of 26.7%, an absolute improvement of 4.5% over the previous state-of-the-art result.",
"cite_spans": [
{
"start": 115,
"end": 135,
"text": "(Ellis et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 314,
"end": 324,
"text": "(Figure 3)",
"ref_id": null
},
{
"start": 1491,
"end": 1498,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation on TAC KBP Slot Filling",
"sec_num": "4.4"
},
{
"text": "Model ablation. Table 6 presents the results of an ablation test of our position-aware attention model on the development set of TACRED. The entire attention mechanism contributes about 1.5% F 1 , of which the position-aware term in Eq. 3 alone contributes about 1% F 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.5"
},
{
"text": "Impact of negative examples. Figure 4 shows how the slot filling evaluation scores change as we change the amount of negative (i.e., no relation) training data provided to our proposed model. We find that: (1) At hop-0 level, precision increases as we provide more negative examples, while recall stays almost unchanged, so the F 1 score keeps increasing. (2) The hop-all F 1 score increases by about 10% as we change the amount of negative examples from 20% to 100%.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.5"
},
{
"text": "Performance by sentence length. Figure 5 shows performance on varying sentence lengths. We find that: (1) Performance of all models degrades substantially as the sentences get longer.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.5"
},
{
"text": "(2) Compared to the baseline Logistic Regression model, all neural models handle long sentences better. (3) Compared to CNN-PE model, RNNbased models are more robust on long sentences, and notably SDP-LSTM model is least sensitive to sentence length. (4) Our proposed model achieves equal or better results on sentences of all lengths, except for sentences with more than 60 tokens where SDP-LSTM model achieves the best result. Improvement by slot types. We calculate the F 1 score for each slot type and compare the improvement from using our proposed model across slot types. When compared with the CNN-PE model, our position-aware attention model achieves improved F 1 scores on 30 out of the 41 slot types, with the top 5 slot types being org:members, per:country of death, org:shareholders, per:children and per:religion. When compared with SDP-LSTM model, our model achieves improved F 1 scores on 26 out of the 41 slot types, with the top 5 slot types being org:political/religious affiliation, per:country of death, org:alternate names, per:religion and per:alternate names. We observe that slot types with relatively sparse training examples tend to be improved by using the position-aware attention model. Attention visualization. Lastly, Figure 6 shows the visualization of attention weights assigned by our model on sampled sentences from the development set. We find that the model learns to pay more attention to words that are informative for the relation (e.g., \"graduated from\", \"niece\" and \"chairman\"), though it still makes mistakes (e.g., \"refused to name the three\"). We also observe that the model tends to put a lot of weight onto object entities, as the object NER signatures are very informative to the classification of relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 1250,
"end": 1258,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.5"
},
{
"text": "Relation extraction. There are broadly three main lines of work on relation extraction: first, fully-supervised approaches (Zelenko et al., 2003; Bunescu and Mooney, 2005) , where a statistical classifier is trained on an annotated dataset; second, distant supervision (Mintz et al., 2009; Surdeanu et al., 2012) , where a training set is formed by projecting the relations in an existing knowledge base onto textual instances that contain the entities that the relation connects; and third, Open IE (Fader et al., 2011; Mausam et al., 2012) , which views its goal as producing subject-relation-object triples with the relation expressed in text.",
"cite_spans": [
{
"start": 123,
"end": 145,
"text": "(Zelenko et al., 2003;",
"ref_id": "BIBREF33"
},
{
"start": 146,
"end": 171,
"text": "Bunescu and Mooney, 2005)",
"ref_id": "BIBREF5"
},
{
"start": 270,
"end": 290,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF19"
},
{
"start": 291,
"end": 313,
"text": "Surdeanu et al., 2012)",
"ref_id": "BIBREF26"
},
{
"start": 501,
"end": 521,
"text": "(Fader et al., 2011;",
"ref_id": "BIBREF10"
},
{
"start": 522,
"end": 542,
"text": "Mausam et al., 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Slot filling and knowledge base population. The most widely-known effort to evaluate slot filling and KBP systems is the yearly TAC KBP slot filling tasks, starting from 2009 (McNamee and Dang, 2009) . Participants in slot filling tasks usually make use of hybrid systems that combine patterns, Open IE, distant supervision and supervised systems for relation extraction (Kisiel et al., 2015; Finin et al., 2015; Zhang et al., 2016) .",
"cite_spans": [
{
"start": 175,
"end": 199,
"text": "(McNamee and Dang, 2009)",
"ref_id": "BIBREF18"
},
{
"start": 371,
"end": 392,
"text": "(Kisiel et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 393,
"end": 412,
"text": "Finin et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 413,
"end": 432,
"text": "Zhang et al., 2016)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Datasets for relation extraction. Popular general-domain datasets include the ACE dataset (Strassel et al., 2008) and the SemEval-2010 task 8 dataset (Hendrickx et al., 2009) . In addition, the BioNLP Shared Tasks are yearly efforts on creating datasets and evaluations for biomedical information extraction systems.",
"cite_spans": [
{
"start": 90,
"end": 113,
"text": "(Strassel et al., 2008)",
"ref_id": "BIBREF25"
},
{
"start": 150,
"end": 174,
"text": "(Hendrickx et al., 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Deep learning models for relation extraction. Many deep learning models have been proposed for relation extraction, with a focus on end-to-end training using CNNs (Zeng et al., 2014; Nguyen and Grishman, 2015) and RNNs (Zhang et al., 2015) . Other popular approaches include using CNN or RNN over dependency paths between entities (Xu et al., 2015a,b) , augmenting RNNs with different components (Zhou et al., 2016) , and combining RNNs and CNNs (Vu et al., 2016; Wang et al., 2016) . Adel et al. (2016) compares the performance of CNN models against traditional approaches on slot filling using a portion of the TAC KBP evaluation data.",
"cite_spans": [
{
"start": 163,
"end": 182,
"text": "(Zeng et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 183,
"end": 209,
"text": "Nguyen and Grishman, 2015)",
"ref_id": "BIBREF20"
},
{
"start": 219,
"end": 239,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF35"
},
{
"start": 331,
"end": 351,
"text": "(Xu et al., 2015a,b)",
"ref_id": null
},
{
"start": 396,
"end": 414,
"text": "Zhou et al., 2016)",
"ref_id": "BIBREF37"
},
{
"start": 445,
"end": 462,
"text": "(Vu et al., 2016;",
"ref_id": "BIBREF27"
},
{
"start": 463,
"end": 481,
"text": "Wang et al., 2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We introduce a state-of-the-art position-aware neural sequence model for relation extraction, as well as TACRED, a large-scale, crowd-sourced dataset that is orders of magnitude larger than previous relation extraction datasets. Our proposed model outperforms a strong feature-based classifier and all baseline neural models. In combination with the new dataset, it improves the state-of-the-art hop-all F 1 on the TAC KBP 2015 slot filling task by 4.5% absolute.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The cause was a heart attack following a case of pneumonia , said PER-SUBJ 's niece , PER-OBJ PER-OBJ .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "per:schools attended",
"sec_num": null
},
{
"text": "per:other family Independent ORG-SUBJ ORG-SUBJ ORG-SUBJ ( ECC ) chairman PER-OBJ PER-OBJ refused to name the three , saying they would be identified when the final list of candidates for the august 20 polls is published on Friday .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "per:schools attended",
"sec_num": null
},
{
"text": "org:top members/employees Figure 6 : Sampled sentences from the TACRED development set, with words highlighted according to the attention weights produced by our best model.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "per:schools attended",
"sec_num": null
},
{
"text": "In the TAC KBP cold start slot filling evaluation, a hop-1 slot is converted to a pseudo-slot which is treated the same as a hop-0 slot. Hop-all precision, recall and F 1 are then calculated by combining these pseudo-slot predictions and hop-0 predictions. 3 This system uses the fine-grained NER system in Stanford CoreNLP for entity detection and the Illinois Wikifier (Ratinov et al., 2011) for entity linking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their helpful suggestions. We gratefully acknowledge the support of the Allen Institute for Artificial Intelligence and the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract No. FA8750-13-2-0040. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, AFRL, or the US government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Comparing convolutional neural networks to traditional models for slot filling",
"authors": [
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heike Adel, Benjamin Roth, and Hinrich Sch\u00fctze. 2016. Comparing convolutional neural networks to traditional models for slot filling. Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics on Human Language Technology (NAACL-HLT).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Stanford's distantly supervised slot filling systems for KBP",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Melvin",
"middle": [
"Johnson"
],
"last": "Premkumar",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"Y"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "Text Analysis Conference (TAC) Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Sonal Gupta, Melvin Johnson Premku- mar, Christopher D. Manning, Christopher R\u00e9, Julie Tibshirani, Jean Y. Wu, Sen Wu, and Ce Zhang. 2014a. Stanford's distantly supervised slot filling systems for KBP 2014. In Text Analysis Conference (TAC) Proceedings 2014.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Combining distant and partial supervision for relation extraction",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"Y"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Julie Tibshirani, Jean Y. Wu, and Christopher D. Manning. 2014b. Combining dis- tant and partial supervision for relation extraction. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bootstrapped self training for knowledge base population",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Arun",
"middle": [],
"last": "Chaganty",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Melvin",
"middle": [
"Johnson"
],
"last": "Premkumar",
"suffix": ""
},
{
"first": "Panupong",
"middle": [],
"last": "Pasupat",
"suffix": ""
},
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Text Analysis Conference (TAC) Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Victor Zhong, Danqi Chen, Arun Cha- ganty, Jason Bolton, Melvin Johnson Premkumar, Panupong Pasupat, Sonal Gupta, and Christopher D Manning. 2015. Bootstrapped self training for knowledge base population. In Text Analysis Con- ference (TAC) Proceedings 2015.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In International Con- ference on Learning Representations (ICLR).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A shortest path dependency kernel for relation extraction",
"authors": [
{
"first": "C",
"middle": [],
"last": "Razvan",
"suffix": ""
},
{
"first": "Raymond J",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (EMNLP 2005)",
"volume": "",
"issue": "",
"pages": "724--731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan C Bunescu and Raymond J Mooney. 2005. A shortest path dependency kernel for relation ex- traction. In Proceedings of the Conference on Hu- man Language Technology and Empirical Methods in Natural Language Processing (EMNLP 2005), pages 724-731.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Entity-centric coreference resolution with model stacking",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Proceedings of the 53th Annual Meet- ing of the Association for Computational Linguistics (ACL 2015).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of linguistic resources for the TAC KBP 2015 evaluations: Methodologies and results",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Getman",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Fore",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Kuster",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
}
],
"year": 2015,
"venue": "Text Analysis Conference (TAC) Proceedings",
"volume": "",
"issue": "",
"pages": "16--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Ellis, Jeremy Getman, Dana Fore, Neil Kuster, Zhiyi Song, Ann Bies, and Stephanie Strassel. 2015. Overview of linguistic resources for the TAC KBP 2015 evaluations: Methodologies and results. In Text Analysis Conference (TAC) Proceedings 2015, pages 16-17.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), pages 1535-1545.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "HLTCOE participation in TAC KBP 2015: Cold start and TEDL",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Lawrie",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Oard",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yiu-Chang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Joshi",
"middle": [],
"last": "Mackin",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Dowd",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Finin, Dawn Lawrie, Paul McNamee, James May- field, Douglas Oard, Nanyun Peng, Ning Gao, Yiu- Chang Lin, Joshi MacKin, and Tim Dowd. 2015. HLTCOE participation in TAC KBP 2015: Cold start and TEDL. In Text Analysis Conference (TAC) Proceedings 2015.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals",
"authors": [
{
"first": "Iris",
"middle": [],
"last": "Hendrickx",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions",
"volume": "",
"issue": "",
"pages": "94--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid\u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations be- tween pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 94-99.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Overview of BioNLP'09 shared task on event extraction",
"authors": [
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Yoshinobu",
"middle": [],
"last": "Kano",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Dong Kim, Tomoko Ohta, Sampo Pyysalo, Yoshi- nobu Kano, and Jun'ichi Tsujii. 2009. Overview of BioNLP'09 shared task on event extraction. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task, pages 1-9.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "CMUML System for KBP 2015 cold start slot filling",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Mcdowell",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Ndapandula",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Emmanouil",
"suffix": ""
},
{
"first": "Abulhair",
"middle": [],
"last": "Platanios",
"suffix": ""
},
{
"first": "Shashank",
"middle": [],
"last": "Saparov",
"suffix": ""
},
{
"first": "Derry",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Wijaya",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Text Analysis Conference (TAC) Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan Kisiel, Bill McDowell, Matt Gardner, Ndapan- dula Nakashole, Emmanouil A Platanios, Abulhair Saparov, Shashank Srivastava, Derry Wijaya, and Tom Mitchell. 2015. CMUML System for KBP 2015 cold start slot filling. In Text Analysis Con- ference (TAC) Proceedings 2015.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Open language learning for information extraction",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Bart",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "523--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523-534.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Overview of the TAC 2009 knowledge base population track",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
}
],
"year": 2009,
"venue": "Text Analysis Conference (TAC) Proceedings",
"volume": "17",
"issue": "",
"pages": "111--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul McNamee and Hoa Trang Dang. 2009. Overview of the TAC 2009 knowledge base population track. In Text Analysis Conference (TAC) Proceedings 2009, volume 17, pages 111-113.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Relation extraction: Perspective from convolutional neural networks",
"authors": [
{
"first": "Huu",
"middle": [],
"last": "Thien",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Rela- tion extraction: Perspective from convolutional neu- ral networks. In Proceedings of the 2015 Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics on Human Lan- guage Technology (NAACL-HLT), pages 39-48.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP 2014), volume 14, pages 1532- 1543.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Local and global algorithms for disambiguation to Wikipedia",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL 2011)",
"volume": "",
"issue": "",
"pages": "1375--1384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov, Dan Roth, Doug Downey, and Mike An- derson. 2011. Local and global algorithms for dis- ambiguation to Wikipedia. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies (ACL 2011), pages 1375-1384.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Modeling relations and their mentions without labeled text. Machine learning and knowledge discovery in databases",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. Machine learning and knowledge discovery in databases, pages 148-163.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(1):1929-1958.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Linguistic resources and evaluation techniques for evaluation of cross-document automatic content extraction",
"authors": [
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "Kay",
"middle": [],
"last": "Przybocki",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Peterson",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephanie Strassel, Mark A Przybocki, Kay Peterson, Zhiyi Song, and Kazuaki Maeda. 2008. Linguis- tic resources and evaluation techniques for evalua- tion of cross-document automatic content extraction. In Proceedings of the International Conference on Language Resources and Evaluation (LREC).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multi-instance multi-label learning for relation extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, pages 455- 465.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Combining recurrent and convolutional neural networks for relation classification",
"authors": [
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Pankaj",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ngoc Thang Vu, Heike Adel, Pankaj Gupta, and Hin- rich Sch\u00fctze. 2016. Combining recurrent and convo- lutional neural networks for relation classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology (NAACL-HLT).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Relation classification via multi-level attention CNNs",
"authors": [
{
"first": "Linlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhu",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "De Melo",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level at- tention CNNs. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (ACL 2016).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Semantic relation classification via convolutional neural networks with simple negative sampling",
"authors": [
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Songfang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015a. Semantic relation classifica- tion via convolutional neural networks with simple negative sampling. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing (EMNLP 2015).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Improved relation classification by deep recurrent neural networks with data augmentation",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yunchuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yangyang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 26th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, and Zhi Jin. 2016. Improved relation classification by deep recurrent neural networks with data augmentation. In Proceedings of the 26th Inter- national Conference on Computational Linguistics (COLING 2016).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Classifying relations via long short term memory networks along shortest dependency paths",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yunchuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1785--1794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015b. Classifying relations via long short term memory networks along shortest depen- dency paths. In Proceedings of the 2015 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 1785-1794.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Recurrent neural network regularization",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.2329"
]
},
"num": null,
"urls": [],
"raw_text": "Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Kernel methods for relation extraction",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Zelenko",
"suffix": ""
},
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Richardella",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine learning research",
"volume": "3",
"issue": "",
"pages": "1083--1106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation ex- traction. Journal of machine learning research, 3:1083-1106.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING 2014)",
"volume": "",
"issue": "",
"pages": "2335--2344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network. In Proceedings of the 24th International Conference on Compu- tational Linguistics (COLING 2014), pages 2335- 2344.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Relation classification via recurrent neural network",
"authors": [
{
"first": "Dongxu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "CSLT 20150024, Tsinghua University",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongxu Zhang, Dong Wang, and Rong Liu. 2015. Re- lation classification via recurrent neural network. Technical report, CSLT 20150024, Tsinghua Uni- versity.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Stanford at TAC KBP 2016: Sealing pipeline leaks and understanding chinese",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Arun",
"middle": [],
"last": "Chaganty",
"suffix": ""
},
{
"first": "Ashwin",
"middle": [],
"last": "Paranjape",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Text Analysis Conference (TAC) Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Arun Chaganty, Ashwin Paranjape, Danqi Chen, Jason Bolton, Peng Qi, and Christo- pher D. Manning. 2016. Stanford at TAC KBP 2016: Sealing pipeline leaks and understanding chi- nese. In Text Analysis Conference (TAC) Proceed- ings 2016.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Attentionbased bidirectional long short-term memory networks for relation classification",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Bingchen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hongwei",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention- based bidirectional long short-term memory net- works for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (ACL 2016), page 207.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "An example of relation extraction from the TAC KBP corpus.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Our proposed position-aware neural sequence model. The model is shown with an example sentence Mike and Lisa got married.",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "At hop-all level, F 1 score increases by Change of slot filling hop-0 and hopall scores as number of negative training examples changes. 100% is with all the negative examples included in the training set; the left side scores have positives and negatives roughly balanced.",
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"num": null,
"text": "TACRED development set F 1 scores for sentences of varying lengths.",
"type_str": "figure"
},
"TABREF0": {
"text": "Types: PERSON/TITLE Relation: per:title Irene Morgan Kirkaldy, who was born and reared in Baltimore, lived on Long Island and ran a child-care center in Queens with her second husband, Stanley Kirkaldy. Types: PERSON/CITY Relation: per:city of birth Pandit worked at the brokerage Morgan Stanley for about 11 years until 2005, when he and some Morgan Stanley colleagues quit and later founded the hedge fund Old Lane Partners.Baldwin declined further comment, and said JetBlue chief executive Dave Barger was unavailable.",
"html": null,
"content": "<table><tr><td>Types: ORGANIZATION/PERSON</td></tr><tr><td>Relation: org:founded by</td></tr><tr><td>Types: PERSON/TITLE</td></tr><tr><td>Relation: no relation</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF1": {
"text": "Sampled examples from the TACRED dataset. Subject entities are highlighted in blue and object entities are highlighted in red.",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "A comparison of existing datasets and our proposed TACRED dataset. % Neg. denotes the percentage of negative examples (no relation).",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "Statistics on TACRED: number of examples and the source of each portion.",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF7": {
"text": "Patterns 63.8 17.7 27.7 49.3 8.6 14.7 58.9 13.3 21.8 LR 36.6 21.9 27.4 15.1 10.1 12.2 25.6 16.3 19.9 + Patterns (2015 winning system) 37.5 24.5 29.7 16.5 12.8 14.4 26.6 19.0 22.2",
"html": null,
"content": "<table><tr><td/><td/><td>Hop-0</td><td/><td/><td>Hop-1</td><td/><td/><td>Hop-all</td><td/></tr><tr><td>Model</td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td>LR trained on TACRED</td><td colspan=\"9\">32.7 20.6 25.3 7.9 9.5 8.6 16.8 15.3 16.0</td></tr><tr><td>+ Patterns</td><td colspan=\"9\">36.5 26.5 30.7 11.0 15.3 12.8 20.1 21.2 20.6</td></tr><tr><td>Our model</td><td colspan=\"9\">39.0 28.9 33.2 17.7 13.9 15.6 28.2 21.5 24.4</td></tr><tr><td>+ Patterns</td><td colspan=\"9\">40.2 31.5 35.3 19.4 16.5 17.8 29.7 24.2 26.7</td></tr><tr><td colspan=\"10\">Table 5: Model performance on TAC KBP 2015 slot filling evaluation, micro-averaged over queries.</td></tr><tr><td colspan=\"10\">Hop-0 scores are calculated on the simple single-hop slot filling results; hop-1 scores are calculated</td></tr><tr><td colspan=\"10\">on slot filling results chained on systems' hop-0 predictions; hop-all scores are calculated based on the</td></tr><tr><td colspan=\"3\">combination of the two. LR = logistic regression.</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Model</td><td>Dev F 1</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Final Model</td><td>66.22</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>-Position-aware attention</td><td>65.12</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>-Attention</td><td>64.71</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>-Pre-trained embeddings</td><td>65.34</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>-Word dropout</td><td>65.69</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>-All above</td><td>63.60</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF8": {
"text": "",
"html": null,
"content": "<table><tr><td>: An ablation test of our position-aware</td></tr><tr><td>attention model, evaluated on TACRED dev set.</td></tr><tr><td>Scores are median of 5 models.</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF9": {
"text": "SUBJ graduated from North Korea 's elite Kim Il Sung University and ORG-OBJ ORG-OBJ .",
"html": null,
"content": "<table><tr><td>Sampled Sentences</td><td>Predicted Labels</td></tr><tr><td>PER-</td><td/></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}