{
"paper_id": "P14-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:07:02.861530Z"
},
"title": "Incremental Joint Extraction of Entity Mentions and Relations",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute Troy",
"location": {
"postCode": "12180",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rensselaer Polytechnic Institute Troy",
"location": {
"postCode": "12180",
"region": "NY",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an incremental joint framework to simultaneously extract entity mentions and relations using structured perceptron with efficient beam-search. A segment-based decoder based on the idea of the semi-Markov chain is adapted to the new framework, as opposed to traditional token-based tagging. In addition, by virtue of the inexact search, we developed a number of new and effective global features as soft constraints to capture the interdependency among entity mentions and relations. Experiments on Automatic Content Extraction (ACE) corpora demonstrate that our joint model significantly outperforms a strong pipelined baseline, which attains better performance than the best-reported end-to-end system.",
"pdf_parse": {
"paper_id": "P14-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an incremental joint framework to simultaneously extract entity mentions and relations using structured perceptron with efficient beam-search. A segment-based decoder based on the idea of the semi-Markov chain is adapted to the new framework, as opposed to traditional token-based tagging. In addition, by virtue of the inexact search, we developed a number of new and effective global features as soft constraints to capture the interdependency among entity mentions and relations. Experiments on Automatic Content Extraction (ACE) corpora demonstrate that our joint model significantly outperforms a strong pipelined baseline, which attains better performance than the best-reported end-to-end system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The goal of end-to-end entity mention and relation extraction is to discover relational structures of entity mentions from unstructured texts. This problem has been artificially broken down into several components such as entity mention boundary identification, entity type classification and relation extraction. Although adopting such a pipelined approach would make a system comparatively easy to assemble, it has some limitations: First, it prohibits the interactions between components. Errors in the upstream components are propagated to the downstream components without any feedback. Second, it over-simplifies the problem as multiple local classification steps without modeling long-distance and cross-task dependencies. By contrast, we re-formulate this task as a structured prediction problem to reveal the linguistic and logical properties of the hidden structures (http://www.itl.nist.gov/iad/mig//tests/ace). For example, in Figure 1, the output structure of each sentence can be interpreted as a graph in which entity mentions are nodes and relations are directed arcs with relation types. By jointly predicting the structures, we aim to address the aforementioned limitations by capturing: (i) The interactions between the two tasks. For example, in Figure 1a, although it may be difficult for a mention extractor to predict \"1,400\" as a Person (PER) mention, the context word \"employs\" between \"tire maker\" and \"1,400\" strongly indicates an Employment-Organization (EMP-ORG) relation, which must involve a PER mention. (ii) The global features of the hidden structure. Various entity mentions and relations share linguistic and logical constraints. For example, we can use the triangle feature in Figure 1b to ensure that the relations between \"forces\" and each of the entity mentions \"Somalia/GPE\", \"Haiti/GPE\" and \"Kosovo/GPE\" are of the same type (Physical (PHYS), in this case).",
"cite_spans": [],
"ref_spans": [
{
"start": 939,
"end": 947,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1263,
"end": 1272,
"text": "Figure 1a",
"ref_id": "FIGREF0"
},
{
"start": 1711,
"end": 1720,
"text": "Figure 1b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Following the above intuitions, we introduce a joint framework based on structured perceptron (Collins, 2002; Collins and Roark, 2004) with beam-search to extract entity mentions and relations simultaneously. With the benefit of inexact search, we are also able to use arbitrary global features with low cost. The underlying learning algorithm has been successfully applied to some other Natural Language Processing (NLP) tasks. Our task differs from dependency parsing (such as (Huang and Sagae, 2010)) in that relation structures are more flexible, where each node can have arbitrary relation arcs. Our previous work (Li et al., 2013) used a perceptron model with token-based tagging to jointly extract event triggers and arguments. By contrast, we aim to address a more challenging task: identifying mention boundaries and types together with relations, which raises the issue that assignments for the same sentence with different mention boundaries are difficult to synchronize during search. To tackle this problem, we adopt a segment-based decoding algorithm derived from (Sarawagi and Cohen, 2004; Zhang and Clark, 2008) based on the idea of the semi-Markov chain (a.k.a. the multiple-beam search algorithm). Most previous attempts at joint inference of entity mentions and relations (such as (Roth and Yih, 2004; Roth and Yih, 2007)) assumed that entity mention boundaries were given, and that the classifiers of mentions and relations were separately learned. As a key difference, we incrementally extract entity mentions together with relations using a single model. The main contributions of this paper are as follows: 1. This is the first work to incrementally predict entity mentions and relations using a single joint model (Section 3). 2. Predicting mention boundaries in the joint framework raises the challenge of synchronizing different assignments in the same beam. We solve this problem by detecting entity mentions at the segment level instead of with traditional token-based approaches (Section 3.1.1). 3. We design a set of novel global features based on soft constraints over the entire output graph structure with low cost (Section 4).",
"cite_spans": [
{
"start": 94,
"end": 109,
"text": "(Collins, 2002;",
"ref_id": "BIBREF4"
},
{
"start": 110,
"end": 134,
"text": "Collins and Roark, 2004)",
"ref_id": "BIBREF3"
},
{
"start": 479,
"end": 502,
"text": "(Huang and Sagae, 2010)",
"ref_id": "BIBREF9"
},
{
"start": 620,
"end": 637,
"text": "(Li et al., 2013)",
"ref_id": "BIBREF16"
},
{
"start": 1079,
"end": 1105,
"text": "(Sarawagi and Cohen, 2004;",
"ref_id": "BIBREF26"
},
{
"start": 1106,
"end": 1128,
"text": "Zhang and Clark, 2008)",
"ref_id": "BIBREF31"
},
{
"start": 1293,
"end": 1313,
"text": "(Roth and Yih, 2004;",
"ref_id": "BIBREF24"
},
{
"start": 1314,
"end": 1333,
"text": "Roth and Yih, 2007)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experimental results show that the proposed framework achieves better performance than pipelined approaches, and global features provide further significant gains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The entity mention extraction and relation extraction tasks we are addressing are those of the Automatic Content Extraction (ACE) program (http://www.nist.gov/speech/tests/ace). ACE defined 7 main entity types, including Person (PER), Organization (ORG), Geo-Political Entities (GPE), Location (LOC), Facility (FAC), Weapon (WEA) and Vehicle (VEH). The goal of relation extraction is to extract semantic relations of the targeted types between a pair of entity mentions which appear in the same sentence. ACE'04 defined 7 main relation types: Physical (PHYS), Person-Social (PER-SOC), Employment-Organization (EMP-ORG), Agent-Artifact (ART), PER/ORG Affiliation (Other-AFF), GPE-Affiliation (GPE-AFF) and Discourse (DISC). ACE'05 kept PER-SOC, ART and GPE-AFF, split PHYS into PHYS and a new relation type Part-Whole, removed DISC, and merged EMP-ORG and Other-AFF into EMP-ORG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2.1"
},
{
"text": "Throughout this paper, we use \u22a5 to denote non-entity or non-relation classes. We consider relations asymmetric: the same relation type with opposite directions is considered to be two classes, which we refer to as directed relation types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2.1"
},
{
"text": "Most previous research on relation extraction assumed that entity mentions were given. In this work, we aim to address the problem of end-to-end entity mention and relation extraction from raw texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "2.1"
},
{
"text": "In order to develop a baseline system representing state-of-the-art pipelined approaches, we trained a linear-chain Conditional Random Fields model (Lafferty et al., 2001 ) for entity mention extraction and a Maximum Entropy model for relation extraction.",
"cite_spans": [
{
"start": 148,
"end": 170,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "2.2"
},
{
"text": "Entity Mention Extraction Model We re-cast the problem of entity mention extraction as a sequential token tagging task, as in the state-of-the-art system (Florian et al., 2006) . We applied the BILOU scheme, where each tag means a token is the Beginning, Inside, Last, Outside, or Unit of an entity mention, respectively. Most of our features are similar to the work of (Florian et al., 2004; Florian et al., 2006 ) except that we do not have their gazetteers and outputs from other mention detection systems as features. Our additional features are as follows:",
"cite_spans": [
{
"start": 152,
"end": 174,
"text": "(Florian et al., 2006)",
"ref_id": "BIBREF7"
},
{
"start": 369,
"end": 391,
"text": "(Florian et al., 2004;",
"ref_id": "BIBREF6"
},
{
"start": 392,
"end": 412,
"text": "Florian et al., 2006",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "2.2"
},
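The BILOU conversion described above is mechanical; the following helper is an illustrative sketch (the function name and the span representation are assumptions, not the authors' code), showing how gold mention spans map to per-token tags.

```python
def to_bilou(num_tokens, mentions):
    """Convert mention spans to BILOU tags.

    mentions: list of (start, end, entity_type) with inclusive
    0-based token offsets. Tokens outside any mention get "O".
    """
    tags = ["O"] * num_tokens
    for start, end, etype in mentions:
        if start == end:
            tags[start] = f"U-{etype}"       # single-token (Unit) mention
        else:
            tags[start] = f"B-{etype}"       # Beginning
            for i in range(start + 1, end):
                tags[i] = f"I-{etype}"       # Inside
            tags[end] = f"L-{etype}"         # Last
    return tags

# "Allan from New York Stock Exchange": "Allan" is a unit-length PER,
# "New York Stock Exchange" is an ORG spanning tokens 2-5.
print(to_bilou(6, [(0, 0, "PER"), (2, 5, "ORG")]))
# ['U-PER', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'L-ORG']
```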
{
"text": "\u2022 Governor word of the current token based on dependency parsing (Marneffe et al., 2006) . \u2022 Prefix of each word in Brown clusters learned from TDT5 corpus (Sun et al., 2011) .",
"cite_spans": [
{
"start": 65,
"end": 88,
"text": "(Marneffe et al., 2006)",
"ref_id": "BIBREF17"
},
{
"start": 156,
"end": 174,
"text": "(Sun et al., 2011)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "2.2"
},
{
"text": "Relation Extraction Model Given a sentence with entity mention annotations, the goal of baseline relation extraction is to classify each mention pair into one of the pre-defined relation types with direction or \u22a5 (non-relation). Most of our relation extraction features are based on the previous work of (Zhou et al., 2005) and (Kambhatla, 2004) . We designed the following additional features:",
"cite_spans": [
{
"start": 304,
"end": 323,
"text": "(Zhou et al., 2005)",
"ref_id": "BIBREF33"
},
{
"start": 328,
"end": 345,
"text": "(Kambhatla, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "2.2"
},
{
"text": "\u2022 The label sequence of phrases covering the two mentions. For example, for the sentence in Figure 1a, the sequence is \"NP VP NP\". We also augment it with the head words of each phrase. \u2022 Four syntactico-semantic patterns described in (Chan and Roth, 2010). \u2022 We replicated each lexical feature by replacing each word with its Brown cluster.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 98,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "2.2"
},
{
"text": "3 Algorithm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "2.2"
},
{
"text": "Our goal is to predict the hidden structure of each sentence based on arbitrary features and constraints. Let x \u2208 X be an input sentence, y \u2208 Y be a candidate structure, and f (x, y ) be the feature vector that characterizes the entire structure. We use the following linear model to predict the most probable structure\u0177 for x:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
{
"text": "\u0177 = argmax_{y' \u2208 Y(x)} f(x, y') \u2022 w (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
{
"text": "where the score of each candidate assignment is defined as the inner product of the feature vector f(x, y') and the feature weights w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
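The linear model of Eq. (1) amounts to a dot product between a sparse feature vector and a weight vector, with prediction as an argmax over candidate structures. A minimal sketch under assumed toy representations (feature `Counter`s and an explicit candidate list, neither of which is from the paper):

```python
from collections import Counter

def score(features, weights):
    """Inner product f(x, y') . w for sparse features.

    features: Counter mapping feature name -> count.
    weights: dict mapping feature name -> weight.
    """
    return sum(count * weights.get(name, 0.0)
               for name, count in features.items())

def predict(x, candidates, extract_features, weights):
    """argmax over candidate structures, as in Eq. (1)."""
    return max(candidates,
               key=lambda y: score(extract_features(x, y), weights))
```

In the actual framework the candidate set is exponentially large, so the argmax is approximated by beam-search rather than enumerated as here.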
{
"text": "Since the structures contain both entity mentions and relations, and we also aim to exploit global features, there is no polynomial-time algorithm to find the best structure. In practice, we apply beam-search to expand partial configurations for the input sentence incrementally, finding the structure with the highest score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "3.1"
},
{
"text": "One main challenge in searching for entity mentions and relations incrementally is the alignment of different assignments. Assignments for the same sentence can have different numbers of entity mentions and relation arcs. The entity mention extraction task is often re-cast as a token-level sequential labeling problem with the BIO or BILOU scheme (Ratinov and Roth, 2009; Florian et al., 2006) . A naive solution to our task is to adopt this strategy by treating each token as a state. However, different assignments for the same sentence can have various mention boundaries. It is unfair to compare the model scores of a partial mention and a complete mention. It is also difficult to synchronize the search process of relations. For example, consider the two hypotheses ending at \"York\" for the same sentence:",
"cite_spans": [
{
"start": 342,
"end": 366,
"text": "(Ratinov and Roth, 2009;",
"ref_id": "BIBREF22"
},
{
"start": 367,
"end": 388,
"text": "Florian et al., 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "Hypothesis 1: Allan/U-PER from New/B-ORG York/I-ORG Stock Exchange (PHYS)\nHypothesis 2: Allan/U-PER from New/B-GPE York/L-GPE Stock Exchange (PHYS)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "The model would be biased towards the incorrect assignment \"New/B-GPE York/L-GPE\" since it can have more informative features as a complete mention (e.g., a binary feature indicating whether the entire mention appears in a GPE gazetteer). Furthermore, the predictions of the two PHYS relations cannot be synchronized since \"New/B-ORG York/I-ORG\" is not yet a complete mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "To tackle these problems, we employ the idea of the semi-Markov chain (Sarawagi and Cohen, 2004) , in which each state corresponds to a segment of the input sequence. They presented a variant of the Viterbi algorithm for exact inference in the semi-Markov chain. We relax the max operation by beam-search, resulting in a segment-based decoder similar to the multiple-beam algorithm in (Zhang and Clark, 2008) . Let d\u0303 be the upper bound of entity mention length. The k-best partial assignments ending at the i-th token can be calculated as:",
"cite_spans": [
{
"start": 66,
"end": 92,
"text": "(Sarawagi and Cohen, 2004)",
"ref_id": "BIBREF26"
},
{
"start": 373,
"end": 396,
"text": "(Zhang and Clark, 2008)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "B[i] = k-BEST_{y' \u2208 {y'[1:i] | y'[1:i\u2212d] \u2208 B[i\u2212d], d = 1...d\u0303}} f(x, y') \u2022 w, where y'[1:i\u2212d]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "stands for a partial configuration ending at the (i\u2212d)-th token, and y'[i\u2212d+1, i] corresponds to the structure of a new segment (i.e., the subsequence x[i\u2212d+1, i] of x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "Our joint decoding algorithm is shown in Figure 2 . For each token index i, it maintains a beam for the partial assignments whose last segments end at the i-th token. There are two types of actions during the search:",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 49,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "Input: input sentence x = (x_1, x_2, ..., x_m); k: beam size; T \u222a {\u22a5}: entity mention type alphabet; R \u222a {\u22a5}: directed relation type alphabet; d_t: max length of a type-t segment, t \u2208 T \u222a {\u22a5}. Output: best configuration \u0177 for x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "1 initialize m empty beams B[1..m]\n2 for i \u2190 1...m do\n3   for t \u2208 T \u222a {\u22a5} do\n4     for d \u2190 1...d_t, y' \u2208 B[i \u2212 d] do\n5       k \u2190 i \u2212 d + 1\n6       B[i] \u2190 B[i] \u222a APPEND(y', t, k, i)\n7   B[i] \u2190 k-BEST(B[i])\n8   for j \u2190 (i \u2212 1)...1 do\n9     buf \u2190 \u2205\n10    for y' \u2208 B[i] do\n11      if HASPAIR(y', i, j) then\n12        for r \u2208 R \u222a {\u22a5} do\n13          buf \u2190 buf \u222a LINK(y', r, i, j)\n14      else\n15        buf \u2190 buf \u222a {y'}\n16    B[i] \u2190 k-BEST(buf)\n17 return B[m][0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "Figure 2: Joint Decoding for Entity Mentions and Relations. HASPAIR(y', i, j) checks if there are two entity mentions in y' that end at token x_i and token x_j, respectively. APPEND(y', t, k, i) appends y' with a type-t segment spanning from x_k to x_i. Similarly, LINK(y', r, i, j) augments y' by assigning a directed relation r to the pair of entity mentions ending at x_i and x_j, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "1. APPEND (Lines 3-7). First, the algorithm enumerates all possible segments (i.e., subsequences) of x ending at the current token with various entity types. A special type of segment is a single token with non-entity label (\u22a5). Each segment is then appended to existing partial assignments in one of the previous beams to form new assignments. Finally the top k results are recorded in the current beam. 2. LINK (Lines 8-16). After each step of APPEND, the algorithm looks backward to link the newly identified entity mentions and previous ones (if any) with relation arcs. At the j-th sub-step, it only considers the previous mention ending at the j-th previous token. Therefore different configurations are guaranteed to have the same number of sub-steps. Finally, all assignments are re-ranked with new relation information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
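The APPEND/LINK search described above can be sketched compactly. The code below is a much-simplified illustration under assumed toy representations (an assignment is a `(segments, relations)` pair, segments are `(start, end, type)` tuples with `None` playing the role of ⊥, and scoring is a caller-supplied function); names like `decode` and `has_pair` are illustrative, not the authors' implementation, and local features only are scored.

```python
def has_pair(segs, i, j):
    """Return the mention pair ending at token indices j and i, if both exist."""
    end_i = next((s for s in segs if s[1] == i and s[2] is not None), None)
    end_j = next((s for s in segs if s[1] == j and s[2] is not None), None)
    return (end_j, end_i) if end_i and end_j else None

def decode(x, k, ent_types, rel_types, max_len, score):
    m = len(x)
    beams = [[] for _ in range(m + 1)]
    beams[0] = [((), ())]  # empty assignment before the first token
    for i in range(1, m + 1):
        # APPEND: extend partial assignments with a segment ending at token i
        for t in ent_types + [None]:          # None = non-entity (single token)
            limit = max_len if t is not None else 1
            for d in range(1, min(limit, i) + 1):
                for segs, rels in beams[i - d]:
                    beams[i].append((segs + ((i - d, i - 1, t),), rels))
        beams[i] = sorted(beams[i], key=score, reverse=True)[:k]
        # LINK: relate the newly ended mention to each earlier mention
        for j in range(i - 1, 0, -1):
            buf = []
            for segs, rels in beams[i]:
                pair = has_pair(segs, i - 1, j - 1)
                if pair:
                    for r in rel_types + [None]:   # None = non-relation
                        buf.append((segs, rels + ((pair, r),)))
                else:
                    buf.append((segs, rels))
            beams[i] = sorted(buf, key=score, reverse=True)[:k]
    return beams[m][0]
```

Because every assignment in beam B[i] covers exactly the first i tokens and every LINK sub-step is taken by all hypotheses, competing assignments stay synchronized even when their mention boundaries differ, which is the point of the segment-based formulation.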
{
"text": "There are m APPEND actions, each followed by at most (i \u2212 1) LINK actions (line 8). Therefore the worst-case time complexity is O(d\u0303 \u00b7 k \u00b7 m\u00b2). (Figure 3 caption: the x-axis and y-axis represent the input sentence and entity types, respectively. The rectangles denote segments with entity types, among which the shaded ones are three competing hypotheses ending at \"1,400\". The solid lines and arrows indicate correct APPEND and LINK actions respectively, while the dashed ones indicate incorrect actions.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "Here we demonstrate a simple but concrete example by considering again the sentence described in Figure 1a. Suppose we are at the token \"1,400\". At this point we can propose multiple entity mentions with various lengths. Assuming \"1,400/PER\", \"1,400/\u22a5\" and \"(employs 1,400)/PER\" are possible assignments, the algorithm appends these new segments to the partial assignments in the beams of the tokens \"employs\" and \"still\", respectively. Figure 3 illustrates this process. For simplicity, only a small part of the search space is presented. The algorithm then links the newly identified mentions to the previous ones in the same configuration. In this example, the only previous mention is \"(tire maker)/ORG\". Finally, \"1,400/PER\" will be preferred by the model since there are more indicative context features for the EMP-ORG relation between \"(tire maker)/ORG\" and \"1,400/PER\".",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 106,
"text": "Figure 1a",
"ref_id": "FIGREF0"
},
{
"start": 444,
"end": 452,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Joint Decoding Algorithm",
"sec_num": "3.1.1"
},
{
"text": "To estimate the feature weights, we use structured perceptron (Collins, 2002) , an extension of the standard perceptron for structured prediction, as the learning framework. Huang et al. (2012) proved the convergence of structured perceptron when inexact search is applied with violation-fixing update methods such as early-update (Collins and Roark, 2004 ). Since we use beam-search in this work, we apply early-update. In addition, we use averaged parameters to reduce overfitting as in (Collins, 2002) . Figure 4 shows the pseudocode for structured perceptron training with early-update. Here BEAMSEARCH is identical to the decoding algorithm described in Figure 2 except that if y', the prefix of the gold standard y, falls out of the beam after an execution of the k-BEST function (lines 7 and 16), then the top assignment z and y' are returned for parameter update. It is worth noting that this can only happen if the gold standard has a segment ending at the current token. For instance, in the example of Figure 1a , B[2] cannot trigger any early-update since the gold standard does not contain any segment ending at the second token. Figure 4 : Perceptron algorithm with beam-search and early-update. y' is the prefix of the gold standard and z is the top assignment.",
"cite_spans": [
{
"start": 62,
"end": 77,
"text": "(Collins, 2002)",
"ref_id": "BIBREF4"
},
{
"start": 174,
"end": 193,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF10"
},
{
"start": 330,
"end": 354,
"text": "(Collins and Roark, 2004",
"ref_id": "BIBREF3"
},
{
"start": 488,
"end": 503,
"text": "(Collins, 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 506,
"end": 514,
"text": "Figure 4",
"ref_id": null
},
{
"start": 658,
"end": 666,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1012,
"end": 1021,
"text": "Figure 1a",
"ref_id": "FIGREF0"
},
{
"start": 1142,
"end": 1150,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Structured-Perceptron Learning",
"sec_num": "3.2"
},
{
"text": "Input: training set D = {(x^(j), y^(j))}_{j=1}^N, maximum iteration number T. Output: model parameters w\n1 initialize w \u2190 0\n2 for t \u2190 1...T do\n3   foreach (x, y) \u2208 D do\n4     (x, y', z) \u2190 BEAMSEARCH(x, y, w)\n5     if z \u2260 y' then\n6       w \u2190 w + f(x, y') \u2212 f(x, z)\n7 return w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured-Perceptron Learning",
"sec_num": "3.2"
},
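The training loop of Figure 4, plus the parameter averaging mentioned in the text, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes a `beam_search(x, y, w)` that returns the gold prefix y' and top assignment z at the point where the gold falls off the beam (or the full sequences if it never does), and sparse feature dicts.

```python
from collections import defaultdict

def train(data, beam_search, extract_features, epochs=5):
    """Averaged structured perceptron with early update (sketch)."""
    w = defaultdict(float)
    total = defaultdict(float)   # running sum of w for averaging
    steps = 0
    for _ in range(epochs):
        for x, y in data:
            y_prefix, z = beam_search(x, y, w)
            if z != y_prefix:    # early update on the violated prefix
                for name, v in extract_features(x, y_prefix).items():
                    w[name] += v
                for name, v in extract_features(x, z).items():
                    w[name] -= v
            steps += 1
            for name, v in w.items():
                total[name] += v
    # averaged parameters reduce overfitting, as in Collins (2002)
    return {name: v / steps for name, v in total.items()}
```

The early update is what makes perceptron learning sound under inexact beam-search: updating on the full (possibly search-error-contaminated) output instead would violate the conditions of Huang et al. (2012).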
{
"text": "Entity type constraints have been shown effective in predicting relations (Roth and Yih, 2007; Chan and Roth, 2010) . We automatically collect a mapping table of permissible entity types for each relation type from our training data. Instead of applying the constraints in post-processing inference, we prune the branches that violate the type constraints during search. This type of pruning can reduce search space as well as make the input for parameter update less noisy. In our experiments, only 7 relation mentions (0.5%) in the dev set and 5 relation mentions (0.3%) in the test set violate the constraints collected from the training data.",
"cite_spans": [
{
"start": 74,
"end": 94,
"text": "(Roth and Yih, 2007;",
"ref_id": "BIBREF25"
},
{
"start": 95,
"end": 115,
"text": "Chan and Roth, 2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Type Constraints",
"sec_num": "3.3"
},
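Collecting the permissible-entity-type table from training data, as described above, is a simple tabulation; pruning then rejects candidate relation arcs whose argument types were never observed. A minimal sketch with illustrative names (not the authors' code):

```python
from collections import defaultdict

def collect_constraints(training_relations):
    """Build the mapping table of permissible argument entity types.

    training_relations: iterable of (rel_type, arg1_type, arg2_type)
    tuples observed in the training data.
    """
    table = defaultdict(set)
    for rel, t1, t2 in training_relations:
        table[rel].add((t1, t2))
    return table

def is_permissible(table, rel, t1, t2):
    """During search, prune LINK hypotheses for which this is False."""
    return (t1, t2) in table.get(rel, set())
```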
{
"text": "An advantage of our framework is that we can easily exploit arbitrary features across the two tasks. This section describes the local features (Section 4.1) and global features (Section 4.2) we developed in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "We design segment-based features to directly evaluate the properties of an entity mention instead of the individual tokens it contains. Let \u0177 be a predicted structure of a sentence x. The entity segments of \u0177 can be expressed as a list of triples (e_1, ..., e_m), where each segment e_i = \u27e8u_i, v_i, t_i\u27e9 is a triple of start index u_i, end index v_i, and entity type t_i. The following is an example of a segment-based feature:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1"
},
{
"text": "f_001(x, \u0177, i) = 1 if x[\u0177.u_i, \u0177.v_i] = \"tire maker\" and \u27e8\u0177.t_{i\u22121}, \u0177.t_i\u27e9 = \u27e8\u22a5, ORG\u27e9; 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1"
},
{
"text": "This feature is triggered if the labels of the (i\u22121)-th and the i-th segments are \"\u22a5, ORG\", and the text of the i-th segment is \"tire maker\". Our segment-based features are described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1"
},
{
"text": "Gazetteer features Entity type of each segment based on matching a number of gazetteers including persons, countries, cities and organizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1"
},
{
"text": "Case features Whether a segment's words are initial-capitalized, all lowercased, or mixed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1"
},
{
"text": "Contextual features Unigrams and bigrams of the text and part-of-speech tags in a segment's contextual window of size 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1"
},
{
"text": "Parsing-based features Features derived from constituent parsing trees, including (a) the phrase type of the lowest common ancestor of the tokens contained in the segment, (b) the depth of the lowest common ancestor, (c) a binary feature indicating if the segment is a base phrase or a suffix of a base phrase, and (d) the head words of the segment and its neighbor phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1"
},
{
"text": "In addition, we convert each triple \u27e8u_i, v_i, t_i\u27e9 to BILOU tags for the tokens it contains to implement token-based features. The token-based mention features and local relation features are identical to those of our pipelined system (Section 2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1"
},
{
"text": "By virtue of the efficient inexact search, we are able to use arbitrary features from the entire structure of\u0177 to capture long-distance dependencies. The following features between related entity mentions are extracted once a new segment is appended during decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Entity Mention Features",
"sec_num": "4.2"
},
{
"text": "Coreference consistency Coreferential entity mentions should be assigned the same entity type. We determine high-recall coreference links between two segments in the same sentence using some simple heuristic rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Entity Mention Features",
"sec_num": "4.2"
},
{
"text": "\u2022 Two segments exactly or partially string match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Entity Mention Features",
"sec_num": "4.2"
},
{
"text": "\u2022 A pronoun (e.g., \"their\",\"it\") refers to previous entity mentions. For example, in \"they have no insurance on their cars\", \"they\" and \"their\" should have the same entity type. \u2022 A relative pronoun (e.g., \"which\",\"that\", and \"who\") refers to the noun phrase it modifies in the parsing tree. For example, in \"the starting kicker is nikita kargalskiy, who may be 5,000 miles from his hometown\", \"nikita kargalskiy\" and \"who\" should both be labeled as persons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Entity Mention Features",
"sec_num": "4.2"
},
{
"text": "Then we encode a global feature to check whether two coreferential segments share the same entity type. This feature is particularly effective for pronouns because their contexts alone are often not informative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Entity Mention Features",
"sec_num": "4.2"
},
{
"text": "Neighbor coherence Neighboring entity mentions tend to have coherent entity types. For example, in \"Barbara Starr was reporting from the Pentagon\", \"Barbara Starr\" and \"Pentagon\" are connected by a dependency link prep_from and thus they are unlikely to be a pair of PER mentions. Two types of neighbor are considered: (i) the first entity mention before the current segment, and (ii) the segment which is connected by a single word or a dependency link with the current segment. We take the entity types of the two segments and the linkage together as a global feature. For instance, \"PER prep_from PER\" is a feature for the above example when \"Barbara Starr\" and \"Pentagon\" are both labeled as PER mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Entity Mention Features",
"sec_num": "4.2"
},
{
"text": "Part-of-whole consistency If an entity mention is semantically part of another mention (connected by a prep_of dependency link), they should be assigned the same entity type. For example, in \"some of Iraq's exiles\", \"some\" and \"exiles\" are both PER mentions; in \"one of the town's two meat-packing plants\", \"one\" and \"plants\" are both FAC mentions; in \"the rest of America\", \"rest\" and \"America\" are both GPE mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Entity Mention Features",
"sec_num": "4.2"
},
{
"text": "Relation arcs can also share inter-dependencies or obey soft constraints. We extract the following relation-centric global features when a new relation hypothesis is made during decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Relation Features",
"sec_num": "4.3"
},
{
"text": "Role coherence If an entity mention is involved in multiple relations with the same type, then its roles should be coherent. For example, a PER mention is unlikely to have more than one employer. However, a GPE mention can be a physical location for multiple entity mentions. We combine the relation type and the entity mention's argument roles as a global feature, as shown in Figure 5a .",
"cite_spans": [],
"ref_spans": [
{
"start": 378,
"end": 387,
"text": "Figure 5a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Global Relation Features",
"sec_num": "4.3"
},
{
"text": "Triangle constraint Multiple entity mentions are unlikely to be fully connected with the same relation type. We use a negative feature to penalize any configuration that contains this type of structure. An example is shown in Figure 5b .",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 235,
"text": "Figure 5b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Global Relation Features",
"sec_num": "4.3"
},
{
"text": "Inter-dependent compatibility If two entity mentions are connected by a dependency link, they tend to have compatible relations with other entities. For example, in Figure 5c , the conj and dependency link between \"Somalia\" and \"Kosovo\" indicates they may share the same relation type with the third entity mention \"forces\".",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 174,
"text": "Figure 5c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Global Relation Features",
"sec_num": "4.3"
},
{
"text": "Neighbor coherence Similar to the entity mention neighbor coherence feature, we also combine the types of two neighbor relations in the same sentence as a bigram feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Relation Features",
"sec_num": "4.3"
},
{
"text": "Most previous work on ACE relation extraction has reported results on ACE'04 data set. As we will show later in our experiments, ACE'05 made significant improvement on both relation type definition and annotation quality. Therefore we present the overall performance on ACE'05 data. We removed two small subsets in informal genres -cts and un, and then randomly split the remaining 511 documents into 3 parts: 351 for training, 80 for development, and the rest 80 for blind test. In order to compare with state-of-the-art we also performed the same 5-fold cross-validation on bnews and nwire subsets of ACE'04 corpus as in previous work. The statistics of these data sets are summarized in Table 1 . We ran the Stanford CoreNLP toolkit 5 to automatically recover the true cases for lowercased documents. We use the standard F 1 measure to evaluate the performance of entity mention extraction and relation extraction. An entity mention is considered correct if its entity type is correct and the offsets of its mention head are correct. A relation mention is considered correct if its relation type is correct, and the head offsets of two entity mention arguments are both correct. As in Chan and Roth (2011), we excluded the DISC relation type, and removed relations in the system output which are implicitly correct via coreference links for fair comparison. Furthermore, we combine these two criteria to evaluate the performance of end-to-end entity mention and relation extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 690,
"end": 697,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data and Scoring Metric",
"sec_num": "5.1"
},
{
"text": "In general a larger beam size can yield better performance but increase training and decoding time. As a tradeoff, we set the beam size as 8 throughout the experiments. Figure 6 shows the learning curves on the development set, and compares the performance with and without global features. From these figures we can clearly see that global features consistently improve the extraction performance of both tasks. We set the number of training iterations as 22 based on these curves. Table 2 shows the overall performance of various methods on the ACE'05 test data. We compare our proposed method (Joint w/ Global) with the pipelined system (Pipeline), the joint model with only local features (Joint w/ Local), and two human annotators who annotated 73 documents in ACE'05 corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 177,
"text": "Figure 6",
"ref_id": "FIGREF2"
},
{
"start": 483,
"end": 490,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Development Results",
"sec_num": "5.2"
},
{
"text": "We can see that our approach significantly outperforms the pipelined approach for both tasks. As a real example, for the partial sentence \"a marcher from Florida\" from the test data, the pipelined approach failed to identify \"marcher\" as a PER mention, and thus missed the GEN-AFF relation between \"marcher\" and \"Florida\". Our joint model correctly identified the entity mentions and their relation. Figure 7 shows the details when the joint model is applied to this sentence. At the token \"marcher\", the top hypothesis in the beam is \" \u22a5, \u22a5 \", while the correct one is ranked second best. After the decoder processes the token \"Florida\", the correct hypothesis is promoted to the top in the beam by the Neighbor Coherence features for PER-GPE pair. Furthermore, after Figure 7: Two competing hypotheses for \"a marcher from Florida\" during decoding.",
"cite_spans": [],
"ref_spans": [
{
"start": 400,
"end": 408,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "5.3"
},
{
"text": "linking the two mentions by GEN-AFF relation, the ranking of the incorrect hypothesis \" \u22a5, \u22a5 \" is dropped to the 4-th place in the beam, resulting in a large margin from the correct hypothesis. The human F 1 score on end-to-end relation extraction is only about 70%, which indicates it is a very challenging task. Furthermore, the F 1 score of the inter-annotator agreement is 51.9%, which is only 2.4% above that of our proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "5.3"
},
{
"text": "Compared to human annotators, the bottleneck of automatic approaches is the low recall of relation extraction. Among the 631 remaining missing relations, 318 (50.3%) of them were caused by missing entity mention arguments. A lot of nominal mention heads rarely appear in the training data, such as persons (\"supremo\", \"shepherd\", \"oligarchs\", \"rich\"), geo-political entity mentions (\"stateside\"), facilities (\"roadblocks\", \"cells\"), weapons (\"sim lant\", \"nukes\") and vehicles (\"prams\"). In addition, relations are often implicitly expressed in a variety of forms. Some examples are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "5.3"
},
{
"text": "\u2022 \"Rice has been chosen by President Bush to become the new Secretary of State\" indicates \"Rice\" has a PER-SOC relation with \"Bush\". \u2022 \"U.S. troops are now knocking on the door of Baghdad\" indicates \"troops\" has a PHYS relation with \"Baghdad\". \u2022 \"Russia and France sent planes to Baghdad\" indicates \"Russia\" and \"France\" are involved in an ART relation with \"planes\" as owners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "5.3"
},
{
"text": "In addition to contextual features, deeper semantic knowledge is required to capture such implicit semantic relations. Table 3 compares the performance on ACE'04 corpus. For entity mention extraction, our joint model achieved 79.7% on 5-fold cross-validation, which is comparable with the best F 1 score 79.2% reported by (Florian et al., 2006) on singlefold. However, Florian et al. (2006) used some gazetteers and the output of other Information Extraction (IE) models as additional features, which provided significant gains ((Florian et al., 2004) ). Since these gazetteers, additional data sets and external IE models are all not publicly available, it is not fair to directly compare our joint model with their results.",
"cite_spans": [
{
"start": 322,
"end": 344,
"text": "(Florian et al., 2006)",
"ref_id": "BIBREF7"
},
{
"start": 369,
"end": 390,
"text": "Florian et al. (2006)",
"ref_id": "BIBREF7"
},
{
"start": 528,
"end": 551,
"text": "((Florian et al., 2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "5.3"
},
{
"text": "For end-to-end entity mention and relation extraction, both the joint approach and the pipelined baseline outperform the best results reported by (Chan and Roth, 2011) under the same setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with State-of-the-art",
"sec_num": "5.4"
},
{
"text": "Entity mention extraction (e.g., (Florian et al., 2004; Florian et al., 2006; Florian et al., 2010; Zitouni and Florian, 2008; Ohta et al., 2012) ) and relation extraction (e.g., (Reichartz et al., 2009; Sun et al., 2011; Jiang and Zhai, 2007; Bunescu and Mooney, 2005; Zhao and Grishman, 2005; Culotta and Sorensen, 2004; Zhou et al., 2007; Qian and Zhou, 2010; Qian et al., 2008; Chan and Roth, 2011; Plank and Moschitti, 2013) Table 3 : 5-fold cross-validation on ACE'04 corpus. Bolded scores indicate highly statistical significant improvement as measured by paired t-test (p < 0.01) usually studied separately. Most relation extraction work assumed that entity mention boundaries and/or types were given. Chan and Roth (2011) reported the best results using predicted entity mentions. Some previous work used relations and entity mentions to enhance each other in joint inference frameworks, including re-ranking (Ji and Grishman, 2005) , Integer Linear Programming (ILP) (Roth and Yih, 2004; Roth and Yih, 2007; Yang and Cardie, 2013) , and Card-pyramid Parsing (Kate and Mooney, 2010). All these work noted the advantage of exploiting crosscomponent interactions and richer knowledge. However, they relied on models separately learned for each subtask. As a key difference, our approach jointly extracts entity mentions and relations using a single model, in which arbitrary soft constraints can be easily incorporated. Some other work applied probabilistic graphical models for joint extraction (e.g., (Singh et al., 2013; Yu and Lam, 2010) ). By contrast, our work employs an efficient joint search algorithm without modeling joint distribution over numerous variables, therefore it is more flexible and computationally simpler. In addition, (Singh et al., 2013) used goldstandard mention boundaries.",
"cite_spans": [
{
"start": 33,
"end": 55,
"text": "(Florian et al., 2004;",
"ref_id": "BIBREF6"
},
{
"start": 56,
"end": 77,
"text": "Florian et al., 2006;",
"ref_id": "BIBREF7"
},
{
"start": 78,
"end": 99,
"text": "Florian et al., 2010;",
"ref_id": "BIBREF8"
},
{
"start": 100,
"end": 126,
"text": "Zitouni and Florian, 2008;",
"ref_id": "BIBREF35"
},
{
"start": 127,
"end": 145,
"text": "Ohta et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 179,
"end": 203,
"text": "(Reichartz et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 204,
"end": 221,
"text": "Sun et al., 2011;",
"ref_id": "BIBREF28"
},
{
"start": 222,
"end": 243,
"text": "Jiang and Zhai, 2007;",
"ref_id": "BIBREF12"
},
{
"start": 244,
"end": 269,
"text": "Bunescu and Mooney, 2005;",
"ref_id": "BIBREF0"
},
{
"start": 270,
"end": 294,
"text": "Zhao and Grishman, 2005;",
"ref_id": "BIBREF32"
},
{
"start": 295,
"end": 322,
"text": "Culotta and Sorensen, 2004;",
"ref_id": "BIBREF5"
},
{
"start": 323,
"end": 341,
"text": "Zhou et al., 2007;",
"ref_id": "BIBREF34"
},
{
"start": 342,
"end": 362,
"text": "Qian and Zhou, 2010;",
"ref_id": "BIBREF20"
},
{
"start": 363,
"end": 381,
"text": "Qian et al., 2008;",
"ref_id": "BIBREF21"
},
{
"start": 382,
"end": 402,
"text": "Chan and Roth, 2011;",
"ref_id": "BIBREF2"
},
{
"start": 403,
"end": 429,
"text": "Plank and Moschitti, 2013)",
"ref_id": "BIBREF19"
},
{
"start": 918,
"end": 941,
"text": "(Ji and Grishman, 2005)",
"ref_id": "BIBREF11"
},
{
"start": 977,
"end": 997,
"text": "(Roth and Yih, 2004;",
"ref_id": "BIBREF24"
},
{
"start": 998,
"end": 1017,
"text": "Roth and Yih, 2007;",
"ref_id": "BIBREF25"
},
{
"start": 1018,
"end": 1040,
"text": "Yang and Cardie, 2013)",
"ref_id": "BIBREF29"
},
{
"start": 1510,
"end": 1530,
"text": "(Singh et al., 2013;",
"ref_id": "BIBREF27"
},
{
"start": 1531,
"end": 1548,
"text": "Yu and Lam, 2010)",
"ref_id": "BIBREF30"
},
{
"start": 1751,
"end": 1771,
"text": "(Singh et al., 2013)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 430,
"end": 437,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Our previous work (Li et al., 2013) used structured perceptron with token-based decoder to jointly predict event triggers and arguments based on the assumption that entity mentions and other argument candidates are given as part of the input. In this paper, we solve a more challenging problem: take raw texts as input and identify the boundaries, types of entity mentions and relations all together in a single model. Sarawagi and Cohen (2004) proposed a segment-based CRFs model for name tagging. Zhang and Clark (2008) used a segment-based decoder for word segmentation and pos tagging. We extended the similar idea to our end-to-end task by incrementally predicting relations along with entity mention segments.",
"cite_spans": [
{
"start": 18,
"end": 35,
"text": "(Li et al., 2013)",
"ref_id": "BIBREF16"
},
{
"start": 419,
"end": 444,
"text": "Sarawagi and Cohen (2004)",
"ref_id": "BIBREF26"
},
{
"start": 499,
"end": 521,
"text": "Zhang and Clark (2008)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this paper we introduced a new architecture for more powerful end-to-end entity mention and relation extraction. For the first time, we addressed this challenging task by an incremental beam-search algorithm in conjunction with structured perceptron. While detecting mention boundaries jointly with other components raises the challenge of synchronizing multiple assignments in the same beam, a simple yet effective segmentbased decoder is adopted to solve this problem. More importantly, we exploited a set of global features based on linguistic and logical properties of the two tasks to predict more coherent structures. Experiments demonstrated our approach significantly outperformed pipelined approaches for both tasks and dramatically advanced state-of-the-art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "In future work, we plan to explore more soft and hard constraints to reduce search space as well as improve accuracy. In addition, we aim to incorporate other IE components such as event extraction into the joint model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "Throughout this paper we refer to relation mention as relation since we do not consider relation mention coreference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The same relation type with opposite directions is considered to be two classes in R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.stanford.edu/software/corenlp.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the three anonymous reviewers for their insightful comments. This work was supported by the U. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A shortest path dependency kernel for relation extraction",
"authors": [
{
"first": "C",
"middle": [],
"last": "Razvan",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Bunescu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. HLT/EMNLP",
"volume": "",
"issue": "",
"pages": "724--731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation extrac- tion. In Proc. HLT/EMNLP, pages 724-731.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Exploiting background knowledge for relation extraction",
"authors": [
{
"first": "Yee",
"middle": [],
"last": "Seng",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "152--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Seng Chan and Dan Roth. 2010. Exploiting back- ground knowledge for relation extraction. In Proc. COLING, pages 152-160.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Exploiting syntactico-semantic structures for relation extraction",
"authors": [
{
"first": "Yee",
"middle": [],
"last": "Seng",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "551--560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extrac- tion. In Proc. ACL, pages 551-560.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Incremental parsing with the perceptron algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "111--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Brian Roark. 2004. Incremen- tal parsing with the perceptron algorithm. In Proc. ACL, pages 111-118.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proc. EMNLP, pages 1-8.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dependency tree kernels for relation extraction",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "423--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proc. ACL, pages 423-429.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A statistical model for multilingual entity detection and tracking",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Abraham",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Nanda",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Nicolov",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. HLT-NAACL",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Florian, Hany Hassan, Abraham Ittycheriah, Hongyan Jing, Nanda Kambhatla, Xiaoqiang Luo, Nicolas Nicolov, and Salim Roukos. 2004. A sta- tistical model for multilingual entity detection and tracking. In Proc. HLT-NAACL, pages 1-8.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Factorizing complex models: A case study in mention detection",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
}
],
"year": 2006,
"venue": "Nanda Kambhatla, and Imed Zitouni",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Florian, Hongyan Jing, Nanda Kambhatla, and Imed Zitouni. 2006. Factorizing complex models: A case study in mention detection. In Proc. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Improving mention detection robustness to noisy input",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "John",
"middle": [
"F"
],
"last": "Pitrelli",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Imed",
"middle": [],
"last": "Zitouni",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "335--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Florian, John F. Pitrelli, Salim Roukos, and Imed Zitouni. 2010. Improving mention detection robust- ness to noisy input. In Proc. EMNLP, pages 335- 345.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dynamic programming for linear-time incremental parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1077--1086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and Kenji Sagae. 2010. Dynamic pro- gramming for linear-time incremental parsing. In ACL, pages 1077-1086.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Structured perceptron with inexact search",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Suphan",
"middle": [],
"last": "Fayong",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. HLT-NAACL",
"volume": "",
"issue": "",
"pages": "142--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proc. HLT-NAACL, pages 142-151.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving name tagging by reference resolution and relation detection",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "411--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Ji and Ralph Grishman. 2005. Improving name tagging by reference resolution and relation detec- tion. In Proc. ACL, pages 411-418.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A systematic exploration of the feature space for relation extraction",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Jiang and ChengXiang Zhai. 2007. A systematic exploration of the feature space for relation extrac- tion. In Proc. HLT-NAACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Combining lexical, syntactic, and semantic features with maximum entropy models for information extraction",
"authors": [
{
"first": "Nanda",
"middle": [],
"last": "Kambhatla",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "178--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nanda Kambhatla. 2004. Combining lexical, syntac- tic, and semantic features with maximum entropy models for information extraction. In Proc. ACL, pages 178-181.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Joint entity and relation extraction using card-pyramid parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rohit",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Kate",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "203--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit J. Kate and Raymond Mooney. 2010. Joint en- tity and relation extraction using card-pyramid pars- ing. In Proc. ACL, pages 203-212.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proc. ICML, pages 282-289.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Joint event extraction via structured prediction with global features",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "73--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global fea- tures. In Proc. ACL, pages 73-82.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine De",
"middle": [],
"last": "Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine De Marneffe, Bill Maccartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proc. LREC, pages 449,454.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Open-domain anatomical entity mention detection",
"authors": [
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Jun'ichi Tsujii",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. ACL Workshop on Detecting Structure in Scholarly Discourse",
"volume": "",
"issue": "",
"pages": "27--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomoko Ohta, Sampo Pyysalo, Jun'ichi Tsujii, and Sophia Ananiadou. 2012. Open-domain anatomi- cal entity mention detection. In Proc. ACL Work- shop on Detecting Structure in Scholarly Discourse, pages 27-36.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Embedding semantic similarity in tree kernels for domain adaptation of relation extraction",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "1498--1507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank and Alessandro Moschitti. 2013. Em- bedding semantic similarity in tree kernels for do- main adaptation of relation extraction. In Proc. ACL, pages 1498-1507.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Clusteringbased stratified seed sampling for semi-supervised relation classification",
"authors": [
{
"first": "Longhua",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "346--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longhua Qian and Guodong Zhou. 2010. Clustering- based stratified seed sampling for semi-supervised relation classification. In Proc. EMNLP, pages 346- 355.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Exploiting constituent dependencies for tree kernel-based semantic relation extraction",
"authors": [
{
"first": "Longhua",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Fang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Peide",
"middle": [],
"last": "Qian",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "697--704",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longhua Qian, Guodong Zhou, Fang Kong, Qiaoming Zhu, and Peide Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction. In Proc. COLING, pages 697-704.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Design challenges and misconceptions in named entity recognition",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. CONLL",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proc. CONLL, pages 147-155.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Composite kernels for relation extraction",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Reichartz",
"suffix": ""
},
{
"first": "Hannes",
"middle": [],
"last": "Korte",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Paass",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. ACL-IJCNLP (Short Papers)",
"volume": "",
"issue": "",
"pages": "365--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Reichartz, Hannes Korte, and Gerhard Paass. 2009. Composite kernels for relation extraction. In Proc. ACL-IJCNLP (Short Papers), pages 365-368.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A linear programming formulation for global inference in natural language tasks",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Roth and Wen-tau Yih. 2004. A linear program- ming formulation for global inference in natural lan- guage tasks. In Proc. CoNLL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Global inference for entity and relation identification via a linear programming formulation",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2007,
"venue": "Introduction to Statistical Relational Learning. MIT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Roth and Wen-tau Yih. 2007. Global inference for entity and relation identification via a linear programming formulation. In Introduction to Statistical Relational Learning. MIT.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Semi-Markov conditional random fields for information extraction",
"authors": [
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunita Sarawagi and William W. Cohen. 2004. Semi-Markov conditional random fields for information extraction. In Proc. NIPS.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Joint inference of entities, relations, and coreference",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Jiaping",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. CIKM Workshop on Automated Knowledge Base Construction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Singh, Sebastian Riedel, Brian Martin, Jiaping Zheng, and Andrew McCallum. 2013. Joint inference of entities, relations, and coreference. In Proc. CIKM Workshop on Automated Knowledge Base Construction.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Semi-supervised relation extraction with large-scale word clustering",
"authors": [
{
"first": "Ang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "521--529",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ang Sun, Ralph Grishman, and Satoshi Sekine. 2011. Semi-supervised relation extraction with large-scale word clustering. In Proc. ACL, pages 521-529.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Joint inference for fine-grained opinion extraction",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "1640--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proc. ACL, pages 1640-1649.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach",
"authors": [
{
"first": "Xiaofeng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. COLING (Posters)",
"volume": "",
"issue": "",
"pages": "1399--1407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Proc. COLING (Posters), pages 1399-1407.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Joint word segmentation and POS tagging using a single perceptron",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "1147--1157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2008. Joint word segmentation and POS tagging using a single perceptron. In Proc. ACL, pages 1147-1157.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Extracting relations with integrated information using kernel methods",
"authors": [
{
"first": "Shubin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "419--426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shubin Zhao and Ralph Grishman. 2005. Extracting relations with integrated information using kernel methods. In Proc. ACL, pages 419-426.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Exploring various knowledge in relation extraction",
"authors": [
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "427--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guodong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proc. ACL, pages 427-434.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Tree kernel-based relation extraction with context-sensitive structured parse tree information",
"authors": [
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dong-Hong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "728--736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guodong Zhou, Min Zhang, Dong-Hong Ji, and Qiaoming Zhu. 2007. Tree kernel-based relation extraction with context-sensitive structured parse tree information. In Proc. EMNLP-CoNLL, pages 728-736.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Mention detection crossing the language barrier",
"authors": [
{
"first": "Imed",
"middle": [],
"last": "Zitouni",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "600--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imed Zitouni and Radu Florian. 2008. Mention detection crossing the language barrier. In Proc. EMNLP, pages 600-609.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "End-to-End Entity Mention and Relation Extraction.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "where d\u0302 is the upper bound of segment length. Example of decoding steps.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Learning Curves on Development Set.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF2": {
"text": "Data Sets.",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF4": {
"text": "Overall performance on ACE'05 corpus.",
"content": "<table><tr><td>steps</td><td>hypotheses</td><td>rank</td></tr><tr><td>(a)</td><td>\u27e8a ? marcher ? \u27e9</td><td>1</td></tr><tr><td/><td>\u27e8a ? marcher PER \u27e9</td><td>2</td></tr><tr><td>(b)</td><td>\u27e8a ? marcher ? from ? \u27e9</td><td>1</td></tr><tr><td/><td>\u27e8a ? marcher PER from ? \u27e9</td><td>4</td></tr><tr><td>(c)</td><td>\u27e8a ? marcher PER from ? Florida GPE \u27e9</td><td>1</td></tr><tr><td/><td>\u27e8a ? marcher ? from ? Florida GPE \u27e9</td><td>2</td></tr><tr><td>(d)</td><td>\u27e8a GEN-AFF</td><td>1</td></tr><tr><td/><td>\u27e8a</td><td/></tr></table>",
"html": null,
"type_str": "table",
"num": null
}
}
}
}