{
"paper_id": "D11-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:33:03.631515Z"
},
"title": "Fast and Robust Joint Models for Biomedical Event Extraction",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": "",
"affiliation": {},
"email": "riedel@cs.umass.edu"
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": "",
"affiliation": {},
"email": "mccallum@cs.umass.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Extracting biomedical events from literature has attracted much recent attention. The bestperforming systems so far have been pipelines of simple subtask-specific local classifiers. A natural drawback of such approaches are cascading errors introduced in early stages of the pipeline. We present three joint models of increasing complexity designed to overcome this problem. The first model performs joint trigger and argument extraction, and lends itself to a simple, efficient and exact inference algorithm. The second model captures correlations between events, while the third model ensures consistency between arguments of the same event. Inference in these models is kept tractable through dual decomposition. The first two models outperform the previous best joint approaches and are very competitive with respect to the current state-of-theart. The third model yields the best results reported so far on the BioNLP 2009 shared task, the BioNLP 2011 Genia task and the BioNLP 2011 Infectious Diseases task.",
"pdf_parse": {
"paper_id": "D11-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "Extracting biomedical events from literature has attracted much recent attention. The bestperforming systems so far have been pipelines of simple subtask-specific local classifiers. A natural drawback of such approaches are cascading errors introduced in early stages of the pipeline. We present three joint models of increasing complexity designed to overcome this problem. The first model performs joint trigger and argument extraction, and lends itself to a simple, efficient and exact inference algorithm. The second model captures correlations between events, while the third model ensures consistency between arguments of the same event. Inference in these models is kept tractable through dual decomposition. The first two models outperform the previous best joint approaches and are very competitive with respect to the current state-of-theart. The third model yields the best results reported so far on the BioNLP 2009 shared task, the BioNLP 2011 Genia task and the BioNLP 2011 Infectious Diseases task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Whenever we advance our scientific understanding of the world, we seek to publish our findings. The result is a vast and ever-expanding body of natural language text that is becoming increasingly difficult to leverage. This is particularly true in the context of life sciences, where large quantities of biomedical articles are published on a daily basis. To support tasks such data mining, search and visualization, there is a clear need for structured representations of the knowledge these articles convey. This is indicated by a large number of public databases with content ranging from simple protein-protein interactions to complex pathways. To increase coverage of such databases, and to keep up with the rate of publishing, we need to automatically extract structured representations from biomedical text-a process often referred to as biomedical text mining.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One major focus of biomedical text mining has been the extraction of named entities, such genes or gene products, and of flat binary relations between such entities, such as protein-protein interactions. However, in recent years there has also been an increasing interest in the extraction of biomedical events and their causal relations. This gave rise to the BioNLP 2009 and 2011 shared tasks which challenged participants to gather such events from biomedical text (Kim et al., 2009; Kim et al., 2011) . Notably, these events can be complex and recursive: they may have several arguments, and some of the arguments may be events themselves.",
"cite_spans": [
{
"start": 468,
"end": 486,
"text": "(Kim et al., 2009;",
"ref_id": "BIBREF4"
},
{
"start": 487,
"end": 504,
"text": "Kim et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Current state-of-the-art event extractors follow the same architectural blueprint and divide the extraction process into a pipeline of three stages (Bj\u00f6rne et al., 2009; Miwa et al., 2010c) . First they predict a set of candidate event trigger words (say, tokens 2, 5 and 6 in figure 1), then argument mentions are attached to these triggers (say, token 4 for trigger 2). The final stage decides how arguments are shared between events-compare how one event subsumes all arguments of trigger 6 in figure 1, while two events share the three arguments of trigger 4 in figure 2. This architecture is prone to cascading errors: If we miss a trigger in the first stage, we will never be able to extract the full event 1 ... the phosphorylation of TRAF2 inhibits binding to the CD40 cytoplasmic domain ... it concerns. Current systems attempt to tackle this problem by passing several candidates to the next stage. However, this tends to increase the false positive rate. In fact, Miwa et al. (2010c) observe that 30% of their errors stem from this type of ad-hoc module communication.",
"cite_spans": [
{
"start": 148,
"end": 169,
"text": "(Bj\u00f6rne et al., 2009;",
"ref_id": "BIBREF0"
},
{
"start": 170,
"end": 189,
"text": "Miwa et al., 2010c)",
"ref_id": "BIBREF15"
},
{
"start": 975,
"end": 994,
"text": "Miwa et al. (2010c)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Joint models have been proposed to overcome this problem (Poon and Vanderwende, 2010; . However, besides not being as accurate as their pipelined competitors, mostly because they do not yet exploit the rich set of features used by Miwa et al. (2010b) and Bj\u00f6rne et al. (2009) , they also suffer from the complexity of inference. For example, to remain tractable, the best joint system so far (Poon and Vanderwende, 2010) works with a simplified representation of the problem in which certain features are harder to capture, employs local search without certificates of optimality, and furthermore requires a 32-core cluster for quick train-test cycles. Existing joint models also rely on heuristics when it comes to deciding which arguments share the same event. Contrast this with the best current pipeline (Miwa et al., 2010c; Miwa et al., 2010b) which uses a classifier for this task.",
"cite_spans": [
{
"start": 57,
"end": 85,
"text": "(Poon and Vanderwende, 2010;",
"ref_id": "BIBREF17"
},
{
"start": 231,
"end": 250,
"text": "Miwa et al. (2010b)",
"ref_id": "BIBREF14"
},
{
"start": 255,
"end": 275,
"text": "Bj\u00f6rne et al. (2009)",
"ref_id": "BIBREF0"
},
{
"start": 392,
"end": 420,
"text": "(Poon and Vanderwende, 2010)",
"ref_id": "BIBREF17"
},
{
"start": 808,
"end": 828,
"text": "(Miwa et al., 2010c;",
"ref_id": "BIBREF15"
},
{
"start": 829,
"end": 848,
"text": "Miwa et al., 2010b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "E1:Phosphorylation",
"sec_num": null
},
{
"text": "We present a family of event extraction models that address the aforementioned problems. The first model jointly predicts triggers and arguments. Notably, the highest scoring event structure under this model can be found efficiently in O (mn) time where m is the number of trigger candidates, and n the number of argument candidates. This is only slightly slower than the O (m n) runtime of a pipeline, where m is the number of trigger candidates as filtered by the first stage. We achieve these guarantees through a novel algorithm that jointly picks best trigger label and arguments on a per-token basis. Remarkably, it takes roughly as much time to train this model on one core as the model of Poon and Vanderwende (2010) on 32 cores, and leads to better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E1:Phosphorylation",
"sec_num": null
},
{
"text": "The second model enforces additional constraints that ensure consistency between events in hierarchical regulation structures. While inference in this model is more complicated, we show how dual decomposition (Komodakis et al., 2007; can be used to efficiently find exact solutions for a large fraction of problems.",
"cite_spans": [
{
"start": 209,
"end": 233,
"text": "(Komodakis et al., 2007;",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "E1:Phosphorylation",
"sec_num": null
},
{
"text": "Our third model includes the first two, and explicitly captures which arguments are part in the same event-the third stage of existing pipelines. Due to a complex coupling between this model and the first two, inference here requires a projected version of the sub-gradient technique demonstrated by .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E1:Phosphorylation",
"sec_num": null
},
{
"text": "When evaluated on the BioNLP 2009 shared task, the first two models outperform the previous best joint approaches and are competitive when compared to current state-of-the-art. With 57.4 F1 on the test set, the third model yields the best results reported so far with a 1.1 F1 margin to the results of Miwa et al. (2010b) . For the BioNLP 2011 Genia task 1 and the BioNLP 2011 Infectious Diseases task, Model 3 yields the second-best and best results reported so far. The second-best results are achieved with Model 3 as is , the best results when using Stanford event predictions as input features . The margins between Model 3 and the best runner-ups range from 1.9 F1 to 2.8 F1.",
"cite_spans": [
{
"start": 302,
"end": 321,
"text": "Miwa et al. (2010b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "E1:Phosphorylation",
"sec_num": null
},
{
"text": "In the following we will first introduce biomedical event extraction and our notation. Then we go on to present our models and their inference routines. We present related work, show our empirical evaluation, and conclude. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "E1:Phosphorylation",
"sec_num": null
},
{
"text": "Binding Binding Theme Theme Theme Theme Theme Theme Theme 1 2 3 4 5 6 7 8 Figure 2: Two binding events with identical trigger. The projection graph does not change even if both events are merged.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 89,
"text": "Theme Theme Theme Theme Theme Theme Theme 1 2 3 4 5 6 7 8",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Grb2 can be coimmunoprecipitated with Sos1 and Sos2",
"sec_num": null
},
{
"text": "By bio-molecular event we mean a change of state of one or more bio-molecules. Our task is to extract structured information about such events from natural language text. More concretely, let us consider part (a) of figure 1. We see a snippet of text from a biomedical abstract, and the three events that can be extracted from it. We will use these to characterize the types of events we ought to extract, as defined by the 2009 BioNLP shared task. Note that for the shared task, protein mentions are given by the task organizers and hence do not need to be extracted. The event E1 in the figure refers to a Phosphorylation of the TRAF2 protein. It is an instance of a set of simple events that describe changes to a single gene or gene product. Other members of this set are: Expression, Transcription, Localization, and Catabolism. Each of these events has to have exactly one theme, the protein of which a state change is described. A labelled edge in figure 1a) shows that TRAF2 is the theme of E1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biomedical Event Extraction",
"sec_num": "2"
},
{
"text": "Event E3 is a Binding of TRAF2 and CD40. Binding events are particular in that they may have more than one theme, as there can be several biomolecules associated in a binding structure. This is in fact the case for E3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biomedical Event Extraction",
"sec_num": "2"
},
{
"text": "In the top-center of figure 1a) we see the Regulation event E2. Such events describe regulatory or causal relations between events. Other instances of this type of events are: Positive Regulation and Negative Regulation. Regulations have to have exactly one theme; this theme can a be protein or, as in our case, another event. Regulations may also have zero or one cause arguments that denote events or proteins which trigger the regulation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biomedical Event Extraction",
"sec_num": "2"
},
{
"text": "In the BioNLP shared task, we are also asked to find a trigger (or clue) token for each event. This token grounds the event in text and allows users to quickly validate extracted events. For example, the trigger for event E2 is \"inhibit\", as indicated by a dashed line.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Biomedical Event Extraction",
"sec_num": "2"
},
{
"text": "To formulate the search for event structures of the form shown in figure 1a) as an optimization problem, it will be convenient to represent them through a set of binary variables. We introduce such a representation, inspired by previous work Bj\u00f6rne et al., 2009) and based on a projection of events to a graph structure over tokens, as seen figure 1b).",
"cite_spans": [
{
"start": 242,
"end": 262,
"text": "Bj\u00f6rne et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 66,
"end": 76,
"text": "figure 1a)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": "Consider sentence x and a set of candidate trigger tokens, denoted by Trig (x). We label each candidate i with the event type it is a trigger for, or None if it is not a trigger. This decision is represented through a set of binary variables e i,t , one for each possible event type t. In our example we have e 6,Binding = 1. The set of possible event types will be denoted as T , the regulation event types as T Reg def = {PosReg, NegReg, Reg} and its complement as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": "T \u00acreg def = T \\ T Reg .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": "For each candidate trigger i we consider the arguments of all events that have i as trigger. Each argument a will either be an event itself, or a protein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": "For events we add a labelled edge between i and the trigger j of a. For proteins we add an edge between i and the syntactic head j of the protein mention. In both cases we label the edge i \u2192 j with the role of the argument a. The edge is represented through a binary variable a i,j,r , where r \u2208 R is the argument role and R def = {Theme, Cause, None}. The role None is active whenever no Theme or Cause role is present. In our example we get, among others, a 2,4,Theme = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": "So far our representation is equivalent to mappings in previous work Bj\u00f6rne et al., 2009) and hence shares their main shortcoming: we cannot differentiate between two (or more) binding events with the same trigger but different arguments, or one binding event with several arguments. Consider, for example, the arguments of trigger 6 in figure 1b) that are all subsumed in a single event. By contrast, the arguments of trigger 4 shown in figure 2 are split between two events.",
"cite_spans": [
{
"start": 69,
"end": 89,
"text": "Bj\u00f6rne et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": "Previous work has resolved this ambiguity through ad-hoc rules (Bj\u00f6rne et al., 2009) or with a post-processing classifier (Miwa et al., 2010c) . We propose to augment the graph representation through edges between pairs of proteins that are themes in the same binding event. For two protein tokens p and q we represent this edge through the binary variable b p,q . Hence, in figure 1b) we have b 4,9 = 1, whereas for figure 2 we get b 1,6 = b 1,8 = 1 but b 6,8 = 0. By explicitly modeling such \"sibling\" edges we not only minimize the need for postprocessing. We can also improve attachment decisions akin to second order models in dependency parsing (McDonald and Pereira, 2006) . Note that while merely introducing such variables is easy, enforcing consistency between them and the e i,t and a i,j,r variables is not. We address this in section 3.3.1. Reconstruction of events from solutions (e, a, b) can be done almost exactly as described by Bj\u00f6rne et al. (2009) . However, while they group binding arguments according to ad-hoc rules based on dependency paths from trigger to argument, we simply query the variables b p,q .",
"cite_spans": [
{
"start": 63,
"end": 84,
"text": "(Bj\u00f6rne et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 122,
"end": 142,
"text": "(Miwa et al., 2010c)",
"ref_id": "BIBREF15"
},
{
"start": 651,
"end": 679,
"text": "(McDonald and Pereira, 2006)",
"ref_id": "BIBREF11"
},
{
"start": 947,
"end": 967,
"text": "Bj\u00f6rne et al. (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": "To simplify our exposition we introduce additional notation. We denote the set of protein head tokens with Prot (x); the set of a possible targets for outgoing edges from a trigger is Cand(x) def = Trig (x) \u222a Prot (x). We will often omit the domains of indices and instead assign them a fixed domain in advance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": "i, l \u2208 Trig (x), j, k \u2208 Cand (x), p, q \u2208 Prot (x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": ", r \u2208 R and t \u2208 T . Bold face letters are used to denote composite vectors e, a and b of variables e i,t , a i,j,r and b p,q . The vector y is the joint vector of e, a and b. The short-form e i \u2190 t will mean \u2200t : e i,t \u2190 \u03b4 t,t where \u03b4 t,t is the Kronecker Delta. Likewise, a i,j \u2190 r means \u2200r :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": "a i,j,r \u2190 \u03b4 r,r .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Projection",
"sec_num": "2.1"
},
{
"text": "In this section we will present three structured prediction models of increasing complexity and expressiveness, as well as their corresponding MAP inference algorithms. Each model m can be represented by a mapping from sentence x to a set of legal structures Y m (x), and a linear scoring function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s m (y; x, w) = w, f (y, x) .",
"eq_num": "(1)"
}
],
"section": "Models",
"sec_num": "3"
},
{
"text": "Here f is a feature function on structures y and input x, and w is a weight vector for these features. We can use the scoring function s m and the set of legal structures Y m (x) to predict the event h m (x) for a given sentence x according to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h m (x) def = arg max y\u2208Ym(x) s m (y; x, w) .",
"eq_num": "(2)"
}
],
"section": "Models",
"sec_num": "3"
},
{
"text": "For brevity we will from now on omit observations x and weights w when they are clear from the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "Model 1 performs a simple version of joint trigger and argument extraction. It independently scores trigger labels and argument roles:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "3.1"
},
{
"text": "s 1 (e, a) def = e i,t =1 s T (i, t) + a i,j,r =1 s R (i, j, r) . (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "3.1"
},
{
"text": "Here",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "3.1"
},
{
"text": "s T (i, t) = w T , f T (i, t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "3.1"
},
{
"text": "is a per-trigger scoring function that measures how well the event label t fits to token i. Likewise,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "3.1"
},
{
"text": "s R (i, j, r) = w R , f R (i, j, r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "3.1"
},
{
"text": "measures the compatibility of role r as label for the edge i \u2192 j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "3.1"
},
{
"text": "The jointness of Model 1 stems from enforcing consistency between the trigger label of i and its outgoing edges. By consistency we mean that: (a) there is at least one Theme whenever there is an event at i; (b) only regulation events are allowed to have Cause arguments; (c) all arguments of a None trigger must have the None role. We will denote the set assignments that fulfill these constraints by O and hence have Y 1 def = O. Enforcing (e, a) \u2208 O guarantees that we never predict triggers i for which no sensible, highscoring, argument j can be found. It also ensures that when we see an \"obvious\" argument edge i r \u2192 j with high score s R (i, j, r) there is pressure to extract a trigger at i, even if the fact that i is a trigger may not be as obvious.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1",
"sec_num": "3.1"
},
{
"text": "As it turns out, the maximizer of equation 2 can be found very efficiently in O (mn) time where m = |Trig (x)| and n = |Cand (x)|. The corresponding procedure, bestOut(\u2022), is shown in algorithm 1. It takes as input a vector of trigger and edge penalties c that are added to the local scores of the s T and s R functions. For Model 2 and 3 we will use these penalties to enforce agreement with predictions of other inference subroutines. When using Model 1 by itself we set them to 0. We point out that the scoring function s 1 is multiplied with 1 2 throughout the algorithm. For doing inference in Model 1 and 2 this has no effect, but when we use bestOut(\u2022) for Model 3 inference, it is required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.1.1"
},
{
"text": "The bestOut (c) routine exploits the fact that the constraints of Model 1 only act on the label for trigger i and its outgoing edges. In particular, enforcing consistency between e i,t and outgoing edges a i,j,r has no effect on consistency between e l,t and a i ,j ,r for any other trigger i = i. Moreover, for a given trigger the constraints only differentiate between three cases: (a) regulation event, (b) non-regulation event and (c) no event. This means that we can extract events on a per-trigger basis, and find the best per-trigger structure by comparing cases (a), (b) and (c). Note that bestOut (c) uses the shorthand emptyOut (i) to denote the partial assignment e i \u2190 None and \u2200j :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.1.1"
},
{
"text": "a i,j \u2190 None. The function s c 1 (i, y) def = t e i,t c i,t + 1 2 s T (i, t) + j,r a i,j,r c i,j,r + 1 2 s R (i, j, r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.1.1"
},
{
"text": "is a per-trigger frame score with penalties c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.1.1"
},
{
"text": "Model 1 may still predict structures that cannot be mapped to events. For example, in figure 1b) we may label token 5 as Regulation, add the edge 5 Cause \u2192 2 but fail to label token 2 as an event. While consistent with (e, a) \u2208 O, this violates the constraint that every active edge must either end at a protein, or at an active event trigger. This is a requirement on the label of a trigger and the assignment of roles for its incoming edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 2",
"sec_num": "3.2"
},
{
"text": "Model 2 enforces the above constraint in addition to (e, a) \u2208 O, while inheriting the scoring function from Model 1. Hence, using I to denote the set of assignments with consistent trigger labels and incoming edges, we get Y 2 def = Y 1 \u2229 I and s 2 (y) def = s 1 (y).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 2",
"sec_num": "3.2"
},
{
"text": "Inference in Model 2 amounts to optimizing s 2 (e, a) over O \u2229 I. This is more involved, as we now have to ensure that when predicting an outgoing edge from trigger i to trigger l there is a high-scoring event at l. We follow and solve this problem in the framework of dual decomposi-Algorithm 1 Sub-procedures for inference in Model 1, 2 and 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "best label and outgoing edges for all triggers under penalties c bestOut (c) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "\u2200i y 0 \u2190 emptyOut (i) y 1 \u2190 out i, c, T reg , R y 2 \u2190 out i, c, T \u00acreg , R \\ {Cause} y i \u2190 arg max y\u2208{y 0 ,y 1 ,y 2 } s c 1 (i, y) return (y i ) i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "best label and incoming edges for all triggers under penalties c bestIn (c) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "\u2200l y 0 \u2190 emptyIn (l) y 1 \u2190 in (l, c, T , R \\ {None}) y l \u2190 arg max y\u2208{y 0 ,y 1 } s c 2 (l, y) return (y l ) l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "pick best binding pairs p, q and trigger i for each using penalties c bestBind (c) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "\u2200p, q b p,q \u2190 [s B (p, q) + max i c i,p,q > 0] I p,q \u2190 i|c i,p,q = max i c i ,p,q if b p,q = 1 or max i c i ,p,q > 0 \u2200i : t i,p,q \u2190 [i \u2208 I p,q ] |I p,q | \u22121 else \u2200i : t i,p,q \u2190 0 return (b, t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "best label in T and outgoing edge roles in R for i, using penalties c out (i, c, T, R) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "e i \u2190 arg max t\u2208T 1 2 s T (i, t) + c i,t a i,bestTheme(i,c) \u2190 Theme \u2200j a i,j \u2190 arg max r\u2208R 1 2 s R (i, j, r) + c i,j,r return (e i , a i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "best label in T , incoming edge roles in R and outgoing protein roles, using costs c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "in (l, c, T, R) : e l \u2190 arg max t\u2208T 1 2 s T (l, t) + c l,t \u2200i a i,l \u2190 arg max r\u2208R 1 2 s R (i, l, r) + c i,l,r \u2200p a l,p \u2190 arg max r\u2208R 1 2 s R (l, p, r) + c l,p,r return (e i , a i ) best Theme argument for i bestTheme (i, c) : s (j) def = max j,r 1 2 s R (i, j, r) + c i,j,r \u2206 (j) def = 1 2 s R (i, j, Theme) + c i,j,Theme \u2212 s (j) return arg max j \u2206 (j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "tion. To this end we write our optimization problem as maximize e,a,\u0113,\u0101 1 2 s 2 (e, a) + 1 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "s 2 (\u0113,\u0101) subject to (e, a) \u2208 O \u2227 (\u0113,\u0101) \u2208 I\u2227 e =\u0113 \u2227 a =\u0101 (M2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "and note that this problem could be solved separately for e, a and\u0113,\u0101 if the coupling constraints e =\u0113 and a =\u0101 were removed. M2 is an Integer Linear Program, as variables are binary and both objective and constraints can be represented through linear constraints. 1 Dual decomposition solves a Linear Programming (LP) relaxation of M2 (that allows fractional values for all binary variables) through subgradient descent on a particular dual of M2. This dual can be derived by introducing Lagrange multipliers for the coupling constraints. Its attractiveness stems from the fact that calculating the subgradient amounts to solving the decoupled problems in isolation. If, by design, these decoupled problems can be solved efficiently, we can often quickly find the optimal solution to an LP relaxation of our original problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "Dual decomposition applied to Model 2 is shown in algorithm 2. It maintains the dual variables \u03bb that will appear as local penalties in the subproblems to be solved. The algorithm will try to tune these variables such that at convergence the coupling constraints will be fulfilled. This is done by first optimizing s 2 (e, a) over O and s 2 (\u0113,\u0101) over I. Now, whenever there is disagreement between two variables to be coupled, the corresponding dual parameter is shifted, increasing the chance that next time both models will agree. For example, if in the first iteration we predict e 6,Bind = 1 but\u0113 6,Bind = 0, we set \u03bb 6,Bind = \u2212\u03b1 where \u03b1 is some stepsize (chosen according to Koo et al. (2010) ). This will decrease the coefficient for e 6,Bind , and increase the coefficient for\u0113 6,Bind . Hence, we have a higher chance of agreement for this variable in the next iteration.",
"cite_spans": [
{
"start": 681,
"end": 698,
"text": "Koo et al. (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "The algorithm repeats the process described above until all variables agree, or some predefined number R of iterations is reached. In the former case we in fact have the exact solution to the original ILP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "1 The ILP representation could be taken from the MLNs of and the mapping to ILPs of Riedel (2008) .",
"cite_spans": [
{
"start": 84,
"end": 97,
"text": "Riedel (2008)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "Algorithm 2 Subgradient descent for Model 2, and projected subgradient descent for Model 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "require:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "R: max. iterations, \u03b1_t: stepsizes\n[model 2,3] t \u2190 0\n[model 2,3] \u03bb \u2190 0\n[model 3] \u00b5 \u2190 0\nrepeat\n[2] (e, a) \u2190 bestOut(\u03bb)\n[2,3] (\u0113, \u0101) \u2190 bestIn(\u2212\u03bb)\n[3] (e, a) \u2190 bestOut(c^out(\u03bb, \u00b5))\n[3] (b, t) \u2190 bestBind(c^bind(\u00b5))\n[2,3] \u03bb_{i,t} \u2190 \u03bb_{i,t} \u2212 \u03b1_t (e_{i,t} \u2212 \u0113_{i,t})\n[2,3] \u03bb_{i,j,r} \u2190 \u03bb_{i,j,r} \u2212 \u03b1_t (a_{i,j,r} \u2212 \u0101_{i,j,r})\n[3] \u00b5^trig_{i,p,q} \u2190 [\u00b5^trig_{i,p,q} \u2212 \u03b1_t (e_{i,Bind} \u2212 t_{i,p,q})]_+\n[3] \u00b5^arg1_{i,p,q} \u2190 [\u00b5^arg1_{i,p,q} \u2212 \u03b1_t (a_{i,p,Theme} \u2212 t_{i,p,q})]_+\n[3] \u00b5^arg2_{i,p,q} \u2190 [\u00b5^arg2_{i,p,q} \u2212 \u03b1_t (a_{i,q,Theme} \u2212 t_{i,p,q})]_+\n[2,3] t \u2190 t + 1\nuntil no \u03bb, \u00b5 changed or t > R\nreturn (e, a) [model 2] or (e, a, b) [model 3]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "In the latter case we have no such guarantee, but find that in practice the solutions are still of high quality. Notice that we could still assess the quality of this approximation by measuring the duality gap between the primal score and the final dual score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "Algorithm 2 for Model 2 requires us to optimize s_2(e, a) over O and s_2(\u0113, \u0101) over I. The former, with added penalties, can be done with bestOut(c). As the constraint set for I again decomposes on a per-token basis, solving the latter problem requires a very similar procedure, and again O(mn) time. Algorithm 1 shows this procedure under bestIn(c). It chooses, for each trigger candidate, the best label and incoming set of arguments together with the best outgoing edges to proteins. Adding edges to proteins is not strictly required, but simplifies our exposition. Algorithm bestIn(c) requires the per-trigger incoming score s^c_2(l, y_l) defined below. Finally, note that emptyIn(i) not only assigns None as the trigger label of i and to all incoming edges, but also greedily picks outgoing protein edges (as done within in(\u2022)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "s^c_2(l, y_l) def= \u03a3_t e_{l,t} (c_{l,t} + \u00bd s_T(l, t)) + \u03a3_{i,r} a_{i,l,r} (c_{i,l,r} + \u00bd s_R(i, l, r)) + \u03a3_{p,r} a_{l,p,r} (c_{l,p,r} + \u00bd s_R(l, p, r))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.2.1"
},
{
"text": "Model 2 does not predict the b_{p,q} variables that represent protein pairs p, q in bindings. Model 3 fixes this by (a) adding the binding variables b_{p,q} to the objective, and (b) enforcing that the binding assignment b is consistent with the trigger and argument assignments e and a. We will also enforce that the same pair of entities p, q cannot be arguments of more than one event together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 3",
"sec_num": "3.3"
},
{
"text": "The scoring function for Model 3 is simply",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 3",
"sec_num": "3.3"
},
{
"text": "s_3(e, a, b) def= s_2(e, a) + \u03a3_{b_{p,q}=1} s_B(p, q). (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 3",
"sec_num": "3.3"
},
{
"text": "Here s_B(p, q) is a per-protein-pair score based on a feature representation of the lexical and syntactic relation between both protein heads. Our strategy will be based on enforcing consistency partly through linear constraints which we dualize, and partly within our search algorithm. To this end we first introduce a set of auxiliary binary variables t_{i,p,q}. When a t_{i,p,q} is active, we enforce that there is a binding trigger at i with proteins p and q as Theme arguments. A set of linear constraints can be used for this: e_{i,Bind} \u2212 t_{i,p,q} \u2265 0, a_{i,p,Theme} \u2212 t_{i,p,q} \u2265 0 and a_{i,q,Theme} \u2212 t_{i,p,q} \u2265 0 for all suitable i, p and q. We denote the set of assignments (e, a, t) that fulfill these constraints by T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 3",
"sec_num": "3.3"
},
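The linearized constraints defining T can be checked directly. The sketch below verifies that every active t_{i,p,q} is backed by a Bind trigger at i and Theme edges to p and q; the dictionary encoding of e, a and t is an assumption of ours, not the paper's data layout.

```python
# Check membership in T: for every auxiliary variable t[i,p,q],
#   e[i,'Bind']     - t[i,p,q] >= 0
#   a[i,p,'Theme']  - t[i,p,q] >= 0
#   a[i,q,'Theme']  - t[i,p,q] >= 0
# Variables absent from the dictionaries default to 0. (Hypothetical layout.)

def in_T(e, a, t):
    for (i, p, q), t_val in t.items():
        if t_val > e.get((i, "Bind"), 0):
            return False
        if t_val > a.get((i, p, "Theme"), 0):
            return False
        if t_val > a.get((i, q, "Theme"), 0):
            return False
    return True

# An active t[3,'P1','P2'] needs a Bind trigger at token 3 with Themes P1, P2:
e = {(3, "Bind"): 1}
a = {(3, "P1", "Theme"): 1, (3, "P2", "Theme"): 1}
assert in_T(e, a, {(3, "P1", "P2"): 1})       # consistent
assert not in_T(e, {}, {(3, "P1", "P2"): 1})  # Theme edges missing
```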
{
"text": "s B (p, q) = w B , f B (p, q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 3",
"sec_num": "3.3"
},
{
"text": "Consistency between e, a and b can now be enforced by making sure that t is consistent with e and a, and that b is consistent with this t. The latter means that an active b_{p,q} requires a trigger i to point to p and q; in other words, t_{i,p,q} = 1 for exactly one trigger i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 3",
"sec_num": "3.3"
},
{
"text": "With the set of consistent assignments (b, t) referred to as B, and with a slight abuse of notation, this gives us Y_3 def= Y_2 \u2229 T \u2229 B. Note that it is (e, a, t) \u2208 T that will be enforced by dualizing constraints, and (b, t) \u2208 B that will be enforced within search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 3",
"sec_num": "3.3"
},
{
"text": "We note that inference in Model 3 can be performed by solving the following problem (M3): maximize over e, a, \u0113, \u0101, b, t the objective \u00bd s_1(e, a) + \u00bd",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3.1"
},
{
"text": "s_2(\u0113, \u0101) + \u03a3_{b_{p,q}=1} s_B(p, q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3.1"
},
{
"text": "subject to (e, a) \u2208 O \u2227 (\u0113, \u0101) \u2208 I \u2227 (b, t) \u2208 B \u2227 e = \u0113 \u2227 a = \u0101 \u2227 (e, a, t) \u2208 T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3.1"
},
{
"text": "Again, without the final row, M3 would be separable. We exploit this by performing dual decomposition with a dual objective that has multipliers \u03bb for the coupling constraints and multipliers \u00b5 for the constraints which enforce (e, a, t) \u2208 T. The resulting subgradient descent method is also shown in Algorithm 2. Notably, since the constraints for T are inequalities, we require a projected version of the descent algorithm which enforces \u00b5 \u2265 0. This manifests itself when \u00b5 is updated using the [\u2022]_+ projection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3.1"
},
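A minimal sketch of the difference between the two kinds of updates (function names are ours): equality multipliers \u03bb move freely, while inequality multipliers \u00b5 are clipped at zero after each step, which is the [\u2022]_+ projection in Algorithm 2.

```python
# Subgradient steps for dualized constraints (illustrative sketch).

def update_lambda(lam, alpha, e, e_bar):
    # equality constraint e = e_bar: multiplier is unconstrained
    return lam - alpha * (e - e_bar)

def update_mu(mu, alpha, lhs, rhs):
    # inequality constraint lhs - rhs >= 0: multiplier projected onto mu >= 0
    return max(0.0, mu - alpha * (lhs - rhs))

# The raw step would take mu to -0.4; the projection keeps it at 0.
assert update_mu(0.1, 0.5, lhs=1, rhs=0) == 0.0
assert update_lambda(0.0, 0.5, e=1, e_bar=0) == -0.5
```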
{
"text": "We have already described how to find the best e, a and \u0113, \u0101 assignments. What changes for Model 3 is the derivation of the penalties for e and a, which now come from both \u03bb and \u00b5. We set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3.1"
},
{
"text": "c^out_{i,t}(\u03bb, \u00b5) def= \u03bb_{i,t} + \u03b4_{t,Bind} \u03a3_{p,q} \u00b5^trig_{i,p,q}. For j \u2209 Prot(x) we set c^out_{i,j,r}(\u03bb, \u00b5) def= \u03bb_{i,j,r}; otherwise we use c^out_{i,j,r}(\u03bb, \u00b5) def= \u03bb_{i,j,r} + \u03a3_p \u00b5^arg1_{i,j,p} + \u03a3_q \u00b5^arg2_{i,q,j}. For finding a (b, t) \u2208 B that maximizes \u03a3_{b_{p,q}=1} s_B(p, q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3.1"
},
{
"text": "we use bestBind(c), as shown in Algorithm 1. It groups together two proteins p, q if their score plus the penalty of the best possible trigger i exceeds 0. In this case, or if there is at least one trigger with positive penalty c_{i,p,q} > 0, we activate the set of triggers I(p, q) with maximal score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3.1"
},
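The grouping rule can be sketched as follows. This is an illustrative reimplementation under assumed inputs (dictionaries of pair scores s_B and per-pair trigger penalties), not the paper's bestBind routine; it also shows the fractional assignment among tied maximal-score triggers.

```python
# Sketch of the bestBind grouping rule: bind pair (p, q) if s_B(p, q) plus
# the best trigger penalty exceeds 0, or if some trigger penalty is itself
# positive; tied maximal-score triggers share the activation fractionally.

def best_bind(pair_scores, trigger_penalties):
    """pair_scores: {(p, q): s_B}; trigger_penalties: {(p, q): {i: c_i}}."""
    b, t = {}, {}
    for (p, q), s in pair_scores.items():
        penalties = trigger_penalties.get((p, q), {})
        best = max(penalties.values(), default=float("-inf"))
        if s + best > 0 or best > 0:
            b[(p, q)] = 1
            # activate the maximal-score trigger(s), fractionally on ties
            winners = [i for i, c in penalties.items() if c == best]
            for i in winners:
                t[(i, p, q)] = 1.0 / len(winners)
    return b, t

# Two triggers (3 and 5) tie at penalty -0.2; 0.7 - 0.2 > 0, so the pair
# is bound and each tied trigger receives fraction 1/2.
b, t = best_bind({("P1", "P2"): 0.7}, {("P1", "P2"): {3: -0.2, 5: -0.2}})
assert b == {("P1", "P2"): 1}
assert t == {(3, "P1", "P2"): 0.5, (5, "P1", "P2"): 0.5}
```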
{
"text": "Note that when several triggers i maximize the score, we assign them all the same fractional value |I(p, q)|^{\u22121}. This enforces the constraint that at most one binding event can point to both p and q, and also means that we are solving an LP relaxation. We could enforce integer solutions and pick an arbitrary trigger at a tie, but this would lower the chances of matching against the predictions of other routines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3.1"
},
{
"text": "The penalties for bestBind(c) are derived from the dual \u00b5 by setting",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3.1"
},
{
"text": "c^bind_{i,p,q}(\u00b5) = \u2212\u00b5^trig_{i,p,q} \u2212 \u00b5^arg1_{i,p,q} \u2212 \u00b5^arg2_{i,p,q}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "3.3.1"
},
{
"text": "We choose prediction-based passive-aggressive (PA) online learning (Crammer and Singer, 2003) with averaging to estimate the weights w for each of our models. PA is an error-driven learner that shifts weights towards features of the gold solution, and away from features of the current guess, whenever the current model makes a mistake.",
"cite_spans": [
{
"start": 67,
"end": 93,
"text": "(Crammer and Singer, 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
{
"text": "PA learning takes into account a user-defined loss function, for which we use a weighted sum of false positives and false negatives: l(y, y') def= FP(y, y') + \u03b1 FN(y, y'). We set \u03b1 = 3.8 by optimizing on the BioNLP 2009 development set. use Integer Linear Programming and cutting planes (Riedel, 2008) for inference in a model similar to Model 2. By using dual decomposition instead, we can exploit tractable substructure and achieve quadratic (Model 2) and cubic (Model 3) runtime guarantees. An advantage of ILP inference is guaranteed certificates of optimality. However, in practice we also gain certificates of optimality for a large fraction of the instances we process. Poon and Vanderwende (2010) use local search and hence provide no such certificates. Their problem formulation also makes n-gram dependency path features harder to incorporate. McClosky et al. (2011b) cast event extraction as a dependency parsing task. Their model assumes that event structures are trees, an assumption that is frequently violated in practice. Finally, all previous joint approaches use heuristics to decide whether binding arguments are part of the same event, while we capture these decisions in the joint model.",
"cite_spans": [
{
"start": 289,
"end": 303,
"text": "(Riedel, 2008)",
"ref_id": "BIBREF24"
},
{
"start": 680,
"end": 707,
"text": "Poon and Vanderwende (2010)",
"ref_id": "BIBREF17"
},
{
"start": 857,
"end": 881,
"text": "Mc-Closky et al. (2011b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.4"
},
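The update rule and the weighted loss can be sketched as follows. Feature vectors are represented as sparse dicts, and the closed-form step size tau = loss / ||f(gold) \u2212 f(guess)||\u00b2 follows the standard prediction-based PA derivation; all names are ours, not the authors' code.

```python
# Sketch of a prediction-based passive-aggressive update with the
# weighted FP/FN loss l(y, y') = FP + alpha * FN (alpha = 3.8 in the paper).

def weighted_loss(gold, guess, alpha=3.8):
    fp = len(guess - gold)   # predicted events not in the gold set
    fn = len(gold - guess)   # gold events the prediction missed
    return fp + alpha * fn

def pa_update(w, f_gold, f_guess, loss):
    # Shift weights toward gold features and away from guess features,
    # with step size tau = loss / ||f_gold - f_guess||^2.
    keys = set(f_gold) | set(f_guess)
    diff = {k: f_gold.get(k, 0.0) - f_guess.get(k, 0.0) for k in keys}
    norm_sq = sum(v * v for v in diff.values())
    if loss == 0 or norm_sq == 0:
        return dict(w)  # no mistake (or identical features): passive step
    tau = loss / norm_sq
    out = dict(w)
    for k, v in diff.items():
        out[k] = out.get(k, 0.0) + tau * v
    return out

loss = weighted_loss({"e1"}, {"e2"})  # 1 FP + 3.8 * 1 FN
w = pa_update({}, {"f_gold": 1.0}, {"f_guess": 1.0}, loss)
assert w["f_gold"] > 0 > w["f_guess"]
```

Averaging the weight vectors over all updates, as the paper does, would wrap this update in a running mean; that bookkeeping is omitted here.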
{
"text": "We follow a long line of research in NLP that addresses search problems using (Integer) Linear Programs (Germann et al., 2001; Riedel and Clarke, 2006) . However, instead of using off-the-shelf solvers, we work in the framework of dual decomposition. Here we extend the approach of in that in addition to equality constraints we dualize more complex coupling constraints between models. This requires us to work with a projected version of subgradient descent.",
"cite_spans": [
{
"start": 104,
"end": 126,
"text": "(Germann et al., 2001;",
"ref_id": "BIBREF3"
},
{
"start": 127,
"end": 151,
"text": "Riedel and Clarke, 2006)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "While tailored towards (biomedical) event extraction, we believe that our models can also be effective in a more general Semantic Role Labeling (SRL) context. Using variants of Model 1, we can enforce many of the SRL constraints, such as \"unique agent\" constraints (Punyakanok et al., 2004), without having to call out to ILP optimizers. Meza-Ruiz and Riedel (2009) showed that inducing pressure on arguments to be attached to at least one predicate is helpful; this is a soft incoming edge constraint. Finally, Model 3 can be used to efficiently capture compatibilities between semantic arguments; such compatibilities have also been shown to be helpful in SRL (Toutanova et al., 2005).",
"cite_spans": [
{
"start": 264,
"end": 289,
"text": "(Punyakanok et al., 2004)",
"ref_id": "BIBREF19"
},
{
"start": 337,
"end": 364,
"text": "Meza-Ruiz and Riedel (2009)",
"ref_id": "BIBREF12"
},
{
"start": 662,
"end": 686,
"text": "(Toutanova et al., 2005)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "We evaluate our models on several tracks of the 2009 and 2011 BioNLP shared tasks, using the official \"Approximate Span Matching/Approximate Recursive Matching\" F1 metric for each. We also investigate the runtime behavior of our algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Each document is first processed by the Stanford CoreNLP\u00b2 tokenizer and sentence splitter. Parse trees come from the Charniak-Johnson parser (Charniak and Johnson, 2005) with a self-trained biomedical parsing model (McClosky and Charniak, 2008), and are converted to dependency structures, again using Stanford CoreNLP. Based on trigger words collected from the training set, a set of candidate trigger tokens Trig(x) is generated for each sentence x.",
"cite_spans": [
{
"start": 118,
"end": 169,
"text": "Charniak-Johnson parser (Charniak and Johnson, 2005",
"ref_id": null
},
{
"start": 217,
"end": 246,
"text": "(McClosky and Charniak, 2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "5.1"
},
{
"text": "The feature function f T (i, t) extracts a per-trigger feature vector for trigger i and type t \u2208 T . It creates one active feature for each element in t, t \u2208 T Reg \u00d7 feats (i). Here feats (i) denotes a collection of representations for the token i: wordform, lemma, POS tag, syntactic heads, syntactic children, and membership in two dictionaries taken from .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "For f_R(i, j, r) we create active features for each element of {r} \u00d7 feats(i, j). Here feats(i, j) is a collection of representations of the token pair (i, j) taken from Miwa et al. (2010c) and contains: labeled and unlabeled n-gram dependency paths; edge and vertex walk features; argument and trigger modifiers and heads; and words in between.",
"cite_spans": [
{
"start": 173,
"end": 192,
"text": "Miwa et al. (2010c)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "For f B (p, q) we re-use the token pair representations from f R . In particular, we create one active feature for each element in feats (p, q).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "5.2"
},
{
"text": "We first evaluate our models on the BioNLP 2009 shared task, task 1. Its training, development and test sets consist of 797, 150 and 250 documents, respectively. Table 1 shows our results for the development set. We compare our three models (M1, M2 and M3) with previous state-of-the-art systems: McClosky (McClosky et al., 2011a), Poon (Poon and Vanderwende, 2010), Bjoerne (Bj\u00f6rne et al., 2009) and Miwa (Miwa et al., 2010b; Miwa et al., 2010a). Presented is the F1 score for all events (TOT), regulation events (REG), binding events (BIND) and simple events (SVT).",
"cite_spans": [
{
"start": 257,
"end": 282,
"text": "(Mc-Closky et al., 2011a)",
"ref_id": null
},
{
"start": 290,
"end": 318,
"text": "(Poon and Vanderwende, 2010)",
"ref_id": "BIBREF17"
},
{
"start": 329,
"end": 350,
"text": "(Bj\u00f6rne et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 355,
"end": 380,
"text": "Miwa (Miwa et al., 2010b;",
"ref_id": "BIBREF14"
},
{
"start": 381,
"end": 400,
"text": "Miwa et al., 2010a)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Shared Task 2009",
"sec_num": "5.3"
},
{
"text": "Model 1 outperforms the previous best joint models of Poon and Vanderwende (2010), as well as the best entry of the 2009 task (Bj\u00f6rne et al., 2009). This is achieved without careful tuning of the thresholds that control the flow of information between trigger and argument extraction. Notably, training Model 1 takes approximately 20 minutes using a single-core implementation. Contrast this with the 20 minutes on 32 cores reported by Poon and Vanderwende (2010).",
"cite_spans": [
{
"start": 59,
"end": 86,
"text": "Poon and Vanderwende (2010)",
"ref_id": "BIBREF17"
},
{
"start": 132,
"end": 153,
"text": "(Bj\u00f6rne et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 431,
"end": 458,
"text": "Poon and Vanderwende (2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task 2009",
"sec_num": "5.3"
},
{
"text": "Model 2 focuses on regulation structures, and the results demonstrate this: F1 for regulations goes up by nearly 2 points. While the impact of joint modeling relative to weaker local baselines has been shown by Poon and Vanderwende (2010) and , our findings here provide evidence that it remains effective even when the baseline system is very competitive.",
"cite_spans": [
{
"start": 212,
"end": 239,
"text": "Poon and Vanderwende (2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task 2009",
"sec_num": "5.3"
},
{
"text": "With Model 3 our focus extends to binding events, improving F1 for such events by at least 5 points. This also has a positive effect on regulation events, as regulations of binding events can now be extracted more accurately. In total we see a 1.1 F1 increase over the best results reported so far (Miwa et al., 2010b). Crucially, this is achieved using only a single parse tree per sentence, as opposed to the three used by Miwa et al. (2010a). Table 2 shows results for the test set. Here Model 1 again outperforms all previous results but those of Miwa et al. (2010a). Model 2 improves F1 for regulations, while Model 3 again increases F1 for both regulation and binding events. This yields the best binding event results reported so far. Notably, not only are we able to resolve binding ambiguity better; binding attachments themselves also improve, as attachment F1 increases from 61.4 to 62.7 when going from Model 2 to Model 3. Miwa et al. (2010b) use two parsers to generate their input features. For a fairer comparison we augment Model 3 with syntactic features based on the Enju parser (Miyao et al., 2009). With these features (M3+enju) we achieve the best results reported so far on this dataset, outperforming Miwa et al. (2010b) by 1.1 F1 in total, 1.6 F1 on regulation events and 2.0 F1 on binding events.",
"cite_spans": [
{
"start": 298,
"end": 318,
"text": "(Miwa et al., 2010b)",
"ref_id": "BIBREF14"
},
{
"start": 422,
"end": 441,
"text": "Miwa et al. (2010a)",
"ref_id": "BIBREF13"
},
{
"start": 553,
"end": 572,
"text": "Miwa et al. (2010a)",
"ref_id": "BIBREF13"
},
{
"start": 941,
"end": 960,
"text": "Miwa et al. (2010b)",
"ref_id": "BIBREF14"
},
{
"start": 1101,
"end": 1121,
"text": "(Miyao et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 1230,
"end": 1249,
"text": "Miwa et al. (2010b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 444,
"end": 451,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Shared Task 2009",
"sec_num": "5.3"
},
{
"text": "We also apply Model 3, with slight modifications, to the BioNLP 2009 task 2, which requires cellular locations to be extracted as well. With 53.0 F1 we fall 2 points short of the results of Miwa et al. (2010b), but still substantially outperform all other reported results on this dataset. More parse trees, as well as task-specific constraint and feature sets, may again substantially improve results.",
"cite_spans": [
{
"start": 189,
"end": 208,
"text": "Miwa et al. (2010b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task 2009",
"sec_num": "5.3"
},
{
"text": "We entered the Shared Task 2011 with Model 3, primarily focusing on the Genia track (task 1) and the Infectious Diseases track. The Genia track differs from the 2009 task by including both abstracts and full-text articles. In total 908 training, 259 development and 347 test documents are provided. The top five entries are shown in Table 3. Model 3 is the best-performing system that does not use model combination, outperformed only by a version of Model 3 that includes Stanford predictions (McClosky et al., 2011b) as input features. Not shown in the table are results for full papers only. Here M3 ranks first with 53.1 F1, while M3+Stanford comes in second with 52.7 F1.",
"cite_spans": [
{
"start": 491,
"end": 516,
"text": "(Mc-Closky et al., 2011b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task 2011",
"sec_num": "5.4"
},
{
"text": "The Infectious Diseases (ID) track of the 2011 task has 152 training, 46 development and 118 test documents. Relative to Genia it provides less data and introduces more entity types as well as the biological process event type. Incorporating these changes into our models is straightforward, and hence we omit details for brevity. Table 3 shows the top five entries for the Infectious Diseases track. Again Model 3 is the best-performing system that does not use model combination, outperformed only by Model 3 with Stanford predictions as features. We should point out that the feature sets and learning parameters were kept constant when moving from Genia to ID data. The strong results we observe without any tuning to the domain indicate the robustness of joint modeling. Table 4 shows the asymptotic complexity of our three models with respect to m = |Trig(x)|, n = |Cand(x)| and p = |Prot(x)|. We also show the number of iterations needed on average, the average time in milliseconds per sentence,\u00b3 and the fraction of sentences we get certificates of optimality for.",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 777,
"end": 784,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Shared Task 2011",
"sec_num": "5.4"
},
{
"text": "As expected, Model 1 is most efficient, both asymptotically and on average. Given that its accuracy is already good, it can serve as a basis for large-scale extraction tasks. (Footnote 3: Measured without preprocessing and feature extraction.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Behavior",
"sec_num": "5.5"
},
{
"text": "M1: complexity O(nm), 1.0 iterations, 60ms per sentence, 100% exact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "M2: complexity O(Rnm), 10.4 iterations, 183ms per sentence, 96% exact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "M3: complexity O(Rnm + Rp\u00b2m), 11.7 iterations, 297ms per sentence, 94% exact. Models 2 and 3 require several iterations and more time, while providing slightly fewer certificates. However, given the improvement in F1 they deliver, and the fact that preprocessing steps such as parsing would still dominate the average time, this seems like a reasonable price to pay.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": null
},
{
"text": "We presented three joint models for biomedical event extraction. Model 1 reaches near-state-of-the-art results, outperforms all previous joint models and has quadratic runtime guarantees. By explicitly capturing regulation events (Model 2) and binding events (Model 3), we achieve the best results reported so far on several event extraction tasks. The runtime penalty we pay is kept minimal by using dual decomposition. We also show how dual decomposition can be used for constraints that go beyond coupling equalities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We use joint models, a decomposition technique and supervised online learning. This recipe can be successful in many settings, but requires expensive manual annotation. In the future we want to integrate weak supervision techniques to train extractors with existing biomedical databases, such as KEGG, and only minimal amounts of annotated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://nlp.stanford.edu/software/corenlp.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the Center for Intelligent Information Retrieval. The University of Massachusetts gratefully acknowledges the support of Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the US government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Extracting complex biological events with rich graph-based feature sets",
"authors": [
{
"first": "Jari",
"middle": [],
"last": "Bj\u00f6rne",
"suffix": ""
},
{
"first": "Juho",
"middle": [],
"last": "Heimonen",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Antti",
"middle": [],
"last": "Airola",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Pahikkala",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Natural Language Processing in Biomedicine NAACL 2009 Workshop (BioNLP '09)",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jari Bj\u00f6rne, Juho Heimonen, Filip Ginter, Antti Airola, Tapio Pahikkala, and Tapio Salakoski. 2009. Extract- ing complex biological events with rich graph-based feature sets. In Proceedings of the Natural Language Processing in Biomedicine NAACL 2009 Workshop (BioNLP '09), pages 10-18, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Coarse-tofine n-best parsing and maxent discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05)",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to- fine n-best parsing and maxent discriminative rerank- ing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05), pages 173-180.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Ultraconservative online algorithms for multiclass problems",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "951--991",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer and Yoram Singer. 2003. Ultraconserva- tive online algorithms for multiclass problems. Jour- nal of Machine Learning Research, 3:951-991.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Fast decoding and optimal decoding for machine translation",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jahr",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL '01)",
"volume": "",
"issue": "",
"pages": "228--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrich Germann, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2001. Fast decoding and optimal decoding for machine translation. In Proceed- ings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL '01), pages 228-235.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Overview of bionlp'09 shared task on event extraction",
"authors": [
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Yoshinobu",
"middle": [],
"last": "Kano",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Natural Language Processing in Biomedicine NAACL 2009 Workshop (BioNLP '09)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Dong Kim, Tomoko Ohta, Sampo Pyysalo, Yoshi- nobu Kano, and Jun'ichi Tsujii. 2009. Overview of bionlp'09 shared task on event extraction. In Proceedings of the Natural Language Processing in Biomedicine NAACL 2009 Workshop (BioNLP '09).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Overview of BioNLP Shared Task",
"authors": [
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bossy",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the BioNLP 2011 Workshop Companion Volume for Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Dong Kim, Sampo Pyysalo, Tomoko Ohta, Robert Bossy, and Jun'ichi Tsujii. 2011. Overview of BioNLP Shared Task 2011. In Proceedings of the BioNLP 2011 Workshop Companion Volume for Shared Task, Portland, Oregon, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Mrf optimization via dual decomposition: Message-passing revisited",
"authors": [
{
"first": "Nikos",
"middle": [],
"last": "Komodakis",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Paragios",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Tziritas",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 11st IEEE International Conference on Computer Vision (ICCV '07)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikos Komodakis, Nikos Paragios, and Georgios Tziri- tas. 2007. Mrf optimization via dual decomposition: Message-passing revisited. In Proceedings of the 11st IEEE International Conference on Computer Vision (ICCV '07).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Dual decomposition for parsing with nonprojective head automata",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference on Empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposi- tion for parsing with nonprojective head automata. In Proceedings of the Conference on Empirical methods in natural language processing (EMNLP '10).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Selftraining for biomedical parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL '08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky and Eugene Charniak. 2008. Self- training for biomedical parsing. In Proceedings of the 46th Annual Meeting of the Association for Computa- tional Linguistics (ACL '08).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Event extraction as dependency parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL '11)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Mihai Surdeanu, and Chris Manning. 2011a. Event extraction as dependency parsing. In Proceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics (ACL '11), Port- land, Oregon, June.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Event extraction as dependency parsing in bionlp",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "BioNLP 2011 Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Mihai Surdeanu, and Christopher D. Manning. 2011b. Event extraction as dependency parsing in bionlp 2011. In BioNLP 2011 Shared Task.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Online learning of approximate dependency parsing algorithms",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 11th Conference of the European Chapter of the ACL (EACL '06)",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of the 11th Conference of the European Chapter of the ACL (EACL '06), pages 81-88.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Jointly identifying predicates, arguments and senses using markov logic",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Meza-Ruiz",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2009,
"venue": "Joint Human Language Technology Conference/Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL '09)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Meza-Ruiz and Sebastian Riedel. 2009. Jointly identifying predicates, arguments and senses using markov logic. In Joint Human Language Technol- ogy Conference/Annual Meeting of the North Ameri- can Chapter of the Association for Computational Lin- guistics (HLT-NAACL '09).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A comparative study of syntactic parsers for event extraction",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Tadayoshi",
"middle": [],
"last": "Hara",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Workshop on Biomedical Natural Language Processing, BioNLP '10",
"volume": "",
"issue": "",
"pages": "37--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa, Sampo Pyysalo, Tadayoshi Hara, and Jun'ichi Tsujii. 2010a. A comparative study of syn- tactic parsers for event extraction. In Proceedings of the 2010 Workshop on Biomedical Natural Language Processing, BioNLP '10, pages 37-45, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Evaluating dependency representation for event extraction",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Tadayoshi",
"middle": [],
"last": "Hara",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10",
"volume": "",
"issue": "",
"pages": "779--787",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa, Sampo Pyysalo, Tadayoshi Hara, and Jun'ichi Tsujii. 2010b. Evaluating dependency rep- resentation for event extraction. In Proceedings of the 23rd International Conference on Computational Lin- guistics, COLING '10, pages 779-787, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Event extraction with complex event classification using rich features",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Rune",
"middle": [],
"last": "Saetre",
"suffix": ""
},
{
"first": "Jin-Dong",
"middle": ["D"],
"last": "Kim",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of bioinformatics and computational biology",
"volume": "8",
"issue": "1",
"pages": "131--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa, Rune Saetre, Jin-Dong D. Kim, and Jun'ichi Tsujii. 2010c. Event extraction with com- plex event classification using rich features. Journal of bioinformatics and computational biology, 8(1):131- 146, February.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Evaluating contributions of natural language parsers to protein-protein interaction extraction",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Rune",
"middle": [],
"last": "Saetre",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2009,
"venue": "Bioinformatics/computer Applications in The Biosciences",
"volume": "25",
"issue": "",
"pages": "394--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Miyao, Kenji Sagae, Rune Saetre, Takuya Mat- suzaki, and Jun ichi Tsujii. 2009. Evaluating contribu- tions of natural language parsers to protein-protein in- teraction extraction. Bioinformatics/computer Appli- cations in The Biosciences, 25:394-400.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Joint Inference for Knowledge Extraction from Biomedical Literature",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon and Lucy Vanderwende. 2010. Joint Infer- ence for Knowledge Extraction from Biomedical Lit- erature. In Human Language Technologies: The 2010",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "813--821",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 813-821, Los Angeles, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semantic role labeling via integer linear programming inference",
"authors": [
{
"first": "Vasin",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Tau Yih",
"suffix": ""
},
{
"first": "Dav",
"middle": [],
"last": "Zimak",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics (COLING '04)",
"volume": "",
"issue": "",
"pages": "1346--1352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasin Punyakanok, Dan Roth, Wen tau Yih, and Dav Zi- mak. 2004. Semantic role labeling via integer linear programming inference. In Proceedings of the 20th in- ternational conference on Computational Linguistics (COLING '04), pages 1346-1352, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Incremental integer linear programming for non-projective dependency parsing",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference on Empirical methods in natural language processing (EMNLP '06)",
"volume": "",
"issue": "",
"pages": "129--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel and James Clarke. 2006. Incremen- tal integer linear programming for non-projective de- pendency parsing. In Proceedings of the Conference on Empirical methods in natural language processing (EMNLP '06), pages 129-137.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Robust biomedical event extraction with dual decomposition and minimal domain adaptation",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Natural Language Processing in Biomedicine NAACL 2011 Workshop (BioNLP '11)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel and Andrew McCallum. 2011. Robust biomedical event extraction with dual decomposition and minimal domain adaptation. In Proceedings of the Natural Language Processing in Biomedicine NAACL 2011 Workshop (BioNLP '11), June.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A markov logic approach to bio-molecular event extraction",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Hong-Woo",
"middle": [],
"last": "Chun",
"suffix": ""
},
{
"first": "Toshihisa",
"middle": [],
"last": "Takagi",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Natural Language Processing in Biomedicine NAACL 2009 Workshop (BioNLP '09)",
"volume": "",
"issue": "",
"pages": "41--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Hong-Woo Chun, Toshihisa Takagi, and Jun'ichi Tsujii. 2009. A markov logic approach to bio-molecular event extraction. In Proceedings of the Natural Language Processing in Biomedicine NAACL 2009 Workshop (BioNLP '09), pages 41-49.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Model combination for event extraction in BioNLP",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Natural Language Processing in Biomedicine NAACL 2011 Workshop (BioNLP '11)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, David McClosky, Mihai Surdeanu, Christopher D. Manning, and Andrew McCallum. 2011. Model combination for event extraction in BioNLP 2011. In Proceedings of the Natural Lan- guage Processing in Biomedicine NAACL 2011 Work- shop (BioNLP '11), June.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improving the accuracy and efficiency of MAP inference for markov logic",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 24th Annual Conference on Uncertainty in AI (UAI '08)",
"volume": "",
"issue": "",
"pages": "468--475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel. 2008. Improving the accuracy and ef- ficiency of MAP inference for markov logic. In Pro- ceedings of the 24th Annual Conference on Uncer- tainty in AI (UAI '08), pages 468-475.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A linear programming formulation for global inference in natural language tasks",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 8th Conference on Computational Natural Language Learning (CoNLL' 04)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Roth and W. Yih. 2004. A linear programming formu- lation for global inference in natural language tasks. In Proceedings of the 8th Conference on Computational Natural Language Learning (CoNLL' 04), pages 1-8.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "On dual decomposition and linear programming relaxations for natural language processing",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference on Empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural lan- guage processing. In Proceedings of the Conference on Empirical methods in natural language processing (EMNLP '10).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Joint learning improves semantic role labeling",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05)",
"volume": "",
"issue": "",
"pages": "589--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05), pages 589-596, Morristown, NJ, USA. Associa- tion for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "(a) sentence with target event structure to extract; (b) projection to a set of labelled graph over tokens.",
"num": null,
"type_str": "figure"
},
"TABREF3": {
"html": null,
"content": "<table/>",
"text": "F1 scores for the test set of Task 1 of the BioNLP 2009 shared task.",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table/>",
"text": "F1 scores for the test sets of two tracks in the BioNLP 2011 Shared Task.",
"num": null,
"type_str": "table"
},
"TABREF6": {
"html": null,
"content": "<table/>",
"text": "Complexity and Runtime Behavior.",
"num": null,
"type_str": "table"
}
}
}
}