| { |
| "paper_id": "N16-1034", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:36:16.050132Z" |
| }, |
| "title": "Joint Event Extraction via Recurrent Neural Networks", |
| "authors": [ |
| { |
| "first": "Huu", |
| "middle": [], |
| "last": "Thien", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "thien@cs.nyu.edu" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "kyunghyun.cho@nyu.edu" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "grishman@cs.nyu.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Event extraction is a particularly challenging problem in information extraction. The stateof-the-art models for this problem have either applied convolutional neural networks in a pipelined framework (Chen et al., 2015) or followed the joint architecture via structured prediction with rich local and global features (Li et al., 2013). The former is able to learn hidden feature representations automatically from data based on the continuous and generalized representations of words. The latter, on the other hand, is capable of mitigating the error propagation problem of the pipelined approach and exploiting the inter-dependencies between event triggers and argument roles via discrete structures. In this work, we propose to do event extraction in a joint framework with bidirectional recurrent neural networks, thereby benefiting from the advantages of the two models as well as addressing issues inherent in the existing approaches. We systematically investigate different memory features for the joint model and demonstrate that the proposed model achieves the state-of-the-art performance on the ACE 2005 dataset.", |
| "pdf_parse": { |
| "paper_id": "N16-1034", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Event extraction is a particularly challenging problem in information extraction. The stateof-the-art models for this problem have either applied convolutional neural networks in a pipelined framework (Chen et al., 2015) or followed the joint architecture via structured prediction with rich local and global features (Li et al., 2013). The former is able to learn hidden feature representations automatically from data based on the continuous and generalized representations of words. The latter, on the other hand, is capable of mitigating the error propagation problem of the pipelined approach and exploiting the inter-dependencies between event triggers and argument roles via discrete structures. In this work, we propose to do event extraction in a joint framework with bidirectional recurrent neural networks, thereby benefiting from the advantages of the two models as well as addressing issues inherent in the existing approaches. We systematically investigate different memory features for the joint model and demonstrate that the proposed model achieves the state-of-the-art performance on the ACE 2005 dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "We address the problem of event extraction (EE): identifying event triggers of specified types and their arguments in text. Triggers are often single verbs or normalizations that evoke some events of interest while arguments are the entities participating into such events. This is an important and challenging task of information extraction in natural language processing (NLP), as the same event might be present in various expressions, and an expression might expresses different events in different contexts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There are two main approaches to EE: (i) the joint approach that predicts event triggers and arguments for sentences simultaneously as a structured prediction problem, and (ii) the pipelined approach that first performs trigger prediction and then identifies arguments in separate stages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The most successful joint system for EE (Li et al., 2013) is based on the structured perceptron algorithm with a large set of local and global features 1 . These features are designed to capture the discrete structures that are intuitively helpful for EE using the NLP toolkits (e.g., part of speech tags, dependency and constituent tags). The advantages of such a joint system are twofold: (i) mitigating the error propagation from the upstream component (trigger identification) to the downstream classifier (argument identification), and (ii) benefiting from the the inter-dependencies among event triggers and argument roles via global features. For example, consider the following sentence (taken from Li et al. (2013) ) in the ACE 2005 dataset:", |
| "cite_spans": [ |
| { |
| "start": 40, |
| "end": 57, |
| "text": "(Li et al., 2013)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 707, |
| "end": 723, |
| "text": "Li et al. (2013)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In Baghdad, a cameraman died when an American tank fired on the Palestine hotel.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this sentence, died and fired are the event triggers for the events of types Die and Attack, respectively. In the pipelined approach, it is often simple for the argument classifiers to realize that camera-man is the Target argument of the Die event due to the proximity between cameraman and died in the sentence. However, as cameraman is far away from fired, the argument classifiers in the pipelined approach might fail to recognize cameraman as the Target argument for the event Attack with their local features. The joint approach can overcome this issue by relying on the global features to encode the fact that a Victim argument for the Die event is often the Target argument for the Attack event in the same sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Despite the advantages presented above, the joint system by Li et al. (2013) suffers from the lack of generalization over the unseen words/features and the inability to extract the underlying structures for EE (due to its discrete representation from the handcrafted feature set) (Nguyen and Grishman, 2015b; Chen et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 60, |
| "end": 76, |
| "text": "Li et al. (2013)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 292, |
| "end": 308, |
| "text": "Grishman, 2015b;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 309, |
| "end": 327, |
| "text": "Chen et al., 2015)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The most successful pipelined system for EE to date (Chen et al., 2015) addresses these drawbacks of the joint system by Li et al. (2013) via dynamic multi-pooling convolutional neural networks (DMCNN). In this system, words are represented by the continuous representations (Bengio et al., 2003; Turian et al., 2010; Mikolov et al., 2013a) and features are automatically learnt from data by the DM-CNN, thereby alleviating the unseen word/feature problem and extracting more effective features for the given dataset. However, as the system by Chen et al. (2015) is pipelined, it still suffers from the inherent limitations of error propagation and failure to exploit the inter-dependencies between event triggers and argument roles (Li et al., 2013) . Finally, we notice that the discrete features, shown to be helpful in the previous studies for EE (Li et al., 2013) , are not considered in Chen et al. (2015) .", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 71, |
| "text": "(Chen et al., 2015)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 121, |
| "end": 137, |
| "text": "Li et al. (2013)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 275, |
| "end": 296, |
| "text": "(Bengio et al., 2003;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 297, |
| "end": 317, |
| "text": "Turian et al., 2010;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 318, |
| "end": 340, |
| "text": "Mikolov et al., 2013a)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 544, |
| "end": 562, |
| "text": "Chen et al. (2015)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 733, |
| "end": 750, |
| "text": "(Li et al., 2013)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 851, |
| "end": 868, |
| "text": "(Li et al., 2013)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 893, |
| "end": 911, |
| "text": "Chen et al. (2015)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Guided by these characteristics of the EE systems by Li et al. (2013) and Chen et al. (2015) , in this work, we propose to solve the EE problem with the joint approach via recurrent neural networks (RNNs) (Hochreiter and Schmidhuber, 1997; augmented with the discrete features, thus inheriting all the benefits from both systems as well as overcoming their inherent issues. To the best of our knowledge, this is the first work to employ neural networks to do joint EE.", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 69, |
| "text": "Li et al. (2013)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 74, |
| "end": 92, |
| "text": "Chen et al. (2015)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 205, |
| "end": 239, |
| "text": "(Hochreiter and Schmidhuber, 1997;", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our model involves two RNNs that run over the sentences in both forward and reverse directions to learn a richer representation for the sentences. This representation is then utilized to predict event triggers and argument roles jointly. In order to capture the inter-dependencies between triggers and argument roles, we introduce memory vectors/matrices to store the prediction information during the course of labeling the sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We systematically explore various memory vector/matrices as well as different methods to learn word representations for the joint model. The experimental results show that our system achieves the state-of-the-art performance on the widely used ACE 2005 dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We focus on the EE task of the Automatic Context Extraction (ACE) evaluation 2 . ACE defines an event as something that happens or leads to some change of state. We employ the following terminology:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Event Extraction Task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Event mention: a phrase or sentence in which an event occurs, including one trigger and an arbitrary number of arguments. \u2022 Event trigger: the main word that most clearly expresses an event occurrence. \u2022 Event argument: an entity mention, temporal expression or value (e.g. Job-Title) that servers as a participant or attribute with a specific role in an event mention. ACE annotates 8 types and 33 subtypes (e.g., Attack, Die, Start-Position) for event mentions that also correspond to the types and subtypes of the event triggers. Each event subtype has its own set of roles to be filled by the event arguments. For instance, the roles for the Die event include Place, Victim and Time. The total number of roles for all the event subtypes is 36.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Event Extraction Task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given an English text document, an event extraction system needs to recognize event triggers with specific subtypes and their corresponding arguments with the roles for each sentence. Following the previous work (Liao and Grishman, 2011; Li et al., 2013; Chen et al., 2015) , we assume that the argument candidates (i.e, the entity mentions, temporal expressions and values) are provided (by the ACE annotation) to the event extraction systems.", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 237, |
| "text": "(Liao and Grishman, 2011;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 238, |
| "end": 254, |
| "text": "Li et al., 2013;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 255, |
| "end": 273, |
| "text": "Chen et al., 2015)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Event Extraction Task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We formalize the EE task as follow. Let W = w 1 w 2 . . . w n be a sentence where n is the sentence length and w i is the i-th token. Also, let E = e 1 , e 2 , . . . , e k be the entity mentions 3 in this sentence (k is the number of the entity mentions and can be zero). Each entity mention comes with the offsets of the head and the entity type. We further assume that i 1 , i 2 , . . . , i k be the indexes of the last words of the mention heads for e 1 , e 2 , . . . , e k , respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In EE, for every token w i in the sentence, we need to predict the event subtype (if any) for it. If w i is a trigger word for some event of interest, we then need to predict the roles (if any) that each entity mention e j plays in such event.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The joint model for event extraction in this work consists of two phases: (i) the encoding phase that applies recurrent neural networks to induce a more abstract representation of the sentence, and (ii) the prediction phase that uses the new representation to perform event trigger and argument role identification simultaneously for W . Figure 1 shows an overview of the model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 338, |
| "end": 346, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In the encoding phase, we first transform each token w i into a real-valued vector x i using the concatenation of the following three vectors:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Encoding", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "1. The word embedding vector of w i : This is obtained by looking up a pre-trained word embedding table D (Collobert and Weston, 2008; Turian et al., 2010; Mikolov et al., 2013a) .", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 134, |
| "text": "(Collobert and Weston, 2008;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 135, |
| "end": 155, |
| "text": "Turian et al., 2010;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 156, |
| "end": 178, |
| "text": "Mikolov et al., 2013a)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Encoding", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "2. The real-valued embedding vector for the entity type of w i : This vector is motivated from the prior work (Nguyen and Grishman, 2015b) and generated by looking up the entity type embedding table (initialized randomly) for the entity type of w i . Note that we also employ the BIO annotation schema to assign entity type labels to each token in the sentences using the heads of the entity mentions as do Nguyen and Grishman (2015b) .", |
| "cite_spans": [ |
| { |
| "start": 418, |
| "end": 434, |
| "text": "Grishman (2015b)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Encoding", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "3. The binary vector whose dimensions correspond to the possible relations between words in the dependency trees. The value at each dimension of this vector is set to 1 only if there exists one edge of the corresponding relation connected to w i in the dependency tree of W . This vector represents the dependency features that are shown to be helpful in the previous research (Li et al., 2013) .", |
| "cite_spans": [ |
| { |
| "start": 377, |
| "end": 394, |
| "text": "(Li et al., 2013)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Encoding", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "Note that we do not use the relative position features, unlike the prior work on neural networks for EE (Nguyen and Grishman, 2015b; Chen et al., 2015) . The reason is we predict the whole sentences for triggers and argument roles jointly, thus having no fixed positions for anchoring in the sentences.", |
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 132, |
| "text": "(Nguyen and Grishman, 2015b;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 133, |
| "end": 151, |
| "text": "Chen et al., 2015)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Encoding", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "The transformation from the token w i to the vector x i essentially converts the input sentence W into a sequence of real-valued vectors X = (x 1 , x 2 , . . . , x n ), to be used by recurrent neural networks to learn a more effective representation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Encoding", |
| "sec_num": "3.1.1" |
| }, |
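The token encoding described in Section 3.1.1 (concatenating a word embedding, an entity-type embedding over BIO tags, and a binary dependency-relation vector into x_i) can be sketched as follows. The tables, dimensions, and relation inventory below are toy assumptions for illustration, not the paper's actual resources:

```python
# Toy sketch of the sentence-encoding step: x_i is the concatenation of
# (1) a word embedding, (2) an entity-type embedding, and (3) a binary
# vector over dependency relations touching w_i. All tables are assumed.

WORD_DIM, ENT_DIM = 4, 2
DEP_RELATIONS = ["nsubj", "dobj", "prep"]  # assumed tiny relation inventory

word_table = {"man": [0.1, 0.2, 0.3, 0.4], "died": [0.5, 0.6, 0.7, 0.8]}
entity_table = {"O": [0.0, 0.0], "B-PER": [1.0, 0.0]}

def encode_token(word, entity_tag, dep_edges):
    """Build x_i by concatenating the three feature vectors."""
    dep_vec = [1.0 if rel in dep_edges else 0.0 for rel in DEP_RELATIONS]
    return word_table[word] + entity_table[entity_tag] + dep_vec

x_i = encode_token("man", "B-PER", {"nsubj"})
assert len(x_i) == WORD_DIM + ENT_DIM + len(DEP_RELATIONS)  # 4 + 2 + 3
```

In a real system the word table would be a pre-trained embedding matrix and the entity-type table a randomly initialized, trainable matrix, as the section describes.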
| { |
| "text": "Consider the input sequence X = (x 1 , x 2 , . . . , x n ). At each step i, we compute the hidden vector \u03b1 i based on the current input vector x i and the previous hidden vector \u03b1 i\u22121 , using the non-linear transformation function \u03a6:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "\u03b1 i = \u03a6(x i , \u03b1 i\u22121 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": ". This recurrent computation is done over X to generate the hidden vector sequence", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "(\u03b1 1 , \u03b1 2 , . . . , \u03b1 n ), denoted by \u2212\u2212\u2192 RNN(x 1 , x 2 , . . . , x n ) = (\u03b1 1 , \u03b1 2 , . . . , \u03b1 n ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "An important characteristics of the recurrent mechanism is that it adaptively accumulates the context information from position 1 to i into the hidden vector \u03b1 i , making \u03b1 i a rich representation. However, \u03b1 i is not sufficient for the event trigger and argument predictions at position i as such predictions might need to rely on the context information in the future (i.e, from position i to n). In order to address this issue, we run a second RNN in the reverse direction from X n to X 1 to generate the second hidden vector sequence:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "\u2190\u2212\u2212 RNN(x n , x n\u22121 , . . . , x 1 ) = (\u03b1 n , \u03b1 n\u22121 , . . . , \u03b1 1 ) in which \u03b1 i summarizes the context information from position n to i. Eventually, we obtain the new representation (h 1 , h 2 , . . . , h n ) for X by concate- nating the hidden vectors in (\u03b1 1 , \u03b1 2 , . . . , \u03b1 n ) and (\u03b1 n , \u03b1 n\u22121 , . . . , \u03b1 1 ): h i = [\u03b1 i , \u03b1 i ].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "Note that h i essentially encapsulates the context information over the whole sentence (from 1 to n) with a greater focus on position i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "Regarding Figure 1 : The joint EE model for the input sentence \"a man died when a tank fired in Baghdad\" with local context window d = 1. We only demonstrate the memory matrices G arg/trg i in this figure. Green corresponds to the trigger candidate \"died\" at the current step while violet and red are for the entity mentions \"man\" and \"Baghdad\" respectively. form of \u03a6 in the literature considers it as a one-layer feed-forward neural network. Unfortunately, this function is prone to the \"vanishing gradient\" problem (Bengio et al., 1994) , making it challenging to train RNNs properly. This problem can be alleviated by long-short term memory units (LSTM) (Hochreiter and Schmidhuber, 1997; Gers, 2001) . In this work, we use a variant of LSTM; called the Gated Recurrent Units (GRU) from . GRU has been shown to achieve comparable performance (Chung et al., 2014; J\u00f3zefowicz et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 518, |
| "end": 539, |
| "text": "(Bengio et al., 1994)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 693, |
| "end": 704, |
| "text": "Gers, 2001)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 846, |
| "end": 866, |
| "text": "(Chung et al., 2014;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 867, |
| "end": 891, |
| "text": "J\u00f3zefowicz et al., 2015)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 10, |
| "end": 18, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "3.1.2" |
| }, |
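As a concrete illustration of the bidirectional recurrence above, the following sketch runs a GRU cell forward and backward over the input sequence and concatenates the two hidden states into h_i. The one-dimensional hidden size and random weights are simplifying assumptions for readability, not the paper's configuration:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def make_gru_cell():
    """Scalar GRU cell (hidden size 1) with random weights -- a toy stand-in."""
    wz, uz, wr, ur, wh, uh = [random.uniform(-0.5, 0.5) for _ in range(6)]
    def step(x, h):
        z = sigmoid(wz * x + uz * h)               # update gate
        r = sigmoid(wr * x + ur * h)               # reset gate
        h_cand = math.tanh(wh * x + uh * (r * h))  # candidate state
        return (1.0 - z) * h + z * h_cand
    return step

def bi_encode(xs):
    """h_i = [alpha_i, alpha'_i]: forward and backward GRU states at position i."""
    fwd, bwd = make_gru_cell(), make_gru_cell()
    alphas, h = [], 0.0
    for x in xs:                       # forward pass: positions 1..n
        h = fwd(x, h)
        alphas.append(h)
    betas, h = [0.0] * len(xs), 0.0
    for i in reversed(range(len(xs))): # backward pass: positions n..1
        h = bwd(xs[i], h)
        betas[i] = h
    return [[a, b] for a, b in zip(alphas, betas)]

H = bi_encode([0.3, -1.0, 0.7])
assert len(H) == 3 and all(len(h) == 2 for h in H)
```

Each h_i thus sees the whole sentence, with the forward state summarizing positions 1..i and the backward state positions n..i, as described in Section 3.1.2.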
| { |
| "text": "In order to jointly predict triggers and argument roles for W , we maintain a binary memory vector G trg i for triggers, and binary memory matrices G arg i and G arg/trg i for arguments (at each time i). These vector/matrices are set to zeros initially (i = 0) and updated during the prediction process for W . Given the bidirectional representation h 1 , h 2 , . . . , h n in the encoding phase and the initialized memory vector/matrices, the joint prediction procedure loops over n tokens in the sentence (from 1 to n). At each time step i, we perform the following three stages in order: The output of this process would be the predicted trigger subtype t i for w i , the predicted argument roles a i1 , a i2 , . . . , a ik and the memory vector/matrices G trg i , G arg i and G arg/trg i for the current step. Note that t i should be the event subtype if w i is a trigger word for some event of interest, or \"Other\" in the other cases. a ij , in constrast, should be the argument role of the entity mention e j with respect to w i if w i is a trigger word and e j is an argument of the corresponding event, otherwise a ij is set to \"Other\" (j = 1 to k).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prediction", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In the trigger prediction stage for the current token w i , we first compute the feature representation vector R trg i for w i using the concatenation of the following three vectors:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trigger Prediction", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "\u2022 h i : the hidden vector to encapsulate the global context of the input sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trigger Prediction", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "\u2022 L trg i : the local context vector for w i . L trg i is generated by concatenating the vectors of the words in a context window", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trigger Prediction", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "d of w i : L trg i = [D[w i\u2212d ], . . . , D[w i ], . . . , D[w i+d ]].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trigger Prediction", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "\u2022 G trg i\u22121 : the memory vector from the previous step. The representation vector R", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trigger Prediction", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "trg i = [h i , L trg i , G trg i\u22121 ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trigger Prediction", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "is then fed into a feed-forward neural network F trg with a softmax layer in the end to compute the probability distribution P trg i;t over the possible trigger subtypes:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trigger Prediction", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "P trg i;t = P trg i (l = t) = F trg t (R trg i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trigger Prediction", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "where t is a trigger subtype. Finally, we compute the predicted type t i for w i by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trigger Prediction", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "t i = argmax t (P trg i;t ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Trigger Prediction", |
| "sec_num": "3.2.1" |
| }, |
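The trigger stage above (concatenate h_i, the local window vector L^trg_i, and the memory vector, then apply a feed-forward layer with a softmax) can be sketched as below. The label inventory, dimensions, and random weights are illustrative assumptions; ACE actually defines 33 event subtypes plus "Other":

```python
import math
import random

random.seed(1)

TRIGGER_TYPES = ["Other", "Die", "Attack"]  # assumed tiny label inventory

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_trigger(h_i, local_ctx, memory_vec, weights):
    """Score R_i = [h_i, L_i, G_{i-1}] against each trigger subtype."""
    r = h_i + local_ctx + memory_vec  # feature concatenation
    scores = [sum(w * f for w, f in zip(row, r)) for row in weights]
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)  # argmax_t
    return TRIGGER_TYPES[best], probs

DIM = 2 + 3 + len(TRIGGER_TYPES)  # |h_i| + |L_i| + |G^trg| in this toy setup
W = [[random.uniform(-1.0, 1.0) for _ in range(DIM)] for _ in TRIGGER_TYPES]
label, probs = predict_trigger([0.5, -0.2], [0.1, 0.0, 0.3], [0.0, 0.0, 0.0], W)
assert abs(sum(probs) - 1.0) < 1e-9 and label in TRIGGER_TYPES
```

The single linear layer here stands in for the feed-forward network F^trg; in training the weights would be learned jointly with the encoder.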
| { |
| "text": "In the argument role prediction stage, we first check if the predicted trigger subtype t i in the previous stage is \"Other\" or not. If yes, we can simply set a ij to \"Other\" for all j = 1 to k and go to the next stage immediately. Otherwise, we loop over the entity mentions e 1 , e 2 , . . . , e k . For each entity mention e j with the head index of i j , we predict the argument role a ij with respect to the trigger word w i using the following procedure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "First, we generate the feature representation vector R arg ij for e j and w i by concatenating the following vectors:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u2022 h i and h i j : the hidden vectors to capture the global context of the input sentence for w i and e j , respectively. \u2022 L arg ij : the local context vector for w i and e j . L arg ij is the concatenation of the vectors of the words in the context windows of size d for w", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "i and w i j : L arg ij = [D[w i\u2212d ], . . . , D[w i ], . . . , D[w i+d ], D[w i j \u2212d ], . . . , D[w i j ], . . . , D[w i j +d ]].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u2022 B ij : the hidden vector for the binary feature vector V ij . V ij is based on the local argument features between the tokens i and i j from (Li et al., 2013) . B ij is then computed by feeding V ij into a feed-forward neural network F binary for further abstraction:", |
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 160, |
| "text": "(Li et al., 2013)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "B ij = F binary (V ij ). \u2022 G arg i\u22121 [j] and G arg/trg i\u22121 [j]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": ": the memory vectors for e j that are extracted out of the memory matrices G arg i\u22121 and G arg/trg i\u22121 from the previous step. In the next step, we again use a feedforward neural network F arg with a soft-max layer in the end to transform R", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "arg ij = [h i , h i j , L arg ij , B ij , G arg i\u22121 [j], G arg/trg i\u22121 [j]]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "into the probability distribution P trg ij;a over the possible argument roles:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "P arg ij;a = P arg ij (l = a) = F arg a (R arg ij )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "where a is an argument role. Eventually, the predicted argument role for w i and e j is a ij = argmax a (P arg ij;a ). Note that the binary vector V ij enriches the feature representation R arg ij for argument labeling with the discrete structures discovered in the prior work on feature analysis for EE (Li et al., 2013) . These features include the shortest dependency paths, the entity types, subtypes, etc.", |
| "cite_spans": [ |
| { |
| "start": 304, |
| "end": 321, |
| "text": "(Li et al., 2013)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Argument Role Prediction", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "An important characteristics of EE is the existence of the dependencies between trigger labels and argument roles within the same sentences (Li et al., 2013) . In this work, we encode these dependencies into the memory vectors/matrices G A motivation for such dependencies is that if a Die event appears somewhere in the sentences, the possibility for the later occurrence of an Attack event would be likely.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 157, |
| "text": "(Li et al., 2013)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Memory Vector/Matrices", |
| "sec_num": "3.2.3" |
| }, |
| { |
| "text": "2. The dependencies among argument roles: are encoded by the memory matrix G arg i (G arg i \u2208 {0, 1} k\u00d7n A for i = 0, . . . , n, and n A is the number of the possible argument roles). At time i, G arg i summarizes the argument roles that the entity mentions has played with some event in the past. In particular, G ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Memory Vector/Matrices", |
| "sec_num": "3.2.3" |
| }, |
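The binary memory matrix G^{arg}_i can be maintained with a simple monotone update: once entity mention j has played role a with some earlier event in the sentence, the corresponding cell stays set. A sketch under that reading (the function name and the use of `None` for the no-role case are assumptions for illustration):

```python
import numpy as np

def update_arg_memory(G_prev, j, a):
    """Set G^arg_i[j, a] = 1 once entity mention j has played role a
    for some event earlier in the sentence; all other cells carry over."""
    G = G_prev.copy()
    if a is not None:          # None stands in for the "Other" (no-role) label
        G[j, a] = 1
    return G

k, n_A = 3, 5                              # entity mentions, possible roles
G = np.zeros((k, n_A), dtype=np.int8)      # G^arg_0 starts all zeros
G = update_arg_memory(G, j=1, a=2)         # mention 1 plays role 2
G = update_arg_memory(G, j=1, a=2)         # idempotent: still a 0/1 matrix
```

The update is idempotent, so repeated roles for the same mention leave the matrix binary, as the definition G^{arg}_i \u2208 {0, 1}^{k \u00d7 n_A} requires.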
| { |
| "text": "Denote the given trigger subtypes and argument roles for W at training time as T = (t*_1, t*_2, . . . , t*_n) and A = (a*_{ij})_{i=1..n, j=1..k}. We train the network by minimizing the joint negative log-likelihood function C for triggers and argument roles:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "C(T, A, X, E) = \u2212 log P(T, A|X, E) = \u2212 log P(T|X, E) \u2212 log P(A|T, X, E) = \u2212 \u03a3_{i=1..n} log P^{trg}_{i;t*_i} \u2212 \u03a3_{i=1..n} I(t_i \u2260 \"Other\") \u03a3_{j=1..k} log P^{arg}_{ij;a*_{ij}}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
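The joint objective above is the trigger negative log-likelihood plus the argument-role term, gated by an indicator so argument losses only count for words predicted as actual triggers. A minimal sketch of the loss computation (the function name and the convention that index 0 is the \"Other\" class are assumptions):

```python
import numpy as np

def joint_nll(P_trg, P_arg, T, A, other=0):
    """C(T, A, X, E) = -sum_i log P_trg[i, t*_i]
       - sum_i I(t_i != "Other") sum_j log P_arg[i, j, a*_ij]."""
    n, k = P_arg.shape[0], P_arg.shape[1]
    loss = 0.0
    for i in range(n):
        loss -= np.log(P_trg[i, T[i]])        # trigger term
        if T[i] != other:                     # indicator: skip "Other" triggers
            for j in range(k):
                loss -= np.log(P_arg[i, j, A[i][j]])
    return loss

# toy example: n = 2 words, k = 2 entity mentions, uniform distributions
P_trg = np.full((2, 2), 0.5)        # 2 trigger classes (index 0 = "Other")
P_arg = np.full((2, 2, 3), 1 / 3)   # 3 argument role classes
loss = joint_nll(P_trg, P_arg, T=[1, 0], A=[[0, 0], [0, 0]])
# word 0 is a trigger (pays trigger + 2 argument terms); word 1 is "Other"
```

With these uniform toy distributions the loss is 2 log 2 + 2 log 3, since only the first word contributes argument terms.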
| { |
| "text": "where I is the indicator function. We apply the stochastic gradient descent algorithm with mini-batches and the AdaDelta update rule (Zeiler, 2012). The gradients are computed using back-propagation. During training, besides the weight matrices, we also optimize the word and entity type embedding tables. Finally, we rescale the weights whose Frobenius norms exceed a hyperparameter (Kim, 2014; Nguyen and Grishman, 2015a).", |
| "cite_spans": [ |
| { |
| "start": 414, |
| "end": 425, |
| "text": "(Kim, 2014;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 426, |
| "end": 453, |
| "text": "Nguyen and Grishman, 2015a)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "3.3" |
| }, |
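The Frobenius-norm rescaling mentioned above is the max-norm constraint of Kim (2014): after each update, any weight matrix whose Frobenius norm exceeds the hyperparameter s is scaled back onto the norm-s ball. A sketch (the function name is an assumption; s = 3 matches the value used in Section 5.1):

```python
import numpy as np

def rescale_weights(W, s=3.0):
    """Max-norm regularization: if ||W||_F > s, rescale W so its
    Frobenius norm equals s; otherwise leave W unchanged."""
    norm = np.linalg.norm(W)            # Frobenius norm for a 2-D array
    return W * (s / norm) if norm > s else W

W = np.full((4, 4), 2.0)                # ||W||_F = sqrt(16 * 4) = 8 > 3
W = rescale_weights(W, s=3.0)           # rescaled onto the norm-3 ball
```

This would typically be applied to every weight matrix after each mini-batch step.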
| { |
| "text": "Following the prior work (Nguyen and Grishman, 2015b; Chen et al., 2015), we pre-train word embeddings from a large corpus and employ them to initialize the word embedding table. Among the models for training word embeddings are those proposed in Mikolov et al. (2013a; 2013b), which introduce two log-linear models, i.e., the continuous bag-of-words model (CBOW) and the continuous skip-gram model (SKIP-GRAM). The CBOW model attempts to predict the current word based on the average of the context word vectors, while the SKIP-GRAM model aims to predict the surrounding words in a sentence given the current word. In this work, besides the CBOW and SKIP-GRAM models, we examine a concatenation-based variant of CBOW (C-CBOW) to train word embeddings and compare the three models to understand their effectiveness for EE. The objective of C-CBOW is to predict the target word using the concatenation of the vectors of the words surrounding it.", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 53, |
| "text": "Grishman, 2015b;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 54, |
| "end": 72, |
| "text": "Chen et al., 2015)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 245, |
| "end": 267, |
| "text": "Mikolov et al. (2013a;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 268, |
| "end": 273, |
| "text": "2013b", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Representation", |
| "sec_num": "4" |
| }, |
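The difference between CBOW and the C-CBOW variant is just how the context window is pooled into the predictor's input: averaging collapses all context positions into one vector, while concatenation keeps each position separate so the model can weight them individually. A sketch of the two context encodings (function names are assumptions; only the pooling step is shown, not the full training objective):

```python
import numpy as np

def cbow_context(ctx_vecs):
    """CBOW: predict the target from the average of the context vectors."""
    return np.mean(ctx_vecs, axis=0)      # shape (d,) regardless of window

def c_cbow_context(ctx_vecs):
    """C-CBOW: concatenate the context vectors, letting the predictor
    assign a different weight to each context position."""
    return np.concatenate(ctx_vecs)       # shape (2c * d,)

ctx = [np.ones(3) * i for i in range(4)]  # 4 context words, d = 3
avg = cbow_context(ctx)                   # length-3 averaged vector
cat = c_cbow_context(ctx)                 # length-12 concatenated vector
```

The concatenated input is 2c times wider, which is why C-CBOW can learn position-specific weights that plain CBOW (a single averaged input) cannot.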
| { |
| "text": "For all the experiments below, in the encoding phase, we use 50 dimensions for the entity type embeddings, 300 dimensions for the word embeddings and 300 units in the hidden layers for the RNNs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Resources, Parameters and Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Regarding the prediction phase, we employ a context window of 2 for the local features, and feed-forward neural networks with one hidden layer for F^{trg}, F^{arg} and F^{binary} (the sizes of the hidden layers are 600, 600 and 300, respectively).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Resources, Parameters and Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Finally, for training, we use a mini-batch size of 50 and set the hyperparameter for the Frobenius norms to 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Resources, Parameters and Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "These parameter values are either inherited from the prior research (Nguyen and Grishman, 2015b; Chen et al., 2015) or selected according to the validation data.", |
| "cite_spans": [ |
| { |
| "start": 80, |
| "end": 96, |
| "text": "Grishman, 2015b;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 97, |
| "end": 115, |
| "text": "Chen et al., 2015)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Resources, Parameters and Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We pre-train the word embeddings from the English Gigaword corpus utilizing the word2vec toolkit 4 (modified to add the C-CBOW model). Following Baroni et al. (2014), we employ a context window of 5, subsampling of frequent words set to 1e-05, and 10 negative samples. We evaluate the model on the ACE 2005 corpus. For the purpose of comparison, we use the same data split as the previous work (Ji and Grishman, 2008; Liao and Grishman, 2010; Li et al., 2013; Nguyen and Grishman, 2015b; Chen et al., 2015). This data split includes 40 newswire articles (672 sentences) for the test set, 30 other documents (836 sentences) for the development set and the 529 remaining documents (14,849 sentences) for the training set. Also, we follow the criteria of the previous work (Ji and Grishman, 2008; Liao and Grishman, 2010; Li et al., 2013; Chen et al., 2015) to judge the correctness of the predicted event mentions.", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 165, |
| "text": "Baroni et al. (2014)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 406, |
| "end": 429, |
| "text": "(Ji and Grishman, 2008;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 430, |
| "end": 454, |
| "text": "Liao and Grishman, 2010;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 455, |
| "end": 471, |
| "text": "Li et al., 2013;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 472, |
| "end": 499, |
| "text": "Nguyen and Grishman, 2015b;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 500, |
| "end": 518, |
| "text": "Chen et al., 2015)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 779, |
| "end": 802, |
| "text": "(Ji and Grishman, 2008;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 803, |
| "end": 827, |
| "text": "Liao and Grishman, 2010;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 828, |
| "end": 844, |
| "text": "Li et al., 2013;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 845, |
| "end": 863, |
| "text": "Chen et al., 2015)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Resources, Parameters and Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "This section evaluates the effectiveness of the memory vector and matrices presented in Section 3.2.3. In particular, we test the joint model in different cases where the memory vector for triggers G^{trg} and the memory matrices for arguments G^{arg/trg} and G^{arg} are included in or excluded from the model. As there are 4 different ways to combine G^{arg/trg} and G^{arg} for argument labeling and two options to employ G^{trg} or not for trigger labeling, we have 8 systems for comparison in total. Table 1 reports the identification and classification performance (F1 scores) for triggers and argument roles on the development set. Note that we are using the word embeddings trained with the C-CBOW technique in this section.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 493, |
| "end": 500, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Memory Vector/Matrices", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We observe that the memory vector G^{trg} is not helpful for the joint model, as it worsens both trigger and argument role performance (considering the same choice of the memory matrices G^{arg/trg} and G^{arg}, i.e., the same row in the table, and except in the row with G^{arg/trg} + G^{arg}).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System", |
| "sec_num": null |
| }, |
| { |
| "text": "The clearest trend is that G^{arg/trg} is very effective in improving the performance of argument labeling. This is true with both the inclusion and exclusion of G^{trg}. G^{arg} and its combination with G^{arg/trg}, on the other hand, have a negative effect on this task. Finally, G^{arg/trg} and G^{arg} do not contribute much to the trigger labeling performance in general (except in the case where G^{trg}, G^{arg/trg} and G^{arg} are all applied).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System", |
| "sec_num": null |
| }, |
| { |
| "text": "These observations suggest that the dependencies among trigger subtypes and among argument roles are not strong enough to be helpful for the joint model on this dataset. This is in contrast to the dependencies between argument roles and trigger subtypes, which improve the joint model significantly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System", |
| "sec_num": null |
| }, |
| { |
| "text": "The best system corresponds to the application of the memory matrix G^{arg/trg} and will be used in all the experiments below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System", |
| "sec_num": null |
| }, |
| { |
| "text": "We investigate different techniques to obtain the pre-trained word embeddings for initialization in the joint model of EE. Table 2 presents the performance (for both triggers and argument roles) on the development set when the CBOW, SKIP-GRAM and C-CBOW techniques are utilized to obtain word embeddings from the same corpus. We also report the performance of the joint model when it is initialized with the Word2Vec word embeddings from Mikolov et al. (2013a; 2013b). The first observation from the table is that RANDOM is not good enough to initialize the word embeddings for joint EE, and we need to borrow some pre-trained word embeddings for this purpose. Second, SKIP-GRAM, WORD2VEC and CBOW have comparable performance on trigger labeling, while the argument labeling performance of SKIP-GRAM and WORD2VEC is much better than that of CBOW for the joint EE model. Third and most importantly, among the compared word embeddings, it is clear that C-CBOW significantly outperforms all the others. We believe that the better performance of C-CBOW stems from its concatenation of the multiple context word vectors, thus providing more information to learn better word embeddings than SKIP-GRAM and WORD2VEC. In addition, the concatenation mechanism essentially helps to assign different weights to different context words, thereby being more flexible than CBOW, which applies a single weight to all the context words. From now on, for consistency, C-CBOW will be used in all the following experiments.", |
| "cite_spans": [ |
| { |
| "start": 437, |
| "end": 459, |
| "text": "Mikolov et al. (2013a;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 460, |
| "end": 466, |
| "text": "2013b)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 122, |
| "end": 129, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Embedding Evaluation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The state-of-the-art systems for EE on the ACE 2005 dataset have been the pipelined system with dynamic multi-pooling convolutional neural networks by Chen et al. (2015) (DMCNN) and the joint system with structured prediction and various discrete local and global features by Li et al. (2013) (Li's structure).", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 177, |
| "text": "Chen et al. (2015) (DMCNN)", |
| "ref_id": null |
| }, |
| { |
| "start": 276, |
| "end": 292, |
| "text": "Li et al. (2013)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 293, |
| "end": 309, |
| "text": "(Li's structure)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to the State of the art", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Note that the pipelined system in Chen et al. (2015) is also the best-reported neural-network-based system for EE. Table 3 compares these state-of-the-art systems with the joint RNN-based model in this work (denoted by JRNN). For completeness, we also report the performance of the following representative systems: 1) Li's baseline: the pipelined system with local features by Li et al. (2013).", |
| "cite_spans": [ |
| { |
| "start": 34, |
| "end": 52, |
| "text": "Chen et al. (2015)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 390, |
| "end": 406, |
| "text": "Li et al. (2013)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 119, |
| "end": 126, |
| "text": "Table 3", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison to the State of the art", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "2) Liao's cross-event: the system by Liao and Grishman (2010) with document-level information.", |
| "cite_spans": [ |
| { |
| "start": 40, |
| "end": 64, |
| "text": "Liao and Grishman (2010)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to the State of the art", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "3) Hong's cross-entity (Hong et al., 2011): this system exploits cross-entity inference, and is also the best-reported pipelined system with discrete features in the literature.", |
| "cite_spans": [ |
| { |
| "start": 23, |
| "end": 42, |
| "text": "(Hong et al., 2011)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to the State of the art", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "From the table, we see that JRNN achieves the best F1 scores (for both trigger and argument labeling) among all of the compared models. The gain is most significant for the argument role labeling performance (an improvement of 1.9% over the best-reported model DMCNN in Chen et al. (2015)), and demonstrates the benefit of the joint model with RNNs and memory features in this work. In addition, as JRNN significantly outperforms the joint model with discrete features in Li et al. (2013) (an improvement of 1.8% and 2.7% for trigger and argument role labeling, respectively), we can confirm the effectiveness of RNNs in learning effective feature representations for EE.", |
| "cite_spans": [ |
| { |
| "start": 262, |
| "end": 280, |
| "text": "Chen et al. (2015)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 465, |
| "end": 481, |
| "text": "Li et al. (2013)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to the State of the art", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In order to further demonstrate the effectiveness of JRNN, especially for sentences with multiple events, we divide the test data into two parts according to the number of events in the sentences (i.e., single event and multiple events) and evaluate the performance separately, following Chen et al. (2015). Table 4 shows the performance (F1 scores) of JRNN, DMCNN and two other baseline systems, named Embeddings+T and CNN in Chen et al. (2015). Embeddings+T uses word embeddings and the traditional sentence-level features in (Li et al., 2013), while CNN is similar to DMCNN, except that it applies the standard pooling mechanism instead of the dynamic multi-pooling method (Chen et al., 2015).", |
| "cite_spans": [ |
| { |
| "start": 287, |
| "end": 305, |
| "text": "Chen et al. (2015)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 427, |
| "end": 445, |
| "text": "Chen et al. (2015)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 529, |
| "end": 546, |
| "text": "(Li et al., 2013)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 676, |
| "end": 695, |
| "text": "(Chen et al., 2015)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentences with Multiple Events", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "The most important observation from the table is that JRNN significantly outperforms all the other methods by large margins when the input sentences contain more than one event (i.e., the row labeled 1/N in the table). In particular, JRNN is 13.9% better than DMCNN on trigger labeling, while the corresponding improvement for argument role labeling is 6.5%, further suggesting the benefit of JRNN with the memory features. Regarding the performance on single-event sentences, JRNN is still the best system on trigger labeling, although it is worse than DMCNN on argument role labeling. This can be partly explained by the fact that DMCNN includes position embedding features for arguments, and the memory matrix G^{arg/trg} in JRNN is not functioning in this single-event case.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentences with Multiple Events", |
| "sec_num": "5.5" |
| }, |
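The split evaluation above just buckets test sentences by their gold event count and scores each bucket separately. A sketch of that bookkeeping, assuming per-sentence true-positive/false-positive/false-negative counts are already available (the function names and the dict layout are illustrative assumptions):

```python
from collections import defaultdict

def f1(tp, fp, fn):
    """Standard F1 from raw counts, guarding against empty buckets."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def f1_by_event_count(sentences):
    """Score single-event ("1") and multiple-event ("1/N") sentences
    separately, as in the Table 4 comparison."""
    buckets = defaultdict(lambda: [0, 0, 0])    # tp, fp, fn per bucket
    for s in sentences:
        key = "1" if s["n_events"] == 1 else "1/N"
        b = buckets[key]
        b[0] += s["tp"]; b[1] += s["fp"]; b[2] += s["fn"]
    return {k: f1(*v) for k, v in buckets.items()}

scores = f1_by_event_count([
    {"n_events": 1, "tp": 1, "fp": 0, "fn": 0},   # single-event sentence
    {"n_events": 2, "tp": 1, "fp": 1, "fn": 1},   # multiple-event sentence
])
```

Counts are pooled within each bucket before computing F1 (micro-averaging), rather than averaging per-sentence scores.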
| { |
| "text": "Early research on event extraction has primarily focused on local sentence-level representations in a pipelined architecture (Grishman et al., 2005; Ahn, 2006). After that, higher-level features were investigated to improve the performance (Ji and Grishman, 2008; Gupta and Ji, 2009; Patwardhan and Riloff, 2009; Liao and Grishman, 2010; Liao and Grishman, 2011; Hong et al., 2011; McClosky et al., 2011; Huang and Riloff, 2012; Li et al., 2013). Besides, some recent research has proposed joint models for EE, including methods based on Markov Logic Networks (Riedel et al., 2009; Poon and Vanderwende, 2010; Venugopal et al., 2014), structured perceptron (Li et al., 2013; Li et al., 2014b), and dual decomposition (Riedel et al. (2009; 2011a; 2011b)). The application of neural networks to EE is very recent. In particular, Nguyen and Grishman (2015b) study domain adaptation and event detection via CNNs, while Chen et al. (2015) apply dynamic multi-pooling CNNs for EE in a pipelined framework. However, none of these works utilizes RNNs to perform joint EE as we do in this work.", |
| "cite_spans": [ |
| { |
| "start": 125, |
| "end": 148, |
| "text": "(Grishman et al., 2005;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 149, |
| "end": 159, |
| "text": "Ahn, 2006)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 245, |
| "end": 268, |
| "text": "(Ji and Grishman, 2008;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 269, |
| "end": 288, |
| "text": "Gupta and Ji, 2009;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 289, |
| "end": 317, |
| "text": "Patwardhan and Riloff, 2009;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 318, |
| "end": 342, |
| "text": "Liao and Grishman, 2010;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 343, |
| "end": 367, |
| "text": "Liao and Grishman, 2011;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 368, |
| "end": 386, |
| "text": "Hong et al., 2011;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 387, |
| "end": 409, |
| "text": "McClosky et al., 2011;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 410, |
| "end": 433, |
| "text": "Huang and Riloff, 2012;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 434, |
| "end": 450, |
| "text": "Li et al., 2013)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 570, |
| "end": 591, |
| "text": "(Riedel et al., 2009;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 592, |
| "end": 619, |
| "text": "Poon and Vanderwende, 2010;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 620, |
| "end": 643, |
| "text": "Venugopal et al., 2014)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 668, |
| "end": 685, |
| "text": "(Li et al., 2013;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 686, |
| "end": 703, |
| "text": "Li et al., 2014b)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 729, |
| "end": 750, |
| "text": "(Riedel et al. (2009;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 751, |
| "end": 757, |
| "text": "2011a;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 758, |
| "end": 764, |
| "text": "2011b)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 927, |
| "end": 945, |
| "text": "Chen et al. (2015)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We present a joint model for EE based on bidirectional RNNs to overcome the limitations of the previous models for this task. We introduce a memory matrix that can effectively capture the dependencies between argument roles and trigger subtypes. We demonstrate that the concatenation-based variant of the CBOW word embeddings is very helpful for the joint model. The proposed joint model is empirically shown to be effective on sentences with multiple events, and it yields the state-of-the-art performance on the ACE 2005 dataset. In the future, we plan to apply this joint model to the event argument extraction task of the KBP evaluation as well as extend it to other joint tasks, such as mention detection together with relation extraction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Local features encapsulate the characteristics of the individual tasks (i.e., trigger and argument role labeling), while global features target the dependencies between triggers and arguments and are only available in the joint approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://projects.ldc.upenn.edu/ace", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "From now on, when mentioning entity mentions, we always refer to the ACE entity mentions, times and values.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://code.google.com/p/word2vec/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The stages of event extraction", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Ahn", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Workshop on Annotating and Reasoning about Time and Events", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Rea- soning about Time and Events.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Dinu", |
| "suffix": "" |
| }, |
| { |
| "first": "Germ\u00e1n", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic compari- son of context-counting vs. context-predicting seman- tic vectors. In ACL.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Learning long-term dependencies with gradient descent is difficult", |
| "authors": [ |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrice", |
| "middle": [], |
| "last": "Simard", |
| "suffix": "" |
| }, |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Frasconi", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "In Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. In Journal of Machine Learning Research 3.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A neural probabilistic language model", |
| "authors": [ |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9jean", |
| "middle": [], |
| "last": "Ducharme", |
| "suffix": "" |
| }, |
| { |
| "first": "Pascal", |
| "middle": [], |
| "last": "Vincent", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Jauvin", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "In Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. In Journal of Machine Learning Re- search 3.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Event extraction via dynamic multi-pooling convolutional neural networks", |
| "authors": [ |
| { |
| "first": "Yubo", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Liheng", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Daojian", |
| "middle": [], |
| "last": "Zeng", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ACL-IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In ACL- IJCNLP.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Van Merrienboer", |
| "suffix": "" |
| }, |
| { |
| "first": "Caglar", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Fethi", |
| "middle": [], |
| "last": "Bougares", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase represen- tations using rnn encoder-decoder for statistical ma- chine translation. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", |
| "authors": [ |
| { |
| "first": "Junyoung", |
| "middle": [], |
| "last": "Chung", |
| "suffix": "" |
| }, |
| { |
| "first": "Caglar", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXivpreprintarXiv:1412.3555" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. In arXiv preprint arXiv:1412.3555.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A unified architecture for natural language processing: deep neural networks with multitask learning", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified ar- chitecture for natural language processing: deep neural networks with multitask learning. In ICML.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Long short-term memory in recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Gers", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Gers. 2001. Long short-term memory in recurrent neural networks. In PhD Thesis.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "NYU's English ACE 2005 system description", |
| "authors": [ |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Westbrook", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Meyers", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "ACE 2005 Evaluation Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ralph Grishman, David Westbrook, and Adam Meyers. 2005. Nyus english ace 2005 system description. In ACE 2005 Evaluation Workshop.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Predicting unknown time arguments based on cross-event propagation", |
| "authors": [ |
| { |
| "first": "Prashant", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "ACL-IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Prashant Gupta and Heng Ji. 2009. Predicting unknown time arguments based on cross-event propagation. In ACL-IJCNLP.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "Jurgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Computation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. In Neural Computation.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Using cross-entity inference to improve event extraction", |
| "authors": [ |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Hong", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Bin", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianmin", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Guodong", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Qiaoming", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In ACL.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Modeling textual cohesion for event extraction", |
| "authors": [ |
| { |
| "first": "Ruihong", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ruihong Huang and Ellen Riloff. 2012. Modeling textual cohesion for event extraction. In AAAI.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Refining event extraction through cross-document inference", |
| "authors": [ |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In ACL.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "An empirical exploration of recurrent network architectures", |
| "authors": [ |
| { |
| "first": "Rafal", |
| "middle": [], |
| "last": "J\u00f3zefowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Wojciech", |
| "middle": [], |
| "last": "Zaremba", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rafal J\u00f3zefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In ICML.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Convolutional neural networks for sentence classification", |
| "authors": [ |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Joint event extraction via structured prediction with global features", |
| "authors": [ |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In ACL.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Constructing information networks using one single model", |
| "authors": [ |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Hong", |
| "suffix": "" |
| }, |
| { |
| "first": "Sujian", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qi Li, Heng Ji, Yu Hong, and Sujian Li. 2014b. Constructing information networks using one single model. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Using document level cross-event inference to improve event extraction", |
| "authors": [ |
| { |
| "first": "Shasha", |
| "middle": [], |
| "last": "Liao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In ACL.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Acquiring topic features to improve event extraction: in pre-selected and balanced collections", |
| "authors": [ |
| { |
| "first": "Shasha", |
| "middle": [], |
| "last": "Liao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "RANLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shasha Liao and Ralph Grishman. 2011. Acquiring topic features to improve event extraction: in pre-selected and balanced collections. In RANLP.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Event extraction as dependency parsing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "McClosky", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "BioNLP Shared Task Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David McClosky, Mihai Surdeanu, and Christopher Manning. 2011. Event extraction as dependency parsing. In BioNLP Shared Task Workshop.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In ICLR.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In NIPS.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Relation extraction: Perspective from convolutional neural networks", |
| "authors": [ |
| { |
| "first": "Thien", |
| "middle": ["Huu"], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "The NAACL Workshop on Vector Space Modeling for NLP (VSM)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015a. Relation extraction: Perspective from convolutional neural networks. In The NAACL Workshop on Vector Space Modeling for NLP (VSM).", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Event detection and domain adaptation with convolutional neural networks", |
| "authors": [ |
| { |
| "first": "Thien", |
| "middle": ["Huu"], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ACL-IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015b. Event detection and domain adaptation with convolutional neural networks. In ACL-IJCNLP.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "A unified model of phrasal and sentential evidence for information extraction", |
| "authors": [ |
| { |
| "first": "Siddharth", |
| "middle": [], |
| "last": "Patwardhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Siddharth Patwardhan and Ellen Riloff. 2009. A unified model of phrasal and sentential evidence for information extraction. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Joint inference for knowledge extraction from biomedical literature", |
| "authors": [ |
| { |
| "first": "Hoifung", |
| "middle": [], |
| "last": "Poon", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucy", |
| "middle": [], |
| "last": "Vanderwende", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hoifung Poon and Lucy Vanderwende. 2010. Joint inference for knowledge extraction from biomedical literature. In NAACL-HLT.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Fast and robust joint models for biomedical event extraction", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "McCallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Riedel and Andrew McCallum. 2011a. Fast and robust joint models for biomedical event extraction. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Robust biomedical event extraction with dual decomposition and minimal domain adaptation", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "McCallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "BioNLP Shared Task 2011 Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Riedel and Andrew McCallum. 2011b. Robust biomedical event extraction with dual decomposition and minimal domain adaptation. In BioNLP Shared Task 2011 Workshop.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "A Markov logic approach to bio-molecular event extraction", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Hong-Woo", |
| "middle": [], |
| "last": "Chun", |
| "suffix": "" |
| }, |
| { |
| "first": "Toshihisa", |
| "middle": [], |
| "last": "Takagi", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "BioNLP 2009 Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Riedel, Hong-Woo Chun, Toshihisa Takagi, and Jun'ichi Tsujii. 2009. A Markov logic approach to bio-molecular event extraction. In BioNLP 2009 Workshop.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Word representations: A simple and general method for semi-supervised learning", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Turian", |
| "suffix": "" |
| }, |
| { |
| "first": "Lev-Arie", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In ACL.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Relieving the computational bottleneck: Joint inference for event extraction with highdimensional features", |
| "authors": [ |
| { |
| "first": "Deepak", |
| "middle": [], |
| "last": "Venugopal", |
| "suffix": "" |
| }, |
| { |
| "first": "Chen", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Vibhav", |
| "middle": [], |
| "last": "Gogate", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Deepak Venugopal, Chen Chen, Vibhav Gogate, and Vincent Ng. 2014. Relieving the computational bottleneck: Joint inference for event extraction with high-dimensional features. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Adadelta: An adaptive learning rate method", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": ["D."], |
| "last": "Zeiler", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "CoRR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew D. Zeiler. 2012. Adadelta: An adaptive learning rate method. In CoRR, abs/1212.5701.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "text": "(i) trigger prediction for w_i. (ii) argument role prediction for all the entity mentions e_1, e_2, . . . , e_k with respect to the current token w_i. (iii) compute G^{trg}_i, G^{arg}_i and G^{arg/trg}_i for the current step using the previous memory vector/matrices G^{trg}_{i\u22121}, G^{arg}_{i\u22121} and G^{arg/trg}_{i\u22121}, and the prediction output in the earlier stages.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "text": "(for i = 0 to n) and use them as features in the trigger and argument prediction explicitly (as shown in the representation vectors R^{trg}_i and R^{arg}_{ij} above). We classify the dependencies into the following three categories: 1. The dependencies among trigger subtypes: are captured by the memory vectors G^{trg}_i (G^{trg}_i \u2208 {0, 1}^{n_T} for i = 0, . . . , n, and n_T is the number of the possible trigger subtypes). At time i, G^{trg}_i indicates which event subtypes have been recognized before i. We obtain G^{trg}_i from G^{trg}_{i\u22121} and the trigger prediction output t_i at time i: G^{trg}_i[t] = 1 if t = t_i and G^{trg}_{i\u22121}[t] otherwise.", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "text": "G^{arg}_i[j][a] = 1 if and only if e_j has the role of a with some event before time i. G^{arg}_i is computed from G^{arg}_{i\u22121}, and the prediction outputs t_i and a_{i1}, . . . , a_{ik} at time i: G^{arg}_i[j][a] = 1 if t_i \u2260 \"Other\" and a = a_{ij}, and G^{arg}_{i\u22121}[j][a] otherwise (for j = 1 to k).", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "num": null, |
| "text": "The dependencies between argument roles and trigger subtypes: are encoded by the memory matrices G^{arg/trg}_i (G^{arg/trg}_i \u2208 {0, 1}^{k\u00d7n_T} for i = 0 to n). At time i, G^{arg/trg}_i specifies which entity mentions have been identified as arguments for which event subtypes before. In particular, G^{arg/trg}_i[j][t] = 1 if and only if e_j has been detected as an argument for some event of subtype t before i. G^{arg/trg}_i is computed from G^{arg/trg}_{i\u22121} and the trigger prediction output t_i at time i: G^{arg/trg}_i[j][t] = 1 if t_i \u2260 \"Other\" and t = t_i, and G^{arg/trg}_{i\u22121}[j][t] otherwise (for all j = 1 to k).", |
| "uris": null |
| }, |
| "TABREF2": { |
| "text": "Performance of the Memory Vector/Matrices on the development set. \"No\" means the memory vector/matrices are not used.", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "text": "Performance of the Word Embedding Techniques.", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "text": "Overall Performance on the Blind Test Data. \"\u2020\" designates the systems that employ evidence beyond the sentence level.", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |