{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:44:38.762968Z"
},
"title": "Is Killed More Significant than Fled? A Contextual Model for Salient Event Detection",
"authors": [
{
"first": "Disha",
"middle": [],
"last": "Jindal",
"suffix": "",
"affiliation": {},
"email": "djjindal@seas.upenn.edu"
},
{
"first": "Daniel",
"middle": [],
"last": "Deutsch",
"suffix": "",
"affiliation": {},
"email": "ddeutsch@seas.upenn.edu"
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": "",
"affiliation": {},
"email": "danroth@seas.upenn.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Identifying the key events in a document is critical to holistically understanding its important information. Although measuring the salience of events is highly contextual, most previous work has used a limited representation of events that omits essential information. In this work, we propose a highly contextual model of event salience that uses a rich representation of events, incorporates document-level information and allows for interactions between latent event encodings. Our experimental results on an event salience dataset (Liu et al., 2018) demonstrate that our model improves over previous work by an absolute 2-4% on standard metrics, establishing a new state-of-the-art performance for the task. We also propose a new evaluation metric which addresses flaws in previous evaluation methodologies. Finally, we discuss the importance of salient event detection for the downstream task of summarization. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Identifying the key events in a document is critical to holistically understanding its important information. Although measuring the salience of events is highly contextual, most previous work has used a limited representation of events that omits essential information. In this work, we propose a highly contextual model of event salience that uses a rich representation of events, incorporates document-level information and allows for interactions between latent event encodings. Our experimental results on an event salience dataset (Liu et al., 2018) demonstrate that our model improves over previous work by an absolute 2-4% on standard metrics, establishing a new state-of-the-art performance for the task. We also propose a new evaluation metric which addresses flaws in previous evaluation methodologies. Finally, we discuss the importance of salient event detection for the downstream task of summarization. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Identifying the salient information in a given piece of text is a ubiquitous and important problem in natural language understanding. While important parts of the text have been identified by attending to entities (Dunietz and Gillick, 2014) , elementary discourse units (Xu et al., 2020) , or whole sentences Liu and Lapata, 2019) , in this work, we choose to model extracting important events (Liu et al., 2018; Choubey et al., 2018) . Events are the core parts of most sentences -they center around a predicate and include its key arguments -yet they are compact semantic units, and a salient event in a sentence could carry the sentence's meaning efficiently. Extracting important events has been shown to be central to many downstream tasks, such as summarization (Marujo, 2015) , storyline creation (Martin et al., 2018) and question answering (Kocisk\u00fd et al., 2018) .",
"cite_spans": [
{
"start": 214,
"end": 241,
"text": "(Dunietz and Gillick, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 271,
"end": 288,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 310,
"end": 331,
"text": "Liu and Lapata, 2019)",
"ref_id": "BIBREF16"
},
{
"start": 395,
"end": 413,
"text": "(Liu et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 414,
"end": 435,
"text": "Choubey et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 769,
"end": 783,
"text": "(Marujo, 2015)",
"ref_id": "BIBREF20"
},
{
"start": 805,
"end": 826,
"text": "(Martin et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 850,
"end": 872,
"text": "(Kocisk\u00fd et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To model the importance of an event, it is critical to understand its context: who is involved, where did it happen, what other events is it related to, and more. For instance, in Figure 1 , the difference in salience of the \"fled\" events in each document is significantly influenced by their arguments (\"2,100 Colombians\" versus \"shopkeepers\"). Previous work that identifies salient events has a limited event representation that is unable to capture these important contextual signals (Liu et al., 2018; Choubey et al., 2018) . In contrast, the model which we propose in this work ( \u00a73) directly models the context of an event in three different ways as follows.",
"cite_spans": [
{
"start": 487,
"end": 505,
"text": "(Liu et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 506,
"end": 527,
"text": "Choubey et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 180,
"end": 188,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, instead of representing an event by only its predicate mention, our representation includes the subject, object, time, and location of the event ( \u00a73.1). Second, we directly incorporate global features into the model that capture hierarchical relations between events, abstract event frames, position information, and more ( \u00a73.2). Third, we use a neural network architecture that includes an inter-event interaction layer, which allows information to be passed between latent event encodings so that other events may increase or decrease another's importance ( \u00a73.3). Our experimental results on a standard event salience dataset (Liu et al., 2018) demonstrate that these contextual signals significantly increase performance ( \u00a74.4). We find that our model performs 2-4% better than previous work, setting a new state-of-the-art performance for the task. If Colombia is going to be another Vietnam, as everyone keeps saying, then Ecuador is going to become the Cambodia of this war, Maximo Abad Jaramillo, the mayor here, warned. We are not ready for this war, we don't want to be a part of it, but we are being dragged into the conflict against our will. In December alone, the local police say, {20 people}subject were killed {here}object, 15 of them in clashes among Colombians. As of Dec. 31, nearly {2,100 Colombians}subject had fled {the fighting just across the border}object and registered with the Roman Catholic Church in Lago Agrio.",
"cite_spans": [
{
"start": 638,
"end": 656,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Israeli troops and tanks occupied positions in Jenin on Oct. 18, the day after {Palestinian radicals}subject killed the {Israeli minister of tourism}object {in a Jerusalem hotel}location. Yael Shaluka, who works at a bakery in the market, said she heard one of the gunmen screaming, Kill them! As terrified {shopkeepers}subject fled {their stalls}object, the men cut through an aisle of the market, now pursued by policemen and an army reservist.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Two sample documents from the New York Times Annotated Corpus with \"killed\" and \"fled\" events. Event mentions in bold, arguments in braces. The colors red and blue indicate whether that event is annotated as salient or not. Some of the events' arguments (e.g., \"2,100 Colombians\" and \"Israeli minister of tourism\") elevate the importance of their respective events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition to proposing a model for salient event detection, we also provide a new evaluation metric for the task ( \u00a74.2). Previous work has evaluated the top-k salient events a model outputs using precision@k and recall@k, where the recall term is normalized by the total number of salient events in the document. However, this metric has some undesirable properties; for example, a perfect model could have, for different documents, variable recall@k scores that depend on the number of events in the document. We propose a more interpretable metric, normalized recall@k, that addresses these issues. In addition, our new metric avoids duplication counting due to co-referenced events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we discuss the potential impact that modeling salient events could have on the downstream task of extractive summarization ( \u00a75). We find that the sentence-level extractive oracle that is frequently used to train summarization systems misses a significantly large portion of sentences with important events, a result which suggests that an event-based oracle could provide a better supervision signal. Further, because events are more fine-grained than full sentences, an event-focused model can discard unimportant information in a sentence to generate a more-concise summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this work are three-fold: (1) We propose a contextual model for salient event detection that achieves state-of-the-art results; (2) We provide a sensible and more interpretable metric for evaluating extracted salient events; (3) We demonstrate that extractive summarization systems could potentially gain from modeling important events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Important Information Identification has been a topic of interest in NLP community since the 1980s and through these years, researchers have defined salience in multiple ways. Mann and Thompson (1988) divides the text into nuclei and satellite with an idea that a satellite is incomprehensible without the nucleus but, after removing some satellites, the text can still be understood. Upadhyay et al. (2016) discusses identifying events that would have triggered the author to write that article. Choubey et al. (2018) proposed the idea that the central events have a large number of coreferential event mentions and those mentions are spread throughout the document. However, we believe that in realistic documents, redundancy is commonly used for various rhetorical or other reasons, it is not necessarily the case that frequently mentioned events convey the main point of the article. We follow proposals from entity salience (Dunietz and Gillick, 2014) and event salience work (Liu et al., 2018) that suggest that these are difficult to explicitly define, but can be learned from observing human summaries: events that appear in the summary are salient.",
"cite_spans": [
{
"start": 176,
"end": 200,
"text": "Mann and Thompson (1988)",
"ref_id": "BIBREF18"
},
{
"start": 385,
"end": 407,
"text": "Upadhyay et al. (2016)",
"ref_id": "BIBREF26"
},
{
"start": 497,
"end": 518,
"text": "Choubey et al. (2018)",
"ref_id": "BIBREF3"
},
{
"start": 929,
"end": 956,
"text": "(Dunietz and Gillick, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 981,
"end": 999,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Event representation has also evolved over time. Chambers and Jurafsky (2008) represented narrative events as pairs of verb and the grammatical dependency relation between the verb and the entity. Do et al. (2011) included nominal predicates, using the nominal form of verbs and lexical items under the Event frame in FrameNet (Baker et al., 1998) . Further work by Balasubramanian et al. (2013) , Pichotta and Mooney (2014) , and others incorporated arguments such as propositional objects. However, most of the Figure 2 : Illustration of our proposed framework. Given a document, we first extract all events from it. Then, we grab the token level embeddings of all constituents from the document encoded using BERT, and compose an event level embedding. Finally, in the classification module, all events attend to/vote each other and the salience score of an event is calculated by accumulating the votes from all events.",
"cite_spans": [
{
"start": 49,
"end": 77,
"text": "Chambers and Jurafsky (2008)",
"ref_id": "BIBREF2"
},
{
"start": 197,
"end": 213,
"text": "Do et al. (2011)",
"ref_id": "BIBREF7"
},
{
"start": 327,
"end": 347,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF0"
},
{
"start": 366,
"end": 395,
"text": "Balasubramanian et al. (2013)",
"ref_id": "BIBREF1"
},
{
"start": 398,
"end": 424,
"text": "Pichotta and Mooney (2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 513,
"end": 521,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "work on event salience identification has represented events as verbs/nominal event mentions. To address this, we use a more holistic event representation which includes nominal/verbal event mentions, entities in the subject and object of the predicate as well as the time and location. A similar representation was used by Peng et al. (2016) to build an event detection and co-reference system. From modeling perspective, most earlier work (Decker, 1985; Kay and Aylett, 1996) on event salience has built rule based systems (e.g. presence of the event in the main clause, its voice, etc.). More recent work has focused on capturing coreference relation between events (Choubey et al., 2018) and on automatically capturing salient specific interactions between the discourse units (Liu et al., 2018) . However, as mentioned earlier, we believe that events are highly contextual, so we use a more expressive model to obtain contextualized embeddings for events. It additionally helps us in capturing local interevent interactions and, to capture the document level interactions, we design a number of global document level features.",
"cite_spans": [
{
"start": 324,
"end": 342,
"text": "Peng et al. (2016)",
"ref_id": "BIBREF23"
},
{
"start": 441,
"end": 455,
"text": "(Decker, 1985;",
"ref_id": "BIBREF5"
},
{
"start": 456,
"end": 477,
"text": "Kay and Aylett, 1996)",
"ref_id": "BIBREF11"
},
{
"start": 781,
"end": 799,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We choose to model salient event extraction as a binary classification task. For every verbal and nominal event mention e 1 , . . . , e n in a document, the goal is to predict whether or not each event is salient. In our experimental study we assume that the event mentions are provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salient Event Extraction",
"sec_num": "3"
},
{
"text": "In the following sections, we describe our model's event representation ( \u00a73.1), the global features which are incorporated into the representation ( \u00a73.2), and the network architecture and inter-event interaction mechanisms ( \u00a73.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Salient Event Extraction",
"sec_num": "3"
},
{
"text": "The standard method for representing an event is to define it based on the span of text that represents its predicate mention in a document. However, this representation is clearly suboptimal because many other important signals in the text are missing which could add critical information to determine the importance of an event. For instance, it is difficult to determine whether a \"snow\" event is important by itself, but knowing that the event took place in during the summer increases the rarity of the event, potentially elevating its importance in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Representation",
"sec_num": "3.1"
},
{
"text": "Subsequently, we define an event to be a 5-tuple",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Representation",
"sec_num": "3.1"
},
{
"text": "e i = (m i , s i , o i , t i , i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Representation",
"sec_num": "3.1"
},
{
"text": "where each item in the tuple corresponds to the contiguous span of text for the event mention, subject, object, time, and location, respectively. In practice, the arguments for each event correspond to the ARG0, ARG1, ARGM-LOC, and ARGM-TMP arguments output from verbal and nominal semantic role labeling systems (He et al., 2017; Khashabi et al., 2018) . After the arguments for each event have been extracted, they are combined together to form a vector representation for the event, denoted e i , as follows. We first obtain the BERT embedding (Devlin et al., 2019) for every token in the input document. Then, since each of the items of the event tuple is a contiguous span of text, we create a fixed-size representation for each argument by encoding the corresponding tokens using a bidirectional LSTM. There is a separate LSTM encoder for each argument type. Finally, the vectors for all of the arguments are concatenated together to form the event encoding,",
"cite_spans": [
{
"start": 313,
"end": 330,
"text": "(He et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 331,
"end": 353,
"text": "Khashabi et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 547,
"end": 568,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Representation",
"sec_num": "3.1"
},
{
"text": "e i = [m i ; s i ; o i ; t i ; i ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Representation",
"sec_num": "3.1"
},
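The composition just described (BERT token embeddings, one BiLSTM per argument type, concatenation) can be sketched in PyTorch, the framework the paper uses. This is a minimal illustrative reimplementation, not the authors' code; the names EventEncoder and ARG_TYPES are ours, and the per-span token embeddings are assumed to be precomputed BERT outputs. The sizes match those reported later in the implementation details (512 for the mention, 64 for each other constituent, 768 in total).

```python
import torch
import torch.nn as nn

# The five constituents of the event tuple e_i = (m_i, s_i, o_i, t_i, l_i).
ARG_TYPES = ["mention", "subject", "object", "time", "location"]

class EventEncoder(nn.Module):
    """Composes an event embedding from per-argument token spans.

    A separate BiLSTM per argument type runs over that span's BERT
    token embeddings (size 768); the final forward/backward hidden
    states of each BiLSTM are concatenated across arguments, giving a
    512-dim mention vector plus four 64-dim constituent vectors.
    """

    def __init__(self, token_dim=768, mention_dim=512, arg_dim=64):
        super().__init__()
        self.encoders = nn.ModuleDict({
            name: nn.LSTM(
                token_dim,
                # Bidirectional, so each direction gets half the width.
                (mention_dim if name == "mention" else arg_dim) // 2,
                bidirectional=True, batch_first=True)
            for name in ARG_TYPES
        })

    def forward(self, spans):
        """spans maps each argument type to a (1, span_len, 768) tensor."""
        parts = []
        for name in ARG_TYPES:
            _, (h, _) = self.encoders[name](spans[name])
            # h: (2 directions, 1, hidden); join the two final states.
            parts.append(torch.cat([h[0], h[1]], dim=-1))
        return torch.cat(parts, dim=-1)  # (1, 512 + 4 * 64) = (1, 768)
```

An empty argument span would need a placeholder vector in practice; the sketch assumes every span is non-empty.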
{
"text": "Although BERT embeddings have proven to be highly beneficial for a large number of tasks, they may not encode all of the information that is useful for the task. This is especially true for high-level document features with long-range dependencies. Therefore, we augment the event representation from the previous section with features that leverage the event structure, document-level statistics, event-event relations, and event abstractions. Refer to Table 1(b) for a summary of the statistics of these features on the New York Times (NYT) Annotated Corpus (Sandhaus, 2008) . We extract the following features:",
"cite_spans": [
{
"start": 560,
"end": 576,
"text": "(Sandhaus, 2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Augmentation with Global Features",
"sec_num": "3.2"
},
{
"text": "1. Parent Score: This feature leverages the hierarchical relations between events in a document. The intuition is that the higher level and more abstract events are relatively more salient. We use the model from Wang et al. (2020) to identify the parent-child relationship between every event pair. An event is called a child of another event if it is a subevent of the parent (e.g., \"shooting\" may be a subevent/child of an \"attack\" event). The Parent Score of an event is defined as the number of child events it has.",
"cite_spans": [
{
"start": 212,
"end": 230,
"text": "Wang et al. (2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Augmentation with Global Features",
"sec_num": "3.2"
},
{
"text": "2. Frame Name: Framename provides a more abstract understanding of the event compared to the event trigger. All event triggers (9716 in total) in the Event Salience corpus (Liu et al., 2018) belong to a total of 569 frames (annotated using Semafor (Das and Smith, 2011) ). For instance, Figure 3 (right) shows all event triggers under the frame \"Killing.\" As can be seen from the figure, the frequency of all events within a frame is very different, whereas their salience in the text is usually the same. Therefore, this feature enables events with low frequency to leverage the understanding from more frequent events under the same frame.",
"cite_spans": [
{
"start": 172,
"end": 190,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 248,
"end": 269,
"text": "(Das and Smith, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 287,
"end": 296,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Event Augmentation with Global Features",
"sec_num": "3.2"
},
{
"text": "3. Sentence Location: One of the most commonly used (Dunietz and Gillick, 2014; Liu et al., 2018) features for salience related tasks. It represents the first location of sentence containing this event.",
"cite_spans": [
{
"start": 52,
"end": 79,
"text": "(Dunietz and Gillick, 2014;",
"ref_id": "BIBREF8"
},
{
"start": 80,
"end": 97,
"text": "Liu et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event Augmentation with Global Features",
"sec_num": "3.2"
},
{
"text": "The number of times the event trigger appears in the document. Table 1 shows that salient events are, on average, significantly more frequent than non salient events.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Event Trigger Frequency:",
"sec_num": "4."
},
{
"text": "We leverage the event structure to design this feature. Argument Frequency of an event is the maximum number of times any of its arguments (ARG0/1) appear in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Frequency:",
"sec_num": "5."
},
{
"text": "6. Named Argument: Since Named Entities signify their relative importance compared to other entities, we add a binary feature representing whether any of the event arguments is a Named Entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Frequency:",
"sec_num": "5."
},
{
"text": "After these features have been computed for each event, they are concatenated to the event encoding e i to get the final representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Frequency:",
"sec_num": "5."
},
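The counting-based features above can be illustrated with a toy computation. The dictionary keys (trigger, args, sent_idx) and the inputs are our own illustrative stand-ins; in the paper these would come from the SRL output, an NER system, and the subevent model of Wang et al. (2020).

```python
from collections import Counter

def global_features(events, named_entities, child_counts):
    """Toy computation of the hand-crafted features of Sec. 3.2.

    events: list of dicts with keys trigger (lemma), args (ARG0/ARG1
    head words), and sent_idx (index of the containing sentence).
    named_entities: set of entity strings.
    child_counts: trigger -> number of subevents (the Parent Score).
    """
    trigger_freq = Counter(ev["trigger"] for ev in events)
    arg_freq = Counter(a for ev in events for a in ev["args"])
    feats = []
    for ev in events:
        feats.append({
            "parent_score": child_counts.get(ev["trigger"], 0),
            "sentence_location": ev["sent_idx"],
            "event_trigger_frequency": trigger_freq[ev["trigger"]],
            # Max document frequency over the event's ARG0/ARG1.
            "argument_frequency": max((arg_freq[a] for a in ev["args"]),
                                      default=0),
            "named_argument": int(any(a in named_entities
                                      for a in ev["args"])),
        })
    return feats
```

The resulting feature dicts would be vectorized and concatenated onto the learned event encoding e_i.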
{
"text": "After each event has been encoded into a vector representation e i , the model needs to make a binary decision about whether or not an event is salient. Our baseline model assigns a label\u0177 i \u2208 {0, 1} to event e i using a sigmoid classification layer:\u0177 Figure 3 : (Left) Example inter-event relations. Yellow lines represent lexical matches between event representations, and brown lines imply a transitive relationship between events. Although Events 1 and 3 have no lexical matches, we are able to infer a relationship between them due to both having lexical matches with Event 2. (Right) Event trigger frequencies for the Killing frame on the New York Times Annotated Corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 260,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i = \u03c3(We i + b)",
"eq_num": "(1)"
}
],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
{
"text": "where W and b are learned parameters. The model is trained using a binary cross-entropy loss and denoted CEE-BASE (Contextual Event Extractor).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
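Equation (1) with the binary cross-entropy objective is a one-layer sigmoid head over the event encoding. A minimal sketch (ours, with an illustrative event dimension):

```python
import torch
import torch.nn as nn

class CEEBase(nn.Module):
    """CEE-BASE head: per-event sigmoid classifier, Eq. (1)."""

    def __init__(self, event_dim=768):
        super().__init__()
        self.linear = nn.Linear(event_dim, 1)  # learns W and b

    def forward(self, events):
        # events: (n_events, event_dim) -> salience scores in (0, 1)
        return torch.sigmoid(self.linear(events)).squeeze(-1)

# Training uses binary cross-entropy against the salience labels.
loss_fn = nn.BCELoss()
```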
{
"text": "Intuitively, it may be beneficial to allow for the event representation vectors to interact with each other, thereby allowing one event to increase or decrease the salience of another event. In our model, this is done by adding additional modules in between the event encoding and classification layer. We experiment with two different methods of inter-event interactions as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
{
"text": "Inter-Event Attention Module The idea behind this classification module is to capture inter-event votes (Gu et al., 2020) . The events which get higher votes from others will have a larger attention score, with the intuition that the supporting events will increase the salience of their corresponding main events and that the irrelevant and noisy events will be ignored. Specifically, given the representation of an event e i , a key and query vector are calculated as",
"cite_spans": [
{
"start": 104,
"end": 121,
"text": "(Gu et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k i = W k e i (2) q i = W q e i",
"eq_num": "(3)"
}
],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
{
"text": "where W k and W q are learned parameters. Then, the attention score a i of an event is calculated as follows:\u0177",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i = \u03c3 \uf8eb \uf8ed n j=1,j =i q j \u2022 k i \uf8f6 \uf8f8",
"eq_num": "(4)"
}
],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
{
"text": "We refer to models that use this attention mechanism as CEE-IEA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
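Equations (2)-(4) can be sketched directly: every event j casts a vote q_j · k_i for event i, self-votes are excluded, and the summed votes pass through a sigmoid. This is our own illustrative implementation of the described mechanism, not the released CEE-IEA code.

```python
import torch
import torch.nn as nn

class InterEventAttention(nn.Module):
    """Voting attention of Eqs. (2)-(4)."""

    def __init__(self, event_dim=768, proj_dim=128):
        super().__init__()
        self.W_k = nn.Linear(event_dim, proj_dim, bias=False)
        self.W_q = nn.Linear(event_dim, proj_dim, bias=False)

    def forward(self, events):          # events: (n, event_dim)
        k = self.W_k(events)            # k_i = W_k e_i
        q = self.W_q(events)            # q_i = W_q e_i
        votes = q @ k.t()               # votes[j, i] = q_j . k_i
        # Exclude self-votes (the j != i condition), then sum over j.
        votes.fill_diagonal_(0.0)
        return torch.sigmoid(votes.sum(dim=0))   # (n,) scores in (0, 1)
```

Note that unlike softmax attention, the raw vote sum is squashed by a sigmoid per event, so scores are not normalized across events.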
{
"text": "Dynamic Memory Module Since events are highly contextual, a given event forms discourse relations with other events in the document. For instance, in Figure 3 (left), we assume that events 1 and 2 are related to each other because their arguments have some lexical overlap. Since the same is true for events 2 and 3, we can infer that events 1 and 3 might be related even though they share no lexical overlap themselves. To capture such transitive inter-event relations, we repurpose Dynamic Memory Networks (Xiong et al., 2016, DMNs) for our task. DMNs make T passes over the input event vectors, each time refining a episodic memory vector m t i for event e i based on the previous iteration and the other events. The multiple iterations allow information to flow transitively across events. Finally, the output score is calculated based on m T i :",
"cite_spans": [
{
"start": 508,
"end": 534,
"text": "(Xiong et al., 2016, DMNs)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m t i = ReLU(W 1 [m t\u22121 i ; c t i ; e i ] + b 1 )",
"eq_num": "(5)"
}
],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i = \u03c3(W 2 m T i + b 2 )",
"eq_num": "(6)"
}
],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
{
"text": "where c t i = AttnGRU([e j ]); j = [1, n], j = i (see Xiong et al. (2016) for details about the AttnGRU function). ",
"cite_spans": [
{
"start": 54,
"end": 73,
"text": "Xiong et al. (2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-Event Interactions",
"sec_num": "3.3"
},
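A simplified sketch of the episodic memory updates of Eqs. (5)-(6). For brevity we stand in for the AttnGRU context c_i^t of Xiong et al. (2016) with a plain attention-weighted sum over the other events (an assumption of this sketch, not the paper's exact module); the memory update and the output layer follow the equations.

```python
import torch
import torch.nn as nn

class EpisodicMemory(nn.Module):
    """Simplified dynamic memory module, Eqs. (5)-(6)."""

    def __init__(self, dim=64, passes=3):
        super().__init__()
        self.passes = passes                    # T passes over the events
        self.update = nn.Linear(3 * dim, dim)   # W_1, b_1
        self.out = nn.Linear(dim, 1)            # W_2, b_2

    def forward(self, events):                  # events: (n, dim)
        n = events.size(0)
        m = events.clone()                      # initialize memories m_i^0
        mask = 1.0 - torch.eye(n)               # exclude j == i
        for _ in range(self.passes):
            # Stand-in for AttnGRU: attention-weighted sum over others.
            scores = (m @ events.t()).masked_fill(mask == 0, -1e9)
            c = torch.softmax(scores, dim=-1) @ events
            # Eq. (5): m_i^t = ReLU(W_1 [m_i^{t-1}; c_i^t; e_i] + b_1)
            m = torch.relu(self.update(torch.cat([m, c, events], dim=-1)))
        # Eq. (6): final salience score from the last memory state.
        return torch.sigmoid(self.out(m)).squeeze(-1)
```

Each pass lets information hop one relation further, which is what captures the transitive event 1 to event 3 link in the example above.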
{
"text": "For the experimental evaluation of our contextual event salience module (and subsequent discussion of extractive summarization), we use the New York Times dataset (Sandhaus, 2008) , which is a large corpus of articles that were published between 1996 and 2007 and their corresponding summaries. We use the salient event annotations for this dataset provided by Liu et al. (2018) . In their work, events in the document are marked as salient based on whether the event mention's lemma is present in the abstractive summary. Due to the annotation procedure, around 18.3% of the instances had no salient events and were subsequently omitted from the evaluation. Refer to Table 1 (left) for the details on dataset preparation.",
"cite_spans": [
{
"start": 163,
"end": 179,
"text": "(Sandhaus, 2008)",
"ref_id": "BIBREF25"
},
{
"start": 361,
"end": 378,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 668,
"end": 675,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We evaluate our event salience model on three metrics: Precision@k (P@k), Recall@k (R@k) and Normalized Recall@k (NR@k) for k = 1, 5, 10. Previous work has reported results on P@k and R@k where: P@k = # of salient events in top k predictions k R@k = # of salient events in top k predictions # of salient events in the doc However, R@k is not a very comprehensible metric because: i) The maximum value of recall is variable and ii) averaging R@k across documents with different number of salient events is biased towards documents with fewer events. Consequently, we propose a new metric, NR@k, which gives equal importance to all documents and has a maximum value is 1 for all k, making the metric easier to understand. NR@k = # of salient events in top k predictions min(k, # of salient events in the doc)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.2"
},
{
"text": "Additionally, we can see from the Table 1 (right) that the average trigger frequency of non salient events is 2.18 whereas those of salient events is 6.76. So, depending upon the correference ability of a model, for each true positive, on an average the model can be given a reward of 6.76 whereas for each false positive, the model can be penalized by a score of 2.18. For a fair understanding of a model's ability to identify top k events, a reward/penalty of one should be given to each unique event. Therefore, we also calculate P@k and NR@k using only the top-k unique model predictions. For a direct comparison of previously reported results, we also include results with the original metrics.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.2"
},
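One way to make these definitions concrete is the sketch below; counting repeated mentions separately (as in the raw metrics) versus deduplicating first reproduces the reward/penalty asymmetry discussed above. The helper names are ours.

```python
def precision_at_k(ranked, salient, k):
    """P@k: fraction of the top-k predictions that are salient."""
    return sum(e in salient for e in ranked[:k]) / k

def recall_at_k(ranked, salient, k):
    """R@k as in prior work: normalized by all salient events."""
    return sum(e in salient for e in ranked[:k]) / len(salient)

def normalized_recall_at_k(ranked, salient, k):
    """NR@k: normalized by min(k, #salient), so a perfect ranking
    scores 1 for every document and every k."""
    return sum(e in salient for e in ranked[:k]) / min(k, len(salient))

def unique_top_k(ranked, k):
    """Keep only the first occurrence of each predicted event
    (collapsing coreferent repeats), then truncate to k."""
    seen, uniq = set(), []
    for e in ranked:
        if e not in seen:
            seen.add(e)
            uniq.append(e)
    return uniq[:k]
```

With duplicate counting, a ranking that repeats one salient trigger can push R@k above 1; deduplicating before scoring removes that artifact.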
{
"text": "We use sub word tokenizer from BERT to tokenize the documents and fine-tune bert-base-uncased 2 version of BERT in all of our settings. Our models are implemented in PyTorch (Paszke et al., 2017) . Token level BERT embeddings of size 768 are passed through different BiLSTM modules to get embeddings of the event mention (of size 512) and all other constituents (of size 64 each) to get the final event embedding of size 768. To add the Frame Name feature from \u00a7 3.2, we first get the event embedding of size 768 as Method P@1 P@5 P@10 NR@1 NR@5 NR@10 mentioned above. After that, we get a BERT embedding of size 768 for the corresponding frame name. Both of these are concatenated to get the final event embedding of size 1536. Model parameters are optimized by Adam (Kingma and Ba, 2015). Our models are trained for 30k steps on 4 GPUs (TITAN RTX). We evaluated the model after every 750 steps and saved the best checkpoint based on the validation loss. The test results are reported by evaluating our test set's performance on this checkpoint.",
"cite_spans": [
{
"start": 174,
"end": 195,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.3"
},
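The embedding-size arithmetic above can be sketched as follows. This is an illustrative reconstruction, not the released code; the count of four constituent encodings is inferred from 768 = 512 + 4 * 64, and the vectors are zero placeholders.

```python
# Illustrative sketch of the event-embedding dimensions in this section.
# The number of constituent encodings (4) is inferred from the reported
# sizes: 768 = 512 + 4 * 64.

MENTION_DIM, CONSTITUENT_DIM, BERT_DIM = 512, 64, 768
n_constituents = (BERT_DIM - MENTION_DIM) // CONSTITUENT_DIM  # 4

mention = [0.0] * MENTION_DIM                   # BiLSTM over the event mention
constituents = [[0.0] * CONSTITUENT_DIM for _ in range(n_constituents)]
event = mention + [x for c in constituents for x in c]   # 768-d event embedding
frame = [0.0] * BERT_DIM                        # BERT embedding of the frame name
final_event = event + frame                     # 1536-d with the Frame Name feature
print(len(event), len(final_event))             # 768 1536
```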
{
"text": "We compare our model to three baseline models: LOCATION (which selects the first k events), FRE-QUENCY (which selects the k events which appear most frequently in the document), and the Kernel Centrality Estimation (KCE) model from Liu et al. (2018) . KCE models relationships between events using K Gaussian kernels and adds Sentence Location and Frequency features along with three others which capture similarity with entities and other events in the document. The main results on the event salience task are summarized in Table 2 .",
"cite_spans": [
{
"start": 232,
"end": 249,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 526,
"end": 533,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "First, among the baseline models, KCE performs consistently the best across all evaluation metrics with an absolute 10 point improvement over LOCATION and FREQUENCY on both P@1 and NR@1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Then, we compare the results of our base model with the two attention-based variants without including any global features from the document. The dynamic memory network CEE-DMN provides an 0.41 point improvement in P@1 and NR@1. The inter-event attention module CEE-IEA provides an even larger improvement, with 1.27. The stronger performance of the CEE-IEA model shows that the voting attention module helps to promote the salient events and suppress the noisy and background events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "The addition of the global features in Table 2 on top of the better performing inter-event attention model provides yet the largest improvement over the baseline models. Specifically, it improves over KCE by 3.89% P@1. The improvement is consistent even with higher values of k, with a 1.38% improvement in P@10 and 2.48% NR@10. This result suggests that the global features are indeed critical for identifying salient events and that neither the event representation nor inter-event interactions capture the same information that the features do.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Then, in Table 3 , we present the results of our best performing model and the baseline models using the Method P@1 P@5 P@10 NR@1 NR@5 NR@10 original metrics formulation (without the normalization or uniqueness). The improvement of the model in this work over the baseline models is similarly consistent at all values of k. All together, the results from our experiments demonstrate that the CEE-IEA model with global features performs better at extracting salient events and sets a new state-of-the-art performance for this task.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Due to the significant improvement in the model's performance when the global features were added, we conduct an ablation study on the features to understand what contributes most to the gains. Table 4 shows the performance of the CEE-IEA model as different features are successively added to the model. We observe that all of the features contribute positively to both the precision and recall of the models at all values of k. Among the features, the two which provided the largest P@1 improvements are those which have not been used by previous work: Parent Score and Frame Name. This result suggests that providing the model with information that distinguishes high-and low-level events is beneficial for detecting salient events.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 201,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Ablation: Feature Contribution",
"sec_num": "4.5"
},
{
"text": "As the value of k increases, the benefit of Sentence Location and Trigger Frequency features increase. This shows that these features help identifying salient events from the rest. Whereas, Parent Score and Frame Name features further help recognize the most salient event among these.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation: Feature Contribution",
"sec_num": "4.5"
},
{
"text": "In this section, we discuss the potential benefits of modeling event salience for the downstream task of extractive summarization. Ideally, a good summary captures information about the key events in the original document. However, most common models for extractive summarization operate at the more coarse-grained sentence level. We discuss below why we believe that modeling event importance rather than sentence importance has the potential to benefit summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study: Extractive Summarization",
"sec_num": "5"
},
{
"text": "An Improved Supervision Signal First, the extractive summarization training oracles often miss important signals in the data. Extractive summarization systems are trained on 0/1 labels assigned to each sentence in the document to indicate whether or not that sentence should be selected to the summary. The most common procedure for obtaining these labels is to greedily select document sentences while the ROUGE (Lin, 2004) score between the selected sentences and a reference summary still increases (Nallapati et al., 2017) .",
"cite_spans": [
{
"start": 413,
"end": 424,
"text": "(Lin, 2004)",
"ref_id": "BIBREF15"
},
{
"start": 502,
"end": 526,
"text": "(Nallapati et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study: Extractive Summarization",
"sec_num": "5"
},
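A minimal sketch of this greedy ROUGE-based labeling procedure is below. To keep the snippet self-contained, a simple unigram-F1 score stands in for a real ROUGE implementation, and the document sentences and reference summary are invented.

```python
# Sketch of the greedy oracle labeling from Nallapati et al. (2017):
# repeatedly add the document sentence that most increases the score
# against the reference summary, stopping when no sentence helps.
# unigram_f1 is a stand-in for an actual ROUGE package.

def unigram_f1(selected, reference):
    sel = set(" ".join(selected).split())
    ref = set(reference.split())
    overlap = len(sel & ref)
    if not overlap:
        return 0.0
    p, r = overlap / len(sel), overlap / len(ref)
    return 2 * p * r / (p + r)

def greedy_oracle(sentences, reference):
    selected, score = [], 0.0
    while True:
        gains = [(unigram_f1(selected + [s], reference), s)
                 for s in sentences if s not in selected]
        if not gains or max(gains)[0] <= score:
            break
        score, best = max(gains)
        selected.append(best)
    return [1 if s in selected else 0 for s in sentences]

doc = ["the attack killed five people",
       "the weather was sunny",
       "rescuers fled the scene"]
print(greedy_oracle(doc, "attack killed five people rescuers fled"))  # [1, 0, 1]
```

Note how the off-topic middle sentence receives a 0 label: this is the kind of sentence-level signal that, as argued here, can still miss or over-include individual events.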
{
"text": "We compared this labeling procedure to an alternative method based on events. The event-based oracle selects the sentence which contains the first occurrence of an event in the document for each event in the summary. For our analysis, the sentences selected by the ROUGE-based and event-based methods are divided into three sets: B, sentences chosen by both; R, sentences chosen by ROUGE only; and E, sentences chosen by the event method only. The relative sentence coverage for both sets of selected sentences is calculated as follows. Figure 4 : (Left) Predicted salience scores of all events from our event based model are shown in the subscripts. In contrast to a sentence based summarization system, event based system provides a much more fine grained output by predicting the relative importance of each event. (Middle) Disjoint sentence sets E, B and R. It shows relative coverage of 79.8% by salient event signal and 48.4% by ROUGE based signal. (Right) Proportion of salient events in {B \u222a R}, the light blue area shows that 27.6% of the events from sentences selected by ROUGE are not salient.",
"cite_spans": [],
"ref_spans": [
{
"start": 537,
"end": 545,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study: Extractive Summarization",
"sec_num": "5"
},
{
"text": "E Cov = 1 N N i=1 |B i | |B i | + |R i | R Cov = 1 N N i=1 |B i | |B i | + |E i |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study: Extractive Summarization",
"sec_num": "5"
},
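The coverage statistics E_Cov and R_Cov can be computed as sketched below. The per-document sets are invented for illustration; in the paper they come from the two oracles over the corpus.

```python
# Sketch of the coverage statistics defined above. For each document i,
# B_i holds sentences chosen by both oracles, R_i those chosen by the
# ROUGE-based oracle only, and E_i those chosen by the event-based
# oracle only. Coverage averages |B_i| / (|B_i| + |other_i|) over docs.

def coverage(B, other):
    return sum(len(b) / (len(b) + len(o)) for b, o in zip(B, other)) / len(B)

# Two toy documents with invented sentence indices.
B = [{0, 2}, {1}]
R = [{5}, set()]
E = [{3}, {4, 6}]

e_cov = coverage(B, R)   # fraction of ROUGE-oracle sentences the event oracle covers
r_cov = coverage(B, E)   # fraction of event-oracle sentences ROUGE covers
print(round(e_cov, 3), round(r_cov, 3))  # 0.833 0.5
```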
{
"text": "We observe that, E Cov = 79.8% which means that most of the sentences in the ROUGE-based oracle are covered by event based summaries. However, R Cov = 48.4%, which means that the standard supervision signal misses a significant number of events present in the reference summary (all salient events in the remaining 51.6% sentences; see Figure 4 (middle)). The lower value of R Cov is evidence that ROUGE-based oracles are missing a valuable supervision signal.",
"cite_spans": [],
"ref_spans": [
{
"start": 336,
"end": 344,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study: Extractive Summarization",
"sec_num": "5"
},
{
"text": "Finer-Grained Selection Since the most popular extractive summarization models are forced to select full sentences to be in the summary, it likely that the model selects a lot of unimportant information. For instance, if a document sentence is quite long and covers a lot of information, it is likely that some of the predicate mentions in it are not salient and may not need to be in the summary. Consequently, including only semantic unit that consist of important events and their arguments may be a better method of identifying text that should belong in the summary. To quantify this, we calculate the number of salient events in the sentences selected by ROUGE (set {B \u222a R}) and observed that 27.6% of the events in these selected sentences are not salient (see Figure 4 (right) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 768,
"end": 784,
"text": "Figure 4 (right)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study: Extractive Summarization",
"sec_num": "5"
},
{
"text": "Together, these two points demonstrate that the current method of creating oracle for training summarization systems misses important events which are present in the gold summaries while including a significant number of non salient events. This underscores that a good summarization system can benefit from our event salience model to select better events for generating summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study: Extractive Summarization",
"sec_num": "5"
},
{
"text": "In this work, we proposed a contextual model for salient event identification. We demonstrated that the three different components of our model (the event representation, global features, and inter-event interaction) combine to produce state-of-the-art results on the New York Times Annotated Corpus. Further, we identified issues with previous evaluation metrics and proposed new intuitive evaluation methods. Finally, we discussed how event salience can be helpful for other downstream applications through a case study of extraction summarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://git.io/fhbJQ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their helpful feedback and suggestions. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Berkeley FrameNet Project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, COLING-ACL 1998",
"volume": "",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, COLING-ACL 1998, pages 86-90.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generating coherent event schemas at scale",
"authors": [
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niranjan Balasubramanian, Stephen Soderland, Mausam, and Oren Etzioni. 2013. Generating coherent event schemas at scale. In EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised learning of narrative event chains",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Daniel Jurafsky. 2008. Unsupervised learning of narrative event chains. In ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Identifying the most dominant event in a news article by mining event coreference relations",
"authors": [
{
"first": "Prafulla",
"middle": [],
"last": "Kumar Choubey",
"suffix": ""
},
{
"first": "Kaushik",
"middle": [],
"last": "Raju",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prafulla Kumar Choubey, Kaushik Raju, and Ruihong Huang. 2018. Identifying the most dominant event in a news article by mining event coreference relations. In NAACL-HLT.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semi-supervised frame-semantic parsing for unknown predicates",
"authors": [
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1435--1444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipanjan Das and Noah A. Smith. 2011. Semi-supervised frame-semantic parsing for unknown predicates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1435-1444, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The use of syntactic clues in discourse processing",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Decker",
"suffix": ""
}
],
"year": 1985,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nan Decker. 1985. The use of syntactic clues in discourse processing. In ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Minimally Supervised Event Causality Identification",
"authors": [
{
"first": "Quang",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Yee",
"middle": [],
"last": "Seng Chan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quang Do, Yee Seng Chan, and Dan Roth. 2011. Minimally Supervised Event Causality Identification. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Edinburgh, Scotland, 7.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A new entity salience task with millions of training examples",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Dunietz",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gillick",
"suffix": ""
}
],
"year": 2014,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Dunietz and Daniel Gillick. 2014. A new entity salience task with millions of training examples. In EACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Generating representative headlines for news stories",
"authors": [
{
"first": "Xiaotao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yuning",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Jialu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hongkun",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yingfang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Cong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Finnie",
"suffix": ""
},
{
"first": "Jiaqi",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Zukoski",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaotao Gu, Yuning Mao, Jiawei Han, Jialu Liu, Hongkun Yu, Yingfang Wu, Cong Yu, Daniel Finnie, Jiaqi Zhai, and Nicholas Zukoski. 2020. Generating representative headlines for news stories. ArXiv, abs/2001.09386.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep semantic role labeling: What works and what's next",
"authors": [
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "473--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473-483, Vancouver, Canada, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Transitivity and foregrounding in news articles: Experiments in information retrieval and automatic summarising",
"authors": [
{
"first": "Roderick",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Ruth",
"middle": [],
"last": "Aylett",
"suffix": ""
}
],
"year": 1996,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roderick Kay and Ruth Aylett. 1996. Transitivity and foregrounding in news articles: Experiments in information retrieval and automatic summarising. In ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "CogCompNLP: Your Swiss Army Knife for NLP",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Sammons",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Redman",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Srikumar",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Rizzolo",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Guanheng",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Quang",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Chen-Tse",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Subhro",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mayhew",
"suffix": ""
},
{
"first": "Zhili",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Shashank",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Ning",
"suffix": ""
},
{
"first": "Shaoshi",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of the International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Khashabi, Mark Sammons, Ben Zhou, Tom Redman, Christos Christodoulopoulos, Vivek Srikumar, Nicholas Rizzolo, Lev Ratinov, Guanheng Luo, Quang Do, Chen-Tse Tsai, Subhro Roy, Stephen Mayhew, Zhili Feng, John Wieting, Xiaodong Yu, Yangqiu Song, Shashank Gupta, Shyam Upadhyay, Naveen Arivazha- gan, Qiang Ning, Shaoshi Ling, and Dan Roth. 2018. CogCompNLP: Your Swiss Army Knife for NLP. In Proc. of the International Conference on Language Resources and Evaluation (LREC).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The narrativeqa reading comprehension challenge",
"authors": [
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Kocisk\u00fd",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "G\u00e1bor",
"middle": [],
"last": "Melis",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "317--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1s Kocisk\u00fd, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G\u00e1bor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Text summarization with pretrained encoders",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3730--3740",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China, November. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic event salience identification",
"authors": [
{
"first": "Zhengzhong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1226--1236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengzhong Liu, Chenyan Xiong, Teruko Mitamura, and Eduard Hovy. 2018. Automatic event salience identifi- cation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1226-1236, Brussels, Belgium, October-November. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Rhetorical structure theory: Toward a functional theory of text organization",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text",
"volume": "8",
"issue": "3",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Event representations for automated story generation with deep neural nets",
"authors": [
{
"first": "Lara",
"middle": [
"J"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Prithviraj",
"middle": [],
"last": "Ammanabrolu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"O"
],
"last": "Riedl",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lara J. Martin, Prithviraj Ammanabrolu, William Hancock, Shruti Singh, Brent Harrison, and Mark O. Riedl. 2018. Event representations for automated story generation with deep neural nets. ArXiv, abs/1706.01331.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Event-based multi-document summarization",
"authors": [
{
"first": "Lu\u00eds",
"middle": [],
"last": "Marujo",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu\u00eds Marujo. 2015. Event-based multi-document summarization. Ph.D. thesis. Carnegie Mellon University.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based se- quence model for extractive summarization of documents. In AAAI.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Automatic differentiation in pytorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Event detection and co-reference with minimal supervision",
"authors": [
{
"first": "Haoruo",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "392--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoruo Peng, Yangqiu Song, and Dan Roth. 2016. Event detection and co-reference with minimal supervision. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 392-402, Austin, Texas, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Statistical script learning with multi-argument events",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Pichotta",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2014,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Pichotta and Raymond J. Mooney. 2014. Statistical script learning with multi-argument events. In EACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The New York Times Annotated Corpus. Linguistic Data Consortium",
"authors": [
{
"first": "E",
"middle": [],
"last": "Sandhaus",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Sandhaus. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "making the news\": Identifying noteworthy events in news articles",
"authors": [
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Fourth Workshop on Events",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shyam Upadhyay, Christos Christodoulopoulos, and Dan Roth. 2016. \"making the news\": Identifying noteworthy events in news articles. In Proceedings of the Fourth Workshop on Events, pages 1-7, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Joint constrained learning for event-event relation extraction",
"authors": [
{
"first": "Haoyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Muhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hongming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoyu Wang, Muhao Chen, Hongming Zhang, and Dan Roth. 2020. Joint constrained learning for event-event relation extraction.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dynamic memory networks for visual and textual question answering",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In ICML.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Discourse-aware neural extractive text summarization",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Discourse-aware neural extractive text summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Neural document summarization by jointly learning to score and select sentences",
"authors": [
{
"first": "Qingyu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Shaohan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "654--663",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654-663, Melbourne, Australia, July. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td>(a) EVENT EXTRACTION (b) EVENT EMBEDDING (c) CLASSIFICATION</td><td>Salience Scores Inter Event Attention Argument Embedding</td><td>Kuwait's Interior Ministry says young Kuwaiti man who fled to Saudi Arabia after terrorist shooting in Kuwait that killed one American and wounded another has confessed to the attack. Q1 K2 Q2 K3 Q3 K4 Q4 Kuwait's Interior Ministry says young Kuwaiti man who fled to Saudi Arabia after terrorist shooting in Kuwait that killed one American and wounded another has confessed to the attack. 0.6 0.8 0.3 0.3 0.7 K4 Q4 Global Document Features Arg-LOC BiLSTM Arg0 Arg1 Arg-TMP Kuwait's Interior Ministry says young Kuwaiti man K1 fled shooting killed wounded confessed</td><td>Event Document</td></tr></table>",
"html": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "Kuwait's Interior Ministry says young Kuwaiti man who fled to Saudi Arabia after terrorist shooting in Kuwait that killed one American and wounded another has confessed to the attack.",
"content": "<table><tr><td colspan=\"2\">Transitively, E3 and E4 also</td><td/><td/></tr><tr><td colspan=\"2\">make E1 more important</td><td/><td/></tr><tr><td/><td colspan=\"2\">E3 and E4 make E2</td><td/></tr><tr><td colspan=\"2\">E2 makes E1</td><td>more important</td><td/></tr><tr><td colspan=\"2\">more important</td><td/><td/></tr><tr><td>Event 1</td><td>Event 2</td><td>Event 3</td><td>Event 4</td></tr><tr><td>{fled, young Kuwaiti man,</td><td>{shooting, terrorist,</td><td>{killed, Shooting,</td><td>{wounded, Shooting,</td></tr><tr><td>to Saudi Arabia}</td><td>Kuwait}</td><td>American}</td><td>American}</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Dataset Statistics. (Left) Preprocessing: \u223c18.3% documents do not have any salient events and \u223c1.5% are missing body/abstract. Results in Tables 2, 3 and 4 are computed on the clean filtered set. (Right) Feature statistics of Salient vs Non Salient Events.",
"content": "<table/>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "Event Salience Identification results on the New York Times Annotated Corpus using the normalized and unique version of the metrics. CEE-BASE represents the base model from \u00a73.3. CEE-IEA and CEE-DMN are the models with inter-event interaction layers. -GF signifies the corresponding models without augmenting with global document features from \u00a73.2. Improvements (in parentheses) and statistical significance (underlined) are with respect to the KCE baseline.",
"content": "<table><tr><td>Method</td><td>P@1</td><td>P@5</td><td>P@10</td><td>R@1</td><td>R@5</td><td>R@10</td></tr><tr><td>LOCATION</td><td>43.50</td><td>37.63</td><td>30.62</td><td>9.88</td><td>32.71</td><td>46.43</td></tr><tr><td>FREQUENCY</td><td>55.62</td><td>49.27</td><td>42.18</td><td>9.69</td><td>34.86</td><td>52.30</td></tr><tr><td>KCE</td><td>61.81</td><td>52.36</td><td>44.54</td><td>11.58</td><td>39.36</td><td>57.79</td></tr><tr><td>CEE-IEA</td><td colspan=\"6\">65.47(+3.66%) 54.26(+1.90%) 44.93(+0.39%) 13.12(+1.54%) 42.03(+2.67%) 59.65(+1.86%)</td></tr></table>",
"html": null
},
"TABREF6": {
"type_str": "table",
"num": null,
"text": "Event Salience Identification results on the New York Times Annotated Corpus using the metrics (non-unique or normalized) used in the previous work.",
"content": "<table/>",
"html": null
},
"TABREF7": {
"type_str": "table",
"num": null,
"text": "+0.16%) 60.88(+1.44%) 61.21(+1.23%) 68.68(+0.65%) +(SL,TF) 61.77(+0.89%) 44.80(+2.53%) 33.27(+2.52%) 61.77(+0.89%) 63.49(+2.28%) 71.98(+3.30%) +(PS) 65.07(+3.30%) 46.48(+1.68%) 33.99(+0.72%) 65.07(+3.30%) 65.78(+2.29%) 73.42(+1.44%) +(AF,NA) 65.44(+0.37%) 46.85(+0.37%) 34.11(+0.12%) 65.44(+0.37%) 66.90(+1.12%) 74.33(+0.91%)",
"content": "<table><tr><td colspan=\"2\">CEE-IEA(-GF) 59.44</td><td>41.47</td><td>30.59</td><td>59.44</td><td>59.98</td><td>68.03</td></tr><tr><td>+(FN)</td><td colspan=\"2\">60.88(+1.44%) 42.27(+0.8%)</td><td>30.75(</td><td/><td/></tr></table>",
"html": null
},
"TABREF8": {
"type_str": "table",
"num": null,
"text": "Ablation results demonstrating relative importance of each feature. Abbreviations: FN = Frame Name, SL = Sentence Location, TF = Trigger Frequency, AF = Argument Frequency, NA = Named Argument and PS = Parent Score. Improvements are shown w.r.t the previous row and statistically significant improvements using a permutation test with p < 0.0001 are underlined.",
"content": "<table/>",
"html": null
}
}
}
}