{
"paper_id": "Q15-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:08:00.534326Z"
},
"title": "Exploiting Parallel News Streams for Unsupervised Event Extraction",
"authors": [
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington Seattle",
"location": {
"postCode": "98195",
"region": "WA",
"country": "USA"
}
},
"email": "clzhang@cs.washington.edu"
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington Seattle",
"location": {
"postCode": "98195",
"region": "WA",
"country": "USA"
}
},
"email": "soderlan@cs.washington.edu"
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington Seattle",
"location": {
"postCode": "98195",
"region": "WA",
"country": "USA"
}
},
"email": "weld@cs.washington.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most approaches to relation extraction, the task of extracting ground facts from natural language text, are based on machine learning and thus starved by scarce training data. Manual annotation is too expensive to scale to a comprehensive set of relations. Distant supervision, which automatically creates training data, only works with relations that already populate a knowledge base (KB). Unfortunately, KBs such as FreeBase rarely cover event relations (e.g. \"person travels to location\"). Thus, the problem of extracting a wide range of events-e.g., from news streamsis an important, open challenge. This paper introduces NEWSSPIKE-RE, a novel, unsupervised algorithm that discovers event relations and then learns to extract them. NEWSSPIKE-RE uses a novel probabilistic graphical model to cluster sentences describing similar events from parallel news streams. These clusters then comprise training data for the extractor. Our evaluation shows that NEWSSPIKE-RE generates high quality training sentences and learns extractors that perform much better than rival approaches, more than doubling the area under a precision-recall curve compared to Universal Schemas.",
"pdf_parse": {
"paper_id": "Q15-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "Most approaches to relation extraction, the task of extracting ground facts from natural language text, are based on machine learning and thus starved by scarce training data. Manual annotation is too expensive to scale to a comprehensive set of relations. Distant supervision, which automatically creates training data, only works with relations that already populate a knowledge base (KB). Unfortunately, KBs such as FreeBase rarely cover event relations (e.g. \"person travels to location\"). Thus, the problem of extracting a wide range of events-e.g., from news streamsis an important, open challenge. This paper introduces NEWSSPIKE-RE, a novel, unsupervised algorithm that discovers event relations and then learns to extract them. NEWSSPIKE-RE uses a novel probabilistic graphical model to cluster sentences describing similar events from parallel news streams. These clusters then comprise training data for the extractor. Our evaluation shows that NEWSSPIKE-RE generates high quality training sentences and learns extractors that perform much better than rival approaches, more than doubling the area under a precision-recall curve compared to Universal Schemas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relation extraction, the process of extracting structured information from natural language text, grows increasingly important for Web search and question answering. Traditional supervised approaches, which can achieve high precision and recall, are limited by the cost of labeling training data and are unlikely to scale to the thousands of relations on the Web. Another approach, distant supervision (Craven and Kumlien, 1999; Wu and Weld, 2007) , creates its own training data by matching the ground instances of a Knowledge base (KB) (e.g. Freebase) to the unlabeled text.",
"cite_spans": [
{
"start": 402,
"end": 428,
"text": "(Craven and Kumlien, 1999;",
"ref_id": "BIBREF9"
},
{
"start": 429,
"end": 447,
"text": "Wu and Weld, 2007)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, while distant supervision can work well in some situations, the method is limited to relatively static facts (e.g., born-in(person, location) or capital-of(location,location)) where there is a corresponding knowledge base. But what about dynamic event relations (also known as fluents), such as travel-to(person, location) or fire(organization, person)? Since these time-dependent facts are ephemeral, they are rarely stored in a pre-existing KB. At the same time, knowledge of real-time events is crucial for making informed decisions in fields like finance and politics. Indeed, news stories report events almost exclusively, so learning to extract events is an important open problem. This paper develops a new unsupervised technique, NEWSSPIKE-RE, to both discover event relations and extract them with high precision. The intuition underlying NEWSSPIKE-RE is that the text of articles from two different news sources are not independent, since they are each conditioned on the same real-world events. By looking for rarely described entities that suddenly \"spike\" in popularity on a given date, one can identify paraphrases. Such temporal correspondence (Zhang and Weld, 2013) allow one to cluster diverse sentences, and the resulting clusters may be used to form training data in order to learn event extractors. Furthermore, one can also exploit parallel news to obtain direct negative evidence. To see this, suppose one day the news includes the following: (a) \"Snowden travels to Hong Kong, off southeastern China.\" (b) \"Snowden cannot stay in Hong Kong as Chinese officials will not allow ...\" Since news stories are usually coherent, it is highly unlikely that travel to and stay in (which is negated) are synonymous. By leveraging such direct negative phrases, we can learn extractors capable of distinguishing heavily co-occurring but semantically different phrases, thereby avoiding many extraction errors. 
Our NEWSSPIKE-RE system encapuslates these intuitions in a novel graphical model making the following contributions:",
"cite_spans": [
{
"start": 131,
"end": 156,
"text": "born-in(person, location)",
"ref_id": null
},
{
"start": 1174,
"end": 1196,
"text": "(Zhang and Weld, 2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We develop a method to discover a set of distinct, salient event relations from news streams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We describe an algorithm to exploit parallel news streams to cluster sentences that belong to the same event relations. In particular, we propose the temporal negation heuristic to avoid conflating co-occurring but nonsynonymous phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce a probabilistic graphical model to generate training for a sentential event extractor without requiring any human annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present detailed experiments demonstrating that the event extractors, learned from the generated training data, significantly outperform several competitive baselines, e.g. our system more than doubles the area under the micro-averaged, PR curve (0.80 vs. 0.30) compared to Riedel's Universal Schema (Riedel et al., 2013) .",
"cite_spans": [
{
"start": 279,
"end": 326,
"text": "Riedel's Universal Schema (Riedel et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Supervised learning approaches have been widely developed for event extraction tasks such as MUC-4 and ACE. They often focus on a hand-crafted ontology and train the extractor with manually created training data. While they can offer high precision and recall, they are often domain-specific (e.g. biological events and entertainment events (Benson et al., 2011; Reichart and Barzilay, 2012) ), and are hard to scale over the events on the Web. Open IE systems extract open domain relations (e.g. (Banko et al., 2007; Fader et al., 2011) ) and events (e.g. (Ritter et al., 2012) ). They often perform self-supervised learning of relation-independent extractions. It allows them to scale but makes them unable to output canonicalized relations.",
"cite_spans": [
{
"start": 341,
"end": 362,
"text": "(Benson et al., 2011;",
"ref_id": "BIBREF4"
},
{
"start": 363,
"end": 391,
"text": "Reichart and Barzilay, 2012)",
"ref_id": "BIBREF24"
},
{
"start": 497,
"end": 517,
"text": "(Banko et al., 2007;",
"ref_id": "BIBREF1"
},
{
"start": 518,
"end": 537,
"text": "Fader et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 557,
"end": 578,
"text": "(Ritter et al., 2012)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Distant supervised approaches have been developed to learn extractors by exploiting the facts existing in a knowledge base, thus avoiding human annotation. Wu et al. (2007) and Reschke et al. (2014) learned Infobox relations from Wikipedia, while Mintz et al. (2009) heuristically matched Freebase facts to texts. Since the training data generated by the heuristic matching is often imperfect, multiinstance learning approaches (Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) have been developed to combat this problem. Unfortu-nately, most facts existing in the KBs are static facts like geographical or biographical data. They fall short of learning extractors for fluent facts such as sports results or travel and meetings by a person.",
"cite_spans": [
{
"start": 156,
"end": 172,
"text": "Wu et al. (2007)",
"ref_id": "BIBREF34"
},
{
"start": 177,
"end": 198,
"text": "Reschke et al. (2014)",
"ref_id": "BIBREF25"
},
{
"start": 247,
"end": 266,
"text": "Mintz et al. (2009)",
"ref_id": "BIBREF20"
},
{
"start": 428,
"end": 449,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF26"
},
{
"start": 450,
"end": 472,
"text": "Hoffmann et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 473,
"end": 495,
"text": "Surdeanu et al., 2012)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Bootstrapping is another common extraction technique (Brin, 1999; Agichtein and Gravano, 2000; Carlson et al., 2010; Nakashole et al., 2011; Huang and Riloff, 2013) . This typically takes a set of seeds as input, which can be ground instances or key phrases. The algorithms then iteratively generate more positive instances and phrases. While there are many successful examples of bootstrapping, the challenge is to avoid semantic drift. Large-scale systems, therefore, often require extra processing such as manual validation between the iterations or additional negative seeds as the input.",
"cite_spans": [
{
"start": 53,
"end": 65,
"text": "(Brin, 1999;",
"ref_id": "BIBREF5"
},
{
"start": 66,
"end": 94,
"text": "Agichtein and Gravano, 2000;",
"ref_id": "BIBREF0"
},
{
"start": 95,
"end": 116,
"text": "Carlson et al., 2010;",
"ref_id": null
},
{
"start": 117,
"end": 140,
"text": "Nakashole et al., 2011;",
"ref_id": "BIBREF22"
},
{
"start": 141,
"end": 164,
"text": "Huang and Riloff, 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Unsupervised approaches have been developed for relation discovery and extractions. These algorithms are usually based on some clustering assumptions over a large unlabeled corpus. Common assumptions include the distributional hypothesis used by (Hasegawa et al., 2004; Shinyama and Sekine, 2006) , latent topic assumption by (Yao et al., 2012; Yao et al., 2011) , and low rank assumption by (Takamatsu et al., 2011; Riedel et al., 2013) . Since the assumptions largely rely on co-occurrence, previous unsupervised approaches tend to confuse correlated but semantically different phrases during extraction. In contrast to this, our work largely avoids these errors by exploiting the temporal negation heuristic in parallel news streams. In addition, unlike many unsupervised algorithms requiring human effort to canonicalize the clusters, our work automatically discovers events with readable names.",
"cite_spans": [
{
"start": 246,
"end": 269,
"text": "(Hasegawa et al., 2004;",
"ref_id": "BIBREF14"
},
{
"start": 270,
"end": 296,
"text": "Shinyama and Sekine, 2006)",
"ref_id": "BIBREF31"
},
{
"start": 326,
"end": 344,
"text": "(Yao et al., 2012;",
"ref_id": "BIBREF36"
},
{
"start": 345,
"end": 362,
"text": "Yao et al., 2011)",
"ref_id": "BIBREF35"
},
{
"start": 392,
"end": 416,
"text": "(Takamatsu et al., 2011;",
"ref_id": "BIBREF33"
},
{
"start": 417,
"end": 437,
"text": "Riedel et al., 2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Paraphrasing techniques inspire our work. Some techniques, such as DIRT (Lin and Pantel, 2001) and Resolver (Yates and Etzioni, 2009) , are based on the distributional hypothesis. Another common approach is to use parallel corpora, including news streams (Barzilay and Lee, 2003; Dolan et al., 2004; Zhang and Weld, 2013) , multiple translations of the same story (Barzilay and McKeown, 2001 ) and bilingual sentence pairs (Ganitkevitch et al., 2013) to generate the paraphrases. Although these algorithms create many good paraphrases, they can not be directly used to generate enough training data to train a relation extractor for two reasons: first, the semantics of the paraphrases is often context dependent; second, the generated paraphrases are often in ",
"cite_spans": [
{
"start": 72,
"end": 94,
"text": "(Lin and Pantel, 2001)",
"ref_id": "BIBREF17"
},
{
"start": 108,
"end": 133,
"text": "(Yates and Etzioni, 2009)",
"ref_id": "BIBREF37"
},
{
"start": 255,
"end": 279,
"text": "(Barzilay and Lee, 2003;",
"ref_id": "BIBREF2"
},
{
"start": 280,
"end": 299,
"text": "Dolan et al., 2004;",
"ref_id": "BIBREF10"
},
{
"start": 300,
"end": 321,
"text": "Zhang and Weld, 2013)",
"ref_id": "BIBREF38"
},
{
"start": 364,
"end": 391,
"text": "(Barzilay and McKeown, 2001",
"ref_id": "BIBREF3"
},
{
"start": 423,
"end": 450,
"text": "(Ganitkevitch et al., 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "News articles report an enormous number of events every day. Our system, NEWSSPIKE-RE, aligns paralel news streams to indentify and extract these events as shown in Figure 1 . NEWSSPIKE-RE has both training and test phases. Its training phase has two main steps: event-relation discovery and training-set generation. Section 4 describes our event relation discovery algorithm, which processes time-stamped news articles to discern a set of salient, distinct event relations in the form of E = e(t 1 , t 2 ), where e is a representative event phrase and t i are types of the two arguments. NEWSSPIKE-RE generates the event phrases using an Open Information Extraction (IE) system (Fader et al., 2011) , and uses a fine-grained entity recognition system FIGER (Ling and Weld, 2012) to generate type descriptors such as \"company \", \"politician\", and \"medical treatment\". The second part of NEWSSPIKE-RE's training phase, described in Section 5, is a method for building extractors for the discovered event relations. Our approach is motivated by the intuition, adapted from Zhang and Weld (2013) , that articles from different news sources typically use different sentences to describe the same event, and that corresponding sentences can be identified when they mention a unique pair of real-world entities. For example, when an unusual entity pair (Selena, Norway) is suddenly seen in three articles on a single day:",
"cite_spans": [
{
"start": 679,
"end": 699,
"text": "(Fader et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 758,
"end": 779,
"text": "(Ling and Weld, 2012)",
"ref_id": "BIBREF18"
},
{
"start": 1071,
"end": 1092,
"text": "Zhang and Weld (2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 165,
"end": 173,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
{
"text": "Selena traveled to Norway to see her ex-boyfriend. Selena arrived in Norway for a rendezvous with Justin. Selena's trip to Norway was no coincidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
{
"text": "It is likely that all three refer to the same event relation, travel-to(person, location) 1 , and can be used as positive training examples for the relation. As in Zhang & Weld (2013) , we group parallel sentences sharing the same argument pair and date in a structure called a NewsSpike. However, we include all sentences mentioning the arguments (e.g. Selena's trip to Norway) in the NewsSpike (not just those yielding OpenIE extractions), and use the lexicalized dependency path between the arguments (e.g. <-[poss]-trip-[prep-to]-> 2 , as the event phrase. In this way, we can generalize extractors beyond the scope of OpenIE. Formally, a NewsSpike is a tuple, (a 1 , a 2 , d, S), where a 1 and a 2 are arguments (e.g. Selena), d is a date, and S is a set of argumentlabeled sentences {(s, a 1 , a 2 , p) . . .} in which s is a sentence with arguments a i and event phrase p.",
"cite_spans": [
{
"start": 164,
"end": 183,
"text": "Zhang & Weld (2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
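The NewsSpike tuple (a 1, a 2, d, S) just defined can be sketched as a small data structure. This is an illustrative rendering only; the class and method names below are our own, not from the paper:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# A NewsSpike groups argument-labeled sentences that share one
# (argument pair, date), mirroring the tuple (a1, a2, d, S) above.
@dataclass
class NewsSpike:
    arg1: str                                   # a1, e.g. "Selena"
    arg2: str                                   # a2, e.g. "Norway"
    date: str                                   # d, the publication date
    # each element of S is (sentence, a1, a2, event phrase p)
    sentences: List[Tuple[str, str, str, str]] = field(default_factory=list)

    def event_phrases(self) -> List[str]:
        """All event phrases observed between the argument pair."""
        return [p for (_, _, _, p) in self.sentences]

# The Selena/Norway example: note the second sentence contributes a
# lexicalized dependency path, not an OpenIE-style verb phrase.
spike = NewsSpike("Selena", "Norway", "2013-06-10")
spike.sentences.append(
    ("Selena traveled to Norway to see her ex-boyfriend.",
     "Selena", "Norway", "travel to"))
spike.sentences.append(
    ("Selena's trip to Norway was no coincidence.",
     "Selena", "Norway", "<-[poss]-trip-[prep-to]->"))
```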
{
"text": "It's important that non-synonomous sentences like \"Selena stays in Norway\" should be excluded from the training data for travel-to(person, location) even if a travel-to event did apply to that argument pair. In order to select only the synonomous sentences, we develop a probabilistic graphical model, described in Section 5.2, to accurately assign sentences from NewsSpikes to each discovered event relation E. Given this annotated data, NEWSSPIKE-RE trains extractors using a multiclass logistic regression classfier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
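Since the extractor is a multiclass logistic regression classifier over the generated sentences, a minimal sketch of such a classifier may help. This is a pure-Python stand-in with invented feature names and toy data; the paper's actual feature set and learner configuration are not reproduced here:

```python
import math
from collections import defaultdict

# Minimal multiclass (softmax) logistic regression over sparse
# bag-of-features examples, trained by plain gradient descent.
def train(examples, labels, epochs=200, lr=0.5):
    classes = sorted(set(labels))
    w = {c: defaultdict(float) for c in classes}
    for _ in range(epochs):
        for feats, y in zip(examples, labels):
            scores = {c: sum(w[c][f] for f in feats) for c in classes}
            m = max(scores.values())
            exps = {c: math.exp(s - m) for c, s in scores.items()}
            z = sum(exps.values())
            for c in classes:
                grad = (1.0 if c == y else 0.0) - exps[c] / z
                for f in feats:
                    w[c][f] += lr * grad
    return w, classes

def predict(w, classes, feats):
    # argmax over class scores
    return max(classes, key=lambda c: sum(w[c][f] for f in feats))

# Invented dependency-path and FIGER-type features for two relations.
train_x = [["path:travel_to", "type:person", "type:location"],
           ["path:arrive_in", "type:person", "type:location"],
           ["path:fire", "type:org", "type:person"]]
train_y = ["travel-to", "travel-to", "fire"]
w, classes = train(train_x, train_y)
```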
{
"text": "During the testing phase, NEWSSPIKE-RE accepts arbitrary sentences (no date-stamp required), uses FIGER to identify possible arguments, and uses the classifier to predicts which events (if any) hold between an argument pair. We describe the extraction process in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
{
"text": "Note that NEWSSPIKE-RE is an unsupervised al- , where E i are event relations and \u03b7 j are NewsSpikes. The optimal solution selects E 1 with edges to \u03b7 1 and \u03b7 2 , and E 3 with edge to \u03b7 3 . These two event relations cover all the NewsSpikes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
{
"text": "gorithm that requires no manual labelling of the training instances. Like distant supervision, the key is to automatically generate the training data, at which point a traditional supervised classifier may be applied to learn an extractor. Because distant supervision creates very noisy annotations, researchers often use specialized learners that model the correctness of a training example with a latent variable (Riedel et al., 2010; Hoffmann et al., 2011 ), but we found this unnecessary, because NEWSSPIKE-RE creates high quality training data.",
"cite_spans": [
{
"start": 415,
"end": 436,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF26"
},
{
"start": 437,
"end": 458,
"text": "Hoffmann et al., 2011",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
{
"text": "The first step of NEWSSPIKE-RE is to discover a set of event relations in the form of E = e(t 1 , t 2 ), where e is an event phrase, and t i are fine-grained argument types generated by FIGER, augmented with the important types \"number\" and \"money\", which are recognized by the Stanford name entity recognition system (Finkel et al., 2005) . To be most useful, the discovered event relations should cover salient events that are frequently reported in the news articles. Formally, we say that a NewsSpike \u03b7 = (a 1 , a 2 , d, S) mentions E = e(t 1 , t 2 ) if the types of a i are t i for each i, and one of its sentence has e as the event phrase between the arguments. To maximize the salience of the events, NEWSSPIKE-RE will prefer event relations that are \"mentioned\" by more NewsSpikes. In addition, the set of event relations should be distict. For example, if the relation travel-to(person, location) is already in the set, then visit(person, location) should not be selected as a separate relation. To reduce overlap, discovered event relations should not be mentioned by the same NewsSpike.",
"cite_spans": [
{
"start": 318,
"end": 339,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discovering Salient Events",
"sec_num": "4"
},
{
"text": "Let E be all candidate event relations, N be all NewsSpikes. Our goal is to select the K most salient relations from E, minimizing overlap between relations. We can frame this task as a variant of the bipartite graph edge-cover problem. Let a bipartite graph G have one node E i for each event relation in E and one node \u03b7 j for each NewsSpike in N . There is an edge between E i and \u03b7 j if \u03b7 j mentions E i . The edge-cover problem is to select a largest subset of edges subject to (1) at most K nodes of E i are chosen and all edges incident to them are chosen as the covered edges; (2) each node of \u03b7 j is incident to at most one edge. The first constraint guarantees that there are exactly K event relations discovered; the second constraint ensures that no NewsSpike participates in two event relations. Figure 2 shows the optimized solution of a simple graph with K = 2, which can cover 3 edges with 2 event relations that have no overlapping NewsSpikes.",
"cite_spans": [],
"ref_spans": [
{
"start": 809,
"end": 817,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Discovering Salient Events",
"sec_num": "4"
},
{
"text": "Since both the objective function and constraints are linear, we can optimize this edge-cover problem with integer linear programming (Nemhauser and Wolsey, 1988) . By solving the optimization problem, NEWSSPIKE-RE finds a salient set of event relations incident to the covered edges. The discovered relations with K set to 30 are shown in Table 2 in Section 7. In addition, the covered edges bring us the initial mapping between the event types and NewsSpikes, which is used to train the probablistic model in Section 5.3.",
"cite_spans": [
{
"start": 134,
"end": 162,
"text": "(Nemhauser and Wolsey, 1988)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 340,
"end": 347,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discovering Salient Events",
"sec_num": "4"
},
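As a toy illustration of the edge-cover objective just described, the sketch below brute-forces the same selection on the Figure 2 example (the paper solves it with ILP; picking K relations with pairwise-disjoint mention sets while maximizing covered edges is equivalent to the two constraints above):

```python
from itertools import combinations

def discover(mentions, K):
    """mentions: dict relation -> set of NewsSpike ids that mention it.
    Returns the K relations with pairwise-disjoint mention sets that
    cover the most edges, i.e. the optimal edge-cover selection."""
    best, best_cover = None, -1
    for combo in combinations(sorted(mentions), K):
        spike_sets = [mentions[e] for e in combo]
        # constraint (2): no NewsSpike supports two chosen relations
        if sum(len(s) for s in spike_sets) != len(set().union(*spike_sets)):
            continue
        cover = sum(len(s) for s in spike_sets)  # number of covered edges
        if cover > best_cover:
            best, best_cover = combo, cover
    return best, best_cover

# Figure 2's instance: E1 is mentioned by spikes 1 and 2, E2 by spike 2
# (overlapping E1), E3 by spike 3; {E1, E3} covers 3 edges with K = 2.
mentions = {"E1": {1, 2}, "E2": {2}, "E3": {3}}
```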
{
"text": "After NEWSSPIKE-RE has discovered a set of event relations, it then generates training instances to learn an extractor for each relation. In this section, we present our algorithm for generating the training sentences. As shown in Figure 1 , the generator takes N NewsSpikes",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 239,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generating the Training Sentences",
"sec_num": "5"
},
{
"text": "{\u03b7 i = (a 1i , a 2i , d i , S i )|i = 1 . . . N } and K event relations {E k = e k (t 1k , t 2k )|k = 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Training Sentences",
"sec_num": "5"
},
{
"text": ". . K} as input. For every event relation, E k , the generator identifies a subset of sentences from \u222a N i=1 S i expressing the event relation as training sentences. In this section, we first characterize the paraphrased event phrases and the parallel sentences in NewsSpikes. Then we show how to encode this heuristic in a probabilistic graphical model that jointly paraphrases the event phrases and identifies a set of training sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating the Training Sentences",
"sec_num": "5"
},
{
"text": "Previous work (Zhang and Weld, 2013) proposed several heuristics that are useful to find similar sentences in a NewsSpike. For example, the temporal functionality heuristic says that sentences in a NewsSpike with the same tense tend to be paraphrases. Unfortunately, these methods are too weak to generate enough data for training high quality event extractors: (1) they are \"in-spike heuristics\" that tend to generate small clusters from individual NewsSpikes. It remains unclear how to merge similar events occuring on different days and between different entities to increase cluster size. (2) they included heuristics to \"gain precision at the expense of recall\" (e.g. news articles do not state the same fact twice), because it is hard to obtain direct negative phrases inside one NewsSpike. In this paper, we exploit news streams in a cross-spike, global manner to obtain accurate positive and negative signals. This allows us to dramatically improve recall while maintaining high precision.",
"cite_spans": [
{
"start": 14,
"end": 36,
"text": "(Zhang and Weld, 2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Properties of Parallel News",
"sec_num": "5.1"
},
{
"text": "Our system starts from the basic observation that the parallel sentences tend to be coherent. So if a NewsSpike \u03b7 = (a 1 , a 2 , d, S) is an instance of an event relation E = e(t 1 , t 2 ), the event phrases in its parallel sentences tend to be paraphrases. But sometimes the sentences in the NewsSpike are related but not paraphrases. For example, one day \"Snowden will stay in Hong Kong ...\" appears together with \"Snowden travels to Hong Kong ...\". Although the fact stay-in(Snowden, Hong Kong) is true, it is harmful to include \"Snowden will stay in Hong Kong\" in the training for travel-to(person, location).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Properties of Parallel News",
"sec_num": "5.1"
},
{
"text": "Detecting paraphrases remains a challenge to most unsupervised approaches because they tend to cluster heavily co-occurring phrases which may turn out to be semantically different or even antonymous. (Zhang and Weld, 2013) presented a method to avoid confusion between antonym and synonyms in NewsSpikes, but did not address the problem of related but different phrases like travel to and stay in in a NewsSpike.",
"cite_spans": [
{
"start": 200,
"end": 222,
"text": "(Zhang and Weld, 2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Properties of Parallel News",
"sec_num": "5.1"
},
{
"text": "To handle this, our method rests on a simple observation: when you read \"Snowden travels to Hong Kong\" and \"Snowden cannot stay in Hong Kong as Chinese officials do not allow ...\" in the same NewsSpike, it is unlike that travel to and stay in are synonymous event phrases because otherwise the two news stories are describing the opposite event. This observation leads to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Properties of Parallel News",
"sec_num": "5.1"
},
{
"text": "Temporal Negation Heuristic. Two event phrases p and q tend to be semantically different if they cooccur in the NewsSpike but one of them is in negated form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Properties of Parallel News",
"sec_num": "5.1"
},
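A minimal sketch of the temporal negation heuristic, assuming negation flags are supplied by upstream parsing (the function name and inputs below are hypothetical, not from the paper):

```python
from itertools import combinations

def negation_pairs(spike_phrases):
    """spike_phrases: list of (event_phrase, is_negated) pairs observed
    in one NewsSpike. Returns pairs of phrases judged semantically
    different because exactly one member of the pair is negated."""
    different = set()
    for (p, p_neg), (q, q_neg) in combinations(spike_phrases, 2):
        if p != q and p_neg != q_neg:   # one affirmative, one negated
            different.add(tuple(sorted((p, q))))
    return different

# The Snowden example: "travel to" is affirmative while "stay in" is
# negated ("cannot stay in"), so that pair is flagged as different;
# "arrive in" and "travel to" are both affirmative, so they are not.
spike = [("travel to", False), ("stay in", True), ("arrive in", False)]
```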
{
"text": "The temporal negation heuristic helps in two ways: (1) it provides some direct negative phrases for the event relations; NEWSSPIKE-RE uses these to heuristically label some variables in the model. (2) It creates some useful features to implement a form of transitvity. For example, if we find that live in and stay in are frequently co-occurring and the temporal negation heuristic tells us that travel to and stay in are not paraphrases, this is evidence that live in is unlikely to be a paraphrase of travel to, even if they are heavily co-occurring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Properties of Parallel News",
"sec_num": "5.1"
},
{
"text": "The following section describes our implementation that uses these properties to generate high quality training. Our goal is the following: a sentence (s, a 1 , a 2 , p) from NewsSpike \u03b7 = (a 1 , a 2 , d, S) should be included in the training data for event relation E = e(t 1 , t 2 ) if the event phrase p is a paraphrase of e and the event relation E happens to the argument pair (a 1 , a 2 ) at time d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Properties of Parallel News",
"sec_num": "5.1"
},
{
"text": "As discussed above, to identify a high quality set of training sentences from NewsSpikes, one needs to combine evidence that event phrases are paraphrases with evidence from NewsSpikes. For this purpose, we define an undirected graphical model to jointly reason about paraphrasing the event phrases and identifying the training sentences from NewsSpikes. We first list the notation used in this section:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Cluster Model",
"sec_num": "5.2"
},
{
"text": "E event relation p \u2208 P event phrases s \u2208 S p sentences w/ the event phrase p Y p Is p a paraphrase for E? Z s p Is s w/ p good training for E? \u03a6 factors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Cluster Model",
"sec_num": "5.2"
},
{
"text": "Let P be the union of all the event phrases from every NewsSpike. For each p \u2208 P , let S p be the set of sentences having p as its event phrase. Figure 3(a) shows the model in plate form. There are two kinds of random variables corresponding to phrases and sentences, respectively. For each event relation E = e(t 1 , t 2 ), there exists a connected component for every event phrase p \u2208 P that models (1) whether p is a paraphrase of e or not (modeled using Boolean phrase variables, Y p ); and (2) whether each sentence of S p is a good training sentence for E (modeled using |S p | Boolean sentence variables {Z s p |s \u2208 S p }. Intuitively, the goal of the model is to find the set of good training sentences, with Figure 3 : (a) The connected components depicted as plate model, where each Y is a Boolean variable for a relation phrase and each Z is a Boolean variable for a training sentence for with that phrase; (b) and (c) are example connected components for the event phrases 's trip to and stay in respectively. The goal of the model is to set Y = 1 for good paraphrases of a relation and to set Z = 1 for good training sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 156,
"text": "Figure 3(a)",
"ref_id": null
},
{
"start": 717,
"end": 725,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Joint Cluster Model",
"sec_num": "5.2"
},
{
"text": "Z s p = 1. The union of such sentences over the different phrases, \u222a p {s|Z s p = 1}, defines the training sentences for the event. Figure 3(b) and 3(c) show two example connected-components for the event phrases 's trip to and stay in respectively. Now, we can define the joint distribution over the event phrases and the sentences. The joint distribution is a function defined on factors that encode our observations about NewsSpikes as features and constraints. The phrase factor \u03a6 phrase is a loglinear function attaching to Y p with the paraphrasing features, such as whether p and e co-occur in the NewsSpikes, or whether p shares the same head word with e. They are used to distinguish whether p is a good event phrase.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 143,
"text": "Figure 3(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Joint Cluster Model",
"sec_num": "5.2"
},
{
"text": "A sentence should not be identified as a good training sentence if it does not contain a positive event phrase. For example, if Y stay in in Figure 3(b) takes the value of 0, thus all sentences with the event phrase stay in should also take the value of 0. We implement this constraint with a joint factor \u03a6 joint among Y p and Z s p variable. In addition, good training sentences occur when the NewsSpike is an event instance. To encode this observation, we need to featurize the NewsSpikes and let them bias the assignments. Our model implements this with two types of log-linear factors:",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 152,
"text": "Figure 3(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Joint Cluster Model",
"sec_num": "5.2"
},
{
"text": "(1) the unary in-spike factor \u03a6 in depends on the sentence variables and contains features about the corresponding NewsSpike. The factor is used to distinguish whether the NewsSpike is an instance of e(t 1 , t 2 ), such as whether the argument types of the NewsSpike match the designated types t 1 , t 2 ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Cluster Model",
"sec_num": "5.2"
},
{
"text": "(2) the pairwise cross-spike factors \u03a6 cross connect pairs of sentences. This uses features such as whether the pair of NewsSpikes for the two sentences have high textual similarity, and whether two NewsSpikes contain negated event phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Cluster Model",
"sec_num": "5.2"
},
{
"text": "We define the joint distribution for the connected component for p as follows. Let Z be the vector of sentence variables, let x be the features. The joint distribution is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Cluster Model",
"sec_num": "5.2"
},
{
"text": "p(Y = y, Z = z|x; \u0398) def = 1 Zx \u03a6 phrase (y, x) \u00d7\u03a6 joint (y, z) s \u03a6 in (z s , x) s,s \u03a6 cross (z s , z s , x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Cluster Model",
"sec_num": "5.2"
},
{
"text": "where the parameter vector \u0398 is the weight vector of the features in \u03a6 in and \u03a6 cross , which are loglinear functions. The joint factors \u03a6 joint is zero when Y p = 0 but some Z s p = 1. Otherwise, it is set to 1. We use integer linear programming to perform MAP inference on the model, finding the predictions y, z that maximize the probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Cluster Model",
"sec_num": "5.2"
},
{
"text": "We now present the learning algorithm for our joint cluster model. The goal of the learning algorithm is to set \u0398 for the log-linear functions in the factors in a way that maximizes the likelihood estimation. We do this in a totally unsupervised manner, since manual annotation is expensive and not scalable to large numbers of event relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "The weights are learned in three steps: (1) NEWSSPIKE-RE creates a set of heuristic labels for a subset of variables in the graphical model; (2) it uses the heuristic labels as supervision for the model; (3) it updates \u0398 with the perceptron learning algorithm. The weights are used to infer the values of the variables that don't have heuristic labels. The procedure is summarized in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 384,
"end": 392,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "For each event relation E = e(t 1 , t 2 ), NEWSSPIKE-RE creates heuristic labels as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "Input: NewsSpikes and the connected components of the model;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "Heuristic Labels: 1. find positive and negative phrases and sentences P + , P \u2212 , S + , S \u2212 ; 2. label the connected componenets accordingly and create {(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "Y label i , Z label i ) | M i=1 }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "Learning: Update \u0398 with the perceptron learning algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "Output: the values of all variables in the connected components with the MAP inference. (1) P + : the temporal functionality heuristic (Zhang and Weld, 2013) says that if an event phrase p cooccurs with e in the NewsSpikes, it tends to be a paraphrase of e. We add the most frequently cooccurring event phrases to P + . P + also includes e itself.",
"cite_spans": [
{
"start": 135,
"end": 157,
"text": "(Zhang and Weld, 2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "(2) P \u2212 : the temporal negation heuristic says that if p and e co-occur in the NewsSpike but one of them is in its negated form, p should be negatively labeled. We add those event phrases to P \u2212 . If a phrase p appears in both P + and P \u2212 , we remove it from both sets. (3) S + : we first get the positive NewsSpikes from the solution of the edgecover problem in section 4. We treat the NewsSpike \u03b7 as positive if the edge between \u03b7 and E is covered. Next, every sentence with p \u2208 P + is added into S + . (4) S \u2212 : since the event relations discovered in section 4 tend to be distinct relations, a sentence is treated as negative sentence for E if it is heuristically labeled as positive for E = E. In addition, S \u2212 includes all sentences with p \u2208 P \u2212 . With P + , P \u2212 , S + , S \u2212 , we define the heuristic labeled set to be {(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "Y label i , Z label i ) | M i=1 },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "where M is the number of the connected components with the corresponding event phrases p \u2208 P + \u222a P \u2212 ; Y label i = 1 if p \u2208 P + and Y label i = 0 if p \u2208 P \u2212 . Z i is labeled similarly, but note that if the sentence in the connected component doesn't exist in S + \u222a S \u2212 , NEWSSPIKE-RE doesn't include the corresponding variable in (Collins, 2002) , we use a fast perceptron learning approach to update \u0398. It consists of iterating two steps: (1) MAP inference given the current weight; (2) penalizing the weights if the inferred assignments are different from the heuristic labeled assignments.",
"cite_spans": [
{
"start": 330,
"end": 345,
"text": "(Collins, 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
{
"text": "Z label i . With {(Y label i , Z label i ) | M i=1 }, learn- ing can be done with maximum likelihood estima- tion as L(\u0398) = log i p(Y i = y label i , Z i = z label i | x i , \u0398). Following",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},
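{
"text": "The perceptron step can be sketched as below; the feature function and inference routine are schematic stand-ins for the model's MAP inference and factor features, not the actual implementation.

```python
def perceptron_epoch(theta, examples, infer, feats, lr=1.0):
    # One pass of the structured perceptron: predict with MAP inference
    # under the current weights, then move the weights toward the
    # features of the heuristically labeled assignment and away from
    # those of the prediction.
    for x, y_gold in examples:
        y_pred = infer(theta, x)
        if y_pred != y_gold:
            f_gold, f_pred = feats(x, y_gold), feats(x, y_pred)
            for k in set(f_gold) | set(f_pred):
                theta[k] = theta.get(k, 0.0) + lr * (
                    f_gold.get(k, 0.0) - f_pred.get(k, 0.0))
    return theta
```

Iterating this pass until the predictions stop changing on the heuristically labeled components yields the final \u0398.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Heuristic Labels",
"sec_num": "5.3"
},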
{
"text": "As shown in Figure 1 , we learn the extractors from the generated training sentences. Note that most distant supervised (Hoffmann et al., 2011; Surdeanu et al., 2012) approaches use multi-instance, aggregatelevel training (i.e. the supervision comes from labeled sets of instances instead of individually labeled sentences). Coping with the noise inherent in these multi-instance bags remains a big challenge for distant supervision. In contrast, our sentencelevel training data is more direct and minimizes noise. Therefore, we implement the event extractor as a simple multi-class, L2-regularized logistic regression classifier.",
"cite_spans": [
{
"start": 120,
"end": 143,
"text": "(Hoffmann et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 144,
"end": 166,
"text": "Surdeanu et al., 2012)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentential Event Extraction",
"sec_num": "6"
},
{
"text": "For features of the classifier, we use the lexicalized dependency paths, the OpenIE phrases, the minimal subtree of the dependency parse and the bag-of-words between the arguments. We also augment them with fine grained argument types produced by FIGER (Ling and Weld, 2012) . The event extractor that is learned can take individual test sentences (s, a 1 , a 2 ) as input and predict whether that sentence expresses the event between (a 1 , a 2 ).",
"cite_spans": [
{
"start": 253,
"end": 274,
"text": "(Ling and Weld, 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentential Event Extraction",
"sec_num": "6"
},
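{
"text": "A minimal version of such a classifier is sketched below as plain multi-class logistic regression with L2 regularization over sparse feature dictionaries; the feature names, hyperparameters, and training loop are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import defaultdict

def train_softmax(examples, classes, l2=0.1, lr=0.5, epochs=200):
    # Multi-class logistic regression with L2 regularization over
    # sparse feature dicts, trained by plain gradient ascent.
    w = {c: defaultdict(float) for c in classes}
    for _ in range(epochs):
        for feats, label in examples:
            scores = {c: sum(w[c][f] * v for f, v in feats.items())
                      for c in classes}
            m = max(scores.values())
            exps = {c: math.exp(s - m) for c, s in scores.items()}
            total = sum(exps.values())
            for c in classes:
                g = (1.0 if c == label else 0.0) - exps[c] / total
                for f, v in feats.items():
                    w[c][f] += lr * (g * v - l2 * w[c][f])
    return w

def predict(w, feats):
    # Return the class with the highest linear score.
    return max(w, key=lambda c: sum(w[c][f] * v for f, v in feats.items()))
```

At test time, each sentence (s, a 1 , a 2 ) is converted to its sparse feature dict and passed to predict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentential Event Extraction",
"sec_num": "6"
},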
{
"text": "Our evaluation addresses two questions. Section 7.2 considers whether our training generation algorithm identifies accurate and diverse sentences. Then, Section 7.3 investigates whether the event extractor, learned from the training sentences, outperforms other extraction approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Evaluation",
"sec_num": "7"
},
{
"text": "We follow the procedure described in (Zhang and Weld, 2013) to collect parallel news streams and generate the NewsSpikes: first, we get news seeds and query the Bing newswire search engine to gather additional, time-stamped, news articles on a similar topic; next, we extract OpenIE tuples from the news articles and group the sentences that share the same arguments and date into NewsSpikes. We collected the news stream corpus from March 1st 2013 to July 1st 2014. We split the dataset into two parts: in the training phrase, we use the news streams in 2013 (named NS13) to generate the training sentences. NS13 has 33k NewsSpikes containing 173k sentences.",
"cite_spans": [
{
"start": 37,
"end": 59,
"text": "(Zhang and Weld, 2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "We evaluated the extraction performance on news articles collected in 2014 (named NS14). In this way, we make sure the test sentences are unseen during training. There are 15 million sentences in NS14. We randomly sample 100k unique sentences having two different arguments recognized by the name entity recognition system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "For our event discovery algorithm, we set the number of event relations to be 30 and ran the algorithm on NS13. The algorithm takes 6 seconds to run on a 2.3GHz CPU. Note that most previous unsupervised relation discovery algorithms require additional manual post-processing to assign names to the output clusters. In contrast, NEWSSPIKE-RE discovers the event relations fully automatically and the output is self-explanatory. We list them together with the by-event extraction performance in Table 2 . From the table, we can see that most of the discovered event relations are salient with little overlap between relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 493,
"end": 500,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "While we arbitrarily set K to 30 in our experiments, there is no inherent limit to the number of relation phrases as long as the news corpus provides sufficient support to learn an extractor for each relation. In future, we plan to explore much larger sets of event relations to see if the extraction accuracy is maintained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "The joint cluster model that identifies training sentences for each event relation E = e(t 1 , t 2 ) uses cosine similarity between the event phrase p of a sentence and the canonical phrases of each relation as features in the phrase factors in Figure 3(a) . It also includes the cosine similarity between p and a set of \"anti-phrases\" for the event relation which are recognized by the temporal negation heuristic.",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 256,
"text": "Figure 3(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
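{
"text": "As an illustration of this similarity feature, a token-level cosine between two phrases might look like the following; the bag-of-words representation and whitespace tokenization are assumptions, since the paper does not specify its exact vectorization.

```python
import math
from collections import Counter

def cosine(phrase_a, phrase_b):
    # Cosine similarity between two phrases as bags of tokens.
    va, vb = Counter(phrase_a.split()), Counter(phrase_b.split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

The same score computed against the anti-phrase set gives the negative-evidence feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},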
{
"text": "For the in-spike factor, we measure whether the fine-grained argument types of the sentence returned from the FIGER system matches the required t i respectively. In addition, we implement the features from (Zhang and Weld, 2013) to measure whether the sentence is describing the event of the NewsSpike. For the cross-spike factors, we use textual similarity features between the two sets of parallel sentences to measure the distance between the pair of NewsSpikes.",
"cite_spans": [
{
"start": 206,
"end": 228,
"text": "(Zhang and Weld, 2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "7.1"
},
{
"text": "The key to a good learning system is a high-quality training set. In this section, we compare our joint model against pipeline systems that consider paraphrases and argument type matching sequentially, Table 1 : Quality of the generated training sentences (count, micro-and macro-accuracy), where \"all\" includes sentences with all event phrases and \"diverse\" are those with distinct event phrases.",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 209,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quality of the Generated Training Set",
"sec_num": "7.2"
},
{
"text": "based on the following paraphrasing techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of the Generated Training Set",
"sec_num": "7.2"
},
{
"text": "Basic is based on the temporal functionality heuristic of (Zhang and Weld, 2013) . It treats all event phrases appearing in the same NewsSpike as paraphrases. Yates09 uses Resolver (Yates and Etzioni, 2009) to create clusters of phrases. Resolver measures the similarity between the phrases by means of both distributional features and textual features. We convert the sentences in NewsSpikes into tuples in the form of (a 1 , p, a 2 ), and run Resolver on these tuples to generate the paraphrases. Zhang13: We used the generated paraphrase set from (Zhang and Weld, 2013) . Ganit13: Ganitkevitch et al. (2013) released a large paraphrase database (PPDB) based on exploiting the bilingual parallel corpora. Note that some of these paraphrasing systems do not handle dependency paths. So when p is a dependency path, we use the surface string between the arguments as the phrase. NewsSpike-RE: We also conduct ablation testing on NEWSSPIKE-RE to measure the effect of the cross-spike factors and the temporal negation heuristic: w/o Cross uses a simpler model by removing the cross-spike factors of NEWSSPIKE-RE; w/o Negation uses the same joint cluster model as NEWSSPIKE-RE but removes the features and the heuristic labels coming from the temporal negation heuristic.",
"cite_spans": [
{
"start": 58,
"end": 80,
"text": "(Zhang and Weld, 2013)",
"ref_id": "BIBREF38"
},
{
"start": 181,
"end": 206,
"text": "(Yates and Etzioni, 2009)",
"ref_id": "BIBREF37"
},
{
"start": 550,
"end": 572,
"text": "(Zhang and Weld, 2013)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of the Generated Training Set",
"sec_num": "7.2"
},
{
"text": "We measured the micro-and macro-accuracy of each system by manually labeling 1000 randomly chosen output from each system 3 . Annotators read each training sentence, and decided if it was a good example for a particular event. We also report the number of generated sentences. Since the extractor should generalize over sentences with dissimilar expressions, it is crucial to identify sentences with diverse event phrases. Therefore we also measured the accuracy and the count of a \"diverse\" condition: only consider the subset of sentences with distinct event phrases. Table 1 shows the accuracy and the number of training examples. The basic temporal system brings us 0.50/0.62 micro-and macro-accuracy overall and 0.38/0.51 in the diverse condition. It shows that NewsSpikes are promising resources to generate the training set, but that elaboration is necessary. Yates09 gets 0.78/0.76 accuracy overall because its textual features help it to recognize many good sentences with similar phrases. But for the diverse condition, it gets lower precision because the distributional hypothesis fails to distinguish those correlated but different phrases.",
"cite_spans": [],
"ref_spans": [
{
"start": 570,
"end": 577,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quality of the Generated Training Set",
"sec_num": "7.2"
},
{
"text": "Although Ganitkevitch13 and Zhang13 leverage existing paraphrase databases, it is interesting that their accuracy is still not good. It is largely because many times the paraphrasing must depend on the context: e.g. \"Cutler hits Martellus Bennett with TD in closing seconds.\" is not good for the beat(team, team) relation, even though hit is a synonym for beat in general. These two systems show that it is not enough to use an off-the-shelf paraphrasing database for extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of the Generated Training Set",
"sec_num": "7.2"
},
{
"text": "The ablation test shows the effectiveness of the temporal negation hypothesis: after turning off the relevant features and heuristic labels, the precision drops about 10 percentage points. In addition, the cross-spike factors bring NEWSSPIKE-RE about 22% more training sentences and also increase the accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of the Generated Training Set",
"sec_num": "7.2"
},
{
"text": "We did bootstrap sampling to test the statistical significance of NEWSSPIKE-RE's improvement in accuracy over each comparison system and ablation of NEWSSPIKE-RE. For each system we computed the accuracy of 10 samples of 100 labeled outputs. We then ran the paired t-test over the accuracy numbers of each other system compared to NEWSSPIKE-RE. For all but w/o cross the improvement is strongly significant with p-value less than 1%. The increase in accuracy compared to w/o cross has borderline significance (p-value 5.5%), but is a clear win with its 22% increase in training size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of the Generated Training Set",
"sec_num": "7.2"
},
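{
"text": "The protocol above can be sketched as follows; the sample sizes (10 samples of 100 outputs) follow the text, while the 0/1 labels passed in are synthetic placeholders for the manual judgments.

```python
import math
import random
import statistics

def bootstrap_accuracies(labels, n_samples=10, k=100, seed=0):
    # Accuracy of n_samples resamples (with replacement) of k labeled
    # outputs each; labels are 1 (correct) or 0 (incorrect).
    rng = random.Random(seed)
    return [sum(rng.choice(labels) for _ in range(k)) / k
            for _ in range(n_samples)]

def paired_t(acc_a, acc_b):
    # Paired t statistic over per-sample accuracy differences.
    diffs = [a - b for a, b in zip(acc_a, acc_b)]
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))
```

A large positive t over the paired samples corresponds to a small p-value for the improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of the Generated Training Set",
"sec_num": "7.2"
},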
{
"text": "Most previous relation extraction approaches either require a manually labeled training set, or work only on a pre-defined set of relations that have ground instances from KBs. The closest work to NEWSSPIKE-RE is Universal Schemas (Riedel et al., 2013) , which addresses the limitation of distant supervision that the relations must exist in KBs. Their solution is to treat the surface strings, dependency paths, and relations from KBs as equal \"schemas\", and then to exploit the correlation between the instances and the schemas from a very large unlabeled corpus. In their paper, Riedel et al. evaluated only on static relations from Freebase and achieve state-of-the-art performance. But Universal Schemas can be adapted to handle events, by introducing the events as schemas and heuristically finding seed instances.",
"cite_spans": [
{
"start": 231,
"end": 252,
"text": "(Riedel et al., 2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of the Event Extractors",
"sec_num": "7.3"
},
{
"text": "We set up a competing system (R13) as follows: (1) We take the NYTimes corpus published between 1987 and 2007 (Sandhaus, 2008) , the dataset used by Riedel et al. (2013) containing 1.8 million NY Times articles; (2) The instances (i.e. the rows of the matrix) come from the entity pairs from the news articles; (3) There are two types of columns: some are the extraction features used by NEWSSPIKE-RE, including the lexicalized dependency paths described in Riedel et al.; others are event relations E = e(t 1 , t 2 ); (4) For an entity pair (a 1 , a 2 ), if there is an OpenIE extraction (a 1 , e, a 2 ) and the entity types of (a 1 , a 2 ) match (t 1 , t 2 ), we assume the event relation E is observed on that instance.",
"cite_spans": [
{
"start": 110,
"end": 126,
"text": "(Sandhaus, 2008)",
"ref_id": "BIBREF30"
},
{
"start": 149,
"end": 169,
"text": "Riedel et al. (2013)",
"ref_id": "BIBREF28"
},
{
"start": 458,
"end": 472,
"text": "Riedel et al.;",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of the Event Extractors",
"sec_num": "7.3"
},
{
"text": "As shown in Table 1 , parallel news streams are a promising resource for clustering because of the strong correlation between the instances and the event phrases. We train another version of Universal Schemas R13P on the parallel news streams NS13. In particular, entity pairs from different NewsSpikes are used as different rows in the matrix.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance of the Event Extractors",
"sec_num": "7.3"
},
{
"text": "We would like to measure the precision and recall of the extractors. But note that it is impossible to fully label all the sentences, so we follow the \"pooling\" technique described in (Riedel et al., 2013) to create the labeled dataset. For every competing system, we sample 100 top outputs for every event relation and add this to the pool. The annotators are shown these sentences and asked to judge whether the sentence expresses the event relation or not. After that, the labeled set become \"gold\" and can be used to measure the precision and pseudorecall. There are in all 6,178 distinct sentences in the pool, since some outputs are produced by multiple systems. Among them, 2,903 sentences are labeled as positive. In Table 2 , the # columns show the number of true extractions in the pool for every event relation.",
"cite_spans": [
{
"start": 184,
"end": 205,
"text": "(Riedel et al., 2013)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 725,
"end": 732,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Performance of the Event Extractors",
"sec_num": "7.3"
},
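{
"text": "Precision and pseudo-recall over the pooled labels can be computed as sketched below; the trapezoidal area is one common AUC convention and an assumption here, since the paper does not state which it uses.

```python
def pr_curve(scored_outputs, num_pooled_positives):
    # Precision / pseudo-recall points from extractions ranked by
    # confidence; recall is \"pseudo\" because its denominator is the
    # number of pooled true extractions, not all true facts.
    ranked = sorted(scored_outputs, key=lambda t: -t[0])
    tp, points = 0, []
    for i, (_, is_correct) in enumerate(ranked, 1):
        tp += is_correct
        points.append((tp / i, tp / num_pooled_positives))
    return points

def auc(points):
    # Trapezoidal area under the precision-recall points, starting
    # from precision 1.0 at recall 0.0.
    area, prev_p, prev_r = 0.0, 1.0, 0.0
    for p, r in points:
        area += (r - prev_r) * (p + prev_p) / 2.0
        prev_p, prev_r = p, r
    return area
```

Each system's (confidence, correct) pairs over the pool yield one curve and one AUC number per condition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of the Event Extractors",
"sec_num": "7.3"
},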
{
"text": "Similar to the diverse condition in Table 1 , it is important that the extractor can correctly predict on diverse sentences that are dissimilar to each other. Thus we conducted a \"diverse pooling\": for each system, we report numbers for the sentences with different dependency paths between the arguments for every discovered event. Figure 5(a) shows the precision pseudo-recall curve for all sentences for the three systems. NEWSSPIKE-RE outperforms the competing systems by a large margin. For example, the area under the curve (AUC) of NEWSSPIKE-RE for all sentences is 0.80 while that of R13P and R13 are 0.59 and 0.30. This is a 35% increase over R13P and 2.7 times the area compared to R13. Similar increases in AUC are observed on diverse sentences. Table 2 further lists the breakdown numbers for each event relation, as well as the micro and macro average. Although Universal Schemas had some success for several relations, NEWSSPIKE-RE achieved the best F1 for 26 out of 30 event relations; best AUC for 26 out of 30. The advantage is even greater in the diverse condition. It is interesting to see that R13P performs much better than R13, since the data coming from NYTimes is much noisier.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 43,
"text": "Table 1",
"ref_id": null
},
{
"start": 333,
"end": 344,
"text": "Figure 5(a)",
"ref_id": "FIGREF2"
},
{
"start": 757,
"end": 764,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Performance of the Event Extractors",
"sec_num": "7.3"
},
{
"text": "A closer look shows that Universal Schemas tends to confuse correlated but different phrases. NEWSSPIKE-RE, however, rarely made these errors because our model can effectively exploit negative evidence to distinguish them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of the Event Extractors",
"sec_num": "7.3"
},
{
"text": "Although the most event relations in Table 2 cannot be handled by the distant supervised approach, it is possible to match buy(org,org) to Freebase relations with appropriate database operators such as to The New York Times. NEWSSPIKE-RE has AUC 0.80, more than doubling R13 (0.30) and 35% higher than R13P (0.59) for all event relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 44,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Comparing to Distant Supervision",
"sec_num": "7.3.1"
},
{
"text": "join and select (Zhang et al., 2012) . To evaluate how distant supervision performs, we introduce the system DS on NYT based on a manual mapping of buy(org,org) to the join relation 4 in Freebase. Then we match its instances to NYTimes articles and follow the steps of Surdeanu et al. (2012) to train the extractor. The matching to NYTimes brings us 264 positive instances having 5,333 sentences, but unfortunately the sentence-level accuracy is only 13% based on examination of 100 random sentences. Figure 5(b) shows the PR curves for all the competing systems. Distant supervision predicts the top extractions correctly because the multi-instance technique recognizes some common expressions (e.g. buy, acquire), but the precision drops dramatically since most positive expressions are overwhelmed by the noise.",
"cite_spans": [
{
"start": 16,
"end": 36,
"text": "(Zhang et al., 2012)",
"ref_id": "BIBREF39"
},
{
"start": 269,
"end": 291,
"text": "Surdeanu et al. (2012)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 501,
"end": 512,
"text": "Figure 5(b)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Comparing to Distant Supervision",
"sec_num": "7.3.1"
},
{
"text": "Popular distant supervised approaches have limited ability to handle event extraction, since fluent facts are highly time dependent and often do not exist in any KB. This paper presents a novel unsupervised approach for event extraction that exploits parallel news streams. Our NEWSSPIKE-RE system automatically identifies a set of argument-typed events from a news corpus, and then learns a sentential (micro-reading) extractor for each event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "We introduced a novel, temporal negation heuristic for parallel news streams that identifies event phrases that are correlated, but are not paraphrases. We encoded this in a probabilistic graphical model 4 /organization/organization/companies_ acquired1/business/acquisition/company_acquired to cluster sentences, generating high quality training data to learn a sentential extractor. This provides negative evidence crucial to achieving high precision training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "Experiments show the high quality of the generated training sentences and confirm the importance of our negation heuristic. Our most important experiment shows that we can learn accurate event extractors from this training data. NEWSSPIKE-RE outperforms comparable extractors by a wide margin, more than doubling the area under a precision-recall curve compared to Universal Schemas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "In future work we plan to implement our system as an end-to-end online service. This would allow users to conveniently define events of interest, learn extractors for each event, and return extracted facts from news streams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "Transactions of the Association for Computational Linguistics, vol. 3, pp. 117-129, 2015. Action Editor: Hal Daum\u00e9 III. Submission batch: 10/2014; Revision batch 1/2015; Published 2/2015. c 2015 Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For clarity in the paper, we refer to this relation as travel-to, even though the phrase arrive in is actually more frequent and is selected as the name of this relation by our event discovery algorithm, as shown inTable 2.2 This dependency path will be referred to as \"'s trip to\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Two Odesk workers were asked to label the dataset, a graduate student then reconciled any disagreements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Hal Daume III, Xiao Ling, Luke Zettlemoyer and the reviewers. This work was supported by ONR grant N00014-12-1-0211, the WRF/Cable Professorship, a gift from Google, and the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of DARPA, AFRL, or the US government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Snowball: extracting relations from large plain-text collections",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gravano",
"suffix": ""
}
],
"year": 2000,
"venue": "ACM DL",
"volume": "",
"issue": "",
"pages": "85--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: extracting relations from large plain-text collections. In ACM DL, pages 85-94.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Open information extraction from the web",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07)",
"volume": "",
"issue": "",
"pages": "2670--2676",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), pages 2670-2676.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning to paraphrase: an unsupervised approach using multiplesequence alignment",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: an unsupervised approach using multiple- sequence alignment. In Proceedings of the 2003 Con- ference of the North American Chapter of the Associ- ation for Computational Linguistics on Human Lan- guage Technology (HLT-NAACL), pages 16-23.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Extracting paraphrases from a parallel corpus",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "McKeown",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Kathleen R McKeown. 2001. Ex- tracting paraphrases from a parallel corpus. In Pro- ceedings of the 39th Annual Meeting on Association for Computational Linguistics (ACL), pages 50-57.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Event discovery in social media feeds",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Benson",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "389--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Pro- ceedings of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies (HLT-NAACL), pages 389-398.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Extracting patterns and relations from the world wide web",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Brin",
"suffix": ""
}
],
"year": 1999,
"venue": "The World Wide Web and Databases",
"volume": "",
"issue": "",
"pages": "172--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Brin. 1999. Extracting patterns and relations from the world wide web. In The World Wide Web and Databases, pages 172-183.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Toward an architecture for neverending language learning",
"authors": [
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell. 2010. Toward an architecture for never- ending language learning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-10).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing",
"volume": "10",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden Markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natu- ral language processing-Volume 10, pages 1-8.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Constructing biological knowledge bases by extracting information from text sources",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Kumlien",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology (ISMB)",
"volume": "",
"issue": "",
"pages": "77--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Proceedings of the Seventh Inter- national Conference on Intelligent Systems for Molec- ular Biology (ISMB), pages 77-86.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Un- supervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Com- putational Linguistics, page 350.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Identifying relations for open information extraction",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1535-1545.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Incorporating non-local information into information extraction systems by Gibbs sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local informa- tion into information extraction systems by Gibbs sam- pling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL), pages 363-370.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "PPDB: The paraphrase database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Joint Human Language Technology Conference/Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2013)",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Joint Human Language Technology Con- ference/Annual Meeting of the North American Chap- ter of the Association for Computational Linguistics (HLT-NAACL 2013), pages 758-764.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discovering relations among named entities from large corpora",
"authors": [
{
"first": "Takaaki",
"middle": [],
"last": "Hasegawa",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takaaki Hasegawa, Satoshi Sekine, and Ralph Grishman. 2004. Discovering relations among named entities from large corpora. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics (ACL), page 415.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Knowledgebased weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (HLT-ACL)",
"volume": "",
"issue": "",
"pages": "541--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies (HLT- ACL), pages 541-550.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multi-faceted event recognition with bootstrapped dictionaries",
"authors": [
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2013,
"venue": "the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "41--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruihong Huang and Ellen Riloff. 2013. Multi-faceted event recognition with bootstrapped dictionaries. In the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL), pages 41-51.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Discovery of inference rules for question-answering",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Natural Language Engineering",
"volume": "7",
"issue": "4",
"pages": "343--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. Discovery of infer- ence rules for question-answering. Natural Language Engineering, 7(4):343-360.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Fine-grained entity recognition",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2012,
"venue": "Association for the Advancement of Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ling and Daniel S Weld. 2012. Fine-grained entity recognition. In Association for the Advancement of Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Event extraction as dependency parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (HLT-ACL)",
"volume": "",
"issue": "",
"pages": "1626--1635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Mihai Surdeanu, and Christopher D Manning. 2011. Event extraction as dependency pars- ing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies (HLT-ACL), pages 1626- 1635.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 47th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proceedings of the 47th",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (ACL), pages 1003-1011.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Scalable knowledge harvesting with high precision and high recall",
"authors": [
{
"first": "Ndapandula",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Theobald",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the fourth ACM international conference on Web search and data mining (WSDM)",
"volume": "",
"issue": "",
"pages": "227--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ndapandula Nakashole, Martin Theobald, and Gerhard Weikum. 2011. Scalable knowledge harvesting with high precision and high recall. In Proceedings of the fourth ACM international conference on Web search and data mining (WSDM), pages 227-236.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multi event extraction guided by global constraints",
"authors": [
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "70--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roi Reichart and Regina Barzilay. 2012. Multi event ex- traction guided by global constraints. In Proceedings of the 2012 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL), pages 70-79.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Event extraction using distant supervision",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Reschke",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jankowiak",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2014,
"venue": "Language Resources and Evaluation Conference (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Reschke, Martin Jankowiak, Mihai Surdeanu, Christopher D Manning, and Daniel Jurafsky. 2014. Event extraction using distant supervision. In Lan- guage Resources and Evaluation Conference (LREC).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Machine Learning and Knowledge Discovery in Databases (ECML)",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In Machine Learning and Knowledge Discovery in Databases (ECML), pages 148-163.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Model combination for event extraction in BioNLP 2011",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the BioNLP Shared Task 2011 Workshop",
"volume": "",
"issue": "",
"pages": "51--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, David McClosky, Mihai Surdeanu, An- drew McCallum, and Christopher D Manning. 2011. Model combination for event extraction in BioNLP 2011. In Proceedings of the BioNLP Shared Task 2011 Workshop, pages 51-55.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Relation extraction with matrix factorization and universal schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"M"
],
"last": "Marlin",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2013,
"venue": "Joint Human Language Technology Conference/Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Benjamin M. Mar- lin, and Andrew McCallum. 2013. Relation extraction with matrix factorization and universal schemas. In Joint Human Language Technology Con- ference/Annual Meeting of the North American Chap- ter of the Association for Computational Linguistics (HLT-NAACL).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Open domain event extraction from twitter",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD)",
"volume": "",
"issue": "",
"pages": "1104--1112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Oren Etzioni, Sam Clark, et al. 2012. Open domain event extraction from twitter. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD), pages 1104-1112.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The New York Times annotated corpus. Linguistic Data Consortium",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Sandhaus",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan Sandhaus. 2008. The New York Times annotated corpus. Linguistic Data Consortium.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Preemptive information extraction using unrestricted relation discovery",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Shinyama",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics (HLT-NAACL)",
"volume": "",
"issue": "",
"pages": "304--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Shinyama and Satoshi Sekine. 2006. Preemp- tive information extraction using unrestricted relation discovery. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Com- putational Linguistics (HLT-NAACL), pages 304-311.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Multi-instance multilabel learning for relation extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP)",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi- label learning for relation extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP), pages 455- 465.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Probabilistic matrix factorization leveraging contexts for unsupervised relation extraction",
"authors": [
{
"first": "Shingo",
"middle": [],
"last": "Takamatsu",
"suffix": ""
},
{
"first": "Issei",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "87--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2011. Probabilistic matrix factorization leveraging contexts for unsupervised relation extraction. In Ad- vances in Knowledge Discovery and Data Mining, pages 87-99.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Autonomously semantifying wikipedia",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the International Conference on Information and Knowledge Management (CIKM)",
"volume": "",
"issue": "",
"pages": "41--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Wu and Daniel S. Weld. 2007. Autonomously se- mantifying wikipedia. In Proceedings of the Inter- national Conference on Information and Knowledge Management (CIKM), pages 41-50.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Structured relation discovery using generative models",
"authors": [
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1456--1466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1456-1466.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Unsupervised relation discovery with sense disambiguation",
"authors": [
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "712--720",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limin Yao, Sebastian Riedel, and Andrew McCallum. 2012. Unsupervised relation discovery with sense dis- ambiguation. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics (ACL), pages 712-720.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Unsupervised methods for determining object and relation synonyms on the web",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Artificial Intelligence Research",
"volume": "34",
"issue": "1",
"pages": "255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Yates and Oren Etzioni. 2009. Unsupervised methods for determining object and relation synonyms on the web. Journal of Artificial Intelligence Research, 34(1):255.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Harvesting parallel news streams to generate paraphrases of event relations",
"authors": [
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP)",
"volume": "",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Congle Zhang and Daniel S Weld. 2013. Harvesting par- allel news streams to generate paraphrases of event re- lations. In Proceedings of the 2013 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning (EMNLP), pages 455-465.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Ontological smoothing for relation extraction with minimal supervision",
"authors": [
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2012,
"venue": "Association for the Advancement of Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Congle Zhang, Raphael Hoffmann, and Daniel S Weld. 2012. Ontological smoothing for relation extraction with minimal supervision. In Association for the Ad- vancement of Artificial Intelligence (AAAI).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A simple example of the edge-cover algorithm with K=2",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Learning from Heuristic Labels",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Precision pseudo-recall curves for (a) all event relations; (b) buy(org, org); this figure includes the distant supervision algorithm MIML learned from matching the Freebase relation.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF4": {
"text": "Performance of extractors by event relation, reporting both precision and the area under the PR curve. The # column shows the number of true extractions in the pool of sampled output. NEWSSPIKE-RE (labeled N-RE) outperforms two implementations of Riedel's Universal Schemas (See Section 7.3 for details). The advantage of NEWSSPIKE-RE over Universal Schemas is greatest on a diverse test set where each sentence has a distinct event phrase.",
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}