{
"paper_id": "S17-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:28:27.795005Z"
},
"title": "Acquiring Predicate Paraphrases from News Tweets",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {
"settlement": "Ramat-Gan",
"country": "Israel"
}
},
"email": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {
"settlement": "Ramat-Gan",
"country": "Israel"
}
},
"email": "gabriel.satanovsky@gmail.com"
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {
"settlement": "Ramat-Gan",
"country": "Israel"
}
},
"email": "dagan@cs.biu.ac.il"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a simple method for evergrowing extraction of predicate paraphrases from news headlines in Twitter. Analysis of the output of ten weeks of collection shows that the accuracy of paraphrases with different support levels is estimated between 60-86%. We also demonstrate that our resource is to a large extent complementary to existing resources, providing many novel paraphrases. Our resource is publicly available, continuously expanding based on daily news.",
"pdf_parse": {
"paper_id": "S17-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a simple method for evergrowing extraction of predicate paraphrases from news headlines in Twitter. Analysis of the output of ten weeks of collection shows that the accuracy of paraphrases with different support levels is estimated between 60-86%. We also demonstrate that our resource is to a large extent complementary to existing resources, providing many novel paraphrases. Our resource is publicly available, continuously expanding based on daily news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recognizing that various textual descriptions across multiple texts refer to the same event or action can benefit NLP applications such as recognizing textual entailment (Dagan et al., 2013) and question answering. For example, to answer \"when did the US Supreme Court approve samesex marriage?\" given the text \"In June 2015, the Supreme Court ruled for same-sex marriage\", approve and ruled for should be identified as describing the same action.",
"cite_spans": [
{
"start": 170,
"end": 190,
"text": "(Dagan et al., 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To that end, much effort has been devoted to identifying predicate paraphrases, some of which resulted in releasing resources of predicate entailment or paraphrases. Two main approaches were proposed for that matter; the first leverages the similarity in argument distribution across a large corpus between two predicates (e.g. [a] 0 buy [a] 1 / [a] 0 acquire [a] 1 ) (Lin and Pantel, 2001; Berant et al., 2010) . The second approach exploits bilingual parallel corpora, extracting as paraphrases pairs of texts that were translated identically to foreign languages (Ganitkevitch et al., 2013) .",
"cite_spans": [
{
"start": 368,
"end": 390,
"text": "(Lin and Pantel, 2001;",
"ref_id": "BIBREF8"
},
{
"start": 391,
"end": 411,
"text": "Berant et al., 2010)",
"ref_id": "BIBREF4"
},
{
"start": 566,
"end": 593,
"text": "(Ganitkevitch et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While these methods have produced exhaustive resources which are broadly used by applications, A third approach was proposed to harvest paraphrases from multiple mentions of the same event in news articles. 1 This approach assumes that various redundant reports make different lexical choices to describe the same event. Although there has been some work following this approach (e.g. Shinyama et al., 2002; Shinyama and Sekine, 2006; Roth and Frank, 2012; Zhang and Weld, 2013) , it was less exhaustively investigated and did not result in creating paraphrase resources.",
"cite_spans": [
{
"start": 385,
"end": 407,
"text": "Shinyama et al., 2002;",
"ref_id": "BIBREF12"
},
{
"start": 408,
"end": 434,
"text": "Shinyama and Sekine, 2006;",
"ref_id": "BIBREF11"
},
{
"start": 435,
"end": 456,
"text": "Roth and Frank, 2012;",
"ref_id": "BIBREF10"
},
{
"start": 457,
"end": 478,
"text": "Zhang and Weld, 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present a novel unsupervised method for ever-growing extraction of lexicallydivergent predicate paraphrase pairs from news tweets. We apply our methodology to create a resource of predicate paraphrases, exemplified in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Analysis of the resource obtained after ten weeks of acquisition shows that the set of paraphrases reaches accuracy of 60-86% at different levels of support. Comparison to existing resources shows that, even as our resource is still smaller in orders of magnitude from existing resources, it complements them with nonconsecutive predicates (e.g. take [a] 0 from [a] 1 ) and paraphrases which are highly context specific. The resource and the source code are available at http://github.com/vered1986/ Chirps. 2 As of the end of May 2017, it contains 456,221 predicate pairs in 1,239,463 different contexts. Our resource is ever-growing and is expected to contain around 2 million predicate paraphrases within a year. Until it reaches a large enough size, we will release a daily update, and at a later stage, we plan to release a periodic update.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A prominent approach to acquire predicate paraphrases is to compare the distribution of their arguments across a corpus, as an extension to the distributional hypothesis (Harris, 1954) . DIRT (Lin and Pantel, 2001 ) is a resource of 10 million paraphrases, in which the similarity between predicate pairs is estimated by the geometric mean of the similarities of their argument slots. Berant (2012) constructed an entailment graph of distributionally similar predicates by enforcing transitivity constraints and applying global optimization, releasing 52 million directional entailment rules (e.g.",
"cite_spans": [
{
"start": 170,
"end": 184,
"text": "(Harris, 1954)",
"ref_id": null
},
{
"start": 192,
"end": 213,
"text": "(Lin and Pantel, 2001",
"ref_id": "BIBREF8"
},
{
"start": 385,
"end": 398,
"text": "Berant (2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing Paraphrase Resources",
"sec_num": "2.1"
},
{
"text": "[a] 0 shoot [a] 1 \u2192 [a] 0 kill [a] 1 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Existing Paraphrase Resources",
"sec_num": "2.1"
},
{
"text": "A second notable source for extracting paraphrases is multiple translations of the same text (Barzilay and McKeown, 2001 ).",
"cite_spans": [
{
"start": 93,
"end": 120,
"text": "(Barzilay and McKeown, 2001",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing Paraphrase Resources",
"sec_num": "2.1"
},
{
"text": "The Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015 ) is a huge collection of paraphrases extracted from bilingual parallel corpora. Paraphrases are scored heuristically, and the database is available for download in six increasingly large sizes according to scores (the smallest size being the most accurate). In addition to lexical paraphrases, PPDB also consists of 140 million syntactic paraphrases, some of which include predicates with non-terminals as arguments.",
"cite_spans": [
{
"start": 31,
"end": 58,
"text": "(Ganitkevitch et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 59,
"end": 79,
"text": "Pavlick et al., 2015",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing Paraphrase Resources",
"sec_num": "2.1"
},
{
"text": "Another line of work extracts paraphrases from redundant comparable news articles (e.g. Shinyama et al., 2002; Barzilay and Lee, 2003) . The assumption is that multiple news articles describing the same event use various lexical choices, providing a good source for paraphrases. Heuristics are applied to recognize that two news articles discuss the same event, such as lexical overlap and same publish date (Shinyama and Sekine, 2006) . Given such a pair of articles, it is likely that predicates connecting the same arguments will be paraphrases, as in the following example:",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "Shinyama et al., 2002;",
"ref_id": "BIBREF12"
},
{
"start": 111,
"end": 134,
"text": "Barzilay and Lee, 2003)",
"ref_id": "BIBREF1"
},
{
"start": 408,
"end": 435,
"text": "(Shinyama and Sekine, 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Multiple Event Descriptions",
"sec_num": "2.2"
},
{
"text": "1. GOP lawmakers introduce new health care plan 2. GOP lawmakers unveil new health care plan Zhang and Weld (2013) and Zhang et al. (2015) introduced methods that leverage parallel news streams to cluster predicates by meaning, using temporal constraints. Since this approach acquires paraphrases from descriptions of the same event, it is potentially more accurate than methods that acquire paraphrases from the entire corpus or translation phrase table. However, there is currently no paraphrase resource acquired in this approach. 3 Finally, Xu et al. 2014developed a supervised model to collect sentential paraphrases from Twitter. They used Twitter's trending topic service, and considered two tweets from the same topic as paraphrases if they shared a single anchor word.",
"cite_spans": [
{
"start": 93,
"end": 114,
"text": "Zhang and Weld (2013)",
"ref_id": "BIBREF19"
},
{
"start": 119,
"end": 138,
"text": "Zhang et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 534,
"end": 535,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Multiple Event Descriptions",
"sec_num": "2.2"
},
{
"text": "We present a methodology to automatically collect binary verbal predicate paraphrases from Twitter. We first obtain news related tweets ( \u00a73.1) from which we extract propositions ( \u00a73.2). For a candidate pair of propositions, we assume that if both arguments can be matched then the predicates are likely paraphrases ( \u00a73.3). Finally, we rank the predicate pairs according to the number of instances in which they were aligned ( \u00a73.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resource Construction",
"sec_num": "3"
},
{
"text": "We use Twitter as a source of readily available news headlines. The 140 characters limit makes tweets concise, informative and independent of each other, obviating the need to resolve document-level entity coreference. We query the Twitter Search API 4 via Twitter Search. 5 We use Twitter's news filter that retrieves tweets containing links to news websites, and limit the search to English tweets.",
"cite_spans": [
{
"start": 273,
"end": 274,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining News Headlines",
"sec_num": "3.1"
},
{
"text": "We extract propositions from news tweets using PropS , which simplifies dependency trees by conveniently marking a wide range of predicates (e.g, verbal, adjectival, nonlexical) and positioning them as direct heads of their corresponding arguments. Specifically, we run PropS over dependency trees predicted by spaCy 6 and extract predicate types (as in Table 1) composed of verbal predicates, datives, prepositions, and auxiliaries.",
"cite_spans": [],
"ref_spans": [
{
"start": 354,
"end": 362,
"text": "Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Proposition Extraction",
"sec_num": "3.2"
},
{
"text": "Finally, we employ a pre-trained argument reduction model to remove non-restrictive argument modifications . This is essential for our subsequent alignment step, as it is likely that short and concise phrases will tend to match more frequently in comparison to longer, more specific arguments. Figure 1 exemplifies some of the phenomena handled by this process, along with the automatically predicted output.",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 302,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposition Extraction",
"sec_num": "3.2"
},
{
"text": "Following the assumption that different descriptions of the same event are bound to be redundant (as discussed in Section 2.2), we consider two predicates as paraphrases if: (1) They appear on the same day, and (2) Each of their arguments aligns with a unique argument in the other predicate, either by strict matching (short edit distance, abbreviations, etc.) or by a looser matching (par-6 https://spacy.io tial token matching or WordNet synonyms). 7 Table 2 shows examples of predicate paraphrase instances in the resource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Paraphrase Instances",
"sec_num": "3.3"
},
{
"text": "The resource release consists of two files: 1. Instances: the specific contexts in which the predicates are paraphrases (as in Table 2 ). In practice, to comply with Twitter policy, we release predicate paraphrase pair types along with their arguments and tweet IDs, and provide a script for downloading the full texts.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Resource Release",
"sec_num": "3.4"
},
{
"text": "2. Types: predicate paraphrase pair types (as in Table 1 ). The types are ranked in a descending order according to a heuristic accuracy score:",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Resource Release",
"sec_num": "3.4"
},
{
"text": "s = count \u2022 1 + d N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resource Release",
"sec_num": "3.4"
},
{
"text": "where count is the number of instances in which the predicate types were aligned (Section 3.3), d is the number of different days in which they were aligned, and N is the number of days since the resource collection begun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resource Release",
"sec_num": "3.4"
},
{
"text": "Taking into account the number of different days in which predicates were aligned reduces the noise caused by two entities that undergo two different actions on the same day. For example, the following tweets from the day of Chuck Berry's death: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resource Release",
"sec_num": "3.4"
},
{
"text": "We estimate the quality of the resource obtained after ten weeks of collection by annotating a sample of the extracted paraphrases. The annotation task was carried out in Amazon Mechanical Turk. 8 To ensure the quality of workers, we applied a qualification test and required a 99% approval rate for at least 1,000 prior tasks. We assigned each annotation to 3 workers and used the majority vote to determine the correctness of paraphrases.",
"cite_spans": [
{
"start": 195,
"end": 196,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Resource Quality",
"sec_num": "4"
},
{
"text": "We followed a similar approach to instancebased evaluation (Szpektor et al., 2007) , and let workers judge the correctness of a predicate pair (e.g. [a] 0 purchase [a] 1 /[a] 0 acquire [a] 1 ) through 5 different instances (e.g. Intel purchased Mobileye/Intel acquired Mobileye). We considered the type as correct if at least one of its instance-pairs were judged as correct. The idea that lies behind this type of evaluation is that predicate pairs are difficult to judge out-of-context.",
"cite_spans": [
{
"start": 59,
"end": 82,
"text": "(Szpektor et al., 2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Resource Quality",
"sec_num": "4"
},
{
"text": "Differently from Szpektor et al. (2007) , we used the instances in which the paraphrases appeared originally, as those are available in the resource.",
"cite_spans": [
{
"start": 17,
"end": 39,
"text": "Szpektor et al. (2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Resource Quality",
"sec_num": "4"
},
{
"text": "To evaluate the resource accuracy, and following the instance-based evaluation scheme, we only considered paraphrases that occurred in at least 5 instances (which currently constitute 10% of the paraphrase types). We partition the types into four increasingly large bins according to their scores (the smallest bin being the most accurate), similarly to PPDB (Ganitkevitch et al., 2013) , and annotate a sample of 50 types from each bin. Figure 2(a) shows that the frequent types achieve up to 86% accuracy.",
"cite_spans": [
{
"start": 359,
"end": 386,
"text": "(Ganitkevitch et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 438,
"end": 449,
"text": "Figure 2(a)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Quality of Extractions and Ranking",
"sec_num": "4.1"
},
{
"text": "The accuracy expectedly increases with the score, except for the lowest-score bin ((0, 10]) which is more accurate than the next one ((10, 20] ). At the current stage of the resource there is a long tail of paraphrases that appeared few times. While many of them are incorrect, there are also true paraphrases that are infrequent and therefore have a low accuracy score. We expect that some of these paraphrases will occur again in the future and their accuracy score will be strengthened.",
"cite_spans": [
{
"start": 133,
"end": 142,
"text": "((10, 20]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Extractions and Ranking",
"sec_num": "4.1"
},
{
"text": "To estimate future usefulness, Figure 2 (b) plots the resource size (in terms of types and instances) and estimated accuracy through each week in the first 10 weeks of collection.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Size and Accuracy Over Time",
"sec_num": "4.2"
},
{
"text": "The accuracy at a specific time was estimated by annotating a sample of 50 predicate pair types with accuracy score \u2265 20 in the resource obtained at that time, which roughly correspond to the top ranked 1.5% types. Figure 2 (b) demonstrates that these types maintain a level of around 80% in accuracy. The resource growth rate (i.e. the number of new types) is expected to change with time. We predict that the resource will contain around 2 million types in one year. 9",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 223,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Size and Accuracy Over Time",
"sec_num": "4.2"
},
{
"text": "The resources which are most similar to ours are Berant (Berant, 2012) , a resource of predicate entailments, and PPDB (Pavlick et al., 2015) , a resource of paraphrases, both described in Section 2.",
"cite_spans": [
{
"start": 56,
"end": 70,
"text": "(Berant, 2012)",
"ref_id": "BIBREF3"
},
{
"start": 119,
"end": 141,
"text": "(Pavlick et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Existing Resources",
"sec_num": "5"
},
{
"text": "We expect our resource to be more accurate than resources which are based on the distributional approach (Berant, 2012; Lin and Pantel, 2001 ). In addition, in comparison to PPDB, we specialize on binary verbal predicates, and apply an additional phase of proposition extraction, handling various phenomena such as non-consecutive particles and minimality of arguments.",
"cite_spans": [
{
"start": 105,
"end": 119,
"text": "(Berant, 2012;",
"ref_id": "BIBREF3"
},
{
"start": 120,
"end": 140,
"text": "Lin and Pantel, 2001",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Existing Resources",
"sec_num": "5"
},
{
"text": "Berant (2012) evaluated their resource against a dataset of predicate entailments (Zeichner et al., 2012) , using a recall-precision curve to show the performance obtained with a range of thresholds on the resource score. This kind of evaluation is less suitable for our resource; first, predicate entailment is directional, causing paraphrases with the wrong entailment direction to be labeled negative in the dataset. Second, since our resource is still relatively small, it is unlikely to have sufficient coverage of the dataset at that point. We therefore leave this evaluation to future work.",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "(Zeichner et al., 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Existing Resources",
"sec_num": "5"
},
{
"text": "To demonstrate the added value of our resource, we show that even in its current size, it already contains accurate predicate pairs which are absent from the existing resources. Rather than comparing against labeled data, we use types with score \u2265 50 from our resource (1,778 pairs), which were assessed as accurate (Section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Existing Resources",
"sec_num": "5"
},
{
"text": "We checked whether these predicate pairs are covered by Berant and PPDB. To eliminate directionality, we looked for types in both directions, i.e. for a predicate pair (p1, p2) we searched for both (p1, p2) and (p2, p1). Overall, we found that 67% of these types do not exist in Berant, 62% in PPDB, and 49% in neither. is the time they are about to serve in prison. Given that get has a broad distribution of argument instantiations, this paraphrase and similar paraphrases are less likely to exist in resources that rely on the distribution of arguments in the entire corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Existing Resources",
"sec_num": "5"
},
{
"text": "We presented a new unsupervised method to acquire fairly accurate predicate paraphrases from news tweets discussing the same event. We release a growing resource of predicate paraphrases. Qualitative analysis shows that our resource adds value over existing resources. In the future, when the resource is comparable in size to the existing resources, we plan to evaluate its intrinsic accuracy on annotated test sets, as well as its extrinsic benefits in downstream NLP applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This corresponds to instances of event coreference(Bagga and Baldwin, 1999).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Chirp is a paraphrase of tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Zhang and Weld (2013) released a small collection of 10k predicate paraphrase clusters (with average cluster size of 2.4) produced by the system.4 https://apps.twitter.com/ 5 https://github.com/ckoepp/TwitterSearch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In practice, our publicly available code requires that at least one pair of arguments will strictly match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.mturk.com/mturk",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For up-to-date resource statistics, see: https://github. com/vered1986/Chirps/tree/master/resource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported by an Intel ICRI-CI grant, the Israel Science Foundation grant 880/12, and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Crossdocument event coreference: Annotations, experiments, and observations",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1999,
"venue": "Workshop on Coreference and its Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References Amit Bagga and Breck Baldwin. 1999. Cross- document event coreference: Annotations, experi- ments, and observations. In Workshop on Corefer- ence and its Applications.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning to paraphrase: An unsupervised approach using multiple-sequence alignment",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Lillian Lee. 2003. Learn- ing to paraphrase: An unsupervised approach us- ing multiple-sequence alignment. In Proceed- ings of the 2003 Human Language Technol- ogy Conference of the North American Chapter of the Association for Computational Linguistics. http://aclweb.org/anthology/N03-1003.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Extracting paraphrases from a parallel corpus",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "R. Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and R. Kathleen McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics. http://aclweb.org/anthology/P01-1008.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Global Learning of Textual Entailment Graphs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant. 2012. Global Learning of Textual En- tailment Graphs. Ph.D. thesis, Tel Aviv University.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Global learning of focused entailment graphs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1220--1229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2010. Global learning of focused entailment graphs. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Associ- ation for Computational Linguistics, pages 1220- 1229. http://aclweb.org/anthology/P10-1124.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Recognizing textual entailment",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Ido Dagan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sammons",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Dan Roth, and Mark Sammons. 2013. Rec- ognizing textual entailment .",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "PPDB: The paraphrase database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the 2013 Con- ference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies. Association for Computational Linguistics, pages 758-764.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dirt -Discovery of inference rules from text",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. Dirt -Discovery of inference rules from text. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 323-328.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "425--430",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2070"
]
},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitke- vitch, Benjamin Van Durme, and Chris Callison- Burch. 2015. PPDB 2.0: Better paraphrase rank- ing, fine-grained entailment relations, word em- beddings, and style classification. In Proceed- ings of the 53rd Annual Meeting of the Associa- tion for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 2: Short Papers). Associa- tion for Computational Linguistics, pages 425-430. https://doi.org/10.3115/v1/P15-2070.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Aligning predicate argument structures in monolingual comparable texts: A new corpus for a new task",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2012,
"venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "218--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Roth and Anette Frank. 2012. Aligning predi- cate argument structures in monolingual comparable texts: A new corpus for a new task. In *SEM 2012: The First Joint Conference on Lexical and Compu- tational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012). Associa- tion for Computational Linguistics, pages 218-227. http://aclweb.org/anthology/S12-1030.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Preemptive information extraction using unrestricted relation discovery",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Shinyama",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Shinyama and Satoshi Sekine. 2006. Preemp- tive information extraction using unrestricted rela- tion discovery. In Proceedings of the Human Lan- guage Technology Conference of the NAACL, Main Conference. http://aclweb.org/anthology/N06-1039.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatic paraphrase acquisition from news articles",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Shinyama",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Kiyoshi",
"middle": [],
"last": "Sudo",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the second international conference on Human Language Technology Research",
"volume": "",
"issue": "",
"pages": "313--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Shinyama, Satoshi Sekine, and Kiyoshi Sudo. 2002. Automatic paraphrase acquisition from news articles. In Proceedings of the second interna- tional conference on Human Language Technology Research. Morgan Kaufmann Publishers Inc., pages 313-318.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Annotating and predicting non-restrictive noun phrase modifications",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky and Ido Dagan. 2016. Annotating and predicting non-restrictive noun phrase modifica- tions. In Proceedings of the 54rd Annual Meeting of the Association for Computational Linguistics (ACL 2016).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Getting more out of syntax with props",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Ficler",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Stanovsky, Jessica Ficler, Ido Dagan, and Yoav Goldberg. 2016. Getting more out of syntax with props. CoRR abs/1603.01648.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Instance-based evaluation of entailment rule acquisition",
"authors": [
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Eyal",
"middle": [],
"last": "Shnarch",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7--1058",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Idan Szpektor, Eyal Shnarch, and Ido Dagan. 2007. Instance-based evaluation of entailment rule acqui- sition. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. As- sociation for Computational Linguistics, pages 456- 463. http://aclweb.org/anthology/P07-1058.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Extracting lexically divergent paraphrases from twitter. Transactions of the Association for",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "435--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Alan Ritter, Chris Callison-Burch, William B Dolan, and Yangfeng Ji. 2014. Extracting lexi- cally divergent paraphrases from twitter. Transac- tions of the Association for Computational Linguis- tics 2:435-448.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Crowdsourcing inference-rule evaluation",
"authors": [
{
"first": "Naomi",
"middle": [],
"last": "Zeichner",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "156--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naomi Zeichner, Jonathan Berant, and Ido Da- gan. 2012. Crowdsourcing inference-rule evalua- tion. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Lin- guistics (Volume 2: Short Papers). Association for Computational Linguistics, pages 156-160. http://aclweb.org/anthology/P12-2031.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploiting parallel news streams for unsupervised event extraction",
"authors": [
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "117--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Congle Zhang, Stephen Soderland, and Daniel S Weld. 2015. Exploiting parallel news streams for unsuper- vised event extraction. Transactions of the Associa- tion for Computational Linguistics 3:117-129.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Harvesting parallel news streams to generate paraphrases of event relations",
"authors": [
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Daniel S Weld",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1776--1786",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Congle Zhang and Daniel S Weld. 2013. Harvest- ing parallel news streams to generate paraphrases of event relations. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing. Seattle, Washington, USA, pages 1776- 1786.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(a) Estimated accuracy (%) and number of types (\u00d71K) of predicate pairs with at least 5 instances in different score bins. accuracy (%), number of instances(\u00d710K)and types (\u00d710K) in the first 10 weeks.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Resource statistics after ten weeks of collection.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"text": "A sample from the top-ranked predicate paraphrases. their accuracy is limited. Specifically, the first approach may extract antonyms, that also have similar argument distribution (e.g. [a] 0 raise to [a] 1 / [a] 0 fall to [a] 1 ) while the second may conflate multiple senses of the foreign phrase.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF1": {
"text": "Turkey intercepts the plane which took off from Moscow PropS structures and the corresponding propositions extracted by our process. Left: multi-word predicates and multiple extractions per tweet. Right: argument reduction.",
"num": null,
"html": null,
"content": "<table><tr><td/><td/><td/><td/><td>subj</td><td/></tr><tr><td/><td/><td/><td/><td>prep about</td><td/></tr><tr><td>subj</td><td>obj</td><td>subj</td><td>prep from</td><td>prop of</td><td>comp</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">Russia , furious about the plane , threatens to retaliate</td></tr><tr><td colspan=\"2\">(1) [Turkey]0 intercepts [plane]1</td><td colspan=\"2\">(2) [plane]0 took off from [Moscow]1</td><td colspan=\"2\">[Russia]0 threatens to [retaliate]1</td></tr><tr><td colspan=\"4\">Figure 1: Manafort hid payments from Ukraine party with Moscow ties</td><td>[a]0 hide [a]1</td><td>Paul Manafort</td><td>payments</td></tr><tr><td colspan=\"4\">Manafort laundered the payments through Belize</td><td>[a]0 launder [a]1</td><td>Manafort</td><td>payments</td></tr><tr><td colspan=\"4\">Send immigration judges to cities to speed up deportations</td><td colspan=\"3\">to send [a]0 to [a]1 immigration judges cities</td></tr><tr><td colspan=\"7\">Immigration judges headed to 12 cities to speed up deportations [a]0 headed to [a]1 immigration judges 12 cities</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"text": "Examples of predicate paraphrase instances in our resource: each instance contains two tweets, predicate types extracted from them, and the instantiations of arguments.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"text": "A sample of types from our resource that are not found in Berant or in PPDB.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF6": {
"text": "exemplifies some of the predicate pairs that do not exist in both resources. Specifically, our resource contains many non-consecutive predicates (e.g. reveal [a] 0 to [a] 1 / share [a] 0 with [a] 1 ) that by definition do not exist in Berant. Some pairs, such as [a] 0 get [a] 1 / [a] 0 sentence to [a] 1 , are context-specific, occurring when [a] 0 is a person and [a] 1",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}