| { |
| "paper_id": "D08-1021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:30:01.394167Z" |
| }, |
| "title": "Syntactic Constraints on Paraphrases Extracted from Parallel Corpora", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University Baltimore", |
| "location": { |
| "settlement": "Maryland" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We improve the quality of paraphrases extracted from parallel corpora by requiring that phrases and their paraphrases be the same syntactic type. This is achieved by parsing the English side of a parallel corpus and altering the phrase extraction algorithm to extract phrase labels alongside bilingual phrase pairs. In order to retain broad coverage of non-constituent phrases, complex syntactic labels are introduced. A manual evaluation indicates a 19% absolute improvement in paraphrase quality over the baseline method.", |
| "pdf_parse": { |
| "paper_id": "D08-1021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We improve the quality of paraphrases extracted from parallel corpora by requiring that phrases and their paraphrases be the same syntactic type. This is achieved by parsing the English side of a parallel corpus and altering the phrase extraction algorithm to extract phrase labels alongside bilingual phrase pairs. In order to retain broad coverage of non-constituent phrases, complex syntactic labels are introduced. A manual evaluation indicates a 19% absolute improvement in paraphrase quality over the baseline method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Paraphrases are alternative ways of expressing the same information. Being able to identify or generate paraphrases automatically is useful in a wide range of natural language applications. Recent work has shown how paraphrases can improve question answering through query expansion (Riezler et al., 2007) , automatic evaluation of translation and summarization by modeling alternative lexicalization (Kauchak and Barzilay, 2006; Zhou et al., 2006; Owczarzak et al., 2006) , and machine translation both by dealing with out of vocabulary words and phrases (Callison-Burch et al., 2006) and by expanding the set of reference translations for minimum error rate training (Madnani et al., 2007) . While all applications require the preservation of meaning when a phrase is replaced by its paraphrase, some additionally require the resulting sentence to be grammatical.", |
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 305, |
| "text": "(Riezler et al., 2007)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 401, |
| "end": 429, |
| "text": "(Kauchak and Barzilay, 2006;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 430, |
| "end": 448, |
| "text": "Zhou et al., 2006;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 449, |
| "end": 472, |
| "text": "Owczarzak et al., 2006)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 556, |
| "end": 585, |
| "text": "(Callison-Burch et al., 2006)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 669, |
| "end": 691, |
| "text": "(Madnani et al., 2007)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we examine the effectiveness of placing syntactic constraints on a commonly used paraphrasing technique that extracts paraphrases from parallel corpora (Bannard and Callison-Burch, 2005) . The paraphrasing technique employs various aspects of phrase-based statistical machine translation including phrase extraction heuristics to obtain bilingual phrase pairs from word alignments. English phrases are considered to be potential paraphrases of each other if they share a common foreign language phrase among their translations. Multiple paraphrases are frequently extracted for each phrase and can be ranked using a paraphrase probability based on phrase translation probabilities.", |
| "cite_spans": [ |
| { |
| "start": 166, |
| "end": 200, |
| "text": "(Bannard and Callison-Burch, 2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We find that the quality of the paraphrases that are generated in this fashion improves significantly when they are required to be the same syntactic type as the phrase that they are paraphrasing. This constraint:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Eliminates a trivial but pervasive error that arises from the interaction of unaligned words with phrase extraction heuristics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Refines the results for phrases that can take on different syntactic labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Applies both to phrases which are linguistically coherent and to arbitrary sequences of words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Results in much more grammatical output when phrases are replaced with their paraphrases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "A thorough manual evaluation of the refined paraphrasing technique finds a 19% absolute improvement in the number of paraphrases that are judged to be correct. This paper is structured as follows: Section 2 describes related work in syntactic constraints on phrase-based SMT and work utilizing syntax in paraphrase discovery. Section 3 details the problems with extracting paraphrases from parallel corpora and our improvements to the technique. Section 4 describes our experimental design and evaluation methodology. Section 5 gives the results of our experiments, and Section 6 discusses their implications.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A number of research efforts have focused on employing syntactic constraints in statistical machine translation. Wu (1997) introduced the inversion transduction grammar formalism which treats translation as a process of parallel parsing of the source and target language via a synchronized grammar. The synchronized grammar places constraints on which words can be aligned across bilingual sentence pairs. To achieve computational efficiency, the original proposal used only a single non-terminal label rather than a linguistic grammar.", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 122, |
| "text": "Wu (1997)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Subsequent work used more articulated parses to improve alignment quality by applying cohesion constraints (Fox, 2002; Lin and Cherry, 2002) . If two English phrases are in disjoint subtrees in the parse, then the phrasal cohesion constraint prevents them from being aligned to overlapping sequences in the foreign sentence. Other recent work has incorporated constituent and dependency subtrees into the translation rules used by phrase-based systems (Galley et al., 2004; Quirk et al., 2005) . Phrase-based rules have also been replaced with synchronous context free grammars (Chiang, 2005) and with tree fragments (Huang and Knight, 2006) .", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 118, |
| "text": "(Fox, 2002;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 119, |
| "end": 140, |
| "text": "Lin and Cherry, 2002)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 452, |
| "end": 473, |
| "text": "(Galley et al., 2004;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 474, |
| "end": 493, |
| "text": "Quirk et al., 2005)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 578, |
| "end": 592, |
| "text": "(Chiang, 2005)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 617, |
| "end": 641, |
| "text": "(Huang and Knight, 2006)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A number of techniques for generating paraphrases have employed syntactic information, either in the process of extracting paraphrases from monolingual texts or in the extracted patterns themselves. Lin and Pantel (2001) derived paraphrases based on the distributional similarity of paths in dependency trees. Barzilay and McKeown (2001) incorporated part-of-speech information and other morphosyntactic clues into their co-training algorithm.", |
| "cite_spans": [ |
| { |
| "start": 199, |
| "end": 220, |
| "text": "Lin and Pantel (2001)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 310, |
| "end": 337, |
| "text": "Barzilay and McKeown (2001)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
"text": "They extracted paraphrase patterns that incorporate this information. Ibrahim et al. (2003) generated structural paraphrases capable of capturing long-distance dependencies. Pang et al. (2003) employed a syntax-based algorithm to align equivalent English sentences by merging corresponding nodes in parse trees and compressing them down into a word lattice.",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "Ibrahim et al. (2003)",
"ref_id": "BIBREF10"
},
{
"start": 174,
"end": 192,
| "text": "Pang et al. (2003)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Perhaps the most closely related work is a recent extension to Bannard and Callison-Burch's paraphrasing method. Zhao et al. (2008b) extended the method so that it is capable of generating richer paraphrase patterns that include part-of-speech slots, rather than simple lexical and phrasal paraphrases. For example, they extracted patterns such as consider NN \u2192 take NN into consideration. To accomplish this, Zhao et al. used dependency parses on the English side of the parallel corpus. Their work differs from the work presented in this paper because their syntactic constraints applied to slots within paraphrase patterns, and our constraints apply to the paraphrases themselves.",
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 132, |
| "text": "Zhao et al. (2008b)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
"text": "3 Paraphrasing with parallel corpora Bannard and Callison-Burch (2005) extract paraphrases from bilingual parallel corpora. They give a probabilistic formulation of paraphrasing which naturally falls out of the fact that they use techniques from phrase-based statistical machine translation:",
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 70, |
| "text": "Bannard and Callison-Burch (2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "e 2 = arg max e 2 :e 2 =e 1 p(e 2 |e 1 )", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p(e 2 |e 1 ) = f p(f |e 1 )p(e 2 |f, e 1 )", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2248 f p(f |e 1 )p(e 2 |f )", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Phrase translation probabilities p(f |e 1 ) and p(e 2 |f ) are commonly calculated using maximum likelihood estimation (Koehn et al., 2003) :", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 139, |
| "text": "(Koehn et al., 2003)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p(f |e) = count(e, f ) f count(e, f )", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
"text": "where the counts are collected by enumerating all bilingual phrase pairs that are consistent with the Figure 1 : The interaction of the phrase extraction heuristic with unaligned English words means that the Spanish phrase la igualdad aligns with equal, create equal, and to create equal.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 110,
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "word alignments for sentence pairs in a bilingual parallel corpus. Various phrase extraction heuristics are possible. Och and Ney (2004) defined consistent bilingual phrase pairs as follows:", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 136, |
| "text": "Och and Ney (2004)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "BP (f J 1 , e I 1 , A) = {(f j+m j , e i+n i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": ":", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
"text": "\u2200(i', j') \u2208 A : j \u2264 j' \u2264 j + m \u2194 i \u2264 i' \u2264 i + n \u2227 \u2203(i', j') \u2208 A : j \u2264 j' \u2264 j + m \u2227 i \u2264 i' \u2264 i + n}",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where f J 1 is a foreign sentence, e I 1 is an English sentence and A is a set of word alignment points.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The heuristic allows unaligned words to be included at the boundaries of the source or target language phrases. For example, when enumerating the consistent phrase pairs for the sentence pair given in Figure 1 , la igualdad would align not only to equal, but also to create equal, and to create equal. In SMT these alternative translations are ranked by the translation probabilities and other feature functions during decoding.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 201, |
| "end": 209, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
"text": "The interaction between the phrase extraction heuristic and unaligned words results in an undesirable effect for paraphrasing. By Bannard and Callison-Burch's definition, equal, create equal, and to create equal would be considered paraphrases because they are aligned to the same foreign phrase. Tables 1 and 2 show how sub- and super-phrases can creep into the paraphrases: equal can be paraphrased as equal rights and create equal can be paraphrased as equal. Obviously when e 2 is substituted for e 1 the resulting sentence will generally be ungrammatical. The first case could result in equal equal rights, and the second would drop the verb.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 297, |
| "end": 311, |
| "text": "Tables 1 and 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
"text": "This problem is pervasive. To test its extent we attempted to generate paraphrases for 900,000 phrases using Bannard and Callison-Burch's method trained on the Europarl corpora (as described in Section 4). It generated paraphrases for 400,000 of the phrases in the list. 1 We observed that 34% of the paraphrases (excluding the phrase itself) were super- or sub-strings of the original phrase. The most probable paraphrase was a super- or sub-string of the phrase 73% of the time. There are a number of strategies that might be adopted to alleviate this problem:",
"cite_spans": [
{
"start": 271,
"end": 272,
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Bannard and Callison-Burch (2005) rank their paraphrases with a language model when the paraphrases are substituted into a sentence.", |
| "cite_spans": [ |
| { |
| "start": 14, |
| "end": 35, |
| "text": "Callison-Burch (2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Bannard and Callison-Burch (2005) sum over multiple parallel corpora C to reduce the problems associated with systematic errors in the word alignments in one language pair:", |
| "cite_spans": [ |
| { |
| "start": 2, |
| "end": 35, |
| "text": "Bannard and Callison-Burch (2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "e 2 = arg max e 2 c\u2208C f p(f |e 1 )p(e 2 |f )", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 We could change the phrase extraction heuristic's treatment of unaligned words, or we could attempt to ensure that we have fewer unaligned items in our word alignments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
"text": "\u2022 The paraphrase criterion could be changed from being e 2 = e 1 to specifying that e 2 is not a sub- or super-string of e 1.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this paper we adopt a different strategy. The essence of our strategy is to constrain paraphrases to be the same syntactic type as the phrases that they are paraphrasing. Syntactic constraints can apply in two places: during phrase extraction and when substituting paraphrases into sentences. These are described in sections 3.1 and 3.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "When we apply syntactic constraints to the phrase extraction heuristic, we change how bilingual phrase pairs are enumerated and how the component probabilities of the paraphrase probability are calculated. We use the syntactic type s of e 1 in a refined version of the paraphrase probability:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "e 2 =", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "arg max e 2 :e 2 =e 1 \u2227s(e 2 )=s(e 1 ) p(e 2 |e 1 , s(e 1 ))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where p(e 2 |e 1 , s(e 1 )) can be approximated as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "c\u2208C f p(f |e 1 , s(e 1 ))p(e 2 |f, s(e 1 )) |C| (7)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We define a new phrase extraction algorithm that operates on an English parse tree P along with foreign sentence f J 1 , English sentence e I 1 , and word alignment A. We dub this SBP for syntactic bilingual phrases: The SBP phrase extraction algorithm produces tuples containing a foreign phrase, an English phrase and a syntactic label (f, e, s). After enumerating these for all phrase pairs in a parallel corpus, we can calculate p(f |e 1 , s(e 1 )) and p(e 2 |f, s(e 1 )) as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "SBP (f J 1 , e I 1 ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "\u2200(i', j') \u2208 A : j \u2264 j' \u2264 j + m \u2194 i \u2264 i' \u2264 i + n \u2227 \u2203(i', j') \u2208 A : j \u2264 j' \u2264 j + m \u2227 i \u2264 i' \u2264 i + n \u2227 \u2203",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "p(f |e 1 , s(e 1 )) = count(f, e 1 , s(e 1 ))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "f count(f, e 1 , s(e 1 )) p(e 2 |f, s(e 1 )) = count(f, e 2 , s(e 1 ))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "e 2 count(f, e 2 , s(e 1 )) By redefining the probabilities in this way we partition the space of possible paraphrases by their syntactic categories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "In order to enumerate all phrase pairs with their syntactic labels we need to parse the English side of the parallel corpus (but not the foreign side). This limits the potential applicability of our refined paraphrasing method to languages which have parsers. Table 3 gives an example of the refined paraphrases for equal when it occurs as an adjective or adjectival phrase. Note that most of the paraphrases that were possible under the baseline model (Table 1 ) are now excluded. We no longer get the noun equality, the verb equals, the adverb equally, the determiner the, or the NP equal rights. The paraphrases seem to be higher quality, especially if one considers their fidelity when they replace the original phrase in the context of some sentence.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 260, |
| "end": 267, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 453, |
| "end": 462, |
| "text": "(Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We tested the rate of paraphrases that were suband super-strings when we constrain paraphrases based on non-terminal nodes in parse trees. The percent of the best paraphrases being substrings dropped from 73% to 24%, and the overall percent of paraphrases subsuming or being subsumed by the original phrase dropped from 34% to 12%. However, the number of phrases for which we were able Figure 2 : In addition to extracting phrases that are dominated by a node in the parse tree, we also generate labels for non-syntactic constituents. Three labels are possible for create equal.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 386, |
| "end": 394, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "to generate paraphrases dropped from 400,000 to 90,000, since we limited ourselves to phrases that were valid syntactic constituents. The number of unique paraphrases dropped from several million to 800,000.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "The fact that we are able to produce paraphrases for a much smaller set of phrases is a downside to using syntactic constraints as we have initially proposed. It means that we would not be able to generate paraphrases for phrases such as create equal. Many NLP tasks, such as SMT, which could benefit from paraphrases require broad coverage and may need paraphrases for phrases which are not syntactic constituents.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints on phrase extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To generate paraphrases for a wider set of phrases, we change our phrase extraction heuristic again so that it produces phrase pairs for arbitrary spans in the sentence, including spans that aren't syntactic constituents. We assign every span in a sentence a syntactic label using CCG-style notation (Steedman, 1999) , which gives a syntactic role with elements missing on the left and/or right hand sides.", |
| "cite_spans": [ |
| { |
| "start": 300, |
| "end": 316, |
| "text": "(Steedman, 1999)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complex syntactic labels", |
| "sec_num": null |
| }, |
| { |
"text": "SBP (f J 1 , e I 1 , A, P ) = {(f j+m j , e i+n i , s) : \u2200(i', j') \u2208 A : j \u2264 j' \u2264 j + m \u2194 i \u2264 i' \u2264 i + n \u2227 \u2203(i', j') \u2208 A : j \u2264 j' \u2264 j + m \u2227 i \u2264 i' \u2264 i + n \u2227 s \u2208 CCG-labels(e i+n i , P )}",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complex syntactic labels", |
| "sec_num": null |
| }, |
| { |
"text": "The function CCG-labels describes the set of CCG-labels for the phrase spanning positions i to i + n in the parse tree P. We can use these complex labels instead of atomic non-terminal symbols to handle non-constituent phrases. For example, Table 4 shows the paraphrases and syntactic labels that are generated for the non-constituent phrase create equal. The paraphrases are significantly better than the paraphrases generated for the phrase by the baseline method (refer back to Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 248,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 481,
"end": 488,
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Complex syntactic labels", |
| "sec_num": null |
| }, |
| { |
| "text": "The labels shown in the figure are a fraction of those that can be derived for the phrase in the parallel corpus. Each of these corresponds to a different syntactic context, and each has its own set of associated paraphrases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complex syntactic labels", |
| "sec_num": null |
| }, |
| { |
"text": "We increase the number of phrases that are paraphrasable from the 90,000 in our initial definition of SBP to 250,000 when we use complex CCG labels. The number of unique paraphrases increases from 800,000 to 3.5 million, which is nearly as many paraphrases as were produced by the baseline method for the sample.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complex syntactic labels", |
| "sec_num": null |
| }, |
| { |
"text": "In addition to applying syntactic constraints to our phrase extraction algorithm, we can also apply them when we substitute a paraphrase into a sentence. To do so, we limit the paraphrases to be the same syntactic type as the phrase that they are replacing, based on the syntactic labels that are derived from the parse tree of a test sentence. Since each phrase normally has a set of different CCG labels (instead of a single non-terminal symbol) we need a way of choosing which label to use when applying the constraint. There are several different possibilities for choosing among labels. We could simultaneously choose the best paraphrase and the best label for the phrase in the parse tree of the test sentence: e 2 = arg max e 2 :e 2 =e 1 max s\u2208CCG-labels(e 1 ,P ) p(e 2 |e 1 , s) (8) Alternately, we could average over all of the labels that are generated for the phrase in the parse tree: e 2 = arg max e 2 :e 2 =e 1 \u2211 s\u2208CCG-labels(e 1 ,P ) p(e 2 |e 1 , s) (9)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints when substituting paraphrases into a test sentence", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The potential drawback of using Equations 8 and 9 is that the CCG labels for a particular sentence significantly reduces the paraphrases that can be used. For instance, VP/(NP/NNS) is the only label for the paraphrases in Table 4 that is compatible with the parse tree given in Figure 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 222, |
| "end": 229, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 278, |
| "end": 286, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Syntactic constraints when substituting paraphrases into a test sentence", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "Because the CCG labels for a given sentence are so specific, many times there are no matches. Therefore we also investigated a looser constraint. We choose the highest probability paraphrase with any label (i.e. the set of labels extracted from all parse trees in our parallel corpus): e 2 = arg max e 2 :e 2 =e 1 max s\u2208\u222a T \u2208C CCG-labels(e 1 ,T ) p(e 2 |e 1 , s) (10) Equation 10 only applies syntactic constraints during phrase extraction and ignores them during substitution.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints when substituting paraphrases into a test sentence", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In our experiments, we evaluate the quality of the paraphrases that are generated using Equations 8, 9 and 10. We compare their quality against the Bannard and Callison-Burch (2005) baseline.", |
| "cite_spans": [ |
| { |
| "start": 160, |
| "end": 181, |
| "text": "Callison-Burch (2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic constraints when substituting paraphrases into a test sentence", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We conducted a manual evaluation to evaluate paraphrase quality. We evaluated whether paraphrases retained the meaning of their original phrases and whether they remained grammatical when they replaced the original phrase in a sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental design", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our paraphrase model was trained using the Europarl corpus (Koehn, 2005) . We used ten parallel corpora between English and (each of) Danish, Dutch, Finnish, French, German, Greek, Italian, Portuguese, Spanish, and Swedish, with approximately 30 million words per language for a total of 315 million English words. Automatic word alignments were created for these using Giza++ (Och and Ney, 2003) . The English side of each parallel corpus was parsed using the Bikel parser (Bikel, 2002) . A total of 1.6 million unique sentences were parsed. A trigram language model was trained on these English sentences using the SRI language modeling toolkit (Stolcke, 2002 ).", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 72, |
| "text": "(Koehn, 2005)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 377, |
| "end": 396, |
| "text": "(Och and Ney, 2003)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 474, |
| "end": 487, |
| "text": "(Bikel, 2002)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 647, |
| "end": 661, |
| "text": "(Stolcke, 2002", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training materials", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The paraphrase model and language model for the Bannard and Callison-Burch (2005) baseline were trained on the same data to ensure a fair comparison.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 81, |
| "text": "Bannard and Callison-Burch (2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training materials", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The test set was the English portion of test sets used in the shared translation task of the ACL-2007 Workshop on Statistical Machine Translation (Callison-Burch et al., 2007) . The test sentences were also parsed with the Bikel parser.", |
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 175, |
| "text": "(Callison-Burch et al., 2007)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test phrases", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The phrases to be evaluated were selected such that there was an even balance of phrase lengths (from one word long up to five words long), with half of the phrases being valid syntactic constituents and half being arbitrary sequences of words. 410 phrases were selected at random for evaluation. 30 items were excluded from our results subsequent to evaluation on the grounds that they consisted solely of punctuation and stop words like determiners, prepositions and pronouns. This left a total of 380 unique phrases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Test phrases", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We produced paraphrases under the following eight conditions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental conditions", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "1. Baseline -The paraphrase probability defined by Bannard and Callison-Burch (2005) . Calculated over multiple parallel corpora as given in Equation 5. Note that under this condition the best paraphrase is the same for each occurrence of the phrase irrespective of which sentence it occurs in.", |
| "cite_spans": [ |
| { |
| "start": 51, |
| "end": 84, |
| "text": "Bannard and Callison-Burch (2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental conditions", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "2. Baseline + LM -The paraphrase probability (as above) combined with the language model probability calculated for the sentence with the phrase replaced with the paraphrase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental conditions", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "3. Extraction Constraints -This condition selected the best paraphrase according to Equation 10. It chooses the single best paraphrase over all labels. Conditions 3 and 5 only apply the syntactic constraints at the phrase extraction stage, and do not require that the paraphrase have the same syntactic label as the phrase in the sentence that it is being subtituted into.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental conditions", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "4. Extraction Constraints + LM -As above, but the paraphrases are also ranked with a language model probability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental conditions", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "5. Substitution Constraints -This condition corresponds to Equation 8, which selects the highest probability paraphrase which matches at least one of the syntactic labels of the phrase in the test sentence. Conditions 5-8 apply the syntactic constraints both and the phrase extraction and at the substitution stages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental conditions", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "6. Syntactic Constraints + LM -As above, but including a language model probability as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental conditions", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "7. Averaged Substitution Constraints -This condition corresponds to Equation 9, which averages over all of the syntactic labels for the phrase in the sentence, instead of choosing the single one which maximizes the probability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental conditions", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "MEANING 5 All of the meaning of the original phrase is retained, and nothing is added 4 The meaning of the original phrase is retained, although some additional information may be added but does not transform the meaning 3 The meaning of the original phrase is retained, although some information may be deleted without too great a loss in the meaning 2 Substantial amount of the meaning is different 1 The paraphrase doesn't mean anything close to the original phrase GRAMMAR 5 The sentence with the paraphrase inserted is perfectly grammatical 4 The sentence is grammatical, but might sound slightly awkward 3 The sentence has an agreement error (such as between its subject and verb, or between a plural noun and singular determiner) 2 The sentence has multiple errors or omits words that would be required to make it grammatical 1 The sentence is totally ungrammatical ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental conditions", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "As above, but including a language model probability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Averaged Substitution Constraints + LM -", |
| "sec_num": "8." |
| }, |
| { |
| "text": "We evaluated the paraphrase quality through a substitution test. We retrieved a number of sentences which contained each test phrase and substituted the phrase with automatically-generated paraphrases. Annotators judged whether the paraphrases had the same meaning as the original and whether the resulting sentences were grammatical. They assigned two values to each sentence using the 5-point scales given in Table 5 . We considered an item to have the same meaning if it was assigned a score of 3 or greater, and to be grammatical if it was assigned a score of 4 or 5. We evaluated several instances of a phrase when it occurred multiple times in the test corpus, since paraphrase quality can vary based on context (Szpektor et al., 2007) . There were an average of 3.1 instances for each phrase, with a maximum of 6. There were a total of 1,195 sentences that para-phrases were substituted into, with a total of 8,422 judgements collected. Note that 7 different paraphrases were judged on average for every instance. This is because annotators judged paraphrases for eight conditions, and because we collected judgments for the 5-best paraphrases for many of the conditions.", |
| "cite_spans": [ |
| { |
| "start": 718, |
| "end": 741, |
| "text": "(Szpektor et al., 2007)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 411, |
| "end": 418, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Manual evaluation", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We measured inter-annotator agreement with the Kappa statistic (Carletta, 1996) using the 1,391 items that two annotators scored in common. The two annotators assigned the same absolute score 47% of the time. If we consider chance agreement to be 20% for 5-point scales, then K = 0.33, which is commonly interpreted as \"fair\" (Landis and Koch, 1977 ). If we instead measure agreement in terms of how often the annotators both judged an item to be above or below the thresholds that we set, then their rate of agreement was 80%. In this case chance agreement would be 50%, so K = 0.61, which is \"substantial\".", |
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 79, |
| "text": "(Carletta, 1996)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 326, |
| "end": 348, |
| "text": "(Landis and Koch, 1977", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Manual evaluation", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In order to allow other researchers to recreate our results or extend our work, we have prepared the following materials for download 2 :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and code", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "\u2022 The complete set of paraphrases generated for the test set. This includes the 3.7 million paraphrases generated by the baseline method and the 3.5 million paraphrases generated with syntactic constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and code", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "\u2022 The code that we used to produce these paraphrases and the complete data sets (including all 10 word-aligned parallel corpora along with their English parses), so that researchers can extract paraphrases for new sets of phrases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and code", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "\u2022 The manual judgments about paraphrase quality. These may be useful as development material for setting the weights of a log-linear formulation of paraphrasing, as suggested in Zhao et al. (2008a) . Table 6 summarizes the results of the manual evaluation. We can observe a strong trend in the syntactically constrained approaches performing better Table 6 : The results of the manual evaluation for each of the eight conditions. Correct meaning is the percent of time that a condition was assigned a 3, 4, or 5, and correct grammar is the percent of time that it was given a 4 or 5, using the scales from Table 5. than the baseline. They retain the correct meaning more often (ranging from 4% to up to 15%). They are judged to be grammatical far more frequently (up to 26% more often without the language model, and 24% with the language model) . They perform nearly 20% better when both meaning and grammaticality are used as criteria. 3 Another trend that can be observed is that incorporating a language model probability tends to result in more grammatical output (a 7-9% increase), but meaning suffers as a result in some cases. When the LM is applied there is a drop of 12% in correct meaning for the baseline, but only a slight dip of 1-2% for the syntactically-constrained phrases.", |
| "cite_spans": [ |
| { |
| "start": 178, |
| "end": 197, |
| "text": "Zhao et al. (2008a)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 938, |
| "end": 939, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 200, |
| "end": 207, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 349, |
| "end": 356, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 606, |
| "end": 614, |
| "text": "Table 5.", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data and code", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "Note that for the conditions where the paraphrases were required to have the same syntactic type as the phrase in the parse tree, there was a reduction in the number of paraphrases that could be applied. For the first two conditions, paraphrases were posited for 1194 sentences, conditions 3 and 4 could be applied to 1142 of those sentences, but conditions 5-8 could only be applied to 876 sentences. The substitution constraints reduce coverage to 73% of the test sentences. Given that the extraction constraints have better coverage and nearly identical performance on the meaning criterion, they might be more suitable in some circumstances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In this paper we have presented a novel refinement to paraphrasing with bilingual parallel corpora. We illustrated that a significantly higher performance can be achieved by constraining paraphrases to have the same syntactic type as the original phrase. A thorough manual evaluation found an absolute improvement in quality of 19% using strict criteria about paraphrase accuracy when comparing against a strong baseline. The syntactically enhanced paraphrases are judged to be grammatically correct over two thirds of the time, as opposed to the baseline method which was grammatically correct under half of the time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "This paper proposed constraints on paraphrases at two stages: when deriving them from parsed parallel corpora and when substituting them into parsed test sentences. These constraints produce paraphrases that are better than the baseline and which are less commonly affected by problems due to unaligned words. Furthermore, by introducing complex syntactic labels instead of solely relying on non-terminal symbols in the parse trees, we are able to keep the broad coverage of the baseline method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Syntactic constraints significantly improve the quality of this paraphrasing method, and their use opens the question about whether analogous constraints can be usefully applied to paraphrases generated from purely monolingual corpora. Our improvements to the extraction of paraphrases from parallel corpora suggests that it may be usefully applied to other NLP applications, such as generation, which require grammatical output.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The remaining 500,000 phrases could not be paraphrased either because e2 = e1 or because they were not consistently aligned to any foreign phrases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Our results show a significantly lower score for the baseline than reported inBannard and Callison-Burch (2005). This is potentially due to the facts that in this work we evaluated on out-of-domain news commentary data, and we randomly selected phrases. In the pervious work the test phrases were drawn from WordNet, and they were evaluated solely on in-domain European parliament data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Thanks go to Sally Blatz, Emily Hinchcliff and Michelle Bland for conducting the manual evaluation and to Michelle Bland and Omar Zaidan for proofreading and commenting on a draft of this paper.This work was supported by the National Science Foundation under Grant No. 0713448. The views and findings are the author's alone.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Paraphrasing with bilingual parallel corpora", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Bannard", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with bilingual parallel corpora. In Proceed- ings of ACL.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Extracting paraphrases from a parallel corpus", |
| "authors": [ |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathleen", |
| "middle": [], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Regina Barzilay and Kathleen McKeown. 2001. Extract- ing paraphrases from a parallel corpus. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Design of a multi-lingual, parallelprocessing statistical parsing engine", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Bikel", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Bikel. 2002. Design of a multi-lingual, parallel- processing statistical parsing engine. In Proceedings of HLT.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Improved statistical machine translation using paraphrases", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison", |
| "suffix": "" |
| }, |
| { |
| "first": "-", |
| "middle": [], |
| "last": "Burch", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of HLT/NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Callison-Burch, Philipp Koehn, and Miles Os- borne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of HLT/NAACL.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Meta-) evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison", |
| "suffix": "" |
| }, |
| { |
| "first": "-", |
| "middle": [], |
| "last": "Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Cameron", |
| "middle": [], |
| "last": "Fordyce", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Christof", |
| "middle": [], |
| "last": "Monz", |
| "suffix": "" |
| }, |
| { |
| "first": "Josh", |
| "middle": [], |
| "last": "Schroeder", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In Proceedings of the Second Workshop on Statistical Machine Transla- tion.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Assessing agreement on classification tasks: The kappa statistic", |
| "authors": [ |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Carletta", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational Linguistics", |
| "volume": "22", |
| "issue": "2", |
| "pages": "249--254", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean Carletta. 1996. Assessing agreement on classifi- cation tasks: The kappa statistic. Computational Lin- guistics, 22(2):249-254.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A hierarchical phrase-based model for statistical machine translation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Phrasal cohesion and statistical machine translation", |
| "authors": [ |
| { |
| "first": "Heidi", |
| "middle": [ |
| "J" |
| ], |
| "last": "Fox", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heidi J. Fox. 2002. Phrasal cohesion and statistical ma- chine translation. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "What's in a translation rule", |
| "authors": [ |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Hopkins", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of HLT/NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Pro- ceedings of HLT/NAACL.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Relabeling syntax trees to improve syntax-based machine translation quality", |
| "authors": [ |
| { |
| "first": "Bryant", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of HLT/NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bryant Huang and Kevin Knight. 2006. Relabeling syn- tax trees to improve syntax-based machine translation quality. In Proceedings of HLT/NAACL.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Extracting structural paraphrases from aligned monolingual corpora", |
| "authors": [ |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "Ibrahim", |
| "suffix": "" |
| }, |
| { |
| "first": "Boris", |
| "middle": [], |
| "last": "Katz", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the Second International Workshop on Paraphrasing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ali Ibrahim, Boris Katz, and Jimmy Lin. 2003. Extract- ing structural paraphrases from aligned monolingual corpora. In Proceedings of the Second International Workshop on Paraphrasing (ACL 2003).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Paraphrasing for automatic evaluation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Kauchak", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Kauchak and Regina Barzilay. 2006. Para- phrasing for automatic evaluation. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Statistical phrase-based translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [ |
| "Josef" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of HLT/NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceed- ings of HLT/NAACL.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A parallel corpus for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of MT-Summit", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn. 2005. A parallel corpus for statistical machine translation. In Proceedings of MT-Summit, Phuket, Thailand.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The measurement of observer agreement for categorical data", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Landis", |
| "suffix": "" |
| }, |
| { |
| "first": "Gary", |
| "middle": [ |
| "G" |
| ], |
| "last": "Koch", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Biometrics", |
| "volume": "33", |
| "issue": "", |
| "pages": "159--174", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Landis and Gary G. Koch. 1977. The mea- surement of observer agreement for categorical data. Biometrics, 33:159-174.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Word alignment with cohesion constraint", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of HLT/NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekang Lin and Colin Cherry. 2002. Word align- ment with cohesion constraint. In Proceedings of HLT/NAACL.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Discovery of inference rules from text", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Natural Language Engineering", |
| "volume": "7", |
| "issue": "3", |
| "pages": "343--360", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekang Lin and Patrick Pantel. 2001. Discovery of infer- ence rules from text. Natural Language Engineering, 7(3):343-360.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Using paraphrases for parameter tuning in statistical machine translation", |
| "authors": [ |
| { |
| "first": "Nitin", |
| "middle": [], |
| "last": "Madnani", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Necip Fazil Ayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Bonnie", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Dorr", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the ACL Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nitin Madnani, Necip Fazil Ayan, Philip Resnik, and Bonnie Dorr. 2007. Using paraphrases for parame- ter tuning in statistical machine translation. In Pro- ceedings of the ACL Workshop on Statistical Machine Translation.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A systematic comparison of various statistical alignment models", |
| "authors": [ |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Franz", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Computational Linguistics", |
| "volume": "29", |
| "issue": "1", |
| "pages": "19--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och and Hermann Ney. 2003. A system- atic comparison of various statistical alignment mod- els. Computational Linguistics, 29(1):19-51.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "The alignment template approach to statistical machine translation", |
| "authors": [ |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Franz", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Computational Linguistics", |
| "volume": "30", |
| "issue": "4", |
| "pages": "417--449", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och and Hermann Ney. 2004. The align- ment template approach to statistical machine transla- tion. Computational Linguistics, 30(4):417-449.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Contextual bitext-derived paraphrases in automatic MT evaluation", |
| "authors": [ |
| { |
| "first": "Karolina", |
| "middle": [], |
| "last": "Owczarzak", |
| "suffix": "" |
| }, |
| { |
| "first": "Declan", |
| "middle": [], |
| "last": "Groves", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Van Genabith", |
| "suffix": "" |
| }, |
| { |
| "first": "Andy", |
| "middle": [], |
| "last": "Way", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the SMT Workshop at HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karolina Owczarzak, Declan Groves, Josef Van Gen- abith, and Andy Way. 2006. Contextual bitext-derived paraphrases in automatic MT evaluation. In Proceed- ings of the SMT Workshop at HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Syntax-based alignment of multiple translations: Extracting paraphrases and generating new sentences", |
| "authors": [ |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of HLT/NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bo Pang, Kevin Knight, and Daniel Marcu. 2003. Syntax-based alignment of multiple translations: Ex- tracting paraphrases and generating new sentences. In Proceedings of HLT/NAACL.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Dependency treelet translation: Syntactically informed phrasal smt", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Quirk", |
| "suffix": "" |
| }, |
| { |
| "first": "Arul", |
| "middle": [], |
| "last": "Menezes", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Quirk, Arul Menezes, and Colin Cherry. 2005. De- pendency treelet translation: Syntactically informed phrasal smt. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Statistical machine translation for query expansion in answer retrieval", |
| "authors": [ |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Vasserman", |
| "suffix": "" |
| }, |
| { |
| "first": "Ioannis", |
| "middle": [], |
| "last": "Tsochantaridis", |
| "suffix": "" |
| }, |
| { |
| "first": "Vibhu", |
| "middle": [], |
| "last": "Mittal", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefan Riezler, Alexander Vasserman, Ioannis Tsochan- taridis, Vibhu Mittal, and Yi Liu. 2007. Statistical machine translation for query expansion in answer re- trieval. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Alternative quantier scope in ccg", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Steedman. 1999. Alternative quantier scope in ccg. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "SRILM -an extensible language modeling toolkit", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the International Conference on Spoken Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andreas Stolcke. 2002. SRILM -an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, Denver, Colorado, September.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Instance-based evaluation of entailment rule acquisition", |
| "authors": [ |
| { |
| "first": "Idan", |
| "middle": [], |
| "last": "Szpektor", |
| "suffix": "" |
| }, |
| { |
| "first": "Eyal", |
| "middle": [], |
| "last": "Shnarch", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Idan Szpektor, Eyal Shnarch, and Ido Dagan. 2007. Instance-based evaluation of entailment rule acquisi- tion. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3).", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Combining multiple resources to improve SMT-based paraphrasing model", |
| "authors": [ |
| { |
| "first": "Shiqi", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Cheng", |
| "middle": [], |
| "last": "Niu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sheng", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL/HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shiqi Zhao, Cheng Niu, Ming Zhou, Ting Liu, and Sheng Li. 2008a. Combining multiple resources to improve SMT-based paraphrasing model. In Proceedings of ACL/HLT.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Pivot approach for extracting paraphrase patterns from bilingual corpora", |
| "authors": [ |
| { |
| "first": "Shiqi", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Haifeng", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sheng", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL/HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shiqi Zhao, Haifeng Wang, Ting Liu, and Sheng Li. 2008b. Pivot approach for extracting paraphrase patterns from bilingual corpora. In Proceedings of ACL/HLT.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Reevaluating machine translation results with paraphrase support", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Chin-Yew", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Zhou, Chin-Yew Lin, and Eduard Hovy. 2006. Re- evaluating machine translation results with paraphrase support. In Proceedings of EMNLP.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF2": { |
| "text": "e 2 :e 2 =e 1 arg max s\u2208CCG-labels(e 1 ,P ) p(e 2 |e 1 , s) (8)", |
| "type_str": "figure", |
| "num": null, |
| "uris": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>create equal</td><td/><td/></tr><tr><td>create equal</td><td>.42 same</td><td>.03</td></tr><tr><td>equal</td><td>.06 created</td><td>.02</td></tr><tr><td>to create a</td><td colspan=\"2\">.05 conditions .02</td></tr><tr><td>create</td><td>.04 playing</td><td>.02</td></tr><tr><td colspan=\"2\">to create equality .03 creating</td><td>.01</td></tr></table>", |
| "text": "The baseline method's paraphrases of equal and their probabilities (excluding items with p < .01).", |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "Syntactically constrained paraphrases for equal when it is labeled as an adjective or adjectival phrase.", |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>That noun phrase in turn missing a plural noun</td></tr><tr><td>(NNS) to its right.</td></tr><tr><td>2. SQ\\VBP NP/(VP/(NP/NNS)) -This label corre-</td></tr><tr><td>sponds to the middle circle. It indicates that</td></tr><tr><td>create equal is an SQ missing a VBP and a NP</td></tr><tr><td>to its left, and the complex VP to its right.</td></tr><tr><td>3. SBARQ\\WHADVP (SQ\\VBP NP/(VP/(NP/NNS)))/. -</td></tr><tr><td>This label corresponds to the outermost cir-</td></tr><tr><td>cle. It indicates that create equal is an SBARQ</td></tr><tr><td>missing a WHADVP and the complex SQ to its</td></tr><tr><td>left, and a punctuation mark to its right.</td></tr></table>", |
| "text": "Paraphrases and syntactic labels for the nonconstituent phrase create equal.", |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "Annotators rated paraphrases along two 5-point scales.", |
| "type_str": "table" |
| } |
| } |
| } |
| } |