| { |
| "paper_id": "U05-1023", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:08:22.930463Z" |
| }, |
| "title": "Paraphrase Identification by Text Canonicalization", |
| "authors": [ |
| { |
| "first": "Yitao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Sydney Sydney", |
| "location": { |
| "postCode": "2006", |
| "country": "Australia" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Jon", |
| "middle": [], |
| "last": "Patrick", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Sydney Sydney", |
| "location": { |
| "postCode": "2006", |
| "country": "Australia" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper proposes an approach to sentencelevel paraphrase identification by text canonicalization. The source sentence pairs are first converted into surface text that approximates canonical forms. A decision tree learning module which employs simple lexical matching features then takes the output canonicalized texts as its input for a supervised learning process. Experiments on the Microsoft Research (MSR) Paraphrase Corpus give comparable performance to other systems that are equipped with more sophisticated lexical semantic and syntactic matching components, with a Confidence-weighted Score of 0.791. An ancillary experiment using the occurrence of nominalizations suggests that the MSR Paraphrase Corpus might not be a rich source for learning paraphrasing patterns.", |
| "pdf_parse": { |
| "paper_id": "U05-1023", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper proposes an approach to sentencelevel paraphrase identification by text canonicalization. The source sentence pairs are first converted into surface text that approximates canonical forms. A decision tree learning module which employs simple lexical matching features then takes the output canonicalized texts as its input for a supervised learning process. Experiments on the Microsoft Research (MSR) Paraphrase Corpus give comparable performance to other systems that are equipped with more sophisticated lexical semantic and syntactic matching components, with a Confidence-weighted Score of 0.791. An ancillary experiment using the occurrence of nominalizations suggests that the MSR Paraphrase Corpus might not be a rich source for learning paraphrasing patterns.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Paraphrase identification is the task of recognizing text fragments with approximately the same meaning within a specific context. It has been recently proposed as an applicationindependent framework for measuring semantic equivalence in text, which is critical to many natural language systems like Question Answering, Information Extraction, Information Retrieval, Document Summarization, and Machine Translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper proposes an approach to identifying sentence-level paraphrase pairs by transforming source sentences into more canonicalized text forms. By \"canonical form\", we mean a transformed text which is more generic and simpler in someway than the original text, following the idea of restricted languages. For example, the sentence Remaining shares will be held by QVC's management.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "is transformed into a more canonicalized form by changing it from the passive to active voice producing QVC 's management will hold Remaining shares.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "which is more common in Subject-Verb-Object (SVO) languages like English, while the Passive Voice in the source sentence usually occurs in scientific and business text where a more formal writing style is used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This approach is consistent with Chomsky's Transformational Grammar, in which syntactically different, but semantically equivalent sentences can be related by their identical deep semantic structures (Chomsky, 1957) . However, it is generally difficult to efficiently analyze any corpus by using the Transformational Grammar due to its high complexity and computational overhead (Hausser, 2001) . In our approach, we only attempt to transform parts of the surface structure into a more generic text representation within the context of the paraphrase identification problem. The underlying hypothesis of this approach is that if two sentences are paraphrases of each other, they have a higher chance of being transformed into similar surface texts than a pair of non-paraphrase sentences.", |
| "cite_spans": [ |
| { |
| "start": 200, |
| "end": 215, |
| "text": "(Chomsky, 1957)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 379, |
| "end": 394, |
| "text": "(Hausser, 2001)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, only a set of limited canonicalization rules have been applied as a preliminary attempt to evaluate the effectiveness of the methodology. The objective is not to create grammatically correct text sequences from source sentences, but to enable the true paraphrases to share as much surface text, both lexically and syntactically, as possible. Despite this simple model, experiments on the MSR Paraphrase Corpus nevertheless show comparable results to those scores reported in the recent ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment (2005) . They also show that this approach increases the Recall rate of the system quite significantly.", |
| "cite_spans": [ |
| { |
| "start": 548, |
| "end": 581, |
| "text": "Equivalence and Entailment (2005)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recent work on sentence-level paraphrasing generally views the problem as one of identifying bidirectional entailment in text pairs. Given an entailment text T and a hypothesis text H, T entails H if H can be inferred from the contents of T . A pair of sentences is therefore considered as a paraphrase pair if the entailment relationship holds from both directions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "However, this strict mutual entailment relationship does not hold in most naturally occurred sentence-level paraphrases. Recent attempts on extracting paraphrase pairs from the web, notably the MSR Paraphrase Corpus (Dolan et al., 2004) , have shown a large quantity of \"more or less semantically equivalent\" paraphrase pairs, such as the examples in Table 1. In the paraphrase pair \"913945-914112\", \"Dewhurst\" in the first sentence cannot be inferred from the second without giving the specific context knowledge that this person is someone belongs to the \"committee\". In the pair \"420631-420719\", the first sentence does not include any information that the minister is Saudi which occurs in the second sentence. Human judges have generally shown little difficulty in identifying these loose semantically equivalent sentence pairs as paraphrases. A surprisingly high inter-rater agreement of 83% was reported in the construction of the MSR Paraphrase Corpus despite the rather vague guideline of identifying sentence-level paraphrases that was used. It suggests that human judges were only interested in the matching of main propositions in sentence pairs, while neglecting the existence of other non-entailed trivial contents. Bar-Haim et al. (2005) decomposed the entailment task into two sub-levels, namely, lexical and lexical-syntactic. At the lexical level, for each word or phrase h in Hypothesis H, if h can be matched with a corresponding item t in Text T using either lexical matching, or a sequence of lexical transformations, then H and T are tagged as a true entailment pair. Lexical transformation rules include morphological derivations like nominalization (example \"913945-914112\" in Table 1 , \"proposal => propose\"), ontological relations like synonym and hypernym, or world knowledge such as \"Taliban => organization\". 
At the lexical-syntactic level, entailment between H and T holds if both the lexical and syntactic relations in H are also found in T. The relations evaluated at the lexical-syntactic level include syntactic movement triggered by morphological derivation of words, passive to active voice transformation of verbs, co-reference in text, and the syntactic level paraphrases like \"X was born in Y <=> X is Y man by birth\". In an empirical analysis of the PASCAL Recognising Textual Entailment Challenge (RTE) corpus , 240 sentence pairs were randomly chosen and tagged by human annotators based on the above criteria for semantic entailment. What they have found is that working on the lexicalsyntactic level outperforms on the lexical level by a significant increase of the Precision score, namely, from 59% to 86%. However, the Recall rate shows only 6% improvement by switching from lexical to a lexical-syntactic level.", |
| "cite_spans": [ |
| { |
| "start": 216, |
| "end": 236, |
| "text": "(Dolan et al., 2004)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1230, |
| "end": 1252, |
| "text": "Bar-Haim et al. (2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1702, |
| "end": 1709, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In a similar effort to evaluate the contribution of syntactic knowledge in the entailment task, Vanderwende et al. (2005) found that 37% of the RTE Entailment Corpus examples could be handled by syntax alone, assuming the existence of an ideal parser. With additional help from a thesaurus, this figure can be increased to 49%. Corley and Mihalcea (2005) proposed a bagof-words model for identifying entailments and paraphrases by measuring the semantic similarity of two texts. In their model, the semantic similarity of two text segments T i and T j is defined as a score function that combines the semantic similarities of nouns and verbs, the lexical similarities of other open class words, together with word specificities measured by the inverse document frequency metric derived from the British National Corpus. Experimental results on the MSR Paraphrase Corpus showed a 4.4% increase of system accuracy by incorporating semantic knowledge.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 121, |
| "text": "Vanderwende et al. (2005)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 328, |
| "end": 354, |
| "text": "Corley and Mihalcea (2005)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Inversion Transduction Grammars (ITG), which is previously proposed as a framework for machine translation, has also been applied in the context of the paraphrase and entailment task by Wu (2005) . Without consulting any thesaurus, the Bracketing ITG model worked mainly on a syntactic matching level and achieved a Confidence-weighted Score of 0.761, which is 10% higher than the random baseline.", |
| "cite_spans": [ |
| { |
| "start": 186, |
| "end": 195, |
| "text": "Wu (2005)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The Microsoft Research Paraphrase Corpus has been used throughout our experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Dewhurst's proposal calls for an abrupt end to the controversial \"Robin Hood\" plan for school finance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The committee would propose a replacement for the \"Robin Hood\" school finance system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The tour plans to make stops in 103 cities before rallying in Washington on Oct. 1-2, and in New York City on Oct. 3-4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2484044-2483683", |
| "sec_num": null |
| }, |
| { |
| "text": "The tour will stop in 103 cities before rallying in Washington on Oct. 1 and 2, and New York on Oct. 3 and 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2484044-2483683", |
| "sec_num": null |
| }, |
| { |
| "text": "Nominalization + Future Tense", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2484044-2483683", |
| "sec_num": null |
| }, |
| { |
| "text": "Those reports were denied by the interior minister, Prince Nayef.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "420631-420719", |
| "sec_num": null |
| }, |
| { |
| "text": "However, the Saudi interior minister, Prince Nayef, denied the reports.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "420631-420719", |
| "sec_num": null |
| }, |
| { |
| "text": "Passive/Active Voice", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "420631-420719", |
| "sec_num": null |
| }, |
| { |
| "text": "It is the result of a recent effort to construct a large-scale paraphrase corpus for generic purposes (Dolan et al., 2004) . It consists of 5,801 sentence pairs extracted from online newswire text, in which 3,900 are tagged as true paraphrases by human judges. This high proportion of paraphrase pairs can be explained by the methodology used to create the corpus. In the construction of the corpus, edit distance was used as the only metric to filter out lexically dissimilar sentence pairs, which means the remaining instances have large lexical overlaps. As a consequence, although the MSR Paraphrase Corpus is rich in the number of paraphrase pairs, it is not enriched with a good variety of lexical and syntactic patterns. Weeds et al. (2005) argue that this \"high overlap in words\" makes it a poor source for studying the distributional similarity of syntactic paraphrasing patterns.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 122, |
| "text": "(Dolan et al., 2004)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 728, |
| "end": 747, |
| "text": "Weeds et al. (2005)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In an effort to substantiate this claim, we made an evaluation of the occurrence of nominalization, which is a classical linguistic device for paraphrasing, in both the MSR Paraphrase Corpus and the RTE Entailment Corpus. We used a semi-automatic method to calculate the occurrence of nominalizations. First we postagged sentence pairs in the corpus and lemmatized all the verbs and nouns. If there was an exact string match between a lemmatized verb and a lemmatized noun in a sentence pair, we marked it as a candidate of nominalization, and asked human judges to verify it at a later stage. A walk-through example of finding nominalization is shown in Table 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 655, |
| "end": 662, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "420631-420719", |
| "sec_num": null |
| }, |
| { |
| "text": "This method gives a reliable lower bound on the occurrence of nominalizations in the corpora. The results are shown in Table 3 . Notice that in the MSR training dataset only 60 true nominalizations exist in over 4,000 sentence pairs, compared to the number of 44 over 800 in Despite these innate problems of the corpus, it is still by far the largest sample dataset of paraphrasing phenomenon, which provides a solid base for system testing. Therefore, we decided to focus our research on this corpus as the first stage of our experiments.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 119, |
| "end": 126, |
| "text": "Table 3", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "420631-420719", |
| "sec_num": null |
| }, |
| { |
| "text": "This section describes the details of the two modules in the system, namely the text canonicalization module and the supervised learning module.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The function of the text canonicalization module is to constrain the language choices, at both the lexical and the syntactic level, of any text that carries meaning. In this paper, only a limited set of canonicalization rules has been applied.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Canonicalization", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Number entities include dates, times, monetary values, and other quantities like percentages. In the experiments, the system replaces these number entities with generic tags in the text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Number Entities", |
| "sec_num": null |
| }, |
| { |
| "text": "In the passive to active voice transformation, the system first consults Minipar (Lin, 1998) , which is a principle-based English parser, to get the parsed dependency tree structure of the text. Then it finds all the verbs in passive voice, together with their grammatical subjects and objects. Finally, the system swaps the child nodes of the subjects and the objects of each verb. The canonicalized text is then created from the transformed syntactic tree. An example of passive to active voice transformation is shown in Table 4 .", |
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 92, |
| "text": "(Lin, 1998)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 524, |
| "end": 531, |
| "text": "Table 4", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Passive/Active Voice", |
| "sec_num": null |
| }, |
| { |
| "text": "The expression of future tense in text has also been canonicalized to constrain the lexical choices which refer to future action and willingness. An example of future tense usage in the MSR Paraphrase Corpus is given by the text pair \"2484044-2483683\" in Table 1. In the sentences, \"plans to\" and \"will\" both refer to the future actions the subject will be taking. They have to be canonicalized into the same surface text to create a higher probability of being matched at a later stage. In the experiments, we compiled a list of common words and phrase structures (like \"plan to\" and \"be expected to\") to be substituted by the single word \"will\", which the system defines as the generic expression of future actions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Tense", |
| "sec_num": null |
| }, |
| { |
| "text": "At the supervised learning stage, the decision tree learning module of Weka (Witten and Frank, 1999) was used. The training dataset and the test dataset used in the experiments are the corresponding training and test dataset in MSR Paraphrase Corpus as described in Section 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Learning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Longest Common Substring measures the length of the longest common strings shared by two sentences. It is a consecutive sequence of words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Longest Common Subsequence measures the length of the longest common sequence of strings shared by two sentences. It does not require this sequence to be consecutive in the original text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Edit Distance describes how many edit operations (add, delete, or replace of a word token at a time) are required to convert a source text into a target text. The fewer edit operations needed, the less edit distance and the more lexical overlap of the two text segments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Modified N-gram Precision is also an important metric adopted from the BLEU algorithm for evaluating machine translations (Papineni et al., 2001) . It was originally proposed to capture both the accuracy and the fluency of a translated text with reference to a set of candidate translations. In the context of paraphrases, we try to calculate the modified n-gram precision from both directions of a sentence pair. For example, given the following sentence pair:", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 147, |
| "text": "(Papineni et al., 2001)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "T 1 : the the the the the the the.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "T 2 :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "The cat is on the mat.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "we first define the modified count of an n-gram t in T 1 as the minimum between the occurrence of t in T 1 and the maximum occurrence of t in T 2 . For instance, Count modified (\"the\") is 2 because the unigram \"the\" occurs only twice in the second sentence. The directional modified ngram precisions from T 1 to T 2 is defined in Equation 1, in which m is the order of ngram (up to trigram m=3 was used in our experiment), and Count(k) simply counts the number of k in the source sentence T 1 . We also calculated the directional modified n-gram precision score from T 2 to T 1 , and used the average of the two directional precision as the modified n-gram precision of the sentence pair by Equation 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "Moreover, our calculation of the above features is solely based on word token level. For instance, we use word n-gram instead of letter n-gram in calculating the modified n-gram precision.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "mnp T 1 = 1 m m i=1 \u2212 log ( t\u2208n-gram i Count modified (t) k\u2208n-gram i Count(k) )", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "mnp(T 1 , T 2 ) = mnp T 1 + mnp T 2 2", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "Evaluation Metrics To assess the system performance, we adopt the Confidence-weighted Score(CWS) as the main figure for our evaluations. CWS is defined in Equation 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "cws = 1 n n i=1 #correct-up-to-i i", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "in which #correct-up-to-i is the number of correct tagging instances up to the current position i, and the test data samples are first ranked in decreasing order according to their confidence level of tagging judgments. The CWS metric generally rewards a system that assigns higher confidence values to correct tagging decisions than to those wrong ones . Meanwhile, traditional machine learning metrics like accuracy, precision, recall, and F 1 values are also reported for better understanding of the system. The Baselines Two baselines have been provided for the task. The first baseline system uniformly predicts true for paraphrase pairs. The second baseline system uses the lexical matching features in Section 4.2 on the original text pairs for the supervised learning stage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Matching Features The features used in the supervised learning stage are", |
| "sec_num": null |
| }, |
| { |
| "text": "The experiment results are shown in Table 5 . For comparison, scores of Wu (2005) , and Corley and Mihalcea 2005's systems are also included in the table. 1 For the two baseline systems, B2, which employs pure lexical matching features on the source text, outperforms B1, the system that uniformly predicts paraphrases, both in Accuracy by 6%, and in CWS by 12%. The B2 system also shows comparable results with respect to Wu, and Corley and Mihalcea's systems and sets a high standard as a baseline system. This further reveals the main characteristic of the MSR Paraphrase Corpus: paraphrase text pairs in the corpus share more lexical overlaps than non-paraphrase pairs.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 81, |
| "text": "Wu (2005)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 36, |
| "end": 43, |
| "text": "Table 5", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Compared with B2, systems using canonicalized text, namely S1 -S7, generally suffer a slightly poorer performance in the Accuracy score. However, the Recall rate rises significantly in all systems except in S3 and S6. Interestingly, S3 and S6 also show the highest CWS score and the Precision score at the same time. This suggests that the canonicalization of future tense helps systems to make more precise and reliable tagging decisions. Canonicalization on Passive/Active voice (S2) also increases the Recall rate by almost 10% compared with B2. This suggests that a pure lexical matching system could be further improved by even some preliminary syntactic transformations. Number entity canonicalization helps to increase the Recall rate of the system. This could be explained Wu (2005) 0.761 Corley and Mihalcea (2005) 0.715 0.723 0.925 0.812 by how the MSR Paraphrase Corpus was constructed. During the tagging process, source sentences were already pre-processed by replacing number entities with generic tags. Human judges then made their decisions based on the canonicalized text. While the dataset revealed to the public, the source text is provided instead of the data used by human judges. In general, systems S1-S7 show competitive performance with respect to Wu, and Corley and Mihalcea's systems. Corley and Mihalcea's system gives a better Recall rate, which suggests the importance of introducing lexical semantics features in the system. Our approach currently does not model synonyms into any canonicalized form, therefore loses the possibility of capturing this lexical variance. On the other hand, neither Wu, nor Corley and Mihalcea's system outperforms the lexical matching system B2 in terms of CWS and Accuracy. This again suggests that the nature of the paraphrases in the corpus is that they share more lexical overlaps than non-paraphrases, rather than employing sophisticated syntactic paraphrasing patterns.", |
| "cite_spans": [ |
| { |
| "start": 781, |
| "end": 790, |
| "text": "Wu (2005)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 797, |
| "end": 823, |
| "text": "Corley and Mihalcea (2005)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "This paper proposes a text canonicalization approach to the paraphrase identification task. Our approach tries to tackle the problem on both the lexical and the grammatical level, as distinct from existing research which has concentrated on lexical analyses. Despite the simple transformation rules applied, this approach has shown competitive figures of system performance on the MSR Paraphrase Corpus with that reported in current state-of-the-art systems. Moreover, this method reports a significant increase in the recall rate of paraphrases compared with a system using noncanonicalized text. It clearly encourages the use of more conceptualized and more canonical syntax which tries to approximate the deeper semantic information of the original text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "However, further research is required to reveal how many transformation rules are needed for the task. It would also be interesting to develop an effective engineering method for managing the expanding canonicalization rule set. In the future, more work has also to be done to equip the system with lexical semantic knowledge from either manually constructed lexical databases like WordNet (Fellbaum, 1998) , or other resources that automatically learned from corpora like VerbOcean (Chklovski and Pantel, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 390, |
| "end": 406, |
| "text": "(Fellbaum, 1998)", |
| "ref_id": null |
| }, |
| { |
| "start": 483, |
| "end": 511, |
| "text": "(Chklovski and Pantel, 2004)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Wu only reported the CWS score on MSR corpus in his paper, while Corley and Mihalcea did not report any CWS score in their paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This paper was supported by the School of Information Technologies, University of Sydney. We would also like to thank the reviewers for their insightful comments, and all members of the Sydney Language Technology Group for their support.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Definition and analysis of intermediate entailment levels", |
| "authors": [ |
| { |
| "first": "Roy", |
| "middle": [], |
| "last": "Bar-Haim", |
| "suffix": "" |
| }, |
| { |
| "first": "Idan", |
| "middle": [], |
| "last": "Szpecktor", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Glickman", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment", |
| "volume": "", |
| "issue": "", |
| "pages": "55--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roy Bar-Haim, Idan Szpecktor, and Oren Glick- man. 2005. Definition and analysis of inter- mediate entailment levels. In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, pages 55-60, Ann Arbor, Michigan, June. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "VerbOcean: Mining the Web for Fine-Grained Semantic Verb Relations", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Chklovski", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timothy Chklovski and Patrick Pantel. 2004. VerbOcean: Mining the Web for Fine- Grained Semantic Verb Relations. In Pro- ceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP- 04), Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Syntactic Structures. Mouton", |
| "authors": [ |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Chomsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1957, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noam Chomsky. 1957. Syntactic Structures. Mouton.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Measuring the semantic similarity of texts", |
| "authors": [ |
| { |
| "first": "Courtney", |
| "middle": [], |
| "last": "Corley", |
| "suffix": "" |
| }, |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment", |
| "volume": "", |
| "issue": "", |
| "pages": "13--18", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Courtney Corley and Rada Mihalcea. 2005. Measuring the semantic similarity of texts. In Proceedings of the ACL Workshop on Em- pirical Modeling of Semantic Equivalence and Entailment, pages 13-18, Ann Arbor, Michi- gan, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The PASCAL Recognising Textual Entailment Challenge", |
| "authors": [ |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernardo", |
| "middle": [], |
| "last": "Magnini", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Glickman", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "PASCAL Proceedings of the First Challenge Workshop, Recognizing Textual Entailment", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ido Dagan, Bernardo Magnini, and Oren Glick- man. 2005. The PASCAL Recognising Tex- tual Entailment Challenge. In PASCAL Pro- ceedings of the First Challenge Workshop, Recognizing Textual Entailment, Southamp- ton, U.K., April.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment. Association for Computational Linguistics", |
| "authors": [ |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bill Dolan and Ido Dagan, editors. 2005. Pro- ceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and En- tailment. Association for Computational Lin- guistics, Ann Arbor, Michigan, June.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "B" |
| ], |
| "last": "Dolan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Quirk", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William B. Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources. In Proceed- ings of COLING 2004.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "WordNet: An Electronic Lexical Database", |
| "authors": [], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Foundations of Computational Linguistics: Human-Computer Communication in Natural Language", |
| "authors": [ |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Hausser", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roland Hausser. 2001. Foundations of Compu- tational Linguistics: Human-Computer Com- munication in Natural Language. Springer.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Dependency-based Evaluation of MINIPAR", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Workshop on the Evaluation of Parsing Systems, First International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekang Lin. 1998. Dependency-based Evalua- tion of MINIPAR. In Workshop on the Evalu- ation of Parsing Systems, First International Conference on Language Resources and Eval- uation, Granada, Spain, May.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Bleu: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. Bleu: a method for automatic evalua- tion of machine translation.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "What Syntax can Contribute in Entailment Task", |
| "authors": [ |
| { |
| "first": "Lucy", |
| "middle": [], |
| "last": "Vanderwende", |
| "suffix": "" |
| }, |
| { |
| "first": "Deborah", |
| "middle": [], |
| "last": "Coughlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "PASCAL Proceedings of the First Challenge Workshop, Recognizing Textual Entailment", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lucy Vanderwende, Deborah Coughlin, and Bill Dolan. 2005. What Syntax can Contribute in Entailment Task. In PASCAL Proceedings of the First Challenge Workshop, Recognizing Textual Entailment, Southampton, U.K.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "The distributional similarity of sub-parses", |
| "authors": [ |
| { |
| "first": "Julie", |
| "middle": [], |
| "last": "Weeds", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Weir", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Keller", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment", |
| "volume": "", |
| "issue": "", |
| "pages": "7--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julie Weeds, David Weir, and Bill Keller. 2005. The distributional similarity of sub-parses. In Proceedings of the ACL Workshop on Empir- ical Modeling of Semantic Equivalence and Entailment, pages 7-12, Ann Arbor, Michi- gan, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations", |
| "authors": [ |
| { |
| "first": "Ian", |
| "middle": ["H."], |
| "last": "Witten", |
| "suffix": "" |
| }, |
| { |
| "first": "Eibe", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ian H. Witten and Eibe Frank. 1999. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Recognizing paraphrases and textual entailment using inversion transduction grammars", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment", |
| "volume": "", |
| "issue": "", |
| "pages": "25--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu. 2005. Recognizing paraphrases and textual entailment using inversion transduc- tion grammars. In Proceedings of the ACL Workshop on Empirical Modeling of Seman- tic Equivalence and Entailment, pages 25-30, Ann Arbor, Michigan, June. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td/><td/><td>Examples of MSR Paraphrase Corpus</td><td/></tr><tr><td>ID</td><td>Text1</td><td>Text2</td><td>Description</td></tr><tr><td>913945-</td><td/><td/><td/></tr><tr><td>914112</td><td/><td/><td/></tr></table>" |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td/><td colspan=\"2\">Occurrence of Nominalizations True Nomi-Corpus</td></tr><tr><td/><td>nalizations</td><td>Size(sentence</td></tr><tr><td/><td/><td>pairs)</td></tr><tr><td>RTE</td><td>44</td><td>800</td></tr><tr><td>MSR</td><td>60</td><td>4076</td></tr></table>" |
| }, |
| "TABREF2": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "NNP \"/NNP plan/NN for/IN school/NN finance/NN ./.The/DT committee/NN would/MD propose/VB a/DT replacement/NN for/IN the/DT \"/NNP Robin/NNP Hood/NNP \"/NNP school/NN finance/NN system/NN ./.", |
| "content": "<table><tr><td/><td/><td colspan=\"2\">An Example of Finding Nominalizations</td></tr><tr><td>ID</td><td>913945</td><td/><td>914112</td></tr><tr><td/><td colspan=\"3\">Dewhurst/NNP 's/POS proposal/NN</td></tr><tr><td/><td colspan=\"3\">calls/VBZ for/IN an/DT abrupt/JJ</td></tr><tr><td/><td colspan=\"3\">end/NN to/TO the/DT contro-</td></tr><tr><td/><td>versial/JJ</td><td>\"/NNP</td><td>Robin/NNP</td></tr><tr><td colspan=\"4\">Hood/Nouns proposal=>propos, end, Robin, Hood,</td><td>committee=>committe,</td></tr><tr><td/><td colspan=\"3\">plan, school, finance=>financ</td><td>replacement=>replac, Robin, Hood,</td></tr><tr><td/><td/><td/><td>school, finance=>financ, system</td></tr><tr><td>Verbs</td><td>calls=>call</td><td/><td>propose=>propos</td></tr><tr><td/><td colspan=\"3\">Candidate Nominalizations: (proposal, propose)</td></tr></table>" |
| }, |
| "TABREF3": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td/><td colspan=\"4\">: Passive to Active Voice id = 420631 id = 420719</td></tr><tr><td>Before</td><td colspan=\"2\">Those reports</td><td colspan=\"2\">However, the</td></tr><tr><td>transfor-</td><td colspan=\"2\">were denied</td><td>Saudi</td><td>inte-</td></tr><tr><td>mation</td><td colspan=\"2\">by the inte-</td><td colspan=\"2\">rior minister,</td></tr><tr><td/><td colspan=\"2\">rior minister,</td><td colspan=\"2\">Prince Nayef,</td></tr><tr><td/><td colspan=\"2\">Prince Nayef.</td><td>denied</td><td>the</td></tr><tr><td/><td/><td/><td>reports.</td></tr><tr><td>After</td><td>the</td><td>interior</td><td/></tr><tr><td>transfor-</td><td colspan=\"2\">minister,</td><td/></tr><tr><td>mation</td><td colspan=\"2\">Prince Nayef</td><td/></tr><tr><td/><td colspan=\"2\">denied Those</td><td/></tr><tr><td/><td colspan=\"2\">reports.</td><td/></tr></table>" |
| }, |
| "TABREF4": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td/><td colspan=\"3\">Experiment results on MSR Paraphrase Corpus</td><td/><td/></tr><tr><td/><td>CWS</td><td>Acc</td><td>Pre</td><td>Rec</td><td>F 1</td></tr><tr><td/><td colspan=\"3\">Systems using Canonicalized Text</td><td/><td/></tr><tr><td>S1: (a)number entities</td><td>0.740</td><td>0.692</td><td>0.713</td><td>0.898</td><td>0.795</td></tr><tr><td>S2: (b)passive/active</td><td>0.742</td><td>0.719</td><td>0.743</td><td>0.882</td><td>0.807</td></tr><tr><td>S3: (c)future tense</td><td>0.791</td><td>0.708</td><td>0.784</td><td>0.775</td><td>0.779</td></tr><tr><td>S4: (a)+(b)</td><td>0.739</td><td>0.697</td><td>0.716</td><td>0.900</td><td>0.798</td></tr><tr><td>S5: (a)+(c)</td><td>0.731</td><td>0.701</td><td>0.732</td><td>0.869</td><td>0.794</td></tr><tr><td>S6: (b)+(c)</td><td>0.791</td><td>0.709</td><td>0.784</td><td>0.776</td><td>0.780</td></tr><tr><td>S7: (a)+(b)+(c)</td><td>0.723</td><td>0.703</td><td>0.734</td><td>0.867</td><td>0.795</td></tr><tr><td/><td colspan=\"2\">Baselines</td><td/><td/><td/></tr><tr><td>B1: Uniform</td><td>0.664</td><td>0.664</td><td>0.664</td><td>1</td><td>0.798</td></tr><tr><td>B2: LexicalMatch</td><td>0.783</td><td>0.723</td><td>0.788</td><td>0.798</td><td>0.793</td></tr><tr><td/><td colspan=\"3\">Other Systems with Reported Scores</td><td/><td/></tr></table>" |
| } |
| } |
| } |
| } |