| { |
| "paper_id": "C02-1009", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:20:13.888701Z" |
| }, |
| "title": "A Robust Cross-Style Bilingual Sentences Alignment Model", |
| "authors": [ |
| { |
| "first": "Tz-Liang", |
| "middle": [], |
| "last": "Kueng", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "IV", |
| "location": { |
| "addrLine": "Science-Based Industrial Park", |
| "postCode": "30077", |
| "settlement": "Hsinchu", |
| "region": "R.O.C", |
| "country": "Taiwan" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Keh-Yih", |
| "middle": [], |
| "last": "Su", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "IV", |
| "location": { |
| "addrLine": "Science-Based Industrial Park", |
| "postCode": "30077", |
| "settlement": "Hsinchu", |
| "region": "R.O.C", |
| "country": "Taiwan" |
| } |
| }, |
| "email": "kysu@bdc.com.tw" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Most current sentence alignment approaches adopt sentence length and cognate as the alignment features; and they are mostly trained and tested in the documents with the same style. Since the length distribution, alignment-type distribution (used by length-based approaches) and cognate frequency vary significantly across texts with different styles, the length-based approaches fail to achieve similar performance when tested in corpora of different styles. The experiments show that the performance in F-measure could drop from 98.2% to 85.6% when a length-based approach is trained by a technical manual and then tested on a general magazine. Since a large percentage of content words in the source text would be translated into the corresponding translation duals to preserve the meaning in the target text, transfer lexicons are usually regarded as more reliable cues for aligning sentences when the alignment task is performed by human. To enhance the robustness, a robust statistical model based on both transfer lexicons and sentence lengths are proposed in this paper. After integrating the transfer lexicons into the model, a 60% F-measure error reduction (from 14.4% to 5.8%) is observed.", |
| "pdf_parse": { |
| "paper_id": "C02-1009", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Most current sentence alignment approaches adopt sentence length and cognate as the alignment features; and they are mostly trained and tested in the documents with the same style. Since the length distribution, alignment-type distribution (used by length-based approaches) and cognate frequency vary significantly across texts with different styles, the length-based approaches fail to achieve similar performance when tested in corpora of different styles. The experiments show that the performance in F-measure could drop from 98.2% to 85.6% when a length-based approach is trained by a technical manual and then tested on a general magazine. Since a large percentage of content words in the source text would be translated into the corresponding translation duals to preserve the meaning in the target text, transfer lexicons are usually regarded as more reliable cues for aligning sentences when the alignment task is performed by human. To enhance the robustness, a robust statistical model based on both transfer lexicons and sentence lengths are proposed in this paper. After integrating the transfer lexicons into the model, a 60% F-measure error reduction (from 14.4% to 5.8%) is observed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Since the bilingual corpus is a valuable resource for training statistical language models [Dagon, 91; Su et al., 95; Su and Chang, 99] and sentence alignment is the first step for most such tasks, many alignment approaches have been proposed in the literature [Brown, 91; Gale and Church, 93; Wu, 94; Vogel et al., 96; Och and Ney, 2000] . Most of those reported approaches use the sentence length as the main feature to perform the alignment task. For example, Brown et al. (91) used the feature of number-of-words for alignment, and [Gale and Church, 93] claimed that better performance can be achieved (5.8% error rate for English-French corpus) if the number-of-characters is adopted instead. As cognates are reliable cues for language pairs derived from the same family, Church (93) also attacked this problem by considering cognates additionally. Because most of those reported work are performed on those Indo-European language-pairs, for testing the performance on non-Indo-European languages, Wu (94) had tried both length and cognate features on the Hong Kong Hansard English-Chinese corpus, and 7.9% error rate has been reported. Besides, sentence alignment can also be indirectly achieved via more complicated word corresponding models [Brown et al., 93; Vogel et al., 96; Och and Ney, 2000] . Since those word corresponding models, which also achieve similar performance, are more complicated and run relatively slow, they seems to be over-killed for the task of aligning sentences and will not be discussed in this paper.", |
| "cite_spans": [ |
| { |
| "start": 261, |
| "end": 268, |
| "text": "[Brown,", |
| "ref_id": null |
| }, |
| { |
| "start": 269, |
| "end": 272, |
| "text": "91;", |
| "ref_id": null |
| }, |
| { |
| "start": 273, |
| "end": 273, |
| "text": "", |
| "ref_id": null |
| }, |
| { |
| "start": 303, |
| "end": 320, |
| "text": "Vogel et al., 96;", |
| "ref_id": null |
| }, |
| { |
| "start": 321, |
| "end": 339, |
| "text": "Och and Ney, 2000]", |
| "ref_id": null |
| }, |
| { |
| "start": 464, |
| "end": 481, |
| "text": "Brown et al. (91)", |
| "ref_id": null |
| }, |
| { |
| "start": 547, |
| "end": 554, |
| "text": "Church,", |
| "ref_id": null |
| }, |
| { |
| "start": 555, |
| "end": 558, |
| "text": "93]", |
| "ref_id": null |
| }, |
| { |
| "start": 1250, |
| "end": 1268, |
| "text": "[Brown et al., 93;", |
| "ref_id": null |
| }, |
| { |
| "start": 1269, |
| "end": 1286, |
| "text": "Vogel et al., 96;", |
| "ref_id": null |
| }, |
| { |
| "start": 1287, |
| "end": 1305, |
| "text": "Och and Ney, 2000]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Although length-based approaches above mentioned are simple and can achieve good performance, they are usually trained and tested in the text with the same style. Therefore, they are style-dependent approaches. Since performing supervised-training for each style is not feasible in many applications, it would be interesting to know whether those length-based approaches can still achieve the similar performance if they are tested in the text with different styles other than the training corpora. An experiment was thus conducted to train the parameters with a machinery technical manual; the performance is then tested on a general magazine (for introducing Taiwan to foreign visi-tors). It shows that the testing set performance of the length-based model (with cognates considered) would drop from 98.2% (tested in the same technical domain) to 85.6% (tested in the new general magazine) in F -measure. After investigating those errors, it has been found that the length distribution and alignment-type distribution (used by those length-based approaches) vary significantly across the texts of different styles (as would be shown in Tables 5.2 and 5. 3), and the cognate-frequency 1 drops greatly from the technical manual to a general magazine in non-Indo-European languages (as would be shown in Table 5 .3).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1138, |
| "end": 1155, |
| "text": "Tables 5.2 and 5.", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 1303, |
| "end": 1310, |
| "text": "Table 5", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "On the other hand, sentence length is seldom used by a human to align bilingual sentences. They usually do not align bilingual sentences by counting the number of characters (or words) in the sentence pairs. Instead, since a large percentage of content words in the source text would be translated into their translation-duals to preserve the meaning in the target text, transfer-lexicons are usually used for aligning sentences when the alignment task is performed by human. To enhance the robustness across different styles, transfer-lexicons are thus integrated into the traditional sentence-length based model in the proposed robust statistical model described below. After integrating transfer-lexicons into the model, a 60% F -measure error reduction (from 14.4% to 5.8%) has been observed, which corresponds to improving the cross-style performance from 85.6% to 94.2% in F -measure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
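The 60% error-reduction figure quoted above follows directly from the reported numbers; a quick arithmetic check (using only the percentages stated in the text):

```python
# Relative F-measure error reduction reported in the paper:
# the cross-style error drops from 14.4% to 5.8% after adding
# transfer lexicons to the length-based model.
before, after = 14.4, 5.8
reduction = (before - after) / before
print(f"{reduction:.1%}")  # prints 59.7%, i.e. the ~60% reduction reported
```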
| { |
| "text": "The details of the proposed robust model, the associated features extracted from the bilingual corpora, and the probabilistic scoring function will be given in Section 2. In Section 3, we briefly mention some implementation issues. The associated performance evaluation is given in Section 4, and Section 5 would address error analysis and discusses the limitation of the proposed statistical model. Finally, the concluding remarks are given in Section 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1 Here \"Cognate\" mainly refers to those English proper nouns (such as those company names of IBM, HP; or the technical terms such as IEEE-1394, etc.) that appear in the Chinese text. As they are most likely to be directly copied from the English sentence into the corresponding Chinese one, they are reliable cues.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Since an English-Chinese bilingual corpus will be adopted in our experiments, we will denote the source text with m sentences as ES m 1 , and its corresponding target text, with n sentences, as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "CS n 1 . Let M i = {type i,1 , \u2022 \u2022 \u2022 , type i,N i } denote the i-th possible alignment-candidate, consisting of N i Alignment-P assages of type i,j , j = 1, \u2022 \u2022 \u2022 , N i ;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where type i,j is the matching type (e.g., 1\u22121, 0\u22121, 1\u22120, etc.) of the j-th Alignment-Passage in the i-th alignment-candidate, and N i denotes the number of the total Alignment-Passages in the i-th alignmentcandidate. Then the statistical alignment model is to find the Bayesian estimate M * among all possible alignment candidates, shown in the following equation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "M * = arg max M i P (M i |ES m 1 , CS n 1 ). (2.1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "According to the Bayesian rule, the maximization problem in (2.1) is equivalent to solving the following maximization equation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "M * = arg max M i P (ES m 1 , CS n 1 |M i )P (M i ) = arg max M i {P (Aligned-P air i,N i i,1 |type i,N i i,1 )P (type i,N i i,1 )} = arg max M i N i j=1 {P (Aligned-P air i,j |Aligned-P air i,j\u22121 i,1 , type i,j i,1 ) \u00d7 P (type i,j |type i,j\u22121 i,1 )}, (2.2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where Aligned-P air i,j , j = 1, \u2022 \u2022 \u2022 , N i , denotes the j-th aligned English-Chinese bilingual sentence groups pair in the i-th alignment candidate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Assume that", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (Aligned-P air i,j |Aligned-P air i,j\u22121 i,1 , type i,j i,1 ) \u2248 P (Aligned-P air i,j |type i,j ),", |
| "eq_num": "(2.3)" |
| } |
| ], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "and different type i,j in the i-th alignment candidate are statistically independent 2 , then the above maximization problem can be approached by searching for", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "M \u2261 arg max Mi Ni j=1 {P (Aligned-P air i,j |type i,j )P (type i,j )}, (2.4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "whereM denotes the desired candidate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Sentence Alignment Model", |
| "sec_num": "2" |
| }, |
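The decomposed objective in Eq. (2.4) scores an alignment candidate as a product of per-passage terms P(Aligned-Pair | type) × P(type). A minimal log-domain sketch of that scoring, where the matching-type prior table and the pair likelihood below are illustrative placeholders rather than the paper's estimated parameters:

```python
import math

# Score an alignment candidate M_i as in Eq. (2.4): the sum (in log domain)
# over its alignment passages of log P(pair | type) + log P(type).
# TYPE_PRIOR is a placeholder matching-type prior, not the paper's estimate.
TYPE_PRIOR = {"1-1": 0.89, "1-2": 0.05, "2-1": 0.04, "1-0": 0.01, "0-1": 0.01}

def candidate_log_score(passages, pair_likelihood):
    """passages: list of (aligned_pair, matching_type);
    pair_likelihood(pair, mtype) returns P(pair | type)."""
    score = 0.0
    for pair, mtype in passages:
        score += math.log(pair_likelihood(pair, mtype))
        score += math.log(TYPE_PRIOR[mtype])
    return score

# Toy likelihood: the larger the length mismatch, the lower the probability.
def toy_likelihood(pair, mtype):
    src_len, tgt_len = pair
    return math.exp(-abs(src_len - tgt_len) / 10.0) * 0.5 + 1e-9

cand = [((20, 22), "1-1"), ((15, 30), "1-2")]
print(candidate_log_score(cand, toy_likelihood))
```

In the actual model the likelihood term is the feature density f(δ_c, δ_w | type) of Section 2.1 rather than this toy function.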
| { |
| "text": "To make the above model feasible, Aligned-P air i,j should be first transformed into an appropriate feature space. The baseline model will use both the length of sentence [Brown et al., 91 ; Gale and Church, 93] and English cognates [Wu, 94] , and is shown as follows:", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 185, |
| "text": "[Brown et al.,", |
| "ref_id": null |
| }, |
| { |
| "start": 186, |
| "end": 188, |
| "text": "91", |
| "ref_id": null |
| }, |
| { |
| "start": 233, |
| "end": 237, |
| "text": "[Wu,", |
| "ref_id": null |
| }, |
| { |
| "start": 238, |
| "end": 241, |
| "text": "94]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "arg max M i N i j=1 f (\u03b4 c , \u03b4 w |type i,j )P (\u03b4 cognate )P (type i,j ), (2.5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where \u03b4 c and \u03b4 w denote the normalized differences of characters and words as explained in the following; \u03b4 c is defined to be (l tc \u2212 cl sc )/ l sc s 2 c , where l sc and l tc are the character numbers of the aligned bilingual portions of source text and target text, respectively, under consideration; c denotes the proportional constant for target-character-count and s 2 c denotes the corresponding target-charactercount variance per source-character. Similarly, \u03b4 w is defined to be (l tw \u2212 wl sw )/ l sw s 2 w , where l sw and l tw are the word numbers of the aligned bilingual portions of source text and target text, respectively; w denotes the proportional constant for target-word-count and s 2 w denotes the corresponding target-word-count variance per sourceword. Also, the random variables \u03b4 c and \u03b4 w are assumed to have bivariate normal distribution and each possesses a standard normal distribution with mean 0 and variance 1. Furthermore, \u03b4 cognate denotes (\"Number of English cognates found in the given Chinese sentences\"\u2212\"Number of corresponding English cognates found in the given English sentences\"), and is Poisson 3 distributed independent of its associated matching-type; also assume that \u03b4 cognate is independent of other features (i.e., character-count and word-count).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline Model", |
| "sec_num": "2.1" |
| }, |
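The normalized length differences δ_c and δ_w can be sketched directly from their definitions above. The proportional constants and variances here are placeholder values chosen only so the example is runnable; in the paper they would be estimated from the training corpus:

```python
import math

# Baseline length features of Eq. (2.5), following the definitions in the
# text. C_CHAR/VAR_CHAR and C_WORD/VAR_WORD stand in for the estimated
# constants c, s_c^2, w, s_w^2 (placeholder values, not the paper's).
C_CHAR, VAR_CHAR = 1.5, 6.4   # target chars per source char, variance
C_WORD, VAR_WORD = 0.75, 2.0  # target words per source word, variance

def delta_c(l_sc, l_tc):
    """Normalized character-count difference (l_tc - c*l_sc)/sqrt(l_sc*s_c^2)."""
    return (l_tc - C_CHAR * l_sc) / math.sqrt(l_sc * VAR_CHAR)

def delta_w(l_sw, l_tw):
    """Normalized word-count difference (l_tw - w*l_sw)/sqrt(l_sw*s_w^2)."""
    return (l_tw - C_WORD * l_sw) / math.sqrt(l_sw * VAR_WORD)

# A translation whose lengths match the expected ratios yields deltas of 0,
# consistent with the standard-normal assumption on δ_c and δ_w.
print(delta_c(100, 150), delta_w(20, 15))
```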
| { |
| "text": "Since transfer-lexicons are usually regarded as more reliable cues for aligning sentences when the alignment task is performed by human, the above baseline model is further enhanced by adding those associated transfer lexicons to it. Those translated Chinese words, which are derived from each English word (contained in given English sentences) by looking up some kinds of dictionaries, can be viewed as transfer-lexicons because they are very likely to appear in the translated Chinese sentence. However, as the distribution of various possible translations (for each English lexicon) found in our bilingual corpus is far more diversified 4 compared with those transfer-lexicons obtained from the dictionary, only a small number of transfer-lexicons can be matched if the exact-match is specified. Therefore, each Chinese-Lexicon obtained from the dictionary is first augmented with its associated Chinese characters, and then the augmented transfer-lexicons set are matched with the target Chinese sentence(s). Once an element of the augmented transfer-lexicons set is matched in the target Chinese sentence, it is counted as being matched. So we compute the Normalized-Transfer-Lexicon-Matching-Measure, \u03b4 T ransf er\u2212Lexicons which denotes [(\"Number of augmented transfer-lexicons matched\"\u2212\"Number of augmented transfer-lexicons unmatched\")/ \"Total Number of augmented transfer-lexicons sets\"], and add it to the original model as another additional feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Transfer Lexicon Model", |
| "sec_num": "2.2" |
| }, |
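The Normalized-Transfer-Lexicon-Matching-Measure described above can be sketched as follows. The tiny bilingual dictionary is a made-up illustration of an "augmented transfer-lexicon set" (one set of candidate Chinese translations per English word), not the paper's actual resource:

```python
# Sketch of the Normalized-Transfer-Lexicon-Matching-Measure:
# (matched sets - unmatched sets) / total sets, where a set counts as
# matched if any of its candidate translations appears in the target
# Chinese sentence. The toy dictionary below is purely illustrative.

def transfer_lexicon_measure(english_words, chinese_sentence, dictionary):
    """dictionary maps an English word to its augmented transfer-lexicon
    set (a set of candidate Chinese translations/characters)."""
    sets = [dictionary[w] for w in english_words if w in dictionary]
    if not sets:
        return 0.0
    matched = sum(1 for s in sets if any(t in chinese_sentence for t in s))
    unmatched = len(sets) - matched
    return (matched - unmatched) / len(sets)

toy_dict = {"computer": {"电脑", "计算机"}, "fast": {"快", "快速"}}
print(transfer_lexicon_measure(["computer", "fast"], "这台电脑很快", toy_dict))
```

The measure ranges from -1 (nothing matched) to +1 (every set matched), which is what makes it usable as a normally distributed feature f_2 in Eq. (2.6).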
| { |
| "text": "Assume follows normal distribution and the associated parameters are estimated from the training set, Equation (2.5) is then replaced by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Transfer Lexicon Model", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "arg max Mi Ni j=1 {f 1 (\u03b4 c , \u03b4 w |type i,j )P (\u03b4 cognate ) \u00d7 f 2 (\u03b4 T ransf er\u2212Lexicons ) \u00d7 P (type i,j )}. (2.6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Transfer Lexicon Model", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "3 Implementation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Transfer Lexicon Model", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The best bilingual sentence alignment in those above models can be found by utilizing a dynamic programming algorithm, which is similar to the dynamic time warping algorithm used in speech recognition [Rabiner and Juang, 93] . Currently, the where score(h, k) denotes the local scoring function to evaluate the local passage of matching type \"h \u2212 k\".", |
| "cite_spans": [ |
| { |
| "start": 201, |
| "end": 220, |
| "text": "[Rabiner and Juang,", |
| "ref_id": null |
| }, |
| { |
| "start": 221, |
| "end": 224, |
| "text": "93]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Transfer Lexicon Model", |
| "sec_num": "2.2" |
| }, |
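The dynamic-programming search alluded to above can be sketched as a Gale-and-Church-style recurrence over a small set of matching types. The local scoring function here (log domain, higher is better) is a stand-in for the model's score(h, k), and the penalties are invented for illustration:

```python
# Minimal DP search over matching types for sentence alignment.
# best[i][j] holds the best log-score for aligning the first i source and
# first j target sentences; back[][] records the chosen matching type.

MATCH_TYPES = [(1, 1), (1, 0), (0, 1), (2, 1), (1, 2)]

def align(src_lens, tgt_lens, score):
    m, n = len(src_lens), len(tgt_lens)
    NEG = float("-inf")
    best = [[NEG] * (n + 1) for _ in range(m + 1)]
    back = [[None] * (n + 1) for _ in range(m + 1)]
    best[0][0] = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            if best[i][j] == NEG:
                continue
            for h, k in MATCH_TYPES:
                if i + h <= m and j + k <= n:
                    s = best[i][j] + score(src_lens[i:i+h], tgt_lens[j:j+k], (h, k))
                    if s > best[i + h][j + k]:
                        best[i + h][j + k] = s
                        back[i + h][j + k] = (i, j, (h, k))
    # Trace back the best sequence of matching types.
    path, i, j = [], m, n
    while (i, j) != (0, 0):
        pi, pj, t = back[i][j]
        path.append(t)
        i, j = pi, pj
    return list(reversed(path))

def toy_score(src, tgt, mtype):
    # Illustrative penalties: prefer 1-1, discourage insertions/deletions,
    # and penalize length mismatch between the aligned groups.
    penalty = {(1, 1): 0.0, (2, 1): -1.0, (1, 2): -1.0, (1, 0): -3.0, (0, 1): -3.0}
    return penalty[mtype] - abs(sum(src) - sum(tgt)) / 10.0

print(align([10, 12, 9], [11, 12, 8], toy_score))
```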
| { |
| "text": "In the experiments, a training set consisting of 7, 331 pairs of bilingual sentences, and a testing set with 1, 514 pairs of bilingual sentences are extracted from the Caterpillar User Manual which is mainly about machinery. The cross-style testing set contains 274 pairs of bilingual sentences selected from the Sinorama Magazine, which is a general magazine (for introducing Taiwan to foreign visitors) with its topics covering law, politics, education, technology, science, etc. A Sequential-Forward-Selection (SFS) procedure [Devijver, 82] , based on the performance measured from the Caterpillar User Manual, is then adopted to rank different features. Among them, the Chinese transfer lexicon feature (abbreviated as CTL in the table), which only adopts Normalized-Transfer-Lexicon-Matching-Measure and matching-type priori distribution (i.e., P (type i,j )), is first selected, then CL feature (which adopts character-length), WL feature (using word-length) and EC feature (using English cognate) follow in sequence, as reported in Table 4 .1.", |
| "cite_spans": [ |
| { |
| "start": 529, |
| "end": 539, |
| "text": "[Devijver,", |
| "ref_id": null |
| }, |
| { |
| "start": 540, |
| "end": 543, |
| "text": "82]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1039, |
| "end": 1046, |
| "text": "Table 4", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Performance Evaluation", |
| "sec_num": "4" |
| }, |
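Sequential Forward Selection, as used above to rank the features, is a greedy loop that repeatedly adds whichever remaining feature most improves an evaluation callback. A generic sketch, where the evaluation function and the marginal gains are toy stand-ins for alignment accuracy on a development corpus:

```python
# Generic Sequential-Forward-Selection (SFS) sketch: at each step, add the
# feature whose inclusion maximizes evaluate(selected_features).
# The GAINS table is a toy illustration, not the paper's measurements.

def sfs(features, evaluate):
    selected, order = set(), []
    while len(selected) < len(features):
        best_f, best_s = None, float("-inf")
        for f in features:
            if f in selected:
                continue
            s = evaluate(selected | {f})
            if s > best_s:
                best_f, best_s = f, s
        selected.add(best_f)
        order.append(best_f)
    return order

# Toy evaluation with fixed additive gains per feature; under it, SFS
# recovers the ranking order by marginal usefulness.
GAINS = {"CTL": 0.5, "CL": 0.3, "WL": 0.15, "EC": 0.05}
print(sfs(list(GAINS), lambda sel: sum(GAINS[f] for f in sel)))
```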
| { |
| "text": "The selection sequence verifies our previous supposition that the transfer-lexicon is a more reliable feature and contributes most to the aligning task. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In order to understand more about the behavior of the various features, we classify all errors which occurs in aligning Sinorama Magazine in Table 5 .1; the error dominated by the prior distribution of matching type is called matching-type error, the error dominated by length feature is called lengthtype error, and the error caused from both length features and lexical-related features (either one is not dominant) is called length&lexicon-type error 6 . From Table 5 .1, it is found that the matchingtype errors dominate in the baseline model. To investigate the matching-type error, the prior distributions of matching-types under training set [Caterpillar User Manual] and testing set II [Sinorama Magazine] are given in Table 5 .2. The comparison clearly shows that the matching-type distribution varies significantly across different domains, and that explains why the baseline model (which only considers length-based features and matchingtype distribution) fails to achieve the similar performance in the cross-style test. However, as the \"1-1\" matching-type always dominates in both texts, the matching-type distribution still provide useful information for aligning sentences when it is jointly considered with the lexical-related feature. For those Length-Type errors generated from the baseline model in Table 5 .1, different statistical characteristics across different styles are listed in Table 5 .3. It also clearly shows that the associated statistical characteristics of those length-based features vary significantly across different styles. Furthermore, although English-cognates are reliable cues for aligning bi-lingual sentences and occurs quite a few times in the technical manual (such as company names: IBM, HP, etc., and some special technical terms such as \"RS-232\", etc), they almost never occur in a general magazine such as the one that we test. Therefore, they provide no help for aligning corpus in such domains. 
Table 5 .1 also shows that errors distribute differently in the proposed robust model. The lengthtype, instead of matching-type, now dominates errors, which implies that the mismatching effect resulting from different distributions of matching types has been diluted by the transfer-lexicon feature. Furthermore, the score of erroneous lexicontype assignment never dominates any error found in the proposed robust model, which verifies our supposition that transfer-lexicons are more reliable cues for aligning sentences.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 141, |
| "end": 148, |
| "text": "Table 5", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 463, |
| "end": 470, |
| "text": "Table 5", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 727, |
| "end": 734, |
| "text": "Table 5", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 1318, |
| "end": 1325, |
| "text": "Table 5", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 1406, |
| "end": 1414, |
| "text": "Table 5", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 1949, |
| "end": 1956, |
| "text": "Table 5", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To further investigate those remaining errors generated from the proposed robust model, two error examples are given in Figure 1 ) Generally speaking, if a short source sentence is enclosed by two long source sentences in both sides, and they are jointly translated into two long target sentences, then it is error prone compared with other cases. The main reason is that this short source sentence would contain only a few words and thus its associated transfer- .) The main reason is that the meaning of sentence (E1) is similar to that of (E2) but stated in different words, and the translator has merged the redundant information in his/her translation. Therefore, the length-feature prefers to delete the first source sentence. On the other hand, since most of those associated transfer-lexicons in the source sentence E1 cannot be found in the corresponding target sentence C1, the Transfer-Lexicon feature also prefers to delete the first source sentence E1. It seems that this kind of errors would require further knowledge from language understanding to solve them, and is beyond the scope of this paper. 7 The occurrence rate is defined as \"Number of sentences that contained congates\"/\"Total number of sentences\"", |
| "cite_spans": [ |
| { |
| "start": 1114, |
| "end": 1115, |
| "text": "7", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 120, |
| "end": 128, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Although those length-based approaches are simple and can achieve good performance when they are trained and tested in the corpora of the same style, the performance drops significantly when they are tested in different styles other than that of the training corpora.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "(For instance, the F -measure error increases from 1.8% to 14.4% in our experiment.) The main reason is that the statistical characteristics of those features adopted by the length-based approaches (such as length-distribution, alignment-type-distribution and cognate-frequency) vary significantly from one style to another style.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Since human align sentences mainly by examining the similarity between different meanings conveyed by the given bilingual sentences pair, not by counting the number of characters in sentences, the transfer-lexicon is expected to be the more reliable cue than the sentence length. A robust statistical sentences alignment model, which integrates the associated transfer-lexicons into the original lengthbased model, is thus proposed in this paper. Great improvement has been observed in our experiment, which reduces the F -measure error generated from the length-based model from 14.4% to 5.8%, when the proposed approach is tested in the cross-style case.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Last, length-features, cognate-feature and transfer-lexicon-feature are implicitly assumed to contribute equally in aligning sentences in this paper; however this assumption is not usually held because different features might have various dynamic ranges for their scores and thus contribute differently to discrimination power. To overcome this problem, various features would be weighted differently in the future.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "A more reasonable one should be the first-order Markov model (i.e., Type-Bigram model); however, it will significantly increase the searching time and thus is not adopted in this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Since almost all those English cognates found in the given Chinese sentences can be found in the corresponding English sentences, \u03b4cognate had better to be modeled as a Poisson distribution for a rare event (rather than Normal distribution as some papers did).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For example, the English word \"number\" are found to be translated into \"\u00c9K\", \" K\", \" \u00b6K\", \"\u00c9 K\", \"\u00c9 \u00b6K\", \" \u00b6 }\", \u2022 \u2022 \u2022 etc., for a specific sense in the given corpus; however, the transfer entries listed in the dictionary are \"\u00c9K\" and \" \u00b6 }\" only.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Which is defined as 2pr p+r .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
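The F-measure definition in the footnote (the harmonic mean of precision p and recall r) is a one-liner; the example values below are illustrative, not the paper's measured precision and recall:

```python
# F-measure as defined in the footnote: F = 2pr / (p + r),
# the harmonic mean of precision p and recall r.
def f_measure(p, r):
    return 2 * p * r / (p + r)

# When precision equals recall, F equals both of them.
print(round(f_measure(0.9, 0.9), 6))
```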
| { |
| "text": "In our experiment, we do not find any error dominated by lexical-related feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank both Prof. Hsin-Hsi Chen and Prof. Kuang-Hwa Chen for their kindly providing us the aligned bi-lingual Sinorama Magazine for conducting the above experiment. The appreciation is also extended to our Translation Service Center for providing the bilingual Caterpillar User Manual for this study.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Aligning Sentences in Parallel Corpora", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer", |
| "middle": [ |
| "C" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "L" |
| ], |
| "last": "Lai", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "169--176", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": ". [Brown et al., 91] Peter F. Brown, Jennifer C. Lai, and Robert L. Mercer, (1991). \"Aligning Sentences in Parallel Corpora\", Proceedings of the 29th An- nual Meeting of the Association for Computational Linguistics, pp. 169-176, 18-21 June 1991, UC Berkeley, California, USA.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The Mathematics of Statistical Machine Translation: Parameter Estimation", |
| "authors": [ |
| { |
| "first": "[", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "19", |
| "issue": "", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Brown et al., 93] Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra and Robert L. Mer- cer, (1993). \"The Mathematics of Statistical Ma- chine Translation: Parameter Estimation\", Com- putational Linguistics 19: 263-311.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Aligning Sentences in Bilingual Corpora Using Lexical Information", |
| "authors": [ |
| { |
| "first": "Stanley", |
| "middle": [ |
| "F" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 31th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "22--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Chen, 93] Stanley F. Chen, (1993). \"Aligning Sen- tences in Bilingual Corpora Using Lexical Infor- mation\", Proceedings of the 31th Annual Meeting of the Association for Computational Linguistics, pp. 9-16, 22-26 June 1993, Ohio State University, Columbus, Ohio, USA.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Char align: A Program for Aligning Parallel Texts at the Character Level", |
| "authors": [ |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 31th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Church, 93] Kenneth W. Church, (1993). \"Char align: A Program for Aligning Parallel Texts at the Character Level\", Proceedings of the 31th Annual Meeting of the Association for Com- putational Linguistics, pp.1-8, 22-26 June 1993, Ohio State University, Columbus, Ohio, USA.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Two Language Are More Informative Than One", |
| "authors": [], |
| "year": 1991, |
| "venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "130--137", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": ". [Dagon et al., 91] Ido Dagon, Alon Itai and Ulrike Schwall, (1991). \"Two Language Are More Infor- mative Than One\", Proceedings of the 29th Annual Meeting of the Association for Computational Lin- guistics, pp. 130-137, 18-21 June 1991, UC Berke- ley, California, USA.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Pattern Recognition: A Statistical Approach", |
| "authors": [ |
| { |
| "first": "Pierre", |
| "middle": [ |
| "A" |
| ], |
| "last": "Devijver", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Kittler", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierre A. Devijver and Josef Kittler, (1982). Pattern Recognition: A Statistical Ap- proach, Prentice-Hall Inc., N.J., USA, 1982.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A Program for Aligning Sentences in Bilingual Corpora", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gale", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "", |
| "pages": "75--102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Gale and Church, 93] William A. Gale and Ken- neth W. Church, (1991). \"A Program for Aligning Sentences in Bilingual Corpora\", Computational Linguistics 19:75-102.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A Comparison of Alignment Models for Statistical Machine Translation", |
| "authors": [], |
| "year": 2000, |
| "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1086--1090", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Och and Ney, 2000] Franz Josef Och and Hermann Ney, (2000). \"A Comparison of Alignment Models for Statistical Machine Translation\", Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pp. 1086-1090, 1-8 Oc- tober 2000, Hong Kong.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A Corpus-Based Statistics-Oriented Two-Way Design", |
| "authors": [ |
| { |
| "first": "Lawrence", |
| "middle": [], |
| "last": "Rabiner", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [ |
| "H" |
| ], |
| "last": "Juang", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of TMI-95", |
| "volume": "10", |
| "issue": "", |
| "pages": "334--353", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Rabiner and Juang, 93] Lawrence Rabiner and B.H. Juang, (1993). Fundamentals of Speech Recognition, Prentice-Hall Inc., N.J., USA, 1993. 10. [Su et al., 95] K. Y. Su, J. S. Chang and Una Hsu, (1995). \"A Corpus-Based Statistics-Oriented Two-Way Design\", Proceedings of TMI-95, Vol. II, pp. 334-353, Centre for Computational Linguistics, Katholieke Universiteit Leuven, Leuven, Leuven, Belgium, July 5-7, 1995.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A Customizable, Self-Learnable Parameterized MT System: Text Generation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of MT SUMMIT VII", |
| "volume": "", |
| "issue": "", |
| "pages": "182--188", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Su and Chang, 99] K. Y. Su and J. S. Chang, (1999). \"A Customizable, Self-Learnable Param- eterized MT System: Text Generation\", Proceed- ings of MT SUMMIT VII, pp. 182-188, Singapore. (Invited Talk)", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "HMM-Based Word Alignment in Statistical Translation", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| }, |
| { |
| "first": "Christoph", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "836--841", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Vogel et al., 96] Stephan Vogel, Hermann Ney and Christoph Tillmann, (1996). \"HMM-Based Word Alignment in Statistical Translation\", Pro- ceedings of the 34th Annual Meeting of the Associ- ation for Computational Linguistics, pp. 836-841, 24-27 June 1996, UC Santa Cruz, California, USA.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Aligning a Parallel English-Chinese Corpus Statistically with Lexical Criteria", |
| "authors": [], |
| "year": 1994, |
| "venue": "Proceedings of the 32th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "80--87", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "[Wu, 94] Dekai Wu, (1994). \"Aligning a Parallel English-Chinese Corpus Statistically with Lexical Criteria\", Proceedings of the 32th Annual Meeting of the Association for Computational Linguistics, pp. 80-87, 27-30 June 1994, New Mexico State University, Las Cruces, New Mexico, USA.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "An illustration of length&lexical type error maximum number of either source sentences or target sentences allowed in each alignment unit is set to be \"4\" (i.e., we will not consider those matchingtypes of \"5 \u2212 1\", \"5 \u2212 2\", \"1 \u2212 5\", etc).Let{s 1 , \u2022 \u2022 \u2022 , s m } and {t 1 , \u2022 \u2022 \u2022 , t n }be the parallel bilingual source and target sentences, and let S(m, n) be the maximum accumulated score between {s 1 , \u2022 \u2022 \u2022 , s m } and {t 1 , \u2022 \u2022 \u2022 , t n } under the best alignment path. Then S(m, n) can be evaluated recursively with the initial condition of S(0, 0) = 0 in the following way: S(m, n) = max 0\u2264h,k\u22644 S(m \u2212 h, n \u2212 k) + score(h, k), (3.1)" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Figure 1is an illustration of bilingual Sinorama Magazine texts.For comparing the performance of alignment, both precision rate (p) and recall rate (r), defined as follows, are measured; however, only their associated F -measure 5 is reported for saving space.p = [Number of correct alignment-passages in system output] [Total number of all alignment-passages generated from system output] , correct alignment-passages in system output] [Total number of all alignment-passages contained in benchmark corpus] ." |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": ". The first case shows an example of \"Length-Type Error\", in which the short sentence (E2) is erroneously merged with the long sentence (E1) and results in an erroneous alignment [E1, E2 : C1] and [E3 : C2]. (The correct alignment should be [E1 : C1] and [E2, E3 : C2]." |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td>(E1)</td><td/><td/></tr><tr><td>(C1)</td><td colspan=\"2\">\u00f3\u00fa\u00ed, \u00dbHA\u00d0\u00f4\u00df oeo , <\"v\u00de\u00a2wu, \u00de>\u0178byu\u00d7\u00c1, \u00f8\u00de2~%V\u00e3\u0178b\u00f3$\u00d3\u2039, \u00e4qaeP66\u00d3\u00d66.-\u00d1J7</td></tr><tr><td>(E2)</td><td>The problem is not confined to women.</td><td/></tr><tr><td>(E3)</td><td colspan=\"2\">\"Sperm activity also noticeably decreases in men over forty,\" says Taipei Medical College urologist Chang Han-sheng.</td></tr><tr><td>(C2)</td><td>.\u00c9u\u00de4, \u00e34\u00ca\u00fb uJ(, \u00a5\u00ba6}p\u00e9\u00c1\u00fd'\u00d6 C\"\u00bb\u00e7\u00cd+</td><td>3L\u00b1\u00c7;z</td></tr><tr><td colspan=\"2\">Case II (Length&Lexicon-Type Error)</td><td/></tr><tr><td>(E1)</td><td colspan=\"2\">Second, the United States as well as Japan have provided lucrative export markets for countries in this region.</td></tr><tr><td>(E2)</td><td colspan=\"2\">The U.S. was particularly generous in the postwar years, keeping its markets open to products from Asia and giving nascent</td></tr><tr><td/><td>industries in the region a chance to catch up.</td><td/></tr><tr><td>(C1)</td><td>w\u0178, D(\u00ed1\u00c5\u00ear\u00c7[\u00aa\u00a8=\u00b9\u00df\u00b9, U=\u00b9\u00cb-\u00edhE\u00c5\u00f0 oe}\u00ea</td><td/></tr></table>", |
| "type_str": "table", |
| "text": "Compared to this, modern people have relatively better nutrition and mature faster, working women marry later, and there has been a great decrease in frequency of births, so that the number of periods in a lifetime correspondingly increases, so it is not strange that the number of people afflicted with endometriosis increases greatly.", |
| "num": null, |
| "html": null |
| }, |
| "TABREF1": { |
| "content": "<table><tr><td/><td>Training Set</td><td>Testing Set I</td><td>Testing Set II</td></tr><tr><td/><td colspan=\"3\">[Caterpillar User Manual] [Caterpillar User Manual] [Sinorama Manazine]</td></tr><tr><td>Baseline Model</td><td>98.91</td><td>98.21</td><td>85.56</td></tr><tr><td>CTL</td><td>98.26</td><td>97.51</td><td>97.51</td></tr><tr><td>CTL+CL</td><td>99.32</td><td>98.19</td><td>89.61</td></tr><tr><td>CTL+CL+WL</td><td>99.61</td><td>98.83</td><td>94.07</td></tr><tr><td>CTL+CL+WL+EC</td><td>99.75</td><td>99.11</td><td>94.16</td></tr><tr><td colspan=\"4\">Table 4.1: Performance (F -measure %) of each model and SFS</td></tr><tr><td colspan=\"2\">result also indicates that the length-related features</td><td/><td/></tr><tr><td colspan=\"2\">are still useful, even though they are relatively un-</td><td/><td/></tr><tr><td>reliable.</td><td/><td/><td/></tr></table>", |
| "type_str": "table", |
| "text": ".1 clearly shows that the proposed robust model achieves a 60% F -measure error reduction (from 14.4% to 5.8%) compared with the baseline model (i.e., improving the cross-style performance from 85.6% to 94.2% in F -measure). The", |
| "num": null, |
| "html": null |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td/><td/><td/><td colspan=\"5\">2: Comparison of prior distributions</td><td/></tr><tr><td/><td>Cognate</td><td/><td colspan=\"3\">Length Features (\u03b4 c , \u03b4 w )</td><td/><td colspan=\"3\">\u03b4 cognate \u03b4 T ransf er\u2212Lexicon</td></tr><tr><td/><td>Occurrence Rate 7</td><td>c</td><td>s 2 c</td><td>w</td><td>s 2 w</td><td>r</td><td>\u03bb</td><td>\u00b5</td><td>\u03c3 2</td></tr><tr><td>Caterpillar</td><td>36.4%</td><td colspan=\"5\">0.65 0.87 3.45 6.09 -0.02</td><td>0.06</td><td>-0.72</td><td>0.21</td></tr><tr><td>Sinorama</td><td>1.1%</td><td colspan=\"5\">0.59 1.79 2.76 7.80 -0.46</td><td>0.25</td><td>-0.60</td><td>0.02</td></tr><tr><td/><td colspan=\"7\">Table 5.3: List of all associated parameters</td><td/></tr><tr><td colspan=\"5\">lexicons are not sufficient enough to override the</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">wrong preference given by the length-based feature</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">(which would assign similar score to both merge-</td><td/><td/><td/><td/></tr><tr><td>directions).</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"5\">The second case shows an example of</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">\"Length&Lexicon-Type Error\", in which the</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">source sentence (E1) is erroneously deleted and</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">results in an erroneous alignment [E1: Delete] and</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">[E2 : C1]. (The correct alignment should be [E1,</td><td/><td/><td/><td/></tr><tr><td>E2 : C1]</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
| "type_str": "table", |
| "text": "", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |