| { |
| "paper_id": "O97-4004", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:10:34.875968Z" |
| }, |
| "title": "Aligning More Words with High Precision for Small Bilingual Corpora", |
| "authors": [ |
| { |
| "first": "Sue", |
| "middle": [ |
| "J" |
| ], |
| "last": "Ker", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chang", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper, we propose an algorithm for identifying each word with its translations in a sentence and translation pair. Previously proposed methods require enormous amounts of bilingual data to train statistical word-byword translation models. By taking a word-based approach, these methods align frequent words with consistent translations at a high precision rate. However, less frequent words or words with diverse translations generally do not have statistically significant evidence for confident alignment. Consequently, incomplete or incorrect alignments occur. Here, we attempt to improve on the coverage using class-based rules. An automatic procedure for acquiring such rules is also described. Experimental results confirm that the algorithm can align over 85% of word pairs while maintaining a comparably high precision rate, even when a small corpus is used in training.", |
| "pdf_parse": { |
| "paper_id": "O97-4004", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper, we propose an algorithm for identifying each word with its translations in a sentence and translation pair. Previously proposed methods require enormous amounts of bilingual data to train statistical word-byword translation models. By taking a word-based approach, these methods align frequent words with consistent translations at a high precision rate. However, less frequent words or words with diverse translations generally do not have statistically significant evidence for confident alignment. Consequently, incomplete or incorrect alignments occur. Here, we attempt to improve on the coverage using class-based rules. An automatic procedure for acquiring such rules is also described. Experimental results confirm that the algorithm can align over 85% of word pairs while maintaining a comparably high precision rate, even when a small corpus is used in training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Research based on bilingual corpora has attracted an increasing amount of attention. Brown et al. (1990) advocated a statistical approach to machine translation (SMT) based on the bilingual Canadian Parliamentary debates. The SMT approach can be understood as a word-by-word model consisting of two sub-models: a language model for generating a source text segment ST and a translation model for mapping ST to a target text segment TT. They recommend using an aligned bilingual corpus to estimate the parameters in the translation model. Various levels of alignment resolution are possible; from section, paragraph, sentence, phrase, to word. In the process of word alignment, translation of each source word is identified. Their study focused primarily on identifying word-level alignment. In the context of SMT, Brown et al. (1993) presented a series of five models for estimating translation probability. The first two models have been used in research on word alignment. Model 1 assumes that translation probability depends only on lexical translation probability. Model 2 enhances Model 1 by considering the dependence of translation probability on the distortion probability.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 104, |
| "text": "Brown et al. (1990)", |
| "ref_id": null |
| }, |
| { |
| "start": 814, |
| "end": 833, |
| "text": "Brown et al. (1993)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "There are statistical tools that can help determine the relative association strength of bilingual word pairs with respect to translatability. Gale and Church (1991) used 1 2 to identify the word correspondence from a bilingual corpus while Fung and Church (1994) proposed a K-vec approach, which is based on a k-way partitioning of the bilingual corpus, to acquire a bilingual lexicon. Such tools usually provide low coverage due to the fact that low frequency words are in the majority and high frequency words tend to have diverse translations.", |
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 165, |
| "text": "Gale and Church (1991)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 241, |
| "end": 263, |
| "text": "Fung and Church (1994)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Estimates of word-to-word translation probability based on lexical co-occurrence (Gale and Church, 1991; Kay and Roscheisen, 1993; Fung and Church, 1994; Fung and McKeown, 1994; Utsuro, Ikeda, Yamane, Matsumoto and Nagao, 1994; Smadja, McKeown and Hatzivassiloglou, 1996) are highly unreliable for sparse data. In general, some kind of filtering is required to reduce noise. This leads to a low coverage rate. For instance, Gale and Church (1991) reported that their 1 2 method produced highly precise (95%) alignment for only 61% of the words in 800 sentences tested. Wu and Xia (1994) employed the EM algorithm (Dempster, Laird and Rubin, 1977) to find the optimal word alignment from a sentence-aligned corpus. The authors claim that they obtained a high precision rate of between 86% and 96%. However, the coverage rate was not reported.", |
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 104, |
| "text": "(Gale and Church, 1991;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 105, |
| "end": 130, |
| "text": "Kay and Roscheisen, 1993;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 131, |
| "end": 153, |
| "text": "Fung and Church, 1994;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 154, |
| "end": 177, |
| "text": "Fung and McKeown, 1994;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 178, |
| "end": 227, |
| "text": "Utsuro, Ikeda, Yamane, Matsumoto and Nagao, 1994;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 228, |
| "end": 271, |
| "text": "Smadja, McKeown and Hatzivassiloglou, 1996)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 424, |
| "end": 446, |
| "text": "Gale and Church (1991)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 569, |
| "end": 586, |
| "text": "Wu and Xia (1994)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 613, |
| "end": 646, |
| "text": "(Dempster, Laird and Rubin, 1977)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The above survey clearly indicates that word-based methods offer limited lexical coverage even after they are trained with a very large bilingual corpus. For most applications, low coverage is just as serious a problem as low precision. For aligned corpora to be useful for NLP tasks, such as machine translation and word sense disambiguation, a coverage rate higher than 60% is desirable, even at the expense of a slightly lower precision rate. A bilingual corpus with all instances of polysemous words correctly connected to their translations provides valuable training material for developing a WSD system (Gale, Church, and Yarowsky, 1992) .", |
| "cite_spans": [ |
| { |
| "start": 610, |
| "end": 644, |
| "text": "(Gale, Church, and Yarowsky, 1992)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In this paper, we propose a word-alignment algorithm, SenseAlign, based on classes derived from sense-related categories in existing thesauri. SenseAlign relies on an automatic procedure to acquire class-based alignment rules (Ker and Chang 1996) . To make even broader coverage possible, we exploit additional sources of knowledge; connections that are evident from some, but not necessarily all, knowledge sources can still be aligned. The algorithm aligns over 85% of word pairs with a comparably high precision rate of 90%.", |
| "cite_spans": [ |
| { |
| "start": 226, |
| "end": 246, |
| "text": "(Ker and Chang 1996)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The rest of this paper is organized as follows. The next section describes SenseAlign and discusses its main components. Section 3 provides illustrative examples taken from the Longman English-Chinese Dictionary of Contemporary English (Longman 1992, LecDOCE, henceforth) . Section 4 summarizes the experimental results. Additionally, typological and quantitative error analyses are also reported. Section 5 compares SenseAlign to several other approaches that have been proposed in the literature of computational linguistics. Finally, Section 6 considers ways in which the proposed algorithm might be extended and improved.", |
| "cite_spans": [ |
| { |
| "start": 236, |
| "end": 271, |
| "text": "(Longman 1992, LecDOCE, henceforth)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "SenseAlign is a class-based word alignment system that utilizes both existing and acquired knowledge. The system contains the following components and distinctive features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Word Alignment Algorithm", |
| "sec_num": "2." |
| }, |
| { |
| "text": "In this work, the categories for Chinese text are taken from a thesaurus for Mandarin Chinese (Mei, Zhu, Gao, and Yin 1993, CILIN henceforth) . The categories for English text are taken from the Longman Lexicon of Contemporary English (McArthur 1992, LLOCE henceforth) . The division of words into semantic categories in the two thesauri is somewhat different. The categories in CILIN are organized as a conceptual ontology of three levels: gross categories, intermediate categories and detailed categories.", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 141, |
| "text": "(Mei, Zhu, Gao, and Yin 1993, CILIN henceforth)", |
| "ref_id": null |
| }, |
| { |
| "start": 235, |
| "end": 268, |
| "text": "(McArthur 1992, LLOCE henceforth)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Thesauri", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Unlike CILIN, the categories of LLOCE are organized primarily according to subject matter. The LLOCE categories are also organized as three levels: subjects, titles and sets. In the first level, fourteen major subjects are denoted with reference letters from A to N. For detailed descriptions of CILIN and LLOCE see Appendices A and B.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Thesauri", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Two taggers are utilized to resolve part-of-speech ambiguity. Morphological and idiom analyses are also performed to determine the lexical unit and lexeme. Only thesaurus categories consistent with the part-of-speech determined in the analysis are considered in subsequent processes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Analyses", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The part-of-speech taggers for the two languages involved are built using a strategy proposed by Brill (1992) . We use the tag set in the Brown Corpus for the English tagger and the part-of-speech system proposed by Chao (1968) for the Chinese tagger. Tables 1.1 and 1.2 present the two tag sets.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 109, |
| "text": "Brill (1992)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 216, |
| "end": 227, |
| "text": "Chao (1968)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Analyses", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "To eliminate the difficult cases of 0-1 fertility (one target word aligned with nothing in the source sentence), certain morpho-syntactical constructs in Chinese are identified. They are, mainly, Chinese constructions (see Table 2 ) that have no parallel in English, such as direction or phrase complements (Di, VH, or Ng) following a verb and measure nouns (Nf) following a determinant/quantifier (Ne). Tables 3.1 and 3.2 list the outputs of the two taggers for Examples (1e, 1c). ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 223, |
| "end": 230, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lexical Analyses", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "< (leisheng) Na Bf06 , (tongchang) Dd Ka10 _ (suizhe) V+Di Hj36 \u00c2= (shandian) Na Bf06 \u00d6 (er) C Kc02, Kc03, Kc08 (lai) V Hj12, Hj63, Jd07", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Analyses", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The main mechanism of SenseAlign is the class-based alignment rules. Those rules form a subset of the Cartesian product of the categories in the two thesauri. We were inspired by the revision model proposed by Brill and Resnik (1994) in designing an automatic acquisition procedure for alignment rules. The procedure employs the greedy method to find a set of rules capable of providing optimal alignment in a bilingual corpus.", |
| "cite_spans": [ |
| { |
| "start": 210, |
| "end": 233, |
| "text": "Brill and Resnik (1994)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Greedy Learner", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The rule capable of providing the most instances of plausible alignment is preferred and selected first. First, the bilingual example sentences go through some lexical analyses. The lexemes are then looked up in the thesauri to find the possible categories under which they may be listed. At this stage, no information is available regarding what classes of words are likely to align with each other. Second, we randomly match up words and their categories across the sentence pairs to form tentative alignment rules. Third, after producing tentative alignment rules for all the sentences, we make a conservative estimate of applicability. The rule with the highest estimated applicability is selected. Sentences where the rule applies are identified. The matched connections (s, t) in those sentences are removed. In addition, connections (s, t') and (s', t) for all s'S s and all t'S t are removed because they are inconsistent with the selection of (s, t). The acquisition process is repeated for the remaining data until applicability of the best rule runs below a certain threshold. The learning algorithm can be applied to acquire rules having different levels of resolution.We have run the learning algorithm on the 25,000 bilingual examples in LecDOCE. This procedure for learning rules has been applied to the detailed categories of CILIN and the topical sets of LLOCE to produce 392 rules. Table 4.1 and Table 4 .2 present the ten rules with high and middle applicability. Figure 1 shows the accumulative applicability distributions of 392 rules. Obviously, these 392 rules do not cover all English words, nor all Chinese words. To remedy this problem, the procedure is repeated for broader 2-letter classes represented by topics in LLOCE and intermediate categories in CILIN. See Table 4 .3 for three rules acquired on the 2-letter level. Table 5 presents the number of rules acquired on two levels of resolution. No. of rules Applicability (%)", |
| "cite_spans": [ |
| { |
| "start": 776, |
| "end": 782, |
| "text": "(s, t)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1400, |
| "end": 1421, |
| "text": "Table 4.1 and Table 4", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 1483, |
| "end": 1492, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1792, |
| "end": 1799, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 1851, |
| "end": 1858, |
| "text": "Table 5", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Greedy Learner", |
| "sec_num": "2.3" |
| }, |
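The greedy acquisition loop described above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the data structures, function name, and threshold are all ours, and the conservative applicability estimate is reduced to a raw count of remaining connections.

```python
def learn_rules(sentences, en_classes, zh_classes, threshold=2):
    """Greedy acquisition of class-based alignment rules (toy sketch).

    sentences: list of (english_words, chinese_words) pairs.
    en_classes / zh_classes: word -> list of thesaurus class labels.
    """
    # Tentative connections: every cross-sentence word pair, tagged with
    # every candidate rule (C, D) licensed by the thesauri.
    pending = []
    for sid, (en, zh) in enumerate(sentences):
        for s in en:
            for t in zh:
                for C in en_classes.get(s, []):
                    for D in zh_classes.get(t, []):
                        pending.append((sid, s, t, (C, D)))
    rules = []
    while pending:
        # Estimate each candidate rule's applicability by counting the
        # remaining connections it would align.
        counts = {}
        for _, _, _, rule in pending:
            counts[rule] = counts.get(rule, 0) + 1
        best = max(counts, key=counts.get)
        if counts[best] < threshold:  # stop once the best rule is too rare
            break
        rules.append(best)
        # Remove the matched connections, plus any (s, t') or (s', t) that
        # shares a source or target with a match -- now inconsistent.
        matched = {(sid, s, t) for sid, s, t, r in pending if r == best}
        srcs = {(sid, s) for sid, s, _ in matched}
        tgts = {(sid, t) for sid, _, t in matched}
        pending = [c for c in pending
                   if (c[0], c[1]) not in srcs and (c[0], c[2]) not in tgts]
    return rules
```

On a one-sentence toy corpus where both English words fall under LLOCE set Lc058 and both Chinese words under CILIN category Bf06, the sketch recovers the single rule (Lc058, Bf06).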
| { |
| "text": "The cumulative applicability of the acquired rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 1", |
| "sec_num": null |
| }, |
| { |
| "text": "When matching words in ST and TT against a rule, we use the term fan-out to denote the number of words that match the rule. For instance, for a rule r = (C, D), and a sentence pair (ST, TT), the rule r has a fan-out of n-m if there are n and m words in ST and TT listed under classes C and D, respectively. The degree of fan-out of a connection applying rule r is given by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fan-out", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "F = n~m.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fan-out", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "For instance, the learner produces the connections shown in Table 6 for Example 1. Both \"lightning\" and \"thunder\" in (1e) are listed under LLOCE set Lc058 (thunder and lightning) while both \" < (leisheng, thunder-sound)\" and \" \u00c2= (shandian, flash-electricity)\" in (1c) are listed under CILIN category Bf06 ( < (lei, thunder); \u00c2 = (shandian, flash-electricity)). Therefore, the rule (Lc058, Bf06) applies to the first four connections shown in Table 6 . The rule is said to have a fan-out value of 2-2 in sentence pair (1e, 1c). On the other hand, the rule (Nc056, Ka10) applies to only one connection (usually, , (usually, normally)) with a 1-1 fan-out. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 60, |
| "end": 67, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 443, |
| "end": 450, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Fan-out", |
| "sec_num": "2.4" |
| }, |
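Computing the fan-out of a rule for a sentence pair is straightforward to sketch; the helper below is our own framing, not code from the paper, and simply counts the source and target words listed under the rule's two classes, reproducing the 2-2 and 1-1 values discussed above.

```python
def fan_out(rule, st_classes, tt_classes):
    """Fan-out n-m of rule (C, D) for one sentence pair.

    st_classes / tt_classes: one set of thesaurus class labels per word
    of the source and target sentences, respectively.
    """
    C, D = rule
    n = sum(1 for classes in st_classes if C in classes)  # source words in C
    m = sum(1 for classes in tt_classes if D in classes)  # target words in D
    return (n, m)
```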
| { |
| "text": "_(suizhe, accompany) (Kb032, Hj36) 1-1 VB Kb032 accompany V Hj12 (lai, come) (Kb032, Hj12) 1-1 VB Kb032 accompany V Hj63 (lai come) (Kb032, Hj63) 1-1 VB Kb032 accompany V Jd07 (lai come) (Kb032, Jd07) 1-1 VB Mb053 accompany V+Di Hj36 _(suizhe, accompany) (Mb053, Hj36) 1-1 VB Mb053 accompany V Hj12 (lai, come) (Mb053, Hj12) 1-1 VB Mb053 accompany V Hj63 (lai, come) (Mb053, Hj63) 1-1 VB Mb053 accompany V Jd07 (lai, come) (Mb053, Jd07) 1-1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fan-out", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Talbe 6 The tentative connections for Example (1e, 1c)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fan-out", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Some rules are more specific because they apply to two small sets of words in the thesauri. The more specific a rule, the more likely it applies to words that are interchangeable translations. Therefore, we define the specificity S for a connection (s, t) to which a rule r = (C, D) is applicable as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Specificity", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "S = -log ( Pr(x C )~Pr(y D) ) if (s, t) (C, D) R, = 0 otherwise,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Specificity", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "where R is the set of acquired rules; Pr(x C ) and Pr(y D) are the probabilities of generating words x and y in classes C and D, respectively. Thus, the specificity S of r reflects the probability of generating, by chance, a pair of words (s, t), s C and t D to which r is applicable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Specificity", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "Assuming that the distribution of the words in a given class is uniform, the degree of specificity of a connection applying a rule r is S r = -log ( For instance, consider Example (2e, 2c), where the rule (Ac053, Bi08) is used to connect \"cat\" to \" % \". The number of words in LLOCE class \"Ac053\" is 34, and the number of words in CILIN class \"Bi08\" is 42. Therefore, we obtain the following specificity:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Specificity", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "S (Ac053, Bi08) = -log ( ~ )= 10.48.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Specificity", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "(2e) I only knew that it is the dog not the cat that bit me. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Specificity", |
| "sec_num": "2.5" |
| }, |
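Under the uniform-distribution assumption, the specificity computation can be sketched directly. The base-2 logarithm is our inference (the paper writes only "-log"); it reproduces the reported value of 10.48 for |C| = 34 and |D| = 42.

```python
import math

def specificity(size_c, size_d):
    # S_r = -log2( (1/|C|) * (1/|D|) ), assuming words are uniformly
    # distributed within each thesaurus class. Base-2 log matches the
    # paper's worked example for the rule (Ac053, Bi08).
    return -math.log2((1.0 / size_c) * (1.0 / size_d))
```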
| { |
| "text": "The acquired rules may have different degrees of applicability. Applicability refers to the number of instances of word pairs in the bilingual corpus to which is applicable. The higher the applicability an alignment rule has, the more reliable are the connections it predicts. Furthermore, including the factor of applicability also results in more connections being chosen. Therefore, we define applicability for a connection (s, t) to which a rule r = (C, D) is applicable as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Applicability", |
| "sec_num": "2.6" |
| }, |
| { |
| "text": "Applicability : A r = C r B ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Applicability", |
| "sec_num": "2.6" |
| }, |
| { |
| "text": "where C r is the number of connections for which the rule r is applicable in the corpus, and B is the number of bilingual sentences in the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Applicability", |
| "sec_num": "2.6" |
| }, |
| { |
| "text": "For instance, consider the rule (Ac053, Bi08) in Example (2e, 2c) again. There are 55 instances of connections in a corpus of 25,000 sentences to which (Ac053, Bi08) is applicable. Therefore, we can obtain the following applicability:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Applicability", |
| "sec_num": "2.6" |
| }, |
| { |
| "text": "A (Ac053, Bi08) = = 0.0022.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Applicability", |
| "sec_num": "2.6" |
| }, |
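The applicability computation is a single ratio; a minimal helper (our own naming, not the paper's code) reproduces the 55 / 25,000 example:

```python
def applicability(rule_connections, corpus_sentences):
    # A_r = C_r / B: connections the rule covers, over corpus size.
    return rule_connections / corpus_sentences
```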
| { |
| "text": "We use a new distortion measure in addition to the alignment rules to evaluate the plausibility of a connection candidate. To be more specific, we adopt the relative distortion to form a model of position which is much smaller than that obtained based on absolute position. This choice is based on the observation that many language con-structions are preserved in the translation process. Therefore, the target position of a connection relative to that of some connection in the same construction has a much smaller variance in statistical distribution. However, we use an approximation of relative distortion for lack of structural analysis. Assuming that some connections have been selected, we can always evaluate a candidate (s, t) relative to these connections. Let s and t be the ith and jth words in ST and TT, respectively. There exist two closest connections, (i L , j L ) and (i R , j R ), on both sides of s. Relative distortion rd(s, t) is approximated using the following formula:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relative Distortion", |
| "sec_num": "2.7" |
| }, |
| { |
| "text": "rd(s, t) = min( |d L |, |d R | ) where d L = ( j -j L ) -( i -i L ), d R = ( i -i R ) -( j -j R ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relative Distortion", |
| "sec_num": "2.7" |
| }, |
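The rd approximation can be sketched directly from the formula above. The function and its argument convention (passing the closest already-selected connections on either side of the source word) are our own framing, not the authors' code.

```python
def relative_distortion(i, j, left, right):
    """Approximate rd(s, t) for a candidate connection.

    s is the i-th word of ST and t the j-th word of TT; left = (iL, jL)
    and right = (iR, jR) are the closest already-selected connections on
    either side of s.
    """
    iL, jL = left
    iR, jR = right
    dL = (j - jL) - (i - iL)   # offset relative to the left neighbor
    dR = (i - iR) - (j - jR)   # offset relative to the right neighbor
    return min(abs(dL), abs(dR))
```

For Example (3), once (questions, wunti) at positions (4, 8) is selected, the candidate (all, suoyiou) at positions (3, 7) gets rd = 0, matching the re-evaluation described in the text.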
| { |
| "text": "Empirical data confirm that connections with small rd values are more likely to be correct. Figure 2 indicates that a candidate with 0 rd values is much more probable (nearly .80) as a correct connection than is a candidate with 0 absolute distortion (.43). In the following, Example (3e, 3c) demonstrates how distortions play an influential role in determining the correct connection.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 92, |
| "end": 100, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Relative Distortion", |
| "sec_num": "2.7" |
| }, |
| { |
| "text": "(3e) Please 1 answer 2 all 3 questions 4 on 5 this 6 list 7 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relative Distortion", |
| "sec_num": "2.7" |
| }, |
| { |
| "text": "(3c)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relative Distortion", |
| "sec_num": "2.7" |
| }, |
| { |
| "text": "\u00ee 1 \u00e0 2 ! 3 d 4 B 5 p 6 \u00ac \u00b6 7 8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relative Distortion", |
| "sec_num": "2.7" |
| }, |
| { |
| "text": "qing huida ben biao shang zh suoiou wunti Please answer this list on CTM all question After lexical processing for both languages is completed, the initial estimations of relative distortion for all possible connection candidates are calculated (Table 7 .1). Notably, many true connections receive a value of 3 for their relative distortion. These large rd values are due to forward transfer of the prepositional phrase \"on this list\" to the front of the attached noun phrase, \"all questions.\" However, the rd estimates become more and more accurate with each iteration. For instance, if the connection (question, wunti) is selected, the rd for the candidate (all, \u00ac \u00b6 suoyiou) is re-evaluated correctly at 0. Similarly, if (list, d biao) is selected, the rd for the candidate (this, ! ben) is also re-evaluated at 0. Tables 7.1 through 7.3 provide further details. (ben, this) 3 Ne -3 0 0 Table 7 .3 The relative distortion if (list, d ) is selected initially.", |
| "cite_spans": [ |
| { |
| "start": 866, |
| "end": 877, |
| "text": "(ben, this)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 245, |
| "end": 253, |
| "text": "(Table 7", |
| "ref_id": "TABREF13" |
| }, |
| { |
| "start": 890, |
| "end": 897, |
| "text": "Table 7", |
| "ref_id": "TABREF13" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Relative Distortion", |
| "sec_num": "2.7" |
| }, |
| { |
| "text": "English Position POS Chinese Position POS d L d R rd answer 2 VB \u00e0 ( huida, answer) 2 V 0 1 0 all 3 AT \u00ac \u00b6 (suoyiou,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relative Distortion", |
| "sec_num": "2.7" |
| }, |
| { |
| "text": "Dictionary translations are an important knowledge source for word alignment. Approximately 40% of the targets in correct connections have at least one Chinese character in common with dictionary translations for the corresponding source word (Ker and Chang, 1997) . Such a target and translation can be thought of as synonyms. Consider Example (1e, 1c) again. Four translations are listed in LecDOCE for \"accompany\":", |
| "cite_spans": [ |
| { |
| "start": 243, |
| "end": 264, |
| "text": "(Ker and Chang, 1997)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity between Connection Target and Dictionary Translations", |
| "sec_num": "2.8" |
| }, |
| { |
| "text": "1. \" \u00f4 \" (ban, keep somebody company), 2. \" \u00a9 \" (pei, to be with somebody), 3. \" _ \" (suei, to follow), and 4. \" \u00f4\u00c9 \" (banzou, to make supporting music for). The connection target \" _ \" (suizhe, to follow + ASP) of \"accompany\" has one character in common with the third translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity between Connection Target and Dictionary Translations", |
| "sec_num": "2.8" |
| }, |
| { |
| "text": "To exploit this thesaury effect (Fujii and Croft 1993) in translation, we need a way to measure the similarity between words. The similarity measure of the Dice coefficient (Dice 1945 ) seems to be a good choice. Equation 1shows the formulation of the Dice coefficient. An unweighted version of the Dice coefficient shown in Equation 2can also be used for simplicity: In Example (4e, 4c), connection candidates such as (yesterday, O\u00ef zhuotian), (today, \u00d3\u00ef jintian), and (feel, o\u00a8juede) match the entries in LecDOCE completely; therefore, they receive a similarity value of 1. On the other hand, the connection (ill, \u00c1;/ (bushufu, not comfortable)) receives a similarity score of 0.33 since it shares the Chinese character \" \u00c1 \" (bu, no) with an LecDOCE translation of \"ill,\" \" \u00c1\u00e8 \" (buhaode, not good).", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 54, |
| "text": "(Fujii and Croft 1993)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 173, |
| "end": 183, |
| "text": "(Dice 1945", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity between Connection Target and Dictionary Translations", |
| "sec_num": "2.8" |
| }, |
| { |
| "text": "(1) 2 w E w C w D k 1 |E | k 1 |D | k 1 |C | ( ) ( ) ( ) k k k = = = \u2211 + \u2211 \u2211 , (2) 2 |E | | C | |D | +", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity between Connection Target and Dictionary Translations", |
| "sec_num": "2.8" |
| }, |
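The unweighted variant of Equation (2) can be sketched over character sets; this helper is our own illustration, and it treats the character inventories of the two strings as plain sets, which matches the worked examples in the text.

```python
def dice_similarity(word, translation):
    # Unweighted Dice coefficient, Equation (2): sim = 2|E| / (|C| + |D|),
    # with C and D the character sets of the two strings and E = C ∩ D.
    c, d = set(word), set(translation)
    return 2 * len(c & d) / (len(c) + len(d))
```

Identical strings score 1.0, while 不舒服 vs. 不好的 share only 不 and score 2/6 ≈ 0.33, as in the (ill, 不舒服) example.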
| { |
| "text": "(4e) Yesterday I was ill but today I am feeling A-1. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Similarity between Connection Target and Dictionary Translations", |
| "sec_num": "2.8" |
| }, |
| { |
| "text": "As mentioned earlier, Brown et al's Model 2 (1993) stipulates that a connection be given a probability value as the product of lexical translation probability and distortion probability under the assumption of independency. In the same spirit, we give a composite probabilistic value for each connection candidate by multiplying the probabilities of these factors. Therefore, the formula of evaluating the composite probability is as follows: The probabilities of these factors are estimated according to the principle of maximum likelihood estimation (MLE). For instance, if there are k connections in a sample of n candidates (s, t) whose degree of fan-out is f, then the alignment probability Prob(s, t | f) for each (s, t) is given the same MLE value, i.e. Prob(s, t | f) = k/n for all pair (s, t). By using a small sample of a few hundred sentences, the MLE probabilities for various factors can be estimated quite reliably. Table 8 Factor types with MLE probability.", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 50, |
| "text": "Brown et al's Model 2 (1993)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 930, |
| "end": 937, |
| "text": "Table 8", |
| "ref_id": "TABREF16" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation of Connection Candidates", |
| "sec_num": "2.9" |
| }, |
| { |
| "text": "Prob", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of Connection Candidates", |
| "sec_num": "2.9" |
| }, |
| { |
| "text": "For instance, consider the Example (5e, 5c), focusing on the word pair (yesterday, ) :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of Connection Candidates", |
| "sec_num": "2.9" |
| }, |
| { |
| "text": "(5e) I caught a fish yesterday.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of Connection Candidates", |
| "sec_num": "2.9" |
| }, |
| { |
| "text": "(5c) zuotian wuo budao yitiao yu. yesterday I catch one fish.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of Connection Candidates", |
| "sec_num": "2.9" |
| }, |
| { |
| "text": "Prob(yesterday, zuotian) = Max( Prob(yesterday, zuotian | fan-out(Lh225, Tq23)) × Prob(yesterday, zuotian | rd(yesterday, zuotian)) × Prob(yesterday, zuotian | Ar(Lh225, Tq23)) × Prob(yesterday, zuotian | Sr(Lh225, Tq23)) × Prob(yesterday, zuotian | sim(yesterday, zuotian)) ) = Max( Prob(yesterday, zuotian | f = 1) × Prob(yesterday, zuotian | rd = 4) × Prob(yesterday, zuotian | Ar = 0.0097) × Prob(yesterday, zuotian | Sr = 11.2) × Prob(yesterday, zuotian | sim = 1.0) ) = 0.85 × 0.04 × 0.90 × 0.77 × 0.94 = 0.0221", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of Connection Candidates", |
| "sec_num": "2.9" |
| }, |
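Reading the Max in the formula above as ranging over the applicable rules, with each rule contributing the product of its five factor probabilities, the scoring step can be sketched as follows. The function and the second rule's numbers are illustrative assumptions; only the first factor list echoes the worked example:

```python
from math import prod

def composite_prob(rules):
    """Score a connection candidate: each applicable rule contributes the
    product of its factor probabilities (fan-out, relative distortion,
    applicability, specificity, dictionary similarity); the best-scoring
    rule determines the candidate's composite probability."""
    return max(prod(factors) for factors in rules)

# factor probabilities for two hypothetical competing rules
score = composite_prob([
    [0.85, 0.04, 0.90, 0.77, 0.94],  # the (yesterday, zuotian) factors above
    [0.60, 0.10, 0.50, 0.40, 0.80],
])
```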
| { |
| "text": "Distortion is used in our algorithm in a way similar to that in Gale and Church (1991). However, our consideration of the right neighbor of a candidate, in addition to the left one, realizes a much tighter approximation of relative distortion. Other factors are also introduced to distinguish cases in which fan-out and distortion alone cannot determine the right alignment. Empirical data indicate that connections suggested by a rule with a higher degree of specificity or applicability are more likely to be correct. The preference for rules with higher applicability also has the effect of boosting the overall hit rate. Applicability and specificity are analogous to term frequency and inverse document frequency, the two weighting factors most widely used in IR research.", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 86, |
| "text": "Gale and Church (1991)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation of Connection Candidates", |
| "sec_num": "2.9" |
| }, |
| { |
| "text": "Our algorithm for word alignment is a decision procedure for selecting the preferred connection from a list of candidates. Initial anchors for calculating relative distortion are established by placing two dummies at the front and end of ST and TT. The initial list contains the two connections formed from those four dummies. The highest-scoring candidate is then selected and added to the list of solutions. The newly added connection serves as an additional anchor for more accurate estimation of relative distortion. Connection candidates that are inconsistent with the selected connection are removed from the list, and the remaining candidates are re-evaluated. Figure 3 presents the SenseAlign algorithm. Table 9 summarizes all of the factors used in SenseAlign. Applicability:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 707, |
| "end": 715, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 751, |
| "end": 758, |
| "text": "Table 9", |
| "ref_id": "TABREF17" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "A_r = C_r / B", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": ", where C_r = the number of connections for which rule r is applicable in the corpus, and B = the number of bilingual sentences in the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "Relative Distortion:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "rd(i, j) = min( |d_L|, |d_R| ), d_L = (j - j_L) - (i - i_L)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": ", where i = the word position in the source sentence and j = the word position in the target sentence. Similarity between the connection target and the dictionary translation: 2. Place two dummies, one to the left of the first word and one to the right of the last word of the source sentence. Two similar dummies are added to the target sentence. The left dummies in the source and target sentences align with each other. Similarly, the right dummies align with each other. This establishes anchor points for calculating the relative distortion score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "d_R = (i - i_R) - (j - j_R)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "sim = 2|E| / (|C| + |D|),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
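Under our reading of the formulas above, the relative distortion of a candidate is its smaller deviation from the offsets implied by the nearest left and right anchors, and sim is a Dice coefficient over the connection target and a dictionary translation. Both can be sketched as below; treating the strings as character sets for sim is our assumption:

```python
def relative_distortion(i, j, left_anchor, right_anchor):
    """rd(i, j) = min(|d_L|, |d_R|), with d_L = (j - j_L) - (i - i_L) and
    d_R = (i - i_R) - (j - j_R), where (i_L, j_L) and (i_R, j_R) are the
    nearest aligned anchors to the left and right of the candidate."""
    (iL, jL), (iR, jR) = left_anchor, right_anchor
    dL = (j - jL) - (i - iL)
    dR = (i - iR) - (j - jR)
    return min(abs(dL), abs(dR))

def dice_similarity(target, translation):
    """sim = 2|E| / (|C| + |D|): Dice coefficient between the character
    sets C (connection target) and D (dictionary translation), with
    E = C intersect D."""
    C, D = set(target), set(translation)
    return 2 * len(C & D) / (len(C) + len(D)) if (C or D) else 0.0
```

For example, a candidate that keeps exactly the offset of its anchors gets rd = 0, and identical strings get sim = 1.0.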
| { |
| "text": "3. Perform part-of-speech tagging and analysis for the sentences in both languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "4. Look up the words in LLOCE and CILIN to determine the classes consistent with the part-of-speech analyses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "5. Follow the procedure in Section 2.9 to calculate a composite probability for each connection candidate according to fan-out, applicability, specificity of alignment rules, relative distortion, and dictionary evidence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "6. Select the highest scoring candidate and add it to the alignment list.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "7. Remove the connection candidates that are inconsistent with the selected connection from the candidate list.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "8. Re-evaluate the remaining candidates according to the new list of connections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "9. Repeat Steps 4-8 until all words in the source sentence are aligned or every remaining word pair is associated with a score lower than some preset threshold h.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment Algorithm", |
| "sec_num": "2.10" |
| }, |
| { |
| "text": "Alignment algorithm for SenseAlign", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
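The greedy loop of Figure 3 (Steps 5-9) can be sketched as follows. The candidate representation, the rescore callback, and the reduction of the consistency check to a one-to-one constraint on word positions are simplifying assumptions for illustration:

```python
def sense_align(candidates, rescore, threshold):
    """Greedy selection: repeatedly pick the highest-scoring connection,
    drop candidates that conflict with it (here: reuse either word
    position), and re-score the rest, since scores depend on the anchors
    chosen so far. Stop when no candidate clears the threshold."""
    selected, pool = [], list(candidates)
    while pool:
        best_score, best = max((rescore(c, selected), c) for c in pool)
        if best_score < threshold:
            break
        selected.append(best)
        s, t = best
        pool = [c for c in pool if c[0] != s and c[1] != t]
    return selected

# toy run with fixed scores standing in for the composite probabilities
scores = {(0, 0): 0.9, (0, 1): 0.5, (1, 1): 0.8}
result = sense_align(list(scores), lambda c, sel: scores[c], threshold=0.1)
```

In a full implementation the callback would recompute the composite probability against the anchors in `selected`; here it returns fixed scores, so the run simply picks (0, 0), discards the conflicting (0, 1), and then picks (1, 1).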
| { |
| "text": "To illustrate how SenseAlign works, consider the sentence pair (5e, 5c) mentioned previously:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "(5e) I caught a fish yesterday. (5c) zuotian wuo budao yitiao yu. yesterday I catch one fish.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "After", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Step 3 of SenseAlign is executed, the algorithm produces the analyses shown in Table 10.1 and Table 10.2. Table 11 provides the glossary of the class codes involved.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 79, |
| "end": 102, |
| "text": "Table 10.1 and Table 10", |
| "ref_id": null |
| }, |
| { |
| "start": 107, |
| "end": 115, |
| "text": "Table 11", |
| "ref_id": "TABREF21" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Consequently, the algorithm selects the highest-scoring connection, (yesterday, zuotian). Next, this connection and other candidates inconsistent with it are removed. In the subsequent iterations, the connections (fish, yu), (I, wuo) and (a, yi-tiao)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "are selected. Table 12 shows the remaining connections after each iteration.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 14, |
| "end": 22, |
| "text": "Table 12", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "After aligning \"yesterday\" and \"zuotian\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "After aligning \"fish\" and \"yu\": (bu-dao, catch) Hm05; a Nd098", |
| "cite_spans": [ |
| { |
| "start": 44, |
| "end": 59, |
| "text": "(bu-dao, catch)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "+\u00ba (yi-tiao, one) Qa04 fish Ab032", |
| "cite_spans": [ |
| { |
| "start": 3, |
| "end": 17, |
| "text": "(yi-tiao, one)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "\u00b9 (yu, fish) Bi14 yesterday Lh225", |
| "cite_spans": [ |
| { |
| "start": 2, |
| "end": 12, |
| "text": "(yu, fish)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "(zuotian, yesterday)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Tq23 Table 13 The final alignment of Example (5e, 5c).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 13, |
| "text": "Table 13", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Illustrative Examples", |
| "sec_num": "3." |
| }, |
| { |
| "text": "In this section, we present the results of the algorithms for word alignment. Roughly 25,000 bilingual example sentences from LecDOCE were used as the training data. The training data were used primarily to acquire rules by the greedy learner and to determine empirically the probability functions related to the various factors. The algorithm's performance was then tested on outside data. The outside test used a set of 416 sentence pairs from a book on English sentence patterns. We chose this test set because it contains a comprehensive group of fifty-five sets of typical sentence patterns. Table 14 indicates that acquired lexical information and existing lexical information in a bilingual dictionary can supplement each other to produce optimum alignment results. The generality of the approach is evident from the high coverage (88.2%) and precision (90.0%) rates.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 592, |
| "end": 600, |
| "text": "Table 14", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4.1" |
| }, |
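For reference, coverage and precision as we read them here (proposed pairs over alignable pairs, and correct proposals over proposals made) can be computed as below. The section does not spell out the exact definitions behind Table 14, so both the definitions and the toy data are assumptions:

```python
def coverage_and_precision(proposed, gold, total_pairs):
    """coverage = proposals / alignable pairs; precision = correct / proposals.
    `proposed` and `gold` are collections of (source, target) connections."""
    proposed, gold = set(proposed), set(gold)
    coverage = len(proposed) / total_pairs
    precision = len(proposed & gold) / len(proposed) if proposed else 0.0
    return coverage, precision

cov, prec = coverage_and_precision(
    proposed=[("fish", "yu"), ("a", "yi-tiao")],
    gold=[("fish", "yu"), ("a", "wuo")],
    total_pairs=4,
)
```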
| { |
| "text": "This section analyzes the alignment results from the experiments in detail and, in particular, the cases where the algorithms failed. The analysis demonstrates the strengths and limitations of the methods and suggests possible improvements to the algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Typological Analysis of Alignment Errors", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Metaphorical expressions are often language dependent, thus giving rise to a connection target which is different from the relevant dictionary translations. For instance, by the metaphorical expression (6e), one does not mean that someone really has green fingers, only that he is good at gardening. This metaphorical implication will not get across with a literal translation. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metaphorical Usage", |
| "sec_num": null |
| }, |
| { |
| "text": "Collocation is another reason for the deviation of the connection target from the dictionary translation, leading to failure of SenseAlign. However, unlike other deviations, bilingual collocations are not easy to tackle using class-based rules. For instance, in example sentence (7e, 7c), \"give order\" is a collocation, and the translation for \"give\" in such a collocation is usually \" @ \". However, the applicability is too low to warrant a mapping from \"give\" to \" @ \". In any case, deriving a give-to-@ mapping would be an over-generalization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Collocation", |
| "sec_num": null |
| }, |
| { |
| "text": "(7e) The officer is the one who gives the orders. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Collocation", |
| "sec_num": null |
| }, |
| { |
| "text": "Paraphrased translation is a major source of alignment failure. Due to various considerations, including style and cultural differences, the translator does not always translate literally on a word-by-word basis. Adding and deleting words is commonplace, sometimes resulting in free translation. Such translations obviously create problems for word alignment. A significant amount of free translation arises from the use of four-morpheme Mandarin idioms for stylistic reasons. For instance, the clause \"hit close to home\" in (8e) translates into a four-morpheme idiom, and \"completely off base\" in (9e) likewise translates into such an idiom. Apparently, these free translations are beyond the reach of the proposed method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Four-morpheme Mandarin Idioms and Free Translations", |
| "sec_num": null |
| }, |
| { |
| "text": "(8e)Everyone felt that the speaker's remarks hit close to home. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Four-morpheme Mandarin Idioms and Free Translations", |
| "sec_num": null |
| }, |
| { |
| "text": "Now we will look at alignment failure from a different angle: the part-of-speech. The error analysis by part-of-speech is shown in Table 15. Note that the majority of errors come from common nouns, light verbs, adverbs and prepositions. Function words are much more language-dependent and, therefore, more difficult to align correctly. Closer examination shows that connections related to function words are often one-to-many or even many-to-many, adding to the difficulty of connecting them correctly. These observations indicate the necessity of treating each part-of-speech differently: a context-sensitive lexical translation model for light verbs, or perhaps a more elaborate model of fertility for function words.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 131, |
| "end": 139, |
| "text": "Table 15", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Quantitative Error Analysis by Part-of-speech", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In this section, we will justify the use of machine-readable lexical resources and a class-based approach. Although it is always difficult to compare different methods directly, we can contrast SenseAlign with other works related to word alignment in terms of resource requirements and statistical estimation reliability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5." |
| }, |
| { |
| "text": "The crux of NLP problems lies in knowledge acquisition, which is widely recognized as a bottleneck in the development of NLP technology. To avoid this knowledge acquisition bottleneck, researchers have recently switched from manual, qualitative approaches to MRD-based and corpus-based approaches. However, word-level knowledge acquired from dictionaries or corpora offers limited coverage. Take word sense disambiguation, a specific NLP task, for example. Lesk (1986) described a word-sense disambiguation technique based on the number of overlaps between words in a dictionary definition and words in the local context of the word to be disambiguated. Weak performance (50-70%) was reported. Yarowsky's (1992) WSD approach based on Roget's categories is a step in the right direction. The author reported a 92% precision rate for automatic disambiguation of instances of 12 words in the Grolier Encyclopedia.", |
| "cite_spans": [ |
| { |
| "start": 464, |
| "end": 475, |
| "text": "Lesk (1986)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 701, |
| "end": 718, |
| "text": "Yarowsky's (1992)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Class-based Approach to Exploiting Machine-readable Lexical Resources", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The problem of word alignment is no exception. We believe that our proposed algorithm addresses the above problem by exploiting existing thesauri in addition to MRDs and corpora. The corpora provide us with training and testing materials, so that empirical knowledge can be derived and evaluated objectively. The thesauri provide a classification system that can be utilized to generalize the empirical knowledge gleaned from corpora. The approach of coupling corpora with thesauri to gain both empiricality and generality is broadly in line with the approaches used by Yarowsky (1992), Resnik and Hearst (1993), Utsuro, Uchimoto, Matsumoto and Nagao (1994), and Vanderwende (1994). The Vanderwende (1994) approach of using thesaurus-like information to interpret noun sequences is of particular interest. Contrary to previous MRD-based works, an element of inference is added to word-for-word matching. The inference is realized through taxonomic relations, such as hyponyms and hypernyms, extracted from LDOCE.", |
| "cite_spans": [ |
| { |
| "start": 570, |
| "end": 585, |
| "text": "Yarowsky (1992)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 588, |
| "end": 612, |
| "text": "Resnik and Hearst (1993)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 615, |
| "end": 659, |
| "text": "Utsuro, Uchimoto, Matsumoto and Nagao (1994)", |
| "ref_id": null |
| }, |
| { |
| "start": 666, |
| "end": 684, |
| "text": "Vanderwende (1994)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 691, |
| "end": 709, |
| "text": "Vanderwende (1994)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Class-based Approach to Exploiting Machine-readable Lexical Resources", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "SenseAlign achieves a degree of generality, since the elements of a word pair can be correctly aligned even when they occur rarely, or only once, in the corpus. This kind of generality is unattainable by statistically trained word-based models. Class-based models obviously offer the additional advantages of smaller storage requirements and higher system efficiency. Such advantages have their costs, for class-based models may over-generalize and miss word-specific rules. However, class-based systems have produced results indicating that the advantages outweigh the disadvantages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Class-based Approach to Exploiting Machine-readable Lexical Resources", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Obviously, SenseAlign is only one of many possible formulations of the class-based approach to word alignment using both a dictionary and a thesaurus. Ker and Chang (1997) described a similar ClassAlign algorithm with a number of differences: 1. ClassAlign does not commit itself to a particular segmentation of Chinese sentences, as SenseAlign does. 2. ClassAlign does not identify morpho-syntactic constructions in Chinese sentences, as SenseAlign does. 3. Unlike SenseAlign's repeated evaluation of distortion, ClassAlign calculates the distortion once and for all, relative to anchors cast by the DictAlign algorithm. 4. ClassAlign selects alignment rules by balancing both applicability and specificity; thus, it does not increase coverage at the expense of precision. ClassAlign tends to make fewer commitment errors, while SenseAlign tends to make fewer omission errors. Chen, Chang, Ker and Chen (1997) described a much simpler TopAlign algorithm which does not require segmentation of Chinese sentences. TopAlign takes advantage of various clusters based on a source-language thesaurus, LLOCE, instead of one thesaurus for each of the two languages as in SenseAlign and ClassAlign. Experimental results show that TopAlign runs much faster.", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 169, |
| "text": "Ker and Chang (1997)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 888, |
| "end": 907, |
| "text": "Ker and Chen (1997)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Class-based Approach to Exploiting Machine-readable Lexical Resources", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Gale and Church (1991) showed through a near-miss example that φ2, a X2-like statistic, works better than mutual information for selecting strongly associated word pairs for use in word alignment. In their study, they contended that the X2-like statistic works better because it uses co-non-occurrence and the two off-diagonal values of the contingency table (the number of sentences where one word occurs while the other does not), which are often larger, more stable, and more indicative than the co-occurrence counts used in mutual information. Their results indicate that although precision is improved, coverage is not higher than that of other word-based approaches.", |
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 22, |
| "text": "Church (1991)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other Methods Based on Mutual Information, X 2 -like Statistics, and Frequency", |
| "sec_num": "5.2" |
| }, |
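Gale and Church's φ2 score is computed from the 2x2 co-occurrence contingency table of a word pair across aligned sentence pairs. This sketch follows the standard definition of the statistic; the variable names and the toy counts are ours:

```python
def phi_squared(a, b, c, d):
    """phi^2 = (ad - bc)^2 / ((a+b)(a+c)(b+d)(c+d)) for a 2x2 table:
    a = sentence pairs where both words occur, b and c = exactly one
    word occurs (the off-diagonal counts), d = neither occurs.
    Ranges from 0 (independence) to 1 (perfect association)."""
    denom = (a + b) * (a + c) * (b + d) * (c + d)
    return (a * d - b * c) ** 2 / denom if denom else 0.0

# a perfectly associated pair vs. a pair with no association
strong = phi_squared(5, 0, 0, 5)  # the words always co-occur or co-miss
weak = phi_squared(2, 2, 2, 2)    # occurrences are independent
```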
| { |
| "text": "Focusing on improving coverage, we have chosen to use frequency coupled with simple filtering according to fan-out in the acquisition of class-based rules. Rules that provide the most instances of plausible connections are selected. Our approach differs from those based on word-specific, mutual information-like statistics, which select strongly associated word pairs that may have only a weak presence in the data. The experimental results confirm the findings of several recent works on terminology extraction and structural disambiguation. Daille (1994) demonstrated that simple criteria related to frequency, coupled with a linguistic filter, work better than mutual information for terminology extraction. Justeson and Katz (1995) also gave experimental results supporting a similar finding. Recent work involving structural disambiguation (Alshawi and Carter, 1994; Brill and Resnik, 1994) has also indicated that statistics related to frequency outperform mutual information and the X2 statistic.", |
| "cite_spans": [ |
| { |
| "start": 538, |
| "end": 551, |
| "text": "Daille (1994)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 704, |
| "end": 728, |
| "text": "Justeson and Katz (1995)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 838, |
| "end": 864, |
| "text": "(Alshawi and Carter, 1994;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 865, |
| "end": 888, |
| "text": "Brill and Resnik, 1994)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other Methods Based on Mutual Information, X 2 -like Statistics, and Frequency", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "This paper has presented an algorithm capable of identifying words and their translations in a bilingual corpus. It is effective for specific linguistic reasons. A significant majority of words in bilingual sentences have divergent translations, and those translations are often not found in a bilingual dictionary. However, these deviations are largely confined within the classes defined in thesauri. Therefore, by taking a class-based approach, the problem's complexity can be reduced. The experiments in this study have demonstrated that the method provides coverage and precision rates well over 85%. In general, a small amount of precision can be sacrificed to gain a substantial increase in coverage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6." |
| }, |
| { |
| "text": "The algorithm's performance can certainly be improved by enhancing its various components, e.g., the morphological analyses, the bilingual dictionary, the monolingual thesauri, and rule acquisition. However, this work has presented a workable basis for processing bilingual corpora. This has wide implications for a variety of language tasks, ranging from the obvious (machine translation and word sense disambiguation) to the unexpected (second language acquisition).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6." |
| }, |
| { |
| "text": "While this paper has specifically addressed only English-Chinese corpora, the linguistic issues that motivated the algorithm are quite general and are, to a great degree, language independent. If this is true, the algorithm presented here should be adaptable to other language pairs. The prospects for Japanese, in particular, seem highly promising. Work on alignment of English-Japanese texts using both dictionaries and statistics has been described by Matsumoto, Ishimoto and Utsuro (1993) and Utsuro, Ikeda, Yamane, Matsumoto and Nagao (1994) .", |
| "cite_spans": [ |
| { |
| "start": 455, |
| "end": 492, |
| "text": "Matsumoto, Ishimoto and Utsuro (1993)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 497, |
| "end": 546, |
| "text": "Utsuro, Ikeda, Yamane, Matsumoto and Nagao (1994)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6." |
| }, |
| { |
| "text": "There are a number of exciting future directions for continuing this work, including: (1) adding an automatic preprocessing step, sentence alignment, (2) representing the alignment results at the structural or symbolic level, (3) applying the result of alignment to statistical or hybrid machine translation systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6." |
| }, |
| { |
| "text": "S. J. Ker, J. S. Chang", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank the National Science Council of the Republic of China for financial support of this work under Contract No. NSC 82-0408-E-007-195. We would like to thank Liming Yu at Zebra English Service Union and Betty Teng and Nora Liu at Longman Asia Limited for making machine readable dictionaries available to us. Special thanks are due to Mathis H. C. Chen for preprocessing work on the MRD. Thanks are also due to Keh-Yih Su for many helpful comments. We are also thankful to the anonymous reviewers for many useful suggestions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": "In CILIN, the categories are organized as a conceptual ontology with three levels: gross categories, intermediate categories and detailed categories. In Table A From the above description, we can see that words are organized mostly according to their semantic properties. Thus, personal pronouns, such as (wuo, I), are listed under the \"A\" categories along with content nouns like (ren-ming, people). This would cause problems for the task of word alignment. Therefore, we have identified places where function words and content words are listed under the same category and given those function words a different code: (1) Personal pronouns were given the new categories Na, Nb, and Nc to distinguish them from the content words listed under Aa. (2) Function words related to quantity, number, and measurement were taken out of the intermediate category Dn and given the intermediate categories Qa, Qb, and Ma, respectively. (3) The gross category C was split into two new gross categories, T and L, for time and location, respectively. (4) To broaden the coverage of CILIN, we also added words to the detailed categories of CILIN using an automatic procedure which exploits the so-called thesaurus effect of Chinese characters and Kanji (Fujii and Croft 1993). Some examples of added words are shown in Table A-3. Chinese Word Category Chinese Word Category (zhouzhang, governor) Af10 (jian, key)", |
| "cite_spans": [ |
| { |
| "start": 1227, |
| "end": 1250, |
| "text": "( Fujii and Croft 1993)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 151, |
| "end": 158, |
| "text": "Table A", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Appendix A Description for CILIN", |
| "sec_num": null |
| }, |
| { |
| "text": "(cizhang, deputy)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bp13", |
| "sec_num": null |
| }, |
| { |
| "text": "(jimaodanzi, duster) Bp13; (cezhang, conductor) Af10; (tiedien, nail) Bp13; (dianyuzhang, warden) Af10; (tieban, iron rod)",
| "cite_spans": [],
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Af10", |
| "sec_num": null |
| }, |
| { |
| "text": "Table A-3. Some added words and their categories in CILIN.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bp13", |
| "sec_num": null |
| }, |
| { |
| "text": "Unlike CILIN, the categories of LLOCE are organized primarily according to subject matter. At the first level, fourteen major subjects are denoted with reference letters from A to N, as shown in Table B-1. Each subject is further divided into titles; under each title, there are from 10 to 50 sets of related words. Each set is given a 3-digit reference number. The titles are not reflected in the original LLOCE reference code. In order to represent this implicit grouping, we have assigned a lower case letter to each title. For example, \"objects generally\" is denoted using the letter b, and the reference code H030 is replaced with Hb030. Therefore, each set is denoted by an upper case SUBJECT letter, a lower case TITLE letter and a 3-digit SET number. There are 2504 sets in total. Some sets from LLOCE are listed in Table B-3.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 194, |
| "end": 201, |
| "text": "Table B", |
| "ref_id": null |
| }, |
| { |
| "start": 774, |
| "end": 781, |
| "text": "Table B", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Appendix B Description for LLOCE", |
| "sec_num": null |
| }, |
| { |
| "text": "Gb030 (knowing and being conscious): recognize, be aware, be conscious of; Jf130 (selling and buying): sell, retail, realize, market, buy, purchase, acquire, get, pawn, treat, patronize",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Training and Scaling Preference Functions for Disambiguation", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Alshawi", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Carter", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Computational Linguistics", |
| "volume": "20", |
| "issue": "4", |
| "pages": "635--648", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alshawi, H. and D. Carter, \"Training and Scaling Preference Functions for Disambiguation,\" Computational Linguistics, 20:4, 1994, pp. 635-648.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A Rule Based Approach to Prepositional Phrase Attachment", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 15th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1198--1204", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brill, E. and P. Resnik, \"A Rule Based Approach to Prepositional Phrase Attachment,\" In Proceedings of the 15th International Conference on Computational Linguistics, 1994, pp. 1198-1204, Kyoto, Japan.",
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A Simple Rule-Based Part of Speech Tagger", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Brill", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the third Conference on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "152--155", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brill, E., \"A Simple Rule-Based Part of Speech Tagger,\" In Proceedings of the third Conference on Applied Natural Language Processing, 1992, pp. 152-155, ACL, Trento, Italy.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Aligning Sentences in Parallel Corpora", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "C" |
| ], |
| "last": "Lai", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the 29th Annual Meeting of Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "169--176", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brown, P. F., J. C. Lai, and R. L. Mercer, \"Aligning Sentences in Parallel Corpora,\" In Proceedings of the 29th Annual Meeting of Association for Computational Linguistics, 1991, pp. 169-176, Berkeley, CA, USA.",
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A Statistical Approach to Machine Translation", |
| "authors": [
| { "first": "P", "middle": ["F"], "last": "Brown", "suffix": "" },
| { "first": "J", "middle": [], "last": "Cocke", "suffix": "" },
| { "first": "S", "middle": ["A"], "last": "Della Pietra", "suffix": "" },
| { "first": "V", "middle": ["J"], "last": "Della Pietra", "suffix": "" },
| { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" },
| { "first": "J", "middle": ["D"], "last": "Lafferty", "suffix": "" },
| { "first": "R", "middle": ["L"], "last": "Mercer", "suffix": "" },
| { "first": "P", "middle": ["S"], "last": "Roossin", "suffix": "" }
| ],
| "year": 1990, |
| "venue": "Computational Linguistics", |
| "volume": "16", |
| "issue": "2", |
| "pages": "79--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brown, P. F., J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin, \"A Statistical Approach to Machine Translation,\" Computational Linguistics, 16:2, 1990, pp. 79-85.",
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The Mathematics of Statistical Machine Translation: Parameter Estimation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "J" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brown, P. F., S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer, \"The Mathematics of Statistical Machine Translation: Parameter Estimation,\" Computational Linguistics, 19:2, 1993, pp. 263-311.",
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A Grammar of Spoken Chinese", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "R" |
| ], |
| "last": "Chao", |
| "suffix": "" |
| } |
| ], |
| "year": 1968, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chao, Y. R., A Grammar of Spoken Chinese, University of California Press, 1968.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Towards Generality and Modularity in Statistical Word Sense Disambiguation", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "N" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceeding of 2nd Pacific Asia Conference on Formal and Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "45--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, J. N. and J. S. Chang, \"Towards Generality and Modularity in Statistical Word Sense Disambiguation,\" In Proceedings of the 2nd Pacific Asia Conference on Formal and Computational Linguistics, 1994, pp. 45-48.",
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Structural Ambiguity and Conceptual Information Retrieval", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "H C" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceeding of 10th Pacific Asia Conference on Language, Information and Computation", |
| "volume": "", |
| "issue": "", |
| "pages": "115--120", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, M. H. C. and J. S. Chang, \"Structural Ambiguity and Conceptual Information Retrieval,\" In Proceedings of the 10th Pacific Asia Conference on Language, Information and Computation, 1995, pp. 115-120, Hong Kong.",
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "TopAlign: Word Alignment for Bilingual Corpora Based on Topical Clusters of Dictionary Entries and Translations", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "H C" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "J" |
| ], |
| "last": "Ker", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "N" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 7th International Conference on Theoretical and Methodological Issues in Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "127--134", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, M. H. C., J. S. Chang, S. J. Ker, and J. N. Chen, \"TopAlign: Word Alignment for Bilingual Corpora Based on Topical Clusters of Dictionary Entries and Translations,\" In Proceedings of the 7th International Conference on Theoretical and Methodological Issues in Machine Translation, 1997, pp. 127-134.",
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Aligning Sentences in Bilingual Corpora Using Lexical Information", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "F" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 31st Annual Meeting of Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "9--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, S. F., \"Aligning Sentences in Bilingual Corpora Using Lexical Information,\" In Proceedings of the 31st Annual Meeting of Association for Computational Linguistics, 1993, pp. 9-16.",
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Char-Align: A Program for Aligning Parallel Text at the Character Level", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 31st Annual Meeting of Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Church, K. W., \"Char-Align: A Program for Aligning Parallel Text at the Character Level,\" In Proceedings of the 31st Annual Meeting of Association for Computational Linguistics, 1993, pp. 1-8.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Robust Bilingual Word Alignment for Machine Aided Translation", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gale", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dagan, I., K. W. Church and W. A. Gale, \"Robust Bilingual Word Alignment for Machine Aided Translation,\" In Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, 1993, pp. 1-8.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Study and Implementation of Combined Techniques for Automatic Extraction of Terminology", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Daille", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "The Balancing Act: Combining Symbolic and Statistical Approaches to Language, Workshop at the 32nd Annual Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "29--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daille, B., \"Study and Implementation of Combined Techniques for Automatic Extraction of Terminology,\" In The Balancing Act: Combining Symbolic and Statistical Approaches to Language, Workshop at the 32nd Annual Meeting of the ACL, 1994, pp. 29-36.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Maximum Likelihood from incomplete data via the EM Algorithm", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Dempster", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Laird", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Rubin", |
| "suffix": "" |
| } |
| ], |
| "year": 1977,
| "venue": "Journal of the Royal Statistical Society", |
| "volume": "39", |
| "issue": "B", |
| "pages": "1--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dempster, A., N. Laird, and D. Rubin, \"Maximum Likelihood from Incomplete Data via the EM Algorithm,\" Journal of the Royal Statistical Society, Series B, 39, 1977, pp. 1-38.",
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Measures of the Amount of Ecologic Association between Species", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [ |
| "R" |
| ], |
| "last": "Dice", |
| "suffix": "" |
| } |
| ], |
| "year": 1945, |
| "venue": "Journal of Ecology", |
| "volume": "26", |
| "issue": "", |
| "pages": "297--302", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dice, L. R. \"Measures of the Amount of Ecologic Association between Species,\" Journal of Ecology, 1945, 26: 297-302.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A Comparison of Indexing Techniques for Japanese Text Retrieval", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Fujii", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "B" |
| ], |
| "last": "Croft", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 16th International ACM SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "237--246", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fujii, H. and W. B. Croft, \"A Comparison of Indexing Techniques for Japanese Text Retrieval,\" In Proceedings of the 16th International ACM SIGIR Conference on Research and Devel- opment in Information Retrieval, 1993, pp. 237-246.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "K-vec: A New Approach for Aligning Parallel texts", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Fung", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 15th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1096--1102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fung, P. and K. W. Church, \"K-vec: A New Approach for Aligning Parallel Texts,\" In Proceedings of the 15th International Conference on Computational Linguistics, 1994, pp. 1096-1102.",
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Aligning Noisy Parallel Corpora Across Language Groups: Word Pair Feature Matching by Dynamic Time Warping", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Fung", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Technology Partnerships for Crossing the Language Barrier, Proceedings of the First Conference of the Association for Machine Translation in the Americas", |
| "volume": "", |
| "issue": "", |
| "pages": "81--88", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fung, P. and K. McKeown, \"Aligning Noisy Parallel Corpora Across Language Groups: Word Pair Feature Matching by Dynamic Time Warping,\" In Technology Partnerships for Crossing the Language Barrier, Proceedings of the First Conference of the Association for Machine Translation in the Americas, 1994, pp. 81-88, Columbia, Maryland, USA.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Using Bilingual Materials to Develop Word Sense Disambiguation Methods", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gale", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "D",
| "middle": [],
| "last": "Yarowsky",
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "101--112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gale, W. A., K. W. Church, and D. Yarowsky, \"Using Bilingual Materials to Develop Word Sense Disambiguation Methods,\" In Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation, 1992, pp. 101-112, Montreal, Canada.",
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Identifying Word Correspondences in Parallel Texts", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gale", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the Fourth DARPA Speech and Natural Language Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "152--157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gale, W. A. and K. W. Church, \"Identifying Word Correspondences in Parallel Texts,\" In Proceedings of the Fourth DARPA Speech and Natural Language Workshop, 1991, pp. 152-157, Pacific Grove, CA, USA.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Technical Terminology: Some Linguistic Properties and An Algorithm for Identification in Text", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "S" |
| ], |
| "last": "Justeson", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "M" |
| ], |
| "last": "Katz", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Natural Language Engineering", |
| "volume": "1", |
| "issue": "1", |
| "pages": "9--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Justeson, J. S. and S. M. Katz, \"Technical Terminology: Some Linguistic Properties and An Algorithm for Identification in Text,\" Natural Language Engineering, 1:1, 1995, pp. 9-27, Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Text-Translation Alignment", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Kay", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Roscheisen", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "1", |
| "pages": "121--142", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kay, M. and M. Roscheisen, \"Text-Translation Alignment,\" Computational Linguistics, 19:1, 1993, pp. 121-142.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "A Class-Based Approach to Word Alignment",
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "J" |
| ], |
| "last": "Ker", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Computational Linguistics", |
| "volume": "23", |
| "issue": "2", |
| "pages": "313--343", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ker, S. J. and J. S. Chang, \"A Class-Based Approach to Word Alignment,\" Computational Linguistics, 1997, 23:2, pp. 313-343.",
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Aligning More Words with High Precision for Small Bilingual Corpora", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "J" |
| ], |
| "last": "Ker", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "S" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 16th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "210--215", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ker, S. J. and J. S. Chang, \"Aligning More Words with High Precision for Small Bilingual Corpora,\" In Proceedings of the 16th International Conference on Computational Linguistics, 1996, pp. 210-215, Copenhagen, Denmark.",
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Automatic Sense Disambiguation Using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "E" |
| ], |
| "last": "Lesk", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Proceedings of the ACM SIGDOC Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "24--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lesk, M. E., \"Automatic Sense Disambiguation Using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone,\" In Proceedings of the ACM SIGDOC Conference, 1986, pp. 24-26, Toronto, Canada.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Structural Matching of Parallel Texts", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ishimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Utsuro", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matsumoto, Y., H. Ishimoto, and T. Utsuro, \"Structural Matching of Parallel Texts,\" In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 1993, pp. 1-30, Ohio, USA.",
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Longman Lexicon of Contemporary English", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Mcarthur", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McArthur, T., Longman Lexicon of Contemporary English, Published by Longman Group (Far East) Ltd., 1992, Hong Kong.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Structure Ambiguity and Conceptual Relations", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Hearst", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the Third Conference on Applied Natural Language Processing, Association for Computational Linguistics, ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "104--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Resnik, P. and M. A. Hearst, \"Structure Ambiguity and Conceptual Relations,\" In Proceedings of the Third Conference on Applied Natural Language Processing, Association for Computational Linguistics, ACL, 1993, pp. 104-110.",
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Translating Collocations for Bilingual Lexicons: A Statistical Approach", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Smadja", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "R" |
| ], |
| "last": "Mckeown", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Hatzivassiloglou", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational Linguistics", |
| "volume": "22", |
| "issue": "1", |
| "pages": "1--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Smadja, F., K. R. McKeown, and V. Hatzivassiloglou, \"Translating Collocations for Bilingual Lexicons: A Statistical Approach,\" Computational Linguistics, 1996, 22:1, pp. 1-38.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Bilingual Text Matching Using Bilingual Dictionary and Statistics", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Utsuro", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ikeda", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Yamane", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Nagao", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 15th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1076--1082", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Utsuro, T., H. Ikeda, M. Yamane, M. Matsumoto, and M. Nagao, \"Bilingual Text Matching Using Bilingual Dictionary and Statistics,\" In Proceedings of the 15th International Conference on Computational Linguistics, 1994, pp. 1076-1082, Kyoto, Japan.",
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Thesaurus-Based Efficient Example Retrieval by Generating Retrieval Queries from Similarities", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Utsuro", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Uchimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Nagao", |
| "suffix": "" |
| } |
| ], |
| "year": 1994,
| "venue": "Proceedings of the 15th International Conference on Computational Linguistics",
| "volume": "",
| "issue": "",
| "pages": "1044--1048",
| "other_ids": {},
| "num": null,
| "urls": [],
| "raw_text": "Utsuro, T., K. Uchimoto, M. Matsumoto, and M. Nagao, \"Thesaurus-Based Efficient Example Retrieval by Generating Retrieval Queries from Similarities,\" In Proceedings of the 15th International Conference on Computational Linguistics, 1994, pp. 1044-1048, Kyoto, Japan.",
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Algorithm for Automatic Interpretation of Noun Sequences", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Vanderwende", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 15th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "782--788", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vanderwende, L., \"Algorithm for Automatic Interpretation of Noun Sequences,\" In Proceedings of the 15th International Conference on Computational Linguistics, 1994, pp. 782-788, Kyoto, Japan.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Learning an English-Chinese Lexicon from a Parallel Corpus", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the First Conference of the Association for Machine Translation in the Americas: Technology Partnerships for Crossing the Language Barrier",
| "volume": "", |
| "issue": "", |
| "pages": "206--213", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wu, D. and X. Xia, \"Learning an English-Chinese Lexicon from a Parallel Corpus,\" In Proceedings of the First Conference of the Association for Machine Translation in the Americas: Technology Partnerships for Crossing the Language Barrier, 1994, pp. 206-213, Columbia, Maryland, USA.",
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Word Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Yarowsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the 14th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "454--460", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yarowsky, D., \"Word Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora,\" In Proceedings of the 14th International Conference on Computational Linguistics, 1992, pp. 454-460, Nantes, France.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "r = (e, c), W_e = the number of English words in class e, W_c = the number of Chinese words in class c."
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Precision rates for candidates with different values of distortion" |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "where |C| = the length of C, the Mandarin translation in the connection, |D| = the length of D, the Mandarin morphemes in the dictionary, |E| = the length of E, the common Mandarin morphemes in C and D, w(C_k) = weight of the k-th Mandarin morpheme in C, w(D_k) = weight of the k-th Mandarin morpheme in D, w(E_k) = weight of the k-th Mandarin morpheme in E."
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "(s, t) = MaxProb(s, t|fan_out(c,d))~Prob(s,t|rd(s,t))~Prob(s,t|A r (c,d)) Prob(s,t|S r (c,d))~Prob(s,t|sim(s,t)) where s = source word, t = target word, c = an LLOCE class of which s is a member, and d = a CILIN class of which t is a member." |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "where n = the number of C-class words in ST, m= the number of D-class words in TT. Specificity: S r = -log ( W e~W c ), where r = (e, c), W e = the number of English words in class e, W c = the uumber of Chinese words in class c." |
| }, |
| "FIGREF5": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "the subscript of the target sentence, i L = the position of the closest connection to the left of the i-th word in the source sentence, j L = the aligned target word of i L , i R = the position of the closest connection to the right of the i-th word in the source sentence, j R = the aligned target word of i R ." |
| }, |
| "FIGREF6": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "where |C| = the number of morphemes in C, |D| = the number of morphemes in D, |E| = the number of common morphemes in C and D." |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Department</td><td>of</td><td>Computer</td><td>Science,</td><td>Soochow</td><td>University,</td><td>Taipei,</td><td>Taiwan,</td><td>ROC.</td></tr><tr><td>E-mail</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
| "num": null, |
| "text": ":ksj@volans.cis.scu.edu.tw. +Department of Computer Science, National Tsing Hua University, Hsin-chu, Taiwan, ROC." |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>English Word</td><td>POS</td><td>Reference Code</td></tr><tr><td>lightning</td><td>NN</td><td>Lc058</td></tr><tr><td>usually</td><td>RB</td><td>Nc056</td></tr><tr><td colspan=\"2\">accompanies VB</td><td>Kb032</td></tr><tr><td>thunder</td><td>NN</td><td>Lc058</td></tr></table>", |
| "num": null, |
| "text": "Verb-complement and determinant-measure constructs" |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">.1 Tagging results of Example (1e)</td></tr><tr><td>Chinese Word</td><td>POS Reference Code</td></tr></table>", |
| "num": null, |
| "text": "" |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td colspan=\"3\">.2 Tagging results of Example (1c)</td><td/><td/></tr><tr><td colspan=\"3\">(1e) Lightening usually accompanies thunder. (1c) < , _</td><td>\u00c2=</td><td colspan=\"2\">\u00d6 l</td></tr><tr><td>eisheng</td><td colspan=\"2\">tongchang suizhe</td><td>shandian</td><td>er</td><td>lai</td></tr><tr><td>thunder</td><td>usually</td><td colspan=\"2\">accompany lightning</td><td colspan=\"2\">and come</td></tr></table>", |
| "num": null, |
| "text": "" |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td/><td colspan=\"4\">.1 Ten rules with high applicability</td></tr><tr><td colspan=\"2\">Rule# #App.</td><td>POS</td><td>Rule</td><td>Gloss for LLOCE</td><td>Gloss for CILIN</td></tr><tr><td>101 102 103 104 105 106 107</td><td>95 95 94 93 91 91 90</td><td colspan=\"4\">NN Na Ab030, Ba02 living things JJ A Jc063, Ed26 relating to measurement JJ A Bh110, Ed03 good bodily condition VB V Ma004, Id21 leaving and setting out NN Na Mh202, Bc02 front, back and sides\u00b0 ( bian, side) \u00e1 ( jiao, corner) 3\" ( shengwu, living things) F ( zungui, nobility) hao, good), O (huai, bad) >h ( kaojin, coming on) JJ A Nd096, Ua01 much, many zD ( feichang, very) NN Na Kh196, Bd03 cricket \u00fa ( diqiu, earth)</td></tr><tr><td>108 109</td><td>90 90</td><td colspan=\"3\">NN Na Cn270, Di11 war and peace NN Na Lh226, Tp22 measuring time</td><td>( zhanzheng, war) Z ( xingqing, week)</td></tr><tr><td>110</td><td>89</td><td colspan=\"3\">NN Na Gf233, Dd15 word and names</td></tr></table>", |
| "num": null, |
| "text": "" |
| }, |
| "TABREF7": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">Rule# #App.</td><td>POS</td><td>Rule</td><td>Gloss for LLOCE</td><td>Gloss for CILIN</td></tr><tr><td>1 2 3</td><td colspan=\"2\">1628 VB V 1251 VB V 980 JJ A</td><td colspan=\"3\">Ma, Hj moving, coming, and going 34(shenghuo, activities in daily life) Gc, Hi Communicating %L(shejiao, social activities) Mh, Ed locating and direction \u00a4(xingzh, property)</td></tr></table>", |
| "num": null, |
| "text": "Ten rules with the middle applicability" |
| }, |
| "TABREF8": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Class level</td><td>No. of rules acquired</td></tr><tr><td>set vs detailed category</td><td>392</td></tr><tr><td>Title vs intermediate category</td><td>3</td></tr></table>", |
| "num": null, |
| "text": "Three rules with the high estimated applicability on the 2-letter level" |
| }, |
| "TABREF9": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>40</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>30</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>20</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>10</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0</td><td>30</td><td>60</td><td>90</td><td>120</td><td>150</td><td>180</td><td>210</td><td>240</td><td>270</td><td>300</td><td>330</td><td>360</td><td>390</td></tr></table>", |
| "num": null, |
| "text": "The number of selected pairs" |
| }, |
| "TABREF13": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>English Position POS Chinese answer 2 VB \u00e0 ( huida, answer) all 3 AT \u00ac \u00b6 (suoyiou, all) on 5 IN * (shang, up) this 6 AT ! (ben, this) list 7 NN @ (biao, list)</td><td>Position POS 2 V 7 Ne 5 Ng 3 Ne 4 Na</td><td>d L d R 0 4 4 0 -4 1 -7 4 -7 4</td><td>rd 0 0 1 4 4</td></tr></table>", |
| "num": null, |
| "text": "" |
| }, |
| "TABREF14": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td colspan=\"3\">English Position POS Chinese answer 2 VB \u00e0 ( huida, answer) all 3 AT \u00ac \u00b6 (suoyiou, all)</td><td colspan=\"2\">Position POS 2 V 7 Ne</td><td colspan=\"2\">d L d R 0 -3 4 -7</td><td>rd 0 4</td></tr><tr><td>question on this</td><td>4 5 6</td><td>NN (wunti, question) IN * (shang, up) AT !</td><td>8 5</td><td>Na Ng</td><td>4 0</td><td>-7 -3</td><td>4 0</td></tr></table>", |
| "num": null, |
| "text": "The relative distortion after selection of (question, \u00b1" |
| }, |
| "TABREF16": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Factor type</td><td/><td colspan=\"3\">Condition and empirically estimated probability</td><td/></tr><tr><td>Fan-out</td><td>condition</td><td>f = 1</td><td>f = 2</td><td>f = 3</td><td>f > 3</td></tr><tr><td/><td>probability</td><td>0.85</td><td>0.61</td><td>0.44</td><td>0.42</td></tr><tr><td>Applicability</td><td>condition</td><td>A \u226510 -2</td><td>10 -2 > A \u226510 -3</td><td>10 -3 >A \u226510 -4</td><td>10 -4 > A</td></tr><tr><td/><td>probability</td><td>0.95</td><td>0.90</td><td>0.85</td><td>0.43</td></tr><tr><td>Specificity</td><td>condition</td><td>10>S > 0</td><td>12> S \u226510</td><td>S \u226512</td><td>S = 0</td></tr><tr><td/><td>probability</td><td>0.95</td><td>0.77</td><td>0.45</td><td>0.20</td></tr><tr><td colspan=\"2\">Relative distortion condition</td><td>rd = 0</td><td>rd = 1</td><td>rd = 2</td><td>rd > 2</td></tr><tr><td/><td>probability</td><td>0.26</td><td>0.11</td><td>0.07</td><td>0.04</td></tr><tr><td>Similarity to</td><td>condition</td><td>Sim =1.0</td><td>1.0>Sim\u22650.66</td><td>0.66> Sim\u22650.2</td><td>Sim < 0.2</td></tr><tr><td>Dictionary</td><td>probability</td><td>0.94</td><td>0.42</td><td>0.35</td><td>0.12</td></tr><tr><td>translation</td><td/><td/><td/><td/><td/></tr></table>", |
| "num": null, |
| "text": "summarizes the MLE probabilistic values obtained using 200 manually aligned sentences from the LecDOCE." |
| }, |
| "TABREF17": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>1. Read a pair of English-Chinese sentences.</td></tr></table>", |
| "num": null, |
| "text": "Summary of factors and formula used in SenseAlign Algorithm" |
| }, |
| "TABREF18": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td colspan=\"4\">summarizes the connections selected to form a word alignment solution.</td></tr><tr><td>English word</td><td>POS</td><td>Class code (e)</td><td>W e</td></tr><tr><td>I</td><td>PP</td><td>Gh280</td><td>13</td></tr><tr><td>caught</td><td>V</td><td>De098</td><td>12</td></tr><tr><td>a</td><td>AT</td><td>Nd098</td><td>6</td></tr><tr><td>fish</td><td>NN</td><td>Af100, Ah120, Ab032, Ea017, Eb031</td><td>32, 6, 22, 21, 9</td></tr><tr><td>Yesterday</td><td>NR</td><td>Lh225</td><td>8</td></tr></table>", |
| "num": null, |
| "text": "" |
| }, |
| "TABREF19": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">Chinese Word POS</td><td>Class code (c)</td><td>W c</td></tr><tr><td>z \u00a2 +\u00ba \u00b9</td><td>Nd Nh V+Di Ne Na</td><td>Tq23 Na02, Na05 Hm05 Qa04 Bi14</td><td>47 76, 21 27 52 15</td></tr></table>", |
| "num": null, |
| "text": "Results of lexical processing for example sentence 5e." |
| }, |
| "TABREF20": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Code</td><td>Gloss</td><td>Code</td><td>Gloss</td></tr><tr><td colspan=\"2\">Gh280 personal Pronouns</td><td>Qa04</td><td>number</td></tr><tr><td colspan=\"2\">Nd098 some and any</td><td>Bi14</td><td>fish, shrimp</td></tr><tr><td colspan=\"2\">Ab032 kinds of living creature</td><td>Tq23</td><td>today, yesterday, tomorrow</td></tr><tr><td colspan=\"2\">Lh225 time</td><td>Na02</td><td>I, we</td></tr><tr><td colspan=\"2\">De098 taking and catching things</td><td>Na05</td><td>oneself, others, somebody</td></tr><tr><td>Af100</td><td>common fish</td><td>Hm05</td><td>arrest, release</td></tr><tr><td colspan=\"2\">Ah120 parts around the head and neck</td><td/><td/></tr><tr><td>Ea017</td><td>courses in meals</td><td/><td/></tr><tr><td colspan=\"2\">Eb031 meat, etc.</td><td/><td/></tr></table>", |
| "num": null, |
| "text": "Results of lexical processing for example sentence 5c." |
| }, |
| "TABREF21": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "text": "Glossary of class codes relevant to the Example (5e, 5c)." |
| }, |
| "TABREF22": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">ST: English sentence</td><td colspan=\"2\">TT: Chinese sentence</td></tr><tr><td>W ord I</td><td>Sense code Gh280</td><td>W ord z (wuo, I)</td><td>Sense code Na05</td></tr><tr><td>caught</td><td>De098</td><td/><td/></tr></table>", |
| "num": null, |
| "text": "Various factors for connection candidates." |
| }, |
| "TABREF27": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "text": "Error analysis by POS." |
| } |
| } |
| } |
| } |