{ "paper_id": "P09-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:54:37.703183Z" }, "title": "Transliteration Alignment", "authors": [ { "first": "Vladimir", "middle": [], "last": "Pervouchine", "suffix": "", "affiliation": {}, "email": "vpervouchine@i2r.a-star.edu.sg" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Bo", "middle": [], "last": "Lin", "suffix": "", "affiliation": {}, "email": "linbo@pmail.ntu.edu.sg" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper studies transliteration alignment, its evaluation metrics and applications. We propose a new evaluation metric, alignment entropy, grounded in information theory, to evaluate alignment quality without the need for a gold standard reference, and compare the metric with F-score. We study the use of phonological features and affinity statistics for transliteration alignment at the phoneme and grapheme levels. The experiments show that better alignment consistently leads to more accurate transliteration. In the transliteration modeling application, we achieve a mean reciprocal rank (MRR) of 0.773 on the Xinhua personal name corpus, a significant improvement over other reported results on the same corpus. In the transliteration validation application, we achieve a 4.48% equal error rate on a large LDC corpus.", "pdf_parse": { "paper_id": "P09-1016", "_pdf_hash": "", "abstract": [ { "text": "This paper studies transliteration alignment, its evaluation metrics and applications. We propose a new evaluation metric, alignment entropy, grounded in information theory, to evaluate alignment quality without the need for a gold standard reference, and compare the metric with F-score. We study the use of phonological features and affinity statistics for transliteration alignment at the phoneme and grapheme levels. 
The experiments show that better alignment consistently leads to more accurate transliteration. In the transliteration modeling application, we achieve a mean reciprocal rank (MRR) of 0.773 on the Xinhua personal name corpus, a significant improvement over other reported results on the same corpus. In the transliteration validation application, we achieve a 4.48% equal error rate on a large LDC corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Transliteration is the process of rewriting a word from a source language to a target language in a different writing system using the word's phonological equivalent. The word and its transliteration form a transliteration pair. Many efforts have been devoted to two areas of study where there is a need to establish the correspondence between the graphemes or phonemes of a transliteration pair, also known as transliteration alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One area is generative transliteration modeling (Knight and Graehl, 1998), which studies how to convert a word from one language to another using statistical models. Since the models are trained on an aligned parallel corpus, the resulting statistical models can only be as good as the alignment of the corpus. Another area is transliteration validation, which studies ways to validate transliteration pairs. For example, Knight and Graehl (1998) use the lexicon frequency, Qu and Grefenstette (2004) use statistics from a monolingual corpus and the Web, and Kuo et al. (2007) use probabilities estimated from the transliteration model to validate transliteration candidates. In this paper, we propose using the alignment distance between a bilingual pair of words to establish evidence of transliteration candidacy. An example of transliteration pair alignment is shown in Figure 1. 
Like the word alignment in statistical machine translation (MT), transliteration alignment has become an important topic in machine transliteration, and it presents several unique challenges. Firstly, the grapheme sequence in a word is not delimited into grapheme tokens, resulting in an additional level of complexity. Secondly, to maintain phonological equivalence, the alignment has to make sense at both the grapheme and phoneme levels of the source and target languages. This paper reports progress in our ongoing spoken language translation project, in which we are interested in the alignment problem of personal name transliteration from English to Chinese.", "cite_spans": [ { "start": 52, "end": 77, "text": "(Knight and Graehl, 1998)", "ref_id": "BIBREF10" }, { "start": 434, "end": 458, "text": "Knight and Graehl (1998)", "ref_id": "BIBREF10" }, { "start": 486, "end": 512, "text": "Qu and Grefenstette (2004)", "ref_id": "BIBREF23" }, { "start": 569, "end": 586, "text": "Kuo et al. (2007)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 892, "end": 900, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows. In Section 2, we discuss the prior work. In Section 3, we introduce both statistically and phonologically motivated alignment techniques, and in Section 4 we advocate an evaluation metric, alignment entropy, that measures alignment quality. We report the experiments in Section 5. Finally, we conclude in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A number of transliteration studies have touched on the alignment issue as a part of the transliteration modeling process, where alignment is needed at the grapheme and phoneme levels. 
In their seminal paper, Knight and Graehl (1998) described a transliteration approach that transfers the grapheme representation of a word via the phonetic representation, which is known as the phoneme-based transliteration technique (Virga and Khudanpur, 2003; Meng et al., 2001; Jung et al., 2000; Gao et al., 2004). Another technique is to transfer the graphemes directly, known as direct orthographic mapping, which was shown to be simple and effective (Li et al., 2004). Some other approaches that use both source graphemes and phonemes have also been reported with good performance (Oh and Choi, 2002; Al-Onaizan and Knight, 2002; Bilac and Tanaka, 2004).", "cite_spans": [ { "start": 209, "end": 233, "text": "Knight and Graehl (1998)", "ref_id": "BIBREF10" }, { "start": 414, "end": 441, "text": "(Virga and Khudanpur, 2003;", "ref_id": "BIBREF25" }, { "start": 442, "end": 460, "text": "Meng et al., 2001;", "ref_id": "BIBREF19" }, { "start": 461, "end": 479, "text": "Jung et al., 2000;", "ref_id": "BIBREF7" }, { "start": 480, "end": 497, "text": "Gao et al., 2004)", "ref_id": "BIBREF5" }, { "start": 636, "end": 653, "text": "(Li et al., 2004)", "ref_id": "BIBREF15" }, { "start": 763, "end": 782, "text": "(Oh and Choi, 2002;", "ref_id": "BIBREF21" }, { "start": 783, "end": 811, "text": "Al-Onaizan and Knight, 2002;", "ref_id": "BIBREF1" }, { "start": 812, "end": 835, "text": "Bilac and Tanaka, 2004)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To align a bilingual training corpus, some take a phonological approach, in which crafted mapping rules encode prior linguistic knowledge about the source and target languages directly into the system (Wan and Verspoor, 1998; Meng et al., 2001; Jiang et al., 2007; Xu et al., 2006). 
Others adopt a statistical approach, in which the affinity between phonemes or graphemes is learned from the corpus (Gao et al., 2004; AbdulJaleel and Larkey, 2003; Virga and Khudanpur, 2003).", "cite_spans": [ { "start": 209, "end": 233, "text": "(Wan and Verspoor, 1998;", "ref_id": "BIBREF26" }, { "start": 234, "end": 252, "text": "Meng et al., 2001;", "ref_id": "BIBREF19" }, { "start": 253, "end": 272, "text": "Jiang et al., 2007;", "ref_id": "BIBREF6" }, { "start": 273, "end": 289, "text": "Xu et al., 2006)", "ref_id": "BIBREF29" }, { "start": 408, "end": 426, "text": "(Gao et al., 2004;", "ref_id": "BIBREF5" }, { "start": 427, "end": 456, "text": "AbdulJaleel and Larkey, 2003;", "ref_id": "BIBREF0" }, { "start": 457, "end": 483, "text": "Virga and Khudanpur, 2003)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the phoneme-based technique, where an intermediate level of phonetic representation is used as the pivot, alignment between the graphemes and phonemes of the source and target words is needed (Oh and Choi, 2005). If the source and target languages have different phoneme sets, alignment between the different phonemes is also required (Knight and Graehl, 1998). Although the direct orthographic mapping approach advocates a direct transfer of graphemes at run time, we still need to establish the grapheme correspondence at the model training stage, where phoneme-level alignment can help.", "cite_spans": [ { "start": 190, "end": 209, "text": "(Oh and Choi, 2005)", "ref_id": "BIBREF22" }, { "start": 334, "end": 359, "text": "(Knight and Graehl, 1998)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "It is apparent that the quality of transliteration alignment of a training corpus has a significant impact on the resulting transliteration model and its performance. 
Although there are many studies of evaluation metrics for word alignment in MT (Lambert, 2008), there has been much less reported work on evaluation metrics for transliteration alignment. In MT, the quality of the training corpus alignment A is often measured relative to the gold standard, or ground truth, alignment G, which is a manual alignment of the corpus or a part of it. Three evaluation metrics are used: precision, recall, and F-score, the latter being a function of the former two. They indicate how close the alignment under investigation is to the gold standard alignment (Mihalcea and Pedersen, 2003). Denoting the number of cross-lingual mappings common to both A and G as C_AG, the number of cross-lingual mappings in A as C_A, and the number of cross-lingual mappings in G as C_G, precision Pr is given as C_AG/C_A, recall Rc as C_AG/C_G, and F-score as 2\u00b7Pr\u00b7Rc/(Pr + Rc).", "cite_spans": [ { "start": 246, "end": 261, "text": "(Lambert, 2008)", "ref_id": "BIBREF12" }, { "start": 755, "end": 784, "text": "(Mihalcea and Pedersen, 2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Note that these metrics hinge on the availability of the gold standard, which is often not available. In this paper we propose a novel evaluation metric for transliteration alignment grounded in information theory. One important property of this metric is that it does not require a gold standard alignment as a reference. 
We will also show how this metric is used in generative transliteration modeling and in transliteration validation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We assume in this paper that the source language is English and the target language is Chinese, although the technique is not restricted to English-Chinese alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration alignment techniques", "sec_num": "3" }, { "text": "Let a word in the source language (English) be {e_i} = {e_1 . . . e_I} and its transliteration in the target language (Chinese) be {c_j} = {c_1 . . . c_J}, e_i \u2208 E, c_j \u2208 C, with E and C being the English and Chinese sets of characters, or graphemes, respectively. Aligning {e_i} and {c_j} means, for each target grapheme token c_j, finding a source grapheme token \u0113_m, which is an English substring in {e_i} that corresponds to c_j, as shown in the example in Figure 1. 
As Chinese is syllabic, we use a Chinese character c_j as the target grapheme token.", "cite_spans": [], "ref_spans": [ { "start": 462, "end": 470, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Transliteration alignment techniques", "sec_num": "3" }, { "text": "Given a distance function between graphemes of the source and target languages d(e_i, c_j), the problem of alignment can be formulated as a dynamic programming problem with the following function to minimize:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "D_{ij} = min(D_{i\u22121,j\u22121} + d(e_i, c_j), D_{i,j\u22121} + d(*, c_j), D_{i\u22121,j} + d(e_i, *)) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "Here the asterisk * denotes a null grapheme that is introduced to facilitate the alignment between graphemes of different lengths. The minimum distance achieved is then given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D = \\sum_{i=1}^{I} d(e_i, c_{\\theta(i)})", "eq_num": "(2)" } ], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "where j = \u03b8(i) is the correspondence between the source and target graphemes. The alignment can be performed via Expectation-Maximization (EM), starting with a random initial alignment and calculating the affinity matrix count(e_i, c_j) over the whole parallel corpus, where element (i, j) is the number of times character e_i was aligned to c_j. 
From the affinity matrix, conditional probabilities P(c_j|e_i) can be estimated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "P(c_j|e_i) = count(e_i, c_j) / \u03a3_j count(e_i, c_j) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "The alignment j = \u03b8(i) between {e_i} and {c_j} that maximizes the probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P = \\prod_{i} P(c_{\\theta(i)}|e_i)", "eq_num": "(4)" } ], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "is also the alignment that minimizes the alignment distance D:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "D = \u2212 log P = \u2212 \u03a3_i log P(c_\u03b8(i)|e_i) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "In other words, equations (2) and (5) are the same when we use the distance function d(e_i, c_j) = \u2212 log P(c_j|e_i). Minimizing the overall distance over a training corpus, we conduct EM iterations until convergence is achieved. This technique relies solely on the affinity statistics derived from the training corpus, and is thus called grapheme affinity alignment. It is equally applicable to alignment between any pair of symbol sequences representing either graphemes or phonemes 
(Gao et al., 2004; AbdulJaleel and Larkey, 2003; Virga and Khudanpur, 2003).", "cite_spans": [ { "start": 491, "end": 509, "text": "(Gao et al., 2004;", "ref_id": "BIBREF5" }, { "start": 510, "end": 539, "text": "AbdulJaleel and Larkey, 2003;", "ref_id": "BIBREF0" }, { "start": 540, "end": 566, "text": "Virga and Khudanpur, 2003)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Grapheme affinity alignment", "sec_num": "3.1" }, { "text": "Transliteration is about finding the phonological equivalent. It is therefore a natural choice to use the phonetic representation as the pivot. It is common, though, that the sound inventory differs from one language to another, resulting in different phonetic representations for source and target words. Continuing with the earlier example, Figure 2 shows the correspondence between the graphemes and phonemes of the English word \"Alice\" and its Chinese transliteration, with the CMU phoneme set used for English (Chase, 1997) and the IIR phoneme set for Chinese (Li et al., 2007a).", "cite_spans": [ { "start": 501, "end": 514, "text": "(Chase, 1997)", "ref_id": "BIBREF3" }, { "start": 547, "end": 565, "text": "(Li et al., 2007a)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 337, "end": 345, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Grapheme alignment via phonemes", "sec_num": "3.2" }, { "text": "A Chinese character is often mapped to a unique sequence of Chinese phonemes. Therefore, if we align English characters {e_i} and Chinese phonemes {cp_k} (cp_k \u2208 CP, the set of Chinese phonemes) well, we almost succeed in aligning English and Chinese grapheme tokens. Alignment between {e_i} and {cp_k} thus becomes the main task in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grapheme alignment via phonemes", "sec_num": "3.2" }, { "text": "Let the phonetic transcription of the English word {e_i} be {ep_n}, ep_n \u2208 EP, where EP is the set of English phonemes. 
Alignment between {e_i} and {ep_n}, as well as between {ep_n} and {cp_k}, can be performed via EM as described above. We estimate the conditional probability of Chinese phoneme cp_k after observing English character e_i as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phoneme affinity alignment", "sec_num": "3.2.1" }, { "text": "P(cp_k|e_i) = \u03a3_{ep_n} P(cp_k|ep_n) P(ep_n|e_i) (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phoneme affinity alignment", "sec_num": "3.2.1" }, { "text": "We use the distance function between English graphemes and Chinese phonemes d(e_i, cp_k) = \u2212 log P(cp_k|e_i) to perform the initial alignment between {e_i} and {cp_k} via dynamic programming, followed by EM iterations until convergence. The estimates for P(cp_k|ep_n) and P(ep_n|e_i) are obtained from the affinity matrices: the former from the alignment of English and Chinese phonetic representations, the latter from the alignment of English words and their phonetic representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phoneme affinity alignment", "sec_num": "3.2.1" }, { "text": "Alignment between the phonetic representations of source and target words can also be achieved using linguistic knowledge of phonetic similarity. Oh and Choi (2002) define classes of phonemes and assign various distances between phonemes of different classes. In contrast, we make use of phonological descriptors to define the similarity between phonemes in this paper.", "cite_spans": [ { "start": 150, "end": 168, "text": "Oh and Choi (2002)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Phonological alignment", "sec_num": "3.2.2" }, { "text": "Perhaps the most common way to measure phonetic similarity is to compute the distances between phoneme features (Kessler, 2005). Such features have been introduced in many ways, such as perceptual attributes or articulatory attributes. 
Recently, Tao et al. (2006) and Yoon et al. (2007) have studied the use of phonological features and a manually assigned phonological distance to measure the similarity of transliterated words for extracting transliterations from a comparable corpus.", "cite_spans": [ { "start": 116, "end": 131, "text": "(Kessler, 2005)", "ref_id": "BIBREF9" }, { "start": 251, "end": 268, "text": "Tao et al. (2006)", "ref_id": "BIBREF24" }, { "start": 273, "end": 291, "text": "Yoon et al. (2007)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Phonological alignment", "sec_num": "3.2.2" }, { "text": "We adopt binary-valued articulatory attributes as the phonological descriptors, which are used to describe the CMU and IIR phoneme sets for English and Chinese Mandarin, respectively. Withgott and Chen (1993) define a feature vector of phonological descriptors for English sounds. We extend the idea by defining a 21-element binary feature vector for each English and Chinese phoneme. Each element of the feature vector represents the presence or absence of a phonological descriptor that differentiates various kinds of phonemes, e.g. vowels from consonants, front from back vowels, nasals from fricatives, etc.", "cite_spans": [ { "start": 187, "end": 211, "text": "Withgott and Chen (1993)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Phonological alignment", "sec_num": "3.2.2" }, { "text": "In this way, a phoneme is described by a feature vector. We express the similarity between two phonemes by the Hamming distance, also called the phonological distance, between the two feature vectors. A difference in one descriptor between two phonemes increases their distance by 1. As the descriptors are chosen to differentiate between sounds, the distance between similar phonemes is low, while that between two very different phonemes, such as a vowel and a consonant, is high. 
The null phoneme, added to both the English and Chinese phoneme sets, has a constant distance to any actual phoneme, one that is higher than the distance between any two actual phonemes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonological alignment", "sec_num": "3.2.2" }, { "text": "We use the phonological distance to perform the initial alignment between the English and Chinese phonetic representations of words. After that, we recalculate the distances between phonemes using the affinity matrix, as described in Section 3.1, and realign the corpus. We continue the iterations until convergence is reached. Because of the use of phonological descriptors for the initial alignment, we call this technique phonological alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phonological alignment", "sec_num": "3.2.2" }, { "text": "Having aligned the graphemes between two languages, we want to measure how good the alignment is. Aligning the graphemes means aligning the English substrings, called the source grapheme tokens, to Chinese characters, the target grapheme tokens. Intuitively, the more consistent the mapping is, the better the alignment will be. We can quantify the consistency of alignment via alignment entropy, grounded in information theory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration alignment entropy", "sec_num": "4" }, { "text": "Given a corpus of aligned transliteration pairs, we calculate count(c_j, \u0113_m), the number of times each Chinese grapheme token (character) c_j is mapped to each English grapheme token \u0113_m. 
We use the counts to estimate the probabilities", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration alignment entropy", "sec_num": "4" }, { "text": "P(\u0113_m, c_j) = count(c_j, \u0113_m) / \u03a3_{m,j} count(c_j, \u0113_m); P(\u0113_m|c_j) = count(c_j, \u0113_m) / \u03a3_m count(c_j, \u0113_m)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration alignment entropy", "sec_num": "4" }, { "text": "The alignment entropy of the transliteration corpus is the weighted average of the entropy values for all Chinese tokens:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration alignment entropy", "sec_num": "4" }, { "text": "H = \u2212 \u03a3_j P(c_j) \u03a3_m P(\u0113_m|c_j) log P(\u0113_m|c_j) = \u2212 \u03a3_{m,j} P(\u0113_m, c_j) log P(\u0113_m|c_j) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration alignment entropy", "sec_num": "4" }, { "text": "Alignment entropy indicates the uncertainty of the mapping between English and Chinese tokens resulting from alignment. We expect, and will show, that this estimate is a good indicator of alignment quality, and that it is as effective as F-score, but without the need for a gold standard reference. A lower alignment entropy suggests that each Chinese token tends to be mapped to fewer distinct English tokens, reflecting better consistency. We expect a good alignment to have a sharp cross-lingual mapping with low alignment entropy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transliteration alignment entropy", "sec_num": "4" }, { "text": "We use two transliteration corpora: the Xinhua corpus (Xinhua News Agency, 1992) of 37,637 personal name pairs and the LDC Chinese-English named entity list LDC2005T34 (Linguistic Data Consortium, 2005), containing 673,390 personal name pairs. The LDC corpus is referred to as LDC05 for short hereafter. 
For the results to be comparable with other studies, we follow the same split of the Xinhua corpus as that in (Li et al., 2007b), with training and test sets of 34,777 and 2,896 names, respectively. In contrast to the well-edited Xinhua corpus, LDC05 contains erroneous entries. We have manually verified and corrected around 240,000 pairs to clean up the corpus. As a result, we arrive at a set of 560,768 English-Chinese (EC) pairs that follow the Chinese phonetic rules, a set of 83,403 English-Japanese Kanji (EJ) pairs, which follow the Japanese phonetic rules, and the remaining 29,219 pairs (REST), which are labeled as incorrect transliterations. Next we conduct three experiments to study 1) alignment entropy vs. F-score, 2) the impact of alignment quality on transliteration accuracy, and 3) how to validate transliteration using alignment metrics.", "cite_spans": [ { "start": 160, "end": 194, "text": "(Linguistic Data Consortium, 2005)", "ref_id": null }, { "start": 407, "end": 425, "text": "(Li et al., 2007b)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "As mentioned earlier, for English-Chinese grapheme alignment, the main task is to align English graphemes to Chinese phonemes. Phonetic transcriptions for the English names in the Xinhua corpus are obtained by a grapheme-to-phoneme (G2P) converter (Lenzo, 1997), which generates the phoneme sequence without providing the exact correspondence between the graphemes and phonemes. The G2P converter is trained on the CMU dictionary (Lenzo, 2008).", "cite_spans": [ { "start": 243, "end": 256, "text": "(Lenzo, 1997)", "ref_id": "BIBREF13" }, { "start": 418, "end": 431, "text": "(Lenzo, 2008)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Alignment entropy vs. F-score", "sec_num": "5.1" }, { "text": "We align the English grapheme and phonetic representations e \u2212 ep with the affinity alignment technique (Section 3.1) in 3 iterations. 
We further align the English and Chinese phonetic representations ep \u2212 cp via both the affinity and phonological alignment techniques, carrying out 6 and 7 iterations, respectively. The alignment methods are schematically shown in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 360, "end": 368, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Alignment entropy vs. F-score", "sec_num": "5.1" }, { "text": "To study how alignment entropy varies with alignment quality, we would like to have many different alignment results. We pair the intermediate results from the e \u2212 ep and ep \u2212 cp alignment iterations (see Figure 3) to form e \u2212 ep \u2212 cp alignments between English graphemes and Chinese phonemes, and let them converge through a few more iterations, as shown in Figure 4 . In this way, we arrive at a total of 114 phonological and 80 affinity alignments of different quality. We have manually aligned a random set of 3,000 transliteration pairs from the Xinhua training set to serve as the gold standard, on which we calculate the precision, recall, and F-score, as well as the alignment entropy, for each alignment. Each alignment is reflected as a data point in Figures 5a and 5b . From the figures, we can observe a clear correlation between alignment entropy and F-score, which validates the effectiveness of alignment entropy as an evaluation metric. Note that we do not need the gold standard reference for reporting the alignment entropy.", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 235, "text": "Figure 3)", "ref_id": null }, { "start": 377, "end": 385, "text": "Figure 4", "ref_id": null }, { "start": 774, "end": 792, "text": "Figures 5a and 5b", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Alignment entropy vs. F-score", "sec_num": "5.1" }, { "text": "We also notice that the data points seem to form clusters inside which the value of F-score changes insignificantly as the alignment entropy changes. Further investigation reveals that this could be due to the limited number of entries in the gold standard. The 3,000 names in the gold standard are not enough to effectively reflect the change across different alignments. F-score requires a large gold standard, which is not always available. In contrast, because alignment entropy does not depend on the gold standard, one can easily report the alignment performance on any unaligned parallel corpus. Results for precision and recall show similar trends.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alignment entropy vs. F-score", "sec_num": "5.1" }, { "text": "We now further study how the alignment affects the generative transliteration model in the framework of the joint source-channel model (Li et al., 2004). This model performs transliteration by maximizing the joint probability of the source and target names P({e_i}, {c_j}), where the source and target names are sequences of English and Chinese grapheme tokens. The joint probability is expressed as a chain product of conditional probabilities of token pairs, P({e_i}, {c_j}) = \u03a0_k P((\u0113_k, c_k)|(\u0113_{k\u22121}, c_{k\u22121})), k = 1 . . . N, where we limit the history to one preceding pair, resulting in a bigram model. The conditional probabilities for token pairs are estimated from the aligned training corpus. We use this model because it was shown to be simple yet accurate (Ekbal et al., 2006; Li et al., 2007b). We train a model for each of the 114 phonological alignments and the 80 affinity alignments in Section 5.1 and conduct a transliteration experiment on the Xinhua test data. During transliteration, an input English name is first decoded into a lattice of all possible English and Chinese grapheme token pairs. 
Then the joint source-channel transliteration model is used to score the lattice to obtain a ranked list of the m most likely Chinese transliterations (the m-best list).", "cite_spans": [ { "start": 135, "end": 152, "text": "(Li et al., 2004)", "ref_id": "BIBREF15" }, { "start": 786, "end": 806, "text": "(Ekbal et al., 2006;", "ref_id": "BIBREF4" }, { "start": 807, "end": 824, "text": "Li et al., 2007b)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Impact of alignment quality on transliteration accuracy", "sec_num": "5.2" }, { "text": "We measure transliteration accuracy as the mean reciprocal rank (MRR) (Kantor and Voorhees, 2000). If there is only one correct Chinese transliteration of the k-th English word and it is found at the r_k-th position in the m-best list, its reciprocal rank is 1/r_k. If the list contains no correct transliterations, the reciprocal rank is 0. In the case of multiple correct transliterations, we take the one that gives the highest reciprocal rank. MRR is the average of the reciprocal ranks across all words in the test set. It is commonly used as a measure of transliteration accuracy, and it allows us to make a direct comparison with other reported work (Li et al., 2007b).", "cite_spans": [ { "start": 70, "end": 97, "text": "(Kantor and Voorhees, 2000)", "ref_id": "BIBREF8" }, { "start": 658, "end": 676, "text": "(Li et al., 2007b)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Impact of alignment quality on transliteration accuracy", "sec_num": "5.2" }, { "text": "We take m = 20 and measure MRR on the Xinhua test set for each alignment of the Xinhua training set as described in Section 5.1. We report MRR and the alignment entropy in Figures 6a and 7a for the affinity and phonological alignments, respectively. The highest MRR we achieve is 0.771 for affinity alignments and 0.773 for phonological alignments. 
This is a significant improvement over the MRR of 0.708 reported in (Li et al., 2007b) on the same data. We also observe that the phonological alignment technique produces, on average, better alignments than the affinity alignment technique in terms of both the alignment entropy and MRR.", "cite_spans": [ { "start": 408, "end": 426, "text": "(Li et al., 2007b)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 164, "end": 181, "text": "Figures 6a and 7a", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Impact of alignment quality on transliteration accuracy", "sec_num": "5.2" }, { "text": "We also report the MRR and F -scores for each alignment in Figures 6b and 7b, from which we observe that alignment entropy has a stronger correlation with MRR than F -score does. Spearman's rank correlation coefficients are \u22120.89 and \u22120.88 for the data in Figures 6a and 7a respectively. This once again demonstrates the desired property of alignment entropy as an evaluation metric of alignment.", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 76, "text": "Figures 6b and 7b", "ref_id": "FIGREF6" }, { "start": 255, "end": 264, "text": "Figure 6a", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Impact of alignment quality on transliteration accuracy", "sec_num": "5.2" }, { "text": "To validate our findings from the Xinhua corpus, we further carry out experiments on the EC set of LDC05, containing 560,768 entries. We split the set into 5 almost equal subsets for cross-validation: in each of the 5 experiments, one subset is used for testing and the remaining ones for training. Since LDC05 contains one-to-many English-Chinese transliteration pairs, we make sure that an English name only appears in one subset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impact of alignment quality on transliteration accuracy", "sec_num": "5.2" }, { "text": "Note that the EC set of LDC05 contains many names of non-English and, more generally, non-European origin.
This makes the G2P converter less accurate, as it is trained on an English phonetic dictionary. We therefore only apply the affinity alignment technique to align the EC set. We use each iteration of the alignment in the transliteration modeling and present the resulting MRR along with the alignment entropy in Figure 8. The MRR results are the averages of the five values produced in the five-fold cross-validation. We observe a clear correlation between the alignment entropy and the transliteration accuracy expressed by MRR on the LDC05 corpus, similar to that on the Xinhua corpus, with a Spearman's rank correlation coefficient of \u22120.77. We obtain the highest average MRR of 0.720 on the EC set.", "cite_spans": [], "ref_spans": [ { "start": 410, "end": 418, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Impact of alignment quality on transliteration accuracy", "sec_num": "5.2" }, { "text": "Transliteration validation is a hypothesis test that decides whether a given transliteration pair is genuine or not. Instead of using the lexicon frequency (Knight and Graehl, 1998) or Web statistics (Qu and Grefenstette, 2004), we propose validating transliteration pairs according to the alignment distance D between the aligned English graphemes and Chinese phonemes (see Equations (2) and (5)). A distance function d(e_i, cp_k) is established from each alignment on the Xinhua training set as discussed in Section 5.2. An audit of the LDC05 corpus groups it into three sets: an English-Chinese (EC) set of 560,768 samples, an English-Japanese (EJ) set of 83,403 samples and the REST set of 29,219 samples that are not transliteration pairs. We mark the EC name pairs as genuine and the remaining 112,622 name pairs that do not follow the Chinese phonetic rules as false transliterations, thus creating the ground-truth labels for an English-Chinese transliteration validation experiment. In other words, LDC05 has 560,768 genuine transliteration pairs and 112,622 false ones. We run one iteration of alignment over LDC05 (both genuine and false) with the distance function d(e_i, cp_k) derived from the affinity matrix of one aligned Xinhua training set. In this way, each transliteration pair in LDC05 provides an alignment distance.
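The Spearman's rank correlation coefficients reported above can be computed as in the sketch below. The hypothetical `spearman_rho` helper implements the classic no-ties formula rho = 1 \u2212 6\u03a3d\u00b2/(n(n\u00b2\u22121)); tied ranks would need the averaged-rank correction.

```python
def spearman_rho(x, y):
    # Spearman's rank correlation for equal-length lists without ties:
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference.
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A negative rho, as observed here, means MRR rises as alignment entropy falls.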
One can expect that a genuine transliteration pair typically aligns well, leading to a low distance, while a false transliteration pair typically does not. To remove the effect of word length, we normalize the distance by the English name length, the Chinese phonetic transcription length, and the sum of both, producing score_1, score_2 and score_3 respectively. We can now classify each LDC05 name pair as genuine or false via a hypothesis test: when the test score is lower than a pre-set threshold, the name pair is accepted as genuine; otherwise it is rejected as false. In this way, each pre-set threshold presents two types of errors: a false-alarm rate and a miss-detect rate. A common way to present such results is via detection error tradeoff (DET) curves, which show all possible decision points, and the equal error rate (EER), at which the false-alarm and miss-detect rates are equal. Figure 9a shows three DET curves based on score_1, score_2 and score_3 respectively for one alignment solution on the Xinhua training set. The horizontal axis is the probability of miss-detecting a genuine transliteration, while the vertical axis is the probability of a false alarm. It is clear that, of the three, score_2 gives the best results.", "cite_spans": [ { "start": 156, "end": 181, "text": "(Knight and Graehl, 1998)", "ref_id": "BIBREF10" }, { "start": 200, "end": 227, "text": "(Qu and Grefenstette, 2004)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 1903, "end": 1912, "text": "Figure 9a", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Validating transliteration using alignment measure", "sec_num": "5.3" }, { "text": "We select the alignments of the Xinhua training set that produce the highest and the lowest MRR. We also randomly select three other alignments that produce different MRR values from the pool of 114 phonological and 80 affinity alignments. We use each alignment to derive a distance function d(e_i, cp_k).
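The threshold sweep behind the DET/EER analysis can be sketched as follows. This is an illustrative outline, not the paper's evaluation code: `equal_error_rate` is a hypothetical helper that treats lower alignment-distance scores (e.g. the length-normalized score_2) as evidence of a genuine pair.

```python
def equal_error_rate(genuine, false):
    # Sweep the decision threshold over all observed scores; a pair is
    # accepted as genuine when its distance score is <= threshold.
    # Miss rate: genuine pairs rejected; false-alarm rate: false pairs accepted.
    best = (1.0, 0.0)  # threshold below all scores: miss = 1, false alarm = 0
    for t in sorted(set(genuine) | set(false)):
        miss = sum(s > t for s in genuine) / len(genuine)
        fa = sum(s <= t for s in false) / len(false)
        if abs(miss - fa) < abs(best[0] - best[1]):
            best = (miss, fa)
    return (best[0] + best[1]) / 2
```

Plotting miss rate against false-alarm rate over all thresholds gives the DET curve; the EER is the point where the two error rates coincide.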
Table 1 shows the EER of LDC05 validation using score_2, along with the alignment entropy of the Xinhua training set that derives d(e_i, cp_k) and the MRR on the Xinhua test set in the generative transliteration experiment (see Section 5.2), for all 5 alignments. To avoid cluttering Figure 9b, we show the DET curves for alignments 1, 2 and 5 only. We observe that a distance function derived from a better-aligned Xinhua corpus, as measured by both our alignment entropy metric and MRR, consistently leads to higher validation accuracy on LDC05.", "cite_spans": [], "ref_spans": [ { "start": 302, "end": 309, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 585, "end": 591, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Validating transliteration using alignment measure", "sec_num": "5.3" }, { "text": "We conclude that the alignment entropy is a reliable indicator of alignment quality, as confirmed by our experiments on both the Xinhua and LDC corpora. Because alignment entropy does not require a gold standard reference, it can be used to evaluate alignments of large transliteration corpora and may give a more reliable estimate of alignment quality than the F -score metric, as shown in our transliteration experiment. The alignment quality of the training corpus has a significant impact on the transliteration models. We achieve the highest MRR of 0.773 on the Xinhua corpus with the phonological alignment technique, which represents a significant performance gain over other reported results. Phonological alignment outperforms affinity alignment on a clean database.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "We propose using the alignment distance to validate transliterations. A high-quality alignment on a small verified corpus such as Xinhua can be effectively used to validate a large noisy corpus such as LDC05.
We believe that this property would be useful in transliteration extraction and cross-lingual information retrieval applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "The complete table of English and Chinese phonemes with their descriptors, as well as a demo of the transliteration system, is available at http://translit.i2r.astar.edu.sg/demos/transliteration/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Statistical transliteration for English-Arabic cross language information retrieval", "authors": [ { "first": "Nasreen", "middle": [], "last": "Abduljaleel", "suffix": "" }, { "first": "Leah", "middle": [ "S" ], "last": "Larkey", "suffix": "" } ], "year": 2003, "venue": "Proc. ACM CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nasreen AbdulJaleel and Leah S. Larkey. 2003. Statistical transliteration for English-Arabic cross language information retrieval. In Proc. ACM CIKM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Machine transliteration of names in Arabic text", "authors": [ { "first": "Yaser", "middle": [], "last": "Al-Onaizan", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2002, "venue": "Proc. ACL Workshop: Computational Approaches to Semitic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaser Al-Onaizan and Kevin Knight. 2002. Machine transliteration of names in Arabic text. In Proc.
ACL Workshop: Computational Approaches to Semitic Languages.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A hybrid back-transliteration system for Japanese", "authors": [ { "first": "Slaven", "middle": [], "last": "Bilac", "suffix": "" }, { "first": "Hozumi", "middle": [], "last": "Tanaka", "suffix": "" } ], "year": 2004, "venue": "Proc. COLING", "volume": "", "issue": "", "pages": "597--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slaven Bilac and Hozumi Tanaka. 2004. A hybrid back-transliteration system for Japanese. In Proc. COLING, pages 597-603.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Error-responsive feedback mechanisms for speech recognizers", "authors": [ { "first": "Lin", "middle": [ "L" ], "last": "Chase", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin L. Chase. 1997. Error-responsive feedback mechanisms for speech recognizers. Ph.D. thesis, CMU.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A modified joint source-channel model for transliteration", "authors": [ { "first": "Asif", "middle": [], "last": "Ekbal", "suffix": "" }, { "first": "Sudip", "middle": [], "last": "Kumar Naskar", "suffix": "" }, { "first": "Sivaji", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2006, "venue": "Proc. COLING/ACL", "volume": "", "issue": "", "pages": "191--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asif Ekbal, Sudip Kumar Naskar, and Sivaji Bandyopadhyay. 2006. A modified joint source-channel model for transliteration. In Proc.
COLING/ACL, pages 191-198", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Phoneme-based transliteration of foreign names for OOV problem", "authors": [ { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" } ], "year": 2004, "venue": "Proc. IJCNLP", "volume": "", "issue": "", "pages": "374--381", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Gao, Kam-Fai Wong, and Wai Lam. 2004. Phoneme-based transliteration of foreign names for OOV problem. In Proc. IJCNLP, pages 374-381.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Named entity translation with web mining and transliteration", "authors": [ { "first": "Long", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Lee-Feng", "middle": [], "last": "Chien", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Niu", "suffix": "" } ], "year": 2007, "venue": "IJCAI", "volume": "", "issue": "", "pages": "1629--1634", "other_ids": {}, "num": null, "urls": [], "raw_text": "Long Jiang, Ming Zhou, Lee-Feng Chien, and Cheng Niu. 2007. Named entity translation with web min- ing and transliteration. In IJCAI, pages 1629-1634.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An English to Korean transliteration model of extended Markov window", "authors": [ { "first": "Sung", "middle": [ "Young" ], "last": "Jung", "suffix": "" }, { "first": "Sunglim", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Eunok", "middle": [], "last": "Paek", "suffix": "" } ], "year": 2000, "venue": "Proc. COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sung Young Jung, SungLim Hong, and Eunok Paek. 2000. An English to Korean transliteration model of extended Markov window. In Proc. 
COLING, vol- ume 1.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The TREC-5 confusion track: comparing retrieval methods for scanned text", "authors": [ { "first": "", "middle": [ "B" ], "last": "Paul", "suffix": "" }, { "first": "Ellen", "middle": [ "M" ], "last": "Kantor", "suffix": "" }, { "first": "", "middle": [], "last": "Voorhees", "suffix": "" } ], "year": 2000, "venue": "Information Retrieval", "volume": "2", "issue": "", "pages": "165--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul. B. Kantor and Ellen. M. Voorhees. 2000. The TREC-5 confusion track: comparing retrieval meth- ods for scanned text. Information Retrieval, 2:165- 176.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Phonetic comparison algorithms", "authors": [ { "first": "Brett", "middle": [], "last": "Kessler", "suffix": "" } ], "year": 2005, "venue": "Transactions of the Philological Society", "volume": "103", "issue": "2", "pages": "243--260", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brett Kessler. 2005. Phonetic comparison algo- rithms. Transactions of the Philological Society, 103(2):243-260.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Machine transliteration", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. 
Computational Linguistics, 24(4).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A phonetic similarity model for automatic extraction of transliteration pairs", "authors": [ { "first": "Jin-Shea", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ying-Kuei", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2007, "venue": "ACM Trans. Asian Language Information Processing", "volume": "6", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin-Shea Kuo, Haizhou Li, and Ying-Kuei Yang. 2007. A phonetic similarity model for automatic extraction of transliteration pairs. ACM Trans. Asian Language Information Processing, 6(2).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Exploiting lexical information and discriminative alignment training in statistical machine translation", "authors": [ { "first": "Patrik", "middle": [], "last": "Lambert", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrik Lambert. 2008. Exploiting lexical informa- tion and discriminative alignment training in statis- tical machine translation. Ph.D. thesis, Universitat Polit\u00e8cnica de Catalunya, Barcelona, Spain.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "t2p: text-to-phoneme converter builder", "authors": [ { "first": "Kevin", "middle": [], "last": "Lenzo", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Lenzo. 1997. t2p: text-to-phoneme converter builder. 
http://www.cs.cmu.edu/\u02dclenzo/t2p/.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The CMU pronouncing dictionary", "authors": [ { "first": "Kevin", "middle": [], "last": "Lenzo", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Lenzo. 2008. The CMU pronounc- ing dictionary. http://www.speech.cs.cmu.edu/cgi- bin/cmudict.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A joint source-channel model for machine transliteration", "authors": [ { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" } ], "year": 2004, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "159--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haizhou Li, Min Zhang, and Jian Su. 2004. A joint source-channel model for machine transliteration. In Proc. ACL, pages 159-166.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A vector space modeling approach to spoken language identification", "authors": [ { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Chin-Hui", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2007, "venue": "IEEE Trans. Acoust., Speech, Signal Process", "volume": "15", "issue": "1", "pages": "271--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haizhou Li, Bin Ma, and Chin-Hui Lee. 2007a. A vector space modeling approach to spoken language identification. IEEE Trans. 
Acoust., Speech, Signal Process., 15(1):271-284.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semantic transliteration of personal names", "authors": [ { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" }, { "first": "Khe Chai", "middle": [], "last": "Sim", "suffix": "" }, { "first": "Jin-Shea", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "Minghui", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2007, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "120--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haizhou Li, Khe Chai Sim, Jin-Shea Kuo, and Minghui Dong. 2007b. Semantic transliteration of personal names. In Proc. ACL, pages 120-127.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Linguistic Data Consortium. 2005. LDC Chinese-English name entity lists LDC2005T34", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linguistic Data Consortium. 2005. LDC Chinese- English name entity lists LDC2005T34.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Generate phonetic cognates to handle name entities in English-Chinese cross-language spoken document retrieval", "authors": [ { "first": "Helen", "middle": [ "M" ], "last": "Meng", "suffix": "" }, { "first": "Wai-Kit", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2001, "venue": "Proc. ASRU", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helen M. Meng, Wai-Kit Lo, Berlin Chen, and Karen Tang. 2001. Generate phonetic cognates to han- dle name entities in English-Chinese cross-language spoken document retrieval. In Proc. 
ASRU.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An evaluation exercise for word alignment", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2003, "venue": "Proc. HLT-NAACL", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In Proc. HLT-NAACL, pages 1-10.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "An English-Korean transliteration model using pronunciation and contextual rules", "authors": [ { "first": "Jong-Hoon", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2002, "venue": "Proc. COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jong-Hoon Oh and Key-Sun Choi. 2002. An English- Korean transliteration model using pronunciation and contextual rules. In Proc. COLING 2002.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Machine learning based english-to-korean transliteration using grapheme and phoneme information", "authors": [ { "first": "Jong-Hoon", "middle": [], "last": "Oh", "suffix": "" }, { "first": "Key-Sun", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2005, "venue": "IEICE Trans. Information and Systems", "volume": "", "issue": "7", "pages": "1737--1748", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jong-Hoon Oh and Key-Sun Choi. 2005. Machine learning based english-to-korean transliteration us- ing grapheme and phoneme information. IEICE Trans. 
Information and Systems, E88-D(7):1737- 1748.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Finding ideographic representations of Japanese names written in Latin script via language identification and corpus validation", "authors": [ { "first": "Yan", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2004, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "183--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Qu and Gregory Grefenstette. 2004. Finding ideo- graphic representations of Japanese names written in Latin script via language identification and corpus validation. In Proc. ACL, pages 183-190.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Unsupervised named entity transliteration using temporal and phonetic correlation", "authors": [ { "first": "Tao", "middle": [], "last": "Tao", "suffix": "" }, { "first": "", "middle": [], "last": "Su-Youn", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Yoon", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Fisterd", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2006, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "250--257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Tao, Su-Youn Yoon, Andrew Fisterd, Richard Sproat, and ChengXiang Zhai. 2006. Unsupervised named entity transliteration using temporal and pho- netic correlation. In Proc. EMNLP, pages 250-257.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Transliteration of proper names in cross-lingual information retrieval", "authors": [ { "first": "Paola", "middle": [], "last": "Virga", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2003, "venue": "Proc. 
ACL MLNER", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paola Virga and Sanjeev Khudanpur. 2003. Translit- eration of proper names in cross-lingual information retrieval. In Proc. ACL MLNER.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Automatic English-Chinese name transliteration for development of multilingual resources", "authors": [ { "first": "Stephen", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Cornelia", "middle": [ "Maria" ], "last": "Verspoor", "suffix": "" } ], "year": 1998, "venue": "Proc. COL-ING", "volume": "", "issue": "", "pages": "1352--1356", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Wan and Cornelia Maria Verspoor. 1998. Au- tomatic English-Chinese name transliteration for de- velopment of multilingual resources. In Proc. COL- ING, pages 1352-1356.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Computational models of American speech. Centre for the study of language and information", "authors": [ { "first": "M", "middle": [ "M" ], "last": "Withgott", "suffix": "" }, { "first": "F", "middle": [ "R" ], "last": "Chen", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. M. Withgott and F. R. Chen. 1993. Computational models of American speech. Centre for the study of language and information.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Chinese transliteration of foreign personal names", "authors": [ { "first": "Xinhua", "middle": [], "last": "News Agency", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinhua News Agency. 1992. Chinese transliteration of foreign personal names. 
The Commercial Press.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Modeling impression in probabilistic transliteration into Chinese", "authors": [ { "first": "Lili", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Atsushi", "middle": [], "last": "Fujii", "suffix": "" }, { "first": "Tetsuya", "middle": [], "last": "Ishikawa", "suffix": "" } ], "year": 2006, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "242--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "LiLi Xu, Atsushi Fujii, and Tetsuya Ishikawa. 2006. Modeling impression in probabilistic transliteration into Chinese. In Proc. EMNLP, pages 242-249.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Multilingual transliteration using feature based phonetic method", "authors": [ { "first": "Kyoung-Young", "middle": [], "last": "Su-Youn Yoon", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Kim", "suffix": "" }, { "first": "", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2007, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "112--119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su-Youn Yoon, Kyoung-Young Kim, and Richard Sproat. 2007. Multilingual transliteration using fea- ture based phonetic method. In Proc. ACL, pages 112-119.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "An example of grapheme alignment (Alice, \u827e\u4e3d\u65af), where a Chinese grapheme, a character, is aligned to an English grapheme token." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "An example of English-Chinese transliteration alignment via phonetic representations." }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Aligning English graphemes to phonemes e\u2212ep and English phonemes to Chinese phonemes ep\u2212cp. 
Intermediate e\u2212ep and ep\u2212cp alignments are used for producing e \u2212 ep \u2212 cp alignments. Example of aligning English graphemes to Chinese phonemes. Each combination of e \u2212 ep and ep \u2212 cp alignments is used to derive the initial distance d(e i , cp k ), resulting in several e \u2212 ep \u2212 cp alignments due to the affinity alignment iterations." }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "Correlation between F -score and alignment entropy for Xinhua training set alignments." }, "FIGREF6": { "uris": null, "num": null, "type_str": "figure", "text": "Mean reciprocal ratio on Xinhua test set vs. alignment entropy and F -score for models trained with different affinity alignments." }, "FIGREF7": { "uris": null, "num": null, "type_str": "figure", "text": "Mean reciprocal ratio on Xinhua test set vs. alignment entropy and F -score for models trained with different phonological alignments. Mean reciprocal ratio vs. alignment entropy for alignments of EC set." }, "FIGREF9": { "uris": null, "num": null, "type_str": "figure", "text": "Detection error tradeoff (DET) curves for transliteration validation on LDC05." }, "TABREF1": { "num": null, "html": null, "type_str": "table", "text": "", "content": "
Alignment | Alignment entropy | MRR | LDC classification EER, %
1 | 2.396 | 0.773 | 4.48
2 | 2.529 | 0.764 | 4.52
3 | 2.586 | 0.761 | 4.51
4 | 2.621 | 0.757 | 4.71
5 | 2.625 | 0.754 | 4.70
" }, "TABREF2": { "num": null, "html": null, "type_str": "table", "text": "Equal error ratio of LDC transliteration pair validation for different alignments of Xinhua training set.", "content": "" } } } }