{
"paper_id": "P04-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:43:25.221969Z"
},
"title": "A Joint Source-Channel Model for Machine Transliteration",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Haizhou",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Zhang",
"middle": [],
"last": "Min",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Su",
"middle": [],
"last": "Jian",
"suffix": "",
"affiliation": {},
"email": "sujian@i2r.a-star.edu.sg"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most foreign names are transliterated into Chinese, Japanese or Korean with approximate phonetic equivalents. The transliteration is usually achieved through intermediate phonemic mapping. This paper presents a new framework that allows direct orthographical mapping (DOM) between two different languages, through a joint source-channel model, also called n-gram transliteration model (TM). With the n-gram TM model, we automate the orthographic alignment process to derive the aligned transliteration units from a bilingual dictionary. The n-gram TM under the DOM framework greatly reduces system development effort and provides a quantum leap in improvement in transliteration accuracy over that of other state-of-the-art machine learning algorithms. The modeling framework is validated through several experiments for English-Chinese language pair.",
"pdf_parse": {
"paper_id": "P04-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "Most foreign names are transliterated into Chinese, Japanese or Korean with approximate phonetic equivalents. The transliteration is usually achieved through intermediate phonemic mapping. This paper presents a new framework that allows direct orthographical mapping (DOM) between two different languages, through a joint source-channel model, also called n-gram transliteration model (TM). With the n-gram TM model, we automate the orthographic alignment process to derive the aligned transliteration units from a bilingual dictionary. The n-gram TM under the DOM framework greatly reduces system development effort and provides a quantum leap in improvement in transliteration accuracy over that of other state-of-the-art machine learning algorithms. The modeling framework is validated through several experiments for English-Chinese language pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In applications such as cross-lingual information retrieval (CLIR) and machine translation, there is an increasing need to translate out-of-vocabulary words from one language to another, especially from alphabet language to Chinese, Japanese or Korean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Proper names of English, French, German, Russian, Spanish and Arabic origins constitute a good portion of out-of-vocabulary words. They are translated through transliteration, the method of translating into another language by preserving how words sound in their original languages. For writing foreign names in Chinese, transliteration always follows the original romanization. Therefore, any foreign name will have only one Pinyin (romanization of Chinese) and thus in Chinese characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on automatic Chinese transliteration of foreign alphabet names. Because some alphabet writing systems use various diacritical marks, we find it more practical to write names containing such diacriticals as they are rendered in English. Therefore, we refer all foreign-Chinese transliteration to English-Chinese transliteration, or E2C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transliterating English names into Chinese is not straightforward. However, recalling the original from Chinese transliteration is even more challenging as the E2C transliteration may have lost some original phonemic evidences. The Chinese-English backward transliteration process is also called back-transliteration, or C2E (Knight & Graehl, 1998) .",
"cite_spans": [
{
"start": 325,
"end": 348,
"text": "(Knight & Graehl, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In machine transliteration, the noisy channel model (NCM), based on a phoneme-based approach, has recently received considerable attention (Meng et al. 2001; Jung et al, 2000; Virga & Khudanpur, 2003; Knight & Graehl, 1998) . In this paper we discuss the limitations of such an approach and address its problems by firstly proposing a paradigm that allows direct orthographic mapping (DOM), secondly further proposing a joint source-channel model as a realization of DOM. Two other machine learning techniques, NCM and ID3 (Quinlan, 1993) decision tree, also are implemented under DOM as reference to compare with the proposed n-gram TM.",
"cite_spans": [
{
"start": 139,
"end": 157,
"text": "(Meng et al. 2001;",
"ref_id": "BIBREF1"
},
{
"start": 158,
"end": 175,
"text": "Jung et al, 2000;",
"ref_id": "BIBREF7"
},
{
"start": 176,
"end": 200,
"text": "Virga & Khudanpur, 2003;",
"ref_id": null
},
{
"start": 201,
"end": 223,
"text": "Knight & Graehl, 1998)",
"ref_id": "BIBREF3"
},
{
"start": 523,
"end": 538,
"text": "(Quinlan, 1993)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows: In section 2, we present the transliteration problems. In section 3, a joint source-channel model is formulated. In section 4, several experiments are carried out to study different aspects of proposed algorithm. In section 5, we relate our algorithms to other reported work. Finally, we conclude the study with some discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transliteration is a process that takes a character string in source language as input and generates a character string in the target language as output. The process can be seen conceptually as two levels of decoding: segmentation of the source string into transliteration units; and relating the source language transliteration units with units in the target language, by resolving different combinations of alignments and unit mappings. A unit could be a Chinese character or a monograph, a digraph or a trigraph and so on for English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems in transliteration",
"sec_num": "2"
},
{
"text": "The problems of English-Chinese transliteration have been studied extensively in the paradigm of noisy channel model (NCM). For a given English name E as the observed channel output, one seeks a posteriori the most likely Chinese transliteration C that maximizes P(C|E). Applying Bayes rule, it means to find C to maximize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-based approach",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(E,C) = P(E | C)*P(C)",
"eq_num": "(1)"
}
],
"section": "Phoneme-based approach",
"sec_num": "2.1"
},
{
"text": "with equivalent effect. To do so, we are left with modeling two probability distributions: P(E|C), the probability of transliterating C to E through a noisy channel, which is also called transformation rules, and P(C), the probability distribution of source, which reflects what is considered good Chinese transliteration in general. Likewise, in C2E backtransliteration, we would find E that maximizes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-based approach",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(E,C) = P(C | E)*P(E)",
"eq_num": "(2)"
}
],
"section": "Phoneme-based approach",
"sec_num": "2.1"
},
{
"text": "for a given Chinese name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-based approach",
"sec_num": "2.1"
},
{
"text": "In eqn (1) and 2, P(C) and P(E) are usually estimated using n-gram language models (Jelinek, 1991) . Inspired by research results of grapheme-tophoneme research in speech synthesis literature, many have suggested phoneme-based approaches to resolving P(E|C) and P(C|E), which approximates the probability distribution by introducing a phonemic representation. In this way, we convert the names in the source language, say E, into an intermediate phonemic representation P, and then convert the phonemic representation into the target language, say Chinese C. In E2C transliteration, the phoneme-based approach can be formulated as P(C|E) = P(C|P)P(P|E) and conversely we have P(E|C) = P(E|P)P(P|C) for C2E back-transliteration.",
"cite_spans": [
{
"start": 83,
"end": 98,
"text": "(Jelinek, 1991)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-based approach",
"sec_num": "2.1"
},
{
"text": "Several phoneme-based techniques have been proposed in the recent past for machine transliteration using transformation-based learning algorithm (Meng et al. 2001; Jung et al, 2000; Virga & Khudanpur, 2003) and using finite state transducer that implements transformation rules (Knight & Graehl, 1998) , where both handcrafted and data-driven transformation rules have been studied.",
"cite_spans": [
{
"start": 145,
"end": 163,
"text": "(Meng et al. 2001;",
"ref_id": "BIBREF1"
},
{
"start": 164,
"end": 181,
"text": "Jung et al, 2000;",
"ref_id": "BIBREF7"
},
{
"start": 182,
"end": 206,
"text": "Virga & Khudanpur, 2003)",
"ref_id": null
},
{
"start": 278,
"end": 301,
"text": "(Knight & Graehl, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-based approach",
"sec_num": "2.1"
},
{
"text": "However, the phoneme-based approaches are limited by two major constraints, which could compromise transliterating precision, especially in English-Chinese transliteration: 1) Latin-alphabet foreign names are of different origins. For instance, French has different phonic rules from those of English. The phoneme-based approach requires derivation of proper phonemic representation for names of different origins. One may need to prepare multiple language-dependent grapheme-to-phoneme (G2P) conversion systems accordingly, and that is not easy to achieve (The Onomastica Consortium, 1995) . For example, /Lafontant/ is transliterated into \u62c9\u4e30\u5510(La-Feng-Tang) while /Constant/ becomes \u5eb7\u65af\u5766\u7279(Kang-Si-Tan-Te) \uff0c where syllable /-tant/ in the two names are transliterated differently depending on the names' language of origin.",
"cite_spans": [
{
"start": 557,
"end": 590,
"text": "(The Onomastica Consortium, 1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-based approach",
"sec_num": "2.1"
},
{
"text": "2) Suppose that language dependent graphemeto-phoneme systems are attainable, obtaining Chinese orthography will need two further steps: a) conversion from generic phonemic representation to Chinese Pinyin; b) conversion from Pinyin to Chinese characters. Each step introduces a level of imprecision. Virga and Khudanpur (2003) reported 8.3% absolute accuracy drops when converting from Pinyin to Chinese characters, due to homophone confusion. Unlike Japanese katakana or Korean alphabet, Chinese characters are more ideographic than phonetic. To arrive at an appropriate Chinese transliteration, one cannot rely solely on the intermediate phonemic representation.",
"cite_spans": [
{
"start": 301,
"end": 327,
"text": "Virga and Khudanpur (2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-based approach",
"sec_num": "2.1"
},
{
"text": "To illustrate the importance of contextual information in transliteration, let's take name /Minahan/ as an example, the correct segmentation should be /Mi-na-han/, to be transliterated as \u7c73-\u7eb3-\u6c49 (Pinyin: Mi-Na-Han).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Useful orthographic context",
"sec_num": "2.2"
},
{
"text": "/mi- -na- -han/ Chinese \u7c73 \u7eb3 \u6c49 Pinyin Mi Nan Han",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "However, a possible segmentation /Min-ah-an/ could lead to an undesirable syllabication of \u660e-\u963f-\u5b89 (Pinyin: Min-A-An).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "/min- -ah- -an/ Chinese \u660e \u963f \u5b89 Pinyin Min A An",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "According to the transliteration guidelines, a wise segmentation can be reached only after exploring the combination of the left and right context of transliteration units. From the computational point of view, this strongly suggests using a contextual n-gram as the knowledge base for the alignment decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "Another example will show us how one-to-many mappings could be resolved by context. Let's take another name /Smith/ as an example. Although we can arrive at an obvious segmentation /s-mi-th/, there are three Chinese characters for each of /s-/, /-mi-/ and /-th/. Furthermore, /s-/ and /-th/ correspond to overlapping characters as well, as shown next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "English /s- -mi- -th/ Chinese 1 \u53f2 \u7c73 \u65af Chinese 2 \u65af \u5bc6 \u53f2 Chinese 3 \u601d \u9ea6 \u745f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "A human translator will use transliteration rules between English syllable sequence and Chinese character sequence to obtain the best mapping \u53f2-\u5bc6-\u65af, as indicated in italic in the table above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "To address the issues in transliteration, we propose a direct orthographic mapping (DOM) framework through a joint source-channel model by fully exploring orthographic contextual information, aiming at alleviating the imprecision introduced by the multiple-step phoneme-based approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "In view of the close coupling of the source and target transliteration units, we propose to estimate P(E,C) by a joint source-channel model, or n-gram transliteration model (TM). Unlike the noisy-channel model, the joint source-channel model does not try to capture how source names can be mapped to target names, but rather how source and target names can be generated simultaneously. In other words, we estimate a joint probability model that can be easily marginalized in order to yield conditional probability models for both transliteration and back-transliteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint source-channel model",
"sec_num": "3"
},
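The joint source-channel idea above, P(E,C) estimated as a product of conditional probabilities over aligned pairs <e,c>, can be sketched in code. This is a minimal illustration with hypothetical toy counts and add-one smoothing, not the paper's trained model:

```python
import math
from collections import Counter

def bigram_tm_logprob(pairs, bigram_counts, unigram_counts):
    """Score an aligned transliteration-pair sequence under a bigram
    joint source-channel model: P(E,C) ~= prod_k P(<e,c>_k | <e,c>_{k-1})."""
    logp = 0.0
    prev = ("<s>", "<s>")  # boundary token, as in the paper's alignment
    for pair in pairs + [("</s>", "</s>")]:
        num = bigram_counts.get((prev, pair), 0) + 1        # add-one smoothing
        den = unigram_counts.get(prev, 0) + len(unigram_counts) + 1
        logp += math.log(num / den)
        prev = pair
    return logp

# Hypothetical toy training data: aligned pair sequences for two names
training = [
    [("s", "史"), ("mi", "密"), ("th", "斯")],
    [("mi", "米"), ("na", "纳"), ("han", "汉")],
]
uni, bi = Counter(), Counter()
for seq in training:
    prev = ("<s>", "<s>")
    for pair in seq + [("</s>", "</s>")]:
        uni[prev] += 1
        bi[(prev, pair)] += 1
        prev = pair

score = bigram_tm_logprob([("s", "史"), ("mi", "密"), ("th", "斯")], bi, uni)
```

A sequence seen in training scores higher than an unseen reordering, which is exactly the contextual preference the model is meant to capture.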
{
"text": "Suppose ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint source-channel model",
"sec_num": "3"
},
{
"text": "A bilingual dictionary contains entries mapping English names to their respective Chinese transliterations. Like many other solutions in computational linguistics, it is possible to automatically analyze the bilingual dictionary to acquire knowledge in order to map new English names to Chinese and vice versa. Based on the transliteration formulation above, a transliteration model can be built with transliteration unit's ngram statistics. To obtain the statistics, the bilingual dictionary needs to be aligned. The maximum likelihood approach, through EM algorithm (Dempster, 1977) , allows us to infer such an alignment easily as described in the table below.",
"cite_spans": [
{
"start": 568,
"end": 584,
"text": "(Dempster, 1977)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration alignment",
"sec_num": "3.1"
},
{
"text": "The aligning process is different from that of transliteration given in eqn. (4) or (5) in that, here we have fixed bilingual entries, \u03b1 and \u03b2 . The aligning process is just to find the alignment segmentation \u03b3 between the two strings that maximizes the joint probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration alignment",
"sec_num": "3.1"
},
{
"text": ") , , ( max arg \u03b3 \u03b2 \u03b1 \u03b3 \u03b3 P = (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration alignment",
"sec_num": "3.1"
},
{
"text": "A set of transliteration pairs that is derived from the aligning process forms a transliteration table, which is in turn used in the transliteration decoding. As the decoder is bounded by this table, it is important to make sure that the training database covers as much as possible the potential transliteration patterns. Knowing that the training data set will never be sufficient for every n-gram unit, different smoothing approaches are applied, for example, by using backoff or class-based models, which can be found in statistical language modeling literatures (Jelinek, 1991) . ",
"cite_spans": [
{
"start": 567,
"end": 582,
"text": "(Jelinek, 1991)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration alignment",
"sec_num": "3.1"
},
{
"text": ") | ( ) | ( ) , , ( \u03b3 \u03b2 \u03b1 (8) ) , , ( \u03b3 \u03b2 \u03b1 P ) , | , ( 1 1 \u2212 = > < > < \u2248 \u220f k k K k c e c e P (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration alignment",
"sec_num": "3.1"
},
{
"text": "The formulation of eqn. (8) could be interpreted as a hidden Markov model with Chinese characters as its hidden states and English transliteration units as the observations (Rabiner, 1989) . The number of parameters in the bigram TM is potentially 2 T , while in the noisy channel model (NCM) it's 2 C T + , where T is the number of transliteration pairs and C is the number of Chinese transliteration units. In eqn. (9), the current transliteration depends on both Chinese and English transliteration history while in eqn. (8), it depends only on the previous Chinese unit.",
"cite_spans": [
{
"start": 173,
"end": 188,
"text": "(Rabiner, 1989)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration alignment",
"sec_num": "3.1"
},
{
"text": "As",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration alignment",
"sec_num": "3.1"
},
{
"text": "2 2 C T T + >>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transliteration alignment",
"sec_num": "3.1"
},
{
"text": ", an n-gram TM gives a finer description than that of NCM. The actual size of models largely depends on the availability of training data. In Table 1 , one can get an idea of how they unfold in a real scenario. With adequately sufficient training data, n-gram TM is expected to outperform NCM in the decoding. A perplexity study in section 4.1 will look at the model from another perspective.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transliteration alignment",
"sec_num": "3.1"
},
{
"text": "We use a database from the bilingual dictionary \"Chinese Transliteration of Foreign Personal Names\" which was edited by Xinhua News Agency and was considered the de facto standard of personal name transliteration in today's Chinese press. The database includes a collection of 37,694 unique English entries and their official Chinese transliteration. The listing includes personal names of English, French, Spanish, German, Arabic, Russian and many other origins.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The experiments 1",
"sec_num": "4"
},
{
"text": "The database is initially randomly distributed into 13 subsets. In the open test, one subset is withheld for testing while the remaining 12 subsets are used as the training materials. This process is repeated 13 times to yield an average result, which is called the 13-fold open test. After experiments, we found that each of the 13-fold open tests gave consistent error rates with less than 1% deviation. Therefore, for simplicity, we randomly select one of the 13 subsets, which consists of 2896 entries, as the standard open test set to report results. In the close test, all data entries are used for training and testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The experiments 1",
"sec_num": "4"
},
{
"text": "1 demo at http://nlp.i2r.a-star.edu.sg/demo.htm The Expectation-Maximization algorithm 1. Bootstrap initial random alignment 2. Expectation: Update n-gram statistics to estimate probability distribution 3. Maximization: Apply the n-gram TM to obtain new alignment 4. Go to step 2 until the alignment converges 5. Derive a list transliteration units from final alignment as transliteration table",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The experiments 1",
"sec_num": "4"
},
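The EM procedure listed above can be sketched as a simplified Viterbi-style alignment loop: pick the best segmentation under the current pair statistics, then re-estimate the statistics until the alignment stops changing. This toy monotone aligner assumes one Chinese character per unit and English units of 1 to 3 letters, with hypothetical dictionary entries; the paper's implementation is richer.

```python
import math
from collections import Counter

def segmentations(word, n_units, max_len=3):
    """All ways to split `word` into exactly n_units chunks of 1..max_len letters."""
    if n_units == 1:
        return [[word]] if 1 <= len(word) <= max_len else []
    out = []
    for i in range(1, min(max_len, len(word) - n_units + 1) + 1):
        for rest in segmentations(word[i:], n_units - 1, max_len):
            out.append([word[:i]] + rest)
    return out

def em_align(pairs, iterations=5):
    """Alternately choose the best segmentation under current unigram pair
    counts, then re-estimate the counts (cf. steps 1-5 of the EM algorithm)."""
    counts = Counter()  # bootstrap: empty counts make all segmentations tie
    for _ in range(iterations):
        new_counts = Counter()
        for eng, chi in pairs:
            best = max(
                segmentations(eng, len(chi)),
                key=lambda seg: sum(
                    math.log(counts[(e, c)] + 1) for e, c in zip(seg, chi)
                ),
            )
            for e, c in zip(best, chi):
                new_counts[(e, c)] += 1
        if new_counts == counts:
            break  # converged: the alignment no longer changes
        counts = new_counts
    return counts

# Hypothetical toy dictionary entries (English name, Chinese units as a string)
table = em_align([("minahan", "米纳汉"), ("mina", "米纳"), ("han", "汉")])
```

The returned counts play the role of the transliteration table derived in step 5.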
{
"text": "The alignment of transliteration units is done fully automatically along with the n-gram TM training process. To model the boundary effects, we introduce two extra units <s> and </s> for start and end of each name in both languages. The EM iteration converges at 8 th round when no further alignment changes are reported. Next are some statistics as a result of the model training: The most common metric for evaluating an ngram model is the probability that the model assigns to test data, or perplexity (Jelinek, 1991) . For a test set W composed of V names, where each name has been aligned into a sequence of transliteration pair tokens, we can calculate the probability of test set NCM closed 1-gram 670 729 655 716 2-gram 324 512 151 210 3-gram 306 487 68 127 Table 2 . Perplexity study of bilingual database",
"cite_spans": [
{
"start": 505,
"end": 520,
"text": "(Jelinek, 1991)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 687,
"end": 788,
"text": "NCM closed 1-gram 670 729 655 716 2-gram 324 512 151 210 3-gram 306 487 68 127 Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Modeling",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u220f = = V v v v v P W p 1 ) , ,",
"eq_num": "( )"
}
],
"section": "Modeling",
"sec_num": "4.1"
},
{
"text": "We have the perplexity reported in Table 2 on the aligned bilingual dictionary, a database of 119,364 aligned tokens. The NCM perplexity is computed using n-gram equivalents of eqn. (8) for E2C transliteration, while TM perplexity is based on those of eqn (9) which applies to both E2C and C2E. It is shown that TM consistently gives lower perplexity than NCM in open and closed tests. We have good reason to expect TM to provide better transliteration results which we expect to be confirmed later in the experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Modeling",
"sec_num": "4.1"
},
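Given the test-set probability above, perplexity is p(W)^(-1/N), where N is the total number of aligned pair tokens. A minimal sketch, using a hypothetical unigram distribution over a tiny pair inventory in place of the trained TM:

```python
import math

def perplexity(test_names, token_logprob):
    """Perplexity of a set of aligned names: exp(-(1/N) * sum log P(token)),
    where N is the total number of transliteration-pair tokens."""
    total_logp, n_tokens = 0.0, 0
    for name in test_names:
        for tok in name:
            total_logp += token_logprob(tok)
            n_tokens += 1
    return math.exp(-total_logp / n_tokens)

# Hypothetical unigram probabilities over three transliteration pairs
probs = {("mi", "米"): 0.5, ("na", "纳"): 0.25, ("han", "汉"): 0.25}
pp = perplexity(
    [[("mi", "米"), ("na", "纳"), ("han", "汉")]],
    lambda tok: math.log(probs[tok]),
)
```

For this toy distribution the perplexity is the geometric mean of the inverse token probabilities, (2 * 4 * 4)^(1/3).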
{
"text": "The Viterbi algorithm produces the best sequence by maximizing the overall probability, ) , , ( \u03b3 \u03b2 \u03b1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "4.1"
},
{
"text": ". In CLIR or multilingual corpus alignment (Virga and Khudanpur, 2003) , N-best results will be very helpful to increase chances of correct hits. In this paper, we adopted an N-best stack decoder (Schwartz and Chow, 1990) in both TM and NCM experiments to search for N-best results. The algorithm also allows us to apply higher order n-gram such as trigram in the search.",
"cite_spans": [
{
"start": 43,
"end": 70,
"text": "(Virga and Khudanpur, 2003)",
"ref_id": null
},
{
"start": 196,
"end": 221,
"text": "(Schwartz and Chow, 1990)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "P",
"sec_num": null
},
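The N-best search described above can be sketched as a beam-pruned decoder under a bigram TM. This is a simplified stand-in for the stack decoder of Schwartz and Chow (1990), with a hypothetical transliteration table and scoring function:

```python
import heapq
import math

def nbest_decode(name, pair_table, bigram_logp, n_best=5, max_unit=3):
    """Beam-style N-best search: a hypothesis is (neg logprob, position,
    last pair, output so far); expand by every English unit of 1..max_unit
    letters that appears in the transliteration table."""
    hyps = [(0.0, 0, ("<s>", "<s>"), "")]
    finished = []
    while hyps:
        new_hyps = []
        for neg, pos, last, out in hyps:
            if pos == len(name):
                finished.append((neg, out))
                continue
            for ln in range(1, max_unit + 1):
                if pos + ln > len(name):
                    break
                e = name[pos:pos + ln]
                for c in pair_table.get(e, []):
                    lp = bigram_logp(last, (e, c))
                    new_hyps.append((neg - lp, pos + ln, (e, c), out + c))
        hyps = heapq.nsmallest(50, new_hyps)  # prune to a 50-hypothesis beam
    return [out for _, out in sorted(finished)[:n_best]]

# Hypothetical table and bigram scores favoring 史-密-斯 for /smith/
table = {"s": ["史", "斯"], "mi": ["密", "米"], "th": ["斯", "史"]}
favored = {
    (("<s>", "<s>"), ("s", "史")),
    (("s", "史"), ("mi", "密")),
    (("mi", "密"), ("th", "斯")),
}
def bigram_logp(prev, cur):
    return math.log(0.6) if (prev, cur) in favored else math.log(0.1)

results = nbest_decode("smith", table, bigram_logp)
```

With the favored bigrams above, the top hypothesis is the 史-密-斯 mapping chosen by the human translator in section 2.2.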
{
"text": "In this experiment, we conduct both open and closed tests for TM and NCM models under DOM paradigm. Results are reported in Table 3 and Table 4 In word error report, a word is considered correct only if an exact match happens between transliteration and the reference. The character error rate is the sum of deletion, insertion and substitution errors. Only the top choice in N-best results is used for error rate reporting. Not surprisingly, one can see that n-gram TM, which benefits from the joint source-channel model coupling both source and target contextual information into the model, is superior to NCM in all the test cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 144,
"text": "Table 3 and Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "E2C transliteration",
"sec_num": "4.2"
},
{
"text": "The C2E back-transliteration is more challenging than E2C transliteration. Not many studies have been reported in this area. It is common that multiple English names are mapped into the same Chinese transliteration. In Table 1 , we see only 28,632 unique Chinese transliterations exist for 37,694 English entries, meaning that some phonemic evidence is lost in the process of transliteration. To better understand the task, let's compare the complexity of the two languages presented in the bilingual dictionary.",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 226,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "C2E back-transliteration",
"sec_num": "4.3"
},
{
"text": "Table 1 also shows that the 5,640 transliteration pairs are cross mappings between 3,683 English and 374 Chinese units. In order words, on average, for each English unit, we have 1.53 = 5,640/3,683 Chinese correspondences. In contrast, for each Chinese unit, we have 15.1 = 5,640/374 English back-transliteration units! Confusion is increased tenfold going backward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C2E back-transliteration",
"sec_num": "4.3"
},
{
"text": "The difficulty of back-transliteration is also reflected by the perplexity of the languages as in Table 5 . Based on the same alignment tokenization, we estimate the monolingual language perplexity for Chinese and English independently using the n-gram language models 62.1% 14.7% 5-best 8.2% 0.94% 43.3% 5.2% 10-best 5.4% 0.90% 24.6% 4.8% Table 7 . N-best word error rates for 3-gram TM tests",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 105,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 340,
"end": 347,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "C2E back-transliteration",
"sec_num": "4.3"
},
{
"text": "A back-transliteration is considered correct if it falls within the multiple valid orthographically correct options. Experiment results are reported in Table 6 . As expected, C2E error rate is much higher than that of E2C.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 159,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "C2E back-transliteration",
"sec_num": "4.3"
},
{
"text": "In this paper, the n-gram TM model serves as the sole knowledge source for transliteration. However, if secondary knowledge, such as a lookup table of valid target transliterations, is available, it can help reduce error rate by discarding invalid transliterations top-down the N choices. In Table 7 , the word error rates for both E2C and C2E are reported which imply potential error reduction by secondary knowledge source. The N-best error rates are reduced significantly at 10-best level as reported in Table 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 299,
"text": "Table 7",
"ref_id": null
},
{
"start": 507,
"end": 514,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "C2E back-transliteration",
"sec_num": "4.3"
},
{
"text": "It would be interesting to relate n-gram TM to other related framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "5"
},
{
"text": "In section 4, one observes that contextual information in both source and target languages is essential. To capture them in the modeling, one could think of decision tree, another popular machine learning approach. Under the DOM framework, here is the first attempt to apply decision tree in E2C and C2E transliteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DOM: n-gram TM vs. ID3",
"sec_num": "5.1"
},
{
"text": "With the decision tree, given a fixed size learning vector, we used top-down induction trees to predict the corresponding output. Here we implement ID3 (Quinlan, 1993) algorithm to construct the decision tree which contains questions and return values at terminal nodes. Similar to n-gram TM, for unseen names in open test, ID3 has backoff smoothing, which lies on the default case which returns the most probable value as its best guess for a partial tree path according to the learning set.",
"cite_spans": [
{
"start": 152,
"end": 167,
"text": "(Quinlan, 1993)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DOM: n-gram TM vs. ID3",
"sec_num": "5.1"
},
{
"text": "In the case of E2C transliteration, we form a learning vector of 6 attributes by combining 2 left and 2 right letters around the letter of focus k e and 1 previous Chinese unit 1 \u2212 k c . The process is illustrated in Table 8 , where both English and Chinese contexts are used to infer a Chinese character. Similarly, 4 attributes combining 1 left, 1 centre and 1 right Chinese character and 1 previous English unit are used for the learning vector in C2E test. An aligned bilingual dictionary is needed to build the decision tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table 8",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "DOM: n-gram TM vs. ID3",
"sec_num": "5.1"
},
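The 6-attribute learning vector described above can be sketched as follows. This is a simplified variant that emits one vector per aligned unit, anchored at the unit's first letter, rather than one per letter; the alignment of /nice/ to 尼斯 is a hypothetical example:

```python
def e2c_vectors(name, alignment):
    """Build ID3-style learning vectors for E2C: for the first letter of each
    aligned English unit, combine 2 left letters, the focus letter, 2 right
    letters, and the previous Chinese unit; the class label is the unit's
    Chinese character ('_' marks positions outside the name, as in Table 8)."""
    padded = "__" + name + "__"
    vectors, pos, prev_c = [], 0, "_"
    for e_unit, c_unit in alignment:
        i = pos + 2  # index of the unit's first letter in the padded string
        features = (padded[i - 2], padded[i - 1], padded[i],
                    padded[i + 1], padded[i + 2], prev_c)
        vectors.append((features, c_unit))
        pos += len(e_unit)
        prev_c = c_unit
    return vectors

# Hypothetical alignment: /nice/ -> 尼斯 with units ni-尼 and ce-斯
vecs = e2c_vectors("nice", [("ni", "尼"), ("ce", "斯")])
```

Each resulting (features, label) pair is one row of the learning set from which the top-down induction tree is grown.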
{
"text": "To minimize the effects from alignment variation, we use the same alignment results from section 4. Two trees are built for two directions, E2C and C2E. The results are compared with those 3-gram TM in Table 9 . 1) English transliteration unit size ranges from 1 letter to 7 letters. The fixed size windows in ID3 obviously find difficult to capture the dynamics of various ranges. n-gram TM seems to have better captured the dynamics of transliteration units;",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 209,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "DOM: n-gram TM vs. ID3",
"sec_num": "5.1"
},
{
"text": "N I C _ > \u5c3c _ N I C E \u5c3c > _ N I C E _ _ > \u65af I C E _ _ \u65af > _",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DOM: n-gram TM vs. ID3",
"sec_num": "5.1"
},
{
"text": "2) The backoff smoothing of n-gram TM is more effective than that of ID3;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DOM: n-gram TM vs. ID3",
"sec_num": "5.1"
},
{
"text": "3) Unlike n-gram TM, ID3 requires a separate aligning process for bilingual dictionary. The resulting alignment may not be optimal for tree construction. Nevertheless, ID3 presents another successful implementation of DOM framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DOM: n-gram TM vs. ID3",
"sec_num": "5.1"
},
{
"text": "Due to the lack of standard data sets, it is difficult to compare the performance of the n-gram TM with that of other approaches. For reference purposes, we list some reported studies on other databases of E2C transliteration tasks in Table 10 . As the references report only character and Pinyin error rates, we include only our character and Pinyin error rates for easy reference. The reference data are extracted from Table 1 and 3 of (Virga and Khudanpur 2003). As we have not found any C2E results in the literature, only E2C results are compared here.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 236,
"text": "Table 10",
"ref_id": null
},
{
"start": 424,
"end": 431,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "DOM vs. phoneme-based approach",
"sec_num": "5.2"
},
{
"text": "The first 4 setups by Virga et al. all adopt the phoneme-based approach, with the following steps: 1) English name to English phonemes; 2) English phonemes to Chinese Pinyin; 3) Chinese Pinyin to Chinese characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DOM vs. phoneme-based approach",
"sec_num": "5.2"
},
{
"text": "It is obvious that the n-gram TM compares favorably with other techniques. The n-gram TM achieves a Pinyin error reduction of 74.6% ( = (42.5-10.8)/42.5 ) over the best reported result, the Huge MT (Big MT) test case, which is noteworthy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DOM vs. phoneme-based approach",
"sec_num": "5.2"
},
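The quoted reduction is straightforward arithmetic on the reported Pinyin error rates (42.5% for the best reference system vs. 10.8% for the 3-gram TM):

```python
# Relative error reduction from the reported Pinyin error rates.
baseline, ours = 42.5, 10.8
reduction = (baseline - ours) / baseline * 100
print(f"{reduction:.1f}%")  # -> 74.6%
```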
{
"text": "The DOM framework shows a quantum leap in performance, with n-gram TM being the most successful implementation. The n-gram TM and ID3, under the direct orthographic mapping (DOM) paradigm, simplify the process and reduce the chances of conversion errors; as a result, n-gram TM and ID3 do not generate Chinese Pinyin as intermediate results. It is noted that, for the 374 legitimate Chinese characters used in transliteration, the character-to-Pinyin mapping is unique while the Pinyin-to-character mapping can be one-to-many. Since we obtain results directly in characters, we expect a lower Pinyin error rate than character error rate should a character-to-Pinyin mapping be needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DOM vs. phoneme-based approach",
"sec_num": "5.2"
},
{
"text": "In this paper, we propose a new framework (DOM) for transliteration. n-gram TM is a successful realization of the DOM paradigm. It generates probabilistic orthographic transformation rules using a data-driven approach. By skipping the intermediate phonemic interpretation, the transliteration error rate is reduced significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "Furthermore, the bilingual alignment process is integrated into the decoding process in n-gram TM, which allows us to achieve joint optimization of alignment and transliteration automatically. Unlike other related work where pre-alignment is needed, the new framework greatly reduces the development effort of machine transliteration systems. Although the framework is implemented on an English-Chinese personal name data set, without loss of generality it applies well to transliteration of other language pairs such as English/Korean and English/Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "It is noted that place and company names are sometimes translated using a combination of transliteration and meaning; for example, /Victoria-Fall/ becomes \u7ef4\u591a\u5229\u4e9a\u7011\u5e03 (Pinyin: Wei Duo Li Ya Pu Bu). As the proposed framework allows direct orthographical mapping, it can also easily be extended to handle such name translation. We expect the proposed model to be further explored in other related areas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "J. Roy. Stat. Soc., Ser. B",
"volume": "39",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dempster, A.P., N.M. Laird and D.B. Rubin, 1977. Maximum likelihood from incomplete data via the EM algorithm, J. Roy. Stat. Soc., Ser. B, Vol. 39, pp. 1-38",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generate Phonetic Cognates to Handle Name Entities in English-Chinese cross-language spoken document retrieval",
"authors": [
{
"first": "Helen",
"middle": [
"M"
],
"last": "Meng",
"suffix": ""
},
{
"first": "Wai-Kit",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2001,
"venue": "ASRU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen M. Meng, Wai-Kit Lo, Berlin Chen and Karen Tang. 2001. Generate Phonetic Cognates to Handle Name Entities in English-Chinese cross-language spoken document retrieval, ASRU 2001",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Self-organized language modeling for speech recognition",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1991,
"venue": "Readings in Speech Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, F. 1991, Self-organized language modeling for speech recognition, In Waibel, A. and Lee K.F. (eds), Readings in Speech Recognition, Morgan Kaufmann., San Mateo, CA",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Machine Transliteration",
"authors": [
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Graehl",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Knight and J. Graehl. 1998. Machine Transliteration, Computational Linguistics 24(4) Paola Virga, Sanjeev Khudanpur, 2003. Transliteration of Proper Names in Cross- lingual Information Retrieval. ACL 2003 workshop MLNER",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "C4.5 Programs for machine learning",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinlan J. R. 1993, C4.5 Programs for machine learning, Morgan Kaufmann , San Mateo, CA",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A tutorial on hidden Markov models and selected applications in speech recognition",
"authors": [
{
"first": "Lawrence",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the IEEE",
"volume": "77",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabiner, Lawrence R. 1989, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE 77(2)",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The N-best algorithm: An efficient and Exact procedure for finding the N most likely sentence hypothesis",
"authors": [
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Y",
"middle": [
"L"
],
"last": "Chow",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of ICASSP 1990",
"volume": "",
"issue": "",
"pages": "81--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schwartz, R. and Chow Y. L., 1990, The N-best algorithm: An efficient and Exact procedure for finding the N most likely sentence hypothesis, Proceedings of ICASSP 1990, Albuquerque, pp 81-84",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An English to Korean Transliteration Model of Extended Markov Window",
"authors": [
{
"first": "Sung",
"middle": [
"Young"
],
"last": "Jung",
"suffix": ""
},
{
"first": "Sung",
"middle": [
"Lim"
],
"last": "Hong",
"suffix": ""
},
{
"first": "Eunok",
"middle": [],
"last": "Paek",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of COLING",
"volume": "1",
"issue": "",
"pages": "829--832",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sung Young Jung, Sung Lim Hong and Eunok Paek, 2000, An English to Korean Transliteration Model of Extended Markov Window, Proceedings of COLING The Onomastica Consortium, 1995. The Onomastica interlanguage pronunciation lexicon, Proceedings of EuroSpeech, Madrid, Spain, Vol. 1, pp829-832",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Chinese transliteration of foreign personal names",
"authors": [],
"year": 1992,
"venue": "Xinhua News Agency",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinhua News Agency, 1992, Chinese transliteration of foreign personal names, The Commercial Press",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "alternative to the phoneme-based approach for resolving eqn. (1) and (2) by eliminating the intermediate phonemic representation.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "DOM: n-gram TM vs. NCM Although in the literature most noisy channel models (NCM) are studied under the phoneme-based paradigm for machine transliteration, NCM can also be realized under direct orthographic mapping (DOM). Next, let us look into a bigram case to see what n-gram TM and NCM present to us. For E2C conversion, re-writing eqn (1) and eqn (6), we have",
"type_str": "figure"
},
"TABREF0": {
"num": null,
"html": null,
"text": "Suppose that we have an English name \u03b1 = x_1 x_2 ... x_m and a Chinese transliteration \u03b2 = y_1 y_2 ... y_n, where the x_i are letters and the y_j are Chinese characters. Oftentimes, the number of letters is different from the number of Chinese characters. A Chinese character may correspond to a letter substring in English or vice versa.",
"type_str": "table",
"content": "<table><tr><td>There exists an alignment \u03b3 with &lt;e_1,c_1&gt; = &lt;x_1,y_1&gt;, &lt;e_2,c_2&gt; = &lt;x_2 x_3,y_2&gt;, ..., &lt;e_K,c_K&gt; = &lt;x_m,y_n&gt;. A transliteration unit correspondence &lt;e,c&gt; is called a transliteration pair. Then, the E2C transliteration can be formulated as arg max over \u03b2, \u03b3 of P(\u03b1, \u03b2, \u03b3)</td></tr></table>"
},
"TABREF1": {
"num": null,
"html": null,
"text": "Here are some examples of resulting alignment pairs.",
"type_str": "table",
"content": "<table><tr><td>\u65af|s</td><td>\u5c14|l</td><td>\u7279|t</td><td>\u5fb7|d</td></tr><tr><td>\u514b|k</td><td>\u5e03|b</td><td>\u683c|g</td><td>\u5c14|r</td></tr><tr><td>\u5c14|ll</td><td>\u514b|c</td><td>\u7f57|ro</td><td>\u91cc|ri</td></tr><tr><td>\u66fc|man</td><td>\u59c6|m</td><td>\u666e|p</td><td>\u5fb7|de</td></tr><tr><td>\u62c9|ra</td><td>\u5c14|le</td><td>\u963f|a</td><td>\u4f2f|ber</td></tr><tr><td>\u62c9|la</td><td>\u68ee|son</td><td>\u987f|ton</td><td>\u7279|tt</td></tr><tr><td>\u96f7|re</td><td>\u79d1|co</td><td>\u5965|o</td><td>\u57c3|e</td></tr><tr><td>\u9a6c|ma</td><td>\u5229|ley</td><td>\u5229|li</td><td>\u9ed8|mer</td></tr></table>"
},
"TABREF4": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>.</td><td/><td/><td/></tr><tr><td>open</td><td>open</td><td>closed</td><td>closed</td></tr><tr><td>(word)</td><td>(char)</td><td>(word)</td><td>(char)</td></tr><tr><td>1-gram 45.6%</td><td>21.1%</td><td>44.8%</td><td>20.4%</td></tr><tr><td>2-gram 31.6%</td><td>13.6%</td><td>10.8%</td><td>4.7%</td></tr><tr><td>3-gram 29.9%</td><td>10.8%</td><td>1.6%</td><td>0.8%</td></tr><tr><td colspan=\"4\">Table 3. E2C error rates for n-gram TM tests.</td></tr><tr><td>open</td><td>open</td><td>closed</td><td>closed</td></tr><tr><td>(word)</td><td>(char)</td><td>(word)</td><td>(char)</td></tr><tr><td colspan=\"2\">1-gram 47.3% 23.9%</td><td>46.9%</td><td>22.1%</td></tr><tr><td colspan=\"2\">2-gram 39.6% 20.0%</td><td>16.4%</td><td>10.9%</td></tr><tr><td colspan=\"2\">3-gram 39.0% 18.8%</td><td>7.8%</td><td>1.9%</td></tr></table>"
},
"TABREF6": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">language perplexity comparison</td></tr><tr><td colspan=\"3\">(open/closed test)</td><td/></tr><tr><td>open</td><td>open</td><td>closed</td><td>closed</td></tr><tr><td>(word)</td><td>(letter)</td><td>(word)</td><td>(letter)</td></tr><tr><td colspan=\"2\">1 gram 82.3% 28.2%</td><td>81%</td><td>27.7%</td></tr><tr><td colspan=\"2\">2 gram 63.8% 20.1%</td><td>40.4%</td><td>12.3%</td></tr><tr><td colspan=\"2\">3 gram 62.1% 19.6%</td><td>14.7%</td><td>5.0%</td></tr><tr><td colspan=\"4\">Table 6. C2E error rate for n-gram TM tests</td></tr></table>"
},
"TABREF7": {
"num": null,
"html": null,
"text": "E2C",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">transliteration using ID3 decision</td></tr><tr><td colspan=\"3\">tree for transliterating Nice to</td></tr><tr><td colspan=\"3\">\u5c3c\u65af (\u5c3c|NI \u65af|CE)</td></tr><tr><td/><td>open</td><td>closed</td></tr><tr><td>ID3 E2C</td><td>39.1%</td><td>9.7%</td></tr><tr><td>3-gram TM E2C</td><td>29.9%</td><td>1.6%</td></tr><tr><td>ID3 C2E</td><td>63.3%</td><td>38.4%</td></tr><tr><td>3-gram TM C2E</td><td>62.1%</td><td>14.7%</td></tr><tr><td colspan=\"3\">Table 9. Word error rate ID3 vs. 3-gram TM</td></tr><tr><td colspan=\"3\">One observes that n-gram TM consistently</td></tr><tr><td colspan=\"3\">outperforms ID3 decision tree in all tests. Three</td></tr><tr><td colspan=\"2\">factors could have contributed:</td><td/></tr></table>"
}
}
}
}